Manual of Care for the Pediatric Trach
"Hello, I'm Parker, and I have a trach!"
A tracheostomy is an opening in the windpipe (trachea) that your baby breathes through instead of breathing through his nose and mouth. Often the tracheostomy is not permanent and can be removed after the problem has been corrected or the baby grows and no longer needs it. Babies with the following problems may get a tracheostomy:
1. Birth defects that affect the baby's breathing, such as a small jaw, vocal cord paralysis, or large tongue.
2. Tracheomalacia: noisy breathing caused by a soft or weak breathing tube.
3. Need for prolonged respiratory support (i.e., on ventilation), such as BPD.
4. Scarred or narrowed larynx: subglottic stenosis.
5. Neuromuscular diseases.
6. Respiratory control problems, such as central hypoventilation or central apnea.
1. For a tracheostomy, a small opening is made through the skin of the neck into the windpipe (trachea).
2. A tracheostomy tube is a short piece of plastic that is placed into the trachea through a surgical hole in the neck. It does not reach into the lung.
3. The baby breathes through this plastic tube instead of through his nose and mouth.
4. You will not be able to hear the baby cry or talk with the tracheostomy tube in at first. After some time, an air leak usually develops around the trach tube. Some of the air escapes through the voice box, permitting some return of voice.
1. Surgery takes approximately one hour. It is not the surgery, but the immediate post-op course that is frightening to most parents. Many families have never seen a mist collar, and the monitors, the trach "stay stitches", and even bloody secretions can seem overwhelming. The child or infant is usually sedated at first, and for safety reasons the parents or family must wait a few days even to hold their child.
2. The baby spends the first week in the ICU for recovery.
3. Hands-on teaching follows the first trach change. Many parents want all the preparation they can get for caring for a child with a trach, and there are a number of available handbooks and articles directed at the caregiver (see end of article for literature).
3. Trach plugging.
4. Granulation (scar) tissue.
5. Skin necrosis.
1. The baby will go home on a home apnea and cardiac monitor. The monitor counts the baby's breathing rate and heart rate.
2. The monitor alarms to tell you if the baby is not breathing (apnea) or if the heart beat is too slow (bradycardia) or fast (tachycardia).
3. A pulse oximeter provides the oxygen saturation information and is routinely used early on.
2. Suctioning the trach tube: Suctioning is done to clear the trach tube of mucus, so that the trach tube will not become blocked. Suctioning is done to a premeasured depth that just allows the tip of the suction catheter to come out the end of the trach tube. Suctioning more deeply may injure the lining of the windpipe. Your child's nurse will show you how far the catheter should be inserted. You can check this depth by passing a catheter through an extra clean trach tube until the side holes close to the tip just clear the end of the tube and measuring the distance from the end of the catheter.
a. Wash your hands.
a. Twill tape, bias seam tape, shoelaces, or a Velcro holder
2. Changing the ties: Do not change the tracheostomy ties by yourself unless absolutely necessary.
a. Change ties daily or when wet or soiled.
b. Suction the trach tube before changing ties. Suctioning decreases the chances of the baby coughing during the tie change.
c. Changing ties requires two people: one person to hold the tube in place and one person to change the ties.
d. Place a blanket roll under the shoulders to expose the tracheostomy area.
e. Slide old ties from center of hole to top on both sides of the tracheostomy.
f. Insert new ties under old ones.
g. Secure new ties with a square knot. Ties should be tight enough to hold the tube in place but loose enough to easily slip one finger underneath.
h. Cut off old ties and remove them. Guard tips of scissors with your fingers.
i. Examine neck daily for redness, skin breakdown, or rashes.
If using trach holder or Velcro trach tie:
a. Tap water
a. Clean area around tracheostomy opening in neck (stoma) daily and when the area is wet or soiled.
3. Place gauze trach dressing around trach tube. Change dressing as often as necessary to keep skin dry.
a. May use pre-cut trach dressings (more expensive).
4. Clean stoma 2-3 times a day if an odor is present (or more often, if there is drainage present).
5. Powders and lotions must not be used around the trach stoma.
6. If ordered by the baby's doctor to treat irritations or rashes, apply ointments in a thin layer. (Ointments under the trach collar can make the skin irritations worse. Sometimes clean and dry is best.)
1. Trach tube with obturator (guide).
2. Changing the tracheostomy tube
a. Do not change the tracheostomy tube by yourself unless absolutely necessary.
3. Your baby may cough, cry, turn red, or sweat. He is OK. This does not hurt the baby.
4. Change the trach tube every 1-2 weeks (as directed by your baby's doctor) or for an infant who does not respond to suctioning or usual calming methods.
5. Change tube before feeding or at least 2 hours after feeding. Avoid changing the tube just after a feeding.
6. Inspect the removed tube for color change, mucus plugs, or odor.
The medical equipment supply company will teach you how to clean the tracheostomy tubes and what to use for cleaning.
A humidifier and tracheostomy collar (trach collar) are used to filter and moisten air entering the windpipe (trachea) because the baby does not breathe through his nose or mouth.
2. How to use:
a. Fill nebulizer jar with sterile water to the line on the jar.
2. Fill clean nebulizer jar with sterile distilled water.
3. Check to make sure suction machine is working.
4. Check suction tubing as well.
2. After use, clean using solution recommended by the home equipment supply company.
1. Your baby may have trach collar and humidity off during the day if allowed by your baby's doctor. An "artificial nose" type humidification device may be adequate.
2. Use trach collar and humidity during naps and at night to keep trach moist and prevent mucus plugs.
3. If humidifier is not available (during long trips or power failure), place one drop of saline every hour or two into the trach tube to moisten trach tube and windpipe.
4. The windpipe (trachea) of your baby is small and easily plugged with mucus, so the humidifier with trach collar provides a direct source of moisture that a vaporizer cannot.
5. If mucus becomes thick, move the numbered ring on the humidifier to a lower number. The usual setting is 50%. Increasing the baby's fluid intake may help thin the mucus.
1. Restlessness or increased irritability.
2. Increased breathing (respiratory) rate.
3. Heavy, hard breathing.
4. Grunting, noisy breathing.
5. Nasal flaring (sides of nostrils move in and out with breathing).
6. Retraction (sinking in of breastbone and skin between the ribs with each breath).
7. Blue or pale color.
8. Whistling from the trach tube.
9. Change in pattern of heart rate (less than 80 or more than 210 beats/minute).
10. Bleeding from trach tube.
a. Report these signs to your baby's doctor.
You will take a basic CPR class. We will teach you how to do CPR with the trach and how to use an ambu bag to breathe for the baby.
If the baby stops breathing:
1. SUCTION TRACH TUBE AT ONCE.
2. Replace trach tube if it has come out, is blocked with mucus, or your baby does not improve with suctioning. Tie trach ties!!!
3. Begin CPR if baby does not breathe when trach tube is clear.
Call for help!!
1. Stimulate baby by gently shaking.
2. Position baby on a hard flat surface with his nose pointed straight up.
3. Suction tracheostomy. Replace if blocked.
4. Listen and feel for breath by placing ear over tracheostomy. Look at chest to see if baby is breathing.
5. Place mouth or attach ambu bag over trach tube to form a seal.
6. Give 2 quick puffs. Observe to see if chest moves like an easy breath.
7. Feel for brachial heart rate (pulse) in the bend of the arm for 5 seconds and check to see if baby is breathing on his own (look, feel, and listen for air movement).
8. If you feel a pulse, breathe with mouth or ambu bag on tube. Count 1-2 breathe, 1-2 breathe.
9. If air is leaking from the nose and mouth, close them with your hand.
10. If you do not feel a heart rate in 5 seconds or if the heart rate is less than 60 bpm, do chest compressions and breathe for baby with mouth or ambu bag on trach tube. Press ½ to 1 inch with each compression. It is a little tricky to use the ambu bag and do chest compression, but you will learn how. Count:
1 2 3 4 5
This rate is about 100 times a minute. The breath is about 1 to 1 ½ seconds long.
11. Check heart rate and breathing about every minute. Do what the baby is not doing.
12. Call your local Emergency number or ambulance team for help if your baby does not respond.
13. Have baby taken to the nearest hospital.
1. Food or liquid comes through the trach.
2. There is a rash, drainage, or unusual odor around the trach opening.
3. Mucus becomes green or foul smelling (normal color is clear or whitish).
4. Bleeding occurs from the trach tube.
5. Difficult breathing not relieved by suctioning or changing trach.
6. Unable to replace trach tube.
7. Baby stops breathing.
1. Plugged trach:
a. Suction and use ambu bag.
2. Coughing out trach tube:
a. Insert new clean trach tube as soon as possible.
3. Vomiting:
a. Suction if you think vomit has gone down tube.
4. Unable to replace trach tube:
a. Try to insert smaller trach tube.
2. Burp well and place on right side or in infant seat after feeding.
3. DO NOT PROP THE BOTTLE.
4. Do not let your baby have a bottle unless you are present (in case choking occurs).
2. NEVER LEAVE YOUR BABY ALONE IN THE TUB.
3. Baby's head must be held during hair washing so that water does not enter the trach.
4. Change the trach ties after the bath if they get wet.
2. Clothing that covers the trach should not be worn. Also avoid plastic bibs.
3. Strings, fuzzy clothing, fuzzy blankets, and stuffed animals should be avoided.
4. Purchase a portable intercom system so you can hear the baby when you are in another room.
2. He will learn to talk around the trach tube.
3. It is important that you talk to him as you would any other baby.
4. Speaking valves such as the Passy-Muir valve can aid in talking when it is appropriate for your baby.
2. It is important that parents be able to rest and go out without the baby!
3. Some parents use a TV monitor, which they find helpful in watching the child.
2. Animals with fine hair should not be in the house.
3. Keep home as free from lint and dirt as possible.
4. Do not use powders, chlorine bleach, ammonia, or aerosol sprays in the same room as the baby. Particles and fumes get into the lungs through the trach. This will cause a "burning feeling" and breathing problems.
5. Do not smoke or allow others to smoke around your baby. It's irritating to the baby's airways.
6. Watch play with other children so that toys, fingers, and food are not put into the trach tube.
7. Do not buy toys with small parts that can easily be removed.
8. Always carry your GO BAG supplies when you leave home.
9. No swimming.
2. Protect the tracheostomy on dusty, windy days when dust particles may enter the trachea and cause drying or crusting of mucus.
2. This is usually a frightening situation for older brothers/sisters and requires parents' support and teaching to ease their initial discomfort and fear.
3. It may be helpful to involve brother and sister's help in small tasks such as holding the baby still, helping clean equipment, etc.
4. Watch young brothers and sisters around the baby!
1. You may want to count the baby's breathing rate twice a day when the baby is quiet or asleep. You can write the number in a record book you bring to the doctor.
2. One count is a breath in and out. Sometimes the baby holds his breath briefly, breathes fast then slow, stretches or moves. Count the breathing as best you can.
3. Call the doctor if the breathing rate is 15-20 counts higher than usual or your baby is working hard to breathe. Make sure the baby is not too warm or does not have mucus in his trach.
1. You will be very busy at home.
2. It helps to have a calendar with your day's activities clearly marked.
3. Some things you will do several times a day and some things you do several times a week. Organization and a schedule are important. So is help from family members.
4. It is important to teach several people to care for the baby so you can have a break and get out by yourself.
a. 1-2 times a day, or more if necessary.
2. Wash suction bottle in hot soapy water.
3. Chest Physiotherapy (or CPT):
a. 2-3 times a day (if recommended by the baby's doctor).
4. Change trach collar and tubing.
5. Change water bottle for humidifier.
6. Check to make sure suction machine is working.
2. Clean suction bottle and tubing in solution recommended by home equipment supply company.
3. Clean trach collar and tubing in solution recommended by home equipment supply company.
Weekly (or as needed):
1. De Lee suction catheter.
2. Bulb syringe.
3. Suction catheters - disposable.
4. Trach tube with ties (same size and one size smaller).
6. Water soluble lubricant (sterile single use packets).
7. Saline (two or three 5 cc vials).
8. 4 x 4's or trach sponges.
9. Portable suction machine.
10. Emergency phone numbers.
11. HME devices (heat and moisture exchangers).
12. Ambu bag.
13. Portable oxygen.
14. Hospital, insurance, and pharmacy cards available in baby's own "wallet"
1. Saline:
a. ½ teaspoon of table salt added to 8 ounces of boiled water.
2. Sterile distilled water
a. Boil tap or bottled water for 10 minutes after the water reaches a rolling boil.
a. Must buy.
1. All of the home supplies you need will be provided through a home equipment and supply company. The hospital makes these arrangements with a company near where you live.
2. The supply company will contact you at home or while you room-in with your baby.
3. The supply company will tell you when and how to order supplies. They will give you a phone number to call if you have equipment problems. Call them if your equipment breaks or to reorder supplies.
1. Programs are available to help provide medical and financial care of your baby. The Child Services Coordinator in your community or a social worker can help find out if you are eligible for the programs. Babies are eligible for different reasons and some may not be eligible or approved. Information can be obtained from your baby's social worker during the hospital stay.
2. It is a lot of hard work to care for a baby with a trach. Yet most parents still prefer to have the baby at home.
3. We ask several family members to learn the care so everyone can get some rest.
4. Some insurance companies approve home-nursing care for a baby with a trach. We contact your insurance company to find out if they provide this service.
5. Home health agencies or public health services are used for short visits. These visits are an hour or less. The nurses answer questions, help with special treatments and help with medications. They may weigh the baby or watch a feeding. They work with your doctor to follow the baby's condition and progress.
6. Even though it is difficult to find people to babysit, it is important to teach other people to care for the baby so you can go out.
7. Respite services which provide relief for parents may not be available in all communities or for babies with trachs.
8. If you get too tired or frustrated, call the doctor or social worker. We will try to help.
1. Your baby returns to the hospital clinic to follow his breathing problems and trach.
2. If you see more than one doctor (eye, surgery, breathing, x-ray, lab, development), check to see if the appointment can be made for the same day.
3. At first it seems you spend most of your time going to the doctor.
4. As the baby's health gets better, the visits become less frequent and some doctors will not need to see him.
5. You will take the baby to a local baby doctor for routine baby care and shots. Make an appointment to see him the first week the baby is home.
6. We mail your doctor a report of your visits to the hospital clinics.
1. The home equipment company will call and write your utility companies to inform them that your baby has a serious medical problem.
2. The letter asks that you be placed on the priority list for notification of anticipated interruptions of service.
3. The letter asks that you be placed on the priority list for service reinstitution in the event of unexpected interruption of service.
The American Thoracic Society: Care of the Child with a Chronic Tracheostomy. American Journal of Respiratory and Critical Care Medicine, Volume 161, pp. 297-308, 2000. Internet address: www.atsjournals.org
Fitton C, Myer C. Home care of the child with a tracheostomy. In: H.B. Othersen, editor. The Pediatric Airway: An Interdisciplinary Approach. J.B. Lippincott, Philadelphia, 1995. pp. 171-179.
Multiracial in America
Chapter 1: Race and Multiracial Americans in the U.S. Census
Every U.S. census since the first one in 1790 has included questions about racial identity, reflecting the central role of race in American history from the era of slavery to current headlines about racial profiling and inequality. But the ways in which race is asked about and classified have changed from census to census, as the politics and science of race have fluctuated. And efforts to measure the multiracial population are still evolving.
From 1790 to 1950, census takers determined the race of the Americans they counted, sometimes taking into account how individuals were perceived in their community or using rules based on their share of “black blood.” Americans who were of multiracial ancestry were either counted in a single race or classified into categories that mainly consisted of gradations of black and white, such as mulattoes, who were tabulated with the non-white population. Beginning in 1960, Americans could choose their own race. Since 2000, they have had the option to identify with more than one.
This change in census practice coincided with changed thinking about the meaning of race. When marshals on horseback conducted the first census, race was thought to be a fixed physical characteristic. Racial categories reinforced laws and scientific views asserting white superiority. Social scientists today generally agree that race is more of a fluid concept influenced by current social and political thinking.11
Along with new ways to think about race have come new ways to use race data collected by the census. Race and Hispanic origin data are used in the enforcement of equal employment opportunity and other anti-discrimination laws. When state officials redraw the boundaries of congressional and other political districts, they employ census race and Hispanic origin data to comply with federal requirements that minority voting strength not be diluted. The census categories also are used by Americans as a vehicle to express personal identity.12
The first census in 1790 had only three racial categories: free whites, all other free persons and slaves. “Mulatto” was added in 1850, and other multiracial categories were included in subsequent counts. The most recent decennial census, in 2010, had 63 possible race categories: six for single races and 57 for combined races. In 2010, 2.9% of all Americans (9 million) chose more than one racial category to describe themselves.13 The largest groups were white-American Indian, white-Asian, white-black and white-some other race.14
Some research indicates that using data from the current census race question to tally the number of multiracial Americans may undercount this population. An alternative is to use responses to the Census Bureau’s question about “ancestry or ethnic origin.” Here respondents are allowed to write in one or two responses (for example, German, Nicaraguan, Jamaican or Eskimo). These can then be mapped into racial groups. By this metric, 4.3% of Americans (more than 13 million) reported two-race ancestry in 2010-2012, an estimate that is about 70% larger than the 7.9 million who reported two races in answering the race question.15
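The "about 70% larger" comparison above follows directly from the two counts given. As a back-of-the-envelope check only (the ancestry-based count of 13.4 million is an assumed value, since the text says just "more than 13 million"):

```python
# Figures from the paragraph above (millions of people, 2010-2012).
two_race_ancestry = 13.4  # assumed; text says only "more than 13 million"
two_race_question = 7.9   # reported two races on the census race question

# How much larger the ancestry-based estimate is, in percent.
pct_larger = (two_race_ancestry - two_race_question) / two_race_question * 100
print(f"Ancestry-based estimate is about {pct_larger:.0f}% larger")  # about 70%
```

With these inputs the ratio works out to roughly 70%, consistent with the reported comparison.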
The ancestry data also offer a longer time trend: A Pew Research analysis finds that the number of Americans with two different racial ancestries has more than doubled since 1980, when the ancestry question was first asked.
This chapter explores the history of how the U.S. decennial census has counted and classified Americans by race and Hispanic origin, with a particular focus on people of multiracial backgrounds, and examines possible future changes to the way race is enumerated in U.S. censuses. The chapter also examines the racial makeup and age structure of the nation’s multiracial population, based on the Census Bureau’s American Community Survey. The final section explores trends in the number and share of Americans who report two ancestries that have predominantly different racial compositions, also based on the Census Bureau’s American Community Survey. Readers should note that estimates here—as they are based on Census Bureau data—may differ from those derived from the Pew Research Center survey of multiracial Americans that will form the basis of the analysis for subsequent chapters of this report.
How the Census Asks About Race
Currently census questionnaires ask U.S. residents about their race and Hispanic ethnicity using a two-question format. On the 2010 census form (and current American Community Survey forms), respondents are first asked whether they are of Hispanic, Latino or Spanish origin (and, if so, which origin—Mexican, Puerto Rican, Cuban or another Hispanic origin).
The next question asks them to mark one or more boxes to describe their race. The options include white, black, American Indian/Alaska Native, as well as national origin categories (such as Chinese) that are part of the Asian or Hawaiian/Pacific Islander races. People filling out the form may also check the box for “some other race” and fill in the name of that race. Explicit instructions on the form note that Hispanic/Latino identity is not a race.
Nonetheless, many respondents write in “Hispanic,” “Latino” or a country with Spanish or Latin roots, suggesting that the standard racial categories are less relevant to them.
This two-question format was introduced in 1980, the first year that a Hispanic category was included on all census forms. (See below for more on the history of how the Census Bureau has counted Hispanics.)
The option to choose more than one race, beginning in 2000, followed Census Bureau testing of several approaches, including a possible “multiracial” category. The change in policy to allow more than one race to be checked was the result of lobbying by advocates for multiracial people and families who wanted recognition of their identity. The population of Americans with multiple racial or ethnic backgrounds has been growing due to repeal of laws banning intermarriage, changing public attitudes about mixed-race relationships and the rise of immigration from Latin America and Asia. One important indicator is in the growth in interracial marriage: The share of married couples with spouses of different races increased nearly fourfold from 1980 (1.6%) to 2013 (6.3%).
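The "nearly fourfold" growth in intermarriage cited above can be verified from the two shares the text reports (1.6% in 1980 and 6.3% in 2013); a quick sketch:

```python
# Share of married couples with spouses of different races (from the text).
share_1980 = 1.6  # percent, 1980
share_2013 = 6.3  # percent, 2013

growth = share_2013 / share_1980  # ratio of the two shares
print(f"Growth factor: {growth:.2f}x")  # just under fourfold
```

The ratio is about 3.9, i.e. just short of a fourfold increase.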
For the 2020 census, the Census Bureau is considering a new approach to asking U.S. residents about their race or origin. Beginning with the 2010 census, the bureau has undertaken a series of experiments trying out different versions of the race and Hispanic questions. The latest version being tested, as described below, combines the Hispanic and race questions into one question, with write-in boxes in which respondents can add more detail.
Counting Whites and Blacks
Through the centuries, the government has revised the race and Hispanic origin categories it uses to reflect current science, government needs, social attitudes and changes in the nation’s racial composition.16
For most of its history, the United States has had two major races, and until recent decades whites and blacks dominated the census racial categories.17 (American Indians were not counted in early censuses because they were considered to live in separate nations.) At first, blacks were counted only as slaves, but in 1820 a “free colored persons” category was added, encompassing about 13% of blacks.18
In a society where whites had more legal rights and privileges than people of other races, detailed rules limited who was entitled to be called “white” in the census. Until the middle of the 20th century, the general rule was that if someone was both white and any other non-white race (or “color,” as it was called in some early censuses), that person could not be classified as white. This was worded in various ways in the written rules that census takers were given. In the 1930 census, for example, enumerators were told that a person who was both black and white should be counted as black, “no matter how small the percentage of Negro blood,” a classification system known as the “one-drop rule.”19
Mulattos, Quadroons and Octoroons
Some race scientists and public officials believed it was important to know more about groups that were not “pure” white or black. Some scientists believed these groups were less fertile, or otherwise weak; they looked to census data to support their theories.20 From the mid-19th century through 1920, the census race categories included some specific multiracial groups, mainly those that were black and white.
“Mulatto” was a category from 1850 to 1890 and in 1910 and 1920. “Octoroon” and “quadroon” were categories in 1890. Definitions for these groups varied from census to census. In 1870, “mulatto” was defined as including “quadroons, octoroons and all persons having any perceptible trace of African blood.” The instructions to census takers said that “important scientific results” depended on their including people in the right categories. In 1890, a mulatto was defined as someone with “three-eighths to five-eighths black blood,” a quadroon had “one-fourth black blood” and an octoroon had “one-eighth or any trace of black blood.”21
The word “Negro” was added in 1900 to replace “colored,” and census officials noted that the new term was increasingly favored “among members of the African race.”22 In 2000, “African American” was added to the census form. In 2013, the bureau announced that because “Negro” was offensive to many, the term would be dropped from census forms and surveys.
Although American Indians were not included in early U.S. censuses, an “Indian” category was added in 1860, but enumerators counted only those American Indians who were considered assimilated (for example, those who settled in or near white communities). The census did not attempt to count the entire American Indian population until 1890.
In some censuses, enumerators were told to categorize American Indians according to the amount of Indian or other blood they had, considered a marker of assimilation.23 In 1900, for example, census takers were told to record the proportion of white blood for each American Indian they enumerated. The 1930 census instructions for enumerators said that people who were white-Indian were to be counted as Indian “except where the percentage of Indian blood is very small, or where he is regarded as a white person by those in the community where he lives.”
Efforts to Categorize Multiracial Americans
In the 1960 census, enumerators were told that people they counted who were both white and any other race should be categorized in the minority race. People of multiracial non-white backgrounds were categorized according to their father’s race. There were some exceptions: If someone was both Indian and Negro (the preferred term at the time), census takers were told the person should be considered Negro unless “Indian blood very definitely predominated” and “the person was regarded in the community as an Indian.”
Some Asian categories have been included on census questionnaires since 1860—“Chinese,” for example, has been on every census form since then.24 The 1960 census also included, for the first and only time, a category called “Part Hawaiian,” which applied only to people living in Hawaii. It coincided with Hawaii’s admission as a state; a full Hawaiian category also was included. (The 1960 census was also the first after Alaska’s admission as a state, and “Eskimo” and “Aleut” categories were added that year.)
In most censuses, the instructions to enumerators did not spell out how to tell which race someone belonged to, or how to determine blood fractions for American Indians or for people who were black and white. But census takers were assumed to know their communities, especially from 1880 onward, when government-appointed census supervisors replaced the federal marshals who had conducted earlier censuses. In the 1880 census, emphasis was placed on hiring people who lived in the district they counted and knew “every house and every family.” However, enumerator quality varied widely.25
Despite repeatedly including multiracial categories, census officials expressed doubt about the quality of data the categories produced. The 1890 categories of mulatto, octoroon and quadroon were not on the 1900 census, after census officials judged the data “of little value and misleading.” Mulatto was added back in 1910 but removed again in 1930 after the data were judged “very imperfect.”26
In 1970, respondents were offered guidance on how to choose their own race: They were told to mark the race they most closely identified with from the single-race categories offered. If they were uncertain, the race of the person’s father prevailed. In 1980 and 1990, if a respondent marked more than one race category, the Census Bureau re-categorized the person to a single race, usually using the race of the respondent’s mother, if available. Beginning in 2000, although only single-race categories were offered, respondents were told they could mark more than one to identify themselves. This was the first time that all Americans were offered the option to include themselves in more than one racial category. That year, some 2.4% of all Americans (including adults and children) said they were of two or more races.
Among the major race groups, the option to mark more than one race has had the biggest impact on American Indians. The number of American Indians counted in the census grew by more than 160% between 1990 and 2010, with most of the growth due to people who marked Indian and one or more additional races, rather than single-race American Indians. But other researchers have noted that the American Indian population had been growing since 1960—the first year in which most Americans could self-identify—at a pace faster than could be accounted for by births or immigration. They have cited reasons including the fading of negative stereotypes and a broadened definition on the census form that may have encouraged some Hispanics to identify as American Indian.27
Census History of Counting Hispanics
It was not until the 1980 census that all Americans were asked whether they were Hispanic. The Hispanic question is asked separately from the race question, but the Census Bureau is now considering whether to make a recommendation to the Office of Management and Budget to combine the two.
Until 1980, only limited attempts were made to count Hispanics. The population was relatively small before passage of the 1965 Immigration and Nationality Act, which broadly changed U.S. policy to allow more visas for people from Latin America, Asia and other non-European regions. Refugees from Cuba and migrants from Puerto Rico also contributed to population growth.
Until 1930, Mexicans, the dominant Hispanic national origin group, had been classified as white. A “Mexican” race category was added in the 1930 census, following a rise in immigration that dated to the Mexican Revolution in 1910. But Mexican Americans (helped by the Mexican government) lobbied successfully to eliminate it in the 1940 census and revert to being classified as white, which gave them more legal rights and privileges. Some who objected to the “Mexican” category also connected it with the forced deportation of hundreds of thousands of Mexican Americans, some of them U.S. citizens, during the 1930s.28
In the 1970 census, a sample of Americans were asked whether they were of Mexican, Puerto Rican, Cuban, Central or South American, or other Spanish origin—a precursor of the universal Hispanic question implemented later. The 1980 census asked all Americans whether they were of “Spanish/Hispanic origin,” and listed the same national-origin categories except for “Central or South American.”29 The 2000 census added the word “Latino” to the question.
The addition of the Hispanic question to census forms reflected both the population growth of Hispanics and growing pressure from Hispanic advocacy groups seeking more data on the population. The White House responded to the pressure by ordering the secretary of commerce, who oversees the Census Bureau, to add a Hispanic question in 1970. A 1976 law sponsored by Rep. Edward Roybal of California required the federal government to collect information about U.S. residents with origins in Spanish-speaking countries.30 The following year, the Office of Management and Budget released a directive listing the basic racial and ethnic categories for federal statistics, including the census. “Hispanic” was among them.
The Hispanic category is described on census forms as an origin, not a race—in fact, Hispanics can be of any race. But question wording does not always fit people’s self-identity; census officials acknowledge confusion on the part of many Hispanics over the way race is categorized and asked about. Although Census Bureau officials have tinkered with wording and placement of the Hispanic question in an attempt to persuade Hispanics to mark a standard race category, many do not. In the 2010 census, 37% of Hispanics—18.5 million people—said they belonged to “some other race.” Among those who answered the race question this way in the 2010 census, 96.8% were Hispanic. And among those Hispanics who did, 44.3% indicated on the form that Mexican, Mexican American or Mexico was their race, 22.7% wrote in Hispanic or Hispano or Hispana, and 10% wrote in Latin American or Latino or Latin.31
Possible New Combined Race-Hispanic Question
Leading up to the 1980 census, the Census Bureau tested a new approach to measuring race and ethnicity that combined standard racial classifications with Hispanic categories in one question. But at the time, the bureau didn’t seriously consider using this approach for future censuses.32 That option is on the table again, however, because of concerns that many Hispanics and others have been unsure how to answer the race question on census forms.33 In the 2010 census, the nation’s third-largest racial group is Americans (as noted above, mainly Hispanics) who said their race is “some other race.” The “some other race” group, intended to be a small residual category, outnumbers Asians, American Indians and Americans who report two or more races.
The Census Bureau experimented during the 2010 census with a combined race and Hispanic question asked of a sample of respondents. The test question included a write-in line where more detail could be provided. The bureau also tried different versions of the two-question format.
Census Bureau officials have cited promising results from their Alternative Questionnaire Experiment. According to the results, the combined question yielded higher response rates than the two-part question on the 2010 census form, decreased the “other race” responses and did not lower the proportion of people who checked a non-white race or Hispanic origin. The white share was lower, largely because some Hispanics chose only “Hispanic” and not a race.
However, fewer people counted themselves in some specific Hispanic origin groups (“Mexican,” for example) when those groups were not offered as check boxes. Some civil rights advocacy groups have expressed concern that the possible all-in-one race and Hispanic question could result in diminished data quality. According to a recent report from the Leadership Conference on Civil and Human Rights, “Civil rights advocates are cautiously optimistic about the possibility of more accurate data on the Latino population from revised 2020 census race and ethnicity question(s), but they remain concerned about the possible loss of race data through a combined race and Hispanic origin question, the diminished accuracy of detailed Hispanic subgroup data, and the ability to compare data over time to monitor trends.”34
The bureau is continuing to experiment with the combined question, with plans to test it on the Current Population Survey this year and on the American Community Survey in 2016. Any questionnaire changes would need approval from the Office of Management and Budget, which specifies the race and ethnicity categories on federal surveys. Congress also will review the questions the Census Bureau asks, and can recommend changes. The Census Bureau must submit topic areas for the 2020 census to Congress by 2017 and actual question wording by 2018.
Census Data on Multiracial Americans
Based on the Census Bureau’s American Community Survey, the nation’s multiracial population stood at 9.3 million in 2013, or 3% of the population. This number is based on the current census racial identification question and comprises 5 million adults and 4.3 million children. Among all multiracial Americans, the median age is 19, compared with 38 for single-race Americans.
The four largest multiracial groups, in order of size, are those who report being white and black (2.4 million), white and Asian (1.9 million), white and American Indian (1.8 million) and white and “some other race” (922,000).35 White and black Americans are the youngest of these groups, with a median age of only 13. Those who are white and American Indian have the oldest median age, 31. These four groups account for three-quarters of multiracial Americans.
The four largest multiracial groups are the same for both adults and children, but they rank in different order. Among multiracial adults, the largest group is white and American Indian (1.3 million). That is followed by white and Asian (921,000) and white and black (900,000). Those who are white and “some other race” number 539,000. Fully 25% of multiracial adults in 2013 also said they were Hispanic, compared with 15% of single-race adults.
Among Americans younger than 18, the groups rank in the same order as for multiracial Americans overall: white and black (1.5 million), white and Asian (941,000), white and American Indian (518,000) and white and “some other race” (383,000).
The nation’s overall multiracial population tilts young. Americans younger than 18 accounted for 23% of the total population in 2013, but they were 46% of the multiracial population. The younger the age group, the higher its share of multiracial Americans. Of those younger than 18, 6% are of more than one race, compared with about 1% of Americans age 65 and older. Among all adults, 2.1% are of more than one race. (In filling out census forms, parents report both their own race and that of their children.)
A more detailed analysis of the demographic characteristics of adults with multiracial backgrounds, based on the Pew Research survey, appears in Chapter 2.
Trends in Two-Race Ancestry
Another way to analyze the multiracial population in the U.S. involves responses to the census question about ancestry or ethnic origin. Because Americans have been asked about their ancestry since 1980, their responses provide more than three decades of data on change in the size of the U.S. population with two races in their background. By comparison, data on multiracial Americans from the race question have been available only since 2000, when people were first allowed to identify themselves as being of more than one race.
This analysis is based on Americans of all ages, not just adults. The Census Bureau reports up to two ancestry responses per person, most of which a Pew Research Center analysis matched to standard racial categories reflecting the dominant race in a given country of origin. For example, people in the 2010-2012 American Community Surveys who said they have ancestral roots in Germany would be classified as white, because over 99% of people of German ancestry said they were white when answering the race question on that same survey.36 Using this method yields a larger estimate of the U.S. two-race population than is obtained from using responses to the race question: 13.5 million compared with 7.9 million in the 2010-2012 American Community Survey.37
The analysis indicates that the U.S. population of two-race ancestry has more than doubled in size, from about 5.1 million in 1980 to 13.5 million in 2012. The share of the U.S. population with two-race ancestry has nearly doubled, from 2.2% in 1980 to 4.3% in 2010-2012. By comparison, the total U.S. population has grown by a little more than a third over the same period.
- One “extreme example of inconsistency in the classification by race over time,” described in a Census Bureau working paper, is that a person counted as an Asian Indian since 1980 could have been classified three other ways in earlier censuses: Hindu in 1920-1940, “other race” in 1950-1960 and white in 1970. See Gibson, Campbell, and Kay Jung. 2005. “Historical Census Statistics on Population Totals by Race, 1790 to 1990, and by Hispanic Origin, 1970 to 1990, for Large Cities and Other Urban Places in the United States.” Washington, D.C.: U.S. Census Bureau, February. https://www.census.gov/population/www/documentation/twps0076/twps0076.pdf ↩
- This racial self-identity can change, as demonstrated by recent research that found at least 9.8 million Americans gave a different race and/or Hispanic origin response in the 2010 census than in the 2000 census. This was particularly true for people of multiracial background. See Liebler, Carolyn, et al. 2014. “America’s Churning Races: Race and Ethnic Response Changes between Census 2000 and the 2010 Census.” Washington, D.C.: U.S. Census Bureau, August. http://www.census.gov/srd/carra/Americas_Churning_Races.pdf ↩
- 2.1% of adult Americans chose more than one racial category in 2010. ↩
- The order of categories for each multiracial group—white and black, for example—follows Census Bureau convention. As explained below, “some other race” is a residual category, with a write-in box, in addition to the five standard race categories. ↩
- The 7.9 million figure, which is derived from 2010-2012 American Community Survey data, reflects the number who reported two races. This is different from the 9 million figure, included elsewhere, which is derived from the 2010 decennial census and reflects the number who reported two or more races. ↩
- Much of the history in this chapter is drawn from Humes, Karen, and Howard Hogan. 2009. “Measurement of Race and Ethnicity in a Changing, Multicultural America.” Race and Social Problems, September http://link.springer.com/article/10.1007/s12552-009-9011-5;
Bennett, Claudette. 2000. “Racial Categories Used in the Decennial Censuses, 1790 to the Present,” Government Information Quarterly, April http://www.sciencedirect.com/science/article/pii/S0740624X00000241; Nobles, Melissa. 2000. “Shades of Citizenship.” Stanford, CA: Stanford University Press; U.S. Census Bureau. 2002. “Measuring America: The Decennial Censuses from 1790 to 2000.” Washington, D.C.: April. http://www.census.gov/prod/2002pubs/pol02-ma.pdf ↩
- The race and Hispanic origin categories used throughout the federal government (and by recipients of federal funding) currently are set by the Office of Management and Budget, and the last major revision was in 1997. In addition to their use on census questionnaires, the categories apply to federal household surveys and other forms such as birth and death certificates, school registrations, military records and mortgage applications. ↩
- See Gibson, Campbell, and Kay Jung. 2002. “Historical Census Statistics on Population Totals by Race, 1790 to 1990, and by Hispanic Origin, 1970 to 1990, for the United States, Regions, Divisions and States.” Washington, D.C.: U.S. Census Bureau, September. http://mapmaker.rutgers.edu/REFERENCE/Hist_Pop_stats.pdf ↩
- However, enumerators may not have followed instructions in all cases, according to preliminary research by Aliya Saperstein and Carolyn Liebler presented at the 2013 Population Association of America conference (http://paa2013.princeton.edu/papers/132526). Their work indicates that on average, from 1900 to 1960, nearly one-third of children ages 9 or younger with a black parent and a white parent were reported in the census as white. ↩
- See Nobles (2000). ↩
- A Census Bureau working paper that is a widely cited source of historic statistics on race says these statistics are of “dubious accuracy and usefulness.” See Gibson and Jung (2002). ↩
- See Humes and Hogan (2009). ↩
- These records can be used today by people seeking to prove they have American Indian ancestors, in order to be eligible for tribal membership or other benefits. See http://www.indian-affairs.org/resources/aaia_faqs.htm and http://publishing.cdlib.org/ucpressebooks/view?docId=ft8g5008gq&chunk.id=d0e7238&toc.depth=1&toc.id=d0e3210&brand=ucpress ↩
- Among other single-race Asian subgroups, a Japanese category has been on the census since 1880 and a Filipino category since 1920. A Korean category has been on since 1920, except for 1950 and 1960. The current Asian subgroups listed on the census form—Asian Indian, Chinese, Filipino, Japanese, Korean, Vietnamese, and “other Asian”—-have been relatively stable since the 1980 census. ↩
- For more details about hiring and quality of enumerators, see Magnuson, Diana L. 1995. “History of Enumeration Procedures, 1790-1940.” IPUMS-USA, University of Minnesota. https://usa.ipums.org/usa/voliii/enumproc1.shtml ↩
- See Nobles (2000). ↩
- See Liebler, Carolyn, and Timothy Ortyl. 2014. “More Than One Million New American Indians in 2000: Who Are They?” Demography, June. ↩
- See Nobles (2000). However, the bureau continued to research ways of estimating the size of the Mexican-American population. In the 1940 census, the bureau used data for place of birth, parents’ place of birth and mother tongue to estimate the Mexican-American population. In 1950 and 1960, the bureau developed a list of Spanish last names, which it used to classify a “Spanish surname” population in some states. For more details on the history of the Hispanic question, see Mora, G. Cristina. 2014. “Making Hispanics: How Activists, Bureaucrats, and Media Constructed a New American.” Chicago: University of Chicago Press. ↩
- In 1970, many residents of the south or central U.S. regions mistakenly were classified as Hispanic. See Cohn, D’Vera. 2010. “Census History: Counting Hispanics.” Washington, D.C.: Pew Research Center, March. www.pewsocialtrends.org/2010/03/03/census-history-counting-hispanics-2/ ↩
- See Taylor, Paul, et al. 2012. “When Labels Don’t Fit: Hispanics and Their Views of Identity.” Washington, D.C.: Pew Research Center, April. http://www.pewhispanic.org/2012/04/04/when-labels-dont-fit-hispanics-and-their-views-of-identity/ ↩
- See Lopez, Mark Hugo, and Jens Manuel Krogstad. 2014. “‘Mexican,’ ‘Hispanic,’ ‘Latin American’ Top List of Race Write-ins on the 2010 Census.” Washington, D.C.: Pew Research Center, April. http://www.pewresearch.org/fact-tank/2014/04/04/mexican-hispanic-and-latin-american-top-list-of-race-write-ins-on-the-2010-census/ ↩
- See Mora (2014). ↩
- See Krogstad, Jens Manuel, and D’Vera Cohn. 2014. “U.S. Census Looking at Big Changes in How It Asks About Race and Ethnicity.” Washington, D.C.: Pew Research Center, March. http://www.pewresearch.org/fact-tank/2014/03/14/u-s-census-looking-at-big-changes-in-how-it-asks-about-race-and-ethnicity/ ↩
- See The Leadership Conference on Civil and Human Rights. 2014. Chapter III of “Race and Ethnicity in the 2020 Census: Improving Data to Capture a Multiethnic America.” Washington, D.C.: November. http://www.civilrights.org/publications/reports/census-report-2014/chapter-iii-revising-the.html ↩
- In this analysis, all multiracial subgroups include Hispanics. ↩
- Using this method, some individuals will be assigned to the wrong racial category, if they happen to be part of the very small minority of people with that ancestry who are not part of the dominant racial group. In addition, most of those reporting American Indian ancestries were classified as American Indian and Alaska Native, even though those respondents were more likely to choose white than American Indian on the race question. The assignment was made in order to have adequate sample size for analysis. ↩
- The total multiracial population using the race responses in 2012, including people with more than two races, was 9.0 million. ↩ | <urn:uuid:82fbe147-7a89-4254-b8e9-edb72b7cad25> | CC-MAIN-2017-17 | http://www.pewsocialtrends.org/2015/06/11/chapter-1-race-and-multiracial-americans-in-the-u-s-census/ | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120101.11/warc/CC-MAIN-20170423031200-00011-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.95889 | 7,215 | 3.75 | 4 |
- Slave Trade
- Colonization of Africa: The beginning
- West and Central Africa
- Southern Africa
- East Africa
- North Africa
- Timeline of Colonization: Africa + Asia
- Mock Questions
Ok where were we?
- Although Europeans had started exploring Africa in the late 15th century, for a long time their presence remained confined mainly to certain coastal areas.
- But even these limited contacts led to the most tragic and disastrous consequences for the Africans, because of the slave trade.
- During this era, the Spanish were ruling the Americas.
- Their rule resulted in the large-scale extermination of the original inhabitants of the Americas (the Native Americans).
- Why? Because
- Native Americans were forced to work in gold and silver mines under inhumane conditions
- Native Americans lacked immunity to European diseases (smallpox, mumps, and measles)
|Continent|Slaves needed for plantations of|
|---|---|
|N. America|Tobacco, rice, indigo and cotton|
|Laborers|Why unfit for plantation work?|
|---|---|
|White prisoners / indentured servants|Died quickly from tropical diseases; runaways could disguise themselves among the townsfolk|
On the other hand, African slaves offered following advantages:
- African slaves came from an environment where those who survived into adolescence acquired some immunity to such “Old World” diseases as smallpox, mumps, and measles
- They also had some immunity against such tropical maladies as malaria and yellow fever.
- Hence, an African laborer lived three to five times longer than a white laborer under the difficult conditions on plantations.
- When Africans ran away from a plantation, they could neither go home nor disguise themselves among the townsfolk (unlike white prisoners, who could).
Thus, African slaves=inexpensive labor for the plantation owners.
Most of the slaves transported in the Atlantic slave trade were adult men. Why?
- Because African chiefs tended to retain women slaves, as agricultural workers and to bear more children.
- Children were less economical to trade: because they cost as much to enslave and transport, yet brought lower prices when sold.
- In medieval times, Arabs had dominated the slave trade. They organized slave caravans and moved them from the interior to the Gold and Slave coasts (= Now region of Ghana, Togo, Benin, and Nigeria)
- Then Portuguese entered the Slave trade business. They had two advantages over others
- They had entered the exploration race for Africa early.
- Their colony in Brazil was at a relatively short distance from Africa.
- Portuguese established a slave market in Lisbon.
- Spaniards bought slaves from that Lisbon market and took them to American colonies. But later the demand for slaves in America increased, so slaves were sent directly from Africa to America.
- The Spanish church saw the black slaves as an opportunity for conversion, and so gave its tacit approval.
- Portuguese themselves also needed Black slaves to work in their sugar plantations of Brazil.
- Slave traders raided African villages, kidnaped people and handed over to the European traders.
- Some African chiefs also took part in this business. They sold slaves to Europeans in exchange of guns and ammunition, cloth, metal ware, spirits, cutlery, coins, decorative wear, horses, salt and paper.
- Initially, the Portuguese dominated the African slave trade. But then the British decided to take over this business.
- Sir John Hawkins went to Africa to bring slaves in a ship called Jesus. He also shared a part of his slave-trade profits with the British Queen Elizabeth I.
- 17th Century: a regular company received a charter from the King of England for purposes of trade in slaves. The share of the king in the profits from slave trade was fixed at 25 per cent!
- Later, Spain gave the monopoly of slave trade to Britain. (=Spain only bought slaves from Britain, to work in their American colonies).
The "Triangular Trade" is the term used to describe the prosperous trading cycle across the Atlantic that resulted from the slave trade: European manufactured goods went to Africa, African slaves were shipped to the Americas, and plantation produce from the Americas went back to Europe.
Result of Triangular trade?
- Millions of Africans were uprooted from their homes.
- Many were killed while resisting the raids on their villages.
- In the American plantations, they were forced to work in inhumane conditions.
- If a slave tried to escape from American plantations, he was beaten and tortured.
- If a (white) man killed a runaway slave, local authorities even gave him reward.
- The "Middle Passage" is the term used to describe the brutal manner in which slaves were transported from Africa to the Americas, across the Atlantic Ocean.
- Slaves were carried in ships as if they were inanimate objects. They were given less than half the space allotted to convicts or soldiers transported by ship at the same time.
- Male slaves were kept constantly shackled to each other or to the deck to prevent mutiny.
- In the ships, they were kept in such unhygienic conditions that sometimes even sailors revolted.
- Not even half of the slaves captured reached America alive.
- Lakhs of them died during the long voyage; dysentery was the biggest killer.
- So many dead bodies were thrown into the ocean that sharks regularly followed the slave ships on their westward journey.
After 1850s, slave trade quickly declined. Why?
- European economies began to shift from agriculture to industry. Plantations remained profitable, but Europeans had promising new areas for investment.
- The slave-operated American plantations had to compete for capital and preferential laws with textile mills and other industries that hired free laborers.
- American slave societies approached the point where they could reproduce enough offspring to meet labor needs = not much need for further slave imports from Africa.
- Slavery was also a hindrance if the interior of Africa was to be opened to colonial exploitation.
- In fact, some colonial powers waged war against African chiefs/kings in the pretext of abolishing slave trade, so they could establish colony there. (recall how British used to wage wars on Indian princely states citing “maladministration” as a reason!)
- The removal of millions of young men and women from Africa led to a depopulation that stifled African creativity and production.
- Slaving and slave trading stimulated warfare and corrupted laws (more and more crimes were made punishable by enslavement, so as to get more slaves).
- It created a class of elite rulers and traders.
- Slave trade was the beginning of a dependency relationship with Europe.
- This relationship was based on the exchange of Africa’s valuable primary products (slaves, ivory, timber, gold etc.) for European manufactured goods
- This dependency continued after the slave trade ended, through a colonial period and beyond.
- In this sense, the slave trade was the first step toward modern Africa’s current status as a region where technological development has yet to match that of more industrialized nations.
- African culture mixed with Europeans and Native Americans: led to new mixed-races, music, literature, cuisine, culture, religious practices, deep impact on American history, civil wars etc.
Anyway, by the time the slave trade declined, the exploration of the interior of Africa had begun, and the European powers were preparing to impose another kind of slavery on the continent: the direct conquest of almost the entire Africa.
Initially the African coastal regions were largely in the hands of the old trading nations:
They had set up forts in those coastal regions.
There were only two places where European rule extended deep into the interior:

|Region|Deep European presence|
|---|---|
|Northern Africa|The French occupied Algeria|
|Southern Africa|The British occupied Cape Colony to safeguard trade routes with India|
Within a few years, however, a scramble for colonies began, and almost the entire continent was cut up and divided among the European powers (just like the 'cutting' of the Chinese watermelon).
All of them played significant respective roles in the conquest of Africa. (Discussed in first article)
- The explorers aroused the Europeans’ interest in Africa.
- Merchants saw profit in the trade of gold, ivory and timber.
- The missionaries saw the continent as a place for spreading Christianity.
- And European governments supported all these interests by sending troops. And thus the stage was set for conquest.
Three noteworthy explorers/adventurers were associated with this phase, among them Sir Henry Morton Stanley (who carved out the Congo Free State for Belgium) and de Brazza (who did the same for France), both discussed below.
|Period|European involvement in Africa|
|---|---|
|Late part of the 15th century|Coastal exploration begins|
|Until the middle of the 19th century|Presence limited mainly to coastal trading posts and forts|
|Last quarter of the 19th century|Scramble for Africa: almost the entire continent partitioned|
However, within a few years almost the entire continent was partitioned among various European imperialist countries. The Europeans occupied Africa at a much faster speed than they did in Asia. Why?
#1: Economic might

- The economic might of the imperialist powers was much greater than the economic resources of the African states.
- African chiefs/kings did not have the financial resources to fight a long war.
#2: Military superiority

- In terms of military strength, the imperialist countries were far more powerful than the African states.
- Most of the time, Africans fought with axes, bows and knives, while Europeans used a fast-firing gun known as the Maxim gun. An English poet even praised this:
Whatever happens, we have got
The Maxim gun, and they have not.
- Even when African chiefs wanted to buy firearms, European traders sold them only rusted, outdated junk rifles. These were no match for the new rifles and guns used by the European armies.
#3: Internal rivalries
- The African states were not politically united. (Just like the Indian princely states of the 18th century.)
- There were conflicts between states and within states
- Often these African chiefs/kings sought the support of the Europeans against their rivals. (The Europeans would then force them to sign treaties and take away their land.)
- But on the other hand, the imperialist countries participating in the scramble for Africa were united. (In the sense that they never waged war against each other but settled territorial claims in conference rooms).
Important: Before you proceed further, please click on the following link to save certain map files on your computer. Whenever a colony/country's name comes up, verify its location in those maps. (Otherwise everything will get mixed up by the time this article is over.)
Link for maps file:
Click me to download the Map files for African Colonization
#4: Mutual agreements among European powers

- All European countries were eager to get the maximum of African territory in the shortest possible time.
- Often their competition/rivalry was about to result in a war.
- But in every case, they avoided war and signed agreements as to who will get which part of Africa.
- Both the British and the Germans were competing for East Africa, but in 1890 they reached an agreement to divide the region:
|Party|What they gave up|
|---|---|
|British|Gave Heligoland to the Germans|
|Germans|Gave Uganda to the British|
In 1884-85, European States organized a Congress in Berlin to decide how to share out Africa among themselves. No African state was represented at this Congress. Treaties were signed between European powers to settle disputes over claims to African territories between themselves.
- Most of the treaties signed between African chiefs and Europeans were fraudulent and bogus.
- The Europeans gave gifts to African chiefs and made them put their thumb impressions on treaties. (We'll see examples of how adventurers like De Brazza and Sir H.M. Stanley used this technique.)
- Even when treaties were genuine, the Europeans misinterpreted the provisions in their favor.
- For example, suppose an African chief had signed a treaty with a European country "X" to seek her support against a local African rival.
- Later, that European country "X" would claim the area to be her 'protectorate', and would sometimes even exchange that territory with another European country "Y" without consulting the local African chief.
- Other European powers would also accept such bogus interpretations. Thus the occupation of Africa proceeded without any hindrance.
- By the end of 19th Century, the partition of Africa was nearly completed in this manner.
- This is generally referred to as the 'paper partition', because the actual partition took a much longer time (due to internal rebellions by Africans against the European powers).
- If you look at the African map, about thirty per cent of all boundaries in Africa are straight lines. Why? Because the continent of Africa was partitioned on paper maps, in the conference rooms of Europe.
It will be easier to understand the conquest of Africa by European powers if we study it region by region. But remember that European occupation did not take place in the order described here:
- Sir Henry Morton Stanley, an explorer, led expeditions to Congo River.
- Then he founded the International Congo Association (with financial help from the Belgian King Leopold II) and made over 400 treaties with African chiefs.
- He would give them cloth/cheap gifts and in return ask them to place their 'marks' on a paper. But actually, these papers transferred their land to the Congo Association!
- Stanley acquired more than 2 million sq. km of land using this totally awesome technique. The whole area was rich in rubber and ivory. He called it 'a unique humanitarian and political enterprise', but it led to the brutal exploitation of the Congo people.
- 1885 (= when our Congress was formed): King Leopold claimed his rights over this entire ‘Congo Free(!) State’.
- King Leopold was mainly interested in the wild rubber, palm oil, and ivory of Congo.
- His private army (known as Force Publique) would force the villagers to gather those resources. Anyone who resisted was beaten, mutilated or murdered.
- Sometimes King Leopold’s agents would even kidnap Congolese women and children, and force their men to meet “quotas” of rubber/oil/ivory collection before releasing the hostages.
- Force Publique troops would also chop off the hands of villagers- as a punishment and a method to further terrorize the Congolese into submission.
- The soldiers would even collect such severed hands and present it to their commanding officers, to prove their efficiency and commitment to crush the rebellion.
- King Leopold alone made a profit of more than 20 million dollars from the exploitation of Congo.
- The population of the entire Congo state declined from some 20 million to 8 million.
- The treatment of the Congolese people was so bad that even other colonial powers were shocked. British citizens formed association and demanded end of Leopold’s rule.
- In 1908 (one year before Morley Minto Reforms), finally King Leopold was compelled to hand over the Congo Free State to the Belgian government
- Now Congo Free State was called “Belgian Congo.”
- Gradually, Congo’s gold, diamond, uranium, timber and copper became more important than her rubber and ivory.
- Many of the countries, including England and the United States, joined Belgium in exploiting these resources.
- British and Belgians together formed a company to exploit copper mines in Congo.
- Later, this company played a very big role in Congo’s political affairs (just like East India Company in our case.)
- While Sir HM Stanley is gathering land for King Leopold in Congo, another French explorer starts operation in the north of Congo River.
- This Frenchman, de Brazza, uses the same totally awesome technique of Sir HM Stanley and makes African chiefs sign over their land to France.
- This area named French Congo, and its capital town= “Brazzaville” (after his own name De Brazza!)
- Now France set out to extend her empire in West Africa.
- Soon she obtains Dahomey (present day Benin), the Ivory Coast and French Guinea.
- By the year 1900, the French empire extended further into the interior: including present Senegal, French Guinea, the Ivory Coast, Dahomey, Mauritania, French Sudan, Upper Volta and Niger Territory.
- Just like King Leopold’s regime over Congo, this French conquest also results in brutal exploitation of the people everywhere in Africa.
- For example, in a period of only 20 years, the population of the French Congo was reduced to 1/3rd of its former size.
- Niger is the second great river of Africa (after Nile).
- Control of Niger river = control over the Western Africa’s rich resources + easy transport of slaves.
- A British company took the initiative in the conquest of Nigeria (for slave trade)
- Another French company came in for competition. But in the end British company to buyout the French and became the ruler of Nigeria.
- After a few years the British government declared Nigeria a protectorate of Britain.
- In West Africa, Britain also occupied Gambia, Ashanti, Gold Coast and Sierra Leone.
- After 1880, Germany also starts adventures in Africa.
- First she occupied an area called Togoland on the west coast; then Cameroons, a little farther south.
- Still farther south, the Germans established themselves in South-West Africa. This led to local rebellion and German troops massacred more than half of the population.
- Still she was unsatisfied, and wanted the Portuguese colonies of Angola and Mozambique and Congo.
- But then defeat in First World War started (1914) shattered her dream.
- After the war, when the German colonies were given to the victorious powers,
|German colony before WW1||After WW1 colony given to|
|Togoland + Cameroons||Divided between England + France|
|South-West Africa||Given to South Africa.|
|German East Africa|
|had only two colonies on the western coast of Africa|
|Possessed valuable regions of Angola and Portuguese Guinea. (and the British and Germans lusted for these colonies).|
- Except Liberia, the Whole West Africa was divided up among the Europeans.
- Liberia was settled by slaves who had been freed in America.
- Though she remained independent, she came increasingly under the influence of the United States, particularly the American investors in rubber plantations.
- Cecil Rhodes was a British adventurer. He made truckload of cash through in gold mines (Transvaal) and diamond mines (Kimberly).
- He was a partner in the famous “De Beers” diamond mining company. By his will, he established the Rhodes scholarships at Oxford.
- He played instrumental role in forming the British South Africa Company, under a royal charter.
- This company acquired territories in south-central Africa and named this area “Rhodesia” after Cecil Rhodes.
|Southern Region||Zimbabwe (1980)|
Rhodes became famous as a great philanthropist. He founded the ‘Rhodes scholarships’ in Oxford university. but first of all, he was a profiteer and empire-builder. He said
“Pure philanthropy is very well in its way, but philanthropy plus five per cent is a good deal better.’ Rhodes’ dream was to extend the British rule throughout the world, and he certainly succeeded in extending the British Empire in Africa. The British occupied Bechuanaland, Rhodesia, Swaziland and Basutoland.
- In South Africa, the Dutch had established the Cape Colony. (Later British took over Cape Colony to protect their trade routes to India).
- South Africa was the part of Africa where a large number of Europeans (mainly Dutch) were settled.
- These settlers were known as Boers. They owned large farms and plantations. (later Boers were called “Afrikaners”)
- British took over Cape Colony and abolished slavery. Boers did not like it, so they went north and set up two states, the Orange Free State and the Transvaal. (Together called “Afrikaner republics”).
- Transvaal was rich in gold, so the British plotted to overthrow Boer government.
- This led to the Boer War (1899)=>Boers were defeated but they continued to live here.
- Gandhi served from British side, as an assistant superintendent of the Indian volunteer stretcher-bearer corps. He was awarded Boer war medal for his services.
- Soon after this, the Union of South Africa was formed consisting of the Cape, Natal, Transvaal and Orange River Colony.
- This Union was ruled by the white minority —Boers, Englishmen, and a few settlers from other European countries.
- Later South African government later declared itself a republic.
Gandhi also served in Boer Wars (from British Side). He wrote in his autobiography
When the war was declared, my personal sympathies were all with the Boers, but my loyalty to the British rule drove me to participation with the British in that war. I felt that, if I demanded rights as a British citizen, it was also my duty, as such to participate in the defence of the British Empire. so I collected together as many comrades as possible, and with very great difficulty got their services accepted as an ambulance corps.
- British were interested in Zululand. They wanted Zulu population to serve as labour in the diamond mines across Southern Africa.
- British troops initially suffered losses but ultimately won.
Zulu Rebellion (1906)
- In 1906, the Zulu Rebellion broke out in Natal province of South Africa
- This was actually a campaign against tax being imposed by the British on the Zulus, who were demanding their rights in their own land.
- However, the whites declared war against the Zulus.
- In this Zulu war/rebellion, Gandhi served from British side, as the officer in charge of the Indian volunteer ambulance corps. He was given Zulu War Medal for his services.
1920: During Khilafat movement, Gandhi returned the medals to Britain and wrote,
It is not without a pang that I return the Kaisar-i-Hind gold medal granted to me by your predecessor for my humanitarian work in South Africa, the Zulu War medal granted in South Africa for my services as officer in charge of the Indian volunteer ambulance corps in 1906 and the Boer War medal for my services as assistant superintendent of the Indian volunteer stretcher-bearer corps during the Boer War of 1899-1900. I venture to return these medals in pursuance of the scheme of non-cooperation inaugurated today in connection with the Khilafat movement. Valuable as these honours have been to me, I cannot wear them with an easy conscience so long as my Mussalman countrymen have to labour under a wrong done to their religious sentiment. Events that have happened during the past one month have confirmed me in the opinion that the Imperial Government have acted in the Khilafat matter in an unscrupulous, immoral and unjust manner and have been moving from wrong to wrong in order to defend their immorality. I can retain neither respect nor affection for such a Government.
- Before 1884, East Africa was not occupied by any Europeans. (Except Portuguese possession of Mozambique).
- 1884 (one year before our congress is formed), German adventurer, named Karl Peters, came to the coastal region of East Africa.
- He uses bribery and threats, makes the local chiefs to sign agreements placing themselves under German protection.
- France and Britain also has interest in this region. But instead of starting war, they sit down and make agreement to divide the land.
|Madagascar||France gets it.|
- King of Zanzibar says “East Africa as is my property.”
- Germany and England appease him by giving a strip of coast land, 1600 kilometers long and 16 kilometers deep.
- Even here, Germany and England divide the Northern and Southern half of the strip under ‘sphere of influences’.
- 1905: (same year when Lord Curzon partitioned Bengal), the local Africans start revolt again Germans. 120,000 Africans were killed in this German colony.
- In 1890, there was an agreement between Germany and England according to which Uganda was’ reserved’ for England. In exchange Germany was given Heligoland.
- In 1896, Uganda was declared a British protectorate.
- Germany also gave up her claims to Zanzibar and Pemba island, Witu and Nyasaland (present Malawi), but made more conquests in the interior.
- The Portuguese colony of Mozambique was to be shared out between Germany and England, but the First World War stopped the plan and Germany lost all her colonies.
1914: first World War start. 1919: Treaty of Versailles signed and defeated Germany had to handover her colonies to the victors. let’s recall our table
|German colony before WW1||After WW1 colony given to|
|Togoland + Cameroons||Divided between England + France|
|South-West Africa||Given to South Africa.|
|German East Africa|
- Like Germany, Italy entered the colonial race late.
- The Italians occupied two desert areas in the ‘horn of Africa’ –Somaliland and Eritrea.
- Later she got interested in Abyssinia (aka Ethiopia)
- The country of Abyssinia, now known as Ethiopia, was an independent state.
- Italy wanted to declare Abyssinia its protectorate.
- 1896: king of Abyssinia rejects Italy’s claim. Italy sends an army.
- Abyssinia was able to procure arms from France and defeated the Italians. (Unlike other African states)
- During this war, as much as 70 percent of the Italian force was killed, wounded, or captured, finally treaty of Addis Ababa was signed to declare peace.
- 1935: like a defeated gambler, Italy makes second attempt to conquer Abyssinia.
- Before the Second World War Except for a brief period during those years, Ethiopia, maintained her independence.
- French occupied Algeria in 1830, it took her about 40 years to suppress the Algerian resistance.
- It was the most profitable of France’s colonial possessions, providing her a vast market for French goods.
Both France and England wanted to control Tunisia. But they don’t go for war, they make an agreement.
- Morocco is situated on the north coast of Africa, just south of Gibraltar.
- Hence very important to the western entrance of the Mediterranean.
- Both France and Italy wanted Morroco. But they don’t go for war, they also make an agreement.
|Italy||Gets Tripoli and Cyrenaica (east of Tunisia).This region was already under Turkish Empire. So Italy sent troops, occupied two provinces and called it “Libya”.|
- While France, Italy and England were busy dividing North Africa among themselves, they had ignored Germany.
- German Minister said, “You(French) have bought your liberty in Morocco from Spain, England, and even from Italy, and you have left us out.”
- There were many international crises and it appeared as if war would break out.
- But France appeased Germans by transfering 250,000 square kilometres of French Congo to Germany.
- Similarly France also appeased Spain by giving her a small part of Morocco.
- In 1912 France established her protectorate over Morocco. However, it took the French many years after the First World War to suppress the rebellions there.
- During this era, Egypt was a province of the Turkish empire
- Egypt was ruled by a “Pasha” (representative/Governor appointed by the Turkish Sultan)
- But France was interested in Egypt, Since the time of Napoleon
- A French company had gained a concession from Ismail Pasha, the Governor of Egypt, to dig a canal across the isthmus of Suez.
- Suez Canal Connects Mediterranean and the Red seas. The canal extends 163 kilometres between Port Said in the north and Suez in the south.
- The canal was completed in 1869 and aroused British interest in the area because it’d reduce the shipping time between Europe and Asia.
- British PM Disraeli bought a large number of shares of the canal from the Pasha to make sure of keeping the route to India safe.
- Disraeli called Suez canal ‘a highway to our Indian empire’.
Pasha’s game over
- Later Egypt’s Pasha run into financial troubles. The British and French gave him loans and increased their interference in allocation of trading-mining rights. (just like in China).
- When the Pasha tried to resist, he was forced to abdicate and a new governor was appointed.
Egypt: the cotton colony
- Britain developed Egypt as a supplier of cotton for her textile industry.
- The control of foreigners over cotton was total, from owning or controlling the land it was grown on, the cotton processing and cotton cleaning industry and the steamships it was transported.
- But, There was not a single mill in Egypt. Why? Think about it!
- During first world war time, Cotton accounted for 85% of Egypt’s exports.
By 1914 cotton
constituted 43 per cent of agricultural output. It accounted for 85 per cent of exports in
1913. Being a single crop economy was disastrous as Egypt became dependent on
imports for her essential food supply.
- In 1880s, Egyptians started revolt against this Anglo-French control.
- Britain sent her army in pretext of rest orating law and order and protection of the Suez Canal
- The British assured that we’ll withdraw our troops from Egypt as soon as peace is established.
- After the revolt was suppressed, Egypt came under British control.
- When the First World War started, England announced that Egypt was no longer a Turkish province but a British protectorate!
- Then Britain fully exploilted the natural resources, manpower and economy of Egypt during WW1. Crops were seized by the army. The British Treasury took over the gold reserves of the National Bank of Egypt.
- After the First World War, Egyptian leaders started for the Paris Peace Conference to plead the case of Egypt, but they were arrested.
- In the 1920s, Britain was forced to recognize Egypt as an independent sovereign state (but still, Britain retained her rights over the Suez and many other concessions)
- Sudan, or what was earlier known as Egyptian Sudan, was jointly exploited by Egypt and Britain.
- A Sudanese leader who had proclaimed himself the Mahdi had in the 1880s succeeded in overthrowing Egyptian and British control over Sudan.
- His army had defeated Egyptian and British troops.
- Later British and Egyptian troops waged a bloody war, killed 20000 Sudanese troops and recaptured Sudan. Thus, Sudan came under British rule.
- The French at this time tried to occupy southern parts of Sudan but were forced to withdraw by the British.
- France, however, was given a free hand to extend her control over what was known as western Sudan and the Sahara. France occupied these areas after a long war of conquest.
- With these gains, France was able to connect her equatorial conquests with her west and north African conquests.
French and British Colonies
first let’s check the timeline of African Colonization only
Now let’s combine the timelines of Asian + African colonization (click to enlarge)
What was the contribution of following in the scramble for Africa (2 marks each?)
- Cecil Rhodes
- De Brazza
- Sir HM Stanley
- King Leopold II
5 marks (50 words)
- Battle of Adowa
- Zulu War
- Boer War
- Congo Free State
- Gandhi’s parturition in Boer War.
Comment on following (10 marks each)
- I put my life in peril four times for the sake of the Empire: at the time of the Boer war, at the time of the Zulu revolt…. I did all this in the full belief that acts such as mine must gain for my country an equal status in the Empire. But the treachery of Mr. Lloyd George and its appreciation by you, and the condonation of the Punjab atrocities have completely shattered my faith in the good intentions of the Government and the nation which is supporting it.
- When the war was declared, my personal sympathies were all with the Boers, but … I felt that, if I demanded rights as a British citizen, it was also my duty, as such to participate in the defence of the British Empire.
- I venture to return these medals in pursuance of the scheme of non-cooperation inaugurated today in connection with the Khilafat movement. Valuable as these honours have been to me, I cannot wear them with an easy conscience so long as my Mussalman countrymen have to labour under a wrong done to their religious sentiment.
12 marks (120 words)
- British Interests in following: 1) Suez Canal 2)Cape Colony 3)
- Examine the role of industrialization and capitalism in scramble the scramble for Africa.
- Explain these terms: i) Middle Passage ii) Triangular trade
- Analyze the impact of Triangular trade on Africa, Europe and Americas.
- How did adventurers and explorer helped in the scramble for Africa? Describe with examples.
- factors that helped Europeans colonize the Africa.
- Scramble for Western Africa and North Africa
- Scramble for Southern Africa and Eastern Africa
- Enumerate the factors that led to rise of Slade trade in Africa.
- List the consequences of African slave trade.
- By early nineteenth century, why did the trade in slaves lost its importance in the system of colonial exploitation?
- British and French occupation of Africa
25 marks (250 words)
- Why did Gandhi participate in Boer and Zulu war? Do you agree with Gandhi’s justification for his participation? Give reasons to justify your stand.
- Redrawal of national boundaries in Africa during the 19th Century.
- Colonization of Asia and Africa: similarities and differences
- Write a note on the Paper partition of Africa | <urn:uuid:6a4e0153-089b-487d-880e-c6a0f6a052d4> | CC-MAIN-2017-17 | http://mrunal.org/2013/07/world-history-imperialism-colonization-africa-scramble-for-colonies-paper-partitions-slave-trade-boer-war.html | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123590.89/warc/CC-MAIN-20170423031203-00486-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.962199 | 7,188 | 3.8125 | 4 |
I. The Crisis in Darfur
Though Darfur, in the West of Sudan, has been embroiled in conflict since 2003, the roots of the crisis stretch back much further. Much like the other disputes in Sudan, the crisis in Darfur is based upon economic and political marginalization of non-Arabs (for more on other crises in Sudan, please see our separate Crisis in Sudan page).
Unlike the Second Sudanese Civil War in the South of the country, the conflict in Darfur has not been characterized by religious divisions. Grievances instead arose from a combination of economic and ethnic tensions. Beginning in 1972, a series of droughts and intensified desertification in Darfur led to disputes over land between non-Arab sedentary farmers (from the Fur, Zaghawa and Masalit tribes) and Arab nomads. When a Libyan-sponsored Arab supremacist movement emerged as a major power under the 1986 government of Sadiq al-Mahdi, many non-Arab Darfuri farmers felt that their interests were being sidelined. The regime that formed after a coup led by Omar al-Bashir in 1989 continued to rely on Arab networks to extend its control over the country, using identity politics to mobilize support and driving a deeper wedge between the communities in Darfur.
Over the next two decades, these feelings were exacerbated by policies of the government, which seemed intentionally to segregate non-Arabs and which split Darfur into three separate regions to break the unity of Darfuri tribes. As the Second Sudanese Civil War began to move towards a peace process in 2002-2005, the prospect of being left out of comprehensive peace talks heightened the Darfuris’ sense of governmental neglect. Members of the marginalized Fur and Zaghawa tribes began to form rebel groups: Khalil Ibrahim, of the Zaghawa tribe, founded the Justice and Equality Movement (JEM) in 2001, and the Sudanese Liberation Army (SLA) was formed in 2001 as an alliance between the Fur and Zaghawa tribes.
In March 2003, as these rebel groups became better organized and staged ambitious attacks, the Government of Sudan (GoS) was taken by surprise. The GoS responded by recruiting militias to fight the rebels, with support from the Sudanese Armed Forces (SAF). Human Rights Watch and Amnesty International, among others, reported that these militias engaged in an ethnically-targeted campaign of mass killings, forced displacement, destruction of property and the use of rape as a weapon of war. According to United to End Genocide, the conflict has claimed 300,000 lives, internally displaced 2.7 million people and forced another 250,000 to flee abroad, mainly to Chad. A combination of civil society and governments advocated for action to protect the people of Darfur from genocide and, following international pressure, a hybrid UN-AU mission was deployed to the country to monitor the 2006 Darfur Peace Agreement (DPA) and the subsequent 2011 Doha Document for Peace in Darfur (DDPD).
However, the initial violence had already claimed almost two-thirds of its current total victims before international attention had turned from the North-South war to Darfur in 2005. The conflict today has morphed into an inter-ethnic battle between militias over the spoils of this campaign. The government has failed to keep these militias in line, due to dwindling resources following the reallocation of assets after the secession of South Sudan. The current violence along the Sudan-South Sudan border is also spilling into Darfur.
II. Conflict as Genocide
On 25 April 2003, two rebel groups, the Sudan Liberation Army (SLA) and the Justice and Equality Movement (JEM), began a major offensive against the GoS by attacking and capturing a number of government installations, including El Fasher airport. They had the backing of the Sudanese People’s Liberation Army (SPLA), another major rebel group fighting mainly in the South of the country in the Second Sudanese Civil War. In the opening days of the conflict, rebels were able to score some quick gains by taking advantage of the fact that GoS forces were already thinly-stretched between fighting the SPLA in the South and fighting Eritrean-sponsored rebels in the East.
In response to the rebel attacks in Darfur, the Sudanese government began aerial bombardments in Darfur and enlisted the support of a nomadic militia, the Janjaweed, which originated in the 1980s during the civil war in neighbouring Chad. In 1985, the GoS began arming Arab nomads in Darfur to defend the Chadian border from potential invasions. Together, the armed Arab nomads formed the basis of the Janjaweed. Despite a calming in relations between Chad and Sudan in 1990, the Sudanese government continued to supply the Janjaweed to fight in the Second Sudanese Civil War. During the 1990s, the Janjaweed also raided villages along the Chad-Sudan border near Darfur. The failure of the GoS to stop this added to the grievances of Darfuri farmers. When violence broke out in 2003, the government directed the Janjaweed in a counter-insurgency campaign against the aforementioned rebel groups.
Though Chad brokered a ceasefire between the parties in September 2003, the agreement quickly broke down in December 2003. The Government’s renewed counter-insurgency campaign in 2004 began to systematically target ethnic groups, according to a Human Rights Watch report, ‘Darfur Destroyed’. Members of the Fur, Zaghawa and Masalit tribes became the targets of massacres, summary executions of civilians, burnings of towns and villages, forcible depopulations, rape and sexual violence. The report documents 14 incidents of large-scale killings in Dar Masalit alone between September 2003 and February 2004, attacks which left 770 dead. Hundreds of villages were emptied in repeated attacks, in half of which there were reports of rape. By the spring of 2004, reportedly 30,000 people had been killed, 1.4 million people had become internally displaced and another 100,000 had fled across the border into Chad.
In March 2004, just ahead of the tenth anniversary of the genocide in Rwanda, the UN Humanitarian Coordinator for Sudan, Mukesh Kapila, warned of the similarities between the situation in Darfur and that of Rwanda. Tom Eric Vraalsen, the Secretary-General’s Special Envoy for Humanitarian Affairs in Sudan, called the situation “one of the worst in the world”. In April 2004, the U.S. reported to the UN Commission on Human Rights that atrocities, such as rape and ethnic cleansing, were taking place in Darfur. It also noted that humanitarian access and government services had been blocked from non-Arab villages while they continued in Arab villages nearby. This was followed in May 2004 by the Human Rights Watch report, which stated that “there can be no doubt about the Sudanese government’s culpability in crimes against humanity in Darfur” and called on the international community to act. On 9 July 2004, both houses of the US Congress referred to the situation in Darfur as genocide, a claim repeated in September in a report by the US State Department and by US Secretary of State Colin Powell. On 16 September 2004, the European Parliament called the actions of the GoS “tantamount to genocide”. On 1 June 2005, U.S. President George W. Bush labelled the situation in Darfur as genocide.
The African Union refrained from using the term in November 2004, stating that "there is mass suffering, but it is not genocide." Similarly, the League of Arab States announced that it could not find “any proof of allegations that ethnic cleansing or the eradication of communities had been perpetrated”. On 18 September 2004, the UN Security Council adopted Resolution 1564, which called on Sudan to meet its obligations to protect civilians set out in Resolution 1556 of July 2004, threatened sanctions, and established for the first time an inquiry based on the Convention on the Prevention and Punishment of the Crime of Genocide. On 25 January 2005, the United Nations Commission of Inquiry on Darfur determined that while “no genocidal policy has been pursued and implemented in Darfur by the Government authorities, directly or through the militias under their control”, it did warn that “international offences such as the crimes against humanity and war crimes that have been committed in Darfur may be no less serious and heinous than genocide”.
III. Peace Agreements
During the opening stages of the crisis in Darfur, international focus on Sudan was placed on negotiations to settle the Second Sudanese Civil War. International Crisis Group believes that the GoS purposefully drew out these talks, hoping that the international community would not force action on Darfur and thereby risk jeopardizing the fragile peace process between the North and South. On 9 January 2005 the GoS finally signed a Comprehensive Peace Agreement (CPA) to end the Second Sudanese Civil War. International attention turned to the situation in Darfur.
During the conflict, numerous ceasefires were signed and broken. The Darfur Peace Agreement (DPA), also known as the Abuja agreement, was signed on 5 May 2006 by the GoS and one faction of the Sudanese Liberation Army (SLA-MM), led by Minni Minnawi. A regional governing body, the Darfur Regional Authority (DRA), was established with a mandate of power-sharing, wealth-sharing and compensation. The agreement did little to curb the violence, in part because it had failed to secure endorsement by other key rebel factions, such as the JEM and an opposing faction of the SLA (the SLA-AW, led by Abdel Wahid).
IV. International Response
i. African Union
Sudanese President Al-Bashir agreed to allow the African Union (AU) to deploy a mission (AMIS) to monitor a ceasefire agreement signed on 8 April 2004, a mission endorsed by UNSC Resolution 1556. The AU initially deployed 150 troops in August 2004 but had increased that number to 7,700 troops by April 2005. African leaders resisted efforts to widen the intervention to non-African countries. These voices included South African President Thabo Mbeki, who stated “we have not asked for anybody outside of the African continent to deploy troops in Darfur. It's an African responsibility, and we can do it”. However, according to Human Rights Watch, AMIS struggled to function, due to an uncooperative Sudanese government and a lack of resources.
ii. European Union
On 6 April 2006, the European Parliament called on the UN “to act on its responsibility to protect civilians” in Darfur. On 28 September 2006, the Parliament stated that Sudan “has failed in its ‘responsibility to protect’ its own people” and called on the GoS to accept a UN mission under UN Resolution 1706. On 15 February 2007, the European Parliament called on the UN to “act in line with its ‘Responsibility to Protect’ doctrine (...) even in the absence of consent or agreement from the Sudanese Government”. On 12 July 2007, the Parliament called on the UN to act “basing its action on the failure of the Government of Sudan (GoS) to protect its population in Darfur from war crimes and crimes against humanity”.
iii. United Nations
On 24 March 2005, the UNSC authorized a UN mission (UNMIS) in Resolution 1590 to support the implementation of the CPA. On 31 August 2006, UN Security Council Resolution 1706 aimed to expand the mandate and force size of UNMIS. Resolution 1706 was the first to make reference in a country-specific situation to paragraphs 138-139 of the 2005 World Summit Outcome Document, by which governments unanimously endorsed the Responsibility to Protect. In the face of opposition from the GoS, the UN instead proposed the transition from AMIS to a joint UN-AU mission (UNAMID) of 25,987 personnel in Resolution 1769 on 31 July 2007, whose deployment was delayed until 31 December 2007. UNAMID’s mandate was extended in Resolution 1935 in 2010 and again in Resolution 2113 in 2013, although the mission’s strength was set at 26,167 personnel in 2012 by Resolution 2063.
iv. International Criminal Court
On 31 March 2005, the UNSC referred the situation in Darfur to the International Criminal Court (ICC) and on 14 July 2008, Luis Moreno Ocampo, Chief Prosecutor of the ICC, requested an arrest warrant for President Omar Al-Bashir of Sudan, the first time the ICC had indicted a sitting Head of State.
On 4 March 2009, the ICC issued an arrest warrant for Al-Bashir on five counts of crimes against humanity (murder, torture, rape, extermination, and forcible transfer) and two counts of war crimes (intentionally directing acts against civilians and pillaging). While the ICC judges said they did not have sufficient evidence to support charges of genocide, they did find that Al-Bashir had played an "essential role in (the) . . . coordinating . . . design (and) implementation” of a counter-insurgency campaign in which the attacks were “widespread” and “systematic” and followed “a similar pattern” to genocide. On 12 July 2010, after judging that the standard of proof for genocide had been set too high in the previous investigation, the ICC issued a second arrest warrant for al-Bashir on three counts of genocide committed in Darfur (genocide by killing, genocide by causing serious bodily or mental harm and genocide by deliberately inflicting on each target group conditions of life calculated to bring about the group’s physical destruction), the first time an arrest warrant for the crime of genocide was issued by the Court. The GoS, the Arab League, and the AU denounced the warrants. The AU Assembly, at its 16th annual summit, called for the UN Security Council to defer proceedings against President al-Bashir in accordance with Article 16 of the 2005 Rome Statute. The UN Security Council has so far not acted on this request and the warrants remain in force.
Many of the 139 state signatories of the Rome Statute of the ICC, of which 122 are States Parties to the Court, have ignored their obligation to act on these arrest warrants. Al-Bashir freely travelled to Chad in 2009, 2011 and 2013, Eritrea in 2009 and 2013, Kenya in 2010, Djibouti in 2011, Egypt in 2012, Kuwait and Nigeria in 2013, and the Democratic Republic of Congo in February 2014 for a regional trade summit, prompting the Coalition for the International Criminal Court to call for Al-Bashir’s arrest. He has also travelled to non-signatory states including China in 2011, Saudi Arabia in 2012, and Ethiopia in 2013, among others. In July 2013 the UK Minister for Africa, Mark Simmonds, stated that a visit such as the one to Nigeria “undermines the work of the ICC and sends the victims a dismaying message that the accountability they are waiting for will be delayed further”.
However, there have also been numerous examples of states declining to allow Al-Bashir entry, refusing to host international and regional summits Bashir intended to attend, or Bashir cancelling trips on the grounds that states would act to enforce the ICC arrest warrant. Pressure has come from many sources, including state governments, state courts, neighbouring states, the ICC, and civil society. Bashir was refused entry to or cancelled trips to Botswana, France, Uganda, South Africa and Turkey in 2009; the Central African Republic, Kenya, South Africa again and Zambia twice in 2010; Malaysia, Kenya again and Nigeria in 2011; and Malawi in 2012. In addition, the US refused to give Al-Bashir a visa to attend a UN summit in 2013. On 4 March 2014, over 30 NGOs called for the UNSC and States Parties to the ICC to end impunity for Al-Bashir.
V. DDPD & Latest Developments
Doha Document for Peace in Darfur
In December 2010, talks began between the GoS and an umbrella organization for rebel forces, the Liberation and Justice Movement (LJM). Both the LJM and JEM (the largest single rebel group) agreed to attend talks in Doha. On 14 July 2011, the GoS and the LJM signed the Doha Document for Peace in Darfur (DDPD). The Agreement proposed power-sharing and a more equal distribution of wealth, and committed to the work of the Darfur Regional Authority. At the third meeting of the DDPD in February 2014, further discussions were held on the integration of LJM battalions into the Sudan Armed Forces (SAF) and Police. However, little progress has been made in implementing the deal. The main rebel groups which refused to sign have joined the Sudan People’s Liberation Movement-North (SPLM-N) and formed a loose alliance known as the Sudan Revolutionary Front (SRF), formed in November 2011 with a national agenda. This has made it difficult to engage parties on the DDPD, which focuses only on Darfur.
From 2010, government-sponsored militias began to act independently in response to declining financial support from the GoS and began fighting among themselves. The Enough Project has highlighted how commercial interests in the region are now fuelling the conflict. The GoS’s loss of revenue from oil reserves ceded to South Sudan in 2011 has led to conflicts over sharing the spoils of land and loot captured by the Janjaweed in Darfur. Such groups continue to be engaged in activities such as land grabbing, extortion, smuggling and robbery. The gold mining area of Jebel Amer in North Darfur has become a locus for fighting since January 2013 as it represents an alternative source of revenue for the militias. Most notable about this new phase of violence is that there are now inter-Arab attacks, breaking from the earlier narrative of an Arab versus non-Arab conflict. A report by the Small Arms Survey found that Arab militias are now joining rebel movements such as the JEM and even fighting government forces in some cases. The UN reported that 400,000 people were displaced in 2013 alone, reversing a trend which had seen 100,000 people return home from refugee camps in 2012.
On 6 April 2013 the JEM-Sudan/JEM-Bashar, a splinter group of JEM, signed the DDPD and resumed the process in January 2014 after a brief freeze in implementation. Although more rebel groups have signed the DDPD, the peace process’s main achievement, the Darfur Regional Authority, is due to wind up in 2015 with modest accomplishments, giving the remaining rebels little incentive to invest in it. This suggests that fighting will continue; the surge in violence during 2013 is evidence of this.
In November 2013, Sudanese officials announced that they would be launching “a dry season campaign for the final elimination of all armed movements”. In December 2013, a Border Guard Commander, ‘Hemeti’, recruited Janjaweed forces into a ‘Rapid Support Troops’ (RST) force to fight alongside the Sudanese Armed Forces (SAF) in the neighbouring region of South Kordofan. These forces were expelled from their bases in North Kordofan in mid-February 2014 and have launched a summer campaign against the rebels in Darfur in coordination with SAF. The recent expulsion of the RST by the governor of North Kordofan has highlighted the increasingly complicated relationship between the government, local populations and the militias.
The Darfur Relief and Documentation Centre reported on a destructive military operation against the civilian population in South Darfur by the SAF and RST in the final days of February 2014 while the SLA-MM, which withdrew from the DPA in February 2011, and the SRF continued to capture towns in South and North Darfur in March 2014. UNAMID expressed concern on 3 March 2014 over the growing violence in South Darfur and the Sudanese authorities’ refusal to allow the UN mission access to affected areas. On 4 March 2014 the World Food Programme reported that 20,000 people had been displaced by fresh fighting. On 12 March 2014 US Ambassador Samantha Power reported to the UN that 120,000 people had been displaced in Darfur since January 2014.
Civil society has been involved in raising awareness of the situation in Darfur from the earliest days of the crisis. Amnesty International and International Crisis Group were among the first to draw attention to the developing situation, in July and December of 2003 respectively, noting the violence and calling on the government to protect the people of Darfur. The mass movement which later developed around the issue of Darfur would depend on the analysis and reporting of these groups. In February 2004, the Washington Post was one of the first major newspapers to report on the crisis, in the op-ed “Unnoticed Genocide”.
In July 2004 political, religious and human rights groups formed ‘The Save Darfur Coalition’ and in October ‘The Genocide Intervention Network’ was established, with the crisis in Darfur and the emerging norm of the ‘Responsibility to Protect’ at its core. The Save Darfur Coalition has played a significant role in shaping the US response to the crisis in Darfur and building the case for genocide. The group also had an influence on the appointment of a Special Envoy for Sudan, and the Genocide Intervention Network worked to boost media coverage of Darfur, which doubled in 2007-2008. In 2011, these organizations merged to form ‘United to End Genocide’.
The Save Darfur Coalition was also credited with raising awareness of China’s involvement in the crisis, which the European Parliament and Amnesty International reported included the delivering of weapons. These groups also pointed to China as the main provider of income to the GoS and are seen to have played a part in altering China’s stance on Darfur. At first, China abstained from a vote authorizing a UN mission to Sudan, but subsequently supported the resolution which set up the hybrid UNAMID force.
In 2006, NGOs concerned about the situation in Darfur, such as The Save Darfur Coalition, Human Rights Watch, Amnesty International, Aegis Trust, International Crisis Group and STAND Canada, began holding an annual ‘Global Day for Darfur’. In Africa, the Darfur Consortium (now the Sudan Consortium) formed a coalition of 50 African-based NGOs to raise awareness about the conflict. In 2009, the GoS expelled 13 NGOs from the country in response to the indictment of Al-Bashir by the ICC. These NGOs accounted for almost half the humanitarian aid in Darfur.
Groups such as Human Rights Watch and Amnesty International continued to report on the destruction of villages by militias in 2013 and called on the government to investigate the attacks. Such reports also highlighted instances where Sudanese authorities have blocked UNAMID’s access to certain areas. International Crisis Group released a report in January 2014 noting the failures of the implementation of the DDPD and called on the UNSC to refocus UNAMID towards the protection of civilians.
Looking forward, many voices, including that of International Crisis Group, argue that piecemeal deals on local and regional levels are no longer appropriate. Instead, solutions need to reflect the national nature of the crisis. The conflicts across the country are becoming increasingly intertwined, as rebels in Darfur have joined those fighting in the South Kordofan and Blue Nile regions, further diminishing the chances of success for locally-negotiated settlements.
International attention on Darfur has been in a continuous ebb and flow since the outbreak of violence. Human Rights Watch believes this was due to a combination of geography, limited media access to Darfur, and the mixed relationship the international community has with the GoS. The international community has at times muted its condemnation of the conflict in Darfur to ensure successful negotiations on the CPA, cooperation with the US ‘War on Terror’, and a peaceful secession of South Sudan in 2011; most recently, focus has drifted to the conflicts in Abyei and the South Kordofan and Blue Nile regions. The Enough Project illustrated that throughout the crisis the international community has only responded “to put out the worst fires as they arose”; in 2014, however, a more comprehensive approach is needed to ensure that Darfuris are finally protected from the four RtoP crimes and violations.
Read more about other related conflicts in Sudan on our "Crisis in Sudan" page, and be sure to check out our "Crisis in South Sudan" page as well.
Special thanks to Neil Dullaghan for his work in compiling this page. | <urn:uuid:4a55d1f9-ac72-41b7-9224-230b02e3dc31> | CC-MAIN-2017-17 | http://www.responsibilitytoprotect.org/index.php/crises/crisis-in-darfur | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121355.9/warc/CC-MAIN-20170423031201-00014-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.961998 | 5,160 | 2.921875 | 3 |
by Allen, Renée
This object is available for public use. Individuals interested in reproducing this object in a publication or website, or for any commercial purpose, must first receive written permission from the Modernist Journals Project.
For further information, please contact:
Modernist Journals Project
Box 1957, Brown University, Providence, RI 02912
In the six months covered by this volume, the government of Prime Minister Herbert Henry Asquith passed the Insurance Act and had a Minimum Wages Act forced on it by striking coal workers. Despite--often because of--such historic social welfare legislation, the government continued to be harshly criticized by the left, especially The New Age, and lost votes in by-elections. The Women's Social and Political Union (WSPU) increased its violence as Liberals waffled on suffrage for fear that the women to be enfranchised would vote conservative. By waffling, Liberal England helped its own death along: when the National Union of Women's Suffrage Societies (NUWSS) gave up on obtaining Liberal support, it took its organization and money to Labour. James Ramsay MacDonald, as Labour leader, struggled to exercise Labour's limited power through coalition with the Liberals while growing numbers of trade union members, and the suffragettes with whom they were beginning to work, pushed for a more radical approach. On the opposition side, Arthur Balfour resigned as Unionist leader in the second week of November. Competing factions compromised on Andrew Bonar Law as his replacement. Law's public approval of Ulster Protestant militancy contributed to the growth and lawlessness of Irish militia groups. Asquith's cabinet included:
- David Lloyd George, Chancellor of the Exchequer.
- Sir Edward Grey, Foreign Secretary.
- Winston Churchill, First Lord of the Admiralty.
- Reginald McKenna, Home Secretary. (Churchill and McKenna had exchanged positions in October 1911.)
Events of the Period
- Treaties ended the five-month old Second Moroccan Crisis--in which Germany threatened to fire on French-controlled Morocco unless given portions of the Congo--in the first week of November. Disagreement between the War Office and the Admiralty over how best to prepare England for the eventual war between Germany and France led to Churchill replacing McKenna as First Lord of the Admiralty.
- The Italo-Turkish War began in September when Italy seized Tripoli, taking advantage of the fact that Turkey's only remaining European ally, Germany, was busy in Morocco. Meanwhile the Balkan states were secretly allied against Turkey, awaiting their chance to take part of the Ottoman Empire. The First Balkan War began only days after this war ended in October 1912.
- The Chinese Revolution began in October 1911 when railways in Hupeh province were nationalized, infuriating investors. Simultaneously, heavy taxes and natural disasters provoked unrelated popular revolts in rural areas. The western-educated Sun Yat-sen, who had been fomenting revolution for years from abroad, returned to China in December in time to be nominated president of the republic that the rebels declared when they took control of the provincial capital, Wu-ch'ang. The Manchu government called in the warlord Yüan Shih-k'ai to put the rebellion down. Yüan played both sides until Sun resigned, Yüan became president, and the last emperor, the six-year-old Henry Puyi, was deposed in February 1912. After the leader of the election-winning (Kuomintang) Nationalist Party was assassinated a year later, Yüan became hereditary president for life.
- Persia had been divided into two spheres of influence by the Anglo-Russian Convention of 1907. In 1909 Persia established a parliamentary government for the second time. From May to December 1911, the treasurer-general Morgan Shuster, an American, encouraged the government to ignore Russia's influence. Russia demanded his dismissal; when Persia refused, Russia invaded, nearly eliminating the buffer between Russia and India. Shuster was expelled and Parliament dissolved. The same year the deposed Shah attempted an invasion.
- Raymond Poincaré replaced Joseph Caillaux when the French cabinet headed by Caillaux was dissolved in January 1912, after only 7 months, because of the concessions made to Germany in order to end the Moroccan Crisis.
- The Social Democratic Party won the last Reichstag elections of the German Empire in January, capturing a third of the vote.
- Turkey. Mehmed V was the sultan, but The Committee of Union and Progress (the Young Turks) was, loosely, in charge. CUP won an overwhelming majority in the April elections, but lost support throughout the year as the war with Italy went badly.
Domestic and Colonial
- The Parliament Act, passed in August 1911, ended the Lords' absolute veto, but left them able to delay action. Any bill that passed Commons three times over a period of at least two years could bypass Lords and go directly to the king. The Act also set a five year maximum between general elections; however, this government remained in power through 1918 due to the war.
- In September, the Ulster Unionist Council, under its new leader, Edward Carson, drafted a constitution for the (illegal) provisional government it would form if Home Rule were declared. In January 1912 the Ulster Unionists formed and began (legally) training a volunteer force that numbered 80,000 by the time the Home Rule Bill was officially introduced in April.
- The National Insurance Act was passed in December 1911 by overwhelming majority, though some socialist MPs opposed it. It was designed to provide health and unemployment insurance to a majority of the population with contributions from workers, employers, and the state in a 4:3:2 ratio. Rather than create a state insurance agency, existing friendly societies and insurance companies were used with an insurance commission to oversee them, which resulted in unequal benefits. Because the Act covered three million working women and, to a lesser extent, the wives of working men, the government could claim that it had improved the situation of women, despite its lukewarm support for suffrage. The Act was unpopular because many workers saw their contributions as "an enforced docking of wages." Also, the private companies involved had ensured that life insurance and pensions for widows and orphans were not included. Initial opposition weakened once benefits began to be paid--several months after deductions were first taken--but was not wiped out until the inter-war period.
- Early in 1911 a majority of Commons MPs supported women's suffrage in general elections (some women already voted in local elections). In 1910-11 three Conciliation (non-partisan) Bills enfranchising propertied women were initiated. In July 1910, Asquith ensured one Bill would never come to a third reading by sending it to a committee of the whole house. The WSPU suspended its window-breaking campaign until 18 November 1910, "Black Friday," when large numbers of suffragettes marched on the just-reopened Parliament. Despite orders to simply keep them away from the building, the police arrested and abused hundreds of women, who were then force-fed in jail. This torture increased public support for the women. Also on Black Friday Asquith prevented consideration of another Conciliation Bill. A new truce began after the Liberals won the December 1910 elections and introduced a new Conciliation Bill. In November 1911, Asquith's announcement of a plan to eliminate restrictions on male suffrage undermined proposals for female suffrage: universal suffrage for women was impossible, but why should only a few propertied women have the vote, if all men were to get it? The WSPU returned to violence, at the cost of both public and political support. The Bill was defeated in March 1912. The male suffrage bill was officially introduced in July. When Asquith amended women's suffrage to it, the Speaker decided the Bill would have to be withdrawn because the amendment altered its original purpose; thus both women's and men's suffrage were defeated. The WSPU (and others) saw this as Asquith's intention and escalated its violence. The NUWSS gave up on Conciliation and the Liberals and turned to Labour.
- Trade Union Activism. The number of work days lost to strikes doubled from 1910 to 1911 and then tripled in 1912. The strikes were mutually reinforcing: on the one hand, work stoppage in a given industry affected related industries; on the other, a successful strike in one industry encouraged workers in another to strike for similar improvements. The main impetus behind the strikes was the fact that real wages had been falling for many years despite full employment, record industrial profits, and declarations by the major newspapers that England was in a period of unsurpassed prosperity. French syndicalism--the use of direct, often violent, action to give workers control of industries--had a growing but uneven influence in England. Strikers often used violence, but not strategically. The Railway Strike ended in August, but in mid-January 1912 the coal miners voted to strike for a minimum wage. Government negotiations with miners and owners failed to prevent 850,000 (mostly Welsh) miners from walking out 1 March, leaving a total of over one million workers out of work. The police were not called out for the coal strike, in part because the government was not sure the railway workers would transport them to the coal mines in Wales.
- In mid-March the government was forced to introduce the Miners' Minimum Wages Bill in order to resolve the coal strike. The bill passed despite Unionist opposition.
- The Futurist Exhibition opened at the Sackville Gallery 1 March 1912.
- The Titanic sank during the night of 14-15 April 1912.
- The Third Irish Home Rule Bill and a Welsh disestablishment bill were introduced in April. Both were defeated in the House of Lords, but, in accordance with the Parliament Act, were reintroduced and, having passed Commons three times, received royal assent in the summer of 1914. Both were also suspended by the outbreak of WWI. Disestablishment went into effect after the war.
- The Indian capital was moved from Calcutta to Delhi, capital of the former Indian Empire, as part of "devolution" (decentralization).
The Journal Itself
All issues are 24 pages and cost threepence. Orage's "Notes of the Week" opens, followed by S. Verdad's "Foreign Affairs" column and other articles on politics. While Orage continues to champion guild socialism in "Notes of the Week," plenty of space is given to the Fabians. In addition to the many articles on the Insurance Act and the labor movement, a number of articles scattered throughout the volume address how various laws harm the poor, e.g. Beatrice Hastings' “The State vs. the Innocent” (10:296). Orage swings, uncharacteristically, with the majority to condemn suffragette tactics and goals early in this volume, even though he had supported suffrage as late as 1909. Consequently, a still lively discussion of suffrage is relegated to the letters. Another indicator of Orage's more conservative attitudes after 1910 is the "Tales for Men Only" series (Nos. 15-17), written under the pseudonym R. H. Congreve, about a group of artist-philosophers who find that women impede their "creation of a common mind."
Although there are no literary supplements and only six art supplements of 1-2 pages each, the arts receive nearly as much coverage as politics. "Present Day Criticism" and Huntly Carter's "Art and Drama" usually open the arts section about halfway through each issue. The writers of these columns are hostile to nearly everything written or painted in England. Many issues also contain other columns on art, e.g. "Criteria in Art" (10:136) by M. B. Oxon (Lewis Wallace), and all contain poetry and fiction by a variety of authors. Walter Sickert's sketches, first seen in volume 9, appear in the middle of 13 issues. Tom Titt's caricatures of political figures and New Age contributors continue to be featured on the back page of each number.
Reproduction of a cubist study by Picasso in the art supplement for 23 November 1911 and John Middleton Murry's article on his art a week later (10:115) spark the heated "Picarterbin" debate in the Letters over whether works like Picasso's can be considered art. Only a handful of Picassos had been exhibited at the much-derided winter 1910-11 Post-Impressionism Exhibit, and few people in London were ready for cubism. Fuel is added to this fire with later reproductions of works by Auguste Herbin, André Dunoyer de Ségonzac, and M. Ben Zies. Carter, a champion of cubism, and M.B. Oxon, a detractor, are the main combatants, but there are plenty of others.
Other topics in the Letters include: freemasonry vs. Jesuitry, the National Insurance Bill, race relations, proportional representation, feminism and suffrage, the coal miner's strike, minimum wages vs. profit margins, the 8hr. day, the influence of Europe in Asia, Bergson, capital punishment, Hamlet, Whistler, and responses to "Present-Day Criticism."
The major changes to the journal in this volume occur in the arts section:
- Ezra Pound begins a series entitled "I Gather the Limbs of Osiris" (Nos. 5-13, 15-17).
- Jacob Tonson's (Arnold Bennett's) "Books and Persons" column ended its run with volume 9--according to the writer of "Present-Day Criticism," because Bennett went to America to produce a play. No single person replaces Tonson in volume 10. A.E.R.'s "Views and Reviews" becomes a regular column toward the end of the volume. Full-page reviews by other writers appear in twelve issues, and new books are covered in single paragraphs in the "Reviews" section (12 of 26 issues).
- Huntly Carter's contribution is greatly increased: he writes "Art and Drama" for 25 of 26 issues and often contributes a separate article on art as well.
The writer of "Notes of the Week" (Orage) views the two main domestic issues of the period, the Insurance Bill and the trade union strikes, with a pessimistic eye. He sees three ways to organize society: capitalism, in which employers control industry privately, syndicalism, in which workers hold exclusive control of production, or socialism, in which workers and employers jointly manage state-owned industries. Beyond any real shortcomings of the Insurance Bill and the coal and railway strike resolutions, the unbridled criticisms of these events and everyone associated with them--especially Labour leader MacDonald--is explained by his belief that anything not leading to (guild) socialism is a waste of time. Throughout the volume Orage also continues to proudly hurl any and every insult at Lloyd George and the Liberal Party, which he considers only less-vile copies of American capitalists.
Until January "NOTW" and many other articles argue steadfastly against the Insurance Bill. The "Fabian Manifesto" (10:4) in the first issue details the reasons for this opposition: the strongest benefits go to those already best protected, contributions are not based on income, there is no guarantee that employers will not pass their costs onto employees or the public, there are no provisions for improving the living conditions that cause illness, and families are protected if their breadwinner falls sick but receive nothing if he dies. Although the Bill's passage is inevitable, no opportunity is lost to exaggerate the opposition to it or, after its passage, to declare that paycheck deductions will start a popular revolt and the Act will never work. 27,000 doctors, "NOTW" says, have pledged not to comply with the Act, and there were riots over the enactment of a similar bill in Luxembourg.
In January "NOTW" turns with delight to the prospect of a coal strike. "NOTW" considers the August settlement of the railway strike a selling-out of the workers by unions and politicians: prices were raised even before salaries were increased yet the railways recorded record profits. "NOTW" attributes this failure to the railway workers' amorphous demand for "recognition." In contrast, the coal workers are demanding a solid 7s/day minimum wage. "NOTW" insists that raising wages in all industries is a top priority but that simultaneous nationalization is necessary to ensure that capitalists cannot make up for increased wages by exporting or mechanizing labor. "NOTW" takes other papers to task for their willful or ignorant conflation of trade unionism, socialism, syndicalism, and the "red peril" and for fearmongering with the latter terms. "The Public" is accused of being more concerned with convenience than justice. Nothing less than civil war is envisioned as a general strike begins to seem likely. When the strike ends with a Minimum Wages Bill, though not at the miners' proposed rates, "NOTW" pronounces the strike another total failure.
The final issue of the volume appears ten days after the sinking of the Titanic. "NOTW" is less interested in the human tragedy than in reading the ship as a microcosm of society's plutocratic organization and proof that government needs to oversee industrial safety standards (10:601).
Other articles on domestic politics:
- In "The Third Home Rule Bill" (10:246) J.C. Squire notes with surprise that the current discussion of Irish Home Rule is getting little attention. Aside from being owed to Ireland, Home Rule, he argues, will be good for government because the Irish and their concerns are disproportionately represented in Parliament. Despite his protest, even The New Age, which supports Home Rule, devotes little space to it.
- In a three-part series on "The Peril of Large Organisations," A. J. Penty argues that all large organizations develop the same evils--notably loss of individuality and the limitations put on art and artists--due to their size, regardless of whether or not they are for profit. Emil Davies responds in "The Success of Large Organisations" (10:517).
- In one of the "Rural Notes" columns (10:8), Avalon proposes a minimum wage for rural laborers so that they might purchase homes.
- A "Manifesto on Fabian Policy" (10:271) issued by the Fabian Reform Committee argues that Fabian support of Liberal MPs contradicts Fabianism's stated goals, which would be better served by supporting Labour candidates. "Poppycock in Parliament" (10:390), on the other hand, asserts that even Labour politicians are anti-labor.
"Foreign Affairs": S. Verdad provides a weekly analysis of the prospects of world war as Europe scrambles to steal the pieces of the crumbling Ottoman Empire. Part of the reason for this break up, Verdad asserts, is that "[d]espotism and a strong empire; parliamentary government and the splitting of the empire in fragments" are the options for "Oriental" countries like Japan, Turkey, and China (10:437). The Young Turks, he argues, are relying too much on Germany, the French Cabinet is confused, and the British Navy is unprepared because the Admiralty is fighting with the War Office. Verdad considers Italy irrelevant, but thinks the English need to know more about Russia. He approves heartily of the Triple Entente with France and Russia: Britain cannot afford to lose the coming war because creative art cannot be made in the face of a national defeat.
Other articles on international politics:
- "China" (10:414) by Colonel W.G. Simpson is the only article in this volume devoted to events there.
- "Triumphant Republicanism" (10:56) by V. de Braganca Cunha is actually about what form the Portuguese Republic, established in 1910, will eventually take.
- C.H. Norman writes an article-letter objecting to The New Age's practice of worrying about other people's business: detailing foreign atrocities and ignoring those going on in Britain or perpetrated by British citizens in the colonies (10:149).
- The author of the series "An Australian View of Imperial and Foreign Affairs" insists that to avoid world war as well as the threat of Muslim and Asian powers, White Europe must reorganize into "Civilization Limited" and provide an adequate number of White men to Australia, Canada, and Africa, lest those colonies be forced to take immigrants from Asia.
In his important new column, "I Gather the Limbs of Osiris" (Nos. 5-13, 15-17), Ezra Pound provides "expositions and translations in illustration of 'the New Method' in scholarship." Pound's first contribution is a partial translation of The Seafarer (10:107) that has often been criticized for being more creation than translation. Pound responds to questions about his translations by offering prose versions and insisting The Seafarer "was as nearly literal. . . as any translation can be" (10:369). Readers do not find out what "the New Method" is until the second installment. It turns out not to be new at all, but rather the method of "all good scholars since the beginning of scholarship" (10:130). Pound wants scholars to bridge the gap between scholars and normal men by highlighting the "Luminous Detail," not just listing facts, so that "accuracy of sentiment" will be communicated to non-specialists. In other columns Pound gives short histories of Provençal poets and explains what he did to translate their poems. Generally, he aims to translate them as exactly as possible, rather than use them as he had in the recently published Canzoni (reviewed by Jack Collings Squire in "Recent Verse" (10:183)). Nonetheless, these translations are, in the words of one biographer, "still old-fashioned and often uncertain." In later installments, he emphasizes the need for technique and tradition (10:297), discusses the composition of souls and the attainment of virtù (10:224), and explores the relation of music to poetry (10:343).
This was a very busy period for Pound, whom T.E. Hulme had introduced to Orage. He traveled frequently between France and England. He was trying to convince Dorothy Shakespear's father to make their engagement official, when Hilda Doolittle, who also believed herself engaged to Pound, moved in across the street. Both H.D. and Pound met Richard Aldington in the winter of 1911-12 and were fascinated by his knowledge of Greek language and culture. Pound was apparently so caught up in things aesthetic that he failed to notice that the other two Imagists, as he named them in the spring, were romantically involved. In 1912 Pound met editors Harold Monro of the Poetry Review and Harriet Monroe of Poetry. His "Credo" appeared in Poetry Review in February. That month he also met Henry James, whom he found "delightful" after his initial intimidation wore off. During this period Pound was partly kept alive by Orage, whose publication of the right-wing, if not yet very political, poet is yet another testament to his open-mindedness as an editor. In 1917 Pound would return to The New Age as music critic William Atheling and art critic B.H. Dias.
The author of "Art and Drama," Huntly Carter is the main art critic of this volume. In covering the theater, he alternates between deriding English productions because they are didactic and/or are adaptations of novels, and imagining a new kind of theater. He declares that "we are entering upon the third great period of dramatic renascence" after the Greek and Elizabethan (10:251). This new period requires intimate theaters because large theaters detract from play and actors. Ibsen is the leading playwright of this renaissance, but even he is perverted into didacticism on the English stage.
In painting, Carter finds that the Old Masters and contemporary London realists are concerned with "unessentials" (10:84). Good painting, by contrast, is "concerned solely with the quintessence of ideality"; among the idealists we find Gauguin, Cézanne and Wyndham Lewis (10:203). He likes cubism, but not its name. He considers the Futurists intelligent and talented, but not geniuses: they too represent excrescences, instead of essences. No reference is made to Marinetti; Boccioni is declared "the biggest futurist"--perhaps because Carter did not read any futurist manifestoes, "preferring instead to see the result of their work" (10:443). Marinetti's lecture of 19 March is anonymously satirized in the Pastiche at the end of the month (10:524).
Other articles on the arts:
- "Present Day Criticism" is mostly criticism of other critics, but is plenty critical of art too. Current novels are loathsome and contemporary poetry is equally bad. In one of the few cases in which specifics are given, Thomas Hardy's Jude the Obscure, Joseph Conrad's Lord Jim, H.G. Well's Ann Veronica, and Bennett's Hilda Lessaways are all derided as characterless strings of fortuitous circumstances resulting from well-intentioned attempts to seek the truth (10:277). There are regular complaints about the column in the letters to which the reviewer responds.
- Among the views expressed in A.E.R.'s "Views and Reviews" is that art will be swallowed up by a "psychology without Psyche" if artists do not do a better job of showing that art cannot be explained by science (10:521). Disapproval of Max Stirner's The Ego and His Own is among the reviews (10:592).
- There is a short article on what unites "Stuart Merill" and other symbolistpoets (10:17).
- T.E. Hulme contributes his "Complete Poetical Works" (10:307) which are imagist, before Imagism has a name.
- Jack Collings Squire has an occasional column reviewing "Recent Verse." He is very specific, provides quotations, and doles out both positive and negative criticism.
- An amateur performance of Mozart's Magic Flute inspires John Playford, in his rare "Music and Musicians" column, to compare Mozart favorably to Wagner (10:184). This unusually positive criticism continues as Playford declares that the English listening public is becoming quite discerning and that there is much good music to hear, even if most of it is still foreign. The Royal College contributes to this problem by not distributing its Patron's Fund to promising young artists (10:64).
T.E. Hulme had been writing on Henri Bergson since 1909 and in 1913 published a collaborative translation of Bergson's Introduction à la métaphysique. Bergson and J.C. Squire wrote the letters that got Hulme readmitted to university in 1912 (he was soon expelled again). Hulme was drawn to Bergson's response to "the nightmare of universal mechanism." Bergson argued that there are "intensive manifolds," aspects of experience that are not part of the exterior world and so are not subject to materialist analysis. Yet, early in this volume Hulme writes, in "Mr. Balfour, Bergson, and Politics" (10:38), that he agrees with Pierre Lasserre's attack on Bergson's Romanticism in La Morale de Nietzsche. The last 3 of Hulme's 5 "Notes on Bergson" (the series began in volume 9) are a personal account of how he is working through this contradiction. Although critics agree that Hulme was not an original thinker, he was influential--in part through the salon of sorts he held Tuesdays at his home in the former Venetian embassy in London. Attendees included Jacob Epstein, Pound, Orage, and many other New Age writers.
Other articles on philosophy:
- Thomas Gratton also writes about "Bergson Lecturing" (10:15) in England in the summer of 1911.
- Orage continues to write anonymous and "Unedited Opinions" on topics including marriage (10:442) and Rousseau's horrible humanitarianism (10:347).
- John Middleton Murry discusses "The Importance of Hegel to Modern Thought" (10:204) and concludes that Bergson is the antithesis of Kant, while Hegel is their synthesis.
- Professor A. Messer of Giessen University compares "Kant and Nietzsche"'s approaches to the problems of God, freedom and immortality (10:419).
- G.K. Chesterton and Oscar Levy, editor of the English translation of the Nietzsche's Complete Works (reviewed by R.M. (10:320)), exchange letters and then articles on Christianity.
- J.M. Kennedy has a series entitled "Eupeptic Politicians" whose point of departure is to define the over-used terms "optimistic" and "pessimistic" as they apply to Nietzsche, Schopenhauer, Christianity, and paganism (10:445).
- The author of "The Coming of Oedipus" (10:199) argues that rather than offering competing and equally irrelevant theories about living matter both Mechanists and Vitalists need to study "the organism as a whole," whether human or single-celled, in order to understand life itself, and says the study of cancer cells hold the key to this understanding.
- "The Englishman Abroad" (10:206) is a lengthy commentary on the behavior of the English on the Continent by Karl Hillebrand, a German. He argues that the English are only really English when at work in England. The few of who live outside England are all guilty of remaining so aloof that they know no more about their host countries after 20 years than after 2 weeks, to their detriment and that of Europe.
- "Co-education in America" (10:175) looks at the trend (?) away from co-education in American universities and argues that though women consistently outperform men at university, they are generally incapable of equaling men's later accomplishments because women's brains peak at 25 while men's brains develop slowly for a long period of time.
Works Cited and Consulted
- Brooks, David. The Age of Upheaval: Edwardian Politics 1899-1914. Manchester: Manchester UP, 1995. New Frontiers in History. Mark Greengrass and John Stevenson, eds.
- Dangerfield, George. The Strange Death of Liberal England. Stanford: Stanford UP, 1997. Orig. pub. 1935.
- Davis, John. A History of Britain, 1885-1939. New York: St. Martin's P, 1999.
- Ensor, R.C.K. England 1870-1914. Oxford: Oxford UP, 1936. The Oxford History of England. G. N. Clark, ed.
- Gibbons, Tom. Rooms in the Darwin Hotel: Studies in English Literary Criticism and Ideas 1880-1920. Nedlands, Western Australia: U Western Australia P, 1973.
- Levenson, Michael H. A Genealogy of Modernism: A Study of English Literary Doctrine 1908-1922. Cambridge: Cambridge UP, 1984.
- Martin, Wallace. The New Age Under Orage: Chapters in English Cultural History. Manchester and New York: Manchester UP and Barnes and Noble, 1967.
- Roberts, Michael. T.E. Hulme. Manchester: Carcanet New Press, 1982. Orig. pub. 1938.
- Wilhelm, J.J. Ezra Pound in London and Paris: 1908-25. University Park and London: Pennsylvania State UP: 1990.
- "Ancient Persian Timeline" excerpted from Edward S. Ellis and Charles F. Horne's The Story of the Greatest Nations and the World's Famous Events (1913). Public Bookshelf. 31 Oct. 2001.
- "History: The First Republic" Chinatown Online. 31 Oct. 2001. | <urn:uuid:940579c8-cfcb-49b9-aff4-e7575cac38de> | CC-MAIN-2017-17 | http://modjourn.org/render.php?id=mjp.2005.00.011&view=mjp_object | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122996.52/warc/CC-MAIN-20170423031202-00192-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.956696 | 6,734 | 2.625 | 3 |
Friday, September 4, 2009
Friday, January 23, 2009
Wednesday, October 22, 2008
The Imperial Oil Foundation is asking young writers to send in a recently-written story or poem that they think could become a classic and be read for the next 100 years.
One winner from each grade will receive a $200 gift certificate to the bookstore of his or her choice.
Entries must be postmarked by December 15, 2008. For more information, visit www.bookweek.ca
Thursday, September 25, 2008
Are you looking for something to make reading more exciting for your students? Check Out! http://www.nba.com/raptors/community/read_achieve.html
Register your class or school for Overtime Readers Club and/or Reading Time Out by September 26th and you will automatically be invited to attend the first ever Read To Achieve Open Practice at Air Canada Centre on October 16th.
The Overtime Readers Club (October 2008 – March 2009, Grade 1-12),
Reading Time Out (November 2008 - March 2009, Grade 1-8)
Black History Month Heroes Challenge (January 19-February 13, 2009, Grade 1-12).
Friday, September 5, 2008
Thursday, August 28, 2008
1. BE A WRITER: write often and for many purposes, not all written work has to be published.
2. BUILD A COMMUNITY: share writing experiences and have everyone join together at specific times to write.
3. READ ALOUD: explore the writing process.
4. STORYTELLING: have students tell stories and write them down.
5. READ LIKE WRITERS: note the ways an author keeps the reader in mind as they write.
6. MENTOR TEXTS: read these texts several times and focus on the author's craft.
7. GUIDE YOUNG WRITERS: encourage different topics based on experiences and interests.
8. PUT PEN TO PAPER: write, write, write.
9. MODEL: show and guide with clear examples.
10. USE WRITING TO PRESERVE MEMORIES: help children find their voice and show how they feel or think.
11. ASSESSMENT: look carefully at your writers and their writing to guide instruction.
12. HIGH EXPECTATIONS: challenge students to write great literature.
Tuesday, June 24, 2008
Date: Mon. Nov. 3, 2008
Time: 10:30 am & 6:30pm
Location: Sanderson Centre - Brantford
For Tickets Call 1-800-265-0710 or 519-758-8090
Monday, June 16, 2008
Register on-line at www.scholarschoice.ca
WEDNESDAY, JULY 23
8:30 am to 4:00 pm
Mississauga Convention Centre
75 Derry Road West,
THURSDAY, JULY 24
8:30 am to 4:00 pm
Best Western Lamplighter Inn
591 Wellington Rd. S,
$75 per 1/2 day workshop, $25 per 50 minute workshop
Thursday, June 12, 2008
Tuesday, June 3, 2008
Fact: Gifted students are those "who have outstanding abilities, are capable of high achievement, " but who may or may not achieve it.
Differentiated instruction is an excellent way to engage all students (including our high achievers).
Ideas for differentiating:
1. Style: Find out what interests your students and let them explore.
2. Grouping: Use both ability and interest grouping. Keep in mind that groupings may change. Often we have three different groups of students in the classroom: one following the curriculum, one that is beyond the curriculum and one group that needs more help.
3. Allow for open-ended questioning and assignments.
4. Engage students in authentic problem solving.
6. Provide opportunities for choice.
7. Find the right books that are appropriately challenging and interesting to your students.
Friday, May 30, 2008
LTC/HCC Speech Services and the Child Resource Centre, invite you and your children to their
'Reading Adventure' Program
This program helps parents and children learn and practice early literacy and language skills through storytelling and crafts and promotes literacy at home.
Each week will feature story time, a literacy related activity, a craft for two specific age groups, and a nutritious snack for the children.
The program runs every Wednesday morning July 2, 9, 16, 23, 30, August 6, 13, 20
10:00 a.m to 12:00 p.m at the Child Resource Centre located at 18 Stoneridge Circle.
So come and join them for this fun eight week program filled with fun, learning and ADVENTURE!!!
No pre-registration is needed.
Monday, May 26, 2008
for enjoyment - The teacher makes time for independent reading and makes available texts of all types.
for vicarious experiences - The teacher reads aloud and discusses a broad selection of literary and informational texts.
to learn more about themselves and others - The teacher shares critical responses and gives the students opportunities to respond to appropriate content for the age, gender and cultural diversity of the classroom.
to gain information - The teacher demonstrates strategies to deal with informational texts.
to understand issues - The teacher models how to question texts and think critically while exploring social issues.
for aesthetic appreciation - The teacher gives the students opportunities to respond to texts through the arts (choral reading, readers' theatre. literature circles, visual arts, book clubs).
Tuesday, May 6, 2008
The Sun, The Moon and Me
Write a poem about nature, the environment and your role/relationship within each.
1. All students in grades 4-6 are invited to participate through their school.
2. Entries must be original poems about nature, the environment, and your role/relationship within each. Poems must be written in class or at home. No copied poems.
3. Entries must be submitted on 8 1/2 x 11 inch paper.
4. Entries may be illustrated. Illustrations/drawings should reflect the meaning of the poem.
5. Authors must write their name, grade and school on their submission.
6. Attach a signed Parental Release Form. No poem will be judged without a signed parental release form.
7. DEADLINE: All poems must be received no later than May 23, 2008 at 4:00 p.m.
8. Poems will not be returned.
9. You may send photocopies.
10. Make sure students and parents have a copy before you send poems to us.
11. Fill out one entry form for your school and include it with student entries. You will find it in this contest packet (titled School Entry Form).
Send entries to:
Chiefswood Museum Community Awareness Week Poetry Contest
P.O. Box 640
Or to deliver:
Chiefswood Curators Cottage
1037 Highway 54 at Chiefswood Road
(Small green and white building located within Chiefswood Park, directly across from Museum)
Monday, April 28, 2008
There are lots of things I can do.
I can do them all, please, by myself;
I don't need help from you.
I can look at the pictures to get a hint.
Or think what the story's about.
I can "get my mouth ready" to say the first letter.
A kind of "sounding out".
I can chop up the words into smaller parts,
Like on or ing or ly,
Or find smaller words in compound words
Like raincoat and bumblebee.
I can think of a word that makes sense in that place,
Guess or say "blank" and read on
Until the sentence has reached its end,
Then go back and try these on:
"Does it make sense?"
"Can we say it that way?"
"Does it look right to me?"
Chances are the right word will pop out like the sun
In my own mind, can't you see?
If I've thought of and tried out most of these things
And I still do not know what to do,
Then I may turn around and ask
For some help to get me through.
- Jill Marie Warner
Thursday, April 24, 2008
1st Place: Kiana (I.L. T), Winning Word - oil
2nd Place: Clarisa (ECG), Winning Word - straight
3rd Place: Dallas (OMSK), Winning Word - festoon
1st Place: Dylan (OMSK), Winning Word - enumerate
2nd Place: Kristina (CGW), Winning Word - duplicity
3rd Place: Cassandra (OMSK), Winning Word - gaiety
1st Place: Ian (RH), Winning Word - quiche
2nd Place: Jacob (JCH), Winning Word - augment
3rd Place: Kylie (JCH), Winning Word - Gauss
Well Done! We are proud of your accomplishments!
Tuesday, April 1, 2008
1. Make Connections: Create a bridge from the new to the known, connecting the text to yourself, what you know about the world, and what you have read in other texts.
2. Question: Ask questions as you read to enhance understanding, find answers, solve problems, and find specific information.
3. Make Inferences: Connect ideas or fill in information to make sense of unstated ideas.
4. Visualize: Generate mental images to stimulate thinking and heighten engagement.
5. Summarize: Synthesize and organize key information to identify main points and major themes, distinguish important from unimportant information, and enhance meaning.
6. Monitor/Regulate: Pay attention to meaning, clarify or correct comprehension difficulties, or promote a problem-solving stance during reading.
7. Evaluate: Make judgements about the text to form ideas and opinions, or determine the author's purpose.
(Marjorie Y. Lipson - INSTRUCTOR)
Tuesday, March 18, 2008
G.R.E.A.T. is hosting a book release - meet the author night for Six Nations newest young author. Eleven year old Chris, a student at Six Nations, has finished writing the first four books in his six-part series on adventures in the land of Grillbowa. The first two books will be available for purchase. The next two books are still in editing and will be available soon.
Come meet Chris and wish him well on his journey into a bright future.
Thursday, March 6, 2008
Friday, February 29, 2008
1. GOLDEN WORDS:
* Use Highlighters
* Students exchange their written work.
* Look for examples of effective language.
* Highlight the "golden" words or phases.
* Have students share their favourite golden word/phase.
* Discuss why they choose the word/phase.
* Divide the students into groups of three.
* Each student is given the opportunity to read their written work, while the other group members stop the reader to ask questions, and make positive or constructive comments.
* The student reading is also encouraged to respond with comments or questions.
* The student may take notes or make revisions.
*** This is an informal chat about ideas, characters, etc. and is NOT a formal critique.
Have fun writing and providing your students with meaningful ways to give feedback to their peers!
INSTRUCTOR Jan./Feb. 2008
Tuesday, February 5, 2008
1. PREDICTION POP (Examining titles and illustrations)
- The teacher reads the title on the cover of a picture book.
- Students predict what the story might be about.
- The responses are written in balloon shapes.
- Review all the predictions.
- Read the story and pause to confirm, modify or reject predictions.
- If a prediction is wrong, "pop" the ballon (erase).
- After reading, the children decide which of the predictions on the board are correct.
2. THUMB-THROUGH PREVIEW (Scanning books) * Similar to a picture walk.
- Before reading, have the class walk through the book (pictures, clues, words).
- Make predictions about the story.
- Read the story.
- The children will make a "thumbs up" sign when you reach a correct prediction.
3. CURIOSITY CHART (text features)
- Examine a nonfiction text (subtitles, photographs, captions, charts, maps).
- Record the text features that are pointed out on chart paper.
- "Think-aloud" - make predictions about how the text features might provide information.
- Read the text.
- Check off each feature as you come to it.
(Mackie Rhodes INSTRUCTOR Jan./Feb. 2008)
Friday, February 1, 2008
Where: Green of Renton
969 Concession 14
Who: Early Learning Providers, Teachers and anyone interested in Early Literacy
Why: * Explore gaps in literacy.
* Help develop a strategic plan.
* Hear the latest research.
How: Register by Feb. 29th, 2008
Call Karla Neil at 519-429-2875 or 1-866-463-2759
Fee: $10 (Lunch, snack and a drink included)
Here are the Bebop titles levelled by Grade, Fountas & Pinnell's alphabet system and DRA numbers:
* Laundry Day by Karen Hjemboe (K, C, 3)
* At the Park by Judy Nayer (Gr. 1, D, 4)
* My Family by Karen Hjemboe (Gr. 1, D, 4)
* My Horse by Karen Hjemboe (Gr. 1, D, 4)
* I make Clay Pots by Leslie Johnson (Gr. 1, D, 4)
* Fancy Dance by Leslie Johnson (Gr.1, G, 12)
* Living in an Igloo by Jan Reynolds (Gr. 1, G, 12)
* I'm Heading to the Rodeo by Emmi S. herman (Gr.1, I, 16)
Check out www.goodminds.com
Thursday, January 10, 2008
Thursday, December 6, 2007
Check it out! http://www.oct.ca/additional_qualifications/default.aspx?lang=en-CA
Monday, December 3, 2007
TIP #1: STAY ON TOPIC ( a very narrow topic)
Activity: Graphic Organizer (model for the class)
* write a topic for the centre bubble
* write subtopics in the branching bubbles
* select one of the branching bubbles and start a new web
* keep going until you have a narrow topic
TIP#2: CHOOSE THE RIGHT DETAILS (meat and bones)
Activity: An outline of a person (copy for each student)
* convince the class why their favourite celebrity, athlete, or role model is the best at what they do
* fill in the outline with supporting details
* share their work with the class
TIP#3: SKIP THE EXTRAS
Activity: Cut up sentence strips in an envelop
* use news stories, but add some additional sentences that are related, but don't belong
* students can work in groups to identify the extraneous information
TIP#4: USE SEVERAL SOURCES
Activity: Interview family members
* talk to family members about an important event
* the children record the different responses
* the student then writes a single paragraph that incorporates their various interviews
* discuss with the class how people remember things differently, but how combining the memories results in a richer piece of writing
TIP#5: OFFER A CONCLUSION
Activity: Teacher prepared yes-or-no questions (Do you think the Pioneer children were happy with their homemade toys?)
* students are required to make inferences about the topics they are studying
* work in small discussion groups and share responses with the class (Why did groups answer the way they did?)
* talk about how we go from facts to conclusions
-Hannah Trierweiler (Instructor Nov./Dec. 2007)
Amazing prizes to be won!
Contest Theme: "In The News"
Contest Deadline: February, 2008.
For complete contest rules and regulations, please visit: www.worldlit.ca or check out the fax I sent to your school on Dec.3, 2007.
Wednesday, November 28, 2007
Student A reads 20 minutes five nights of every week;
Students B reads only 4 minutes a night... or not at all!
Step 1: Multiply minutes a night x 5 times each week.
Student A reads 20 min. x 5 times a week = 100 mins./week
Student B reads 4 min. x 5 times a week = 20 minutes
Step 2: Multiply minutes a week x 4 weeks each month.
Student A reads 400 minutes a month.
Student B reads 80 minutes a month.
Step 3: Multiply minutes a month x 9 months/school year
Student A reads 3600 min. in a school year.
Student B reads 720 min. in a school year.
Student A practices reading the equivalent of ten whole school days a year. Student B gets the equivalent of only two school days of reading practice.
By the end of the 6th grade if Student A and Student B maintain these same reading habits, Student A will have read the equivalent of 60 whole school days, and Student B will have read the equivalent of only 12 whole school days.
One would expect the gap of information retained will have widened considerably and so, undoubtedly, will school performance. How do you think Student B will feel about him/herself as a student?
Some questions to ponder:
Which student would you expect to read better?
Which student would you expect to know more?
Which student would you expect to write better?
Which student would you expect to have a better vocabulary?
Which student would you expect to be more successful in school... and in life?
(shared on mailring by Emmy Ellis; source unknown)
Friday, November 16, 2007
F is for finding a book that looks interesting.
I is for investigation to see whether the book is too hard or too easy.
T is for trying the book or trading it in for another.
I choose a book.
Purpose - Why do I want to read it?
Interest - Does it interest me?
Comprehend - Am I understanding what I am reading?
Know - I know most of the words.
Friday, October 26, 2007
Students should create a poem or prose no longer than 1 page.
This may be hand written or typed.
Students are encouraged to decorate their page with art or graphics.
* Have a Remembrance Day or Veterans theme
* 1 page in length (maximum)
* 1/2 inch margins (minimum)
* Author name, school and grade should be lightly written on the reverse in pencil.
All Finalists will be posted at the Six Nations Public Library
The top winners will receive a Book Gift Certificate
All entries must be ready for pick up in the afternoon of Mon., Nov. 5th or you can drop them off at the library.
Thursday, October 25, 2007
Friday, October 12, 2007
Visit http://www.bookweek.ca/ to find out more information about how you can celebrate the magic of books in your school or library. Your class or school can celebrate with a kit for $14.95. Download the form at www.bookweek.ca/bookweekkit.htm Also, the Imperial Oil Foundation is having a writing contest for grades 2-6. More information and contest details can be found at www.bookweek.ca/writingcontest.html . Writing contest entries must be received no later than December 15, 2007.
In addition, I did apply for an author visit for the schools that showed an interest. Thank you for your continued support, assistance and flexibility during this process.
Special Thanks to Vanessa at Jamieson
Tuesday, October 2, 2007
Special Thanks to Dar at J.C. Hill
Friday, September 28, 2007
The Toronto Raptors offer two excellent programs for your school to get involved in this year. TeamUp for Literacy (Oct.-Mar.) for grades 1-12 encourages schools to start student literacy initiatives. Reading Time-Out (Oct.-Mar.) for grades 1-8 is a great idea for a class to begin a read-a-thon. Sign up by Oct. 5th www.nba.com/raptors/community/read_achieve.html
Special Thanks to Judy at ECG
Tuesday, September 25, 2007
Date: Mon., Oct 22, 2007.
Event: Mem Fox - Hosted by the Family Literacy Committee of Brant - "Which Reading Road Shall We Travel"
Time: 7:30 p.m.
Tickets: $8.00 - general seating
Sanderson Centre Box Office 519-758-8090 or visit www.sandersoncentre.ca
Tuesday, June 26, 2007
What Can We Do?
- Send home summer reading lists, books and tips for parents.
- Encourage library visits and programs.
- READ, READ, READ!
Recommended web sites with fun reading activities:
- Kidsreads is the best place on the web for kids to find info about their favourite books, series and authors. They also have trivia games, word scrambles and awesome contests.
- www.gigglepoety.com - This site is full of funny poems kids will love to read. Others links include places where you can write your own poems and read ones written by other kids.
- Storybooks Online - Choose from a selection of a dozen stories, young, middle-aged and older children might like to read right from your computer screen.
- KidsDomain: Summer Fun
- Stories Online - Follow the news written for children.
"The single summer activity that is most strongly and consistently related to summer learning is reading". (Anne McGill Franzen & Richard Allington) For example, to maintain their reading skills a Gr. 2 student should read 4 chapter books during the summer. | <urn:uuid:19b54181-db36-481f-b222-69d68cca7e85> | CC-MAIN-2017-17 | http://snliteracy.blogspot.com/ | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118963.4/warc/CC-MAIN-20170423031158-00423-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.9206 | 4,735 | 2.515625 | 3 |
Variscite and Associated Phosphates from Fairfield, Utah
Richard W. Thomssen
Variscite from the Clay Canyon deposit near Fairfield, Utah was first identified in 1893. Its compact, microcrystalline nature and pleasing color in various shades of green led to early recognition of its use as a semi-precious gemstone. Mining and marketing of the variscite as "chlor-utahlite" on a small scale for the jewelry trade continued up to about the time of World War I. Associated with the variscite in nodular masses was a compact, banded yellow material that soon found a market as "sabalite," though it was not nearly as popular as variscite. Some of the nodules are sufficiently altered that open pockets have developed. Occasionally crystals of a blue-green mineral are to be found, and these turned out to be a new mineral, which was named "wardite" (Davison, 1896). Strangely, thirty-four years were to pass before another mineralogical paper concerning the unusual phosphate minerals of this deposit was to appear. Eight new minerals were then described and, ten years later, three additional new minerals were named. Subsequently, re-examination of these eleven new minerals led to five being discredited, and an additional one was found to have been already described under another name. The remaining five have stood the test of re-examination and appear to be safely established. This is not to say that further work will not add to the sum of knowledge about these five and, perhaps, disclose the presence of additional new minerals.
Variscite from the Clay Canyon deposit was first brought to the attention of the scientific world in December 1893 by Mr. F. T. Millis of Lehi, Utah, who sent a specimen to Mr. Merrill, Curator of Geology in the U.S. National Museum (Smithsonian Institution). Mr. Millis related that the material occurred in the form of "nuggets" in a quartz vein near Lewiston (now Mercur), Utah, some twenty miles west of Lehi (Figure 1). The specimen was subjected to blowpipe examination, a useful technique which has unfortunately fallen into disuse, and was found to have the characteristics of "peganite". A chemical analysis showed that its composition was the same as variscite, and "peganite" has subsequently been considered to be a poorly analyzed variscite (Packard, 1894). At about the same time or slightly later, Mr. Don McGuire of Ogden, Utah discovered compact nodular variscite in Cedar Valley, near old Camp Floyd (the first name for the Mercur mining district), Utah (Kunz, 1894). This is certainly the same locality as that from which Mr. Millis obtained his material. Unfortunately, there is no record of the relationship of Millis and McGuire, and we can only surmise who actually found the locality first. However, Don McGuire acquired the deposit and produced variscite for the jewelry trade from it for many years (Sterrett, 1908, 1914). Kunz suggested the name "utahlite" for the material, and this name was shortly amended to "chlor-utahlite" (Sterrett, 1909), in apparent reference to the material's green color and, possibly, to more easily promote the material to the jewelry trade. The specimen examined by Merrill and Packard at the National Museum was described as a large nodular mass, measuring nearly seven inches in its longest dimension. Green variscite sections were separated from each other by banded envelopes of a yellow mineral, crandallite, between which and the green is a powdery white coating (Packard, 1894).
Subsequently, John M. Davison wrote that a considerable quantity of variscite had been received by Ward's Natural Science Establishment of Rochester, New York. In the variscite nodules he found cavities left by the decomposition of the variscite. These were encrusted with light green to bluish green crystals of a new phosphate mineral, which he named wardite after Professor Henry A. Ward (Davison, 1896). Mineralogical examination of the variscite nodules and their various alteration minerals then ceased for 27 years until, in 1923, Esper S. Larsen of Harvard University and Earl V. Shannon of the U.S. National Museum undertook an intensive study of the variscite nodules, utilizing material in the collections of their respective institutions and a large collection loaned to them by George L. English at Ward's Natural Science Establishment. In the summer of 1927, Larsen visited the locality and collected a few specimens from the dump.
Larsen found that the deposit had been developed by a short tunnel and drift. The results of Larsen and Shannon's work disclosed the presence of eight new minerals: dehrnite, deltaite, dennisonite (davisonite), englishite, gordonite, lehiite, lewistonite and millisite (Larsen and Shannon, 1930a). Only three (englishite, gordonite and millisite) of the original eight minerals have stood the test of further examination by mineralogists in the intervening 60 years and have not been discredited. In the fall of 1936, the Clay Canyon locality was visited by Arthur Montgomery and Edwin Over, fresh from their collecting trip to the classic epidote localities at the Jumbo mine and Green Monster Mountain on Prince of Wales Island, Alaska, and plans were made to mine the deposit for both variscite and the rarer phosphate species. In the summer of 1937, and again in 1939, they managed to mine thousands of pounds of nodules, which were distributed among major museums, especially Harvard and the Smithsonian Institution, collectors and mineral dealers. Far better crystals of several of the phosphates were found, including gordonite and wardite, and three unknowns. Frederick H. Pough, Curator of Minerals, American Museum of Natural History, acquired some of the gordonite crystals and characterized the crystal forms present, noting, as did Larsen and Shannon, the resemblance to the triclinic species paravauxite described by Samuel Gordon in 1923 (Pough, 1937b). Pough also found wardite crystals suitable for measurement on the optical goniometer and characterized the crystal forms for this species (Pough, 1937a). Duncan McConnell of the University of Minnesota demonstrated that both dehrnite and lewistonite were members of the apatite group. He suggested that dehrnite was a sodium member and that lewistonite was a potassium member on the basis of the analyses given in Larsen and Shannon's original 1930 paper (McConnell, 1938). 
In 1940, in one of a series of papers reporting on research done for his Ph.D. dissertation on the Clay Canyon deposit, Esper S. Larsen, III, the son of the Esper S. Larsen noted above, described two new phosphate minerals, overite and montgomeryite (Larsen, 1940a). Together with Arthur Montgomery, Larsen described sterrettite, unknowingly adding to a tale of error and confusion about which more will be said below (Larsen, 1940b). In 1942, Esper S. Larsen, III, published a general paper, which appeared in three parts, on the mineralogy of the variscite nodules (Larsen, 1942a, 1942b and 1942c). He discussed the characteristics of some of the phosphate minerals and went into considerable detail about their sequence of formation from the precursor, variscite. Commencing in 1960, a series of papers by several investigators demonstrated that five of the species named and described by Larsen and Shannon were actually known species already described and well entrenched in the mineralogical literature. It was concluded that deltaite is a mixture of crandallite and hydroxylapatite (Elberty and Greenberg, 1960). Alice M. Blount of the University of Wisconsin studied the crystal structure of crandallite and concluded that "deltaite" is essentially identical, corroborating the conclusion of Elberty and Greenberg (Blount, 1974). Pete J. Dunn of the Department of Mineral Sciences at the Smithsonian Institution examined dehrnite and lewistonite in detail. He concluded that both minerals were carbonate-fluorapatite with no sodium or potassium, respectively. The I.M.A. Commission on New Minerals and Mineral Names approved the discreditations (Dunn, 1978). Eight years later, Pete Dunn and Carl A. Francis of the Harvard Mineralogical Museum discredited both davisonite and lehiite. Davisonite was found to be a mixture of apatite and crandallite, and lehiite is identical to crandallite. The I.M.A. 
Commission on New Minerals and Mineral Names has approved the discreditations (Dunn and Francis, 1986).
A brief review of some of the characteristics of each mineral will be given in the approximate order in which they are believed to have formed in the deposit.
Variscite This mineral, a hydrous aluminum phosphate, occurs in dense microcrystalline nodular masses; no crystals have been found at this location. The beautiful green color has been attributed to small quantities of vanadium (0.53%) and chromium (0.069%) substituting for phosphorus (Foster and Schaller, 1966). Significant amounts of scandium, 0.001-0.1%, have been found (Frondel, Ito and Montgomery, 1968). All other phosphates in the deposit are believed to have formed at the expense of variscite through the action of hydrothermal solutions.
Crandallite The first mineral to form from variscite through the addition of calcium, this yellow to light olive green species occurs in a variety of massive and crystal habits. The most abundant forms of this mineral cover the entire spectrum from massive, cherty material through yellowish and pinkish spherulitic cleavages to white, chalky crusts. When crystallized, this mineral varies from feathery clusters of fibrous needles through more substantial, but tiny, prismatic crystals to distinct flattened rhombs with an equally developed base. The Clay Canyon material was first called pseudowavellite; however, the name crandallite had priority (Palache, Berman and Frondel, 1957). This mineral in its various modes has been shown to contain significant amounts of vanadium, 0.37%, and chromium, 0.67% (Foster and Schaller, 1966); strontium, >1.0% (Foster and Schaller, 1966); and scandium, 0.01-0.80% (Frondel, Ito and Montgomery, 1968). The first two elements certainly are responsible for the color.
Goyazite The existence of this mineral was obscured for many years by its resemblance to and close association with crandallite. Its strontium content was the first clue to its existence in the deposit (Frondel, Ito and Montgomery, 1968; Blount, 1974). Although tiny crystals may be present, they are impossible to distinguish from crandallite without chemical or optical tests.
Wardite The blue green to bluish grey component of the "eyes" or spherules and veining within variscite, this species also occurs as blue green to yellow crystals lining cavities in the more altered nodules (Davison, 1896; Montgomery, 1970a). (photograph and single crystal drawing) The blue green variety owes its color, no doubt, to minor amounts of vanadium and/or chromium substituting for phosphorus, as in the case of variscite. The solutions altering the variscite have now become enriched in sodium.
Concentric accretion of wardite and millisite.
Millisite The white to clear component, along with wardite, of the spherules and veining noted above. No isolated crystals of this species have been found; it is similar in composition to wardite but contains calcium.
Gordonite This mineral occurs within open cavities, generally near altering variscite, as clusters of brilliant prismatic crystals. Crystals up to 7 mm have been reported, but they usually are in the millimeter range (Montgomery, 1970b). (photograph and single crystal drawing) Gordonite is usually colorless, but can be faintly yellow or a pleasing shade of pale violet. Here again we are possibly seeing the effects of one of the chromophores, vanadium and/or chromium. Gordonite is the first species in the deposition sequence to contain magnesium and may be forming at the expense of crandallite.
Montgomeryite The bright blue green bladed crystals of this mineral are the most distinctive of all the well-crystallized phosphates from the Clay Canyon deposit. Crystals are in the millimeter range and typically occur in cavities implanted upon crandallite and near variscite (Larsen, 1940). (photograph and single crystal drawing)
Overite The rarest of the phosphate minerals, clear pale yellow clusters of tiny orthorhombic crystals of this mineral are most distinctive. (photograph and single crystal drawing) As in the case of montgomeryite, this species occurs in cavities implanted on crandallite and near variscite. These two species, like gordonite, contain magnesium and probably formed at the expense of crandallite (Larsen, 1940).
Englishite Similar to gordonite in its position within cavities close to variscite, this mineral can be readily identified by its grayish to colorless, bladed habit and, where broken, its prominent cleavage. Crystal aggregates range in size up to about 2 mm. (photograph) This is the only phosphate in which potassium is essential, along with sodium and the ever-present calcium (Dunn, Rouse and Nelen, 1984; Moore, 1976).
Kolbeckite This rare species was first described from the Clay Canyon deposit under the name sterrettite, as an aluminum phosphate (Larsen and Montgomery, 1940). The identity of "eggonite" from Altenberg, Belgium with sterrettite was proposed while both were still considered aluminum phosphates (Bannister, 1941). Then, in 1959, it was discovered that both sterrettite and kolbeckite were, in fact, scandium phosphates (Mrose and Wappner, 1959). In 1980, the I.M.A. Commission on New Minerals and Mineral Names, while accepting that all three minerals were identical, rejected the name sterrettite and was almost equally divided over the names kolbeckite and eggonite (Hey, Milton and Dwornik, 1982). Kolbeckite currently is accepted as the valid name for this hydrous scandium phosphate (Nickel and Nichols, 1991; Fleischer and Mandarino, 1991). Clear crystals of kolbeckite are generally tiny, measuring < 0.5 mm, although a few giants of 8 mm have been found. They are always on crandallite, which can be well crystallized, and are frequently associated with yellow wardite. (photograph and single crystal drawing)
Kolbeckite with Crandallite
Carbonate-fluorapatite Among the last minerals to form, a large variety of hexagonal crystals of this species occurs in cavities, generally associated with crandallite and, occasionally, with other species (Dunn, 1980). Because crystals of similar habit recur from nodule to nodule, a specific difference in composition may be responsible; perhaps it lies in differences in the relative amounts of carbonate and fluorine present. Further investigation is necessary to illuminate this matter. Still enriched in calcium and phosphate, the solutions precipitating carbonate-fluorapatite no longer contain aluminum. (photographs)
Additional species Alunite, calcite and quartz, together with more or less argillic limestone, comprise the matrix for the phosphate nodules. Alunite is cream to white in color and moderately to coarsely crystalline. It is fairly common in the brecciated, unweathered portions of the deposit. Quartz is the dominant component in the dark-colored cherty material that is so prevalent. Limonite pseudomorphs after pyrite occur on crandallite in highly weathered nodules. This is the only evidence of the presence of sulfides in the Clay Canyon deposit; although there is locally abundant limonite staining of the altered portions of the deposit, it cannot definitely be attributed to the weathering of pyrite.
Bannister, F. A. (1941) The identity of "eggonite" with sterrettite, Mineralogical Magazine, Volume 26, pages 131-133.
Blount, Alice M. (1974) The crystal structure of crandallite, American Mineralogist, Volume 59, pages 41-47.
Davison, John M. (1896) Wardite: a new hydrous basic phosphate of alumina, American Journal of Science, Fourth Series, Volume 2, pages 154-155.
Dunn, Pete J. (1978) Dehrnite and lewistonite: discredited, Mineralogical Magazine, Volume 42, pages 282-284.
Dunn, Pete J. (1980) Carbonate-fluorapatite from near Fairfield, Utah, Mineralogical Record, Volume 11, pages 33-34.
Dunn, Pete J. and Rouse, Roland C. and Nelen, Joseph A. (1984) Englishite: new chemical data and a second occurrence, from the Tip Top Pegmatite, Custer, South Dakota, Canadian Mineralogist, Volume 22, pages 469-470.
Dunn, Pete J. and Francis, Carl A. (1986) Davisonite and lehiite discredited, American Mineralogist, Volume 71, pages 1515-1516.
Elberty, W. T. and Greenberg, S. S. (1960) Deltaite is crandallite plus hydroxylapatite, Geological Society of America Bulletin, Volume 71, page 1857 (abstract).
Fleischer, Michael and Mandarino, Joseph (1991) Glossary of Mineral Species, Seventh Edition, Mineralogical Record, Tucson, Arizona.
Foster, Margaret D. and Schaller, Waldemar T. (1966) Cause of colors in wavellite from Dug Hill, Arkansas, American Mineralogist, Volume 51, pages 422 - 429.
Frondel, Clifford and Ito, Jun and Montgomery, Arthur (1968) Scandium content of some aluminum phosphates, American Mineralogist, Volume 53, pages 1223-1231.
Gilluly, James (1932) Geology and ore deposits of the Stockton and Fairfield Quadrangles, Utah, Professional Paper 173, U.S. Geological Survey, Washington, D.C.; 171 pages.
Hamilton, Howard V. (1959) Variscite and associated minerals of Clay Canyon, Utah, Mineralogical Society of Utah Bulletin, Volume 9, Number 1, pages 13-17.
Hey, Max H. and Milton, Charles and Dwornik, E.J. (1982) Eggonite (Kolbeckite, Sterrettite), ScPO4·2H2O, Mineralogical Magazine, Volume 45, pages 493-497.
Jewell, Paul W. and Parry, W.T. (1987) Geology and hydrothermal alteration of the Mercur Gold Deposit, Utah, Economic Geology, Volume 82, pages 1958-1966.
Kunz, George F. (1894) Utahlite, U.S. Geological Survey 16th Annual Report, Part IV, page 602.
Larsen, Esper S. and Shannon, Earl V. (1930a) The minerals of the phosphate nodules from near Fairfield, Utah, American Mineralogist, Volume 15, pages 307-337.
Larsen, Esper S. and Shannon, Earl V. (1930b) Two Phosphates from Dehrn; Dehrnite and Crandallite, American Mineralogist, Volume 15, pages 303-306.
Larsen, Esper S., III (1940) Overite and Montgomeryite: Two new minerals from Fairfield, Utah, American Mineralogist, Volume 25, pages 315-326.
Larsen, Esper S., III (1942a) The mineralogy and paragenesis of the variscite nodules from near Fairfield, Utah, part 1, American Mineralogist, Volume 27, pages 281-300.
Larsen, Esper S., III (1942b) The mineralogy and paragenesis of the variscite nodules from near Fairfield, Utah, part 2, American Mineralogist, Volume 27, pages 350-372.
Larsen, Esper S., III (1942c) The mineralogy and paragenesis of the variscite nodules from near Fairfield, Utah, part 3, American Mineralogist, Volume 27, pages 441-451.
Larsen, Esper S., and Montgomery, Arthur (1940) Sterrettite, a new mineral from Fairfield, Utah, American Mineralogist, Volume 25, pages 513-518.
McConnell, Duncan (1938) A structural investigation of the isomorphism of the apatite group, American Mineralogist, Volume 23, pages 1-19.
Modreski, Peter J. (1976) Little Green Monster Variscite mine, Mineralogical Record, Volume 7, pages 269-270.
Montgomery, Arthur (1970a) The phosphate minerals of Fairfield, Utah, Rocks and Minerals, Volume 45, Number 11, pages 667-674.
Montgomery, Arthur, (1970b) The phosphate minerals of Fairfield, Utah, part 2, Rocks and Minerals, Volume 45, Number 12, pages 739-745.
Montgomery, Arthur, (1971a) The phosphate minerals of Fairfield, Utah, part 3, Rocks and Minerals, Volume 46, Number 1, pages 3-9.
Montgomery, Arthur, (1971b) The phosphate minerals of Fairfield, Utah, part 4, Rocks and Minerals, Volume 46, Number 2, pages 75-80.
Moore, Paul B. (1976) Derivative structures based on the alunite octahedral sheet: mitridatite and englishite, Mineralogical Magazine, Volume 40, pages 863-866.
Mrose, Mary E. and Wapner, Blanca (1959) New data on the hydrated scandium phosphate minerals: sterrettite, "eggonite", and kolbeckite, Geological Society of America Bulletin, Volume 70, pages 1648-1649 (abstract).
Nickel, Ernest H. and Monte C. Nichols (1991) Minerals reference manual, Van Nostrand Reinhold, New York, 250 pages.
Packard, R.L. (1894) Variscite from Utah, American Journal of Science, Third Series, Volume 47, pages 297-298.
Palache, Charles and Harry Berman and Clifford Frondel (1951) The system of mineralogy, Seventh Edition, Volume II, New York.
Pough, Frederick H. (1937a) The morphology of wardite, American Museum Novitates, Number 932, 5 pages.
Pough, Frederick H. (1937b) The morphology of gordonite, American Mineralogist, Volume 22, pages 625-629.
Sinkankas, John (1959) Gemstones of North America in two volumes, Volume I, Van Nostrand Reinhold, New York, 675 pages.
Sterrett, Donald B. (1908) Variscite, Mineral resources of the United States for 1907, Part II Nonmetallic Products, U.S. Geological Survey, pages 853-856.
Sterrett, Donald B. (1909) Variscite, Mineral resources of the United States for 1908, Part II Nonmetallic Products, U.S. Geological Survey, pages 795-801.
Sterrett, Donald B. (1910) Variscite, Mineral resources of the United States for 1909, Part II Nonmetallic Products, U.S. Geological Survey, pages 888-897.
Sterrett, Donald B. (1911) Variscite, Mineral resources of the United States for 1910, Part II Nonmetallic Products, U.S. Geological Survey, pages 1073-1074.
Sterrett, Donald B. (1912) Variscite, Mineral resources of the United States for 1911, Part II Nonmetallic Products, U.S. Geological Survey, pages 1056-1057.
Sterrett, Donald B. (1914) Variscite, Mineral resources of the United States for 1913, Part II Nonmetallic Products, U.S. Geological Survey, page 334.
Welsh, John E. and James, Allan H. (1961) Pennsylvanian and Permian Stratigraphy of the Central Oquirrh Mountains, Utah: In Geology of the Bingham Mining District and Northern Oquirrh Mountains, Utah Geological Guidebook, No. 16, pages 1-16.
Nutrition and You
Nutrition is the relationship of foods to the health
of the human body. Proper nutrition means that you are receiving enough
foods and supplements for the body to function at optimal capacity. It is
important to remember that no single nutrient or activity can maintain
optimal health and well being, although it has been proven that some
nutrients are more important than others. Nutrition plays a critical role in
athletic performance, but many active people do not eat a diet that helps
them do their best. Without a basic understanding of nutrition, popping a
pill seems easier than planning a menu. In reality, there is no pill, potion,
or powder that can enhance your performance like the right foods and fluids can.
All of the nutrients are necessary
in different amounts along with exercise to maintain proper health. There
are six main types of nutrients used to maintain body health. They are:
carbohydrates, fats, proteins, vitamins, minerals, and water. They all must
be in balance for the body to function properly. There are also five major
food groups. The groups are: fats and oils, fruits and vegetables, dairy
products, grains, and meats.
Exercise is also an important part of nutrition.
Exercise helps tone and maintain muscle tissue and ensure that the body's
organs stay in good condition. Healthy eating without exercise will not
result in good nutrition and a healthy body - neither will exercise without
nutrition. The most important thing about exercise is that it be practiced
regularly and that it be practiced in accompaniment with a healthy diet. It
is also desirable to practice more than one sport, as different sports exercise
different areas of the body.
Carbohydrates, proteins, and fats are the sources of energy for the
body. To have enough energy you need to consume enough energy.
Getting adequate calories is one of the keys to an ergogenic, or
performance-enhancing, diet. With too few calories you will feel tired and
weak, and you will be more prone to injuries. The energy content of food
is expressed in calories.
There are 9 calories per gram in fat and there are about 4 calories
per gram in proteins and carbohydrates . Carbohydrates are the main
source of energy for the body. A high-carbohydrate diet increases stores of
glycogen, the energy for muscles, and improves overall athletic
performance. The bulk of the day's calories--60% to 70%--should come
from carbohydrates such as bread, cereal, grains, pasta, vegetables, and
fruit. Different carbohydrate foods can affect your energy level in different
ways. Digestion rates are expressed as a "glycemic index." Foods with a
high glycemic index release energy into the bloodstream rapidly, while
foods with a moderate or low glycemic index release their energy more
slowly. However, beware of the old idea that simple
sugars are always digested rapidly and cause wide swings in blood sugar,
and that all complex carbohydrates like bread are digested more slowly
and don't cause blood sugar fluctuations. This turned out to be wrong. If
you exercise for longer than an hour, you can begin to
deplete your muscles of glycogen. By consuming 30 to 75 grams per hour
of high-glycemic-index carbohydrate in liquid or solid form when you
exercise, you can minimize this effect.
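The calorie figures above (9 calories per gram of fat, about 4 per gram of protein or carbohydrate) and the percentage guidelines (60% to 70% of calories from carbohydrate, and, as noted later, about 20% from fat) reduce to simple arithmetic. A sketch in Python; the 2,500-calorie day and the exact 65%/20% split are illustrative assumptions, not figures from the text:

```python
# Energy arithmetic from the guidelines above: 9 cal/g for fat,
# 4 cal/g for carbohydrate and protein. The 65% carb / 20% fat
# split is an illustrative point inside the recommended ranges.

CALS_PER_GRAM = {"carbohydrate": 4, "protein": 4, "fat": 9}

def macro_grams(total_calories, carb_pct=0.65, fat_pct=0.20):
    """Convert a calorie budget and percentage targets into grams."""
    protein_pct = 1.0 - carb_pct - fat_pct
    pcts = {"carbohydrate": carb_pct, "fat": fat_pct, "protein": protein_pct}
    return {m: total_calories * p / CALS_PER_GRAM[m] for m, p in pcts.items()}

for macro, grams in macro_grams(2500).items():
    print(f"{macro}: {grams:.0f} g")
# carbohydrate ≈ 406 g, fat ≈ 56 g, protein ≈ 94 g
```

Dividing by the calories-per-gram figure is what makes a high-fat diet calorie-dense: the same 500 calories is only about 56 g of fat but 125 g of carbohydrate.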
This energy is mostly used for muscle
movement and digestion of food. Some sources of carbohydrates are:
grains, fruits, vegetables, and anything else that grows out of the ground.
The energy in carbohydrates is almost instantly digested. This results in a
quick rise in blood sugar which is soon followed by a drop in blood sugar
which is interpreted by the body as a craving for more sugars. After a long
workout or competition, your depleted muscle glycogen stores must be
replenished, especially if you will be exercising again within the next 8
hours. Eat at least 50 grams of high-glycemic-index carbohydrate just
after exercise, and consume a total of at least 100 grams of
high-glycemic-index carbohydrate in the first 4 hours afterward.
Moderate-glycemic-index foods may be added for the next 18 to 20 hours, with a goal of
consuming at least 600 grams of carbohydrate during the 24 hours after an
intense workout or competition. Such a sugar low may also result in fatigue,
dizziness, nervousness, and headache. However, not all carbohydrates do
this. Most fruits, vegetables, legumes, and whole grains are digested more
slowly. Oatmeal is an excellent choice for an inexpensive carbohydrate-rich food.
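The post-exercise replenishment schedule above (at least 50 g of carbohydrate just after exercise, at least 100 g within the first 4 hours, and at least 600 g within 24 hours) is easy to check with a few comparisons. The function name and the sample intake numbers below are illustrative, not from the source:

```python
# Check a planned carbohydrate intake against the three post-exercise
# targets stated in the text: >= 50 g immediately, >= 100 g in the
# first 4 hours, >= 600 g over 24 hours.

def meets_replenishment_targets(g_immediate, g_first_4h, g_24h):
    """Return which of the three post-exercise carb targets are met."""
    return {
        "immediate_50g": g_immediate >= 50,
        "first_4h_100g": g_first_4h >= 100,
        "total_24h_600g": g_24h >= 600,
    }

print(meets_replenishment_targets(60, 120, 550))
# the immediate and 4-hour targets are met; the 24-hour target is not
```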
Fat is definitely an important energy source, particularly for athletes
involved in prolonged, low-intensity activity. (For high-intensity, short-
term activity, carbohydrate is the primary fuel source.) About 20% of the
calories in a performance-enhancing diet should come from fat (1), most
of it unsaturated fat like vegetable and fish oils. Fats, which are lipids, are
the source of energy that is the most concentrated.
Fats produce more than
twice the amount of energy that is in carbohydrates or proteins. Besides
having a high concentration of energy, fat acts as a carrier for the fat
soluble vitamins, A, D, E, and K. Also, by helping in the absorption of
vitamin D, fats help make calcium available to various body tissues, in
particular, the bones and teeth. Another function of fat is to convert
carotene to vitamin A. Fat also helps keep organs in place by surrounding
them in a layer of fat. Fat also surrounds the body in a layer that preserves
body temperature and keeps us warm. One other function of fat is to slow
the production of hydrochloric acid, thereby slowing down digestion and
making food last longer. Some sources of fats are meats and nuts as well
as just plain oils and fats.
Protein plays a minor role in energy production, contributing only
5% to 10% of the energy used during prolonged exercise. Although the
current recommended dietary allowance for protein is about 0.4 grams per
pound of body weight per day, most active people need slightly more. And
athletes involved in heavy resistance exercise or prolonged endurance
events may require 0.7 to 0.9 grams per pound per day. Even this amount
is relatively easy to eat, since 3 ounces of fish or chicken, 1 1/2 cups of
tofu, or 1 1/2 cups of garbanzo beans contain 20 to 24 grams of protein.
As an active person, you need protein for building muscles, repairing
tissues, growing hair and nails, making hormones, and assisting in
numerous other functions that contribute to a strong and healthy body.
Protein is found in many foods--such as meats and dairy products--besides fish.
The daily amount of protein you need ranges from 0.5 to 0.9 grams
per pound of body weight per day; the higher end of the range is
appropriate for athletes who are growing, building muscles, doing
endurance exercise, or restricting calories. A 6-ounce serving of fish
provides about 40 grams of protein--a good part of the daily 75 to 135
grams of protein needed by a 150-pound athlete. The protein in fish is
among the most healthful animal sources of protein. That's because fish is
low in saturated fat, the type associated with clogged arteries and heart
disease. Saturated fat (as in beef lard and cheese) is solid at room
temperature. Fish would be unable to function if their fat were saturated
like that of many warm-blooded animals. Instead, fish store energy in the
form of polyunsaturated oils that are soft and flexible in the cool
temperatures of oceans and mountain streams.
Proteins, aside from water, are the most plentiful
substances in the body. Protein is also one of the most important elements
for the health of the body. Protein is the major source of building material
in the body and is important in the development and growth of all body
tissues. Protein is also needed for the formation of all hormones. It also
helps regulate the body's water balance. When proteins are digested they
are broken down into simpler sections called amino acids. However, not
all proteins will contain all the necessary amino acids. Most meat and
dairy products contain all necessary amino acids in their proteins. Proteins
are available from both plants and animals. However, animal proteins are
more complete and thus desirable.
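The 0.5 to 0.9 grams-per-pound guideline given above is a one-line calculation. A minimal sketch; the function name is mine, and the 150-pound example simply reproduces the 75-to-135-gram range stated in the text:

```python
# Daily protein range from the 0.5-0.9 g per pound of body weight guideline.

def protein_range_g(body_weight_lb, low_g_per_lb=0.5, high_g_per_lb=0.9):
    """Return the (low, high) daily protein range in grams."""
    return body_weight_lb * low_g_per_lb, body_weight_lb * high_g_per_lb

low, high = protein_range_g(150)
print(f"150-lb athlete: {low:.0f}-{high:.0f} g protein per day")  # 75-135 g
```

For scale, a 3-ounce serving of fish or chicken at 20 to 24 g of protein (per the text) covers roughly a quarter of the low end of that range.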
As mentioned above,
there are six nutrients. All vitamins are organic food
substances that are found only in living things, plants and animals. It is
believed that there are about twenty substances that are active as vitamins
in human nutrition. Every vitamin is essential to the proper growth and
development of the body. With a few exceptions, the body cannot make
vitamins and must be supplied with them. Vitamins contain no energy but
are important as enzymes which help speed up nearly all metabolic
functions. Also, vitamins are not building components of body tissues, but
aid in the construction of these tissues. It is impossible to reliably
determine the vitamin requirements of an individual because of
differences in age, sex, body size, genetic makeup, and activity. One
source of a recommendation is the RDA. The RDA makes its
recommendations based on studies of consumption of the given nutrient.
The recommendation will usually specify what size diet the
recommendation is based on, for example, a two thousand calorie per day
diet. It is harmless to ingest excess of most vitamins. However, some
vitamins are toxic in large amounts. Vitamin A is a fat soluble vitamin
which is only available in two forms: pre-formed vitamin A, which is found in
animal tissue, and carotene, which can be converted into vitamin
A by animals. Carotene is easily found in carrots as well as other
vegetables. Vitamin A is important to the growth and repair of body
tissues and helps maintain a smooth, soft, and disease free skin. It also
helps protect the mucus membranes of the mouth, nose, throat, and lungs
which reduces the chance of infection. Another function is helping mucus
membranes combat the effects of air pollutants. Vitamin A also protects
the soft lining of all the digestive tract. Another function of vitamin A is to
aid in the secretion of gastric juices. The B complex vitamins have many
known sub-types, but they all are water soluble vitamins. The B vitamins
can be cultivated from a variety of bacteria, yeast, fungi, or molds. They
are active in the body by helping the body convert carbohydrates into
glucose, a form of sugar. B vitamins are also vital in the metabolism of
proteins and fats. They are also the single most important element in the
health of the nerves. B vitamins are also essential for the maintenance of
the gastrointestinal tract, the health of the skin, hair, eyes, mouth, liver,
and muscle tone. The intestines contain bacteria that produce vitamin B,
but milk-free diets and taking sulfonamides or antibiotics can destroy
these bacteria. Whole grains contain high concentrations of B complex
vitamins. Also, enriched bread and cereal products contain high
concentrations of B vitamins due to a governmental intervention of the
whole food group to ensure that the nation was getting enough B vitamins.
Vitamin C, also known as ascorbic acid, is a water soluble vitamin. It is
sensitive to oxygen and is the least stable of all vitamins. One primary
function of vitamin C is to maintain collagen, a protein necessary for the
formation of skin, ligaments, and bones. Vitamin C also plays a role in
healing of burns and wounds because it aids the formation of scar tissue.
It also helps form red blood cells and prevent hemorrhaging. Another
function is to prevent the disease, scurvy, which used to be seen in sailors
because of their lack of vitamin C in their diet. This was corrected by
issuing each sailor one lime per day, supplying the needed
vitamin C. Other sources include broccoli, Brussels sprouts, strawberries,
oranges, and grapefruits. Vitamin E is a fat soluble vitamin which is
made up of a group of compounds called tocopherols. There are seven
forms, but the form known as alpha tocopherol is the most potent.
Tocopherols occur in the highest concentrations in cold pressed vegetable
oils, all whole raw seeds and nuts, and soybeans. Vitamin E plays an
essential role in cellular respiration of all muscles, especially the cardiac
and skeletal. It makes these muscles able to function with less oxygen,
thereby increasing efficiency and stamina. It also is an antioxidant, which
prevents oxidization. This prevents saturated fatty compounds from
breaking down and combining to form toxic compounds.
Minerals are nutrients that exist in the body in organic and inorganic combinations.
There are approximately seventeen minerals that are necessary in human
nutrition. Although only about four or five percent of the body weight is
mineral matter, minerals are important to overall mental and physical
health. All of the body's tissues and fluids contain some amount of
mineral. Minerals are necessary for proper muscle function and many
other biological reactions in the body. Minerals are also important in the
production of hormones. Another important function of minerals is to
maintain the delicate water balance of the body and to regulate the blood's
pH. Physical and emotional stress causes a strain on the body's supply of
minerals. A mineral deficiency often results in illness, which may be
treated by the addition of the missing mineral to the diet. Calcium, a
primary mineral, is available through dairy products. In order to get all the
other minerals, one should eat protein rich foods, seeds, grains, nuts,
greens, and limited amounts of salt or salty foods. They don't contribute
energy themselves, but vitamins and minerals are integral to food
metabolism and energy production. Iron and calcium are the minerals
most commonly deficient in athletes, and strict vegetarians may be
deficient in vitamin B12. By consuming adequate calories and following
the food guide pyramid plan, your needs for all the important
micronutrients can be met.
Water is the ultimate ergogenic aid--but because the body has a poor
thirst mechanism, you must drink before you feel thirsty. Once you are
thirsty you are already slightly dehydrated, and your performance will be
impaired. To stay well hydrated, you need to drink about a quart of
caffeine-free, nonalcoholic fluids for every 1,000 calories of food you eat,
assuming you maintain your weight. To ensure that you are well hydrated
before you exercise, drink 2 cups of water or sports drink 2 hours
beforehand. To avoid dehydration during exercise, begin drinking early
and at regular intervals. For exercise lasting an hour or less, 4 to 6 ounces
of cool water every 15 to 20 minutes provides optimal fluid replacement.
During exercise that lasts longer than 60 minutes, carbohydrate-
electrolyte beverages containing 5% to 8% carbohydrate should be drunk
at the same rate to replace fluid and spare muscle glycogen. Also,
consuming sports drinks during intense activities such as soccer or
basketball may enhance performance. After exercise, replace every pound
lost during exercise with at least 2 cups of fluid.
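The fluid guidelines above reduce to simple arithmetic. The sketch below turns them into a small calculator; the function names and structure are illustrative choices of this example, not from any standard source:

```python
def daily_fluid_quarts(calories):
    """Baseline need: about 1 quart of fluid per 1,000 calories eaten."""
    return calories / 1000

def during_exercise_oz(minutes):
    """4 to 6 oz of cool water every 15-20 minutes (exercise of an hour
    or less). Returns a (low, high) range in ounces for the session."""
    low = (minutes // 20) * 4
    high = (minutes // 15) * 6
    return low, high

def rehydration_cups(pounds_lost):
    """After exercise: at least 2 cups of fluid per pound of weight lost."""
    return 2 * pounds_lost

print(daily_fluid_quarts(2500))   # 2.5 quarts on a 2,500-calorie diet
print(during_exercise_oz(60))     # (12, 24) oz over a one-hour session
print(rehydration_cups(1.5))      # 3.0 cups after losing 1.5 lb
```

For exercise beyond an hour, the same drinking rate applies, but with a 5% to 8% carbohydrate-electrolyte beverage rather than plain water.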
Fiber, found only in plant foods, is an indigestible form of
carbohydrate that provides plants with structural rigidity. Fiber is
classified by its ability or inability to dissolve in water. Most plant foods
contain both types. (See "Soluble and Insoluble Fibers," below.) Both
soluble and insoluble fibers enhance the work of the intestines, but in
different ways. Following are some of the health benefits of these types
of fiber.
Soluble fiber slows the absorption of sugars and starches from the
small intestine into the bloodstream. This action helps smooth out the
peaks and valleys in blood sugar levels, possibly helping to ward off type
2 ("adult onset") diabetes. For someone who already has diabetes, soluble
fiber helps control blood sugar levels.
Cholesterol made by the body is an ingredient in bile, a substance
that is used in digestion and is recycled. Soluble fiber binds to bile acids in
the intestines, thereby lowering the body's cholesterol pool. Soluble fiber
can lower blood cholesterol levels by at least 5% in people with healthy
cholesterol levels, and even more in those who have elevated cholesterol.
Insoluble fiber provides bulk that helps move food residues through
the intestine, which helps prevent constipation and diverticular disease.
Insoluble fiber also flushes carcinogens, bile acids, and cholesterol out of
the system. Studies of total fiber intake (soluble and insoluble) show a
decreased risk of colon, rectal, breast, prostate, and other cancers with
consumption of a high-fiber diet.
Dietary fiber plays an important role in weight management.
Because fiber helps you feel full and slows the emptying of your stomach,
you eat less. Also, high-fiber diets tend to be low in calories and less likely
to contribute to obesity. By avoiding obesity, you lower your risks for the
development and progression of heart disease, cancer, high blood
pressure, and other chronic conditions.
To increase your fiber intake, make plant foods the foundation of
your diet. For packaged foods, read nutrition labels for the amount of fiber
per serving--a good source of fiber contains more than 1 gram per serving.
Refined bread and cereals usually contain less than that, and beans, whole
grains, and fiber-fortified bread and cereals usually have more (table
below). Be sure to get plenty of fluid with a high-fiber diet.
Common Fiber-Containing Foods
Food Dietary Fiber Content (grams)
Kidney beans, cooked (3/4 c) 9.3
Cereal, All Bran (1/3 c) 8.5
Prunes, dried (3 medium) 4.7
Popcorn, air popped (3 1/2 c) 4.5
Pear (1 medium) 4.1
Apple (1 large) 4.0
Orange (1 large) 4.0
Potato, baked, with skin (1 medium) 4.0
Spinach, cooked (1 c) 4.0
Sunflower seeds (1 oz) 4.0
Banana (1 medium) 3.8
Rice, brown, long-grain, cooked (1 c) 3.3
Carrots, cooked (1/2 c) 3.2
Barley, cooked (1/2 c) 3.0
Strawberries (1 c) 2.8
Bread, whole wheat (1 slice) 2.4
Cranberries (1/2 cup) 2.0
Cereal, wheat flakes (3/4 c) 1.8
Oatmeal, cooked (3/4 c) 1.6
Seaweed, nori or kombu (1 c) 1.0
Bread, white (1 slice) 0.6
Increase your fiber slowly to prevent cramping, bloating, and other
unpleasant symptoms. Be aware, too, that you can get too much fiber.
Excess fiber decreases the absorption of minerals, and large amounts over
a short time--as in supplements--can lead to a serious intestinal
obstruction. More than 50 grams per day is probably too much.
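The per-serving figures in the table above can be tallied against the rough daily bounds just discussed. A minimal sketch, assuming the dictionary simply copies a few table entries (the names and structure are this example's own):

```python
# Grams of dietary fiber per serving, copied from the table above.
FIBER_PER_SERVING = {
    "kidney beans, cooked (3/4 c)": 9.3,
    "pear (1 medium)": 4.1,
    "bread, whole wheat (1 slice)": 2.4,
    "oatmeal, cooked (3/4 c)": 1.6,
}

def total_fiber(servings):
    """Sum fiber grams for a dict of {food: number of servings}."""
    return sum(FIBER_PER_SERVING[food] * n for food, n in servings.items())

day = {
    "oatmeal, cooked (3/4 c)": 1,
    "pear (1 medium)": 1,
    "kidney beans, cooked (3/4 c)": 1,
    "bread, whole wheat (1 slice)": 2,
}

grams = total_fiber(day)
print(round(grams, 1))   # 19.8
print(grams > 50)        # False: well under the ~50 g/day upper limit
```

The same tally makes it easy to increase fiber gradually, adding one food at a time as the text advises.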
Nutrition is just one aspect of total body health. It is important to
remember that one must complement good nutrition with good exercise and
emotional health in order to achieve complete well-being. It is also
important to remember that no one part of nutrition will completely
fulfill the body's requirements for health. Knowledge of the nutrients
and their functions is essential to understanding the importance of good
nutrition.
Chicago School of Architecture (c.1880-1910)
What is the Chicago School of Architecture?
In the history of American art, the term "Chicago School" commonly refers to the groundbreaking skyscraper architecture developed during the period 1879-1910 by the designer-engineer William Le Baron Jenney (1832-1907), along with a number of other innovative American architects including William Holabird (1854-1923), Martin Roche (1853-1927), Daniel Hudson Burnham (1846-1912), John Wellborn Root (1850-91), Dankmar Adler (1844-1900), and Louis Sullivan (1856-1924). These individuals went on to form some of the most famous firms in 19th-century architecture, such as Holabird & Roche, Burnham and Root (later D.H. Burnham and Co), and Adler and Sullivan. Frank Lloyd Wright (1867-1959), who worked for Adler and Sullivan, was another important Chicago building designer but left to focus on domestic design.
Although the phrase "Chicago School" properly identifies the midwest city as the locus of the new developments in high-rise design - Root, Burnham, Adler, and Sullivan actually formed the Western Association of Architects in opposition to East Coast architects - it had no unified or coherent set of principles, and the landmark buildings created by the members of the school used a wide variety of designs, construction techniques, and materials.
Later, during the 1940s, a new wave of building design - known today as the Second Chicago School of architecture - appeared in the city. This centred around European Modernism, the work of the ex-Bauhaus director Mies van der Rohe (1886-1969), as well as his teaching activities at the Illinois Institute of Technology. Closely associated with the "International Style" and its idiom of modern minimalism, which derived in part from the Bauhaus design school, whose founder Walter Gropius also emigrated to the US, the Second Chicago School is famous for structures like the Lake Shore Drive Apartments (1948-51), and the Seagram Building (1954-58). The principal firm of architects associated with the Second Chicago School is Skidmore, Owings and Merrill, whose breakthroughs in design and structural engineering during the 1960s, spearheaded by Fazlur Khan (1929-82), confirmed America as the undisputed leader in high-rise 20th-Century architecture and led to a new generation of supertall towers.
There were several reasons why Chicago produced such an outstanding group of architects in the 1880s, whose work would have such a profound effect upon high-rise building design. To begin with, the disastrous fire of 1871 coupled with a resurgence of civic pride had (by 1880) led to a building boom. At the same time, the city's population was rapidly expanding: by 1890 it totalled more than a million people, surpassing Philadelphia to become the second-biggest city (after New York) in the United States. All this caused a surge in property prices, notably in the Loop area, where landlords were desperately looking for ways to add value to their investment in real estate. Under these conditions, the only feasible way forward was to build upwards. More floors meant more office space to rent and thus more profits. Furthermore, the city of Chicago - already home to inventions like the McCormick reaper, the Pullman sleeping car, and mail-order retailing - was a location where new ideas thrived.
There were two other timely factors which had no connection with the city. First, in the mid-1880s, came the introduction of the electric motor for Elisha Otis's safety elevator. This increased the speed and height of ascent, and led to convenient push-button controls. Second, the price of steel tumbled - from $166/ton in 1867 to $32/ton in 1895 - which greatly facilitated the adoption of steel-frame designs, which in turn enabled the erection of taller buildings.
The first design breakthrough by the Chicago School was in the area of structural foundations. It arose largely because Chicago was built on marshy ground, which was unable to support tall buildings. The city's architects solved the problem in stages. As far back as 1873, Frederick Baumann had suggested that each vertical foundation of a building should stand on a wide pad that would distribute its weight more widely over the marshy land. A decade later, Daniel Burnham and John Root incorporated this exact same idea in their Montauk Building (Montauk Block) (1882-83) on West Monroe Street. But this type of foundation took up too much basement space and was only able to support a structure of 10 stories in height. The way forward was provided by Dankmar Adler, who used his experience as a military engineer in the Union army to devise a foundation "raft" of timbers, steel beams, and iron I-beams - an idea used successfully in the construction of Adler and Sullivan's Auditorium Building (1889). Adler made a final improvement in 1894 when he invented a type of underground, watertight foundation structure for the Chicago Stock Exchange which quickly became the template foundation for skyscrapers across the United States.
The first series of high-rises in both New York and Chicago - including the Tribune Building (1873-5) designed by Richard Morris Hunt, and the Auditorium Building (1889), by Adler and Sullivan - had traditional load-bearing walls of stone and brick. Unfortunately, these could not support supertall structures, a problem which stimulated Chicago School designers to invent a metal skeleton frame - first used in Jenney's Home Insurance Building (1884) - that enabled the construction of real skyscrapers. A metal frame was virtually fireproof and, since the walls no longer carried the building's weight, enabled architects to use thinner curtain walls, thus freeing up more usable space. The same applied to the exterior walls, which could now be replaced by glass, reducing the amount of electrical lights required. An important European influence in the use of metal skeletal frames, was the French architect Viollet-le-Duc (1814-79).
To complement the technical and structural advances made, Chicago architects invented a new set of skyscraper aesthetics, the impetus for which emanated from two totally different sources.
The first was the architect Henry Hobson Richardson. Although a graduate of the Ecole des Beaux-Arts in Paris, Richardson rejected its credo that French Neoclassical architecture embodied the ultimate design-standard. Instead, he preferred the more rugged Romanesque art of southern France, upon which he based the Marshall Field Warehouse (1885-7), whose harmonious massiveness completely altered Louis Sullivan's design of his famous Auditorium Building (1889). Sullivan's original drawing showed an eclectic structure with a high, gabled roof. In response to the Marshall Field Warehouse, Sullivan destroyed his original designs and replaced them with a restrained Romanesque structure with a single massive tower. "Richardsonian Romanesque" also influenced Solon S. Beman (1853-1914) in his design of both the brick and granite Pullman Building (1883) and the Fine Arts Building (1885), and Burnham & Root's design for the Rookery Building (1885-87). But perhaps the greatest master of Romanesque skyscraper design was Sullivan - notably in his interior of the Auditorium Building and the entrance to the Chicago Stock Exchange Building (1893-94) - although he was the first to embrace the new vertical shape entailed by buildings that for the first time had greater height than width. (See below for more about Sullivan's modern aesthetic.)
The second source of stylistic inspiration for the modern art created by the First Chicago School, stemmed from the nature of their prime building material: steel. The physical attributes of this crucial material lent themselves to the creation of the sinuous curve, an outcome which made it a perfect match for the fashionable style known as Art Nouveau, which flourished in both Europe and America, and which was a feature of both the Rookery Building and Chicago Stock Exchange. [See, for instance, the use of iron and steel by European Art Nouveau architects such as Victor Horta (1861-1947) and the Frenchman Hector Guimard (1867-1942). See also the use of wrought-iron in the design of the Eiffel Tower (1887-89, by Gustave Eiffel (1832-1923).] Steel also facilitated the emergence of the right angle, boldly expressed in Holabird and Roche's 13-story Tacoma Building (1889). This idiom was also an important factor in the upper floors of Adler & Sullivan's Stock Exchange Building, and most exquisitely in the sense of the sharp edges of the steel frame lying just beneath the thin, terracotta and glass walls of Burnham & Root's Reliance Building (1895).
The Chicago World Fair of 1893 signalled the end of the city's dominance in skyscraper design, although its reputation would soon be restored with the emergence of the Second Chicago School during the 1940s, due to the arrival of Bauhaus ideology, and later in the work of Mies van der Rohe and his followers, along with the outstanding multi-disciplinary achievements of Skidmore, Owings & Merrill (SOM), formed in Chicago in 1936 by Louis Skidmore and Nathaniel Owings.
A highly successful architect and the first Professor of Architecture (1876-77) at the University of Michigan, Ann Arbor, William Le Baron Jenney influenced a generation of pupils and apprentices, some of whom became famous across America, including Daniel Burnham, Louis Sullivan, William Holabird, and Martin Roche. He is best-known for designing the 10-story Home Insurance Building in Chicago (1884-85), the first high-rise in America to use a metal frame rather than stone and brick. This landmark structure influenced numerous architects, including Edward Baumann and Harris W. Huehl who designed the Chicago Chamber of Commerce Building (1888-9), whose interior light court extended the entire height of the building. Jenney also pioneered the use of terracotta and iron to reduce the risk of skyscraper fires.
Jenney's younger contemporary, Henry Hobson Richardson, favoured lithic structures with load-bearing walls like his Trinity Church, Boston (1873) rather than Jenney's steel frame. Even so, his masterpiece - the Marshall Field Wholesale Store (1885-1887, demolished 1930) - had a huge influence on the development of Chicago School building facades, notably those of Daniel Burnham and Louis Sullivan. Borrowing elements from both Romanesque and Renaissance architecture, this monumental structure emphasized form rather than ornamentation. Its multi-storied windows, for instance, topped by semicircular arches, lent the structure a beautiful, unified appearance.
The architectural partnership of Holabird, Simonds & Roche was founded in Chicago in 1880, by Holabird, Roche and the landscape architect Ossian Cole Simonds (1855-1931), all former trainees under Jenney. After Simonds left, it became Holabird & Roche. Its most important skyscraper projects included the Tacoma Building (completed 1889; demolished 1929), the Marquette Building (1895), the three Gage Group Buildings (1899) (at 18, 24 and 30 S. Michigan Avenue), and the Chicago Savings Bank Building (1904-5). In addition, Holabird and Roche designed a number of large, opulent hotels across America, including the Muehlebach Hotel (1915), Palmer House Hotel (1925) and Stevens Hotel (1927).
Root was born in Lumpkin, Georgia. Burnham was born in Henderson, New York, but grew up in Chicago. Of the two, Root had a better formal education, with preparatory schooling in Liverpool, England, and a degree in civil engineering from New York University; he also worked as an unpaid apprentice in the firm of James Renwick (1818-95), one of the great champions of 19th century Gothic architecture. Burnham's scholarship was less impressive than his social, artistic and managerial talents, and he was rejected by both Harvard and Yale universities. Yet, his sketching was good enough to get him work in Jenney's office and later in the firm of Peter B. Wight, both busily engaged in rebuilding Chicago after the fire of 1871. In Wight's office, Burnham met John Root, and the two formed a partnership in 1873. They began by designing private houses for the barons of Chicago's meat industry. The fact that both young men married into wealthy families also helped them to establish the necessary contacts among the midwest elite.
Both men sensed the differing but reciprocal qualities of talent and temperament that, when integrated, would form the ideal partnership. Amiable, quick-witted and brilliant among friends, Root was shy and reserved in public. Unless guided and stimulated, he also tended to procrastinate. Burnham, on the other hand, toughened by his earlier failures, had grown increasingly determined, aggressive and persuasive and ultimately became the chief office administrator and liaison with clients. He was also, Root acknowledged, mainly responsible for the planning and layout of most of the firm's buildings and served as a perceptive critic of the architectural designs, which both partners considered Root's special domain. Their respective views of architectural design were also different: Root greatly admired the Romanesque idiom of Henry Hobson Richardson (1838-86), while Burnham was influenced by European Beaux-Arts and neoclassical architecture.
During their 18-year partnership, Burnham and Root built hotels, railway stations, stores, warehouses, schools, hospitals, churches and more than 200 private residences and apartment buildings. Yet, their greatest achievements were the tall office buildings, or skyscrapers.
Burnham and Root's Most Famous Buildings
Although Burnham and Root built numerous metal-cage, steel-framed buildings in the late 1880s and 1890s, their most famous skyscrapers, ironically, were three wall-bearing structures, all built in Chicago for the developers Peter and Shepherd Brooks. The 10-story Montauk Block (1882-83) was virtually without traditional historical references, predicting in its stern obeisance to functionalism much of the ethic and aesthetic of the subsequent Modern movement. Its design incorporated Root's floating raft system of interlaced steel beams, which kept the building stable in Chicago's notoriously marshy ground. The Rookery (1885-87) was a more consciously elaborate building with Root's lush ornament highlighting the Romanesque stylistic references. Its logical internal plan, attributed to Burnham, with four connecting wings surrounding a light court, would long serve as a model for skyscraper layout. (Its lobby was remodelled in 1905 by Frank Lloyd Wright.) The stark, dark Monadnock Building (1889-91) divested itself of ornament even more explicitly than the Montauk. Despite the anachronism, demanded by the client, of its dramatically flared, wall-bearing structure, the Monadnock would become another canonical monument of Modernism. Another of the firm's important structures was the Rand McNally Building, completed 1890 but demolished 1911, which was the world's first ever steel-framed skyscraper. In San Francisco, Burnham and Root's Mills Building (1890-91) reflected a significant synthesis of the essential skyscraper elements: steel frame, four-winged plan around a central light court and orderly Chicago School proportions, as accented by Root's exuberant ornament.
D.H.Burnham and Company
Before Root's premature death from pneumonia in 1891, the firm had made preliminary plans for the elegant Reliance Building, also in Chicago, which Burnham completed with Charles Atwood in 1894. He also completed several other designs begun by Root including the 21-story Masonic Temple Building (1892). Burnham was also left to choreograph the epochal World's Columbian Exposition of 1893 (whose Beaux-Arts image ironically signalled the decline of the Chicago School) and to pilot the firm, reconstituted as D.H.Burnham and Company. Burnham's work over the next 20 years would make continuing contributions to skyscraper architecture - notably the iconic Flatiron Building (1901-3) in New York - and urban planning. His grand vision for Chicago as a "Paris on the Prairie", along with his interest in art in general and the classical revival in particular, gave impetus to the City Beautiful movement, whose principles were reflected in the 1909 "Plan of Chicago", the 1902 plan for the renewal of the Mall area of Washington DC, and in the urban plans for Cleveland (1903) and San Francisco (1905). Yet, Burnham never found a replacement for Root in what had indeed been an ideal partnership. He died in a car crash in 1912 while on holiday in Germany.
During its 12 years of existence (1883-95), the Chicago firm of Adler and Sullivan left an imprint on urban public art far beyond the American Midwest. Dankmar Adler led the movement to license architects, with the result that the first registration act was passed in Illinois in 1897. Louis Sullivan became the first American architect to produce a modern style of architecture and the first architect anywhere to revolutionize skyscraper aesthetics and give a stylish unity to the tall building. The pair were also early employers and mentors of Frank Lloyd Wright, who revered both men for decades.
Adler, born in Germany, emigrated with his parents first to Detroit and then to Chicago. After training in architectural offices in both cities, he became a practicing architect in Chicago during the 1870s. Sullivan joined him in 1882 as a minor partner. Full partnership came in 1883, when Adler and Sullivan was founded. Adler's father was the rabbi of an important Chicago congregation, and many of the firm's clients came from the Jewish community in Chicago.
Born in Boston, Sullivan was the son of artistic parents and was drawn to the arts at a young age. His formal education was restricted to one year in architecture at the Massachusetts Institute of Technology and another year in Paris at the Ecole des Beaux Arts. Work in architectural offices in Philadelphia and New York provided the finishing touches. Known today as the "father of the modern skyscraper" and regarded, along with H.H.Richardson and Frank Lloyd Wright, as one of the great threesome of American architecture, Sullivan early on stated his determination to create a "modern" style of architecture, with buildings that were largely original in form and detail instead of being dependent for inspiration on historic styles, like Romanesque, Gothic, Renaissance, Baroque, or Neoclassicism.
The results of his ambition are exemplified in several projects which began in the mid-1880s. First, in keeping with Adler and Sullivan's initial reputation as theater architects, came the Auditorium Building (1886-89), a building which incorporated not only a magnificent 4,000-seat theater, but also The Auditorium Hotel, plus a 17-floor office building with ground level commercial storefronts. It was the Auditorium Building that put Chicago on the cultural map, elevating the city's profile sufficiently to enable it to host the World Fair of 1893.
This was followed by the Wainwright Building, (1890-91), a steel-frame skyscraper built for Ellis Wainwright in St. Louis. For this building, Sullivan devised a scheme for unifying the fronts of a building that was taller than it was wide. By abandoning historic styles, most of which had been developed for buildings that were wider than tall, Sullivan was free to manipulate his materials in an original way that achieved aesthetic unity. This he achieved in the Wainwright by knitting together thin, vertical piers and textured, horizontal spandrels into an integrated architectural fabric.
Adler the Structural Engineer
Adler made these designs possible by his efficient and forward-looking management of the firm's business affairs, for he secured the clients and encouraged them to build Sullivan's unusual designs. As outlined above, he also took charge of the mechanical and structural aspects of design. Together they worked as an effective design team that produced numerous architectural milestones, especially between 1888 and 1895, including - in addition to the above-mentioned Auditorium and Wainwright Buildings - the Getty Tomb (1890-91), the Schiller Theater Building (1891-93), the Palazzo-style Guaranty Building (Prudential Building) Buffalo (1894), and the Chicago Stock Exchange building (1893-94) - whose trading floor is now preserved at the Art Institute of Chicago. A related masterpiece, designed by Sullivan after the partnership had dissolved, was the Carson, Pirie, Scott and Company Department Store Building (1899), complete with Art Nouveau ironwork by the entrance.
Sullivan's Modern Aesthetics
Of all the architects associated with the Chicago School, it was Louis Sullivan who first rose to the challenge of creating a new "modern" aesthetic for high-rise towers. He did so by accepting the new but inevitable rectangular box-like shape created by the steel frame, and adopting the credo "form follows function" - meaning, practical matters determine shape. He therefore gave his buildings a new image - one that recalled the classical tripartite division associated with the classic column of Greek and Roman architecture, namely base, shaft, and capital - while simplifying the appearance of the building by using vertical bands to draw the eye upwards. But although his "modern" structures with their simplified vertical aesthetics paved the way for the next wave of modernist architecture - a late 1920s style heavily influenced by the Bauhaus School in Weimar led by Walter Gropius, which became known as the International Style of modern architecture - they also displayed an equally modern type of ornament. This decoration would later be rejected by the International Style architects who sought a modern style entirely devoid of historical precedents. Sullivan's designs are thus more properly seen as a bridge between the stylistic Romanesque architecture of 19th century skyscrapers, and the clean, unadorned lines of 20th century modernism.
Without Adler, it is unlikely that Sullivan could have achieved what he did; and without Sullivan, Adler would probably be virtually unknown today. Yet, in 1895 they dissolved their partnership for reasons still not fully explained.
In the following years, neither architect received many commissions. Adler died in 1900, but Sullivan endured a 20-year-long decline, plagued by financial problems and alcoholism. He managed to obtain a few commissions for a number of small-town midwestern banks and, at the same time explained his ideas and goals in a series of books - Kindergarten Chats and Other Writings (revised 1918), The Autobiography of an Idea (1924) and A System of Architectural Ornament According to a Philosophy of Man's Powers (1924). One lasting contribution of his style was that it provided the basis for the modern architectural idiom developed by his student Frank Lloyd Wright. Sadly, Sullivan died in poverty in a Chicago hotel room, at the age of 67.
Here is a short chronological list of the most important high-rise buildings associated with the First Chicago School of architecture, together with the architects responsible. Unless otherwise indicated, all structures are located within the Windy City.
- First Leiter Building (1879), William Le Baron Jenney
Breast milk jaundice (BMJ), benign unconjugated hyperbilirubinemia associated with breast-feeding, is a common cause of prolonged jaundice in otherwise healthy breast-fed infants born at term (1). BMJ presents in the first or second week of life, and can persist for as long as 12 weeks before spontaneous resolution. The incidence of BMJ in the exclusively breast-fed infant during the first 2 to 3 weeks of life has been reported at 36% (2). Despite the fact that prolonged jaundice caused by breast milk is a common occurrence in the neonatal period, a large number of tests are required to rule out pathological causes (3).
Although many theories have been proposed to explain BMJ, the jaundice of breast-fed infants is commonly of undetermined etiology. Presently, the increased intestinal absorption of bilirubin, and the resultant increase in its enterohepatic circulation, appears to be one of the most convincing mechanisms to explain the neonatal jaundice associated with breast-feeding (1,4,5). An important consideration related to intestinal bilirubin absorption is the establishment of a population of intestinal bacteria that converts bilirubin glucuronides to various urobilinoids, and therefore reduces the availability of bilirubin for intestinal reabsorption (6,7).
Recent studies have demonstrated that human milk, far from being a sterile fluid, constitutes an excellent and continuous source of commensal bacteria for the infant gut. Staphylococcus, Streptococcus, Bifidobacterium, and Lactobacillus are the most commonly determined bacterial spp (species) in human milk (8–12). Molecular analysis shows that these bacteria are metabolically active in the human gut, increasing the production of functional metabolites such as butyrate, which is the main energy source for colonocytes and plays a key role in the modulation of intestinal function (13).
We hypothesized that bacteria in breast milk may play a role in reducing the occurrence of BMJ because of their influence on the gut microflora. The present study aims to investigate the effects of the bacterial content of breast milk and the infant's feces on the occurrence of BMJ.
The present study was approved by the Human Ethics Committee of the Dokuz Eylul University Faculty of Medicine. The participating women received verbal and written information about the aim and structure of the study, and they were asked for written consent to participate.
A total of 600 infant-mother pairs were screened at the Newborn Outpatient Clinic, and 60 infant-mother pairs were enrolled consecutively in the study according to the following selection criteria: infants who were fed exclusively by breast-feeding; healthy women without present or past underlying conditions; normal, full-term pregnancy; and absence of infant and/or maternal perinatal problems, including mastitis. None of the mothers enrolled in the present study had received a probiotic treatment during pregnancy or after birth, and none of them had taken antibiotic therapy after birth. Also, the infants did not receive antibiotics, probiotics, probiotic-supplemented products, or probiotic- and prebiotic-supplemented formulas before their enrollment or thereafter. The participants provided samples of breast milk and infant feces between the 14th and 28th postnatal days; mother-infant pairs that failed to provide proper breast milk and infant feces samples were not included in the study.
Thirty infants who developed prolonged jaundice and were considered to have BMJ were enrolled in the study. BMJ was defined as jaundice beginning after 5 to 7 days of life and peaking around the 10th day of life or later. BMJ was diagnosed when other causes of prolonged jaundice were ruled out and serum bilirubin levels returned to normal before 3 months in fully breast-fed infants (2). Infants were excluded if they had known risk factors, such as blood group incompatibilities, positive Coombs test, glucose-6-phosphate dehydrogenase deficiency, any laboratory evidence of hemolytic disease as evidenced by anemia, reticulocytosis, or abnormality of the blood smear, or perinatal factors associated with an increased risk of hyperbilirubinemia, including maternal diabetes mellitus, polycythemia, cephalohematoma, asphyxia, hypothermia, intracranial hemorrhage, or perinatal infection. The control group was composed of 30 healthy infants who were born in our hospital and had no clinical jaundice during their follow-up period.
Sample collection date and time were recorded, as were the infants' birth dates, gestational ages, birth weights, and sex; the mothers' demographic and anthropometric characteristics (age, race, parity, body mass index); and the route of delivery (cesarean section versus vaginal).
Milk and Feces Specimen Collection
Breast milk samples were collected during the third and the fourth postpartum week. For this purpose, nipples and mammary areola were cleaned with soap and sterile water, and then chlorhexidine was applied. The breast milk sample was collected in a sterile tube after manual expression using sterile gloves. The first drops (approximately 0.5 mL) were discarded. Milk samples were taken bilaterally and then 5 mL of these specimens were put into sterile tubes. Fecal samples of the infants were also put into sterile tubes. All samples were placed on ice, immediately sent to the laboratory, and then stored at −70°C until polymerase chain reaction (PCR) analysis.
Bacterial DNA Isolation From Feces
Fecal samples of each infant were homogenized, and 150 mg of homogenized feces were obtained from each sample. A ZR Fecal DNA MiniPrep (Zymo Research Corp, Irvine, CA) was used for the isolation of bacterial DNA from the homogenized stool according to the manufacturer's instructions (14).
Bacterial DNA Isolation From Breast Milk
After homogenization of breast milk samples, 300 μL of homogenized breast milk samples were centrifuged at 3000g for 5 minutes. High Pure PCR Template Preparation Kit (Roche Applied Science, Penzberg, Germany) was used for the isolation of bacterial DNA from lysozyme-treated breast milk according to the manufacturer's instructions.
Primers and Universal Probe Library Probes
Eleven different genus-specific primer sets were used in the present study. A Multiple Sequence Alignment Web tool, CLUSTALW2 (http://www.ebi.ac.uk), was used to identify the homologous regions of bacterial strains for each bacteria. Primer sets were designed using the Primer-BLAST program (NCBI, GenBank, BLAST), and Universal Probe Library probes were selected by using ProbeFinder version 2.45, a Web-based software tool (http://www.roche-applied-science.com).
Real-time PCR for bacterial DNA extracted from breast milk and fecal samples was carried out in duplicate in a total reaction volume of 20 μL. Fifteen different bacterial species were screened for each sample. Bacterial species that were screened in the infant's feces and the breast milk samples were the following: Bifidobacterium spp (B bifidum, B adolescentis, B longum), Lactobacillus spp (L gasseri, L rhamnosus, L fermentum, L plantarum), Staphylococcus spp (S epidermidis, S hominis, S aureus), Streptococcus spp (S salivarius, S mitis), Clostridium spp (C perfringens, C difficile), and Bacteroides spp (B fragilis).
The results were analyzed by using the “Abs Quant/2nd Derivative Max” method with the LightCycler 480 II (Roche) analysis program. The results were expressed as “Crossing Point value” (cycle number in a log-linear region). Calculated Crossing Point values are inversely correlated with microorganism concentrations (15).
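The inverse relationship between Crossing Point (Cp) values and template concentration follows from the PCR chemistry: under the common assumption of near-perfect amplification efficiency, the template doubles every cycle, so a Cp difference of ΔCp corresponds to a 2^ΔCp fold-difference in starting material. A minimal sketch of this conversion; the efficiency and Cp values below are illustrative assumptions, not data from the study:

```python
# Relative quantification from real-time PCR Crossing Point (Cp) values.
# Assumes ideal amplification efficiency (template doubles per cycle),
# so relative amount = efficiency ** (reference_Cp - sample_Cp).

def relative_amount(sample_cp, reference_cp, efficiency=2.0):
    """Amount of target in the sample relative to the reference.

    A sample crossing threshold one cycle earlier than the reference
    started with `efficiency`-fold more template.
    """
    return efficiency ** (reference_cp - sample_cp)

# Illustrative Cp values: the sample crosses threshold 3 cycles earlier,
# i.e. it contains ~8x more of the target organism than the reference.
print(relative_amount(sample_cp=22.0, reference_cp=25.0))  # → 8.0
```

This is why a lower Cp indicates a higher microorganism concentration, as the text notes.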
All analyses were performed using computer software SPSS for Windows release 15.01 (SPSS Inc, Chicago, IL). The bacterial counts in feces and in breast milk were non-normally distributed, and thus nonparametric statistical methods were applied. The Mann-Whitney test was used to test differences between the 2 groups. Unless otherwise specified, the results are expressed as mean (±standard error) and median (upper-lower quartiles). A probability level of <0.05 was considered to be statistically significant. The Pearson Correlation Index was used to calculate the relation between milk's and feces’ microbiological concentrations and peak bilirubin levels. The association between frequencies was tested using Fisher exact test. An estimated sample size of 30 was determined to be necessary to detect a relative difference of 0.50 between groups in terms of breast milk microbial colonization with 80% power and a 2-tailed α level of 0.05.
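The Mann-Whitney U statistic used for the nonparametric group comparisons can be computed directly: U counts, over all cross-group pairs, how often a value from one group exceeds a value from the other (ties count one-half). A stdlib-only sketch; the log10 bacterial counts below are illustrative, not the study's data:

```python
# Mann-Whitney U statistic for two independent groups, as used in the
# study's nonparametric comparisons. Values are illustrative log10
# bacterial counts, not study data.

def mann_whitney_u(a, b):
    """U for group `a`: number of (x, y) pairs with x > y, ties as 0.5.
    The statistic is then compared against critical-value tables (or a
    normal approximation) to obtain a p-value."""
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

control   = [8.1, 7.9, 8.4, 8.0, 7.7, 8.3, 8.2, 7.8]
jaundiced = [6.9, 7.2, 6.5, 7.0, 6.8, 7.1, 6.7, 6.6]

u = mann_whitney_u(control, jaundiced)
print(u)  # → 64.0 (complete separation: every control value is larger)
```

In practice a statistics package (e.g. SPSS, as in the study, or `scipy.stats.mannwhitneyu`) would also supply the p-value.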
The Hospital Ethical Committee for Human Research of the participating hospital approved the research protocol.
Thirty infants with prolonged jaundice and 30 healthy infants for the control group were included in the study. Fecal DNA isolation could not be managed in 1 sample from the control group because of technical reasons. The 2 groups were similar in terms of maternal, fetal, and neonatal demographic characteristics. The mode of delivery, which may potentially affect our results, was similar in the 2 groups. The total bilirubin levels at the time of sample collection for the study and control groups were 14.0 ± 2.7 and 3.8 ± 1.1, respectively. There was no significant difference between the 2 groups in terms of specimen collection day (17.4 ± 2.6 vs 18.1 ± 2.0).
Of all studied bacteria, Bifidobacterium spp was the most common microorganism in breast milk samples. The most frequently detected Bifidobacterium spp were B bifidum (86%), B adolescentis (62%), and B longum (32%). The proportion of positive breast milk samples for different Bifidobacterium and Lactobacillus spp in the 2 groups is shown in Figure 1. As illustrated, the presence of these microorganisms in breast milk was higher in the control group in comparison with the BMJ group, but was not statistically significant.
When we compared the 2 groups in terms of the concentration of these microorganisms in breast milk samples, B adolescentis and B bifidum concentrations were found to be higher in the control group than in the jaundiced group (P < 0.01). Although the breast milk concentration of Lactobacillus spp seemed to be higher in the control group than in the jaundiced group, there was no statistical significance (Table 1).
When the microbial content of fecal samples was compared, concentrations of B adolescentis, B longum, and B bifidum spp were found to be significantly higher in the control group than in the jaundiced group (P < 0.01). There was no significant difference between the 2 groups in terms of fecal Lactobacillus spp concentrations. No significant difference between the 2 groups was found in terms of clostridial organisms and other bacterial species.
Correlations between the feces and breast milk bacterial concentrations were evaluated for the microorganisms showing a significant concentration difference between the 2 groups. Only the concentrations of B bifidum showed a positive correlation between the feces and breast milk samples (Fig. 2). Also, the correlation between the concentration of the microorganisms in the milk and feces samples and serum bilirubin levels was evaluated. The concentrations of B adolescentis, B bifidum, and B longum in feces samples were negatively correlated with the serum bilirubin levels (r = −0.88, P < 0.001; r = −0.77, P < 0.001; and r = −0.43, P = 0.005, respectively). Although a negative correlation was detected between the breast milk's B bifidum concentration and the serum bilirubin levels (Fig. 3), breast milk's B adolescentis and B longum concentrations showed no correlation with the bilirubin levels.
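The negative correlations reported above come from the Pearson index, which can be computed without external packages. The paired values below are illustrative, chosen only to show the expected sign of the relationship; they are not the study's measurements:

```python
# Pearson correlation between fecal bacterial concentration and serum
# bilirubin, mirroring the analysis in the text (a negative r is
# expected: higher bifidobacterial counts, lower bilirubin).
from math import sqrt

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

fecal_conc = [8.4, 8.1, 7.9, 7.5, 7.2, 6.8, 6.5, 6.1]    # log10 counts
bilirubin  = [2.0, 3.1, 4.5, 6.0, 8.2, 10.5, 12.1, 14.0]  # serum levels

r = pearson_r(fecal_conc, bilirubin)
print(round(r, 3))  # strongly negative for these monotone series
```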
A wide array of hypotheses has been proposed in an attempt to understand the mechanism of BMJ. It was believed that an increased absorption of bilirubin played a key role in BMJ (1,2). To our knowledge, the present study is the first to suggest that bacteria in breast milk may play a role in reducing the occurrence of BMJ, influencing the gut microflora.
The influence of intestinal microflora on serum bilirubin levels was first shown directly in hyperbilirubinemic Gunn rats. Oral administration of a wide-spectrum antibiotic therapy resulted in the disappearance of fecal urobilinoids and, simultaneously, in a dramatic increase in serum bilirubin levels. Furthermore, intestinal colonization with C perfringens led to the reappearance of fecal urobilinoids with a partial decrease in serum bilirubin levels (16). Only 4 bacterial strains capable of bilirubin conversion have been isolated with certainty so far: C ramosum, C perfringens, C difficile, and Bacteroides fragilis (17). In the present study, effects of feces' Clostridium spp concentrations on prolonged jaundice were not observed.
Bifidobacterium spp comprise the predominant intestinal bacteria concentration in full-term, breast-fed infants at as early as 3 to 6 days of age (12,18,19). More recently, the Bifidobacterium predominance in the intestinal microbiota of breast-fed infants has been linked to the direct transfer of maternal bifidobacteria to newborns in breast milk (11,12). In the present study, 86% of breast milk samples were found to contain Bifidobacterium spp and concentrations of Bifidobacterium in breast milk and feces were positively correlated. In a previous study using similar methods, all of the breast milk samples were found to contain Bifidobacterium spp but no correlation was found between the total count of Bifidobacterium in breast milk and the infants’ feces (20); however, another study covering a large number of various countries found Bifidobacterium spp in breast milk samples with varying frequencies (0%–100%), implying that the microbial colonization of breast milk seems to be highly dependent on the bacteriological status of the society (21). This may be related to the other factors affecting the growth of bifidogenic bacteria in the breast milk, such as oligosaccharides, or the colonization of the breast milk by other environmental bacteria.
Prebiotic levels in maternal milk and their relation with fecal probiotic and bilirubin levels could not be evaluated in our study. But it can be speculated that prebiotics may be protective against hyperbilirubinemia by affecting intestinal motility and intestinal microbial flora. In a prospective randomized controlled study including formula-fed healthy term newborns, the addition of prebiotics to a standard infant diet resulted in lower bilirubin levels (22). The association of prebiotic supplementation with lower bilirubin levels supports the role of microbiota in BMJ, since prebiotics modified intestinal microbiota.
Although the biological properties of probiotic microorganisms have been well known for many years, the data related to the influence of intestinal probiotic bacteria on serum bilirubin levels in infants are scarce. How enteric probiotic microorganisms could influence the occurrence of jaundice remains in question. As already suggested, probiotic bacteria increase the fecal moisture, frequency, and volume of stool. Moreover, they contribute to the development of an intestinal barrier through the formation of mucin (23). They also regulate the human epithelial tight junction and protect the epithelial integrity (24). Our results suggest that intestinal probiotic bacteria may be protective against hyperbilirubinemia possibly through the reduction in bilirubin absorption.
The main potential limitations of the present study were the use of a semi-quantitative method for the detection of bacterial concentrations by real-time PCR, the absence of confirmation by culture-dependent methods, and the inability to demonstrate possible action mechanisms of these microorganisms. It should be noted that there is no evidence that BMJ is actually harmful, and only infants who approach potentially toxic levels need to be treated (2). Mild to moderate levels of unconjugated hyperbilirubinemia may be protective to the newborn infant by providing antioxidant effects, which are otherwise absent in the newborn period (25,26).
In conclusion, preliminary results presented here show that breast milk microbial content and the composition of enteric microbiota may play a role not only in the regulation of gut immunology and nonimmunologic defense mechanisms, but also in BMJ. Particularly high concentrations of Bifidobacterium spp in maternal milk and infants’ feces seem to be protective against BMJ. Because BMJ is not actually harmful, and mild to moderate levels of unconjugated hyperbilirubinemia are protective against oxidative stress, the biological plausibility of the potential effects of these bacteria in terms of reducing bilirubin levels is questionable.
The authors are grateful to Cankut Cubuk and Ceren Senkal for technical assistance.
1. Gartner LM. Breastfeeding and jaundice. J Perinatol 2001; 21(1):25–29.
2. Preer GL, Philipp BL. Understanding and managing breast milk jaundice. Arch Dis Child Fetal Neonatal Ed
3. Hannam S, McDonnell M, Rennie JM. Investigation of prolonged neonatal jaundice. Acta Paediatr
4. Gourley GR, Gourley MF, Arend R, et al. The effect of saccharolactone on rat intestinal absorption of bilirubin in the presence of human breast milk. Pediatr Res
5. Alonso EM, Whitington PF, Whitington SH, et al. Enterohepatic circulation of nonconjugated bilirubin in rats fed with human milk. J Pediatr
6. Gustafsson BE, Lanke LS. Bilirubin and urobilins in germfree, ex-germfree, and conventional rats. J Exp Med
7. Saxerholt H, Midtvedt T, Gustafsson BE. Deconjugation of bilirubin conjugates and urobilin formation by conventionalized germ-free rats. Scand J Clin Lab Invest
8. Lara-Villoslada F, Olivares M, Sierra S, et al. Beneficial effects of probiotic bacteria isolated from breast milk. Br J Nutr 2007; 98(1):96–100.
9. Collado MC, Delgado S, Maldonado A, et al. Assessment of the bacterial diversity of breast milk of healthy women by quantitative real-time PCR. Lett Appl Microbiol
10. Martin R, Jimenez E, Heilig H, et al. Isolation of bifidobacteria from breast milk and assessment of the bifidobacterial population by PCR-denaturing gradient gel electrophoresis and quantitative real-time PCR. Appl Environ Microbiol
11. Gueimonde M, Laitinen K, Salminen S, et al. Breast milk: a source of bifidobacteria for infant gut development and maturation? Neonatology
12. Martin V, Maldonado-Barragan A, Moles L, et al. Sharing of bacterial strains between breast milk and infant feces. J Hum Lact
13. Olivares M, Diaz-Ropero MP, Gomez N, et al. The consumption of two new probiotic strains, Lactobacillus gasseri CECT 5714 and Lactobacillus coryniformis CECT 5711, boosts the immune system of healthy humans. Int Microbiol
14. Yoshikawa H, Dogruman-Ai F, Turk S, et al. Evaluation of DNA extraction kits for molecular diagnosis of human Blastocystis subtypes from fecal samples. Parasitol Res
15. Abdulamir A, Yoke TS, Nordin N, et al. Detection and quantification of probiotic bacteria using optimized DNA extraction, traditional and real-time PCR methods in complex microbial communities. Af J Biotechnol
16. Vitek L, Zelenka J, Zadinova M, et al. The impact of intestinal microflora on serum bilirubin levels. J Hepatol
17. Vitek L, Kotal P, Jirsa M, et al. Intestinal colonization leading to fecal urobilinoid excretion may play a role in the pathogenesis of neonatal jaundice. J Pediatr Gastroenterol Nutr
18. Turroni F, Peano C, Pass DA, et al. Diversity of bifidobacteria within the infant gut microbiota. PLoS ONE 2012; 7(5):e36957.
19. Tsuji H, Oozeer R, Matsuda K, et al. Molecular monitoring of the development of intestinal microbiota in Japanese infants. Benef Microbes
20. Gronlund MM, Gueimonde M, Laitinen K, et al. Maternal breast-milk and intestinal bifidobacteria guide the compositional development of the Bifidobacterium microbiota in infants at risk of allergic disease. Clin Exp Allergy
21. Sinkiewicz G, et al. Occurrence of Lactobacillus reuteri, Lactobacilli and Bifidobacteria in human breast milk. Pediatr Res
22. Bisceglia M, Indrio F, Riezzo G, et al. The effect of prebiotics in the management of neonatal hyperbilirubinaemia. Acta Paediatr
23. Mack DR, Michail S, Wei S, et al. Probiotics inhibit enteropathogenic E. coli adherence in vitro by inducing intestinal mucin gene expression. Am J Physiol
24. Karczewski J, Troost FJ, Konings I, et al. Regulation of human epithelial tight junction proteins by Lactobacillus plantarum in vivo and protective effects on the epithelial barrier. Am J Physiol Gastrointest Liver Physiol
25. Gopinathan V, Miller NJ, Milner AD, et al. Bilirubin and ascorbate antioxidant activity in neonatal plasma. FEBS Lett
26. Shekeeb Shahab M, Kumar P, Sharma N, et al. Evaluation of oxidant and antioxidant status in term neonates: a plausible protective role of bilirubin. Mol Cell Biochem
Keywords: Bifidobacterium; breast milk jaundice; feces; intestinal microflora; newborn; probiotics
Copyright 2013 by ESPGHAN and NASPGHAN
Anti-influenza virus effect of aqueous extracts from dandelion
© He et al; licensee BioMed Central Ltd. 2011
Received: 6 August 2011
Accepted: 14 December 2011
Published: 14 December 2011
Human influenza is a seasonal disease associated with significant morbidity and mortality. Anti-flu Traditional Chinese Medicine (TCM) has played a significant role in fighting virus pandemics. In TCM, dandelion is a commonly used ingredient in many therapeutic remedies, either alone or in conjunction with other natural substances, and evidence suggests that it is associated with a variety of pharmacological activities. In this study, we evaluated the in vitro antiviral activity of an aqueous extract of dandelion against influenza virus type A strains human A/PR/8/34 and WSN (H1N1).
Results obtained using antiviral assays, a minigenome assay, and real-time reverse transcription-PCR analysis showed that 0.625-5 mg/ml of dandelion extract inhibited infection of Madin-Darby canine kidney (MDCK) cells or human lung adenocarcinoma (A549) cells by the PR8 or WSN viruses, inhibited viral polymerase activity, and reduced virus nucleoprotein (NP) RNA levels. The plant extract did not exhibit any apparent negative effects on cell viability, metabolism, or proliferation at the effective dose. This result is consistent with the lack of any reported complications from the plant's use in traditional medicine over several centuries.
The antiviral activity of dandelion extracts indicates that a component or components of these extracts possess anti-influenza virus properties. Mechanisms of reduction of viral growth in MDCK or A549 cells by dandelion involve inhibition on virus replication.
Keywords: Dandelion; Anti-influenza virus; Traditional Chinese Medicine
Influenza A viruses are negative-strand RNA viruses with a segmented genome that belong to the family Orthomyxoviridae. Both influenza A and B viruses can infect humans and cause annual influenza epidemics, which result in significant morbidity and mortality worldwide. There are 16 hemagglutinin (HA) and 9 neuraminidase (NA) subtypes of the influenza A virus, which infect a wide variety of species . The introduction of avian virus genes into the human population can happen at any time and may give rise to a new pandemic. There is even the possibility of a direct infection of humans by avian viruses, as evidenced by the emergence of the highly pathogenic avian influenza viruses of the H5N1 subtype that were capable of infecting and killing humans .
Vaccines are the best option for the prophylaxis and control of a pandemic; however, the lag time between virus identification and vaccine distribution exceeds 6 months and concerns regarding vaccine safety are a growing issue leading to vaccination refusal. In the short-term, antiviral therapy is vital to control the spread of influenza. To date, only two classes of anti-influenza drugs have been approved: inhibitors of the M2 ion channel, such as amantadine and rimantadine, or neuraminidase inhibitors, such as oseltamivir or zanamivir . Treatment with amantadine, and its derivatives, rapidly results in the emergence of resistant variants and is not recommended for general or uncontrolled use . Among H5N1 isolates from Thailand and Vietnam, 95% of the strains have been shown to harbor genetic mutations associated with resistance to the M2 ion channel-blocking amantadine and its derivative, rimantadine . Furthermore, influenza B viruses are not sensitive to amantadine derivatives . Recent studies have reported that the development of resistance can also occur against neuraminidase inhibitors . According to a recent study, oseltamivir-resistant mutants in children being treated for influenza with oseltamivir appear to arise more frequently than previously reported . In addition, there are several reports suggesting that resistance in H5N1 viruses can emerge during the currently recommended regimen of oseltamivir therapy and that such resistance may be associated with clinical deterioration . Thus, it has been stated that the treatment strategy for influenza A (H5N1) viral infections should include additional antiviral agents. All these highlight the urgent need for new and abundantly available anti-influenza agents.
A number of anti-flu agents have been discovered from Traditional Chinese Medicine (TCM) herbs. Ko et al. found that TCM herbal extracts derived from Forsythia suspensa ('Lianqiao'), Andrographis paniculata ('Chuanxinlian'), and Glycyrrhiza uralensis ('Gancao') suppressed influenza A virus-induced RANTES secretion by human bronchial epithelial cells . Mantani et al. reported that the growth of influenza A/PR/8/34 (H1N1) (PR8) virus was inhibited when the cells were treated with an extract of Ephedra spp ('Mahuang') . Hayashi et al. found that trans-cinnamaldehyde of Chinese cinnamon ('Rougui') could inhibit the growth of influenza A/PR/8 virus in vitro and in vivo . Park et al. found that Alpinia katsumadai extracts and fractions had strong anti-influenza virus activity in vitro . Many TCM herbs have been found to be anti-flu agents, but their mechanisms of action have not yet been elucidated [14, 15].
Plants have a long evolutionary history of developing resistance against viruses and have increasingly drawn attention as potential sources of antiviral drugs [16, 17]. Dandelion belongs to the Compositae family, which includes many types of traditional Chinese herbs . Dandelion is a rich source of vitamins A, B complex, C, and D, as well as minerals such as iron, potassium, and zinc. Its leaves are often used to add flavor to salads, sandwiches, and teas. The roots can be found in some coffee substitutes, and the flowers are used to make certain wines. Therapeutically, dandelion has the ability to eliminate heat and toxins, as well as to reduce swelling, choleresis, diuresis, and inflammation . Dandelion has been used in Chinese folklore for the treatment of acute mastitis, lymphadenitis, hepatitis, struma, urinary infections, cold, and fever. Choi et al. found that dandelion flower ethanol extracts inhibit cell proliferation and induce apoptosis in human ovarian cancer SK-OV-3 cells . Hu et al. detected antioxidant, pro-oxidant, and cytotoxic activities in solvent-fractionated dandelion flower extracts in vitro . Kim et al. demonstrated antioxidative, anti-inflammatory and antiatherogenic effects of dandelion (Taraxacum officinale) extracts in C57BL/6 mice, fed on an atherogenic diet . Ovadje et al. suggested that aqueous dandelion root extracts contain components that induce apoptosis selectively in cultured leukemia cells, emphasizing the importance of this traditional medicine . Furthermore, there are no side effects associated with the prolonged use of dandelion for therapeutic purposes.
In this report, we analyzed whether dandelion has anti-influenza virus activity in cell culture. We found that dandelion extracts could inhibit influenza virus infection. We further identified that inhibition of viral polymerase activity and reduction of the virus nucleoprotein (NP) RNA level contributed to the antiviral effect. Thus, dandelion may be a promising approach to protect against influenza virus infections.
Evaluation and extraction of plant materials
Extracts were made by boiling the herb in water. The voucher specimen of the plant material was deposited in the CAS Key Laboratory of Pathogenic Microbiology and Immunology (CASPMI), Institute of Microbiology, Chinese Academy of Sciences. Dandelion, purchased from a medicine store, was dissolved in sterile H2O (100 mg/ml) at room temperature for 2 h and then extracted twice with water at 100°C for 1 h. The aqueous extracts were filtered through a 0.45 μm membrane. The aqueous dandelion extract was lyophilized, and the resulting light yellow powder (17% w/w yield) was dissolved in cell culture medium when needed.
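The quoted 17% w/w yield is simply the dried-powder mass divided by the starting herb mass. As a trivial check (the masses below are illustrative, not the study's actual batch sizes):

```python
# Lyophilization yield: dried powder mass as a percentage of the
# starting herb mass (w/w). Masses are illustrative.

def percent_yield(powder_g, herb_g):
    return powder_g / herb_g * 100.0

# 17 g of powder recovered from 100 g of dried herb -> 17% w/w
print(round(percent_yield(17.0, 100.0), 1))  # → 17.0
```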
Viruses, cells and viral infections
Human influenza virus A/Puerto Rico/8/34 (H1N1) (PR8) and A/WSN/33 (WSN) were grown in 10-day-old fertilized chicken eggs. After incubation at 37°C for 2 days, the allantoic fluid was harvested and used for infection.
All cell lines were purchased from ATCC (Rockville, MD, USA). Madin-Darby canine kidney (MDCK) cells or Human lung adenocarcinoma cell line (A549) were cultured in Dulbecco's modified eagle medium (DMEM) or RPMI-1640 medium, respectively, with 10% fetal bovine serum (FBS, Gibco, USA), penicillin 100 U/ml, and streptomycin 10 μg/ml. Prior to infection, the cells were washed with phosphate-buffered saline (PBS) and were cultured in infection medium (DMEM without FBS, 1.4% BSA) supplemented with antibiotics and 2 μg/ml of trypsin (Gibco; Invitrogen, Carlsbad, CA).
Hemagglutination inhibition test
Influenza viruses are characterized by their ability to agglutinate erythrocytes. This hemagglutination activity can be visualized upon mixing virus dilutions with chicken erythrocytes in microtiter plates. The chicken erythrocytes were supplemented with 1.6% sodium citrate (Sigma, USA) in sterile water, separated by centrifugation (800 × g, 10 min, room temperature) and washed three times with sterile PBS. Serial two-fold dilutions of dandelion extracts were made in 25 μl of PBS in 96-well V-bottom plates. Influenza viruses in 25 μl of PBS (4 HAU) were added to each dilution, and the plates were incubated for 1 h at room temperature. 25 μl of 1% (v/v) chicken erythrocytes in PBS was added to each well. The hemagglutination pattern was read following the incubation of the plates for 0.5 h at room temperature. The highest dilution that completely inhibited hemagglutination was defined as the hemagglutination inhibition (HI) titer.
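The HI titer readout described above can be expressed as a small routine: walk the two-fold dilution series from the most concentrated well and report the reciprocal of the highest dilution that still completely inhibits agglutination. The starting dilution and the per-well pattern below are illustrative assumptions, not plate data from the study:

```python
# HI titer from a serial two-fold dilution series. The titer is the
# reciprocal of the highest dilution that completely inhibits
# hemagglutination.

def hi_titer(inhibition_results, start_dilution=2):
    """`inhibition_results`: per-well booleans ordered from the most
    concentrated well; True means agglutination was fully inhibited.
    The dilution factor doubles with each successive well."""
    titer = 0
    dilution = start_dilution
    for inhibited in inhibition_results:
        if not inhibited:
            break          # first well showing agglutination ends the series
        titer = dilution
        dilution *= 2
    return titer

# Wells 1-5 inhibited, agglutination from well 6 onward -> titer 1:32
print(hi_titer([True, True, True, True, True, False, False, False]))  # → 32
```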
Cell viability assay
A549 or MDCK cells were left untreated or treated with the indicated amounts of dandelion extracts (20 to 0.1563 mg/ml) or oseltamivir (12.5 to 0.098 mg/ml) for 48 h; MDCK cells were left untreated or treated with 0.1 mg/ml oseltamivir, or with 2.5 mg/ml or 15 mg/ml dandelion extracts, for 72 h. All drugs were serially diluted in serum-free medium. Cell proliferation and metabolism were measured using the CCK-8 assay. Briefly, the cells were treated with CCK-8 solution (Dojindo, 10 μl/well) and incubated for 4 h at 37°C. The absorbance was measured using a microplate reader (DG5032, Huadong, Nanjing, China) at 450 nm. The untreated control was set at 100%, and the treated samples were normalized to this value according to the following equation: Survival rate (%) = (optical density (OD) of the treated cells − OD of blank control)/(OD of negative control − OD of blank control) × 100.
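The survival-rate equation above encodes directly as a one-line function; the OD readings below are illustrative 450 nm absorbance values, not measurements from the study:

```python
# CCK-8 survival rate as defined in the text:
# (OD_treated - OD_blank) / (OD_negative_control - OD_blank) * 100.

def survival_rate(od_treated, od_negative_control, od_blank):
    return (od_treated - od_blank) / (od_negative_control - od_blank) * 100.0

# Illustrative readings: treated wells retain 90% of control signal.
print(round(survival_rate(od_treated=0.95,
                          od_negative_control=1.05,
                          od_blank=0.05), 1))  # → 90.0
```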
Plaque titrations and antiviral assays
Plaque titrations: MDCK cells grown to 90% confluence in 96-well dishes were washed with PBS and infected with serial dilutions of the supernatants in PBS for 1 h at 37°C. The inoculum was aspirated and the cells were incubated with 200 μl DMEM (containing 1.4% BSA, 2 μg/ml trypsin and antibiotics) at 37°C, 5% CO2 for 2-3 days. Virus plaques were visualized by staining with trypan blue.
Antiviral assay: MDCK cells were infected with the influenza A virus strain PR8 or WSN (1 × 106 PFU) and were left untreated or treated with dandelion extracts (0.0782-5 mg/ml), oseltamivir (0.0047-0.3 mg/ml) (Sigma), or suxiaoganmaojiaonang (0.069-4.375 mg/ml). At 16 h post-infection, supernatants were collected. This procedure was performed twice in triplicate. Supernatants were assayed for progeny virus yields by standard plaque titrations. Virus yields of mock-treated cells were arbitrarily set at 100%.
Simultaneous treatment assay: dandelion extracts (2.5 mg/ml), oseltamivir (0.1 mg/ml) or suxiaoganmaojiaonang (4.375 mg/ml) were each mixed with virus and incubated at 4°C for 1 h. The mixture was inoculated onto near-confluent MDCK cell monolayers (1 × 105 cells/well) for 1 h with occasional rocking. The inoculum was then removed, the cells were washed twice with PBS, and the cells were incubated with 2 ml of DMEM supplemented with 1.4% BSA, antibiotics, and 2 μg/ml trypsin at 37°C under a 5% CO2 atmosphere.
Post treatment assay: Influenza viruses (1 × 106 PFU) were inoculated onto near-confluent MDCK cell monolayers (1 × 105 cells/well) for 1 h with occasional rocking. The medium was removed and replaced by DMEM containing 1.4% BSA, antibiotics, 2 μg/ml trypsin and dandelion extracts (2.5 mg/ml), oseltamivir (0.1 mg/ml), or suxiaoganmaojiaonang (4.375 mg/ml). The cultures were incubated at 37°C under a 5% CO2 atmosphere.
After 6, 12, 24, 36 and 48 h of incubation in all antiviral assays, the supernatant was analyzed for the production of progeny virus using the hemagglutination test and compared with that of the untreated control cells. Cell proliferation and metabolism were analyzed by the CCK-8 assay at 48 h post-treatment. Virus yields from the mock-treated cells were normalized to 100%.
Real-time reverse transcription-PCR analysis
MDCK cells grown to about 90% confluence were infected with influenza virus (1 × 106 PFU). The medium was removed after 1 h, and the cells were cultured in the presence of dandelion extracts (2.5 mg/ml) for 13 h. The cells were then scraped off, washed twice with PBS, and collected by centrifugation (500 × g for 5 min). Total RNA was prepared using the RNApure total RNA fast isolation kit (Shanghai Generay Biotech Co., Ltd). The primer sequences used for quantitative real-time PCR of viral RNA were 5'-TGTGTATGGACCTGCCGTAGC-3' (sense) and 5'-CCATCCACACCAGTTGACTCTTG-3' (antisense). Canis familiaris beta-actin was used as the internal control for cellular RNAs, with primer sequences 5'-CGTGCGTGACATCAAGGAAGAAG-3' (sense) and 5'-GGAACCGCTCGTTGCCAATG-3' (antisense). The primer sequences used in real-time PCR were designed using Beacon Designer 7 software.
Real-time reverse transcription-PCR was performed using 100 ng of RNA and the One-step qPCR kit (RNA-direct SYBR Green Real-time PCR Master Mix, TOYOBO). Cycling conditions for real-time PCR were as follows: 90°C for 30 s, 61°C for 20 min, and 95°C for 1 min, followed by 45 cycles of 95°C for 15 s, 55°C for 15 s and 74°C for 45 s. As the loading control, we measured the level of Canis familiaris beta-actin mRNA. Real-time PCR was conducted using the ABI Prism 7300 sequence detection system, and the data were analyzed using ABI Prism 7300 SDS software (Applied Biosystems).
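The paper does not state how the viral RNA signal was quantified relative to the beta-actin control; assuming the commonly used 2^-ΔΔCt relative-quantification method (an assumption, not the authors' stated procedure), the calculation would be:

```python
def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative target RNA level by the 2^-ΔΔCt method: normalize the
    target Ct (e.g. viral NP) to the reference (beta-actin) Ct within
    each sample, then compare treated against untreated."""
    d_treated = ct_target_treated - ct_ref_treated
    d_control = ct_target_control - ct_ref_control
    return 2 ** -(d_treated - d_control)

# Hypothetical Ct values: treatment shifts the viral Ct up by 2 cycles,
# i.e. roughly a four-fold reduction in viral RNA.
print(fold_change_ddct(24.0, 18.0, 22.0, 18.0))  # 0.25
```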
Minigenome assay and transient transfection
To test the transcription efficiency of the influenza virus polymerases after drug treatment, a minigenome assay was performed in human embryonic kidney (293T) cells. Briefly, ambisense plasmids encoding PB2, PB1, PA and NP were cotransfected together with the influenza virus replicon reporter plasmid pPOLI-luciferase. The reporter plasmid pPOLI-luciferase was constructed by inserting the luciferase open reading frame (ORF), flanked by the noncoding regions of the M gene of influenza A virus, between the BamHI and NotI sites of the pPOLI vector (a generous gift from Dr. Edward Wright). Calcium phosphate transfection was used. Briefly, the cell culture medium was replaced by Opti-MEM; 0.5 μg of each plasmid was mixed, incubated at room temperature for 15 min, and added to 80%-confluent 293T cells seeded the day before in six-well plates. Six hours later, the DNA-transfection mixture was replaced by DMEM containing 10% FBS. At 48 h post-transfection, the cells were lysed in cell lysis buffer and centrifuged, and the supernatant was collected. Aliquots (5 μl) of cell lysate were added to individual luminometer tubes containing 180 μl of luciferase assay buffer at room temperature. To start the assay, 100 μl of luciferin solution was injected into the luminometer tube and the light output was measured.
Data are presented as mean ± SD. The data were statistically evaluated using one-way ANOVA to compare differences between groups. A p-value of < 0.05 was considered significant. The IC50 and CC50 values were calculated using the GraphPad Prism program.
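The authors computed IC50 and CC50 in GraphPad Prism; purely as an illustration of the idea, a crude IC50 estimate can be obtained by log-linear interpolation between the two doses that bracket 50% inhibition (the dose-response values below are hypothetical):

```python
import math

def ic50_interpolate(doses, inhibition):
    """Estimate IC50 by log-linear interpolation between the two doses
    that bracket 50% inhibition. `doses` must be ascending and positive;
    `inhibition` is the matching percent inhibition for each dose.
    Returns None if 50% is never crossed."""
    points = list(zip(doses, inhibition))
    for (d0, i0), (d1, i1) in zip(points, points[1:]):
        if i0 < 50 <= i1:
            frac = (50 - i0) / (i1 - i0)
            log_ic50 = math.log10(d0) + frac * (math.log10(d1) - math.log10(d0))
            return 10 ** log_ic50
    return None

# Hypothetical dose-response data (mg/ml, % inhibition).
print(ic50_interpolate([0.1, 1.0, 10.0], [20, 50, 90]))  # 1.0
```

A full four-parameter logistic fit (as Prism performs) would be more robust; this sketch only shows the interpolation principle.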
Treatment with aqueous dandelion extracts results in a reduction of progeny virus titers
Dandelion treatment does not affect cell morphology, viability, or negatively interfere with proliferation and metabolism
Inhibitory activity of dandelion extracts on influenza virus replication
Dandelion extracts do not block the hemagglutination activity of pre-treated virus particles
Viral RNA synthesis is reduced by treatment with dandelion extracts
Treatment with dandelion extracts inhibits viral polymerase activity
Outbreaks of avian H5N1 pose a public health risk of potentially pandemic proportions. Infections with influenza A viruses are still a major health burden, and the options for the control and treatment of the disease are limited. Natural products and their derivatives have historically been invaluable sources of therapeutic agents. Recent technological advances, coupled with unrealized expectations from current lead-generation strategies, have led to renewed interest in natural products in drug discovery. This is also true in the field of anti-influenza research. Here, we show that aqueous dandelion extracts exert potent antiviral activity in cell culture.
Dandelion is a natural diuretic that increases urine production by promoting the excretion of salts and water from the kidney. Dandelion extracts may be used for a wide range of conditions requiring mild diuretic treatment, such as poor digestion, liver disorders, and high blood pressure. Dandelion is also a source of potassium, a nutrient often lost through the use of other natural and synthetic diuretics. Additionally, fresh or dried dandelion herb is used as a mild appetite stimulant and to improve stomach symptoms, including feelings of fullness, flatulence, and constipation. The root of the dandelion plant is believed to have mild laxative effects and is often used to improve digestion.
Dandelion has a very high polyphenol content. It is well known that polyphenols have protein-binding capabilities, which suggests that components of dandelion extracts may interact with pathogens through physical, non-specific interactions. Two potential advantages of this non-specific mechanism of action are that resistant variants emerge only rarely and that dandelion extracts may also act against bacterial co-infections, which represent a major complication in severe influenza virus infections. A non-specific interaction with viral HA has been reported for the polyphenolic compound epigallocatechin-gallate. Simultaneous treatment was used to determine whether dandelion extracts block viral adsorption to cells. The simultaneous treatment assay did not show significant antiviral activity (Figure 3A). These data indicate that dandelion extracts do not directly interfere with viral envelope proteins at the cell surface. Therefore, we used HI assays to determine whether dandelion extracts interact with the HA of influenza virus (Figure 4). Dandelion extracts did not inhibit the HA of either A/PR/8/34 or WSN (H1N1), which agrees with the simultaneous treatment assay results.
To evaluate the anti-influenza activity after virus infection, we employed the post treatment assay (Figure 3B), quantitative real-time PCR (Figure 4) and the minigenome assay (Figure 6) to test the in vitro effect of dandelion extracts on viral replication. Our studies do not show prevention of receptor binding of the virus after dandelion treatment, but the reductions in the nucleoprotein (NP) RNA level and the viral polymerase activity are clear. Current anti-influenza targets include viral factors (such as hemagglutinin (HA), the M2 ion channel protein, the RNA-dependent RNA polymerase (RdRp), nucleoprotein (NP), non-structural protein (NS) and neuraminidase (NA)) and host factors (such as v-ATPase, proteases, inosine monophosphate dehydrogenase (IMPDH) and intracellular signalling cascades), together with their relevant inhibitors. In virus particles, the genomic RNAs (vRNAs) are associated with the RNA-dependent RNA polymerase proteins and NP, which together form the ribonucleoprotein (RNP) complexes. The NP viral RNA level therefore reflects the activity of the RNP complexes. Our results indicate that dandelion extracts inhibit influenza virus infection, probably by decreasing the NP viral RNA level and viral polymerase activity, thus affecting the activities of the RNP complexes and thereby inhibiting viral RNA replication.
Vaccines play an important role in combating influenza. However, vaccination has provided only limited control of the infection, because the virus has a tendency to mutate and thus escape the immune system. Plants have a long evolutionary history of developing resistance against viruses and have increasingly drawn attention as potential sources of antiviral drugs [24, 26]. Many plant extracts and compounds of plant origin have been shown to possess activity against influenza viruses. Our results indicate that aqueous dandelion extracts can inhibit influenza virus infections. Dandelion comprises multiple compounds that can regulate multiple targets for a range of medical indications and can be titrated to the specific symptoms of an individual.
This study has shown that dandelion extracts can inhibit both the A/PR/8/34 and WSN (H1N1) influenza viruses by inhibiting viral nucleoprotein synthesis and polymerase activity. These results warrant further investigation to characterize the active compounds and their specific mechanisms of action against influenza virus. Given the urgent need for new and abundantly available anti-influenza drugs, dandelion extracts appear to be a promising option as a replacement or supplemental strategy to currently available anti-influenza therapies.
This work was supported by grants 2008ZX10003-012 and 2009ZX10004-305.
- Fouchier RAM, Munster V, Wallensten A, Bestebroer TM, Herfst S, Smith D, Rimmelzwaan GF, Olsen B, Osterhaus ADME: Characterization of a novel influenza A virus hemagglutinin subtype (H16) obtained from black-headed gulls. J Virol 2005, 79: 2814-2822. doi:10.1128/JVI.79.5.2814-2822.2005
- Webster RG, Guan Y, Krauss S, Shortridge K, Peiris M: Pandemic spread: Influenza. Gene Ther 2001, 8: S1. doi:10.1038/sj.gt.3301383
- Boltz DA, Aldridge JR, Webster RG, Govorkova EA: Drugs in development for influenza. Drugs 2010, 70: 1349-1362. doi:10.2165/11537960-000000000-00000
- Fleming DM: Managing influenza: amantadine, rimantadine and beyond. Int J Clin Pract 2001, 55: 189-195.
- Cheung CL, Rayner JM, Smith GJ, Wang P, Naipospos TS, Zhang J, Yuen KY, Webster RG, Peiris JS, Guan Y, Chen H: Distribution of amantadine-resistant H5N1 avian influenza variants in Asia. J Infect Dis 2006, 193: 1626-1629. doi:10.1086/504723
- Pinto LH, Lamb RA: The M2 proton channels of influenza A and B viruses. J Biol Chem 2006, 281: 8997-9000.
- Hatakeyama S, Kawaoka Y: The molecular basis of resistance to anti-influenza drugs. Nippon Rinsho 2006, 64: 1845-1852.
- Kiso M, Mitamura K, Sakai-Tagawa Y, Shiraishi K, Kawakami C, Kimura K, Hayden FG, Sugaya N, Kawaoka Y: Resistant influenza A viruses in children treated with oseltamivir: descriptive study. Lancet 2004, 364: 759-765. doi:10.1016/S0140-6736(04)16934-1
- de Jong MD, Tran TT, Truong HK, Vo MH, Smith GJ, Nguyen VC, Bach VC, Phan TQ, Do QH, Guan Y, et al.: Oseltamivir resistance during treatment of influenza A (H5N1) infection. N Engl J Med 2005, 353: 2667-2672. doi:10.1056/NEJMoa054512
- Ko HC, Wei BL, Chiou WF: The effect of medicinal plants used in Chinese folk medicine on RANTES secretion by virus-infected human epithelial cells. J Ethnopharmacol 2006, 107: 205-210. doi:10.1016/j.jep.2006.03.004
- Mantani N, Andoh T, Kawamata H, Terasawa K, Ochiai H: Inhibitory effect of Ephedrae herba, an oriental traditional medicine, on the growth of influenza A/PR/8 virus in MDCK cells. Antiviral Res 1999, 44: 193-200. doi:10.1016/S0166-3542(99)00067-4
- Hayashi K, Imanishi N, Kashiwayama Y, Kawano A, Terasawa K, Shimada Y, Ochiai H: Inhibitory effect of cinnamaldehyde, derived from Cinnamomi cortex, on the growth of influenza A/PR/8 virus in vitro and in vivo. Antiviral Res 2007, 74: 1-8. doi:10.1016/j.antiviral.2007.01.003
- Park SJ, Kwon HJ, Kim HH, Yoon SY, Ryu YB, Chang JS, Cho KO, Rho MC, Lee WS: In vitro inhibitory activity of Alpinia katsumadai extracts against influenza virus infection and hemagglutination. Virol J 2010, 7.
- Chen CYC, Chang TT, Sun MF, Chen HY, Tsai FJ, Fisher M, Lin JG: Screening from the world's largest TCM database against H1N1 virus. J Biomol Struct Dyn 2011, 28: 773-786.
- Hudson JB: The use of herbal extracts in the control of influenza. J Med Plants Res 2009, 3: 1189-1194.
- Kuroda K, Sawai R, Shibata T, Gomyou R, Osawa K, Shimizu K: Anti-influenza virus activity of Chaenomeles sinensis. J Ethnopharmacol 2008, 118: 108-112. doi:10.1016/j.jep.2008.03.013
- Ludwig S, Ehrhardt C, Hrincius ER, Korte V, Mazur I, Droebner K, Poetter A, Dreschers S, Schmolke M, Planz O: A polyphenol rich plant extract, CYSTUS052, exerts anti influenza virus activity in cell culture without toxic side effects or the tendency to induce viral resistance. Antiviral Res 2007, 76: 38-47. doi:10.1016/j.antiviral.2007.05.002
- Chu QC, Lin M, Ye JN: Determination of polyphenols in dandelion by capillary zone electrophoresis with amperometric detection. Am Lab 2006, 38: 20+.
- Sweeney B, Vora M, Ulbricht C, Basch E: Evidence-based systematic review of dandelion (Taraxacum officinale) by Natural Standard Research Collaboration. J Herb Pharmacother 2005, 5: 79-93.
- Choi EJ, Kim GH: Dandelion (Taraxacum officinale) flower ethanol extract inhibits cell proliferation and induces apoptosis in human ovarian cancer SK-OV-3 cells. Food Sci Biotechnol 2009, 18: 552-555.
- Hu C, Kitts DD: Antioxidant, prooxidant, and cytotoxic activities of solvent-fractionated dandelion (Taraxacum officinale) flower extracts in vitro. J Agric Food Chem 2003, 51: 301-310. doi:10.1021/jf0258858
- Kim JJ, Noh KH, Cho MY, Jang JY, Song YS: Anti-oxidative, anti-inflammatory and anti-atherogenic effects of dandelion (Taraxacum officinale) extracts in C57BL/6 mice fed atherogenic diet. FASEB J 2007, 21: A1122.
- Ovadje P, Chatterjee S, Griffin C, Tran C, Hamm C, Pandey S: Selective induction of apoptosis through activation of caspase-8 in human leukemia cells (Jurkat) by dandelion root extract. J Ethnopharmacol 2011, 133: 86-91. doi:10.1016/j.jep.2010.09.005
- Wang YF, Ge H, Xu J, Gu Q, Liu HB, Xiao PG, Zhou JJ, Liu YH, Yang ZR, Su H: Anti-influenza agents from Traditional Chinese Medicine. Nat Prod Rep 2010, 27: 1758-1780. doi:10.1039/c0np00005a
- Xu WF, Gong JZ, Fang H, Li MY, Liu Y, Yang KH, Liu YZ: Potential targets and their relevant inhibitors in anti-influenza fields. Curr Med Chem 2009, 16: 3716-3739. doi:10.2174/092986709789104984
- Hsu WL, Chen DY, Shien JH, Tiley L, Chiou SS, Wang SY, Chang TJ, Lee YJ, Chan KW: Curcumin inhibits influenza virus infection and haemagglutination activity. Food Chem 2010, 119: 1346-1351. doi:10.1016/j.foodchem.2009.09.011
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. | <urn:uuid:d3e8da70-d5f2-4270-b6aa-031e2e8565c5> | CC-MAIN-2017-17 | http://virologyj.biomedcentral.com/articles/10.1186/1743-422X-8-538 | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122933.39/warc/CC-MAIN-20170423031202-00133-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.892525 | 7,155 | 2.859375 | 3 |
CHAPTER 7: HOKUSAI AND HIROSHIGE, AND THEIR PUPILS
HOKUSAI (1760-1849) is generally classed as a landscape artist, as his chief work was done in this field, though he drew almost everything that could be drawn. He lived entirely for his work, and became the master-artist of Japan, dying at the age of eighty-nine, after a life of incessant work and almost continuous poverty, with the regret upon his lips that he had not been granted a longer spell of life to devote to his idol art.
No artist adopted so many artistic names with which to bewilder the collector of the present day as Hokusai.
As a pupil of Shunsho, at the age of nineteen, he used the name Shunro, but owing to a quarrel with his master, due, it is said, to his having taken lessons from a painter of the Kano school, he left Shunsho's studio in 1785, and started for himself as an independent artist.
Sori, Kako, Taito, I-itsu, are some of the names he used in addition to that by which he is universally known, and as he often passed them on to a pupil when himself adopting a new name it is not always possible to say if a print signed Taito, for example, is by the master or the pupil of that name.
For instance, a well-known print, signed Katsushika Taito, representing a carp swimming in a whirlpool, is by some
authorities attributed to Hokusai, and by others to the pupil, but the latter has the more numerous supporters.
Many of Hokusai's prints are signed "Hokusai, Mad-on-drawing" (Gwakio jin Hokusai), thus showing the fervour of his spirit. Other names he used are Shinsai and Manji.
A signature Hokusai sometimes used on surimono reads "tired of living in the same house," in allusion to his constant change of residence, as he is said to have altered his place of abode nearly a hundred times during his long lifetime.
Hokusai's masterpieces, by which we recognize him as one of the world's greatest artists, are the following series:
The Imagery of the Poets, a series of ten large vertical prints, issued about 1830. This series is very rare,
particularly in a complete set.
The Thirty-six Views of Fuji (Fugaku San-ju Rok'kei), with the ten additional views, really
forty-six views, full size, oblong. Some prints in this series are much rarer than others, and really good
copies of any are not easy to procure, though poor and faded copies are fairly common of some of them. The
three rarest and most coveted by collectors are The Great Wave; Fuji in Calm Weather; and Fuji
in a Thunderstorm, with lightning playing round its base. The first of these, The Great Wave, has been
described, more particularly by American collectors, as one of the world's greatest pictures; and certainly,
even if this description is perhaps somewhat exaggerated, it is a wonderful composition, such as could only
have emanated from the brain and hand of a great master. This series was issued between 1823 and 1829.
The Hundred Poems explained by the Nurse (1839). Of this series only twenty-seven prints are known to exist, and Hokusai never completed it. About fourteen original drawings, which were never used for producing prints, are also known. Moderately rare as a whole, some plates being much rarer than others.
Travelling around the Waterfall Country, a set of eight vertical prints, about 1825; rare.
Views of the Bridges of various Provinces, a set of eleven oblong prints, similar to the Views of Fuji series, about 1828; rare.
Eight Views of the Loo-choo Islands; full size, oblong, c. 1820; very rare.
Modern reproductions and reprints of all the foregoing series are met with, particularly his Imagery of the Poets and Views of Fuji series.
The various prints comprising the foregoing series are described in detail elsewhere in this volume in the chapters dealing with landscape as a subject of illustration.
Besides landscape scenes and innumerable single prints, Hokusai designed some very fine - and very rare -
bird and flower studies, of which modern reproductions exist, many surimono, and a very large number
of book-illustrations. Amongst the latter may be mentioned his famous
Hundred Views of Fuji, and his
Mangwa (sketches). It is computed that altogether he produced some thirty thousand drawings and illustrated
about five hundred books (Von Seidlitz).
Of Hokusai's pupils, of whom about fifteen to twenty are known, Totoya HOKKEI (1780-1850) is considered the
foremost, and excelled even his master in the design of surimono, a fine example of which is
illustrated at Plate E, in colour, signed "copied by Hokkei," being taken from a painting of the Tosa school.
He also illustrated books.
Another pupil famous for his surimono is Yashima GAKUTEI (w. 1800-1840), who also designed a set of very
fine land and seascape drawings, full size, oblong, for a book,
Views of Tempozan (Tempozan Shokei Ichiran,
Osaka, 1838) (see Plate 19). A description of the prints comprised in this set will be found in
A third good designer of surimono is Yanagawa SHIGENOBU (1782-1832), the scapegoat son-in-law of Hokusai, whose daughter, Omiyo, he married. This Shigenobu must not be confused with a later Ichiryusai Shigenobu, the pupil of Hiroshige, better known as the second Hiroshige, and a considerably less capable artist (see Plate 8.)
Shotei HOKUJIU (w. 1800-1830) is remarkable for his curious landscapes, done in a semi-European manner, known as Rangwa pictures, meaning literally "Dutch pictures," as it was from the Dutch, the first Europeans allowed to trade with Japan (and then only under severe restrictions), that the idea of perspective, as we understand it, was learnt by Japanese artists.
His mountains are drawn in a very peculiar angular manner, almost cubist in effect, and his clouds are cleverly rendered by means of gauffrage. These characteristics are clearly shown in the print by him reproduced at Plate 19, page 132.
Shunkosai HOKUSHIU (w. 1808-1835) was another pupil of Hokusai, who designed figure-studies, in which the dress is sometimes rendered in gauffrage, a method of heightening the effect of colour-printing generally confined to surimono. Hokushiu, however, employed it largely in his ordinary full-sized prints.
Considering the very large number of landscape designs produced by Hokusai, one would have expected a corresponding activity on the part of his pupils in the same direction. As a matter of fact, but few of them seem to have turned their brush to this class of subject, and even then only to a limited extent, Hokujiu being almost the sole pupil who persevered in landscape design beyond an initial effort, and his prints are by no means common.
The others appear to have confined themselves almost entirely to the production of surimono, actor-portraits, and figure-studies, showing that, great as was the demand for the landscapes of Hokusai and Hiroshige, the populace still hankered after their favourite subjects from the theatre, street, and Yoshiwara. The names of several are given in Appendix III, and a fine surimono by SHINSAI illustrated at Plate 8.
Ichiryusai HIROSHIGE (1797-1858) shares with Hokusai the reputation of being the foremost landscape artist of Japan. He showed signs of the latent talent that was in him as a boy, and received his earliest tuition in painting in the studio of an artist, Rinsai by name, of the Kano school. On the death of his parents, at the age of fourteen, he applied for admission to the school of Toyokuni, but there being no vacancy for him, he turned to Toyohiro, who accepted him as a pupil. This was in 1811; in the following year his master gave him the artist names of Ichiyusai Hiroshige, the first of which he changed, on the death of Toyohiro, in 1829, to Ichiryusai. This small change of but one letter in his first name is important, as by it we can distinguish between his early work, and that of his middle and later periods.
We now come to what was destined to be the great turning-point in his career. In 1830 he was commissioned by the Tokugawa Government, in whose service he was as a doshin, or official, to go to Kyoto and paint the ceremony of presenting the horses which it was the custom of the Shogun to send to the Emperor every year at Kyoto. It was not unusual for government officials in those days to eke out their slender salaries by adopting some other profession, such as painting, as well; hence Hiroshige's selection for this post.
Deeply impressed by the scenery of the Tokaido on his journey from Yedo to Kyoto in company with the party in charge of the horses, he made sketches on the way of each of the fifty-three relay stations along the road, resolving thenceforth to devote himself to landscape painting. To this end he spent three years travelling throughout Japan on a sketching tour after his return from Kyoto, his wife accompanying him on his journeyings and defraying all expenses, for which his official salary was quite inadequate.
On his return, in 1834, he completed his sketches of the Tokaido, which were then published in album form, and became an immediate success, landscape having never before, in the history of Ukiyoye, been so treated. Hiroshige himself took particular pains over their production, and supervised the engraving and printing. Hence it is that this first series is of such uniformly fine quality.
There are, of course, other landscape series, some of which are rarer, certain of them much rarer, and which contain many masterpieces, besides his first Tokaido set, but the latter remains his magnum opus, as it was through this he made his fame as a landscape artist.
It is generally through his landscapes that the collector first becomes acquainted with Japanese colour-prints, and through which he is attracted to them. Hiroshige's prints more nearly approach our ideas of pictorial representation than those of any other artist of Ukiyoye, with the exception perhaps, though to a less extent, of Hokusai, yet at the same time he remains essentially Japanese.
Hiroshige gives us the effect of atmosphere and mist, sunrise and sunset, snow and rain, in his designs which Hokusai, with his sharper and more vigorous outline, does not. The latter's scenes are full of that restless activity which reflects his own untiring energy, an energy which nothing could damp, while misfortune merely spurred him to greater effort.
Hokusai, also, treats his subject from a different standpoint to Hiroshige; the former depicts the relationship of man and nature to each other with a vividness not found in Hiroshige's compositions. Hiroshige shows us the real world as he saw it passing before him along the great highway he so realistically portrayed. Hokusai, on the other hand, puts before us his idea of it, as he saw it in his mind's eye, making the grandeur and force of nature his principal theme, and his humanity merely subordinate to it.
These divergent characteristics are well shown in Hokusai's Great Wave, a picture contrasting the all-devouring force of nature with the littleness of man, and in Hiroshige's quiet Autumn Moon on the Tama River and Homing Geese at Katada.
To compare the work of these two masters is difficult, if not invidious; their characteristics are so distinct one
from the other, and their prints are admired for such different reasons. Much of Hiroshige's work is of a later
period than that of Hokusai. Hiroshige's earliest work is assigned to about the year 1820; Hokusai had produced
prints before 1800. The entirely different colour-schemes employed also render it difficult to make comparisons;
while the later and inferior impressions of many of Hiroshige's later series which are so relatively numerous, are a
libel upon his powers as a colourist. His best work, namely his
Great Tokaido series, is equal to anything
Hokusai produced; but on the whole it must be said that the latter's work shows a much higher average quality
throughout, whereas that of Hiroshige varies to a considerable extent, many of his later series containing some
inferior designs, apart from those obviously the work of his pupil.
Though this falling off was no doubt due to increasing age, yet in the case of Hokusai, who lived very nearly half as long again as Hiroshige, his work shows practically no traces of advancing years, in fact it improves. As he himself says, he did not expect to become a really great artist till he had reached the age of eighty, while he was dissatisfied with everything which he had produced prior to his seventieth year.
Fenollosa, one of the leading authorities on the artists of the Ukiyoye, while he classes Hokusai in the first rank, puts Hiroshige in the third only, though his classification refers to them as painters, while he does not specifically class them as colour-print designers.
There is, however, this great difference between Hokusai and Hiroshige. The latter was a great colourist; Hokusai was both a draughtsman and colourist.
As an actual draughtsman Hiroshige is not eminent; the beauty and charm of his prints lie almost solely in their colouring, and the atmospheric and other effects obtained thereby, which are due to the co-operation and skill of the engraver and, more particularly, the printer, who should receive the merit rather than the artist. In plain black and white outline, most of Hiroshige's prints would fail to arouse any particular enthusiasm, while, on the other hand, Hokusai's skill as a draughtsman is but enhanced by the colour-effects of his prints, so that his illustrations in black and white only and his designs for prints are as interesting as the colour-prints themselves.
We have seen the view put forward (and it is one which we consider very probable), that it was because the engraver and printer were often so much the better artists than the designer himself, that their seals, more particularly that of the engraver, are occasionally found on prints besides the artist's signature.
Where Hiroshige has turned his brush to the illustration of subjects outside the province of pure landscape, he has generally not been particularly successful (at times, indeed, very bad), unless the landscape element happens to predominate in the general composition. Even in purely landscape scenes, an otherwise effective composition is often spoilt by the crude drawing of the figures introduced, particularly if they are at all prominent in proportion to their setting.
The fact that the artist only supplied the design, which was destroyed on cutting the outline or key-block, and gave instructions as to the colours to be employed, somewhat modifies the answer to the question, Is the work of one artist better, or of greater value, than that of another? For the artist was almost entirely at the mercy of his engraver and printer, upon whose combined skill the excellence of the finished print depended. Added to this, there must be taken into account the fact that the same engraver and printer might be employed upon the designs of more than one artist, just in the same way that a printer does not confine himself to producing the books of only one writer. It is to be regretted that the engravers of these prints are almost totally lost in oblivion, that nothing is known of them, and that only comparatively few prints even bear their mark; for it is due to them that the most beautiful pictorial art in the world came into being, or at least in such a form that it could be enjoyed by thousands, where a single painting is but the delight of a select few.
A print is associated only with the artist whose signature it bears, or whose work it is known to be, or, in doubtful cases, to whom it is attributed. Yet the excellence of the print, and, in consequence, the reputation of the designer, rested with the engraver and printer. As pointed out above, the beauty of most of Hiroshige's work is due to the skilful co-operation of printer and engraver.
While Japanese literature tells us much about the artists, it is silent about the engravers, upon whom the former were so dependent for their reputation as designers. This lack of recognition was no doubt due to the fact that the engraver was looked upon as nothing more than a mere mechanic (albeit an extremely dexterous one) whose sole province was to reproduce, line for line and dot for dot, the design given him by the artist.
His work, therefore, was purely mechanical, and wonderful as it may appear to us from the point of view of manual skill, there was nothing original about it; it was pure copying. Had the original drawing been preserved, and only a copy made for the engraver to work from, we should then have been able to compare his work with the original.
There is, however, in the Victoria and Albert Museum, a block which has been cut from a copy of the artist's drawing, in this case an illustration to a book of birds by Kono Bairei. The original drawing, the copy, the block, and a print therefrom, are shown together for sake of comparison; notwithstanding the intervention of the copyist, it is very difficult to say which is the print and which is the original drawing, so skilfully has the engraver done his part.
Little as is our knowledge of the engraver, it is even less in the case of the printer. While a print occasionally bears the mark of the former, the writer can only recall a very few instances, from amongst many hundreds of prints examined, of sheets bearing the name or mark of the printer. Both, however, are often given on illustrated books.
In the writer's opinion, since these prints are (or should be) collected for their aesthetic charm, the standard to be aimed at is one in which subject and artistic merit come first.
The artist's signature is not, by itself, sufficient to satisfy the discriminating collector whose chief desire is to possess beautiful examples of these prints. Beauty of drawing, harmony of colour-scheme, and all those qualities which appeal to his artistic sense, should form the chief consideration.
To the true collector a work of art is no better or no worse for being the product of one person rather than another. To pay a high price for a picture or print merely because it is - or is supposed to be - the work of a particular master, is a mistake, if the purchaser does not consider that, at the same time, it is worth that sum from an artistic point of view, and that its possession will bring him proportionate satisfaction.
Hiroshige's numerous landscape and other series are described in detail elsewhere, but certain single prints, which are reckoned his chief masterpieces as such, cannot be overlooked in any reference to him.
These are two large kakemono-ye and three very fine triptychs.
The former are the famous Monkey Bridge by Moonlight (date about 1840; publisher Tsutaya; title Koyo Saruhashi), and Snow Gorge of the Fuji River (publisher Sanoki; no title). Both these kakemono-ye (formed by joining two full-size vertical sheets together) are extremely rare; a good copy of the Monkey Bridge, for example, is probably worth to-day from £250 to £300 (see Plate 61, page 340).
It is not unlikely that there was originally a third one, of Cherry Blossoms, making a Settsu-Gekka (Snow, Moon, and Flower) series, a favourite subject of illustration with artists, but no such print has ever been found (see Note, Appendix II).
There is also in existence a very rare small panel print measuring 8¾ in. by 4½ in., under the same title, Koyo Saruhashi, showing the bridge between the tops of the tree-clad cliffs on either side of the gorge, and a solitary foot passenger crossing over to the right bank, and below the bridge a full moon across the face of which is a flight of wild geese. Signed Hiroshige; no publisher's mark nor date-seal; kiwame seal next signature.
The following are Hiroshige's famous landscape triptychs:
1. Kiso-ji no Yama Kawa, Mountain and River on the Kiso Road. A huge mass of snowy mountains fringed, here and there, with scattered pines, and broken into narrow ravines, down which pour torrents into the main river in the foreground. Title on narrow upright panel, and signature on right-hand sheet; dated Snake 8 (1857); publisher, Tsutaya (Kichizo).

2. Kanazawa Ha'ssho Yakei, Full Moon on the Eight Views of Kanazawa. A magnificent land and seascape of the inlet of Kanazawa, looking across to a hilly coast-line opposite; in the centre rises an island, at whose base shelters a fishing village, connected by a level isthmus with the mainland; overhead shines a full moon. Title on narrow upright label on right-hand sheet; signature on left sheet; publisher, Tsutaya; dated Snake 7 (1857).

3. Awa no Naruto Fukei, View of the Rapids of Awa no Naruto. A wide view of the channel dividing the islands of Shikoku and Awaji, the sea foaming in whirlpools as it rushes over the sunken rocks; in the foreground rise two small rocky islands. Across the channel rises a hilly coast. Title on label on right-hand sheet, signature on left; publisher, Tsutaya; dated Snake 4 (1857). (See Note, Appendix II.)
Amongst other famous single sheets by Hiroshige is a large vertical print, measuring 15 in. by 7 in., known as the Bow-Moon, representing a crescent moon seen through a narrow gorge, behind cliffs spanned by a rustic bridge. This large panel forms one of a set entitled Twenty-eight Views of the Moon, of which only this and another plate, a full moon rising behind a branch of a maple tree, are known. The series was, therefore, apparently not completed.
The bow-moon is illustrated in colours in the frontispiece to Mr. Ficke's book, and the original, in flawless condition, realized $475 (£95 at normal exchange) at the sale of his collection in New York, February, 1920.
Yet a third famous moonlight scene of Hiroshige's is another panel (14½ x 5), Moonlight at Tsukuda-jima, showing a cluster of huts on an island, and junks moored in the foreground, the whole scene bathed in the light of a full moon, and a flight of birds across the sky. This print forms one of a panel series of Toto Meisho.
Two other very fine snow scenes should also be recorded. One is a panel print (15 x 5), forming one of a series, Famous Yedo Views at the Four Seasons: a man poling a raft along the Sumida River past a steep, snow-covered slope, at the foot of which are rows of piles; a grey sky full of falling snow-flakes. Signed Hiroshige.
The second is a full-size, upright sheet from a very rare series, Wakan Royei Shu, Poems from China and Japan; publisher Jo-shu-ya; title on a red narrow upright label in white characters, and poem alongside on sky background. Peasants crossing a bridge over a stream running through a mountainous country; in the background looms up a great white mountain; signed Hiroshige fude.
Hiroshige in his very early days, while still a pupil of Toyohiro, designed figure-studies, in response, we presume, to the insistent demand for this class of subject, before his genius for landscape diverted public taste into another channel. Such designs are very rare, and are interesting both for comparison with the work of recognized figure-study designers and for the fact that they represent the skill of an artist in one direction who made his name by striking out in another. Plate 7 reproduces a figure-study by Hiroshige from a series entitled
Another (also very rare) set of figure-studies done between 1840 and 1850 is mentioned in Chapter XXXIV.
Of Hiroshige's pupil, Ichiyusai SHIGENOBU (w. 1840-1866), afterwards Hiroshige II, little need be said. As a rule his work, which closely follows that of his master, is very inferior, though at times it was of sufficient merit to compare very favourably with it; but he lacked originality and merely trod in the footsteps of his master.
As seems to have been a common habit with pupils, Shigenobu, on the death of his master, married his daughter, and at the same time assumed the great name. Six years later he divorced his wife, went to Yokohama, changed his name to Rissho, and died there in 1866.
Another pupil of Hiroshige, Shigemasa, married the divorced wife of Shigenobu and assumed the master's name as Hiroshige III. He is, however, a wholly commonplace and unimportant artist. He died as recently as 1894.
Reference has already been made to those landscape series which, while attributed as a whole to Hiroshige, yet contain certain views contributed by the pupil, and to the characteristics by which such views may be distinguished from the work of the master. One series, at least, entirely by Hiroshige II, entitled Thirty-six Views of Toto (i.e. Yedo), contains some plates equal to any of the master's similar series, when carefully printed (see Plate 29).
Another series of Yedo views (oblong), printed almost entirely in blue, is also above his usual work, the purity of the blue atoning for the, at times, somewhat faulty drawing (see Plate 7).
This series was published by Senichi, and bears the date 1862. These blue prints owed their origin to an edict issued in 1842, and in force for nearly twelve years, limiting, amongst other restrictions, the number of blocks that might be used. It must be admitted that the printers overcame this restriction in a remarkably effective manner.

It is not unlikely that this edict, which also prohibited the sale of prints depicting actors and courtesans, was one of the causes (perhaps, even, a very important cause) that contributed to the decline and extinction of the art of the print-designer. Official interference and restrictions were bound to have an injurious effect upon an art which owed its existence to its ability to cater for the tastes of the multitude. Circumscribe and limit these tastes, and it is bound to suffer. It was from the date of this law that censor's and inspector's seals had to appear on all prints, a custom which was continued after the edict ceased to be in force.

By prohibiting prints of actors and courtesans, and by limiting the number of blocks which might be used, and the size of compound prints to triptychs, the law was aiming at raising the morals of the community and checking extravagance. This legal restriction of the subjects allowed to be portrayed naturally created a great demand for the landscape designs of Hiroshige, the result of which we see to-day in the great preponderance of his prints over those of any other individual artist, that is to say, in the number of copies still extant. It also caused other artists, who were hitherto figure-designers, to apply their brush in the same direction, or else cease work; and to this period belong the numerous prints depicting stories, folk-lore, and legend, such as the many series of this nature designed by Kuniyoshi.
Further consideration of the work of Hiroshige II in landscape will be found in a later chapter dealing with this subject.
Thanks to Mr. Happer's investigations in respect of the date-seals found on Hiroshige's prints after 1840, much confusion at one time existing between the two Hiroshiges has now been definitely cleared up, and prints formerly attributed to the pupil are now properly accorded to Hiroshige himself, though it is known he sometimes called in his pupil to assist him in completing some of his numerous series.
Owing to the difference in the signature Hiroshige appearing on the early oblong views (Tokaido series) as compared with that on the later vertical
|King of the United Kingdom and the British Dominions, Emperor of India|
|King Edward after his coronation in 1902 painted by Sir Luke Fildes. National Portrait Gallery, London.|
|Reign||January 22, 1901–May 6, 1910|
|Coronation||August 9, 1902|
|Consort||Alexandra of Denmark|
|Issue||Albert Victor, Duke of Clarence; George V; Louise, Princess Royal; Princess Victoria Alexandra; Maud of Wales; Prince Alexander John|
|Titles||HM The King; HRH The Prince of Wales; HRH The Duke of Cornwall and Rothesay|
|Royal House||House of Saxe-Coburg-Gotha|
|Royal anthem||God Save the King|
|Father||Albert, Prince Consort|
|Born||November 9, 1841, Buckingham Palace, London|
|Baptised||January 25, 1842, St George's Chapel, Windsor|
|Died||May 6, 1910 (aged 68), Buckingham Palace, London|
|Buried||May 20, 1910, St George's Chapel, Windsor|
Edward VII (November 9, 1841 – May 6, 1910) was King of the United Kingdom of Great Britain and Ireland, of the British Dominions beyond the Seas, and Emperor of India from January 22, 1901, until his death on May 6, 1910.
Before his accession to the throne, Edward held the title of Prince of Wales, and has the distinction of having been heir apparent to the throne longer than anyone in English or British history. During the long widowhood of his mother, Queen Victoria, he was largely excluded from wielding any political power but came to represent the personification of the fashionable, leisured elite.
Edward's reign, now called the Edwardian period after him, saw the first official recognition of the office of the Prime Minister in 1905. Edward played a role in the modernization of the British Home Fleet, the reform of the Army Medical Services, and the reorganization of the British army after the Second Boer War. His fostering of good relations between Great Britain and other European countries, especially France, for which he was popularly called "Peacemaker," was sadly belied by the outbreak of World War I in 1914.
He was the first British monarch of the House of Saxe-Coburg-Gotha, which was renamed by his son, George V, to the House of Windsor.
Edward was born on November 9, 1841, in Buckingham Palace. His mother was Queen Victoria, the only daughter of Prince Edward Augustus, Duke of Kent and granddaughter of King George III. His father was Prince Albert of Saxe-Coburg-Gotha, first cousin and consort of Victoria. Christened Albert Edward (after his father and maternal grandfather) at St. George's Chapel, Windsor, on January 25, 1842, his godparents were the King of Prussia, the Duke of Cambridge, Prince Ferdinand of Saxe-Coburg and Gotha, King Consort of Portugal, the Duchess of Saxe-Coburg and Gotha, the Dowager Duchess of Saxe-Coburg-Altenburg, and Princess Sophia. He was known as Bertie to the family throughout his life.
As the eldest son of a British sovereign, he was automatically Duke of Cornwall, Duke of Rothesay, Earl of Carrick, Baron of Renfrew, Lord of the Isles and Prince and Great Steward of Scotland at birth. As a son of Prince Albert, he also held the titles of Prince of Saxe-Coburg-Gotha and Duke of Saxony. Queen Victoria created her son Prince of Wales and Earl of Chester on December 8, 1841. He was created Earl of Dublin on January 17, 1850, and a Knight of the Garter on November 9, 1858, and a Knight of the Thistle on May 24, 1867. In 1863, he renounced his succession rights to the Duchy of Saxe-Coburg-Gotha in favor of his younger brother, Prince Alfred.
In 1846, the four-year-old Prince of Wales was given a scaled-down version of the uniform worn by ratings on the Royal Yacht. He wore his miniature sailor suit during a cruise off the Channel Islands that September, delighting his mother and the public alike. Popular engravings, including the famous portrait done by Winterhalter, spread the idea, and by the 1870s, the sailor suit had become normal dress for both boys and girls in many parts of the world.
Queen Victoria and Prince Albert determined that their eldest son should have an education that would prepare him to be a model constitutional monarch. At age seven, Edward embarked upon a rigorous educational program devised by the Prince Consort, and under the supervision of several tutors. However, unlike his elder sister, the Prince of Wales did not excel in his studies. He tried to meet the expectations of his parents, but to no avail. He was not a diligent student—his true talents were those of charm, sociability, and tact. Benjamin Disraeli described him as informed, intelligent, and of sweet manner.
After an educational trip to Rome, undertaken in the first few months of 1859, he spent the summer of that year studying at the University of Edinburgh under, amongst others, Lyon Playfair. In October, he matriculated as an undergraduate at Christ Church, Oxford. Now released from the educational strictures imposed by his parents, he enjoyed studying for the first time and performed satisfactorily in examinations.
The following year, he undertook the first tour of North America by a British heir to the throne. His genial good humor and confident bonhomie made the tour a great success. He inaugurated the Victoria Bridge, Montreal, across the St Lawrence River, and laid the cornerstone of Parliament Hill, Ottawa. He watched Blondin traverse Niagara Falls by highwire, and stayed for three days with President James Buchanan at the White House. Vast crowds greeted him everywhere; he met Henry Wadsworth Longfellow, Ralph Waldo Emerson, and Oliver Wendell Holmes; and prayers for the royal family were said in Trinity Church, New York, for the first time since 1776.
In 1861, his studies were transferred to Trinity College, Cambridge, where he was taught history by Charles Kingsley, but he never graduated. The Prince of Wales hoped to pursue a career in the British Army, but this was denied him because he was heir to the throne. He did serve briefly in the Grenadier Guards in the summer of 1861; however, this was largely a sinecure. He was advanced from the rank of lieutenant to colonel in a matter of months. In September that year, Edward was sent to Germany, supposedly to watch military maneuvers, but actually in order to engineer a meeting between him and Princess Alexandra of Denmark, the eldest daughter of Prince Christian of Denmark. Queen Victoria and Prince Albert had already decided that Edward and Alexandra should marry. They met at Speyer on September 24, under the auspices of Victoria, Princess Royal. Alexandra was a great, great, great grandchild of George II of the United Kingdom via at least three lines (twice through her father, and once through her mother), which made her a fourth cousin of Bertie. Alexandra was also in the line of succession to the British throne, but far down the list.
From this time, Edward gained a reputation as a playboy. In December 1861, his father died from typhoid fever two weeks after visiting him at Cambridge; Prince Albert had reprimanded his son after an actress, Nellie Clifden, had been hidden in his tent by his fellow officers during army maneuvers in Ireland. The Queen, who was inconsolable and wore mourning for the rest of her life, blamed Edward for his father's death. At first, she regarded her son with distaste as frivolous, indiscreet, and irresponsible. She wrote, "I never can, or shall, look at him without a shudder."
Once widowed, Queen Victoria effectively withdrew from public life, and shortly after the Prince Consort's death, she arranged for her son to embark on an extensive tour of the Middle East, visiting Egypt, Jerusalem, Damascus, Beirut, and Constantinople. As soon as he returned to Britain, arrangements were made for his engagement, which took place at Laeken in Belgium on September 9, 1862. Edward and Alexandra wed at St. George's Chapel, Windsor, on March 10, 1863.
Edward and his wife established Marlborough House as their London residence and Sandringham House in Norfolk as their country retreat. They entertained on a lavish scale. Their marriage was met with disapproval in certain circles because most of Victoria's relations were German, and Denmark was at loggerheads with Germany over the territories of Schleswig and Holstein. When Alexandra's father inherited the throne of Denmark in November 1863, the German Confederation took the opportunity to invade and annex Schleswig-Holstein. Victoria herself was of two minds as to whether it was a suitable match given the political climate. After the couple's marriage, she expressed anxiety about their lifestyle and attempted to dictate to them on various matters, including the names of their children.
Edward had mistresses throughout his married life. He socialized with actress Lillie Langtry, Lady Jennie Churchill (mother of Winston Churchill and wife of Lord Randolph Churchill), Daisy Greville, Countess of Warwick, actress Sarah Bernhardt, dancer La Belle Otero, and wealthy humanitarian Agnes Keyser. The extent to which these social companionships went is not always clear, as Edward always strove to be discreet, but his attempted discretion was unable to prevent either society gossip or press speculation.
In 1869, Sir Charles Mordaunt, a British Member of Parliament, threatened to name Edward as co-respondent in his divorce suit. Ultimately, he did not do so, but Edward was called as a witness in the case in early 1870. It was shown that Edward had visited the Mordaunts' house while Sir Charles was away sitting in the House of Commons. Although nothing further was proved, and Edward denied he had committed adultery, the suggestion of impropriety was still damaging.
Agnes Keyser, as recorded by author Raymond Lamont-Brown in his book Edward VII's Last Loves: Alice Keppel and Agnes Keyser, held an emotional bond with Edward that others did not: being unmarried herself, she preferred a more private affair to a public one, a trait that also made her the more favored in royal circles of his last two loves. He also helped her and her sister fund a hospital for military officers.
His wife, Alexandra, is believed to have been aware of most of his affairs, and to have accepted them. The diary of one of her Ladies-in-Waiting records her looking out of a window overcome with giggles at the sight of Edward and his almost equally portly mistress riding side-by-side in an open carriage. He and Lord Randolph Churchill did quarrel for a time during Edward's involvement with Churchill's wife (Jennie Jerome), but eventually mended their friendship, which would then last until Lord Randolph's death. Alexandra was said to have been quite admiring of Jennie Jerome, enjoying her company despite the affair.
His last "official" mistress (although simultaneous to his involvement with Keyser), society beauty Alice Keppel, was even allowed by Alexandra to be present at his deathbed in 1910, at his express written instruction, although Alexandra reportedly did not like her. Keppel also is rumored to have been one of the few people who could help quell Edward VII's unpredictable mood swings. However, his outbursts of temper were short-lived, and "after he had let himself go … [he would] smooth matters by being especially nice." One of Keppel's great-granddaughters, Camilla Parker Bowles, was later to become the mistress and then wife of Charles, Prince of Wales, one of Edward's great-great grandsons. It was rumored that Camilla's grandmother, Sonia Keppel (born in May 1900), was the illegitimate daughter of Edward. However, Edward never acknowledged any illegitimate children.
Edward represented his mother, after the death of his father, at public ceremonies and gatherings—opening the Thames Embankment, Mersey Tunnel, and Tower Bridge, indeed he pioneered the idea of royal public appearances as they are understood today. But even as a husband and father, Edward was not allowed by his mother to have an active role in the running of the country until 1898. He annoyed his mother by siding with Denmark on the Schleswig-Holstein Question in 1864 (she was pro-German), and in the same year, annoyed her again by making a special effort to meet Garibaldi.
In 1870, republican sentiment in Britain was given a boost when the French Emperor, Napoleon III, was defeated in the Franco-Prussian War and the French Third Republic was declared. However, in the winter of 1871, Edward contracted typhoid, the disease that had killed his father, while staying at Londesborough Lodge. There was great national concern. One of his fellow guests (Lord Chesterfield) died, but the Prince managed to pull through. His near brush with death led to an improvement both in his relationship with his mother, as well as in his popularity with the public. He cultivated politicians from all parties, including republicans, as his friends, and thereby largely dissipated any residual feelings against him.
An active Freemason throughout his adult life, Edward VII was installed as Grand Master in 1875, giving great impetus and publicity to the fraternity. He regularly appeared in public, both at home and on his tours abroad, as Grand Master, laying the foundation stones of public buildings, bridges, dockyards, and churches with Masonic ceremony. His presence ensured publicity, and reports of Masonic meetings at all levels appeared regularly in the national and local press. Freemasonry was constantly in the public eye, and Freemasons were known in their local communities. Edward VII was one of the biggest contributors to the fraternity.
In 1875, the Prince set off for India on an extensive eight-month tour of the sub-continent. His advisers remarked on his habit of treating all people the same, regardless of their social station or color. The Prince wrote, complaining of the treatment of the native Indians by the British officials, "Because a man has a black face and a different religion from our own, there is no reason why he should be treated as a brute." At the end of the tour, his mother was given the title Empress of India, in part as a result of the tour's success.
He enthusiastically indulged in pursuits such as gambling and country sports. Edward was also a patron of the arts and sciences and helped found the Royal College of Music. He opened the college in 1883, with the words, "Class can no longer stand apart from class…I claim for music that it produces that union of feeling which I much desire to promote." He laid out a golf course at Windsor, and was an enthusiastic hunter. He ordained that all the clocks at Sandringham be put forward by half an hour in order to create more time for shooting. This so-called tradition of Sandringham Time continued until 1936, when it was abolished by Edward VIII.

By the 1870s, the future king had taken a keen interest in horse racing and steeplechasing. In 1896, his horse, Persimmon, won both the Derby Stakes and the St Leger Stakes; Persimmon's brother, Diamond Jubilee, won all five classic races (Derby, St Leger, Two Thousand Guineas, Newmarket Stakes, and Eclipse Stakes) in a single year, 1900. Edward was the first royal to enter a horse in the Grand National; his Ambush II won the race in 1900.

In 1891, he was embroiled in the Royal Baccarat Scandal, when it was revealed he had played an illegal card game for money the previous year. The Prince was forced to appear as a witness in court for a second time when one of the players unsuccessfully sued his fellow players for slander after being accused of cheating. The same year he became embroiled in a personal conflict, when Lord Charles Beresford threatened to reveal details of Edward's private life to the press, as a protest against Edward interfering with Beresford's affair with Daisy Greville, Countess of Warwick. The friendship between the two men was irreversibly damaged, and their bitterness would last for the remainder of their lives.
In 1892, Edward's eldest son, Albert Victor, was engaged to Princess Victoria Mary of Teck. Just a few weeks after the engagement, Albert Victor died of pneumonia. Edward was grief-stricken. "To lose our eldest son," he wrote, "is one of those calamities one can never really get over." Edward told Queen Victoria, "[I would] have given my life for him, as I put no value on mine."
On his way to Denmark through Belgium on April 4, 1900, Edward was the victim of an attempted assassination, when Jean-Baptiste Sipido shot at him in protest over the Boer War. Sipido escaped to France; the perceived delay of the Belgian authorities in applying for extradition, combined with British disgust at Belgian atrocities in the Congo, worsened the already poor relationship between the United Kingdom and the Continent. However, in the next ten years, Edward's affability and popularity, as well as his use of family connections, would assist Britain in building European alliances.
When Queen Victoria died on January 22, 1901, the Prince of Wales became King of the United Kingdom, Emperor of India and, in an innovation, King of the British Dominions. Then 59, he had been heir apparent for longer than anyone else in British history. To the surprise of many, he chose to reign under the name Edward VII instead of Albert Edward, the name his mother had intended for him to use. (No English or British sovereign has ever reigned under a double name.) The new King declared that he chose the name Edward as an honored name borne by six of his predecessors, and that he did not wish to diminish the status of his father with whom alone among royalty the name Albert should be associated. Some observers, noting also such acts of the new king as lighting cigars in places where Queen Victoria had always prohibited smoking, thought that his rejection of Albert as a reigning name was his acknowledgment that he was finally out from under his parents' shadows. The number VII was occasionally omitted in Scotland, in protest at his use of a name carried by English kings who had "been excluded from Scotland by battle."
He donated his parents' house, Osborne on the Isle of Wight, to the state and continued to live at Sandringham. He could afford to be magnanimous; it was claimed that he was the first heir to succeed to the throne in credit. Edward's finances had been ably managed by Sir Dighton Probyn, VC, Comptroller of the Household, and had benefited from advice from Edward's financier friends, such as Ernest Cassel, Maurice de Hirsch, and the Rothschild family.
Edward VII and Queen Alexandra were crowned at Westminster Abbey on August 9, 1902, by the 80-year-old Archbishop of Canterbury, Frederick Temple, who died only four months later. The coronation had originally been scheduled for June 26, but two days before, on June 24, Edward was diagnosed with appendicitis. Thanks to the discovery of anesthesia in the preceding fifty years, he was able to undergo a life-saving operation, performed by Sir Frederick Treves. This was at a time when appendicitis was not treated operatively and thus carried a mortality rate of greater than 50 percent. Treves, with Lister's support, performed a then radical operation of draining the infected appendix through a small incision. The next day Edward was sitting up in bed, smoking a cigar. Two weeks later it was announced that the King was out of danger. Treves was honored with a baronetcy (which Edward had arranged before the operation), and appendix surgery entered the medical mainstream for the first time in history.
Edward refurbished the royal palaces, reintroduced the traditional ceremonies, such as the State Opening of Parliament, that his mother had foregone, and founded new orders of decorations, such as the Order of Merit, to recognize contributions to the arts and sciences. The Shah of Persia, Mozzafar-al-Din, visited England around 1902, on the promise of receiving the Order of the Garter. King Edward VII refused to give this high honor to the Shah, because the order was in his personal gift and the Government had promised the order without the King's consent. The King resented his ministers' attempts to reduce the King's traditional powers. Eventually, the King relented and Britain sent the Shah a full Order of the Garter.
As king, Edward's main interests lay in the fields of foreign affairs and naval and military matters. Fluent in French and German, he made a number of visits abroad, and took annual holidays at Biarritz and Marienbad. One of his most important foreign trips was an official visit to France in spring 1903, as the guest of President Émile Loubet. Following on from the first visit of a British or English king to the Pope in Rome, this trip helped create the atmosphere for the Anglo-French Entente Cordiale, an agreement delineating British and French colonies in North Africa, and making virtually unthinkable the wars that had so often divided the countries in the past. Negotiated between the French foreign minister, Théophile Delcassé, and the British foreign secretary, the Marquess of Lansdowne, and signed on April 8, 1904, by Lord Lansdowne and the French ambassador Paul Cambon, the Entente marked the end of centuries of Anglo-French rivalry and Britain's splendid isolation from Continental affairs. It also was an attempt to counterbalance the growing dominance of the German Empire and its ally, Austria-Hungary.
Edward involved himself heavily in discussions over army reform, the need for which had become apparent with the failings of the South African War. He supported the re-design of army command, the creation of the Territorial Army, and the decision to provide an Expeditionary Force supporting France in the event of war with Germany. Reform of the navy was also suggested, and a dispute arose between Admiral Lord Charles Beresford, who favored increased spending and a broad deployment, and the First Sea Lord Admiral Sir John Fisher, who favored scrapping obsolete vessels, efficiency savings, and deploying in home waters, as a means of countering the increasing menace of the German fleet. Edward lent support to Fisher, in part because he disliked Beresford, and eventually Beresford was dismissed. Beresford continued his campaign outside of the navy, and Fisher resigned. Nevertheless, Fisher's policy was retained.
Edward VII, mainly through his mother and his father-in-law, was related to nearly every other European monarch and came to be known as the "uncle of Europe." The German Emperor Wilhelm II, Tsar Nicholas II of Russia, Grand Duke Ernst Ludwig of Hesse and by the Rhine and Grand Duke Carl Eduard of Saxe-Coburg-Gotha were Edward's nephews; Queen Victoria Eugenia of Spain, Crown Princess Margaret of Sweden, Crown Princess Marie of Romania, and Empress Alexandra Feodorovna of Russia were his nieces; King Haakon VII of Norway was his nephew by marriage and his son-in-law; King George I of the Hellenes and King Frederick VIII of Denmark were his brothers-in-law; and King Albert I of Belgium, Kings Charles I of Portugal and Manuel II of Portugal, King Ferdinand of Bulgaria, Queen Wilhelmina of the Netherlands, and Prince Ernst August, Duke of Brunswick-Lüneburg, were his cousins. Edward doted on his grandchildren, and indulged them, to the consternation of their governesses. However, there was one relation whom Edward did not like—his difficult relationship with his nephew, Wilhelm II, exacerbated the tensions between Germany and Britain.
He became the first British monarch to visit the Russian Empire in 1908, despite refusing to visit in 1906, when Anglo-Russian relations were still low in the aftermath of the Dogger Bank incident, the Russo-Japanese war, and the Tsar's dissolution of the Duma.
In the last year of his life, Edward became embroiled in a constitutional crisis when the Conservative majority in the House of Lords refused to pass the "People's Budget" proposed by the Liberal government of Prime Minister Herbert Henry Asquith. The King let Asquith know that he would only be willing to appoint additional peers, if necessary, to enable the budget's passage in the House of Lords, if Asquith won two successive general elections.
Edward was rarely interested in politics, although his views on some issues were notably liberal for the time; he had to be dissuaded from breaking with constitutional precedent by openly voting for Gladstone's Representation of the People Bill in the House of Lords. On other matters he was less progressive: he did not favor Irish Home Rule (initially preferring a form of Dual Monarchy) or giving votes to women, although he did suggest that the social reformer Octavia Hill serve on the Commission for Working Class Housing. Edward lived a life of luxury that was often far removed from that of the majority of his subjects. However, his personal charm with people at all levels of society and his strong condemnation of prejudice went some way to assuage republican and racial tensions building during his lifetime.
In March 1910 the King was staying at Biarritz when he collapsed. He remained there to convalesce while Asquith remained in London trying to get the Finance Bill passed. The King's continued ill-health was unreported and he came in for some criticism for staying in France while political tensions were so high. On April 27, he returned to Buckingham Palace, still suffering from severe bronchitis. The Queen returned from visiting her brother, King George I of Greece, in Corfu, a week later on May 5.
The following day, the King suffered several heart attacks, but refused to go to bed saying, "No, I shall not give in; I shall go on; I shall work to the end." Between moments of faintness, the Prince of Wales (shortly to be King George V) told him that his horse, Witch of the Air, had won at Kempton Park that afternoon. The King replied, "I am very glad," his final words. At half-past-eleven he lost consciousness for the last time and was put to bed. He died at 11:45 p.m.
As king, Edward VII proved a greater success than anyone had expected, but he was already an old man and had little time left to fulfill the role. In his short reign, he ensured that his second son and heir, who would become King George V, was better prepared to take the throne. Contemporaries described their relationship as more like affectionate brothers than father and son, and on Edward's death George wrote in his diary that he had lost his "best friend and the best of fathers … I never had a [cross] word with him in my life. I am heart-broken and overwhelmed with grief." Edward received criticism for his apparent pursuit of self-indulgent pleasure, but he received great praise for his affable and kind good manners, and his diplomatic skill. Edward VII is buried at St George's Chapel, Windsor Castle. As Barbara Tuchman noted in The Guns of August, his funeral marked "the greatest assemblage of royalty and rank ever gathered in one place and, of its kind, the last."
Edward was afraid that his nephew, the Kaiser, would tip Europe into war. Four years after his death, World War I broke out. The naval reforms and the Anglo-French alliance he had supported, and the relationships between his extended royal family, were put to the test. The war marked the end of the Edwardian way of life.
The lead ship of a new class of battleships, launched in 1903, was named in his honor, as were four line regiments of the British Army—The Prince of Wales's (North Staffordshire Regiment), The Prince of Wales's Leinster Regiment (Royal Canadians), The Prince of Wales's Own (West Yorkshire Regiment), and The Duke of Cornwall's Light Infantry—and three yeomanry regiments—King Edward's Horse, The Prince of Wales's Own Royal Regiment of Wiltshire Yeomanry Cavalry, and the Ayrshire Yeomanry Cavalry (Earl of Carrick's Own). Only one of these titles is currently retained in the Army, that of The Staffordshire Regiment (The Prince of Wales's).
A statue of King Edward VII and supporters constructed from local granite stands at the junction of Union Gardens and Union Street, in the city center of Aberdeen. An equestrian statue of him, originally from Delhi, now stands in Queen's Park, Toronto. Other equestrian statues of him are in London at Waterloo Place, and in the city of Sydney, Australia, outside the city's Botanic Gardens.
King Edward VII is a popular name for schools in England. Two of the largest are King Edward VII Upper School, Melton Mowbray, Leicestershire, founded in 1908, and King Edward VII School in Sheffield, founded in 1905 (formerly Wesley College). King Edward Memorial (KEM) Hospital is amongst the foremost teaching and medical care providing institutions in India. The hospital was founded in Bombay in 1926, as a memorial to the King, who had visited India as Prince of Wales in 1876. King Edward Memorial Hospital for Women in Subiaco, Western Australia, is the largest maternity hospital in the Perth metropolitan area. Two other Perth landmarks are named in his honor, Kings Park and His Majesty's Theatre, the latter a rare example of an Edwardian Theater. The only medical school in the former British colony of Singapore was renamed the King Edward VII Medical School in 1912 prior to being renamed King Edward VII College of Medicine in 1921. Originally named the Straits and Federated Malay States Government Medical School, its new name remained until the University of Malaya was founded in the city-state in 1949, whereupon the College became its Faculty of Medicine. The students' hostel adjoining the College of Medicine building retained King Edward's name. The hostel has kept the name since moving to the new Kent Ridge campus of the now-Yong Loo Lin School of Medicine, and is affectionately referred to as the "K.E.7 Hall" by students. The Parque Eduardo VII in Lisbon, King Edward Avenue, a major thoroughfare in Vancouver, and King Edward Cigars are also named after him.
House of Saxe-Coburg-Gotha (a cadet branch of the House of Wettin)
Born: 9 November 1841; Died: 6 May 1910

King of the United Kingdom and Emperor of India: 22 January 1901 – 6 May 1910
Heir to the Throne (as heir apparent): 1841 – 1901; preceded by The Princess Victoria; succeeded by George, Prince of Wales (later King George V)
Grand Master of the United Grand Lodge of England: 1875 – 1901; preceded by The Marquess of Ripon; succeeded by Prince Arthur, Duke of Connaught
Great Master of the Bath: 1897 – 1901; title last held by Prince Albert, the Prince Consort
New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by the terms of the Creative Commons CC-by-sa 3.0 License (CC-by-sa), which permits use and dissemination with proper attribution, credited to both the New World Encyclopedia contributors and the volunteer contributors of the Wikimedia Foundation.
Note: Some restrictions may apply to use of individual images which are separately licensed. | <urn:uuid:e611ec87-8d18-4f53-b687-3124b7af102e> | CC-MAIN-2017-17 | http://www.newworldencyclopedia.org/entry/Edward_VII_of_the_United_Kingdom | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123635.74/warc/CC-MAIN-20170423031203-00192-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.981402 | 6,765 | 2.796875 | 3 |
Coronary bypass surgery, widely used to treat cardiovascular disease, involves redirecting a patient's bloodflow around the heart in order to allow surgeons to operate. Heart-lung machines artificially oxygenate and pump blood during such surgeries in order to keep the patient alive. The first heart-lung machine dates back to the 1930s and consisted of many of the same components as the machines of today. The design of each of these components is inspired by different principles of physics and engineering, including fluid dynamics and pressure gradients. Engineers are now applying these same concepts to create new heart-lung machine models, such as miniaturized or portable versions. With its foundations in biology, physics, and engineering, the heart-lung machine has revolutionized the treatment of heart disease.
Heart disease is a major health problem facing Americans today. According to the American Heart Association, 80 million men and women suffered from cardiovascular disease in 2006. In 2005, over 860,000 cardiovascular disease patients died. Despite these statistics, the situation is not hopeless. Different solutions exist, such as lifestyle changes, medicines, or in the most severe cases, coronary bypass surgery. Patients can undergo different types of cardiac bypass surgery to repair their faulty hearts or blood vessels. The surgery is commonly referred to as open heart surgery because the doctors actually open up the patient's chest cavity, expose the heart, and operate on it. In order to allow such a surgery to be performed, the heart must be temporarily stopped from beating.
Obviously the heart is an essential organ. If it stops beating, oxygen-carrying blood cannot be circulated through the body, and a person will die shortly afterward. This presents quite a predicament for cardiovascular surgeons: how can they stop the heart to operate on it, yet keep the patient alive? The answer lies with a special apparatus, called the heart-lung machine, or cardiopulmonary bypass machine. The heart-lung machine is a device that is connected to the blood vessels and serves as the person’s heart and lungs for a period of time. In other words, the patient’s blood bypasses the heart to enter the machine instead, where it is oxygenated just as it would be in the lungs. From there, the machine pumps the blood out into the rest of the body (Fig. 1).
In doing so, the heart-lung machine essentially replaces the most vital organs, thereby sustaining the patient’s life. From its original development to the components of current models to its future applications, the heart-lung machine is truly an impressive feat of technology that integrates the engineering principles of fluid flow, pressure gradients, and heat transfer into one life-saving device.
History of the Heart-Lung Machine
The first machine of this type was developed by surgeon John Heysham Gibbon in the 1930s. During this time, physicians were looking into the possibility of extracorporeal circulation, or blood flow outside of the body. They wondered if there was a way to extend this extracorporeal circulation to bypass not just minor organs, as was often done in surgery at the time, but to bypass the heart completely. Saddened at a patient's death mid-surgery, Gibbon made it his mission to come up with an artificial heart-lung machine that would keep a patient alive during heart surgery.
Between 1934 and 1935, Gibbon built a prototype of his heart-lung machine and tested its function on cats in order to assess what problems needed to be addressed before using it with humans. For example, in one model Gibbon observed that an inadequate amount of bloodflow was exiting the machine, so he decided to make the flow continuous, instead of in short pulses. By introducing bloodflow that would remain at the same rate continuously, instead of increasing and decreasing with a set rhythm, he increased the total volume of blood that could flow through the machine.
In the 1940s, Dr. Gibbon met Thomas Watson, an engineer and chairman of International Business Machines (IBM). Gibbon and Watson, along with other engineers from IBM, collaborated on the quest for an effective cardiopulmonary bypass machine, and together they created another new model. When this model was tested by performing surgeries on dogs, they noticed that many of their test subjects died after surgery due to embolisms. (An embolism occurs when a small particle or piece of tissue migrates to another part of the body and blocks a blood vessel, preventing vital tissues from receiving oxygen.) From these experiments, they saw the need to add a filter to their apparatus. Gibbon and the IBM engineers decided to use a 300-micron by 300-micron mesh filter, which proved successful in trapping these harmful tissue particles.
In 1953, Gibbon himself completed the first successful surgery on a human patient with the help of the cardiopulmonary bypass machine. Since then, open heart surgeries have been performed for over 55 years, with almost 700,000 performed annually in recent years. Much has changed since Gibbon's first model, but the main engineering concepts behind his machine have remained the same. Today's heart-lung machine contains the same basic components: a reservoir for oxygen-poor venous blood, an oxygenator, a temperature regulator, a pump to drive the blood flow back to the body, a filter to prevent embolisms, and connective tubing to tie all the other elements together.
Today’s Machine: A Journey Through its Components
In an open-heart surgery, the surgeon first connects the bypass machine to the patient by inserting tubes called the venous cannulas into the vena cavae, the large blood vessels leading to the heart. This redirects the flow of blood into the heart-lung machine, bypassing the heart completely. Engineers must design the venous cannulas such that a precise and controlled amount of blood will flow through them into the machine. They do so by creating the tubes in varying sizes and resistances. According to fluid dynamic principles, the larger a tube is, the more liquid can flow through it at a given point in time. On the other hand, if a tube has a greater resistance, which is controlled by surface roughness and fluid viscosity, then less fluid may pass through. By adjusting these two properties, an engineer can create venous cannulas that allow specific rates of blood to flow from the body and into the machine.
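The size-and-resistance tradeoff described above can be sketched with the Hagen-Poiseuille equation for laminar flow in a cylindrical tube, in which flow grows with the fourth power of the radius and falls with length and viscosity. The numbers below are illustrative assumptions, not clinical cannula specifications.

```python
import math

def poiseuille_flow(radius_m, length_m, delta_p_pa, viscosity_pa_s):
    """Volumetric flow rate (m^3/s) through a cylindrical tube:
    Q = (pi * r^4 * dP) / (8 * mu * L)."""
    return (math.pi * radius_m**4 * delta_p_pa) / (8 * viscosity_pa_s * length_m)

# Hypothetical values: a 4 mm inner-radius cannula, 30 cm long,
# blood viscosity ~3.5 mPa*s, and a 2 kPa pressure difference.
q = poiseuille_flow(radius_m=0.004, length_m=0.30,
                    delta_p_pa=2000.0, viscosity_pa_s=0.0035)
q_l_per_min = q * 1000.0 * 60.0  # convert m^3/s to L/min
```

Because flow scales with the fourth power of the radius, doubling the radius multiplies the flow sixteenfold, which is why cannula sizing so strongly controls how much blood enters the machine.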
From the cannulas, the blood flows into the venous reservoir, a chamber made of plastic or polyvinyl chloride (PVC) that collects and stores the blood from the patient's body. The reservoir must have a large volume capacity to accommodate a large volume of blood. According to Boyle's Law, pressure and volume are inversely related under constant temperature; as one increases, the other decreases. Thus, the venous reservoir's large volume gives it a low pressure. Fluids naturally move from regions of higher pressure to regions of lower pressure. Therefore, since the reservoir has a low pressure, blood flows from the high-pressure vessels in the body into the bypass machine's venous reservoir.
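The inverse pressure-volume relationship the paragraph invokes can be written directly from Boyle's law; this toy calculation is only a sketch of the principle, not a model of an actual reservoir.

```python
def boyle_pressure(p1, v1, v2):
    """Boyle's law at constant temperature: P1 * V1 = P2 * V2,
    so the new pressure is P2 = P1 * V1 / V2."""
    return p1 * v1 / v2

# Doubling the volume at constant temperature halves the pressure:
p2 = boyle_pressure(p1=100.0, v1=1.0, v2=2.0)  # 50.0
```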
Upon leaving the venous reservoir, blood next travels into the heart-lung machine's pump, which utilizes compression force or centrifugal force to drive blood flow. A pump may come in one of two types: roller pumps or centrifugal pumps. In a roller pump, the blood enters a curved track of tubing made of a flexible material, often PVC, latex, or silicone. As the blood enters, two cylindrical rollers rotate and slide forward, constricting the tubing. This compression reduces the volume in the tube, leaving the blood no room to go but forward. Just as squeezing a tube of toothpaste pushes the paste forward and out of the tube, compressing the roller pump forces the blood to flow forward, through the rest of the bypass machine. While roller pumps may be used as the primary pump in a heart-lung machine, centrifugal pumps are often used as an alternative. The centrifugal pump consists of a plastic wheel that rotates rapidly, propelling the liquid away from the center of rotation. Imagine spinning a bucket of water overhead fast enough so that the water is pressed outward against the bucket and does not fall out. The same force is utilized in the heart-lung machine as the rotation of the centrifugal pump forces the blood to flow past the spinning wheel and out towards the next section of tubing. While some heart-lung machine manufacturers prefer this type of pump because they believe it reduces the formation of harmful clotting elements in the blood, at this point in time, both types of pumps are widely used.
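A roller pump's output can be approximated from geometry alone: assuming the rollers fully occlude the tubing, each revolution displaces roughly the blood held in the occluded raceway segment. The tubing size, raceway length, and speed below are made-up illustrative figures, not specifications of any real pump.

```python
import math

def roller_pump_flow_l_per_min(tubing_id_mm, raceway_length_cm, rpm):
    """Rough occlusive roller-pump output:
    flow ~= (tubing cross-section * raceway length) * rpm."""
    area_cm2 = math.pi * (tubing_id_mm / 10.0 / 2.0) ** 2
    ml_per_rev = area_cm2 * raceway_length_cm  # 1 cm^3 == 1 mL
    return ml_per_rev * rpm / 1000.0           # mL/min -> L/min

# Hypothetical 12.7 mm (1/2 inch) tubing, 25 cm raceway, 60 rpm:
flow = roller_pump_flow_l_per_min(12.7, 25.0, 60)
```

Under this model the output rises linearly with pump speed, which is how flow can be adjusted during surgery.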
Blood flows from the pump into the heat exchanger, which uses the concept of heat transfer to cool the blood down to the optimal temperature for surgery. The human body normally maintains an internal temperature of 37 degrees Celsius, but during cardiac surgery, physicians lower the patient's core temperature to a state of moderate hypothermia, 5 to 10 degrees lower than usual. Oxygen gas is more soluble in cold blood than in warm blood. Thus, lowering the temperature maximizes the amount of oxygen the patient's blood can carry.
Following the basic principle of heat transfer, a warmer object will always transfer heat to any colder object with which it is in contact. Similarly, if a cold object touches a warmer object, the warmer object will be cooled. That is precisely what occurs in the heart-lung machine's heat exchanger. It consists of a thermally adjustable compartment of cold water with plastic tubes submerged in it. As blood flows through the tubes, thermal energy is transferred from the blood to the tubing, and then from the tubing to the water. The warmer object, the blood, becomes colder, while the cooler object, the water, becomes warmer. Thus, the heat exchanger cools the blood to the desired temperature.
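The heat the exchanger must remove can be estimated with the familiar relation Q = m * c * dT. The blood mass and specific heat below are rough illustrative values, not clinical figures.

```python
def cooling_heat_joules(blood_mass_kg, delta_t_c, c_blood_j_per_kg_k=3600.0):
    """Heat removed to cool a mass of blood by delta_t degrees C:
    Q = m * c * dT, using ~3.6 kJ/(kg*K) as an approximate
    specific heat for whole blood."""
    return blood_mass_kg * c_blood_j_per_kg_k * delta_t_c

# Cooling roughly 5 kg of circulating blood from 37 C to 30 C:
q_removed = cooling_heat_joules(5.0, 7.0)  # 126,000 J
```

The cold-water bath absorbs this energy, warming slightly as the blood cools, exactly as the paragraph describes.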
From the heat exchanger, the cooled blood enters the oxygenator, where it is imbued with oxygen. Today's heart-lung machines use an oxygenator that attempts to mimic the lung itself. This oxygenator, aptly called a membrane oxygenator, consists of a thin membrane designed like the thin membranes of the alveoli, the air-filled sacs that comprise the lungs. Venous blood from the heat exchanger flows past one side of the membrane, while oxygen gas is stored on the other. Micropores in the membrane allow oxygen gas to flow into the blood and into the blood cells themselves. Just as blood spontaneously flows along a pressure gradient, gases also move from regions of high partial pressure to regions of low partial pressure. The oxygenator is designed such that the oxygen partial pressure on the gas side of the membrane is much higher than that in the blood. Thus, oxygen passes through the membrane into the blood, following the natural high-to-low pressure gradient.
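Gas transfer across the membrane can be sketched with a simplified Fick's-law model, in which flux is proportional to membrane area and to the partial-pressure difference across it. The permeability constant here lumps together diffusivity, solubility, and membrane thickness, and all the numbers are arbitrary illustrations.

```python
def oxygen_flux(permeability, area_m2, p_gas_kpa, p_blood_kpa):
    """Simplified membrane gas transfer:
    flux = permeability * area * (P_gas - P_blood).
    Positive flux means oxygen moves into the blood."""
    return permeability * area_m2 * (p_gas_kpa - p_blood_kpa)

# With a higher partial pressure on the gas side, oxygen flows into the blood:
flux = oxygen_flux(permeability=0.5, area_m2=2.0,
                   p_gas_kpa=80.0, p_blood_kpa=6.0)
```

When the two partial pressures equalize, the driving force vanishes and net transfer stops, which is why the design keeps the gas-side pressure high.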
At this point in the journey through the heart-lung machine, the blood has been collected, cooled and oxygenated, so it is nearly ready to return to the patient's body. Before this can happen, however, it must pass through a filter to eliminate the potential for embolisms. Anything that could lead to blockage of a blood vessel, whether it is an air bubble, a piece of synthetic material, or a clotting protein, poses a great risk to the patient and must be filtered out of the returning blood. The filters used in the heart-lung machine are made of nylon or polyester thread woven into a screen with small pores. The small pores trap the harmful bubbles or particles, allowing purer blood, free from dangerous embolism-causing particles, to flow through. After being filtered, the blood travels through plastic tubes called arterial cannulas. Arteries, the blood vessels that deliver oxygen-rich blood from the heart to the rest of the body, carry blood at the highest velocity of any vessels. In order to imitate this, engineers designed the arterial cannulas to be very narrow. In fluid dynamics, the flow rate of a liquid through a vessel is equal to the cross-sectional area times the speed of flow. Thus, tubes like the arterial cannulas that have a smaller diameter allow for a higher blood velocity. During surgery, the physician inserts the cannulas into one of the major arteries of the patient, such as the aorta or the femoral artery. Blood then leaves the last component of the cardiopulmonary bypass machine, enters the patient's own vessels, and again makes its natural journey through the circulatory system.
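The continuity relation the paragraph uses (flow rate = cross-sectional area * velocity) shows why a narrow cannula produces a high velocity at a fixed flow rate. The diameters below are hypothetical, chosen only to illustrate the scaling.

```python
import math

def mean_velocity_m_per_s(flow_l_per_min, inner_diameter_mm):
    """Mean velocity from the continuity equation v = Q / A.
    For a fixed flow rate, halving the diameter quadruples the
    velocity, since area scales with diameter squared."""
    q_m3_per_s = flow_l_per_min / 1000.0 / 60.0
    area_m2 = math.pi * (inner_diameter_mm / 1000.0 / 2.0) ** 2
    return q_m3_per_s / area_m2

# The same 5 L/min through a narrow 8 mm cannula vs a wide 24 mm vessel:
v_narrow = mean_velocity_m_per_s(5.0, 8.0)
v_wide = mean_velocity_m_per_s(5.0, 24.0)
```

Tripling the diameter multiplies the area ninefold and so cuts the velocity to one ninth, which is the behavior the narrow arterial cannula exploits in reverse.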
Heart-Machines of the Future
There are dozens of heart-lung machines on the market today that are widely used in operating rooms across the nation. Most of these machines employ the same basic components and functions. However, like most areas of science and engineering, the technology of the heart-lung machine is not stagnant. Recent breakthroughs by biomedical engineers give a glimpse of the cardiopulmonary bypass machines of the future. In 2007, the world's first portable heart-lung machine received the CE mark, which officially allowed it to be sold across Europe. Weighing only 17.5 kilograms and powered by a rechargeable battery, the Lifebridge B2T can be transported to different parts of a hospital, giving paramedics or emergency room physicians the chance to start extracorporeal circulation in critical patients before even reaching the operating room (Fig. 2).
Another new development of the heart-lung machine is the MiniHLM, a miniaturized heart-lung machine developed for infants. Instead of having all the components spaced separately, as with normal-sized machines, the MiniHLM integrates the functions so the machine is much smaller and more compact. This allows cardiac bypass surgery to be performed on neonates, something that will surely expand the capacity with which heart conditions in newborns can be treated.
Current implementations of the cardiopulmonary bypass machine have advanced far past John Gibbon's original idea of almost 80 years ago. Yet no step in the process has been insignificant, as every improvement has enhanced the safety and usability of the machine. Engineers continue to consider both the biological needs of the human body and the basic principles of physics in order to create a functional, biocompatible device that performs what was once unthinkable: sustaining human life without the use of one's heart or lungs. Hundreds of thousands of patients undergo open-heart bypass surgeries every year, intense procedures which require extracorporeal circulation. That's hundreds of thousands of lives saved with the help of one essential biomedical device: the heart-lung machine.
- "AHA Heart Disease and Stroke Statistics - 2009 Update." American Heart Association. Internet: http://www.americanheart.org/downloadable/heart/1240250946756LS-1982%20Heart%20and%20Stroke%20Update.042009.pdf, 2009. [28 Jun 2009].
- “Internal Working of the Cardiopulmonary Bypass Machine.” The Chemical Engineers' Resources. Internet: http://www.cheresources.com/cardiopul.shtml, 2008. [29 Jun 2009].
- “Extracorporeal circulation.” The American Heritage Medical Dictionary. 2007. Internet: http://medical-dictionary.thefreedictionary.com/extracorporeal+circulation [29 Jun 2009].
- Adora Ann Fou. “John H. Gibbon. The first 20 years of the heart-lung machine.” Texas Heart Institute Journal, vol. 24(1), pp. 1-8, [On-line] Available: http://www.pubmedcentral.nih.gov/pagerender.fcgi?artid=325389&pageindex=1 [29 Jun 2009].
- Kelly D. Hedlund. “A Tribute to Frank F. Allbritten, Jr. Origin of the Left Ventricular Vent during the Early Years of Open-Heart Surgery with the Gibbon Heart-Lung Machine.” Texas Heart Institute Journal, vol. 28(4), pp. 292-296. [On-line] Available: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=101205, 2001. [30 Jun 2009].
- Lawrence H. Cohn. “Fifty Years of Open-Heart Surgery.” Circulation, vol. 107, pp. 2168-2170. [On-line] Available: http://circ.ahajournals.org/cgi/content/short/107/17/2168, 2003. [29 Jun 2009].
- Ludwig K. Von Segesser. “Peripheral cannulation for cardiopulmonary bypass.” Multimedia Manual of Cardiothoracic Surgery. Internet: http://mmcts.ctsnetjournals.org/cgi/content/full/2006/1009/mmcts.2005.001610, 2006. [30 Jun 2009].
- Eugene A. Hessel, II, and L. Henry Edmunds, Jr. “Extracorporeal Circulation: Perfusion Systems.” Cardiac Surgery in the Adult. [On-line] New York: McGraw-Hill. Available: http://cardiacsurgery.ctsnetbooks.org/cgi/content/full/2/2003/317, 2003. [30 Jun 2009].
- “Venous Reservoirs.” Perfusion Equipment. Internet: http://www.perfusion.com.au/CCP/Perfusion%20Equipment/Venous%20Reservoirs.htm, 2008. [30 Jun 2009].
- Masaru Yoshikai, Masakatsu Hamada, Kyoumi Takarabe, and Yukio Okazak. “Clinical Use of Centrifugal Pumps and the Roller Pump in Open Heart Surgery: A Comparative Evaluation.” Artificial Organs, pp. 704-706. Internet: http://www3.interscience.wiley.com/journal/121514553, 2008. [30 Jun 2009].
- Gordon Giesbrecht and James A. Wilkerson. Hypothermia, Frostbite, and Other Cold Injuries. Seattle: Mountaineers Books, 2006.
- “Membrane Oxygenators.” Perfusion Equipment. Internet: http://www.perfusion.com.au/CCP/Perfusion%20Equipment/Membrane%20Oxygenators.htm, 2008. [30 Jun 2009].
- J. Arens, H. Schnöring, F. Reisch, J.F. Vázquez-Jiménez, T. Schmitz-Rode, and U. Steinseifer. “Development of a miniaturized heart-lung machine for neonates with congenital heart defect.” American Society for Artificial Internal Organs Journal, vol. 54(5), pp. 509-13. Internet: http://www.ncbi.nlm.nih.gov/pubmed/18812743?ordinalpos=3&itool=EntrezSystem2.PEntrez.Pubmed.Pubmed_ResultsPanel.Pubmed_DefaultReportPanel.Pubmed_RVDocSum, 2008. [30 Jun 2009].
- "AHA Open-Heart Surgery Statistics.” American Heart Association. Internet: http://www.americanheart.org/presenter.jhtml?identifier=4674, 2009 [29 Jun 2009].
- Mark Z. Jacobson. Fundamentals of Atmospheric Modeling. New York: Cambridge University Press, 2005.
- “Ready for action: The 17.5 kg heart-lung machine.” European Hospital Online. Internet: http://www.european-hospital.com/topics/article/2412.html, 1 Sep. 2007 [6 Jul. 2009]. | <urn:uuid:1cf4376d-b0e3-4fb4-a0e0-c671ecf99913> | CC-MAIN-2017-17 | http://illumin.usc.edu/14/engineering-the-heart-lung-machine/fullView/ | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122739.53/warc/CC-MAIN-20170423031202-00072-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.900875 | 4,197 | 3.71875 | 4 |
An Introduction to Sake and Japan
The Japanese archipelago stretches over 3,000 km from north to south, and as a result encompasses a variety of lifestyles and customs. In addition, Honshu (the main island) is divided into the Pacific Ocean side and the Japan Sea side by its backbone ridge, which rises over 1,000 meters in elevation. This division results in further differences in lifestyles and customs.
Accordingly, various cultures of food and drink, festival rites, and folk entertainment have developed according to the climate of the plains, basins, mountains, and seasides. Since politics and religion historically exerted little influence over these areas, the cultures of each small local community have been well preserved.
Throughout this history, the pursuit of higher-quality sake has steadily advanced.
For example, in ancient times, it was the custom for the people in each region to brew and drink sake with Shinto deities after offering it to those deities at festivals and events. The main sake was called doburoku (unrefined sake). However, such a tradition has declined these days.
Even more ancient types of sake, such as kuchikami-no-sake (sake made from rice or other cereal that is chewed to promote fermentation) and shitogi-zake (sake made from powdered rice, which is also chewed), are mentioned in records, but the details have not been confirmed.
Seishu (refined sake) is the standard form of present-day sake. In urban areas, it dates back to the Edo period (17th to 19th century); in farming, mountain, and fishing villages, however, it spread only after the Meiji era (19th to 20th century), with the development of brewing techniques and distribution channels.
Present-day sake is made with high-quality standards for a refined taste and is easily available.
However, this standardization does not necessarily mean the decline of the cultural aspects of sake. The relationships of festival rites and sake, appetizers and sake, and containers and sake pass on the unique Japanese tradition, although the differences of the regions are declining.
By striving for the excellent taste and recounting the history of sake, we hope to pass on this part of Japanese culture to future generations and the international community.
History of Sake
Sake is made from rice. In Japan, sake has been consumed since ancient times. Of course, it is not exactly the same sake as what we have these days. The technique has advanced over time to the present day. Considering that the common ingredient, rice, is both the staple of Japanese food and the main ingredient of sake, this history goes back about 2,000 years.
The brewing of sake is a complex process. First, the rice starch needs to be converted into sugar. Then sugar is converted by kobo (yeast) into alcohol. The present, established method of converting starch into sugar is by koji-kin (aspergillus mold), the same process used since the fourth century. Until that time, sake was brewed by a method such as kuchikami-sake (sake made from rice or other cereal, which is chewed to promote fermentation.)
The organization called Miki-no-Tsukasa (sake brewery office) was established by the Imperial Court and started brewing sake for the ceremonies during the Heian period (eighth to 12th century). During the Muromachi period (15th century), hundreds of small-scale sake shops were born in Kyoto and sake came to be brewed throughout the year. At the same time, the brewers of soboshu, sake brewed in temples in Nara and other places, came to lead the development of brewing techniques.
Since then, the technical development with consistent quality has progressed and from the middle of the Edo period (around 18th century), the brewing technique was established and is similar to the technique used today.
First, koji-kin (aspergillus mold) is carefully grown over the steamed rice to make komekoji (malted rice). Then, to komekoji, steamed rice and water are added to make the fermentation starter, shubo (yeast mash). After that, the fermentation is promoted by the method called danjikomi (three-step fermentation process) by adding steamed rice, komekoji, and water three times. After the fermentation, sake is filtered, pasteurized at low temperature, stored, and matured. This production method requires very complex, advanced skill.
At around this time, it became popular to concentrate brewing sake in the best season, winter. This technical development gave rise to the special professional group of sake brewing consisting of toji (chief sake brewer) and kurabito (a worker at a sake brewery.) Migrant workers mainly from farming villages during agricultural off-season became the professional group.
It was also discovered that the quality of water used in brewing had an effect on the brewing of sake. It was the development of the breeding of rice, brewery science, and manufacturing facilities after the Meiji era (19th to 20th century), which marked the beginning of modern Japan, that established the modern brewing process. However, the skill involved with the multiple parallel fermentation process, which converts rice starch into sugar by koji-kin (aspergillus mold) and converts sugar into alcohol by the power of kobo (yeast) simultaneously, has not changed even today.
This fermentation method performs simultaneous saccharification of the rice and alcoholic fermentation of the resulting sugar. With this method, the risk of putrefaction is lower and the alcohol content becomes higher than when saccharification and fermentation are carried out separately.
Various Sake Produced in Climate Conditions of Japan
Japan, which is situated off the northeast portion of the Eurasian continent, is a long, arc-shaped island country, surrounded by the Kuroshio (warm current) flowing from south to north and the Oyashio (cold current) flowing from north to southwest. The climate varies greatly from north to south and from the Pacific Ocean side to the Japan Sea side. Japan also belongs to the temperate monsoon region and experiences four seasons. However, due to the central mountain range that divides the archipelago, the character of the climate, even at the same latitude, is quite different on the Pacific Ocean side and the Japan Sea side.
As a result, the farm and marine products are very different in each region. Although food from all over the country is available these days, it was in the past the custom for the Japanese to eat local food using local recipes. Therefore, traditional Japanese cuisine is as diverse in flavor, seasoning, and cooking methods as each region.
As a result, the basic approach of the over 1,000 breweries in Japan is to match the sake to the local diet. For example, there are many red fish caught from the Pacific Ocean, white fish from the Seto Inland Sea, and fatty fish from the Sea of Japan because of the extremely cold winters. Food preservation developed in the inland provinces. In addition, some breweries brewed sake for Edo (present-day Tokyo), which was the world's largest consumer city during the Edo period (17th to 19th century). Brewing sake for each lifestyle and diet was developed and refined in each region.
Even now, the Japanese cultural sensitivity to the four seasons is reflected in how sake is consumed. Each season brings us a different type of sake and a different way to drink it. In autumn, we have hiyaoroshi, which is sake well matured over the summer; in the winter to early spring, shiboritate (fresh sake) with a fresh flavor; in the hot summer, namazake (unpasteurized sake), which is cooled in the refrigerator. Some prefer to drink sake cold or at room temperature called hiya (unwarmed sake). On the other hand, even these days, others prefer the traditional drinking custom of kanzake (warmed sake) from autumn to spring.
Recently, a technical approach to sake brewing has developed. There are the traditional kimoto and yamahai with a sour and thick taste; and daiginjo (very special brew) with the fruity taste using highly polished rice and brewing at a low temperature. Recently, sparkling sake is being produced.
Kimoto is the traditional method of growing active kobo (yeast) through the action of lactic acid produced by naturally occurring lactic acid bacteria, while suppressing the activity of other bacteria.
Yamahai uses the kimoto-style method of growing shubo (yeast mash) but omits the step called yamaoroshi, in which the rice is ground while the kobo is being activated.
Most importantly, the quality control of sake after shipping is essential for enjoying the delicate taste and different flavors. The reason for the sake containers to have lightproof brown or UV-cut bottles is to reduce the sunlight, the most dangerous factor for preserving sake. For drinking delicious sake, it is important to store it in a cool, dark place.
Three Reasons Why Sake Goes Well with Japanese Cuisine
A distinct flavor produced by the brewing of sake is called umami, or savory good taste. These days, sake is consumed with a variety of delicious foods. Traditionally, however, it was consumed with a simple appetizer called sakana. The variety of conditions spanning east to west in Japan has produced a diversity of flavors complementary to the local sake.
- Sake contrasts well with salty foods. Because the Japanese summers are hot and humid, salted seafood evolved as a preservative over smoked foods. Therefore, many appetizers that are consumed with sake are high in salt content. Shiokara (salted and fermented fish innards) and naresushi are such examples. It was also common to have sake with salt and miso (fermented soybean paste) only. The umami character of sake goes well with the salty taste of these appetizers.
- Sake complements fermented foods. The variety of ingredients used in Japanese cuisine results in unique seasonings. Common seasonings such as shoyu (soy sauce), miso, komesu (rice vinegar), and mirin (sweet sake for cooking) are all fermented using koji (malted rice). In particular, shoyu and miso, like sake, are uniquely developed in each region and have become the main taste of the local cuisine. The predominant use of fermented foods and almost no use of oils and fats are the features of “Washoku: Traditional Japanese Dietary Cultures” listed on UNESCO’s Intangible Cultural Heritage list.
- Sake is good in recipes for cooking. The variety of fish, which Japanese people prefer to eat, is rich in minerals and calcium, more than that of Western food. Sake goes well with these flavors. Additionally, it has a good masking effect to remove the odor of raw fish. Therefore, sake is often used, not only as a drink, but also as a cooking ingredient. For these reasons, sake goes well with Japanese cuisine.
The good taste and variety of present-day sake have made it popular not only across a wide range of Japanese cuisine but also with international dishes, including fatty meats.
Sake Strongly Connected with Traditional Ceremonies
Shinto is a polytheistic belief system based on nature and ancestor worship. As such, there are many Shinto deities throughout Japan. Based on farming culture, Japan cultivates rice in the northernmost possible location of the world. Rice produced under these severe weather conditions has become the most precious staple food for the Japanese. It has been ancient tradition to celebrate the good harvest and express gratitude by offering sake to the deities. The food and sake offerings to the deities are called shinsen. Although there are various offerings for each region, the essential ones are as follows: miki (sake made from fermented rice), mike (washed rice or boiled white rice), and mikagami (round rice cake made from pounded steamed rice).
These days, the Japanese people eat rice throughout the year as a staple food. However, in the older days, people used to eat katemeshi, rice mixed with crops such as millet as a staple food, eating pure rice only on honored days such as ceremonies. In addition, sake made from the abundance of valuable rice and through much effort has become the most important part of these offerings.
Drinking sake with the deities and offering gifts to them on festival days are traditions passed on to today. Even today, the summoning of the Shinto deities is a tradition that is preserved throughout Japan.
For example, the ceremony jichinsai, held before the construction of new buildings, is performed by sprinkling sake over the property and offering it to the deities. Furthermore, Japan celebrates its four distinct seasons with festivals called Sekku, performed at the turning point of each season. Although it has been simplified in recent years, it used to be the custom to float seasonal flower petals on sake, admire the flowers, and drink: peach sake in March, sweet-flag sake in May, and chrysanthemum sake in September. People drink it to ward off evil spirits and wish for a long life. Also, on New Year's Day, there is a custom by which people wish for peace in the new year by drinking a sake called toso, seishu (refined sake) infused with about ten kinds of herbs.
While feeling the change of each season, we Japanese hope to cherish those events by celebrating with sake and strengthen the ties now and forever.
Sake Necessary for Social Bonding
Since ancient times, Japanese have used sake as a way to create special bonds with each other. Sakazukigoto is a ceremony meaning the exchanging of sake cups. San-san-kudo is the most popular type of ceremony. After pouring sake, each person takes three sips of sake from each of three kinds of cups: large, middle, and small. It is important to sip three times as the number three is considered lucky. Especially in wedding ceremonies, san-san-kudo is usually performed while making vows before Shinto deities.
Outside of weddings, a custom called katame-no-sakazuki (ceremony of exchanging sake cups as a pledge of friendship) is used when people with no blood relationship become sworn brothers or a parent and a child. The phrase, “exchanging sake cups,” has a similar meaning as “contract” in Western societies. The phrases “drink sake together” and “eat out of the same pot,” mean closer relationships without any special contracts.
During present-day Japanese banquets, we often hear the phrase like “let’s do without the formalities and make ourselves at home today.” This means that there is no distinction between social statuses for developing relationships. Usually organizers and guests of honor give the opening speech to propose the toast saying, “kampai” at the beginning of the banquet. Kampai means to dry or empty a glass. It is a Japanese word to express not only a toast, but also a feeling of cultural bonding.
After this reiko (formal ceremony), people start bureiko, an informal party. The phrase "we wish you continued success and prosperity…" is usually used to propose the toast of kampai.
The word kinen means "praying to the deities." In short, the original traditional ceremony, sakazukigoto (the exchanging of sake cups), is symbolized in the act of the toast, kampai, as a simplified confirmation of the purpose of the gathering. Therefore, we make a toast, kampai, with sake to pray to the deities.
Originally, it was common that people drank sake not only for auspicious occasions but also for funerals and Buddhist services. People drank sake to bid farewell and to remember the deceased. For important emotions in Japanese life, sake was indispensable.
Gift Exchange Culture and Sake
It is ancient tradition and customary for people to exchange sake as gifts. First, sake is indispensable as the offering to the deities.
People bring sake as a celebratory gift on New Year's holidays and at festivals, saying words such as "we offer this to the Shinto deities" or "we offer this to Buddha." After offering sake to the deities, people partake of osagari, consuming the sake together with the deities. Therefore, sake is indispensable as a gift on festival days.
Also since ancient times, sake has been used as an expression of sympathy and condolence, particularly in the case of fires and disasters. It was customary for neighbors to help clear the debris after such events, and to bring sake to lift spirits and restore good luck. In this way, the custom of bringing sake as an expression of sympathy after fires and accidents was established.
There are other uniquely Japanese gifts called o-chugen in summer and o-seibo at the end of the year. These are gifts from one person to another to express gratitude for their help. The gift-giving custom of o-chugen and o-seibo started during the Edo period (17th to 19th century), when subordinates gave gifts to superiors as a token of their gratitude. In return, the superiors would give back a gift of twice the value, called baigaeshi. Soon after, this custom became popular regardless of social rank. The main gift was sake.
Although modern society has a variety of items for gift giving, the custom of giving gifts as religious offerings, expressing sympathy, and o-chugen and o-seibo are deeply rooted in Japanese society. Sake still shows its presence as one of the main gift items.
Development of Sake and Its Distribution
Originally, sake was brewed in each region throughout Japan as local production for local consumption. From the late Muromachi period (16th century) to the early Edo period (17th century), the brewing industry was concentrated in the Kinki region such as Nara, Fushimi, and Itami.
This changed during the Edo period (17th to 19th century) because of a peaceful 300-year reign of the Tokugawa shogunate and a developing economy. Since the population of Edo (present-day Tokyo), the center of politics, was already over one million, there was a strong demand for sake there. In addition, the shogunate and related domains strictly controlled the licensing system for production and sales of sake.
Present-day Nada in Hyogo Prefecture, the largest sake-producing district, grew as the largest sake supplier for Edo. Originally, the Kansai region had the concentration of sake brewing techniques from the Nara period (eighth century). Also, the extremely cold winter climate was suitable for brewing sake. In addition, an abundant supply of hard water called miyamizu, suitable for brewing sake, was discovered there.
As it was located near Osaka, the center of the nation's economy, a special sea route using ships called tarukaisen was established for shipping the sake to Edo. Although several sea routes surrounded the Japanese archipelago, throughout the Edo period the original purpose of this route was the transport of sake.
The sake wholesale district in Edo, Shinkawa, which was established as the shipping discharge base in Edo, became the largest base of sake distribution in eastern Japan. Sake brewing in Nada was developed to the taste of urban residents of Edo. Nada thus grew as the representative sake-producing district in Japan. Because of the abundance of sake shipped to Edo, it was easily available to the population.
Since the main distribution system moved from maritime to railroad in the Meiji era (19th to 20th century), several sake brewery districts were established mainly for selling outside of their own area: Fushimi in Kyoto Prefecture, Saijo in Hiroshima Prefecture, and Jojima in Fukuoka Prefecture.
Nowadays, people can drink various locally brewed sake quite easily throughout Japan owing to the development of reliable logistics systems. Presently, the most productive districts of sake are Hyogo Prefecture, Kyoto Prefecture, Niigata Prefecture, Saitama Prefecture, Akita Prefecture, and Aichi Prefecture.
Sake as the National Alcoholic Drink of Japan
Presently in Japan, people can drink various types of alcohol such as beer, wine, and whiskey along with various foods from all over the world. It is important for us to understand and respect the cultural background of each country as we consume its traditional food and drink.
Although the Japanese diet has undergone many changes, the conventional Japanese cuisine and sake are being seen in a new light. At the same time, the cultural and historical significance of Japanese cuisine and sake have come to attract people’s attention as well.
The reasons why sake qualifies as "the national alcoholic drink of Japan" are the following: it is made from rice and water, the blessings of the Japanese climate; it uses the unique technique of koji-kin (aspergillus mold), grown in that same blessed climate; it has a long history of being consumed throughout Japan; it has a strong connection with Japanese native beliefs, traditional annual events, and lifestyle; and it is brewed all over Japan.
Therefore, cherishing “the national alcoholic drink of Japan” is none other than being proud of Japanese culture. Of course, it is also important to deepen the mutual understanding by respecting foreign cultures, histories, foods, and alcoholic drinks. Japanese sake has been recognized overseas as the word, “sake.” Furthermore, recently the words such as ginjo (special brew sake) and junmai (pure rice sake) have become popular as well. In recent years, the export volume of sake for overseas has increased favorably.
The Japanese have promoted sake overseas as the representative of Japan, in other words, “the national alcoholic drink of Japan.” | <urn:uuid:18215b60-7fd6-4459-a935-b0cdb4515b0c> | CC-MAIN-2017-17 | http://www.talkativeman.com/tag/culture/ | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122041.70/warc/CC-MAIN-20170423031202-00543-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.967393 | 4,612 | 2.96875 | 3 |
Make a simple animated Gingerbread character in Blender Part I: modelling
The following tutorial takes the Reader through the process of making a simple animated low-poly Gingerbread character in Blender (any version, Blender 2.50 or above), from the initial start-up file through modelling and modifying the mesh, to Material assignments and UV unwrapping, to rigging and finally animating a simple walk cycle (an animation that loops).
- "MMB" = MIDDLE mouse
- "LMB" = LEFT mouse
- "RMB" = RIGHT mouse
- "MMB" = Rotate Scene
- "Shift+MMB" = Strafe Scene
- "Ctrl+MMB" = Zoom Scene
The exercise is suitable for beginners wanting to learn the basic process of making animated characters for games or other similar environments. Prior familiarity with Blender, or with 3D modelling in general, is not strictly necessary, although it will help.
Design note: for more on basic Scene navigation in Blender click here, else to recap before continuing; "middle-mouse" ("MMB") click+hold+drag rotates the Scene; "Shift+MMB" drag strafes left/right and up/down; with "Ctrl+MMB" drag zooming in/out.
The character being made is based on a 'concept', an image that will also be used later as the Texture assigned to the mesh (the image the viewer sees when looking at the model). Because of this the image itself is organised as it would be for that purpose, a 'front' and 'back' representation clearly defined as separate regions of the image so any eventual UV map(s) can be positioned without interference from or with other areas of the image. Normally when using 'concepts' as a basis from which to model, they are not usually as well organised because they generally serve an inspirational rather than diagrammatic purpose (a 'reference' rather than 'blueprint' to be copied exactly).
Design note: 'concepts' are helpful but not specifically required when working on personal projects. They are however, useful when communicating initial ideas to others so it's always worth drawing out or collecting together references to get familiar with the idea of using them.
The 'concept' image open in a photo/image editor (Corel PhotoPaint in this instance). As can be seen it's not a 'concept' in the normal sense but the texture that will later be assigned to the mesh to provide its surface appearance (of being made from 'ginger')
Using a Background Image ^
First thing to do is to load in a background image (see source file). Press "NumPad 5" to toggle out of "Persp" ("Perspective") into "Orthogonal" mode; the Scene 'flattens', removing the default three-dimensional perspective. Then press "NumPad 1" to switch to the "Front" view. The Scene will reorient itself to face front.
Design note: background images only display in Orthogonal views - "Front", "Right", "Top" etc. so it's preferential to switch beforehand so as not to mistakenly think the image hasn't loaded due to the Scene still being in perspective mode.
Next in the 3D View's Header menu (runs along the bottom of the view), click "View" then "Properties" ("View » Properties") to open/access the "Properties" panel which appear to the left of the window.
Design note: as with most functions in Blender there are several ways to open/access or perform an action, in this instance pressing "N" whilst the mouse-cursor is over the 3D View will toggle open/close the "Properties" panel.
Scroll down the panel to the "Background Image" subsection and next to the heading left-click the checkbox to activate and display the relevant properties and options. Here, click "Open" then in the "File Browser" that appears find the image to be used, left-clicking to select, before then finally left-clicking the "Open Image" button top-right to load.
Design note: when activating options it's often the case that only the tool section heading and the activate/deactivate checkbox is/remains visible by default. When this happens simply left-click the small black arrow pointing to the right (which flips to point downwards) to open/expose the relevant properties and options. Images need to be square or rectangles unless 'shaped' using alpha masks and/or transparency (so long as they are saved to an appropriate format).
From the initial start-up file, open the 3D View's "Properties" panel and activate "Background Image" down at the very bottom (scroll down) by left-clicking the checkbox
Load an image by clicking the "Open" button in "Background Image" properties, then browse and select the picture with the Gingerbread character on it. Clicking "Open Image" top-right loads it into the 3D View as a background overlay ["background.blend"]
Depending on the size of the image there may be a short delay after which Blender will return to the previous screen displaying the bitmap so its centred in the workspace, i.e. the "Z" and "X" axes of the grid will bisect the image, essentially quartering it, indicating its positioned dead-centre of the 3D View.
Design note: background images do not replace the 3D View's grid, they augment it (although it is possible to create that effect by turning off the grid's display in the "Display" settings, again in "Properties"). In other words, grid lines and their subdivisions remain visible throughout. In addition, background images don't affect "Grid Settings", so those can be adjusted as needed without issue.
Normally this default behaviour can be left as-is, but because a particular section of the bitmap needs to be used as a guide for building the Gingerbread character, the image needs to be moved to one side so the modelling process isn't made any more complicated than necessary (cf. mirroring). This can be done using "Offset". In "Background Image" properties, click the arrow on the left side of the "Horizontal Offset" input field (second row from the bottom); this moves the image incrementally to the right, offsetting the left side, until the 'blue' vertical line ("Z" axis) is positioned approximately centre-mass of the gingerbread man on the left side of the image. The input field should end up at "2.300" or thereabouts, situating the image ready for use as a guide around which the mesh is shaped.
Design note: offsetting the background is not the same as moving the Scene using the middle-mouse button.
The 'front' section of the image is to be used to make the mesh so using an "Offset" value position the image so the "Z" axis cuts down the middle - click the 'left' arrow of the "Horizontal Offset" input field or "LMB" click to edit and type a value, "2.300" for example
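The arithmetic behind that offset value can be sketched in plain Python (this is illustration only, not the Blender API; the image width and subject position used below are assumptions chosen for the example, not measurements from the actual bitmap):

```python
# Illustrative arithmetic: how a "Horizontal Offset" value can be
# estimated for a background image centred on the origin.
def horizontal_offset(image_width_units, subject_fraction):
    """Offset needed to put a subject (at `subject_fraction` of the
    image width, measured from the left edge) on the Z axis (x = 0),
    given an image centred on the origin."""
    return image_width_units * (0.5 - subject_fraction)

# e.g. an image ~10 Blender units wide with the 'front' figure
# centred about 27% in from the left edge:
print(horizontal_offset(10.0, 0.27))  # → 2.3
```

In practice the value is found interactively by nudging the offset until the "Z" axis bisects the figure; the sketch just shows why a subject left of centre needs a positive (rightward) offset.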
Edit Mode & Blocking out ^
As with most types of 3D modelling the process of making a character is similarly done in stages, the first typically being to rough out an overall shape, which serves two primary purposes; 1) it guides the rest of the build; 2) it creates an initial impression of 'mass' or 'volume'. In other words when starting a project all that's initially needed is a blocked out form, one that provides a very general impression of the character, its shape, size and dimensions. As this isn't a sophisticated structure at this point it can be built using extrusions and other basic manipulations of the mesh.
Design note: the full gamut of development typically follows... the initial primitive shape, blocking or roughing out, detailing, materials, UV maps and textures, rigging, animation (and finally export where needed).
First, whilst still in Object mode "Scale" the cube so it's approximately the width of the Gingerbread characters torso (though not as wide as the arms). To do this press "S" then "X" to resize whilst locked to the "X" axis (side to side) - "LMB" click to confirm. Or from the 3D View's Header click the "Scale" widget button and "LMB+drag" the 'red' handle to expand the mesh, "LMB" release to confirm. This provides the general 'body mass' starting point.
Design note: like other functions in Blender the various attributes of "Scale" can be accessed and performed in a number of ways - via shortcut or 'widget';
press "S" to scale on all axes ("XYZ") equally ('free' scale).
press "S" then either "X", "Y" or "Z" to scale along a specific axis.
using the default "Transform" manipulator (the one terminating with arrows), left-click hold on of the pointers, press "S", then drag to scale in the direction indicated by the arrow.
activate the "Scale" manipulator in the Header and left-click drag anywhere within the white circle to scale in all directions.
activate the "Scale" manipulator and left-click hold and drag one of the coloured 'handle' to scale along a specific axes.
Use the "Scale" widget (or press "S") to resize the cube so it's about the same width as the characters torso (but not as wide as its arms)
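What axis-constrained scaling actually does to the mesh data can be shown with a minimal plain-Python sketch (not the Blender API): only the chosen coordinate of each vertex is multiplied, relative to the pivot point, while the other two are left untouched.

```python
# Minimal sketch of axis-constrained scaling ("S" then "X"): scale
# (x, y, z) tuples by `factor` along one axis about a pivot point.
def scale_axis(vertices, factor, axis=0, pivot=(0.0, 0.0, 0.0)):
    return [
        tuple(
            pivot[i] + (co[i] - pivot[i]) * (factor if i == axis else 1.0)
            for i in range(3)
        )
        for co in vertices
    ]

# Two vertices on opposite X sides of a cube widen, Y/Z unchanged:
cube_side = [(-1.0, -1.0, 0.0), (1.0, -1.0, 0.0)]
print(scale_axis(cube_side, 1.5))
# → [(-1.5, -1.0, 0.0), (1.5, -1.0, 0.0)]
```

This is why scaling on "X" widens the torso without making it taller or deeper: the Y and Z terms pass through with a factor of 1.0.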
Next enter "Edit Mode" by pressing the "Tab" key or selecting "Edit Mode" from the "Mode Selector" in the 3D View's Header menu (runs along the bottom of the 3D View by default). The Scene will change slightly showing a new set of button and the entire cube will likely be pre-selected, often the default state when starting a project from the start-up file, so press "A" to deselect everything (or from the Header menu click "Select » (De)Select All").
Switch to "Edit Mode" to begin modifying the mesh pressing the "Tab" key, or by selecting "Edit Mode" from the Header menu and the "Interaction Mode" selector
To make this next step easier to do click the "Face Select" button to the right of the Widget selector used previously (middle of the Header), or press "Ctrl+Tab" to open the "Mesh Select Mode" menu, choosing "Face" from the list. The mesh will change slightly in appearance. Next check to make sure the button to the right of "Face Select" is inactive (off/disabled), the "Limit selection to visible" option - this allows edge selections to be made without interference from surfaces facing the viewer.
Design note: the "Limit selection to visible" button is usually disabled by default but should be double-checked to make sure - when active the buttons background will appear a darker grey colour whilst the icon image itself, the small dots surrounding a surface, will remain relatively light. Note additionally that the "Limit selection to visible" option is only available when the mesh is viewed in "Solid" or other 'opaque' display mode - it's not available in "Wireframe" or "Bounds" due to the 'transparent' or 'wire' nature of those mesh display modes.
Select the outer right-side 'edge' (relative to looking at the screen) by right-clicking ("RMB") the small black dot that can be seen - this is an 'origin' reference point for a given surface. It will highlight in orange to indicate it's selected. Then from the Header click "Mesh » Extrude » Region", or press "E". The selected face will immediately expand outwards following the mouse; move it to approximately the outside edge of the character's arm and "LMB" click to confirm the action. Repeat the process to create the arm on the opposite side, then the 'head' and 'lower torso' areas - click the black dots for each 'side', 'upper' and 'lower' edge, press "E" to "Extrude", then "LMB" to confirm the action (see image below for reference).
Design note: extrusions occur perpendicular to surfaces they're extruded from, in other words, vertical surfaces result in horizontal extrusion and vice versa. This behaviour is true for all surfaces. If a mistake is made during an extrusion, press "Ctrl+Z" to "Undo" (or from the Header menu select "Mesh » Undo/Redo").
Using "Extrude" the mesh is blocked out using simple shapes and forms; initially the arms, then head and lower-torso, all approximately positioned to create a roughed-out object
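The design note's point that extrusions move perpendicular to the source face can be illustrated in plain Python (again, not the Blender API): the direction an extrusion follows is the face normal, computed as the cross product of two edge vectors. The example face below is a hypothetical right-hand side of a unit-ish cube.

```python
# Why a vertical face extrudes horizontally: the extrusion direction
# is the face normal (cross product of two edge vectors, normalised).
def face_normal(a, b, c):
    """Unit normal of the triangle a-b-c (counter-clockwise winding)."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    length = sum(x * x for x in n) ** 0.5
    return [x / length for x in n]

# A face lying in the plane x = 1 (the cube's right side) has a
# normal pointing straight along +X, so it extrudes sideways:
print(face_normal((1.0, -1.0, -1.0), (1.0, 1.0, -1.0), (1.0, 1.0, 1.0)))
# → [1.0, 0.0, 0.0]
```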
For the legs, although they too are to be extruded, a slight modification of the mesh is needed first; extruding now would simply add another block to the bottom of the mesh instead of the two needed, one for each leg. Once the arms, head and lower body have been blocked out, from the Tool Shelf to the left (press "T" if it's not visible) click the "Loop Cut & Slide" button (or press "Ctrl+R") to initiate the "Loop Cut" tool. Then, moving the mouse over a horizontal edge of the mesh (side to side), "LMB" click when the pink loop-cut selector appears vertically up the middle of the mesh to set the orientation of the intended cut. The line will change colour to orange, after which "RMB" click to confirm the cut and have it automatically placed up the centre of the mesh (and the selected loop).
Design note: the difference between using "RMB" instead of "LMB" to confirm an action in this context (loop-cutting) relates to the way the cut is placed - using "RMB", Blender confirms the selection and the action in one, automatically placing the cut in the exact middle of the selected surface, whereas using "LMB" the cut can be freely positioned (loop selected) by the user before being confirmed and cut (the orientation is selected/confirmed first, then the cut).
Adding a "Loop Cut" down the centre of the mesh to allow for the extrusion of individual leg sections ["blockout-half.blend"]
Once the cut has been made the legs can be extruded as before - select an 'edge' black dot, press "E" to "Extrude" and "LMB" to confirm once the result has been positioned; the legs can also be splayed slightly at this point as this will make further editing easier, so "LMB+drag" the red handle of the "Transform" widget (or press "G") to the side slightly, in-line with the general position of each leg shown in the background image.
Design note: if the black dots aren't visible check selection mode is set to "Face". To do this press "Ctrl+Tab" to access the "Mesh Select Mode" menu and click "Face". Or alternatively in the 3D View's Header click the "Face select" button from the "Selection Mode" button array (series of buttons showing a grey cube highlighted to represent 'vertex', 'edge' or 'face' selection options).
The final block-out of the Gingerbread character with arms, legs and head extruded, and with a vertical "Loop Cut" up mesh-centre ["blockout.blend"]
Loop Cuts & Defining Shape ^
With the mesh blocked out the next step is to add a bit more structure so it can be better shaped to follow the general profile of the background image. This is done by placing a series of additional loop cuts on the left and right side of the body, on the arms, across the torso and across the head. As before, click the "Loop Cut & Slide" button in the Tool Shelf, or press "Ctrl+R", to initialise the tool, then move the mouse over a horizontal edge to one side of the previously cut centre-line; "LMB" click when the pink line appears and then "RMB" to confirm the selection and cut the loop. Repeat for the other side of the mesh, the end result being two additional cuts top-to-bottom, essentially quartering the torso vertically.
Design note: once the orientation of the cut is selected, using "RMB" to automatically place and cut the loop is optional in this instance because exact precision, i.e. the placement of an exact centre-line through a loop, is not absolutely necessary. Instead, "LMB" to select and then "LMB" again to cut can be used as normal.
To cut horizontally across the body (side to side) move the mouse over a vertical edge then "LMB" click as before to select the new orientation and again to set the cut - in this instance once across the upper torso and arms (the loop should cut around the mesh in a way that includes the arms), and twice for the head. The result should be similar to the image below.
Design note: where multiple loops are needed it is possible to place more than one at a time; once an edge has been selected to set the orientation, but before confirming the cut, scroll the middle mouse button up to add ("MMB+up"), or down to remove ("MMB+down"), cuts. To "Cancel" a loop-cut action press "Esc" or "RMB", which resets the operation, cancelling it as if no cut were made. This differs from "Ctrl+Z" to "Undo" because the latter implies the action has been confirmed and needs to be physically undone, whereas the former doesn't because the operation had not yet been completed.
Adding more loop-cuts to the mesh so it can be better shaped to match the background image - as the head needs to be relatively round it contains more loops compared to other sections ["loopcuts.blend"]
Once all the loop-cuts have been made it's time to shape the mesh. Before doing that however, first a "Display Mode" switch is needed that alters the appearance of the mesh so the background image is visible through it. Whilst still in Edit mode ("Ctrl+Tab"), to the right of the "Interaction Mode" selector (currently displaying "Edit Mode"), click the button with the small white sphere and from the pop-up list select "Wireframe" (or alternatively simply press "Z"). The mesh will change, becoming transparent in appearance. Next click the "Vertex Select" button (to the right of the widget selector) to switch to a mode better suited for this step in the process.
Design note: one of the main reasons for doing this is that "Wireframe" mode allows selections to 'drill down' through the mesh - although the object is being viewed face-on (in Orthogonal view), selections propagate down through the mesh so not only the upper element, but all those beneath, can be selected at the same time/through the same action, useful for quick 'group' selections or changes.
Switching to "Wireframe" and "Vertex Select" means subsequent selections drill-down through the mesh so not only the element visible to the viewer is selected but also those not directly in view (top and bottom vertices are selected in other words)
Once the Scene has been set up editing can begin. First make a selection. Because both top and bottom of the mesh need to be selected, rather than using "RMB" to do this press "B", or from the Header menu click "Select » Border Select", to activate the "Border Select" tool. An extended cross-hair appears centred on the mouse cursor. To make a selection "LMB+drag" a 'box' around one of the small black dots (vertices). It will change colour by way of confirmation. Then using the "Transform" widget handles, or pressing "G" to 'free translate', move the selection so it sits on the outer edge of the Gingerbread character relative to its location. Press "A" to deselect and then repeat the process for each vertex around the outer edge of the mesh so each occupies a position such that the result is shaped similar to the background image (see below).
Design note: the 'feet' of the character can be shaped using the available structure but an extra loop cut can be made so the area better conforms to the background image - when adding, once the vertical edge is highlighted and the cut direction set (the line turns orange), slide the mouse towards the bottom of the foot before left-clicking to confirm the addition.
Switching to "Wireframe" and using "Vertex Select" makes it easier to see the underlying background image, aiding the repositioning of vertices so they follow the general outline/contours of the character better ["shaped_half.blend"]
The outer edge of the mesh manipulated to follow the general contour of the underlying background image (note the additional loop cut towards the bottom of each leg which allows the mesh to be better shaped to fit) ["shaped_full.blend"]
Once the silhouette is done, press "NumPad 5" to toggle back into "Perspective" view and inspect the mesh; the outline is fine but the mesh is too thick (it needs to be a thin biscuit). This is easily fixed by switching back into Orthogonal view and editing the mesh from the "Right" (side) view. To do this toggle back into "Orthogonal" mode by pressing "NumPad 5", then press "NumPad 3" to switch to the "Right" side view. Using "Border Select" again, press "B", select in turn the 'front' and 'back' of the mesh and, using the 'green' handle of the manipulator widget, move each side closer to the blue centre-line. "LMB" to confirm after each action. The result should be a much thinner version of the mesh ready for Material assignment and UV unwrapping. Press "Tab" to exit "Edit Mode" (which automatically sets "Object Mode").
Design note: an alternative to being in "Wireframe" mode to make selections easier is to toggle "Limit selection to visible". With this feature disabled whilst in "Solid" display mode (where the mesh is displayed as opaque grey), selections will similarly drill down through the mesh because 'back-face culling', the non-display of elements that cannot be seen, is turned off. "Limit selection to visible" is only available for 'opaque' modes of display where the mesh appears 'solid'.
Inspecting the mesh so far in "Perspective" mode ("NumPad 5") it's noticeably thicker than needed (note the mesh is shown in "Solid" shaded mode, "Alt+Z", which also shows, because "Limit selection to visible" was previously disabled/turned off, structure 'behind' front-facing elements - re-enabling "Limit selection...." would prevent this, displaying only those elements facing the viewer) ["shaped.blend"]
From the "Right" side ("NumPad 3") and in "Wireframe" ("Z"), the entire front surface of the mesh can be selected in a single action (similarly for the back) before being adjusted to sit closer to the object's centre - approximately where the 'blue' line (representing the "Z" axis) is located
The final mesh ready for the next stage; Material assignment and UV unwrapping ["shaped_thin.blend"] | <urn:uuid:35d45290-9f93-4ce0-a59a-df3ba5ce0236> | CC-MAIN-2017-17 | https://www.katsbits.com/tutorials/blender/gingerbread.php | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118519.29/warc/CC-MAIN-20170423031158-00246-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.893872 | 4,988 | 2.5625 | 3 |
- I saw your email to Betty regarding aspartame and methanol.
I will enclose a detailed scientific statement related to aspartame and
methanol below. It answers every one of the industry's PR statements related
to this subject and provides detailed references.
- If you take the time to read through it, you'll get to
the issue of methanol in alcoholic beverages and methanol in fruits/foods.
Basically, alcoholic beverages are known to have a protective factor,
ethanol, that prevents the conversion of absorbed methanol to formaldehyde.
In other words, scientists have known for decades that methanol in alcoholic
beverages would cause serious chronic poisoning if ethanol were not present
as a protective factor. (Note: in the early 20th century some alcoholic
beverages were known to cause methanol poisoning because the methanol level
was too high -- "methanol-contaminated" so that even the high
ethanol level of the beverage would not be enough of a protective factor.)
- Recent research has shown that there is so much methanol
absorbed from fruits, often many times the amount that would be absorbed
from a methanol-contaminated alcoholic beverage, that there must be protective
factors in fruits and foods as well or else we would see millions of cases
of chronic methanol poisoning from fruit juice ingestion.
- On the other hand, aspartame contains no protective factors
that prevent the conversion of methanol into the highly-toxic chemical,
formaldehyde. In fact, the excitotoxin released from aspartame, free-form
aspartic acid, is known to increase the adverse effects of formaldehyde
toxicity. Instead of a protective factor, aspartame provides a chemical
that worsens the damage from formaldehyde.
- Below my signature line are the details related to aspartame
and methanol as taken from http://www.holisticmed.com/aspartame/abuse/
- Best Wishes,
- Mark Gold Aspartame Toxicity Information Center 12 East
Side Dr., #2-18 Concord, NH 03301
- 603-225-2110 email@example.com http://www.HolisticMed.com/aspartame/
- Scientific Abuse in Methanol / Formaldehyde Research
Related to Aspartame
- Please print and read completely through this document!
- Table of Contents
- Summary of Aspartame Methanol/Formaldehyde Toxicity
- Hiding the Blood Plasma Methanol Increase From Aspartame Ingestion
- Methanol and Fruit/Tomatoes: Convince the World That a Poison is "Natural"
- Avoiding the Discussion of Chronic Methanol Toxicity
- Convince Scientists & Physicians With Irrelevant and Flawed Formate Measurements
- The "It is Found in the Body, so a Proven Poison Must be Safe" Excuse to Eat Poison
- Formaldehyde & Formic Acid in Foods: A Final Attempt to Prove a Poison is "Safe"
- References
- Summary of Aspartame Methanol/Formaldehyde Toxicity
- "These are indeed extremely high levels for adducts
of formaldehyde, a substance responsible for chronic deleterious effects
that has also been considered carcinogenic.
- "It is concluded that aspartame consumption may
constitute a hazard because of its contribution to the formation of formaldehyde
adducts." (Trocho 1998)
- "It was a very interesting paper, that demonstrates
that formaldehyde formation from aspartame ingestion is very common and
does indeed accumulate within the cell, reacting with cellular proteins
(mostly enzymes) and DNA (both mitochondrial and nuclear). The fact that
it accumulates with each dose, indicates grave consequences among those
who consume diet drinks and foodstuffs on a daily basis." (Blaylock)
- Methanol from aspartame is released in the small intestine
when the methyl group of aspartame encounters the enzyme chymotrypsin (Stegink
1984, page 143). A relatively small amount of aspartame (e.g., one can
of soda ingested by a child) can significantly increase plasma methanol
levels (Davoli 1986a).
- Clinically, chronic, low-level exposure to methanol has
been seen to cause headaches, dizziness, nausea, ear buzzing, GI disturbances,
weakness, vertigo, chills, memory lapses, numbness & shooting pains,
behavioral disturbances, neuritis, misty vision, vision tunneling, blurring
of vision, conjunctivitis, insomnia, vision loss, depression, heart problems
(including disease of the heart muscle), and pancreatic inflammation (Kavet
1990, Monte 1984, Posner 1975).
- The methanol from aspartame is converted to formaldehyde
and then formic acid (DHHS 1993, Liesivuori 1991), although some of the
formaldehyde appears to accumulate in the body as discussed above. Chronic
formaldehyde exposure at very low doses has been shown to cause immune
system and nervous system changes and damage as well as headaches, general
poor health, irreversible genetic damage, and a number of other serious
health problems (Fujimaki 1992, He 1998, John 1994, Liu 1993, Main 1983,
Molhave 1986, National Research Council 1981, Shaham 1996, Srivastava 1992,
Vojdani 1992, Wantke 1996). One experiment (Wantke 1996) showed that chronic
exposure to formaldehyde caused systemic health problems (i.e., poor health)
in children at an air concentration of only 0.043 - 0.070 parts per million!
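For scale, that air concentration can be converted to mass units with the standard ppm-to-mg/m3 relation. Note the molar mass of formaldehyde (~30.03 g/mol) and the 24.45 l/mol molar gas volume at 25 °C are values supplied here for illustration, not figures from the cited studies:

```python
MOLAR_MASS_FORMALDEHYDE = 30.03  # g/mol -- assumed, not from the cited study
MOLAR_VOLUME_25C = 24.45         # litres/mol for an ideal gas at 25 C, 1 atm

def ppm_to_mg_m3(ppm, molar_mass=MOLAR_MASS_FORMALDEHYDE):
    """Convert a gas concentration from ppm (v/v) to mg/m3 at 25 C."""
    return ppm * molar_mass / MOLAR_VOLUME_25C

# The 0.043 - 0.070 ppm range from Wantke (1996) works out to
# roughly 0.05 - 0.09 mg of formaldehyde per cubic metre of air.
print(round(ppm_to_mg_m3(0.043), 3), round(ppm_to_mg_m3(0.070), 3))
```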
- Obviously, chronic exposure to an extremely small amount
of formaldehyde is to be avoided. Even if formaldehyde adducts did not
build up in the body from aspartame use, the regular exposure to excess
levels of formaldehyde would still be a major concern to independent scientists
and physicians familiar with the aspartame toxicity issue.
- In addition to chronic formaldehyde poisoning, the excitotoxic
amino acid derived from aspartame will almost certainly worsen the damage
caused by the formaldehyde. Synergistic effects from aspartame metabolites
are rarely, if ever, mentioned by the manufacturer. Aspartame breaks down
into a free-form (unbound to protein) excitotoxic amino acid which is quickly-absorbed
(as long as it is not given in slow-dissolving capsules) and can raise
the blood plasma levels of this excitotoxin (Stegink 1987). It is well
known that free-form excitotoxins can cause irreversible damage to brain
cells (in areas such as the retina, hypothalamus, etc.) in rodents and
primates (Olney 1972, Olney 1980, Blaylock 1994, Lipton 1994). In order
to remove excess, cell-destroying excitotoxic amino acids from extracellular
space, glial cells surround the neuron and supply them with energy (Blaylock
1994, page 39, Lipton 1994). This takes large amounts of ATP. However,
formate, a formaldehyde metabolite, is an ATP inhibitor (Liesivuori 1991).
Eells (1996b) points out that excitatory amino acid toxicity may be the
"mediators of retinal damage secondary to formate induced energy depletion
in methanol-intoxication." The synergistic effects from the combination
of a chronic formaldehyde exposure from aspartame along with a free-form
excitotoxic amino acid is extremely worrisome.
- It appears that methanol is converted to formate in the
eye (Eells 1996a, Garner 1995, Kini 1961). Eells (1996a) showed that chronic,
low-level methanol exposure in rats led to formate accumulation in the
retina of the eye. More details about chronic Methanol / Formaldehyde poisoning
from aspartame can be found on the Internet at http://www.holisticmed.com/aspartame/aspfaq.html.
- How did the manufacturer convince scientists and physicians
that it is "safe" to be exposed regularly to low levels of an
exceptionally toxic poison? Answer: Deceptive research and deceptive statements!
- Hiding the Blood Plasma Methanol Increase From Aspartame Ingestion
- On February 22, 1984, the acting FDA Commissioner, Mark
Novitch stated (Federal Register 1984):
- "... aspartame showed no detectable levels of methanol
in the blood of human subjects following the ingestion of aspartame at
34 mg/kg ...."
- The American Medical Association repeated this statement
one year later (AMA 1985). This statement was repeated in American Family
Physician in 1989 (Yost 1989). Shaywitz (1994) stated that there was no
detectable levels of methanol in the blood after aspartame administration.
Puthrasingam (1996) stated that methanol from aspartame is "undetectable
in peripheral blood or even in portal blood."
- All of these statements were very convincing ... and
very wrong! The statements were based on aspartame industry research which
used an outdated plasma methanol measuring test (Baker 1969). The test
they used had a limited of methanol detection of 4 mg/l. However, Cook
(1991) measured an average baseline (unexposed) methanol level of ~0.6
mg/l. Others (Davoli 1986, d'Alessandro 1994, Osterloh 1996) have measured
an average baseline methanol level of close to 1 mg/l. This means that
a person's methanol levels would have to rise 350% to 600% before an increase
would have been noticed by the industry researchers using this outdated
test! An increase of less than 350% to 600% appeared as no increase at all.
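The detection-floor arithmetic is easy to verify. A minimal check, using the 4 mg/l limit of the Baker (1969) assay and the baseline plasma levels quoted above (the exact percentage depends on which baseline is assumed):

```python
def rise_needed_pct(detection_limit, baseline):
    """Percent increase over baseline before the assay registers any change at all."""
    return (detection_limit / baseline - 1.0) * 100.0

# With a ~0.6 mg/l baseline (Cook 1991), levels must rise several-fold:
print(round(rise_needed_pct(4.0, 0.6)))  # ~567% increase required
# With a ~1 mg/l baseline (Davoli, d'Alessandro, Osterloh):
print(round(rise_needed_pct(4.0, 1.0)))  # 300% increase required
```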
- Probably only a handful of people in the world would
have noticed that by using a plasma methanol measuring test with limits
of 4 mg/l, they avoided seeing an methanol level increase -- even though
there was a large increase. Below are some of the experiments which used
the inappropriate methanol measuring technique.
- Research        | Dosage claimed not to raise methanol levels | Lowest possible methanol measurement | Other issues
- Frey 1976       | 77 mg/kg                   | Not stated | Test conducted after a 12-hour fast; all methanol would have been converted to formaldehyde.
- Stegink 1981    | 34 mg/kg                   | 4 mg/l     | Orange juice given despite discussion of the high level of methanol in fruit.
- Stegink 1983    | 34 mg/kg                   | 4 mg/l     |
- Leon 1989       | 75 mg/kg                   | 4 mg/l     | Test conducted after a 12-hour fast; all methanol would have been converted to formaldehyde.
- Stegink 1989    | 8 hourly doses of 10 mg/kg | 4 mg/l     |
- Stegink 1990    | 8 hourly doses of 10 mg/kg | 4 mg/l     | Fig. 4: graph of blood methanol concentrations shown with all points well below 4 mg/l -- the lower limit of their methanol measurement.
- Hertelendy 1993 | 15 mg/kg                   | 4 mg/l     |
- Shaywitz 1993   | 34 mg/kg                   | 4 mg/l     |
- Shaywitz 1994   | 34 mg/kg                   | 4 mg/l     |
- Note: 10 mg/kg is approximately a one liter bottle of
diet soda for a 60 kg adult and 1.5 cans of diet soda for a 30 kg child.
Children with aspartame freely-available can ingest between 27 mg/kg -
77 mg/kg (Frey 1976) and adult dieters have been shown to ingest between
8 mg/kg and 36 mg/kg (Porikos 1984).
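The beverage equivalences in the note can be reproduced with rough figures. An aspartame content of ~560 mg per litre of diet soda and a 355 ml can size are assumptions for illustration, not values given in the text:

```python
ASPARTAME_MG_PER_LITRE = 560.0  # assumed typical diet-soda content
CAN_VOLUME_LITRES = 0.355       # assumed standard can size

def dose_mg_per_kg(litres, body_kg, content=ASPARTAME_MG_PER_LITRE):
    """Aspartame dose (mg per kg of body weight) for a given volume drunk."""
    return litres * content / body_kg

# A one-litre bottle for a 60 kg adult, and 1.5 cans for a 30 kg child,
# both land near the 10 mg/kg figure quoted in the note.
print(round(dose_mg_per_kg(1.0, 60), 1))                      # ~9.3 mg/kg
print(round(dose_mg_per_kg(1.5 * CAN_VOLUME_LITRES, 30), 1))  # ~9.9 mg/kg
```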
- In 1986, Davoli (1986a) published a study which showed
that 6 mg/kg to 8.7 mg/kg of aspartame could significantly raise the plasma
methanol levels. The methanol levels nearly doubled in some cases. While
there were some logical errors in Davoli's conclusion (discussed below),
the study proved that by using a reasonable methanol testing method, plasma
methanol levels will increase from a relatively low dose of aspartame ingestion.
The methanol measuring technique used by Davoli was published in 1985 (Davoli
1986b) and was sensitive to 0.012 mg/l.
- Other researchers have used sensitive plasma methanol
measurement techniques. d'Alessandro (1994) measured plasma methanol levels
in humans well below 1 mg/l. Cook (1991) used a methanol test developed
in 1981 to measure plasma methanol levels in humans below 0.5 mg/l.
- What did industry scientists know or should have known?
- 1. They knew and admitted that their methanol testing
procedure developed in 1969 was not sensitive enough to detect the large
increases of plasma methanol levels when aspartame was given at doses of
34 mg/kg (Stegink 1984b).
- 2. They must have been aware that Davoli found methanol
levels increase significantly when aspartame was given at doses of 6 mg/kg
to 8.7 mg/kg. To believe that they were not aware of this, one has to believe
that none of the researchers chose to or knew how to conduct a simple
Medline database search.
- 3. They should have known that there were several legitimate
plasma methanol measurement techniques developed since 1969. Given that
they admitted their technique was not appropriate for aspartame doses of
less than 34 mg/kg (Stegink 1984b), they should have at least looked to
find an appropriate test.
- 4. Given that Leon (1989) was aware enough to test for
formate levels, he must have been aware that all of the methanol from aspartame
would have already converted to formaldehyde after a 12-hour fast.
- I believe that Monsanto/NutraSweet and the aspartame
industry are clearly taking advantage of physicians and scientists who lack
the time to carefully investigate each number in a study to see if there
is deception. While these actions may not amount to "scientific fraud,"
it does amount to an abuse of the scientific method in my opinion.
- Methanol and Fruit/Tomatoes: Convince the World That a
Poison is "Natural"
- Monsanto/NutraSweet's all-time favorite aspartame fairy tale:
- "In addition, exposure to methanol from many fruits,
vegetables, and juices in the normal diet is several times greater than
that from beverages sweetened with APM [aspartame]." (Butchko 1991)
- This statement from NutraSweet scientists has been repeated
countless times (AMA 1985, FDA 1984, Hertelendy 1993, Lajtha 1994, Monsanto
1999, Nelson 1996, Stegink 1981, Stegink 1983, Yost 1989, etc.). This is
very convincing ... but deceptive and irrelevant!
- It is well known that alcoholic beverages such as wine
contain a large amount of ethanol, a protective factor which prevents methanol
poisoning by preventing the conversion of methanol to the highly toxic
formaldehyde (Leaf 1952, Liesivuori 1991, Roe 1982). Because alcoholic
beverages contain protective factors which prevent chronic poisoning from
methanol metabolites (formaldehyde, formate), comparisons between the methanol
derived from aspartame and the methanol derived from alcoholic beverages are irrelevant.
- Clinical reports and a small number of epidemiological
studies appear to demonstrate that prolonged exposure to methanol air concentrations
(in the workplace) of 260 mg/m3 (200 ppm) can cause chronic methanol toxicity
(Kavet 1990, Frederick 1984, Kingsley 1954-55). The weekly amount of methanol
absorbed from a 260 mg/m3 workday exposure is (formula in Kavet 1990):
- (260 mg/m3 * 6.67 m3/workday * 5 workdays * 60% absorption
rate) / 70 kg adult = ~74 mg/kg weekly methanol
- Note: While this seems like a high weekly methanol dose,
please keep in mind that 1) much lower levels may cause toxicity in some
individuals; and 2) that aspartame breaks down into an excitotoxin which
will likely enhance the toxicity of methanol metabolites as described above.
- However, the ingestion of a moderate amount of apples
or oranges (or juice equivalent) per week leads to a similar exposure to
methanol (Lindinger 1997):
- (750 mg methanol (1.5 kg fruit) * 7 days) / 70 kg adult
= 75 mg/kg weekly methanol
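Both weekly-dose calculations can be reproduced directly from the figures given (the 6.67 m3/workday breathing volume and 60% absorption rate come from the Kavet 1990 formula quoted above):

```python
def weekly_occupational_dose(conc_mg_m3, air_m3_per_day, workdays, absorption, body_kg):
    """Weekly absorbed methanol (mg/kg) from workplace air exposure."""
    return conc_mg_m3 * air_m3_per_day * workdays * absorption / body_kg

def weekly_fruit_dose(mg_per_day, days, body_kg):
    """Weekly methanol (mg/kg) from daily fruit/juice ingestion."""
    return mg_per_day * days / body_kg

# 260 mg/m3 workplace exposure vs 750 mg/day from ~1.5 kg of fruit:
print(round(weekly_occupational_dose(260, 6.67, 5, 0.60, 70), 1))  # ~74.3 mg/kg
print(round(weekly_fruit_dose(750, 7, 70), 1))                     # 75.0 mg/kg
```

The two routes of exposure come out essentially equal, which is the point the text is making.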
- Keep in mind that tomatoes may have more than five times
the amount of methanol as that found in oranges (Kazeniac 1970, Nisperos-Carriedo
1990), so regular ingestion of tomatoes and tomato juice may
produce very large amounts of methanol.
- Lindinger (1997) points out that the amount of methanol
released in the human body from a few apples or oranges is equivalent to:
- "...0.3 liters of brandy (40% ethanol) containing
0.5% of methanol (compared with ethanol), which would qualify as significantly contaminated."
- Because of the high amounts of methanol in fruits/tomatoes,
enough that would clearly cause chronic methanol poisoning, these foods
must contain protective factors (as does alcoholic beverages). If they
did not contain protective factors, we would be seeing widespread methanol
poisoning in persons who ingest fruits and tomatoes regularly.
- The manufacturer showed that the protective factor in
fruits cannot be ethanol by itself (Sturtevant 1985), but there are a myriad
of chemicals in fruits which might serve as protective factors.
- What did industry scientists know or should have known?
- 1. They knew that alcoholic beverages contain protective
factors which prevent chronic methanol poisoning (Sturtevant 1985).
- 2. Because industry scientists regularly announced that
certain fruits contain extremely high levels of methanol, they should have
taken the time to find out that fruits have protective factors which help
prevent chronic poisoning from methanol metabolites.
- Avoiding the Discussion of Chronic Methanol Toxicity
- A number of Monsanto/NutraSweet public relations statements
as well as statements from government officials imply that the amount of
methanol obtained from aspartame is not toxic:
- "From estimates based on blood levels in methanol
poisonings, it appears that the ingestion of methanol on the order of 200
to 500 mg/kg body weight is required to produce a significant accumulation
of formate in the blood which may produce visual and central nervous system
toxicity" (Federal Register 1984)
- Lajtha (1994) claimed that "blood methanol concentrations
greater than 100 to 200 mg/L are required for clinical neurotoxicity or
for measurable formate formation." Non-scientists on the Internet
often make similar claims. Shahangian (1984) claimed that the amount of
formate (methanol and formaldehyde metabolite) is not enough to cause toxicity.
- This sounds very convincing until one realizes that the
doses they are referring to are the single doses required for death or near
death in humans! Monsanto/NutraSweet and persons promoting aspartame
avoid discussing chronic, low-level methanol or formaldehyde poisoning
because once this issue is raised it becomes apparent that the manufacturer
did not conduct or even cite any legitimate studies on chronic, low-level
methanol exposure in humans!
- Only on very rare occasions will the manufacturer mention
chronic methanol toxicity (Nelson 1996, Sturtevant 1985). When they do
this, they always cite a study of infant monkeys (a species closely related
to rhesus monkeys) (Reynolds 1984). A dose of 3,000 mg/kg of aspartame
was given to the monkeys for nine months. This amounts to a daily methanol
dose of 300 mg/kg -- a huge dose.
- What Monsanto/NutraSweet fails to mention is that 300 mg/kg
of methanol has been estimated as the minimum single dose which can cause
death in humans (Kavet 1991). If such a study were conducted on humans,
nine months of daily ingestion of the minimum lethal single dose of methanol
would clearly kill everyone in the study! As pointed out by Roe (1982),
methanol is significantly more toxic in humans than in monkeys or rodents.
It is important to note that the free-form excitotoxin derived from aspartame
and which will likely increase the formaldehyde/formate damage from aspartame,
appears to be approximately twenty times more toxic in humans than in monkeys
due to differences in excitotoxin metabolism (Olney 1988, Stegink 1979).
- What did industry scientists know or should have known?
- 1. They knew that there was never a controlled, long-term
study of methanol exposure in humans. Given that the manufacturer was expecting
to dose the human population with aspartame for a lifetime and even generations,
some might consider it criminal to sell a poison under these circumstances.
- 2. They should have known that an excitotoxin will likely
increase the toxicity of the formaldehyde/formate based upon the way these
chemicals produce cell damage and cell death. At the very least, the manufacturer
should have exhausted all reasonable possibilities of synergistic reactions
as opposed to using flawed research and flawed logic to explain away the
countless cases of aspartame poisoning.
- Convince Scientists & Physicians With Irrelevant
and Flawed Formate Measurements
- The FDA Commissioner has claimed (Federal Register 1984):
- "In the Searle [manufacturer] clinical study using
abuse doses of aspartame equivalent to 20 mg/kg body weight of methanol,
no significant increases were observed in plasma concentrations of formate,
suggesting that the rate of formate production does not exceed its rate
of urinary excretion."
- The AMA (1985) claimed that abuse doses of aspartame
have not been shown to increase blood formate levels. Stegink (1989, 1990)
claimed that large doses of aspartame did not raise blood and urine formate
levels significantly. Leon (1989) claimed to show no increase in urinary
formate from a daily dose of 75 mg/kg of aspartame. Hertelendy (1993) claimed
that there was no increase in urine or plasma formate levels from 15 mg/kg of aspartame.
- Since methanol metabolizes into formaldehyde and formaldehyde
metabolizes into formate, all of these statements appear to point to safety
... at first glance. But what the manufacturer does not tell you is that
these tests are now known to be irrelevant and flawed!
- Formate (formic acid) measurement of the urine is not
an appropriate test for low-level formaldehyde poisoning. (Keep in mind
that extremely low doses of formaldehyde have been shown to cause chronic
poisoning symptoms as discussed above.) Triebig (1989) states that formic
acid excretion in the urine is a "unspecific and insensitive biological
indicator for monitoring low-dose formaldehyde exposure." Schmid (1994)
found that neither a single significant exposure to formaldehyde nor a
week-long exposure to formaldehyde correlated with urine formic acid measurements.
After testing subjects exposed to formaldehyde, Heinzow (1992) stated:
- "Excretion [of formic acid] in the general population
is determined by endogenous metabolism of amino acids, purine- and pyrimidine-bases
rather than the uptake and metabolism of precursors like formaldehyde.
Hence in contrast to recent recommendations in environmental medicine,
formic acid in urine is not an appropriate parameter for biological-monitoring
of low level exposure to formaldehyde."
- Therefore, all of the aspartame industry's urine formate
measurements are useless for chronic methanol/formaldehyde poisoning from aspartame.
- Blood formate measurements also appear to be inadequate
for chronic, low-level methanol or formaldehyde poisoning. d'Alessandro (1994) stated:
- "While exposure to several different levels of methanol
above the threshold limit [200 ppm] might demonstrate slight increases
in formate concentrations, it seems doubtful that this measure would be
useful for monitoring individual low-level exposure."
- And after further study, Osterloh (1996) stated:
- "Previously, we reviewed exposure studies (both
occupational and experimental) in which formate concentrations were measured,
along with these data, as a basis for the conclusion that methanol, not
formate, in serum can be used as a biological marker of exposures."
- Three other reasons why aspartame industry formate measurements
can be considered useless include:
- 1. Trocho (1998) showed a significant amount of formaldehyde
from aspartame binding with proteins and accumulating in tissues rather
than metabolizing into formate.
- 2. The average baseline (pre-exposure) measurements of
formate in the aspartame industry research (e.g., Stegink 1981, 1989, 1990)
is unexplicabally 1.5 to 3 times higher than any other independent researcher
(d'Alessandro 1994, Baumann 1979, Buttery 1988, Heinrich 1982, Osterloh
1986, Osterloh 1996). As pointed out by Kavet (1990), high pre-exposure
blood formate levels "may have masked any subtle increases that the
aspartame may have caused."
- 3. A respected formaldehyde and formic acid exposure
researcher has pointed out that several formate measurement techniques
including the one used by aspartame industry researchers (Makar 1982) are
"notoriously inaccurate." (Liesivuori 1986).
- Unfortunately, there are still researchers who cite old
tests of formate levels related to aspartame ingestion even though these
tests have proven to be meaningless and flawed.
- What did industry scientists know or should have known?
- 1. The industry researchers should have keep up-to-date
on formate measurment research. Had they done that, they would have known
that such measurements are inappropriate for chronic, low-level methanol
and formaldehyde exposure.
- The "It is Found in the Body, so a Proven Poison
Must be Safe" Excuse to Eat Poison
- From time to time, it will be implied that because methanol
and formaldehyde are in the body, it is perfectly safe to add more. Acting
FDA Commissioner Mark Novitch stated the following (Federal Register 1984):
- "Normal metabolic processes such as purine and pyrimidine
biosynthesis and amino acid metabolism require methyl groups from compounds
like methanol. It also appears that either methanol or formaldehyde may
serve as precursors for the methyl groups in choline synthesis."
- On the Internet, this is a popular technique used to
try to convince people that methanol and formaldehyde exposure is safe.
What the FDA Commissioner and other persons unfamiliar with this issue
did not point out is that chronic poisoning from low-level methanol and
formaldehyde exposure is already accepted in the medical community. In
fact, children who were chronically-exposed to formaldehyde in the air
at concetrations of 0.05 parts per million (ppm) developed systemic health
problems after several months (Wantke 1996). This is equivalent to a daily
exposure of only 0.75 mg of formaldehyde (or less if 100% of the formaldehyde
is not absorbed):
- 0.05 ppm formaldehyde ~= 0.075 mg/m3 0.075 mg/m3 * 10
m3/workday = 0.75 mg/day (for a workday/schoolday)
- Other researchers have noted formaldehyde toxicity symptoms
appearing at chronic, low-level exposure (Srivastava 1992):
- "Complaints pertaining to gastrointestinal, musculoskeletal
and carbiovascular systems were also more frequent in exposed subjects.
In spite of formaldehyde concentrations being well within the prescribed
ACGIH [American Conference of Governmental Industrial Hygienists] limits
of 1 ppm...."
- This proves that formaldehyde levels in the body must
be very tightly-controlled since a very low daily exposure leads to health
problems. Even a very small, regular increase can lead to chronic, low-level
- Davoli (1986) showed that aspartame significantly increases
plasma methanol levels. However, he mistakenly concluded that because the
post-aspartame administration methanol levels did not rise above the baseline
methanol levels of every other human being, those levels might not be toxic.
What Davoli failed to consider, however, is that 1) we know that methanol
and formaldehyde levels in the body must be tightly controlled because
exposure to very low levels of these chemicals have been shown to lead
to chronic toxicity; and 2) that people have their own individual metabolism
so that a slight addition of formaldehyde to the current tightly-controlled
level in one individual could cause toxicity even though it might not rise
above the baseline level of another individual's formaldehyde level. As
one can see from the Davoli (1986) study, the administration of aspartame
lead to a fairly sudden and significant increase in plasma methanol levels
and would be expected to cause a significant formaldehyde exposure.
- Formaldehyde & Formic Acid in Foods: A Final Attempt
to Prove a Poison is "Safe"
- "... formaldehyde [exposure from aspartame is] comparable
to a serving of fresh broccoli." (Weber 1999)
- Every once in a while there will be a statement pointing
out that some foods have relatively high levels of formaldehyde and formic
acid. What is not pointed out is that formaldehyde in food is much less
toxic than formaldehyde from air exposure or formaldehyde from aspartame
exposure due to the way the body metabolizes it.
- Restani (1991) points out that formaldehyde can be found
in seafood, honey, fruits, vegetables, etc. Restani (1991) points to a
human study showing where 200 mg of formaldehyde per day was ingested for
13 weeks without showing adverse effects. This would be equivalent to an
daily air exposure of: | <urn:uuid:3d6a80e8-93b6-43b9-b323-707b78b8604f> | CC-MAIN-2017-17 | http://rense.com/health3/badnews.htm | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123046.75/warc/CC-MAIN-20170423031203-00544-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.89001 | 6,641 | 2.828125 | 3 |
Between 1859 and 1866 the Austrian light horse found itself at war with Frenchmen, Italians and Prussians. It has to be said that, despite almost suicidal bravery, it did not go well. Part of the problem seems to have been that they were, well, light. Idolisation of the charge, and disdain for firearms (as expressed to General McClellen in 1855), meant that they flung themselves at the enemy at every opportunity. Against infantry this was disastrous, but even against cavalry it was unwise. In a swirling cavalry melee, man against man, it often came down to sheer physical strength, and time and again contemporary commentators report how they lost to heavier men on larger horses, especially the Prussians, even Prussian hussars.
Not that there wasn't glory to be won.....
The 10th Hussars at Solferino, June 24 1859
Modern tactics of the three arms by General MW Smith 1869
The front of the Austrian line was covered by artillery, which came into action at from a thousand to twelve hundred yards' range. The four batteries of the first and second French divisions galloped forward to the line of skirmishers, and came into action. Two tumbrils of the Austrian artillery were blown up, after which they retired. It was at the commencement of this engagement that General Auger lost his arm.
During the artillery action, the 10th Austrian hussars, moving under the cover of the trees, which covered the ground, approached the left of the second French division, with the intention of turning the left of the second corps d'armee. Passing the line of skirmishers, they charged; but Gaudin de Villaine's brigade of cavalry encountered them. They charged three times, but were finally driven among the squares formed by the first brigade of the second division. The successful action of the cavalry, and the fire from the Duke of Magenta's position, held the enemy in check on this point of the battle field.
The 13th Uhlans at Custoza
24 June 1866
Near the town of Custoza the Austrian army of 75,000 men was facing 120,000 Italians. It was early morning, and the light brigade under Pulz was shadowing two Italian infantry divisions, one commanded by the Piedmontese crown prince, Umberto, and a division of heavy cavalry (20 squadrons).
Neither commander was under much inclination to attack just yet, but Lt. Col. Maximilian Rodakowski of the 13th “Trani” Uhlans had other ideas. He rode along the front of his lancers, shouting in Polish “Follow me! And when you can no longer see the regimental standard, look out for the plume of my czapka to see where the action is, and show what the Trani-Ulanen can do!”
Peeling off his 4 squadrons of Uhlans he charged directly at the Italians.
Pulz assumed that Rodsakowski could not possibly be so stupid as to attack, it must be a feint to draw the Italians off their position. He was wrong. It wasn't just the Austrians caught by surprise, the Italians desperately formed squares and Umberto, taking a morning stroll, only just made it into one in time.
The lancers thundered across the open ground, directly into the murderous fire of two divisions, as Pulz followed their progress as a dust cloud in the distance, with the rapidly increasing sound of rifle and artillery fire. Rodakowski's lancers crashed through a gap between the two divisions, only to run into a deep ditch that send horses somersaulting into the air. Those lancers still upright were halted, making easy targets. Worse, the 1st “Kaiser Franz Josef” Hussars had followed Rodakowski in, and suffered the same fate.
The survivors fled back to their lines, but half the Austrian cavalry had been slaughtered. Incredibly, the Austrians still won. The Italian supply train bolted, blocking bridges preventing the arrival of reinforcements, and panic spread in the rear divisions. Dello Roca, the Italian commander, believed this must be the start of a major attack, and halted advances on his left where the Austrians were weak. After a confusing day in which both sides thought they had lost, an Austrian attack finally broke through and Della Roca retreated.
The Alexander lancers at Sadowa, 3 July 1866
Modern tactics of the three arms by General MW Smith 1869
From the Prussian perspective
The first cavalry division under Major-General V. Alvensleben had, about three o'clock in the afternoon of the day of the battle, received orders to support the Elbe army; in consequence, the light cavalry brigade, Rheinbaden, consisting of one dragoon regiment of the guard, at the head, followed by the first and second Lancer Regiments of the Guard, broke tip from Johanneshof, where the division had been concentrated for some time, and marched in the direction of Nachanitz. The brigade during their march were exposed to a continuous fire, and Ridingmaster V. Bodelschwing, of the First Dragoons of the Guards, when near Lubno, fell mortally wounded.
The brigade crossed the Bistritz at Nechanitz soon after four p.m. The Commandant of the Dragoon Regiment, Lieut.-Colonel V. Barner, had received the order to report his arrival upon the scene of action personally to General V. Herwarth, commanding the Elbe army. It was impossible to find the General in the midst of the tumult and confusion raging on all sides, and nothing remained for him but to use his own discretion, and ride forward into the action; so putting himself at the head of his own regiment he took the direction of Problus; the advance of about five or six miles over broken and intersected ground, mostly covered with standing corn, at an uninterrupted trot, had considerably exhausted the horses during their advance; they encountered different parties of their own cavalry, the fifth, sixth and seventh Lancers, and a portion of the Neumark Dragoons were seen in the direction of Streselitz. As they still advanced they came in sight of a body of the enemy's cavalry to the east of Problus, which had hitherto been concealed from their view, consisting of the brigade Meugden. The King Ludvig of Bavaria's Cuirassiers on the right, the 11th (Alexander) Lancers on the left, and Count Neipperg's Cuirassiers in reserve: these troops had not as yet been in action.
The Alexander Lancers detaching themselves from the left of the brigade moved forward in the direction of the dragoons, who deployed at once and attacked, both regiments charged well home and mutually broke through each others ranks, a general melee and hand-to-hand conflict ensued, and for some minutes, the fighting masses surged hither and thither. Some files of the Austrian lancers rode between the intervals of the guns of Captain Caspasy's battery, which was attached to the Prussian regiment, and which, during the intermingling of friend and foe, had ceased firing. The dragoons at last prevailed in the desperate struggle.
The lancers were forced back, the greater portion towards Streselitz, and the remainder took a southern direction. The first were encountered by the regiment of Blucher's Hussars, under the command of Colonel V. Flemming, which had just appeared upon the battle-field from Unter Dohalitz; in spite of their disordered formation, the lancers stood the charge of the hussars bravely till taken in flank by the fourth squadron of the regiment, they broke and fled towards Streselitz.
At this moment, the first lancers of the guard of the brigade, Bheinbaben came upon the ground, and Colonel V. Colomb, commanding the regiment, was ordered by General V. Bheinbaben to take up the pursuit, and the lancers suffered severe losses in their retreat.
Generals V. Alvenslaben and Rheinbaben took part personally in this part of the action. In general, the different parties of the enemy's cavalry, scattered in retreat towards Streselitz and Laugenhof, mostly succumbed to the fire of the Prussian artillery and infantry stationed in the above-mentioned places; but one body of the Alexander Lancers held together in a most extraordinary manner till they reached the neighbourhood of Lipa, where his Majesty the King had taken up his temporary position; they had the temerity to dash forward and attempt a surprise, but a battalion of the thirty-fifth Prussian infantry received them with a murderous volley, and most of these brave men were sacrificed.
The Hesse Cassel hussars at Saar, 10 July 1866
The Seven Weeks War by HM Hozier 1867
NB. The author gives the name of the Austrian regiment as the Hesse Cassel hussars. Although at this time Hesse Cassel was a separate state, allied to Austria, this appears to be a regiment of the Austrian army and there are references to it in, for example 1854. He describes the prisoners as Hungarians, who would have served in the Austrian army and the description of a blue pelise with yellow facings would match the Austrian regiments.
The monotony of the march was relieved by a spirited cavalry skirmish in the little town of Saar, which is about six miles to the west of Neustadt. On the previous night the Austrian hussars of the regiment of Hesse-Cassel held Saar. The Prussian cavalry was to proceed on the 10th to Gammy, about a mile in front of Saar, and the 9 th regiment of Uhlans formed its advanced guard on the march. The Austrians intended to march the same day to the rear towards Briinn, and the hussars were actually assembling for parade previous to the march when the first patrols of the Prussian Uhlans came rattling into the town. The Austrians were collecting together from all the different houses and farmyards; mounted men, filing out of barns and strawhouses, were riding slowly towards their rendezvous in the market-place; men who had not yet mounted were leading their horses, strolling carelessly alongside them, when, by some fault of their sentinels, they were surprised by the Prussians. The Uhlans were much inferior in number at first, but their supports were coming up behind them, and this disadvantage was compensated for by the Austrians being taken unawares. The Uhlans quickly advanced, but did not charge before one Austrian squadron had time to form, and only while most of the men of the remaining divisions were quickly falling into their ranks, though some were cut off from the rendezvous by the Prussians advancing beyond the doors from which they were issuing, and were afterwards made prisoners.
In the market-place an exciting contest at once began. The celebrated cavalry of Austria were attacked by the rather depreciated horsemen of Prussia, and the lance, the "queen of weapons," as its admirers love to term it, was being engaged in real battle against the sword. The first Prussian soldiers who rode into the town were very few in number, and they could not attack before some more came up. This delay of a few minutes gave the hussars a short time to hurry together from the other parts of the town, and by the time the Uhlans received their reinforcements the Austrians were nearly formed.
As soon as their supports came up the lancers formed a line across the street, advanced a few yards at a walk, then trotted for a short distance, their horses' feet pattering on the stones, the men's swords jingling, their accoutrements rattling, and their lances borne upright, with the black and white flags streaming over their heads; but when near the opening into the broader street, which is called the Market-place, a short, sharp word of command, a quick, stern note from the trumpet, the lance-points came down and were sticking out in front of the horses' shoulders, the horses broke into a steady gallop, and the lance flags fluttered rapidly from the motion through the air, as the horsemen, with bridle hands low and bodies bent forward, lightly gripped the staves, and drove the points straight to the front.
But when the Prussians began to gallop, the Austrians were also in motion. With a looser formation and a greater speed they came on, their blue pelisses, trimmed with fur and embroidered with yellow, flowing freely from their left shoulders, leaving their sword-arms disencumbered. Their heads, well up, carried the single eagle's feather in every cap straight in the air; their swords were raised, bright and sharp, ready to strike, as their wiry little horses, pressed tight by the knees of the riders, came bounding along, and dashed against the Prussian ranks as if they would leap over the points of the lances. The Uhlans swayed heavily under the shock of the collision, but, recovering again, pressed on, though only at a walk. In front of them were mounted men, striking with their swords, parrying the lance-thrusts, but unable to reach the lancer; but the ground was also covered with men and horses, struggling together to rise; loose horses were galloping away; dismounted hussars in their blue uniforms and long boots were hurrying off to try to catch their chargers or to avoid the lancepoints. The Uhlan line appeared unbroken, but the hussars were almost dispersed. They had dashed up against the firmer Prussian ranks, and they had recoiled, shivered, scattered, and broken as a wave is broken that dashes against a cliff. In the few moments that the ranks were locked together, it seems that the horsemen were so closely jammed against each other that lance or sword was hardly used. The hussars escaped the points in rushing in, but their speed took them so close to the lancers' breasts that they had not even room to use their swords. 
Then the Prussians, stouter and taller men, mounted on heavier horses, mostly bred from English sires, pressed hard on the light frames and the smaller horses of the hussars, and by mere weight and physical strength bore them back, and forced them from their seats to the ground; or sometimes, so rude was the shock, sent horse and man bounding backwards, to come down with a clatter on the pavement.
The few Austrians who remained mounted fought for a short time to stop the Prussian advance, but they could make no impression on the lancers. Wherever a hussar made a dash to close three points bristled couched against his chest or his horse's breast, for the Austrians were now in inferior numbers in the streets to the Prussians, and the narrowness of the way would not allow them to retire for their reserves to charge. So the Prussians pressed steadily forward in an invulnerable line, and the Austrians, impotent to stop them, had to fall back before them. Before they had gone far through the town fighting this irregular combat more Prussian cavalry came up behind the Uhlans, and the Austrians began to draw off. The lancers pushed after them, but the hussars got away, and at the end of the town the pursuit ceased. One officer and twenty-two non-commissioned officers and privates taken prisoners, with nearly forty captured horses, fell into the hands of the Uhlans, as the trophies of this skirmish. Some of the prisoners were wounded; a few hussars killed, and two or three Prussians were left dead upon the ground.
One or two of the privates taken prisoners were Germans, but by far the greater number were Hungarians—smart, soldierlike-looking fellows, of a wiry build; they looked the very perfection of light horsemen, but were no match in a melee for the tall, strong cavalry soldiers of Prussia, who seemed with one hand to be able to wring them from their saddles, and hurl them to the ground.
The Battle of Tischnowitz, 11 July 1866
2nd Prussian Guards Dragoons' regiment against Graf Wallmoden Lancers.
Translated from Der deutsche Krieg von 1866, Volume 1, Part 2
By Theodor Fontane 1870
To the left of the Hann division, which formed the extreme right wing of the I. Army, was the light cavalry brigade of Duke William of Mecklenburg. The second Guards 'Dragoons' regiment,, under the command of Colonel von Redern led the brigade.
The advance was difficult insofar as the right and left of the terrain led through wooded ravines and steep hills, making searching more difficult. In OIschy, half a mile from Tischnowitz, was found the first enemy, (the Wallmoden Lancers, we later discovered), who appeared in front and both flanks. Colonel v. Redern conducted a squadron immediately right and left, enough to suppress the enemy detachmentment, while the advance half continued under Lieutenant von Dieskau to Tischnowitz.
Tischnowitz was located on the left, beyond its suburb "Vorkloster". A bridge connects the city and suburbs across the Schwaraza river.
In Vorkloster our advanced half encountered a train of enemy lancers, threw themselves at him and chased him across the bridge, into Tischnowitz. Here, however, the attack faltered. In the marketplace were two Austrian squadrons, and with the cry:" The Prussians are here, " threw themselves into the saddle (they had just dismounted) and fell on the advancing Dragoons and drove them out of the city.
But not for long. Just now appeared the first Squadrons under Captain v. Korff and the second attack on Tischnowitz started. On the Schwarzawa bridge between city and suburb, the troops collided. The Lancers seemed to want to form an impenetrable line, but our forwards attacked with dragoon sabers and the horses could only go in the last moments between their horses and so fell in between the Lancers. Major von Schack was wounded by a lance on the left shoulder however the dragoons went so close to the enemy that the lances were unusable.
The scuffle lasted only a few moments, Captain vd Knesebeck, the leader of the enemy squadrons, was carved from his horse, which turned the Lancers and they retreated into the city. The dragoons pursued, but their officers kept strict command, they did not come out of order. When they had gained the road leading to the marketplace, the Lancers tried again to make a front, but once more Dragoons attacked and again pushed the enemy back by the sheer weight of the horses and the force of the blows. The tough battle lasted a long time. The riders were so close together in each other that they could hardly use their weapons and they fought with each other and sought to seize the horses, which, frightened and made wild, reared, and struck out. The force of Prussia prevailed and they pressed their opponents back on the market, where a picture of the Madonna on a high column looked down. Here an Austrian officer was with almost unbelievable power thrown from the saddle, the lighter Austrian riders were not at all able to stand against the greater strenght and violence and turned and hurried out of the city to join regiments outside. A pursuit took place, but was halted by the inequality of forces. The loss of the enemy was 2 officers and 53 men, partly dead and wounded, others trapped. Our part, we had 2 men killed and 10 wounded. | <urn:uuid:e0030ee2-f721-46e4-a025-87a6dda55e88> | CC-MAIN-2017-17 | http://nelsonlambert.blogspot.com/2011/08/austrian-light-2-austrian-cavalry-in.html | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120881.99/warc/CC-MAIN-20170423031200-00190-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.98282 | 4,198 | 3.015625 | 3 |
The Quarter Shrinker uses a technique called high-velocity electromagnetic forming.
This is also called EM forming, "magneforming", or magnetic pulse forming.
It is a high-energy-rate metal forming process. High-energy rate
processes apply a huge amount of energy to an object for a very brief
period of time. The technique was originally
developed by the aerospace industry in conjunction with NASA, and was commercialized by Aerovox,
Grumman, and Maxwell Technologies (now a subsidiary of General Atomics). EM forming uses pulsed power technology to quickly discharge high-energy capacitors through a coil of wire to generate a brief, but extremely powerful, rapidly-changing magnetic field to re-shape metals inside or near the coil. Although electromagnetic forming works best with metals that have
good electrical conductivity (such
as copper, silver, or aluminum), it also works to a limited extent
with poorer-conducting metals or alloys such as nickel or steel.
In order to
shrink coins, we charge up a high voltage capacitor
bank consisting of two to four large "energy discharge" capacitors. These capacitors are specially constructed low-inductance, steel-cased capacitors that can each deliver up to 100,000 amperes (100 kA) at up to 12,000 volts. Each capacitor measures
14" x 8", and weighs about 180 pounds. The capacitors are extremely robust - each one has an expected
lifetime of over 300,000 shots at 100,000 amperes per shot. A double-pole double-throw (DPDT) high voltage relay
is used to connect a variable high voltage AC power source through a 40 kV full-wave bridge rectifier
to charge up the capacitor bank. After the bank is charged to the
desired voltage, the HV relay disconnects the capacitor bank from the
charging supply to prevent possible damage to the rectifiers when the system is fired.
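The charging arithmetic above is easy to sanity-check with the standard capacitor energy formula, E = ½CV². A minimal Python sketch follows; the ~133 µF total bank capacitance is an assumption inferred from the quoted figures (roughly 9,600 J maximum at a 12,000 volt charge), not a published specification of the actual machine:

```python
# Stored energy of a capacitor bank: E = (1/2) * C * V^2.
# BANK_CAPACITANCE is an inferred value (from ~9,600 J at 12,000 V),
# not a measured specification of the actual Quarter Shrinker.
BANK_CAPACITANCE = 133e-6  # farads (assumed)

def stored_energy_joules(capacitance_farads: float, volts: float) -> float:
    """Energy stored in a capacitor charged to a given voltage."""
    return 0.5 * capacitance_farads * volts ** 2

for volts in (5_000, 8_500, 12_000):
    energy = stored_energy_joules(BANK_CAPACITANCE, volts)
    print(f"{volts:>6} V -> {energy:7.0f} J")
```

Note that because energy goes as the square of the voltage, doubling the charge voltage quadruples the stored energy.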
The charged capacitor bank is then quickly discharged into
a single-layer ten-turn work coil wound from high-temperature
(polyamide-imide double-build 200C) magnet wire. The coil has an inner
diameter that is slightly larger than the diameter of the coin to be
shrunk. The coin is held in the center of the coil by a pair of non-conductive dowel rods. The rods position the coin inside the coil so that the coin is subjected to the
strongest part of
the coil's magnetic field. The dowels also prevent the coin from twisting or
being ejected from
the coil during the shrinking process. The wire ends of the work coil
are securely bolted to
a pair of heavy copper bus bars. A spark gap is the only affordable switch that can hold off the high voltage and then efficiently
switch the huge currents used during the coin shrinking process.
For many years, we used a custom three-terminal triggerable spark gap called a "trigatron". The
trigatron was "fired" by applying a fast rising 50,000 volt pulse to a trigger electrode, which then
caused the main gap
of the trigatron to fire. However, in order to increase the range of operating
voltages and reduce spark gap maintenance, we have since converted to a solenoid-driven high-current spark gap
with 2.5" diameter brass electrodes. When switched, the solenoid drives
a movable electrode towards the fixed electrode. Once the gap between
electrodes becomes small enough, the air breaks down,
triggering an arc that connects the capacitor bank to the work coil. Since the
electrodes do not
contact each other, welding between the molten areas on the electrodes is prevented. Unlike the earlier
trigatron switch, the solenoid-driven spark-gap switch consistently
fires, never self-triggers (i.e., no unexpected high-energy
"surprises"!), and it requires considerably less
Once the spark gap fires, current climbs in
work coil at a rate that may reach five billion
amperes per second. As the work coil current increases, it creates a rapidly increasing magnetic field inside the work coil.
The resonant frequency of the resulting LC circuit (the capacitor bank
and the combined inductance of the work coil and the rest of the system) ranges between 7.8 and 10 kilohertz (kHz) depending on the diameter of the coin and the work coil. Through electromagnetic induction ("transformer action"), a
huge circulating alternating current is induced within the coin. However, due to skin effect,
the induced current in the coin is confined to the outermost rim of
the coin, forming a ring of current only about 1/20 of an inch thick. Because of Lenz's Law, the magnetic fields of the coin and work coil strongly oppose each
other, resulting in a tremendous repulsion force (Lorentz force) between
the work coil and the rim of the coin. The repulsion force is directly proportional to
the initial energy stored in
the capacitor bank. Since the capacitor bank's stored energy is proportional to the square of its voltage, doubling the capacitor voltage quadruples the peak magnetic force.
We typically use pulses between 2,000 and 9,600 joules
(watt-seconds) from the capacitor bank. Because this energy is discharged within 20-40
millionths of a second, the instantaneous power exceeds the
peak electrical power consumed by a large city. The repulsion forces between the
work coil and the coin create radial compressive forces that exceed
the yield strength of the alloys in the coin, causing the coin's diameter to shrink. A 5,000 joule pulse
will reduce a US clad quarter to the diameter of a
dime. Simultaneously, powerful outward forces, sometimes called "magnetic pressure",
cause the work coil to explode
in a potentially lethal shower of copper shrapnel. Axial magnetic forces
also smash the wire turns together while the coil is simultaneously
expanded in diameter.
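The "magnetic pressure" mentioned above can be estimated from the field-energy density, P = B²/(2μ₀). The sketch below uses the ideal-solenoid field formula with an assumed ~25 mm winding length, so treat it as an order-of-magnitude estimate rather than a field simulation:

```python
import math

MU0 = 4 * math.pi * 1e-7  # permeability of free space, H/m

def solenoid_field_tesla(turns: int, length_m: float, amps: float) -> float:
    """Ideal solenoid: B = mu0 * (N / L) * I (end effects ignored)."""
    return MU0 * (turns / length_m) * amps

def magnetic_pressure_pa(b_tesla: float) -> float:
    """Magnetic pressure (field energy density): P = B^2 / (2 * mu0)."""
    return b_tesla ** 2 / (2 * MU0)

# 10 turns over an assumed ~25 mm winding length at the quoted 100 kA peak.
b = solenoid_field_tesla(10, 0.025, 100_000)
p_mpa = magnetic_pressure_pa(b) / 1e6
print(f"peak field ~{b:.0f} T, magnetic pressure ~{p_mpa:.0f} MPa")
```

Roughly 50 teslas and about 1,000 MPa — far beyond the yield strength of copper, which is why the coil stretches and bursts.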
The combination of magnetic forces acting upon the work coil is always
in the direction that tends to increase the coil's inductance.
The coin behaves like a short-circuited secondary in a 10:1 step down transformer, so the current circulating within
the outer rim of the coin may approach a million amperes!
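Treating the 10-turn coil and the coin as an ideal 10:1 transformer gives a quick check on the "million amperes" figure, and dividing a pulse's energy by its duration reproduces the city-scale power claim. This is a back-of-the-envelope sketch using the article's own numbers:

```python
# Back-of-the-envelope checks from the article's own figures.
def rim_current_amps(primary_amps: float, turns: int = 10) -> float:
    """Ideal-transformer estimate: rim current = turns * primary current."""
    return turns * primary_amps

def average_pulse_power_watts(energy_joules: float, duration_s: float) -> float:
    """Average power of a pulse = energy / duration."""
    return energy_joules / duration_s

print(f"induced rim current: ~{rim_current_amps(100_000):,.0f} A")
power_mw = average_pulse_power_watts(9_600, 20e-6) / 1e6
print(f"pulse power: ~{power_mw:,.0f} MW")
```

A 9,600 joule pulse delivered in 20 microseconds works out to about 480 megawatts, sustained for only a few tens of microseconds.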
A US clad quarter is reduced from an initial diameter of 0.955" to
0.650" within 36 millionths of a second. The coin's diameter shrinks at a average rate exceeding 480 miles per hour. In US clad coins, most of
current actually flows within the pure-copper center layer of
the clad sandwich
rather than through the poorer-conducting copper-nickel alloy cladding layers. This causes the copper center layer to
shrink more than the outermost layers, leading to an "Oreo cookie" effect on the shrunken coin. The coin also becomes thicker as it shrinks in diameter. Despite the radical
changes to the coin, its mass,
volume, and density all remain the same before and after shrinking. The
cookie and thickening effects can be easily seen in the following photo of a normal-size and
shrunken US quarter. The slight waviness in the shrunken coin is due to
force imbalances including variations in coin thickness from the coin's
surface features, and dynamic force imbalances as the work coil changes
shape before exploding. This short slide show from the Florida State University National High Magnetic Field
Laboratory provides an excellent explanation and demonstration of coin shrinking. In their demonstration, they use #14 AWG magnet wire for their work coils. We use
#10 - #14 AWG wire depending on the size of the coin we're going to shrink.
In clad coins, as the copper core layer shrinks, the outer cladding
layers of the coin are pulled along for the ride, similar to the way
continental drift is thought to move continents in
the Earth's crust. This sometimes leads to "collisions" between various surface
features where one feature may plow underneath another!
For example, note how some of the lettering on the
quarter below has shifted so that it becomes partially obscured by various
parts of the horse.
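The ~480 mph figure quoted earlier falls straight out of the dimensions: the diameter change (0.955" − 0.650") divided by 36 microseconds, converted to miles per hour. A one-liner check:

```python
def average_shrink_rate_mph(d_start_in: float, d_end_in: float, seconds: float) -> float:
    """Average rate of diameter change, converted inches/second -> mph."""
    inches_per_second = (d_start_in - d_end_in) / seconds
    return inches_per_second * 3600.0 / (12.0 * 5280.0)

rate = average_shrink_rate_mph(0.955, 0.650, 36e-6)
print(f"average shrink rate: ~{rate:.0f} mph")
```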
As the coin shrinks, similar but opposite forces are acting on the work coil's windings.
Magnetic pressure rapidly expands and stretches the copper wire in the work
coil. The film insulation peels off the wire since it can't stretch as
much as the copper! The wire coil "rapidly disassembles" (explodes!), and
fragments of the coil are blown
outward with the force of a small bomb. Small coil fragments
have been measured with velocities of up to 5,000 fps (>3400 mph, or greater than Mach
4), so the work coil must be
contained within a heavy blast shield. Our blast shield is made from Lexan
polycarbonate, the same material that's used
to make bulletproof windows. Regions of the blast shield that are in the direct
path of exploding coil fragments are further reinforced with steel
armor plates. Once the work coil disintegrates, any residual energy in the
system is dissipated in a ball of white-hot plasma.
The Quarter Shrinker is designed so that any residual voltage on the
capacitor bank is safely dissipated by a bank of high-power wirewound
resistors. The system is triggered from about 10 feet away from
a remote control box. I've found (the hard way!) that 8,000 Joules is
about the maximum energy I can repeatedly use without running the risk of
fracturing the Lexan walls from the shock wave. When slammed by a
high-intensity shock wave, Lexan does indeed shatter - I've got the
pieces to prove it! Other
(Rob Stephens, Bill Emery, Phillip Rembold, Ross Overstreet, Brian
Basura, and Ed Wingate) have resorted to using 100% steel enclosures
when running at higher power
Adding strategically-placed steel plates has stopped our Lexan blast
fracturing. We've found that AR400 steel plates (also used for armor in Humvees!)
are well suited to surviving repetitive bombardment from supersonic
coil shrapnel. But even these must be periodically replaced after a couple
2009, the amateur scientists at Hackerbot Labs (Seattle, WA) built their own coin
shrinker. By using a special 100,000 frame/second camera, transparent Plexiglas
dowels, and carefully pre-triggering electronic flash units, their
partners at Intellectual Ventures, Inc. were able to capture
a sequence of images of a quarter AS IT WAS SHRINKING. Because the shrinking process occurs so
rapidly, the actual "shrinkage" is only seen during four consecutive frames - about 40 millionths of a second.
The largest coin we've ever shrunk was a US Silver Eagle,
a pure silver
coin that is reduced from 1.6" in diameter to 1.3" after a 6300 Joule shot. At similar energies, a Morgan
is reduced from about 1.5" to 1.25" in diameter, and a clad Kennedy
half dollar is reduced to the diameter of a US Quarter.
At 5,000 joules, US clad quarters shrink to the diameter of a dime.
A few years ago, physicist Dr. Tim Koeth and I took various measurements
of work coil current during the shrinking process. These showed that
the work coil consistently failed shortly after the first current peak.
Fortunately, virtually all of the coin's
shrinkage has occurred by this time. Disintegration
of the coil helps to reduce voltage reversals that could damage, and eventually destroy, the energy discharge capacitors.
The combination of high peak currents and oscillatory discharges
is extremely demanding on capacitors. Because of
premature failures with earlier GE pulse capacitors, the current system uses low inductance Maxwell (now General Atomics Energy Products - GAEP) pulse capacitors that are designed to safely cope with this abuse. While the original GE capacitors began failing
after only 50 - 100 shots, the trusty Maxwell capacitors have withstood
well over 6,000 shots with nary a whimper.
Examination of the coil fragments show that the wire has
been substantially stretched (#10 AWG looks like #14 AWG afterwards),
it becomes strongly work hardened, and it has periodically "pinched" regions and kinks
caused by the copper being stressed far beyond its yield strength by the
ultrastrong pulsed magnetic field. Many fragments are less than 1/4" long, and all
pieces show evidence of tensile fracture at the ends. Since the wire's insulation
gets blown off, most fragments are bare copper. The wire often also shows signs of localized melting
on the innermost surface of the solenoid due to "current bunching" from the combination
of skin effect and proximity effect.
Shrinker works very well on clad dimes, quarters, half dollars,
Eisenhower, silver Morgan and Peace Dollars, Susan B. Anthony,
Sacagawea, small Presidential dollars, and many foreign coins.
It works less well with nickel and nickel-copper coins, and
it virtually no effect on plated steel coins. It also works well with
bronze and copper-zinc alloy pennies. However, since mid-1982 US
have been made using a zinc core with a thin copper overcoat. During
the thin copper layer vaporizes and the zinc core melts, leaving an
disk of molten zinc accompanied by a messy shower of zinc globules
Because of the greater hardness and much poorer electrical conductivity
of nickel-copper alloys, the shrinking process doesn't work as well
nickels, shrinking them by only about 10% even at 6,300 Joules. Larger
copper-nickel coins, such as the UK Churchill Crown, seem to be almost
impervious to shrinking even at 6,300 Joules - this coin is as tough as its namesake!
shrunken coin weighs exactly the
same as a normal size coin. As the coin's diameter shrinks, it becomes
correspondingly thicker, but its volume and density remain
the same. Bimetallic foreign coins (with rings and
centers made from different alloys) often show different degrees of
upon electrical conductivity and hardness of the respective alloys. In
some cases, the
center portion shrinks a bit more, loosening or sometimes even freeing
it from the outer ring. Complete separation occurs with older
Mexican, UK, and French bimetallic coins, and with newer Two Euro bimetallic coins.
the extremely high discharge currents
and fast current rise times, capacitors rated for energy discharge applications must be designed
have high mechanical strength and very low inductance. They use special internal construction
safely handle mechanical stresses created by strong magnetic and electrostatic forces
during fast, high-current discharges. Unfortunately, our original GE
energy discharge capacitors were simply not constructed for this type of abuse,
and magnetic forces began tearing them apart during every shot. One
suffered an internal electrical explosion that ruptured its metal
case, causing it to hemorrhage stinky, arc-blackened capacitor oil and aluminum foil fragments all over the floor.
The wife was not amused! Our Maxwell energy discharge
capacitors have proven to be true
"Timex's" of the pulsed power world - they continue to "take a lickin' and keep on
Update - One of our Maxwell capacitors finally failed. While charging
the bank, a muffled bang was heard, the bank voltage
abruptly plunged from about 8 kV to zero, and the mains fuse in the
power controller blew. The problem was traced to a catastrophic failure of one of the Maxwell
capacitors. The failing capacitor developed an internal short
and all of the stored energy in the capacitor bank (~4.5 kJ) was
abruptly dumped into the internal fault. Fortunately, the heavy steel
case didn't rupture, so I was spared cleaning up several gallons
of castor oil. This capacitor, and an identical mate, had survived over
6,000 "shots" in the quarter shrinker, so I'm very satisfied with its
performance. Further research determined that the root cause of the
failure was not lifetime-related, but was due to an extended period of
abnormally low temperatures. These capacitors use a combination of kraft
castor oil for the dielectric system that separates the foil plates. The
Quarter Shrinker resides in an unheated patio.
Although cold temperatures had not been a problem during previous
was abnormally cold in the Chicago area. When
the capacitor's internal temperature fell below -10C
(14F), the castor oil began to partially solidify. As castor oil
constant drops from 4.7 to about 2.2. During solidification, small
amounts of dissolved water (that were previously harmlessly in
solution), were driven out of solution and absorbed by the kraft paper.
dielectric constant increased the voltage
stress on the kraft paper dielectric while the absorbed water
simultaneously reduced its electrical strength. The result was sudden
dielectric failure and catastrophic short-circuiting of the capacitor.
We've subsequently installed flexible silicone electrical
heating elements to the sides of the capacitors to always keep them toasty
(above 40F) during even the coldest days. This will prevent any freezing problems in the
future. With the new heaters in place, the winters of 2015 and 2016 were nicely uneventful.
Can Crushing: A larger diameter 3-turn work coil, operating at lower power
levels, is used to crush aluminum cans. An aluminum soft drink can ends up looking
like an hourglass as the center is shrunk to about half its original diameter.
During can crushing, the coil does not disintegrate due to its more massive design
(#4 AWG solid copper wire) and because the system is fired using lower energy
levels than coin crushing. At higher power levels the can is
ripped apart from the combination of the air inside the can suddenly being
compressed, and heating/softening of the can from the induced currents. Can crushing
also works with steel cans, but the can undergoes greater heating and reduced
shrinkage because of steel's lower electrical conductivity. The "skin depth" in steel is also
much smaller due to its ferromagnetic properties. Since
the work coil is not destroyed during can crushing, the capacitor bank and
spark gap are more heavily stressed by the oscillatory ("ringing")
capacitor bank voltage must be reduced to so that the ~100% voltage reversals
don't overstress the pulse capacitors' dielectric system.
Since most of the capacitor bank's initial energy ends up being dissipated as
in the spark gap, can crushing also causes significant heating and
erosion of the electrodes in the high voltage switch.
Is Wire Fragmentation Consistent with EM Field Theory?
Copper wire fragments from the work
coil clearly indicate that the wire has been subjected to large tensile stresses.
Most of the observed effects on the wire can be explained by hoop stresses
created by the combination of magnetic pressure
within the work coil solenoid, Lenz's Law repulsion
between the coil and the coin, and periodic conductor necking. The
latter occurs when magnetic
pinch forces are sufficient to cause the conductor to behave as though
it were a conductive fluid. Because of pinch instabilities, the wire becomes periodically pinched off
and broken. However, there is also a
curious ridge which shows
up under microscopic examination of the coil fragments that may hint of
effects as well. This artifact was first noticed by Richard Hull of the
Coil Builders of Richmond, Virginia (TCBOR) when reviewing similar wire
from another researcher (Jim Goss). It seems that when an extremely
current flows through a solid or liquid metallic conductor, certain
begin to appear which may not be fully explained by existing EM field
and Lorentz forces.
One very interesting example involves forcing a very large
current pulse very quickly through a straight piece of wire. Under
conditions, the wire does not melt or explode. Instead, it fractures
a series of roughly equal length fragments, with each fragment showing
evidence of tensile failure. Each segment was literally pulled
apart from neighboring fragments with little or no evidence of necking
or melting. Clearly large tensile forces were set up within the wire
the brief time that the large current flowed. But, per existing EM
no tensile forces should exist, implying that the current theory of how
Lorentz forces act on metallic conductors may be incorrect!
A father and son team of physicists, Dr.'s Peter and Neal Graneau (who
are coauthors of "Newtonian Electrodynamics" and "Newton Versus Einstein")
theorize that internally developed "Ampere' tensile forces" may account
for the observed behavior of this, and other high-current experiments.
While Ampere' tensile forces are predicted by classical electromagnetic
theory, they have long been removed from all modern textbooks, being replaced
instead by modern field theory and Lorentz forces. Interestingly, even though Ampere' forces
are no longer an accepted part of current EM theory, their existence appears to be experimentally
verifiable in exploding wires or high DC current flow within molten metals (such as aluminum refining).
In their books, the Graneau's provide many thought-provoking
that appear to support Ampere' Tension forces. More recently, other
scientists have proposed that high-current wire fragmentation may actually be
caused by a combination of flexural
vibrations and thermal shock. However, we suspect that the jury is
still out on this issue, and its an area that's ripe for
additional research and experimentation. Isn't Mutilating Money a Federal Offense?
US Federal law specifically forbids
the "fraudulent mutilation, diminution, and falsification of coins" (seeUS
Code, Title 18 - Crimes and Criminal Procedure, Part I - Crimes, Chapter
17 - Coins and Currency, Paragraph 331). However, the key word is Fraudulent.
it recently became illegal to melt pennies or nickels or to export them
to reclaim their value as scrap metal, you can otherwise do pretty much
anything to US coins as long as you don't alter them with an intent
to defraud. This includes squashing
them on railroad tracks, flattening them into elongated souvenirs at
traps... or crushing them with powerful electromagnetic fields. I
take great pains to
tell folks exactly what they are receiving and how the process was
So vending machines in tourist traps that squash
into elongated souvenirs or "funny" stamped pennies with Lincoln
a cigar are legal (although the coins can't be used as currency
anymore). In an opinion letter, folks at the US Mint "frown on the despicable practice"
altering coins, but they agree that it is quite legal to shrink
Note that this is not always the case within other countries! For example,
UK and Australia, defacing the Queen's image on a coin may be
considered a punishable offense. Here is an interesting example of fraudulent "coin shrinking" that was prosecuted by the US Secret Service (way back in 1952!).
deals with debasement of coins; alteration of official scales,
or embezzlement of metals. Since most of the coins we shrink are made
metals, this section does not apply. However, since the density, metal
and weight remain unaltered during the shrinking process, coin
is legal even when applied to bullion coins made from precious metals,
and most larger gold and silver
coins shrink quite nicely. HOWEVER, shrinking US paper money is indeed
illegal. Even though we are aware of a couple of chemical processes that
can shrink dollar bills to about half their original size, we do not
sell "shrunken dollar bills", since defacing paper currency is indeed
See Paragraph 333 for details.
Are you the nut who invented this device?
No, it wasn't this nut! We just perfected the technique. For the history of coin
shrinking, check out The
Known History of "Quarter Shrinking"
“There’s always a hole in theories somewhere, if you look close enough”
Mark Twain, “Tom Sawyer Abroad”, Charles L. Webster & Co., 1894
A. Electromagnetic Metal Forming and Magneto-Solid Mechanics:
1. ASM, "Metals Handbook, 8th Edition, Volume 4, Forming", American Society for Metals
- see section on Electromagnetic Forming (out of print)
2. Wilson, Frank W., ed., "High Velocity Forming of Metals", ASTME,
Prentice-Hall, 1964, 188 pages (out of print)
3. Bruno, E. J., ed., "High Velocity Forming of Metals", Revised, edition,
ASTME, 1968, 227 pages (out of print)
4. NASA, "High-Velocity Metalworking, a Survey, SP-5062", National Aeronautics
and Space Administration, 1967, 188 pages (out of print)
5. Moon, Francis C., "Magneto-Solid Mechanics", John Wiley & Sons, 1984, ISBN 0471885363, 436 pages (out of print)
6. Murr, L. E., Meyers, M. A., ed., et al, "Metallurgical Applications
of Shock-Wave & High-Strain-Rate Phenomena", Marcel Dekker, 1986,
1136 pages, ISBN 0824776127 (in print) 7. "Pulsed Magnet Crimping" by Fred Niell, straightforward explanation of magnetic forming (fairly technical, written by a physicist)
B. Capacitor Discharges, High Magnetic Fields, Pulsed Power/Switching, and Exploding Wires:
1. Frungel, F., "High Speed Pulse Technology", Vol. 3, Academic Press,
1976, 498 pages (Capacitor Discharge Engineering, out of print)
2. Schaefer, Gerhard, "Gas Discharge Closing Switches", Plenum, 1991,
569 pages (out of print)
3. Martin, T. H., et al, "J. C. Martin on Pulsed Power", Plenum, 1996,
546 pages (out of print)
4. Knoepfel, H., "Pulsed High Magnetic Fields; Physical Effects &
Generation…", Elsevier, 1970, 372 pages (out of print)
5. Fowler, C. M., Caird, Erickson, "Megagauss Technology and Pulsed
Power Applications", Plenum; 1987; 879 pages (out of print)
6. Vitkovitsky, Ihor, "High Power Switching", Van Nostrand Reinhold,
1987, 304 pages
(out of print)
7. Pai, S. T, & Zhang, Q., "Introduction to High Power Pulse Technology",
World Scientific, 1995, 307 pages (in print)
8. Sarjeant, W. J. & Dollinger, Richard E., "High Power Electronics",
Tab Professional & Reference Books, 1989, 392 pages (out of print)
9. Shneerson, G. A., "Fields & Transients in Superhigh Pulse Current
Devices", Nova Science, 1997, 561 pages (out of print)
10. Parkinson, David H., Mulhall, Brian E., "The Generation of High
Magnetic Fields", Plenum, 1967, 165 pages (out of print)
11. Chace, W. G., Moore, H. K, "Exploding Wires", Volume 1, Plenum, 1959, 373 pages (out of print)
12. Chace, W. G., Moore, H. K, "Exploding Wires", Volume 2, Plenum, 1962, 321 pages (out of print)
13. Chace, W. G., Moore, H. K, "Exploding Wires", Volume 3, Plenum, 1964, 410 pages (out of print)
14. Chace, W. G., Moore, H. K, "Exploding Wires", Volume 4, Plenum, 1967, 348 pages (out of print)
15. Mesyats, Gennady A., "Pulsed Power", Springer, 2004, 568 pages, ISBN 0306486531
C. Special Reading for those wishing to delve deeper into some "interesting" areas of EM Field Theory and Wire Fragmentation:
1. Graneau, Peter & Neal, "Newtonian Electrodynamics", World Scientific,
1996, 288 pages (in print)
2. Graneau, Peter & Neal, "Newton Versus Einstein, How Matter Interacts
with Matter", Carlton Press, 1993, 219 pages (in print)
3. Jefimenko, Oleg, "Causality, Electromagnetic Induction, and Gravitation",
Electret Scientific, 1992, 180 pages (in print)
4. Lukyanov, A., Molokov, S., "Why High Pulsed Currents Shatter Metal Wires?",
Pulsed Power Plasma Science, 2001, Digest of Technical Papers, Volume 2,
5. Lukyanov, A., Molokov, S., Allen, J. E., Wall, D., "The Role of Flexural
Vibrations in the Wire Fragmentation", Pulsed Power 2000, IEE Symposium ,
pages 36/1 -36/4
6. Wall, D. P., Allen, J. E., Molokov, S., "The Fragmentation of Wires
by Pulsed Currents: Beyond the First Fracture", Journal of Physics D: Applied Physics.
36 (2003) 2757–2766
Information on this site is for educational purposes only. It is not to
be construed as advice on how to build or use similar equipment.
forming is an extremely dangerous high-energy process that can maim or
a casual HV experimenter. Large high-voltage capacitors are VERY
unforgiving, and they will NOT give you a second chance! | <urn:uuid:80634952-3536-41c5-a744-eb1e785a5204> | CC-MAIN-2017-17 | https://capturedlightning.com/frames/shrinker.html | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122739.53/warc/CC-MAIN-20170423031202-00072-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.89796 | 6,510 | 2.84375 | 3 |
The following two articles come from Australia. The co-author of the first, Alex Wodak, is a world-renowned activist in the movement to legalise drugs – in particular cannabis. The carefully referenced response in the second article was written by Professor Dr. Stuart Reece.
Some frequently asked Q’s and A’s about medicinal cannabis
Prepared by Laurie Mather, PhD, FANZCA, FRCA, Emeritus Professor of Anaesthesia, The University of Sydney (firstname.lastname@example.org) and Alex Wodak, AM, FRACP, FAChAM, FAFPHM Emeritus Consultant, Alcohol and Drug Service, St Vincent’s Hospital, Sydney, NSW, Australia (email@example.com), 18 June, 2014
Q: What is cannabis?
A: Some people call it by its American name, marijuana. The name ‘cannabis’ describes its botanical origins and comes from the Latin word for hemp. The name ‘marijuana’ (or sometimes ‘marihuana’) is a contrived name intended to associate it with African and Hispanic Americans who used it as a recreational drug in the United States during the 1930s.
Q: What has the cannabis plant been used for?
A: Cannabis is an ancient herb-like plant that has been used for thousands of years for fibre-making for products such as clothing and rope, for dietary ingredients, as an element of folk medicine, and as an agent to promote spiritual transcendence, particularly in the religions of South Asia. ‘Recreational’ cannabis use was uncommon in the West before the 1960s. A League of Nations meeting in Geneva in 1925 decided to ban cannabis internationally. Cannabis first started to come to the attention of law makers and enforcers in the USA in the 1930s. The Congressional Record from that time includes comments about perceived depravity attributed to cannabis use along with racial slurs. Progressively it became an illegal substance in many countries, including Australia.
Q: When did cannabis come into Western civilisations?
A: European venturers over many centuries, as judged by their writings, certainly encountered cannabis in their travels to exotic Eastern and Far Eastern lands. By the mid 19th century, cannabis, in one form or another, had become part of the medical-societal-experimental experience of many European societies.
Q: When did cannabis come into Western-style medicine?
A: Cannabis was adopted into British medicine from India in the mid-19th century having been observed there to relieve pain, muscle spasm, convulsions of tetanus, rabies, rheumatism and epilepsy.
B Cannabis as a medicine:
Q: How does cannabis work?
A: As a plant preparation, cannabis ordinarily contains many hundreds of chemical substances commonly found in plants (‘phytochemicals’), and a hundred or so unique substances commonly referred to as ‘phytocannabinoids’. A small number of phytocannabinoids are believed to cause the main pharmacological effects of cannabis in humans. These phytocannabinoids attach to special receptors in the brain and some other organs in the body – the same receptors normally activated by transmitter chemicals that the body itself produces.
Q: What is ‘medicinal cannabis’? Some people also refer to this as ‘medical marijuana’.
A: The ‘medicinal’ tag recognizes that cannabis, among many other uses, has the properties of a medicine.
C Benefits of medicinal cannabis:
Q: Why do some argue that medicinal cannabis be legalised?
A: It helps some people with distressing symptoms from serious medical conditions when they have not been sufficiently helped by the standard medicines. Cannabis is considered a ‘second line’ drug to be used when the first line drugs have been tried and have either not worked or had unacceptable side effects.
Q: What kind of evidence is there that cannabis can help some people?
A: The evidence is basically of three kinds. First, there is anecdotal evidence, usually provided by people who have experienced in themselves or observed in others some effect. Most information like this is hard to assess because it lacks corroborative documentation – and this is the kind of evidence that tends to appear in the lay press and on internet blog sites. This is not to say that the evidence is invalid – but only to say that much of the vital information underpinning the claims is not available in a way that permits scientific scrutiny. The second type of evidence is papers published in reputable medical and scientific journals after peer-review. A third type of evidence is careful reviews of papers reporting the results of cannabis research.
Q: How good is the evidence that cannabis can help some people?
A: Randomised controlled trials (RCTs) are usually regarded as the best way of telling whether a medication is effective. In one recent review, for example, 82 RCTs showed that medicinal cannabis is effective in relieving distressing symptoms in about half a dozen conditions. 9 RCTs found that medicinal cannabis was not effective. This is quite an impressive result. There are at least half a dozen favourable reviews by prestigious organisations.
Q: What are the main medical conditions that might be helped by medicinal cannabis?
A: Severe nausea and vomiting after cancer chemotherapy, especially if no standard treatment has worked; severe chronic non-cancer pain, especially if the pain is due to nerve damage; severe wasting in cancer or AIDS (though this is less common these days); stiffness due to multiple sclerosis. There are also some other conditions.
Q: Is cannabis a cure for any conditions or diseases?
A: Not as far as we know so far from scientifically assessed evidence.
Q: Can cannabis help young children with severe epilepsy resistant to all known treatments?
A: A number of people have claimed this. But this possible benefit has not yet been tested in scientifically assessable research.
D Potential risks:
Q: Are there any bad side effects from medicinal cannabis? People talk a lot about psychosis and marijuana: should we be worried about using a medicine that could cause schizophrenia?
A: Most of the assessment of side effects has been based on what is known from studying recreational cannabis. That’s like studying the safety of bootleg alcohol to estimate the safety of regulated alcohol. Used medically, cannabis can cause some mental disorientation, sleepiness, and dry mouth, but these are typically less severe and troublesome than many of the medications that might be used to treat the same conditions. Besides, the effects of not treating the conditions also have to be considered. It has also been said that some of these side effects counteract the worse side effects of the other medications, such as chemotherapy agents that cause serious side effects themselves. People distressed by severe symptoms unrelieved by conventional medications are unlikely to be concerned by the small risk of serious mental illness in a couple of decades’ time.
Q: Is there a risk that legal medicinal cannabis would increase the use of recreational cannabis?
A: Recreational cannabis use in those US states which allow medicinal cannabis is not greater than in those states where medicinal cannabis is not permitted.
Q: Can’t people taking cannabis become addicted to it?
A: Dependence is a small risk with cannabis in the sense that it is not as severe as the dependence that occurs with tobacco, heroin or cocaine. What matters is not just the risks of cannabis but also its possible benefits and the benefits and risks of using other medicines or no medicines.
Q: Aren’t there more modern and more effective drugs than cannabis?
A: Yes there are. But these don’t work in every case and sometimes they too can produce nasty side effects. Many of the more modern drugs are also much more expensive and some require the patient to be kept in hospital while they are being administered.
E Taking medicinal cannabis:
Q: Are there alternatives to taking cannabis by smoking it? How else can medicinal cannabis be taken?
A: Cannabis can also be vaporised and the vapour inhaled. Devices are now available to make inhalation of cannabis vapour convenient and inexpensive. Oral forms of cannabis (dronabinol and nabilone, developed some 30 years ago) used to be available in Australia but are not available any more because they were expensive and not especially reliable, and they have been made obsolete. There is little scientific information available about other forms of medicinal cannabis given by mouth (such as tincture). Cannabis taken by mouth, although perhaps well-enough absorbed, is broken down in the liver before it gets into the main bloodstream, making it hard to get the right dose in many people. Also, when cannabis is taken by mouth there seems to be an increased risk of anxiety attacks because there is no way to ‘stop giving it’ once it has been swallowed. Sativex (aka nabiximols) is a form of medicinal cannabis manufactured by a small pharmaceutical company. It is sprayed on the inside of the mouth. There are many attractive aspects of Sativex, particularly convenience, but it is not readily available in Australia, and is only permitted in cases of stiffness (spasticity) from multiple sclerosis. Tincture of cannabis used to be legally available some 20 years ago. It has been made available by some individuals in Australia but its supply, these days, is not legal. If medicinal cannabis is allowed in Australia, some people with only a short time left to live and others who have been smoking cannabis for a long time are likely to continue to smoke the drug.
Q: Aren’t Sativex and dronabinol available on the Pharmaceutical Benefits Scheme?
A: Neither Sativex (nabiximols) nor dronabinol are available on the Pharmaceutical Benefits Scheme.
Q: Is cannabis available medically in any other countries?
A: Medicinal cannabis is now available in about twenty countries including the USA (23 states), Canada, Switzerland, the Netherlands, and Israel.
Q: How is medicinal cannabis controlled in other countries?
A: In some countries medicinal cannabis is controlled quite carefully, with prescriptions by doctors and pharmacy dispensing. In some other countries, controls are much more relaxed and cannabis can be bought over the counter.
F Political and community factors:
Q: What’s stopping the government from legalising medicinal cannabis in Australia?
A: The main reason cannabis is not available in Australia is because of political impediments. Some Commonwealth and state/territory laws would have to be changed slightly. States make their decisions independently. Medicinal cannabis is allowed, in principle, under Australia’s international treaty obligations.
Q: How can we allow cannabis to be used medicinally while stopping it being used recreationally?
A: Easy. Australia allows morphine, cocaine, amphetamine and ketamine to be used medically while the recreational use of these drugs is prohibited.
Q: Is Australia doing enough research on medicinal cannabis?
A: Very little research on medicinal cannabis is carried out in Australia.
Q: What about people who might take medicinal cannabis and then try to drive a car?
A: There is an increased risk of a car crash if a driver has taken cannabis recently. This risk is much less than with alcohol, but the risk is even greater after a combination of alcohol plus cannabis has been taken. A number of medicines which are prescribed today in Australia also increase the risk of a car crash.
Q: What is public opinion in Australia about medicinal cannabis?
A: In a community survey commissioned by the Commonwealth Department of Health in 2010, 69% of Australians supported medicinal cannabis with 75% supporting more research.
Q: Do many Australians take cannabis for medicinal purposes now?
A: Yes, but we don’t know how many.
Q: Will medicinal cannabis be allowed in Australia?
A: Possibly. But it’s very hard to predict this.
* * * * * * * * * * * * *
Response to Comments by Wodak and Mather
1. One notes that Dr Alex Wodak is one of the key authors of this paper. Since he has been the undisputed champion of drug decriminalisation in Australia for the last 30 years, one must necessarily wonder what impact his personal views have had on the advice he has provided to the parliament on this occasion.
2. The title of the paper uses the phrase “Medical cannabis”. It is a matter of record that “medical cannabis” has been deliberately used as the “Trojan horse” or thin edge of the wedge which is strategically used to introduce cannabis decriminalization. This has been true in many instances overseas, and in the US states where it is now decriminalized. Moreover this tactic was made explicit in NSW last year during the unsuccessful attempt to introduce what was popularly thought to be a medical cannabis bill, but which, it turned out, was only for homosexual patients who liked to smoke cannabis. In the GPSC2 report which was tabled before the parliament at that time, it was acknowledged that only patients who liked to smoke cannabis – and their friends and carers – would be likely to avail themselves of the alleged benefits of the then proposed legislation. In other words the very use of the term “medical cannabis” is the standard misnomer for cannabis decriminalization 1, which has been found to be the most successful way to introduce it in virtually every jurisdiction around the world, and which has been repeatedly used in NSW.
3. As was noted recently by Dr Nora Volkow the Director of the NIH Institute concerned with drug addiction 2, cannabis has a well recognized withdrawal syndrome associated with it, which can be experienced by up to 50% of people who are exposed to it on a daily basis, particularly when that exposure occurs in adolescence 1. In the fourth answer on page 1, the authors list a series of symptoms including pain, muscle spasm, agitation, fits, convulsions and rheumatics all of which are recognized presentations of cannabis withdrawal 3. Since the pro-pot group acknowledged that only pot-smokers will want to smoke pot if it is legalized, what they are really saying is that they will be able to treat their cannabis dependence syndrome more easily if it is made more readily available. Even the cannabis advocates acknowledge that more efficacious and safer treatments exist for every purported indication for which they suggest its use.
4. The first answer on page 2 is completely incorrect. In this response Wodak et. al. appear to claim that smoked cannabis is a medicine. As noted by Dr Volkow raw cannabis contains hundreds of chemicals and is an impure substance. After burning as in smoking the products of full and partial oxidation form thousands of chemicals many of them highly toxic and frankly carcinogenic including similar tars, polycyclic hydrocarbons and aromatic amines as those found in tobacco smoke. No regulatory authority in the world (e.g. FDA 4 in USA or TGA 5 in Australia) acknowledges any smoked preparation as a valid form of dosing of any medicine. The term “medical cannabis” is therefore in strictly medical terms a misnomer which has been
strategically designed to confuse and mislead people as part of the clever public relations marketing campaign of the big cannabis industrial developers (by analogy with big tobacco interests), as have now developed in California, Colorado, Oregon, Washington state and elsewhere.
1 Lonsberry B “Medical marijuana is a fraud.” News Radio WHAM 1180. http://www.wham1180.com/onair/bob-lonsberry-3440/medical-marijuana-is-a-fraud-12428431 Viewed 13th July 2014. 2 The Institute she directs is called the National Institute of Drug Abuse. 3 See Epilepsy Action Australia – http://www.epilepsy.org.au/living-with-epilepsy/lifestyle-issues/alcohol-and-drugs 4 Food and Drug Administration 5 Therapeutic Goods Administration
5. The answer to the second question on page 2 is also incorrect. Wodak et. al. claim that cannabis is a second line drug for various – unspecified – medical conditions. This is erroneous. As clearly stated on the Epilepsy Action Australia webpage cited 6 it is not indicated at all by reputable authorities in this country as it is not even legal! The other point is that to achieve the so-called therapeutic effects one frequently has to achieve concentrations into the toxic range. There are numerous other treatments for glaucoma, asthma, epilepsy, pain and nausea. Were it legal and therefore ethical to list cannabis for these disorders, cannabis would be about 10th line, 20th line, 60th line, 80th line and 10th line respectively. This is another way of – politely – saying that there are no valid clinical indications for cannabis at this time. As Wodak and colleague correctly observe the indication for AIDS wasting has now become obsolete because of the great improvements in the treatments for AIDS.
6. Moreover in addressing this all important issue – the motivation for medical cannabis – Wodak and Mather appear to overlook the role of the pro-cannabis lobby in this campaign. Indeed one wonders if there would be any campaign to legalize cannabis if those who do not like to use it themselves were excluded from advocacy roles. One can only surmise at the relationship of the present advocates of the pro-pot position to the pro-pot practice.
7. Wodak and colleague’s answer to Question 3 on page 2 is also erroneous. Anecdotal evidence is not considered evidence which is even evaluable by reputable medical authorities. Wodak’s remarks do not state this clearly. One notes – paradoxically – that Wodak is keen to discount such evidence in the case of implant naltrexone – even in anecdotal cases where implant naltrexone has been obviously enormously successful (such as five years heroin free). At this point Wodak appears to be applying a double standard. The third type of evidence cited by Wodak and colleague is vague and unclear. The authors refer to “careful reviews of papers”. This is not a medical term. Modern Science considers “systematic reviews” and “meta-analyses”. Wodak and Mather do not even use these terms. So their meaning is unclear. In the context one must be concerned that this obfuscation of meaning may be deliberate.
8. Similar concerns apply to the fourth answer on page 2. Wodak and Mather refer to “one recent review”. The source is not even referenced! There are many reviews in medicine and one needs to consider the whole of the literature. Apparently this was not a systematic review or a formal meta-analysis as otherwise one would expect the authors of the present work to have cited this. Moreover the results of meta-analyses are typically reported in very complex form – not the very simplistic format which seems to be indicated by Wodak and Mather. The question is not “What were the findings of one particular review?”. The question in principle is “What does the totality of the literature say?”, or more formally “What were the findings of the largest, most comprehensive and most recent meta-analyses of the topic”. Moreover one again notes that Wodak and Mather have reported only a fraction of the information required to form an evaluation. How many of the patients involved in these un-sourced trials had to discontinue their trial medication because of toxicity? How many were lost to follow up? And particularly in how many patients who had not been previously exposed to smoked cannabis and who had been provided with access to all the usually recommended treatment options – was cannabis found to be the best therapy? Wodak and Mather’s un-referenced material does not even consider these pivotal questions, much less provide the parliament with the sorely needed information to address them.
9. The fifth answer on page 2 realting to the alleged medical indications for cannabis is also highly suspect. Let us review these conditions individually.
1) Nausea and vomiting with cancer chemotherapy can generally be controlled adequately with current methods. The drugs most commonly used and often effective are prochlorperazine and metaclopramide. Chief amongst the newer agents is the 5HT3 7 antagonists such as ondansetron, tropisetron and dolasetron, some of which can also be given as a sub-lingual wafer or by subcutaneous, intramuscular, or intravenous injection if needed so that vomiting itself does not preclude their administration. Similarly prochlorperazine can be given by suppository. These medications can all be given by many routes of administration. Other medications can also be used including steroids where required.
2) Pain clinics have numerous ingenious ways to control pain. Pain can also be induced by cannabis withdrawal, and cannabis use itself has been shown to be linked with chronic back pain, so beware the pain presenting in the cannabis addicted patient / advocate. Nevertheless Wodak and Mather are correct that many patients are left in difficult situations by their chronic non-cancer pain. This is an active area of research internationally, and one to which Australian researchers, particularly at the University of Adelaide, are making major contributions. The recent demonstration that inflammatory activity in the brain and nerves is associated with pain generation and pain perceptual mechanisms has opened major investigative pathways for the development of several exciting new agents. This is a project upon which some of the top medicinal chemists in the world are actively engaged, some of whom work intramurally at the NIH and NIDA 8 itself. One notes in passing that Wodak and Mather have neglected to observe that D-naltrexone and D-naloxone show special promise for this application.
3) AIDS wasting – As noted by Wodak and Mather this indication is disappearing due to the efficacy of the newer treatments for AIDS.
4) There are other treatments for MS stiffness. In particular recent advances in immunology have meant that the treatment of MS itself has dramatically improved in recent times with several newer options including teriflunomide, dimethyl fumarate, fingolomod and dalfampridine. Benzodiazepines, Lioresal, several anticonvulsants and local Botox can all find application when spasm is a problem.
10. The sixth answer on page 2 is also erroneous. Wodak and Mather claim that cannabis is not a cure for any described medical condition. Cannabis dependence and withdrawal is a well described medical condition acknowledged both in DSM-IV and DSM-V 9 of the APA 10. Administration of cannabis to patients in such states will produce a short term relief of symptoms, albeit with an exacerbation of its many long term toxic effects, oncogenicity, and gateway effects in other drug use, and likely damage to adolescent brain development 1-2. There is no intention in making this point to be humorous. This is very important because it is clear that many of the patients who are brought along to parliamentary enquiries, and who offer public testimony of the wonderful effects of cannabis are actually speaking from a background of pre-existing cannabis dependency and addiction. Lawmakers need to keep this key issue always in the forefront of their minds. As correctly identified by Dr Volkow, cannabis can cause many illnesses so the claim that cannabis relives a pain in whose aetiology cannabis was implicated, must be viewed with substantial circumspection by those charged with responsible decision making in our community. Lawmakers should note that these disorders include chronic back pain 2.
7 5HT is the standard medical abbreviation for serotonin. This refers to the 5HT-3 ligand – receptor pair. 8 National Institute of Drug Abuse 9 Diagnostic and Statistical Manual IV and V respectively. 10 American Psychiatric Association
11. The purported answer of Wodak and Mather to the issue of cannabis related toxicity given as answer 1 on page 3 is not only erroneous but dangerous. It is misleading and confusing. Of course one can form an impression of the possible early toxicity of high level cannabis exposure by studying low level recreational exposure.
12. In addressing the subject of cannabis toxicity their answer actually acknowledges none of the key salient points made by Dr Nora Volkow in her leading article in the New England Journal of Medicine on June 4th 2014. The interested reader is referred there for more information, and to Hon. Rev. Fred Nile’s speech introducing the subject to the Legislative Council of NSW. In particular, compared with the eminent work of Volkow and colleagues, Wodak and Mather overlook:
1) Known psychiatric toxicity – schizophrenia, anxiety, depression, bipolar disorder;
2) Effects as a gateway agent to other and hard drug use;
3) Damage to brain development particularly when exposure occurs in key developmental stages such as pregnancy, childhood and adolescence
4) Damage to attention, intellect, cognition, memory
5) Damage to long term lifetime trajectories including ability to form stable relationships and to gain useful employment;
6) Respiratory toxicity including chronic bronchitis and emphysema-like changes;
7) Driving related toxicity including fatal car crash, both alone and in combination with alcohol;
8) Cardiovascular diseases including stroke, and heart attack and transient ischaemic attacks;
9) Immunosuppressive actions particularly when given to AIDS patients, and especially when taken by the smoked route;
10) Real concern in many studies about the connections of cannabis to cancer.
13. Moreover as Dr Volkow astutely observes many of these old cannabis studies were done when the THC concentration of cannabis was 3%. So the studies which found no ill effects in the 1970’s – 1990’s are likely out of date at this time. Dr Volkow has noted that THC concentrations of cannabis are now reported in the USA commonly at 12%. Indeed one cannabis shop is said to be opening in Colorado reporting a choice for patrons from 17% – 20% THC in its product!
14. Wodak’s answer in relation to side effects also reverses the true state of affairs. Clinical reports of cannabis use cite a very high rate of unacceptable side effects, which frequently precludes is clinical application. Such very elevated rates of discontinuation (often around 30-50%) of cannabis based treatments are rare with other treatments in the conditions under discussion.
15. The risks of mental side effects from cannabis are not distant and remote as Wodak and Mather claim. Cannabis intoxication, dependence and tolerance in patients exposed to high levels of it – albeit for therapeutic purposes – are common, and
entail anxiety, paranoia, forgetfulness and depression, and at times psychotic disturbances and hallucinations as being not unusual.
16. The second answer on page 3 is misleading. There is extreme concern in the US now, and numerous on the ground reports that cannabis use in states permitting cannabis use has increased dramatically. California tabled its first cannabis BILLIONAIRE in 2013. Does anybody seriously believe that that is because nobody is buying his products??
17. It was estimated recently by official sources that Colorado will consume 130 tonnes of cannabis annually 11. Selling at $220 per ounce 12 and with 35,274 ounces per tonne, this translates to $7,760,280 / tonne or $1,008,836,400 for the whole crop in that state alone. Unfortunately, whilst tax revenues were cited as a major reason for legalization in Colorado, the simple expedient of not buying it from one of the state’s three registered recreational cannabis dispensaries which were more expensive than the medical pot shops, allowed taxation to be circumvented 13. It is important to note that 67% of all the cannabis sold was used by the 22% of heaviest users, further confirming the addictive nature of the legally available weed 14.
18. The trade was also encouraging cannabis tourists to flow into the state, just as had happened in the Netherlands 15. Indeed one court has ruled that the Dutch coffee shops be compensated for the reduction in their trade consequent upon a tightening of the laws which have now been put in place to restrict such cannabis tourism 16.
19. The US reviews cannabis consumption in numerous states. The CDC have just published national figures however the data from two key states was not available. The sample from Colorado was unusable, and Washington state did not participate in the survey at all 17. In other words if official figures fail to show increased use in the states legalizing cannabis that is likely a direct product of the “Don’t’ ask, Don’t tell” policy applied to addiction epidemiology by CDC.
20. The third answer on page 3 is also incorrect as judged by Dr Volkow’s article. Even the baseline risk of cannabis addiction is high at 9%, particularly given that up to 40% of the community have been exposed to cannabis. As Dr Volkow points out the addiction rate can rise up to as high as 50% in many groups. If as is widely suggested cannabis is legalized, then heavily cannabis addicted patients will become much more commonplace.
11 Silva R “Colorado marijuana market consumes estimated 130 tonnes of the drug annually.” HNGN 12th July 2014. http://www.hngn.com/articles/35958/20140711/colorado-marijuana-market-consumes-estimated-130-tonnes-of-the-drug-annually.htm Viewed 13th July 2014. 12 Wyatt C., “Colorado Completed First Legal Pot Study.” Associated Press. http://hosted.ap.org/dynamic/stories/U/US_RETHINKING_POT_DEMAND?SITE=AP&SECTION=HOME&TEMPLATE=DEFAULT Viewed 13th July 2014. 13 Wyatt C., “Colorado Completed First Legal Pot Study.” Associated Press. http://hosted.ap.org/dynamic/stories/U/US_RETHINKING_POT_DEMAND?SITE=AP&SECTION=HOME&TEMPLATE=DEFAULT Viewed 13th July 2014. 14 Light M.L., Orens A.;, Lewandowski B., Pickton T. “Market size and demand for marijuana in Colorado.” Prepared for Colorado Dept of Revenue. http://www.colorado.gov/cs/Satellite?blobcol=urldata&blobheadername1=Content-Disposition&blobheadername2=Content-Type&blobheadervalue1=inline;+filename%3D”Market+Size+and+Demand+Study,+July+9,+2014.pdf”&blobheadervalue2=application/pdf&blobkey=id&blobtable=MungoBlobs&blobwhere=1252008574534&ssbinary=true Viewed 13th July 2014. 15 Rodriguez C., “Marijuana for tourists, discord for the Netherlands.” Forbes magazine 24th September 2013. http://www.forbes.com/sites/ceciliarodriguez/2013/09/24/weed-ghettos-for-tourists-anger-netherlands-neighbors/ Viewed 13th July 2014. 16 Kooren M, “Dutch Cannabis coffee shops to be compensated over tourist laws.” Reuters. http://rt.com/business/shops-dutch-coffee-cannabis-303/ Viewed 13th July 2014. 17 CDC MMWR – Youth Risk Behaviour Surveillance – United States , 2013. http://www.cdc.gov/mmwr/pdf/ss/ss6304.pdf Viewed 13th July 2014.
21. The fourth answer on page 3 is also misleading. If one speaks with unbiased and independent respiratory physicians who treat asthma, ophthalmologists who treat glaucoma, neurologists who treat epilepsy, and pain physicians who treat pain, one hears the same refrain repeated over and over again that cannabis is not required as a treatment. The treatments of today are in general more than sufficient for the clinical requirements.
22. The fifth answer on page 3 is strangely at variance with every drug regulatory agency in the world. Oddly, Wodak and Mather seem to recommend the smoked route in direct contrast to every other medicinal chemist and regulatory agency the world over. One can only wonder if this does not reveal their personal bias.
23. Australia is a signatory to the international narcotic conventions particularly the Single convention 1961. Legalization would entail a major change in Australian society and Australian Law to allow legal cannabis. We would be in breach of our international treaty obligations. Amongst other things, these treaties allow us to participate in international policing operations to help to break up global drug running gangs, and to cooperate with law enforcement across national boundaries on many issues.
24. There is no question that Australia’s use of its presently legal drugs, tobacco and alcohol is responsible for an enormous public health burden. Adding cannabis to this situation, when – paradoxically – Wodak has been one of the loudest voices opposing alcohol- and tobacco- related harms – would clearly compound this situation. Moreover because of the well established gateway effect of cannabis, allowing cannabis would increase the use of the other illegal drugs. Hence this change would signal Australia’s degeneration into an increasingly drug taking-culture. We would become less employed and less employable; that is our welfare bill will inevitably rise. The rate of congenital abnormalities would rise so children would be borne with lifelong disabilities including mental retardation. The rate of chronic disease in the community, including chronic back pain, would rise. In other words legalizing cannabis will increase our physical and mental health bill and our long term welfare dependency bill, at the same time as reducing our taxation base and national income generating capacity. This is an impossible cost squeeze and social dysfunction squeeze for any Government.
25. The fifth answer on page 3 relating to restricted use of cannabis is invalid. Wodak and Mather claim that one could nevertheless restrict cannabis use if it was allowed medicinally by analogy with morphine, cocaine, amphetamine and ketamine. 40% of our population has not been exposed to these agents. Moreover this is not the pattern which has been seen recently as medicinal cannabis is the all too obvious leading edge of cannabis decriminalization around the world. One notes the very reverse of this in the Dutch experience alluded to above.
26. The sixth answer on Page 3 is also suspect. Wodak and Mather have neglected to mention that cannabis is the drug most frequently implicated in car crashes after alcohol, and the most frequently implicated of all the illicit drugs in motor vehicle crashes. Legalizing it and increasing its use would obviously exacerbate this by an amount at least proportional to the amount of its increased use.
27. Moreover as the authors correctly observe alcohol is already legal, so that legalizing cannabis effectively legalizes the highly dangerous cannabis–alcohol cocktail. This
has been shown to be very dangerous in many studies, as is acknowledged by the present authors.
28. Wodak claims that many Australians take cannabis medicinally at present. He has not stated how many of these were previously habituated to cannabis. He does not say how many of these are taking it for cannabis-induced diseases. He does not give data on the overall physical or mental health of cannabis smokers, prior to the commencement of their supposed serious illness.
29. The other chestnut which Dr Wodak frequently mentions, although it is absent from the present paper, is that alcohol and tobacco are related to far more ill-health in the Australian community than cannabis. In a simple quantitative sense it may or may not be correct. In either event it is an appalling argument in that it fails to correct for the very different exposure patterns of the different agents. The more frequent use of tobacco and alcohol in our community is directly related to their differing legal status. Both the numbers consuming tobacco and alcohol and the relative amounts consumed, are greater for the legal drugs than any of the illegal drugs, precisely because of their legal status. So whilst Wodak and colleagues frequently use this argument to ridicule genuine medical concerns in relation to the illicit drugs, in fact it is a potent argument in favour of retention of the present status quo, and the illicit status of the presently proscribed agents including cannabis. Given what has now been established by medical researchers in relation to cannabis-induced toxicity it presumes far too much to suppose that cannabis is any less toxic than our presently legal intoxicants. No reputable scientist who is unbiased and familiar with the published research in this area would support this liberalist position.
30. In fact detailed examination of communities where cannabis consumption is normative, such as the northern rivers district of NSW including the Nimbin-Mullumbimby area, show that the area is shockingly affected by unduly elevated rates of depression, suicide, murder, unemployment, family breakdown rates, poverty and general unhappiness 18, despite its being situated in some of the most fertile and productive rural landscapes in the country. Given what is now known of the medical effects of cannabis, much of this social disadvantage and community repression which is reflected on every metric, can likely be related directly or indirectly to the known high cannabis consumption rate in the area, and the apparently legally protected status of the region’s not insignificant cannabis crop.
31. Overall one is left with the impression that the work that has been produced by Wodak and Mather is a thoroughly activist piece. This document distorts and mishandles the truth at most points. In short it is a document such as might be expected from Australia’s leading drug advocate. In that sense it is highly predictable.
32. That it purports to be a reputable and scientifically reliable source of information for lawmakers is appalling. It is neither scientific nor reliable. In a scientific sense it is nothing less than a national scandal. It is not so much a scurrilous abuse of scientific process and current evidence in regard to both the basic science of pathophysiology and applied clinical therapeutics, as a mockery, a debasement, and a frank abuse of science and medical data.
33. Given that Dr Alex Wodak appears to position himself as one of Australia’s leading national figures advising the nation on addictive drugs, the conclusion becomes inescapable that Australia has been ill-advised on illicit drug policy by this self-confessed drugs legalization activist, and that our policies in this area are therefore likely misinformed, ill-conceived and / or ill-constructed.
34. Given that the activist position adopted by Dr Wodak, speaking in the name of Science, is clearly at major variance with the contemporaneous pronouncement of acknowledged world leaders, sufficient evidence exists for a formal motion of censure against Dr. Wodak from this house for attempting to mislead the Legislative Council of NSW.
1. Volkow ND, Baler RD, Compton WM, Weiss SR. Adverse health effects of marijuana use. N Engl J Med 2014;370(23):2219-27.
2. Reece AS. Chronic toxicology of cannabis. Clin Toxicol (Phila) 2009;47(6):517-24. | <urn:uuid:eda9143c-e907-44ef-9f8f-c2d75838da61> | CC-MAIN-2017-17 | http://drugprevent.org.uk/ppp/2014/08/some-frequently-asked-qs-and-as-about-medicinal-cannabis/ | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917124297.82/warc/CC-MAIN-20170423031204-00545-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.954073 | 8,150 | 2.5625 | 3 |
The Historical and Archaeological Significance of the Tefft Historical Park
The Tefft Historical Park site consists of at least five loci with American Indian artifact deposition, and multiple loci representing historic period occupation from the mid-17th to the early 20th century. Historic period loci include three house foundations, four outbuilding locations, three historic cemeteries, three stone-lined wells, an irrigation system, a stone wall complex, two stone footbridge features, and several abandoned farm fields.
The property is a well-preserved example of prehistoric period occupation and of an early colonial Rhode Island family settlement. An archaeological survey revealed evidence of 5,000 years of prehistoric human occupation as a seasonal Narragansett Indian camp or village site (Strauss, 1998). Some historic features appear to date back to the first English settlement of southern Rhode Island, known as the Pettaquamscutt Purchase of 1658.
Situated along southern Rhode Island’s coastal plain, the site enjoyed some of the mildest weather in all New England. Perhaps the first settlers of the site, the Narragansett Indians, were attracted by its protective southwest prospect at the base of one of the taller hills in the region (Tefft Hill, elev. 255 ft). The presence of two natural springs provided an ample water supply. Glacial deposits provided the raw material for the many stone walls that cover the landscape, marking the remains of several small arable fields, orchards, and pastures. At the time of European contact in the mid-17th century, it is likely that the lowland was burnt-over cropland utilized by the Narragansett people, while the uplands contained stands of mature softwood forests. Worden's Pond, the headwaters of the Pawcatuck River and at one time the site of several large Narragansett villages, lies quite close to the property.
It was these favorable environmental attributes that motivated the English immigrant John Tefft to purchase 500 acres of land encompassing the site sometime between 1658 and 1672. The original deed has not survived. John Tefft served as a witness to the second Pettaquamscutt Purchase of 1661, and possibly laid out his share soon after. From land evidence records of adjoining neighbors, and from the Fones Record, we learn the location and extent of John Tefft’s holdings. From John Tefft’s 1674 will, we also learn that he owned a 20 acre homestead along the Pettaquamscutt River in the Tower Hill area of the Pettaquamscutt Purchase.
It appears that Joshua and Samuel Tefft, John’s only sons, settled the 500 acre property and began to raise livestock, cattle in particular, in the mid-1660s. The location of the homestead, near the geographical center of the property, suggests that it was selected before the boundaries were run. Situated near Tobeys Neck, a natural protective peninsula in the Genessee Swamp, and close to the natural springs and fertile ground, the site was ideal for agricultural pursuits.
There is documentary evidence that both Joshua Tefft and his brother Samuel spoke the native Algonquian language (Providence Town Papers, 1:364; LaFantasie, 2:711). Joshua Tefft mentions his cattle and "his farme a mile and a half from Puttuckquomscut" in a deposition taken by Roger Williams in 1676 (LaFantasie, 2:711). For fourteen years the Tefft family lived peacefully with their Narragansett neighbors, until the outbreak of King Philip’s War in 1675. While the Tefft family sought safety on Aquidneck Island, Joshua remained behind to care for the cattle. Joshua Tefft did not survive the war.
In the decade following King Philip's War, the land remained largely abandoned due to Rhode Island’s recurrent boundary conflict with the colony of Connecticut. However, Samuel Tefft¹ returned to re-occupy and work his father’s land in the mid-1680s, being taxed nine shillings by the Andros administration in 1687. Documentary and archaeological evidence suggests that the foundation of Samuel’s 17th-century dwelling house still exists, and it is of prime importance in understanding the nature of the property. Further archaeological research is necessary to confirm a date of construction.
Over time, a complex network of stone walls, along with an irrigation facility, animal facilities, and several outbuildings, was also built. About 1720, Samuel Tefft¹ built another dwelling house in the northeast section of his 500 acre property, which stood until it was destroyed by fire in December 2000. In his 1725 will, Samuel Tefft¹ left the original homestead in the center of the property to his two sons, John² and Samuel Jr.² John inherited the northwest corner of the property, containing 125 acres of the original 500; it also includes several historic features but is not part of this application. Samuel Tefft² received the southern 250 acres. Upon the death of their mother Elizabeth in 1740, Samuel Jr. bought out his brother’s share of her land, leaving Samuel Jr. with a total of 375 acres, or three-quarters of his grandfather’s 500 acres. (A third son, Joseph Tefft², received property in the Shannock Purchase.) The 1730 Rhode Island census reveals that Elizabeth and her two sons kept four Indians in their households, two of whom took on the Tefft name: Robin and Joshua.
The farm continued to be divided, subdivided, and again consolidated by Tefft family members until it was finally sold out of the family in 1909. Like other South Kingstown farms, it was probably most prosperous in the mid-18th century. At the time of his death in 1725, Samuel Tefft¹ was relatively wealthy, with an inventory in excess of £1,300 and extensive land holdings in South Kingstown and the town of Richmond (where there is a "Tefft Hill" along the Exeter line, named after him).
It was through the descendants of Samuel Tefft Jr.² that much of the land in and about "Tefft Hill" (of South Kingstown) remained in the Tefft family over the next two centuries. Samuel Jr.² divided the property among his sons: Samuel³, Daniel³, Stephen³, Tennant³, and Ebenezer³. It appears that Daniel and Ebenezer received a portion of the property in question. By the mid-1700s, Samuel Tefft’s¹ original house had fallen into ruins, but it became a corner marker for many of the subsequent property divisions between Tefft descendants and their neighbors.
In his 1754 will, John Tefft² describes the bounds of his property as beginning "near about north from the place where the old house stood that did belong to my honoured father, Samuel Tefft, dec’d" (SKCP 5:191). In 1771, John Tefft’s² son Samuel took possession of 120 acres of his father’s land in the northwest corner of the original 500 acre purchase. He is referred to as "Samuel Tefft of Richmond" in the historic record, to distinguish him from his similarly named cousin.
Ebenezer Tefft3, father of James Tefft4, was also the town sergeant for many years, most notably during the American Revolution. In 1760, Ebenezer inherited 30 acres from his father, Samuel Tefft2, which included the site of his grandfather’s dwelling house. (SKLE 7:413) In the 1758 will of Samuel Tefft2, he describes Ebenezer’s tract of land as "beginning at the middle of chimney north one rod thence east to the lane as it now is, then northerly as the fence and wall now stands…" (SKCP 5:128) Many of these features are still recognizable today. [Photo: a brick in situ on the remains of the chimney.]
Stephen Tefft’s3 son, Gardner Tefft4, purchased 10 acres with dwelling house from his cousin James in 1778. (SKLE 7:414) During the late 18th century, the site was frequently referred to as "Gardner Tefft's Farm." Gardner Tefft, a private during the Revolution, and his wife Waity, are buried in South Kingstown Historical Cemetery #17 (SKHC #17) located on property directly adjacent to the park.
Gardner and Waity Tefft’s sons, Norman5 and Elijah5, were next to work the family farm. During the 1830s, Nailer Tom (Thomas B. Hazard), the local blacksmith, made several references in his diary to Norman Tefft. Nailer Tom mentions purchasing "4lbs ½ of butter of Norman Tifft at 18cents a lbs" and frequently complains "Norman Tiffts oxen gott into my Corn last Night" and about similar events. Elijah Tefft sold his 150 acre portion of the property to his nephew Daniel E. Tefft6 for $300 in 1835, reserving only "a privilege to the family burying place." (SKLE 16:89)
It is evident that cattle were the primary livestock raised at the farm from its earliest colonial occupation in the mid-1600s, well into the mid-1800s. From Samuel Tefft’s1 1725 will, we learn much about the agricultural activities of the early Tefft family. (SKCP 2:29-39) Cheese and butter were the primary dairy products. For this reason, hay was the main agricultural crop, but rye, corn, beans and flax were also grown. In addition to cattle, the family raised sheep, swine and geese. Wool from sheep and linen from flax were spun and woven into garments. Apple orchards produced barrels of cider. There was also a cherry orchard. [Photo: one of the stone-lined water wells.]
In 1796, Ebenezer Tefft’s3 son, Daniel Tefft4, sold his parcel of upland to Samuel Fowler. (SKLE 9:160) On this portion of the property is what appears to be the foundation of a "stone-ender" style building. Further archaeological research is necessary to determine the building’s original structure. Simon Niles Sr., a Native American/African American, purchased this property from Samuel Fowler, et al., and lived on it for several years. In 1858, Simon Niles Sr. sold the property consisting of 18 acres to Elisha R. Potter, reserving the "right to be buried in the Fowler lot…where the said Simon has already commenced to bury his family…" (SKLE 20:214) South Kingstown Historical Cemetery #81, otherwise known as the Niles/Fowler Cemetery, with approximately 15 burials, is located near the dwelling house. All of the gravestones are unmarked, except one:
Serg’t Co. A 11th U.S.C. Art’y H’y
Died Nov.22 1865
Died of disease contracted in U.S.
service during the Great Rebellion
Aged 17 years
In 1868, through the power of eminent domain, the State of Rhode Island and Providence Plantations granted the Narragansett Pier Railroad Company a right-of-way to build a railroad line "not to exceed six rods" from West Kingston to Narragansett Pier, provided "that all damages to any persons thereby to be paid." (R.I. Acts & Resolves & Reports, May 1868) John H. Tefft7, a descendant of both John Tefft2 and Samuel Tefft2 who lived in the Oliver Watson House (located nearby on the URI campus), owned the homestead site at the time. He and his uncle, David Tefft6, who owned land to the east, did not anticipate the disruption the railroad would bring as it passed through their farmland.
In 1880, John and David, as well as several of their neighbors, took their case against the Narragansett Pier Railroad Company to the Rhode Island Superior Court. (May Term 1880, No. 1147) David won his case, and was awarded $500 and costs; John settled his case out of court. However, the popular Narragansett Pier Railroad may well have been the death knell for the Tefft farms, and could explain their swift decline in the late 1800s.
The last burial in SKHC #17 is that of Daniel E. Tefft, who died in 1886. While the land evidence is hard to follow at this point, it appears that at least the Simon Niles property next fell into the ownership of John H. Tefft7. Over the years, John H. Tefft accumulated a vast estate including much of his ancestor’s "Tefft Hill", parts of West Kingston, and beyond. John H. Tefft died in 1888, leaving much of the Tefft Hill property to his niece, Mary L. Northup. (SK Prob. Rec. B 13:109) Mrs. Northup eventually sold 250 acres comprising four adjoining lots known as the "John Tefft Estate" (site of Samuel Tefft’s1 1725 dwelling house), "Simon Niles Land", "Oatley Land" and the "Woodlot" to Oliver W. Greene in 1909. (SKLE 37:231) Some exceptions aside, it is reasonable to conclude from the land evidence record that the majority of the property presently known as the Tefft Historical Park was in the possession of the Tefft family for nearly 250 years.
During the 20th century, the property fell into disuse and remained fallow while its historical buildings drifted into ruins. Ownership of the property changed hands every decade or so, but due to its semi-remote location no attempts were made to develop the land until the latter part of the century—once in the 1970s, and again in the late 1990s. However, any development would have destroyed the already fragile nature of the site. While the property has been subject to infrequent random vandalism, it remains largely intact and in situ. It has excellent integrity as an archaeological site and possesses considerable research potential.
Through successful negotiation and fund raising efforts, 28.07 acres of land representing the core settlement of John Tefft’s 500 acres in the Pettaquamscutt Purchase were acquired by the South Kingstown Land Trust in November 2000. The site's close proximity to the South County Bike Path (formerly the Narragansett Pier Railroad) may promote substantial visitation. It is hoped that the Tefft Historical Park will serve the public as an important educational resource and a reminder of southern Rhode Island’s rich cultural heritage and regional history.
The Tefft Historical Park is considered to be one of the best preserved colonial sites in Rhode Island, and possibly in New England. The prehistoric and historic remains are associated with several important themes in long-term human adaptation to the Rhode Island environment: European/Indian contact and interaction, King Philip's War, the Revolutionary War and the Civil War, all of which are events that have made a significant contribution to the broad pattern of our history. [Photo: the gate leading to cemetery #100.]
The site is intimately associated with the Tefft family. Ownership of the property remained largely in the Tefft line for over 225 years, and the remains of the property reflect several significant periods of American history. Members of the Tefft family served in both civil and military capacities, and were especially involved in the development of the early South Kingstown town government. Perhaps one of the most significant aspects of the Tefft Historical Park is that a considerable amount of documentary evidence (probate, inventory and land evidence records, etc.) still exists that would enhance future archaeological research projects.
Tefft Historical Park contains two historical cemeteries (SKHC #81 & #100), and is directly adjacent to a third (SKHC #17). SKHC #100 may date to circa 1675. (Anthony, 1994) James Arnold counted 50 graves in SKHC #100 in 1883. There are approximately 24 graves in SKHC #17, and 15 in SKHC #81. These early cemeteries contain Tefft ancestors and people in the lines intermarrying with them (Gardners, Oatleys, Lillybridges, and others). The care and maintenance of these cemeteries is important out of respect to these early Rhode Island pioneers, and for their research potential.
Cemetery #100 may also contain the remains of Joshua Tefft, who fought with the Narragansett Nation against the United Colonies during the "Great Swamp Fight" and was subsequently captured and executed for high treason by United Colony forces. Today, Joshua Tefft has the dubious distinction of being the only known Englishman to be "hanged and quartered" in New England history. The details of this complex story are fascinating and provide an intriguing case study into the relationship between early Rhode Island and the other New England colonies. [Photo: a grave in cemetery #100.] Is this Joshua Tefft's grave? I believe that it is, but will we ever know for sure?
While the cemeteries remind us of the ninety-plus people who lived their lives and now rest in this serene locale, many of their descendants migrated throughout the country during the growth of the Nation. Today thousands of individuals can trace their lineage back to "Tefft Hill" through a Tefft ancestor.
The Tefft Historical Park is an important historical resource as it represents many centuries of human occupation, from approximately 5000 years ago to the early 20th century. It is well preserved and has significant potential to yield information about the prehistoric, contact, and historic eras. There is a nearly continuous record of artifacts still buried in the ground, as confirmed by the preliminary archaeological survey. The site is one of the few remaining sites capable of revealing new information about early cross-cultural contact—a vital time in the history and development of our nation. Prehistoric sites abound on this land, left by peoples with no written history. Their story, and the story of the colonial people they interacted with, is written in the soil.
Anthony, A. Craig, Joshua Tefft - R.I.P. Plymouth State University. Ms. 1994.
---, The Tefft Family and the Narragansett Controversy - A Window into the Creation of Rhode Island and Providence Plantations. Plymouth State University. Ms. 1998.
Arnold, James. Ancient Cemeteries of Kingstown. Rhode Island Historical Society. 1883.
Hazard, Caroline. Nailer Tom’s Diary otherwise The Journal of Thomas B. Hazard of Kingstown Rhode Island, 1778 to 1840. Boston: The Merrymount Press, 1930.
LaFantasie, G. Ed., The Correspondence of Roger Williams. Rhode Island Historical Society. 1989, 2:711
Early Records of the Town of Providence Vol. XV, Being the Providence Town Papers Vol. 1 1639-April 1682 Nos. 01-0367 (1:364).
South Kingstown Land Evidence. Vols. 9, 16, 20, 37.
South Kingstown Council & Probate Records. Vols. 2, 5.
Strauss, Alan E. Phase 1C Archaeological Survey of the Proposed Blueberry Hill Development in South Kingstown, Rhode Island. September 1999.
Tefft, Timothy Nathan. A Record of Some of the Descendants of John Tefft of Portsmouth and Pettaquamscutt, Rhode Island. 2001.
DIRECTIONS TO THE TEFFT HISTORICAL PARK:
Via Road: Once you find yourself in the village of Kingston, Rhode Island, begin at the Pettaquamscutt Historical Society (2636 Kingstown Road), directly west of the Congregational Church (you can't miss it). Proceed west on Route 138 for 1/10th of a mile, taking the second left, Biscuit City Road. Follow Biscuit City Road for 8/10ths of a mile until it comes to a T, and go right on Stonehenge Road for 3/10ths of a mile until it comes to a T; go right on White Horn Drive. Proceed downhill 2/10ths of a mile and take a left on Berry Lane. Follow Berry Lane all the way to the dead end and park at the edge of the circle. This is public access, so you may park freely. There is a chained gateway; walk around this and you are within the Tefft Historical Park.
Via the South County Bike Path: The Tefft Historical Park is mid-way along the South County Bike Path between the access points on South Road and Ministerial Road. The entrance is situated between a culvert on the northwest and the posted and chained entrance to the Kingston Water District property to the southeast. Scan the eastern side of the pathway and you will easily spot two boulders under some white pines on the ridge. This is the entrance to the Tefft Historical Park.
TAKE NOTHING BUT PHOTOS AND LEAVE NOTHING BUT FOOTPRINTS | <urn:uuid:546626d0-4b09-4579-a073-8791024e24f9> | CC-MAIN-2017-17 | http://www.freewebs.com/kingsprovince/teffthistoricalpark.htm | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121778.66/warc/CC-MAIN-20170423031201-00131-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.961957 | 4,511 | 3.640625 | 4 |
Sperm competition, in which the ejaculates of multiple males compete to fertilize a female's ova, results in strong selection on sperm traits. Although sperm size and swimming velocity are known to independently affect fertilization success in certain species, the relationship between sperm length, swimming velocity and fertilization success remains a challenge to disentangle. Here, we use the zebra finch (Taeniopygia guttata), in which sperm size influences sperm swimming velocity, to determine the effect of sperm total length on fertilization success. Sperm competition experiments, in which pairs of males whose sperm differed only in length and swimming speed competed to fertilize a female's ova, revealed that males producing long sperm were more successful in terms of (i) the number of sperm reaching the ova and (ii) fertilizing those ova. Our results reveal that although sperm length is the main factor determining the outcome of sperm competition, complex interactions between male and female reproductive traits may also be important. The mechanisms underlying these interactions are poorly understood, but we suggest that differences in sperm storage and utilization by females may contribute to the outcome of sperm competition.
Sperm competition is almost ubiquitous across the animal kingdom and imposes strong selection on males to produce high-quality sperm. Males of species experiencing intense sperm competition typically produce ejaculates with: (i) more sperm , (ii) a higher proportion of viable sperm , (iii) more uniform sperm morphology [4–7], (iv) longer sperm [8–12], but see , and (v) faster swimming sperm [14,15], relative to males of species with little or no sperm competition.
Our understanding of how different sperm traits influence competitive fertilization success, however, remains incomplete. The number of sperm inseminated is often important in determining the outcome of sperm competition ([16–18] but see ), but the enormous variation in sperm morphology across species [20,21] suggests that size and shape are also important. However, attempts to understand how sperm length influences fertilization success have yielded inconsistent results (e.g. [19,22,23])—inconsistencies that may be partly explained through variable sperm competition mechanisms and ejaculate investment across different taxa .
Longer sperm are assumed to have an advantage over short sperm in a competitive scenario, because long sperm generally have: (i) longer flagella, providing greater forward propulsion, and (ii) relatively larger midpieces (, see also ), which produce more energy (via adenosine triphosphate, ATP) [27,28]. Although the relationship between sperm ATP content and swimming speed is uncertain [28–30], there is good evidence that longer sperm swim faster than shorter sperm, both within and between species (e.g. [15,31], but see [32,33]).
Faster swimming sperm are often assumed to fertilize more ova because fast sperm may reach the site of fertilization before slow sperm. This relationship between swimming speed and fertilization success is evident in some species of birds [34,35] and fish . In species where longer or larger sperm achieve higher velocities, logic suggests that sperm size should then predict a given male's fertilization success in a sperm competition situation. In fact, there is limited experimental support for this prediction . It is possible that, in some species, high levels of intra-ejaculate variation in male's sperm could mask a positive relationship between sperm length and fertilization success . A lack of variation between sperm of an individual male (i.e. in species with intense sperm competition) could mean that detecting relationships is challenging (although not impossible, e.g. [37,38]).
In this study, we use the zebra finch, Taeniopygia guttata, to clarify the relationship between sperm length and fertilization success. The zebra finch is an ideal species to do this because considerable natural variation in sperm length exists between males (mean values for different males vary from approximately 40 to 80 μm), as a consequence of relatively low sperm competition intensity [39,40]. In previous studies of the zebra finch, we have also shown that: (i) sperm length is extremely consistent both within and between the ejaculates of individual males , (ii) length and swimming speed are heritable and positively genetically correlated [39,42], and (iii) longer sperm swim at greater velocities than shorter sperm . Crucially, however, it is still not known whether, in a competitive scenario, males producing long sperm enjoy greater fertilization success than males producing relatively short sperm.
We conducted sperm competition experiments to test the hypothesis that, in a competitive environment, long sperm males fertilize more ova than short sperm males. In a mate-switching experimental design (similar to ), pairs of males, one male producing long sperm and the other producing short sperm, were mated sequentially, for 3 days per male, to a single female. In the zebra finch and other birds, inseminated sperm are stored in the female's reproductive tract in specialized sperm storage tubules (SSTs) [44–46] from which they are lost over time at a constant rate [47–49]. In birds, following sequential copulations with two different males, the proportion of sperm from the second mating male is expected to increase across successive eggs in a clutch. This is a result of passive sperm loss from the SSTs, such that fewer sperm from the first male remain in the SSTs at any given time point , explaining why, in birds, when all else is equal, sequential copulations usually result in the last male to copulate siring most offspring [43,51]. In our sperm competition experiments, we controlled for last male sperm precedence by employing a paired experimental design, in which we repeated the sperm competition protocol with the identical male pairs and females, but alternated the order in which the long and short sperm males copulated with the female.
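The passive sperm loss prediction described above can be sketched numerically. If sperm leave the SSTs at a constant per-capita rate and both males inseminate equal numbers, the first male's sperm have simply been decaying for longer, handing the second male a fixed numerical advantage at fertilization. A minimal sketch, with an illustrative (not measured) daily loss rate:

```python
import math

def second_male_share(rate, gap):
    """Predicted share of second-male sperm under the passive sperm loss
    model: both males' sperm decay at the same per-capita `rate` (per day),
    but the first male's sperm were deposited `gap` days earlier.

    share = n2*exp(-r*t) / (n1*exp(-r*(t+gap)) + n2*exp(-r*t))
    which, for equal insemination sizes, simplifies to a constant:
    """
    return 1.0 / (1.0 + math.exp(-rate * gap))

# Illustrative values only: a 3-day gap between the two males' matings.
share = second_male_share(rate=0.3, gap=3.0)  # ≈ 0.71 in favour of the last male
```

Note that with equal loss rates the predicted last-male share is constant over time; deviations from it, as observed in this study, point to other processes (e.g. differential uptake or release from the SSTs).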
By counting the sperm embedded in the outer perivitelline layer (OPVL) of the avian ovum, it is possible to estimate how many sperm reach the ovum and to determine the likelihood of fertilization following a single insemination . We developed this technique further, using phenotypic ‘labelling’ of sperm, to allow us to confidently assign each individual sperm observed on the OPVL to one of the competing males. This allowed us to assess the proportion of each male's sperm that reached the ovum. We also determined the paternity of each embryo, revealing the eventual winner of sperm competition. In addition, we investigated whether an individual male's fertilization success in a sperm competition scenario is predicted by the number of his sperm reaching the ovum relative to that of the other male. Our results provide unique insight into the processes occurring immediately prior to fertilization, and how they affect the outcome of sperm competition.
2. Material and methods
(a) Study animals and sperm measurements
The zebra finches in this study were part of a domesticated population maintained at the University of Sheffield since 1985. Zebra finch sperm morphology (e.g. sperm total length) is highly heritable [39,42]. We conducted an artificial breeding experiment (described in the electronic supplementary material) which increased the number of males in the population that produced long (more than 70 μm) or short (less than 60 μm) sperm, but did not increase sperm length beyond that which occurs naturally . Sperm samples were collected from all adult male birds and five morphologically normal sperm per male were photographed using light microscopy at 400× magnification (Infinity 3 camera, Luminera Corporation, and Leitz Laborlux microscope) and measured to the nearest 0.01 μm using ImageJ . Based on these initial sperm measurements, pairs of males (matched by nearest hatching date) were selected for the sperm competition experiment, such that one male produced long sperm (n = 18) and one produced short sperm (n = 18). In no case did the sperm of the male pairs overlap in length (mean difference between males ± s.e.m.: 18.27 ± 0.70 μm). Each male pair was allocated to an unrelated (i.e. not a sibling, parent or offspring) female who originated from either the long (n = 8) or short (n = 10) selection line. The mean relatedness scores (presented as mean ± s.d.) between the male and female pairs, and between pairs of competing males were low (0.0371 ± 0.056 and 0.0026 ± 0.007, respectively; see the electronic supplementary material for further details). Females were housed singly in a cage (dimensions 0.6 × 0.5 × 0.4 m) with a nest-box half filled with hay. Each female cage had an adjoining cage for use later in the experiment.
(b) Sperm competition experiments
Sperm competition experiments were conducted using a mate-switching protocol . One male from each pair was paired to the female for 3 days (and allowed to copulate freely). The males were selected systematically to ensure that approximately half of the females (in both the long and short lines) were paired to a long sperm male first, with the remaining females paired to a short sperm male first. The second male (either a long or short sperm male) was then paired to the female for an additional 3 days (to copulate freely). After 3 days, the second male was placed in the adjoining cage, where a wire mesh divider prevented any further physical contact. Females were allowed to lay a clutch of eggs, all of which were collected daily (n = 192) and marked with a unique female code and the egg number. Eggs were artificially incubated at 38°C for 48 h, and stored at 4°C until processing. Once the maximum duration of sperm storage for female zebra finches (14 days) had elapsed, each mating trial was repeated as above (using the identical males and females), except that males were paired to the female in the reverse order. Thirty clutches of eggs were collected and analysed from 18 females, 12 of which produced a clutch of eggs in both mating rounds.
(c) Quantifying competitive success
Male competitive success was assessed in two ways: (i) the proportion of sperm from each male that reached each ovum (determined by counting sperm on the OPVL) and (ii) the paternity of each embryo. Eggs were dissected in the following way, as in . The egg was opened into a petri dish of phosphate-buffered saline (PBS), and the embryo gently detached from the surface of the yolk using a hair loop (a piece of human hair taped to a pipette tip to form an oval loop approx. 5 mm long), collected using a pipette and sterile pipette tip, and stored in 100% ethanol for molecular paternity analysis at a later date. The yolk was cut in half and the OPVL was removed, washed in PBS, laid flat on a microscope slide, stained with 10 μl Hoechst 33342 fluorescent dye (0.5 mg ml−1) (Molecular Probes, USA) and incubated in the dark for 2 min. We examined the half of the OPVL that contained the germinal disc (GD) because the majority of sperm are observed around the GD . Using fluorescence combined with darkfield microscopy (Leica DMBL) at 400× magnification, sperm on the OPVL were photographed (Infinity 3 camera, Luminera Corporation), and sperm length (n = 4420) was measured to the nearest 0.01 μm (see the electronic supplementary material for images of sperm embedded in the OPVL). This measurement was used to assign each sperm to either the long or short sperm male based on sperm length data collected previously; thus, each male's sperm in the OPVL was ‘labelled' by its phenotype (long or short). The mean length of sperm collected directly from the male (from the seminal glomera—SG—see below), and from sperm embedded in the OPVL (from the same male), was significantly correlated (r2 = 0.96, t = 15.18, d.f. = 18, p < 0.0001). In cases where the sperm's head was missing, we used flagellum length to identify sperm as long or short (flagellum and total length are also significantly correlated; r2 = 0.99, t = 106.01, d.f. = 33, p < 0.0001).
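The phenotypic ‘labelling' step above amounts to a simple threshold classifier: because the competing males' sperm-length distributions never overlapped, each OPVL sperm can be assigned to a male by comparing its length with the midpoint between the two males' mean lengths. A sketch with hypothetical measurements (the function name and values are illustrative, not from the study):

```python
def assign_sperm(lengths_um, mean_long, mean_short):
    """Assign each sperm measured on the OPVL to the long- or short-sperm
    male, using the midpoint between the two males' mean sperm lengths
    as the cut-off (valid because the distributions do not overlap)."""
    cutoff = (mean_long + mean_short) / 2.0
    return ["long" if length >= cutoff else "short" for length in lengths_um]

# Hypothetical male means (μm) and four OPVL measurements; cutoff = 63.5 μm.
labels = assign_sperm([72.1, 55.3, 68.9, 58.0], mean_long=71.0, mean_short=56.0)
```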
(d) Sperm quality analyses
At the end of the experiment, all males (fully rested from copulation for at least four weeks) were humanely killed by cervical dislocation and sperm collected from the distal region of the left SG by dissection. The following sperm quality analyses (described in the electronic supplementary material) were carried out to determine whether sperm quality parameters were similar within the male pairs: (i) swimming velocity (the swimming speed of sperm), (ii) viability (the proportion of viable sperm), (iii) morphology (the proportion of sperm with normal, undamaged morphology), (iv) concentration, and (v) longevity (the length of time sperm remained motile) (for results, see the electronic supplementary material, table S1). Testes mass data were also collected (electronic supplementary material, table S2). Data on copulation rate and SG mass were opportunistically collected from long and short sperm males that were not used in the experiment (refer to the electronic supplementary material, tables S3 and S4).
(e) Paternity assignment
DNA was extracted from embryos using the ammonium acetate protocol . DNA was amplified by PCR using a DNA Engine Tetrad 2 thermocycler (MJ Research, Bio-Rad, Hemel Hampstead, Herts, UK). The PCR products were genotyped using an ABI 3730 48-well capillary sequencer (Applied Biosystems, CA, USA). The reaction products were visualized and scored for eight microsatellite loci using GeneMapper v. 3.7 (Applied Biosystems, CA, USA). Paternity was assigned to embryos (n = 166) using Cervus v. 3.0.3 , at greater than 80% confidence. For detailed methods, see the electronic supplementary material.
(f) Data analysis
All data were analysed in R v. 2.15.1 . Exact binomial tests were used to test for differences in the numbers of long and short sperm that reached the OPVL, and the number of embryos sired by the long and short sperm males. Generalized linear mixed models (GLMMs) in the R package lme4 were used to investigate whether male sperm length determined fertilization success. Data were modelled using the function ‘glmer' with a binomial error distribution and logit link function. To determine the relationship between the proportions of long sperm reaching the ovum and the likelihood of the long male siring the embryo, we first modelled embryo paternity as either ‘1' or ‘0' (i.e. sired by the long male or not), with the proportion of long sperm embedded on the OPVL included as a fixed effect. Trio ID (i.e. a single female and pair of males) was used as a random effect.
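The exact binomial test referred to above asks whether an observed split between the two males departs from the 50:50 expected by chance. A standard-library sketch of the two-sided test (the counts below are illustrative, not the study's raw data):

```python
from math import comb

def binom_pmf(k, n, p=0.5):
    """Probability of exactly k successes in n Bernoulli(p) trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def exact_binom_test(k, n, p=0.5):
    """Two-sided exact binomial test: sum the probabilities of all
    outcomes that are no more likely than the observed count k."""
    pk = binom_pmf(k, n, p)
    total = sum(binom_pmf(i, n, p) for i in range(n + 1)
                if binom_pmf(i, n, p) <= pk * (1 + 1e-9))
    return min(1.0, total)

# e.g. 64 of 100 embryos sired by the long-sperm male (illustrative):
p_value = exact_binom_test(64, 100)  # well below 0.05; a 50:50 split is rejected
```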
In order to control for the effects of last male sperm precedence (we repeated the experiment with males copulating in the reverse order), we then carried out a second GLMM that used the second mating male as the focal male in the analysis. The paternity of each embryo was included as either ‘1’ or ‘0’ (i.e. sired by second male or not). Male mating order (short first/short second), female line (long/short) and the number of days between the male swap and the laying of the focal egg were included as fixed effects. Trio ID was fitted as a random effect. We also modelled all interactions between the three fixed effects. Model simplification was carried out using log-likelihood tests and Akaike information criterion (AIC) values to obtain the minimal adequate models.
3. Results
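Model simplification by log-likelihood tests and AIC, as used above, compares nested models with and without a given term. A standard-library sketch of both criteria (the log-likelihood values below are invented for illustration; the chi-square survival function is written in closed form for one dropped parameter):

```python
from math import erfc, sqrt

def aic(log_lik, n_params):
    """Akaike information criterion: 2k - 2*lnL; lower is better."""
    return 2 * n_params - 2 * log_lik

def lrt_p(loglik_full, loglik_reduced, df=1):
    """Likelihood-ratio test p-value for dropping one term.
    For df = 1, the chi-square survival function has the closed
    form P(X > x) = erfc(sqrt(x / 2))."""
    assert df == 1, "closed form implemented for one dropped parameter"
    stat = 2 * (loglik_full - loglik_reduced)
    return erfc(sqrt(stat / 2.0))

# Illustrative: dropping an interaction term costs 2.9 log-likelihood units.
p = lrt_p(loglik_full=-101.2, loglik_reduced=-104.1)  # stat = 5.8, p ≈ 0.016
```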
(a) Sperm length influences fertilization success
Significantly more long sperm (57 ± 2%) reached the ova than short sperm (43 ± 2%) (mean percentage ± s.e.m. of sperm counts; exact binomial test; p < 0.0001). Long sperm males sired a greater proportion of embryos (64 ± 8%) than short sperm males (36 ± 8%) (mean percentage ± s.e.m. of all paternity results; exact binomial test; p < 0.0001; figure 1; see also the electronic supplementary material, table S5). Sperm total length and swimming velocity differed between the competing males (electronic supplementary material, table S1), such that longer sperm swam faster, as in . Our results also show that the proportion of sperm on the OPVL from a given male determines his likelihood of successful fertilization (GLMM; estimate = 7.86 ± 1.42 (mean ± s.e.m.); z = 5.52; p < 0.0001; figure 2).
(b) A lack of last male sperm precedence
Mating order of the males did not determine which male fertilized the egg (long male first: 69 ± 10% (mean percentage ± s.e.m.); long male second: 60 ± 11% (mean percentage ± s.e.m.); proportion test; χ2 = 1.15, p = 0.28). This means that the patterns of paternity observed in this study cannot be explained simply by the passive loss of sperm from the SSTs, where last male precedence would be expected .
(c) Male × female interaction
Male fertilization success was also influenced by interacting effects of male mating order and female selection line (GLMM; estimate = 3.60 ± 1.12; z = 3.20, p = 0.001, figure 3; see the electronic supplementary material, table S6, for model output). The number of days between the male swap and the laying of the focal egg did not affect male fertilization success as a main effect, nor did it interact significantly with any other factors. Long sperm males sired more embryos than short sperm males in three out of the four mating combinations: (i) long male first, short male last, long female, (ii) long male first, short male last, short female, and (iii) short male first, long male last, short female. However, in a single mating combination (short male first, long male last, long female), the proportion of embryos sired by the long and short sperm males were not significantly different from 0.5 (exact binomial test; p = 0.89; see the electronic supplementary material, table S5, for summary data). Taken together, these results demonstrate that in the zebra finch, long sperm males are more successful in a sperm competition scenario than short sperm males.
4. Discussion
We have, to our knowledge, demonstrated experimentally for the first time in a vertebrate species that, under competitive conditions, long sperm tend to reach ova in greater numbers, and consequently fertilize a greater number of ova than short sperm. Our controlled experimental design, which incorporated a powerful pairwise comparison of male fertilization success using alternate mating of males, revealed an apparent lack of last male sperm precedence. This is inconsistent with the passive sperm loss model of last male sperm precedence, which is the widely accepted mechanism of sperm competition in the zebra finch and other birds (e.g. ). The passive sperm loss model predicts that all else being equal (including sperm length and swimming velocity), following sequential inseminations by two different males, a greater proportion of eggs should be fertilized by the second male to copulate. However, in this study, we found that regardless of whether they were first or second to copulate, long sperm males sired significantly more embryos than short sperm males in the majority of pair combinations. Surprisingly, the only scenario in which this was not the case was when the long sperm males copulated second (and were therefore predicted—because of last male sperm precedence—to have had an advantage regardless of sperm length) with females who originated from the long sperm selection line. In this particular instance, the proportion of embryos sired by males from both lines did not differ significantly from 0.5, so it is difficult to draw any conclusions about this particular result in isolation.
The simplest explanation for the observed overall long sperm advantage would be that, because long sperm swim faster (electronic supplementary material, table S1, and ), they reach the SSTs sooner than short sperm. However, this is unlikely to account for the patterns of paternity we observed for the following reason. Assuming that space in the SSTs is limited, and long sperm reach the SSTs sooner, the ‘fertilizing set’ of sperm in the SSTs would therefore consist of a higher proportion of long sperm than short sperm. As a result, more long sperm would reach the ovum, increasing the odds of a long sperm fertilizing the ovum. This result is what we would expect if the two inseminations (of long and short sperm) occurred simultaneously—effectively as a single, mixed insemination (as in ). In our experiment, however, inseminations were sequential, with the first male copulating with the female for 3 days, after which he was replaced with the second male who also copulated for 3 days. Despite this interval between inseminations, mating order did not affect the outcome of sperm competition, because the long sperm males generally sired the majority of embryos. This indicates that there may be differences in the rates of uptake or release of long and short sperm into or from the SSTs, which may influence the relative proportions of long and short sperm available at the time of fertilization.
In a study of domestic fowl (Gallus gallus domesticus) , it was found that high mobility sperm (when mobility is measured as the ability of sperm to penetrate a solution of inert medium (Accudenz), which is positively correlated with sperm swimming velocity ) fertilized more ova overall than low mobility sperm under sperm competition, and that this relative success increased over successive eggs within the clutch. One explanation for these results is that high mobility sperm remain in storage for longer than low mobility sperm, which is consistent with an earlier hypothesis . In this study, since long zebra finch sperm swim faster than short sperm (electronic supplementary material, table S1), this may also explain why long sperm males achieved higher paternity regardless of mating order.
Alternatively, our results may be accounted for if short sperm are simply less likely to reach and/or enter the SSTs. To reach the uterovaginal junction, where the SSTs are located, sperm must swim through the hostile vaginal region of the oviduct, so it is likely that swimming speed determines success during this phase . Again, since we know that short sperm swim more slowly than long sperm, it is possible that fewer short sperm than long sperm (in absolute terms) are able to survive the journey through the vagina to the SSTs. This could result in a greater proportion of long sperm in the ‘fertilizing set’, regardless of mating order. This is a particularly interesting idea, given that our results suggest that the long sperm males may store fewer sperm (although not significantly fewer) prior to copulation (electronic supplementary material, table S1). If we speculate that the stored sperm concentration may be related to the number of sperm used for insemination (note that we could not test this relationship), this suggests that the long sperm fertilization advantage reported in this study may be a conservative estimate.
Overall, long sperm outcompeted short sperm in our study, but sperm length was not the only factor influencing fertilization success. Specifically, the selection line origin of the female also appeared to influence the degree of last male precedence in our sperm competition trials. Assuming an overriding long sperm advantage, as our results indicate, data from matings with females from the short selection line also suggest a small underlying effect of last male precedence. As expected, long sperm males are more successful in both cases, but less so when the short sperm male was second to mate; figure 3). Data from matings with long line females, however, suggest the opposite pattern—an unexpected underlying effect of first male precedence (figure 3). Without further experiments, it is difficult to explain these opposing patterns across female lines, but this result is suggestive of a female-mediated influence on the outcome of sperm competition.
There is increasing evidence that females exert some control over paternity , and that the final outcome of sperm competition may be determined by a combination of both male and female effects [68–71]. In Drosophilia, for example, sperm are stored in the female's seminal receptacle (SR), and the size and shape of her SR influences a male's fertilization success depending on his sperm length. In an elegant experiment, Miller & Pitnick used populations of male and female Drosophila, artificially selected for divergence in sperm length and SR length, respectively. Long sperm males had a pronounced fertilization advantage when copulating with females with long SRs, possibly due to optimal positioning of long sperm within the SR for fertilization. Given the growing evidence of the pivotal roles of females in determining the outcome of sperm competition, particularly in internally fertilizing species, it is perhaps unsurprising that, in addition to the strong effect of sperm length, we also found some evidence for female effects on competitive fertilization success in the zebra finch.
We have experimentally demonstrated that in the zebra finch, long sperm have an advantage in sperm competition compared with short sperm. This long sperm advantage is evident both in the number of sperm that reach the site of fertilization and those that fertilize the ovum. As all other measures of sperm quality, except swimming velocity, were comparable between our long and short sperm males, the competitive success of the long sperm males can clearly be attributed to sperm length. Importantly, however, our results demonstrate that male competitive success is not necessarily the simple outcome of a race between the sperm of rival males. Instead, sperm competitive success appears to be mediated by the female, possibly through as yet unknown mechanisms of differential sperm acceptance or release from sperm storage sites.
This study was approved by the University of Sheffield, UK. All procedures performed conform to the legal requirements for animal research in the UK, and were conducted under a project licence (PPL 40/3481) issued by the Home Office. All animals were humanely killed under Schedule 1 (Animals (Scientific Procedures) Act 1986).
The following datasets relevant to this study are available from the Dryad digital repository (http://www.datadryad.org; doi:10.5061/dryad.dc335): (i) sperm competition experimental data, (ii) sperm quality comparisons, (iii) testes mass data, (iv) seminal glomera mass data, (v) copulation rate data, and (vi) relatedness scores between experimental birds.
This study was supported by grant no. BB/I02185X/1 from the Biotechnology and Biological Sciences Research Council (to J.S.), grant no. 268688 from the European Research Council (to T.R.B. and N.H.) and a research studentship from the Natural Environment Research Council (to C.B.).
Conflict of interests
We have no competing interests.
We thank Lola Brookes, Lynsey Gregory, Gemma Newsome, Andrew Szopa-Comley, Rachel Tucker and Phil Young for technical assistance. We are especially grateful to Gerhard van der Horst for assistance with sperm velocity analyses. This manuscript was greatly improved by comments from two anonymous referees. C.B. coordinated the study, collected the sperm competition data, carried out sperm quality analyses and molecular laboratory work, conducted the analyses and wrote the manuscript; N.H. carried out the sperm viability assay and advised on data analyses; J.S. advised on the paternity assignment; T.R.B. conceived the study. All authors participated in the design of the study, helped draft the manuscript and gave final approval for publication.
- Received July 30, 2014.
- Accepted November 17, 2014.
© 2014 The Authors. Published by the Royal Society under the terms of the Creative Commons Attribution License http://creativecommons.org/licenses/by/4.0/, which permits unrestricted use, provided the original author and source are credited. | <urn:uuid:b9528f7c-aa8d-44e2-901d-9fbcdd7cc12f> | CC-MAIN-2017-17 | http://rspb.royalsocietypublishing.org/content/282/1799/20141897 | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122955.76/warc/CC-MAIN-20170423031202-00485-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.948034 | 5,943 | 3.34375 | 3 |
- Year Published: 1922
- Language: English
- Country of Origin: United States of America
- Source: Anderson, R.G. (1922). Half-Past Seven Stories. New York, NY: G.P. Putnam’s Sons.
- Flesch–Kincaid Level: 6.2
- Word Count: 3,765
Anderson, R. (1922). Story 5: "The Old Woman Who Lived on the Canal". Half-Past Seven Stories (Lit2Go Edition). Retrieved April 28, 2017, from
Anderson, Robert Gordon. "Story 5: "The Old Woman Who Lived on the Canal"." Half-Past Seven Stories. Lit2Go Edition. 1922. Web. <>. April 28, 2017.
Robert Gordon Anderson, "Story 5: "The Old Woman Who Lived on the Canal"," Half-Past Seven Stories, Lit2Go Edition, (1922), accessed April 28, 2017,.
In front of the White House with the Green Blinds by the Side of the Road was the Canal; and beyond the Canal the River. They always flowed along side by side, and Marmaduke thought they were like two brothers. The Canal was the older brother, it was always so sure and steady and ready for work. It flowed steadily and evenly and carried the big canal-boats down to the Sea. The River also flowed towards the Sea, but it wasn’t at all steady, and never quiet. It was indeed like the younger brother, ever ready for play, although, as a matter of fact, it had been there long before the Canal had been even thought of by the men who built it. But thousands of years couldn’t make that River grow old. It was full of frolicsome ripples that gleamed in the sun, and of rapids and waterfalls. Here it would flow swiftly, and there almost stop as if it wanted to fall asleep. And every once in a while it would dart swiftly like small boys or dogs chasing butterflies. Sometimes it would leap over the stones or, at the dam, tumble headlong in sheets of silver.
Little fish and big loved to play in its waters. Of course they swam in the Canal too, but life was lazier there and the fish, like Marmaduke, seemed to prefer the River. There were pickerel and trout and catfish and eels, and in the Spring the great shad would come in from the Sea and journey up to the still cool pools to hatch out their millions of children.
They looked very inviting this morning, the River and the Canal, and Marmaduke decided he would take a stroll. He whistled to Wienerwurst, who was always the best company in the world, and the little dog came leaping and barking and wagging his tail, glad to be alive and about in such lovely weather, and on they went by the side of the Canal.
They went along very slowly, for it is a mistake to walk too fast on a Spring morning—one misses so many things.
Now and then a big fish would leap out of the River, it felt so happy, and in the little harbours under the banks of the Canal the scuttle-bugs went skimming, skimming, like swift little tugboats at play. In the fields on the other side of the road a meadowlark sang; swallows twittered overhead; and in the grass at his feet the dandelions glowed like the round gold shields of a million soldiers. Yes, altogether it was a wonderful day.
Marmaduke picked a great bouquet of the dandelions—for Mother—then he looked up the towpath. He could see the Red Schoolhouse, and, not so far away, the Lock of the Canal. He was very glad it was Saturday. It was far too nice to stay indoors.
Just then he had a great piece of good luck, for a big boat came by, a canal-boat, shaped like a long wooden shoe. It had no sails and no smokestacks, either, so it had no engine to make it go. It was drawn by two mules who walked on shore quite a distance ahead of it. A long thick rope stretched from the collars of the mules to the bow of the boat. A little boy walked behind the mules, yelling to them and now and then poking them with a long pole to make them go faster. My! how they pulled and tugged on that rope! They had to, for it was a pretty big load, that boat. And it had a big hole in it laden with black shiny coal—tons and tons of it!
Just behind the coal was a clothes-line with scores of little skirts and pairs of pants on it, and behind that, a little house with many children running in and out of the door. A round fat rosy woman with great big arms was calling to the children to “take care,” and a man stood at the stern with his hand on the tiller. He had a red shirt on and in his mouth a pipe which Marmaduke could smell a long way off.
The little boy waited until the stern came by so he could see the name of the boat. There it was now, painted in big letters, right under the tiller. He spelled it out, first “Mary,” then “Ellen”—”Mary Ellen—” a pretty name, he thought.
The Man With the Red Shirt and the Pipe, and the Round Fat Rosy Woman With the Big Arms, and all the children waved their hands to Marmaduke and he waved back, then hurried ahead, Wienerwurst trotting alongside, to catch up with the boy who was driving the mules.
“’Llo!” said he to the boy, but the boy paid no attention at all, just “licked up” his mules. But Marmaduke didn’t mind this rudeness. He thought that probably the boy was too busy to be sociable, and he trotted along with the mules and watched their long funny ears go wiggle-waggle when a fly buzzed near them. But they never paused or stopped, no matter what annoyed them, but just tugged and strained in their collars, pulling the long rope that pulled the boat that carried the coal that would make somebody’s fire to cook somebody’s supper some day down by the Sea.
For a long time Marmaduke trotted alongside the boy and the mules, not realizing at all how far he had come. Once or twice he looked back at the “Mary Ellen” and the Man With the Red Shirt and the Pipe, and the little house on the deck. He wished he could go on board and steer the “Mary Ellen,” and play in that little house, it looked so cute. The Round Fat Rosy Woman was coming out of it now with a pan of water which she threw in the Canal; and the little children were running all over the deck, almost tumbling in the water.
After quite a journey they drew near the Lock, a great place in the Canal like a harbour, with two pairs of gates, as high as a house, at each end, to keep the water in the Lock.
Outside one pair of gates the water was low; outside the others, which were near him, the water was high; and Marmaduke knew well what those great gates would do. The pair at the end where the water was high would open and the canal boat would float in the Lock and rest there for a while like a ship in harbour. Then those gates would shut tight, and the man who tended the Lock would open the gates at the end where the water was low. And the water would rush out and go down, down in the Lock, carrying the boat with it until it was on a level with the low part of the Canal. And the boat at last would float out of the harbour of the Lock and away on its journey to the Sea.
But all this hadn’t happened yet. There was much work to be done before all was ready.
Now the boat had stopped in front of the high pair of gates. The Man With the Red Shirt and the Pipe shouted to the boy who drove the mules, without taking the pipe out of his mouth. The great towrope was untied and the mules rested while the man who tended the Lock swung the high gates open with some machinery that creaked in a funny way, and the “Mary Ellen” glided in the harbour of the Lock.
Then the man who tended the Lock went to the gates at the lower end. There were more shouts and those gates opened too. The water rushed out of the Lock into the lower part of the Canal, and down, down, went the boat. And down, down, went the deck and the little house on it, and down, down, went the Man With the Red Shirt and the Pipe, and the Round Fat Rosy Woman With the Great Arms, and all the children. Marmaduke started to count them. He couldn’t have done that before, they ran around too fast. But now they stood still, watching the water fall and their boat as it sank. Yes, there were thirteen—he counted twice to make sure.
Now the boat had sunk so low that Marmaduke was afraid it would disappear forever, with all the children on it. But there was no danger, for when the water in the Lock was even with the water on the lower side of the Canal it stopped falling, and the “Mary Ellen” stopped, too. At least, there was no danger for the children, but there was for Master Marmaduke, he had leaned over so far, watching that boat go down, down, down.
All-of-a-sudden there was a splash. It was certainly to be expected that one of the thirteen children had fallen in, but no!— It—was—Marmaduke!
Down, down, down, he sank in the gurgly brown water. Then he came up, spluttering and choking.
“Help, help!” he cried.
Then under he went again.
But the Round Fat Rosy Woman had seen him.
“Quick, Hiram!” she shouted to her husband in a voice that sounded like a man’s, “there’s a boy fallen overboard!”
“Where?” asked the man at the tiller, still keeping the pipe in his mouth.
She pointed into the brown water.
“Right there—there’s where he went down.”
Perhaps the Man With the Red Shirt and the Pipe was so used to having his children fall into the coal, or the Canal, or something, that he didn’t think it was a serious matter, for he came to the side of the “Mary Ellen” very slowly, just as Marmaduke was coming up for the third time.
And that is a very important time, for, they say, if you go down after that you won’t come up ‘til you’re dead. Whether it was true or not, Marmaduke didn’t know, for he had never been drowned before, and no one who had, had ever come back to tell him about it. Anyway, he wasn’t thinking much, only throwing his arms around in the water, trying vainly to keep afloat.
The Round Fat Rosy Woman grew quite excited, as well she might, and she shouted again to the Man With the Red Shirt and the Pipe:
“Don’t stand there like a wooden Injun in front of a cigar-store. Hustle or the boy’ll drown!”
Then he seemed to wake up, for he ran to the gunwale of the boat, and he jumped over with his shoes and all his clothes on. And, strange to say, he still kept that pipe in his mouth. However, that didn’t matter so very much, for he grabbed Marmaduke by the collar with one hand and swam towards the “Mary Ellen” with the other. The woman threw a rope over the side; he grasped it with his free hand, and the woman drew them up—she certainly was strong—and in the shake of a little jiffy they were standing on board, safe but dripping a thousand little rivers from their clothes on the deck. The man didn’t seem to mind that a bit, but was quite disturbed to find that his pipe had gone out.
“Come, Mother,” said he to the Round Fat Rosy Woman, “get us some dry duds and a match.”
And quick as a wink she hustled them into the little house which they called a cabin, and gave Marmaduke a pair of blue overalls and a little blue jumper which belonged to one of the thirteen children. Of course, she found the right size, with so many to choose from. His own clothes, she hung on the line, with all the little pairs of pants and the skirts, to dry in the breeze.
Then she put the kettle on the cook stove and in another jiffy she was pouring out the tea.
“M—m—m—m,” said Marmaduke. He meant to say,—”Make mine ‘cambric,’ please,” for he knew his mother wouldn’t have wanted him to take regular tea, but his Forty White Horses galloped so he couldn’t make himself heard.
“There, little boy,” said the Round Fat Rosy Woman, “don’t talk. Just wrap yourself in this blanket and drink this down, and you’ll feel better.”
It did taste good even if it was strong, and it warmed him all the way down under the blue jumper, and the Forty White Horses stopped their galloping, and while the men were hitching the mules up again, and the “Mary Ellen” was drifting through the lower pair of gates out of the Lock, he fell fast asleep.
He must have slept for a whole lot of jiffies. When he woke up at last, he looked around, wondering where he could be, the place looked so strange and so different from his room at home. Then he remembered,—he was far from home, in the little cabin of the “Mary Ellen.” It was a cosy place, with all the little beds for the children around the cabin. And these beds were not like the ones he usually slept in. They were little shelves on the wall, two rows of them, one row above the other. It was funny, he thought, to sleep on a shelf, but that was what the thirteen children had to do. He was lying on a shelf himself just then, wrapped in a blanket.
The Round Fat Rosy Woman was bending over the stove. It was a jolly little stove, round and fat and rosy like herself, and it poked its pipe through the house just above his head. In the pot upon it, the potatoes were boiling, boiling away, and the little chips of bacon were curling up in the pan.
Outside, he could see all the little skirts and the little pairs of pants, dancing gaily in the wind. He could hear the children who owned those skirts and pairs of pants running all over the boat. The patter of their feet sounded like raindrops on the deck above him.
They seemed to be forever getting into trouble, those thirteen children, and the Round Fat Rosy Woman was forever running to the door of the little house and shouting to one or the other.
“Take care, Maintop!” she would call to one boy as she pulled him back from falling into the Canal.
“Ho there, Bowsprit!” she would yell to another, as she fished him out of the coal.
They were certainly a great care, those children, and all at once Marmaduke decided he knew who their mother must be. The boat was shaped just like a huge shoe and she surely had so many children she didn’t know what to do. Yes, she must be the Old Woman Who Lived in a Shoe, only the shoe must have grown into a canal boat.
He wondered about the funny names she called them.
“Are those their real names?” he asked, as he lay on his little shelf.
“Yes,” she said, “my husband out there with the pipe was a sailor once, on the deep blue sea. But he had to give it up after he was married, ‘cause he couldn’t take his family on a ship. We had a lot of trouble finding names for the children started to call ‘em Mary and Daniel and such, but the names ran out. So, seeing my husband was so fond of the sea, we decided to call ‘em after the parts of a ship, not a canal boat, but the sailing ships that go out to sea—that is, all but Squall.
“Now that’s Jib there, driving the mules, and that’s Bowsprit—the one all black from the coal. Cutwater’s the girl leaning over the stern; Maintop, the one with the three pigtails; and Mizzen, the towhead playing with your dog.”
“And what are the names of the rest?” Marmaduke asked, thinking all this very interesting.
“Oh!” she replied. “I’ll have to stop and think, there’s so many of them. Now there’s Bul’ark and Gunnel—they’re pretty stout; the twins, Anchor and Chain; Squall, the crybaby; Block, the fattest of all; Topmast, the tallest and thinnest; and Stern, the littlest. He came last, so we named him that, seeing it’s the last part of a ship.
“Now, let me think—have I got ‘em all?” and she counted on her fingers,—”Jib, Bowsprit, Cutwater, Maintop, Mizzen, Bul’ark, Gunnel, Anchor, Chain, Block, Squall, Topmast, and Stern. Yes, that surely makes thirteen, doesn’t it? I’m always proud when I can remember ‘em.”
By this time the potatoes and the bacon and coffee seemed about ready, so she went out on deck, and Marmaduke slid off his little shelf bed and followed her to see where she was going. On deck was a great bar of iron with another beside it. She took up one bar of iron and with it struck the other—twelve times. The blows sounded way out over the Canal and over the fields and far away, like a mighty fire-alarm, and all the children, that is all but Jib, who was driving the mules and would get his dinner later, came running into the cabin.
A great clatter of tin plates and knives and forks there was, and very nice did those potatoes and that bacon taste.
And it didn’t take long for them to finish that meal, either. Then they went out on deck.
The mules were pulling and pulling, and the boat was sailing on and on towards the Sea. They passed by so many places—lots of houses and lots of farms, the Red Schoolhouse and Reddy Toms’ house, and Sammy Soapstone’s, and the funny place where Fatty lived, and the pigs, fat like himself, ran all over the yard.
Fatty and Sammy were playing on the shore at that very moment. He waved to them and they waved back, but they didn’t know they were waving to their old playmate Marmaduke, he was so mixed up with all the children of the woman who lived on the canal boat that looked just like a shoe. How Sammy and Sophy and Fatty would have envied him if they had only known it was he sailing away to the Sea!
But he never arrived there, after all—at least he didn’t on that voyage. For, you see, after he had had a wonderful time, running all over the deck with the thirteen children, and looking down into the big hole where they kept the shiny coal, and exploring the little house on the deck, the Round Fat Rosy Woman and her Husband With the Red Shirt and the Pipe had a talk together.
“We must send him back home,” said she, “or his folks’ll be scared out of their wits.”
The man took a few puffs on his pipe, which always seemed to help him in thinking, then replied,
“We might let him off at the Landing it’s up the towpath a piece. We kin find someone to give him a lift.”
“That’s the best plan,” she agreed, “there’s the Ruralfree’livery now.”
And she pointed to the shore where the horse and wagon of the postman were coming up the road.
“What ho, Hi! Heave to!” she called, raising her hands to her mouth and shouting through them just like a man, “here’s a passenger for you, first class.”
“Mr. Ruralfree’liv’ry” shook his whip at them, then hollered “Whoa!” and stopped the old horse; and Jib hollered “Whoa!” and stopped his mules, right at the Landing.
Then Marmaduke said “Goodbye.” It took him some time, for there was the Man With the Red Shirt and the Pipe; and the Round Fat Rosy Woman; and Jib, Bowsprit, Cutwater, Mizzen, Maintop, Bul’ark, Gunnel, Anchor, Chain, Block, Squall, Topmast, and Stern; the “Mary Ellen”; and the mules, to say “Goodbye” to. Just before he went ashore the Round Fat Rosy Woman gave him his clothes back, for they were all dry by that time, and she stuffed something in his pocket besides. And what do you think it was? A toy anchor and chain that would just fit the “White Swan,” the ship the Toyman had made him.
So he rode home with Mr. Ruralfree’liv’ry and all his sacks of mail. But he kept turning his head for a long while to watch the Man With the Red Shirt and the Pipe, and the Round Fat Rosy Woman, and the Thirteen Children, and all the little pairs of pants that seemed to be waving farewell to him. But soon the “Mary Ellen” drifted out of sight. She was a good boat, the “Mary Ellen.”
He almost felt like crying, for he would have liked to have gone on that voyage to see the rest of the world. But, after all, he had seen a great deal of it, and he had that anchor and chain. | <urn:uuid:72526d86-6bf7-4520-b214-537eeb8ebf6b> | CC-MAIN-2017-17 | http://etc.usf.edu/lit2go/93/half-past-seven-stories/1584/story-5-the-old-woman-who-lived-on-the-canal/ | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122865.36/warc/CC-MAIN-20170423031202-00427-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.983428 | 4,983 | 2.953125 | 3 |
Transcriptional activity of transposable elements in maize
© Vicient; licensee BioMed Central Ltd. 2010
Received: 26 March 2010
Accepted: 25 October 2010
Published: 25 October 2010
Mobile genetic elements represent a high proportion of eukaryote genomes. In maize, 85% of the genome is composed of transposable elements belonging to several families. The first step in the transposable element life cycle is the synthesis of an RNA, but little is known about the regulation of transcription for most maize transposable element families. Maize is the plant species with the largest number of sequenced ESTs (more than two million) and ranks third overall, after only human and mouse. This allowed us to analyze the transcriptional activity of maize transposable elements based on EST databases.
We have investigated the transcriptional activity of 56 families of transposable elements in different maize organs, based on a systematic search of more than two million expressed sequence tags. At least 1.5% of maize ESTs show sequence similarity with transposable elements. According to these data, the expression pattern of each transposable element family is variable, even within the same class of elements. In general, the transcriptional activity of gypsy-like retrotransposons is higher than that of the other classes. The transcriptional activity of several transposable elements is especially high in the shoot apical meristem and in sperm cells. Sequence comparisons between genomic and transcribed sequences suggest that only a few copies are transcriptionally active.
The use of powerful high-throughput sequencing methodologies allowed us to elucidate the extent and character of repetitive element transcription in maize cells. The finding that some families of transposable elements show considerable transcriptional activity in some tissues suggests that either transposition is more frequent than previously expected, or that cells can control transposition at a post-transcriptional level.
Transposable elements (TEs) are DNA sequences that can move from one location to another within the genome or produce copies of themselves. Eukaryotic TEs are divided into two classes, according to whether their transposition intermediate is RNA (class I) or DNA (class II). Each class contains elements that encode the functional products required for transposition (autonomous) and elements that only retain the cis sequences necessary for recognition by the transposition machinery (non-autonomous). Class I elements can be divided into several subclasses: SINEs, LINEs, long terminal repeat (LTR) retrotransposons and TRIMs (Terminal-repeat Retrotransposons In Miniature), which are non-autonomous LTR elements . Class II elements comprise autonomous and non-autonomous transposons, including MITEs (Miniature Inverted-repeat Transposable Elements) .
TEs are major components of most eukaryotic genomes and are particularly abundant in plants: they represent 80% of the maize genome and 90% of the wheat genome . All the classes of TEs found in eukaryotes are also present in plant genomes, but LTR retrotransposons are the most abundant in terms of both copy number and percentage of the genome . 95% of maize TEs are LTR retrotransposons .
TEs play an important role in genome and gene evolution. TE insertion can disrupt genes, mediate chromosome rearrangements, and provide alternative promoters, exons, terminators and splice junctions . Several rice genes contain TE-derived sequences . However, TE influence on gene expression is not restricted to physical modification of chromosomes. TEs were first characterized in maize as gene ''controlling elements'' . Maize "controlling elements" change the expression of some genes through the transcription of non-coding RNAs (ncRNAs) from the transposon promoters, which contribute to the epigenetic regulation of neighbouring genes through mechanisms such as RNAi, transcriptional interference and anti-silencing . The methylation of a SINE element close to the FWA gene, a gene subject to imprinting, allows its proper epigenetic control in Arabidopsis thaliana . TEs also produce short double-stranded RNAs (dsRNAs), which contribute to epigenetic gene regulation. Analyses in maize, tobacco, wheat and rice have shown that transcriptional readout from retrotransposon LTRs may generate sense and antisense transcripts of adjacent genes, altering their expression . Given the large number of retrotransposon copies in plant genomes and their frequent location near genes, the high potential impact of TE transcription on the expression of nearby genes becomes clear [12, 13]. For this reason, TE transcription was believed to be severely repressed in plants. This point of view was supported by the fact that, for a long time, transcriptional activity had been demonstrated for only a few plant TEs, and only under certain precise circumstances such as pathogen infection, physical injury or various abiotic stresses [14, 15]. Inactivity of TEs may be due to the accumulation of mutations that have altered their structure. 
However, although transpositionally inactive due to insertions, deletions, rearrangements or mutations, some copies of the TEs may retain the capacity to direct transcription from their own promoters. In addition to the direct inactivation, cells have also developed mechanism for TE control including silencing by DNA methylation or the small RNA pathways . TEs producing double-stranded or aberrant RNAs are silenced by a post-transcriptional gene silencing mechanism (PTGS) and active TEs are inactivated by transcriptional gene silencing (TGS) .
Despite mutations and cell control, TEs manage to be transcriptionally and transpositionally active. Phylogenetic analysis of TE families in maize revealed recent events of extreme TE proliferation and recent transposition activity has also been demonstrated in rice . The use of sensitive techniques for gene expression analysis like deep transcriptome sequencing provide increasing data on the presence of TE-transcripts in several plants and cell types [20–23].
Maize is the plant species from which more ESTs have been sequenced and the third species only after human and mice . More than two million maize ESTs have been sequenced from many libraries corresponding to several maize organs, developmental stages and conditions. It provides a strong basis for the development of computer-based procedures for the in silico analysis of expression profiles. The present work aimed to produce a body map of TE transcription in maize plants. We show that the fraction of TE-related transcripts varies greatly among TE classes and among organs.
Maize TEs are widely represented in EST databases
The number of ESTs in a large transcript database can be used to estimate relative transcriptional rates. More than two million maize EST sequences are deposited in the NCBI EST database (Zm-dbEST). Such a large amount of sequences provides an opportunity to perform virtual analysis of gene expression in this species. We used a representative sequence of 56 well-characterized TEs (available in the repeats and retrotransposon databases) to query BLASTN against the maize EST database (Zm-dbESTs). 1,5% of the total maize ESTs (25.282 sequences) showed significant sequence homology (e-value < 1E-20) with one of the 56 analysed TE families (Additional file 1).
TE families analysed in this study
LTR retrotransposon (Copia)
LTR retrotransposon (gypsy)
The distribution of the EST matches along the TE sequence was examined (Additional file 2). The EST distribution was variable depending on the TE element family, but a general similar behaviour was observed within classes. For example, in LINEs, most of the ESTs showed similarity with the 3'end. On the other hand, in LTR retrotransposons and TRIMs, most of the ESTs are similar to the LTR regions. These non-random distribution is probably a consequence of the different transcription mechanism characteristic of each TE family.
Profiles of TE transcription in different maize organs and conditions
"Virtual northern" analysis provides an easy and cheap alternative to the study of transcriptional profiling. An advantage of EST profiling compared with other methods is that it does not require prior knowledge of the gene sequences. The accuracy of "virtual northern" analyses will depend on the diversity of biological samples and in the number of sequence tags to provide sufficient depth to identify low-abundant transcripts. An additional problem will be the possibility to distinguish between closely related genes in the basis of partial sequences. EST profiling have been used for the identification of reference genes for quantitative RT-PCR normalization in wheat and barley , expression profiling of storage-protein gene families in wheat , identification of differentially expressed transcripts from sugarcane maturing stem , or the identification of cancer gene-markers in humans . The application of EST profiling to maize TEs is particularly appropriate. First, the analysis of TE families, and not single genes, virtually eliminates the problem of distinguishing between closely related sequences. Second, maize is the third organism in number of ESTs, and, finally, several of the cDNA libraries were constructed from precise, well-defined, dissected organs. The applicability of EST profiling in maize is demonstrated by the expected results for some marker genes (figure 4). One possible problem is the presence of sequences originated from contaminant genomic DNA in the EST collections. This problem is especially serious in the case of TEs because some of them are present in high copy numbers in the genome. Although we cannot totally exclude the presence of some genomic contamination, our results indicate that, if any, it may be considered anecdotic (Figure 2).
Once integrated in the genome, TEs accumulate mutations and become transpositionally inactive. However, even partial or rearranged TE copies may retain their capacity to initiate transcription. Cells have active mechanisms to protect their genome integrity against TE activity including transcriptional silencing and short-interfering RNAs (siRNAs) . Under certain circumstances some TEs can escape this cell control and transcribe and, sometimes, transpose . For example, different TE families are transcribed in response to biotic or abiotic stresses or in cell culture [33–37]. In addition to these "stress response" transcription, increasing data demonstrate that some TEs may have at least low transcriptional activities under normal circumstances in plant life. For example, transcription in leaves has been demonstrated for barley BARE, maize Grande and tomato Rider retrotransposons, and in different sorghum TEs [38–41]. Different EST based analysis, including the data presented here, demonstrate the presence of TE-transcripts in several organs and cell types [20–22, 42, 43]. According to our data, at least 1.5% of the ESTs correspond to TEs. This is an underestimation because only well characterized maize TEs were considered in our analysis and because ESTs libraries only contain data on polyadenylated mRNAs and it is not clear which percentage of TE transcripts contain a polyA track. For example, it has been estimated that only 15% the transcripts of the barley retrotransposon BARE1 are polyadenylated . In any case, the percentage is different according to the organs analysed ranging from 7.7% in SAM and 6.2% in cultured cells, to only 0.2% in female flowers and 0.1% in embryo.
TE transcripts are specially abundant in SAM, cultured cells (Figure 4; ) and sperm cells (Figure 5; [31, 45–47]). A common feature of SAM, pollen and cell cultures is that they contain pluripotent cells. Animal totipotent cells like oocytes and two-cell mouse embryos also exhibit high levels of TE transcription . The acquisition of totipotency depends, among other things, on epigenetic reprogramming and activation of TEs has also been associated with reductions on DNA methylation . For example, DNA in plant cultured cells undergoes hypomethylated and these cells show a transcriptional activation of specific TEs . Tobacco Tnt1 retrotransposon is silenced when introduced in Arabidopsis, but reversion of Tnt1 silencing is obtained when the number of Tnt1 elements is reduced to two by genetic segregation . Microarray expression profiling of Arabidopsis mature pollen revealed that many of the genes involved in siRNA biogenesis and silencing are not expressed in pollen or expressed at low levels . Although epigenetic changes may explain activation of certain TEs in some tissues, not all TE families accumulate equally in SAM or sperm cells, suggesting that the phenomenon requires some family specific mechanisms rather than simply being the result of a genome-wide activation of retrotransposons. One possible explanation may be the presence of cis specific signals in the TE promoter that may enhance their expression in certain cells. For example, pollen promoter specific signals have been detected in the LTR of Grande (personal unpublished data).
The use of powerful high-throughput sequencing methodologies allowed us to elucidate the extent and character of repetitive element transcription in maize cells. Next-generation sequencing of transcriptomes and genomes will enable further studies on TE transcription and their consequences.
Data sources and analysis
Organ/condition maize EST databases used in this analysis
Number of libraries
Number of ESTs
Shoot apical meristem
Sequence alignments were performed using CLUSTALW and phylogenetic trees using neighbour joining method. Graphic representation of phylogenetic trees were prepared using Dendroscope v.2.7.4 .
List of Abbreviations
EST: with sequence similarity to a transposable element
long terminal repeat
long interspersed transposable elements
Miniature Inverted Transposable Element
Terminal-repeat Retrotransposons In Miniature.
I am in agreement with Josep M Casacuberta for critical reading the manuscript.
- Wicker T, Sabot F, Hua-Van A, Bennetzen JL, Capy P, Chalhoub B, Flavell A, Leroy P, Morgante M, Panaud O, Paux E, SanMiguel P, Schulman AH: A unified classification system for eukaryotic transposable elements. Nature Rev Genet. 2007, 8: 973-982. 10.1038/nrg2165.PubMedView ArticleGoogle Scholar
- Feschotte C, Jiang N, Wessler SR: Plant transposable elements: where genetics meets genomics. Nat Genet. 2002, 3: 329-341.View ArticleGoogle Scholar
- Devos KM, Ma J, Pontaroli AC, Pratt LH, Bennetzen JL: Analysis and mapping of randomly chosen bacterial artificial chromosome clones from hexaploid bread wheat. Proc Natl Acad Sci USA. 2005, 102: 19243-19248. 10.1073/pnas.0509473102.PubMed CentralPubMedView ArticleGoogle Scholar
- Vitte C, Bennetzen JL: Analysis of retrotransposon structural diversity uncovers properties and propensities in angiosperm genome evolution. Proc Natl Acad Sci USA. 2006, 103: 17638-17643. 10.1073/pnas.0605618103.PubMed CentralPubMedView ArticleGoogle Scholar
- Haberer G, Young S, Bharti AK, Gundlach H, Raymond C, Fuks G, Butler E, Wing RA, Rounsley S, Birren B, Nusbaum C, Mayer KF, Messing J: Structure and architecture of the maize genome. Plant Physiol. 2005, 139: 1612-1624. 10.1104/pp.105.068718.PubMed CentralPubMedView ArticleGoogle Scholar
- Bennetzen JL: Transposable element contributions to plant genome evolution. Plant Mol Biol. 2000, 42: 251-269. 10.1023/A:1006344508454.PubMedView ArticleGoogle Scholar
- Sakai H, Tanaka T, Itoh T: Birth and death of genes promoted by transposable elements in Oryza sativa. Gene. 2007, 392: 59-63. 10.1016/j.gene.2006.11.010.PubMedView ArticleGoogle Scholar
- McClintock B: The significance of responses of the genome to challenge. Science. 1984, 226: 792-801. 10.1126/science.15739260.PubMedView ArticleGoogle Scholar
- Zaratiegui M, Irvine DV, Martienssen RA: Noncoding RNAs and gene silencing. Cell. 2007, 128: 763-776. 10.1016/j.cell.2007.02.016.PubMedView ArticleGoogle Scholar
- Kinoshita Y, Saze H, Kinoshita T, Miura A, Soppe W, Koornneef M, Kakutani T: Control of FWA gene silencing in Arabidopsis thaliana by SINE-related direct repeats. Plant J. 2007, 49: 38-45. 10.1111/j.1365-313X.2006.02936.x.PubMedView ArticleGoogle Scholar
- Wang GL, Ruan DL, Song WY, Sideris S, Chen L, Pi LY, Zhang S, Zhang Z, Fauquet C, Gaut BS, Whalen MC, Ronald PC: Xa21D encodes a receptor-like molecule with a leucine-rich repeat domain that determines race-specific recognition and is subject to adaptive evolution. Plant Cell. 1998, 10: 765-780. 10.1105/tpc.10.5.765.PubMed CentralPubMedView ArticleGoogle Scholar
- Le Q, Melayah D, Bonnivard E, Petit M, Grandbastien M: Distribution dynamics of the Tnt1 retrotransposon in tobacco. Mol Genet Genom. 2007, 278: 1617-4615.View ArticleGoogle Scholar
- Miyao A, Tanaka K, Murata K, Sawaki H, Takeda S, Abe K, Shinozuka Y, Onosato K, Hirochika H: Target site specificity of the Tos17 retrotransposon shows a preference for insertion within genes and against insertion in retrotransposon rich regions of the genome. Plant Cell. 2003, 15: 1771-1780. 10.1105/tpc.012559.PubMed CentralPubMedView ArticleGoogle Scholar
- Grandbastien MA: Activation of plant retrotransposons under stress conditions. Trends Plant Sci. 1998, 3: 181-187. 10.1016/S1360-1385(98)01232-1.View ArticleGoogle Scholar
- Takeda S, Sugimoto K, Otsuki H, Hirochika H: Transcriptional activation of the tobacco retrotransposon Tto1 by wounding and methyl jasmonate. Plant Mol Biol. 1998, 36: 365-376. 10.1023/A:1005911413528.PubMedView ArticleGoogle Scholar
- Casacuberta JM, Santiago N: Plant LTR-retrotransposons and MITEs: control of transposition and impact on the evolution of plant genes and genomes. Gene. 2003, 311: 1-11. 10.1016/S0378-1119(03)00557-2.PubMedView ArticleGoogle Scholar
- Okamoto H, Hirochika H: Silencing of transposable elements in plants. Trends Plant Sci. 2001, 6: 527-534. 10.1016/S1360-1385(01)02105-7.PubMedView ArticleGoogle Scholar
- Kronmiller BA, Wise RP: TEnest: automated chronological annotation and visualization of nested plant transposable elements. Plant Physiol. 2008, 146: 45-59. 10.1104/pp.107.110353.PubMed CentralPubMedView ArticleGoogle Scholar
- Picault N, Chaparro C, Piegu B, Stenger W, Formey D, Llauro C, Descombin J, Sabot F, Lasserre E, Meynard D, Guiderdoni E, Panaud O: Identification of an active LTR retrotransposon in rice. Plant J. 2009, 58: 754-765. 10.1111/j.1365-313X.2009.03813.x.PubMedView ArticleGoogle Scholar
- Vicient CM, Jääskeläinen MJ, Kalendar R, Schulman AH: Active retrotransposons are a common feature of grass genomes. Plant Physiol. 2001, 125: 1283-1292. 10.1104/pp.125.3.1283.PubMed CentralPubMedView ArticleGoogle Scholar
- de Araujo PG, Rossi M, de Jesus EM, Saccaro NL, Kajihara D, Massa R, de Felix JM, Drummond RD, Falco MC, Chabregas SM, Ulian EC, Menossi M, Van Sluys MA: Transcriptionally active transposable elements in recent hybrid sugarcane. Plant J. 2005, 44: 707-717. 10.1111/j.1365-313X.2005.02579.x.PubMedView ArticleGoogle Scholar
- Lopes FR, Carazzolle MF, Pereira GA, Colombo CA, Carareto CM: Transposable elements in Coffea (Gentianales: Rubiacea) transcripts and their role in the origin of protein diversity in flowering plants. Mol Genet Genom. 2008, 279: 385-401. 10.1007/s00438-008-0319-4.View ArticleGoogle Scholar
- Ohtsu K, Smith MB, Emrich SJ, Borsuk LA, Zhou R, Chen T, Zhang X, Timmermans MC, Beck J, Buckner B, Janick-Buckner D, Nettleton D, Scanlon MJ, Schnable PS: Global gene expression analysis of the shoot apical meristem of maize (Zea mays L.). Plant J. 2008, 52: 391-404. 10.1111/j.1365-313X.2007.03244.x.View ArticleGoogle Scholar
- dbEST: database of "Expressed Sequence Tags". [http://www.ncbi.nlm.nih.gov/dbEST/]
- Meyers BC, Tingey SV, Morgante M: Abundance, distribution, and transcriptional activity of repetitive elements in the maize genome. Genome Res. 2001, 11: 1660-1676. 10.1101/gr.188201.PubMed CentralPubMedView ArticleGoogle Scholar
- Paolacci AR, Tanzarella OA, Porceddu E, Ciaffi M: Identification and validation of reference genes for quantitative RT-PCR normalization in wheat. BMC Mol Biol. 2009, 10: 11-10.1186/1471-2199-10-11.PubMed CentralPubMedView ArticleGoogle Scholar
- Faccioli P, Paolo Ciceri GP, Provero P, Stanca AM, Morcia C, Terzi V: A combined strategy of ''in silico'' transcriptome analysis and web search engine optimization allows an agile identification of reference genes suitable for normalization in gene expression studies. Plant Mol Biol. 2007, 63: 679-688. 10.1007/s11103-006-9116-9.PubMedView ArticleGoogle Scholar
- Kawaura K, Mochida K, Ogihara Y: Expression Profile of Two Storage-Protein Gene Families in Hexaploid Wheat Revealed by Large-Scale Analysis of Expressed Sequence Tags. Plant Physiol. 2005, 139: 1870-1880. 10.1104/pp.105.070722.PubMed CentralPubMedView ArticleGoogle Scholar
- Casu RE, Dimmock CM, Chapman SC, Grof CP, McIntyre CL, Bonnett GD, Manners JM: Identification of differentially expressed transcripts from maturing stem of sugarcane by in silico analysis of stem expressed sequence tags and gene expression profiling. Plant Mol Biol. 2004, 54: 503-517. 10.1023/B:PLAN.0000038255.96128.41.PubMedView ArticleGoogle Scholar
- Reis EM, Ojopi EP, Alberto FL, Rahal P, Tsukumo F, Mancini UM, Guimarães GS, Thompson GM, Camacho C, Miracca E, Carvalho AL, Machado AA, Paquola AC, Cerutti JM, da Silva AM, Pereira GG, Valentini SR, Nagai MA, Kowalski LP, Verjovski-Almeida S, Tajara EH, Dias-Neto E, Bengtson MH, Canevari RA, Carazzolle MF, Colin C, Costa FF, Costa MC, Estécio MR, Esteves LI, Federico MH, Guimarães PE, Hackel C, Kimura ET, Leoni SG, Maciel RM, Maistro S, Mangone FR, Massirer KB, Matsuo SE, Nobrega FG, Nóbrega MP, Nunes DN, Nunes F, Pandolfi JR, Pardini MI, Pasini FS, Peres T, Rainho CA, dos Reis PP, Rodrigus-Lisoni FC, Rogatto SR, dos Santos A, dos Santos PC, Sogayar MC, Zanelli CF: Large-scale transcriptome analyses reveal new genetic marker candidates of head, neck, and thyroid cancer. Cancer Res. 2005, 65: 1693-1699. 10.1158/0008-5472.CAN-04-3506.PubMedView ArticleGoogle Scholar
- Tanurdzic M, Vaughn MW, Jiang H, Lee TJ, Slotkin RK, Sosinski B, Thompson WF, Doerge RW, Martienssen RA: Epigenomic consequences of immortalized plant cell suspension culture. PLoS Biol. 2008, 6: 2880-2895. 10.1371/journal.pbio.0060302.PubMedView ArticleGoogle Scholar
- Kasschau KD, Fahlgren N, Chapman EJ, Sullivan CM, Cumbie JS, Givan SA, Carrington JC: Genome-wide profiling and analysis of Arabidopsis siRNAs. PLoS Biol. 2007, 5: e57-10.1371/journal.pbio.0050057.PubMed CentralPubMedView ArticleGoogle Scholar
- Pouteau S, Huttner E, Grandbastien MA, Caboche M: Specific expression of the tobacco Tnt1 retrotransposon in protoplasts. EMBO J. 1991, 10: 1911-1918.PubMed CentralPubMedGoogle Scholar
- Hirochika H: Activation of tobacco retrotransposons during tissue culture. EMBO J. 1993, 12: 2521-2528.PubMed CentralPubMedGoogle Scholar
- Mhiri C, Morel JB, Vernhettes S, Casacuberta JM, Lucas H, Grandbastien MA: The promoter of the tobacco Tnt1 retrotransposon is induced by wounding and by abiotic stress. Plant Mol Biol. 1997, 33: 257-266. 10.1023/A:1005727132202.PubMedView ArticleGoogle Scholar
- Ramallo E, Kalendar R, Schulman AH, Martínez-Izquierdo JA: Reme1, a Copia retrotransposon in melon, is transcriptionally induced by UV light. Plant Mol Biol. 2008, 66: 137-150. 10.1007/s11103-007-9258-4.PubMedView ArticleGoogle Scholar
- Ueki N, Nishii I: Idaten is a new cold-inducible transposon of Volvox carteri that can be used for tagging developmentally important genes. Genetics. 2008, 180: 1343-1353. 10.1534/genetics.108.094672.PubMed CentralPubMedView ArticleGoogle Scholar
- Suoniemi A, Narvanto A, Schulman AH: The BARE-1 retrotransposon is transcribed in barley from an LTR promoter active in transient assays. Plant Mol Biol. 1996, 31: 295-306. 10.1007/BF00021791.PubMedView ArticleGoogle Scholar
- Gómez E, Schulman AH, Martínez-Izquierdo JA, Vicient CM: Integrase diversity and transcription of the maize retrotransposon Grande. Genome. 2006, 49: 558-562. 10.1139/G05-129.PubMedView ArticleGoogle Scholar
- Cheng X, Zhang D, Cheng Z, Keller B, Ling HQ: A new family of Ty1-copia-like retrotransposons originated in the tomato genome by a recent horizontal transfer event. Genetics. 2009, 181: 1183-1193. 10.1534/genetics.108.099150.PubMed CentralPubMedView ArticleGoogle Scholar
- Muthukumar B, Bennetzen JL: Isolation and characterization of genomic and transcribed retrotransposon sequences from sorghum. Mol Genet Genom. 2004, 271: 308-316. 10.1007/s00438-004-0980-1.View ArticleGoogle Scholar
- Vicient CM, Schulman AH: Copia-like retrotransposons in the rice genome: few and assorted. Genome lett. 2002, 1: 35-47. 10.1166/gl.2002.002.View ArticleGoogle Scholar
- Kashkush K, Feldman M, Levy AA: Transcriptional activation of retrotransposons alters the expression of adjacent genes in wheat. Nat Genet. 2003, 33: 102-106. 10.1038/ng1063.PubMedView ArticleGoogle Scholar
- Chang W, Schulman AH: BARE retrotransposons produce multiple groups of rarely polyadenylated transcripts from two differentially regulated promoters. Plant J. 2008, 56: 40-50. 10.1111/j.1365-313X.2008.03572.x.PubMedView ArticleGoogle Scholar
- Skibbe DS, Fernandes JF, Medzihradszky KF, Burlingame AL, Walbot V: Mutator transposon activity reprograms the transcriptomes and proteomes of developing maize anthers. Plant J. 2009, 59: 622-633. 10.1111/j.1365-313X.2009.03901.x.PubMedView ArticleGoogle Scholar
- Nobuta K, Venu RC, Lu C, Belo A, Vemaraju K, Kulkarni K, Wang W, Pillay M, Green PJ, Wang GL, Meyers BC: An expression atlas of rice mRNAs and small RNAs. Nature Biotech. 2007, 25: 473-477. 10.1038/nbt1291.View ArticleGoogle Scholar
- Engel ML, Chaboud A, Dumas C, McCormick S: Sperm cells of Zea mays have a complex complement of mRNAs. Plant J. 2003, 34: 697-707. 10.1046/j.1365-313X.2003.01761.x.PubMedView ArticleGoogle Scholar
- Peaston AE, Evsikov AV, Graber JH, de Vries WN, Holbrook AE, Solter D, Knowles BB: Retrotransposons regulate host genes in mouse oocytes and preimplantation embryos. Dev Cell. 2004, 7: 597-606. 10.1016/j.devcel.2004.09.004.PubMedView ArticleGoogle Scholar
- Roelen BA, Lopes SM: Of stem cells and gametes: similarities and differences. Curr Med Chem. 2008, 15: 1249-1256. 10.2174/092986708784534992.PubMedView ArticleGoogle Scholar
- Lippman Z, May B, Yordan C, Singer T, Martienssen R: Distinct mechanisms determine transposon inheritance and methylation via small interfering RNA and histone modification. PLoS Biol. 2003, 1: e67-10.1371/journal.pbio.0000067.PubMed CentralPubMedView ArticleGoogle Scholar
- Pérez-Hormaeche J, Potet F, Beauclair L, Le Masson I, Courtial B, Bouché N, Lucas H: Invasion of the Arabidopsis genome by the tobacco retrotransposon Tnt1 is controlled by reversible transcriptional gene silencing. Plant Physiol. 2008, 147: 1264-1278. 10.1104/pp.108.117846.PubMed CentralPubMedView ArticleGoogle Scholar
- Pina C, Pinto F, Feijó JA, Becker JD: Gene family analysis of the Arabidopsis pollen transcriptome reveals biological implications for cell growth, division control, and gene expression regulation. Plant Physiol. 2005, 138: 744-756. 10.1104/pp.104.057935.PubMed CentralPubMedView ArticleGoogle Scholar
- TIGR Plant Repeat Databases. [http://www.tigr.org/tdb/e2k1/plant.repeats/]
- Retrotransposon Database. [http://data.genomics.purdue.edu/~pmiguel/projects/retros/]
- NCBI blast. [http://blast.ncbi.nlm.nih.gov/Blast.cgi]
- Huson DH, Richter DC, Rausch C, Dezulian T, Franz M, Rupp R: Dendroscope: An interactive viewer for large phylogenetic trees. BMC Bioinf. 22: 460
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. | <urn:uuid:49994cbc-dc9e-468d-a35b-4cee9764e6f2> | CC-MAIN-2017-17 | http://bmcgenomics.biomedcentral.com/articles/10.1186/1471-2164-11-601 | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917119361.6/warc/CC-MAIN-20170423031159-00188-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.750685 | 7,292 | 2.65625 | 3 |
The William Watson Manuscript (1535)
Nothing in the William Watson Manuscript – which consists of six parchment sheets sewn together to form a roll some 12 feet long – gives any indication as to how or why it was written (around 1535). Its first transcription was made in 1687 by a certain Edward Thompson, of whom we know nothing.
Its dating was derived by comparing its contents with contemporary documents duly cataloged among the Old Charges (1) of the 16th century, within the Plot Family (1) to which it is connected.
What is known is that the manuscript was discovered in an old iron chest at Newcastle-upon-Tyne, England; it was then received as a gift by a Mr. Hamilton, eventually purchased in 1890 by William Watson, the librarian of the Provincial Grand Lodge of West Yorkshire, and published as a facsimile in 1891 (2).
The context - In 1535, Charles V, Holy Roman Emperor (1500-1558), seized Tunis; Jacques Cartier (1491-1557) undertook a second journey to Canada in his search for a northwest waterway from the Atlantic to the Pacific Ocean; and Pope Paul III (1468-1549) threatened Henry VIII (1491-1547) with excommunication.
- In 1687, the year of its transcription, James II of England (1633-1701) granted religious freedoms to the Catholics of the Kingdom, and Isaac Newton (1642-1727) published his Philosophiæ Naturalis Principia Mathematica on motion and gravitation.
- It was also the year of death of Jean-Baptiste Lully (1632-1687), the composer, and of René-Robert Cavelier de La Salle (1643-1687), the explorer. The 16th century had seen a revival of arts and letters on both sides of the Channel, as well as the development of Protestantism in Germany.
The 17th century was one of religious wars, of the strengthening of royal power in France, and of the unification of the different countries of the United Kingdom.
Transcription from Old English.
In the Lord is all our Trust.
Thanks to our glorious God, Father and maker of Heaven and Earth (3), and of all things in them, because he has granted of his glorious Divinity to make so many things of such diverse benefits for mankind. For he made all worldly things to be obedient and subject to man; and all things that are edible and wholesome, he ordered to be for man’s food and sustenance; and also he has given man knowledge and understanding of diverse Sciences and Crafts by which we can travel in this world to earn our livings.
To make things that are to God’s pleasure and also for our comfort and profit, if I were to list all of them, would take too long to tell or to write, so I will leave off. But I will show and tell you part of them, and how the Science of Geometry first began, and who were its founders, as well as those of the other Crafts, as is told in the Bible, and in other histories also; how and in what manner this worthy Science of Geometry first began, I will tell you, as I said before.
You should understand that there are seven liberal Sciences, from which seven Sciences all the Sciences and Crafts in the world were first discovered, and especially Geometry, for it is the source of all the others, that are called the seven Sciences.
• A.H. [Ad Honorem, to its honor], the first is called the foundation of Sciences; its name is Grammar; it teaches a man to write and speak correctly.
• The second is Rhetoric; it teaches a man to speak fluently and elegantly.
• The third is Logic, which teaches a man to discern the true from the false, and commonly is called the art of Sophistry.
• The fourth is called Arithmetic, which teaches a man the Craft of number, to calculate and make accounts of all manner of things.
• The fifth is Geometry, which teaches a man boundaries and measures and computations or weights of all manner of Crafts.
• The sixth is Music, which teaches a man the Craft of songs, and organ, trumpet and harp, and all others relating to them.
• The seventh is Astronomy, which teaches a man to know the hours of the sun and the moon and all the other planets and stars of heaven.
Our intent is principally to treat of the first foundation of the worthy Science of Geometry, and of who its founders were. As I said before, there are seven liberal Sciences, that is to say seven Sciences or Crafts that are free in themselves, which seven Sciences all depend on one, and that is Geometry. Geometry is as much as to say the measure of the earth.
“Et so a qr qr et teru lati e et metron mensure, vn Geometrie, i mesure terre nos tra” (4), which is to say in English that Geometry is, as I said, from “geo”, meaning “earth” in Greek, and “metron”, that is to say “measure”, and that is how the word Geometry is compounded, and is the measure of the earth.
Don’t be surprised that I said all the Sciences depend on the Science of Geometry, for there is nothing made or crafted by man’s hand unless it is made by Geometry, and caused by it, because if a man works with his hands he works with some kind of tool.
There is no instrument in this world that does not come from the Earth, and to Earth it will return again. And there is no instrument, that is to say, a tool, to work with, that does not have some proportion, either more or less, and proportion is measure and the tool is made from the earth, and therefore every instrument is Earth, and Geometry is the measure of the Earth.
Therefore I can say that all men live by Geometry, for all men in the world live by the labor of their hands. Many more proofs I could tell you that Geometry is the Science that reasonable men live by, but I leave off at this time, for the length of writing, and now I will proceed further on my topic.
You should understand that among all the Crafts in the world, Masonry is the most notable, and the greatest part of this Science of Geometry, as is noted and said in the histories and in the Bible, and in the Master of History and in the Polychronicon, a proven story, and also in the Doctor of History, Bede’s De Imagine Mundi, and in Isidore’s Etymologiae, Methodius, Bishop and Martyr, and others.
I suppose it may well be said, for it was founded, as it is noted in the Bible in the first book of Genesis, in Adam’s male line, in the seventh generation, before Noah’s flood, there was a man called Lamech, who had two wives, one named Adala and the other Zillah. By the first wife, called Adala, he fathered two sons, one named Jaball [Jabal] and the other named Juball [Jubal]. The older son, Jabal, was the first who ever found Geometry, Intentores atatar pastor (5), that is to say, the father of men [who live in tents].
He became the Master Mason and governor of the works when he built the city of Henoch [Enoch], which was the first city that was ever made, and it was made by Cain, Adam’s son, who gave it to his own son Enock, and gave the city the name of his own son Enoch and called it the city of Enoch, and now it is called Ephrame [Ephraim], and that is where the Science of Geometry and Masonry was first carried out and contrived as a Science and as a Craft; and so we may say that this was the first origin and foundation of all Sciences and Crafts.
Also this man Jabal was called Pastor Pastoru [Shepherd of Shepherds], and as the Master of Histories says, and also Bede, De Imagine Mundi, the Polychronicon, and many others say, he was the first who ever partitioned land so that every man might know his own ground and labor on it for himself. He divided flocks of sheep so that every man might know his own sheep, and so we may say he was the first founder of that Science.
His brother Jubal was the first founder of Music, as Pythagoras says in the Polychronicon, and Isidore says the same thing in his Etymologies; in the sixth book he says that he was the first founder of music, of song and of organ and of trumpet, and he discovered that Science by the sound and pounding of his brother's hammers, and that brother was Tubal-Cain.
Truly, as the Bible says in the same chapter of Genesis, Lamech fathered on his brother's wife Zillah a son and a daughter, whose names were Tuball-Caine [Tubal-Cain], who was his son, and his daughter's name was Madmah [Naamah]. As the Polychronicon says, some men said that she was another man's wife; whether this is so or not we do not affirm, but this Tubal-Cain was the first founder of the smith's Craft and the other Crafts of metal, that is to say of iron and of brass, of gold and of silver, as some learned persons affirm, and his sister Naamah was the first founder of the Craft of weaving, because before her time there was no cloth woven, but men spun yarn and knit and made themselves such clothing as they could; but this woman Naamah founded the Craft of weaving, and therefore it was called women's Craft.
And these brothers had knowledge beforehand that God would take vengeance for sin either by fire or by water, and they had great concern what they could do to save the Sciences that they had discovered, and took council together, and by all their knowledge they said that there were two kinds of stones of such virtue that one would never burn, and that stone is called marble, and another stone that would not sink in water, and that stone is called Laterus.
So they devised to write all the Sciences that they had found on these two stones, so if God should take vengeance by fire, then the Marble stone would not burn, and if God sent vengeance by water, then the other would not drown. So they provided that their elder brother Jabal would make two pillars from the two stones, that is marble and Laterus, and that he would write on the two pillars all the Sciences and Crafts that they all had discovered, and he did so.
Therefore we may say that he was the most cunning in Sciences, because he began and performed the last end before Noah’s flood, knowing of the vengeance that God would send, whether it should be by fire or by water, the brothers did not know.
By prophecy they knew that God would do one of them, and so they wrote their Sciences on the said stones. Some men affirm that they wrote all their seven Sciences on the said stones, and as they had in their mind that vengeance would come, so it was that God sent it by water, because there came such a flood that all the world was drowned, and all men were dead in it except eight persons, who were Noah and his wife and his three sons and their wives, from which three sons all the world came. Their names were in this manner, Sem, Cham and Japhett. This flood was called Noah’s flood because he and his children were saved and no more.
Many years after, as the Chronicle tells, these two pillars were found; and the Polychronicon says that a great clerk, whom men called Pythagoras, found one, and Hermes the Philosopher found the other, and they taught the Sciences that they found written on them.
Every chronicle and history, and many other writers, and especially the Bible, bear witness to the making of the Tower of Babylon [Babel]. It is written in the Bible, Genesis, chapter ten, how Cham, Noah’s son [fathered] Nimrod, and he became a mighty man upon the earth, and he was a strong man, like a giant, and he was a great King. In the beginning of his reign and kingdom he was the true King of Babylon and Amad, Calneth and the land of Shinar, and these same men, brothers, began the Tower of Babylon [Babel] (6).
He taught to his workmen the Craft of Masonry and had with him many Masons, more than 40,000, and he loved them and cherished them well. As it is written in the Polychronicon and in the Master of Histories and also in other histories, and a part of this is related in the Bible, in the said tenth chapter, where it says that Assur [Asshur], who was a close relative of Nimrod, went out of the land of Shinar and built the city of Nineveh and other cities also, and it says:
“Ye illa taira in defemare egressus est Asshur et edificavit Ninevi et implecens anitates et calath et Rifio qr is Ninivehet calath he est civitas Magr” (7).
It would be reasonable to declare openly how and in what manner the Charges of the Mason’s Craft were first found, and who gave it the name of Masonry. You should well know that it is plainly stated in the Polychronicon and in Methodius Bishop and Martyr, that Asshur, who was a worthy Lord, sent to Nimrod the King, asking him to send masons and workmen of the Craft that might help him to make his city, which he intended to make and finish. Nimrod sent him 3,000 masons. When he was sending them forth he called them before him and said:
“You must go to my cousin Asshur to help him to build a city, but see that you are well governed with such a charge that it will be profitable both for you and me. Truly do you labor and Craft and take a reasonable amount for your efforts, whatever you deserve. And I would have it that you love each other as if you were brothers, and hold together truly. He that has the most ability should teach it to his brother or fellow.
“See that you govern yourselves well towards your Lord and among yourselves, so that I may have honor and thanks for sending you and teaching you the Craft.”
They received the charge from the King, who was their Lord and Master, and went forth to Asshur and built the city of Nineveh in the country of Plateas, and also other cities, that were called Calath and Resen, which is a great city between Calath and Nineveh. In this manner the Craft of Masonry was first instituted and charged as a Science and Craft.
It is reasonable that we would show you how the elders who were before our time had the charges written in Latin and in French, and now we should tell you how Euclid came to Geometry. It is noted in the Bible and in other histories. In the twelfth chapter of Genesis, it tells how Abraham came into the land of Canaan, and the Lord appeared to him and said: “I shall give this land to you and to your seed.”
But there fell a great hunger in the land and Abraham took Sarah his wife with him and went into Egypt on a journey. While the hunger lasted he would live there. Abraham, as the story says, was a wise man and a great scholar. He knew all the Seven Sciences, and taught the Egyptians the Science of Grammar (8).
This worthy clerk Euclidus [Euclid] was his pupil, and learned Masonry from him, and he was the first to give it the name of Geometry. But it is said by Isidore in the Etymologiae in the first book, Isidore in his Etymologiae in the fifth book, first chapter, says Euclid was one of the first founders of Geometry and give it its name. For in his time there was a water in the land of Egypt that was called the Nile, and it flowed so far into the land that men could not dwell therein. Euclid taught them to make great walls and ditches to hold out the water.
By Geometry he measured out the land and parted it into various parts, and made every man close off his own part with walls and ditches.
Then it became a productive country with all manner of fruit and young people, both men and women. There were so many young people that the country could not live well. The lords of the country came together and held a council how they could help their children who did not have a suitable livelihood and were not able to find them for their children, for they had many among those who were in council.
There was this worthy clerk Euclid, and when he perceived they all were not able to resolve this matter, he said to them
“If you will give me your sons in governance, I will teach them such a Science that they shall thereby live like gentlemen, under the condition that you will be sworn to me to perform whatever I tell you.”
So it was reasonable that every man would do the things that were profitable to themselves and so they took their sons to Euclid to govern them at his own will. He taught them the Craft of Masonry and gave it the name of “Geometry” because of the partition of the ground that he had taught to the people in making their walls and ditches, as said before, to close out the water. Isidore says in his Etymologiae that he called the Craft Geometry.
This worthy clerk gave it a name and taught it to the sons of the lords of the land that he had in his teaching.
He gave them charge that they should call each other fellow, and nothing else, because they were all of one Craft and of gentle birth, sons of lords. Also, he that was most able should be Governor of the Work and should be called Master. There were also other charges that were written in the Book of Charges. And so they worked with the lords of that land and made cities and towns, castles and temples and lords’ palaces, and did live honestly and truly by the said Craft.
When the Children of Israel lived in Egypt they learned the Craft of Masonry. Afterwards, when they were driven out of Egypt they came into the land of Behest, which is now called Jerusalem, and occupied it there, and the Charges were held and kept.
At the making of Solomon’s Temple, which King David began, King David loved Masons well, and he gave them charges, nearly as they are now. The making of Solomon’s Temple, as it is said in the Third Book (9) of Kings, the fifth chapter (Regnum i terti regun capitul quinto), that Solomon had 4,000 Masons at his work, and the son of the King of Tyre was his Master Mason. In other Chronicles it is said in old books of Masonry that Solomon confirmed the charges that his father David had given Masons, and Solomon himself taught them their manners, very little differing from the manners that are now used.
From there this worthy Science was brought into France by the grace of God, and into many other worthy regions. In France there was a worthy Knight who was named Carolus Secundus, that is to say Charles the Second. This Charles was elected King of France by the grace of God and by his lineage, and yet some men will say that he was elected by fortune only, which is false and untrue, as plainly appears by the Chronicle, for he was of the King’s blood royal.
This same King Charles was a Mason before he was King, and afterwards when he was King he loved Masons well and cherished them and gave them charges and manners of his devising, some of which are at this time used in France, and ordered that they should have reasonable pay, and also that they should assemble once a year and discuss together about such things as were amiss, and the same would be received by Masters and fellows.
Every honest Mason or any other worthy workman that has any love for the Craft of Masonry, and would like to know how the Craft of Masonry first came into England and by whom it was established and confirmed, it is noted and written in histories of England and in old charges of St. Alban’s time, and King Ethelstone [Athelstan] declared, that Amphabell came out of France into England, and brought St. Alban into Christendom and made him a Christian man. He brought with him the charges of Masons as they existed in France and in other lands.
At that time the King of the land, who was a pagan, lived where the city of St. Alban is now, and he had many Masons working on the town walls. At that time St. Alban was the King’s steward, pay master, and governor of the King’s work and loved Masons well and cherished them well and gave them good pay, for a Mason then received but a penny a day (10) and meat and drink.
St. Alban got from the King that every Mason should have thirty pennies a week and four pence for their meal expenses (11), and he gave them charges and manners as St. Amphabell had taught him, and they differ only a little from the charges that are now used at this time.
These charges and manners were used for many years, and afterwards they were almost lost until the time of King Athelstan. King Athelstan and his son Edwin loved well Geometry, and he applied himself busily in learning that Science, and also he desired to learn the practice of it, so he called to him the best Masons that were in the realm, because he knew well that they had the practice of Geometry the best of any Craft in the realm. He learned Masonry from them, and cherished and loved them well, and he took upon himself the charges and learned the manners.
Afterward, for the love that he had for the Craft, and for the good grounding that was found in it, he purchased a free charter from the King his father, that they should have freedom to have correction within themselves, and that they could meet together to correct such things as were amiss within themselves.
They made a great congregation of Masons to assemble together at York, where he was himself, and called the old Masons of the realm to that congregation, and commanded them to bring to him all the writings of the old books of the Craft that they had, out of which books they prepared the charges by the devising of the wisest Masons that there were, and commanded that these charges might be kept and held.
He ordered that such a congregation (12) should be called an Assembly. He ordered good pay for them, that they might live honestly, which charges I will declare hereafter, and so the Craft of Masonry was there established and considered.
In England, right worshipful Masters and fellows at various assemblies and congregations, with the consent of the lords of this realm, have ordained and made charges by their best judgment that all men who shall be made and allowed to become Masons, must be sworn upon a book to keep the same in all that they may do, to the utmost of their power, and also they have ordained that when any fellow shall be received and allowed that these charges should be read to him, and will take these charges.
These charges have been seen and reviewed by our late Sovereign Lord, King Henry the Sixth (13), and the Lords of the Honorable Council, and they have approved them and said they were right, good and reasonable to be held, and these charges have been drawn and gathered out of various ancient books, both of the old Law and new Law, as they were confirmed and made in Egypt by the King, and by the great clerk Euclid, and at the making of Solomon’s Temple by King David and by Solomon his son, and in France by Charles, King of France, and in England by St. Alban who was the steward to the King at that time, and afterward by King Athelstan who was King of England, and by his son Edwin who was King after his father, as is told in many and various histories and stories and chapters.
The charges follow, particularly and severally:
The first and principal charge is that you shall be true man or true men to God and the Holy Churcho(14), and that you shall use neither error nor heresy, by your own understanding or discretion or wise men’s teaching.
2. - That you be true liege men to the King, without treason or falsehood, and if you know of either treason or treachery, look to amend it if you can, or else privately warn the King or his rulers or his deputies and officers.
3. - That you shall be true to one another, that is to say to every Master and fellow of the Science and Craft of Masonry who have been accepted as Masons, and do to them as you would that they should do to you.
4. - That every Mason keep true counsel both of Lodge and Chamber (15), and all other counsels that ought to be kept because of Masonry.
5. - That no Mason be a thief or [support] thieves as far as he knows.
6. - That he shall be true to his Lord and Master that he serves, and truly look to his Master’s profit and advantage.
7. - You shall call Masons your fellows or your brethren, and by no other foul name, nor shall you take your fellow’s wife in villainy, nor desire his daughter or servant.
8.o-oAlso, that you pay truly for your meat and your drink wherever you go to eat, also you shall do no misconduct in the house, whereby the Craft might be criticized.
These are the general charges that every Mason should hold, both Masters and fellows. Now here are other singular charges for Masters and fellows:
1. - That no Master or fellow take upon him any Lord’s work, nor any other man’s unless he knows himself able and capable enough to perform it, so that the Craft will not be criticized or reproached, so that the Lord may be well and truly served.
2. - That no Master take any work unless he takes it reasonably so that the Lord may be well and truly served for his own good, and that the Master may live honestly and pay his fellows truly their pay, as the manner of the Craft asks.
3. - That No Master or fellow shall supplant another from their work, that is to say, if has taken a job, or is acting as Master for any Lord’s work, or any other, you shall not replace him unless he is unable to have the ability to complete the job.
4. - That no Master or fellow shall take any apprentice as his apprentice unless for seven years, and that apprentice be able of birth and of living as he ought to be.
5. - That no Mason nor fellow take anyone to be made a Mason without the consent of at least five or six of his fellows, and he that shall be made a Mason is even within all sides, that is to say that he is free born and of good family and not a bondman, and that he have his limbs right, as a man ought to have.
6. - That no Master or fellow shall take any Lord’s work as task work that has been customarily done as journey work.
7. - That every one give pay to his fellow only as he deserves, so that the worthy Lord of the work may not be deceived by false workmen.
8. - That no fellow slander another behind his back to make him lose his good name or his worldly goods.
9. - That no fellow within Lodge or without give evil answer to another, ungodly, without reasonable cause.
10. - That every Mason shall do reverence to his betters and do him honor.
11. - That no Mason shall gamble, or play at dice, nor at any other unlawful games, so that the Craft might be reproached.
12. - That no Mason engage in sexual immorality to bring the Craft into disrepute.
13. - That no fellow go into town in the night time without a fellow to bear witness for him that he has been in honest company, for if he does so there a Lodge of fellows will punish that sin.
14. - That every Mason and fellow will come to the assembly if it is within five miles of him and if has any notice to come there, at the judgment of Masons and fellows.
15. - That every Master and fellow, if they have done wrong, to stand at the judgment of Masters and fellows and make compensation if they can; and if they can not compensate them, they go to Common Law.
16. - That no Master make any mold or square, or use a ruler to lay.
17. - That no Master or fellow shall set a layer within the Lodge, nor without, to show any molded stones with any mould of his own making.
18. - That every Master shall receive and cherish strange Masons when they come out of the country, and set them to work as the manner is, that is to say, if they have molded stones in place they shall set them to work for at least two weeks, and give him his pay, and if he doesn’t have stones for him to work, then he shall assist him to the next Lodge.
19. - That you shall truly serve the Lord for your pay, and justly and truly complete your work, be it task or journey work, so that you may have your pay truly, according to how you ought to have it.
20. - That every Mason work truly on the working day, so that he may receive his pay, and serve it so he may live honestly upon the holy day, and that you and every Mason receive your pay from your pay master, and that you shall keep correct account of your time of work and rest as it is ordained by the Masters Council.
21. - That if any fellows shall be at discord or dissension, you shall truly treat between them to make accord and agreement and show no favor on the part of either, but justly and truly for both the parties, and that it be done at such a time that the Lord’s work not be delayed.
22. - Also, if you act as Warden or have any authority under the Master where you are serving, you shall be true to your Master while you are with him, and be a true mediator between the Master and his fellows to the utmost of your power.
23. - Also, if you act as Steward, either of the Lodge Chamber or of Common House needs, you shall keep a true account of your fellows’ goods, how they are dispensed, when they will take account, and also if you are more capable than your fellow who stands by you in his work and you see him in danger of spoiling his stone and he wants advice from you, you shall inform and teach him honestly so that the Lord’s work is not spoiled.
These charges that we have declared and recorded to you, you shall well and truly keep to your power, so help you God and Holy Doom and by the holy contents of this book.
Anno Domini 1687.
1. - Set in legendary stories, regulations and obligations, the Old Charges, over a hundred in number, are usually divided into distinct families; the most important one (besides the Regius Ms placed outside families) belong to the Grand Lodge family (53 documents) Sloane family (21 documents) and Tew family (nine documents), The Plot family has got six documents, the Cooke family only three.
2. - The William Watson Manuscript, which we reproduce from the original text, has nothing to do with the Watson Manuscript presented by some English websites: it can be, however, compared with the Grand Lodge Manuscript No.1 (1583) and the York No. 1 (1600).
3. - The text of the William Watson Manuscript somewhat resembles the Cooke Manuscript, which dates from 1410; which could lead one to believe that one is directly derived from the other, an hypothesis which is however rejected by many Masonic authors such as G. W. Speth, D.C. and Howard W. Begemann, for whom the two documents would, rather, be the heirs of a third one ... now disappeared.
4. - This is a very corrupt rendition of the Latin phrase in the Cooke Manuscript: “Et sic dicitur a geo graece quod est terra latinae et metron quod est mensura, unde Geometria, id est mensura terrae vel terrarum.”
5. - This is a corruption of “Pater habitancium in tentoris atque pastorum.”
6. - Confusion between the Tower of Babel and the Tower of Babylon, see Cooke Ms.
7. - A corruption of “De terra illa egressus est Assur, et aedificavit Niniven, et plateas civitatis, et Chale, Resen Quoque inter Ninevet et Chale: haec est civitas magna.”
8. - In the Cooke Manuscript, it is said: Geometry.
9. - See the Cooke Manuscript, note 14.
10. - Penny - The penny is an old English monetary unit.
11. - The editor wrote: “IIIj d for their nonfinding,” which makes no sense if one does not translate “IIIj” as four and “non-finding” by “snack” or “light meal”.
12. - Congregation, assembly - An assembly is a meeting organized for a number of people, while a congregation is a meeting of cardinals and prelates, permanent or temporary, to examine certain special religious cases, and by analogy a group of senior Masons.
13. - Henry VI of England, son of Henry V and Catherine of Valois, married Margaret of Anjou, reigned from 1422 to 1461 and from 1470 to his death. During a period of nine years, he was deprived of his kingdom by Edward of York, grandson of Charles VII of France.
14. - The Watson Manuscript dates, let us say around 1535. It might be considered slightly older if one refers to the words Holy Church which usually characterizes the Roman Catholic Church. Indeed, having failed to obtain in 1527, from Pope Clement VII, the annulment of his marriage with Catherine of Aragon, Henry VIII imposed in retaliation various measures contrary to the interests of the Church: so, in December 1534, the Parliament passed the Act of Supremacy, making the King the supreme head on earth of the Church of England.
15. - Lodge and Chamber- The Lodge (a Craft site) and the Chamber (a Meeting site) are clearly differentiated in the text. | <urn:uuid:3a81f439-d559-491d-8933-300534f6ebab> | CC-MAIN-2017-17 | http://theoldcharges.com/chapter-10.html | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122629.72/warc/CC-MAIN-20170423031202-00015-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.98544 | 7,569 | 3.28125 | 3 |
Temporal range: Oligocene, 34–23 Ma
[Image caption: Skeleton cast of P. transouralicum, National Museum of Nature and Science, Tokyo]
Paraceratherium is an extinct genus of hornless rhinoceros, and one of the largest terrestrial mammals that has ever existed. It lived from the early to late Oligocene epoch (34–23 million years ago); its remains have been found across Eurasia between China and the Balkans. It is classified as a member of the hyracodont subfamily Indricotheriinae. "Paraceratherium" means "near the hornless beast", in reference to Aceratherium, a genus that was once thought similar.
The exact size of Paraceratherium is unknown because of the incompleteness of the fossils. Its weight is estimated to have been 15 to 20 tonnes (33,000 to 44,000 lb) at most; the shoulder height was about 4.8 metres (15.7 feet), and the length about 7.4 metres (24.3 feet). The legs were long and pillar-like. The long neck supported a skull that was about 1.3 metres (4.3 ft) long. It had large, tusk-like incisors and a nasal incision that suggests it had a prehensile upper lip or proboscis. The lifestyle of Paraceratherium may have been similar to that of modern large mammals such as the elephants and extant rhinoceroses. Because of its size, it would have had few predators and a slow rate of reproduction. It was a browser, eating mainly leaves, soft plants, and shrubs. It lived in habitats ranging from arid deserts with a few scattered trees to subtropical forests. The reasons for the animal's extinction are unknown, but various factors have been proposed.
The taxonomy of the genus and the species within has a long and complicated history. Other genera of Oligocene indricotheres, such as Baluchitherium, Indricotherium, and Dzungariotherium have been named, but no complete specimens exist, making comparison and classification difficult. Most modern scientists consider these genera to be junior synonyms of Paraceratherium, and that it contains four discernible species; P. bugtiense (the type species), P. transouralicum, P. prohorovi, and P. orgosensis, although the last may be a distinct genus. The most completely-known species is P. transouralicum, so most reconstructions of the genus are based on it. Differences between P. bugtiense and P. transouralicum may be due to sexual dimorphism, which would make them the same species.
The taxonomic history of Paraceratherium is complex due to the fragmentary nature of the known fossils and because western, Soviet, and Chinese scientists worked in isolation from each other for much of the 20th century and published research mainly in their respective languages. Scientists from different parts of the world did attempt to compare their finds to get a more complete picture of these animals, but were hindered by politics and wars. The opposing taxonomic tendencies of "lumping and splitting" have also contributed to the problem. Inaccurate geological dating previously led scientists to believe various geological formations that are now known to be contemporaneous were of different ages. Many genera were named on the basis of subtle differences in molar characteristics—features that vary within populations of other rhinoceros taxa—and are therefore not accepted by most scientists for distinguishing species.
Early discoveries of indricotheres were made through various colonial links to Asia. The first known indricothere fossils were collected from Balochistan (in modern-day Pakistan) in 1846 by a soldier named Vickary, but these fragments were unidentifiable at the time. The first fossils now recognised as Paraceratherium were discovered by the British geologist Guy Ellcock Pilgrim in Balochistan in 1907–1908. His material consisted of an upper jaw, lower teeth, and the back of a jaw. The fossils were collected in the Chitarwata Formation of Dera Bugti, where Pilgrim had previously been exploring. In 1908, he used the fossils as basis for a new species of the extinct rhinoceros genus Aceratherium; A. bugtiense. Aceratherium was by then a wastebasket taxon; it included several unrelated species of hornless rhinoceros, many of which have since been moved to other genera. Fossil incisors that Pilgrim had previously assigned to the unrelated genus Bugtitherium were later shown to belong to the new species.
In 1910, more partial fossils were discovered in Dera Bugti during an expedition by the British palaeontologist Clive Forster-Cooper. Based on these remains, Forster-Cooper moved A. bugtiense to the new genus Paraceratherium, meaning "near the hornless beast", in reference to Aceratherium. His rationale for this reclassification was the species' distinctly down-turned lower tusks. In 1913, Forster-Cooper named a new genus and species, Thaumastotherium ("wonderful beast") osborni, based on larger fossils from the same excavations, but he renamed the genus Baluchitherium later that year because the former name was preoccupied, having already been used for a hemipteran insect. The fossils of Baluchitherium were so fragmentary that Forster-Cooper was only able to identify it as a kind of odd-toed ungulate, though he noted the possibility of confusion with Paraceratherium. The American palaeontologist Henry Fairfield Osborn, after whom B. osborni was named, suggested it may have been a titanothere.
A Russian Academy of Sciences expedition later found fossils in the Aral Formation near the Aral Sea in Kazakhstan; it was the most complete indricothere skeleton known, but it lacked the skull. In 1916, based on these remains, Aleksei Alekseeivich Borissiak erected the genus Indricotherium, named for a mythological monster, the "Indrik beast". He did not assign a species name, I. asiaticum, until 1923, but Maria Pavlova had already named it I. transouralicum in 1922. Also in 1923, Borissiak created the subfamily Indricotheriinae to include the various related forms known by then. In 1939, Borissiak also named a new species of Paraceratherium from Kazakhstan, P. prohorovi.
In 1922, the American explorer Roy Chapman Andrews led a well-documented expedition to China and Mongolia sponsored by the American Museum of Natural History. Various indricothere remains were found in formations of the Mongolian Gobi Desert, including the legs of a specimen standing in an upright position, indicating that it had died while trapped in quicksand, as well as a very complete skull. These remains became the basis of Baluchitherium grangeri, named by Osborn in 1923.
Dzungariotherium orgosensis was described in 1973 based on fossils—mainly teeth—from Dzungaria in Xinjiang, northwest China. A multitude of other species and genus names—mostly based on differences in size, snout shape, and front tooth arrangement—have been coined for various indricothere remains. Fossils attributable to Paraceratherium continue to be discovered across Eurasia, but the political situation in Pakistan has become too unstable for further excavations to occur there.
In 1936, the American palaeontologists Walter Granger and William K. Gregory proposed that Forster-Cooper's Baluchitherium osborni was likely a junior synonym (an invalid name for the same taxon) of Paraceratherium bugtiense, because these specimens were collected at the same locality and were possibly part of the same morphologically variable species. William Diller Matthew and Forster-Cooper himself had expressed similar doubts a few years earlier. Although it had already been declared a junior synonym, the genus name Baluchitherium remained popular in various media because of the publicity surrounding Osborn's B. grangeri.
In 1989, palaeontologists Spencer G. Lucas and Jay C. Sobus published a revision of indricothere taxa, which is followed by most western scientists today. They concluded that Paraceratherium, as the oldest name, was the only valid indricothere genus from the Oligocene, and contained four valid species, P. bugtiense, P. transouralicum, P. prohorovi, and P. orgosensis. They considered most other names to be junior synonyms of those taxa, or as dubious names, based on remains too fragmentary to identify properly. By analysing alleged differences between named genera and species, Lucas and Sobus found that these most likely represented variation within populations, and that most features were indistinguishable between specimens, as had been pointed out in the 1930s. The fact that the single skull assigned to P. transouralicum or Indricotherium was domed, while others were flat at the top was attributed to sexual dimorphism. Therefore, it is possible that P. bugtiense fossils represent the female, while P. transouralicum represents the male of the same species.
According to Lucas and Sobus, the type species P. bugtiense from the late Oligocene of Pakistan includes junior synonyms such as B. osborni and P. zhajremensis. P. transouralicum, formerly Indricotherium, from the late Oligocene of Kazakhstan, Mongolia, and northern China includes B. grangeri and I. minus. P. orgosensis, formerly Dzungariotherium from the middle and late Oligocene of northwest China includes D. turfanensis and P. lipidus. P. orgosensis may be distinct enough to warrant its original genus name, but its exact position requires evaluation. P. prohorovi from the late Oligocene of Kazakhstan may be too incomplete for its position to be resolved in relation to the other species; the same applies to proposed species such as I. intermedium and P. tienshanensis, as well as genera like Benaratherium and Caucasotherium. Though the genus name Indricotherium is now a junior synonym of Paraceratherium, the subfamily name Indricotheriinae is still in use because genus name synonymy does not affect the names of higher level taxa that are derived from these. Members of the subfamily are therefore still commonly referred to as indricotheres.
In contrast to the revision by Lucas and Sobus, a 2003 paper by Chinese researchers suggested that Indricotherium and Dzungariotherium were valid genera, and that P. prohorovi did not belong in Paraceratherium. They also recognised the validity of species such as P. lipidus, P. tienshanensis, and P. sui. A 2004 paper by Chinese paleontologist Tao Deng and colleagues also recognised three distinct genera. Some western writers have similarly used names otherwise considered invalid since the 1989 revision, but without providing detailed analysis and justification.
The superfamily Rhinocerotoidea, which includes modern rhinoceroses, can be traced back to the early Eocene—about 50 million years ago—with early precursors such as Hyrachyus. Rhinocerotoidea contains three families; the Amynodontidae, the Rhinocerotidae ("true rhinoceroses"), and the Hyracodontidae. The diversity within the rhinoceros group was much larger in prehistoric times; they ranged from dog-sized to the size of Paraceratherium. There were long-legged, cursorial forms adapted for running and squat, semi aquatic forms. Most species did not have horns. Rhinoceros fossils are identified as such mainly by characteristics of their teeth, which is the part of the animals most likely to be preserved. The upper molars of most rhinoceroses have a pi-shaped (π) pattern on the crown, and each lower molar has paired L-shapes. Various skull features are also used for identification of fossil rhinoceroses.
The Indricotheriinae subfamily, to which Paraceratherium belongs, was first classified as part of the Hyracodontidae family by Leonard B. Radinsky in 1966. Previously, they had been regarded as a subfamily within Rhinocerotidea, or even a full family, Indricotheriidae. In a 1999 cladistic study of tapiromorphs, Luke Holbrook found indricotheres to be outside the hyracodontid clade, and wrote that they may not be a monophyletic (natural) grouping. Radinsky's scheme is the prevalent hypothesis today. The hyracodont family contains long-legged members adapted to running, such as Hyracodon, and were distinguished by incisor characteristics. Indricotheres are distinguished from other hyracodonts by their larger size and the derived structure of their snouts, incisors and canines. The earliest known indricothere is the dog-sized Forstercooperia from the middle and late Eocene of western North America and Asia. The cow-sized Juxia is known from the middle Eocene; by the late Eocene the genus Urtinotherium of Asia had almost reached the size of Paraceratherium. Paraceratherium itself lived in Eurasia during the Oligocene period, 23 to 34 million years ago. The genus is distinguished from other indricotheres by its large size, nasal incision that would have supported a muscular snout, and its down-turned premaxillae. It had also lost the second and third lower incisors, lower canines, and lower first premolars.
Lucas and colleagues had reached similar conclusions in a previous 1981 analysis of Forstercooperia, wherein they still retained Paraceratherium and Indricotherium as separate genera.
Paraceratherium is one of the largest known land mammals that have ever existed, but its exact size is unclear because of the lack of complete specimens. Early estimates of 30 tonnes (66,000 lb) are now considered exaggerated; it may have been in the range of 15 to 20 tonnes (33,000 to 44,000 lb) at maximum, and as low as 11 tonnes (24,000 lb) on average. Calculations have mainly been based on fossils of P. transouralicum because this species is known from the most complete remains. Estimates have been based on skull, teeth, and limb bone measurements, but the known bone elements are represented by individuals of different sizes, so all skeletal reconstructions are composite extrapolations, resulting in several weight ranges. Its total body length was estimated as 8.7 m (28.5 ft) from front to back by Granger and Gregory in 1936, and 7.4 m (24.3 ft) by Vera Gromova in 1959, but the former estimate is now considered exaggerated. The weight of Paraceratherium was similar to that of some extinct proboscideans, with the largest complete skeleton known belonging to the steppe mammoth (Mammuthus trogontherii). In spite of the roughly equivalent mass, Paraceratherium may have been taller than any proboscidean. Its shoulder height was estimated as 5.25 m (17.2 ft) at the shoulders by Granger and Gregory, but 4.8 m (15.7 ft) by Gregory S. Paul in 1997. The neck was estimated at 2 to 2.5 m (6.6 to 8.2 ft) long by Michael P. Taylor and Mathew J. Wedel in 2013. The teeth of P. orgosensis (which that species is mainly known from) are 25 percent larger than those of P. transouralicum, making it the largest known indricothere.
No complete set of vertebrae and ribs of Paraceratherium have yet been found and the tail is completely unknown. The atlas and axis vertebrae of the neck are wider than in most modern rhinoceroses, with space for strong ligaments and muscles that would be needed to hold up the large head. The rest of the vertebrae were also very wide, and had large zygapophyses with much room for muscles, tendons, ligaments, and nerves, to support the head, neck, and spine. The neural spines were long and formed a long "hump" along the back, where neck muscles and nuchal ligaments for holding up the skull were attached. The ribs were similar to those of modern rhinoceroses, but the ribcage would have looked smaller in proportion to the long legs and large bodies, because modern rhinoceroses are comparatively short-limbed. The last vertebra of the lower back was fused to the sacrum, a feature found in advanced rhinoceroses. Like sauropod dinosaurs, Paraceratherium had pleurocoel-like openings (hollow parts of the bone) in their pre-sacral vertebrae, which may have helped to lighten the skeleton.
The limbs were large and robust to support the animal's large weight, and were in some ways similar to and convergent with those of elephants and sauropod dinosaurs with their likewise graviportal (heavy and slow moving) builds. Unlike such animals, which tend to lengthen the upper limb bones while shortening, fusing and compressing the lower limb, hand, and foot bones, Paraceratherium had short upper limb bones and long hand and foot bones—except for the disc-shaped phalanges—similar to the running rhinoceroses from which they descended. Some foot bones were almost 50 centimetres (20 in) long. The thigh bones typically measured 1.5 m (4.9 ft), a size only exceeded by those of some elephants and dinosaurs. The thigh bones were pillar-like and much thicker and more robust than those of other rhinoceroses, and the three trochanters on the sides were much reduced, as this robustness diminished their importance. The limbs were held in a column-like posture instead of bent, as in smaller animals, which reduced the need for large limb muscles. The front limbs had three toes.
Due to the fragmentary nature of known Paraceratherium fossils, the animal has been reconstructed in several different ways since its discovery. In 1923, W. D. Matthew supervised an artist to draw a reconstruction of the skeleton based on the even less complete P. transouralicum specimens known by then, using the proportions of a modern rhinoceros as a guide. The result was too squat and compact, and Osborn had a more slender version drawn later the same year. Some later life restorations have made the animal too slender, with little regard to the underlying skeleton. Gromova published a more complete skeletal reconstruction in 1959, based on the P. transouralicum skeleton from the Aral Formation, but this also lacked several neck vertebrae.
There are no indications of the colour and skin texture of the animal because no skin impressions or mummies are known. Most life restorations show the creature's skin as thick, folded, grey, and hairless, based on modern rhinoceroses. Because hair retains body heat, modern large mammals such as elephants and rhinoceroses are largely hairless. American palaeontologist Donald Prothero has proposed that, contrary to most depictions, Paraceratherium had large, elephant-like ears that it used for thermoregulation. The ears of elephants enlarge the body's surface area and are filled with blood vessels, making the dissipation of excess heat easier. According to Prothero, this would have been true for Paraceratherium; he points to robust bones around the ear openings. The palaeontologists Pierre-Olivier Antoine and Darren Naish have expressed scepticism towards this idea.
The largest skulls of Paraceratherium are around 1.3 metres (4.3 ft) long, 33 to 38 centimetres (13 to 15 in) at the back of the skull, and 61 centimetres (24 in) wide across by the zygomatic arches. Paraceratherium had a long forehead, which was smooth and lacked the roughened area that serves as attachment point for the horns of other rhinoceroses. The bones above the nasal region are long and the nasal incision goes far into the skull. This indicates that Paraceratherium had a prehensile upper lip similar to that of the black rhinoceros and the Indian rhinoceros, or a short proboscis or trunk as in tapirs. The back of the skull was low and narrow, without the large lambdoid crests at the top and along the sagittal crest, which are otherwise found in horned and tusked animals that need strong muscles to push and fight. It also had a deep pit for the attachment of nuchal ligaments, which hold up the skull automatically. The occipital condyle was very wide and Paraceratherium appears to have had large, strong neck muscles, which allowed it to sweep its head strongly downwards while foraging from branches. One skull of P. transouralicum has a domed forehead, whereas others have flat foreheads, possibly because of sexual dimorphism. A brain endocast of P. transouralicum shows it was only 8 percent of the skull length, while the brain of the Indian rhinoceros is 17.7 percent of its skull length.
The species of Paraceratherium are mainly discernible through skull characteristics. P. bugtiense and P. orgosensis share features such as relatively slender maxillae and premaxillae, shallow skull roofs, mastoid-paroccipital processes that are relatively thin and placed back on the skull, a lambdoid crest which extends less back, and an occipital condyle with a horizontal orientation. P. transouralicum has robust maxillae and premaxillae, upturned zygomata, domed frontal bones, thick mastoid-paroccipital processes, a lambdoid crest that extends back, and occipital condyles with a vertical orientation. P. orgosensis is distinguished from the other species by the larger size of its teeth, and distinct crochets of its molars.
Unlike most primitive rhinoceroses, the front teeth of Paraceratherium were reduced to a single pair of incisors in either jaw, which were large and conical, and have been described as tusks. The upper incisors pointed downwards; the lower ones were shorter and pointed forwards. Among known rhinoceroses, this arrangement is unique to Paraceratherium and the related Urtinotherium. The incisors may have been larger in males. The canine teeth otherwise found behind the incisors were lost. The incisors were separated from the row of cheek teeth by a large diastema (gap). This feature is found in mammals where the incisors and cheek teeth have different specialisations. The upper molars, except for the third upper molar that was V-shaped, had a pi-shaped (π) pattern and a reduced metastyle. The premolars only partially formed the pi pattern. Each molar was the size of a human fist; among mammals they were only exceeded in size by proboscideans, though they were small relative to the size of the skull. The lower cheek teeth were L-shaped, which is typical of rhinoceroses.
Zoologist Robert M. Alexander has suggested that overheating may have been a serious problem in Paraceratherium due to its size. According to Prothero, the best living analogues for Paraceratherium may be large mammals such as elephants, rhinoceroses and hippopotamuses. To aid in thermoregulation, these animals cool down during the day by resting in the shade or by wallowing in water and mud. They also forage and move mainly at night. Because of its large size, Paraceratherium would not have been able to run and move quickly, but they would have been able to cross large distances, which would be necessary in an environment with a scarcity of food. They may therefore have had large home ranges and have been migratory. Prothero suggests that animals as big as indricotheres would need very large home ranges or territories of at least 1,000 square kilometres (250,000 acres) and that, because of a scarcity of resources, there would have been little room in Asia for many populations or a multitude of nearly identical species and genera. This principle is called competitive exclusion; it is used to explain how the black rhinoceros (a browser) and white rhinoceros (a grazer) exploit different niches in the same areas of Africa.
Most predators in their habitat were relatively small—about the size of a wolf—and were not a threat to Paraceratherium. Adult individuals would be too large for most predators to attack but the young would have been vulnerable. Bite marks on bones from the Bugti beds indicate that even adults may have been preyed on by 10-to-11-metre (33 to 36 ft)-long crocodiles, Crocodylus bugtiensis. As in elephants, the gestation period of Paraceratherium may have been lengthy and individuals may have had long lifespans. Paraceratherium may have lived in small herds, perhaps consisting of females and their calves, which they protected from predators. It has been proposed that 20 tonnes (44,000 lb) may be the maximum weight possible for land mammals, and Paraceratherium was close to this limit. The reasons mammals cannot reach the much larger size of sauropod dinosaurs are unknown. The reason may be ecological instead of biomechanical, and perhaps related to reproduction strategies. Movement, sound, and other behaviours seen in CGI documentaries such as "Walking With Beasts" are entirely conjectural.
The simple, low-crowned teeth indicate that Paraceratherium was a browser with a diet consisting of relatively soft leaves and shrubs. Later rhinoceroses were grazers, with high-crowned teeth because their diets contained grit that quickly wore down their teeth. Studies of mesowear on Paraceratherium teeth confirm the creatures had a soft diet of leaves; microwear studies have yet to be conducted. Isotope analysis shows that Paraceratherium fed chiefly on C3 plants, which are mainly leaves. Like its perissodactyl relatives the horses, tapirs, and other rhinoceroses, Paraceratherium would have been a hindgut fermenter; it would extract relatively little nutrition from its food and would have to eat large volumes to survive. Like other large herbivores, Paraceratherium would have had a large digestive tract.
Granger and Gregory argued that the large incisors were used for defence or for loosening shrubs by moving the neck downwards, thereby acting as picks and levers. Tapirs use their proboscis to wrap around branches while stripping off bark with the front teeth; this ability would have been helpful to Paraceratherium. Some Russian authors suggested that the tusks were probably used for breaking twigs, stripping bark and bending high branches and that, because species from the early Oligocene had larger tusks than later ones, they probably had a more bark than leaf based diet. Since the species involved are now known to have been contemporaneous, and that the differences in tusks are perhaps sexually dimorphic, the latter idea is not accepted today. Herds of Paraceratherium may have migrated while continuously foraging from tall trees, which smaller mammals could not reach. Osborn suggested that its mode of foraging would have been similar to that of the high-browsing giraffe and okapi, rather than to modern rhinoceroses, whose heads are carried close to the ground.
Remains assignable to Paraceratherium have been found in early to late Oligocene (34–23 million years ago) formations across Eurasia, in modern-day China, Mongolia, India, Pakistan, Kazakhstan, Georgia, Turkey, Romania, Bulgaria, and the Balkans. Their distribution may be correlated with the palaeogeographic development of the Alpine-Himalayan mountain belt. The range of Paraceratherium finds implies that they inhabited a continuous landmass with a similar environment across it, but this is contradicted by palaeogeographic maps that show this area had various marine barriers, so the genus was successful in being widely distributed despite this. The fauna which coexisted with Paraceratherium included other rhinoceroses, artiodactyls, rodents, beardogs, weasels, hyaenodonts, nimravids and cats.
The habitat of Paraceratherium appears to have varied across its range, based on the types of geological formations it has been found in. The Hsanda Gol Formation of Mongolia represents an arid desert basin, and the environment is thought to have had few tall trees and limited brush cover, as the fauna consisted mainly of animals that fed from tree tops or close to the ground. A study of fossil pollen showed that much of China was woody shrubland, with plants such as saltbush, mormon tea (Ephedra), and nitre bush (Nitraria), all adapted to arid environments. Trees were rare, and concentrated near groundwater. The parts of China where Paraceratherium lived had dry lakes and abundant sand dunes, and the most common plant fossils are leaves of the desert-adapted Palibinia. Trees in Mongolia and China included birch, elm, oaks, and other deciduous trees, while Siberia and Kazakhstan also had walnut trees. Dera Bugti in Pakistan had dry, temperate to subtropical forest.
The reasons Paraceratherium became extinct after surviving for about 11 million years are unknown, but it is unlikely that there was a single cause. Theorised reasons include climate change, low reproduction rate, and invasion by gomphothere proboscideans from Africa in the late Oligocene. Gomphotheres may have been able to considerably change the habitats they entered, in the same way that African elephants do today, by destroying trees and turning woodland into grassland. Once their food source became scarce and their numbers dwindled, Paraceratherium populations would have become more vulnerable to other threats. Large predators like Hyaenaelurus and Amphicyon also entered Asia from Africa during the early Miocene; these may have predated Paraceratherium calves. Other herbivores also invaded Asia during this time. | <urn:uuid:0ec283fb-001d-4c4d-a2fb-5276b62c8e8c> | CC-MAIN-2017-17 | http://www.mashpedia.com/Paraceratherium | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917119225.38/warc/CC-MAIN-20170423031159-00483-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.964075 | 6,516 | 3.65625 | 4 |
A Brief History of American Alternative Journalism in the Twentieth Century
By Randolph T. Holhut
It has been said that the duty of the press is to comfort the afflicted and afflict the comfortable. While mainstream journalism has more often than not just paid lip-service to that credo, alternative journalism has lived up to those words.
Throughout its history, alternative journalism has dug up the news that others would wish to see buried. It has spoken truth to power. It has stuck up for the common person and worked for the public good. It has used the craft of journalism as an agent of social change. This is a brief history of the genre and the people that shaped it in this century.
1900-1920: GROWTH AND EMERGENCE
The period between 1900 and 1920 saw the emergence, growth and demise of alternative journalism.
With the turn of the Twentieth Century, the combination of new publishing and distribution technologies, a literate population that hungered to know what was really going on, and fierce competition among newspapers and magazines to bring these people the truth set the stage for the creation of a new form of journalism - ``muckraking,'' as President Theodore Roosevelt dubbed it in 1906.
The muckrakers - Lincoln Steffens, Ida Tarbell, Will Irwin, Ray Stannard Baker and Upton Sinclair chief among them - invented and perfected the craft of investigative journalism in the first two decades of this century.
S.S. McClure and the magazine that bore his name started off the muckraking movement. In the pages of McClure's beginning in late 1902, Tarbell exposed the business practices of John D. Rockefeller and his Standard Oil Company; Steffens began chronicling corruption in city and state governments and Baker began reporting on the problems of working people.
Tarbell, Steffens and Baker's stories caused a sensation and drove the circulation of McClure's past the half-million mark. It was eventually joined by other mass-market magazines such as Collier's, Cosmopolitan, Everybody's, Hampton's, The Independent, Pearson's and The American Magazine.
For the first time, there was a group of writers and a concentration of publications hammering away at the ills of American society. Nothing before or since has equaled the scope of the muckrakers' work. They uncovered corruption in business and politics, food adulteration and harmful ingredients in patent medicines, the plunder of natural resources, the plight of black Americans and the victims of unfettered capitalism.
While this was happening, two other publications pushed the boundaries a little further out. The socialist newspaper, Appeal to Reason, had a peak circulation of 760,000 during its life as a weekly from 1895 to 1917. It was here that Sinclair's classic book on the Chicago meatpacking houses, "The Jungle," was serialized a year before its publication.
The Masses, published monthly from 1911 to 1917, was not so much a muckraking magazine as it was a fusion of art, culture and radical politics that broke taboos and thumbed its nose at the establishment. Not until the rise of the underground press in the 1960s would there be other publications that would share The Masses' libertine spirit.
A combination of factors killed the muckraking movement. Advertising pressures forced magazines to either soften their content or go out of business. World War I and the resulting wave of reaction made it difficult to challenge the established order. The U.S. Post Office denied mailing privileges in 1917 to magazines such as The Masses, Appeal to Reason and Emma Goldman's anarchist magazine Mother Earth for their opposition to the war. All three ceased publication.
1920-1950: PERIOD OF TRANSITION
The years between 1920 and 1950 were a transition period for the genre. The public soon tired of appeals for reform. The national mood was summed up by Warren Harding in his campaign for the 1920 Republican Party presidential nomination; America wanted "a return to normalcy."
The muckrakers themselves left journalism for other fields. It would take five decades before another sustained movement of investigative journalism would appear that was comparable to the peak years of the muckraking era from 1902 to 1912.
The Russian Revolution of 1917 ushered in a period of intramural squabbling on the Left and repression on the Right. The utopian image of a people's democracy captured the imagination of the American Left, but it only took a few years for that image to fade into a lengthy clash of conflicting visions: socialism versus Marxism versus communism versus anarchism.
The Right had no problem figuring out the meaning of the Russian Revolution. They viewed it as a threat to the existence of capitalism and the established order, one that needed to be stamped out immediately. The wave of mass arrests and deportations of suspected communists and anarchists ordered by U.S. Attorney General A. Mitchell Palmer in 1919 further chilled the climate for dissent.
It was an era where mainstream liberal publications like The Nation and The New Republic were considered subversive literature. For the publications that were further Left, it was a challenging time.
The Masses reappeared as The Liberator in 1918, but it was never able to recapture and sustain the momentum it had before America entered World War I. The unresolved battle between art and revolution sapped its strength and the magazine was turned over to the Communist Workers Party in 1922.
From the ashes of the Masses/Liberator arose The New Masses in 1926. When the Great Depression struck in 1929 and America became more receptive to ideas from the Left, the magazine was poised to become one of the most influential publications of the 1930s.
It continued the original Masses tradition of fusing art, reportage and revolution together into a highly readable package. Most of the important writers of the period - Ernest Hemingway, Richard Wright, Thomas Wolfe, Dorothy Parker, Erskine Caldwell, Mike Gold, Theodore Dreiser, James Agee, Langston Hughes and Josephine Herbst - appeared on its pages.
Another important dissident publication was launched in the post-World War I era, the Daily Worker. It was started up by the Communist Party in 1924 and generally reflected the prevailing views of the party. Often dismissed as a mere mouthpiece for the CP, it was more of a radical labor paper that at the same time tried to become a popular paper of the Left. Despite a peak circulation of only 35,000 and consistent financial problems, it lasted until 1957.
The religious counterpart of the Daily Worker was Dorothy Day's newspaper, the Catholic Worker. Founded on May Day, 1933, the paper had Day as its publisher, editor and chief writer until her death in 1980. Through good times and bad, it never wavered in its editorial line of social justice, pacifism, the dignity of labor and the glory of God. It endures today as the voice of the Catholic Worker movement with a circulation of about 100,000.
George Seldes - an independent journalist and author - made a bold attempt to single-handedly revive the muckraking movement with his weekly newsletter, In fact. Published from 1940 until 1950, it was one of the first publications that was solely devoted to press criticism and was staunchly anti-fascist.
Seldes attacked the shortcomings of the commercial media with vigor. He also printed stories that the commercial media wouldn't touch - stories that came from sources ranging from the Congressional Record to reporters who were suppressed by their editors. It had a peak circulation of 176,000, one of the largest-ever circulations for a liberal weekly - more than The Nation, The Progressive and The New Republic combined.
In fact started publishing in an era when the Left was a legitimate force in American society. It shut down when the paranoia of the Cold War and the communist witch hunts of U.S. Senator Joseph McCarthy were ascendant.
1950-1960: SOWING THE SEEDS OF A NEW REVOLUTION
The years between 1950 and 1960 were a crucial incubation period for the era of the underground press of the 1960s and early 1970s.
As in the years after World War I, the post-World War II period marked another period of decline for the Left. Those years of hardship also gave birth to a quintet of publications that would all help usher in the modern era of alternative journalism.
The first was the National Guardian, which began as a weekly in 1948. It strived to be a dissenting voice to the Cold War and sought to revive the more militant aspects of the New Deal without having the taint of the Communist Party that the Daily Worker had. It opposed both the Korean and Vietnam Wars, gave extensive coverage to the Civil Rights movement, was alone in defending Ethel and Julius Rosenberg and was a consistent advocate of forming a third national political party. The paper's name was shortened to the Guardian in 1967. It ceased publication in the early 1990s.
In 1953, veteran Left journalist I.F. Stone picked up where Seldes left off with his weekly newsletter, I.F. Stone's Weekly. Stone did not do as much press criticism as Seldes, but he did perfect Seldes' technique of finding official duplicity through a careful reading of government documents. Treated as a pariah through the 1950s, he opposed U.S. Senator Joseph McCarthy and FBI head J. Edgar Hoover long before it was fashionable. His reportage of America's maneuverings in Vietnam in the 1960s made I.F. Stone's Weekly must reading for journalists, scholars and the anti-war movement. It stopped publication in 1971.
Two years after Stone started his newsletter, the Village Voice was launched by Ed Fancher and Dan Wolf, with financial backing from novelist Norman Mailer. Its politics were more liberal Democrat than radical socialist, but politics were not what made the Voice's reputation. What made the Voice a model for future alternative publications was its style and wit.
As editor for the first two decades of its existence, Wolf recruited talented people and let them do their thing. Sometimes, rambling and egocentric writing was the result. More often than not, the Voice was home to the most in-depth, literate and entertaining writing of any weekly in America.
Two smaller publications round out the top five seminal influences on the modern era of alternative journalism. Liberation, started in 1956 by pacifist philosopher A.J. Muste, took the pacifism and non-violent activism of the Catholic Worker and added intellectual anarchism to the mix. Paul Goodman, David Dellinger and Bertrand Russell were among the main contributors to this monthly, which folded in 1977.
The Realist, published by Paul Krassner from 1958 until 1974, used ridicule and satire as its weapons. Journalism was the last priority in Krassner's magazine, but it provided the inspiration for the outrageousness of the underground press of the 1960s.
1960-1975: THE SECOND GOLDEN AGE OF MUCKRAKING
The second golden age of alternative journalism took place between 1960 and 1975. The political, economic and technological circumstances that made the first golden age - the Muckraking era from 1900 to 1915 - possible were again present in the 1960s. Offset printing made it possible for anyone with a typewriter, a paste pot and a little bit of money to put out a newspaper cheaply. A vast audience of young people - alienated by the mainstream media - was ready for something different. The Vietnam War and the growing revulsion with it was an even bigger catalyst.
On May Day 1964, Art Kunkin handed out the inaugural issue of the Los Angeles Free Press, generally acknowledged to be the first of the Sixties underground papers. By the end of the decade, there would be an estimated 400 regularly and irregularly published underground papers in existence in America. They included the Berkeley Barb, the East Village Other, the San Francisco Oracle and the Chicago Seed, among others.
From their predecessors, the underground press borrowed the fusion of art and politics of The Masses, the advocacy journalism of Appeal to Reason, the moral fire of the Catholic Worker and Liberation, the free-swinging satire of The Realist and the non-conformity and self-expression of the Village Voice. The result was an eclectic, unpredictable style of newspapering that went far beyond the traditional styles of journalism on the Left.
The third wave of feminism that began in the 1960s fueled a boom in feminist publications. At first, feminists tried to use the underground press as their forum, but it was just as unreceptive to their ideas as the mainstream media. By 1970, it was clear to women that if they wanted to get the word out about their movement, they had to do it themselves.
It Ain't Me Babe was started in Berkeley in 1970 as the third wave's first feminist newspaper. It only lasted a year, but its righteous anger and energy were contagious. The Washington, D.C.-based off our backs, started a few weeks after its West Coast counterpart, had more staying power and became the most respected newspaper of the women's movement.
The feminist publication that got the most readers and the most attention was Ms., which debuted in 1972. Glossy and more politically conservative than papers like off our backs, Ms. kept left-wing and lesbian feminism at arm's length and emphasized a personal rather than collective vision of women's liberation.
The biggest trend in alternative journalism was ``New Journalism,'' the combination of non-fiction reporting with literary techniques associated with fiction writing. The genre was created in the early 1960s, not in the underground or Left press but in mainstream publications such as the New York Herald Tribune's Sunday magazine, New York, and in Esquire.
Warren Hinckle III invented the concept of ``radical slick'' when he took over Ramparts in 1964. Hinckle converted the magazine, founded two years earlier in San Francisco as a liberal Catholic quarterly, into a monthly and introduced contemporary graphics and design, high-profile publicity efforts and provocative investigative reporting.
Circulation zoomed up to 250,000 on the strength of Ramparts' exposés of the Cold War and Vietnam War policies of the U.S. government, but the magazine went bankrupt in 1969 and limped along until going under for good in 1975. The flashy muckraking style of Ramparts was revived the following year when several of its former staffers started up Mother Jones.
Ramparts was great at muckraking, but not as good at covering rock & roll. Jann Wenner, miffed at Ramparts' treatment of rock and the counterculture, decided to start a biweekly in 1967 - Rolling Stone . A little slicker and more conservative in style than the underground papers, it celebrated music as being something that was above and beyond politics.
Rolling Stone grew more and more successful as the years passed. Its financial success came in part from corporate America's recognition of the consumer possibilities of the counterculture. By being more of a lifestyle publication than a political one, it survived the implosion of the New Left at the end of the 1960s that killed off most of the underground press.
1975-PRESENT: MATURATION AND STRUGGLE YET AGAIN.
The period from 1975 to the present saw a maturation of the alternative press, as it struggled to stay relevant in yet another conservative age.
The weekly papers that were started in the 1970s took a different tack, eschewing radical politics for community involvement and local news coverage. Two papers that were started in the late 1960s, the San Francisco Bay Guardian and Boston After Dark (later absorbed into the Boston Phoenix) pioneered the local approach to alternative journalism. By reaching out to the communities where they published and celebrating the concept of regional identity, the city and regional weeklies ended up with far larger audiences than their more strident predecessors.
The idealism of the Sixties underground press was tempered somewhat by this approach, but it was not totally compromised...at least for a while. Eventually, the lure of advertising dollars was too great to ignore. Investigative reporting gave way to arts and entertainment coverage, lifestyle features, restaurant reviews and fashion spreads. The message to alternative weeklies was clear - go downmarket or go out of business.
The political mood of the country shifted again to the right in the 1980s, and the alternative press struggled to adapt. But unlike previous eras of conservatism and reaction, the changes brought on by the movements of the 1960s proved harder to kill off.
The alternative press is heading into the Twenty-First Century bloodied but unbowed. Of the three stalwart journals of the Left that survived all the changes of the century - The Nation, The Progressive and The New Republic - The Nation and The Progressive maintained their editorial ideology and struggled under constant financial difficulty while The New Republic turned into a neo-conservative magazine and prospered.
Other alternative journals are still in business, but are far from robust. In These Times, a socialist newsweekly that started publication in 1976, just barely fought off bankruptcy and is now a biweekly. Mother Jones fought off a costly challenge by the Internal Revenue Service to its tax-exempt foundation status in the early 1980s, struggled financially and eventually cut back from ten to six issues a year by the end of the Eighties. Ms. ceased publication in the late 1980s and was reborn in 1990 as an adless, reader-supported bimonthly.
The Village Voice still chugs along as the flagship of the alternative press. Like the other urban weeklies such as the Boston Phoenix, the San Francisco Bay Guardian and L.A. Weekly, it struggles to maintain a balance between muckraking and fluff that will please advertisers while not alienating readers.
The bimonthly Utne Reader has done well as an alternative press version of Reader's Digest. The liberal media watchdog group Fairness and Accuracy in Reporting (FAIR) and its magazine EXTRA! continue the work of Seldes and In fact by reporting on media bias and suppressed news. Z Magazine is a vibrant new voice on the Left and an example of how two people - Lydia Sargent and Michael Albert - can put out a solid publication on a shoestring budget. Other publications such as CovertAction Quarterly, Dollars and Sense, CounterPunch, Earth Island Journal, High Country News, Index on Censorship, Multinational Monitor, Southern Exposure, and the Texas Observer also further the tradition of the alternative press.
Looking back over a century of alternative journalism, one can see its resilience in the face of political and economic pressures. It has endured periodic government repression and sharp changes in the political and cultural climate. It has maintained a commitment to social change, even when it is not a popularly held sentiment. As long as there is a majority media that serves the interests of the powerful rather than the people, there will be a place for dissident voices. That place will be the alternative press.
Reuters – Guided missiles are launched during a drill of the Chinese East Sea Fleet.
- China has recently attempted to use military force to back up alleged historical claims to the South China Sea and East China Sea; however, upon closer examination, the claims do not hold up.
- China’s belligerent attempts to enforce its claims in the South and East China Seas endanger peace in Asia. China appears unlikely to accept any reasonable proposals that respect history and geography.
- Southeast Asian nations and other interested countries, like the United States and Australia, must maintain a military presence to deter Chinese aggression while attempting to negotiate a peaceful settlement with China.
Recently, China has used military aircraft and ships to threaten Japan in the East China Sea near the Senkaku Islands (which the Chinese call the Diaoyu Islands and the government in Taiwan calls the Diaoyutai). Similarly, in the South China Sea, Chinese ships have claimed areas very far from China but very close to such Southeast Asian countries as the Philippines, Malaysia, and Vietnam. China argues that these places belong to China, owing to long historical circumstances. But an examination of the evidence demonstrates that China has no historical claims to either the South China Sea or the East China Sea.
China makes its historical claims to the South and East China Seas in two key documents. “Historical Evidence to Support China’s Sovereignty over Nansha Islands,” issued by the Chinese Ministry of Foreign Affairs on November 17, 2000, makes China’s claims for the South China Sea.1 The Chinese government white paper entitled “Diaoyu Dao, an Inherent Territory of China,” issued in September 2012, makes the historical case for the East China Sea.2
The Chinese claim places in the South and East China Seas because Chinese historical books mention them. For example, during the Three Kingdoms period (the years 221-277), Yang Fu (楊阜) wrote about the South China Sea: “There are islets, sand cays, reefs and banks in the South China Sea, the water there is shallow and filled with magnetic rocks or stones (漲海崎頭. 水淺而多磁石).”3 Despite the assertions in part A of “Historical Evidence,” this passage simply describes a sea and does not make any claim for Chinese sovereignty.
These references in Chinese historical books have four additional difficulties. First, names in historical books are not necessarily the same as the place claimed today. Second, many places are described as the location of “barbarians” (for example, yi 夷 and fan 番), who by definition were not Chinese. Third, some of the mentions describe a “tributary” (附庸) relationship with China, but in these tributary relationships China and the tributary nation sent each other envoys (使臣).
Furthermore, these foreign and tributary nations most clearly were not under the rule of the Chinese emperors, nor were they part of the Chinese nation or empire.
Finally, the Chinese historical claims refer to the Mongol (1279-1367) and Manchu (1644-1911) empires when China was defeated and under foreign rule. China’s defeat becomes clear when reading the despair of Chinese scholars in those times, yet the rulers in China today distort China’s history by pretending that this rule was simply by Chinese “minority nationalities.” China today making a claim on the basis of the Mongol or Manchu empires is like India claiming Singapore because both were simultaneously colonies of the British Empire or Vietnam claiming Algeria because both were simultaneously colonies of the French Empire.
Let us now consider more specific claims with respect to the South and East China Seas.
The South China Sea
Figure 1 shows the conflicting claims over the South China Sea. China makes by far the largest claim to the South China Sea, a claim that runs along the Vietnamese coast and approaches the coasts of Indonesia, Malaysia, Brunei, and the Philippines. The Chinese claim, which extends about 1,600 kilometers (1,000 miles) to the south of China’s Hainan Island, is difficult to defend in geographic terms.
Figure 1. Conflicting Maritime Claims in the South China Sea
Source: US Central Intelligence Agency (available from http://en.wikipedia.org/wiki/File:Schina_sea_88.png).
Figure 2, an official Chinese map of Hainan Province, demonstrates that figure 1 does in fact accurately represent China’s claims to the South China Sea.
Figure 2. An Official Map of China’s Hainan Province
Source: Hainan provincial government (www.hainan.gov.cn/code/V3/en/images/map-of-hainan-large.jpg).
The Chinese document “Historical Evidence” begins to provide more evidence about the South China Sea as of the Ming Dynasty (1368-1644).4 Yet for centuries prior to the Ming Dynasty, ships of Arab and Southeast Asian merchants had filled the South China Sea and the Indian Ocean. China, too, was involved in this trade, though the trade was dominated by Arabs and Southeast Asians. In the words of Edward Dreyer, a leading Ming Dynasty historian, “Arabic . . . was the lingua franca of seafarers from South China to the African coast.”5
The importance of Arab traders is clear in a variety of ways. During the Tang Dynasty (618-906), a “largely Muslim foreign merchant community [lived] in Canton (Guangzhou). Canton was sacked in 879 by the Chinese rebel Huang Chao, and the most vivid account of the ensuing massacre is in Arabic rather than Chinese.”6
Before the Song Dynasty, non-Chinese dominated trade in the South China Sea and the Indian Ocean. In the words of Dreyer, “Despite the importance of China in this trade, Chinese ships and Chinese merchants and crews did not become important participants prior to the Song (960-1276). Well before then, voyages between China and India were made in large ships accompanied by tenders. The Chinese Buddhist pilgrim Faxian [法顯] travelled in 413 aboard a large merchant ship. . . . The largest ships of Faxian’s day were . . . very large . . . [b]ut they were Indonesian, not Chinese.”7
The Mongol Empire sent a Chinese man, Zhou Daguan (周達觀), as envoy to Angkor (modern Cambodia) in 1296-97. Zhou’s writing provides an important source of information about daily life in Angkor at this time, and two different English translations have now been published.8 Of course, Angkor was a foreign country outside of the Mongol Empire, and Zhou did not pretend otherwise.
Early in the Ming Dynasty, during the reign of the Yongle (永樂) Emperor (r. 1403-24) and his successors, the Ming court sent the famous commander, Zheng He (鄭和), on seven major expeditions to Southeast Asia, South Asia, and the east African coast between 1405 and 1433. Zheng He had huge fleets with many “treasure ships” (baochuan 寶船), which were probably the largest wooden ships ever constructed. But Zheng’s voyages were not voyages of exploration. In fact, Dreyer wrote, “Zheng He’s destinations were prosperous commercial ports located on regularly travelled trade routes and . . . his voyages used navigational techniques and details of the monsoon wind patterns that were known to Chinese navigators since the Song Dynasty (960-1276) and to Arab and Indonesian sailors for centuries before that.”9 Zheng’s voyages, like those of the Portuguese who came a few decades later, “were attracted by an already functioning trading system.”10 Like the later Portuguese, Zheng most likely used Arab navigators in the western half of the Indian Ocean.
Zheng’s voyages had the purpose of bringing various foreign countries into China’s tributary system. This proved successful as long as Zheng’s voyages continued, but the immense military force of Zheng’s fleets, with over 27,000 men (mostly soldiers), meant that potential force was always an element in these voyages and violence was used on three occasions.11
The biography of Zheng He in the official History of the Ming Dynasty (Mingshi 明史) demonstrates the importance of the “iron hand in the velvet glove”: “Then they went in succession to the various foreign countries. . . . Those who did not submit were pacified by force.”12 Zheng’s voyages did have some influence. The rise of Malacca (Melaka) as a trading port to some extent owes to support from Zheng.13 But, “After the third ruler of Malacca converted to Islam in 1436, Malacca attracted to its port an increasing amount of the Indian Ocean and South China Sea trade, much of which was carried on ships sent by Muslim merchants and crewed by Muslim sailors. . . . [After Zheng He] this pattern of trade, now largely in Muslim hands, persisted until the arrival of the Portuguese.”14
Owing to the great expense of Zheng He’s voyages, as well as the Ming Dynasty’s concern with the Mongols on its northern borders, China turned inward and northward: “The [Ming] prohibition against building oceangoing ships and conducting foreign trade remained in force, and Chinese private citizens who violated this prohibition went beyond the borders of the Ming empire and ceased to be objects of government solicitude.”15 With a northward-oriented foreign policy and the prohibition of building oceangoing ships and conducting foreign trade, Ming China withdrew from the oceans. As I will show, this policy also affected the East China Sea.
Before moving to the East China Sea, however, let us consider another argument used to prove that China owns the areas around the South China Sea. This argument emphasizes the discovery of Chinese ceramics and pottery shards. As noted earlier, the South China Sea was a trading hub filled with ships carrying various valuable cargoes, including Chinese ceramics and Southeast Asian spices. But most of the ships carrying this cargo were Southeast Asian or Arab. This failure to distinguish between a trade good and the ships carrying the good affected the analysis of at least one senior Chinese leader. In his speech to the Australian Parliament on October 24, 2003, Chinese President Hu Jintao said, “Back in the 1420s, the expeditionary fleets of China’s Ming Dynasty reached Australian shores.”16 President Hu was referring to Zheng He, but we know the itineraries of Zheng’s voyages, and we know that they did not include Australia.17 In fact, Australian aborigines had long carried on trade with Macassans, who came from Sulawesi in modern Indonesia, and such Chinese ceramics most likely came from this trade, which included trepang and northern Australian timbers.18 This trade between the northern Australian indigenous peoples and the Macassans resulted in several Macassan words becoming an integral part of north Australian indigenous languages,19 but it provides no evidence that Chinese ever visited Australian shores before the 19th century.
“Historical Evidence” does not address one more important historic claim: the so-called “Nine-Dash Line” in the South China Sea. The origins of this line date back to 1933, when the then Republic of China’s Land and Water Maps Inspection Committee was formed. Conventionally, the public appearance of the so-called Nine-Dash Line map (figure 3) is dated 1947, though some sources date its publication as early as December 1946,20 or as late as February 1948.21 After the establishment of the People’s Republic of China in 1949, Premier Zhou Enlai (周恩來) accepted the Nine-Dash Line as valid for the People’s Republic as well, though sources vary as to when this took place. Since then, the Nine-Dash Line has varied, with different official versions having 9, 10, and 11 dashes. Yet this cartographic claim adds nothing to the historical evidence about any “sovereignty” over the South China Sea.
Figure 3. Original Nine-Dash Line Map Issued by the Republic of China in the Late 1940s.
Source: 1947 Nanhai Zhudao (available at http://en.wikipedia.org/wiki/Nine-dash_line#mediaviewer/File:1947_Nanhai_Zhudao.png).
The East China Sea
Chinese historical claims to the East China Sea were clarified in the September 2012 white paper “Diaoyu Dao, an Inherent Territory of China.” The paper begins its historical argument by stating that the Diaoyu Islands 釣魚島 (or, to use their Japanese name, the Senkaku Islands 尖閣諸島) were mentioned in a Chinese book published in 1403, Voyage with a Tail Wind (Shunfeng xiangsong 順風相送).22 As noted earlier, specific identification of modern locations with places mentioned in Chinese historical books remains uncertain, and in any case, the naming of a foreign country or place does not in any way say that China made a claim to these places. It is noteworthy that the Ministry of Foreign Affairs of the Republic of China in Taiwan made a similar claim in September 2012, but that this claim had been deleted from the Ministry’s website in June 2013.
The white paper then goes on to mention that the Kingdom of the Ryukyu Islands began to pay tribute to the Ming in 1372.23 As noted earlier, a tributary relationship is not the same as a claim of ownership. Tribute nations were foreign states, and the Ming sent envoys to and received envoys from these foreign countries. Tributary relations gave the tribute nation substantial foreign trade privileges with China.
As shown in the discussion of the South China Sea, following the deaths of the Yongle Emperor and Zheng He, the Ming Dynasty focused inward and northward and forbade “building oceangoing ships and conducting foreign trade.”24 Han Chinese from Fujian did temporarily visit Taiwan, primarily southwestern Taiwan, to fish, to trade with the aborigines and, in the case of pirates, to hide. Yet Taiwan remained a foreign place,25 and no permanent Han Chinese settlements existed in Taiwan until the Dutch imported Chinese for labor after the establishment of their colonial regime in 1624. When the Spanish arrived in 1626, they found virtually no Han Chinese in northern Taiwan.26
Taiwan received little attention in Chinese documents until late in the Ming Dynasty. In the words of Laurence G. Thompson, one of the earliest Western scholars on Taiwan history: “The most striking fact about the historical knowledge of Formosa is the lack of it in Chinese records. It is truly astonishing that this very large island . . . should have remained virtually beyond the ken of Chinese writers until late Ming times (seventeenth century).”27 The Diaoyu(tai)/Senkaku Islands were much smaller than Taiwan, much farther from the Ming to Taiwan’s east, and uninhabited. Thus, when Ming documents ignored much larger and closer Taiwan, they almost certainly did not mention the much smaller and more distant Diaoyu(tai)/Senkaku Islands.
In fact, both the People’s Republic of China and the Republic of China on Taiwan stated that the Diaoyu(tai)/Senkaku Islands belonged to Japan until the possibility of hydrocarbons in the seas near the islands was mentioned in a 1968 United Nations Economic Commission for Asia and the Far East survey of coastal mineral resources. On January 8, 1953, the official newspaper of the Chinese Communist Party, the People’s Daily (Renmin ribao 人民日報), published a report stating that the Senkaku Islands belonged to Japan’s Ryukyu Archipelago.28 Figure 4 shows this article on the lower-left of page 4. Figure 5 shows the article itself. The article begins:
The Ryukyu Archipelago is distributed on the sea between the northeast of China’s Taiwan and the southwest of Japan’s Kyushu Island. It has seven groups of islands including the Senkaku Islands. . . . The Ryukyu Archipelago stretches one thousand kilometres. On its closest side (內側) [to us] is China’s East China Sea. On its furthest side (外側) are the high seas of the Pacific Ocean. (琉球群島散佈在我國台灣東北和日本九州島西南安之間的海面上,包括尖閣諸島…琉球群島綿亙達一千公里.它的內側是我國東海,外側就是太平洋公海.)29
This suggests that the Senkaku Islands are outside of China’s sovereignty, an interpretation that other pieces of evidence also support.
Figure 4. View of People’s Daily.
Source: Renmin ribao, January 8, 1953, 4.
Figure 5. People’s Daily Article Stating That Senkaku Islands Belong to Ryukyu Archipelago
Source: Renmin ribao, January 8, 1953, 4.
In 1958 China published a World Atlas (Shijie dituji 世界地图集) that demonstrates that the Senkaku Islands belonged to Japan.30 The map of Japan (figure 6) has a separate map of the Ryukyu Archipelago in the lower right-hand corner. On this map, the international boundary is to the east of Taiwan but to the west of the Senkakus, which are clearly labeled in Chinese characters as Uotsuri Island 魚釣島 and as the Senkaku Islands 尖閣群島.
Three other maps in this collection verify that the Senkaku Islands fall to the east of China’s proclaimed international boundary to Taiwan’s northeast. These maps are Asia Political 亚洲政区 (figure 7), China Topographical 中国地形 (figure 8), and China Political 中国政区 (figure 9). In figures 8 and 9, the international border is also shown to be west of the 123° longitude line while, as shown below, the Senkaku Islands are all to the east of that line. The government of Taiwan under Chiang Kai-shek 蔣介石 also repeatedly published official maps that showed the Diaoyu(tai)/Senkaku Islands as belonging to Japan until 1971.31
Figure 6. Map of Japan
Source: Shijie dituji 世界地图集 [World Atlas], 1958, 25-26.
Figure 7. Asia Political Map
Source: Shijie dituji 世界地图集 [World Atlas], 1958, 11-12.
Figure 8. China Topographical Map
Source: Shijie dituji 世界地图集 [World Atlas], 1958, 14-15.
Figure 9. China Political Map
Source: Shijie dituji 世界地图集 [World Atlas], 1958, 17-18.
Only after both the 1968 United Nations Economic Commission for Asia and the Far East survey of coastal mineral resources suggesting hydrocarbons in the area of the islands and the Diaoyutai movement in Hong Kong, the United States, and elsewhere did either the government of the People’s Republic or the government of Chiang Kai-shek evince any interest in the islands. Furthermore, all Chinese assertions of sovereignty based on the Treaty of Shimonoseki (1895) or the San Francisco Peace Treaty (1951) have no credibility since these treaties do not even mention the Diaoyu(tai)/Senkaku Islands.32 These islands did not belong to China and could not be returned.
Claims that the Diaoyu(tai)/Senkaku Islands have “always been affiliated to China’s Taiwan Island both in geographical terms and in accordance with China’s historical jurisdiction practice”33 also have no historical basis. The Republic of China government under Chiang Kai-shek accepted the surrender of the Japanese in Taiwan on October 25, 1945. The Taiwan Provincial Executive Commander’s Office 臺灣省行政長官公署 under Chen Yi 陳儀 published a major book with 540 tables and 1,384 pages translating 51 years of Japanese statistics about Taiwan into Chinese.34 Using statistics dated August 1946, this book suggests that the easternmost parts of “Taiwan Province” were Taiwan island (122°00′04″E), Pengjia Islet 彭佳嶼 (122°04′51″E), and Mianhua Islet 棉花嶼 (122°06′15″E).35 These are the only locations east of 122°E. Yet the westernmost of the Diaoyutai/Senkaku Islands is more than 1°24′45″ farther east, at 123°31′0″E. Thus, under Japanese colonial rule over Taiwan (1895-1945), the Diaoyu(tai)/Senkaku Islands were never administered as part of Taiwan. This situation is quite different from that of the South China Sea, where Japan did administer some islands through its colony in Taiwan.36
The Chinese government has also expressed anger over the so-called “nationalization” (Japanese: kokuyūka 国有化) of the Senkaku Islands, a subject mentioned in both the foreword and conclusion of the “Diaoyu Dao” white paper. The Chinese assert that the Japanese government gained sovereignty through this nationalizing process. In fact, this is a misunderstanding. As we have seen, the Japanese government exercised sovereignty over the Diaoyu(tai)/Senkaku Islands before the nationalization process and the process did not change sovereignty at all. Rather, by nationalizing, the Japanese government converted Japanese land from private ownership to land held by the national government. This happens frequently in many societies when, for example, a government converts private property into a national park.
At the recent international China Pacific Forum 2013 held in Beijing in October 2013, Chinese scholars continued to provide further “historical evidence” that the so-called Diaoyu Islands belong to China. One scholar showed a Ming Dynasty map that purported to show both the coast of Fujian Province and the Diaoyu Islands. The map, however, did not show Taiwan. Clearly the so-called Diaoyu Islands on this map were not the islands to the northeast of Taiwan.
Another scholar asserted that a Japanese military map stated that the Diaoyu Islands belong to China, but the Japanese writing on the map simply referred to “Taiwan and associated islands.” The evidence presented in this paper clearly shows that the Diaoyu(tai)/Senkaku Islands were not associated with Taiwan. Thus, Chinese scholars today continue to make historical claims for the Senkaku Islands, but poor history and leaps of logic underpin their “research.”
China’s belligerent attempts to enforce its claims in the South and East China Seas endanger peace in Asia. In dealing with the Chinese about these issues, the United States and countries with claims to these seas should make crystal clear that they do not accept China’s so-called historical claims. We must note that these claims have no historical basis and that the Chinese use these false claims in their efforts at territorial expansionism in the South and East China Seas.
Unfortunately, to date China has failed to indicate any willingness to take steps that might lead to genuine peace in disputes over the South and East China Seas. For example, in response to a recent Philippine initiative to go to an international tribunal, the Permanent Court of Arbitration, a commentary in the People’s Daily responded, “The act of the Philippine side is against the international law and the historical truth as well as against morality and basic rules of international relations [italics added].”37 Such a broad-based Chinese attack on the Philippine proposal, including the claim that the Philippines is acting immorally, suggests that China is not prepared to make any concession whatsoever and that it does not seek any genuine resolution of the dispute.
Similarly, the last paragraph in the Chinese “Diaoyu Dao” white paper also expresses a lack of willingness to make even the slightest concession:
China strongly urges Japan to respect history and international law and immediately stop all actions that undermine China’s territorial sovereignty. The Chinese government has the unshakable resolve and will to uphold the nation’s territorial sovereignty. It has the confidence and ability to safeguard China’s state sovereignty and territorial integrity.38
Yet, as we have seen, China’s claims in “history and international law” do not demonstrate that China has sovereignty in the Senkaku Islands.
While policymakers must continue to make efforts to reach a just peace in the South and East China Seas, the prospects of China accepting any reasonable proposals that respect history and geography seem remote. Japan, Vietnam, the Philippines, Malaysia, Indonesia, and Brunei and other interested nations such as the United States and Australia must also maintain a strong military capacity to deter Chinese aggression while attempting to negotiate a peaceful settlement with China.
J. Bruce Jacobs (Bruce.Jacobs@monash.edu) is Emeritus Professor of Asian Languages and Studies at Monash University in Melbourne, Australia. His most recent books are Local Politics in Rural Taiwan under Dictatorship and Democracy (EastBridge, 2008) and Democratizing Taiwan (Brill, 2012). The four-volume Critical Readings on China-Taiwan Relations, which he edited with an introduction, is being published by Brill in June 2014.
1. For the text of “Historical Evidence,” see www.fmprc.gov.cn/mfa_eng/topics_665678/3754_666060/t19231.shtml.
2. State Council Information Office, the People’s Republic of China, “Diaoyu Dao, an Inherent Territory of China,” September 2012; for English text, see www.gov.cn/english/official/2012-09/25/content_2232763.htm, and for Chinese text, see http://news.xinhuanet.com/2012-09/25/c_113202698.htm.
3. “Historical Evidence.” For more information about Yang Fu, see http://en.wikipedia.org/wiki/Yang_Fu_%28Han_Dynasty%29 and http://zh.wikipedia.org/wiki/%E6%A5%8A%E9%98%9C. In fact, Yang’s main contributions were during the Three Kingdoms period rather than the Eastern Han.
4. “Historical Evidence,” especially Parts B and C.
5. Edward L. Dreyer, Zheng He: China and the Oceans in the Early Ming Dynasty, 1405–1433 (New York: Pearson Longman, 2007), 7.
6. Ibid., 37.
7. Ibid., 37–38.
8. Chou Ta-kuan (Zhou Daguan), The Customs of Cambodia (Bangkok: Siam Society, 1987, 1992, 1993); and Zhou Daguan, A Record of Cambodia: The Land and Its People, trans. Peter Harris (Bangkok: Silkworm Books, 2007). The Chinese title of Zhou’s book is Zhenla fengtuji真臘風土記.
9. Dreyer, Zheng He, 182.
10. Ibid., 175.
11. Ibid., 28–29 and others.
12. Mingshi 304.2b-4b, as translated in Dreyer, Zheng He, 187–88. The Chinese text in simplified characters is: “以次遍历诸番国…不服则以武慑之.” For the original Chinese Mingshi biography of Zheng He, see www.guoxue.com/shibu/24shi/mingshi/ms_304.htm.
13. Dreyer, Zheng He, 46.
14. Ibid., 175.
16. For the text of Hu Jintao’s speech to the Australian parliament, see Australian Parliament House of Representatives, “Address by the President of the People’s Republic of China,” October 23, 2003, 166–71, www.aph.gov.au/binaries/library/pubs/monographs/kendall/appendone.pdf. Quote is from 166.
17. Dreyer, Zheng He.
18. On the trade between the northern Australian indigenous peoples and the Macassans, see “Macassan Traders,” Australia: The Land Where Time Began, September 30, 2011, http://austhrutime.com/macassan_traders.htm; Rupert Gerritsen, “When Did the Macassans Start Coming to Northern Australia?,” http://rupertgerritsen.tripod.com/pdf/published/Djulirri_Rock_Art.pdf; and Marshall Clark and Sally K. May (eds.), Macassan History and Heritage: Journeys, Encounters and Influences (Canberra: ANU E Press, 2013), introduction, http://epress.anu.edu.au/apps/bookworm/view/Macassan+History+and+Heritage/10541/ch01.xhtml#toc_marker-4.
19. Kate Humphris, “Macassan History in Arnhem Land,” 105.7 ABC Darwin, July 29, 2009, www.abc.net.au/local/stories/2009/07/21/2632428.htm.
20. Erik Franckx and Marco Benatar, “Dots and Lines in the South China Sea: Insights from the Law of Map Evidence,” Asian Journal of International Law 2 (2012): 90–91.
21. Zhiguo Gao and Bing Bing Jia, “The Nine-Dash Line in the South China Sea: History, Status, and Implications,” American Journal of International Law 107 (2013): 102–03.
22. “Diaoyu Dao, an Inherent Territory of China,” Section I.1. The Chinese text of Voyage with a Tail Wind can be found at http://zh.wikisource.org/wiki/%E4%B8%A4%E7%A7%8D%E6%B5%B7%E9%81%93%E9%92%88%E7%BB%8F.
23. “Diaoyu Dao,” Section I.1.
24. Dreyer, Zheng He, 175.
25. See the 1603 account by Chen Di陳第, “An Account of Eastern Barbarians” (Dongfan ji東番記), translated in Laurence G. Thompson, “The Earliest Chinese Eyewitness Accounts of the Formosan Aborigines,” Monumenta Serica, no. 23 (1964): 172–78.
26. Tonio Andrade, How Taiwan Became Chinese: Dutch, Spanish, and Han Colonialization in the Seventeenth Century (New York: Columbia University Press, 2008), 83. See also sources cited in J. Bruce Jacobs, “Review Essay: The History of Taiwan,” China Journal, no. 65 (January 2011): 196–97.
27. Laurence G. Thompson, “The Earliest Chinese Eyewitness Accounts of the Formosan Aborigines,” Monumenta Serica, no. 23 (1964): 163.
28. “Ziliao: Liuqiu qundao renmin fandui Meiguo zhanling de douzheng 資料: 琉球群島人民反對美國佔領的鬥爭” [Reference: The Struggle of the Ryukyu Archipelago People Against American Occupation], Renmin ribao 人民日報 [People’s Daily], January 8, 1953, 4.
30. Shijie dituji 世界地图集 [World Atlas] (Beijing and Shanghai: Ditu chubanshe, 1958).
31. Ko-hua Yap, Yu-wen Chen, and Ching-chi Huang, “The Diaoyutai Islands on Taiwan’s Official Maps: Pre- and Post-1971,” Asian Affairs: An American Review 39, no. 2 (2012): 90–105.
32. “Diaoyu Dao,” Section IV. For the text of the Treaty of Shimonoseki, see www.taiwandocuments.org/shimonoseki01.htm. For text of the Treaty of San Francisco, see www.taiwandocuments.org/sanfrancisco01.htm. The Treaty of Taipei (1952), the Treaty of Peace between the Republic of China government under Chiang Kai-shek and Japan, which the Ma Ying-jeou government in Taiwan often cites, also does not mention the islands. For the text of the Treaty of Taipei, see www.taiwandocuments.org/taipei01.htm.
33. “Diaoyu Dao,” Section IV.
34. Taiwan sheng wushiyi nian lai tongji tiyao 臺灣省五十一年來統計提要 [Statistical Abstract of Taiwan Province for the Past Fifty-One Years] (Taipei: Statistical Office of the Taiwan Provincial Administration Agency, 1946; reprint, Taipei: Guting Shuwu, 1969).
35. Ibid., 52.
36. Ibid., 51, 54.
37. “Commentary Gives China’s Reasons for Refusing Arbitration on South China Sea Issue,” Xinhua, April 1, 2014, http://english.people.com.cn/90883/8584641.html or http://news.xinhuanet.com/english/china/2014-04/01/c_133228152.htm.
38. “Diaoyu Dao.” | <urn:uuid:a7486924-b8d4-4201-918b-eb712617df55> | CC-MAIN-2017-17 | https://duockhoa74.com/2014/06/30/chinas-frail-historical-claims-to-the-south-china-and-east-china-seas/ | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118552.28/warc/CC-MAIN-20170423031158-00600-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.915944 | 7,688 | 3.015625 | 3 |
A Practical Path to Recruitment and Retention
Recruitment and retention challenges are once again leading to teacher shortages across the nation. Especially in urban and rural school districts, low salaries and poor working conditions often contribute to the difficulties of recruiting and keeping teachers, as can the challenges of the work itself. As a consequence, in many schools—especially those serving the most vulnerable populations—students often face a revolving door of teachers over the course of their school careers.1
Turnover is higher in districts that meet shortages by hiring teachers who have not completed adequate preparation, as novices without training leave after their first year at more than twice the rate of those who have had student teaching and rigorous preparation.2 Similarly, teachers who do not receive mentoring and support in their first years leave teaching at much higher rates than those whose school or district provides such support.3 Under these circumstances, everyone loses: Student achievement is undermined by high rates of teacher turnover and by teachers who are inadequately prepared for the challenges they face. Schools suffer from continual churn, undermining long-term improvement efforts. Districts pay the costs of both students’ underachievement and teachers’ high attrition.4
Newly emerging teacher residency programs seek to address these problems by offering an innovative approach to recruiting and retaining high-quality teachers. Residencies have typically been focused in hard-to-staff geographic areas (urban and rural) and subject areas (e.g., mathematics, science, special education, and bilingual/English as a second language teaching). They recruit the teachers that local districts know they will need before those candidates are trained, and then prepare them to excel and remain in these schools. When used in this deliberate manner, teacher residencies can address a crucial recruitment need while also building the capacity of districts to provide high-quality instruction to the students they serve.
The Design of Teacher Residency Programs
Building on the medical residency model, teacher residencies provide an alternative pathway to teacher certification grounded in deep clinical training. Residents apprentice alongside an expert teacher in a high-need classroom for a full academic year. They take closely linked coursework from a partnering university that leads to a credential and a master’s degree at the end of the residency year. They receive living stipends and tuition support as they learn to teach; in exchange, they commit to teach in the district for several years beyond the residency.
This model fosters tight partnerships between local school districts and teacher preparation programs. Residencies recruit teachers to meet district needs—usually in shortage fields. Then they rigorously prepare them and keep them in the district. While most teacher residencies began in urban districts, consortia of rural districts and charter school organizations have also created them.
Although many teacher preparation programs have evolved substantially, traditional university-based programs have often been critiqued for being academically and theoretically focused, with limited and disconnected opportunities for clinical experience. Conversely, alternative routes into teaching have been criticized for focusing on “learning by doing,” with limited theoretical grounding and little or no opportunity for supervised student teaching alongside expert teachers modeling good practice.5 These critiques, coupled with the challenge of hiring and keeping well-prepared teachers in hard-to-staff districts, have led to the “third space” from which teacher residencies have grown in the last 15 years.6
In part, the residency design emerged from the Master of Arts in Teaching programs started in the 1960s and 1970s—an earlier era of teacher shortages—as federally funded innovations at elite colleges and universities. Columbia, Harvard, Stanford, and the University of Chicago, among others, launched yearlong postgraduate programs that typically placed candidates in schools for a full year of student-teaching internships in the classrooms of expert veteran teachers, while the candidates also took coursework from the university. In those days, the federal government provided aid to offset many of the costs of these teacher preparation programs. Even though federal aid has dwindled considerably, many of these programs continue today. This design created the foundation for the residency model, which adds a closer connection to the hiring district and provides additional financial incentives and mentoring supports for teacher candidates.
Several characteristics set teacher residency programs apart from most traditional teacher preparation and alternative certification programs. First, residencies are typically developed as a partnership between a school district and a local institution of higher education, with the goal of fulfilling the partner district’s hiring needs. A second characteristic of residencies is a longer clinical placement than is found in most traditional or alternative programs, generally at least a full school year, with residents working under the guidance of an experienced, expert mentor—before becoming the teacher of record. Third, high-quality residencies offer teacher candidates a curriculum that is tightly integrated with their clinical practice, which creates a more powerful learning experience.
Although each teacher residency program is unique, a few of the key common characteristics shared by high-quality residencies are described below:
District-university partnerships. In contrast to traditional teacher preparation programs, which often do not recruit and place candidates in specific districts to fulfill the districts’ particular needs, residents are recruited to work for the partner district (or charter management organization) and fulfill its hiring needs (e.g., filling shortage subject areas and/or teaching in specific schools). Residents commit to teaching in the local school district after the program ends. High-quality residency programs are codesigned by the district and the university to ensure that residents get to know the students and families in the communities in which they will be teaching and are rigorously prepared to teach in those communities and schools.
Candidate recruitment and selection. Districts and preparation programs partner in the recruitment and selection of the residents to ensure that residents meet local hiring needs. In addition, the programs aim to broaden and diversify the local teacher workforce by selecting high-quality candidates through a competitive screening process. Residencies recruit candidates from a wide variety of backgrounds, both recent college graduates and midcareer professionals, and are highly selective.
Clinical experience. For at least one academic year, candidates spend four to five days a week in a classroom under the wing of an experienced and trained mentor teacher, and gradually take on more responsibilities over the course of the year.7 Most residents receive at least 900 hours of pre-service clinical preparation, while the norm for most traditional programs is in the range of 400–600 hours. Most alternative certification programs offer little or no student teaching.8
Coursework. Coursework in residencies is closely integrated with clinical experiences. Sometimes, courses are designed and taught by experienced teachers in the district.9 Often, the university faculty members who teach courses are involved in local schools and are themselves former teachers. Many courses are cotaught by school and university faculty. Candidates take graduate-level coursework that leads to both state certification/licensure and a master’s degree from the partner university.
One study found that residents across 30 teacher residency programs took an average of 450 hours of coursework, roughly equivalent to 10 college courses; residents in these programs reported that the coursework was well integrated with their clinical experiences, a key goal of residencies.10
Additionally, many programs require frequent feedback and performance-based assessments of candidates’ classroom practice.
Mentor recruitment and selection. Residencies not only allow districts to attract and train high-quality teacher candidates, but also provide career advancement opportunities for experienced teachers within those districts to serve as mentors, supervisors, and instructors in the programs. As it is for candidates, the selection process for mentors typically is rigorous because they must be both experienced and accomplished. A study of 30 teacher residency programs found that mentors in these programs had, on average, 10 years of prior teaching experience.11 Some programs offer teacher mentors financial benefits, such as $2,000 or $3,000 stipends and/or money targeted for professional development, but there are nonfinancial rewards to mentoring as well, notably the benefit to mentors of improving their own practice. As a mathematics and science mentor from one program explained:
The mentorship experience reinspired me. I became a more reflective educator by working closely with someone daily, and my students benefited by having two teachers in the classroom. Mentoring also made me think back to everything that I had stopped doing and reminded me how to be a better teacher.12
Cohorts placed in teaching schools. Another key feature of many residencies is the placement of candidates into cohorts; participants of a program may be clustered in university courses as well as school sites, to create a stronger support network and to foster collaboration among new and experienced teachers.13
In these kinds of teaching schools, often called professional development schools (PDSs) or partner schools, faculty members from the school and university work together to develop curriculum, improve instruction, and undertake school reforms, making the entire school a site for learning and feedback for adults and students alike.14 Many such schools actively encourage resident teachers to participate in all aspects of school functioning, ranging from special education and support services for students, to parent meetings, home visits, and community outreach, to faculty discussions and projects aimed at ongoing improvement in students’ opportunities to learn.
Studies of highly developed PDSs have found that new teachers who graduate from such programs feel better prepared to teach and are rated by employers, supervisors, and researchers as stronger than other new teachers. Veteran teachers working in such schools describe changes in their own practice as a result of the professional development, action research, and mentoring parts of the PDS. Studies have documented gains in student performance tied to curriculum and teaching interventions resulting from PDS initiatives.15
Early career mentoring. Programs also provide early career mentoring and support for one to three years after a candidate becomes the teacher of record. This type of intentional mentoring in high-quality residency programs can be very important both for developing teachers’ competence and for reducing attrition. Studies show that having planned time to collaborate with a mentor in the same subject area is a key element of successful induction that supports beginning teacher retention.16
Financial support and incentives. Unlike most traditional or alternative preparation programs, residency programs are organized and funded to offer financial incentives to attract and retain high-quality candidates with diverse backgrounds and experiences. These incentives include living stipends, student loan forgiveness, and/or tuition remittance in exchange for residents’ commitment to teaching in the district for a specified period of time, typically three to five years. One cross-site study cites residency program contributions for candidates’ training and master’s degrees to be anywhere from $0 to $36,000 in the programs reviewed.17 Other kinds of resident funding and support, such as stipends and tuition reimbursements, also vary. Often, living stipends are lower when tuition reimbursements are higher.
Impact of Residencies
With recent federal and philanthropic support, there are now at least 50 teacher residency programs nationwide, which range in size from five to 100 residents per year. A small but growing body of research has been conducted on the impact of residencies on teacher recruitment, teacher retention, and student achievement. Most studies have been in-depth case studies of the earliest programs; to date, only one comprehensive study (of the Teacher Quality Partnership grant) examines characteristics and impact across several programs nationally.
The findings from these studies regarding the impact of teacher residencies on teacher recruitment and retention are promising, although more research is needed, especially with respect to teacher impacts on students. Research suggests that well-designed and well-implemented teacher residency models can create long-term benefits for districts, for schools, and, ultimately and most importantly, for the students they serve. Key benefits include greater diversity in teacher recruitment, higher teacher retention, and improved student outcomes.
Recruitment. Many residency programs have specific goals around recruitment, such as diversifying the teacher workforce by attracting more candidates of color or bringing in midcareer professionals. Research suggests that residencies bring greater gender and racial diversity into the teaching workforce. Across teacher residency programs nationally, 45 percent of residents in 2015–2016 were people of color. This proportion is more than double the national average of teachers of color entering the field, which is 19 percent.18
In addition to attracting a more diverse workforce, residencies aim to staff high-need schools and subject areas. Nationally, 45 percent of residency graduates in 2015–2016 taught in a high-need subject area, including mathematics, science, technology fields, bilingual education, and special education.19
Retention. National studies indicate that around 20–30 percent of new teachers leave the profession within the first five years, and that attrition is even higher (often reaching 50 percent or more) in high-poverty schools and in high-need subject areas.20 Studies of teacher residency programs consistently point to the high retention rates of their graduates, even after several years in the profession: generally 80–90 percent were still teaching in the same district after three years, and 70–80 percent after five years.21
In two of the most rigorous studies to date, researchers found statistically significant differences in retention rates between residency graduates and nonresidency peers, controlling for the residents’ characteristics and those of the settings in which they taught. Higher retention rates may be attributable to the combination of program quality, residents’ commitment to teach for a specific period of time in return for financial support, and induction support during the first one to three years of teaching.22
Student outcomes. Because most residency programs are still in their infancy, only a few studies have examined program impact on student achievement. These initial studies have found that the students of teachers who participated in a residency program outperform students of non-residency-prepared teachers on select state assessments.23
The teacher residency model holds much promise to address the issues of recruitment and retention in high-need districts and subject areas. This model also has the potential to support systemic change and the building of the teaching profession, especially in the most challenging districts.
Initial research is promising as to the impact residencies can have on increasing the diversity of the teaching force, improving retention of new teachers, and promoting gains in student learning. This research also suggests that the success of residencies requires attention to each of the defining characteristics of the model and the integrity of their implementation. Important factors include: (1) careful recruitment and selection of residents and mentor teachers within the context of a strong partnership between a district and university, (2) a tightly integrated curriculum based on a yearlong clinical placement in classrooms and schools that model strong practice, (3) adequate financial assistance, and (4) mentoring supports as candidates take on classrooms and move into their second and third years of teaching.
Residencies support the development of the profession by acknowledging that the complexity of teaching requires rigorous preparation in line with the high levels of skill and knowledge needed in the profession. Residencies also build professional capacity by providing professional learning and leadership opportunities for accomplished teachers in the field, as they support the growth and development of new teachers. These elements of strengthening the teaching profession can create long-term benefits for districts, schools, and, most importantly, the students they serve.
Roneeta Guha and Maria E. Hyler are senior researchers at the Learning Policy Institute, where Linda Darling-Hammond is the president and CEO. This article is excerpted with permission from their 2016 report The Teacher Residency: An Innovative Model for Preparing Teachers.
1. Anne Podolsky, Tara Kini, Joseph Bishop, and Linda Darling-Hammond, Solving the Teacher Shortage: How to Attract and Retain Excellent Educators (Palo Alto, CA: Learning Policy Institute, 2016).
2. Matthew Ronfeldt, Susanna Loeb, and James Wyckoff, “How Teacher Turnover Harms Student Achievement,” American Educational Research Journal 50 (2013): 4–36; Linda Darling-Hammond, The Flat World and Education: How America’s Commitment to Equity Will Determine Our Future (New York: Teachers College Press, 2010); and Linda Darling-Hammond, “Keeping Good Teachers: Why It Matters, What Leaders Can Do,” Educational Leadership 60, no. 8 (May 2003): 6–13.
3. Richard M. Ingersoll and Michael Strong, “The Impact of Induction and Mentoring Programs for Beginning Teachers: A Critical Review of the Research,” Review of Educational Research 81 (2011): 201–233.
4. Ronfeldt, Loeb, and Wyckoff, “How Teacher Turnover Harms.”
5. Davida Gatlin, “A Pluralistic Approach to the Revitalization of Teacher Education,” Journal of Teacher Education 60 (2009): 469–477.
6. Emily J. Klein, Monica Taylor, Cynthia Onore, Kathryn Strom, and Linda Abrams, “Finding a Third Space in Teacher Education: Creating an Urban Teacher Residency,” Teaching Education 24 (2013): 27–57; and Ken Zeichner, “Rethinking the Connections between Campus Courses and Field Experiences in College- and University-Based Teacher Education,” Journal of Teacher Education 61 (2010): 89–99.
7. Tim Silva, Allison McKie, Virginia Knechtel, Philip Gleason, and Libby Makowsky, Teaching Residency Programs: A Multisite Look at a New Model to Prepare Teachers for High-Need Schools (Washington, DC: National Center for Education Evaluation and Regional Assistance, 2014).
8. Teacher Quality Partnership grantees are required to provide a full school year of pre-service clinical preparation to teacher candidates (equaling at least 30 weeks or 900 hours). The American Association of Colleges for Teacher Education recommends that states require a minimum of one semester or 450 hours (15 weeks at 30 hours per week) of clinical preparation, if not the full year. See American Association of Colleges for Teacher Education, “Where We Stand: Clinical Preparation of Teachers” (Washington, DC: AACTE, 2012); and Silva et al., Teaching Residency Programs.
9. Barnett Berry, Diana Montgomery, Rachel Curtis, Mindy Hernandez, Judy Wurtzel, and Jon Snyder, Creating and Sustaining Urban Teacher Residencies: A New Way to Recruit, Prepare, and Retain Effective Teachers in High-Needs Districts (Washington, DC: Aspen Institute, 2008).
10. Silva et al., Teaching Residency Programs.
11. Silva et al., Teaching Residency Programs.
12. Quoted in Daniel Dockterman, “2010–15 IMPACT Surveys: Cohorts I–IV, Final Findings,” Xpress Working Papers, no. 8, in “The Power of Urban Teacher Residencies: The Impact of IMPACT,” ed. Karen Hunter Quartz and Jarod Kawasaki, XChange: Publications and Resources for Public School Professionals (UCLA Graduate School of Education & Information Studies), Fall 2014, 10, https://centerx.gseis.ucla.edu/xchange/power-of-urban-teacher-residencie....
13. Berry et al., Creating and Sustaining; and John P. Papay, Martin R. West, Jon B. Fullerton, and Thomas J. Kane, “Does an Urban Teacher Residency Increase Student Achievement? Early Evidence From Boston,” Educational Evaluation and Policy Analysis 34 (2012): 413–434.
14. Ismat Abdal-Haqq, Professional Development Schools: Weighing the Evidence (Thousand Oaks, CA: Corwin, 1998); Roberta Trachtman, “The NCATE Professional Development School Study: A Survey of 28 PDS Sites,” in Designing Standards That Work for Professional Development Schools, ed. Marsha Levine (Washington, DC: National Council for Accreditation of Teacher Education, 1998), 81–110; and Linda Darling-Hammond, “Teaching as a Profession: Lessons in Teacher Preparation and Professional Development,” Phi Delta Kappan 87, no. 3 (November 2005): 237–240.
15. Linda Darling-Hammond and John Bransford, eds., Preparing Teachers for a Changing World: What Teachers Should Learn and Be Able to Do (San Francisco: Jossey-Bass, 2005).
16. Ingersoll and Strong, “Impact of Induction.”
17. Urban Teacher Residency United, “Financially Sustainable Teacher Residencies” (Chicago: Urban Teacher Residency United, 2012).
18. Nineteen percent of new hires (first-time teachers) are teachers of color (nonwhite). Twenty percent of total hires are teachers of color—this includes brand-new, returning, and reentry teachers. Eighteen percent of the total teacher workforce are teachers of color (nonwhite). Data from authors’ analysis of National Center for Education Statistics, 2011–12 Schools and Staffing Survey (SASS) Restricted-Use Data Files.
19. National Center for Teacher Residencies, 2015 Network Impact Overview (Chicago: National Center for Teacher Residencies, 2016), 5.
20. Linda Darling-Hammond and Gary Sykes, “Wanted: A National Teacher Supply Policy for Education; The Right Way to Meet the ‘Highly Qualified Teacher’ Challenge,” Educational Policy Analysis Archives 11, no. 33 (2003): 1–55; and Richard M. Ingersoll, Is There Really a Teacher Shortage? (Seattle: Center for the Study of Teaching and Policy, 2003).
21. For more on these findings, see table 1 in Roneeta Guha, Maria E. Hyler, and Linda Darling-Hammond, The Teacher Residency: An Innovative Model for Preparing Teachers (Palo Alto, CA: Learning Policy Institute, 2016), 14.
22. Tim Silva, Allison McKie, and Philip Gleason, New Findings on the Retention of Novice Teachers from Teaching Residency Programs (Washington, DC: Institute of Education Sciences, 2015).
23. See Papay et al., “Does an Urban Teacher Residency Increase”; and “Tennessee Teacher Preparation Report Card 2014 State Profile,” in Tennessee Higher Education Commission, 2014 Report Card on the Effectiveness of Teacher Training Programs, accessed December 22, 2016, www.tn.gov/assets/entities/thec/attachments/reportcard2014A_Tennessee_St....
[illustrations By Enrique Moreiro] | <urn:uuid:e91fea70-3289-49d1-b85c-27a9088f7419> | CC-MAIN-2017-17 | http://www.aft.org/ae/spring2017/guha_hyler_and_darling-hammond | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122174.32/warc/CC-MAIN-20170423031202-00602-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.94581 | 4,672 | 2.796875 | 3 |
Grammar Schools Act 1840
The text of the Grammar Schools Act 1840 was prepared for the web by Derek Gillard and uploaded on 16 February 2013.
© Crown copyright material is reproduced with the permission of the Controller of HMSO and the Queen's Printer for Scotland.
An Act for improving the Condition and extending the Benefits of Grammar Schools.
[7th August 1840.]
'Whereas there are in England and Wales many endowed schools, both of royal and private foundation, for the education of Boys or Youth wholly or principally in grammar; and the term "grammar" has been construed by Courts of Equity as having reference only to the dead Languages, that is to say, Greek and Latin: And whereas such Education, at the Period when such Schools or the greater Part of them were founded, was supposed not only to be sufficient to qualify Boys or Youth for Admission to the Universities, with a view to the learned Professions, but also necessary for preparing them for the superior Trades and Mercantile Business: And whereas from the Change of Times and other Causes such Education, without Instruction in other Branches of Literature and Science, is now of less Value to those who are entitled to avail themselves of such charitable Foundations, whereby such Schools have, in many Instances, ceased to afford a substantial Fulfilment of the Intentions of the Founders; and the System of Education in such Grammar Schools ought therefore to be extended and rendered more generally beneficial, in order to afford such Fulfilment; but the Patrons, Visitors, and Governors thereof are generally unable of their own Authority to establish any other System of Education than is expressly provided for by the Foundation, and Her Majesty's Courts of Law and Equity are frequently unable to give adequate relief, and in no Case but at considerable Expence: And whereas in consequence of Changes which have taken place in the Population of particular Districts it is necessary, for the Purpose aforesaid, that in some Cases the Advantages of such Grammar Schools should be extended to Boys other than those to whom by the Terms of the Foundation or the other existing Statutes the same is now limited, and that in other Cases some Restriction should be imposed, either with reference to the total Number to be admitted to the School, or as regards their Proficiency at the Time when they may
demand admission; but in this respect also the said Patrons, Visitors and Governors, and the Courts of Equity are frequently without sufficient Authority to make such Extension or Restriction: And whereas it is expedient that in certain Cases Grammar Schools in the same Place should be united: And whereas no Remedy can be applied in the Premises without the Aid of Parliament:'
Be it therefore declared and enacted by the Queen's most Excellent Majesty, by and with the Advice and Consent of the Lords Spiritual and Temporal and Commons, in this present Parliament assembled, and by the Authority of the same,
Courts of Equity empowered, whenever a Question comes before them, to make Decrees or Orders extending the System of Education and the Right of Admission into any School, and to establish Schemes for the Application of its Revenues, having due Regard to the Intentions of the Founder.
That whenever, after the passing of this Act, any Question may come under Consideration in any of Her Majesty's Courts of Equity concerning the System of Education thereafter to be established in any Grammar School, or the Right of Admission into the same, whether such Question be already pending, or whether the same shall arise upon any Information, Petition, or other Proceedings which may now or at any Time hereafter be filed or instituted, for whatever Cause
the same may have been or may be instituted, according to the ordinary course of proceedings in Courts of Equity, or under the Provisions of this Act, it shall be lawful for the Court to make such Decrees or Orders as to the said Court shall seem expedient, as well for extending the System of Education to other useful Branches of Literature and Science in addition to or (subject to the Provisions hereinafter contained) in lieu of the Greek and Latin Languages, or such other Instruction as may be required by the Terms of the Foundation or the then existing Statutes, as also for extending or restricting the Freedom or the Right of Admission to such School, by determining the Number or the Qualifications of Boys who may thereafter be admissible thereto, as free Scholars or otherwise, and for settling the Terms of Admission to and Continuance in the same, and to establish such Schemes for the Application of the Revenues of any such Schools as may in the Opinion of the Court be conducive to the rendering or maintaining such Schools in the greatest Degree efficient and useful, with due Regard to the Intentions of the respective Founders and Benefactors, and to declare at what Period and upon what Event such Decrees or Orders, or any Directions contained therein, shall be brought into operation, and that such Decrees and Orders shall have Force and Effect notwithstanding any Provisions contained in the Instruments of Foundation, Endowment, or Benefaction, or in the then existing Statutes: Provided always that in case there shall be any special Visitor appointed by the Founder, or other competent Authority, Opportunity shall be given to such Visitor to be heard on the Matters in question, in such Manner as the Court shall think proper, previously to the making of such Decrees or Orders.
Before making such Decrees the Courts shall consider the Intentions of the Founders, the State of the School, &c.
II. Provided always, and be it enacted, That in making any such Decree or Order the Court shall consider and have regard to the Intentions of the Founders and Benefactors of every such Grammar School, the Nature and Extent of the Foundation and Endowment, the Rights of Parties interested therein, the Statutes by which the same has been hitherto governed, the Character of the Instruction theretofore afforded therein, and the existing State and Condition of the said School, and also the Condition, Rank, and Number of the Children entitled to and capable of enjoying the Privilege of the said School, and of those who may become so capable if any extended or different System of Education, or any Extension of the Right of Admission to the said School, or any new Statutes, shall be established.
Court not to dispense with the principal Objects, or the Qualifications required, unless, &c.
III. Provided also, and be it enacted, That, unless it shall be found necessary from the Insufficiency of the Revenues of any Grammar School, nothing in this Act contained shall be construed as authorizing the Court to dispense with the teaching of Latin and Greek, or either of such Languages, now required to be taught, or to treat such Instruction otherwise than as the Principal Object of the Foundation; nor to dispense with any Statute or Provision now existing, so far as relates to the Qualification of any Schoolmaster or Under Master.
Standard of Admission not to be lowered where Greek and Latin is retained.
IV. Provided also, and be it enacted, That in extending, as herein-before provided, the System of Education or the Right of Admission into any Grammar School in which the teaching of Greek or Latin shall be still retained, the Court shall not allow of
the Admission of Children of an earlier Age or of less Proficiency than may be required by the Foundation or existing Statutes, or may be necessary to show that the Children are of Capacity to profit by the Kind of Education designed by the Founder.
Where the teaching of Greek and Latin is dispensed with, analogous Instruction to be substituted, &c.
V. Provided also, and be it enacted, That whenever, on account of the Insufficiency of the Revenues of any Grammar School, the Court shall think fit to dispense with the teaching of Greek or Latin, the Court shall prescribe such a Course of Instruction, and shall require such Qualifications in the Children at the Period of their Admission, as will tend to maintain the Character of the School as nearly as, with reference to the Amount of the Revenues, it may be analogous to that which was contemplated by the Founder; and that whenever, on the like Account, the Court shall think fit to dispense with any Statute or Provision as far as relates to the Qualification of any Schoolmaster or Under Master, the Court shall substitute such Qualification as will provide for every Object implied in the original Qualification, which may be capable of being retained notwithstanding such Insufficiency of the Revenues.
Qualifications of new Schoolmasters and Right of Appointment regulated.
VI. Provided also, and be it enacted, That in case the Appointment of any additional Schoolmaster or Under Master shall be found necessary for the Purpose of carrying the Objects of this Act into execution, the Court shall require the same Qualification in such new Schoolmaster or Under Master respectively as may be required by the existing Statutes in the present Schoolmaster or Under Master, except such as may be wholly referable to their Capability of giving Instruction in any particular Branch of Education; but that every other Qualification implied in the Qualification of the original Schoolmaster or Under Master, and capable of being retained, shall be retained and required in such new Schoolmaster or Under Master; and the Court shall also in such Case declare in whom the Appointment of such new Schoolmaster or Under Master shall be vested, so as to preserve as far as may be the existing Rights of all Parties with regard to Patronage.
Schools to be Grammar Schools, though Greek and Latin dispensed with, and Masters subject to the Ordinary.
VII. Provided also, and be it enacted, That although under the Provisions herein-before contained the teaching of Greek or Latin in any Grammar School may be dispensed with, every such School, and the Masters thereof, shall be still considered as Grammar Schools and Grammar Schoolmasters, and shall continue subject to the Jurisdiction of the Ordinary as heretofore; and that no Person shall be authorized to exercise the Office of Schoolmaster or Under Master therein without having such Licence, or without having made such Oath, Declaration, or Subscription as may be required by Law of the Schoolmasters or Under Masters respectively of other Grammar Schools.
Extension of Right of Admission not to prejudice existing Rights.
VIII. Provided also, and be it enacted, That whenever the Court shall think fit to extend the Freedom of or the Right of Admission into any Grammar School, such Extension shall be so qualified by the Court that none of the Boys who are by the Foundation or existing Statutes entitled to such Privilege shall be excluded, by the Admission of other Boys into the said School, either from such School itself or from Competition for any Exhibition or other Advantage connected therewith.
Where several Schools are in one Place, and the Revenues of any are insufficient, they may be united.
IX. And be it enacted, That in case there shall be in any City, Town, or Place any Grammar School or Grammar Schools, the Revenues of which shall of themselves be insufficient to admit of the Purposes of their Founder or Founders being effected, but which Revenues if joined to the Revenues of any Grammar School or Grammar Schools in the same City, Town, or Place would afford the Means of effecting the Purposes of the Founders of such several Schools, it shall be lawful for the Court of Chancery to direct such Schools to be united, and the Revenues of the Schools so united to be applied to the Support of One School to be formed by such Union, and which shall be carried on according to a Scheme to be settled for that Purpose under the Direction of the said Court:
Consents necessary to Union.
Provided always, that before Application shall be made to the Court to direct such Union the Consent of the Visitor, Patron, and Governors of every School to be affected thereby shall be first obtained.
Present Schoolmasters not to be affected, but to be at liberty to resign on receiving Pensions.
X. Provided always, and be it enacted, That no new Statutes affecting the Duties or Emoluments of any Schoolmaster or Under Master shall be brought into operation as regards any such Master who shall have been appointed previously to the passing of this Act without his Consent in Writing; but that in case any such Schoolmaster or Under Master as last aforesaid shall be unwilling to give such Consent as aforesaid, and shall be desirous or willing to resign his Office on receiving a retiring Pension, it shall be lawful for the Governors, if there be any competent to act, or if there be no such Governors, for the Visitor, to assign to such Master such Pension as to them or him (as the Case may be) shall seem reasonable from the Time of his Resignation, which Pension, if approved as herein-after mentioned, the Trustees of the said School are hereby authorized and required to pay to him, or his Order, according to the Terms of such Assignment.
How new Appointment of Master to be made.
XI. And be it enacted, That any Schoolmaster appointed in any Grammar School after the passing of this Act shall receive his Appointment subject to such new Statutes as may be made and confirmed by the Court of Chancery, in pursuance of any Proceedings which may be commenced under this Act, within Six Months after such Vacancy shall have occurred.
Lapse of Right of Nomination of Master shall take place from Time of settling the new Statutes.
XII. Provided always, and be it enacted, That the Term on the Expiration of which any Right of Nomination or Appointment of the Master in any Grammar School would otherwise lapse shall, on the first Avoidance of the Office which shall occur after the passing of this Act, be computed from the Time of the Confirmation of the new Statutes by which the School is to be in future governed, or if no Proceedings are pending for the Purpose of having Statutes established, from the Expiration of the Time within which such Proceedings may be instituted, and not from the Time of the Avoidance.
Where sufficient Powers of Discipline exist, the Persons possessing them to be at liberty to exercise them.
XIII. 'And whereas it is expedient that the Discipline of Grammar Schools should be more fully enforced;' be it declared and enacted, That in all Cases in which sufficient Powers, to be exercised by way of Visitation or otherwise in respect of the Discipline of such Schools, shall already exist and be vested in any Person or Persons, it shall be lawful for such Person or Persons to exercise the same when and so often as they shall deem fit, either
by themselves personally or by Commission, without being first requested or required to do so, and likewise to direct such Returns to be made by the Masters of such Schools, of the State thereof, of the Books used therein, and of such other Particulars as he or they think proper, and also to order such Examinations to be held into the Proficiency of the Scholars attending the same as to him or them may seem expedient.
Where such Powers not sufficient, Court may enlarge them.
XIV. And be it enacted, That in all Cases in which any Person or Persons, having Authority, by way of Visitation or otherwise, in respect of the Discipline of any Grammar School, may not have sufficient Power properly to enforce the same, it shall be lawful for the Court of Chancery to order and direct that the Powers of such Person or Persons shall be enlarged to such Extent and in such Manner, and subject to such Provisions, as to the said Court shall seem fit.
Where no such Powers, Court may create them.
XV. And be it enacted, That in all Cases in which no Authority to be exercised by way of Visitation in respect of the Discipline of any Grammar School is now vested in any known Person or Persons, it shall be lawful for the Bishop of the Diocese wherein the same is locally situated to apply to the Court of Chancery, stating the same; and the said Court shall have Power if it so think fit to order that the said Bishop shall be at liberty to visit and regulate the said School in respect of the Discipline thereof, but not further or otherwise.
Court of Chancery may substitute a Person to act pro hâc vice in certain Cases.
XVI. And be it enacted, That in the event of the Person or Persons by whom the Powers of Visitation in respect of the Discipline of any Grammar School ought to be exercised refusing or neglecting so to do within a reasonable Time after the same ought to be exercised, or in the event of its being uncertain in whom the Right to exercise such Powers is vested, such Powers shall be exercised pro hâc vice by some Person specially appointed by the Authority of the Court of Chancery, on Application made by any Person or Persons interested in such Grammar School:
Provided always, that nothing herein contained shall exempt any Visitor from being compelled by any Process to which he is now amenable to perform any Act which he is now compellable to perform.
Court of Chancery to have Power to appoint Mode of removing Masters.
XVII. 'And whereas it is expedient to provide for the more easy Removal of unfit and improper Masters;' be it declared and enacted, That it shall be lawful for the Court of Chancery to empower the Person or Persons having Powers of Visitation in respect of the Discipline of any Grammar School, or who shall be specially appointed to exercise the same under this Act, and the Governors, or either of them, after such Inquiries and by such Mode of Proceeding as the Court shall direct, to remove any Master of any Grammar School who has been negligent in the Discharge of his Duties, or who is unfit or incompetent to discharge them properly and efficiently, either from immoral Conduct, Incapacity, Age, or from any other Infirmity or Cause whatsoever.
Power in certain Cases to assign retiring Pension.
XVIII. Provided always, and be it enacted, That in case the Cause for which any Master be removed shall be Incompetency from Age or other Infirmity, it shall be lawful for the said Governors, with the Approbation of the Visitor, to assign to the Use of such Master any Portion of the annual Revenues of the said Grammar School in One or more Donations, or by way of Annuity determinable on the Death of such Master, or on any other specified Event during his Life, or to assign to him any Part of the Estate of the said Grammar School for his Occupation for a Term determinable in like Manner: provided that there shall remain sufficient Means to provide for the efficient Performance of the Duties which belong to the Office from which such Master shall be removed.
Premises held over by Masters dismissed, or ceasing to hold Office, to be recovered in a summary Way.
XIX. And for the more speedy and effectual Recovery of the Possession of any Premises belonging to any Grammar School which the Master who shall have been dismissed as aforesaid, or any Person who shall have ceased to be Master, shall hold over after his Dismissal or ceasing to be Master, except under such Assignment as may have been made under the Provisions of this Act, the Term of such Assignment being still unexpired, and the Premises assigned being in the actual Occupation of the Master so dismissed or ceased to be Master, be it enacted, That when and as often as any Master holding any Schoolroom, Schoolhouse, or any other House, Land, or Tenement, by virtue of his Office, or as Tenant or otherwise under the Trustees of the said Grammar School, except on Lease for a Term of Years still unexpired, shall have been dismissed as aforesaid, or shall have ceased to be Master, and such Master, or (if he shall not actually occupy the Premises or shall only occupy a Part thereof) any Person by whom the same or any Part thereof shall be then actually occupied, shall neglect or refuse to quit and deliver up Possession of the Premises, or of such Part thereof respectively, except such as are herein-before excepted, within the Space of Three Months after such Dismissal or ceasing to be Master, it shall be lawful for Justices of the Peace acting for the District or Division in which such Premises or any Part thereof are situated, in Petty Sessions assembled, or any Two of them, and they are hereby required, on the Complaint of the said Trustees or their Agents, and on the Production of an Order of the Court of Chancery declaring such Master to have been duly dismissed or to have ceased to be Master, to issue a Warrant, under their Hands and Seals, to the Constables and Peace Officers of the said District or Division, commanding them, within a Period to be therein named, not less than Ten nor more than Twenty-one clear Days from the Date of such Warrant, to enter into the 
Premises, and give Possession of the same to the said Trustees or their Agents, in such Manner as any Justices of the Peace are empowered to give Possession of any Premises to any Landlord or his Agent under an Act passed in the Session of Parliament held in the First and Second Years of the Reign of Her present Majesty,
1 & 2 Vict. c.74.
intituled An Act to facilitate the Recovery of Possession of Tenements after the Determination of the Tenancy.
Master shall not set up Title, &c.
XX. Provided always, and be it enacted, That nothing in this Act or the said recited Act shall extend or be construed to extend to enable any Master so dismissed, or ceasing to be Master as aforesaid, to call in question the Validity of such Dismissal, provided that the same shall have proceeded from the Persons authorized to order the same, after such Inquiries and by such Mode of Proceeding as required in that Behalf, or to call in question the Title of the Trustees to Possession of any Premises of which such
Master shall have become possessed by virtue of his late Office, or as Tenant or otherwise under the Trustees of the said Grammar School for the Time being.
Applications to Court to be by Petition.
XXI. 'And whereas it is expedient to facilitate Applications to the Court of Chancery under this Act;' be it enacted, That all Applications may be heard and determined and all Powers given by this Act to the Court of Chancery may be exercised in Cases brought before such Court by Petition only,
Such Petitions to be decided under 52 G.3. c.101.
such Petitions to be presented, heard, and determined according to the Provisions of an Act passed in the Fifty-second Year of the Reign of His late Majesty King George the Third, intituled An Act to provide a summary Remedy in Cases of Abuses of Trusts created for charitable Purposes.
If Crown is Patron, Lord High Chancellor or Chancellor of Duchy of Lancaster shall act.
XXII. And be it enacted, That in every Case in which the Patronage of any Grammar School, or Right of appointing the Schoolmaster or Under Master thereof, is vested in the Crown, the Lord High Chancellor, or the Chancellor of the Duchy of Lancaster in respect of any Grammar School within the County Palatine of Lancaster, shall be considered as the Patron of such Grammar School for the Purposes of this Act.
Exercise of Powers of Lord Chancellor.
XXIII. And be it enacted, That the Powers and Authorities herein-before given to the Lord High Chancellor shall and may be exercised in like Manner by and are hereby given to the Lord Keeper or Lords Commissioners for the Custody of the Great Seal respectively for the Time being.
Saving of Rights of Ordinary.
XXIV. Provided always, and be it enacted, That neither this Act nor any thing therein contained shall be any way prejudicial or hurtful to the Jurisdiction or Power of the Ordinary, but that he may lawfully execute and perform the same as heretofore he might according to the Statutes, Common Law, and Canons of this Realm, and also as far as he may be further empowered by this Act;
Certain Foundations exempted from this Act.
and that this Act shall not be construed as extending to any of the following Institutions; (that is to say,) to the Universities of Oxford or Cambridge, or to any College or Hall within the same, or to the University of London, or any Colleges connected therewith, or to the University of Durham, or to the Colleges of Saint David's or Saint Bee's, or the Grammar Schools of Westminster, Eton, Winchester, Harrow, Charter House, Rugby, Merchant Taylors, Saint Paul's, Christ's Hospital, Birmingham, Manchester, or Macclesfield, or Louth, or such Schools as form Part of any Cathedral or Collegiate Church.
Construction of Terms.
XXV. And be it enacted, That in the Construction and for the Purposes of this Act, unless there be something in the Subject or Context repugnant to such Construction, the Word "Grammar School" shall mean and include all endowed Schools, whether of Royal or other Foundation, founded, endowed, or maintained for the Purpose of teaching Latin and Greek, or either of such Languages, whether in the Instrument of Foundation or Endowment, or in the Statutes or Decree of any Court of Record, or in any Act of Parliament establishing such School, or in any other Evidences or Documents, such Instruction shall be expressly described, or shall be described by the Word "Grammar," or any other Form of Expression which is or may be construed as intending Greek or Latin, and whether by such Evidences or Documents as aforesaid,
or in Practice, such Instruction be limited exclusively to Greek or Latin, or extended to both such Languages, or to any other Branch or Branches of Literature or Science in addition to them or either of them; and that the Words "Grammar School" shall not include Schools not endowed, but shall mean and include all endowed Schools which may be Grammar Schools by Reputation, and all other charitable Institutions and Trusts, so far as the same may be for the Purpose of providing such Instruction as aforesaid; that the Word "Visitor" shall mean and include any Person or Persons in whom shall be vested solely or jointly the Whole or such Portion of the visitatorial Power as regards the Subject of the Enactment or Provision, or any Powers in regard to the Discipline or making of new Statutes in any School; that the Word "Governors" shall mean and include all Persons or Corporations, whether Sole or Aggregate, by whatever Name they may be styled, who may respectively have the Government, Management, or Conduct of any Grammar School, whether they have also any Control over the Revenues of the School as Trustees or not; that the Word "Trustees" shall mean and include all Persons or Corporations, Sole or Aggregate, by whatever Name they may be styled, who shall have the Management, Disposal, and Control over the Revenues of any Grammar School, whether the Property be actually vested in them or not; that the Word "Statutes" shall mean and include all written Rules or Regulations by which the School, Schoolmasters, or Scholars are, shall, or ought to be governed, whether such Rules or Regulations are comprised in, incorporated with, or authorized by any Royal or other Charter, or other Instrument of Foundation, Endowment, or Benefaction, or declared or confirmed by Act of Parliament, or by Decree of any Court of Record, and also all Rules and Regulations which shall be unwritten, and established only by Usage or Reputation; that the Word "Schoolmaster" shall mean and include
the Head Master only, and the Word "Under Master" every Master, Usher, or Assistant in any School except the Head Master; and that the Word "Master" shall mean and include as well any Head Master as Under Master; that the Words "Discipline" or "Management" of a School shall mean and include all Matters respecting the Conduct of the Masters or Scholars, the Method and Times of Teaching, the Examination into the Proficiency of the Scholars of any School, and the ordering of Returns or Reports with reference to such Particulars, or any of them; and that any Word importing the Singular Number only shall mean and include several Persons or Things as well as one Person or Thing, and the converse.
Act may be amended, &c.
And be it enacted, That this Act may be amended or repealed by any Act to be passed in this present Session of Parliament.
Session 24: Grand Strategies in the Realm of Governance and Implementation (Part 1)
In initial discussions on Islamic political philosophy, I stated that, like any political system, the Islamic government has two basic axes: (1) law and legislation, and (2) management and implementation of law. Previous discussions were essentially about the first axis, dealing with the importance of law, characteristics of ideal law, legislation in Islam and its conditions, while addressing the skepticism regarding the above.
The present topic is management and implementation of law. To understand it clearly, we should bear in mind that the more transparent the goal and objective of an institution or organization, the easier it is to understand its structure, working conditions and the qualities required in the people elected as its members. Therefore, to discuss the executive branch of Islamic government, i.e. its managerial aspect, we must be familiar with the reason for establishing the government, including the goal of its management.
Notwithstanding the trend which considers government unnecessary, the majority of political philosophers regard the existence of government in society as necessary. That is, they believe that in society there should be a body which must issue orders, oblige people, implement ordinances acceptable to society and apprehend and punish violators. This premise is accepted by almost all thinkers and its need realized by every society. In Islam this premise is also affirmed, and in the words of the Commander of the Faithful (‘a) recorded in Nahj al-Balaghah: even if a society does not have an upright and meritorious government, a tyrannical government is still better than the absence of any government.1 It is because in the absence of government or the executive, there will be chaos, the rights of individuals violated and the interests of society trampled upon. So, according to Islam, one of the most important social obligations of people is the establishment of an upright government so as to guarantee the interests of society.
We all know that executive power is for implementation of law, and thus, its objective is implementation of laws, but the nature and structure of the law which the state is trying to implement must be seen. The objectives of law are nothing but two: material and spiritual. In general, all those who are involved in debates on political philosophy acknowledge the fact that the state must secure material interests of people, but there is a difference of opinion about guaranteeing spiritual interests of people; whether they should be reflected in law, the government implement such a law and guarantee its implementation.
Since long, many schools of philosophy have believed that the government must also guarantee spiritual values and the law guaranteed by the government must take human virtues into account. Even in non-religious schools of philosophy some ancient Greek philosophers like Plato regarded paving the ground for the flourishing of human virtues as the duty of government. He asserted that the government must be run by men of wisdom and those who are the best in terms of moral virtues. The saying “The men of wisdom must rule” is attributed to him. So, non-Muslim and non-religious philosophers—those who are not followers of the religions with heavenly origin—have also laid stress on spiritual issues and moral virtues. Even the philosophers with no religious beliefs have emphasized the observance of moral virtues in society and the creation of an atmosphere for the moral growth of people.
After the spread of Christianity in Europe, the Roman Emperor Constantine’s conversion to Christianity and his propagation of it in Europe, and adoption of Christianity as the official religion of civilized countries in Europe, religion was attached to government and the goal of government was to secure religious objectives. That is, the statesmen also used to implement what they had accepted as Christianity. Since the Renaissance, the Westerners experienced an intellectual revolution and endeavored to separate moral issues from the realm of government concerns.
After the Renaissance many developments took place in Europe which became the origin of the new Western civilization, and their hallmark is the separation of religion from the realm of social concerns. It was during that time that philosophers discussed about politics, wrote books, founded schools of thought, and consigned moral virtues and spirituality to oblivion.
Among these philosophers was Thomas Hobbes, the English philosopher, who believed that the only function of government was to prevent anarchy. According to him, like wolves, human beings by nature would be at each other’s throats and destroy one another. Accordingly, a body was needed to curb the wolf’s instinct in them and prevent their aggression against one another. Following him, John Locke, who was the founder of Western liberal thought and whose ideas are still discussed and more or less accepted in all political and academic circles in the world, presented maintenance of security as the purpose of government.
According to him, what human beings need in life is a controlling agent called "government", in the absence of which social order will not come into being, anarchy will prevail, security will be lost, and the life and property of people will be endangered. He says, "We want government to fill this vacuum; other matters have nothing to do with government."
Of course, the separation of religion from government and social affairs does not mean that none of these theoreticians gave importance to moral virtues and spiritual values. In fact, they said that individuals would have to pursue these matters themselves because they had nothing to do with government. Those who believe in God have to go themselves to the temple, church or anywhere they wish and engage in worshipping God. Similarly, moral virtues such as honesty, good conduct, respecting others, attending to the poor, and others are valuable, but considered personal matters. Individuals themselves have to strive to acquire these pleasant moral virtues, for government has nothing to do with them.
So, the objective of social law, i.e. what government must implement, is only maintenance of security in society so as to protect the life and property of people. Likewise, executive power has no function except maintenance of security and protection of people’s lives and properties. In the words of Locke, apart from protection of life and property, protection of personal freedom is also considered part of security. Regarding moral and spiritual interests, the maximum thing he said was that social law must be such that it does not conflict with morality nor hinder the worship of God.
With respect to preservation of moral values, however, social law and government would not assume the responsibility of preserving religious values and creating an atmosphere for spiritual and religious growth. Nowadays, this statement of Locke is the gospel and constitution of most schools of philosophy. Their principal motto is that the only duty of government is preservation of security and freedom, and it has no responsibility towards religious and moral affairs. This is the fundamental difference between Western thinkers in the world today and Islam.
The view of the prophets (‘a), especially the Great Prophet of Islam (s), is that apart from securing material needs and interests, securing spiritual interests is also part of the duty of a government. In fact, securing spiritual interests takes precedence and is more important than securing material interests. The government must implement the law whose ultimate objective is to secure the spiritual, religious, moral and human interests—the same things regarded by religion as its ultimate purpose, because the perfection of man depends on them. It considers the purpose of the creation of man, endowed with free will, to be to know and pursue this lofty objective.
The axis of these matters is nearness to God which is, thanks to God, well entrenched in Islamic culture today. In fact, it has gained currency among Muslims and even those who do not correctly know its meaning are familiar with its expression. Common people who do not know how to read and write, daily use the expression “qurbatan ilallah” [for the sake of nearness to Allah].
Law that is implemented in society must be geared towards the realization of the ultimate goal and purpose behind the creation of man which is nearness to God. The social life of man should progress in this direction and other issues and animal dimensions are valuable provided they are a prelude to his progress, spiritual perfection and proximity to God.
The goal of the state can also be identified, as a matter of course, once it is proved that the purpose behind the codification of social laws is to secure both spiritual and material interests. The state must consider it part of its duty to protect the life and property of citizens, to pave the ground for the spiritual growth of human beings, and to combat anything that works against the realization of this objective. Protecting life and property is in reality a preliminary and not the main goal; that is to say, it is a means to achieve a loftier goal. Hence, laws to be recognized officially in Islamic society should be totally concordant with religious foundations and geared toward the spiritual and religious growth of human beings. For them not to be inimical to religion is not enough; they must be attuned to the goals of religion. The Islamic state must also combat religious disbelief and hostility to religion, and work to realize religious objectives.
In a religious society, it is possible that certain material needs may not be provided for temporarily because of the expediency of attending to some spiritual affairs. If the ordinances of Islam are implemented, in the long run the material interests of people will also be better secured than in any other system. However, if providing for all material interests would undermine religion within a limited period, one should provide only for those material interests that will not undermine religion, because spiritual interests take precedence. In Western countries, however, what we have said carries no weight: they are only concerned with material objectives, and the state is not responsible for spiritual interests.
Sometimes, people protest that in the West spiritual and religious interests are also attended to. Westerners also offer sacrifices and pay attention to social problems. Of course, this contention is correct and we acknowledge that not all Westerners are individualistic. Prevalence of liberal thought does not mean that all people in the West are influenced by it. What we mean is that liberalism dominates Western societies and because of social necessities they are sometimes compelled to act contrary to the dictates of their philosophy.
That is, because of some exigencies even those who are individualistic and liberal have social considerations, and in order to prevent an uprising and revolt by the majority of people, they have to consider the deprived. In practice, in many countries ruled by socialists and social democrats, a great portion of the taxes levied are spent on social services. Their materialist philosophy does not make such a demand but in order to maintain security, they are compelled to provide these facilities.
The point is that liberalism demands one thing and the actions of its proponents exhibit something else. In fact, this criticism is leveled at them: liberalism and individualism do not expect them to take these things into account, so why do they provide social security and facilities in favor of the deprived? The reply to this question is that these facilities are meant to safeguard the capital of the capitalists and prevent communist uprisings and Marxist revolutions. Before Marxist thought was put into practice in Marxist countries, it was prevalent in Western countries. Karl Marx, a German scholar who lived in the U.K., initially promoted his ideas and books there. Studying his works, English statesmen realized the perils Marx posed to them and parried them in anticipation.
The Labor Party and socialist tendencies that came into being in Britain and the programs in favor of the deprived implemented there were all meant to counter Marxist tendencies, because it was predicted that the advancement of capitalism would urge the majority of people to stage an uprising. In order to preempt that they attended to the poor and silenced them.
This attitude was beyond the dictates of their capitalist school but it aimed at protecting the interests of the capitalists. In any case, liberalism asserts that the state does not have any responsibility in relation to spiritual affairs.
Possibly, they would complain to us, saying: “In principle, in the Western countries the state levies taxes on people for the church. Why do you accuse them of being heedless of religion and spirituality?” This is the reply: this is also not dictated by liberal thought. In fact, their purpose is to win the hearts of the religious and make use of the power of the church.
Our concern here is their philosophy and their frame of mind. If ever they engage in some religious activities, it is meant to protect their own interests. In a bid to win elections, they strive to win the hearts and votes of the religious. Sometimes, during presidential elections in the United States, presidential candidates are seen going to church to draw the attention of the people. It does not mean that they are proponents of religion in the affairs of government.
According to Islam, protection of spiritual interests which can be realized under the auspices of religion is among the essential and primary objectives of government. This is the key point of difference between Islam and other schools of philosophy dominant in the world today, and we cannot follow the West with respect to the mode of governance and duties of government because of this fundamental and basic difference with them. Once the objective is forgotten, the structure, conditions, duties, and prerogatives will change accordingly.
In reality, the reason behind the ambiguity and deviation in ideas and thoughts of individuals—even those who are not spiteful—and the ambiguities and deviations they express in their newspapers and books is that they have not paid attention to the objective of law and government from the Islamic viewpoint and the difference between Islam and other schools. They have accepted the essence of Islam. They also really believe in God, say their prayers and observe fasting. They do not deny and reject religion either. Practically, however, they totally follow the West in sociopolitical issues. They no longer enquire whether a certain method is consistent with Islamic thought or not. They say, “Today, the world is administered in this way and we cannot go against the dominant current in the world. Today, the world’s civilization is Western civilization and the dominant culture is the liberal culture. We cannot go against this trend!”
We, however, must first understand what Islam theoretically says, and whether it accepts whatever is practiced in the West or not. Secondly, in practice we have to see whether we can implement the commandments of Islam or not. Assuming that we cannot implement them in practice, at least we have to know that Islam does not accept the liberal approach and attitude. So, we should not attempt to present a non-Islamic approach as Islamic. During the time of the taghut, we likewise could not put the Islamic methods into practice, but we knew that that government was not Islamic and that some of its policies were anti-Islamic. Thus, the absence of the ground for implementing the commandments of Islam does not make us say that Islam has changed.
Even today, in some cases, we may not be able to implement Islam, yet we are not supposed to say that Islam is exactly what we are doing. We have to understand Islam as it really is, and if we cannot practice an aspect of it, we have to beseech the forgiveness of God for our failure to do so; and if ever we have any shortcomings, God forbid, then we have to apologize to the Muslim nation for our shortcomings in implementing Islam. So, we should not make any change in Islam, and we should bear in mind that Islam is the same religion which was propagated by the Prophet of Islam (s) 1,400 years ago.
Therefore, the objective of the Islamic government is definitely the realization of Islamic and divine values in society and under its auspices the realization of material interests, and not the opposite. We also need to know the structure of the Islamic government and the qualities of those who should take charge of government.
No doubt, the principal duty of executive power in any political system is the implementation of law, and this point is acknowledged by everybody. The Islamic state guarantees the implementation of Islamic laws and the realization of the objectives of those laws. Now, the question is: In any political system—whether Eastern, Marxist, Western liberal, or any other existing system—what qualities and features should the institution that wants to implement laws have? In reply, it must be stated that law-enforcers in any political system should possess at least two qualities:
1. Knowledge of law: How can the person who wants to guarantee the implementation of a law implement it if he does not know and understand it? Knowledge of law is the first condition and quality that the state must possess if it wants to guarantee the implementation of laws for if it has no correct knowledge of the laws’ dimensions and angles, it will probably commit mistakes in implementation. As such, the ideal option is that the person who heads the government must be the most knowledgeable in law so as to commit the fewest possible mistakes in implementation.
2. Ability to implement law: The institution that wants to guarantee the implementation of law must possess sufficient power and capability to implement it. If it wants to rule over a nation of 60 million people, nay a nation of one billion people like China, and implement laws and ordinances for them, it must possess sufficient power and capability to implement them. This point is so important that nowadays in many schools of philosophy, “government” has been treated as synonymous with “power” and one of the key concepts in political philosophy is the concept of “power”. In any case, we should bear in mind that the government must have power.
Since time immemorial, along with developments in human society, there existed different concepts of power. In simple and primitive governments—like the tribal governments which existed thousands of years ago in approximately all parts of the world—power basically focused on physical power which existed in the tribal chief or ruler. In those societies, the person who was physically the strongest was recognized as ruler; for, if there were any violator, the ruler used his physical power to punish him. Thus, in those days, power was only physical.
When social conditions became complex and there was further social growth and advancement, the physical power of a person was transformed into the power of an institution. That is, even if the ruler was not physically strong, he could have people at his disposal that had considerable physical strength. He could have a strong army and military force composed of strong men. With the advancement of knowledge, power went beyond the physical realm and was transformed into scientific and technological power. That is, the ruler was supposed to possess instruments that could successfully perform physical tasks.
With progress and development in societies and the advancement of various industries and technologies, including the daily qualitative and quantitative advancement of military equipment, the state had no option but to acquire sufficient physical, industrial and technological power and to equip the military with it, so as to be able to suppress any uprising, prevent violations, and stop people from seizing property and endangering lives.
The power or force we have so far mentioned is confined to bodily or physical power, which was considered important in both primitive and advanced forms of government and which is still utilized. We can also observe that states strengthen their military and defense structures and stockpile military arms and equipment to make use of them in times of need. It must be noted, however, that the power and capability of a government is not confined to this. In fact, in progressive societies the power and authority of a state largely emanates from social influence and popular acceptability.
Not all demands and programs can be imposed on society by means of violence or brute force. Essentially, the people voluntarily and willingly accept and implement laws. So, the person who is entrusted with implementing laws and is at the helm of affairs must be accepted by people, as in the long run, the mere use of physical force and power will not do anything.
Thus, the executive official must also possess social authority and acceptability. As such, in order to prevent any problem in the domain of management and pursue social interests, the distinctive qualities of executive officials must be determined so that they can guarantee the objectives of the government and law. That is, they really qualify to run the government and guarantee implementation of law. This is discussed in various forms in political philosophy and is usually known as social legitimacy and popular acceptability.
It means that the government must have a rational basis and adopt the correct way of implementing law, and people must consider it legally credible. In addition to the fact that the executive official must enjoy physical power to be able to prevent violations, the people must believe in his credibility and regard him deserving to rule. Thus, we have three types of authority. The first two types have been recognized in all societies. Of course, there are differences in forms of implementation in different schools and forms of government. Yet, what is most important for us is the third form of authority.
- 1. “The fact is that there is no escape for men from rulers, good or bad. The faithful persons perform (good) acts in his rule while the unfaithful enjoy (worldly) benefits in it.” Nahj al-Balaghah, Sermon 40. | <urn:uuid:b76383bc-8fcf-4003-aef6-5121414c14cd> | CC-MAIN-2017-17 | https://www.al-islam.org/islamic-political-theory-ayatullah-misbah-yazdi-vol2/session-24-grand-strategies-realm-governance | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122996.52/warc/CC-MAIN-20170423031202-00191-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.970605 | 4,317 | 2.5625 | 3 |
Cars, clocks, and can openers, along with many other devices, use gears in their mechanisms to transmit power through rotation. Gears are a type of circular mechanical device with teeth that mesh to transmit rotation across axes, and they are a very valuable mechanism to know about as their applications range far and wide. In this Instructable I'll go over some basic gear concepts and interesting mechanisms, and hopefully you'll be able to design your own gear systems and make stuff like this!
Step 1: What Are Gears?
A gear is a wheel with teeth around its circumference. Gears are usually found in sets of two or more, used to transmit rotation from the axis of one gear to the axis of another. The teeth of a gear on one axis mesh with the teeth of a gear on another, thus creating a relationship between the rotation of the two axes. When one axis is spun, the other will spin too. Two gears of different sizes will make their two axes spin at different speeds, which you'll learn about, along with different types of gears and places they're used.
Step 2: Why Use Gears?
Gears are a very useful type of transmission mechanism used to transmit rotation from one axis to another. As I mentioned previously, you can use gears to change the output speed of a shaft. Say you have a motor that spins at 100 rotations per minute, and you only want it to spin at 50 rotations per minute. You can use a system of gears to reduce the speed (and likewise increase the torque) so that the output shaft spins at half the speed of the motor. Gears are commonly used in high-load situations because their teeth allow for finer, more discrete control over the movement of a shaft, which is one advantage gears have over most pulley systems. Gears can be used to transmit rotation from one axis to another, and special types of gears can allow for the transfer of motion between non-parallel axes.
Step 3: Parts of a Gear
There are a few different terms that you'll need to know if you're just getting started with gears, as listed below. In order for gears to mesh, the diametral pitch and the pressure angle need to be the same.
Axis: The axis of revolution of the gear, where the shaft passes through
Teeth: The jagged faces projecting outward from the circumference of the gear, used to transmit rotation to other gears. The number of teeth on a gear must be an integer. Gears will only transmit rotation if their teeth mesh and have the same profile.
Pitch Circle: The circle that defines the "size" of the gear. The pitch circles of two meshing gears need to be tangent for them to mesh. If the two gears were instead two discs that drove each other by friction, the perimeter of those discs would be the pitch circle.
Pitch Diameter: The pitch diameter refers to the working diameter of the gear, i.e., the diameter of the pitch circle. You can use the pitch diameter to calculate how far apart two gears should be: the sum of the two pitch diameters divided by 2 is equal to the distance between the two axes.
Diametral Pitch: The ratio of the number of teeth to the pitch diameter. Two gears must have the same diametral pitch to mesh.
Circular Pitch: The distance from a point on one tooth to the same point on the adjacent tooth, measured along the pitch circle. (so that the length is the length of the arc rather than a line).
Module: The module of a gear is simply the circular pitch divided by pi. This value is much easier to handle than the circular pitch, because it is a rational number.
Pressure Angle: The pressure angle of a gear is the angle between the line defining the radius of the pitch circle to the point where the pitch circle intersects a tooth, and the tangent line to that tooth at that point. Standard pressure angles are 14.5, 20, and 25 degrees. The pressure angle affects how the gears contact each other, and thus how the force is distributed along the tooth. Two gears must have the same pressure angle to mesh.
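Several of the relationships above are simple formulas. Here is a short Python sketch (the function names are my own) that computes pitch diameter, center distance, and module:

```python
import math

def pitch_diameter(num_teeth, diametral_pitch):
    """Pitch diameter = number of teeth / diametral pitch."""
    return num_teeth / diametral_pitch

def center_distance(d1, d2):
    """Two meshing gears' axes sit half the sum of their pitch diameters apart."""
    return (d1 + d2) / 2

def module(circular_pitch):
    """Module = circular pitch / pi."""
    return circular_pitch / math.pi

# Example: a 20-tooth and a 40-tooth gear, diametral pitch 10 (teeth per inch)
d_small = pitch_diameter(20, 10)  # 2.0 in
d_large = pitch_diameter(40, 10)  # 4.0 in
print(center_distance(d_small, d_large))  # 3.0 in between the two axes
```

If two gears in a CAD model don't mesh cleanly, checking the axle spacing against the center-distance formula is usually a good first debugging step.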
Step 4: Calculating Gear Ratios
As I mentioned previously, gears can be used to decrease or increase the speed or torque of a drive shaft. In order to drive an output shaft at a desired speed, you need to use a gear system with a specific gear ratio to output that speed.
The gear ratio of a system is the ratio between the rotational speed of the input shaft to the rotational speed of the output shaft. There are a number of ways to calculate this in a two gear system. The first is via the number of teeth (N) on each gear. To calculate the gear ratio (R), the equation is as follows:
R = N2⁄N1
Where N2 refers to the number of teeth on the gear linked to the output shaft, and N1 refers to the same on the input shaft. The left gear in the first image above has 16 teeth, and the right gear has 32 teeth. If the left gear is on the input shaft, then the ratio is 32:16, which can be simplified to 2:1. This means that for every 2 rotations of the left gear, the right gear rotates once.
The gear ratio can also be calculated with the pitch diameter (or even the radius) with basically the same equation:
R = D2⁄D1
Where D2 is the pitch diameter of the output gear, and D1 is the pitch diameter of the input gear.
The gear ratio can also be used to determine the output torque of the system. Torque is defined as the tendency of an object to rotate about its axis; basically, the turning power of a shaft. A shaft with more torque can turn larger things. The gear ratio R is also equal to the ratio between the torque of the output shaft and that of the input shaft. In the example above, although the 32 tooth gear spins more slowly, it outputs twice the turning power as the input shaft.
In a larger system with multiple gears and shafts, the overall ratio is still the ratio of the speed of the input shaft to the speed of the output shaft; there are just more shafts in between. To calculate it, find the gear ratio of each meshing pair, then multiply them all together: the overall ratio of a compound train is the product of its stage ratios. An example is provided below.

Say you had a gear train consisting of three sets of gears: one set coming off a motor with a 2:1 ratio, a second set stemming off the output shaft of the first with a 3:2 ratio, and a third set driving the output of the system with another 2:1 ratio. Multiplying the stage ratios gives 2:1 × 3:2 × 2:1 = 6:1, so the motor shaft makes six rotations for every one rotation of the output shaft, and the output torque is six times the input torque.
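The bookkeeping for a compound train reduces to multiplying stage ratios. A minimal Python sketch (the names and the torque/speed units are my own):

```python
def stage_ratio(teeth_out, teeth_in):
    """Ratio of one meshing pair: driven-gear teeth over driving-gear teeth."""
    return teeth_out / teeth_in

def train_ratio(stages):
    """Overall ratio of a compound train: the product of its stage ratios."""
    ratio = 1.0
    for teeth_out, teeth_in in stages:
        ratio *= stage_ratio(teeth_out, teeth_in)
    return ratio

# The example above: a 2:1 stage, then 3:2, then 2:1
r = train_ratio([(2, 1), (3, 2), (2, 1)])
print(r)        # 6.0 -> a 6:1 overall reduction
print(100 / r)  # a 100 RPM input turns the output at about 16.7 RPM
print(10 * r)   # ...while multiplying a 10 N*m input torque to 60 N*m
```

Note the tradeoff the last two lines show: whatever factor you lose in speed, you gain in torque (ignoring friction losses).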
Step 5: Types of Gears
There are a handful of different types of gears and gear mechanisms, and this Instructable definitely doesn't cover all of them. I hope that this guide will give you a sense for how you can use gears to improve your mechanical design techniques. In the next few steps I'll be starting with some of the simplest types of gears and gear mechanisms and going into some of the more complicated, interesting ones as well. If you're really interested in learning more, I would suggest you check out this book, 507 Mechanical Movements, as it comes with a lot of really neat mechanisms!
Step 6: Spur Gears
Spur gears are the most common and simplest type of gear. Spur gears are used to transfer motion from one shaft to a parallel shaft. The teeth are cut straight up and down, parallel to the axis of rotation. When two adjacent spur gears mesh, they spin in opposite directions. These gears are most commonly used because they can be easily cut on a 3 axis machine like a laser cutter, waterjet, or router. Other types of gears require more precise and more complicated machining procedures.
Step 7: Gearboxes
Before I go any further, I first want to introduce the gearbox. Gearboxes take the rotation of an input shaft, usually the axle of a motor, and through a series of gears alter the speed and power coming from the input shaft to turn an output shaft at a desired speed or torque. Gearboxes are usually classified in terms of their overall speed ratio, the ratio of the speed of the input shaft to the speed of the output shaft.
Step 8: Bevel Gears
Bevel gears are a type of gear used to transmit power from one axis to another non-parallel axis. Bevel gears have slanted teeth, which actually makes the shape of their "pitch diameter" a cone. This is why most bevel gears are classified based on the distance from the rear face of the gear to the imaginary tip of the cone that the gear would form if its teeth extended out. In order for two bevel gears to mesh, the tips of each imaginary cone should meet at the same vertex. When two bevel gears are the same size and turn shafts at 90 degree angles, they are called mitre gears.
Step 9: Rack and Pinion
The rack and pinion converts the rotational motion of a gear (the pinion) to the linear motion of a rack. The pinion is just like any other spur gear, and it meshes with the rack, which is a rail with teeth. The rack slides continuously as the gear rotates.
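Since the pinion is a normal spur gear, the rack advances one pitch-circle circumference per pinion revolution. A quick sketch under that assumption (function names are my own):

```python
import math

def rack_travel(pinion_pitch_diameter, revolutions):
    """Linear distance the rack moves for a given number of pinion turns."""
    return math.pi * pinion_pitch_diameter * revolutions

# A pinion with a 2 in pitch diameter turning three full revolutions:
print(rack_travel(2.0, 3))  # about 18.85 in of linear travel
```

Dividing the travel by elapsed time gives the rack's linear speed, which is how you would size a pinion for, say, a motorized sliding door.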
Step 10: Internal Gears
An internal gear is simply a gear with teeth on the inside rather than the outside. Internal gears can be used to reduce the amount of space a drive train takes up, or allow something to pass through the center of the axis as the gear is turning. Unlike normal spur gears, an internal gear rotates in the same direction as the normal spur gear spinning it. For the most part, internal gears are used for planetary gearboxes, which I'll talk about next.
Step 11: Planetary Gearboxes
A planetary gearbox is a specific type of gearbox that uses internal gears. The main components of a planetary gearbox include the sun gear, which is in the center of the gearbox, usually connected to the input shaft of the system. The sun gear rotates a few planet gears, which all simultaneously rotate a large internal gear, called the ring or annular gear. The planet gears are usually held by a carrier, which keeps them evenly spaced around the sun gear. Planetary gearboxes can take on higher loads than most gearboxes because the load is distributed among all of the planet gears, as opposed to just one spur gear. These gearboxes are great for large gear reductions in small spaces, but can be costly and need to be well lubricated because of their design complexity.
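For the common arrangement where the ring gear is held fixed, the sun is the input, and the carrier is the output, the reduction works out to 1 + R/S (where R and S are the ring and sun tooth counts), and the geometry forces the ring to have R = S + 2P teeth. A sketch of that arithmetic (the configuration and names are assumptions, not something specified above):

```python
def ring_teeth(sun_teeth, planet_teeth):
    """Geometric constraint of a planetary set: R = S + 2 * P."""
    return sun_teeth + 2 * planet_teeth

def planetary_reduction(sun_teeth, ring):
    """Reduction with the ring fixed, sun as input, carrier as output."""
    return 1 + ring / sun_teeth

S, P = 12, 18
R = ring_teeth(S, P)              # a 48-tooth ring gear
print(planetary_reduction(S, R))  # 5.0 -> a 5:1 reduction in one compact stage
```

Other configurations (carrier fixed, ring as output, etc.) give different ratios from the same set of gears, which is one reason planetary sets show up in automatic transmissions.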
Step 12: Worm Gears
A worm gear is a gear driven by a worm, which is a small, screw-like piece that meshes with the gear. The gear rotates on an axis perpendicular, but on a different plane than, the worm. With each rotation of the worm, the gear rotates by one tooth. This means that the gear ratio of a worm gear is always N:1, where N is the number of teeth the gear has. While most gears have circular pitch, a worm has linear pitch, which is the distance from one turn in the spiral to the next.
Worm gears can thus be used to drastically reduce the speed and increase the torque of a system in only one step and in a small amount of space. A worm gear mechanism can create a gear ratio of 40:1 with just a 40-tooth gear and a worm, while to do the same with spur gears, you would need a small gear meshing with another gear 40 times its size.
Because the worm is a spiral, worm gears are almost impossible to back-drive. What this means is that if you tried spinning the system by its output shaft (on the worm gear) instead of its input shaft (on the worm), then you would not be able to. When a worm gear drives, the spiral spins and slowly inches each tooth forward. If you back-drove the system, the gear would be pushing against the side of the threads without actually turning them. This makes worm gears very valuable in mechanical systems because the axle cannot be manipulated by an external force, and it reduces the backlash and the play in the system.
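The N:1 relationship is trivial to compute. The sketch below also allows for multi-start worms, which are not covered above: a worm with k thread starts advances the gear k teeth per revolution, dividing the ratio by k.

```python
def worm_ratio(gear_teeth, worm_starts=1):
    """Worm-drive ratio: gear teeth advanced per worm revolution, per thread start."""
    return gear_teeth / worm_starts

print(worm_ratio(40))     # 40.0 -> the 40:1 single-stage example above
print(worm_ratio(40, 2))  # 20.0 -> a double-start worm halves the reduction
```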
Step 13: Helical and Herringbone Gears
Helical gears are a more efficient type of spur gear. The teeth are set at an angle to the axis of rotation, so they end up curving around the gear instead of straight up and down like spur gears. Helical gears can be mounted between parallel axes, but can also be used to drive non-parallel axes as long as the angled teeth mesh.
While the teeth on spur gears engage all at once, in that the entire face of a tooth on one gear fully contacts the face of a tooth on an adjacent gear as soon as they mesh, the teeth on helical gears gradually slide into each other. Because of this, helical gears are much better suited for high load and high speed situations. The disadvantage of helical gears is that they require thrust bearings, because when the teeth of a helical gear mesh, they produce an axial thrust pushing the gear along its axis of rotation.
This problem can be fixed with herringbone gears, which are basically two helical gears joined together, with their teeth angled in opposite directions. This eliminates the sideways force that helical gears produce because the axial force from one side of the herringbone gear cancels out the force on the other side. Herringbone gears, because of their geometry, are harder to manufacture than helical gears.
Step 14: Cage and Peg Gears
Cage and peg gears are a certain style of gear mechanisms that are much easier to make, because they can be made cheaply out of wooden boards and dowels. However, they are not very good for high speed or high load situations because they are usually made with a lot of backlash and wiggle-room. Cage and peg gears are mostly used to transmit rotation between perpendicular axes. A peg gear is basically a disc with short pegs sticking out from it around its circumference (to form a spur gear), or on its face parallel to the axis of rotation (to form a bevel gear). The pegs in these gears act as the teeth, and contact one another to spin each of the gears. A cage consists of two discs with pegs running between them parallel to the axis of rotation. A cage gear can be used like a worm gear, as each of the dowels on the gear contacts the pegs on a normal peg gear. However, this system can be driven from either end.
Step 15: Mutilated Gears
A mutilated gear is a gear whose tooth profile does not extend all the way around its pitch circle. Mutilated gears can be useful for many different purposes. In some cases, you may not need the entire tooth profile of a gear because the gear may never need to rotate 360 degrees, and you could have a linkage, beam, or other mechanism as part of the mutilated side of the gear. In other cases, you may want the mutilated gear to rotate 360 degrees, but you may not want it to be turning another gear all the time. If a mutilated gear with half its teeth missing makes one rotation every 30 seconds while meshing with a full spur gear, the spur gear will turn for 15 seconds and then stay put for 15 seconds. In this way you can turn continuous rotational motion into discrete rotational motion, meaning that the input shaft turns continuously while the output shaft turns a little, then stops, then turns again, then stops again, repeatedly.
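The drive/dwell timing follows directly from the fraction of the pitch circle that still carries teeth. A small sketch of that arithmetic (names are my own):

```python
def drive_and_dwell(period_s, toothed_fraction):
    """Seconds the driven gear turns vs. sits still, per input revolution."""
    drive = period_s * toothed_fraction
    return drive, period_s - drive

# The example above: one input rotation every 30 s, half the teeth removed
print(drive_and_dwell(30, 0.5))  # (15.0, 15.0): turn for 15 s, dwell for 15 s
```

Changing the toothed fraction lets you tune the duty cycle of the output without changing the input speed.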
Step 16: Non-Circular Gears
Although rare in industry, non-circular gears are pretty interesting mechanisms. The diameter of the gears where they are contacting each other change as the gears rotate, so the output speed of the system oscillates as the gears rotate. Non-circular gears can take almost any shape. If the two axes constraining the gears are fixed, then the sum of the radii of the gears at the point where they mesh should always be equal to the distance between the two axes.
Step 17: Ratchets
A ratchet is a fairly simple mechanism that only allows a gear to turn in one direction. A ratchet system consists of a gear (sometimes the teeth are different than the standard profile) with a small lever or latch that rotates about a pivot point and catches in the teeth of the gear. The latch is designed and oriented such that if the gear were to turn in one direction, the gear could spin freely and the latch would be pushed up by the teeth, but if the gear were to spin in the other direction, the latch would catch in the teeth of the gear and prevent it from moving.
Ratchets are useful in a variety of applications, because they allow force to be applied in one direction but not the other. These systems are common on bikes (how you can pedal forward to turn the wheels, but if you pedal backward the wheel will spin freely), some wrenches, and large winches that reel in loads.
Step 18: Clutches
Clutches are mechanisms found primarily in cars and other road vehicles, and they are used to change the speed of the output shaft, as well as disengage or engage the turning of the output shaft. A clutch mechanism involves at least two shafts, the input shaft, driven by a power source, and the output shaft, which drives the final mechanism. As an example, I'll explain a simple 2 gear clutch mechanism, referencing the image above. The input shaft would have two gears on it of different sizes (the two blue gears on the top shaft), and the output shaft contains two gears that mesh with the gears on the input shaft (the red and green gears), but can rotate freely around the output shaft, so they do not drive it. A clutch disc (the blue grooved piece in the middle) sits between the two gears, rotates with the output shaft, and can slide along it. If the clutch disc is pressed against the red gear, the output shaft would engage and turn at the speed defined by the gear ratio of that set of gears (3:2). If the clutch disc presses against the green gear, the output shaft drives at a different gear ratio, defined by that gear set (2:3). If the clutch disc sits between the two gears, then the output shaft is in neutral and is not being driven.
The clutch disc can engage with the gears in a few different ways. Some clutch discs engage via friction, and have friction pads mounted to their sides as well as the sides of the gears. Other clutch discs, like the one in the image above, are toothed, and they mesh with specific teeth on the faces of the gears.
Step 19: Differentials
A gear differential is a pretty interesting mechanism involving a ring bevel gear and four smaller bevel gears (two sun gears and two planet gears that orbit around them), acting sort of like a planetary gearbox. It is used mostly on cars and other vehicles, because it has one input shaft that drives two output shafts (which would connect to the wheels), and allows for the two output shafts to spin at different velocity if they need to. It ends up that the average of the rotational velocities of each output shaft always has to equal the rotational velocity of the ring gear.
I'll explain how a differential works using the images above. The input shaft spins the yellow bevel gear, which spins the green bevel ring gear. A carriage is fixed to the ring gear that spins with it. Both the carriage and the ring gear rotate around (but do not directly turn) the axis of the red output shafts. The two blue bevel gears turn in big circles around the central axis, the axis the output shafts go through. Lets imagine this differential sits with the output shafts connected to the back two wheels of a car. If the car is going straight, the two blue bevel gears will spin around the output shafts, because of the rotation of the carriage, without rotating about their own axis. Their teeth will push the two red gears at the same speed, each connected to their respective output shafts. Thus, the wheels spin at the same speed and the car goes straight. You'll notice the blue gears have the ability to spin about their axis though, which is important to the mechanism. Keep reading!
Should the car turn, then the two wheels will want to spin at different speeds. The inner wheel will want spin at a velocity slower than the outer one because it is closer to the center point of the car's turn. If the two wheels were connected on the same shaft, then the car would have a difficult time turning: one wheel would want to spin slower than the other, so it would drag. With the differential gear mechanism, the two shafts not only allow the wheels to spin at their own speeds, but also are still powered by the input shaft. If one wheel is spinning faster than the other, the blue planetary bevel gears just rotate about their axes instead of staying fixed. Now, the planetary gears are both rotating about their axes and about the output shafts (because of the carriage), thus powering both wheels, but allowing one to spin faster than the other.
This is a pretty tricky mechanism to explain. If you're still confused, I encourage you to check out this video, also shown above, which shows the process visually very well.
Step 20: Gear Design Software
While you can purchase gears of specific sizes from vendors, there are also situations in which you may want to design your own gears for a specific purpose or so that you can modify them to create non-standard gear parts. Here's some software to help you get started. If you know of any more, let me know and I'll add them!:
Autodesk Inventor (Free for Students):Has a gear design feature for spur and helical gears, worm gears, and bevel gears
RushGears:Contains a customizeable online gear template that allows you to download 3D CAD files of your designed gears.
Gearotic:Online gear mechanism design software.
DelGear:Gear design software package.
WoodGears:Gear design software for designing laser cut and wood gear profiles.
Step 21: Make Something With Gears!
Now its your turn to make something cool with gears! I made this simple GearBot to go along with this Instructable, but there are many other directions to go in from here. Use what you've learned and don't forget to share it!
If you have some more gear advice or ideas to share, or have any questions about mechanisms, please do so in the comments. | <urn:uuid:3ef2f8f6-1105-4585-b0d7-462415659c2d> | CC-MAIN-2017-17 | http://www.instructables.com/id/Basic-Gear-Mechanisms/ | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123491.79/warc/CC-MAIN-20170423031203-00074-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.945203 | 4,890 | 4 | 4 |
In nuclear physics, beta decay (β-decay) is a type of radioactive decay in which a proton is transformed into a neutron, or vice versa, inside an atomic nucleus. This process allows the atom to move closer to the optimal ratio of protons and neutrons. As a result of this transformation, the nucleus emits a detectable beta particle, which is an electron or positron.
Beta decay is mediated by the weak force. There are two types of beta decay, known as beta minus and beta plus. In beta minus (β−) decay a neutron is lost and a proton appears and the process produces an electron and electron antineutrino, while in beta plus (β+) decay a proton is lost and a neutron appears and the process produces a positron and electron neutrino; β+ decay is thus also known as positron emission.
An example of electron emission (β− decay) is the decay of carbon-14 into nitrogen-14:
14/6C → 14/7N + e− + ν̄e
In this form of decay, the original element becomes a new chemical element in a process known as nuclear transmutation. This new element has an unchanged mass number A, but an atomic number Z that is increased by one. As in all nuclear decays, the decaying element (in this case 14/6C) is known as the parent nuclide while the resulting element (in this case 14/7N) is known as the daughter nuclide. The emitted electron or positron is known as a beta particle.
An example of positron emission (β+ decay) is the decay of magnesium-23 into sodium-23:
23/12Mg → 23/11Na + e+ + νe
In contrast to β− decay, β+ decay is accompanied by the emission of an electron neutrino and a positron. β+ decay also results in nuclear transmutation, with the resulting element having an atomic number that is decreased by one.
Electron capture is sometimes included as a type of beta decay, because the basic nuclear process, mediated by the weak force, is the same. In electron capture, an inner atomic electron is captured by a proton in the nucleus, transforming it into a neutron, and an electron neutrino is released. An example of electron capture is the decay of krypton-81 into bromine-81:
81/36Kr + e− → 81/35Br + νe
Electron capture is a competing (simultaneous) decay process for all nuclei that can undergo β+ decay. The converse, however, is not true: electron capture is the only type of decay that is allowed in proton-rich nuclides that do not have sufficient energy to emit a positron and neutrino.
The generic equation is:
A/Z X → A/Z+1 X′ + e− + ν̄e
Another example is when the free neutron (1/0n) decays by β− decay into a proton (p):

- n → p + e− + ν̄e
At the fundamental level (as depicted in the Feynman diagram on the left), this is caused by the conversion of the negatively charged (−1/3 e) down quark to the positively charged (+2/3 e) up quark by emission of a W− boson; the W− boson subsequently decays into an electron and an electron antineutrino:
- d → u + e− + ν̄e
The beta spectrum is a continuous spectrum: the total decay energy is divided between the electron and the antineutrino. In the figure to the right, this is shown, by way of example, for an electron of 0.4 MeV energy. In this example, the antineutrino then gets the remainder: 0.76 MeV, since the total decay energy is assumed to be 1.16 MeV.
β− decay generally occurs in neutron-rich nuclei.
In β+ decay, or "positron emission", the weak interaction converts an atomic nucleus into a nucleus with atomic number decreased by one, while emitting a positron (e+) and an electron neutrino (ν
e). The generic equation is:
A/Z X → A/Z−1 X′ + e+ + νe
This may be considered as the decay of a proton inside the nucleus to a neutron
- p → n + e+ + νe
However, β+ decay cannot occur in an isolated proton because it requires energy due to the mass of the neutron being greater than the mass of the proton. β+ decay can only happen inside nuclei when the daughter nucleus has a greater binding energy (and therefore a lower total energy) than the mother nucleus. The difference between these energies goes into the reaction of converting a proton into a neutron, a positron and a neutrino and into the kinetic energy of these particles. In an opposite process to negative beta decay, the weak interaction converts a proton into a neutron by converting an up quark into a down quark by having it emit a W+ or absorb a W−.
Electron capture (K-capture)
In all cases where β+ decay of a nucleus is allowed energetically, so is electron capture, the process in which the same nucleus captures an atomic electron with the emission of a neutrino:
A/Z X + e− → A/Z−1 X′ + νe
The emitted neutrino is mono-energetic. In proton-rich nuclei where the energy difference between the initial and final states is less than 2mec², β+ decay is not energetically possible, and electron capture is the sole decay mode.
If the captured electron comes from the innermost shell of the atom, the K-shell, which has the highest probability to interact with the nucleus, the process is called K-capture. If it comes from the L-shell, the process is called L-capture, etc.
Competition of beta decay types
Three types of beta decay in competition are illustrated by the single isotope copper-64 (29 protons, 35 neutrons), which has a half-life of about 12.7 hours. This isotope has one unpaired proton and one unpaired neutron, so either the proton or the neutron can decay. This particular nuclide (though not all nuclides in this situation) is almost equally likely to decay through proton decay by positron emission (18%) or electron capture (43%), as through neutron decay by electron emission (39%).
Helicity (polarization) of neutrinos, electrons and positrons emitted in beta decay
After the discovery of parity non-conservation (see history below), it was found that, in beta decay, electrons are emitted mostly with negative helicity, i.e., they move, naively speaking, like left-handed screws driven into a material (they have negative longitudinal polarization). Conversely, positrons have mostly positive helicity, i.e., they move like right-handed screws. Neutrinos (emitted in positron decay) have positive helicity, while antineutrinos (emitted in electron decay) have negative helicity.
The higher the energy of the particles, the higher their polarization.
The Q value is defined as the total energy released in a given nuclear decay. In beta decay, Q is therefore also the sum of the kinetic energies of the emitted beta particle, neutrino, and recoiling nucleus. (Because of the large mass of the nucleus compared to that of the beta particle and neutrino, the kinetic energy of the recoiling nucleus can generally be neglected.) Beta particles can therefore be emitted with any kinetic energy ranging from 0 to Q. A typical Q is around 1 MeV, but can range from a few keV to a few tens of MeV.
Consider the generic equation for beta decay
A/Z X → A/Z+1 X′ + e− + ν̄e
The Q value for this decay is

Q = [mN(A/Z X) − mN(A/Z+1 X′) − me − mν̄e] c²,

where mN(A/Z X) is the mass of the nucleus of the A/Z X atom, me is the mass of the electron, and mν̄e is the mass of the electron antineutrino. In other words, the total energy released is the mass energy of the initial nucleus, minus the mass energy of the final nucleus, electron, and antineutrino. The mass of the nucleus mN is related to the standard atomic mass m by

m(A/Z X) c² = mN(A/Z X) c² + Z me c² − Σ Bi.

That is, the total atomic mass is the mass of the nucleus, plus the mass of the electrons, minus the sum of the binding energies Bi of the electrons. Substituting this into our original equation, while neglecting the nearly-zero antineutrino mass and the difference in electron binding energy, which is very small for high-Z atoms, we have

Q = [m(A/Z X) − m(A/Z+1 X′)] c².
This energy is carried away as kinetic energy by the electron and neutrino.
Because the reaction will proceed only when the Q value is positive, β− decay can occur when the mass of atom A/Z X is greater than the mass of atom A/Z+1 X′.
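As a numerical sketch of this relation (the function name is my own; the atomic masses are standard tabulated values in unified atomic mass units, hard-coded here for illustration), the carbon-14 decay shown earlier gives:

```python
U_TO_MEV = 931.494  # 1 atomic mass unit in MeV/c^2

def q_beta_minus(m_parent_u, m_daughter_u):
    """Q value (MeV) for beta-minus decay from *atomic* masses.

    In atomic masses the electron masses cancel, so
    Q = [m(A, Z) - m(A, Z+1)] c^2.
    """
    return (m_parent_u - m_daughter_u) * U_TO_MEV

# Tabulated atomic masses (u) for the carbon-14 example:
m_c14 = 14.003241989  # 14/6C
m_n14 = 14.003074005  # 14/7N
q = q_beta_minus(m_c14, m_n14)
print(f"Q(14C -> 14N) = {q * 1000:.1f} keV")  # about 156 keV
```

A negative result would mean the decay is energetically forbidden, which is exactly the mass condition stated above.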
The equations for β+ decay are similar, with the generic equation

A/Z X → A/Z−1 X′ + e+ + νe.

However, in this equation the electron masses do not cancel, and we are left with

Q = [m(A/Z X) − m(A/Z−1 X′) − 2me] c².
Because the reaction will proceed only when the Q value is positive, β+ decay can occur when the mass of atom A/Z X exceeds that of A/Z−1 X′ by at least twice the mass of the electron.
The analogous calculation for electron capture must take into account the binding energy of the electrons. This is because the atom will be left in an excited state after capturing the electron, and the binding energy of the captured innermost electron is significant. Using the generic equation for electron capture

A/Z X + e− → A/Z−1 X′ + νe,

we have

Q = [mN(A/Z X) + me − mN(A/Z−1 X′)] c² − Bn,

which simplifies to

Q = [m(A/Z X) − m(A/Z−1 X′)] c² − Bn,

where Bn is the binding energy of the captured electron.
Because the binding energy of the electron is much less than the mass of the electron, nuclei that can undergo β+ decay can always also undergo electron capture, but the reverse is not true.
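The same mass bookkeeping can be sketched in code (function names are my own; atomic masses are standard tabulated values hard-coded for illustration, and the electron binding energy Bn is neglected since it is at most of order keV). Beryllium-7, which in fact decays only by electron capture, is included as the contrasting case:

```python
U_TO_MEV = 931.494   # 1 u in MeV/c^2
TWO_M_E = 1.021998   # 2 * m_e c^2 in MeV

def q_beta_plus(m_parent_u, m_daughter_u):
    """Q = [m(A, Z) - m(A, Z-1) - 2 m_e] c^2, from atomic masses."""
    return (m_parent_u - m_daughter_u) * U_TO_MEV - TWO_M_E

def q_electron_capture(m_parent_u, m_daughter_u):
    """Q = [m(A, Z) - m(A, Z-1)] c^2 - B_n; B_n (a few keV at most) is neglected."""
    return (m_parent_u - m_daughter_u) * U_TO_MEV

# magnesium-23 -> sodium-23: both beta-plus decay and electron capture are allowed.
print(q_beta_plus(22.9941237, 22.9897693))        # ~3.03 MeV
print(q_electron_capture(22.9941237, 22.9897693)) # ~4.06 MeV

# beryllium-7 -> lithium-7: the mass difference is below 2 m_e c^2,
# so only electron capture is energetically possible.
print(q_beta_plus(7.0169287, 7.0160034))          # negative
print(q_electron_capture(7.0169287, 7.0160034))   # ~0.86 MeV
```

The two Q values always differ by 2mec² (up to the neglected Bn), which is why electron capture is available whenever β+ decay is, but not conversely.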
For example, the mass-22 isobars fluorine-22 and sodium-22 both decay to neon-22:

22/9F → 22/10Ne + e− + ν̄e (beta minus decay)
22/11Na → 22/10Ne + e+ + νe (beta plus decay)
22/11Na + e− → 22/10Ne + νe (electron capture)
Beta decay does not change the number A of nucleons in the nucleus, but changes only its charge Z. Thus the set of all nuclides with the same A can be introduced; these isobaric nuclides may turn into each other via beta decay. Among them, several nuclides (at least one for any given mass number A) are beta stable, because they present local minima of the mass excess: if such a nucleus has (A, Z) numbers, the neighbour nuclei (A, Z−1) and (A, Z+1) have higher mass excess and can beta decay into (A, Z), but not vice versa. For all odd mass numbers A, there is only one known beta-stable isobar. For even A, there are up to three different beta-stable isobars experimentally known; for example, 96/40Zr, 96/42Mo, and 96/44Ru are all beta-stable. There are about 355 known beta-decay stable nuclides in total.
Usually, unstable nuclides are clearly either "neutron rich" or "proton rich", with the former undergoing beta decay and the latter undergoing electron capture (or more rarely, due to the higher energy requirements, positron decay). However, in a few cases of odd-proton, odd-neutron radionuclides, it may be energetically favorable for the radionuclide to decay to an even-proton, even-neutron isobar either by undergoing beta-positive or beta-negative decay. An often-cited example is 64/29Cu, which decays by positron emission/electron capture 61% of the time to 64/28Ni, and 39% of the time by (negative) beta decay to 64/30Zn.
Most naturally occurring isotopes on Earth are beta stable. Those that are not have half-lives ranging from under a second to periods of time significantly greater than the age of the universe. One common example of a long-lived isotope is the odd-proton odd-neutron nuclide 40/19K, which undergoes all three types of beta decay (β−, β+ and electron capture) with a half-life of 1.277×10⁹ years.
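That half-life plugs directly into the exponential decay law N(t)/N0 = 2^(−t/T½). As an illustration (function name is my own), using roughly the age of the Earth, about 4.5×10⁹ years, as a sample time:

```python
import math

HALF_LIFE_K40 = 1.277e9  # years

def fraction_remaining(t_years, half_life=HALF_LIFE_K40):
    """Fraction N(t)/N0 surviving after t years of exponential decay."""
    return math.exp(-math.log(2.0) * t_years / half_life)

# Sample time: roughly the age of the Earth, for illustration only.
print(fraction_remaining(4.5e9))  # ~0.087: under 9% of primordial 40K is left
```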
Double beta decay
Some nuclei can undergo double beta decay (ββ decay) where the charge of the nucleus changes by two units. Double beta decay is difficult to study, as the process has an extremely long half-life. In nuclei for which both β decay and ββ decay are possible, the rarer ββ decay process is effectively impossible to observe. However, in nuclei where β decay is forbidden but ββ decay is allowed, the process can be seen and a half-life measured. Thus, ββ decay is usually studied only for beta stable nuclei. Like single beta decay, double beta decay does not change A; thus, at least one of the nuclides with some given A has to be stable with regard to both single and double beta decay.
"Ordinary" double beta decay results in the emission of two electrons and two antineutrinos. If neutrinos are Majorana particles (i.e., they are their own antiparticles), then a decay known as neutrinoless double beta decay will occur. Most neutrino physicists believe that neutrinoless double beta decay has never been observed.
Bound-state β− decay
A very small minority of free neutron decays (about four per million) are so-called "two-body decays", in which the proton, electron and antineutrino are produced, but the electron fails to gain the 13.6 eV energy necessary to escape the proton, and therefore simply remains bound to it, as a neutral hydrogen atom. In this type of beta decay, in essence all of the neutron decay energy is carried off by the antineutrino.
For fully ionized atoms (bare nuclei), it is likewise possible for the electron to fail to escape the atom and to be emitted from the nucleus into a low-lying atomic bound state (orbital). This cannot occur for neutral atoms, whose low-lying bound states are already filled by electrons.
The phenomenon in fully ionized atoms was first observed for 163Dy66+ in 1992 by Jung et al. of the Darmstadt Heavy-Ion Research group. Although neutral 163Dy is a stable isotope, the fully ionized 163Dy66+ undergoes β decay into the K and L shells with a half-life of 47 days.
Another possibility is that a fully ionized atom undergoes greatly accelerated β decay, as observed for 187Re by Bosch et al., also at Darmstadt. Neutral 187Re does undergo β decay with a half-life of 42×10⁹ years, but for fully ionized 187Re75+ this is shortened by a factor of 10⁹ to only 32.9 years. For comparison, the variation of decay rates of other nuclear processes due to chemical environment is less than 1%.
Beta decays can be classified according to the L-value of the emitted radiation. When L > 0, the decay is referred to as "forbidden". Nuclear selection rules require high L-values to be accompanied by changes in nuclear spin (J) and parity (π). The selection rules for the Lth forbidden transitions are:
where Δπ = 1 or −1 corresponds to no parity change or parity change, respectively. The special case of a transition between isobaric analogue states, where the structure of the final state is very similar to the structure of the initial state, is referred to as "superallowed" for beta decay, and proceeds very quickly. The following table lists the ΔJ and Δπ values for the first few values of L:
|Transition type||ΔJ||Δπ|
|Superallowed||0||no|
|Allowed||0, 1||no|
|First forbidden||0, 1, 2||yes|
|Second forbidden||1, 2, 3||no|
|Third forbidden||2, 3, 4||yes|
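These selection rules can be sketched as a small lookup that assigns a transition the lowest degree of forbiddenness consistent with its ΔJ and parity change, since the lowest-L channel dominates the decay rate (the function and table names below are illustrative, not from any library):

```python
# Degrees of forbiddenness in order of increasing L; each entry lists the
# permitted nuclear spin changes and whether the parity changes.
RULES = [
    ("allowed",          {0, 1},    False),
    ("first forbidden",  {0, 1, 2}, True),
    ("second forbidden", {1, 2, 3}, False),
    ("third forbidden",  {2, 3, 4}, True),
]

def classify(delta_j, parity_change):
    """Return the lowest degree of forbiddenness consistent with the rules,
    since the channel with the lowest L dominates the decay rate."""
    for name, spin_changes, parity_flips in RULES:
        if delta_j in spin_changes and parity_change == parity_flips:
            return name
    return "higher forbidden"

print(classify(0, False))  # allowed
print(classify(2, True))   # first forbidden
print(classify(3, False))  # second forbidden
```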
A Fermi transition is a beta decay in which the spins of the emitted electron (positron) and antineutrino (neutrino) couple to total spin S = 0, leading to an angular momentum change ΔJ = 0 between the initial and final states of the nucleus (assuming an allowed transition). In the non-relativistic limit, the nuclear part of the operator for a Fermi transition is given by

O_F = G_V Σ_a τ_a±,

with G_V the weak vector coupling constant and τ± the isospin raising and lowering operators, the sum running over the nucleons a.

A Gamow-Teller transition is a beta decay in which the spins of the emitted electron (positron) and antineutrino (neutrino) couple to total spin S = 1, leading to an angular momentum change ΔJ = 0, ±1 between the initial and final states of the nucleus (assuming an allowed transition). In this case, the nuclear part of the operator is given by

O_GT = G_A Σ_a σ_a τ_a±,

with G_A the weak axial-vector coupling constant, and σ the spin Pauli matrices, which can produce a spin-flip in the decaying nucleon.
Beta emission spectrum
Beta decay can be considered as a perturbation as described in quantum mechanics, and thus Fermi's Golden Rule can be applied. This leads to an expression for the kinetic energy spectrum N(T) of emitted betas as follows:

N(T) = CL(T) F(Z, T) p E (Q − T)²,

where T is the kinetic energy, CL is a shape function that depends on the forbiddenness of the decay (it is constant for allowed decays), F(Z, T) is the Fermi function (see below) with Z the charge of the final-state nucleus, E = T + mc² is the total energy, p = √((E/c)² − (mc)²) is the momentum, and Q is the Q value of the decay. The kinetic energy of the emitted neutrino is given approximately by Q minus the kinetic energy of the beta.
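To see the characteristic shape this formula produces, here is a sketch that evaluates it for an allowed decay with CL and the Fermi function both set to 1 (so Coulomb distortion is ignored); the 1.16 MeV endpoint matches the example decay energy used earlier, and all names are my own:

```python
import math

M_E = 0.511  # electron rest energy, MeV

def spectrum(t_kin, q_value):
    """Relative intensity N(T) ~ p * E * (Q - T)^2 at kinetic energy t_kin (MeV)."""
    if not 0.0 <= t_kin <= q_value:
        return 0.0
    e_total = t_kin + M_E               # total energy E = T + m c^2
    p = math.sqrt(e_total**2 - M_E**2)  # momentum (times c) in MeV
    return p * e_total * (q_value - t_kin) ** 2

Q = 1.16  # MeV, the example decay energy used earlier
samples = [spectrum(i * Q / 100.0, Q) for i in range(101)]
# The spectrum vanishes at both endpoints (T = 0 and T = Q) and peaks in between.
print(samples[0], max(samples) > 0.0)
```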
As an example, the beta decay spectrum of 210Bi (originally called RaE) is shown to the right.
The Fermi function that appears in the beta spectrum formula accounts for the Coulomb attraction or repulsion between the emitted beta and the final-state nucleus. Approximating the associated wavefunctions to be spherically symmetric, the Fermi function can be analytically calculated to be:

F(Z, T) = (2(1 + S) / Γ(1 + 2S)²) (2pρ)^(2S−2) e^(πη) |Γ(S + iη)|²,

where S = √(1 − α²Z²) (α is the fine-structure constant), η = ±αZE/(pc) (+ for electrons, − for positrons), ρ = rN/ℏ (rN is the radius of the final-state nucleus), and Γ is the Gamma function.

For non-relativistic betas (Q ≪ mec²), this expression can be approximated by:

F(Z, T) ≈ 2πη / (1 − e^(−2πη)).
Other approximations can be found in the literature.
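A sketch of the non-relativistic approximation above (the function name is my own; η is written as ±αZ/(v/c), using the identity E/(pc) = 1/(v/c)):

```python
import math

ALPHA = 1.0 / 137.036  # fine-structure constant

def fermi_nonrel(z_daughter, v_over_c, positron=False):
    """Non-relativistic Fermi function, F ~ 2*pi*eta / (1 - exp(-2*pi*eta)),
    with eta = +/- alpha * Z / (v/c): + for electrons, - for positrons."""
    eta = ALPHA * z_daughter / v_over_c
    if positron:
        eta = -eta
    x = 2.0 * math.pi * eta
    return x / (1.0 - math.exp(-x))

# Coulomb attraction enhances emission of slow electrons (F > 1), while
# repulsion suppresses slow positrons (F < 1):
print(fermi_nonrel(83, 0.2))                 # well above 1
print(fermi_nonrel(83, 0.2, positron=True))  # far below 1
```

In the limit of small η (light nuclei or fast betas) the factor tends to 1, i.e. no Coulomb correction.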
A Kurie plot (also known as a Fermi–Kurie plot) is a graph used in studying beta decay developed by Franz N. D. Kurie, in which the square root of the number of beta particles whose momenta (or energy) lie within a certain narrow range, divided by the Fermi function, is plotted against beta-particle energy. It is a straight line for allowed transitions and some forbidden transitions, in accord with the Fermi beta-decay theory. The energy-axis (x-axis) intercept of a Kurie plot corresponds to the maximum energy imparted to the electron/positron (the decay's Q-value). With a Kurie plot one can find the limit on the effective mass of a neutrino.
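The linearity can be checked numerically: for an ideal allowed spectrum with the Fermi function set to 1, the Kurie variable √(N/(pEF)) reduces exactly to Q − T, a straight line of slope −1 whose energy-axis intercept is the endpoint Q (all names below are illustrative):

```python
import math

M_E = 0.511  # electron rest energy, MeV
Q = 1.16     # MeV, endpoint of the synthetic spectrum

def kurie(t_kin):
    """Kurie variable sqrt(N / (p * E * F)) for an ideal allowed spectrum, F = 1."""
    e_total = t_kin + M_E
    p = math.sqrt(e_total**2 - M_E**2)
    n = p * e_total * (Q - t_kin) ** 2  # allowed-spectrum intensity
    return math.sqrt(n / (p * e_total))

# kurie(T) = Q - T exactly, so the points fall on a straight line
# whose x-intercept is the endpoint energy Q.
for t in (0.2, 0.6, 1.0):
    print(t, kurie(t))
```

Real data deviate from the line near the endpoint if the neutrino has mass, which is why the plot can be used to bound the effective neutrino mass.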
Discovery and characterization of β− decay
Radioactivity was discovered in 1896 by Henri Becquerel in uranium, and subsequently observed by Marie and Pierre Curie in thorium and in the new elements polonium and radium. In 1899, Ernest Rutherford separated radioactive emissions into two types: alpha and beta (now beta minus), based on penetration of objects and ability to cause ionization. Alpha rays could be stopped by thin sheets of paper or aluminium, whereas beta rays could penetrate several millimetres of aluminium. (In 1900, Paul Villard identified a still more penetrating type of radiation, which Rutherford identified as a fundamentally new type in 1903, and termed gamma rays).
In 1900, Becquerel measured the mass-to-charge ratio (m/e) for beta particles by the method of J.J. Thomson used to study cathode rays and identify the electron. He found that m/e for a beta particle is the same as for Thomson's electron, and therefore suggested that the beta particle is in fact an electron.
In 1901, Rutherford and Frederick Soddy showed that alpha and beta radioactivity involves the transmutation of atoms into atoms of other chemical elements. In 1913, after the products of more radioactive decays were known, Soddy and Kazimierz Fajans independently proposed their radioactive displacement law, which states that beta (i.e., β−) emission from one element produces another element one place to the right in the periodic table, while alpha emission produces an element two places to the left.
Neutrinos in beta decay
Historically, the study of beta decay provided the first physical evidence of the neutrino. Measurements of the beta particle (electron) kinetic energy spectrum in 1911 by Lise Meitner and Otto Hahn and in 1913 by Jean Danysz showed multiple lines on a diffuse background, offering the first hint of a continuous spectrum. In 1914, James Chadwick used a magnetic spectrometer with one of Hans Geiger's new counters to make a more accurate measurement and showed that the spectrum was continuous. This was in apparent contradiction to the law of conservation of energy, since if beta decay were simply electron emission as assumed at the time, then the energy of the emitted electron should equal the energy difference between the initial and final nuclear states and lead to a narrow energy distribution, as observed for both alpha and gamma decay. For beta decay, however, the observed broad continuous spectrum suggested that energy is lost in the beta decay process.
From 1920–1927, Charles Drummond Ellis (along with James Chadwick and colleagues) further established that the beta decay spectrum is continuous. In 1933, Ellis and Nevill Mott obtained strong evidence that this spectrum has an effective upper bound in energy, which was a severe blow to Bohr's suggestion that conservation of energy might be true only in a statistical sense, and might be violated in any given decay. Now the problem of how to account for the variability of energy in known beta decay products, as well as for conservation of momentum and angular momentum in the process, became acute.
A second problem related to the conservation of angular momentum. Molecular band spectra showed that the nuclear spin of nitrogen-14 is 1 (i.e. equal to the reduced Planck constant), and more generally that the spin is integral for nuclei of even mass number and half-integral for nuclei of odd mass number, as later explained by the proton-neutron model of the nucleus. Beta decay leaves the mass number unchanged, so that the change of nuclear spin must be an integer. However the electron spin is 1/2, so that angular momentum would not be conserved if beta decay were simply electron emission.
In a famous letter written in 1930, Wolfgang Pauli suggested that, in addition to electrons and protons, atomic nuclei also contained an extremely light neutral particle, which he called the neutron. He suggested that this "neutron" was also emitted during beta decay (thus accounting for the known missing energy, momentum, and angular momentum) and had simply not yet been observed. In 1931, Enrico Fermi renamed Pauli's "neutron" to neutrino and, in 1934, he published a very successful model of beta decay in which neutrinos were produced. The neutrino interaction with matter was so weak that detecting it proved a severe experimental challenge, which was finally met in 1956 in the Cowan–Reines neutrino experiment. However, the properties of neutrinos were (with a few minor modifications) as predicted by Pauli and Fermi.
Discovery of other types of beta decay
In 1934, Frédéric and Irène Joliot-Curie bombarded aluminium with alpha particles to effect the nuclear reaction 4/2He + 27/13Al → 30/15P + 1/0n, and observed that the product isotope 30/15P emits a positron identical to those found in cosmic rays by Carl David Anderson in 1932. This was the first example of β+ decay (positron emission), which they termed artificial radioactivity since 30/15P is a short-lived nuclide which does not exist in nature.
The theory of electron capture was first discussed by Gian-Carlo Wick in a 1934 paper, and then developed by Hideki Yukawa and others. K-electron capture was first observed in 1937 by Luis Alvarez, in the nuclide 48V. Alvarez went on to study electron capture in 67Ga and other nuclides.
Non-conservation of parity
In 1956, Chien-Shiung Wu and coworkers proved in the Wu experiment that parity is not conserved in beta decay. This surprising fact had been postulated shortly before in an article by Tsung-Dao Lee and Chen Ning Yang.
- Alpha decay
- Particle radiation
- Tritium illumination, a form of fluorescent lighting powered by beta decay
- Pandemonium effect
- Total absorption spectroscopy
Reaching the Nations
Area: 71,740 square km. One of the smallest West African countries, Sierra Leone borders the Atlantic Ocean, Guinea, and Liberia. With the exception of mountains in the east, plains and plateaus cover most of the terrain. Sierra Leone has a wet climate, receiving up to nearly 500 centimeters of rain annually, and a dry season. Most of the country is heavily forested with mangroves on the coast and tropical forest in the interior. Deforestation and overfishing are the greatest environmental concerns. Sierra Leone is administratively divided into three provinces and one area. Each of the three provinces is further divided into districts.
Population: 5,363,669 (July 2011)
Annual Growth Rate: 2.249% (2011)
Fertility Rate: 4.94 children born per woman (2011)
Life Expectancy: male 53.69, female 58.65 (2011)
There are approximately 20 African ethnic groups. The Temne and Mende are the largest ethnic groups; together they comprise two-thirds of Sierra Leoneans and reside in the north and the south, respectively. The Creole (Krio) are descendants of freed slaves.
Languages: Mende (26%), Themne (22%), Krio (10%), other (42%). English is the official language. Although only 10% of Sierra Leoneans speak Krio as a first language, 95% speak Krio as a second language. Krio originated from freed Jamaican slaves that settled around the Freetown area and spread throughout the country as a language for communication between different ethnic groups. All together there are 16 ethnic groups in the country, each having their own language. Mende is most widely spoken in southern and eastern Sierra Leone and Temne is most widely spoken in northern Sierra Leone. Languages with over one million speakers include Mende (1.48 million) and Themne (1.23 million).
Literacy: 35.1% (2000)
West African tribes populated the region prior to European exploration. Sierra Leone received its name, meaning "Lion Mountains," from Portuguese explorers in the 1400s, and was one of the first areas of West Africa contacted by Europeans. Slaves were regularly trafficked from Sierra Leone by the British to coastal areas of North America during the eighteenth century. The British freed 500 slaves from North America in the late eighteenth century and established one of their first African colonies in Freetown in 1792. During the following century, the British ruled their West African colonies, such as The Gambia and the Gold Coast, from Freetown. Ethnic conflict prevailed during the nineteenth century between the indigenous population, Krios, and Europeans; ethnic tension de-escalated for much of the twentieth century. Independence from the United Kingdom occurred peacefully in 1961. Following independence, the government suffered from instability and was accused of corruption and of favoring certain ethnic groups over others. Instability continued until the Sierra Leone Civil War erupted in 1991. The war came to an end in 2002. During this time tens of thousands died and millions fled the country to surrounding nations. Since the end of the civil war, Sierra Leone has grown increasingly stable, due in part to the military's heavy involvement in enforcing security after the withdrawal of United Nations peacekeepers, although corruption and a low standard of living remain major challenges.
Tribalism, Islam, and Christianity are the primary cultural influences on society. Sierra Leoneans stress politeness and good manners. Sierra Leone was once known in West Africa for its high-quality education but recent war and ethnic conflict have resulted in widespread poverty and low standards of living. Common foods include cassava, okra, and fish. Alcohol consumption rates are comparable to the worldwide average. Marriage usually requires the groom to pay a bride price and is sometimes arranged by families. Polygamy is commonplace, especially in rural areas.
GDP per capita: $900 (2010) [1.9% of US]
Human Development Index: 0.365
Corruption Index: 2.4
One of the poorest countries in the world, Sierra Leone ranks among the nations with the lowest Human Development Index (HDI) ratings. Rampant corruption and inequality in the distribution of wealth characterize the economy. Significant natural resources valuable for economic development have yet to be exploited such as valuable minerals and favorable agricultural conditions. Diamond exports account for half of total exports. Agriculture employs half of the work force whereas industry and services employ 31% and 21% of the work force, respectively. Sierra Leone lacks an educated population and infrastructure to support greater economic growth. The economy will more likely develop as peace is kept and commercial farming and mining operations employ the local population. Unemployment remains a major problem in Sierra Leone due to the recent end of the civil war. Diamond mining, manufacturing, petroleum, refining, and ship repair are major industries. Common crops include rice, coffee, palm kernels, palm oil, and peanuts. Primary trade partners include Belgium, the United States, South Africa, and the Netherlands.
Corruption is perceived as widespread and present in all areas of society. Instances of corruption in government officials is commonplace. Low literacy rates have likely contributed to high levels of corruption.
indigenous beliefs/other: 2%
Denominations Members Congregations
Seventh Day Adventist 17,151 52
Latter-Day Saints 8,054 18
Jehovah's Witnesses 1,786 33
Most of the population adheres to Islam. The percentage of Muslims in the country has increased since independence. Many Christians are Catholic. Syncretism between Islam, Christianity, and indigenous beliefs and practices is common.
The constitution allows religious freedom which is protected by government. There are no requirements for religious groups to register in order to operate in Sierra Leone. Government recognizes both Muslim and Christian holidays.
Freetown, Bo, Kenema, Koidu, Makeni, Waterloo, Port Loko, Goderich, Daru, Lunsar.
Four of the 10 largest cities have a congregation. 27% of the national population lives in the 10 largest cities.
The first Sierra Leonean Latter-day Saints were baptized in the Netherlands and Ghana and later returned to Sierra Leone in the 1980s. A study group was formed in January of 1988. Sierra Leone was included in the Liberia Monrovia Mission when it was organized in March 1988. In May 1988, the first senior couple missionaries arrived and the first 14 converts were baptized the following month. The first branch was organized in Goderich in August 1988. The first Sierra Leonean Mission began serving as full-time missionaries in 1989. The first young missionaries to serve in Sierra Leone arrived around 1990. The first official LDS meetings in Bo occurred in July 1990 with just five members. Due to a civil war in Liberia, the Liberia Monrovia Mission was relocated to Sierra Leone in 1990 and discontinued in 1991. Full-time missionaries were first assigned to Kenema in 2004. Administrative responsibility for Liberia and Sierra Leone pertained to the Ghana Accra Mission until the organization of the Sierra Leone Freetown Mission in 2007.
LDS Membership: 8,907 (2010)
Church membership grew rapidly in the late 1980s and early 1990s as there were less than 100 Latter-day Saints in 1988 and approximately 1,900 in 1993. Membership stood at 2,700 in 1997 and 3,920 in 2000. During the 2000s, steady membership growth occurred as church membership totaled 4,782 in 2002, 5,712 in 2004, 6,938 in 2006, 8,054 in 2008, and 8,907 in 2010. Annual membership growth rates during this period ranged from a high of 12% in 2002 to a low of 3.4% in 2008 but generally ranged from 7-10%. In 2004, there were 2,177 members in the Bo Sierra Leone District, constituting 38% of the national LDS membership. In 2010, one in 602 was LDS.
Wards: 0 Branches: 27 Groups: 1+
Rapid congregational growth occurred between 1988 and 1993 as the number of branches increased from one to 14. The first district was organized in Freetown in 1990 followed by two additional districts in 1991 in Wellington and Bo. The number of branches increased to 16 by 2000. There were 17 branches in 2003, 15 in 2005, 17 in 2006, 18 in 2007, 22 in 2009, and 23 in 2010. Congregational growth in the late 2000s was concentrated in Freetown and Bo. Districts in Freetown and Wellington were consolidated into a single district in 2005 and the Freetown district was divided to create the Freetown Sierra Leone East District in 2011. In 2009, a district branch was organized in Bo for members meeting in groups within the boundaries of the district. In April 2011, there were nine branches in the Bo Sierra Leone District and 13 branches in the Freetown Sierra Leone District. In late 2011, four additional branches were organized: Two in Kenema (IDA and Simbeck) and two in Freetown (Belliar Park and Koso Town). Groups appear to function in Makeni and possibly Moyamba.
Activity and Retention
The LDS Church experienced low member activity and convert retention rates in Sierra Leone from the mid-1990s to the late 2000s due to rushing investigators into baptism with little prebaptismal preparation. In 2010 and early 2011, the Sierra Leone Freetown Mission made noticeable progress improving convert retention and member activity rates. 1,100 attended district conference in Freetown in March 2011 and 1,660 attended in August 2011 when the district was divided. 1,090 attended the Bo Sierra Leone District Conference in late 2011. The average number of members per congregation increased from 135 in 1993 to 245 in 2000 and 387 in 2010. 2,050 were enrolled in seminary and institute during the 2009-2010 school year. In early 2011, most branches had between 100 and 200 active members. Nationwide active membership is estimated at 3,500, or 35-40% of total church membership.
Languages with LDS Scripture: English
All LDS scriptures and materials are available in English. The Articles of Faith are available in Mende.
In early 2011, there were approximately a dozen LDS meetinghouses. The Church began construction of its first Church-built meetinghouse in 2004 in Bo. There are additional church-built meetinghouses, but most congregations meet in rented spaces or renovated buildings.
Health and Safety
Tropical diseases are endemic and health infrastructure is poor. HIV/AIDS infects 1.7% of the population. Sexual promiscuity is widespread and contributes to the spread of disease.
Humanitarian and Development Work
Poverty is a major issue in Sierra Leone. The Church has periodically organized "Helping Hands" service projects for local members to clean streets and hospitals. In 2007, the Church planned measles vaccinations for children after programs were successfully conducted in other African nations. In 2008, senior missionaries reported that the Church assisted in building 71 wells around the city of Bo. Additional humanitarian and development projects have included wheelchair donations for the disabled, a clean water project in Waterloo, donating health care equipment, and providing neonatal resuscitation training.
Opportunities, Challenges and Prospects
No government regulations limit proselytism or the arrival of foreign missionaries. Sierra Leone offered unrealized opportunities for the Church prior to the organization of the Sierra Leone Freetown Mission. These opportunities continue not to be fully utilized as few full-time missionaries serve in the country and most areas receive no LDS mission outreach.
Poverty reduces the ability of many to be self-reliant economically and obtain vocation training and education. Low literacy rates severely challenge efforts for the Church to establish self-sustaining local leadership and for illiterate members to obtain and grow their testimonies on an individual basis. Poor standards of living nonetheless provide opportunities for development and humanitarian projects for the Church which at present have been severely limited. Tribalism and ethnic conflict in some unreached areas have likely contributed to no increase in national outreach by the Church for nearly a decade. Many have been receptive to the Church notwithstanding the prominence of Islam in local culture. As in many Muslim-majority African nations, the prevalence of polygamy presents an obstacle for some who wish to join the Church. If those participating in a polygamous marriage wish to join the Church, men must divorce their additional wives and women must divorce their husbands if they are not the first wife. Many investigators stop investigating the Church when the issue of polygamy and joining the Church is brought up. There have been some faithful investigators who have divorced additional wives in order to become members of the Church. Polygamy remains a cultural obstacle for many Sierra Leoneans to join the Church as it adversely affects family and community relationships.
23% of the national population resides in a city with an LDS congregation and LDS congregations operate in three of the four administrative divisions. The Church conducts excellent mission outreach in most areas of Freetown, Bo, and Kenema and the majority of the population of these cities resides within a kilometer of an LDS meetinghouse. The decision in recent years to organize additional branches in Freetown rather than consolidate active membership into larger congregations to form prospective wards for a future stake present a good planning and outreach approach that encourages growth and accessibility. LDS meetings have only recently begun in Makeni and no independent branch or full-time missionaries are assigned.
The Church initially made significant inroads expanding national outreach following an official church establishment in the country as several congregations were established in Freetown and Bo but war, poverty, leadership training challenges, convert retention issues, ethnic conflict, distance to mission headquarters, and limited missionary resources dedicated to the region contributed to no additional cities opening to missionary work between the early 1990s and the mid-2000s. The organization of the Sierra Leone Freetown Mission in 2007 directed greater mission resources to Sierra Leone in the late 2000s and the mission has recently prepared for the opening of additional cities to missionary work, such as Makeni and Moyamba. Full-time missionaries reported that both Makeni and Moyamba almost opened to full-time missionaries in early 2011, but area leadership recommended that mission efforts be concentrated on establishing stakes in Bo and Freetown prior to opening additional cities to proselytism. Notwithstanding continuing delays in expanding national outreach due to administrative and activity challenges in established church centers, mission leadership has broadened its vision for expanding outreach and in early 2011 announced to full-time missionaries serving in the mission that between one and two dozen new branches would be organized in Sierra Leone and Liberia within the coming year. Prospects for expanding outreach in the medium and long term are favorable as Sierra Leone services the smallest population of any African mission of less than ten million and receptivity has been historically high.
LDS mission outreach will face significant challenges proselytizing the rural population. The majority of the population resides in small cities, towns, and villages inhabited by less than 10,000 people. Transportation difficulties, tribalism, a lack of LDS materials in native languages, and few or no members residing in these locations create barriers to outreach outside the largest cities.
Member Activity and Convert Retention
Many are willing to listen to what the missionaries teach but overall struggle to remain active and develop lasting church attendance. One missionary in August 2009 reported that the branch he was serving in the Freetown Sierra Leone District had 400 members but fewer than 100 attended Church meetings regularly. Low member activity and poor convert retention have contributed to the lack of a stake in Sierra Leone. High membership growth rates are less impressive when poor convert retention and low member activity rates are considered. The small increases in the number of branches which has not kept pace with nominal membership growth since the early 1990s demonstrate the severity of the retention problem. The new mission was created partially to address Sierra Leone's worsening member activity and convert retention problems and in 2010 and 2011 noticeable results were forthcoming as evidenced by an increase in active membership and congregations and stable seminary and institute enrollment numbers. Emphasis on establishing weekly church attendance and personal gospel living habits will be required to improve member activity and convert retention over the long-term.
Ethnic Issues and Integration
Sierra Leone is the African country with the highest percentage of Muslims (77%) with an LDS mission. Unlike many other African nations, there appears to be little violence or conflict between Christians and Muslims in Sierra Leone. Despite the small number of Christians in the country, the Church has made many converts. These converts do not come from just one religious group and consist of fellow Christians, Muslims, and followers of other religions. LDS missionaries have not reported significant ethnic integration issues notwithstanding chronic ethnic conflict in Sierra Leone. Expanding LDS outreach into rural and northern areas may increase the likelihood of ethnic tensions presenting at church.
Only the Articles of Faith have been translated into Mende. No other LDS materials or scriptures have been translated any into any indigenous African languages languages. English is the official language of Sierra Leone, but its use is limited primarily due to low literacy rates. Foreign missionaries learn and speak Krio while serving in the country. As Krio is widely spoken across Sierra Leone, it is the most likely candidate for future translations of church materials and scripture. Additional church material translations into Mende appear likely in the coming years due to church growth in Mende-speaking areas such as Bo and Kenema.
Sierra Leone is nearly self-sufficient in its missionary manpower, but many local members serve elsewhere in West Africa instead of in their home country such as Nigeria. In early 2011, approximately half of the full-time missionary force was North American. 89 local members were serving full-time missions by year-end 1993, 41 of which were from the six branches operating in Freetown at the time. North American full-time missionaries and African missionaries regularly serve in Sierra Leone. There have been some reported challenges training and preparing male LDS youth to serve missions and return honorably. Maintaining high rates of seminary and institute participation will facilitate higher rates of missionary service.
Missionaries serving in Sierra Leone point to difficulties in developing strong, educated local leadership as well as problems with convert retention and low member activity. Despite these problems, active members of the Church in Sierra Leone provide great service and support to full-time missionaries and the overall functioning of the Church. Full-time missionaries frequently report on the high level of involvement of local branch missionaries in teaching and fellowshipping investigators, recent converts and less active members. Many Sierra Leonean members serve missions with some branches having over half a dozen members serving in the mission field at a time. It does not appear that the widespread corruption in the country has significantly impaired the functioning of the Church. Missionaries have reported several instances of some local leaders dealing haphazardly with church finance responsibilities. These issues were resolved quickly by area, district and mission presidencies. In recent years, the number of returned missionaries has increased, greatly strengthening leadership manpower and training but their numbers remain too limited to justify the organization of stakes at present.
Sierra Leone is assigned to the Accra Ghana Temple district. One of the first groups of members to go to the temple attended the Ghana Accra Temple in May 2006. A total of 42 members from Freetown and Bo participated and 18 couples were sealed. Temple trips occur infrequently due to distance to the temple and travel costs. There are no realistic prospects for a prospective temple to be built closer to Sierra Leone in the foreseeable future due to inadequate local leadership, low member activity rates, few total members, and economic difficulties in the region.
Membership growth rates and seminary and institute enrollment for the LDS Church in Sierra Leone have been comparable to other African nations in recent years although convert retention, member activity, and congregational growth rates have been considerably lower and comparable to Liberia. The Church in Sierra Leone had the fourth most members without a stake in 2010. The percentage of Latter-day Saints in the general population is among the highest in Africa and comparable to Ghana.
Other outreach-oriented Christian faiths experience similar obstacles in Sierra Leone. Jehovah's Witnesses have a small membership in the country and claimed about 1,800 active members in 33 congregations in 2008, baptizing about 100 new members a year. Seventh Day Adventists numbered 17,151 in 52 churches for the same year. Only one new Adventist congregation was created between 1997 and 2008 during which time SDA membership in Sierra Leone increased by 5,000. Comparing the results of other church's missionary programs with the LDS Church reveals that even some of the most organized and successful Christian churches in Africa experience high convert attrition in Sierra Leone. Problems with the people in Sierra Leone actively participating in Christian churches may be linked to low literacy, extreme poverty, and a culture which is dominated by Islam.
The outlook for future LDS Church growth is favorable due to the organization of the Sierra Leone Freetown Mission in 2007 which has emphasized higher standards for convert baptisms, provided more consistent leadership training, and recently held a vision for expanding national outreach in the coming months and years. The establishment of stakes appears highly likely in the near future in both Freetown and Bo. In 2011, full-time missionaries reported short-term plans for the organization of a district in Kenema. Missionaries report that several branches may soon be organized in Makeni. Additional cities will likely open for missionary work within the next decade, but outreach will likely not occur in rural areas for many more years. The Sierra Leone Freetown Mission may administer currently unreached nations in West Africa one day, such as Guinea and The Gambia. Poverty, low levels of religious commitment, and mediocre literacy rates will continue to limit growth and present persistent challenges toward establishing long-term, self-sufficient leadership.
"Background Note: Sierra Leone," Bureau of African Affairs, 8 March 2011. http://www.state.gov/r/pa/ei/bgn/5475.htm
"Sierra Leone," International Religious Freedom Report 2009. http://www.state.gov/g/drl/rls/irf/2009/127254.htm
"Sierra Leone," International Religious Freedom Report 2009. http://www.state.gov/g/drl/rls/irf/2009/127254.htm
"Sierra Leone," Country Profile, 2 April 2011. http://newsroom.lds.org/country/sierra-leone
Thomas, President Kent; Thomas, Sister Carolyn. "Church, tribal leaders pleased with start of first chapel in Sierra Leone," LDS Church News, 6 November 2004. http://www.ldschurchnews.com/articles/46437/Church-tribal-leaders-pleased-with-start-of-first-chapel-in-Sierra-Leone.html
Thomas, President Kent; Thomas, Sister Carolyn. "Church, tribal leaders pleased with start of first chapel in Sierra Leone," LDS Church News, 6 November 2004.
Weaver, Sarah Jane. "Measles initiative continues to fight disease in Africa," LDS Church News, 30 September 2006. http://www.ldschurchnews.com/articles/49498/Measles-initiative-continues-to-fight-disease-in-Africa.html
"Projects - Sierra Leone," Humanitarian Activities Worldwide, retrieved 30 April 2011. http://www.providentliving.org/project/0,13501,4607-1-2008-117,00.html
"Sierra Leone," Country Profile, 2 April 2011. http://newsroom.lds.org/country/sierra-leone
Gunnell, President Grant. "Joy at Ghana temple," LDS Church News, 27 May 2006. http://www.ldschurchnews.com/articles/49002/Joy-at-Ghana-temple.html | <urn:uuid:66ab755f-7619-4beb-802d-1640553f21da> | CC-MAIN-2017-17 | http://cumorah.com/index.php?target=view_country_reports&story_id=79 | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121153.91/warc/CC-MAIN-20170423031201-00247-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.955025 | 4,902 | 3.25 | 3 |
Science, Art, Litt, Science based Art & Science Communication
PART 1 - Introduction
Science is not finished until it’s communicated.
If you can't explain it simply, you don't understand it well enough - Einstein
Doing hard research in fort-like labs that are inaccessible to the outside world is one side of science. Communicating it in a manner the world can understand and benefit from is a different ball game altogether.
Scientists publish their work in science journals with all the data and statistics, in a language that seems like Greek and Latin to the man on the street, even if he is literate. Then think about the situation of illiterates. To a large section of people, these science journals don't exist at all!
These journals are used by scientists to communicate their work only to their colleagues in their field.
Usually, transferring complex science concepts from the lab to the ordinary world in a manner that makes sense is done by science journalists. They convey it the way they understand it, because even for them the jargon and data are hard to grasp, and they depend on scientists' explanations to communicate the difficult subject. Miscommunication therefore takes place (Ref. 4), with the result that science is being misunderstood and even hated by some. People feel a disconnect with science in the West (Ref. 10), and to some extent in the East too: from the common misconception that evolution is a theory that says human beings descended directly from monkeys, to the worry that physicists in Geneva might suck the universe into a tea cup, or something uncomfortably smaller, the unsubstantiated fear that the Large Hadron Collider, used to study subatomic particles, might create a black hole. Some think science is responsible for all the ills we face in the world now. One third of Americans reject the theory of evolution (Ref. 13, 15). A move is afoot to keep climate science and evolution out of classrooms in the US (Ref. 10). And on several major issues we face, the views of the public drastically differ from those of scientists in some parts of the world (Ref. 17). Despite the tremendous progress brought by science and technology, many people, irrespective of their literacy, remain entangled in blind and superstitious states of mind as in the dark ages (Ref. 7, 12, 15), proving that the communication system has failed to a large extent. Moreover, the influence of politics (Ref. 2, 3, 6, 8, 16) and the commercialization of the fruits of science (Ref. 1, 14) are taking their toll on both scientific research and journalism, with the former dancing to the tunes of its mentors (Ref. 14) and the latter falling prey to conflicting stories.
This leaves ordinary people to deal with the chaos themselves, driving them to question the integrity of science.
It's important for lay people to have some understanding of the science involved in the important problems we face right now, like climate change, antibiotic resistance, and vaccine safety, so that they can make the right decisions and cooperate with governing bodies. Unfortunately, coverage of scientific topics in the mass media all too often oversimplifies, fails to provide adequate context, and in some instances is downright wrong. Science can be pretty off-putting if it gets all tangled up in jargon and sounds tough and impenetrable to the average person. The communicator really has a job here: to be an effective articulator of what the point is, what the progress is, why it matters, why it's exciting, and how it could be helpful.
Science is communicated by journalists in two ways: science "journalism" (contextualising, investigating and, at times, challenging science) and mere science "communication" (a public relations exercise brought directly from the scripts of scientific institutions).
Some claim that reporting science is particularly difficult compared with other fields of journalism, or that it is bad, because of special properties that science but no other discipline possesses: the scientific method and peer review. For good science journalism to happen, journalists say they must stay at arm's length from their sources. Failing to remain one step removed runs the risk of turning a piece of journalism into some drippy, flaccid piece of science communication. What a journalist should really be doing when reporting science is asking questions and deflating exaggeration. But do scientists have vested interests in how their work is portrayed in the media? To some extent, yes. Practically any story has the potential to affect a scientist's reputation or his or her next grant application. Journalists, on the other hand, must try to be independent if they are to be credible. Scientists feel journalists should get the science right in their articles and let them look at the copy before publication to ensure accuracy. As outsiders, the media can be irresponsible in their reporting, sensationalizing issues like the GM crop stories. Sometimes research is applied out of context to create dramatic headlines, push thinly disguised ideological arguments, or support particular policy agendas. Scientists who demand to see a draft, or journalists who let them, may be doing so with the best of intentions. But does it betray the reader or the viewer? Reporters will give a story an angle with their reader or viewer firmly in mind; sometimes they give it a spin to sensationalize it. The biggest issue is that the media often purposefully produce rubbish science stories when it suits their agendas (Ref. 9). This is abuse, and there needs to be some form of policing to stop it!
For instance, one journalist wrote very interesting stories claiming that intuition and other non-scientific methods were being used by scientists. Some artists who read them thought this was true and argued with me that such practices were universal and critical to scientific research! I was shocked to hear such nonsense being spread by journalists. What scientists actually use is 'educated guessing' or 'informed imagination', which is different from ordinary 'intuition'. The imagination of a scientist is based on reality. If journalists give the workings of scientific methods a spin to suit their write-ups, it is bad science journalism that leads to misunderstandings. Unlike for journalists, the reader or viewer is not a scientist's first concern. As a result, researchers can often suggest changes that would flatten the tone, or introduce caveats and detail that would only matter to another specialist in their own field of research. Scientists are more concerned about facts and correct representation, while journalists, apart from correct presentation, also think about mass appeal and the sales of their journal or paper.
The relationship between scientists and journalists remains difficult, sometimes even hostile. There are complaints on both sides: scientists doubt the ability of journalists to report accurately and responsibly on their work, while journalists complain that scientists are bad communicators who hide behind jargon (Ref. 11) and can therefore confuse them, which ultimately leads to bad reporting. Journalists need digestible headlines that convey simple, accessible, and preferably novel lessons; the scientific method stresses a slow accumulation of knowledge, nuance, and doubt.
But scientists should realize that at times in a career it can be extremely important, perhaps even critical, to have a good relationship with a few key journalists, especially if they cannot communicate their work properly themselves.
Bad science journalism also comes from an inability to make sense of statistics and scientific data. Do journalists read the primary source? Many report without a basic understanding of the techniques being used (a little research here benefits everybody) or a grasp of statistics. One thing science journalists can do to improve the quality of their work is, whenever something seems off, to ask relevant scientists to check whether the facts in the story are accurately described. Because there is one special property that science but no other discipline possesses: it is extremely complicated, and the gap between common knowledge and new scientific findings is ever widening. Bad stories are those where reporters get the facts wrong because they don't know what the facts are. The danger of losing the facts in translation is what worries the scientific community the most.
Stories, especially the big ones, should have some form of fact checking performed on them prior to publication. Journalists can get a story checked by another scientist who knows the subject and who isn't associated with the scientist or the paper being reported on. Some journals do a good job of this, and you often see quotes attributed to scientists not involved in the study passing comment as part of the news story. However, the majority of newspapers and journals, caught up in the rat race, want to publish the story first without checking the facts. It is very easy to write things better than scientists can, but in ways that subtly or not so subtly alter the meaning. Running the copy by a third party is a useful compromise: the science gets checked without giving up journalistic principles. If something sounds odd, or a scientific claim just sounds too bold, we expect reporters to question it and to check with independent sources whether it stands up. It's unrealistic to expect any journalist, however scientifically literate, to have expert knowledge of every field of science, so there is nothing wrong with contacting a person in the field to check that the coverage makes sense. Journalists should collaborate more with actual scientists. Better still, journalists should themselves try to specialize in science subjects.
Journalists say they have deadlines to meet and cannot take the time to verify facts. One journalist told me his editor says, "If you can't write 500 good words an hour, you're in the wrong business." And I told the journalist: if you can write 500 science words an hour, you are in the wrong field! You chose the wrong subject! Even the most experienced science writer is not an expert in all areas of science. You have to check and recheck facts. Scientists take years to produce a paper; can't you take even a few days to communicate it? I want to tell these media people that deadlines are death knells for science communication. Rat races kill their efficiency as science journalists.
A journalist who deals with science once asked me," If a science writer calls you up and says: 'Dr. C, I write for Y publication, and we would like to feature a precis of your paper that appeared in this morning's issue of the Journal of Last Resort. My editor gave me a copy of your paper a half hour ago, and my summary is due in an hour and a half. Could you please answer the following questions about your paper and refer me to someone else in your field who could comment on it now,' what would you say?" And my reply to her : "I would just say, 'sorry, wrong number' and hang up! Nothing annoys a scientist more than dead lines." I prefer to have no article on my work than a bad article sculpted by a dead line because I am from the life sciences and a badly written article might harm the people who read it!
Here is a gem of a quote from a scientist: Journalists take liberty with my articles in a manner that is not a slight "mishap" but an attempt to sensationalise. Everywhere in the world but more so in Africa where people may not have other resources such as books, TV or internet to counter check the info given on newspapers, such liberties at time have more than just an annoyance factor for the scientist, they actually have life and death implications...think MMR, and other anti-vaccine stories based on misquotations or poor synthesis of research information. So as a journalist in your rush to avoid being killed by your editor think how many readers you might actually harm with the article...deadlines or dead readers ...the choice is yours!
I will give another example. When Indian Space Research Organization launched Mangalyaan, its Mars Orbiter, recently, all the news papers just quoted what the scientists said during the launch, copied from ISRO's site a few details and published them the next day. I took one week to write my article and post it here, after doing thorough research on it and people told me my write up was the best they came across on the subject! Need I say more?!
And some of the things science journalists do - which might not be deliberate but still- can make people understand things differently from the way scientists want they should be understood. For instance, in their effort to "hear both sides of the story," professional journalists have contributed to the misconception that there is a "debate" among climate scientists over anthropogenic global climate change. That "debate" really exists only in the misguided minds and resulting headlines, and here is why: If a journalist tries to "balance" a quote from one of the vast majority of scientists who agree on climate change (97% according to scientific studies, ref5) with one coming from the tiny minority of those who don't (just 3%), he or she creates the wrong impression that the scientific world is equally divided about the issue. No journalistic training, only brains, can protect from such blunders.
Let us watch a funny video to really tell the world how it should be done:
Some media people don't even bother about educating people regarding scientific explanations of things happening around the world and breakthroughs because 'science' doesn't increase the readership, viewership or TRPs of the media. So they think - why spend time and space on it?
Therefore, Scientists should make more of an effort to do pieces themselves for popular media, more regularly if they want correct portrayal of their work. Some of the best blogs and stories written these days are done by real scientists. They are creating art works based on their own work. Making videos and movies is the method followed by some. I am glad scientists themselves are coming forward now to communicate with the people outside and art is being considered as one of the important tools to use in this process. Quite a lot of discussion is taking place lately in the Scientific community about the need for Scientists themselves to come forward and share their knowledge and in ways that will reach more people.
It is difficult sometimes for scientists to understand how the world sees what they see. They get entangled in scientific jargon, think and work at a different level and fail to see from the angles of ordinary people. This is because they get several years of specific and special training in the subject to deal with the complexity of science. The training turns them into experts to deal with highly complicated subjects, data and the jargon. Sometimes the jargons don't even have words to describe in common language. It becomes inconvenient and highly demanding for the scientists to deal with communication. So opening a dialogue is really important. Only when the scientists deal with the world outside of theirs, they can understand the problems faced by people in understanding them and their world and how close or far away they are from them. Then they can do full justice to their work by delivering the themes in the way the world wants. Scientists are really facing some problems in communicating with others, but they are trying to overcome them. I wrote an article on how scientists should communicate with laymen based on my experiences. You can read it here: http://kkartlab.in/group/some-science/forum/topics/how-scienitsts-s...
I write on science topics and even stories to remove misconceptions about science I come across while dealing with people. Some of the false notions prevalent among the ordinary people are really shocking to me. Some human beings have very closed minds that are too difficult and time consuming to open. We get entangled in arguments that are quite unnecessary. Scientists will not have have so much time to waste in them. But that again shows the gap between the scientific world and the ordinary world. Now we are trying to close it. But what is the best way to do this is the issue before the scientists right now.
Scientists representing their own work in the visual communication of science is one way of doing it or working in general on science themes and science culture is another aspect. I do both text and literature and art communication of science. The former in the form of articles, stories and poems and the latter in the form of paintings, installations and videos.
Art helps science in communicating the theories, concepts, facts in a better manner. Even an illiterate person can understand science when it is showed in a picture form. A scientist knows what s/he wants to communicate therefore will be in a better position to put his/her work in a picture form. I feel when scientists are doing this, they should try to simplify things so that there won't be any communication gap between scientists and non-scientists. Some of my artist friends advised me to make my art works complex as I try to make them as simple as possible.. According to them there is no need for common people to understand art! But I disagree with them. Science is a complex subject and if you make it more complex people won't be able to understand so much complexity and move away from them and the whole purpose of communication will be lost.
Several of my colleagues in the scientific community all over the world are strongly supporting me in the way I communicate the science concepts with well balanced themes in the form of art ( You can see my work on my website: http://www.kkartfromscience.com/ ). I am glad more and more scientists are coming forward to try this method and able to do this with ease. If journalists are not bothered about science communication or good science communication, yes, scientists will have to do this work themselves.
"Telling people about science is just as important as conducting the science".
Dr. Krishna Kumari Challa's poem on "Science Communication"
From the group (Art- literature-Science Interplay)
Science communication, science communication, science communication
An useful tool that converts difficult to understand things into easy translation
Brings in human beings many a right vibration
Communicators are people who guide this beautiful mutation
Yet other times cajolingly,
Using metaphors freely,
Making people trust science merrily!
If science communicators fail to convince,
In order to solve the problems we face
There is no other go but to use force
The field that gets maligned in this process is Science!
Communicators have a difficult role to play
Art, literature, text, speeches and plays are the methods to sway
Whichever route used to convey
Science messages should reach the masses every way!
Copyright © 2012 Dr. Krishna Kumari Challa.
All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
One of the things science journalists can do to improve the quality of their work is something you think is bad: ask relevant scientists to check if the facts in the story are accurately described.
science journalism is always going to be a tossup between accuracy and impact. Science is not aimed at being media friendly and hardly ever gives you a punchy take home message.
Perhaps the answer is context, if sci-media articles were to provide a nice serving of scientific background in their articles it may make it easier for readers to put the research into context.
Although this may limit the audience, mistakes may be less likely and (assuming the authors are skilled in expressing the science behind the findings) readers may actually come away feeling better about the article and generally happy that they understand the science and aren't just aware of the findings.
People promoting pseudoscientific cures will take science articles and plagiarize them acting as if it was their own work, or contact a scientist that wrote the article allegedly for the purpose of fact-checking, and then the citizen journalist(s) publish the "spinoff" article full of inaccuracies which they immediately copyright as their proprietary intellectual property.
An experiment with 243 young communication scholars tested hypotheses derived from role congruity theory regarding impacts of author gender and gender typing of research topics on perceived quality of scientific publications and collaboration interest. Participants rated conference abstracts ostensibly authored by females or males, with author associations rotated. The abstracts fell into research areas perceived as gender-typed or gender-neutral to ascertain impacts from gender typing of topics. Publications from male authors were associated with greater scientific quality, in particular if the topic was male-typed. Collaboration interest was highest for male authors working on male-typed topics. Respondent sex did not influence these patterns.
This article says: A good science/health/environment journalist should read the paper if possible. It is the record of what the scientists actually did and what the peer reviewers have allowed them to claim (peer review is very far from perfect but it is at least some check on researchers boosting their conclusions).
Without seeing the paper you are at the mercy of press-release hype from overenthusiastic press officers or, worse, from the researchers themselves. Of course science journalists won't have the expertise to spot some flaws, but they can get a sense of whether the methodology is robust – particularly for health-related papers. In any case, very often the press release does not include all the information you will need for a story, and the paper can contain some hidden gems. Frequently the press release misses the real story.
The art of science: science communication:
Scientists are more interested in glorifying themselves than leaving things better than how they could find them. As we acquire knowledge, we climb a ladder that helps us see farther and observe unexplored territory. A new scientific creation demands several smaller steps, continually communicating findings to their collaborators. This may be envisioned as skipping the stairs and rather taking an elevator. The result would be that each paper will contain fewer questions and more answers. Our scientists are isolated in their researches. If communication truly erases distance in the scientific community, new fields are likely to emerge in the intersections between disciplines.
Scientists, even more than today, will have to learn how to work together. it is also a sociable affair. Communication between researchers in different fields is vital. Team work is one of the things that categorizes interdisciplinary.
In the last ten years, the rise of a variety of web-based and social media platforms has dramatically changed the role of the science journalist. No longer do people have to rely on traditional media to learn about a new issue, instead they can go online and visit a host of blogs to find the specific information that they want. This new media ecosystem, write Declan Fahy and Matthew Nisbet in a recent study, has greatly diminished the power science journalists previously held as “gate-keepers.”
Every single one of the big existential challenges we face in this century calls for better science, to identify the problems, and better technology, to identify the solutions. But the science won’t get done, and the solutions won’t get implemented, unless the general public is part of the process. And to be involved in a meaningful way, citizens need accurate information. That’s where science and technology writers come in.
Do we want consumers and voters to be prepared to make smart decisions that will contribute to rational policy changes? If so, we have to figure out how best to engage them and offer a wide range of compelling and accurate stories about science and technology.
If we, as a society, don’t broaden our basic research literacy — our scientific understanding of the way life works — then it’s very difficult for us to make common-sense decisions that allow us to take care of each other and our environmentally endangered planet. And beyond the save-the-world aspects — and, yes, they matter — I think a basic understanding of science accomplishes an essential something else. It reminds you that we live on the most fantastic, complicated, unexpected place. It just makes life more interesting.
A scientist explained how science communication should be done in a better manner:
At an event he attended, it seems, an audience member stood up and asked a question that had been all over the news in 2008, before a court dismissed a lawsuit alleging the dangers of particle accelerators: "Are you concerned that the Large Hadron Collider might create a black hole that engulfs the world?"
"No," responded one of the scientists present there. "This work is peer reviewed," he said, "and any talk of black holes is complete nonsense."
That seemingly condescending reply immediately set the wrong tone for the dialogue with the audience. The lecturer instead could have addressed this widely felt public fear and explained the facts refuting the threat.
"When you start telling people: 'We're experts. Don't worry about it,' that's the best way to turn off the public," says the scientist . "We have to open up that process somehow to the public. It's not a one-way street. It's also how the culture can more broadly enter into the debate of science and create a more socially robust science."
"As scientists, we tend to underestimate how esoteric we are," he says. "We deal with concepts and words that are just not part of general daily life. Very often we end up mystifying people, rather than engaging them. Just getting people familiar with some of the technology and concepts without trying to do science education I think is an important part of the cultural appropriation of science."
This is the point I have been stressing all the while! Now other scientists too are realizing it! - Krishna
The public and researchers complain of inadequate training for journalists. And there are few examples of innovative technology being used for dialogue among scientists, communications professionals and lay audiences — certainly fewer compared with the arts and 'culture' industries. Therefore, there are signs of increased demand for science journalism.
On science being challenged by religious fundamentalists, some educationists and polititians:
The problem is not that science is being challenged, it is what it is being challenged with. If logical questions are being posed, it is appropriate. If the challenge comes from restating ancient faith-based fables it is not appropriate, both from a scientific and separation-of-religion-and-state perspective. Plain and simple.
Science being challenged by scientists is the essence of science. Science being challenged by public school teachers with an agenda is the essence of stupidity.
Read the actual legislation of some states in the US which is actually not so bad. It only permits teachers to "help students understand, analyze, critique and review in an objective manner the scientific strengths and scientific weaknesses of existing scientific theories covered in the course being taught" and to encourage students to "respond appropriately and respectfully to differences of opinion about controversial issues," which include evolution, global warming, etc., so as to "develop critical thinking skills." It does NOT permit the teaching of ID or any other non-scientific theory, nor does it permit any teacher to refuse to teach any particular scientific topic. | <urn:uuid:bbe4d5e0-46f0-46e0-a937-adf71f8b44a0> | CC-MAIN-2017-17 | http://kkartlab.in/group/some-science/forum/topics/science-communication | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121644.94/warc/CC-MAIN-20170423031201-00073-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.958413 | 5,488 | 2.71875 | 3 |
[ Music ]
[ Applause ]
Hi, I'm Caroline Cranfill and I'm a designer and production manager here at Apple.
Today, I'm excited to talk to you about some principles of inclusive design.
When you put so much hard work and creativity into an app, you really want it to reach people.
But ultimately, everyone will experience your app in their own way and if you think about the millions of people who could use your app, you have to consider how diverse their perspectives are.
Some people are just learning how to interact with technology.
Others are digital natives who are completely comfortable with the latest and greatest.
And there's some who are just trying to get comfortable with the new way to communicate.
And some will have cultural backgrounds that are very different from yours.
And some will interact with your app with assistive technology like a Bluetooth Braille Display.
So when you're trying to ship an app, it always feels like there isn't enough time and there aren't enough resources and it only gets worse the closer you get to the end.
So that's why it's important to plan to make your app inclusive as early as possible in the development cycle.
The problem is that early in the cycle, everyone just wants to talk about the cool feature ideas they have.
So you need to convince your team to do the right thing.
And here are some statistics that will persuade them to make inclusiveness a high level feature.
Tim Cook recently announced that 67 percent of Apple sales in the second quarter were international.
Your app needs to be inclusive and flexible to reach this huge market of people with diverse backgrounds.
Worldwide, 285 million people are blind or have low vision.
That's like half the population of North America.
Over one billion people live with some kind of disability.
That's one out of every seven people on the planet.
Now disability and diversity may not be the first things you think about when you think about user experience.
But these numbers make it clear, these are the people that use your app and creating designs that work for them is at the heart of the user experience.
So, how can you make your apps more inclusive?
Fortunately, there are a small number of key principles that I have found to be helpful.
If you incorporate these principles into your process, you will have more informed designs from the beginning.
And in the end, your app will not only work better for more people, but you may also save time on localization and QA.
So today, I'm going to share some concepts that I found eye opening and now keep in mind when I'm designing.
I'll discuss how to use typography to maintain hierarchy in different localizations.
I'll show you why layouts need to be dynamically built to accommodate translation lengths, screen sizes and dynamic type.
And we'll discuss emotions and legibility considerations when making design choices with color.
And we'll go over certain cultural sensitivities around iconography.
So first, we'll talk about the need for impactful typography for everyone, regardless of language.
Because the most fundamental objective of app design is to communicate information clearly and we largely do that with typography.
So let's look at a quick design exercise together.
One of the ways you know what is most important is by using basic typographic treatments and compositional elements to give visual cues of hierarchy.
You can use extra space between different pieces of information or try different font sizes.
Or you can explore typographic styles like font weights or character styles like capitalization or italics.
Or start adding color and before you know it, you have so many good options.
And at this stage, it's an excellent opportunity to preview your favorite options in the other languages you are going to localize into, because you want every localization to have the same impact as your original intention.
In this example, the hierarchy isn't as strong in the title and subtitle of the Chinese localization because the Chinese writing system doesn't have the concept of upper and lowercase.
In the second design, the title additionally has a larger font size and color from the subtitle which helps maintain a clear hierarchy in Chinese.
So out of my two favorites, this second design is the most robust choice because it creates hierarchy through using two typographic treatments, both localizations can support, which is font size and color.
I've found that once a design is going to be translated into multiple languages, having two to three levels of change will be more inclusive just in case a treatment isn't applicable in certain languages.
And sometimes, designing with constraints like these can help come up with a better end solution for everyone.
Here's another example at Airbnb.
They use font size, color, and extra space between information to maintain hierarchy across localizations.
And so now for you to have more background knowledge and feel informed to design for other languages, I want to dive a little deeper about a few topics.
First, it's important to remember that around the world, we speak different languages and use different alphabets.
So, I want to introduce you to some writing systems that you may not be familiar with and how hierarchy can be lost if you aren't aware of their typographic support.
So everyone here knows this writing system called Latin.
But you may not be familiar with characters in the extended Latin character set such as these.
And you really might not be familiar with all ten writing systems that are used in over forty languages or localizations that we ship.
A few typographic things that you might not be aware of is that some writing systems do not have the concept of upper and lowercase as I mentioned earlier.
Also, that same subset of writing systems does not have an italic appearance, and it's not good practice to force a slant on them.
So now you need to know that different fonts for these writing systems have different numbers of font weights.
Currently these are the font weights in the system font, San Francisco.
But you can see that the system fonts for all writing systems at the moment do not have the same number of font weights.
So this means your designs using extreme weights may fall back to the reduced set.
And this can be unexpected and sometimes lose hierarchy or emphasis.
The next consideration I would like to talk about is using larger text sizes, because they are more universal across languages and user capabilities.
So let's start with a good question which is, "What is too small?"
Maybe everyone here can see this bottom letter on this enormous screen.
But on a smaller device, it'll be too hard to read.
Legibility is really going to depend on what device you are targeting because the standard viewing distances are different.
But, even if the letters are legible, it doesn't mean it's going to be good for reading content.
Let's look at a non-Latin writing system like Chinese.
It gets hard to read at a larger point size than Latin does, because rendering dense writing systems too small can reduce the clarity of letter forms.
The meaning can be misconstrued or lost because the strokes blend together.
You want people to read their content easily and effortlessly.
So leaning toward larger font sizes will make the content more readable for everyone.
Now one more thing about typography.
Some writing systems are considered to be tall because they have tall letter forms.
As you might have noticed in the writing system overview, some of the examples have extra marks that we don't use in English.
These are called diacritics, vowel marks, or tone marks, and sometimes they're extremely high or low.
For example, in this Thai font it can have ascenders much higher than Latin characters.
And in these Arabic and Hindi fonts, they have descenders much lower than Latin characters.
And you should also be aware that in some code implementations, characters that draw outside the ascender and descender boundaries of a font can be cut off in text views if clipsToBounds is true for the label.
This should be off, because this is bad.
Missing marks can change the meaning of the word entirely.
So, also in design comps, you should avoid assuming fixed heights for texts because the letter form is going to vary in different writing systems.
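The clipping failure mode described above can be sketched as a toy model. The metrics below are hypothetical numbers, not real San Francisco or Thai font values, and `wouldClip` is an illustrative helper, not a UIKit API:

```swift
import Foundation

// Toy model of the clipping problem: a label whose height was
// hard-coded for Latin text will cut off taller writing systems
// when clipsToBounds is true. Metrics here are made up.
struct FontMetrics {
    let ascent: Double   // distance glyphs extend above the baseline
    let descent: Double  // distance glyphs extend below the baseline
}

// Returns true if a label of the given fixed height would cut off
// glyphs drawn with these metrics.
func wouldClip(metrics: FontMetrics, fixedLabelHeight: Double) -> Bool {
    return metrics.ascent + metrics.descent > fixedLabelHeight
}
```

A height that comfortably fits Latin metrics can still clip a tall writing system, which is why heights should come from the text, not from the comp.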
So you might be thinking how does this affect my layouts?
So let me show you a quick example of how labels will adjust automatically for these tall writing systems and code.
If you look at this composition consisting of three separate Latin text-size labels, and you can see in a tall writing system it requires a larger line height to avoid clipping.
And here they are, side by side.
And you can see that Thai text is longer and it's been allowed to expand appropriately.
And it's important that you don't restrict the height of the view so this expansion can happen because letter forms could overlap or be illegible.
And later, we'll explain how to further make these spaces between labels increase dynamically to accommodate Dynamic Type.
So some things to remember about typography that you need to make sure designs will retain the same meaningful purpose in other languages by preserving hierarchy, by choosing universal character styles, or by having two to three levels of change in case one isn't applicable in one of the localizations.
Use font sizes that will be easy to read across all of your localizations and capabilities of the people who will use your app.
And don't restrict drawing of views to their bounding boxes.
So now we've looked at a little bit of typography.
For more information about typography, be sure to check out my friend Antonio's session tomorrow morning called Typography and Fonts for even more details on typographic treatments and layout considerations.
But now, let's talk about dynamic layouts.
We've talked about the appearance of words from all over the world, but now let's talk about how they come together in layouts.
An Auto Layout can really help you implement these considerations, but I'm not going to go into detail about Auto Layout in this session.
I want to focus on why having flexibly built layouts helps on so many different levels.
It helps with localization, adapting to different screen sizes and for visual impairments when type is able to scale.
So first, let's see how dynamic layouts help with varying translation lengths.
As crazy it sounds, the shorter the English words are, the longer the translations are going to be.
The word edit in English is four characters but in Russian, it's twice the size.
Just kidding, it's thirteen characters long.
If an English word is less than ten characters long, the translation can be three times longer or more.
A seemingly concise English phrase may be two times longer in another language.
And an English sentence can be 1.3 times or longer and might need to wrap to more lines.
So this variation means that the amount of maximum text lines should not be strictly defined and text should be allowed to reflow by having a flexible layout.
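Those expansion rules of thumb can be turned into a quick budget check when you review a layout. The multipliers below are the rough ones just quoted in the talk, not measured per-language data, and the character thresholds are assumptions for illustration:

```swift
import Foundation

// Rough localization expansion estimate: short English words
// (< 10 characters) can triple, short phrases can double, and full
// sentences grow by roughly 1.3x. All numbers are rules of thumb.
func estimatedTranslatedLength(forEnglishLength count: Int) -> Int {
    let multiplier: Double
    switch count {
    case ..<10: multiplier = 3.0   // single words, e.g. "Edit"
    case ..<40: multiplier = 2.0   // short phrases
    default:    multiplier = 1.3   // full sentences
    }
    return Int((Double(count) * multiplier).rounded(.up))
}
```

For example, a four-character word like "Edit" should budget for roughly twelve characters, which is close to the thirteen-character Russian translation mentioned above.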
You need to allow enough space for translations to expand, especially if there's no other place to see the words in full.
You would hate for the end of a very important message to be lost.
Titles and other key texts of apps must not be truncated because it can lose meaning or functionality of the app.
So those expansion stats were eye-opening and I hope it is for you too.
I can't stress enough that layouts need to be overspecified for what you aren't seeing in the language that you are designing in.
Let me show you an example of a dynamic layout using Auto Layout looks like in a few different languages.
Here's this What's New in Photos screen that was created to be dynamic.
In Chinese, it's actually shorter than English and in German it's much longer and would start a Scroll View on an iPhone SE.
And here is Thai with the tall letter forms and extra line height.
And here's Arabic which is also tall and a right-to-left layout.
It's best to look at each screen after implementation in each language that you're going to localize in so you aren't missing something that will create a bad experience.
Accommodating translations should be the next step in making your app inclusive, and it will earn an immediate thank-you from the people who use your app.
But I have one more reason why you need dynamic layouts, and that's Dynamic Type.
This feature might be one of the most important features that you could add to your app if you haven't already.
In settings, you have the ability to scale text.
This enables a broad audience to personalize their devices and be in control of their own legibility.
But it also allows them to choose how they consume information.
Small text sizes may produce dense content.
And larger text sizes could be more focused.
This is a highly used feature.
In fact, who here uses a text size feature on their devices?
Nice. So let's make this experience consistent across all of our apps.
And you also might not be aware, but in accessibility settings, people with low vision can get even larger sizes for body copy.
And here is Ulysses.
They do a great job scaling their UIs dynamically.
So, how do I specify a design to be dynamic?
First you start by specifying different text styles for semantically distinct blocks of text such as title, headline, body, etc. for the platform you're developing for.
This gives ample variety to achieve a nice layout hierarchy and today I've got a present for all of you when making designs with text styles.
The font size, leading, and tracking values for each style are now published on the iOS Human Interface Guidelines in the typography section.
And you will be able to download a working Photoshop file.
[ Applause ]
Okay, so, let's go into detail, I'm focused for a minute on an iOS example.
These are the font styles at the sizes they appear at for the default text size setting.
When the user slides the slider smaller and larger, these other columns are the sizes that the fonts will appear at.
You can see how the font sizes get smaller and larger, both up and down in text styles, and left to right in the text size slider.
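As a reference while designing, the default-setting column of that table can be captured as a simple lookup. These point sizes match the values published in the Human Interface Guidelines for the default (Large) content size at the time of this talk, but double-check them against the current documentation before relying on them:

```swift
import Foundation

// iOS text-style point sizes at the default (Large) content size
// setting, per the HIG typography tables. Keys mirror the UIKit
// text-style names; sizes scale up and down with the user's setting.
let defaultTextStyleSizes: [String: Double] = [
    "largeTitle": 34, "title1": 28, "title2": 22, "title3": 20,
    "headline": 17, "body": 17, "callout": 16, "subheadline": 15,
    "footnote": 13, "caption1": 12, "caption2": 11
]
```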
And here's an iOS 10 mail message.
And I'm going to show you some behind-the-scenes information about how to use text styles to make this design dynamic.
First, you will need to assign a text style to each text area.
The sender and the subject are the most important labels and will be headlined so they're prominent and bold.
The To field and the time stamp are less important details and will be subhead.
And then the message body will be body.
Then the next step, you will need to add a baseline measurement to express the relationship of each label to other elements around it.
And in the dynamic type APIs, you'll use these values to create a ratio to scale the spaces proportionately to the font size as it changes.
These are not fixed baseline-to-baseline measurements.
The bottom space under the type also needs to be specified dynamically so that the text cluster will remain vertically centered in the header and the other measurements will follow similarly.
The next step, you'll specify or enforce the standard layout margins.
It's important to at least use the standard width from UIKit because the actual value is different on different devices.
You want to be consistent with other apps.
And lastly, we'll add some padding between horizontal elements.
So now together, with the text styles, baseline ratios, margins, and padding, the layout is ready to be coded dynamically.
And so now, let's watch this design scale.
So, ta-da, this is the mail message screen, growing from large to extra, extra large.
There's also another legibility feature that you will get for free if you use text styles and that's called bold text.
This helps people who would otherwise have a difficult time reading lightweight fonts.
And additionally, it also adjusts for tall writing systems.
So sometimes, with the 12.9-inch iPad Pro, paragraphs of texts, or large blocks of texts, can often appear too wide for the reading experience if the layout margins were coded as fixed values.
So when you're reading a long line length, it's hard for your eye to jump back to the beginning of the next line.
So in iOS 9, we introduced the readable content guide property of UIView to respect a suitable line length for reading based on the size of the font which makes the margins dynamic instead of fixed.
And now, so the margins adjust when the text size setting changes.
This way, the line length remains favorable for each font size.
Also, layouts need to be further dynamic for right-to-left writing systems such as Arabic and Hebrew because they read from the right side of the screen to the left.
These languages require the UI to be mirrored.
They've done a great job at their right-to-left layout.
If using Auto Layout's leading and trailing attributes, the flow and elements will easily mirror.
However, it's important to make sure that some things do not mirror.
They are presented left-to-right even in the middle of right-to-left phrases.
You should not mirror phone numbers, clocks, music notes, graphs, and video playback controls.
Video progress sliders still increase from the left to the right and rewind always points the triangles facing left and fast-forward to the right.
And images do not need to be mirrored unless there is a sense of direction or design composition elements tying to other parts of the user interface.
So now let's internationalize an example together so you can see how this all comes together.
Here's a step one, two, three, and an onboarding flow in a hypothetical app.
If we were going to localize this for right-to-left languages, first we would mirror steps one through three with step one starting on the right.
And then of course, translate the text and numbers by making them localizable labels instead of being baked into the art.
But here, the story of the images seems to be pointing in the opposite direction as the numbered steps.
So we would want to mirror each image separately so the story of the images follows the steps.
However, we should not mirror the checkmark because checkmarks are written the same.
And now, the onboarding flow would feel natural for right-to-left languages.
So some things to remember about dynamic type is that translation lengths are going to vary, both shorter and longer, than the language you are designing in.
Dynamic Type is an awesome, highly used feature that allows scalable type.
And reability margins help keep columns and areas of type at nice readable line lengths.
And Arabic and Hebrew will require the UI to mirror.
So now we've looked at dynamic layouts and I hope your apps will look great to these size changes and new languages.
There are new additions to the APIs that will make it easier to implement, so check out these other great talks for more information.
The Auditing Your App for Accessibility highlights a new tool for previewing your app using the Dynamic Type settings and Making Apps Adaptive, Part 2, will give you a summary of the new Dynamic Type APIs and What's New in International User Interfaces highlights new support for handling right-to-left images with asset catalogs.
But now, let's talk about color.
Color is another fundamental design element.
There's a and there's a lot to consider when choosing colors for your app because they will bring emotions to the experience.
So first, it's important to recognize what emotions colors will mean to your particular audience in your context.
So let's talk about a few meanings of the three most popular colors.
Surveys have told us that blue might be considered the most favored color around the world.
And the second most popular color varies in countries but is usually green or red.
So first let's talk about blue.
Why does everybody like blue so much?
I mean, blue has a short wavelength on the spectrum and actually makes it less work on us to see.
And our eyes are trained to see the skies as generally blue because the short wavelengths are scattered more efficiently by the molecules.
FYI, that's why the sky's blue.
But also seeing some colors releases calming chemicals in your body and lowers the blood pressure.
However, universally, some shades of blue also portray sadness and loneliness likely stemming from an association with the vast and ominous oceans.
So now, let's talk about green.
It has a wide range of symbolic associations.
In the western cultures, it's heavily marketed as go green, live healthy, and reduce, reuse, recycle, as a natural eco-friendly way of life.
In Ireland, green commemorates the Patron Saint of Ireland on Saint Patrick's Day and is also known throughout the world as a lucky color.
Green is also a universal color for safety.
It's used in traffic light systems and road signs all over the world to indicate that it is safe to proceed.
So now, let's talk about red.
It's pretty consistent around the world that it stirs up both similar positive and negative meanings.
Red has a long wavelength on the spectrum which grabs our attention easily.
Therefore, in many parts of the world, red is a symbol of revolution and conflict.
It exudes passion and love.
And to most Asian cultures, red means happiness, prosperity, and good luck because of the association with the Lunar New Year.
And it's also worn at weddings.
In fact, red is so positive here, that the Chinese stock market uses red to mean gains and green to mean losses.
And the next color consideration is color blindness.
And you need to be aware that color blindness affects more people than you think.
It's actually one in 12 men and one in 200 women.
That's almost 5 percent of the total population of the whole world.
And all of these popular preferred colors have implications to color blind people.
It's not that these are these colors are confused with each other, but it's how much of blue, red, and green they have difficulty seeing in all colors.
For one kind of color blindness, this color palette may look similar to this.
This is why our platforms have settings to differentiate key information without color if needed.
Like here in mail settings, you have the choice between an orange dot or an orange flag for mail flagging because if you are color blind, you may not be able to tell the difference from the orange flagging dot with the blue and red dot.
So the additional shape option is good for people who need it.
And the next topic is about having high-color contrasts for legibility between backgrounds and text.
Contrast is important to test in your apps because you want people who may have contrast sensitivities as well as situational impairments like bright sunlight or wearing sunglasses to be able to use your app.
And to find what is high contrast, you need to calculate the luminosity ratio between two colors or one way of roughly testing contrast without calculation is to turn your file into grayscale.
You can quickly see if the colors will have enough contrast with each other in your app.
And soon, on the iOS Human Interface Guidelines in the resources section, you'll be able to download a helpful tool to calculate color contrast by using RGB values and it will also tell you if you're in compliance with common accessibility guidelines for different text sizes and weights.
But you can also use numerous online contrast ratio calculators.
If we calculate the contrast ratio for this purple and orange, it was found to be 1.5 to 1.
That is not in compliance with because it is a very low ratio.
In fact, 1 to 1 is the lowest ratio you can have.
Let's look at another example to see the highest contrast possible, and that's between black and white.
A contrast ratio calculator finds that the white and black has a ratio of 21 to 1 and of course it passes at all text sizes.
Now let's check out some gray text since it's such a fad.
The ratio comes in at 4.4 to 1 for this gray.
It's okay for large font sizes, but it's too low for small text sizes.
When font sizes are small and have low contrast, your eyes can't distinguish the shape of the letter forms to easily read the text.
Let's look at one more gray.
Maybe not on this screen, but on small devices, this is really hard to see, even at large sizes.
That's because the contrast ratio is 1.9 to 1.
And this fails at all text sizes.
You wouldn't want your app to have text or glyphs at this low contrast if you want people to see and be able to use it.
Ideally gray text should be reserved to indicate disabled or inactive states rather than ornamental or decorative style.
Now, have you ever debated whether you should use white text or black text on a color background?
Happens to me all the time.
If we use this calculator, we can see that white text on an orange button is a 2.2 to 1 contrast ratio.
And black text on an orange button is 9.6 to 1.
So in this particular example, choose black on orange.
It's a higher ratio.
Your texts and glyph colors should be in good contrast to the background color for optimal reading experience.
So some things to remember about color, some of the most popular colors can mean different things around the world.
Additionally, you should not rely solely on color to show the difference between similar shaped objects that have different meanings.
And you need a high color contrast between text and backgrounds for legibility.
And so now we've looked at color.
Let's talk about iconography.
It's important for you to know that iconography can have different meanings around the world too.
Semiotics shows us that the language of visible symbols is audience and context dependent and how cultural references and values shape the message.
For example, to be in order to be multicultural, the International Red Cross has three official symbols for protection during conflict.
They have the red cross for predominantly Christian regions, the red crescent for Muslim regions, and the red crystal as a neutral symbol.
These religious connotations of these simple symbols, risk the safety of people in conflict, so they adopted three.
So, when you design glyphs and icons, it's important to remember that different areas of the world might associate meaning with even the simplest of shapes.
Something as simple as a hand can have different meanings around the world.
In most places, it means stop or requesting a cheerful high five.
But in Hinduism and Buddhism, it's the hand position of Abhaya and it means no fear.
However, maybe you wanted to make a simple design decision to spread out the fingers of a palm for better clarity in a glyph, but an open, facing palm is actually offensive in some countries.
And I'm not going to show you that one today.
Additionally, non-directional and non-textual icons are going to be more universal because it does not rely on one region's alphabet or particular cultural objects' orientation.
For example, iBooks using an open book instead of a book showing the binding which is going to be different for right-to-left locals.
You don't want to alienate a group of potential users because they can't recognize a letter form or if something will feel backward.
Also, sometimes you create glyphs that mimic other parts of the UI.
For example, in right-to-left settings, the notification icon is mirrored to match the way it appears on the right-to-left Home screen.
The badge should be on the left of the glyph representing a badged icon.
So some things to remember about icons and glyphs, iconography can conflict with cultural norms or symbols can have different meanings entirely.
Iconography are ideally non-directional and non-textual.
And iconography should match UI if the UI was mirrored for right-to-left languages.
You want the meaning of your feature or app to be clear and used, not offensive, confusing and unused.
Okay, so we've looked at ways in which your app's UI could change to be more intuitive and appropriate for a culturally diverse audience, and in ways in which your designs can be all around more assessable to everyone.
But it's a lot to consider and sometimes it's hard to know if you're doing something that is insensitive.
So what I encourage you to do is to reach out to people in the countries you plan to make your app available in and ask them for feedback.
Ask them how they were using your app.
If it feels logical in their language?
Is everything easy to read?
And then I want you to ask yourself, "How can you be more inclusive?"
So to watch this talk again, please check out this address and to also find related resources for what I discussed today.
And it also has a link to the completely redesigned iOS Human Interface Guidelines.
It has been migrated to a new visual layout style with fresh imagery and streamlined graphics for designing great apps.
It's also been updated to include guidance on the new features in iOS X.
Be sure to check out these great related sessions happening all week.
As I mentioned before, the Typography and Fonts tomorrow morning has additional great design information about typography and fonts.
The Auditing Your Apps for Accessibility highlights the new tool for previewing your dynamic type.
And Making Apps Adaptive, Part 2 will go over layout guides, readable content guides, assets and appearance customization.
All information that you will need to use to put my considerations into practice.
And the What's New in International User Interfaces highlights the new support for handling the right-to-left images with asset catalog.
Thank you and have a great conference. | <urn:uuid:65cc9c29-aa4c-4f72-9910-ded535fdcad5> | CC-MAIN-2017-17 | http://asciiwwdc.com/2016/sessions/801 | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917119637.34/warc/CC-MAIN-20170423031159-00542-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.940161 | 6,306 | 2.703125 | 3 |
Information • Entertainment • Opinion (Since 1985)
September and Thomas Paine
September 11th made the news before 2001. A shootout with twenty killed occurred September 11, 1897 at Hazleton and Lattimer, Pennsylvania between striking coal miners and deputy sheriffs. Friends of labor recognize that strike as one of the first victories for the United Mine Workers, who won an eight-hour work day and an end to obligatory company stores.
September 11 is recorded as the date of a calamitous end, due to pestilence, for the 1227 Crusade under Frederick II. And the Revolutionary battle of Brandywine, Pennsylvania took place September 11, 1777. In 1939, New York Times correspondent Otto Tolischus wrote: “September 11 – having hurled against Poland their mighty military machine, the Germans are today crushing Poland like a soft-boiled egg.”
History does offer pleasant September 11ths. Collectors of 19th century dime novels and even literary scholars may find it noteworthy that Erastus Beadle was born September 11, 1821. Erastus and his brother Irwin founded the Beadle’s Half-Dime Pocket Library for mass distribution of melodramatic fiction, including thrillers. Stephen Jay Gould in I Have Landed (2002) reported that his maternal grandfather Joseph Arthur Rosenberg, Papa Joe, after arriving at Ellis Island as a teenager from Hungary, bought a 5-cent English grammar book at a used book store in Brooklyn and wrote in it: “I have landed. Sept. 11th 1901.”
Ruminations for the Ages
On September 12, 2001, in a letter written while anxiously reaching for comprehension of the previous day’s holocaust in a neighborhood where for many years I had worked, shopped, and prospected for books, I voiced a wish others have expressed since: a shared hope for the creation of a Memorial Park devoted to peace, tolerance, compassion, and justice at what has come to be called Ground Zero.
News releases indicate such a Memorial is indeed planned for the phoenix-like resurrection of the area. When the Memorial is designed, eloquent lines appropriate for display can be found in Stephen Gould’s “September 11, ’01,” the final chapter of his final book. I’d be delighted if they worked in Papa Joe’s “I have landed” followed by “We have landed. Lady Liberty still lifts her lamp beside the golden door.” Here’s hoping as well that, when inspirational maxims (if any) are selected for the Memorial, there’s sufficient wisdom and courage to include apropos reflections by the controversial author who had a large role in creating America through his electrifying tracts that spoke the language of the people with shrewd and compelling insight. He gave the American Colonies the will for rebellion, yet he died in New York City traduced, vilified, and ignorantly despised. Indeed, without half trying he had a special knack for rubbing aristocrats and political leaders the wrong way. He knew the principal establishment figures of his time in America, England, and France; and by stubbornly insisting on the truth, he managed to incur the wrath of most, with Benjamin Franklin as the chief exception. Could it have been because he considered hereditary aristocracy a fungus growing out of the corruption of society and defined Nobility as “No-ability”? “He who dares not offend cannot be honest,” he warned, and repeatedly proved himself formidably offensive and dangerously honest.
Perennial thoughts from this crotchety crusader suitable for Memorial display pack his broadsides, pamphlets, and books: “These are the times that try men’s souls. The summer soldier and the sunshine patriot will, in this crisis, shrink from the service of his country; but he that stands it now, deserves the love and thanks of man and woman.” “My country is the world, and my religion is to do good.” “He, who would make his own liberty secure must guard even his enemy from oppression.” “How necessary it is at all times to watch against the attempted encroachment of power, and to prevent its running to excess.” “To elect, and to reject, is the prerogative of a free people.”
Courage is probably still needed in some places to post his famous declarations for the emancipation of humanity. The lines were penned by a long-time front-runner for the title of most despised American, even though without his pen the United States of America (a name he was among the first to use) might never have emerged from history’s cocoon. As G. K. Chesterton pointed out, he also invented the name of the Age of Reason. While waiting to find out if terrorist zealots (9/11 slaughterers of the innocent, alas, had countless predecessors) were going to guillotine him during the French Revolution, Paine, deeply religious, wrote The Age of Reason; Being an Investigation of True and of Fabulous Theology (1794-95). The book advocated deism and criticized the Bible as a book of errors and contradictions, “scarcely anything but a history of the grossest vices, and a collection of the most paltry and contemptible tales.” It was a PR calamity for his reputation even as an admired political theorist and made his name anathema for generations. “Insolent blasphemer of things sacred,” wheezed John Adams. The Bible’s devotees, past and present, have never appreciated hearing that their Book needs better proofreading both from above and below.
The Paine of Liberty
C. S. Lewis wrestled with a moral mystery in his 1940 book, The Problem of Pain. I’d have been more interested if the Oxford moralist (picture Anthony Hopkins) had tackled the problem of Thomas Pain (who added an “e” to become Paine after making himself a passionate advocate in print for American independence in 1770s Philadelphia). Paine was a thinker, writer, idealist, and firebrand with a never-wavering focus on human rights and freedom. He also had the strange distinction of becoming one of the most hated men of his era.
Near the time of the 9/11/01 events, I read John Dos Passos’s excellent 1940 study of Thomas Paine in The Living Thoughts Library. Books about Tom Paine have seized my attention ever since I devoured Howard Fast’s novel Citizen Tom Paine (1943) and The Selected Work of Tom Paine (1945) as a youth. I doubt that I’ll ever invest $125,000 for a 1776 first edition of Paine’s Common Sense, but I do enjoy owning first editions of the Howard Fast volumes. Both books had regrettable collisions with censoring watchdogs. The novel was removed from New York City public school libraries in 1946, and the U.S. State Department banned Paine’s Selected Work from Information Service libraries abroad in 1953. The State Department’s ban stirred up opinion to the point where the American Library Association in The Freedom to Read stated, “The suppression of ideas is fatal to a democratic society. Freedom itself is a dangerous way of life, but it is ours.” That ALA line also wouldn’t be a bad one for the 9/11 Memorial.
As he did for others, and maybe still does, Paine helped me learn how to question venerable assumptions that are labeled unquestionable and to realize that some rebellions are not evils but duties. A year after 9/11 it strikes me as appropriate again to consider the life and living thoughts of Thomas Paine. His humanitarianism and consistent devotion to the great, sometimes lost, causes of human liberation seem in harmony with and symbolic of 9/11 remembrances.
Considering Thomas Paine’s contributions through his pamphlets, Common Sense and The American Crisis, as a uniquely persuasive voice for the achievement of American freedom, I’ve always been intrigued that he could arouse such intense and belligerent anger among his contemporaries and later generations. For such hatred we must look to political rivals, establishment functionaries resenting opposition, greedy grabbers for power and position, and religious dogmatists indignant about challenges to their artifacts of reverence. Thomas Paine afflicted such enemies with extreme discomfort that triggered contempt. John Adams, who couldn’t condone the prospect of government by an unqualified rabble, called Paine “a mongrel between pig and puppy, begotten by a wild boar on a bitch wolf.” “Adventurer from England, without fortune, without family or connections, ignorant even of grammar,” proclaimed Bronx New Yorker Gouverneur Morris whose distaste for popular democracy was never a secret.
Paine’s foes, finding themselves weak in argument, often sputtered that he used ordinary language, clumsy grammar, and split his infinitives. “Dirty little atheist,” was Theodore Roosevelt’s verdict, based apparently on rumor, not reading. Anyway, leaving the Rough Rider Paine-fully misinformed, we note that Abraham Lincoln said, “I never tire of reading Paine.” Woodrow Wilson, Walt Whitman, and Thomas A. Edison appreciated and praised the man, and many of us now openly dare to admire his works. It was reassuring in the July 2002 Harper’s that Lewis H. Lapham in his Notebook saluted Thomas Paine for his “Uncommon Sense.”
An “Ingenious, worthy young man”
The development of Thomas Paine as a professional revolutionary predominantly came from self-directed reading, tavern talk, coffee-house discussion, and living in the eighteenth century when threatening new ideas about human rights and individual freedom were in the air. To reactionary Tories of England and America the ideas were seditious, unpatriotic, traitorous to the way things were and of course should forever remain. For Thomas Paine the ideas of liberty, inalienable rights, and eradication of hereditary tyrannies became irresistible trumpet calls to action.
He was born January 29, 1737 at Thetford, England. His father was a corset maker, and the boy left school in 1750 to work in his father’s shop. Apparently he wasn’t thrilled at the prospect of a life in corsets. In 1757 he served six months aboard a privateer during the Seven Years War. Back on shore, he made corsets, taught school, served as an exciseman, and gradually realized his true destiny must be to acquire knowledge and write. “I know but one kind of life I am fit for, and that is a thinking one, and of course a writing one,” he stated years later in a letter seeking a loan. The thinking and writing life, not surprisingly, nearly always kept him a stranger to solvency.
Fate lent a hand in London in 1774 when he met Dr. Benjamin Franklin. They talked at length about conditions in England and the options for America, from submission to conflict. Paine by then was fed up with the failure and frustration that dogged him in his home country. He was an attentive listener when Franklin suggested he might do better in Philadelphia. In September 1774 Franklin wrote him a generous letter of introduction. The 37-year-old immigrant, tired, poor, and yearning to breathe free, landed at Philadelphia November 30, 1774. The Franklin introduction served him as a magical open sesame to meet leading Americans, make himself heard on the urgent questions of the day, and gain an influential position as writer and editor for printer Robert Aitken’s The Pennsylvania Magazine. During his tenure as editor, a post in which he also wrote much of the contents under pseudonyms, the magazine ran articles opposing slavery, cruelty to animals, and the subjugation of women. There were essays in favor of liberalizing divorce laws and, predictably, the growing merit of delivering final walking papers to the arrogant and oppressive British Crown.
Recently escaped from what Paine considered the prison of England, constricted by class and ruled by the “Royal Brute” (King George III), the English journalist brought flaming rhetorical fury to the cause of American union for independence. In September 1775 with precious few shillings to keep himself afloat, he left Pennsylvania Magazine to focus all his efforts on resistance to tyranny and releasing mankind from imposed shackles. The cause would largely consume the rest of his life with creation of an autonomous America as the first essential step with the world watching and hoping.
He set to work late in 1775 writing a pamphlet to help Americans better comprehend their situation and opportunity. He was skillful by then at writing swiftly and delivering his thoughts in plain, realistic, easy to understand language. Titled Common Sense: Addressed to the Inhabitants of America and authorship identified as “Written by an Englishman,” the first printing of 1,000 copies dated January 10, 1776 by Robert Bell in Philadelphia sold in a fortnight.
Paine guaranteed Bell’s printing costs and instructed that any profits should buy mittens for American soldiers. An expanded second edition was rushed to print February 14. Results were astonishing. Further printings quickly followed as America’s first best seller ran from reader to reader throughout the Colonies. Thousands heard a call to arms in the plea, “O ye that love mankind. Ye that dare oppose, not only the tyranny, but the tyrant, stand forth!…O! receive the fugitive, and prepare in time an asylum for mankind.” An estimated 150,000 copies circulated in 1776, and the pamphlet was a key factor in readying citizens for the July 4th Declaration. “His writings certainly have had a powerful effect upon the public mind,” understated George Washington to James Madison in 1784.
“I saw, or at least I thought I saw, a vast scene opening itself to the world in the affairs of America,” Paine reminisced in the 1790s. “I published the work known by the name of Common Sense, which is the first work I ever did publish, and so far as I can judge of myself, I believe I never should have been known in the world as an author on any subject whatever, had it not been for the affairs of America.” He followed up the amazingly successful pamphlet with a series of 1776 letters by “The Forester” vigorously espousing independence and lambasting critics of Common Sense. There was no backing away, he insisted, from the necessity of resistance; “It is not a time to trifle…the false light of reconciliation – There is no such thing.”
Soon after July 4th, he joined Pennsylvania militia volunteers to take part directly in the struggle. Then American defeats in New York and elsewhere made it clear that he was vastly more useful with a pen than a musket. The anti-tyranny propagandist initiated a fresh series of pamphlets still treasured and read as The American Crisis (1776-1783). American legend has it that he began the first, “These are the times that try men’s souls…,” writing on a drumhead in the company of chilled and despairing Continental troops. Thus came “drumhead journalism,” which correspondents and reporters emulated in subsequent human conflicts. By the time of the Crisis Papers, Paine was known as the author of the Common Sense manifesto. Readers were primed to heed his new appeals. Crisis Paper Number IV, dated September 12, 1777, the day after Americans bravely fought and lost the Battle of Brandywine, opened with words to sustain them through the long haul ahead: “Those who expect to reap the blessings of Freedom, must, like men, undergo the fatigue of supporting it. The event of yesterday is one of those kind of alarms, which is just sufficient to rouse us to duty, without being of consequence enough to depress our fortitude.”
On September 11, 1777, with the sound of cannon audible at Brandywine, Paine was preparing dispatches for Benjamin Franklin. Continuing the correspondence after the interruption of battle and retreat, he informed his friend about the resolution of Washington’s men at Valley Forge. He concluded, “Among other pleasures I feel in having uniformly done my duty, I feel that of not having discredited your friendship and patronage.” Franklin in time would respond, “You, Thomas Paine, are more responsible than any other living person on this continent for the creation of what we call the United States of America.”
Thomas Paine & Posterity
With customary directness, Thomas Paine wrote on during and after the Revolution. His pamphlets, articles, and letters kept him involved in behind-the-scenes controversies that earned him wages of enmity more often than approval. This topical journalism now provides forceful footnotes to the times. His articles critical of Silas Deane, a businessman accused of using his position to buy war supplies in France for personal profit, brought wrath on Paine from Gouverneur Morris and other eminent Deane supporters. The 1770s-1780s meet 2002 as profits in all ages soar beyond taint! The aftermath of the Deane debacle was that Paine lost his position as the $70-a-month secretary to the Continental Congress Committee for Foreign Affairs.
He was back scrounging for a livelihood, dependent on occasional earnings from his pen. Luckily, in the 1780s he still had sympathetic friends who honored his wartime services. “Must the merits of Common Sense continue to glide down the stream of time unrewarded?” asked Washington. Paine’s rewards in 1784-85 included from New York a confiscated Tory farm at New Rochelle, a grant of 500 pounds by Pennsylvania, and a slightly larger money gift from Congress. Further signs of appreciation from American sources were few in number.
In a century of talented polymaths, Paine likewise was an inventor as well as writer. Best known of the many Paine inventions was an original design for a single-arch, pierless iron bridge, the idea for which came from studying a spider’s web. In 1787 to market his bridge, he traveled to Europe with letters from Franklin presenting him to French scientists and political leaders. In Paris he strengthened his friendship with the American minister Thomas Jefferson. From France he journeyed to England in quest of bridge investors. Without the recriminations due an apostate, he was honored in the land of his birth by eager admirers of his well-known writings including poet William Blake and statesman Edmund Burke. But smooth sailing in the social swim was never Paine’s lot for long.
In 1789, the American example a success across the Atlantic, revolution broke out in France, and a working-class uprising even seemed not impossible in England. These new causes of the poor versus the posh rearoused Paine’s restless, rebel spirit; and a burst of polemical, trouble-making writing followed. Shocked Edmund Burke, a sort of reactionary Scarlet Pimpernel, in November 1790 published Reflections on the Revolution in France praising the French aristocracy and attacking the people as a vicious mob. Weeks later after intense writing, Paine published an answer to Burke, Part One of Rights of Man, a milestone in the human struggle for rights. He dedicated the book to George Washington with a prayer “that you may enjoy the Happiness of seeing the New World regenerate the Old.” Washington’s vice-president John Adams said of the work, “I detest that book and its tendency from the bottom of my heart.”
Rights of Man, Part the Second, was published in February 1792. Sales of both parts were over 200,000 by the end of 1792, and the rapture with which workers in England received it caused a nervous government to fear a nationwide revolution. Paine as the author was indicted for sedition; printers and sellers of the book were prosecuted; and simply having a copy was so politically-incorrect it endangered the possessor. Ironically, most of the frightening reforms Paine advocated in Rights of Man are now in place and taken for granted in Great Britain and many other democratic countries including the one Thomas Paine godfathered.
To avoid being locked up or worse, Paine escaped to France before he was tried and convicted in absentia. His fate in France replicated his experiences in England. Initially he was honored as a hero of revolution, given a seat in the French Convention, and assigned to the committee to draft a new constitution. Then in the volatile, out-of-control climate of 1793, Paine speaking his mind stepped off the gangplank into hot water again. He had written earlier in favor of deposing the king. At the convention he opposed sending Louis XVI to the guillotine and counseled imprisonment followed by exile. Paine was sent to prison by Robespierre for this gesture of courage and conscience. When Robespierre fell in 1794, Paine was freed and reinstated in the convention. The experience had undermined his health but not his ability to think and write. During his time out of favor, he produced The Age of Reason, giving his enemies, both pious and political, a harvest of verbal ammunition they would use against him from then on.
He sailed for America in September 1802 and was met at Baltimore by an angry crowd with cries of infidel, heretic, drunk. He was bitterly attacked and slandered in the Federalist press on his own account and to oppose the incumbent president, Thomas Jefferson, who was still his friend. During his remaining years, the aging pamphleteer continued to write as he had always done against tyranny and for freedom. He knew well by then what it meant to have his past efforts dismissed and to be judged by the scandals and charges of immorality in the present, mostly invented by his enemies. The anti-Paine campaign was so widespread, mothers used the threat that “Mad Tom” would get them to frighten chidren. He died June 8, 1809 in Manhattan. With no fanfare, he was buried on his New Rochelle farm. Ten years later English writer William Cobbett took Paine’s remains to England planning a memorial. A series of bizarre events followed for the migratory remains, and they were somewhere, somehow irretrievably lost.
Posterity has somewhat restored Thomas Paine to the fame and respect he deserved. Ideas in The Rights of Man became facts of existence for some modern governments, and The Age of Reason doesn’t noticeably rock contemporary theological boats. The Library of America in an excellent volume published his collected writings in 1995. Unlike most venerated figures in the founding generation, Thomas Paine needed several tries before he was admitted to the Hall of Fame for Great Americans. After falling short in four previous votes, Thomas Faine finally made the Hall in 1945. Better late than never.
Roy Meador, a free-lance technical writer, researches and writes extensively about books and authors. He was a frequent contributor to Biblio and currently to Book Source Magazine. After residing in Manhattan for many years, he writes and adds to his collection in Ann Arbor, Michigan.
To read more, please visit our Archive Page. | <urn:uuid:5f81062a-dcaa-463e-bee5-484ca59acb40> | CC-MAIN-2017-17 | http://booksourcemagazine.com/story.php?sid=12 | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917127681.50/warc/CC-MAIN-20170423031207-00134-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.972353 | 4,898 | 2.546875 | 3 |
Stop ploughing, let the natural cycles of the soil be just what they are, and good yields can be drawn, year after year, without the demand for fertility to be drawn from elsewhere – whether it be in the form of machinery, synthetic chemicals, organic matter or whatever the fervent imaginations of desiring humans demand. When the cereal field has been established; companion planting of white clover or black medic, early, open and surface sowing, it is possible to grow the same cereal crop annually without pests, diseases or the proliferation of weeds. This perennialisation of annual crops is not new to nature; escourgeon, which is resistant to salt, has been grown for millennia near salt marshes without rotation and with consistent yields. Similarly, rye has been grown annually on the acid soils of moorland and millet grown without rotation or fallow in drier soils. Nature is always far more diverse than we imagine and if we were to open ourselves to this expression we would discover that NF produces a feast of diversity, way beyond the narrow ordering order of conventional machine and chemical agriculture. The use of a companion legume cover crop with cereals, the mowing of the weeds and companion crop and the return of the straw to the field are all necessary factors in the annual improvement of the soil structure, providing living roots and a cycling of nutrients that help support a diverse and active microbial life. The lignin of the straw and the decomposing green manure provide two of the key elements that help bind the soil together. The straw breaks down to provide the ‘cement’ (lignin) and the green manure breaks down to provide the ‘glue’ (polysaccharides) that bind the soil particles together. While the cementitious action of the straw is long lasting, that of the glue is relatively short term, but both are essential to the natural cycles of soil and allow cereals to be grown annually, without the need for rotations or fallow.
In an NF cereal field, rotations become obsolete; there is no longer any need to ‘clean’ weeds, no longer any need to seed a manure crop before the food crop, no longer any need to seed a deep rooted crop to draw up nutrients, no longer any need to succeed different crops to avoid long term pest and disease problems. Yield rises in the first few years due to the increasing understanding of the NFer, deeper and denser establishment of the clover, the improvement of the cereal through seed selection, the growing quantity of organic matter and root penetration by the preceding crop.
Yet, if the NFer should wish it, it is also possible to continue using rotations, which might be of value on small farms with several small fields or larger farms with many fields to increase diversity. However, when a farm is beyond a certain size it becomes impossible to maintain the appropriate relationship between the NFer and the land, at this point the farmer moves not only from NF to organic agriculture, but from no-mind to theoretical agriculture. If we cannot maintain our deep and abiding contact with the land and its constantly changing expression, then we break the link of attending/tending and step beyond NF.
Such rotations within NF are seeding winter wheat into alfalfa, improved fallow (which has had a green manure or cover crop seeded into it), a mown fodder crop such as Italian Rye Grass, red clover and winter vetch (tare), or tare, oats and fodder crop (mown as it flowers), or if absolutely necessary into colza – although crucifers are never a good precedent for cereal crops – or even into temporary meadows mown as close to the ground before drilling the cereal seed. The use of pasture or fallow for the NF of winter wheat opens a very interesting possibility; provided the NFer has scythed or mown the pasture to keep the grasses and weeds short and the cut plants are left on the surface as mulch, such fields can be brought into cereal production without the need of ploughing or by spraying with herbicides.
Traditionally the concept of ‘fallow’ has been understood as the moment the soil rests after having exerted itself in producing crops. This is a concept, that as an attempt to represent nature is probably about as far from reality as it is possible to get. If soil rests it is dead, the only soil that rests is in a desert and then it gets blown away and only the sand remains! Soil as soil cannot rest because it is alive, dynamic, it is only this life that makes it soil, otherwise soil would be the scholarly interest of geologists and not biologists. What has to be made clear is that when we conventionally farm a cereal field we reduce the life of the soil by wiping out the plants in the field, destroying innumerable trillions of micro-organisms along with the plants. In temperate climates, because of the soil’s ability to go into relative hibernation in Winter, such destruction is not as disastrous as sub-tropical and tropical climates where great swathes of arable land have been decimated by the incorporation of temperate agricultural practices. Rest is death, activity is life for soil and during a fallow period the soil comes back to life, growing a large number of diverse species, initially the fast germinating and growing weed species and then grasses and larger herbaceous plants. The healthiest soil is the most active soil, the more and diverse plants possible the healthier the field; some grow deep to mine nutrients washed from the bare surface of the last culture, some form symbiotic relationships with soil micro-organisms who grow in nodules on these plants roots and begin to fix atmospheric Nitrogen, some grow massive and dense such as grasses, whose blaze of growth in the rich soils of Autumn is something always to behold, but the soil grows what it will and by so doing repairs the ravages of reasoned agriculture. Perhaps it is best to say that fallow is rest only from the human intellect.
Early sowing permits better working because it is possible to seed into a ripening crop reducing the hassle of harvesting quickly to seed the next crop. Seeds from the previous year will have better viability than those of the current year if the current harvest falls in a cool and damp period when wheat seeds will likely be dormant. A good drying is necessary to break such dormancy. It is therefore possible for a NFer to time sowing to before, during and after harvest, spreading the work to suit the quantity of land to be harvested. Harvest, due to earlier sowing, will then be possible earlier the following year, making it possible to resow earlier, creating a virtuous cycle for the wheat and its potential for growth.
Micro-organisms multiply very quickly if conditions are right, having a speed of reproduction which far exceeds any other life form. Some bacteria can duplicate themselves every 20 minutes, so that in 12 hours a single bacteria might (!) produce 268,229,000 descendants. Their short life span allows upto 6500kg or more of living micro-organisms per hectare at any given moment, but, perhaps, another 70 tonnes dead and decomposing in the soil, releasing their nutrients and, because the greatest concentration of micro-organisms live around the roots of plants, it is here that the nutrients, in plant available form, are most concentrated.
The mechanical working of the soil will produce only a fraction of micro-organisms found in an unworked field, perhaps only 5 to 6 tons of microbial humus per hectare per year, and thus the need arises to import to the field and then spread other types of fertiliser. Such expense, both financial and ecological, is just not necessary. Bare soil is a catastrophe for nature, always! But when there is the permanent occupation of the soil, there is no need for off-farm inputs, Nitrogen, Phosphorus, Potassium is not washed from the soil, water and air penetrate deeply, the soil becomes balanced in minerals and trace elements, and white clover produces upto 200kg of Nitrogen per hectare per year. Undisturbed soil also supports a large population of algae which, in turn, form a symbiotic relationship with azotobacters and produce more Carbon products, through photosynthesis, and Nitrogen, through atmospheric fixation.
Choose plants with powerful rooting and rapid growth, like the RGI , or a pea – tare – field bean mixture which can). RGI (Italian rye grass) is one of the strongest plants for providing biomass, producing 30 to 60 tonnes of matter in 2 to 3 months, while being able to compete against the allelopathic chemicals secreted by the weed roots. Vetch (tare) is a good green manure cold hardy, grows in poor soil, fixes Nitrogen and produces 35 tons of green matter. Rye because of its resistance to cold and its upright growth provides a good frame for the vetch to attach. These three plants quickly provide a strong quantity of biomass, including a dense and ramified root system, helping to break up soil compacted by conventional techniques. They are all pioneer plants.
Six Year Rotation
- – red clover and RGI
- – winter wheat (sown Late June)
- – spring oats (or winter oats sown into winter wheat before harvest)
- – winter wheat (sown in maturing oats)
- – winter barley (sown in maturing wheat)
- – spring oats (under-sown with red clover)
Winter is the critical period for a cover crop because this is when the soil can be eroded and washed away if left naked after ploughing. Such destructive farming practices result in the soluble salts on the surface (which are basic) being washed from the surface creating acidification; October and pH might be as high as 8.2, but by April this might have dropped to as little as 4.6! This then requires the agriculturalist to once more take out his tractor, hitch a machine and go out into his fields and amend the pH imbalance with agricultural lime (basic). Unfortunately, as the old saying makes clear; ‘If the father uses lime, the son loses fertility over time’, the use of lime has the effect of liberating fertilising elements in the short term, which degrades the soil in the longer term. In silty soils such fluctuations of pH are only aggravated by the loss of the natural soil structure through conventional agricultural practice, resulting in rapid asphyxiation of soil life. In these vulnerable soils, rotations such as sugar beet, potato, or salsify, which demand heavy fertilisation, their harvest further disturbs the structure of the soil and they contribute very little organic matter, become part of the vicious cycle of soil destruction modern agriculture initiates – that always then requires more work from the farmer and more inputs to remedy. Ultimately, this leads to a situation where soil is no longer soil but technically a growing medium that must be carefully worked by machines, with fertilisers, herbicides, pesticides delivered at ordered intervals to ensure the established financial return at the end of the season. Modern farmers force their soils into the form of an intensive care patient that must then be given round-the-clock medication. If I make it sound like madness that is because it is madness!
Once weeds have been controlled by the installation of an effective cover crop, preferably one that fixes Nitrogen such as white clover or black medic, cereals can then be grown annually without the disadvantages usual in traditional agriculture. It is essential to sow winter cereals as early as possible into cover crops such as red clover, sainfoin or alfalfa, close to Midsummers day, so that it can germinate and grow rapidly and be sufficiently developed to take advantage of the nutrients released by the decomposing cover crop, which has been mowed before seeding the cereal.
The rotation of crops in the high mountains is more complicated than elsewhere due to the truncated growing season. In Scotland, Russia, Scandinavia and on high mountains, the perennialisation of winter cereals might not work. Therefore, it is necessary to establish rotations. The use of fallow under these conditions becomes very much more important. The possibility that the cereal harvest might take place as late as September, rules out the obligatory sowing (because of the need to extend the period of growth of the cereal the first year so that it can fully tiller) of winter cereals in June.
In the high pastures, where cereals are occasionally seeded into temporary pasture, the following five-year rotation fits perfectly into these conditions;
- rye – winter barley (sown June 15th)
- winter wheat (sown Midsummer’s Day)
- common millet – spring oats
Until about 100 years ago the types of wheat’s cultivated in Europe were many and specific to particular to each region and adapted to climate and soil. Because wheat is self-fertilising (autogamous) spontaneous hybridization is very rare, allowing for stable cultivars. These old varieties of Northern Europe have common characteristics;
- relatively long life cycle 2400°C (against 2200°C for the early alternative varieties)
- 500 with 700°C for rising to floral initiation (800°C for Poulard wheats)
- Late rising, middle of August
- Long straw, 1.50 meters
- Broad leaves (adaptation to the wet climates)
- Broad tiller plate
- Strong vegetative growth
- Strong tillering
- Strong rooting
All of the above allow these wheat’s to have a very large potential for production.
However, with the changes in farming practices across the last 100 years, farmers have quickly lost sight of this productivity as they’ve searched around for different types of wheat that better fit in with new cropping systems. The incorporation of sugar beet, potatoes or other cultures required farmers to seed their wheat later and this meant that they sought out those varieties with a reduced growing cycle. Unfortunately, this later sowing often left these other wheats open to problems of disease, pests, lodging and scalding. One of the results of this was a desire for wheat with shorter straw that was less liable to lodging.
Alternative varieties were introduced when in 1896, a miller of Nérac, Lot-et-Garonne, France, discovered a quick maturing, short straw wheat, in a batch coming from the area of Odessa (Black Sea). Known as Noé Blé Bleu, it originates not from the Black Sea but from North Africa. In European climates this variety rarely tillers and therefore had the added bonus of being able to be seeded at very dense rates. But because it is North African in origin it does not like the cold and is susceptible to damp climate diseases, such as rusts. But, with the new cultures and the likelihood of having to seed the wheat after October 25th, the alternate varieties became imperative – because of their low thermal requirements, 400-500° before floral initiation.
All this is to say that it was not for the reason of the wheat, health or yield that the old winter wheat’s were replaced, but for the farmers desire to add an extra culture into the yearly cultivation. This was disastrous for the soils as has been all too graphically shown with the loss of topsoil.
In this little more than 100 years, thousands of new varieties have been created, with new varieties appearing all the time. The farmer does not have the time anymore to become accustomed to one variety before he is recommended to use the nest, best variety! However, one thing has remained the same with each new variety cultivated, the increase in work and inputs to just keep production at a steady level.
However it is clear that under the climates of the North-West of Europe, the alternate variety’s precocity and vegetative vigour are incompatible with a high potential of productivity. The introduction of the alternate varieties began a vicious circle from which farmers have still not escaped. They sowed later to add another culture, they chose a variety with poor vigour that was intolerant of cold, that was susceptible to pests and disease and consequently produced poor yields, which they tried to compensate for by more dense sowings, only exacerbating the plants health problems; which has ultimately meant that new solutions have had to be sought for these problems in the form of growth hormones, pesticides and the application of fertilisers, especially in the spring because the alternate wheat has not had the time or warmth to develop the adequate reserves or root system necessary to support itself. Yet, modern wheat fields still suffer from lodging, scalding, and cryptogamic diseases.
But the search goes on for the perfect wheat variety, now crossing the alternates with the dwarfing characteristics of Mexican or Japanese varieties, which while reducing the height of the straw also tend to again reduce production. There is also research into trying to incorporate disease resistance from wild varieties. But all of this only continues the inexorable slide of reduction of vigour, fertility, tillering and rooting, and thus the capacity to compete with weeds and diseases, condemning farmers to the increased use of machines, manure, pesticides, etc.
The answer to the problem is simple, we must end the use of alternate varieties and turn back to the wheat’s that served farmers well for so many centuries. Varieties of true winter wheats that have the following characteristics;
- vegetative vigour
- long straws
- broad tiller plate
- strong cold resistance
- very late maturity
- type nor worm at very winter
- floral initiation from between 600 with 700°C (800 for Poulard wheats)
- large leaves
- powerful root system to avoid scalding and lodging
Poulard wheats, Triticum turgidum, offer the base characteristics of a true winter wheat, they are strong, vigorous, have a very late rising enabling an extremely long phase ATI compared to ordinary wheats, late grain maturity, long straw, a strong resistance to scalding (even if the summer is very hot) and with straw very resistant to lodging, they tiller strongly, have a high resistance to cold, like rye, in good conditions of culture their ears tend to ramify, they like the heat of summer; so, all in all, they are perfectly matched to continental type climates, but, perhaps, less so in damp maritime areas – but they offer an excellent alternative to durum wheats cultivate because the latter, being of spring type, offers only poor yield.
Poulard varieties include; winter poulard, Giant Milanese, Australian, smooth white, yellow barb, Nonette de Laussane. Other true winter wheat’s from which to choose; Blé Roseau, Autumn Victoria, hybrid winter champlan, Benefactor, Cerès, Prince Albert, Dattel, Blé Roux de Champagne, Rouge d’Alsace, Alsace 22, hybrid King, Autumn Chiddam, Sheriff Square head, white narrow straw (Blanc à pailles raides), hybrid giant red square, hybrid giant white square Parsel, big head, Briquet, Golden Top, Blé-seigle. The best rye to chose is Schlanstadt.
Rule of Thumb: Do not use modern varieties or those crossed with them
- Blé Roseau (Picardie)
- Victoria d’automne
- Hybride Champlan d’Hiver
- Blé Bénéfactor
- Prince Albert
- Blé Roux de Champagne
- Rouge d’Alsace
- Alsace 22
- Hybride King
- Chiddam d’automne
- Blé épi carré (Sheriff Square head)
- Blanc à pailles raides
- Hybride carre géant rouge
- Grosse tête
- Golden Top
- Seigle de Schlanstadt
- Blé hybride carre géant blanc
Modern conventional agriculture demands that the soil and the crop be constantly and continuously monitored and dosed with one or other product to ensure fertility and crop yield. Through no fault of their own, crop plants have been reduced to a pitiful state of dependency; rotation, fallow, manure, compost, fertiliser, herbicide, pesticide, growth regulator, compost tea, biodynamic preparation, some or all must be used to keep it alive, when all the soil and crop require is that we leave it alone to be just what it is, for only then can the cycles of life evolve and flourish and produce the health and vitality natural to them, as to all plants that grow wild in nature and year by year improve the fertility of the land upon which they grow.
Why is something so simple so difficult? Why do we prefer to spend so much time, money and energy on controlling what is freely available and requires no effort at all? The long term effect of such control are only now becoming apparent in the loss of topsoil, organic matter, erosion, loss of species, water table pollution, toxicity, salinisation, collapse of soil structure – almost any woe you might care to name. All in order to produce food – whereas if we were to learn to do-nothing, time, money, energy and destruction would cease and soils would begin to improve and continue to improve until they reached again the fertility they had when agriculture first began.
NF is not just about the growing of food, nor just about the perfection of human being, it is about the whole being whole, the return to the one, which is not even one but nothing, it is about the restoration of soils worldwide.. Only when we begin to gather our disparate energies, when we come together in order to shelter what is, only then do we truly begin to gather and become what we have always been, no different, not separate…Earth.
The NF of cereals ends the destruction of ploughing, ends the application of fertilisers, whether organic or inorganic, the application of all synthetic chemicals that damage soil, plants, insects, birds, earthworms and micro-organisms and asks the Earth to provide. We can plough or we can allow earthworms to plough, we can fertilise or we can allow the clover and cereal to fertilise, we can seed late and manage every stage of growth or we can seed early and allow the sun and rain to manage growth, we can weed with toxic chemicals or we can allow clover to weed. There is a choice, this is the choice, now we must choose.
The association of winter wheat and white clover with oaks and black locust, allows the deep penetration of plant roots into the soil, allows the reaggregation of soil structure, allowing the easy penetration of rain and air into the soil at depth. | <urn:uuid:3c3883f4-0de0-4c77-9c5f-d64f8eebb7bb> | CC-MAIN-2017-17 | https://seedzen.wordpress.com/2011/09/08/the-natural-farming-of-winter-wheat-part-four/ | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122167.63/warc/CC-MAIN-20170423031202-00249-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.942356 | 4,745 | 2.671875 | 3 |
The Earls and Dukes of Suffolk and the later Chaucers
The Earldom of Suffolk was first created in 1336 for Robert de Ufford, a great landowner in the east of the county and, of course, a close attendant of the king, but the Ufford line failed after only two generations and, in 1385, the title was revived for Michael de la Pole. Despite their name, the de la Poles were not soldier-landowners of Norman stock; they were merchants from Hull, originally named Poole, who had added the French prefix in order to become landowners. They rose to prominence by lending money to Edward III. Michael’s father had bought land in Suffolk and married his son into the great local family of Wingfield. Michael won the confidence of the ten-year-old Richard II and used his position to extend and consolidate his Suffolk estates. At Wingfield he built an impressive, new, fortified manor house (see above). Still standing, it is the oldest castle in England to have been continuously occupied to this day. However, in 1387 he was hounded out of office by jealous rivals and had to flee to France disguised as a peasant. His son waited eight years to succeed to the title and then held it for only five weeks, before perishing during Henry V’s Agincourt campaign of 1415. The de la Poles were part of the small army which seized Harfleur, but the elder Earl died of dysentery a few days later. His son, the third Earl, then became one of the few English aristocrats to be killed at the Battle of Agincourt. His body was returned for burial at Wingfield.
The lands and dignities of Suffolk now passed to the third Earl’s nineteen-year-old brother, William. As fourth Earl, he played a leading part in the power struggle which broke out at the accession of the infant Henry VI.
William became constable of Wallingford Castle in 1434, and in 1437, while still Earl, he built God’s House at Ewelme, a reminder of the de la Poles’ Catholic devotions. William married Thomas Chaucer’s only daughter, Alice, by whom he had a son, John, in 1442 (John became second Duke of Suffolk in 1463). Alice could be both ruthless and acquisitive in pursuit of her son’s inheritance. She was a lady-in-waiting to Margaret of Anjou in 1445, and a patron of the arts.
William worked his way into a position of almost supreme power, bringing about a marriage between the King and Margaret of Anjou, whom many believed to be his mistress, and dominating the pious, weak-minded Henry. His only strong opponent was Humphrey, Duke of Gloucester. He removed that obstacle in 1447 by summoning a parliament to meet at Bury St Edmunds, a town which the Earl could easily pack with his own supporters. When Gloucester arrived he was arrested and confined to his lodgings. The following morning the Duke was found dead. Lands, offices and tithes were now de la Pole’s for the taking, and he became the first Duke of Suffolk in the following year.
William was steward of the household to Henry VI, and from 1447 to 1450 was the dominant force in the council and chief minister to the king; as such he was particularly associated with the unpopular royal policies whose failures culminated in the anti-court protest and political violence of Cade’s Revolt in 1450. Drunk with power, de la Pole had pursued his own policies, accrued further wealth, harassed his enemies and was quite open in his contempt for public opinion, which was running strongly against him. He was accused of usurping royal power, committing adultery with the Queen, murdering Gloucester, despoiling men of their possessions, giving away lands in France and plotting to put his own son on the throne.
By 1450 Suffolk’s opponents were strong enough to force him to stand trial and William was impeached by the Commons in parliament, but Henry VI intervened to exile his favourite rather than have him tried by the Lords. Instead, he was banished for five years. Dissatisfied with this, his enemies had him followed to Calais. On his way across the Channel his vessel was intercepted by The Nicholas of the Tower whose crew subjected him to a mock trial, after which the Duke’s head was hacked off by an inexpert sailor with a rusty sword and his body was thrown overboard, a scene made even more gruesome by Shakespeare in Henry VI, Part II, in which the bard also makes fun of the name of the great family. William’s remains were recovered from a beach at Dover, and Alice had her husband buried at the Carthusian Priory in Hull, founded in 1377 by his grandfather, Michael de la Pole, first Earl of Suffolk.
After William was killed, his properties including the castle and Honour of Wallingford and St Valery passed to Alice. She lent the Crown 3500 Marks and the king spared the fate of attainder of title. She survived many challenges to her position, including a state trial in 1451. Whilst Alice had benefited from Lancastrian connections, she switched to supporting the House of York during the Wars of the Roses. In 1455 she was custodian of the Duke of Exeter at Wallingford Castle. After her husband’s death, Alice had become even more ruthless and took back many of her friend’s Margaret Paston’s manors in Norfolk, with dubious title deeds. The Pastons now grew to loathe the Yorkist family, notorious for their corruption. William’s heir, John, was the greatest landowner in Suffolk and Norfolk and kept an army of retainers to enforce his will. The Paston family were among those who fell foul of the second Duke on more than one occasion. In 1465 de la Pole sent men to destroy the Pastons’ house at Hellesdon. Margaret Paston reported the incident to her husband:
There cometh much people daily to wonder thereupon, both of Norwich and of other places, and they speak shamefully thereof. The duke had better than a thousand pounds that it had never been done; and ye have the more good will of the people that it is so foully done.
The second Duke of Suffolk could afford to upset farmers, merchants and peasants. He was married to Elizabeth, the sister of King Edward IV (right). His mother, Alice, remained castellan at Wallingford until at least 1471 and possibly until her death in 1475. In 1472 she became custodian of Margaret of Anjou, her former friend and patron. A wealthy landowner, Alice de la Pole held land in 22 counties, and was a patron to poet John Lydgate, no doubt playing a role in having his poetry printed by William Caxton, along with her grandfather’s works.
At a time when unscrupulous men like the Dukes of Suffolk held sway and when feuding, rustling and brigandage were common, householders had to take greater care in protecting their families and property. Given the turmoil of the Wars of The Roses, it is therefore not surprising that, by their end in 1487, with the defeat and death of the second Duke of Suffolk’s son, also John de la Pole, the Earl of Lincoln, at Stoke Field, there were over five hundred moated houses throughout Suffolk alone. At the upper end were stone fortresses like Framlingham and Wingfield, but a great many were timber-framed farmhouses of quite modest proportions, like Gifford Hall, Wickhambrook (left). The majority were probably built on traditional defensive positions occupied almost continuously since Saxon times. As well as providing security they had a good, well-drained base, important for building in clayland areas. Solid and functional to begin with, they are now picturesque gems and probably the most typically Suffolk items in the county’s architectural treasury. These include Parham Moat Hall and Little Wenham Hall, England’s oldest brick-built house.
The Wool Trade
The dynastic struggle which engaged the energies of the nobility had little to do with the real history of Suffolk, and indeed that of much of southern England. However, warfare did provide the catalyst for the rapid industrialisation of these parts in the fourteenth and fifteenth centuries. In Edward III’s reign, the government realised that the flourishing trade in wool with the continent could be used to raise money to fund its wars with France as well as an economic weapon in them. It levied swinging taxes on markets and customs duties on ports. The results were dramatic: English merchants quickly turned to more profitable trade and foreign merchants sought more valuable markets. Wool exports, standing at 45,000 sacks in 1350, were halved in thirty years and continued to decline.
In some wool-producing areas the result was catastrophic, but in Suffolk it was the opposite. The decline in the trade in cheap wool brought about a decline in the Flemish textile industry and the migration of weavers from their depressed homeland. Many of them made use of their contacts in Suffolk and settled there, where the drop in wool exports and cloth imports was already invigorating the local cloth industry. The county not only had sufficient sheep to produce the quantities of wool required, but it also had the skilled labour and the trading network. It also acquired the technical expertise of the Flemish master craftsmen and weavers. Within a generation Suffolk, Essex and parts of Wessex had taken over as Europe’s principal exporters of fine cloth.
Suffolk became a boom area, with insignificant market towns and villages, like Lavenham, being put on the map. Ipswich and Sudbury became busy, populous towns. Over the county as a whole average wealth increased fourfold in the century after 1350. In the centres of the new industry the growth of personal prosperity was much more marked. Lavenham’s assessment for taxation in this period increased eighteen-fold. The annual export of cloths increased throughout the fifteenth century. However, industrial growth was not steady and there were setbacks.
The intermittent upheavals of the Wars of the Roses created difficulties. The Earl of Oxford and the Duke of Norfolk were leading figures in the conflict and this meant that the East Anglian tenants of these great landowners were regularly pressed into fighting for either Lancaster or York. But the real history of the region during this period was being woven on the looms of Lavenham, Clare and Sudbury.
Church-builders, Martyrs, Pilgrims and Puritans
The evidence for much of this wealth is still to be seen in the merchants’ half-timbered town houses and guildhalls and, above all, in Suffolk’s magnificent wool churches. The lynchpins of the cloth trade were the entrepreneurs, the clothiers, men like Thomas Spryng of Lavenham whose tomb is in the church to which he contributed so heavily, so that it became the finest church in the county. Carved on its south porch, and repeated many times throughout the building, are the boar and molet, the heraldic devices of the de Vere family. They remind us that the rebuilding of this magnificent church was begun as a thanksgiving for the victory of Henry Tudor over Richard III at Bosworth in 1485. John de Vere, Earl of Oxford, was Henry’s captain-general and largely responsible for the successful outcome of the battle. When the earl returned shortly afterwards to his manor of Lavenham he suggested to the great clothiers and other leading worthies that a splendid new church would be an adequate expression of gratitude for the new dynasty and era of peace it was ushering in. The shrewd merchant community may well have been sceptical about this. For half a century Yorkist and Lancastrian forces had chased each other in and out of power. There was little reason to suppose that the latest victor would not, in his turn, be removed from his throne. Two years later, a rebellion led by Richard’s nephew, another John de la Pole, Earl of Lincoln, almost succeeded in doing just that.
Besides Lavenham, there are also many less well-known, but very fine examples of churches begun in the fifteenth century, the building of which was financed by rich merchants. St Mary’s Woolpit was built on the site of a Saxon church given to St Edmund’s Abbey by Ulfcytel, Earl of East Anglia. The first Norman abbot had the timber church pulled down and a new church built. By the thirteenth century, Woolpit had many well-to-do farmers and prosperous merchants who were organised into two guilds. They collected and distributed alms, caring for the poor and needy, and supervised the upkeep of the church fabric. Early in the fourteenth century the guilds, the patron and the rector agreed that Woolpit needed a new church. Preserving little but the foundations of the Norman building, they rebuilt in the prevailing Decorated style using Barnack stone from Leicestershire and Suffolk oak for the roofs and doors. They added side aisles, partly to accommodate two chantry chapels. In the Mary Chapel, the statue of the virgin became a famous object of devotion, attracting pilgrims from a wide area. The decoration of the new church was completed with a profusion of stained glass, and a variety of wall paintings covering almost every free surface.
In the fifteenth century, the parishioners, perhaps spurred on by the Perpendicular splendours of nearby Rattlesden, subscribed to extensive alterations in the latest style. They installed a magnificently intricate rood screen and loft, surmounted by a carved canopy, which is still in place. This meant raising the height of the nave which now gained a clerestory and a new roof. The superb double hammer beams with their angels were well illuminated by new windows and originally glowed with poly-chromatic splendour. The north aisle was rebuilt at the same time and, to add the finishing touches to the new church, a beautiful south porch was added with statues of Henry VI and his queen above the doorway.
The parish church of St Mary the Virgin in Woodbridge was begun in about 1400 when Woodbridge was an extremely prosperous port. It rises high on a hill overlooking the town, close to the old market place, with its group of medieval houses. The church suffered desecrations at the time of the Reformation and in the Civil Wars, but the beauty of the original fine craftsmanship can still be viewed. It has an impressive tower which is lavishly patterned in flushwork flint and stone and stands 108 feet high. It was completed in about 1453 when perpendicular architecture was at its zenith.The parapet is considered to be one of the finest in Suffolk. The eight bells inside the tower were originally hung on a massive timber frame with louvred windows, which deflected the sound down over the town.
The magnificent porch was begun in 1455 with a bequest by Richard Gooding and donations by other rich townspeople. Inside, there are fine traceried panels and emblems from the period. In the baptistery there are fourteen preserved panels of the fifteenth century rood screen, which originally comprised as many as thirty-four panels, stretching the entire width of the church.
The growth of trade, especially in woolen cloth, the building of stone castles, moated manor houses and magnificent churches, have all shaped the townscapes of much of central and southern England. The growth of important centres of pilgrimage also contributed to the concentration of population in towns and, as had been proved in the case of Bury St Edmunds in the first part of the thirteenth century, it was now impossible for feudal law and custom to apply in urban communities proudly seeking their independence from powerful magnates, be they temporal or spiritual, and no matter how good or bad they seemed.
It was not one of the ancient shrines that brought pilgrims from all over Christendom to England, but the drama of the quarrel between Henry II and his former friend and Chancellor, Thomas á Becket, which culminated in Becket’s murder in his own cathedral at Canterbury. It was the starting point for other tales of the performance of many miracles in his name, or by his spiritual intervention. These led on to the canonisation of the martyr. The story of their quarrel and of Becket’s death is full of contradictions and mysteries, belonging to the period of the re-establishment of order after the anarchy of the civil war between supporters of Stephen and those of Matilda, in which Becket was one of Henry’s chief aides. Part of this new order was the founding of an English legal system, the split between the two men coming over whether a cleric committing a crime should be tried in a civil court, or whether he could only be held to account by an ecclesiastical court, with its milder forms of correction, as Becket demanded. This conflict, as at Bury St Edmunds, was one which regularly played itself out in real life. For many then, as now, it seemed that no man, not even a king, could place himself above the law of the land, but in pre-Reformation England, clerics and nuns were set aside through the rites of ordination from their lay brothers and sisters. They were sacramentally different, owing their chief allegiance to the Catholic Church, which alone had the power to judge them. Although this was a mid-twelfth century quarrel, it was one which was not settled four another four centuries, after it had claimed the lives of many more martyrs on both sides.
In 1174, four years after Becket’s murder, in the year of Henry II’s penance at Canterbury, a great fire destroyed the choir of the cathedral which had been built by Prior Conrad earlier in the century. Rebuilding started almost immediately and in 1220 the choir was finished, the body of the saint being transferred from the crypt to its new home in a great ceremony in the presence of King Henry III. There it became a treasury of gold and precious stones donated by kings, emperors and nobles. The approach to the high altar and the shrine was by means of a series of steps up from the old Norman nave. In the fourteenth century this was made more magnificent by the rebuilding of the nave and the redesigning of the transepts. With its huge aisle windows flooding the interior with light and its immensely high vaulting, Yevele’s nave is one of the masterworks of the later Gothic in England.
The final addition, that of the central tower, Bell Harry, was begun in 1496. By this time, generations of pilgrims had made their ways, by various means, from Portsmouth, Southampton, Winchester, Farnham, Sandwich and, of course, from London, taking the route of the pilgrims in Chaucer’s Canterbury Tales, down the Roman road known as Watling Street. The pilgrims would travel together for companionship, for safety, and out of a spirit of common devotion. Other holy places and hostelries (left) en route provided rest and consolation for them on their journeys. Those coming from Southampton would travel via Winchester where the shrine of the Saxon saint, Swithin, had been restored in the thirteenth-century retrochoir within the Norman cathedral. Nearer Canterbury they would sojourn with the Carmelites at Aylesford. The route from London by way of Greenwich and Deptford led to Rochester Cathedral, where the tomb of the Scottish pilgrim, saint William of Perth, murdered on his way to the Holy Land, attracted special veneration. In 1420, a hundred thousand pilgrims visited the shrine.
The attraction of St Thomas’ shrine was for people of all classes and nations. Among the early Hungarian visitors was the Emperor Sigismund (1387-1437), the Holy Roman Emperor. His sister, Anne of Bohemia, married Richard II at the instigation of Michael de la Pole, the first Earl of Suffolk. She arrived in England in a light-weight covered carriage, or Kocsi (named after the village in Hungary where it was invented), and its relative comfort made it popular with ladies, giving English the word coach, one of the few Hungarian words in English, but one still used frequently both as a noun and a verb. Though the marriage, which took place in 1382, was unpopular at the English Court, for financial reasons, there is evidence that Anne became more popular with time, especially with the ordinary citizens of London, for whom she interceded with her husband. Anne of Bohemia died of plague in 1394, aged only twenty-eight. She is known to have visited Norwich, where a ceiling in a hospital was dedicated to her, and in All Saints Church, Wytham, in Oxfordshire, there is a late-fourteenth century stained glass window depicting royal saints, thought to be likenesses of Anne and Richard. Golafre, like his illegitimate cousin before him, had begun his career in the service of Richard at court, and he paid for the windows to be made in Oxford. Like Richard, Golafre was a great lover of Gothic art forms from across Europe. The church also contains a memorial brass and stone to Julianna Golafre, who married Robert Wytham, probably given by their grand-daughter, Agnes Wytham, who was named by John Golafre as his heir, though she died soon after him in 1444.
Anne of Bohemia took a great interest in the writings of John Wycliffe, the reforming cleric from Lutterworth, and in the Lollards, his itinerant preachers. She is said to have introduced Wycliffe’s work to Prague, where it had a strong influence on the Bohemian reformer, Jan Hus. By the end of the fourteenth century, Wycliffite heresies had taken firm root in Suffolk, as they had throughout much of the East Midlands and East Anglia. In villages and towns throughout the county there were groups of Wycliffites, or Lollards who met in secret to study the Bible in English, the well-worn, often-copied tracts which condemned transubstantiation, the orthodox doctrine of the mass, pilgrimages, veneration of images, relics and other superstitions. Beccles and Bungay were centres of vigorous Lollardy, but there was also intermittent activity in Ipswich, Bury, Sudbury, and numerous towns and villages along the Essex border. These probably drew their inspiration from the Lollard group in Colchester, which was, for more than a century before the Reformation, a persistent centre of heresy.
The Lollards were reacting, in part,to the revival of what they felt were superstitious rites and cults within the church, such as the cult of the Virgin Mary. Only one English shrine equaled that of St Thomas of Canterbury in international fame, that of Walsingham in Norfolk, though St David’s Cathedral in Wales, St Mungo’s in Glasgow Cathedral and St Brigid’s at Kildare in Ireland all attracted pilgrims down the centuries, in addition to serving as centres of holiness and learning.
Walsingham’s fame was based on a vision of the Virgin Mary in 1061 in which the lady of the manor was carried in the spirit to Nazareth and shown the house where the Archangel Gabriel had appeared to Mary. She was told to build an exact copy of the house at Walsingham. She then employed skilled joiners to construct the Holy House of wood, a task which they completed (apparently) with the help of a further intervention by Our Lady and her angels.
In 1169, the Holy House and the stone church which had been built around it came into the possession of Augustinian canons. They popularised the legends still further and pilgrims began to come from many directions. As with the routes to Canterbury, sojourns were made for them along the way. At Houghton St Giles, a mile outside town, on the road from London known as Walsingham Way, there is a charming,
small chapel, known as the Slipper Chapel (see photo above). Here the pilgrims would hang up their shoes before walking the remaining length of winding road into Walsingham to receive their blessing in the ruins of the abbey and in the shrine where the cult of Mary had been revived.
In Chaucer’s pilgrims, whether on their way to Canterbury or Walsingham, and Wycliffe’s Lollards in Suffolk, we have two distinct pictures of mixed gatherings from the fourteenth and fifteenth centuries. These centuries were still about great castes and manor houses, cathedrals and abbeys, but as the language and literature of the English developed, we are able to trace the lives of a greater range of classes and characters.
In the following century, these competing cultures of pilgrimage and the itinerant preaching of the word were destined to come into open conflict with each other, sometimes violently, but also creatively, especially in poetry, drama and music.
Derek Wilson (1977), A Short History of Suffolk. London: Batsford.
Robert McCrum (1986), William Cran, Robert MacNeil, The Story of English. London: Penguin.
William Anderson (1983), Holy Places of the British Isles. London: Ebury | <urn:uuid:6b01c5df-2abc-4fb9-91b3-aef92a236f3d> | CC-MAIN-2017-17 | https://chandlerozconsultants.wordpress.com/2014/08/30/the-lives-and-times-of-the-chaucers-part-two/ | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122739.53/warc/CC-MAIN-20170423031202-00073-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.982945 | 5,323 | 3.046875 | 3 |
Harwich is not mentioned in the Domesday Book, so at that time it can have been no more than a very small settlement, if anyone lived there at all. (The name Harwich is believed to derive from the old words here wic, meaning 'army camp', because the Danes camped there in the 9th century.) However, there is an entry for Dovercourt. It was a little village with a population of about 120. The inhabitants were peasants who farmed the land around a cluster of wooden huts.
Yet by 1177 a chapel existed at Harwich, so by then a small number of people must have been living there. Then in the 13th century the Earl of Norfolk turned the hamlet into a town. At that time trade and commerce were increasing in England and many new towns were founded.
In 1253 the Earl of Norfolk, Lord of the manor, started a weekly market in Harwich. In those days there were very few shops, and if you wished to buy or sell anything you went to market. Once the market in Harwich was up and running, craftsmen and merchants came to live in the town, so new streets were laid out and wooden jetties were built for ships.
Harwich grew rapidly and in 1318 it was given a charter (a document granting the townspeople certain rights). In the later Middle Ages Harwich was a busy little port. At that time England's main export was wool, and bales were sent from Harwich. The main import was wine (the drink of the upper class). Furthermore, in Harwich there were the same craftsmen found in any town, such as carpenters, brewers, butchers and blacksmiths. Then, at the time of Henry VIII, strong defences were built at Harwich: three forts were erected. At that time Harwich was a busy fishing port with a population of about 800.
In 1604 James I gave Harwich a new charter. As well as its weekly market, Harwich was allowed two annual fairs. In those days fairs were like markets, but they were held only once a year. People came from all over Essex to attend a Harwich fair. In the 17th century Harwich continued to flourish, and shipbuilding was a major industry in the town.
Harwich became an important naval base in the 1660s; Samuel Pepys was First Secretary to the Admiralty at this time and also the town's M.P. The Navy Yard was the original site of one of the town's most impressive surviving monuments, the Harwich Crane, dating to about 1667.
The unique Treadwheel Crane was built at the shipbuilder's yard in 1667. It was operated by two men walking inside twin wooden treadwheels and was in use until the early years of the 20th century. It was re-erected on Harwich Green when the old shipyard was dismantled in 1928. In the 18th century quieter times returned, but civilian shipbuilding continued in Harwich and fishing was still an important industry. Harwich was also still a busy little port, though the town did suffer from flooding at intervals.
In the 1720s the writer Daniel Defoe visited Harwich and said it was 'a town of hurry and business, not much of gaiety and pleasure; yet the inhabitants seem warm in their nests and some of them are very rich'. Defoe was also impressed by Harwich harbour: he said it was 'able to receive the biggest ships and the greatest number that ever the world saw together'. Harwich Guildhall was rebuilt in 1769.
The urban structure of the Harwich peninsula has developed around three defined centres, each totally different from the others in both character and layout. As they have followed one another in development, so they are now set out in a sequence along the narrow hook-shaped strip of land. Linking them, but also dividing them, is the main A604 trunk road, which terminates at Harwich Quay.
Anyone who spends a little time exploring the quiet old streets of Harwich could hardly fail to be impressed by the wealth of buildings which recall the town's past importance as a trading port, a naval base, and a bastion of Britain's front line defences in times of war. Beneath the town's streets, however, lies a vast source of information which can enable archaeologists to trace human occupation much further back in time.
The old town has long been associated with the sea, but it is not until one turns into West Street, almost at the very end of the narrow peninsula, that one is aware that the sea is anywhere near. The town has developed around four main streets which run roughly parallel down to the quayside. Linking these together is a series of narrow streets and alleys. Subsequent development has softened the regularity of the street patterns, but not changed the basic town structure.
The entrance to Harwich old town is dominated by the High Lighthouse, which stands at the south end of West Street. This 90ft-high grey brick tower was built in 1818 under the supervision of John Rennie Senior, to replace earlier wooden lighthouses. It belonged to General Rebow, who became very rich by charging 1d per ton light duties on all cargoes coming into the port. In 1836, however, Trinity House acquired the Harwich Lights from General Rebow for £31,730, there being 12 years and 5 days remaining of his lease. It is suspected that Rebow had become aware of the changing course of the channel: the lighthouses became redundant in 1863 for just this reason, and new cast iron lighthouses were erected at Dovercourt near the Phoenix Hotel. The High Lighthouse was sold for £75 and was used as a residence and Wireless Museum.
Annual Plaque Unveiling
October 30th 1976 saw the unveiling of our plaque on the High Lighthouse at Harwich. The morning was cold and drear, but it did not deter about fifty faithful members from attending the ceremony, which was performed by the Mayor of Harwich, Councillor Mrs Georgina Potter. Also present were the Chief Technical Officer of Tendring District Council, Mr C. Bellows, and Mr Henry Lambon and Mr Tom Judson, chairman and secretary respectively of the Colchester Branch of the Royal Institute of British Architects.
The restoration of the High Lighthouse was a joint undertaking of Tendring District Council, Essex County Council and the Colchester Branch of the R.I.B.A. to commemorate European Architectural Heritage Year, and was the first major undertaking of the new Tendring District Council to give a lift to depressed Harwich, and what an excellent job they have made of it!
Winifred Cooper – Harwich Society Highlight Magazine 1976
The widest and longest street in Harwich is West Street, and it is particularly important since it is from this street that the town tends to be judged by casual visitors and ferry passengers. The Guildhall is the only building in Harwich old town of outstanding merit on its own; it was rebuilt in 1769, and the interior contains a good staircase and a panelled courtroom.
Opposite the Guildhall was the Three Cups Hotel, which was built early in the 16th century on an L-shaped plan and has a 17th-century wing. Inside there is a late 15th-century staircase, moulded beams and a late 16th-century plastered ceiling.
Perhaps the most pleasing townscape exists around the church of St Nicholas. As one travels down Church Street, the road gradually tapers from the Guildhall, which is a strong visual buttress, standing forward into the street and causing one's view to stray.
The other buildings on the quay are the Pier Hotel, a mid-19th-century stuccoed building in the baroque style, and the very austere town hall, now residential apartments. This was originally built as the Great Eastern Hotel in 1864 and is a monumental composition in free mixed Italian style. The octagonal ticket office and the connecting booking hall on the edge of the quay were originally the ticket office for the continental packet steamers, and are an elegant and fascinating reminder of the early days of cross-Channel travel.
At the turn of the century the Fire Station was at the rear of the Three Cups Hotel, with equipment consisting of a Shand Mason steam pump and a second smaller pump. The press commented: 'The spectacle of a messenger running off to summon the brigade of 16, from 16 separate homes, is surely not more ridiculous than the old pump which, once working, won't throw enough water to draw a fly a few yards.' The hose, apparently, had so many leaks in it that it was useless. The brigade did not have a single ladder at its disposal and had to borrow hoses from the GER and the Army. To their credit, the Council did act, and recommended the building of a new fire station, ironically on the site of the disastrous Parson's fire, and the purchase of a horse-drawn steam engine. It was suggested that a horse-propelled fire engine be purchased and a fire station be built on the vacant land next to the new Electric Palace cinema, where two cottages had been.
A vacant house on the corner of Wellington Road and Kings Quay Street would become the station officer's house. The council accepted Mr Newton's tender to build the new fire station at a cost of £1,041. The new fire engine arrived in November, before the fire station had been completed. The new Shand Mason equipment could be fired up in seven minutes and was capable of throwing 300 gallons of water a minute, almost three times the amount of the old pumps.
Following the destruction of some cottages belonging to the Corporation by fire in 1911, it was decided to use the £942 received from the insurance to build a new fire station on the site and lease the remainder for a cinema at £30 per annum for 60 years. When the new fire station opened in 1912, Mr Dixon Hepworth became captain of the brigade, which was equipped with the latest horse-propelled steam fire engine. It stood next to the Electric Palace.
Until 1915 horses were hired to pull the engine; the council then bought two horses of its own for £50. This lasted until 1925, when the council sold the horses and bought a Dennis motor fire engine for £960.
Moving-Out Day at Cow Lane
Bearded labourers sporting a helmet as their uniform, pulling hand pumps through the narrow streets, and smartly dressed “Keystone Cop style” firemen whipping up their horses in a frenzy of activity as the church bells sounded the alarm. The old fire station on Cow-lane, Harwich, has seen them all, but now the borough’s final link with those far-off days of the early fire brigade has moved to a smart, new station in Fronks-road, Dovercourt. Station officer J. Howard, chalking up the sign which marked the official closing of the old fire station.
In 1801, at the time of the first census, Harwich and Dovercourt had a population of about 2,700. To us it would seem no more than a village, but by the standards of the time it was a fair-sized market town. In the years 1808-1810 a redoubt was built to protect Harwich from the French. A lighthouse was built in 1818 and St Nicholas Church was dedicated in 1822. In the early 19th century an industry making 'Roman cement' flourished in Harwich, although it had died out by the end of the century. On the other hand, the railway reached Harwich in 1854 and steam ships began sailing from the port. Furthermore, there were some improvements in Harwich during the 19th century: from 1870 Harwich was lit by gas, in 1880 the first sewer was dug, and in 1887 Harwich gained a piped water supply. In the early 20th century the fishing industry in Harwich petered out, but the port continued to thrive.
Today Harwich is still a flourishing port. In the 20th century Harwich continued to grow steadily. By 1911 it had a population of over 13,000. By 1971 it was almost 15,000.
Today Harwich retains much of it’s history, maritime links and old buildings, such as the Redoubt Fort, High and Low lighthouses, and the Electric Palace Cinema. There are numerous lovely old houses to be seen along the narrow streets including the Foresters house which is said to be the oldest house in Harwich.
The Redoubt was built between 1808 and 1810 to protect the port of Harwich against the threat of Napoleonic invasion. The Redoubt is of circular shape, approximately 200ft in diameter, with a central parade ground of 85ft diameter. Hoists were used to lift shells from the lower level to the gun emplacements. Though difficult to imagine as it is now surrounded by houses, when the Redoubt was built it was on a hill top with free views in all directions. A house was demolished to make way for the Redoubt, and a large elm tree – used by ships as a navigational mark was also removed.
Originally armed with ten 24-pounder cannon, the Redoubt was remodelled in order to accommodate increasingly heavy guns, as technology and the perceived threat changed. In 1861-2, work was carried out to accommodate 68-pounder cannon, and the emplacements were strengthened by adding granite facing to withstand improved enemy artillery. Only a decade later in 1872, three of the emplacements were altered to take enormous 12 ton RML(Rifled Muzzle Loading) guns. In 1903, three emplacements received 12 pounder QF(quick firing) guns.
Despite this ongoing modernisation, the Redoubt never fired a shot in anger. It is also probable that its strategic importance declined towards the end of the 19th century with the construction of the more powerful Beacon Hill Battery just to the south. In the 1920s the area around the Redoubt – previously kept clear to provide fields of fire – was bought by the Town Council. This land is used for allotments. The Redoubt itself was allowed to fall into disrepair.
The Redoubt was briefly taken back into military service during World War II, when it served as a detention centre for British troops awaiting trial. Examples of the graffiti left by the soldiers can still be seen in some of the rooms.
Following World War II the Redoubt was used by the British Civil defence organisation who used it until they were disbanded. That was the end of the Redoubt’s military service.
The importance of Harwich changed significantly with the arrival of the railway station in 1854 and later in 1883 when Parkeston Quay was opened. Probably it was the significance of the town to travellers and the increase in population which prompted the need to build such a substantial new Police Station complex. The site was designed as a self-contained law enforcement centre complete with court offices and accommodation for many police officers.
Plans for the proposed new Police Station were drawn up by the County Architect, Frank Whitmore in April 1913 and copies of some of the plans hang today on a wall within the building. This purpose built Police Station comprised of a basement, first and second floors and a roof space, referred to as the attic and part of this was as single Police Constables’ Quarters.
The Basement was divided up into various rooms to house the resources necessary to run such a large complex. These included a Weights & Measures Room, 2 Wash Houses complete with toilets, Store Rooms, and a Bicycle Store.
A block of terraced houses was created at the back and to the west of the Police Station which accommodated three married constables and one married sergeant. The accommodation was quite large, each house having a kitchen, pantry and living room on the first floor, two bedrooms on the second floor and a third bedroom in the attic of the roof space. The kitchens, living rooms and first two bedrooms were quite large, all measuring 13′ 6″ x 12′ 0″.
Detention room and four Prisoner Cells were constructed to very strict security guidelines. A Magistrates Court was built at the south end of the building but this was later demolished after being relocated within the Harwich Town Hall complex on the quay. Gardens were created to the west of the main building along with a grass slope along the front of the building from the pavement down to the basement level. A small footbridge, flanked by two pillars at the entrance and adorned by police beacons, crossed this slope from the pavement to the public entrance of the Police Station. A similar footbridge, flanked by just two pillars, connected the pavement to what used to be the Inspector’s accommodation to the north of the building. The Sergeant’s accommodation was built adjacent to this with access via steps from the basement level at the front and rear. A fence was built along the South-east boundary of the site and walls were built along the North-west and South-west boundaries.
By Kind permission of Richard Kirton – 18th March 2014
Unlike the previous three areas around which the town has developed.
Bathside is not a natural centre but a clearly defined residential unit with a strong social character and so is therefore worthy of comment. The physical barriers which contain and define the area, the railway to the south east and the mudflats to the North West, are extremely strong. The railway, although bridged by level crossings, divides Bathside from the town beyond and therefore essential facilities which service the Area.
The steep earth bank, which runs along North West perimeter, Shelters the area behind from the wind which blows across the open expanses of mudflats.
The railway, built in 1854, came before the houses which followed several Years later as the street pattern which conforms to the subsequent shape of the area, suggests Bath side derived its name from the baths which were located out on the Mudflats .these baths were originally set up in 1761, but with the coming of the railway which cut them off from the town beyond, and the opening of the rival baths in Dovercourt by Tolly Cobbold, the brewers, they steadily Declined and soon ceased to exist.
At one time, bathside supported a variety of small industries and commercial concerns. they had several slaughter houses at the south west end which have all now disappeared, a shipyard, a coalyard, sawmill, soft drinks factory and the gas works.
Much of the decline of the industrial and commercial activity can be attributed to the 1953 floods which devastated the whole area, but caused Particular hard ship to bath side which lies very near sea level. During the floods the entire bathside was evacuated, and many of the Population, especially the young families never returned, preferring to remain in the developing outer areas of Dovercourt.
In 1863 Trinity House erected two cast iron lighthouses on the beach. They were used until 1917 to guide ships around Landguard Point; the two lights aligned indicated the right course. the deep-water channel is now marked by buoys. the lighthouses were restored in the 1980s and are sometimes known as Dovercourt Range Lights. The Lighthouses are 150 yards apart and were leading lights, they worked as a pair; with one light positioned over the other. the vessel was then on the correct course.
The Low Lighthouse is a 45ft (16.5 metre) high, ten-sided tower of brick. The ground storey has a projecting canopy to provide public shelter. The High Lighthouse is a 90ft (32.8 metre) high, nine-sided tower of grey gault brick.
The lighthouses were built 9ft (3.3 metre) to the south west of the original sites. The old wooden Low Lighthouse was built by the beach and is portrayed in one of Constable’s paintings. The High Lighthouse was over the Town Gate (on the Felixstowe side of the present High Lighthouse). Both earlier lighthouses were coal fired.
As a fashionable 19th century Seaside resort and spa town it drained Harwich old town of most of its Wealth. as Dovercourt became more fashionable so Harwich became less so, And as the poorer sections of the population remained so the quality of the Environment in the old town deteriorated. The high street and the promenade run parallel, almost immediately behind one another with the advantage of giving the visitors staying in the Hotels along marine parade easy access to both pleasant beaches.
The main road, whilst being sheltered from the weather, is also sheltered from a view of the sea. Although from the junction of high street and Kingsway, the sea is barely 100 metres distant; one remains totally unaware if the saltiness in the air Dovercourt has the atmosphere of a reasonably prosperous small inland Town, with little evidence of its proximity to, and close association with The Sea appearing along the high street.
At the end of lower Dovercourt are the pleasant formally laid out paths and gardens of Cliff Park. Linking with the promenade behind, this too has an excellent view of the sea,
But the sight rise in the land away from the road denies passing traffic All knowledge of existence.
The original centre, now known as upper Dovercourt developed inland with no direct access to the sea. On the route in to town this is the first area of positive identity through which one passes .the green and the Mediaeval church of All Saints are
Much as they were 1000 years ago. The homes have changed, and the road has been improved but the centre still retains something of its Original village atmosphere. The actual fabric of upper Dovercourt has developed as a strip alongside the main road .beyond most of the strip is open country.
The spreading Residential development, threatening at the far end to engulf it, has been generated by the development of lower Dovercourt.
Take me back to England – take me back today
To the town where I was born – how I miss old Dovercourt Bay.
Take me to the lighthouse – I can smell the seaweed there,
Take me along the windy prom to tangle up my hair.
Just about here I’m thinking the Cliff Pavilion stood,
With Queen Victoria watching on in a very sombre mood.
Twas here just fifty years ago I met my future wife
At a summer dance on a Saturday night the luckiest day of my life.
Then on to the Spa – into the park to watch the squirrels play
A go on the swings and down the slide, then be on my way.
A bit of a hike to take a line and fish off old Stone Pier
But for all I ever caught there they didn’t have much fear.
Then onward down to Harwich where we used to moor our boat
Where I watched her go down in a gale one day, she lost the will to float.
Just past the wharf to Ha’penny pier where we used to catch the ferry,
To Shotley or to Felixstowe for a day of making merry
Watching trawlers coming and going alongside the Trinity ships,
The follow your nose up a side street for delicious fish and chips.
Round the corner to Gas House creek and the railway ferry crane
That my father once worked when I was a boy and aspired the same
Through Bathside past the sinky mud to a railway bridge by the sea
As a nine year old a most beautiful sight having been an evacuee
Then up to Dovercourt High St, past the lights to look at a place
Where I worked for ten years in my twenties and recognised every face
On up the hill where the Regal once was – next to my first high school,
Where the French teacher gave me my nick name for acting like a fool.
Down the lanes to the back of the school was the daunting Toboggan Hill,
On the few snowy days in winter sledges flying what a thrill.
Now I’ll look over to Parkeston Quay to watch the ships sail by,
After that stroll through the Hangings at dusk when bats invade the sky.
I’ll head out westward to Copperas Wood, bluebells there to pick
And on Wrabness foreshore where the tide comes in so quick,
Then I’ll make for the Wix Wagon pub through pretty country lanes
And down a couple of English ales to soothe away my pains.
Meadner through some winding roads to Oakley Little and Great
Into Mayes Lane to Ramsey church and Chafford where my mate
Spenty many years there cooking for the boys of the school.
They used to have a smashing Fete though it rained as a rule.
Through Tollgate past The Devon and onto Dovercourt Green
Where if you’re lucky daffodils to make floral scene.
The Memorial – the water towers – then wander down the Drive
The Skating rink – Putting green, the Boating Lake that I’ve Dreamed about quite often in the years I’ve been away, Then I’ll be back where I started on my Odyssey today.
We are adding more information to this site on a regular basis, if you wish to submit any photos or provide any information, please use the contact page at the bottom of the screen.
The Harwich Society | <urn:uuid:a5f907a0-6051-4d99-880b-35ac017ea365> | CC-MAIN-2017-17 | http://www.harwichanddovercourt.co.uk/harwich-history/ | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121893.62/warc/CC-MAIN-20170423031201-00190-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.977812 | 5,288 | 2.953125 | 3 |
Ballad of Bosworth Field, The
DESCRIPTION: After a prayer for England ("GOD:that shope both sea and Land"), the poem describes the armies of Richard III and Henry Tudor that fought at Bosworth Field. The Stanley brothers are highly praised for their role in the battle that made Henry the new King.
EARLIEST DATE: before 1750 (Percy folio); probably composed before 1495
KEYWORDS: royalty battle death
HISTORICAL REFERENCES:
Aug 22, 1485 -- Battle of Bosworth. Somewhere near Market Bosworth, the forces of King Richard III are defeated by those of Henry Tudor, and Richard is killed. Henry becomes King Henry VII
FOUND IN: Britain
REFERENCES (1 citation):
ADDITIONAL: Richard III Society Web Site, Ballad of Bosworth Field page, http://tinyurl.com/tbdx-BosworthF
NOTES: For the background to the reign of Richard III, see the notes to "The Children in the Wood (The Babes in the Woods)" [Laws Q34]. This particular entry is entirely specific to one of our few historical sources for that period, the so-called "Ballad of Bosworth Field," and the battle of Bosworth itself.
We have no absolute proof that the "Ballad" was ever sung, but it seems clear that it was intended to be. Of the sources I checked, it is cited by Ross and Bennett, but rarely used by other authors. Child mentions it in his notes to "The Rose of England" [Child 166] but does not deign to print it. Its value is debated; Wagner, p. 16, says of the three Bosworth ballads ("Bosworth Field," "The Rose of England" [Child 166], and "The Song of Lady Bessy") that some have gone so far as to treat them collectively as fiction, while others treat them as biased but genuine historical sources.
An honest assessment would treat them separately. "Lady Bessy," which shares some lyrics with this ballad, was valued in the nineteenth century by Agnes Strickland (Laynesmith, p. 21) and more recently by Alison Weir, but it is patent fiction and (it seems to me) a late rewrite which uses elements of "Bosworth Field" (my own guess is that it was designed to flatter Elizabeth I, the granddaughter of Elizabeth of York who is the Lady Bessy of the ballad -- or, just possibly, it is disguising the actions of Henry VII's mother Margaret Beaufort, who in fact did conspire against Richard III, behind her future daughter-in-law). "The Rose of England" is obvious Tudor propaganda with some Stanley flattery thrown in; while not pure fiction, it is extremely unreliable.
"Bosworth Field" is another, and much trickier, matter -- frankly, I think that this, rather than "The Rose of England," is the Bosworth ballad Child should have printed. It is probably near-contemporary; although our only copy is from the Percy Folio, there is a sixteenth century epitome which differs in some regards, making it likely that the original is earlier still.
Ross argues, since it praises Sir William Stanley, that the original is from before 1495, the year Stanley was executed (although Griffiths/Thomas, p. 134, counter-argue that it was composed after 1495 as a justification of William Stanley). Sadly, it has clearly been damaged in transmission; the names in the surviving copy are often much muddled. It seems intended to glorify the Stanleys -- who certainly didn't deserve the praise -- but its primary importance is that it is probably based on evidence gathered by a Stanley herald or spy (Bennett, p. 13) -- in other words, an eyewitness.
That the witness is biased is undeniable, and the author had very little real information about what happened in Richard III's army. If we take that into account, I agree with Ross that the "Ballad" should get more respect than it does; Ross notes that, insofar as it can be tested, it is accurate. The one major error in it is the claim that Richard had 40,000 men at Bosworth, which is impossible -- but such exaggerations are commonplace in records of the era.
It is unfortunate that the "Ballad" is not more often reprinted; while awfully long to be sung (164 four-line stanzas), it has some genuinely fascinating touches, such as a speech by Henry Tudor:
Into England I am entred heare,
my heritage is this Land within;
they shall me boldlye bring & beare,
& loose my liffe, but I[']le be King.
Iesus that dyed on good ffryday,
& Marry mild thats ffull of might
send me the loue of Lord Stanley!
he marryed my mother, a Lady bright.
Henry Tudor's mother was Margaret Beaufort, the last of the Beauforts, through whom Henry made his claim to the throne; the Beauforts were descended from John of Gaunt, the third son of Edward III [died 1377]. Both Lord Stanley and Margaret Beaufort, of course, had been married to others before they married each other. Margaret Beaufort, in 1457 (at age 13!), had borne Henry Tudor to her first husband, Edmund Tudor; by 1464 she was married to Henry Stafford, the brother of the Duke of Buckingham, who died in 1471; she married Lord Stanley no later than 1473 (Chrimes, pp. 15-16).
The situation in August 1485 was this (again, this is a very brief summary of the notes in "The Children in the Wood (The Babes in the Woods)" [Laws Q34]): The widely respected King Edward IV, first of the Yorkist line, had died in 1483, leaving as his heir a 12-year-old boy, Edward V. Edward IV's brother Richard, until then known for his conspicuous loyalty, had produced a series of arguments to prove that the boy was in fact illegitimate, and had taken the throne as Richard III. The uncrowned Edward V and his brother Richard Duke of York then vanished.
Richard's brief reign had already produced significant positive legislation, but many were dissatisfied -- some were unhappy about the disappearance of the "Princes in the Tower" (Edward V and Richard of York); others were die-hard supporters of the Lancastrian dynasty which Edward IV had overthrown, and some, such as the Duke of Buckingham, seem simply to have wished to feather their own nests. The Stanleys, the heroes of this ballad, certainly fell into the latter camp -- and were ruthless about it: "The Stanley family stopped at nothing to further their hegemony in northern Lancashire, using their influence at court to gain possession of the heiresses to the Harrington estate, subsequently imprisoning them and marrying them against their will" (Langley/Jones, p. 84; see also p. 224).
Many of these disaffected nobles settled on Henry Tudor as their hope. He had no real claim to the throne -- his mother was Margaret Beaufort, who was descended in illegitimate line from King Edward III, who had died more than a century earlier -- but he was a technical Lancastrian, and Lancastrians would support anyone over a Yorkist.
Henry had tried to invade England in 1483, but the rebellions on his behalf collapsed. In 1485, he tried again, and this time, he landed in England. (I can't help but note the irony that he set out from Harfleur, the place where Henry V had invaded France seventy years earlier; Ross, p. 202). He and Richard gathered their forces, and finally met at Bosworth.
Whether he deserved it or not, Richard's position in 1485 was precarious, due primarily to the decimation of the nobility. There were only a few really strong nobles left, and not all of them were loyal. It left Richard largely dependent on lesser men -- and caused him to bring a relatively small army to the greatest battle of his life; estimates run from about 3,000 to 10,000 men, the majority of them the Duke of Norfolk's if you exclude the "neutrals."
Meanwhile, Henry Tudor had been very, very lucky in his friends. The Bretons had planned to turn him over to Richard (in which case this discussion probably wouldn't be necessary), but Henry was warned just in time, and escaped to France. The French were temporarily in a very anti-English phase. And, just at the time when Richard was most distracted, they gave Henry Tudor a fleet (Arthurson, p. 5) and let him invade (Pollard, pp. 160-162). It is also possible that the Scots sent a contingent, although the evidence for this is quite indirect (Chrimes, p. 70 and n. 1).
The Wars of the Roses witnessed, in all, six changes of King, but only once, at Bosworth, did the two rival claimants face each other in battle (Bennett, p. 99). And Bosworth proved decisive mostly because Richard III died in the battle. Henry's invasion force initially consisted mostly of mercenaries from countries hostile to Richard (Ross, pp. 202-203), though of course he picked up some supporters in Wales.
Ross-Wars, p. 101, notes how close the Tudor invasion came to failure: "In Brittany [Henry] narrowly managed to escape being captured and turned over to the English, and made good his escape to France. There the government, which was anxious to absorb Brittany into France, and feared that Richard would support the Breton independence movement, decided to aid Henry's invasion. Supplied by France with money, ships, and some 3,000 French troops, he set sail for Wales in August 1485 -- but just in the nick of time, for French policy changed abruptly after his departure."
Henry landed at Milford Haven in Wales on August 7. Richard, who was based in Nottingham, apparently learned of his landing on August 11, and summoned such supporters as could reach him quickly. The two armies met on August 22.
The notes to "The Children in the Wood (The Babes in the Woods)" [Laws Q34] describe the incredibly poor sources we have for this period. We have no complete account of the battle except the Tudor historian Polydore Vergil's, written decades later by someone who was not a witness and had never seen the battlefield (and who was so confused that he dated it to 1486, not 1485; Bennett, pp. 13-14), plus this song, which claims Richard had 40,000 troops, which is obviously impossible. The lack of data is so extreme that one author is convinced that we do not even know where the battle took place, moving it several miles away to Dadlington (Ross-Wars, p. 182; Pollard, p. 169 also mentions this as a strong possibility).
Although it is generally called "Bosworth," the first proclamation about it by Henry Tudor said the battle took place at Sandeford (Chrimes, p. 51). Langley/Jones, p. 220, lists several early names and says that the name "Bosworth" was not used (at least in any surviving record) until 1500.
Unfortunately, Vergil's account is not very clear; at one point it appears to confuse east and west, and it does not fit the ground as it now exists -- e.g. there is a mention of a vanished marsh.
The reconstruction of the battle depends very much on where the marsh is located. The map in Burne, p. 290, places it south of Richard's position on Ambien Hill, making action on that flank difficult. Ross's map on p. 219 places it more to the west, putting a gap in the area where Henry Tudor might attack. Kendall's maps, pp. 438-439, approximate Burne's. Chrimes, p. 47, thinks Kendall's map is as accurate as can be reconstructed today but does not believe any complete reconstruction possible. St. Aubyn's map, p. 210, shows an extremely large marsh covering half the slope of Ambien Hill -- and shows details of the armies that are simply not known. Bennett, p. 108, firmly believes the marsh was on the south side of the hill although he is uncertain of the size -- but his map on p. 98 shows the marsh far from the hill and stretching all around it, and implies that the armies met in a small gap. Cheetham's map is similar to Ross's. Gillingham, p. 242, declares that "all [maps] are quite worthless" but on pp. 243-244 gives a detailed restatement of Vergil that looks like a written description of St. Aubyn's map minus the mention of Ambien Hill.
Not all are convinced the battle even took place on Ambien Hill. Saul3, p. 79, mentions three possible places: Ambien Hill (which he spells "Ambion"); Dadlington; and near Atherstone; Saul thinks the last the most likely.
Ashdown-Hill, p. 80, gives a map that doesn't even show Ambien Hill, and which gives a completely different battle layout; it puts Richard's army along a line from northwest to southeast, with Howard's vanguard to the northwest, Richard's main body in the center, and Northumberland to the southeast near Dadlington. The Tudor army was across the marsh from Richard's center; the Stanley army (he believes there was only one) was to the south of the marsh, closer to the Tudor army, meaning that they could only reach Northumberland's force directly. But Ashdown-Hill, p. 81, seems to think that Richard's charge effectively opened the battle -- which leaves no time for the Duke of Norfolk to be killed. So, with Richard dead, the Tudors then attacked the royal army and killed Norfolk; with him dead, the Yorkist army had no leader and evaporated (Ashdown-Hill, p. 82). But this is pointless; with Richard dead, the war was effectively over. The only reason that I can see for this reconstruction is to give time for the battle to sweep away from Richard's corpse, so that the folklore of the crown being found under a bush could be true; Ashdown-Hill, p. 88, suggests that someone looted the body and tried to hide the loot. But even if the soldiers would have left Richard's body behind, would Henry Tudor? Hardly; he needed proof that Richard was dead!
The armies may have been almost as blind as we are; Bennett, p. 92, thinks that Lord Stanley, while claiming to bring his forces into Richard's army, was in fact between the King's and Henry Tudor's army, and was preventing the king from getting any useful intelligence. But his reconstruction, p. 109, also causes the Tudor forces to approach Richard's from the east -- meaning that Henry's army marched past Richard's and turned back. This is almost as hard to believe as Ashdown-Hill's reconstruction; I mention it simply to show how little we understand of what happened in 1485.
Bosworth was a most unusual battle, for there were not two but (probably) *five* armies. Though they were small ones -- Gillingham, p. 33, notes that at this time soldiers were paid wages, but their "profit," if any, came from plunder. Since it was hard to plunder one's countrymen, most battles of the Wars of the Roses involved relatively small forces led by a few great magnates rather than the large contract forces of the Hundred Years' War. And, as the war lasted longer, wages had to go up, and the armies got even smaller (Gillingham, p. 35).
Richard's personal army seems to have been particularly small for an army led by a crowned king, perhaps because he by this time was having financial difficulties. He had not gotten much money from his 1484 parliament (Ross, p. 178), and was having to borrow from his magnates (Ross, p. 179). On p. 215, Ross says that "it can be suggested that the size of Henry's army has been underestimated and that of Richard's exaggerated. Allowing for the men he recruited en route from Milford Haven, Henry may have had 5,000 men, perhaps more. Potentially, Richard could have gathered far more, but, given the hasty circumstances of his array, he may have had no more than 8,000 men in his command, although 10,000 is by no means unlikely." Bennett, p. 103, suggests 10,000 to 15,000 for Richard -- but doesn't really have a place for them in his battle map.
Either total, however, includes the Earl of Northumberland, who certainly did not fight for Richard and probably was unwilling to fight. (Bennett, p. 74, even suggests that he had been in communication with Henry Tudor, although if so, nothing came of it.) In practical terms, this suggests that Richard had no more than seven thousand, and probably less; the two armies thus were close to equal in size, though Richard's was probably better equipped and led; it would certainly have had the edge in artillery.
The senior officers in the loyal army were Richard and the Duke of Norfolk, the former Lord Howard. Henry Tudor was theoretical commander of the second force, though probably the de Vere (shadow) Earl of Oxford commanded in the field (Bennett, pp. 64-65, suggests that Henry Tudor might not have dared to invade without him, and notes that Richard had made an unsuccessful attempt to keep Oxford from getting away from his complacent guards at Calais; Langley/Jones, p. 192, points out that Henry Tudor had seen only one battle, and that was when he was twelve; it was a defeat in which he probably was not a combatant); the other senior officer in the Tudor camp was Henry's uncle Jasper Tudor, another shadow earl, although Langley/Jones, p. 195, says he had left the Tudor army before Bosworth.
Then there were the independent armies, those of Lord Stanley, his brother Sir William Stanley, and the Earl of Northumberland. Northumberland kept his troops in Richard's camp but commanded them independently. Lord Stanley, whose current wife was Henry Tudor's mother, and William Stanley kept their forces entirely separate, meeting Henry Tudor but not joining him and keeping Richard on a string. And they had a well-deserved reputation for playing both sides (see, e.g., the notes to "The Vicar of Bray"; Langley/Jones, p. 225, calls them "the arch dissemblers in the Wars of the Roses") -- one reason, perhaps, why they had to produce this piece of propaganda to defend their actions.
Thus when the Battle of Bosworth started, there were four forces, arranged probably in a rough square, or perhaps we should say in a rough cross, with Richard's forces facing Henry's and the Stanley armies (which were probably as large as or larger than the other two forces) occupying the other two sides of the square. Northumberland, theoretically part of Richard's force, was sitting still to Richard's rear. The best guess is that Richard was with his army's main body but that Henry Tudor was in the rear of his army -- he wanted to win, not fight, and if he failed, he perhaps wanted to escape (Langley/Jones, p. 198).
It amazes me how many divergent details the various authors can discover in the very limited material available in Vergil. Ross rightly slams Kendall for turning a brief summary into a detailed, lyrical account -- but ignores the fact that St. Aubyn, p. 213, regales us with the tale of Richard's "terrible dream," or Seward-Roses, p. 305, wants us to know about Richard's "haggard appearance" and "ferocious speech." How many people, even in Richard's forces, would know of the dream, and why would they tell a biased chronicler? Cheetham, p. 187, comments "Predictably enough, our two contemporary voices -- Croyland and Vergil -- attribute to Richard a sleepless night, interrupted by 'dreadful visions' and premonitions of disaster." (Note, though, that Vergil is not contemporary, and that Croyland's description is only a few lines long.) Our third contemporary, this song, has a lot of surely-fictitious speeches, but no sign of the dreadful dreams in the transcription I've seen. And Langley/Jones, pp. 198-199, describes Richard's acts on that day (e.g. of displaying the crown) as the confident behavior of one who expected to win. Ashdown-Hill, p. 71, repeats another tale, in which an old woman cried for alms on the way to the battlefield, then said that "where [Richard's] spur struck, [there] his year should be broken." What, Richard's history borrowed a plot element from "Robin Hood's Death"?
In any case, as Bennett comments on p. 97, "it seems unlikely that the young Henry Tudor... slept any better."
Burne, p. 291, believes that the scene of the battle was set when Richard's force occupied Ambien Hill very early on the fatal day (Monday, August 22, 1485). This seems likely enough -- Richard was clearly the more enterprising commander, and Ambien Hill was the dominant position in the area; St. Aubyn, p. 209, Kendall, p. 433, Cheetham, p. 187, and Ross, p. 217, all agree with Burne at least this far.
Unfortunately for Richard, Ambien Hill, while tall, is very narrow. All the authors seem to agree that, instead of forming his three divisions in a line, Richard ended up with Norfolk in front, on the slopes of the hill, Richard's own division behind him, and Northumberland somewhere to the rear (though it is hard to see how they could have gotten into that formation if the map in Kendall, p. 438, is accurate; in this, Kendall clearly seems wrong).
Bennett, p. 104, suggests that Henry placed almost all his forces in a vanguard under the Earl of Oxford, keeping only a small company of his own -- understandable, given Henry's lack of experience. His inference from this is that Henry was expecting the Stanleys to guard his flanks -- as, in effect, they did. Langley/Jones, p. 197, agrees that most of Tudor's forces were in the vanguard, but offers a different explanation: the Tudor captains wanted to score an early success, even if they couldn't back it up, so the Stanleys would commit to their side.
Based on the little we know, it appears that Richard's and Henry's armies started the battle, with the Stanleys standing aside (all authorities, including even the very anti-Richard Gillingham, p. 243, agree on the duplicitous behavior of the Stanleys). By the nature of the ground, that meant Tudor's forces under Oxford attacking Norfolk. Despite Gillingham, this seems to me to almost assure the general accuracy of the Burne/Ross/Kendall reconstruction of the battle with Richard on Ambien Hill. If Richard hadn't been on the hill, he would surely have created a broader battle line, and the final charge would have been impossible.
Exactly what happened next is uncertain, because we know that Norfolk died in the battle, but we don't know when. If Vergil is right in saying that the whole battle lasted only two hours (Gillingham, p. 244), it must have happened fairly quickly, but that's not much to go on.
We also know that Northumberland did not participate in the battle. (Pollard, p. 171, mentions that we have this from Croyland, not just Vergil. One source, the "Spanish Letter," appears to say that Northumberland actually attacked Richard, but Ross, p. 216, rejects this as impossible. Ross, pp. 218, 221, thinks that the nature of the ground meant that Northumberland could not engage at all, but most of the other scholars think he refused to fight, and the behavior of his vassals in 1489 seems to support this. It seems to me that a refusal to fight would also explain the "Spanish Letter.")
Four years after Bosworth, Northumberland was murdered by a mob of rioters protesting over Henry Tudor's taxes -- Cunningham, pp. 79, 108 -- and while we don't have any certain knowledge of why he died, the strong indication is that his henchmen refused to rescue him because of his betrayal of Richard III (Pollard, p. 171). (Percy printed Skelton's "Elegy on Henry Fourth Earl of Northumberland" -- p. 117 of volume I of Percy/Wheatley -- but this elegy appears to have no useful information even though it is near-contemporary.)
Pollard is convinced, p. 171, that Richard would have won the battle had Northumberland fought. Presumably Henry Percy's own subjects felt the same -- and liked Richard better than they liked their earl.
Eventually, Richard tried a maneuver -- a charge on the Tudor ranks, aiming for the pretender personally.
The timing and the reason are unknown. Kendall, p. 439, thinks it came when Norfolk was killed -- bad news indeed for Richard -- and that Northumberland's neutrality had already been revealed by then. If Kendall is right, then the death of Norfolk left Richard in a very precarious position, with his main force disorganized and little chance that any of the three neutrals would come to his aid. Hence he decided to try a death-or-glory charge: If he could kill Henry Tudor, the battle would be won.
Ross does not mention Norfolk's death at this stage (on p. 218 he mentions it as merely "probable" that Norfolk was already dead when Richard died), but thinks Richard may have seen that his force was being defeated (also, he speculates on p. 223 about low morale in Richard's forces). Ross, p. 222, agrees with Kendall that the desire to end the battle by killing Henry was a possible motive, though he isn't entirely sure that Richard was actually trying a charge just with his guard. He may have been trying to bring his entire division into action.
Langley/Jones, p. 191, offers a different suggestion: That Richard, who had an interest in chivalry and owned a book telling of a single combat between Alexander the Great and an enemy leader, wanted to settle things in a direct duel. On p. 201, they suggest that Richard went for it as soon as he had figured out the Tudor army's dispositions. This would certainly explain why the battle didn't last long.
There is an alternate account given by Young/Adair -- who are not specialists in the period. They credit -- without giving an authority -- Richard with having precisely 9640 men; p. 101. Henry's army they credit on p. 103 with 8000 troops. They suggest there was only one Stanley army, of about 2000 men; p. 102. And they place the battle entirely to the south of Ambien Hill, suggesting that the Stanleys positioned themselves at the top of the hill. They suggest that Norfolk and Oxford actually fought in single combat; p. 104. They credit Northumberland with sitting on his hands, but their map does not show how he could have done so. Allowing that Vergil's account is probably thoroughly untrustworthy, I have to say that this version strikes me as even less likely to be right -- it sounds as if it's straight out of a romance.
A more reasonable alternate suggestion comes from Ross-Wars, pp. 132-135, who suggests that Henry Tudor was concerned about the course of the battle, and rode off to appeal to the Stanleys (whom he too suggests may have had only one force, not two). Richard, observing the maneuver, chose to attack Henry as the rebel force moved. While a better fit for the known facts than the Young/Adair account -- indeed, it is a good explanation for why Richard would make what otherwise seems a foolhardy move -- this remains speculation.
Another possibility is suggested by Bennett's belief that Henry expected the Stanleys to cover his flanks: When Richard saw that the Stanleys were sitting still, he decided to do just what Henry feared and go around Oxford's flank to get at Henry and the Tudor rear.
Chrimes, p. 48, offers what seems to me the best suggestion of all: Richard saw that he had three neutrals on his hands (Thomas Stanley, William Stanley, and Northumberland) -- and he wanted to end the battle before any of them could decide to go over to Henry Tudor.
Whatever Richard's intention in his final maneuver, what it seems to have come down to was a charge by Richard and his household knights toward the Tudor flag -- a charge which came very close to succeeding. (At least, that's what Vergil thought Richard was doing; Burne, p. 295, suggests that he was actually trying to kill the traitor Lord Stanley. This seems absurd -- Richard could have gotten real revenge on Stanley by killing Stanley's son Lord Strange, who was his hostage, and in any case, if he killed Henry Tudor, he could deal with Stanley at his leisure.) But Sir William Stanley charged and managed to destroy the back of Richard's attack force (Gillingham, p. 244, thinks that Richard's companions mostly deserted him in the attack, but also notes that Richard almost managed to reach Henry Tudor -- impossible if he had truly been abandoned). Attacked front and rear, the charge failed. Richard died in the fighting.
This would also explain a report that Richard lost his horse in a marsh near the battlefield, near a place where archaeologists found a copy of Richard's token of a boar (map on p. 204 of Langley/Jones). Probably Richard went around the Tudor flank, and when William Stanley intervened, the charging horsemen were pushed more and more away from Tudor and toward the marsh (Langley/Jones, p. 206).
Why did Richard do it? To get things over with, perhaps; this seems to be Kendall's view. But we can't know. The "Ballad of Bosworth Field" declares,
He said, "giue me my battell axe to my hand,
sett the crowne of England on my head soe hye!
ffor by him that shope both sea and Land,
King of England this day I will dye!"
This seems to contradict Henry's actual behavior; according to Langley/Jones, p. 202, Henry actually got off his horse and hid among his bodyguard. Of course, he might have been trying to fight with them. Given his record, though, this seems quite unlikely.
The one thing that everyone seems to agree is that the grand charge was very courageously done: Burne, p. 295, says "Richard died like a king." Croyland said he died "like a brave and most valiant prince" (Burne, p. 296). Vergil reports, "King Richard alone was killed fighting manfully in the thickest press of his enemies... his courage was high and fierce and failed him not even at the death which, when his men forsook him, he preferred to take by the sword rather than, by foul flight, to prolong his life" (Gillingham, pp. 244-245). "Whatever he merited as man or king, as a soldier King Richard deserved a better end" (Young/Adair, p. 106).
The tendency on the part of Richard's partisans has been to blame his supporters for the defeat. Northumberland is the one usually blamed. Kendall thinks Northumberland's inertia was due to dislike for Richard. Ross, p. 167, observes that the two had been at loggerheads from the early 1470s. He also notes that the Percies were among the oldest of the noble families, and that Richard was closely linked with the Neville family, rivals of the Percies. (He doesn't say much about the fact that the Percies had a history of rebellion against kings in power.) Cunningham, p. 75, suspects that Richard was dead by the time Henry Percy was in position to intervene -- though this doesn't explain why Northumberland's forces were so far from the field. Cunningham also suspects that it was new continental tactics which defeated Richard: Henry Tudor's mercenaries formed square to take Richard's cavalry charge, and it worked.
Gillingham goes on to call Richard a "disaster" as king. I truly don't see why -- unless one says that his death was disastrous because it put England under the Tudors. Legislatively, as we have seen, Richard's reign was unquestionably good. This is true even if one accepts the Seward/Weir view that he was a monster.
The aftermath of course was a dramatic change in English politics and the situation of the nobility. Thomas Stanley, who inspired this song, was made Earl of Derby, constable of England, steward of the Duchy of Lancaster, and more (Chrimes, p. 55). William Stanley, the man who actually did in Richard, also received offices (Chrimes, p. 55) -- but he was not made a baron, and Henry Tudor would eventually execute him! Jasper Tudor, Henry's uncle who had kept his cause alive for many years, was made Duke of Bedford despite having no English royal blood; he also married a sister of the old Queen Elizabeth Woodville (Chrimes, p. 54). And the Earl of Oxford, who probably deserves most of the credit for Bosworth, was restored to his earldom and made Admiral of England (Chrimes, pp. 54-55).
Perhaps we should give the last word to Ross-Wars, p. 100, who writes, "Richard was by no means the personification of evil which he was to become in the hands of hostile Tudor propagandists. He had charm, energy, and ability, and he worked hard to win popularity. But it took time to live down the legacy of suspicion and mistrust generated by the violence of his usurpation. Even in that ruthless age, many men were appalled by what they clearly believed to have been his crime against the princes.... Had Henry Tudor's invasion been long delayed, its outcome might have been very different, but in 1485, Richard was still far from having won the confidence of his people in general." - RBW
Last updated in version 4.0
- Arthurson: Ian Arthurson, The Perkin Warbeck Conspiracy: 1491-1499, 1994 (I use the 1997 Sutton paperback edition)
- Ashdown-Hill: John Ashdown-Hill, The Last Days of Richard III and the State of His DNA, original edition 2010; revised edition covering the exhumation of his body, History Press, 2013
- Bennett: Michael Bennett, The Battle of Bosworth, St. Martin's Press, 1985
- Burne: A. H. Burne, The Battlefields of England (a compilation of two volumes from the 1950s, Battlefields of England and More Battlefields of England, with a new introduction by Robert Hardy), Pen & Sword, 2005.
- Cheetham: Anthony Cheetham, The Life and Times of Richard III (with introduction by Antonia Fraser), George Weidenfeld and Nicolson, 1972 (I used the 1995 Shooting Star Press edition)
- Chrimes: S. B. Chrimes, Henry VII, a volume in the English Monarchs series, University of California Press, 1972
- Cunningham: Sean Cunningham, Richard III: A Royal Enigma, [English] National Archives, 2003
- Gillingham: John Gillingham, The Wars of the Roses, Louisiana State University, 1981.
- Griffiths/Thomas: Ralph A. Griffiths and Roger S. Thomas, The Making of the Tudor Dynasty, Alan Sutton, 1985
- Kendall: Paul Murray Kendall, Richard the Third (1955, 1956). Pro-Richardness: 8. Research: Good.
- Langley/Jones: Philippa Langley and Michael Jones, The Search for Richard III: The King's Grave, John Murray, 2013 (I use the 2014 paperback edition)
- Laynesmith: J. L. Laynesmith, The Last Medieval Queens: English Queenship 1445-1503, Oxford, 2004 (I use the 2005 paperback edition)
- Pollard: A. J. Pollard, Richard III and the Princes in the Tower, 1991 (I use the 1997 Bramley Books edition)
- Ross: Charles Ross, Richard III, University of California Press, 1981
- Ross-Wars: Charles Ross, The Wars of the Roses: A Concise History, Thames and Hudson, 1976.
- Saul3: Nigel Saul, The Three Richards: Richard I, Richard II and Richard III, Hambledon & London, 2005.
- St. Aubyn: Giles St. Aubyn, The Year of Three Kings: 1483, 1983
- Seward-Roses: Desmond Seward, The Wars of the Roses, 1995
- Wagner: John A. Wagner, Encyclopedia of the Wars of the Roses, ABC-Clio, 2001
- Young/Adair: Peter Young & John Adair, Hastings to Culloden: Battles of Britain, 1964, 1979; third edition published by Sutton Publishing, 1996.
The Ballad Index Copyright 2016 by Robert B. Waltz and David G. Engle. | <urn:uuid:15b4b6d8-b13f-47f0-bde4-436459be837b> | CC-MAIN-2017-17 | http://www.fresnostate.edu/folklore/ballads/BdTBOBoF.html | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121153.91/warc/CC-MAIN-20170423031201-00247-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.977202 | 7,992 | 3.09375 | 3 |
Readers who have been following the lessons in logic can exercise some of their new skills on the slogan of the sugar substitute Splenda:
Made from sugar so it tastes like sugar.
Q: Is this slogan an argument?
A: Yes, it is.
Q: How can you tell?
A: The word "so" is an argument indicator.
Q: What is the argument's conclusion?
A: "It (Splenda) tastes like sugar."
Q: How can you tell?
A: "So" is a conclusion indicator.
To sum up, here's the argument made by the slogan:
Premiss: Splenda is made from sugar.
Conclusion: Splenda tastes like sugar.
Q: Is this a cogent argument?
The makers of the rival sugar substitute Equal are suing the makers of Splenda for false advertising partly because of this slogan. Since I'm not a lawyer, I take no position on the legal issue of false advertising; but, as a logician, I think that the makers of Splenda are guilty of fallacious arguing.
The problem is that the phrase "made from" is ambiguous. In order for the argument made by the slogan to be cogent, Splenda must be "made from" sugar in some way that preserves the taste of sugar. For instance, one might say that a piece of candy is sweet because it is made from sugar. The sweetness of the candy comes from the sweetness of the sugar because the sugar is not chemically changed in the process of making the candy.
However, Splenda is "made from" sugar in a different way: its sweetness comes from sucralose, a chemical which can be made from sugar, that is, sucrose. However, in the chemical process that produces sucralose, the sucrose is destroyed. There is no sugar in Splenda, so its sweetness is not the result of the sweetness of sugar.
So, in order for the slogan to be cogent, Splenda would have to be made from sugar in a different way than it is made. In other words, to be cogent, its premiss must be false. In the sense in which the premiss is true, the argument is uncogent.
Apparently, Splenda is actually made using cane sugar, but sucralose can also be manufactured from such things as broccoli. However, don't expect to see "Made from broccoli so it tastes like broccoli" on its packets. Since the end result of the chemical process is sucralose, it will taste just as sweet whether it is made from sugar, beans, or onions. So, the fact―if it is a fact―that Splenda tastes like sugar is not because it is "made from" sugar.
Splenda's slogan is either a non sequitur, or its premiss is false. Is that false advertising? Maybe, maybe not, but it is fallacious advertising.
Source: Lynnley Browning, "Sweet like sugar―but it's not sugar", Boston Globe, 4/9/2007
Resource: Lessons in Logic 4: Conclusion Indicators, 4/11/2007
After an undisclosed out-of-court settlement of the lawsuit, the makers of Splenda changed its slogan to: "It's made from sugar. It tastes like sugar. But it's not sugar." This eliminates the misleading conclusion indicator, "so", between the first and second sentences. However, the new slogan may still mislead people, because it's a convention of communication that when two statements occur close together they should have some kind of close connection. Moreover, another convention is that, just as causes precede effects, statements of causes precede statements of effects. For this reason, it's likely that many consumers will think that Splenda's being made from sugar causes it to taste like sugar. This, of course, is the same misinformation that the original slogan conveyed, though the new one is subtler and, thus, sneakier about insinuating it.
Source: Lynnley Browning, "New Salvo in Splenda Skirmish", The New York Times, 9/23/2008
The Worst of All Possible Arguments
Keung Ngai recently wrote to ask an interesting question: Can any categorical syllogism commit all of the syllogistic fallacies? The answer is no, because some of the fallacies exclude others. For instance, a syllogism which commits the fallacy of affirmative conclusion from a negative premiss cannot also commit the fallacy of negative conclusion from affirmative premisses. However, it is possible for arguments to commit more than one syllogistic fallacy. For instance:
All koalas are marsupials.
All kangaroos are marsupials.
Therefore, no kangaroos are koalas.
This argument has both an undistributed middle term―"marsupials"―and a negative conclusion together with affirmative premisses. So, while it isn't possible for a syllogism to commit every fallacy, it's possible to commit more than one. This raises the question: What is the maximum number of fallacies that can be committed by a categorical syllogism?
The answer, of course, depends upon how many syllogistic fallacies one recognizes. For instance, one could get by with a general fallacy of illicit process, rather than having the two subfallacies of illicit major and minor. Let's use the list of subfallacies given in the entry for syllogistic fallacy, treating illicit major and minor as two distinct fallacies, and not counting illicit process as a separate fallacy. Furthermore, let's add the existential fallacy, which isn't listed as a type of syllogistic fallacy because it is a more general formal fallacy, but can occur in syllogisms. That makes a total of eight possible syllogistic fallacies:
- Affirmative Conclusion from a Negative Premiss
- Exclusive Premisses
- Existential Fallacy
- Four-Term Fallacy
- Illicit Major Term
- Illicit Minor Term
- Negative Conclusion from Affirmative Premisses
- Undistributed Middle Term
My challenge to you, the reader, is to devise a categorical syllogism that commits the largest number of the above fallacies. There is no prize, other than the sense of accomplishment you will feel from having devised the worst argument in the history of the world―or, at least, the worst syllogism―but what more could you need?
Q: My wife and I entered into the difficult debate of whether to vaccinate our child. During my research, I ran into the below excerpt from a book and thought that it contained a fallacy, but I'm wondering if I'm right or just too cynical.

Smallpox was the scourge of mankind in 1717 when Zabdiel Boylston developed an effective but hazardous method of protection. He called it inoculation. He injected a small amount of infected material directly from smallpox patients into uninfected patients.
It was reasonably effective. During previous epidemics of smallpox, one in seven of those infected died. Only one in forty-one of those Boylston inoculated died. Boylston did not lack volunteers for this risky procedure because fear of the epidemic drove people to him. Boylston's medical colleagues, however, strongly opposed his revolutionary practice. They made a great deal of the one of his forty-one patients who died, ignoring the extraordinary improvement in mortality the other forty represented.
They accused Boylston of violating two of the ancient injunctions of Hippocrates, whose teachings had guided medical ethics since antiquity: "Above all do no harm to anyone nor give advice which may cause his death." Boylston persevered in his treatment because he understood the relative risks of not being inoculated (at epidemic's end, 844 of 5,759 people, or 14.6 percent of those who developed smallpox, died) compared to the risks of being inoculated (eventually 6 of 247 people, or 2.4 percent of those he inoculated, died).
This striking reduction in the risk of death eventually exonerated Boylston and demonstrated the principle that smallpox could be prevented by human intervention, eventually leading to the almost-foolproof method of vaccination.
Source: When to Take a Risk (1987)
It seems to me that the author and Boylston argue (essentially), "2.4% of people dying is much better than 14.6% of people dying, therefore, everyone should be vaccinated." However, don't they make an error? Fourteen-point-six percent of those "who developed smallpox" died; but how many of the 247 people that volunteered for the vaccination would have ever contracted smallpox? If (hypothetically) only ten of the vaccinated sample would have naturally contracted smallpox, then only 1 or 2 of them would have died compared to the 6 that really died due to the vaccination, right?
Or, put in another way, since we don't know what the total population was, then we can't compare the smallpox deathrate of the total population compared to the vaccination deathrate of the 247 population. So, if (hypothetically) the total population was 100,000 people, then the 844 death rate would represent a less than 1% death rate for smallpox compared to the 2.4% deathrate of the vaccination.
I know that my argument does not make a case against the vaccinations, but likewise their argument does not make a case for vaccinations, right?―Anthony T.
A: It's important to get clear about what the passage is talking about before criticizing it. First of all, Boylston wasn't vaccinating people in the modern sense, since he didn't have a vaccine, but was inoculating them with small quantities of smallpox. Vaccination for smallpox was done late in the same century by Edward Jenner, who inoculated a boy with cowpox, a similar but less dangerous disease that conferred immunity to smallpox. This is why the word "vaccinate" comes from the Latin word "vacca" for cow. Vaccinations today are done using weakened or altered versions of a virus. So, whether it would have been a good idea for someone to have undergone Boylston's procedure is a separate issue from vaccination today.
Now, let's turn to the historical question of whether it would have been a good idea to be inoculated by Boylston in 18th century Boston. You're right that you can't decide this issue simply by comparing the death rates of inoculation and smallpox. Rather, you need to know the likelihood of contracting smallpox. If it were as low as in your examples, then it would be riskier to get inoculated than to just take the chance of catching the disease. So, what was the risk of getting smallpox in the Boston of Boylston's time?
According to the Source linked to below, the population of Boston during the epidemic of 1721 was approximately 12,000, of which half contracted smallpox. The death rate was 14%, so around 840 people died. Suppose that the entire population of Boston had been inoculated. Given that the death rate from inoculation was about 2%, this means that 240 people would have died. So, approximately 600 lives would have been saved by inoculating the entire city.
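The arithmetic in the paragraph above, together with Anthony's break-even point, can be checked in a few lines of Python. The figures are the rough ones quoted in the text, and the variable names are my own:

```python
# Back-of-the-envelope check of the 1721 Boston figures quoted above.
population = 12_000
infected = population // 2                         # about half the city caught smallpox
deaths_without_inoculation = infected * 14 // 100  # ~14% case-fatality rate
deaths_if_all_inoculated = population * 2 // 100   # ~2% died of the inoculation itself
lives_saved = deaths_without_inoculation - deaths_if_all_inoculated

print(deaths_without_inoculation, deaths_if_all_inoculated, lives_saved)  # 840 240 600

# Anthony's point made precise: inoculating everyone only lowers expected
# deaths when the chance of catching smallpox exceeds the ratio of the two
# death rates, i.e. 2% / 14%, roughly a 14% infection risk.
break_even_infection_rate = 0.02 / 0.14
```

On these numbers the break-even infection risk is about 14 percent; with half the city actually infected, inoculation was clearly the better bet, which is just what the text concludes.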
The passage that you quoted is misleading in that it does not mention the infection rate of smallpox, which is needed to evaluate Boylston's procedure. While interesting in its own right, what happened 300 years ago has no bearing on whether it's a good idea to have your child vaccinated.
Source: Stefan Riedel, "Edward Jenner and the History of Smallpox and Vaccination", Baylor University Medical Center Proceedings, 1/2005
Lessons in Logic 4: Conclusion Indicators
Having recognized that a passage contains an argument, the next skill that a logician requires is the ability to analyze its structure. By "structure", I mean identifying which of the argument's statements are premisses and which is the conclusion. Assuming that the passage contains a single argument, identifying the conclusion is the easiest way to analyze it; if the passage contains more than one argument, then identifying the conclusions will help to separate out each argument.
Analyzing the premiss-conclusion structure of an argument is a vital step in understanding and evaluating it. If one mistakes a premiss for the conclusion, any subsequent criticism of the argument will miss the mark. Such misunderstandings of an argument's structure may easily lead to straw man attacks on the argument, and it is likely that this sort of mistake is a common source of straw men.
The previous lesson introduced argument indicators, that is, words or phrases that indicate that an argument is afoot. Given that arguments consist of premisses and conclusions, there are two types of indicators:
- Premiss indicators
- Conclusion indicators
Try to answer the following question before you look at the answer; with a little thought you should be able to:
Q: Which type of argument indicator is "therefore"?
A: A conclusion indicator. The rest of this lesson will be devoted to conclusion indicators.
A conclusion indicator is a word or phrase that indicates that the statement that it is attached to is a conclusion. Typically, conclusion indicators immediately precede the conclusion, but occasionally they will be found in the middle, and sometimes even at the end!
One test of whether a word or phrase is a conclusion indicator is whether it is a synonym of "therefore", and whether it would preserve the meaning of a passage to substitute "therefore" for the word or phrase in question. Of the indicators that we have seen so far, "thus", "so", and "hence" are also conclusion indicators, as can be verified in any reliable dictionary. The following is a partial list of common conclusion indicators in English:
- it follows that
- for this reason
- for these reasons
- we may conclude that
- we may infer that
- as a consequence
- as a result
- which proves that
- which means that
- which implies that
- which entails that
- which shows that
Warning! This list of indicators is not complete. No exhaustive list of English indicators is possible, since it is always possible to put together new phrases which serve the purpose.
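As a practical illustration (not part of the original lesson), here is a small Python sketch that scans a passage for some of these conclusion indicators. The phrase list is deliberately partial, per the warning above, and the function name is my own invention:

```python
import re

# A few of the conclusion indicators listed above; deliberately incomplete,
# since no exhaustive list is possible.
CONCLUSION_INDICATORS = [
    "therefore", "thus", "hence", "so",
    "it follows that", "for this reason", "as a consequence",
    "as a result", "which proves that", "we may conclude that",
]

def find_conclusion_indicators(text):
    """Return (position, phrase) pairs for indicators found in `text`."""
    hits = []
    lowered = text.lower()
    for phrase in CONCLUSION_INDICATORS:
        # Word boundaries keep short indicators like "so" from matching
        # inside longer words such as "Socrates" or "absolute".
        for match in re.finditer(r"\b" + re.escape(phrase) + r"\b", lowered):
            hits.append((match.start(), phrase))
    return sorted(hits)

print(find_conclusion_indicators(
    "No man will take counsel, but every man will take money; "
    "therefore money is better than counsel."))
```

Of course, such a scanner only flags candidate conclusions; words like "so" and "thus" also have non-inferential uses, so human judgment is still needed to decide whether an argument is really present.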
Exercises: Identify the conclusion of each of the following arguments.
- Books are not listed in the index, nor are any references to other books or articles that appear in books. Thus, if you write books, arguably the most important, most basic source of facts and ideas, or your work is referred to in other books, you are automatically excluded from the index.
Source: Martin Anderson, Impostors in the Temple (1992), p. 106.
- …[M]ost species retain sexual reproduction despite its seeming inefficiency, it follows that it must provide advantages great enough to be worth the enormous cost.
Source: New York Times, 3/25/1986.
- "Okay, the clue is 'Late bloomer, finally flown, in back.' Aster is a flower that's a late bloomer. N is the last letter of 'finally flown.' And a stern is in back. So the answer is 'astern'."
Source: A. J. Jacobs, The Know-It-All (2004), p. 137.
- No man will take counsel, but every man will take money; therefore money is better than counsel.
Source: Jonathan Swift
- James was scrupulously careful to explain religious phenomena by ordinary scientific laws and principles, if at all possible. Accordingly, religious visitations of all kinds are classed as sudden influxions from the subject's own subconsciousness.
Source: William James Earle, "James, William", Encyclopedia of Philosophy, edited by Paul Edwards (1972).
Next Lesson: Arguments and Explanations
Check it Out
Ben Goldacre's latest Bad Science column concerns a rather frightening case of the Texas sharpshooter fallacy. If you ever doubted the dangers of fallacious reasoning, read it. If you're unlucky enough, fallacious reasoning could put you behind bars for the rest of your life for "murders" you didn't commit, and which in fact may not be murders at all.
Source: Ben Goldacre, "Losing the Lottery", Bad Science, 4/6/2007
- Conclusion: If you write books, arguably the most important, most basic source of facts and ideas, or your work is referred to in other books, you are automatically excluded from the index.
- Conclusion: Sexual reproduction provides advantages great enough to be worth the enormous cost.
Indicator: it follows that
- Conclusion: The answer is "astern".
- Conclusion: Money is better than counsel.
- Conclusion: Religious visitations of all kinds are classed as sudden influxions from the subject's own subconsciousness.
Solution to the Challenge (5/1/2007): The winner of the challenge is Keung Ngai, who sent in the original question! Keung went above and beyond the call of duty by creating a spreadsheet showing all 256 forms of categorical syllogism and listing the fallacies each form commits, if any. The largest number of fallacies that can be committed by any categorical syllogism is five, which can occur only in a syllogism in the mood IIE―the "mood" of a syllogism is simply a way of representing the type of categorical propositions which occur in the syllogism; in this case, the two premisses are both I-type propositions and the conclusion is an E-type.
So, here's the worst―or, more accurately, one of the worst―of all possible categorical syllogisms:
Some black birds are rooks.
Some chess pieces are rooks.
Therefore, no chess pieces are black birds.
Obviously, a syllogism which commits this many fallacies is not at all deceptive. I chose propositions that are true, since an argument with a true conclusion is more deceptive than one with a false conclusion. However, the conclusion, though true, so obviously does not follow from the premisses that the argument is a non sequitur.
Here are the fallacies it commits:
- Undistributed Middle Term: "Rooks" is the middle term, and it is undistributed in both premisses.
- Illicit Major: The major term, "black birds", is distributed in the conclusion but not in the major premiss.
- Illicit Minor: The minor term, "chess pieces", is distributed in the conclusion but not in the minor premiss.
- Negative Conclusion from Affirmative Premisses: The conclusion is negative while both premisses are affirmative.
- Four Term Fallacy: In order for both premisses to be true, the middle term, "rooks", must have different meanings in each premiss. In the major premiss it means a type of black bird, while in the minor premiss it means a type of chess piece. Thus, it is really two terms, so that the argument has a total of four terms.
Update (5/9/2007): When I wrote the above I had forgotten that C. L. Hamblin, in his important book Fallacies, discusses the fact that syllogistic fallacies are not all mutually exclusive, and gives a couple of examples of multiply-fallacious syllogisms. Here's the first one:
Some doctors are dentists.
Some dentists are diplomats.
Therefore, no diplomats are doctors.
This syllogism has the same mood as the solution to the challenge above, though it is in the fourth figure rather than the second― the "figure" of a categorical syllogism is determined by the positions of the middle term in the premisses, so there are four of them. Hamblin's classification of syllogistic fallacies differs slightly from the one that I used: he leaves out the four term fallacy, and treats illicit process as a single fallacy. According to his classification scheme, the example commits the following fallacies:
- Undistributed Middle Term
- Illicit Process: Both Major and Minor Terms
- Negative Conclusion from Affirmative Premisses
Three is the maximum number of fallacies possible given Hamblin's scheme of classification. According to my scheme, the argument commits four fallacies, and it would be easy to come up with one of this same form which also committed the four term fallacy. Hamblin's other example is:
Not all manuscripts are irreplaceable.
Some manuscripts are indecipherable.
Therefore, all indecipherable things are irreplaceable.
Note that the first premiss is an alternative way of stating an O-type proposition. This example commits the following fallacies:
- Undistributed Middle Term
- Illicit Process of the Minor Term
- Affirmative Conclusion from a Negative Premiss
So, for Hamblin, this example also commits the maximum number of possible fallacies. However, unlike the previous example, it does not commit an illicit process of the major term, so that I would count it as committing one less than the maximum. This shows how dependent on the system of rules and fallacies such a count is. Intuitively, both arguments seem equally terrible.
Source: C. L. Hamblin, Fallacies (1986), pp. 200-201. | <urn:uuid:ed1f40da-30f4-41e2-8bdf-59e2447f7b77> | CC-MAIN-2017-17 | http://www.fallacyfiles.org/archive042007.html | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917119225.38/warc/CC-MAIN-20170423031159-00482-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.94969 | 4,468 | 3.09375 | 3 |
April 24, 2005
moderated by Douglas O'Brien
Doug: The example that came up in the beginning of study group was the example that you had spoken to Adam about refining his ability of speed reading to more, well, as far as how David refined your ability and that David stated about having the preference of finding an exemplar that is an athlete that can also comprehend what they're reading very well and one of the questions that came up was "how do you find an exemplar with those specific abilities?".
David: Ok, well there are several ways to find exemplars. One of the ways that is happens often is that, just as we move through our lives, we just happen to meet or know somebody who has an ability that impresses us or that we would like to have ourselves and there you have it. There is your exemplar, if you are looking for somebody and you don't have an exemplar, I think the easiest and fastest way to find one is to just start asking people if they know somebody who has that ability and you'd be surprised how quickly you will find exemplars by doing that. You ask somebody "Do you know of anybody who has that ability?" and if they say know then the next thing to ask them is "Do you know anybody who might know somebody who has that ability?" I think you'll find that within one or two steps you'll going to find a referral to somebody who has the ability you're looking for. Then, of course, you still have to, you know, meet that person, get some experience with them and satisfy yourself that they do have that ability you want. In terms of finding people, I think that's the quickest way. Now, Adam, I suggest you do the same thing but, of course, as we talked about, we're looking for, for you, somebody who also operates in a particular realm of life, there are particular areas or realms of life that they are familiar with and comfortable with, that is they are into sports and not just watching sports, they're into playing sports. [laughter] So, what you want to be doing is, that's the segment of the population that within which you what to be looking for your exemplar, so if you don't know somebody already who is playing sports that has the ability, then ask your friends or people you know who do play sports, do they know somebody who is into sports and has that ability. So, that's what I would suggest doing, ok.
Doug: Thank you.
Tina: David, I was wondering, in regards to what you were talking to Adam about, and I'm not saying that he might have Attention Deficit, [laughter] In the case that someone did have Attention Deficit Disorder, [laughter] and they wanted to be able to read quicker and be able to retain the information more efficiently by reading with that physical, would you change your line of questioning to the exemplar or would you need to look for someone who has Attention Deficit Disorder and also be an athlete and also be able to read well and retain?
David: Holy Mackerel, [laughter] basically the answer, my answer anyway, would be your second choice, your second suggestion. That is, if I was working with somebody who had ADD and wanted to help them be able to focus their attention on their reading and have better comprehension as they read, I would definitely want, the first thing I'd do is try to find somebody to serve as an exemplar who themselves are ADD and learned how to focus on and comprehend what they are reading, that would be the best exemplar to get. The second choice would be, a little bit different way of going about it, but would be to perhaps find somebody who was successful with working with people who had ADD and helping them to read and comprehend what they are reading and Model that person and how they go about working with people who have ADD, to help them do that reading. Those are the two ways I would go about it but definitely, somebody who has ADD and you want to Model an ability from them, the first choice is to find somebody who is in that same experiential boat, [laughter] and that's the person I would Model.
Tina: That makes a lot of sense and to expand on that a little bit, I mean I could be talking about anyone with any type of disorder, you know, depression or a lot of issues that people have.
David: Yes, absolutely, for instance, a humorous example that I have faced for thirty years now which is, you know, those of you who have seen me know that I am on the slender side, [laughter] so, I have had for thirty years, people coming up and wanting to Model me for how to stay thin. Well, that's a waste of time, I'm not the person to Model for that because there is nothing I do to stay slim, I'm a freak on nature in that regard. So, there is nothing you're going to learn from me about losing weight, instead a better exemplar would be somebody who was overweight and loss weight and kept it off, that would be a better exemplar than I would be.
Tina: That makes sense, someone who has had the experience and went through the process.
David: There's nothing I do in the structure of my experience to stay thin. My parents blessed me with a bunch of brown fat cells, apparently, or something, I don't know what it is. [laughter] It's just one of those things that David's got nothing to do with, you know, it's a genetic thing.
Doug: Thank you. The next question was: Tony was working with a gentleman this weekend and just practicing the Array and the ability was how to troubleshooting alarm service calls and what he was getting was a lot of the how to, manual (book) type of answers. For example, when he asked him what was important about that ability, (the answer was) that the light stops blinking. So, what we were discussing in study group, does the, and we're not sure of, does the ability need to be refined or do we need to dig deeper into that question as far as what specifically they are looking for.
David: Well, so I could better answer this question if I had been there or had some more experience of how Tony approached asking the questions and setting up this person for it. But, let me just try the first things that occur to me and we'll see if they are useful or not. As a chunk size, what was the ability?
Tony: Troubleshooting alarm service calls.
David: Troubleshooting alarm service calls, so is this somebody sitting in a room somewhere and lights or alarms start flashing and they have to respond, is that it?
Tony: Yeah, this is somebody that has like 13 or 14 stores that he has put alarm systems into, he is the technician for his company on every thing electronic or electrical and so, often enough, somebody will call and the alarm system won't turn on so the guy can go home. So, what do you do, well that's what we talked about and what I was getting was mostly this technical, you know, how to do kind of stuff that I have no more idea than a goat what he's talking about.
Tony: But, I found my notes, so let me read quickly, like for example, well what's important to you as you're doing this, as you're doing this troubleshooting these alarm calls, you're on the phone with a guy, what have you, what is important to you and the answer I got was this: "I want to be successful with this, I want to get the system up and running the way it was designed to be, to get the functionally back the way it was suppose to be and to do this in a minimum amount of time. Now, did I get the wrong answer?
David: Beautiful, OH NO, Tony you're on goal, you just don't know it. Where I would plant my flag as I think we have talked about or maybe you have read in the book, where I would begin is to go, ok the criterion here, what's important to him is to be successful, that's the criterion and then he goes on to beautifully describe what he means by being successful. Having the system up and running in a short amount of time as possible.
Tony: Some of the background to this that came out later on is that, at one time it used to be if the thing didn't work, he would have to get into the truck and drive three to four hours by the time he gets there and back from the shop and stuff like that, now he's got the technology, the software or all the gizmos and bells and whistles such that most of the time this can be done in 15 to 30 minutes, at most, on the phone with the store manager because he can pull up everything on his laptop and he can see all the cameras and whatever he sees on the thing, you know, and so now, it's much more streamlined, so when I asked him, " Well, why is this important to you?"
David: Why is what important?
Tony: What is the Motivating Cause and Effect that box there?
David: Yeah, I know that, but I need to know the question you asked because.
Tony: Well, basically, I just went through the little worded sheet. Why is being successful important.
David: Why is being successful important, ok, good.
Tony: So, what I got was, among other things, "Well, you know, I'm responsible for this, kind of like it's impacting another employee so he can go home, if I screw this up the police have to come, the company's going to get a fine, so this has to be done right, this has to be done correctly, it makes me valuable to the company so not only do I get a paycheck but I get to enjoy more of my life when I'm not at work because at which this happens at midnight on Saturday, or whenever, most of the time this can be done on the phone and with a laptop."
David: Alright now, would you say that all of that is motivating him to be successful? Because some of what he says there, as you described that and I try it on and I go "yeah", I get how that could be motivating to be successful. So, for instance, he talks about being responsible, so there's something about his sense of self, his investment in this, and he's got a sense of self that's in relation to other people, you know, (quoted from exemplar) "What I do not only impacts other people like my co-workers and it also impacts my life in terms of my having more freedom, more time to do the things I want to do, so I think that's in there is, in that mélange, is his Motivating Cause-Effect. Then, when he goes on to say at the end, what did he say?
Tony: Well, it allows him, of course, to get a paycheck, makes him more valuable to the company; he gets to enjoy life more when he is not at work.
David: Yeah, and then you said he finished up with something. Some technical thing you add on there, anyway, that wasn't Motivating Cause and Effect, whatever it was. (because at which this happens at midnight on Saturday, or whenever, most of the time this can be done on the phone and with a laptop.")
David: I would pay attention to is as he is listing off all of these results or consequences of the work that he does, of being successful, what I would be paying attention to is where is the "juice", you know, as he is listing these things, some of them are going to be, "yeah, this is a consequence, that's all true," some of those things or maybe one of those things is going to have a lot of "juice" behind it. That is, it will have a strong analog, you'll hear it in his voice, you'll see it, you know, "yeah, I get a paycheck, it helps other people in the company AND I HAVE A LOT OF FREEDOM."
Tony: I was just going to say that going back in my mind that was "I get to enjoy life when I'm not at work."
David: There you go. That's it, that's the Motivating Cause-Effect. The other stuff, it's true, yes are consequences, yes he thinks about them and he knows about them, that's not where the "juice" is for him.
Tony: Ok, when I asked him the Sustaining Emotion and basically all I'm doing is reading off the little chart we got.
Tony: Dedication to job, sense of duty and I tried to ask it in as many ways that I could think to ask and that's basically what it came back to, you know, "I have a duty to do this and to do it right and I'm dedicated to doing this right," now is that a Sustaining Emotion?
David: Well, the part that could be a Sustaining Emotion is feeling dedicated and the other one that he mentioned, previously, that is very similar and it may be the same for him and a better way to talk about it is feeling responsible. I think, it's probably one of those or in that family of emotion; the Sustaining Emotion is the one about dedication or responsibility. Does that make sense?
Tony: Yes, that makes sense.
Doug: Are you, when he said about dedication or feeling are you assuming that he is feeling that or if they just say, you know, "Having a duty to my job or being responsible."
David: When he starts saying those things, then I imagine being him in that situation and hold in my experience that it's important "that my duty, that I have a duty to my job, that it's important to me to do what I'm here to do for various reasons" and I pay attention to how I feel. I just pay attention to how, you know, when I'm holding those kinds of ideas and those things are important to me, how does that affect my emotional state, what happens to my state, how do I feel and I feel a sense of responsibility. I feel a strong sense of responsibility and what I would do is, first of all, he has used that word once, at least, so, you know, this is not a surprise but if the person isn't using a word or words to describe their Sustaining Emotion that fits or that's obvious, then I will just give words to what I'm feeling. I'll come back to him and say, you know, the feeling I get is, when I step into what you are doing there, is that I feel deeply responsible. That's all you need to do, you just put that out and what will happen is your exemplar, this guy, he'll almost certainly do one of two things, he'll either go "yeah, that's right, that' what it is" or he'll go "no, that's not quite what it is, it's more like this" and then he'll correct you and he'll give you what it is. I think maybe I've talked about this before, I know we talked about it in the book somewhere, about offering feedback to your exemplar about how your own experience is affected when you step into their ability. What that does is really helps them identify what is going on in their own experience because it gives them something to compare their experience to and either it will be a match, which is great, or because, you know, what happens is when somebody tells you what your experience is and it's not true, it right away, by contrast, forces you to notice what is true for you. Does that make sense? I hope it makes sense to folks and even if it doesn't, try it. 
[laughter] Because I think you'll find that it's exactly the way it works.
Tony: What I'm editing out of all this is all the technical stuff that I had to go through to get to what looked like a Model to me. [laughter]
David: You've got it, you're doing great. You got out and you got that stuff.
Tony: Ok, because under Enabling Cause and Effect, I said "Well, what makes it possible for you to do all this?" and, of course, because I've read all the manuals. Then I asked "What made it possible for you to read all these manuals?"
David: Ok, now, so, about the question you're asking.
Tony: Ok, so now what I got was "I've always been fascinated with electricity, I think of this as something simple to do, it's logical, it's based on logic, there's no mystery, and it's understandable."
David: Ok, now, [laughter] we got to go back because there's this little tiny thing that you've done in asking your question that makes all the difference in the world. I'm going to guess, assume, that the way you asked him, what's his name? I guessing that the way you asked, Ron, is the way you just.
Tony: Well, not really. I mean, I tried to be gentler about it. I tried using a softener, "Ron, I'm curious, like I wouldn't be reading all these things, what makes it possible for you to read all these things?"
David: Exactly, that's it, right there! So, I'm going to ask the group "What is it about the way in which Tony just asked that question that has lead Ron and him astray?" There's something in how he asked that question.
Tony: Because I said "I wouldn't read those things"?
David: What in that question is leading him to give him the information you don't want. So, here's the thing, there is one word in the question that Tony asked that is leading you all astray, I think.
Joe: Is it possibility as opposed to enable, it seems to me that possibility is a much broader term.
David: No, it's the word, YOU. When you say, try the difference between saying, how it affects you differently when I say "What makes it possible for YOU to succeed?" and "What makes it possible for SOMEONE to succeed?" or for there to be success. "What makes it possible for there to be success?" as compared to "What makes it possible for you to succeed?"
Doug: There's a big difference.
Tina: It makes me feel like I have to defend myself, almost, to explain myself.
David: Yeah, it's a huge difference and it's very easy to, kind of, put it into that form but it makes a huge difference to take out the YOU. If you think about it, you know, what we're after with the Enabling Cause-Effect or all of the Belief Template, what we are after finding out or identifying are beliefs, generalizations, abstractions about experience and so we are not after the nuts and bolts, that's the strategy, we're not after the "how to" part, we're after what are the underlying beliefs that drive the "how to". So, when we ask the question, we want the question to also keep the person kind of at that level of abstraction, the level of generalization. When you say to the person "What makes it possible for YOU to succeed?", you're taking them right into their strategies. Does that make sense? Then say "What makes it possible for there to be success?" or "What makes it possible for someone to succeed?" something like that. It helps keep it at the level of generalization of the level of beliefs.
Tony: I see, because one of the things, like I had to wade through all the technical, "how to" jargon and stuff like that, which I've not a clue what he's talking about.
David: Right, well, you've brought on yourself. [laughter]
Tony: Because I realized as I'm doing this, you know I'm sitting there thinking, you know I'm doing something wrong. It's not supposed to sound like this.
David: [laughter] Tony, let me tell you something, you were doing the right thing, just at the wrong time. [laughter] You were in the wrong box, you weren't in the box you thought you were in. [laughter] So, there's useful information or it could be useful information but, you know, one of the things we are learning here is what are these distinctions so that we can recognize when this person is describing their experience, "Oh, he's not giving me, he's not telling me about his beliefs now, he's telling me about his strategy, he's telling me about how he does something, that's over there in another box. By knowing that, that helps us keep from getting lost in the information and it also gives us choices. We can go, "ok, Ron is now into strategies, let's move over there and start finding out about his strategies and we can come back to his beliefs later on because right now he's into strategies." Alright, so that's something that you'll just become more familiar with as you have more experience with it.
Doug: Very good and just to wrap up, David. Is there anything you'd like to mention about people finding the abilities that they would like to Model?
David: Yes, so I think as I understand it, Doug has sent all of you copies of my written answers to a slew of questions that people had.
David: Great, so I hope you all get a chance to read those. Now, I have talked to a few people about what they would like to do as a Modeling project during the course of the seminar and I have not talked to everyone yet. So, I encourage all of you to call me or write to me about it so we can talk. You know, I'd like to have an opportunity to talk with each of you about what you'd be interested in Modeling. So, please do that.
Doug: And our outcome is, correct if I'm wrong, is to have everybody to discuss this with you before we meet in May again, correct?
David: Well, that would be nice but it's not necessary. If I don't talk with you before then, then I'll talk with you when we get together in May and if you don't want to talk with me, we'll never talk. [laughter] That's ok, too. You know, this is not a requirement, it's just one of those things that I hope everybody will do because I think it's an important part of you're getting your arms around Modeling.
Doug: Well, thank you so much for taking the time with us tonight, David.
David: Well, it's a pleasure, thank you. | <urn:uuid:4ad128b5-526a-4e7f-9111-8db6397b0110> | CC-MAIN-2017-17 | http://expandyourworld.net/interview3.php | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121893.62/warc/CC-MAIN-20170423031201-00191-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.988493 | 4,816 | 2.59375 | 3 |
Tom Thurow, Amy P. Thurow, Charles Taylor, Jr., Richard Conner and Matthew Garriga
Increased dominance of shrubs and trees in what had previously been grasslands or savannas is a widely observed trend that appears to coincide with European settlement of rangelands (Archer 1994). Before people altered range systems, these vegetation communities were maintained by periodic fire and the grazing/browsing patterns of native wildlife. Most pioneers did not have experience in semi-arid regions; consequently, they did not anticipate how the introduction of domesticated livestock and the suppression of fire would alter the rangeland structure from mostly grassland to a woodland dominated by oak, mesquite, and juniper (Taylor and Smeins 1994). These changes in vegetation result in environmental and economic tradeoffs with regard to the types and amounts of products that rangelands provide. These tradeoffs have significant implications for ranch enterprises and for the land-use patterns supported by rangelands and surrounding regions. The objective of this paper is to discuss the general relationships between changes in range vegetation and production, and the specific implications of these economic and environmental tradeoffs as they affect alternative uses of rangeland in the Edwards Plateau of Texas.
As juniper cover increases, herbaceous production tends to decrease (Clary 1974, Clary and Jameson 1981, Pieper 1990, McPherson and Wright 1990). For example, Dye et al. (1995) projected that annual herbage production on three sites near San Angelo, Texas, in a closed-canopy redberry juniper woodland would be about 85, 59 and 82% lower than the potential herbage production estimates for the sites. Conversely, as tree density is reduced as a result of brush control efforts, there is an increase in herbaceous biomass production (Robinson and Cross 1970, Clary 1971, Clary 1987). For example, herbaceous biomass at the edge of western juniper (Juniperus occidentalis) canopies increased from near 0 to about 1,400 kg/ha within 4 years after the trees were killed with granular picloram (Evans and Young 1985). The sphere of influence and the magnitude of a juniper tree’s ability to reduce herbaceous production tend to be related to soil depth, with the extent of tree impact decreasing as soil depth increases (Dye et al. 1995). It is likely that this pattern is related to the amount of water able to be stored in the soil profile (in general, deep soils have more storage space than shallow soils). The influence of water storage on competition and forage production was illustrated in Oklahoma, where during wet years there was no difference in forage production three meters beyond the canopy, but during a dry year forage production in tree interspaces was significantly lower than in adjacent grasslands (Engle et al. 1987).
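The arithmetic behind the Dye et al. (1995) projections quoted above can be sketched directly: production under a closed-canopy woodland is the site's potential production minus the projected percentage reduction. The potential values (kg/ha) used below are hypothetical placeholders for illustration, not figures from the study; only the percentage reductions come from the text.

```python
# Hypothetical site potentials (kg/ha); only the reduction percentages below
# are taken from the Dye et al. (1995) projections quoted in the text.
potential_kg_ha = [2000, 1500, 1800]
reduction_pct = [85, 59, 82]

for potential, reduction in zip(potential_kg_ha, reduction_pct):
    # Closed-canopy production = potential minus the projected reduction.
    woodland = potential * (1 - reduction / 100)
    print(f"potential {potential} kg/ha -> ~{woodland:.0f} kg/ha under closed canopy")
```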
Figs. 1 and 2 illustrate how distance from the tree canopy influences herbaceous production for a site on the Sonora Agricultural Experiment Station. These graphs compare the herbaceous standing crop on a site three years after the brush had been cut and removed (Fig. 1) with that on an adjacent site that had not been cleared (Fig. 2). It is noteworthy that near where the dripline had been, herbaceous production increased two- to three-fold three years after the trees had been cut, compared with herbaceous production at the dripline when the trees were still present. The area under the trees had much better soil structure than the grass interspace due to the high amount of organic matter in the soil associated with decomposition of the tree litter (Hester 1996). This improved soil structure results in a much greater infiltration rate under the tree than in the grass interspace (Hester et al. 1997). The dripline area is therefore likely to be an area of relatively greater water input, because water which runs off the adjacent grassland interspace is able to infiltrate into the dripline soil. The extra water input, combined with the decomposition and release of nutrients from the tree litter, explains the peak of production associated with the dripline. Greater production in the grass interspace following tree removal is due to cessation of competition with tree roots for water and nutrients. The deep litter layer near the trunk had not decomposed after three years and appeared to impede herbaceous growth in that area.
The overall effect of brush clearing on forage production and hence livestock carrying capacity is shown in Fig. 3. Decomposition of the accumulated tree litter, the associated release of nutrients, and the greater infiltration rate explain why, in the decade or so following brush control, land formerly under brush cover produces slightly more forage than land that had always remained as grassland. The relationship between brush cover and carrying capacity is not a straight line because as brush density increases, there is an increasing amount of forage that is not readily accessible to livestock. This is particularly true for redberry juniper, which has a multi-stem growth form that makes it difficult for large grazing animals to reach forage growing in the understory. The relationship between carrying capacity and lease value is not a straight line because as shrub density increases, the difficulty of managing the livestock increases. Therefore, the difference between the carrying capacity and lease value relationships illustrated in Fig. 3 is attributable to the extra cost in labor that ranchers must invest on land with dense brush, making the lease worth less than the actual livestock carrying capacity would imply.

Hunting
The value of a hunting lease is determined by a variety of subjective assessments made by the hunter. One of the primary considerations in a hunter’s calculation of lease value is his or her perception of habitat quality. For the Edwards Plateau, an estimate of hunting lease values relative to brush cover is shown in Fig. 4. This relationship is not based on game density; rather, it is estimated from the perceptions of brush-density habitat value held by average Texas hunters in the Edwards Plateau. Actually, deer density is unlikely to decrease as fast as these estimated Edwards Plateau lease values drop at the lower brush cover values (Terrell and Spillett 1975, Howard et al. 1987, Skousen et al. 1989).
Combining the relationships of lease value for livestock grazing and hunting leases, it is apparent that an Edwards Plateau rancher seeking to maximize total lease income would manage the site to maintain approximately 30% brush cover (Fig. 4). If hunters were educated to understand that their hunting success would not be hurt (in fact, would probably be helped) if brush densities were substantially less than the 50% cover currently favored, then the maximum hunting lease income would shift to sites with approximately 20% brush cover, thereby increasing the combined livestock and hunting income to Edwards Plateau ranchers by several extra dollars per acre.
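The tradeoff above can be sketched numerically. The curves below are hypothetical stand-ins for the empirical relationships in Figs. 3 and 4 (none of the dollar values or functional forms are taken from the data); they are shaped so that grazing lease value falls as brush cover rises, while hunting lease value peaks at whatever cover level hunters perceive as best habitat. The sketch then shows that lowering hunters' preferred cover also lowers the combined income-maximizing brush cover, as the text argues.

```python
import math

def grazing_value(cover_pct):
    """Hypothetical grazing lease value ($/acre), declining with brush cover."""
    return 8.0 * (1.0 - cover_pct / 100.0) ** 2

def hunting_value(cover_pct, preferred_cover):
    """Hypothetical hunting lease value ($/acre), peaking at the brush cover
    hunters perceive as best habitat."""
    return 10.0 * math.exp(-((cover_pct - preferred_cover) / 25.0) ** 2)

def income_maximizing_cover(preferred_cover):
    """Brush cover (%) that maximizes combined grazing + hunting lease income."""
    covers = [c / 10.0 for c in range(0, 1001)]  # 0% to 100% in 0.1% steps
    return max(covers, key=lambda c: grazing_value(c) + hunting_value(c, preferred_cover))

# Shifting hunters' preferred cover downward (e.g., via education) shifts the
# combined income-maximizing brush cover downward as well.
print(income_maximizing_cover(50.0))
print(income_maximizing_cover(35.0))
```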
The dense heartwood of mature Ashe juniper has value for use as fence posts or for juniper oil extraction (Garriga et al. 1997). Redberry juniper and young Ashe juniper do not have sufficient heartwood to be used for either of these commercial purposes. The current market rates for Ashe juniper heartwood are about $38 per ton delivered to one of the four juniper oil mills in the Edwards Plateau, or $2-5 per acre for harvesting posts. Generally, ranchers are more concerned with removing juniper for range improvement than with generating income from juniper harvest; therefore, contracts to harvest juniper generally require cutting of all juniper. The need for old-growth heartwood and increasing labor costs make harvesting juniper for profit a unique niche and an increasingly unlikely commercial enterprise. Given current commercial markets and opportunities in Texas, the value of juniper wood does not seem to be a significant factor influencing the management of rangelands.
The influence of juniper cover and density on water fate extends beyond the ranch gate. Even though no monetary benefit is obtained by the land owner, the water that recharges Texas streams and aquifers is arguably the most valued product from rangelands. In Texas, essentially all of the surface water is already allocated to meet the demands of existing users, to meet the minimum needs of flow required to protect endangered aquatic species, and to maintain the viability of the coastal wetlands which are important for the fish/shrimp industry and for wildlife. Groundwater pumping is occurring at a rate far greater is being replenished (Van der Leeden et al. 1990). Despite this, the Texas Water Development Board (1990) projects municipal and industrial demand for water to increase 186% by 2040.
How will this additional demand for water be met? Basically, the citizens of Texas must make some difficult decisions because the development of the Texas economy cannot continue on its current expansion pace unless either more water is made available (that is, through development of new supplies or more efficient water use), or the existing supply of water is reapportioned among the users (that is, one sector gets less water so that another sector can continue to grow). Increasing the availability of water is generally viewed as preferable to the politically-divisive reallocation of water rights, therefore options that can increase the amount of water availability merit careful consideration.
A barrier to previous policy discussions of brush control as a means of increasing water yield was the lack of understanding about to how much water yield is influenced by shrub cover. Fig. 5 illustrates the relationship of shrub cover (approximately 2/3 juniper, 1/3 oak) to water yield developed at the Sonora Agriculture Experiment Station.
It is apparent that significant increases in water yield occur only after most of the brush is removed. There is not a linear relationship between brush cover and water yield because when some brush is cleared, the remaining brush and grass have the potential to use water at a faster rate. Accordingly, at high brush densities, removal of a portion of the brush is likely to result in a moderate water yield increase at first, but after several years of canopy and root growth of the remaining brush, there is unlikely to be a difference in water yield. That is why it is necessary to remove most of the brush from a range site to achieve sustainable, significant increases in water yield. The relationship depicted in Fig. 5 between water yield and brush cover helps to explain why many of the seeps and springs that were historically present throughout the Edwards Plateau no longer flow. It also explains why many ranchers have observed that clearing brush around dormant springs can cause them to flow again. On a broader scale, the graph also implies the significant extent to which an increase in brush on the Edwards Plateau over the last half century has impacted the water supply to streams and aquifers of central Texas.
There are several barriers to brush control for enhancing water yield from rangeland:
The ranching industry does not produce income sufficient to cover the full economic costs of brush control. The recent elimination of wool and mohair subsidies further constrained the ability of ranch enterprises to pay for brush control by reducing the reliability of ranch revenues from livestock enterprises. Therefore, it is unlikely that ranch enterprises in the Edwards Plateau will be able to pay for an increased effort in brush control on their own.
A publicly-funded cost-sharing program targeted to achieve increased water yields from rangelands could provide ranchers the necessary financial means to control brush in a manner that would increase water yield. Fig. 5 illustrates that public funds provided for brush control to improve water yields would get the biggest return on investment if the existing vegetation cover was converted to grassland. Fig. 4. illustrates that ranchers would maximize the lease value of the ranch when brush cover is about 30%. However, at 30% brush cover the potential for water yield is only about 1/10th that of an open grassland. Also, the long-term cost of maintaining 30% brush cover would be as great or greater than maintaining grassland. Therefore, ranchers are unlikely to participate in a cost-sharing program designed to maximize water yields from rangelands unless the cost-sharing incentives covered lost revenue-earning opportunities in addition to assisting with the costs of clearing brush.
2) Endangered Species Act Restrictions
Junipers provide nesting habitat for a variety of songbirds. One of these, the golden-cheeked warbler (Dendroica chrysoparia), is an endangered species which nests only in Texas and requires a habitat characterized as a closed canopy composed of mature Ashe juniper and oak. Vast, dense juniper monocultures, or young juniper stands that are less than 12 ft tall, are not preferred habitats for this species (Rollins and Armstrong 1994). Golden-cheeked warblers are very susceptible to failed attempts at raising young because of nest parasitism by the brown-headed cowbird (Molothrus ater), a species usually associated with grasslands. Clearing portions of closed canopy Ashe juniper and oak can therefore expose the warbler to greater vulnerability to parasitism. This is a primary concern in the debate over how large of a continuous area of closed canopy Ashe juniper-oak woodland is needed to support healthy golden-cheeked warbler populations. Resolution of this issue will determine where and how brush control on the Edwards Plateau will be compatible with the rules for critical habitat protection provided by the U.S. Endangered Species Act.
Many ranches on the Edwards Plateau are no longer the primary source of income for the owners. Therefore, the aesthetic appeal of a woodland may be of paramount concern, making it unlikely that the owner would be interested in any plan to control brush. In a similar vein, the prices of land throughout most of the Edwards Plateau exceeds its value for wildlife habitat and livestock production. Therefore, to the extent that trees are considered to enhance real estate value on the Edwards Plateau, it is unlikely that landowners will voluntarily covert the brush-covered rangelands to grasslands.
There are a variety of on-site and off-site environmental and economic ramifications regarding vegetation management on rangelands. Livestock carrying capacity would be maximized if the range was maintained as grassland. Hunting revenue, an increasingly important component of ranching income, is maximized with a brush cover of about 50%. Since most ranches rely on both livestock and hunting lease revenues, a compromise brush cover of about 30% would currently maximize the livestock and hunting lease value of the land.
Downstream citizens also have a stake in how the range is managed because much of the water recharging the region’s streams and aquifers originates on rangeland watersheds. A 30% brush cover would theoretically yield only 1/10th as much water than if the site was maintained as grassland. Since current water use patterns result in a chronic overdraft of the regions existing water supply, and since projected demands for water are expected to continue to increase, it is in the interest of downstream water users to advocate brush control. For this to happen, it would be necessary for the downstream users to develop a funding mechanism to share in the cost of brush control with the rancher.
The desire to increase water for downstream use must be balanced with the desire of many citizens to maintain a woodland cover for protection of endangered species and aesthetic values. Many recent landowners in the Edwards Plateau have sources of income other than ranching. For them, the aesthetic value of woodlands may be more important than revenue generated from the ranch. This implies that as land ownership patterns continue to change to individuals who do not depend on the ranch for income, it will become less likely (or more costly) for them to participate in brush control programs. Educating the public about the tradeoffs and consequences of brush management on the Edwards Plateau can foster informed dialog and decisions regarding these choices.
Archer, S. 1994. Woody plant encroachment into southwestern grasslands and savannas: rates, pattern and proximate causes. p. 36-68. In: M.Vavra, W. Laycock and R. Pieper (eds.), Ecological implications of livestock herbivory in the west. Society for Range Management, Denver, CO.
Clary, W.P. 1971. Effects of Utah juniper removal on herbage yields from Springerville Soils. J. Range Manage. 24:373-378.
Clary, W.P. 1974. Response of herbaceous vegetation to felling of alligator juniper. Journal of Range Management 27:387-389.
Clary, W.P. 1987. Herbage production and livestock grazing on pinyon-juniper woodlands. p. 440-447. In: Everett, R.L., (ed.). Proceedings of Pinyon-juniper conference. USDA Forest Service General Technical Report INT-215.
Clary, W.P. and D.A. Jameson. 1981. Herbage production following tree and shrub removal in the pinyon-juniper type of Arizona. Journal of Range Management. 34:109-113.
Dye, K.L, D.N. Ueckert, and S.G. Whisenant. 1995. Redberry juniper-herbaceous understory interactions. J. Range Manage. 48:100-107.
Engle, D.M., J.F. Stritzke, and P.L. Claypool. 1987. Herbage standing crop around eastern redcedar trees. J. Range Manage. 40:237-239.
Evans, R.A., and J.A. Young. 1985. Plant succession following control of western juniper (Juniper occidentalis) with Picloram. Weed Sci. 33:63-68.
Garriga, M.D., A.P. Thurow, T.L. Thurow, J.R. Conner, and D. Brandenberger. 1997. In: Taylor, C.A. (editor), Juniper Symposium. Texas A&M University Agricultural Research Station, Sonora, Texas. Technical Report.
Hester, J.W. 1996. Influence of woody dominated rangelands on site hydrology and herbaceous production, Edwards Plateau, Texas. M.S. Thesis, Texas A&M University, College Station, TX.
Hester, J.W., T.L. Thurow and C.A Taylor, Jr. 1997. Hydrologic characteristics of vegetation types as affected by prescribed burning. J. Range Manage. 50: In press.
Howard, V.W., K.M. Cheap, R.H. Hier, T.G. Thompson, and J.A. Dimas. 1987. Effects of cabling pinyon-juniper on mule deer and lagomorph use. p. 552-557. In: Everett, R.L., (ed.) Proceedings–Pinyon-juniper conference. USDA Forest Service General Technical Report INT-215.
McPherson, G.R., and H.A. Wright. 1990. Effects of cattle grazing and Juniperus pinchotii canopy cover on herb canopy cover and production in western Texas. Amer. Midl. Natur. 123:144-151.
Pieper, R.D. 1990. Overstory-understory relations in pinyon-juniper woodlands in New Mexico. Journal of Range Management 43:413-415.
Robinson, E.D., and B.T. Cross. 1970. Redberry juniper control and grass response following aerial application of picloram. p. 20-22. In: Brush research in Texas. Texas Agr. Exp. Sta. Consol. Prog. Rep. 2801-2828.
Rollins, D. and B. Armstrong. 1994. Cedar through the eyes of wildlife. p. 53-60. In: C.A. Taylor, Jr. Juniper Symposium. Texas Agricultural Experiment Station Technical Report 94-2.
Skousen, J.G., J.N. Davis and J.D. Brotherson. 1989. Pinyon-juniper chaining and seeding for big game in central Utah. Journal of Range Management 42:98-104.
Taylor and Smeins. 1994. A history of land use of the Edwards Plateau and its effect on the native vegetation. p. 1-8 In: C.A. Taylor, Jr. Juniper Symposium. Texas Agricultural Experiment Station Technical Report 94-2.
Terrell, T.L. and J.J. Spillett. 1975. Pinyon-juniper conversion: its impact on mule deer and other wildlife. p. 105-119. In: The pinyon-juniper ecosystem: a symposium. Utah State University, Logan, UT.
Texas Water Development Board 1990. Water for Texas today and tomorrow. Document No. GP-5-1.
Van der Leeden, F., F.L. Troise, and D.K. Todd. 1990. The water encyclopedia. Lewis publishers, Chelsea, MI.
Fig. 1. Standing herbaceous biomass three years after the brush had been cut and removed from the site at the Sonora Agricultural Experiment Station, Texas.
Fig. 2. Standing herbaceous biomass in association with tree species at the Sonora Agricultural Experiment Station, Texas.
Fig. 3. Estimated lease value and livestock carrying capacity of Edwards Plateau rangeland with different amounts of brush cover.
Fig. 4. Estimated lease value of Edwards Plateau rangeland with different amounts of brush cover.
Fig. 5. Estimated water yield associated with brush cover at the Sonora Agricultural Experiment Station, Texas.
Comments: Allan McGinty, Professor and Extension Wildlife Specialist | <urn:uuid:4329064f-c134-4fb9-97bf-74402aff5173> | CC-MAIN-2017-17 | http://texnat.tamu.edu/library/symposia/juniper-ecology-and-management/environmental-and-economic-tradeoffs-associated-with-vegetation-management-on-the-edwards-plateau/ | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123560.51/warc/CC-MAIN-20170423031203-00134-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.921668 | 4,475 | 2.90625 | 3 |
Although communications were a vital component of town growth and development, they rarely feature in maps of British towns. Even when communication facilities are recorded, they tend to be peripheral to the map's purpose and are rarely delineated in any detail. Furthermore, the cartographic record of urban communication is almost solely confined to London.
It must be appreciated that London has always dwarfed all other British towns in importance. By the beginning of the eighteenth century London already had a population of about 600,000, about 30 times larger than either Bristol or Norwich, which were the next largest towns. By 1800 London contained about one million inhabitants, between 10 and 20 times larger than any of its nearest rivals - Birmingham, Bristol, Leeds, Liverpool, Manchester-with-Salford, and in Scotland Edinburgh and Glasgow. Even after the rapid industrialisation to the 1830s, only Birmingham, Leeds, Liverpool, Manchester-with-Salford and Sheffield had over 7% of London's population each, and only another eight towns had approaching 4%. Thus, London had an extraordinary importance in the national economy due to its share of the country's foreign trade, its function as a manufacturing centre and its demand for foodstuffs, raw materials and products from around the country and the world. London was and remained the largest retail market anywhere. Inevitably, London's commerce, size, share of national population and disproportionate share of the literate induced a development of urban communications which was barely imitated in other towns until the twentieth century.
The Post Office was founded in 1657. Postal services began as early as the seventeenth century when letters were carried by post-boys on horseback. By the end of the eighteenth century some 400 towns were receiving daily mail. However, the development of postal services was held back by the fact that charges were high, being dependent on the weight of the letter or packet and the distance travelled. Since it was the receiver of the mail who paid, delivery services were slow and sometimes recipients refused to pay. In some areas mail had to be collected from a receiving house.
By the 1670's London had grown too large for the use of messengers, whose services were replaced by a postal service between the various areas of the city. Initially, all letters of less than one pound in weight were charged one penny for the City and suburbs, and twopence for any distance within a ten-mile radius. Six large offices were opened in different parts of London and all the principal streets had receiving houses.
The 'circuit of the penny post' seems first to have appeared on the 'accurate map of the country twenty miles round London', published by John Fielding in 1782 and engraved by John Cary. Cary continued his interest in postal delivery limits in London on his 'new and accurate plan of London and Westminster, the borough of Southwark and parts adjacent' (1787), which was initially hand-coloured to show 'Letter Carriers Walks'. The 1792 edition of the map was hand-coloured to delineate 'Proposed Town Penny Post deliveries' and a later issue to show 'Present Town Penny Post deliveries'.
The limits of the twopenny post delivery in London seem to have first been represented by Robert Rowe on his 'map of the country twenty-one miles round London' (1806), which shows the extent of the twopenny post by a red line.
The 1833 (?) edition of 'Cary's new plan of London and its vicinity' (1820) was 'published by the authority of His Majesty's Postmaster General' 'shewing the Limits of the Two-penny Post Delivery'. The 'boundary of the two-penny Post Delivery' is indicated by a 'coloured Circle'. Cary's map continued to be issued under this authority until publication was taken over by George Frederick Cruchley, whose first issue of the map, c. 1851, still quoted the authority. When reissued in 1857 the map had become 'Cruchley's new postal district map of London' with the postal district boundaries added. It was reissued in variants of this format until 1865.
The Twenty-first report of the Commissioners of Revenue Inquiry (1830) included three maps delineating London's postal delivery services. Aaron Arrowsmith engraved a 'map shewing the several walks or deliveries in the country districts of the twopenny post...'; James Basire engraved the 'map of London, shewing the boundaries of the general and two penny post deliveries, the divisions or districts of the two penny post, and the number and situation of the general and two penny post receiving houses'; and Basire again mapped 'the general boundaries of the general post delivery; of the foreign delivery; of the town delivery of the two penny post department; and of the country deliveries'.
Similarly, the 9th Report of the Commissioners on Post Office Management (1837) contained maps, based on the Ordnance Survey, by James Wyld portraying the changing limits of delivery. Wyld's 'map of the country 15 miles round London' showed 'by a yellow circle of 3 miles, the limits of the twopenny post delivery, by a blue line, the old limits of the three-penny post delivery, and by a black circle of 12 miles, the present limits of the threepenny post delivery'. The accompanying 'map of London' indicated 'by the yellow line, the old boundary of the foreign letter carriers' delivery. By the blue line, the old boundary of the general post letter carrier's delivery, and by the black circle of 3 miles from the General Post Office, the present boundary of the general post delivery'.
Wyld also produced a 'Post Office plan of London' (c.1848-9), both individually and as the central sheet of his Atlas of London & its Environs.
Until 1855, when they were amalgamated, the London District Letter-carriers existed as a separate establishment from the General Post. Consequently, the circular extent of the twopenny post delivery was still being added to its map of the 'Environs of London' in 1845 by the Society for the Diffusion of Useful Knowledge.
The inadequacies of the existing system were highlighted by Rowland Hill, who believed that an industrial nation, such as Britain had become, needed good cheap communications to foster further development. He argued that with a low standard rate of charge and prepayment by the sender, the whole population would be encouraged to use the system. Even with his suggested charge of one penny irrespective of distance, profits would increase due to increased business. Hill's scheme was introduced in January 1840 and, after mixed immediate results, the postal business was showing massive increases by the 1850s. By the 1870s the number of letters delivered had increased tenfold, with an annual average of 32 letters per head of population, and almost double that by 1900, when nearly 2,000 million letters were dealt with per year. Books were carried from 1848, parcels from 1883, and picture postcards from 1894.
The coming of the penny post not only greatly extended communications between the growing urban areas, but also within them. By 1841, the 436 Post Receiving Houses in London were inadequate for the increased mail traffic. The first pillar-box was introduced in London in 1855 on the corner of Fleet Street and Farringdon Street, making it unnecessary to visit a post office in order to send a letter. The first map to show pillar-boxes appears to be James Dolling's 'Pocket Map of London' (c.1862), which locates them by black dots. In 1863 nearly half of all the letters delivered in London originated there. By 1880 London had 2,012 places of all kinds (post offices, sub-offices, letterboxes) at which letters could be posted, that is 7.5% of the whole for the United Kingdom.
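The 7.5% share follows directly from the two figures given in the text (2,012 posting places of all kinds in London, against 26,753 for the whole United Kingdom in 1880); a one-line arithmetic check:

```python
# Figures as given in the text: posting places of all kinds
# (post offices, sub-offices, letterboxes) in 1880.
london_places = 2012
uk_places = 26753

share = 100 * london_places / uk_places
print(f"London share of UK posting places: {share:.1f}%")  # → 7.5%
```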
London postal delivery limits continued to be shown occasionally on maps of the city and its environs after the introduction of the penny post. Richard Laurie mapped the area 12 miles round London in 1854, 'being the limits of the Town Post Delivery'. Towns and villages have distances noted from the General Post Office with details of the number of dispatches and deliveries per day. Circles and squares also indicate distances from the Post Office. Similarly, Stanford's great 'Library map of London and its suburbs' (1862) also indicates 'postal town delivery limits'. In July 1895 Kelly & Co. mapped the 'collection & delivery boundaries for goods traffic' of the Metropolitan Conference.
The number of Post Offices open in the United Kingdom on the 31st of March 1880 was 912 head offices and 13,300 sub-offices. By 1880, the total number of places of all kinds (post offices, sub-offices, letterboxes) at which letters could be posted in the United Kingdom was 26,753. Subsequent issues of the various post office directories, most notably Kelly's, reveal the rapid increase in provision. Many such directories were accompanied by maps which, although not usually featuring postal information, were designed to provide postal address information. P.J. Jackson & Co.'s Postal Address Directory of Carlisle (1880), for example, was illustrated by 'P.J. Jackson's new postal address map of Carlisle'.
'Cary's new pocket plan of London, Westminster and Southwark...' (1790) records 'The situation of the Receiving Houses of the General & Penny Post Offices'. A table below the map records '2 Penny Post Receiving Houses. Chief Office in Throgmorton Street', and one on the map face lists 'Receiving Houses appointed by the General Post Office in Lombard Street'. The 1797 issue of the map replaced the City Office with the 'Westminster Chief Office Gerrard Street' and the 'City Chief Office Abchurch Lane'. The 1797 edition of the plan was used to illustrate a survey of postal delivery undertaken in 1796 by Ferguson and Sparke.
Money orders were first suggested by Florence Nightingale as a means for soldiers to send money home from the Crimea. However, money order offices rarely feature on maps as a branch of the postal service. An early exception is Dolling's 'Pocket map of London' (c. 1862) which, as well as locating pillar boxes, also identifies 'post & money order offices'. Similarly, the 'Picture map of Manchester and strangers illustrated guide' (c. 1886) features 'post and money order offices'.
The telephone was invented in Canada by the Scotsman Alexander Graham Bell in 1876. It began to come into use in Britain in the 1880s, initially as a result of private initiative but later through municipal promotion. The first public telephones arrived in 1884. The National Telephone Company was formed in 1889 from the amalgamation of many private companies. However, through the extension of public control from the early 1890s, by 1912 the Post Office had become responsible for the telephone services of the entire country, with the exception of the local services in Hull, which continued as a separate undertaking.
The telephone did not really come into its own in Britain until the early 1930s. Although the National Telephone Company established an exchange in Cambridge, for example, in 1892, there were only 128 subscribers in the town by 1896. By 1910, there were only 122,000 subscribers nationally. Although the telephone reached London in 1878 and the first exchange opened in Coleman Street in the City in 1879, there were fewer than 1,000 subscribers within two years. However, by 1882 subscribers had the use of fifteen exchanges in the city.
Given such slow development of the urban telephone network, it is no surprise that there is virtually no cartographic record of telephone services in British towns, with the exception of isolated late examples of urban maps relating to telephone rates. The Daily Telegraph, for example, published its 'new Telephone Rates Map of London & 25 miles round' c.1921. This map was produced by 'Geographia' which was also responsible for the preparation of the 'Liverpool Daily Post new telephone rates map of Liverpool & district produced under the direction of Alexander Gross...' Telephone exchanges are located in red and the inner telephone charge zone of two miles round the town hall is marked. 'Calls between all Exchanges in Inner Zone 1 1/2d. Calls between Exchanges within Inner Zone and Exchanges within Outer Zone (within 7 miles of Town Hall) 1 1/2d. Calls from any Exchange outside the Inner Zone to any Exchange within 5 miles 1 1/2d. For charges not covered by above see Diagrams and Table of Rates'.
The electric telegraph was patented in Britain in 1837 following the pioneering work of Cooke and Wheatstone. In 1839 experiments were conducted along the Great Western railway line from Paddington to West Drayton, running the electric wire through iron tubes. However, early progress was slow due to the high cost of the system. The electric telegraph enabled messages to be sent along the wires, initially by needles tilted to indicate the different letters. This complicated, slow method of transmitting words was replaced by the simpler transmission of the 'dot and dash' code invented by Samuel Morse in 1838, which came into use c.1845.
However, the expense of the telegraph system slowed progress. In 1846 the Electric Telegraph Company was formed. By 1854 it boasted 17 offices in the metropolitan area, of which eight were located at railway termini. After 1850, various private companies opened telegraph services and competition developed for business. Within a few years, most large towns were connected by telegraph, allowing urgent messages to be sent at the receiving office and delivered to their destinations. The telegraph wires, which ran alongside the railway lines, not only carried urgent personal and family communications but also urgent commercial intelligence and ever increasing national and international news for the press. The building of the telegraph system alongside the railways and the establishment of early telegraph offices at the stations created a natural association between the two in the map-maker's mind and it was logical to combine railway and telegraph information on the same map. Thus, for example, Cruchley, in adapting Cary's large county series, turned them into 'Railway & Telegraphic' maps giving not only railway information but also details of telegraph lines and stations. While these maps are county maps rather than urban maps, they do at least identify towns with telegraph offices sited at the railway station. Similarly, Henry Collins adapted William Ebden's county maps to emphasise telegraph lines and telegraph offices 'open daily only' and 'open day & night'.
In 1868 all telegraph services were taken over by the Post Office. 'At the time of transfer, the Telegraph Companies had 1,992 offices, in addition to 496 railway offices at which telegraph work was performed, making the total number of offices 2,488'. By 1880 there were 3,924 post offices and 1,407 railway stations open for telegraph work, 'making the total number of telegraph offices within the United Kingdom 5,331'. Thus, urban maps locating post offices and railway stations also, generally, incidentally locate telegraph offices.
Thus, the communications mapping of British towns is sparse in the extreme, being almost exclusively confined to London. Clearly, the development of communications was not considered to be of importance, interest or significance to the map-maker. For once, the maps of the British towns fail to be a significant source for the urban historian.
For an extended discussion of the importance of London, see: Dyos, H.J & Aldcroft, D.H.: British Transport. An economic survey from the seventeenth century to the twentieth (1971).
Inevitably, therefore, virtually no examples of pre-1914 urban communication mapping have been found for provincial towns. A survey of the printed map resources of the British Library's Map Library has revealed only those maps featuring communication here discussed.
Full carto-bibliographical details of maps of the whole of London are given in:
Darlington, I. & Howgego, J.: The Printed Maps of London c. 1553-1850 (1964; reprinted 1978 with revisions and additions)
Hyde, R.: Printed Maps of Victorian London 1851-1900 (1975)
Darlington, I. & Howgego, J.: op cit. no.174.
For details of John Cary and his work, see: Fordham, Sir H.G.: John Cary. Engraver, Map, Chart and Print-Seller and Globe-Maker 1754 to 1835 (1925); Smith, D.: 'The Cary family' (Map Collector, 43; 1988); and Smith, D.: 'John Cary' (Dictionary of National Biography, Supplement: Missing Persons; 1993)
Darlington, I. & Howgego, J.: op cit. no. 184.
British Museum: London. An excerpt from the British Museum Catalogue of Printed Maps, Charts and Plans. Photolithographic edition to 1964 (1967).
Darlington, I. & Howgego, J.: op cit no. 239.
Fordham, Sir H.G. : op cit.
Darlington, I. & Howgego, J.: op cit, no. 279 does not record Fordham's edition of 1833, noting no editions between 1831 and 1835. The 1833 edition was 'published by authority of His Majesty's Postmaster General', 'shewing the Limits of the Two-penny Post Delivery'.
Hackney coachmen were 'allowed by the New Act of Parliament to charge back Fares from all places outside the same Circle'.
For discussion of maps giving details of hackney coach fares and other cartographic aspects of urban road transport see: Smith, D. : 'The mapping of British urban roads and road transport' (Bulletin of the Society of Cartographers, 30,1;1996).
Darlington, I. & Howgego, J.: op cit, no. 279 records editions of 1835, 1836, 1837, 1838 and 1845.
British Museum: op cit, records an edition of 1841.
For details of Cruchley and his reissue of Cary's works, see: Smith, D.: 'George Frederick Cruchley' (Map Collector, 49; 1989)
Darlington, I. & Howgego, J.: op cit, no. 279 (1).
Ibid. no. 279 (5).
Ibid. no. 279 (10).
Ibid. no. 323
Ibid. no. 324.
Ibid. no. 325.
For details of Wyld see:
Darlington, I. & Howgego, J.: op cit. no. 364.
Ibid. no. 365.
Ibid. no. 416.
Ibid. no. 415.
Ibid. no. 339 (2).
Hill, R.: Post Office Reform: Its Importance and Practicability (1837).
Hyde, R.: op cit. no. 79.
Vincent, D.: 'Communication, community and the State' in: Emsley. C. & Walvin, J. (eds): Peasants & Proletarians, 1760-1860: Essays presented to Gwyn A. Williams (1985)
Hyde, R.: op cit. no. 35.
Ibid. no. 91.
For discussion of the development of maps associated with post office and other directories, and the general mapping of communications, see: Smith, D.: Victorian Maps of the British Isles (1985).
Darlington, I. & Howgego, J.: op cit. no. 192; for further details see Fordham, Sir H.G., op cit, no.2.
Ferguson, H & Sparke, J. The several Divisions and Districts of the Inland Letter Carriers, showing the order in which each District is served, being the result of a Survey taken in the year 1796, for the purpose of ascertaining the most expeditious mode of delivery (1797).
Published by Hale and Roworth, 45, King Street, Manchester.
Waller, P.J.: Town, City & Nation. England 1850-1914 (1983)
Barker, F. & Jackson, P.: London. 2000 years of a city & its people (1974)
On 1st January 1845, a suspected murderer was seen boarding the Paddington train at Slough. The telegraph office at Paddington was alerted. The suspected murderer, John Tawell, was arrested on arrival. Thus, the speed of the electric telegraph was most effectively demonstrated.
Cary's new English atlas; being a complete set of county maps, from actual surveys ... on which are particularly delineated those roads which were measured by order of the Right honourable the Postmaster-General, by John Cary .... (1809).
Cruchley retitled the county maps c.1855; e.g. 'Cruchley's railway map of ..., showing all the railways & names of stations, also the telegraph lines & stations ...' The maps were issued individually, sometimes under the cover title 'Cruchley's modern railway and telegraphic county map'. The maps were issued collectively c.1858 as Cruchley's railway and telegraphic county atlas of England and Wales and subsequently both individually and in atlas until c.1900.
For discussion of the early publication history of these maps, see: Smith, D.: 'The early issues of William Ebden's county maps' (Imago Mundi, 43;1991)
Henry George Collins first re-issued these county maps c.1853 as The new British Atlas containing a complete set of maps of the counties of England and Wales, with all railroads, telegraph lines, and their stations. The whole carefully revised. Signs for telegraph and telegraph stations 'open day only' and 'open day & night' had been added to the maps lithographically. The maps were sold individually from c.1858, sometimes under the cover title 'Collins' railway & telegraph map of ...' At about the same date the map titles were altered to 'Collins' railway and telegraph map of ...'
Report of the Postmaster-General for the year 1880.
Ibid. 'On taking over the telegraphs, the Post Office commenced with 5,651 miles of telegraph line, embracing 48,990 miles of wire, and these numbers have been increased to 23,156 miles of line, embracing 100,851 miles of wire. The total length of submarine cables connecting different parts of the United Kingdom was 139 miles in 1869; last year it was 707 miles...'
For a discussion of the portrayal of railway station on urban maps, see: Smith, D.: 'The railway mapping of British towns' (Cartographic Journal, 35, 2;1998)
For discussion of urban maps as sources of other types of historical | <urn:uuid:50a1833d-988c-4393-848b-e8a420585bdb> | CC-MAIN-2017-17 | http://mapforum.com/12/12smith.htm | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118963.4/warc/CC-MAIN-20170423031158-00422-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.949572 | 4,884 | 3.609375 | 4 |
Cariyelik (women slavery) goes back to the era before Islam. Baghdad used to be the most important slave market at the time of the Abbasids. After Islam, this business continued due to social and economic factors.
We notice that women slavery in the Ottoman Harem started with Orhan Bey, but from the period of Sultan Mehmed the Conqueror the number of women slaves in the Harem increased rapidly. From the middle of Bayezid II's period, the tradition of the Sultans marrying the daughters of neighbouring princelets and principalities ended. After that time, it became a new tradition for the Sultans to marry women slaves of the Harem. From that century on, the Harem and the Sultanate were based upon women slaves. Circassian, Georgian and Russian girls were especially preferred for the Harem; since olden times, Caucasian girls had been renowned in the East for their beauty. That is why the Harem received many Caucasian slaves from the beginning, and their number rapidly increased, especially in the 17th century. Girls taken as prisoners of war on battlefields used to be taken into the Harem as women slaves, but in the decline and regression eras of the Ottomans this source was lost. From then on, the Grand Vizier, governors, pashas, governors of provinces and the sisters of the Sultans offered the women slaves they had raised. Another source was slaves bought and brought into the Harem by the Customs treasurer. In the 19th century, despite the prohibition of slavery in the Empire, Caucasians used to send their daughters to the Ottoman Harem wishing them to be selected as wives of the Sultan. They would even raise their daughters and prepare them for such a life in the Harem by singing them lullabies like 'Hope you will be a
wife of the Sultan and lead a glorious life with diamonds'. The women slaves bought outside the Palace at the age of 5-7 used to be raised until they were mature enough, then offered to the Sultan. As they grew older and got more beautiful they would take various classes such as music, courtesy and social relations.
They used to be taught the ways to treat and serve a man. When they were teenagers, they were introduced to the officials of the Palace and bought if selected. For the first night, they would stay in the home of the person who had bought them, and if any wrong behaviour, physical defects or imperfections were noticed that night, their price would go down and the father would be paid less than anticipated. Their parents had to sign a document stating that they had sold their daughter and would have no future rights over her. Women slaves accepted into the Harem had to be examined by doctors and midwives; those carrying an illness or having disabilities would never be accepted. Extremely beautiful but very inexperienced women slaves were trained at first, and would rise to the ranks of assistant-master and master if they were successful enough. They used to wear long skirts reaching to their heels, tight robes, and coloured chiffon bonnets on their heads. They wore ostentatious robes edged with fringes. As hair care was important to them, they would style their hair, spending considerable time in front of mirrors. Some of them had hair so long that it reached to their heels.
Women slaves were in a kind of competition in the Harem, which was like a grand stage set with the players performing in an extravagant costume drama. In order to be noticed they would make up, tinge their eyes and wear beautiful perfumes. We should mention the pendants, necklaces and ear-rings they used to wear, too; those were enhanced with the most precious jewellery such as pearls and diamonds, of course. They always wore seasonal robes. In the summer, for instance, you would see them in light, exquisite silk dresses which were rather tight, showing off the silhouette of their bodies. Those fur-coated dresses had open collars to bestow a tempting appearance. They had buttons on the front and a rather tight belt, two inches in width, enhanced with the most precious jewellery; the belts had buckles adorned with diamonds. A cashmere shawl would cover their shoulders. In winter, they would mostly wear fur coats. Women slaves were very well looked after, since the Prophet of the Muslims, Mohammed, ordered 'Furnish the slaves with anything you eat and wear, and never treat them badly'. One of the best deeds in Islam is granting slaves their liberty; the Prophet Mohammed said 'Whoever grants a Muslim slave his liberty shall not go to hell'. That is why all the Ottoman Sultans practised this canonical law and gave the unselected women slaves homes, prepared their trousseaus and let them leave the Harem.
The women slaves completing their training period would compete to become a Master, an Assistant Master, a Gozde (favoured), an Ikbal (Sultan's favourite), a Kadin Efendi (Sultan's wife) and finally the Valide Sultana (the Sultana Mother). Women slaves had the right to leave after spending nine years in the Harem. This was called 'çırağ çıkma'. The Sultan would give her a trousseau and help her marry somebody else. That slave would receive a document signed by the Sultan stating her freedom, and a woman slave holding that document could do whatever she wanted without any obstruction. Contrary to what is commonly believed, the Sultans used to keep only 10 to 20 women slaves in their private
chambers. The most beautiful ones would be of service to them, and the ones who were beautiful enough would be sent to the princes' chambers. Finally, the ones who were expected to become beautiful in the future would be sent to the eunuch treasurer and the assistant masters to be raised.
The young women slaves arriving in the Harem were given different names. Persian names such as Gulnaz, Nesedil and Hosneva would be given to them according to their behaviour, looks, beauty and character. To help remember their names, rosettes bearing their names would be attached to their collars. Assistant-masters used to train the newcomers in behaviour, religion, sociability, respectability, morality and, if they had the capability, music. The ones having good voices were given music classes.
Those women slaves who could rise to the rank of wife had to be meticulously trained by all means and tutored how to read and write. Those women slaves converted to Islam would practise the rules of the religion. They could pray all together or separately; besides, they were taught how to read the Koran. All women slaves had to be trained in the Islamic religion. After rising to the rank of wife, many of them had mosques built and charities founded, which shows that they devoted themselves to Islam after having been converted. The letters they wrote are indicators of their unique training. Along with music, they were taught poetry and literature. Hürrem Sultana made Sultan Süleyman the Magnificent fall in love with her by sending him poems she had written. In one of her poems dedicated to Sultan Süleyman she wrote 'Let Hürrem be sacrificed to a single hair of your moustache' (the meaning of these metaphorical lines is 'I wouldn't hesitate to die for you').
The number of women slaves in the Harem started to increase considerably from the time of Sultan Mehmed the Conqueror, and this number varied during the period of every single Sultan. In Ahmed I's period, the heritage system was changed and the practice of appointing the Princes to provinces as governors was given up; instead, they started to be accommodated in the Harem, which of course caused its population to rise rapidly. There used to be 300-500 people in the Harem before Mehmed III, but it is known that the number rose to 700 during his period. Women slaves used to be given a per diem, the amount of which varied from Sultan to Sultan. For instance, during the period of Mahmud I that amount rose to 30-50 Akçe (Ottoman coins).
Money and presents would also be given to them at weddings, festivals and birthdays. While they were taken good care of in the Harem, the Sultan was totally intolerant of those who had committed a crime, and they used to be exiled to Bursa and the island of Chios. Today, we have a document dated back to 1764 proving that Mustafa III exiled two women slaves to Bursa and Chios. Apart from the 10-20 women
slaves who were of service to the Sultan directly, the others used to undertake various posts in the Harem. Inexperienced women slaves would go through a training period at first; with the arrival of newcomers, the existing experienced slaves would be promoted to the rank of assistant-master and work in the Kadinefendi, Valide Sultan, Prince and Gozde chambers. They were classified as the grand, standard and lower assistant masters and worked under the command of the head assistant master in those chambers. 10-15 assistants under the command of the most experienced assistant master would
be on duty at night to provide the security of the Harem. The Hünkar's (the Sultan's) assistant-masters had the most important place in the Harem, serving the Sultan in all ways from his bedtime arrangements to the preparation of his meals. Those who used to do the Sultan's private and special work were called Hazinedar (Treasurer), and the person managing them, the treasurer master.
They used to be ranked as 1st, 2nd, 3rd, 4th and 5th Hazinedars and would remain in the Sultan's chamber whenever the Sultan himself was in the palace. But only the Hazinedar master could sit by the Sultan; the others were let in when they were called. The 3rd, 4th and 5th Hazinedars used to stand on duty with their assistants outside the Sultan's door for 24 hours. Besides, the key to the treasury was kept by the Head Hazinedar (the Treasurer). Hazinedars, who used to carry the Sultan's seal on a golden pendant around their necks, were his confidential friends, too. That is why the Sultans would always select their own Treasurers; the former ones would either be sent back to the old palace or be set free by the signing of their documents of freedom. One of the most important duties of the treasurer was to arrange the nights the Kadinefendis would spend with the Sultan. Kethuda Kadin, ranking after the Kadinefendis, was the master of the ceremonies which would take place in the Harem; on important days such as festivals and weddings she used to organise the ceremonies there. She would carry a silver rod to express the greatness of her post and used to keep the Sultan's seal to seal his properties in his chamber. Kethuda Kadin had maids to help her in everything she did. Casnigir Usta (flavour
taster) used to deal with all the meals in the Harem. With the women slaves under their control, they had to taste all the food the Sultan would eat to find out if anything was poisoned. Camasir Usta (laundry woman) was responsible for the laundry; with the women slaves under their command they always tried to do their best. Ibriktar Usta (ewer keeper) helped the Sultan perform his ablutions by pouring water on his hands. The coffee business was Kahveci Usta's duty and the cellar Kilerci Usta's. Kutucu Usta would wash the Sultans, Kadinefendis and Ikbals in the Hamam (Turkish bath). Külhanci Usta was responsible for the warmth of the Hamams; they had to burn wood to heat the Hamam cabins. A total of 5 Katibe Ustas were in charge of discipline, regulation and protocol affairs. The Ustas (masters) examining the sick women slaves were called Hastalar Ustasi (the master of patients); midwives and nurses used to work under the control of Kethuda Kadin.
In a list showing the employees of the Harem at the time of Mahmud I, we can see 17 women slaves working in the cellar, 23 under the command of the higher-ranked women slaves, 72 for the princes, 15 for Ikbals, and 230 in various other duties, making a total of 456 women slaves. This list proves that the Sultans were not in relations with all the Cariyes (women slaves) in the Harem. Hundreds of women slaves coming to the Harem with hopes of becoming the Sultan's wife used to be accommodated in the Cariyeler Kogusu (the chamber of women slaves) to the west of the Harem. They would eat their meals together, served on big trays directly from the Harem's kitchen, sitting on the bank where the Harem's security guards stood. In winter, younger Concubines used to sleep in woollen beds on wooden divans placed on the ground floor of the Concubines' chamber, which was heated by a huge fireplace and had mezzanine floors supported by strong pillars; on the top floors lived the higher-ranked Concubines. Inexperienced Concubines were inspected by Concubines, the Concubines by assistant masters, and finally the assistant masters by their masters. They all had a regular life in the Harem.
The gate opening into the Concubines' garden bears an inscription revealing their hopes and dreams: 'My God, who can open all the doors, please open us blessed doors, too'. This reflects their common wishes and hopes for the future. They all had new dreams every day, waiting for the day the Sultan would select them.
She was extravagantly beautiful and would display the silhouette of her body under her elaborate tight dress. She would spend a lot of time in front of mirrors, watching herself for hours while combing her hair, which reached to the ground. She had dimples on her smiling face and was sure that she was admired by all. With her rosy cheeks and vigorous breasts she would stand up and walk into her colourful dreams. Weren't they the same Sultans who had promoted Hürrem, Safiye and Naksidil Sultanas from slavery to the rank of Sultana? Sometimes this dream would never come true, and their hopes would be carried over to the following days.
After the period of Ahmed I, Ottoman Princes were not appointed to the provinces as governors and they started to stay in the Harem. At times they had intercourse with Concubines, but since it was prohibited to have children by them, they had to comply with the rules. If a Concubine accidentally got pregnant, she had to lose the baby by mandatory abortion.
It is known that if those voluptuous Concubines were unable to make themselves some room in the Princes' chambers either, they used to make love to each other. They would make love with the Harem Agalari (Eunuchs) too, although they were castrated; it is widely known that Eunuchs had many adventures with women slaves. After being freed and marrying other people outside the Palace, some would divorce a while later, telling the husband 'I used to get more pleasure from my previous intercourse with the black men', which is a proof of their adventures with them. We know that Eunuchs would kill each other because of jealousy. Suleyman II, who reigned between 1687 and 1691, was sick all the time and had to spend most of his time in Edirne Palace. Making use of his absence, Concubines used to have more relations with Eunuchs. Ahmed II, who was living in the Harem as the successor at that time, learnt this from other Concubines and, after sitting on the throne upon his brother's death, banned Eunuchs from entering the Harem after dusk.
At times, those extravagantly beautiful Concubines fell in love with their music teachers. A Kalfa (assistant master) used to stand next to them while they were being tutored, but their eyes would reveal their desires before words; no words were needed in such a case. Haci Arif Bey, Aziz Efendi and Sadullah Aga were some of those who fell in love with their students while tutoring them. Haci Arif Bey was extremely handsome, and the most famous composer of the period. The Sultan was very fond of his work and had asked him to tutor the Concubines in the Harem. As the classes started, their significant glances became more meaningful. He used to make her memorise new songs, and she would reply by singing purposeful songs. That woman slave was deeply in love with Haci Arif Bey, but before she could reveal her love to him she died of tuberculosis. Another Concubine was clever enough, using the power of words along with glances, to make him love her. Making love with Concubines was a crime; nevertheless, with the help of some matchmakers, the Sultan allowed his Concubine to marry Haci Arif Bey.
Aziz Efendi was not so handsome as Haci Arif Bey, but he had an incredibly beautiful voice. He also taught Concubines in the training room of the Palace. He was very sensitive and quite shy, and could not look at the faces of the Concubines he was tutoring. One day they brought him a Cariye who belonged to Hanim Sultana and asked him to train her. She was extremely talented; she could sing songs perfectly the day after she had learnt them. Aziz Efendi was very much impressed by her ability and took an admiring look at the Concubine. Their eyes met, revealing to each other what they had in their hearts. That continued every single day, with no words being spoken by either. They started to pronounce their love through meaningful songs. But one day they found out that their classes had been cancelled. Thus, they
had to bury their platonic love deep in their hearts. Sometimes such a love would end up in front of an executioner. Sadullah Aga was one of those who had fallen in love with his student, Mihriban, and was able to save his head by a piece of music he composed. We are going to tell his story while narrating the reign of Selim III.
Origins 16(1):11-24 (1989).
WHAT THIS ARTICLE IS ABOUT
Punctuated equilibria ("punc eq" or PE) was originally proposed as an attempt to offer an alternative evolutionary view to the classical Darwinian theory of speciation as a slow, gradual process. PE theory includes both a two-pronged claim about the fossil record of a species (stasis and abrupt appearance) and a historical scenario and/or mechanism to explain that claim. Although the mechanism is often very difficult to test, the claim about the fossil record of species is rather easily testable, being both falsifiable and potentially verifiable.
Not a single species has been found with a fossil record which definitely violates the claim of stasis and abrupt appearance. The organisms whose fossil records come closest to violating stasis and abrupt appearance are unicellular, and are found primarily in the Cenozoic portion of the stratigraphic column. Existing evolutionary theory, even that invoked in PE theory mechanisms, cannot explain why unicellular organisms should be the only organisms to violate the first two claims (stasis and abrupt appearance) of PE theory.
A stratigraphic mechanism for the claim of punctuated equilibria is here suggested. Species which experience instantaneous burial would be expected to display stasis and abrupt appearance in the resultant sediments. As the length of time actually represented by the sedimentary record increases, exceptions to stasis and abrupt appearance would be expected beginning with short-lived, catastrophe-tolerant species. Since possible exceptions to stasis and abrupt appearance are found only among species with generation times of less than a year, there is no evidence that the sediments in which any species is preserved must have taken any more than a year to be deposited. The fact that exceptions to stasis and abrupt appearance may occur in the Cenozoic sediments, but not in the older sediments, is consistent with the idea that the pre-Cenozoic was deposited in the flood, and that a significant portion of the Cenozoic may be post-flood.
Dating from only 1972 (Eldredge and Gould 1972), punctuated equilibria is a rather young theory of science. As with new ideas in other arenas of human thought, such as clothing design, computer improvement, and the rebellion of the next generation's youth, few novel ideas of science become popular. New ideas and those who hold them often encounter a plethora of competitive ideas and an abundance of misunderstanding. Punctuated equilibria is no exception. Over the last seventeen years a number of valid variations, as well as invalid understandings, of punctuated equilibria theory have surfaced. This paper seeks first of all to clear up the confusion about what punctuated equilibria truly is, and what it is not. The second purpose of this paper is to propose a punctuated equilibria mechanism which is consistent with a young-earth creation model.
PUNC EQ: WHAT IT IS, AND WHAT IT IS NOT
Clarifications of PE theory
Punctuated equilibria (fondly known as "punc eq", and hereafter referred to as PE) theories are all composed of two claims. The first claim is merely a paleontological observation, or what Stephen Jay Gould calls the "geometry" of PE. As will be elaborated below, this claim is that: 1) transitional forms are lacking between species, and 2) a species' morphology does not change substantially throughout its range in the fossil record. Because, with minor variations, it is a common element in all PE theories, the paleontological observation may just as well be considered punctuated equilibria sensu stricto. The second claim of each PE theory is a mechanism proposed to explain the paleontological observation, usually a speciation mechanism to account for the lack of transitional forms between species. The variety of these mechanisms accounts for the vast majority of the variations upon PE theory that exist. Since PE theories differ predominantly in their second claims, it may be said that PE sensu lato is a statement of the paleontological observation along with a mechanism for its explanation.
A further clarification is that PE theories are proposed to deal with species, and should be applied exclusively to species or, in some cases, subspecies and/or varieties. This claim will be argued more fully in subsequent sections of this paper. "Stasis", when used to describe the fossil record of higher taxa, has a very different and much more abstract meaning than when it is used to describe the fossil record of a species (see under "The claim of stasis", below). The difference in meaning renders it very difficult, if not impossible, to test the claim of stasis in higher taxonomic groups. The abrupt appearance of higher taxa would also be defined differently from the abrupt appearance of species (see under "The claim of abrupt appearance", below). With regard to mechanisms of origin, higher taxa may have arisen by different means than their component species. As a result, it may well be inappropriate to apply PE mechanisms which are designed to account for the origin of species to the origin of higher taxa. PE theory should not be applied to the fossil record of genera, families, orders, classes, phyla, or any other taxonomic unit higher than the species.
The claim of stasis
The paleontological observation of punctuated equilibria (or PE sensu stricto) is itself composed of two claims about the fossil record of species: that species predominantly show both stasis and abrupt appearance. The claim of stasis is that the range of morphological variation exhibited by populations of a given species does not change over the duration of the stratigraphic range of that species. Thus no substantial change in morphology occurs between the stratigraphically lowest population and the stratigraphically highest population, nor, in fact, between any other two populations in between. This particular claim is verifiable and potentially falsifiable from the known fossil record. In fact, it is extremely likely that, if incorrect, this claim would be very quickly and profoundly falsified, even if the fossil record were very incomplete. It should theoretically take only two stratigraphically distinct populations of a species to indicate that a population had changed substantially.
Once again, to elaborate upon an earlier clarification, the claim of paleontological stasis is to be applied only to species. A higher taxon is defined as a region of morphological space which includes at least the morphologies of all component species. If higher taxa are real and have true boundaries in morphological space (which is the claim of many creationists), then there is unrecognized, unrealized, and perhaps even unrealizable morphological space within each higher taxon. This means that it is very improbable that any definition based upon realized morphology reflects the true nature of the higher taxon.
Let us say, for the sake of argument, that a particular higher taxon is real and that its true boundaries have never changed through time. Let us further postulate that there was a time in the history of that higher taxon when it was represented by only a single species (i.e., a single morphology). Let us further speculate that at some later point in time the higher taxon is represented by more than a single species (i.e., more than a single morphology). Regardless of how the new species came about, the empirical morphological evidence alone would lead us to the incorrect conclusion that the higher taxon's morphological range has, at the very least, become broader. Suppose further that the original species was neither represented at the later time period nor was its morphology within the region of morphological space bounded by the species that were alive. In this case the higher taxon's morphology has not only appeared to become broader, but it has also changed. To arrive at any other conclusion would not be viable. If one simply defined the higher taxon as the morphological space which includes all component species, living and dead, then it would be impossible to falsify the hypothesis of morphological stasis of a higher taxon. Such a concept would be useless to us in determining whether stasis occurs in higher taxa.
If we are to define higher taxa as exhibiting stasis, then non-paleontological and perhaps even non-morphological evidence will have to be employed. Thus, the PE claim of paleontological stasis of species should not be applied to taxonomic groups above the level of the species. If we wish to study the "stasis" of higher groups it will be advisable to carefully define new descriptors, such as "paleontological stasis", "species stasis", "familial stasis", etc.
The claim of abrupt appearance
The second claim of
the paleontological observation of PE is abrupt appearance. It is, in fact, unfortunate
that this claim should be labeled "abrupt appearance", for the label itself
implies that its definition is tautologous. Abrupt appearance would most logically mean
that a species appears abruptly in the stratigraphic record in the oldest sediments where
that species is identified. If this were its definition, abrupt appearance would be a
tautology - a true statement but without explanatory power. This, however, is not
the definition of abrupt appearance. Rather, abrupt appearance is the claim that the
oldest identifiable population of a species is not preceded by any transitional series or
even transitional form from another species in the fossil record. Stated another way,
abrupt appearance is the claim that there are no inter-specific transitional forms in the
fossil record. As with stasis, this claim is verifiable and potentially falsifiable for
the known fossil record. Considering the number of species in the fossil record
(approximately a quarter million, according to Raup and Stanley 1978, p. 3), it seems that
if it were false, this claim would likely have been falsified by now.
Also like stasis, abrupt appearance should not be applied above the level of the species. Although PE theory maintains that there are few if any inter-specific transitional forms, it does not deny the possibility that a species can act as a transitional form between two higher groups. For example, it might be argued that although there are no inter-specific transitional forms leading to or away from Archaeopteryx, Archaeopteryx itself can be considered a transitional form between reptiles and birds. It might also be argued that although there are no inter-specific transitional forms connecting any pair of species in the human series, or the horse series, or the elephant series, etc., the various species in each of these cases can be understood to be intermediate species. Thus although there may be no inter-specific transitional forms leading up to a higher taxon, there may well be transitional species leading up to a higher taxon. This would mean that although the fossil record of a higher taxon is exhibiting abrupt appearance on the species level, it is not truly exhibiting abrupt appearance. Abrupt appearance of higher taxa has a different meaning than abrupt appearance of species.
The original mechanism for abrupt appearance
The variety among PE theories largely arises from the variety of mechanisms that have been
proposed to explain the two paleontological claims outlined above. The original
formulation of PE theory was that of Niles Eldredge and Stephen Jay Gould in 1972.
According to its formulators, PE theory came primarily from prevailing biological theory
and not from paleontology (nor, as some have intimated, from the claims of creationists
that transitional forms do not exist in the fossil record) (Eldredge 1971; Eldredge and Gould 1972).
For nearly a century after Darwin most evolutionary biologists were of the opinion that speciation occurs through a phyletic transformation of large species populations over very long periods of time. By 1950, however, advances in genetics and population biology had left little hope for this type of speciation mechanism. Stabilizing selection in large populations seemed to prohibit rather than allow change to occur. As a result of this, many alternative mechanisms were proposed to explain the origin of species.
By the time Eldredge and Gould went through graduate school, the peripheral isolate theory of allopatric speciation of Ernst Mayr (1963, 1971) was the most popular and oft-advocated biological theory of speciation. According to this theory species actually arise in small populations isolated from and peripheral to the main population(s) of a species. In these peripheral isolates high selection pressure, genetic drift, and the founder effect combine to (theoretically) allow speciation in only a thousand generations or so.
If, as Eldredge and Gould reasoned, this is how species actually arise, it should be possible to specify what the paleontological prediction of such a theory would be. If this was the speciation mechanism, the transitional populations would be small in terms of occupied area. They would also exist over a time period of only thousands to tens of thousands of years. Accordingly, if the fossil record in fact records 3.5 billion years of earth history, the likelihood would be extremely low that a transitional population would ever be located in the fossil record. Furthermore, if large population sizes tend to prohibit morphological change via stabilizing selection, species should exhibit stasis during most or all of their existence. It is important to note that PE as originally defined by Eldredge and Gould was born out of the second claim: the mechanism of Mayr's peripheral isolate theory of allopatric speciation. The paleontological observation was a prediction from that mechanism.
It is also worthy of note that Eldredge and Gould do expect exceptions to the universality of abrupt appearance. In those rare occurrences where the transitional populations are found in sediments of sufficient resolution, inter-specific transitional forms are expected to be seen. Since the mean stratigraphic resolution is considered to vary inversely with age, the rarity of exceptions might be expected to increase with the age of sediment. In their original paper Eldredge and Gould (1972) also suggest that PE may not apply to all organisms. Mayr's peripheral isolate mechanism was proposed to account for the origin of sexual species. Most sexual organisms would be expected to follow predictions of PE theory. Since speciation mechanisms are largely unknown among asexual organisms, Eldredge and Gould considered it possible that the fossil record of asexual organisms might not follow the predictions of PE theory. PE theory as originally formulated would predict that the number of exceptions to stasis and abrupt appearance would be small, but would increase in frequency with decreasing age, and perhaps be more frequent among asexual organisms.
Other mechanisms for abrupt appearance
The peripheral isolate mechanism proposed for the original PE theory is not the only mechanism
which has been invoked to explain abrupt appearance. If speciation always occurred by
means of macromutation, then abrupt appearance without any exceptions would be the
paleontological prediction. Thus another PE theory sensu lato would be one with a
macromutation mechanism. If, on the other hand, speciation occurred by means of
large-scale morphological changes caused by mutations in developmental regulatory genes,
once again abrupt appearance without any exceptions would be the paleontological
prediction. A third PE theory sensu lato would then be one which included
speciation by means of the mechanism of regulatory gene mutation. A fourth PE theory sensu
lato might include a speciation mechanism which accounts for a large change in adult
morphology by means of small, non-regulatory-gene changes in the ontogeny of an organism.
A fifth theory might include any combination of the four speciation mechanisms mentioned above.
It is important to note that Goldschmidt (1940) proposed that higher taxa (e.g., phyla) may well have arisen by means of the third and/or fourth mechanisms above, namely by means of small changes (regulatory or not) occurring in ontogeny which effect large changes in adult morphology. Although Goldschmidt's "hopeful monster" mechanism can be used as a mechanism to account for the paleontological observation of PE theory, it is not usually so used. It is generally appealed to in order to account for the origin of higher taxa, and not the origin of species.
Strictly speaking, since the origin of a higher taxon occurs with the origin of a new species, a mechanism for the origin of higher taxa can also be seen as a speciation mechanism. However, some evolutionary biologists feel that higher taxa do not simply originate by means of the usual speciation mechanism, or even a scaled-up version of it. It is thought that higher taxa originate only rarely, and by a mechanism of a very different nature from the normal mode of speciation. Goldschmidt's "hopeful monster" mechanism is of a very different nature from the traditional speciation mechanism, and it is thought to have occurred only very rarely, if ever, and primarily in the origin of higher taxa. His "hopeful monster" theory is not to be equated with punctuated equilibria, as the two are not the same.
Mechanisms for stasis
Besides there being a number of speciation mechanisms to account for the observation of abrupt appearance, there are several mechanisms to account for the observation of stasis. As mentioned above, one suggestion is that large population size may swamp out change and account for stasis. Another possibility is that since the fossil record preserves only a small number of the morphological characters of an organism (e.g., primarily the hard parts of organisms), the organism may actually be changing radically just not in the characters observed. Yet another possibility is that environments persist through time, producing no net change in selection pressure. An additional possibility is that there is some unknown mechanism of "homeostasis" which prevents organismal change. Any one or more of these mechanisms can be combined with a speciation mechanism to make up a PE theory sensu lato.
HOW PE SENSU STRICTO FARES AGAINST THE DATA
The paleontological observation of PE theory (i.e., PE theory sensu
stricto) has fared rather well in the light of the data of the last seventeen years.
The best exception to the claim of paleontological stasis in the fossil record of which I
am aware is the Permian foraminifer Lepidolina multiseptata (Ozawa 1975, Gould
and Eldredge 1977). Other possible claims exist among the Cenozoic fossil records of
unicellular organisms, but are insufficiently documented to be conclusive (Lazarus 1983).
To the claim of no inter-specific transitional forms there are also suggested exceptions (Kellogg 1975; Williamson 1981; Malmgren, Berggren and Lohmann 1983; and Arnold 1983). Gould and Eldredge (1977), however, feel that Kellogg (1975) did not provide sufficient evidence to exclude the possibility of the change being non-genomic (i.e., non-heritable) and ecophenotypic (i.e., environmentally determined) in character. Similar arguments could be directed against Ozawa (1975) if the time period covered (Middle to Upper Permian) was collapsed into a period within the year of a global flood. Williamson's (1981) study did not demonstrate stasis in the case of any of his thirteen "new species". Furthermore, the changes he identified happened simultaneously in large populations of widely different organisms (sexual through hermaphroditic; infaunal through epifaunal, etc.). Once again, then, it is possible that Williamson's data record ecophenotypic, and not genotypic, change (Mayr 1982, Boucot 1982). Both Arnold (1983) and Malmgren et al. (1983) looked at only a single core of sediment, so did not control for the possibility that a climatic change may have forced the replacement of one species with another more tolerant of the new climate. As Malmgren et al. (1983) admit, there is a gradual change in ocean temperature across the interval sampled, and their own study does not exclude the possibility that a species may have migrated across the area as a result of climatic change.
Although there are no bona fide exceptions to the paleontological observation of punctuated equilibria, the best candidate cases seem to come from foraminifera in the Upper Cenozoic. Although Eldredge and Gould (1972) felt that asexual organisms may not follow PE theory, most asexual organisms do (e.g., parthenogenetic freshwater snails; Williamson 1981). There is nothing in current evolutionary theory, PE mechanisms included, which should predict that the exceptions to PE theory should come specifically from foraminifera, and not other asexual organisms. Yet, because some researchers think that forams show gradual change, it has been suggested that unicellular (asexual?) organisms evolve by means of a very different evolutionary mode than multicellular organisms. In doing so, they are appealing to information that is not yet known but that, it is hoped, will be forthcoming.
AN ALTERNATIVE MECHANISM FOR PUNC EQ
Conventional geology and PE theory
All the PE
theory mechanisms that have been proposed to date are biologic in nature, and most are
evolutionary. They do not exhaust the possibilities. It is also possible to consider a
stratigraphic mechanism for the paleontological observation of punctuated equilibria.
Let us consider such a possibility by first of all asking what variety of theories might be invoked to explain the rock record on earth. There are a large number of potential theories; in fact, there is theoretically an infinite number. Let us simplify the situation and place all the possible theories onto a one-dimensional "spectrum of theories for the origin of the earth's rocks". At one end of such a spectrum might be the theory that the rate at which existing rocks were formed has been constant throughout the entire history of the earth. At the other end of such a spectrum might be the theory that all the earth's current rocks were formed in one event of zero duration (e.g., creation) or of very short duration (e.g., a single, very short-lived catastrophe). Since rocks are forming today and at varying rates, neither of these end-point theories accurately accounts for the origin of all the rocks on the earth. The true theory for the origin of the earth's rocks lies somewhere between these two extremes. It is up to geologists to determine where on that spectrum of possibilities the theory lies which can best account for all the earth's rocks.
Consider for a moment the uniform-rate theory at the one end of the spectrum. This theory would more or less characterize Charles Lyell's theory for the origin of the earth's rocks. Modern geologic theory has modified Lyell's theory of uniformity to allow for many local catastrophes and varying rates through time, but is still located close to the uniform-rate end of the spectrum. If this theory correctly characterizes the manner in which the earth's rocks were formed, the fossil record is to be interpreted from bottom to top as a sampling from the earth's biota through time. Each sample is in essence a snapshot of a particular moment in the history of the earth. Though some of the successive snapshots are closer together in time than others, the fossil record would be analogous to a motion picture of the history of life on earth, each frame being a snapshot of a very brief moment in time. Consequently, any change in morphology up the geologic column would be interpreted as reflecting a change with time; in other words, as evolution. Since this idea of what Stephen Jay Gould calls "deep time" is the conventional understanding of the stratigraphic column, it is understandable that any mechanism to explain vertical changes in the fossils in the stratigraphic column would be inherently biologic and evolutionary in nature.
Geologic catastrophe and PE theory
Let us now
consider, however, the theory on the opposite end of the spectrum: that all the
earth's rocks originated in a single event of very short duration. If the earth's rocks
originated by means of such a catastrophe, the fossil record represents a snapshot of the
earth's biota at a moment in time. Changes in fossil morphology between levels would not
then reflect changes in biology through time. There would be no need to invoke
evolutionary mechanisms to explain any vertical changes in organismal morphology. What
then would we expect to see in the fossil record with respect to stasis and inter-specific
transitional forms? Since each species would be sampled at only a moment in time, species
should predominantly show stasis in the fossil record.
Exceptions to species stasis would occur in one or more of three ways. Firstly, the processes operating during the catastrophe may have sorted the organisms into a vertical gradient of morphology. Such an explanation might be proposed if a laboratory simulation of the depositional processes of the catastrophe sorted individuals of a given species in a manner reflective of their stratigraphic distribution. Secondly, a vertical morphology gradient may be reflective of an original geographic, latitudinal, or altitudinal gradient of morphology. This might be substantiated if a similar morphology gradient exists in living populations of the species of concern and/or related species. Thirdly, a vertical morphology gradient may be the result of an actual morphological transition during the course of the catastrophe. This could occur only in an organism which is resistant to the conditions of such a catastrophe and has a generation time substantially shorter than the duration of the catastrophe.
A catastrophe would produce a fossil record predominated by a lack of inter-specific transitional forms. As in the case of stasis, exceptions might occur in one or more of three ways. Firstly, a lineage could show inter-specific intermediates by an inter-specific morphology landing by chance in a stratigraphically intermediate position. This is a very unlikely event, and the more intermediates are found, the lower the likelihood that such a scenario actually occurred. Secondly, a fossil record which shows two species vertically separated by a zone of inter-specific transitional forms may be reflecting a pre-catastrophe morphology gradient. If this were the case, the fossils representing the inter-specific transitional forms should be found in a select geographic region, somewhat reflective of the original hybrid zone (or zone of intermediates). Thirdly, an apparent change in morphology up section may reflect an actual speciation event. Once again, this could occur only in an organism which is resistant to the conditions of such a catastrophe and has a generation time substantially shorter than the duration of the catastrophe. The rarity of exceptions to PE sensu stricto indicates that a model of catastrophic deposition of the earth's rocks could be invoked as a mechanism to account for the paleontological observation of PE theory.
As one moved across the spectrum from the single-catastrophe endpoint, theories would be encountered which would introduce more time into the formation of the earth's rocks. It is possible to posit that there was a single catastrophe with the remainder of the earth's history uniform, or that there were several catastrophes with uniformity between, or that there were catastrophes of increasing length. As one moved across the spectrum in this way, one would expect that true examples of biological change would manifest themselves with greater and greater frequency. When the periods of uniformity are short, the only biological change that could possibly be seen would be in those organisms with short generation times. Only when the periods of uniformity were long enough could organisms with long generation times show intra- and inter-specific evolution.
Creation geology and PE theory
Creationist models of earth history are varied. There are, in fact, too many of them to permit
consideration of each of them here. An outline of one creation model will be presented
with its corresponding paleontological prediction. This model begins with the creation of
the earth's oldest rocks in something less than 24 hours on Day 1 of the creation week. On
Day 3, there may well have been another geologic catastrophe of less than 24-hour
duration. Then, for over 1600 years, until the global catastrophe of the flood, there was
a period of apparent uniformity of geologic processes at something near current rates. The
initial stages of the flood may well have eroded away and thus destroyed all evidence of
this antediluvian geology. It may even have destroyed some or all of the effects of the Day 3 catastrophe.
After the flood there may have been a series of catastrophes perhaps decreasing in geographic extent, magnitude, and duration as time passed. Each pair of successive catastrophes was probably separated by a period of uniform geologic sedimentation, again with rates similar to today. Once again, however, the evidence for the periods of "normal" activity may have usually been destroyed by subsequent catastrophe. This particular model would understand the lower portion of the rock record as the product of one or two catastrophes. Only the uppermost part of the post-flood rock record would have any significant amount of evidence of a uniform rate of rock formation.
This model lies towards the catastrophe endpoint of the spectrum of theories for the origin of the earth's rocks. Such a model would predict a fossil record which predominantly shows stasis and abrupt appearance. Since the flood was on the order of a year in length, exceptions to PE sensu stricto in the flood sediments would most likely be sediment-suspension-resistant, marine organisms with generation times on the order of a month or less. In the post-flood sediments, exceptions to PE sensu stricto (if they occur at all) should increase in frequency vertically. Exceptions, once again, would most likely be short-lived, marine organisms which are resistant to suspended sediment.
"Punc Eq Creation Style" (PECS) is a punctuated equilibria
theory sensu lato. It is composed of two primary claims: that stasis and abrupt
appearance predominate in the fossil record of species, and that the stasis and abrupt
appearance can be accounted for in a catastrophic flood model. All other PE theories
explain the paleontological observations of stasis and abrupt appearance of species. Most
PE theories also explain why the proposed exceptions tend to be in the Upper Cenozoic.
PECS, however, goes even further. It not only predicts the stasis and abrupt appearance of
species, but it also predicts that exceptions, if they occur, will be found more often
than not in the Upper Cenozoic among the marine, suspension-resistant organisms with short
generation times (e.g., foraminifera). Because of its greater explanatory power, PECS
theory is superior to other PE theories.
Much research needs to be done in this particular area. Currently, in spite of a number of claims to the contrary, there are no completely satisfactory exceptions to the universality of PE sensu stricto. Although no PE theory, including PECS, requires the existence of exceptions, valid exceptions will make it possible to choose from among the various PE theories. Alone among PE theories, PECS predicts that exceptions will tend to be marine, sediment-suspension-resistant organisms with short generation times (one month or less in flood sediments). Searches for exceptions and evaluation of claims for exceptions will be important in determining the validity of the PECS model.
Exceptions to stasis and abrupt appearance which are the result of true morphological change through time should also aid us in differentiating between flood and post-flood sediments. It is in post-flood sediments where substantially more exceptions should be found. The identification of pre-flood/flood and flood/post-flood boundaries will be extremely important in the elaboration of better flood models. The evidence to date from possible PE exceptions suggests that at least the Neogene (Upper Tertiary) may be post-flood. PE exceptions may also aid in determining the mode, tempo, and number of post-flood catastrophes. Inferred generation times may allow for an estimate of duration of both catastrophes and inter-catastrophe periods. Organismal resistance to conditions experienced during catastrophes may allow us to infer what type of catastrophe actually occurred.
Exceptions to stasis and abrupt appearance which are not due to actual changes in morphology may also provide valuable information about the mode of deposition as well as original biogeography. Exceptions to stasis which are due to sorting will indicate the importance and manner of sorting which occurred during any one depositional period. It may well be, for example, that Cope's Law (that a lineage tends to increase in body size up the stratigraphic column) is the result of such preferential sorting. Exceptions to a lack of inter-specific transitional forms which are due to the chance occurrence of an intermediate morphology in intermediate stratigraphic position will indicate the possible importance of randomness in the production of apparent pattern in the fossil record. The more that is known about the effects of sorting and randomness in catastrophic events, the closer we will be to understanding what happened during the flood. Exceptions to stasis and/or abrupt appearance, on the other hand, which are reflections of original biogeography, will aid us immensely in the understanding of paleo-biogeography, which in turn will help us to understand paleoclimates and paleobiology.
I would like to thank the reviewers and Jim Gibson for reading an earlier draft and making suggestions for its improvement.
All contents copyright Geoscience Research Institute. All rights reserved.
Human Rights Education in Asian Schools Volume VIII
Child Rights, Classroom and School Management: An Indonesian Experience
Indonesia ratified the Convention on the Rights of the Child (CRC) through Presidential Decision No. 36/1990. In the Indonesian legal system, a Presidential Decision has a status below the laws and government regulations. This is probably the reason for the ineffective implementation of CRC by the government as well as by families, communities and institutions in Indonesia. It is also obvious that the creation of supporting structures for the fulfillment of child rights is not a government priority.
While the law on the protection of children entitled Republic of Indonesia Law Number 23 Year 2002 on Child Protection
(Undang-Undang Republik Indonesia Nomor 23 tahun 2002 tentang Perlindungan Anak) shows the Indonesian government's commitment to fulfill child rights, the lack of knowledge and understanding among adults of this law and of how to implement it is the biggest obstacle.
The teaching of child rights in Indonesian schools through civic education tends to portray children as passive objects, indoctrinating them with the obligation to obey the government, parents and other adults. They learn more about their duties as children rather than their rights that should be fulfilled. And what they understand as their rights are restricted by the people around them.
An ideal condition is needed to ensure the fulfillment of child rights. The school provides one condition. It can be a place for children to develop their capability, interest, talent and creativity through their active participation. To achieve this, the support from adults (in this case the teachers) as well as proper school environment are needed.
However, the real situation in schools and in the education community, characterized by many cases of violence and abuse by teachers and school guards, and bullying by students, has to be faced.
All these are due to a lack of knowledge and understanding of child rights among teachers in particular and school officials in general.
A team consisting of a social worker, a school principal and an education official in the North Sumatra Provincial Education Office in Indonesia
attended a short training course entitled "Child Rights, Classroom and School Management"
in Lund University from 24 September to 10 October 2003. As a follow up to the training course, the team started a project in a public primary school named SD Negeri No. 023898 in Binjai District, North Sumatra Province.
SD Negeri No. 023898 is an ordinary public primary school located in East Binjai. It has 177 students, 90 boys and 87 girls, in six classes from grade one to grade six. They come from the surrounding communities. 90% of their parents are temporary laborers, and 10% are public servants. The project team chose this school because the Principal of the school is a member of the team and participated in the short-course training program in Lund University.
The Lund University program is supported by the Swedish International Development Cooperation Agency (SIDA). The English-language training program is designed for those holding positions in schools, and at intermediate (education officers and trainers responsible for educational activities at district or provincial levels) or central levels (teacher trainers, headmaster trainers, staff at educational institutes of the Ministry of Education). Each country is represented by a team of 3 people, each member representing one level of education. The team is expected to work together in the project. The training program has 30 participants in order to ensure close working relationship between participants and lecturers.
The right to, in and through education is the guiding principle in the course, and the whole training program has a child-rights-based approach. The program provides opportunities for participants from different countries to compare and share their experiences in light of the CRC, Education for All (EFA) and other internationally-agreed declarations.
A child-rights-based approach has the potential of contributing to the broader efforts of improving educational quality and efficiency. Schools and classrooms that are protective, inclusive, child-centered, democratic and supportive of active participation have the potential of solving problems such as non-attendance, dropout and low completion rates, which are common in developing countries. Child-centered content and teaching/learning processes appropriate to the child's developmental level, abilities, and learning style promote effective learning. A child-rights-based approach may also enhance teacher capacity, morale, commitment, status and income. Negative attitudes may be altered through the practice of conflict resolution, democracy, tolerance and respect in the classroom.
The overall objective of the course, from a development perspective, is to enhance the right to relevant education for all - an education that empowers the poor and excluded sections of the population to participate as active and informed citizens in all aspects of development.
The objective is to stimulate the transformation of conventional top-down approaches into participatory rights-based, learner-friendly and gender-sensitive approaches to teaching and learning. The training program aims to:
- Develop skills, understanding and attitudes in favor of rights-based educational work at the classroom and school levels, taking into consideration the experience and perspective of the participants, and the CRC, EFA and other internationally-agreed declarations.
- Stimulate and contribute to the development of methodologies in the area of child rights in the classroom and school management at country level.
- Familiarize participants with Swedish and other international practices at school and classroom levels in relation to democratic principles and human rights.
The training program consisted of two phases. The first phase took place during the 3-week stay at Lund University in Sweden. The main content of the first phase consisted of studies in the subject area, combined with visits to relevant Swedish institutions, including different schools. During the first phase all participants were assigned a mentor. The first phase also consisted of project work on a part-time basis for 5 months on a relevant task in the home country decided upon during the participants' stay in Sweden. The project work should have a high degree of practical relevance for the participants and their home organization. The second phase consisted of a follow-up seminar on the project work for 2 weeks in Tanzania. During this phase the participants were asked as part of the course to develop, discuss and present plans for the application of the course content in their work. Finally, a couple of months after the second phase, the mentors did a follow-up visit in the participants' home countries.
The project in Indonesia aims to:
- Collect, describe and analyze data pertaining to children's view on child rights particularly through the learning process in school. This covers both the children's knowledge of child rights and their view about the school, and the knowledge of teachers about child rights.
- Increase the knowledge and understanding of students, teachers and parents on child rights through the learning process in school.
- Provide teachers with the necessary skills to realize child rights through the learning process in school.
The project aims to directly benefit the
- Students in Grades V and VI of the primary school by preventing discrimination based on sex, race, ethnicity, religion, customs and traditions
- Teachers teaching in Grades I-VI in the school.
The parents, members of the Boards of Education of Binjai District and the North Sumatra Province were also identified as indirect beneficiaries of the project.
The implementation of the project started in November 2003 when the team started to communicate with the Binjai Board of Education and the North Sumatra Provincial Board of Education. The team explained the objectives of the project, and provided information on the Lund University training course. The team sought the comments of the two Boards of Education on the project during their meetings.
After getting the approval of the two Boards of Education in December 2003, the team started meeting the teachers and students in the primary school. The project was explained to 9 teachers and 66 students aged 10-13 years. The teachers expressed willingness to learn about child rights by getting the necessary materials on the CRC and learning from resource persons. They were also willing to acquire the skills to fulfill child rights through the learning processes in school. The students, on the other hand, expressed willingness to answer the survey questionnaire.
The team, during the training in Sweden, decided to carry out a simple survey in the school. It thought that it would be highly important to find out the basic needs of students, which form part of the whole school system. The involvement of students is important, and thus the project should be based on their needs.
The survey covering both students and teachers aimed to collect, describe and analyze data pertaining to the children's view on child rights, particularly through the learning process in school, and their view about the school. In addition, the survey also aimed to study the knowledge of teachers pertaining to child rights.
The questionnaire for the students, using simple language, contains 16 questions. The questionnaire for the teachers contains 9 questions about their view on child rights and the fulfillment of these rights through the learning processes in the school (Annex A).
66 students answered the questionnaire, with equal number of boys and girls aged 10-13 years in Grades V and VI. 9 teachers, all females, answered their own questionnaire.
The responses from the students reveal that all of them know their rights. 8.5% express the view that they will choose what rights they prefer more if they are given the opportunity to choose. 7.3% wanted to choose the right to express their opinion, 64% want the right to education, 56% want the right to play, 55% want the right to have access to education tools, and 40% want the right to stay out of the classroom for educational activities to learn about realities in society relating to the subject lesson. Social studies subject, for example, allows the observation of the surrounding environment of the school.
92% of the student-respondents answered that they have the opportunity to ask questions to the teachers. 67% felt that raising a hand first is an appropriate way to ask a question, but 55% have different views on how to ask, such as: asking a question after the teacher finished reading a question; asking a question politely, respectfully, in a well-mannered way; asking a question if pointed to by friends; asking a question whether or not it (question) makes sense; and asking a question if one does not understand the subject lesson.
44% of the student-respondents said that they never do educational activities outside the classroom. 44% feel they would be happy if they will have the opportunity to do activities outside the classroom. On the other hand, 42% of the student-respondents said that they whenever they have activities outside the classroom, the person who decide the place to study are the teachers (48%) or principal (38%). 56% of the student-respondents expressed their "own view" by not choosing the answer listed in questionnaire. They wrote, for instance, that decisions should be made by students, or the whole class, or the leader of each class on what activities to do outside the classroom.
80% of the student-respondents like the place/location chosen for their activities outside the classroom. Libraries and zoos are favorite places to visit, but 79% have their own view from the options listed.
Relating to the learning process in the classroom, 95% of the student-respondents like the way the teachers teach, although 68% have their own view from the options listed. 76% think they need tools such as textbooks, notebooks, writing materials, television, library, laboratory, musical equipments, playing tools, drawing materials to support the learning process and 74% have their own view from the options listed. From the options listed, 44% think that sports equipments are needed, 36% want visual aids, 31% want pictures and 23% want musical equipments.
Relating to the school environment, 47% of the student-respondents have their own view on what they do not feel comfortable with in school. From the options listed, 35% said that they do not like the school facilities. They point to narrow school yard (29%), dirty environment (27%), lack of latrines (15%) and dilapidated school building (12%).
In relation to discrimination, 76% of the student-respondents think that teachers give special attention to some students because of their intelligence, good behavior, and leadership in class. In response to this view of the students, the teachers said that they always try to give an equal opportunity to the students during learning process. However, students who have good behavior, intelligence, and leadership in class are always the first to take the opportunity offered.
Based on above information, the team organized a one-day training course for 9 teachers of the school. The members of the team and the teachers discussed how to explore and develop new teaching strategies and methodologies in promoting student participation in the learning process - making students willing and able to express their views freely in all matters and having fun at the same time. For instance, previously the learning process focuses more on reading, writing and written exercises in the classroom. During the training, teachers were encouraged to have group discussion, play games, role-play as well as activities outside the classroom. Teachers were also encouraged to be creative in finding new strategies and methodologies in the learning process relating to their subject lesson by using simple tools that are available in the school.
Teacher training, carried out as part of the project, was needed to increase the knowledge and understanding by the teachers on child rights through the learning process in school and to provide teachers with the necessary skills to realize child rights through the learning process in school.
At the beginning of the training, teachers seem to reject particularly the principle that child rights should be fulfilled, protected and respected by adults, and that corporal punishment for students is against child rights. They thought that when students do not behave properly despite oral admonition, they deserve corporal punishment from the teachers. Concerning the use of participatory learning process, teachers said that they would want to use it too. But they also need to fulfill the curricular requirements on time, as well as work within the limited time for each subject lesson. Otherwise, if they would like to employ the participatory learning process they should have enough time available.
Application of the Teaching/Learning Methodologies
After the training, two teachers tried out the teaching/learning methodologies in two classes in Grades V and VI. The two classes were chosen after a discussion among the team members. They considered the age of the students (who have already spent 4 to 5 years in the school), their capability to understand the questionnaire, and their need to increase their knowledge and understanding about child rights during their last or final 2 years in the school. The try-out was done in the daily classes for Indonesian language, mathematics and natural science subjects. The two teachers trying the methodologies considered these subjects as appropriate for applying the child rights principles, although there is still the possibility of carrying out the program in other subjects. The two teachers used the playing-while-studying method, and study and discussion outside the classroom. The two teachers developed the subject lesson plans that incorporate the new teaching methods.
In the mathematics subject, for example, they previously teach only theories and formulas on how to measure the width of the yard, but never showed how to actually measure it. In the Indonesian language subject, teachers always decide the topic of the essay the students have to write about instead of allowing them to decide. They rarely give the students the chance to observe the community outside the school in order that they can have ideas on what to write about. During the implementation of the project, the teachers gave the students the time to do observation activities outside the school.
The try-out of the teaching/learning methodologies was held during the 3-15 February 2004 period.
Monitoring and Evaluation
The team monitored the teaching try-out twice a week by direct classroom observation and discussion with the two teachers involved. The team members went inside the classrooms a number of times whenever the two teachers feel they do not fully understand the methods being used and when the team wanted to know how the methods are being applied in the classroom. Two teachers said that they need more time and space to implement the new methods. They also felt that they still lack the knowledge to develop their own strategies and methods, as well as lack facilities in the school.
An evaluation session was held on 19 February 2004. 10 teachers, 2 officials from the Board of Education of Binjai, 2 Grade V students, and 2 Grade VI students attended the evaluation session. The four students who attended the evaluation session were in the two classes where the project was tried out. The selection of the students was based on gender (two girls and two boys), and their willingness to attend the evaluation session. It was held in the classroom where the two teachers tried-out the program. The two teachers demonstrated to the evaluation session participants the use of the methodologies. The participants commented or raised questions on the teaching demonstration. During the evaluation session, all four students said that they enjoyed the learning process, they easily understood the topic of the subject when they worked in discussion groups, and the study outside the classroom methodology made them easily understand the reality that they previously only learn theories. But one student did not agree with group discussion held outside the classroom because of exposure to the sun.
On the whole, the evaluation session participants said that the project was able to attain its objectives. However, they think that there is still a lot of room for improvement in implementing the project. For instance, a sustainable training program for teachers is needed to increase their knowledge and understanding, and school facilities should be available for these methodologies.
The team faced some obstacles in implementing the project. There was limited time available due to the frequent school holidays during the project period (September 2003-February 2004). The inadequate teaching materials and school facilities needed for the project is another obstacle.
Based on the project implementation experience as well as the results of the evaluation session, the team came up with a set of suggestions on how to implement the project:
New Phase of the Project
- A special training for teachers on the CRC and the appropriate teaching skills related to child rights, especially in handling students who come into conflict with school systems, has to be provided.
- School facilities that support the implementation of the project in schools have to be made available.
Based on activities held and the results of the evaluation process, the team considered to do a follow-up to the project in the same school. In this new phase of the project, the inputs during the seminar in Tanzania on 26 February-6 March 2004 from tutors and participants from other countries were utilized.
The follow-up phase has the following characteristics:
- The program remained the same but it covered an expanded target group - all grade levels from Grade I to Grade VI.
- The training for all teachers in the school has more knowledge input for greater understanding of the concept of child rights as well as acquisition of necessary skills to fulfill child rights through the learning process inside the classroom (focusing on CRC provision on child participation).
- There is classroom try-out following the teacher training, and project monitoring during the try-out period to provide an opportunity for the team to discuss with teachers any difficulties encountered.
- The evaluation involved teachers from two different primary schools in Binjai, Department of Education of Binjai as well as representatives of the parents of the students.
The team held on 22-25 June 2004 a training for the same nine teachers as in the first training focusing on subjects that came out in the previous needs assessment. The subjects included the CRC, case studies, reproductive health issues, sharing of experiences, and problem solving, among others. In discussing the CRC, the historical background of the convention and the issues that emerged after the ratification of the convention by the Indonesian government in 1990 particularly relating to schools were discussed. The problem of understanding child rights under the civic education program was also taken up. It was stressed that this problem relates to the lack of participation of students in the learning process because they are treated as passive objects and to their failure to learn their rights because the focus is on duties.
The discussion on problem solving/case studies focused on many cases that the teachers face regarding the learning process. This session took much time because of the many perspectives in dealing with students inside the classroom. The reproductive health issues were discussed because the teachers think that they need appropriate and correct information about reproductive health that they can transmit to the students, and for their personal (family) benefit. During this session, the teachers asked the resource person many questions to due to myths about sexuality, HIV infection, and menstruation.
During the training, teachers were encouraged to seek appropriate approaches and teaching methods in accordance with their actual context and situations to be able to have a student-centered method, putting students at the center of teaching activities. In this method, the students become the subject and not the object of teaching. When students are the object of teaching they are passive: teachers teach and students are taught; teachers choose what to teach and students are subjected to their (teachers') choices. When students become the subject of teaching-learning, they are actively involved in the whole process of learning. Teachers function as facilitators and build a two-way communication with their students.
At the last day of the training, the teachers were asked to design their action/teaching plan for one semester. The plan is expected to help teachers in their teaching, especially because the method to be used is considered new and has never been employed before. However, it is only a tentative and alternate guide for teachers in presenting the subject. It is not a stepby-step guide that has to be strictly followed by teachers.
Sustaining the Application of the Rights-based Approach
During the semester that the teachers teach about child rights, the students had the tendency to more freely ask questions, had fun and participated in the learning process. On the other hand, teachers were more motivated to increase their knowledge and understanding of the subject, and their creativity in the learning process.
On 8-15 August 2004, the team members'
mentor visited the school as well as the government authorities in Medan and Binjai such as the officials of the Department of Education of North Sumatra Province, Department of Education of Medan, the Mayor of Binjai, the Department of Education of Binjai, and the Teacher Training Center to discuss the sustainability of the project and the possibility of continuing the same project in other schools.
One important lesson learned from the project is the need to develop a material or module for teacher training.
The basic learning from this small project is the importance of expanding it to more schools and increasing the skills of teachers through appropriate training.
Team members: Tigor Nababan - Chief, Board of Compulsory Education, Board of Education of North Sumatera, Indonesia; Hirtap Simanungkalit -
Headmaster, Primary School (SD) in Binjai; Rupinawaty Gurusinga - Social Worker.
I. QUESTIONNAIRE FOR STUDENTS
II. QUESTIONNAIRE FOR TEACHERS
- Do you know that you have rights as a child?
- If you are given the opportunity to choose your rights, which rights would you prefer?
a. Right to speak
b. Right to ask questions
c. Right to get learning tools
d. Right to play
e. Right to study
f. Right to be allowed to study outside the classroom
- Have your ever had opportunities to ask questions to your teacher?
c. Not at all
- How did you use such opportunities?
a. Raised a hand first
b. Just asked the question straight
c. Waited for other students to finish asking questions
d. Waited to be pointed out by the teacher
- How would do you feel if you are offered an opportunity to study outside the classroom?
b. Not really
- Does your teacher ever take you to places outside the classroom to study?
c. Not at all
- Who decides on the location of the study outside the classroom?
c. Students by voting among themselves
d. Students through discussion
- Do you like the location?
b. Not really
- What are the favorite places you would like to go to if you have a chance to study outside the classroom?
- Do you like the way the teachers teach inside the classroom?
b. Not really
- Which teaching method of your teachers do you like?
- Do you think you need to use a tool of learning?
b. Not really
c. Not at all
- What tools of learning do you need?
a. Visual equipments
c. Musical instruments
d. Sports equipment
- Do you feel comfortable with the situation in your school?
b. Not really
- If not, why?
a. The playground is not comfortable.
b. The school premises are dirty
c. The school building is dilapidated
d. The schoolyard is narrow
e. No toilet
- Do you think that some people in your school get more attention from your teachers?
- Do you know that a child has rights?
- Do you know child rights?
- Are you willing to implement child rights based on the Convention on the Rights of the Child?
- What do you think about this school becoming a pilot project on child rights?
- What tools do you think would be needed in the teaching process on child rights? | <urn:uuid:1c2aabe0-209c-48b2-aca8-1cf472565c7d> | CC-MAIN-2017-17 | http://www.hurights.or.jp/archives/human_rights_education_in_asian_schools/section2/2005/03/child-rights-classroom-and-school-management-an-indonesian-experience.html | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121865.67/warc/CC-MAIN-20170423031201-00484-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.961078 | 5,232 | 2.9375 | 3 |
SYNOPSIS OF UNIFICATION THEORY
The System of Spacetime
(revised March, 2012)
John A. Gowan
home page (page 1)
home page (page 2)
(see also: "A Synopsis of the System of Matter")
The "Tetrahedron Model" vs the "Standard Model" of Physics: A Comparison.
What we see is not Nature, but Nature exposed to our method of questioning. - Werner Heisenberg
This paper has been translated into French by Kate Bondareva. Thanks Kate! See: http://www.autoteiledirekt.de/science/synopsis-de-lunification-theorie
The conceptual basis of the Unified Field Theory, as presented in these pages, can be briefly sketched as follows:
Our Universe is asymmetric in that it consists only of matter, without an antimatter complement. This cosmic-scale asymmetry was created during the "Big Bang" by the action of the weak force, creating matter from perfectly symmetric light, thus bringing our manifest world into existence. "Noether's Theorem" states that in a continuous multi-component field such as the electromagnetic field (or the metric field of spacetime), where one finds a symmetry one finds an associated conservation law, and vice versa. The symmetries of light must be conserved no less than its energy. Consequent upon the creation of asymmetric matter from symmetric light during the "Big Bang", light's lost symmetries are conserved in matter by charge and spin; in spacetime, by inertial and gravitational forces. Light's raw energy is conserved as mass and momentum; light's "non-local" intrinsic motion or entropy drive (as "gauged" by "velocity c") is conserved in "local" matter as time and gravitation.
The function of charge is to conserve light's various symmetries; charge conservation is one of several conservation principles which are necessary to allow symmetry-breaking during the "Big Bang", and the conversion of free energy to information. Charge and charge conservation are attributes of symmetry conservation, much as entropy and entropy conservation are attributes of energy conservation. Entropy and symmetry are explicitly related through velocity c, which gauges both light's symmetric energy state and primordial entropy drive, vanishing time and distance, maintaining metric (inertial) symmetry and the "non-local" character of light (resulting in the distributional symmetry of light's energy throughout spacetime), while simultaneously causing the expansion and cooling of space. It is because of this dual "gauge" (regulatory) role of c that light's primordial spatial entropy drive may be included with symmetry under the conservation mantle of "Noether's Theorem", a consideration which also extends to the rationale for both aspects of the gravitational "location" charge, whose active principle is time. (See: "Entropy, Gravitation, and Thermodynamics" and "The Double Conservation Role of Gravitation".)
For each of the four forces of physics I identify a charge and the symmetry debt it conserves (charge conservation = symmetry conservation - "Noether's Theorem"). Charge conservation is a temporal (local, material) form of symmetry conservation. (See: "Symmetry Principles of the Unified Field Theory" and "Global vs Local Gauge Symmetry in the Tetrahedron Model").
Role: matter-antimatter annihilation - providing a spatial force of attraction between matter and antimatter (particle-antiparticle pairs) that will motivate annihilation reactions within the "Heisenberg Interval", the time limit imposed upon virtual reality by velocity c. Through annihilation reactions, electric charge prevents massless, non-local, atemporal, acausal, symmetric light from devolving into massive, local, causal, temporal, asymmetric matter with "real" charges, including gravitation. Since the photon is the field vector of electric charge, we see light protecting its own symmetry in particle-antiparticle annihilations. Magnetic forces protect the invariance of electric charges in relative motion. "Velocity c" (the intrinsic motion of light) is the metric gauge (regulator) of both the primordial spatial entropy drive of light, and the "non-local" symmetric energy state of light.
Magnetic forces are functional analogs of (and derived from) "Lorentz Invariance", the dimensional flexibility of space and time as formalized by Einstein in his theory of Special Relativity. Lorentz Invariance, in turn, is necessary to protect the invariance of "velocity c", the "Interval", and causality from the variable reference frames and perspectives of relative motion. Magnetic forces protect the invariance of moving electric charges; the Doppler effect is another consequence of Lorentz Invariance protecting the constant velocity of light.
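The "Lorentz Invariance" invoked above has a simple quantitative core: the time-dilation factor of Special Relativity. As a minimal illustrative sketch (not part of the author's text; the function name is mine), "moving clocks run slow" by the factor gamma:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def lorentz_gamma(v):
    """Time-dilation factor gamma = 1 / sqrt(1 - v^2/c^2)
    for a clock moving at speed v (m/s) relative to the observer."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# A clock moving at 0.6 c runs slow by a factor of 1.25;
# as v approaches c, gamma grows without bound.
gamma = lorentz_gamma(0.6 * C)
```

At everyday speeds gamma is indistinguishable from 1, which is why these effects are invisible outside high-energy physics.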
1) Light's entropy drive: converting the intrinsic motion of light to the intrinsic motion of time - by annihilating space and extracting a metrically equivalent temporal residue. (See: "The Conversion of Space to Time".)
2) The "non-local" distribution of light's energy: the symmetric spatial distribution of light's energy (due to intrinsic motion c - Einstein's "Interval" = zero) vs the local spatial concentration of bound energy (due to the intrinsic "rest" of matter - Einstein's "Interval" > zero). Gravity converts bound to free energy in stars (and other gravitationally driven astrophysical processes).
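Einstein's "Interval", which the text uses to distinguish light (Interval = zero) from matter (Interval > zero), can be shown numerically. A minimal sketch (sign convention and names are mine; the timelike-positive convention is chosen to match the text's usage):

```python
C = 299_792_458.0  # speed of light, m/s

def interval_squared(t, x, y, z):
    """Squared Minkowski Interval between the origin event and (t, x, y, z):
    s^2 = (c*t)^2 - (x^2 + y^2 + z^2)."""
    return (C * t) ** 2 - (x ** 2 + y ** 2 + z ** 2)

# A photon travelling for 1 second covers exactly c metres of space:
# lightlike separation, Interval = 0 ("non-local").
photon = interval_squared(1.0, C, 0.0, 0.0)

# A massive particle at rest for 1 second separates only in time:
# timelike separation, Interval > 0 ("local").
at_rest = interval_squared(1.0, 0.0, 0.0, 0.0)
```

All observers in relative motion agree on the value of the Interval, which is why it (rather than distance or duration separately) is the invariant quantity.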
Role 1) (entropy debt): creating matter's time dimension (the primordial entropy drive of bound energy), and the joint dimensional conservation domain of free and bound energy, spacetime. Time marches on to create the conservation domain of information and matter's "causal matrix", history (historic spacetime). Time is necessary to balance the energy accounts of matter in relative motion, provide matter's primordial entropy drive, and to conserve the invariance of causality, velocity c, and the "Interval" (via the dimensional flexibility or "Lorentz Invariance" of Special Relativity). Time also provides the historical dimensional arena in which charge conservation has durable meaning, and in its own turn, induces the gravitational force: a gravitational field is the spatial consequence of the intrinsic motion of time. Hence through gravity, time also conserves symmetry (see Role 2) below).
Role 2) (symmetry debt): converting bound to free energy in stars (via the nucleosynthetic pathway), quasars (releasing gravitational potential energy), and black holes (through Hawking's "quantum radiance") - this final and complete conversion pays all the entropy and symmetry debts of bound energy. The conservation role of entropy and symmetry in the creation of the gravitational force is paradigmatic of the relationship between Quantum Mechanics (entropy) and General Relativity (symmetry). The two theories join in the "entropic charge" of gravitation, "time". Gravitation serves energy conservation and entropy, and protects the invariance of causality and the "Interval", despite the relative motion and the immobile, local, asymmetric energy state of matter, creating time from space in order to do so. A local metric is required to conserve the energy of a local energy form; gravity provides the gauge (the universal gravitational constant "G") for that local, temporal metric. The creation of time from space is the single rationale for gravitation; it is because time is such a multitasking workhorse with so many conservation roles after it is produced that gravity appears to be such a complex and confusing force. Because gravity and time induce each other, the symmetry-conserving role of gravitation (converting bound to free energy in stars, etc.) can also be attributed to time. Entropy and symmetry conservation drive toward a common goal.
Both of gravity's (and all of matter's) entropy and symmetry debts are paid by the gravitational conversion of mass to light, since light is massless, non-local, atemporal, and produces no gravitational field (the recently observed "acceleration" of the Universe is the evidence that light produces no gravitational field). As mass is converted to light by various astrophysical processes, and by particle and proton decay (including any analogous conversion processes in "dark matter"), the total gravitational field of the Cosmos is reduced, resulting in a relative "acceleration". (See: "A Spacetime Map of the Universe.")
Gravity is weak because gravity is the energy required to produce matter's time dimension, the temporal entropy-energy of a given mass. In the case of the Earth, the gravitational energy Gm (where m is the mass of the Earth) is the energy required to produce Earth's time dimension (via the gravitational annihilation of space). The weakness of gravity tells us that creating Earth's time dimension does not require much energy, nor (equivalently) the conversion of much space to time. This is because matter, unlike light, is only tangentially connected to its historic entropy domain (via the "present moment"). (See: "The Half-Life of Proton Decay and the 'Heat Death' of the Cosmos".)
Black holes provide the physical demonstration of the gravitational conversion of space and the drive of spatial entropy (the intrinsic motion of light) to time and the drive of historical entropy (the intrinsic motion of time). The event horizon of a black hole is a temporal entropy surface (the Hawking-Bekenstein theorem). (See: "A Description of Gravitation".) (See also: J. D. Bekenstein "Information in the Holographic Universe". Scientific American Aug. 2003, pages 58-65.)
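The Hawking-Bekenstein theorem cited above assigns the event horizon an entropy proportional to its area, S = k_B A c^3 / (4 hbar G). As an illustrative numerical sketch (constants rounded; function names are mine, not from the source):

```python
import math

# Physical constants (SI, approximate values)
G     = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
C     = 2.998e8     # speed of light, m/s
HBAR  = 1.055e-34   # reduced Planck constant, J s
K_B   = 1.381e-23   # Boltzmann constant, J/K
M_SUN = 1.989e30    # solar mass, kg

def horizon_area(mass_kg):
    """Event-horizon area of a Schwarzschild black hole of the given mass."""
    r_s = 2.0 * G * mass_kg / C ** 2   # Schwarzschild radius, m
    return 4.0 * math.pi * r_s ** 2

def bekenstein_hawking_entropy(mass_kg):
    """Black-hole entropy S = k_B * A * c^3 / (4 * hbar * G), in J/K."""
    return K_B * horizon_area(mass_kg) * C ** 3 / (4.0 * HBAR * G)

# Entropy of a one-solar-mass black hole: roughly 1.4e54 J/K,
# vastly larger than the ordinary entropy of the star it formed from.
s_sun = bekenstein_hawking_entropy(M_SUN)
```

Because the area scales with mass squared, doubling the mass quadruples the entropy, the signature of an entropy that lives on a surface rather than in a volume.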
Role: permanent confinement of quarks to whole quantum unit charge combinations (so their charges can be neutralized, canceled, annihilated, or carried by the alternative charge carriers (leptons and mesons)). A related symmetry of color charge is known as "asymptotic freedom", the self-annihilation of color charge necessary for proton decay and the creation of heavy baryons (hyperons) from neutral leptoquarks (via the hypothetical "X" IVB?) during the "Big Bang". (See: "The Particle Table".)
A secondary expression of the strong force (between baryons rather than within baryons) involves the binding of protons and neutrons ("nucleons") into compound atomic nuclei via a "Yukawa" exchange field of mesons. These fusion reactions result in the conversion of bound nuclear energy to light in the nucleosynthetic pathway of our Sun and the stars. (See: "The Strong Force: Two Expressions".)
Roles: provides the basic asymmetry which allows the creation of matter. Identifies the correct antimatter partner in annihilation reactions; provides alternative charge carriers (the leptons) and metric catalysts (the "Intermediate Vector Bosons" (IVBs)) to enable the creation, destruction, and transformation of single elementary particles of matter (leptons and quarks). (Charges can be balanced by leptonic alternative charge carriers rather than antiparticles; the latter would only cause annihilation reactions.) The large mass of the weak force Intermediate Vector Bosons (IVBs) (as scaled by the Higgs boson) recreates the primordial electroweak force unification symmetric energy state of the "Big Bang". The weak force mechanism for the creation of elementary particles is essentially a "mini-Big Bang", recreating the original conditions in which the reactions it now mediates first took place, thereby guaranteeing the invariance of elementary particle mass, charge, and identity across eons of time and despite the entropic expansion of the Cosmos. (See: "The Origin of Matter and Information"; "Identity Charge and the Weak Force"; "The 'W' IVB and the Weak Force Mechanism"; "Global-Local Gauge Symmetries in the Weak Force"; "The Higgs Boson and the Weak Force IVBs".)
The dimensions of spacetime are conservation/entropy domains, created by the entropic, "intrinsic" motions of free and bound electromagnetic energy. The intrinsic motion of light, gauged by "velocity c", creates space; the intrinsic motion of matter's time dimension, gauged by "velocity T", creates history. "Velocity c" also gauges the time dimension - as the duration (measured by a clock) required by light to travel a given distance (measured by a meter stick). Gravity, gauged by "velocity G", converts space to time, welding space and time together to create spacetime, the joint conservation/entropy domain of free and bound energy. These dimensional domains function as arenas of action, where energy in all its forms can be simultaneously used and transformed, but nevertheless conserved. This is the major connection between the 1st and 2nd laws of thermodynamics. (See: "Entropy, Gravitation, and Thermodynamics".)
Time is implicit in free energy as "frequency", and is the actual driver of light's intrinsic motion: symmetric space ("wavelength") flees asymmetric time ("frequency"), which is an embedded characteristic of light's own nature (frequency multiplied by wavelength = c). Time (the proverbial "burr under the saddle") is the implicit, hidden, internal motivator of light's perpetual, "intrinsic" (self-motivated) motion, velocity c. "Velocity c" is actually a symmetry condition of free energy, which, in obedience to Noether's Theorem, forever moves in such a way as to prevent the explicit appearance of time, with its inevitable companions: mass, charge, and gravitation - the asymmetric "Gang of Four".
Spacetime is a closed, conserved, and protected domain of free and bound electromagnetic energy; c and T are (and must be) essentially "infinite" velocities which seal its borders, preventing causality tampering by either "time machine" or "superluminal" space travel; similarly, any possible metric or inertial loopholes ("wormholes") are closed gravitationally by the "event horizons" and central "singularities" of black holes. (In the extreme case of the black hole, the gravitational metric takes over all conservation functions formerly performed by the electromagnetic metric.) The invariance of velocity c also protects the invariance of causality and Einstein's "Interval", and is in turn protected (in massive systems) by the "Lorentz Invariance" of Special Relativity, the flexibility of the dimensions between observers in relative motion ("moving clocks run slow", etc.). The entropic gauges c, T, and G create and defend dimensional conservation domains for free and bound electromagnetic energy: space, history, and historic spacetime. See: "Spatial vs Temporal Entropy". Entropy is a corollary of energy conservation. The function of entropy is to protect energy conservation by preventing the abuse of energy: because of entropy, energy cannot be used twice to perform the same net "work". Without entropy, energy conservation would prevent any use of energy at all.
The historical expansion of the cosmos is funded by the gravitational deceleration of the spatial expansion of the cosmos. This is physically accomplished by the gravitational annihilation and conversion of space into metrically equivalent temporal units. Gravity pays the entropy-"interest" on the symmetry debt of matter, creating time and hence the historical dimension in which charge conservation can have durable meaning and causal significance. When mass is converted to light in stars, gravity pays the energy-"principal" of matter's symmetry debt, completing the entropy/symmetry conservation loop. The total mass of the Cosmos and its associated gravitational energy is reduced, in consequence increasing the universal spatial expansion - as recently observed.
The metric is the measured relationship within and between the dimensions. The metric functions to conserve energy, entropy, symmetry, and causality. We experience the metric through such phenomena as time, the velocity of light, gravity, and inertial force. The electromagnetic metric of space and light is "gauged" or regulated by the electromagnetic constant "c" such that one second of temporal duration is metrically equivalent to 300,000 kilometers of linear distance. Traveling at this "velocity", the photon (a quantum of light) has no time dimension, and no length in the direction of motion. As a consequence, light's energy is symmetrically distributed everywhere, simultaneously. "Velocity c" therefore gauges multiple symmetries of light's metric, including the symmetrical relations between the spatial dimensions (no favored directions in space), and the symmetrical relations between the spatial and temporal dimensions (the asymmetric one-way time dimension is suppressed at velocity c). Other metric symmetries of light include light's zero "Interval" (expressing light's "non-locality" and two-dimensionality), and the consequential fact that light, moving freely in spacetime, produces no gravitational field. Since the intrinsic motion of light also produces space and the expansion and cooling of the spatial cosmos, "velocity c" also gauges the spatial "entropy drive" of light. Space is an entropic conservation domain for free electromagnetic energy, created by light's own intrinsic/entropic motion. These are only some of the symmetry-keeping and energy conservation functions of light's electromagnetic metric.
The function of the metric is energy, entropy, and symmetry conservation, including protecting the invariance of causality, velocity "c", and Einstein's "Interval". The "Interval" is an invariant measure of spacetime, the same for all observers whether at rest, in relative motion, or even in accelerated motion, whose function is the conservation and protection of the causality relations of all massive objects in relative motion. The "Interval" remains invariant due to the covariance of space with time ("Lorentz Invariance") in Einstein's theories of Special and General Relativity. The Interval of light = zero, which is Einstein's formal (mathematical) statement of the "non-local" character of light. Light's non-local character involves the fact (also discovered by Einstein) that light has no x or t dimensions: light is a 2-dimensional transverse wave whose "intrinsic" (entropic) motion sweeps out a third spatial dimension. Having no distance or temporal component, light has forever to go nowhere, hence light's "infinite" velocity and non-local character (lacking 2 of 4 dimensions, light's position cannot be specified in 4-D spacetime).
"Velocity c" is not an actual velocity, but the electromagnetic gauge regulating, among other things, the spatial entropy drive of light (light's intrinsic motion) expanding and cooling the spatial Cosmos, and the symmetric, "non-local" distribution of light's energy throughout space, everywhere, simultaneously. Both these symmetries are conserved by gravity. (See: "The Double Conservation Role of Gravity".) "Velocity c" also gauges the energetic equivalence between free and bound forms of electromagnetic energy (E = mc²), and the magnitude of electric charge. For all these reasons and more, "velocity c" is the principal gauge of the electromagnetic metric and for obvious reasons of energy, entropy, symmetry, and causality conservation must remain invariant, even if space and time must be "bent", "warped", or "curved" (co-vary) to accomplish the task. Space is the entropic/energetic conservation domain of free electromagnetic energy (light), created by the intrinsic motion of light for its own conservation. Light is the only energy form capable of creating its own conservation domain (from nothing) by means of its own "intrinsic" (entropic) motion. Bound energy (matter) must create its conservation domain from pre-existing space and light. Therefore all-symmetric light is the primary energy form, and asymmetric matter is secondary, derived from light, in its energy, its conservation domain, and its primordial entropy drive.
Enter now bound electromagnetic energy (matter created from light), with its inevitable asymmetric companions: mass, time, gravity, charge. In the conversion of free electromagnetic energy (light) to bound electromagnetic energy (matter) during the "Big Bang" or "Creation Event", the raw energy of light is conserved as mass and momentum; the symmetry of light is conserved as charge and spin; the spatial entropy drive (intrinsic motion) of light is conserved as gravitation and the intrinsic motion of time. The charges of matter are the symmetry debts of light (Noether's theorem). The active principle of gravity's "location" charge is time. Time is the necessary additional dimensional parameter which must be created to record and accommodate the variable energy accounts of matter in relative (rather than absolute) motion. The creation of matter's time dimension is the crucial task of gravitation. In turn, the intrinsic (entropic) motion of time creates history, the conservation domain of matter's causal information field.
Light's electromagnetic metric cannot accommodate these bound energy forms, specifically because of their 4-dimensionality, undistributed mass, and lack of intrinsic motion "c". Nevertheless, a metric must somehow be created to accommodate the energy conservation needs of bound forms of electromagnetic energy. A metric's function and rationale is energy conservation, and this is no less true for the temporal metric of matter and bound electromagnetic energy, gauged by "G" (the universal gravitational constant), than for the spatial metric of light and free electromagnetic energy, gauged by "c". These two metrics must be combined into a metric of "spacetime" capable of conserving the energy accounts of both free and bound forms of electromagnetic energy simultaneously and seamlessly. Nature accomplishes this daunting task effortlessly by the simple expedient of extracting time directly from the spatial metric by means of gravity. As Einstein discovered, space is not just space but is in fact "spacetime". Light and the space light creates contain a suppressed temporal component, present implicitly as "frequency" (frequency x wavelength = c). Gravity annihilates space, revealing its hidden, metrically equivalent temporal component. By this means a gravitational/temporal metric is created which is entropically and energetically compatible and integrated with space, creating the spacetime conservation domain we know and inhabit. It should be no surprise that electromagnetic energy, having both a free and bound form, should also have within itself the means to create a compound metric capable of satisfying the conservation needs of both these forms simultaneously. (See: "The Conversion of Space to Time".)
The active principle of gravity's "location" charge is time itself. The field vector of gravity is the temporal component of spacetime. Time has intrinsic (entropic) motion into history (the temporal analog of space), which is situated at right angles to all three spatial dimensions. As the entropic time charge moves into history, it pulls space along with it. However, space cannot squeeze into the point-like end of the one-dimensional time line, and self-annihilates at the entrance. The self-annihilation of space produces another temporal component (the metric equivalent of the annihilated space), and so the entropic cycle continues forever. A gravitational field is the spatial consequence of the intrinsic motion of time. Gravity is the only one of the four forces of physics with an entropic charge - a charge with intrinsic dimensional motion. (See: "A Description of Gravity".)
Gravity is weak because matter is only tangentially connected to its historic conservation domain - via the universal "present moment". Whereas light fully occupies its spatial conservation domain, matter exists only in the present moment of history. Consequently, gravity creates only enough time to satisfy the entropy drive of matter's tiny connection to history. This connection is equivalent to the surface area of a black hole containing the mass of the given object. (See: "A Spacetime Map of the Universe".)
Although the gravitational field of small masses is very weak, because gravity is only attractive, with no repulsive component, and cannot be neutralized except by the conversion of mass to light, gravitational fields can increase in intensity to the limiting value of g = c (the black hole). Beyond this limit, they can only increase in size (the supermassive black holes of galactic centers). As a gravitational field increases in intensity, its temporal metric begins to dominate and progressively replace the electromagnetic metric, including the latter's conservation functions. At low gravitational field energies, as in single atoms up to and including planetary sized bodies, only the entropy conservation function of gravity is evident - the conversion of light's intrinsic spatial motion to the intrinsic historical motion of matter's time dimension. Gravity crosses a threshold in stars, however, as its symmetry conservation function comes into play, in the conversion of bound energy (mass) to free energy (light) via the nucleosynthetic pathway. (The gravitational potential energy of in-falling matter is also converted to kinetic and radiant energy (think of meteors), becoming a truly significant effect in black holes and quasars.)
Other high-energy thresholds of the temporal metric are indicated by the members of the remarkable "condensed matter" series of "final states" for astronomical bodies, in which gravity begins to take over the binding functions of the other forces. First among these is the white dwarf, in which the electromagnetic force begins to give way as the electron shells of atoms are crushed and reduced to an "electron sea". Next is the neutron star, in which the electrons are driven into the protons, producing a gigantic gravitationally bound atomic nucleus (weak force beta decay gives way). Finally in the black hole, even the strong force is overwhelmed as matter is crushed to a point ("singularity") - no doubt producing "proton decay" in the interior of the black hole (in the limit of "asymptotic freedom"). (See: "Proton Decay and the 'Heat Death' of the Cosmos".) In the black hole, at the "event horizon" where space is accelerated to velocity c (where g = c), the temporal/gravitational metric completely overwhelms and replaces the spatial/electromagnetic metric. Time replaces space and becomes "visible" as the surface area of a black hole's "event horizon". The surface area of a black hole is equivalent to the temporal entropy of its mass (the Bekenstein-Hawking theorem). (See: J. D. Bekenstein "Information in the Holographic Universe". Scientific American Aug. 2003, pages 58-65.)
Just as a rock is the energy of light converted to an asymmetric massive form and brought to rest, so the surface area of a black hole is the spatial entropy drive (intrinsic motion) of light converted into an asymmetric temporal form and brought to rest. At the event horizon of a black hole, light and time stand still, and meter sticks shrink to nothing. The temporal metric has completely replaced the spatial metric - seconds become of indefinite duration as space dwindles to nothing and light ceases to move - the extreme limiting case of "Lorentz Invariance", or the co-variance of space with time.
The black hole demonstrates not only the gravitational conversion of space to time, but also both inside (via proton decay) and outside (via Hawking radiation) that the conservation role of gravitation - and hence also ultimately of time - is the conservation of the non-local symmetric energy state of light: specifically the non-local distributional symmetry of light's energy content, and the "all-way" symmetry of light's primordial spatial entropy drive. We also note that in the surface area of a black hole, matter achieves a complete integration with its entropic conservation domain, just as matter is also returned to intrinsic motion c at the event horizon. These phenomena, originally observed only in the case of light, demonstrate again the complete replacement of light's electromagnetic metric and all its conservation functions by the temporal/gravitational metric of matter.
In the phenomenon of "Hawking radiation" we see the final triumph of light over darkness, gravity and time. With the complete conversion of the mass of the black hole to light, the gravitational field associated with the hole also vanishes, indicating that its symmetry-conservation role is finished, completely fulfilling the mandate of Noether's Theorem. (See: "A Rationale for Gravity".)
For an equivalent synoptic statement regarding matter, see: "Synopsis of the System of Matter" and "The Intrinsic Motions of Matter".
Amphibious mangrove killifish, Kryptolebias marmoratus (formerly Rivulus marmoratus), are frequently exposed to aerial conditions in their natural environment. We tested the hypothesis that gill structure is plastic and that metabolic rate is maintained in response to air exposure. During air exposure, when gills are no longer functional, we predicted that gill surface area would decrease. In the first experiment, K. marmoratus were exposed to either water (control) or air for 1 h, 1 day, 1 week, or 1 week followed by a return to water for 1 week (recovery). Scanning electron micrographs (SEM) and light micrographs of gill sections were taken, and morphometric analyses of lamellar width, lamellar length and interlamellar cell mass (ILCM) height were performed. Following 1 week of air exposure, SEM indicated that there was a decrease in lamellar surface area. Morphometric analysis of light micrographs revealed that there were significant changes in the height of the ILCM, but there were no significant differences in lamellae width and length between any of the treatments. Following 1 week of recovery in water, the ILCM regressed and gill lamellae were similar to control fish, indicating that the morphological changes were reversible. In the second experiment, V̇CO2 was measured in fish continuously over a 5-day period in air and compared with previous measurements of oxygen uptake (V̇O2) in water. V̇CO2 varied between 6 and 10 μmol g–1 h–1 and was significantly higher on days 3, 4 and 5 relative to days 1 and 2. In contrast to V̇O2 in water, V̇CO2 in air showed no diurnal rhythm over a 24 h period. These findings indicate that K. marmoratus remodel their gill structures in response to air exposure and that these changes are completely reversible. Furthermore, over a similar time frame, changes in V̇CO2 indicate that metabolic rate is maintained at a rate comparable to that of fish in water, underscoring the remarkable ability of K. marmoratus to thrive in both aquatic and terrestrial habitats.
The mangrove killifish, Kryptolebias marmoratus, lives in tropical mangrove forests in Florida, the Caribbean, Central and South America. These fish are the only known self-fertilizing hermaphroditic vertebrates (Harrington, 1961), although there are true males and therefore sexual reproduction occurs in some natural populations (Mackiewicz et al., 2006). They are considered amphibious fish because they survive in both aquatic and terrestrial habitats (Sayer, 2005). K. marmoratus have a remarkable tolerance to a wide range of aquatic extremes and can endure over one month of exposure to air (emersion) when among moist detritus or leaf litter (Abel et al., 1987). K. marmoratus leave their aquatic environment for varying periods of time in response to aggression between fish (Huehner et al., 1985; Taylor, 1990), as well as in response to environmental stressors, such as high hydrogen sulfide concentrations (Abel et al., 1987; Taylor, 1990), low water temperature (Huehner et al., 1985) or as a result of constant flux between drought and flooding in the areas they inhabit (Harrington, 1961). Additionally, they may leave the water for short periods of time in order to catch termites on land and then return immediately to eat their prey underwater (Huehner et al., 1985).
In water, most fish rely primarily on gills for gas exchange (Evans et al., 2005). During air exposure, the gills are no longer perfused with water and will collapse if there are no specialized structural modifications. Respiratory adaptations that allow amphibious fishes to live in both terrestrial and aquatic environments include specialized lungs and gas bladders (air breathing organ, ABO), as well as modifications of existing structures, such as the gills and skin (Graham, 1997). The amphibious gourami, Trichogaster trichopterus, depends mainly on the labyrinth organs in the suprabranchial chamber for respiration during periods of aerial exposure (Burggren, 1979). Observation of an increased capillary network in the gut of the Chilean clingfish, Sicyases sanguineus, after 24 h of emersion suggests that this fish respires via intestinal respiration (Marusic et al., 1981). Cutaneous modifications are present in many different amphibious fish (Park et al., 2003), such as the mudskipper Periophthalmus magnuspinnatus, in which an extensive capillary network lies close to the surface of the skin and the middle layer of epidermis contains modified epidermal cells that are thought to facilitate oxygen uptake (Park, 2002).
The cutaneous surface is probably a site of respiration in K. marmoratus because the epidermis is relatively thin and there is a high density of capillaries near the surface (Grizzle and Thiyagarajah, 1987). During 11 days of air exposure, a significant amount (>40%) of ammonia is released by NH3 volatilization (Frick and Wright, 2002a). The site of gaseous excretion is likely the skin because both NH4+ concentration and pH on the cutaneous surface increase significantly after air exposure (Litwiller et al., 2006). Furthermore, the number of cutaneous vessels perfused on certain areas of the dorsal surface of K. marmoratus increases significantly after 30 min of air exposure (S. Litwiller, P.A.W. and C. Murrant, manuscript in preparation). Taken together, these studies suggest that in the absence of functional gills and an ABO, K. marmoratus rely on the skin as the major respiratory surface.
Changes in gill morphology have been observed in other teleost fish in response to developmental changes or environmental stressors. For example, in the obligate air breather Arapaima gigas, the defined lamellae of the water-breathing juveniles regress and the filaments become smooth columns as they mature and become obligate air breathers (Brauner et al., 2004). The changes in A. gigas gills are long term and not reversible. By contrast, the secondary lamellae of the crucian carp, Carassius carassius, become much more defined in response to hypoxic conditions (Sollid et al., 2003), and similar changes have been observed in both C. carassius and in C. auratus in response to warmer water temperatures (Sollid et al., 2005). Exposure to hypoxia induced apoptosis of C. carassius gills in between the lamellae (the interlamellar cell mass, or ILCM), thus causing the lamellae to protrude and increase the surface area for gas exchange (Sollid et al., 2003). These changes were completely reversible when C. carassius were returned to normoxic water. Hence, gill morphology is plastic in two Carassius species in response to temperature and water oxygenation. Do similar changes occur in other fish species in response to a variety of environmental perturbations? In particular, is gill morphology plastic in K. marmoratus, a species that tolerates prolonged air exposure?
When K. marmoratus are exposed to air they do not appear to aestivate; they remain responsive (K.J.O., personal observation) and there is little change in aerobic enzyme activities (Frick and Wright, 2002a), suggesting that metabolic rate is not depressed as in prolonged emersion in lungfish (Smith, 1930). Graham reviewed the effects of air-exposure on oxygen uptake in air-breathing fish and concluded that the active amphibious species generally maintain oxygen uptake when exposed to air (Graham, 1997). Many of these species have specialized structures for air breathing, whereas K. marmoratus appear to be solely dependent on the passive exchange of gases across the cutaneous surface.
We tested two hypotheses. First, we hypothesized that mangrove killifish gills are plastic and will undergo reversible change when exposed to air. We predicted that the gill surface area would decrease in air and that a return to water would reverse any changes. Scanning electron microscopy and light microscopy techniques were used to document morphological changes in the gills of K. marmoratus associated with air exposure for 1 h, 1 day, 1 week and following 1 week of recovery in water. Second, we hypothesized that metabolic rate is maintained during air exposure. If true, we predicted that carbon dioxide excretion (V̇CO2) would remain unchanged over time and be similar to previously measured values in control killifish in water. K. marmoratus were exposed to air for 5 days and V̇CO2 was measured continuously.
Materials and methods
Kryptolebias marmoratus Poey (hermaphrodites) were held in individual containers at the Hagen Aqualab, University of Guelph under conditions simulating their natural environment (25°C, 16‰, pH 8, 12 h:12 h light:dark cycle) (Frick and Wright, 2002b). Adult killifish used in metabolic rate experiments weighed between 0.07 and 0.16 g, and in microscopy experiments weighed between 0.07 and 0.12 g. Feeding and cleaning regimes were as described by Litwiller et al. (Litwiller et al., 2006).
Experimental protocol for metabolic rate

Two days prior to experimentation, fish were placed in the respirometry chambers in water, and food was withheld to conform with previous measurements of oxygen uptake in water (Rodela and Wright, 2006). Fourteen fish were exposed to air for 3 days (N=14); for seven of these fish, the exposure continued for 5 days (N=7). Fish were not fed in the chamber, but the chamber was opened once or twice each day to moisten the filter paper with 16‰ seawater. Lights were turned on at 08.00 h and off at 20.00 h daily. Temperature was maintained at 25±0.5°C by placing the metabolic chambers on an aluminum water jacket connected to a water bath. Temperature was measured with a thermocouple in the blank chamber.
The air source was filtered external air that was scrubbed to remove CO2 and humidified to prevent desiccation of the fish. The scrubbed and humidified air was pumped through flow control valves and then into all four chambers (three containing fish, one serving as a blank). The chambers were multiplexed so that the outflow of one went through an ice bath and Drierite® column to remove any water and then to an infrared CO2 meter (Qubit S151; Qubit Systems, Kingston, ON, Canada). A gas switcher (Qubit G243) switched flow between a fish chamber and the blank chamber every 30 min. Flow was set to approximately 25 ml air min–1, but the exact flow rate was recorded continuously in all chambers with high-accuracy low flow meters (Qubit G249). The analyzer was calibrated daily using scrubbed gas as zero, and a single high point using a calibrated gas containing a known concentration of CO2 (1500 p.p.m.), balanced with nitrogen.
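The daily two-point calibration described above amounts to a linear interpolation between the scrubbed-gas zero and the 1500 p.p.m. span gas. A minimal sketch of that arithmetic (function and variable names are ours, purely illustrative):

```python
def reading_to_ppm(raw, raw_zero, raw_span, span_ppm=1500.0):
    """Convert a raw analyzer reading to p.p.m. CO2 via a two-point
    linear calibration: scrubbed gas defines the zero point, and a
    certified span gas (here 1500 p.p.m. CO2 in N2) the high point."""
    return (raw - raw_zero) / (raw_span - raw_zero) * span_ppm
```

Any slow drift in the analyzer between calibrations is absorbed into the zero and span readings, which is one reason the calibration was repeated daily.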
Metabolic rate was calculated using the Fick equation:

V̇CO2 = [flow × (CO2 out – CO2 in) × 60] / (22.4 × 1000 × m),

where V̇CO2 is calculated in μmol CO2 g–1 h–1; flow (ml air min–1) was calculated as the integral of the flow rate through the chamber for the 15 min period when CO2 levels reached steady state; CO2 out = plateau value of CO2 leaving the chamber containing a fish (p.p.m.); CO2 in = plateau value of CO2 leaving the blank chamber (p.p.m.), taken as the average of the value before and after the measurement for each fish; 60 converts min to h; 22.4 converts μl to μmol; 1000 converts ml to l; m is body mass in g.
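The unit conversions in the Fick calculation can be checked with a short Python sketch (names are ours; this is an illustration, not the authors' analysis code):

```python
def vco2(flow_ml_min, co2_out_ppm, co2_in_ppm, mass_g):
    """Fick calculation of CO2 excretion in umol CO2 g^-1 h^-1.

    flow_ml_min : mean air flow through the chamber over the
                  steady-state period (ml air min^-1)
    co2_out_ppm : plateau CO2 leaving the fish chamber (p.p.m.)
    co2_in_ppm  : plateau CO2 leaving the blank chamber (p.p.m.)
    mass_g      : body mass (g)
    """
    delta = co2_out_ppm - co2_in_ppm               # p.p.m. = ul CO2 per l air
    ul_per_min = (flow_ml_min / 1000.0) * delta    # ml -> l air; ul CO2 min^-1
    return ul_per_min * 60.0 / 22.4 / mass_g       # min -> h; ul -> umol; per g

# e.g. a 0.10 g fish raising outflow CO2 by ~12 p.p.m. at 25 ml min^-1
# excretes roughly 8 umol CO2 g^-1 h^-1, within the reported 6-10 range.
```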
Experimental protocol for gill structure
Five groups of fish were exposed to either control (immersed) or experimental (emersed) conditions. Control fish (time 0 h) were directly removed from 100 ml plastic chambers (containing 60 ml of 16‰ water), immediately euthanized by spinal cord transection and placed into fixative (see below). Fish exposed to air were placed in 100 ml chambers. Three cotton balls were placed at the bottom of each chamber and a piece of filter paper, cut to fit snugly into the bottom of the chamber, was placed on top of the cotton. Water (10 ml, 16‰) was pipetted onto the filter paper and allowed to soak in evenly. This provided some moisture but did not allow immersion of gills in water. After experimental treatments of 1 h, 1 day and 1 week of air exposure, fish were euthanized and immediately fixed. An additional group of fish, a recovery group, was exposed to air for one week, returned to water (60 ml, 16‰) for a further week, euthanized and fixed.
Scanning electron microscopy
Fish heads were fixed in 1% glutaraldehyde, 1% paraformaldehyde with 16‰ salt water. The left gill arch of fixed fish heads was excised 48 h later. Gills were post-fixed in 1% OsO4, dehydrated in a series of graded ethanols (50%, 70%, 80%, 90% and three rounds of 100%) and then dried with a critical point drier (custom made at the Physics Workshop, University of Guelph). Samples were then mounted on carbon tape and sputter coated in 30 nm gold with an Emitech K550 Sputter Coater (Ashford, Kent, UK). A Hitachi S-570 Scanning Electron Microscope (Tokyo, Japan) was used to capture micrographs of the gills.
Light microscopy

After 24 h of immersion in 10% phosphate-buffered formalin fixative, the left operculum was cut away and the second gill arch was extracted and then routinely processed for paraffin embedding. The gill arches were serially sectioned in 4 μm increments and then stained with hematoxylin and eosin. The slides were viewed using an Olympus BX60 light microscope (Tokyo, Japan), and images were recorded using Image Pro Plus 5.1 (Media Cybernetics Inc., Silver Spring, MD, USA).
Measurements of lamellar width, lamellar length and height of ILCM were performed for each fish (Fig. 1). Width of lamellae was measured parallel to the filament at the base of the lamellae from one edge to the other. Lamellar length was measured from the edge adjacent to the filament to the most distal point of the lamellae from the filament. Height of ILCM was measured parallel to the total lamellar length, starting from the edge of the ILCM bordering the filament to the most distal edge of the ILCM from the filament.
Statistical analyses

Changes in mean metabolic rate over the period of air exposure were analyzed using analysis of covariance (ANCOVA), with body mass as the covariate, followed by Tukey's tests. For gill morphometrics, a one-way ANOVA was used to compare differences between treatments. If significance was found, a Tukey's test was used to identify where the significant differences occurred. In all cases, P<0.05 was deemed significant.
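For reference, the F-statistic behind a one-way ANOVA comparison of treatment groups can be computed by hand. The sketch below is illustrative only (the actual analysis would have been run in standard statistical software):

```python
def one_way_anova_f(groups):
    """Return (F, df_between, df_within) for a one-way ANOVA over a
    list of treatment groups (each a list of measurements, e.g. ILCM
    heights for control, 1 h, 1 day, 1 week and recovery fish)."""
    k = len(groups)                                # number of treatments
    n = sum(len(g) for g in groups)                # total observations
    grand = sum(sum(g) for g in groups) / n        # grand mean
    means = [sum(g) / len(g) for g in groups]      # per-treatment means
    # Between-group variation: weighted deviation of group means.
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    # Within-group variation: deviations from each group's own mean.
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    df_b, df_w = k - 1, n - k
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w
```

The F value is then compared against the F distribution with (df_between, df_within) degrees of freedom; only if it is significant is a post hoc Tukey's test run to locate the pairwise differences between treatments.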
Results

V̇CO2 values varied between 6 and 10 μmol g–1 h–1 in fish exposed to air for 5 days (Fig. 2A). Mean metabolic rate of the 14 fish was 8.02±0.75 μmol CO2 g–1 h–1 and decreased with an increase in body mass (ANCOVA, F1,237=47.92, P<0.001).
V̇CO2 increased with air-exposure time over a five-day period (ANCOVA, F4,237=13.16, P<0.001) (Fig. 2A). Tukey's tests revealed that there were no significant differences between metabolic rates on days 1 and 2 or between days 3, 4 and 5. However, metabolic rate on both days 1 and 2 was significantly less than on days 3, 4 or 5 (P<0.05). When hourly rates were averaged from different days, metabolic rate did not change significantly with time of day (Fig. 2B) (ANCOVA, F22,237=0.28, P=1.00).
Scanning electron micrographs revealed marked differences in morphological appearance between control gills and gills emersed for 1 week (Fig. 3). The lamellae of the control fish were defined, not fused together, and had a relatively large surface area for exchange with water (Fig. 3A). Conversely, the lamellae of the fish exposed to air for one week appeared to be shorter with a decreased surface area (Fig. 3B). Similarly, light micrographs indicated that the lamellae became more embedded (i.e. less surface area was exposed to the air as a result of ILCM growth) with an increase in the time exposed to air (Fig. 4). Most samples exhibited an intermediate pattern where lamellae were partially embedded after 1 week of air exposure (Fig. 4D), but in one of the fish sampled the lamellae were completely embedded (Fig. 4E). After 1 week of recovery in water, the gill lamellae appeared very similar to control fish (Fig. 4F).
Mean lamellar widths from each treatment were not significantly different (P>0.05) and ranged from 4.5 to 5.1 μm (Fig. 5A). Analysis of the lamellar length from light micrographs revealed no significant differences (P>0.05) between fish in any treatment (Fig. 5B). Short-term exposure to air (1 h and 1 day) did not yield significant changes in the height of the ILCM (P>0.05), but after 1 week of air exposure there was a significant increase (P<0.05, N=6) (Fig. 5C). After 1 week of recovery in water, there was a significant decrease (P<0.05) in ILCM height compared with air-exposed fish (1 week), and the ILCM height was not significantly different (P>0.05) from control values (Fig. 5C).
K. marmoratus is a fish that constantly undergoes changes of habitat (Davis et al., 1990). The most dramatic change is no doubt the move from an aquatic to a terrestrial habitat. In normal aquatic conditions, the gills of K. marmoratus did not exhibit the external features characteristic of many air-breathing and amphibious fish (see below). The ILCM did not fill the areas between the secondary lamellae in aquatic conditions. K. marmoratus gills in water were organized in a fashion typical of aquatic teleosts, with relatively long, thin secondary lamellae projecting from the filament. The move to a terrestrial habitat induced morphological changes in the gills that were reversible. The embedment of the lamellae via ILCM growth during terrestrial exposure may serve to protect lamellae from collapse, aid the fish in aerial respiration or possibly prevent desiccation.
Many amphibious fish have developed structural modifications in their gills to prevent lamellar collapse when emersed. Tamura et al. reported that the mudskipper Boleophthalmus chinensis has short, widely spaced lamellae in order to reduce coalescence during emersion (Tamura et al., 1976). The gills of Mnierpes macrocephalus are enlarged, thick and long, which prevents their collapse in air (Graham, 1970). The gills of the mudskipper Periophthalmodon schlosseri have permanent fusions between the lamellae in order to prevent collapse, and their gills have been found to be better adapted for air breathing than for water breathing (Kok et al., 1998; Wilson et al., 1999). In K. marmoratus exposed to air, there was no change in lamellar width, indicating that thickened lamellae are not a strategy adopted to prevent lamellar collapse. However, we did observe growth of the ILCM in air-exposed killifish, which may serve to provide structural support. Although significant structural changes were not detected until 1 week of emersion, more subtle changes in the ILCM may have helped prevent collapse and coalescing of the secondary lamellae in the first few hours to days of air exposure. Alternatively, the ILCM growth during air exposure may have helped to prevent water loss across the gills. Water conservation is of prime importance to K. marmoratus because death occurs after only a few hours in air in the lab if the substratum is dry (P.A.W., personal observation) and in the field emersed fish aggregate, which is thought to be a mechanism to reduce water loss (Taylor, 2000).
The length of time exposed to air varies in nature depending on circumstance. Periods of drought can leave fish stranded on land for over a month, whereas terrestrial forays in search of food can last mere minutes. Although there is some evidence that cutaneous respiration may be the primary mode of respiration in air (see Introduction), we cannot rule out the possibility that the gills may be involved in aerial respiration. Much like P. schlosseri, K. marmoratus may partially use their gills for respiration when in air; the growth of the ILCM between the lamellae may serve to separate the lamellae so that they can function as respiratory structures (Sayer, 2005). The use of both skin and gills in respiration occurs in some amphibious fish, for example Periophthalmus cantonensis and Boleophthalmus chinensis (Tamura et al., 1976). Careful observations of buccal and opercular movements in air are necessary to establish if, indeed, branchial respiration occurs.
The difficulty of distinguishing nuclei in the light micrographs made it impossible to tell whether the growth of the ILCM was due to hypertrophy or hyperplasia. Hypertrophy is a more energy-efficient method of increasing size than hyperplasia because it does not involve cell duplication (Cheek and Hill, 1970; Overgaard et al., 2002). A reduced energy intake, as in the case of the mangrove killifish during air exposure, compromises the nuclear division necessary for hyperplasia, but not necessarily for hypertrophy (Cheek and Hill, 1970). We do not know what type of cells comprise the ILCM, nor whether hyperplasia or hypertrophy is involved in the ILCM growth. However, there was an increase in V̇CO2 after several days in air, which may or may not be linked partly to the gill remodeling (see below).
In air-exposed K. marmoratus, CO2 excretion was measured instead of O2 uptake because it is a more precise measure. The respiratory exchange ratio (CO2 released per O2 consumed) usually varies between 0.7 and 0.9 in amphibious air-breathing fish (Bridges, 1988; Martin, 1993; Graham, 1997). Using a respiratory exchange ratio of 0.8 and our V̇CO2 values of K. marmoratus in air (6–10 μmol g–1 h–1), the oxygen uptake in air is estimated to be between 7.5 and 12.5 μmol g–1 h–1. Rodela and Wright reported that V̇O2 in water ranged from 8 μmol g–1 h–1 (nighttime, inactive period) to 22 μmol g–1 h–1 (daytime, active period) in K. marmoratus (Rodela and Wright, 2006), values similar to or slightly higher than our estimated oxygen uptake in air. The fact that our values in air correspond to the previously measured nighttime values in water is most likely a result of the observed quiescence when the fish were exposed to air, as well as their unfed state.
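The back-of-the-envelope conversion used in this paragraph (V̇O2 ≈ V̇CO2 / RER) can be sketched as follows. The function name is ours, and the RER of 0.8 is the assumed value cited above, not a measured quantity:

```python
def estimate_vo2(vco2_umol_g_h: float, rer: float = 0.8) -> float:
    """Estimate O2 uptake from CO2 excretion.

    RER (respiratory exchange ratio) = VCO2 / VO2, so VO2 = VCO2 / RER.
    Units: umol g^-1 h^-1.
    """
    return vco2_umol_g_h / rer

# The measured VCO2 range in air was 6-10 umol g^-1 h^-1:
print(estimate_vo2(6.0), estimate_vo2(10.0))  # 7.5 12.5
```

With the paper's measured range, this reproduces the 7.5–12.5 μmol g–1 h–1 estimate quoted in the text.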
Other amphibious marine fish, such as Oligocottus snyderi, Clinocottus globiceps and Anoplarchus pupurescens, have equivalent oxygen uptake rates in both air and water (Bridges, 1988), whereas Ascelichthys rhodorus and Oligocottus maculosus have a decreased oxygen uptake when exposed to air (Yoshiyama and Cech, 1994). The salt marsh killifish Fundulus heteroclitus also undergoes a significant decrease in oxygen uptake upon aerial emergence (Halpin and Martin, 1999).
Over a 24 h period of air exposure, there were no variations in V̇CO2 in K. marmoratus. This finding contrasts with the results of our previous study of V̇O2 in control K. marmoratus in water (Rodela and Wright, 2006). Over a 3-day period, V̇O2 consistently peaked at midday and decreased to the lowest rate midway through the night. The lack of a diurnal pattern in air-exposed K. marmoratus in the present study is likely due to inactivity during emersion. Despite no fluctuations of daily V̇O2 when emersed, there was an increase in V̇CO2 after two days of emersion in K. marmoratus. Gordon et al. observed a similar increase in small Chilean clingfish, Sicyases sanguineus, over 13 h but could not provide an explanation for this rise in oxygen uptake (Gordon et al., 1970). We speculate that the increase of metabolic rate over time is due to a complex series of changes, possibly including repaying an oxygen debt, alterations in biochemical pathways, cutaneous structures and/or gill morphology.
Our study is the first to show that the gills of K. marmoratus are plastic and are capable of undergoing reversible changes when emersed and then returned to water. We suggest that the growth of the ILCM may prevent the lamellae from coalescing (which could render the gills non-functional when returned to water), facilitate branchial aerial respiration or have another function such as resistance against desiccation. Over the period of time of gill remodeling in air, metabolic rate is maintained at a rate similar to that of fish in aquatic conditions. Hence, K. marmoratus are supremely adapted to the challenges of respiring in air, explaining in part why they tolerate weeks of air exposure.
We would like to thank Alexandra Smith for her invaluable technical help with microscopy techniques, Spencer Russell for assistance with histology, Laura Mulligan for her help with the final figures, and Meghan Mitchell for caring for the killifish. This project was funded by an NSERC Undergraduate Research Student Award to K.J.O. and the NSERC Discovery and Tools and Instruments grants program to P.A.W. and E.D.S. All experiments were approved by the University of Guelph Animal Care Committee.
- © The Company of Biologists Limited 2007 | <urn:uuid:922accd1-c39d-49ee-a308-ff255ef8e4be> | CC-MAIN-2017-17 | http://jeb.biologists.org/content/210/7/1109 | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123046.75/warc/CC-MAIN-20170423031203-00545-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.941325 | 5,769 | 2.71875 | 3 |
Frequently Asked Questions:
Q. How can I tell what I'm buying from an American wine label?
A. Most wine producers are honest, of course, but it's still important to know what you're buying. Look carefully at the wine label to learn at least the minimum. The front label of most U.S. wines usually carries the name of the grape variety along with an appellation (place name), which refers to the legally defined American Viticultural Area (AVA) in which the grapes were grown. In general, the more specific the appellation, the better you can expect the wine to be.
Here's what the most common terms on American-made wines mean:
California: If a wine label says "California" on the front it means the grapes could have been grown anywhere up and down this gigantic state. In effect it often indicates that a high percentage of the wine comes from cheaper Central Valley grapes that make less concentrated, less interesting wines.
Coastal: Be careful with this increasingly popular term. Many of the wines are great values, but "Coastal" is not an AVA and doesn't mean a thing, legally.
Counties, valleys: Specific terms such as Napa Valley, Sonoma County and Willamette Valley are almost always a good sign. They mean that at least 85 percent of the wine was made from grapes grown there.
Towns, districts: If you see a town name like Oakville or a district name like Carneros it means even more specialization, better odds for high quality and an inevitably higher price.
Vineyard designations: The individual property where the grapes came from, like Sangiacomo Vineyard or Bien Nacido Vineyard, is the finest geographical distinction a winery can put on a bottle. This is usually a good sign of quality and a chance to experience what the French call terroir, the taste of a place.
Estate bottled: Another good sign of quality. It means that the wine was made from grapes grown in vineyards owned (or leased for the long term) by the winery itself, not grown by an independent farmer or another winery.
Produced and bottled by: This is one of the best phrases to see in fine print on a label. It means that the winery itself actually crushed the grapes, fermented the juice and put the wine into bottles. The only thing better in this regard is "grown, produced and bottled by," which is basically the same as estate bottled. Other phrases, such as "vinted and bottled by" and "cellared and bottled by" can mean the winery bought the wine from another vintner, maybe blended it and aged it a bit -- maybe not -- then bottled it.
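As a rough summary, the label terms above can be collapsed into a small lookup of what each one legally guarantees. This is an illustrative sketch of the rules as described here (the helper and its phrasing are ours, not regulatory text):

```python
# Illustrative mapping of common US label terms to what they imply,
# following the rules described above (e.g. an appellation guarantees
# at least 85% of the grapes were grown in the named area).
LABEL_TERMS = {
    "california": "grapes may come from anywhere in the state",
    "coastal": "no legal meaning (not an AVA)",
    "county/valley appellation": "at least 85% of grapes grown in the named area",
    "estate bottled": "grapes from winery-owned or long-leased vineyards",
    "produced and bottled by": "winery crushed, fermented and bottled the wine",
    "vinted and bottled by": "wine may have been bought in and merely aged/bottled",
}

def label_meaning(term: str) -> str:
    """Return the plain-English meaning of a label term, if we know it."""
    return LABEL_TERMS.get(term.strip().lower(), "unrecognised term")

print(label_meaning("Coastal"))  # no legal meaning (not an AVA)
```

A buyer-side heuristic, in other words: the more specific the guarantee attached to the term, the better the odds on the wine.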
Q. Does serving temperature really matter?
A. Yes. White wines served too warm will taste alcoholic and flabby, while white wines served too cold will be refreshing but nearly tasteless. As for reds, keep them too warm and they will taste soft, alcoholic and even vinegary. Too cold and they will have an overly tannic bite and much less flavor.
Here's how to be confident the wine you serve will be on its best behavior:
Champagne and other sparkling wines should start out totally chilled. Put them in the refrigerator an hour and half before serving or in an ice bucket with an ice-water mixture at least 20 minutes before serving. For vintage-dated Champagne and other high-quality bubbly, however, you should let the bottle then warm up a bit if you don't want to miss out on the mature character for which you're probably paying extra.
Sauvignon Blanc, Pinot Grigio, white Zinfandel and other refreshing white wines should also be chilled to refrigerator temperature (usually 35 to 40 degrees) for an hour and a half before serving. But the better examples, such as barrel-aged wines like Fume Blanc (made from Sauvignon Blanc grapes) will improve if brought out 20 minutes early or allowed to warm up slightly during hors d'ouevres or dinner.
Chardonnay, white Burgundy and other rich, full-bodied and barrel-fermented white wines of high quality taste their best at classic "cellar temperature," or 55 degrees. Winemakers in France's Burgundy region know what they're doing when they offer tastes to visiting journalists and wine buyers directly from the barrels of Chardonnay in their cool, humid underground cellars. So put these into the fridge an hour and half before serving, but bring them out 20 minutes early to warm a bit.
Sweet dessert wines need the same treatment as Sauvignon Blanc, above, with the exception of fortified dessert wines like Port and sweet Sherry, which are better at cellar temperature or warmer. Treat dry Sherry like Sauvignon Blanc, too.
Almost all red wines - Rhone, Cabernet, Merlot and the rest - show their best stuff when served at about 65 degrees: cool, but warmer than cellar temperature. This is not room temperature, unless you happen to live in a Scottish castle or in San Francisco during July. So if you don't keep your red wine in a cool cellar or cooled storage unit, you will enjoy it more if you chill it for 20 minutes in the refrigerator before serving.
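The serving guidance above condenses into a quick-reference table. The ranges below are our paraphrase of the advice (65 degrees for reds, refrigerator temperature of roughly 35 to 40 degrees for crisp whites, cellar temperature of 55 for rich whites), so treat them as a sketch rather than gospel:

```python
# Approximate serving ranges in degrees Fahrenheit, paraphrasing the
# advice above (hypothetical helper; the style names are ours).
SERVING_TEMP_F = {
    "sparkling": (35, 45),    # fully chilled; let vintage bubbly warm slightly
    "crisp white": (35, 40),  # Sauvignon Blanc, Pinot Grigio, white Zinfandel
    "rich white": (50, 55),   # Chardonnay, white Burgundy: near cellar temp
    "red": (60, 65),          # cool, but warmer than the 55-degree cellar
}

def serving_range_f(style: str):
    """Return a (low, high) Fahrenheit range for a broad wine style."""
    return SERVING_TEMP_F.get(style, (55, 65))  # sensible fallback

print(serving_range_f("red"))  # (60, 65)
```

The fallback range simply splits the difference between cellar and red-wine temperature for anything the table doesn't name.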
Q. Which wine goes with which food?
A. An age-old question, but one that is totally subjective to each individual palate. For the most part, the old standby of "red with meat, white with fish" is a fair rule of thumb, but should by no means be the rule. In the Northwest, for example, salmon is most often paired with Pinot Noir. The key to matching wines with food should be based more on matching the levels of quality, in my humble opinion. You wouldn't really want to serve Osso Buco with a run-of-the-mill Chianti in the straw flask, would you?
You may be wondering, "Why do I care how the stuff is made?!" If so, you really should relax, maybe even have a glass of wine. Learning the basics about winemaking is useful because it allows you to (a) credibly evaluate the wines that you taste and (b) impress your date. For instance, it's always a fun piece of trivia to let people know that red grapes can make white wine, and it is good to know that you should never chill your white wine by chucking it into the freezer: frost severely damages the alcohol balance and taste of wine.
So what exactly is this stuff and why is everyone all up in arms about it? Let's be clear: wine isn't just high-octane grape juice. Good wine really is tough to make; if you don't believe us, try a nice bit of crappy wine and you'll quickly learn why Monty Python claimed that it "opens the sluices at both ends." Making a good wine involves taking a great grape, growing it in the right soil, ushering it through the fermentation process, aging it in the right way, and releasing it at just the right time. So there are plenty of things to screw up, and the English have been botching it for years.
What is wine?
Essentially, it is fermented grape juice, but with a few extra twists. God saved a few pieces of Eden when he gave us the boot, and one of the best is the fact that any fruit containing sugar will turn to booze if you leave it to ferment. In the process of fermentation, yeast converts the sugar into alcohol. Yeast is found all over the place, and in the wild it lands on the skins of grapes; hence, when grape juice is left to sit about in the wild, that yeast will mix with it and ferment it naturally. Vintners nowadays don't take any such chances: they labor over what precise strain of yeast to use in their recipe because different choices will obviously lead to different results.
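The sugar-to-alcohol conversion described above follows simple stoichiometry: one glucose molecule ferments into two ethanol and two carbon dioxide molecules (C6H12O6 → 2 C2H5OH + 2 CO2). The molar masses below are textbook values; the 200 g/L sugar figure is an assumed example for ripe grape juice, not a number from this article:

```python
# Theoretical ethanol yield from fermentable sugar (glucose).
GLUCOSE_G_PER_MOL = 180.16
ETHANOL_G_PER_MOL = 46.07

def max_ethanol_g(sugar_g: float) -> float:
    """Grams of ethanol from grams of glucose, assuming complete
    fermentation: C6H12O6 -> 2 C2H5OH + 2 CO2."""
    return sugar_g * (2 * ETHANOL_G_PER_MOL) / GLUCOSE_G_PER_MOL

# Assumed 200 g of sugar per litre of juice yields roughly 102 g of
# ethanol, i.e. around 13% alcohol by volume (density ~0.789 g/mL).
ethanol = max_ethanol_g(200.0)
abv_percent = ethanol / 0.789 / 1000 * 100
print(round(ethanol, 1), round(abv_percent, 1))
```

Real fermentations fall a little short of this theoretical maximum, since yeast diverts some sugar to growth and by-products.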
Most people believe that green grapes make white wine and red grapes make red wine. That is largely true, but if you care to impress anyone with arcane eno-trivia, you should know that white wine can also be made from red grapes. The inside of red grapes is essentially "white": it is only their skin that is red. And most wines are made with just the inside of a grape. The red color in red wine is created by allowing the fleshy interior to mix with the pulpy skins when it is being crushed. This process infuses red wines with "tannin," an ingredient that gives red wine its distinctive flavor. So you can make white wine with red grapes - like White Zinfandel, a wine made from a grape with a decidedly red exterior - but not red wine with green grapes. Oh, and most champagnes are made from red grapes. Weird, but true.
The grapes are crushed with or without the skins and then left to ferment. The nasty bits are removed from the juice and a disinfectant is used to neutralize any contaminants, such as mold and bacteria, that may have been on the grapes - remember, they've just been sitting outside for ages, surrounded by bugs and dirt, and yeast ain't the only thing lurking on the skin. The fluid, or "must," is then left to complete the fermentation process in either big steel vats or small wooden barrels - barrels call for a longer process and are harder to keep at the right temperature, but supposedly lead to a better finished product, for which you of course will end up paying more. Once the wine is properly fermented, the vintner will need to pluck out all the little nibblets and then mature the clarified vino. The better vineyards will age the wine for years in oak barrels, which infuses the wine with positive woody hints. The lamer vineyards will shove the stuff in a steel vat just long enough for it to be squirted into cardboard boxes with plastic spigots.
There are four major types of wine: red, white, rosé (or blush), and champagne. As far as dining is concerned, we are going to focus only on the first two types, since champagne is its own animal.
Where color comes from
Color is the first and easiest distinguishing feature of wine. As we hinted at earlier, the main difference between red and white wine is that the grape juice used to make red wine contains skins, seeds, and stems. This is significant for the following reason: leaving juice to mix together with the woody bits (known as maceration) causes the finished product to contain something we briefly mentioned earlier - tannins. If the term tannin is bugging you because you don't really get what we're talking about, just think about a strong cup of tea. That woody taste is tannin. In wine, it can lend a wonderful complexity to a red. As a general rule of thumb, red wines are heavier and more complex than white wines. White wines are usually a good place for beginners to start because they often tend to be sweeter and are thus initially more palatable to novices.
The reason you need to be aware of the differences between red and white wine is that one of the oldest rules in fine dining is that you should attempt to harmonize your choice of food and drink. If you are going to be eating something delicate with subtle tastes, the Rule states, you should avoid drinking something with a strong flavor that will overshadow the food. Conversely, a hearty meal will often be best complemented by a strong wine with flavor of its own. Now every single guide to wine in the world makes a point of saying that the Rule is out of date and the only hard and fast dictate of wine drinking is to choose something you enjoy. Of course, if you're dropping fatty cash for grub and grog, you should pick whatever the hell you want. Don't let dead British wankers tell you how to eat a meal - go with what you like.
That said, there's a reason the Rule evolved in the first place: it makes sense. If, for example, you're trying to pick up on the vague hints of Caribbean brine that delicately caress the primo slice of sushi you just ordered, slurping a bowl of tequila isn't going to help. Balancing food with drink may not be required anymore, but it's a good tip to keep in mind and will instantly push you off the Zero mark when you start eating at good restaurants.
One of the main distinctions - after red and white - that is bandied about by wine drinkers is whether a particular quaff is sweet or dry. Though imagining how a fluid can be dry is something of a logical stretch, just bear in mind that dry is nothing more than the opposite of sweet, and we all know what sweet tastes like. A related factor is the weight of a particular type of wine, which refers to the amount of alcohol present in a given wine.
Guides to Sweetness
Here is a quick and dirty guide to the sweetness of wines (and please note that, for both charts, the listed reds are not necessarily of the same sweetness/weight as the whites listed next to them - these are relative charts of sweetness/weight within red or white):
And here's a thumbnail sketch of how heavy or light a wine is:
A. The wines of Poitou, La Rochelle and Angoumois, produced from high quality vineyards, were shipped to Northern Europe, where they were enjoyed by the English, Dutch and Scandinavians as early as the 13th century. In the 16th century, they were transformed into eau-de-vie, then matured in oak casks to become Cognac. That was the start of the adventure for a town which was to become the capital of a world-famous trade.
Cognac is a living thing. During its time in the oak casks it is in permanent contact with the air, which allows it to extract the substances from the wood that give it both its color and its final bouquet.
A. Armagnac may not be as well known as its bigger brother, Cognac, throughout the world of brandy drinkers, but among aficionados it is appreciated for its greater sophistication and subtlety. Indeed, someone once said: "Cognac is like a fresh young girl, but Armagnac is like a woman of a certain age that you do not wish to take home to meet your mother."
There is a hint too of romanticism about Armagnac. It is part of the world of Gascony, which also gave birth to those great adventurers of French literature, d'Artagnan and the three musketeers, who have captured the imagination of many generations even outside France, and - inevitably - Hollywood.
One reason for the relative obscurity of Armagnac is perhaps that with one exception it is still produced by myriad small `chais' which do not have the resources to commercialise it in the same way as the big names of Cognac.
One of the principal differences between Armagnac and Cognac is the system of distillation, the alambic. This has five to eight stages in the one distillation machine. The spirit that emerges at the end of the process is more complete, because it has kept those parts that are lost at the beginning and the end of simpler distillations. These fragrant esters impart to Armagnac a greater fruitiness, reminding the discerning connoisseur of the fruit from which the spirit came. The bouquet of a fine Armagnac has wonderful hints of prune and other fruits which are driven out in other eaux-de-vie.
This unusual distillation process has made possible another innovation: recent developments in the control of rot in grapes have meant that some Armagnac producers have been able to produce a single grape Armagnac - folle blanche - a magically scented spirit.
The Armagnac growing area is divided into three: Bas Armagnac, around Aire-sur-l'Adour and Eauze, which produces the most prestigious Armagnacs, the Ténarèze, (Nérac, Condom and Vic-Fezensac), which produces some highly perfumed spirits sometimes rather coarser, and the Haut Armagnac (Mirande, Auch and Lectoure) which produces very little Armagnac nowadays.
There are four grape varieties mainly used in Armagnac: folle blanche (known as gros plant elsewhere), colombard, ugni blanc and baco.
FLOC DE GASCOGNE
Faced with a slump in sales of spirits, Armagnac producers started selling the aperitif Floc de Gascogne in the late 1970s. This is a ratafia, grape juice matured with Armagnac, and comes as a white or red drink.
A. Scotch Whisky is whisky which has been distilled and matured in Scotland; Irish Whiskey means whiskey distilled and matured in Ireland. Scotch Whisky is distilled from malted barley in Pot Stills and from malted and unmalted barley or other cereals in Patent Stills. The well-known brands of Scotch Whisky are blends of a number of Pot Still and Patent Still whiskies. Irish Whiskey distillers tend to favour three distillations rather than two, as is general in Scotland in the case of Pot Still whiskies, and the range of cereals used is wider.
As regards Bourbon Whiskey, the United States Regulations provide:
Rye Whiskey is produced both in the United States and Canada, but the name has no geographical significance. In the United States, Rye Whiskey by definition must be produced from a grain mash of which not less than 51% is rye grain. In Canada, there is no similar restriction. The relevant Canadian Regulation states: 'Canadian Whisky (Canadian Rye Whisky, Rye Whisky) shall be whisky distilled in Canada, and shall possess the aroma, taste and character generally attributed to Canadian whisky.'
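The US 51% rule lends itself to a one-line check. The function and the mash-bill format below are our own illustration of the regulation as summarised above, not an official tool:

```python
def qualifies_as_us_rye(mash_bill: dict) -> bool:
    """True if a mash bill (grain -> fraction of the grain mash) meets
    the US definition of Rye Whiskey: not less than 51% rye grain.
    Canada, as noted above, imposes no equivalent minimum."""
    return mash_bill.get("rye", 0.0) >= 0.51

print(qualifies_as_us_rye({"rye": 0.60, "corn": 0.30, "malted barley": 0.10}))  # True
print(qualifies_as_us_rye({"rye": 0.40, "corn": 0.60}))                         # False
```

The same pattern would apply to Bourbon's corn minimum, swapping the grain and threshold.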
There are two kinds of Scotch Whisky: Malt Whisky, which is made by the Pot Still process, and Grain Whisky, which is made by the Patent Still (or Coffey Still) process. Malt Whisky is made from malted barley only, while Grain Whisky is made from malted barley together with unmalted barley and other cereals.
The Pot Still process by which Malt Whisky is made may be divided into four main stages: Malting, Mashing, Fermentation and Distillation.
The barley is first screened to remove any foreign matter and then soaked for two or three days in tanks of water known as steeps. After this it is spread out on a concrete floor known as the malting floor and allowed to germinate. Germination may take from 8 to 12 days depending on the season of the year, the quality of the barley used and other factors. During germination the barley secretes the enzyme diastase, which makes the starch in the barley soluble, thus preparing it for conversion into sugar. Throughout this period the barley must be turned at regular intervals to control the temperature and rate of germination.
At the appropriate moment germination is stopped by drying the malted barley or green malt in the malt kiln. More usually nowadays, malting is carried out in Saladin boxes or in drum maltings, in both of which the process is controlled mechanically. Instead of germinating on the distillery floor, the grain is contained in large rectangular boxes (Saladin) or in large cylindrical drums. Temperature is controlled by blowing air at selected temperatures upwards through the germinating grain, which is turned mechanically. A recent development caused by the rapid expansion of the Scotch Whisky industry is for distilleries to obtain their malt from centralised maltings which supply a number of distilleries, thereby enabling the malting process to be carried out more economically.
The dried malt is ground in a mill and the grist, as it is now called, is mixed with hot water in a large circular vessel called a mash tun. The soluble starch is thus converted into a sugary liquid known as wort. This is drawn off from the mash tun and the solids remaining are removed for use as cattle food.
After cooling, the wort is passed into large vessels holding anything from 9,000 to 45,000 litres of liquid, where it is fermented by the addition of yeast. The living yeast attacks the sugar in the wort and converts it into crude alcohol. Fermentation takes about 48 hours and produces a liquid known as wash, containing alcohol of low strength, some unfermentable matter and certain by-products of fermentation.
Malt Whisky is distilled twice in large copper Pot Stills. The liquid wash is heated to a point at which the alcohol becomes vapour. This rises up the still and is passed into the cooling plant, where it is condensed into liquid state. The cooling plant frequently takes the form of a coiled copper tube, or worm, that is kept in continuously running cold water.
The first distillation separates the alcohol from the fermented liquid and eliminates the residue of the yeast and unfermentable matter. This distillate, known as low wines, is then passed into another still where it is distilled a second time. The first runnings from this second distillation are not considered potable, and it is only when the spirit reaches an acceptable standard that it is collected in the Spirit Receiver. Again, towards the end of the distillation, the spirit begins to fall off in strength and quality. It is then no longer collected as spirit but drawn off and kept, together with the first runnings, for redistillation with the next low wines.
The Patent Still process by which Grain Whisky is made is continuous in operation and differs from the Pot Still process in five other ways.
Both Malt and Grain Whisky must be matured after distillation has been completed. The new spirit is filled into casks of oak wood which, being permeable, allow air to pass in, and evaporation takes place. By this means the harsher constituents in the new spirit are removed and it becomes in due course a mellow whisky. Malt Whisky, which contains more of these flavoury constituents, takes longer to mature than Grain Whisky and is often left in the cask for 10 years or even longer.
After maturation the different whiskies are blended together. The blend is then reduced to the strength required for bottling by the addition of soft water. The different whiskies in the blend will have derived some colour from the casks in which they have been matured, but the degree of colour will vary from one whisky to another. Whisky matured in former fresh oak sherry casks will usually be a darker colour than that which has been matured in refilled whisky casks. The blender aims at uniformity in his product, and he may bring his whisky to a definite standard colour by adding, if necessary, a small amount of colouring solution prepared from caramelised sugar, which is infinitesimal in relation to the volume of whisky involved. The whisky is then filtered carefully.
The final stage in the production of Scotch Whisky is packaging and despatch. Most Scotch Whiskies are marketed at home and abroad in branded bottles.
A. Vodka is a drink which originated in Eastern Europe, the name stemming from the Russian word 'voda' meaning water or, as the Poles would say, 'woda'. The first documented production of vodka in Russia was at the end of the 9th century, but the first known distillery, at Khlynovsk, came about two hundred years later, as reported in the Vyatka Chronicle of 1174. Poland lays claim to having distilled vodka even earlier, in the 8th century, but as this was a distillation of wine it might be more appropriate to consider it a crude brandy. The first identifiable Polish vodkas appeared in the 11th century, when they were called 'gorzalka', originally used as medicines.
Medicine and Gunpowder
During the Middle Ages, distilled liquor was used mainly for medicinal purposes, as well as being an ingredient in the production of gunpowder. In the 14th century a British Ambassador to Moscow first described vodka as the Russian national drink, and in the mid-16th century it was established as the national drink in Poland and Finland. We learn from the Novgorod Chronicles of 1533 that in Russia, too, vodka was used frequently as a medicine ('zhiznennia voda', meaning 'water of life').
Since early production methods were crude, vodka often contained impurities, so to mask these the distillers flavoured their spirits with fruit, herbs or spices.
The mid - 15th century
saw the first appearance of pot distillation in Russia. Prior to that,
seasoning, ageing and freezing were all used to remove impurities, as
was precipitiation using isinglass ('karluk') from the air bladders of
sturgeons. Distillation became the first step in producing vodka, with
the product being improved by precipitation using isinglass, milk or egg white.
Around this time (1450)
vodka started to be produced in large quantities and the first recorded
exports of Russian vodka were to Sweden in 1505. Polish 'woda' exports
started a century later, from major production centres in Poznan and Krakow.
From acorns to melon
In 1716, owning distilleries
became the exclusive right of the nobility, who were granted further special
rights in 1751. In the following 50 or so years there was a proliferation
of types of aromatised vodka, but no attempt was made to standardise the
basic product. Types produced included; absinthe, acorn, anisette, birch,
calamus root, calendula, cherry, chicory, dill, ginger, hazelnut, horseradish,
juniper, lemon, mastic, mint, mountain ash, oak, pepper, peppermint, raspberry,
sage, sorrel, wort and water melon! A typical production process was to
distil alcohol twice, dilute it with milk and distil it again, adding
water to bring it to the required strength and then flavouring it, prior
to a fourth and final distillation. It was not a cheap product and it
still had not attained really large-scale production. It did not seek
to compete commercially with the major producers in Lithuania, Poland
Vodka marches across
The spread of awareness
of vodka continued throughout the 19th century, helped by the presence
in many parts of Europe of Russian soldiers involved in the Napoleonic
Wars. Increasing popularity led to escalating demand and to meet this
demand, lower grade products were produced based largely on distilled
Earlier attempts to
control production by reducing the number of distilleries from 5,000 to
2,050 between the years 1860 and 1890 having failed, a law was enacted
in 1894 to make the production and distribution of vodka in Russia a state
monopoly. This was both for fiscal reasons and to control the epidemic
of drunkenness which the availability of the cheap, mass-produced 'vodkas'
imported and home-produced, had brought about.
It is only at the
end of the 19th century, with all state distilleries adopting a standard
production technique and hence a guarantee of quality, that the name vodka
was officially and formally recognised.
A. The first confirmed date for the production of gin is the early 17th century in Holland, although claims have been made that it was produced prior to this in Italy. In Holland it was produced as a medicine and sold in chemist shops to treat stomach complaints, gout and gallstones. To make it more palatable, the Dutch started to flavor it with juniper, which had medicinal properties of its own.
From Dutch courage
to William of Orange
British troops fighting
in the Low Countries during the Thirty Years' War were given 'Dutch Courage'
during the long campaigns in the damp weather through the warming properties
of gin. Eventually they started bringing it back home with them, where
already it was often sold in chemists' shops. Distillation was taking
place in a small way in England, but it now began on a greater scale,
though the quality was often very dubious. Nevertheless, the new drink
became a firm favorite with the poor.
The formation by King
Charles I of the Worshipful Company of Distillers, where members had the
sole right to distil spirits in London and Westminster and up to twenty-one
miles beyond, improved both the quality of gin and its image; it also helped
English agriculture by using surplus corn and barley.
The Gin Riots
The problem was tackled
by introducing The Gin Act at midnight on 29 September 1736, which made
gin prohibitively expensive. A license to retail gin cost £50 and
duty was raised fivefold to £1 per gallon with the smallest quantity
you could buy retail being two gallons. The Prime Minister, Sir Robert
Walpole, and Dr. Samuel Johnson were among those who opposed the Act since
they considered it could not be enforced against the will of the common
people. They were right. Riots broke out and the law was widely and openly
broken. About this time, 11 million gallons of gin were distilled in London,
which was over 20 times the 1690 figure and has been estimated to be the
equivalent of 14 gallons for each adult male. But within six years of
the Gin Act being introduced, only two distillers took out licenses, yet,
over the same period of time, production rose by almost fifty per cent.
quality and patronage
The Gin Act, finally
recognized as unenforceable, was repealed in 1742 and a new policy, which
distillers helped to draft, was introduced: reasonably high prices, reasonable
excise duties and licensed retailers under the supervision of magistrates.
In essence this is the situation which exists today.
First the history: Tequila was first distilled in the 1500s-1600s in the
state of Jalisco, Mexico. Guadalajara is the capital of Jalisco and the
city of Tequila was established in about 1656. This is where the agave
plant grows best.
The agave is not a
cactus as rumored, but belongs to the lily family and has long spiny leaves
(pencas). The specific plant that is used to make tequila is the Weber
blue agave. It takes 8-12 years for the agave to reach maturity. During
harvest, the leaves are cut off leaving the heart of the plant, or piña,
which looks like a large pineapple when the jimadors are done. The harvested
piña may weigh 200 pounds or more and is chopped into smaller pieces for
cooking at the distillery.
Tequila was first
imported into the United States in 1873 when the first load was transported
to El Paso, Texas. In 1973 tequila sales in the US topped one million
There are two basic
types of tequila, 100% blue agave (cien por ciento de agave) tequila and
mixto. The 100% blue agave tequilas are distilled entirely from the fermented
juice of the agave. All 100% agave tequilas have to be distilled and bottled
in Mexico. If the bottle does not say 100% blue agave, the tequila is
mixto and may have been distilled from as little as 60% agave juice with
Grades of tequila:
As the tequila is
aged in wooden barrels, usually oak, it becomes smoother, with a woody
taste and golden color. Aging may disguise the agave flavor and few tequilas
are aged longer than three to four years.
Each distillery in
Mexico is assigned a NOM number that shows which company made or bottled the tequila.
There is no worm in
tequila, that is Mezcal which is a whole different animal.
Here are the different styles you may come across at our stores or your favorite
local brew pub.
Ale - originally a
liquor made from an infusion of malt by fermentation, as opposed to beer,
which was made by the same process but flavored with hops. Today ale is
used for all beers other than stout.
Alt - means "old".
A top fermented ale, rich, copper-colored and full-bodied, with a very
firm, tannic palate, and usually well-hopped and dry.
Amber Beer - an ale
with a depth of hue halfway between pale and dark.
Barley Wine - dark,
rich, usually bittersweet, heavy ales with high alcohol content, made
for sipping, not quaffing.
Bitter - the driest
and one of the most heavily hopped beers served on draft. The nose is
generally aromatic, the hue amber and the alcoholic content moderate.
Bock - a strong dark
German lager, ranging from pale to dark brown in color, with a minimum
alcoholic content of about 6 percent.
Brown Ale - malty
beers, dark in color, and they may be quite sweet.
Burton - a strong
ale, dark in color, made with a proportion of highly dried or roasted malts.
Beer - these special season beers are amber to dark brown, richly flavored
with a sweetish palate. Some are flavored with special spices and/or herbs.
Doppelbock - "double
bock." A stronger version of bock beer, decidedly malty, with an
alcoholic content ranging from 8 percent to 13 percent by volume.
Hefe-Weizen - a wheat
beer, lighter in body, flavor and alcohol strength.
India Pale Ale (IPA)
- a generously hopped pale ale.
Kolsch - a West German
ale, very pale (brassy gold) in hue, with a mild malt flavor and some
Malt Liquor - most malt liquors are lagers that are too alcoholic to be labeled lagers or beers.
Muncheners - a malty,
pale lager distinguished from the darker, heavier Munich Dark beers by
the term "dunkel."
Oktoberfest/Märzen - a copper-colored, malty beer brewed at the end of the winter brewing
season in March.
Pale Ale - made of
the highest quality malts, the driest and most highly hopped beer. Sold
as light ale or pale ale in bottle, or on draft as bitter.
Pilsner - delicately
dry and aromatically hoppy beers.
Porter - a darker
(medium to dark reddish brown) ale style beer, full-bodied, a bit on the
bitter side. The barley (or barley-malt) is well roasted, giving the brew
a characteristic chocolaty, bittersweet flavor.
Stout - beer brewed
from roasted, full-flavored malts, often with an addition of caramel sugar
and a slightly higher proportion of hops. Stouts have a richer, slightly
burnt flavor and are dark in color.
Sweet Stout - also
known as milk stout because some brewers use lactose (milk sugar) as an ingredient.
Wheat Beer - a beer
in which wheat malt is substituted for barley malt. Usually medium-bodied,
with a bit of tartness on the palate.
Try these beer and
food pairings: Stout with spicy chili. Hefe-Weizen (with a lemon twist)
with spicy mussels in cream sauce. Raspberry Wheat Beer with a chocolate
hazelnut tort. Pale Ale with a caesar salad (with a smoked meat). Walnut
Ale with a lamb kabob with onions and peppers.
A. If we lived in a perfect world, you would get 165 and 1/3 servings. Unfortunately, all bars see a good amount of waste from cleaning the tap lines, over-foaming pints, bad taps and common spillage. It's sad to think about all the beer that goes to waste. A moment of silence please...
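The 165 and 1/3 figure follows from simple keg arithmetic. The sketch below assumes a standard US half-barrel keg (15.5 gallons) and 12 oz pours; neither figure appears in the answer above, since the question itself was trimmed from the page, so treat them as assumptions:

```python
# Sanity check of the "165 and 1/3 servings" figure.
# Assumed (not stated in the trimmed question): a standard US
# half-barrel keg of 15.5 gallons and 12 oz pours.
KEG_GALLONS = 15.5
OZ_PER_GALLON = 128
POUR_OZ = 12

total_oz = KEG_GALLONS * OZ_PER_GALLON   # 1984 oz in a full keg
servings = total_oz / POUR_OZ            # ideal count, zero waste
print(round(servings, 2))                # -> 165.33
```

In practice, as the answer notes, line cleaning, foam and spillage push the real yield well below this ideal count.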
A. Most breweries produce their light beers at an average of 30 to 40 fewer calories than their premium brand (per 12 ounce serving). As for the alcohol, it
can vary from an average of only 0.4 percent to as much as 1.5 percent
alcohol by volume, from light to premium depending on the brewery.
A. Lambics (a Belgian style in origin) are indeed sour, with generally unfamiliar
aromas to most beer drinkers. And, that's the intention.
Lambic brewers let
nature do the leg work via spontaneous fermentation. This is accomplished
in open fermenters where wild yeast, bacteria and other micro flora get
to do their funky job on the beer.
Stout is good for you, in moderation. Long used as a restorative and by
nursing mothers, stout contains a variety of vitamins (as do most beers).
The link between moderate but steady consumption of beer and a healthy
heart has been clearly documented.
by Beth Shankle Anderson
In the United States, our legal system incorporates two parallel judicial processes, consisting of a federal and a state court system — each state having its own separate structure. Since 1941, there has been significant recognition of circumstances under which a federal court may decline to proceed though it has jurisdiction under the Constitution and federal statutes.1 These circumstances give rise to what is commonly referred to as the “abstention doctrine,” which “prohibits a federal court from deciding a case within its jurisdiction so that a state court can resolve some or all of the dispute.”2 The purpose of this doctrine is to “preserve the balance between state and federal sovereignty.”3 This balance between state and federal courts is often referred to as federalism or comity, and the cases involving federal court abstention embody complex considerations designed to avoid friction between federal and state courts.4
Although a plethora of criticism exists, the abstention doctrines are essential to our parallel court systems in those cases where the interests of the states outweigh federal adjudication of those interests.5 This article seeks to explore the various abstention doctrines and their application, expansion, and curtailment.
The Younger Abstention Doctrine
The Younger abstention doctrine has its roots in the concept of “Our Federalism” which grew out of the case of Younger v. Harris, 401 U.S. 37 (1971). This doctrine instructs federal courts to refrain from hearing constitutional challenges to state action when federal action would be regarded as an improper intrusion on the state’s authority to enforce its laws in its own courts.6 The abstention doctrine derives from the longstanding concepts of comity and federalism. The coexistence of state and federal powers embodies a system in which there must be respect by both sovereigns to protect the other’s legitimate interests.7 Justice Black emphasized that while the federal government may be anxious to protect federal rights and interests, it must “always endeavor to do so in ways that will not unduly interfere with the legitimate activities of the [s]tates.”8 This gave rise to the concept of “Our Federalism” which, the Supreme Court explained, “does not mean blind deference to ‘[s]tates’ [r]ights,’ anymore than it means centralization of control over every important issue in our [n]ational [g]overnment and its courts.”9
In Younger, Harris, an advocate for communism, was indicted in a California state court and charged with violation of the California Criminal Syndicalism Act. He filed a complaint in the district court seeking an order to enjoin Younger, the district attorney, from prosecuting him any further under the California statute. Harris alleged that the statute violated his right to free speech and press as guaranteed under the First and 14th amendments.10 The district court agreed, and held that the California Criminal Syndicalism Act was “void for vagueness and overbreadth in violation of the First and 14th [a]mendments.”11 The court issued an injunction restraining Younger from further prosecuting Harris under the act.12 Younger then appealed the decision to the U.S. Supreme Court.
The U.S. Supreme Court, in an opinion delivered by Justice Black, reversed the decision of the district court and held that the federal relief sought by Harris was barred because of the “fundamental policy against federal interference with state criminal prosecutions.” The Court noted that Congress, since its early beginnings, had emphasized the importance of deference to state court proceedings, leaving them free from federal court interference, specifically citing the “Anti-Injunction Act.”13 Justice Black maintained that the primary sources for prohibiting federal intervention in state prosecutions were readily apparent in the “basic doctrine of equity jurisdiction” and comity, both of which comprise the notions found in “Our Federalism.”
The Younger abstention doctrine derived from these longstanding concepts of comity and federalism, which are unique to this country. This foundation creates a federal judiciary without blind deference to the states or centralized control over each and every national issue. Younger abstention is distinguished from other abstention doctrines because it is based on considerations of comity and equity jurisprudence. A court sitting in equity should not interfere with the ongoing proceedings of state criminal prosecutions.14 The Younger court expressly noted that its decision was based on notions of comity and “Our Federalism,” not the Anti-Injunction Act.15
Exceptions to the Younger Abstention Doctrine
Younger implied that a federal court may act to enjoin a state court proceeding when certain extraordinary circumstances exist that involve traditional considerations of equity jurisprudence.16 Although these exceptions are implicit in Younger, many scholars argue that these exceptions are virtually nonexistent in their application.17 These three principal exceptions include bad faith and harassment, patently unconstitutional statutes, and the lack of an adequate state forum.18
The bad faith and harassment exception was specifically mentioned in the opinion as the kind of extraordinary circumstances that would justify federal intervention in the state proceeding. If the state prosecution was brought in bad faith and used to harass the criminal defendant, the Court stated that injunctive relief would be available. Generally, the Court has defined bad faith as prosecuting an individual without a reasonable expectation of obtaining a valid conviction. The Court further defined the “bad faith and harassment” exception to include “a combination of impermissible motive, multiple prosecutions, and improbability of success.”19 However, since the Younger decision, the Court has never invoked this exception to find that state action constituted a bad faith prosecution.20
The second exception to Younger abstention is the patently unconstitutional exception. This exception is derived from Justice Black’s declaration that “there may, of course, be extraordinary circumstances in which the necessary irreparable injury can be shown even in the absence of the usual prerequisites of bad faith and harassment.”21 To illustrate this point, Justice Black stated that “[i]t is of course conceivable that a statute might be flagrantly and patently violative of express constitutional prohibitions in every clause, sentence and paragraph, and in whatever manner and against whomever an effort might be made to apply it.” However, this exception has proven to be almost useless because it is difficult to contemplate a statute so wholly unconstitutional as to meet these requirements. Indeed, there is not a single instance in which the Court has invoked the patently unconstitutional exception to justify federal intervention.22
Specifically, in Trainor v. Hernandez, 431 U.S. 434, 447 (1977), the district court had found the statute at issue to be patently unconstitutional. The Supreme Court, however, declined to apply this exception to the case before it because the statute in question was not unconstitutional “in every clause, sentence and paragraph.”23 Justice Stevens noted in his dissent that the patently unconstitutional exception would be “unavailable whenever a statute has a legitimate title, or a legitimate severability clause, or some other equally innocuous provision.”24 Accordingly, the Court’s interpretation of this exception has completely voided it of any meaning and rendered it practically useless.
The third exception to federal court abstention that derived from Younger is one premised on the “lack of an adequate state forum.”25 Unlike the other two exceptions, the Court has actually used this exception in practice. In Gibson v. Berryhill, 411 U.S. 564 (1973), for example, the Court found that federal intervention is appropriate under this exception if the state courts are biased and unable to be trusted on a particular issue. In Gibson, the Court found that a board of optometrists was incapable of fairly adjudicating a particular suit because every member had a financial stake in its outcome.
The Court has been far more restrictive in its holdings in other cases, especially ones involving the judiciary. In Middlesex County Ethics Comm. v. Garden State Bar Ass’n, 457 U.S. 423, 435-37 (1982), the Court found that the state bar was an adequate forum for raising First Amendment objections to state bar disciplinary rules because nothing in the record indicated that the bar committee would have refused to hear a First Amendment challenge to its disciplinary rules. Therefore, the Court declined to recognize that a Younger exception existed, instead finding that the state bar was adequate to address the First Amendment challenges. And, in Kugler v. Helfant, 421 U.S. 117, 125-29 (1975), the Court found that the availability of recusal provisions in New Jersey courts substantially undermines any claim of bias surrounding the judiciary. Moreover, even when bias can be shown on the part of a judiciary, a litigant must also demonstrate that the bias is so systematic and pervasive that recusal provisions are either unavailable or ineffective.26
In sum, the opportunities to invoke a Younger exception are rare, and when opportunities do arise, these exceptions are very seldom used by the federal courts. The bad faith and harassment exception and the patently unconstitutional exception have never been invoked by the Court because the definitions of these exceptions have been so narrowly construed as to exclude almost every imaginable scenario. Therefore, the Younger exceptions have rendered themselves practically useless.
Expansion of the Younger Abstention Doctrine
The Younger abstention doctrine mandates that federal courts must abstain from hearing cases involving federal issues already being litigated in state forums. In its original version, the doctrine only applied when the federal courts were asked to intervene on federal issues in cases already being litigated in an ongoing state criminal proceeding. However, the doctrine quickly expanded to further limit a litigant’s access to federal courts.
The facts of Samuels v. Mackell, 401 U.S. 66 (1971), were similar to those of Younger, and the case was in fact decided by the Court on the same day. Again, Justice Black delivered the opinion, which rested on policy considerations similar to those emphasized in Younger. The Younger abstention doctrine was immediately expanded to include federal declaratory relief that interferes with state prosecutions. The Court explained that a declaratory judgment may “result in precisely the same interference with, and disruption of, state proceedings that the long-standing policy limiting injunctions seeks to avoid.”27
In 1975, the Court extended the Younger abstention doctrine to include state civil proceedings. In Huffman v. Pursue, Ltd., 420 U.S. 592 (1975), the state sought to enforce its public nuisance statute by closing a theater that featured pornographic films. The state court, applying state law, held that the theater was a nuisance and ordered its closing for one year. The defendant filed an action in federal district court seeking declaratory and injunctive relief claiming that Ohio’s public nuisance statute was unconstitutional.28
Although this case was civil in nature and the Younger doctrine would not have applied, the Supreme Court found that the same principles of comity and “Our Federalism” that were expressed in Younger warranted abstention in this case.29 The Court declared that the state civil enforcement proceeding was analogous to a criminal prosecution. The Younger abstention doctrine was expanded to apply to civil proceedings which are “both in aid of and closely related to criminal statutes.” Additionally, the Court disapproved of the theater owner’s attempts to bypass the state appellate process and explained that the owner could have continued with the appeal which may have reached the U.S. Supreme Court. The Court was careful to limit this extension to the facts of this case and expressly refused to extend the Younger doctrine to all civil litigation. With this extension, the Younger abstention applied to proceedings that are “quasi-criminal” in nature.30
Just months after Huffman, the Court again expanded the Younger doctrine to include criminal proceedings that are not pending at the time the federal suit is filed, as long as state court proceedings are initiated prior to any hearings on the merits in federal court. In Hicks v. Miranda, 422 U.S. 332, 349 (1975), Justice White wrote in the majority opinion that no case under Younger required that a state criminal proceeding be pending on the day that the federal action is filed. The effect of this ruling was to create a “reverse removal power” for the state to defeat a plaintiff’s choice of the federal forum.31
The Court quickly expanded the Younger abstention doctrine to apply to all civil enforcement actions brought by the state. In Trainor v. Hernandez, 431 U.S. 434, 437-38 (1977), the director of the Illinois Department of Public Aid sought to recover fraudulently-received welfare funds. The defendants filed an action in federal court and did not appear in state court to challenge the allegations. The Court held that the principles of comity and “Our Federalism” expressed in Younger and Huffman are “broad enough to apply to interference by a federal court with an ongoing civil enforcement action such as this, brought by the [s]tate in its sovereign capacity.”32 In reaching its decision, the Court eliminated the implied requirement expressed in Huffman that civil proceedings must resemble a criminal prosecution.
The Supreme Court took a more expansive view of the notion of comity in Juidice v. Vail, 430 U.S. 327 (1977), holding that abstention was proper in cases where the pending state court proceeding was civil, but neither quasi-criminal nor quasi-judicial. For the first time, the Court applied the Younger doctrine to cases in which state governments were not a party. The defendants were a class of judgment debtors who challenged the use of statutory contempt procedures in state courts as unconstitutional violations of the 14th Amendment.
The Supreme Court held that Younger abstention was proper because it was not limited to state actions involving, or similar to, criminal proceedings.33 In reaching its decision, the Court noted that the “more vital consideration” behind the Younger doctrine of abstention is not whether a state criminal proceeding was involved, but rather in “the notion of comity.”34 The opinion reiterated that Younger required deference not only to state judicial proceedings, but also to state functions, such as the contempt power, which lie “at the core of the administration of a [s]tate judicial system.” The Court declared that labels such as civil, quasi-criminal, or criminal in nature that had been placed on state proceedings were not the “salient fact” in the Younger analysis.35 Instead, the critical factor was the federal court interference with the state’s interest in enforcing its contempt power. Thus, the Court unequivocally stated that the scope of the Younger abstention doctrine was not limited to criminal or quasi-criminal cases.36
In its original form, the Younger abstention doctrine was limited to criminal law proceedings because the state’s interest in exercising its power in state court absent federal interference is strongest in the criminal law context. However, Juidice declared that Younger abstention applies when a federal court is asked to interfere with a pending state proceeding that implicates a state interest, regardless of whether the case is civil or criminal.37
Two years later, the Supreme Court again modified the Younger analysis in Moore v. Sims, 442 U.S. 415 (1979). In Moore, a Texas couple’s three children had been removed from their custody during a state court proceeding. The children were held in state custody for six weeks without a hearing.38 The couple filed a federal suit challenging the constitutionality of certain statutes of the Texas Family Code, and sought a preliminary injunction enjoining any further prosecutions under the statutes. The Supreme Court held that federal court abstention was proper in this case. The Court looked not only to Texas’ interest in litigating this matter in its own courts, but also to the fact that the state proceedings provided the couple with an opportunity to raise their constitutional claims. In writing for the majority, Justice Rehnquist stated:
The price exacted in terms of comity would only be outweighed if state courts were not competent to adjudicate federal constitutional claims — a postulate we have repeatedly and emphatically rejected. In sum, the only pertinent inquiry is whether the state proceedings afford an adequate opportunity to raise the constitutional claim, and Texas law appears to raise no procedural barriers.39
This standard is quite different from the original foundations of abstention that were articulated in Younger only eight years earlier. In Younger, the Court explicitly stated that the holding did “not mean blind deference to ‘[s]tates’ [r]ights,’” but only a “sensitivity” to the interests of both the state and the federal government.40
Moore marked a significant shift in the Supreme Court’s Younger abstention analysis because the Court asserted that the “only pertinent inquiry” is the adequacy of the state forum. Moore suggests that the state interest is secondary, and the merits of the claim no longer need consideration. It further suggests that the only way comity concerns will be outweighed in federal courts is when the state forum is inadequate.41
The Supreme Court has further required federal court abstention when the state action is an administrative proceeding that is judicial in nature. For example, in Middlesex County Ethics Comm. v. Garden State Bar Ass’n, 457 U.S. 423 (1982), a lawyer brought an action in federal court alleging that the state bar’s disciplinary rules violated the First Amendment. At the time, the rules for disciplining attorneys did not specifically provide for constitutional challenges to the disciplinary process. However, the state’s interest in licensing and disciplining its own attorneys is clearly sufficient to invoke the state interest requirement for Younger abstention. In finding that attorney licensing and disciplining procedures are significant state interests, the Court stated:
The State of New Jersey has an extremely important interest in maintaining and assuring the professional conduct of the attorneys it licenses …. The judiciary as well as the public is dependent upon professionally ethical conduct of attorneys and thus has a significant interest in assuring and maintaining high standards of conduct of attorneys engaged in practice.42
According to the Court, the only pertinent issue was whether the administrative proceedings of the state bar and the available review by the state court afforded the lawyer the opportunity to raise his constitutional claims. The Court considered the adequacy of the state forum that was first articulated in Moore.43 The Court established a three-prong test that has been applied in numerous cases since this decision. The Court found that abstention is proper when 1) the state action is an ongoing state proceeding, 2) the proceeding implicates important state interests, and 3) the plaintiff has an adequate opportunity to raise his or her constitutional claim in the state proceeding. The Middlesex test is recognized as the current standard for Younger abstention.44 Since the Middlesex test, there have been numerous questions concerning the precise boundaries of Younger abstention, and the Supreme Court has not provided much guidance in clarifying its exact limits and parameters.45
The Supreme Court further expanded the Younger doctrine to include “quasi-judicial” and administrative proceedings. In Ohio Civil Rights Commission v. Dayton Christian Schools, Inc., 477 U.S. 619 (1986), a pregnant teacher’s contract at a religious school was terminated because of a policy requiring mothers to stay home with their young children. In response to the teacher’s termination, the Ohio Civil Rights Commission began administrative proceedings against the school.46 The school filed suit in federal district court seeking an injunction against the pending administrative action claiming that any sanctions imposed would violate the establishment or free exercise of religion clauses of the First Amendment. The Supreme Court applied the Middlesex test and determined abstention was proper, even though the judicial nature of the proceedings was not obvious. Writing for the majority, Justice Rehnquist reasoned that because the administrative proceeding provided the school with an opportunity to raise its constitutional claims, abstention was appropriate.47 The Court held that Younger abstention was required even if a constitutional issue could only be raised through state court judicial review of the administrative proceeding.48 With its holding, the Court expanded the Younger abstention to encompass quasi-judicial proceedings.
In its origin, Younger abstention rested in the notions of comity, equity, and federalism. The doctrine has now evolved and expanded to include inquiries into the sufficiency of the state interest in its proceedings and the adequacy of the state forum. Some scholars have argued that Justice Rehnquist has played a critical role in expanding the federalism element of the Younger abstention doctrine beyond that which was contemplated by Younger.49
Restriction and Curtailment of the Abstention Doctrine
Although the Court has expanded the Younger doctrine since its inception, there have been a few cases in which the Court has declined to abstain, finding that the case was appropriate for federal review. These cases are noteworthy because they provide a basis for determining when abstention is proper, and a framework for determining where the Younger abstention doctrine stands today.
As early as 1974, the Court acted to limit the application of the Younger abstention doctrine. In Steffel v. Thompson, 415 U.S. 452 (1974), decided three years after Younger, the Court held that deference to the state’s criminal proceedings was required only when actual proceedings were pending in the state court.50 In both Younger and Samuels, the federal plaintiffs were actually facing prosecution in the state courts. In Steffel, the federal plaintiff sought declaratory and injunctive relief based on the state’s threat of arrest and prosecution. The Supreme Court reversed the district court’s dismissal of the case under the Younger doctrine, holding that in the absence of pending state proceedings there was no risk of federal interference with the state’s criminal justice system. This holding was quickly qualified in 1975, however, when the Court expanded the Younger doctrine to reach criminal proceedings that were not yet pending at the time the federal suit was filed.
In New Orleans Public Service, Inc. (NOPSI) v. Council of the City of New Orleans, 491 U.S. 350 (1989), the Supreme Court limited the extent to which Younger abstention applies in civil cases. To briefly summarize the complicated fact pattern presented in this case, the New Orleans City Council brought suit in state court seeking a declaration establishing a rate order. NOPSI, a local utility company, filed suit in federal court seeking an injunction against the council and challenging the constitutionality of the rate order.51
The Supreme Court held that a federal court should not abstain when the ongoing state proceeding involves a state court engaged in an “essentially legislative act.” In its opinion, the Supreme Court emphasized the importance of the federal courts’ role in exercising the jurisdiction granted to them to protect and enforce individual rights. The Court stated that “our cases have long supported the proposition that federal courts lack the authority to abstain from the exercise of jurisdiction that has been conferred.”52 The Court reiterated that abstention is the “exception, not the rule,” and further declared that the circumstances in which federal court abstention is appropriate have been “carefully defined” by the Court.53
The Court determined that the issue in the NOPSI case was a legislative action, and that Younger abstention was inappropriate because “it has never been suggested that Younger requires abstention in deference to a state judicial proceeding reviewing legislative or executive action.”54 Thus, the Court declared that while the Younger doctrine had been extended “beyond criminal proceedings, and even beyond proceedings in courts,” it has never been extended to proceedings that are “not judicial in nature.”55
The Pullman Abstention Doctrine
The Pullman abstention doctrine instructs federal courts to avoid issues of federal constitutional questions when the case may be decided on questions of state law. This doctrine came from the famous 1941 Supreme Court case, Railroad Commission v. Pullman Co., 312 U.S. 496 (1941). This case involved an order by the Texas Railroad Commission that no sleeping car could be operated on any railroad line in Texas unless the cars were in the charge of an employee who held the rank of conductor.56 This order contained strong racial overtones because the trains that carried only one sleeping car were usually in the charge of a porter, all of whom were African-American. However, in 1941, all trains that carried more than one sleeping car were in the charge of a conductor, all of whom were white. The Pullman Company brought an action in federal district court to enjoin the Railroad Commission’s order.57 The Pullman Company “assailed the order as unauthorized by Texas law[,] as well as violative of” equal protection, the due process clause, and the commerce clause of the Constitution. The Pullman porters, through their union, intervened in the suit and objected to the order on the ground that it discriminated against African-Americans in violation of the 14th Amendment of the U.S. Constitution.58
After a decree was issued by a three-judge panel convened by the federal district court, the case was appealed directly to the Supreme Court. The Court conceded that the Pullman porters presented a substantial constitutional issue, yet held that the issue related to “social policy upon which the federal courts ought not to enter unless no alternative to its adjudication is open.”59 The Supreme Court found that even though the three-judge panel had examined the Texas law related to discrimination, the federal courts could not have the final word on Texas law. In other words, the last word on the authority of the Railroad Commission belonged to the Texas Supreme Court. The Court reasoned that the “reign of law” was not “promoted if an unnecessary ruling of a federal court” could be “supplanted by a controlling decision of a state court.”60
The Court remanded the case to the district court with directions to retain the case pending a determination of the state proceedings. The Court reasoned that if state law did not authorize the commission’s assumption of authority, then there would be an end to the litigation and the constitutional issue would not arise. The Court held that “in the absence of any further showing that these methods for securing a definitive ruling in the state courts cannot be pursued with the full protection of the constitutional claim, the district court should exercise its wise discretion by staying it.”61 This classic case dictates that the federal court should stay, but not dismiss, the action while the state court resolves the issue of state law.
The Burford Abstention Doctrine
The Burford abstention doctrine is recognized by federal courts and used “to avoid needless conflict with the administration by a state of its own affairs.”62 This doctrine grew from the case Burford v. Sun Oil Co., 319 U.S. 315 (1943), which was similar to Pullman in that it was a Texas case involving the Texas Railroad Commission.63 In this case, “Sun Oil attacked the validity of an order of the Texas Railroad Commission granting petitioner Burford a permit to drill four [oil] wells on a small plot of land [in East Texas].”64 This action was brought in federal court and based on diversity of citizenship. Sun Oil contended that “the order denied them due process of law.” While the district court refused to enjoin the order of the Railroad Commission, the Fifth Circuit Court of Appeals reversed the finding. On appeal to the U.S. Supreme Court, the district court’s refusal to enjoin the order of the Railroad Commission was affirmed. Justice Black, in delivering the opinion of the Court, reasoned that abstention would be appropriate because “questions of regulation of the industry by the [s]tate administrative agency, whether involving gas or oil prorating programs or Rule 37 cases, so clearly involves basic problems of Texas policy that equitable discretion should be exercised to give the Texas courts the first opportunity to consider them.”65
The Court further held that the “state provides a unified method for the formation of policy and determination of cases by the Commission and by the state courts…if the state procedure is followed from the Commission to the [s]tate Supreme Court, ultimate review of the federal questions is fully preserved….”66 The Court concluded that “under such circumstances, a sound respect for the independence of state action requires the federal equity court to stay its hand.” Distinct from Pullman, which allows the federal district court to stay the proceedings while the state action is pursued, the federal action is dismissed entirely under Burford.67
The Colorado River Abstention Doctrine
The Colorado River abstention doctrine is invoked to avoid duplicative proceedings, either in two different federal courts or in parallel state and federal court proceedings.68 In Colorado River Water Conservation District v. United States, 424 U.S. 800 (1976), the United States brought suit in federal district court, on its own behalf and on behalf of two Native American tribes, seeking a declaration of rights to the waters of certain rivers and their tributaries in Colorado Water Division No. 7.69 Shortly after the federal action was commenced, one of the defendants filed an application in the state court for Division No. 7 seeking an order directing service of process on the United States to make it a party to the state court proceedings “for the purpose of adjudicating all of the government’s claims — both state and federal.” The district court dismissed the action, citing the abstention doctrine and deference to state court proceedings, but the 10th Circuit reversed this ruling on appeal.
The Supreme Court, in an opinion written by Justice Brennan, stated the general rule that “pendency of an action in the state court is no bar to proceedings concerning the same matter in the federal court having jurisdiction.” In evaluating the district court’s dismissal of the action on abstention doctrine grounds, the Court found that the circumstances presented in Colorado River did not fit into any of the recognized abstention doctrines.70 However, the Court also found that “the circumstances permitting the dismissal of a federal suit due to the presence of a concurrent state proceeding for reasons of wise judicial administration are considerably more limited than the circumstances appropriate for abstention. The former circumstances, though exceptional, do nevertheless exist.”71
The exceptional circumstances that the Court felt justified federal abstention are “(a) the apparent absence of any proceedings in the federal district court, other than the filing of the complaint prior to the motion to dismiss, (b) the extensive involvement of state water rights occasioned by this suit naming 1,000 defendants, (c) the 300-mile distance between the district court in Denver and the [state] court in Division No. 7, and (d) the existing participation by the [g]overnment” in other state court proceedings concerning other water divisions.72 The Court cautioned that this factual situation was very unusual; therefore, the rule that only “exceptional circumstances” permit dismissal in parallel proceedings “argues against, rather than for, the use of this type of abstention in routine cases.”73
Despite the abundance of criticism and drawbacks, the abstention doctrines are essential to the parallel judicial systems in the United States. Abstention is crucial when the interests of the states outweigh the federal adjudication of the matter. To preserve the balance between state and federal sovereignty, the federal courts should continue to abstain from cases to avoid friction between federal and state courts.
1 Charles Alan Wright, Law of Federal Courts §52 at 325 (6th ed. 2002).
2 James C. Rehnquist, Taking Comity Seriously: How to Neutralize the Abstention Doctrine, 46 Stan. L. Rev. 1049 (1994).
3 Matthew D. Staver, The Abstention Doctrine: Balancing Comity with Federal Court Intervention, 28 Seton Hall L. Rev. 1102, 1102 (1998).
4 Wright, Law of Federal Courts §52.
5 Leonard E. Birdsong, Comity and Our Federalism in the Twenty-First Century: The Abstention Doctrines Will Always Be With Us — Get Over It!, 36 Creighton L. Rev. 375 (2003).
6 Younger, 401 U.S. at 44.
7 Id. at 44-45.
8 Id. at 44.
9 Id. at 39.
11 Id. at 40.
12 Id. at 43.
13 Id. at 46.
14 Id. at 55-56.
15 Id. at 54 (noting the Court had “no occasion to consider” whether the Anti-Injunction Act applies to the instant case).
16 Id. at 53-54.
17 William J. Brennan, Jr., State Constitutions and the Protection of Individual Rights, 90 Harv. L. Rev. 489, 498 (1977) (concluding that showings of extraordinary circumstances under the exceptions are “probably impossible to make”); Brian Stagner, Avoiding Abstention: The Younger Exceptions, 29 Tex. Tech. L. Rev. 137, 141 (1998) (describing the Younger exceptions as an “escape hatch that rarely opens”).
18 Younger, 401 U.S. at 53-54.
19 Dombrowski v. Pfister, 380 U.S. 479 (1965).
20 Erwin Chemerinsky, Federal Jurisdiction §13.4 at 751 (2d ed. 1994).
21 Younger, 401 U.S. at 53-54.
22 See Chemerinsky, Federal Jurisdiction §13.4 at 753.
23 Trainor, 431 U.S. at 447 (1977).
24 Id. at 463 (Stevens, J., dissenting).
25 See Stagner, Avoiding Abstention: The Younger Exceptions, 29 Tex. Tech. L. Rev. 137, 163 (1998).
26 Brooks v. N.H. Supreme Court, 80 F.3d 633, 640 (1st Cir. 1996) (stating the “biased” exception to the Younger abstention doctrine is inappropriate if a litigant fails to employ available procedures for the recusal of biased judges).
27 Samuels, 401 U.S. at 72 (1971).
28 Huffman, 420 U.S. at 598 (1975).
29 Id. at 604; see also Kevin Beck, The Ninth Circuit’s Message to Nevada: You’re Not Getting Any Younger, 3 Nev. L. J. 592, 597 (2003).
30 See Staver, The Abstention Doctrine: Balancing Comity with Federal Court Intervention, 28 Seton Hall L. Rev. at 1169 (1998).
31 Bryce M. Baird, Federal Court Abstention in Civil Rights Cases: Chief Justice Rehnquist and the New Doctrine of Civil Rights Abstention, 42 Buff. L. Rev. 501, 531 (1994).
32 Trainor, 431 U.S. at 444.
33 See also Baird, Federal Court Abstention in Civil Rights Cases: Chief Justice Rehnquist and the New Doctrine of Civil Rights Abstention, 42 Buff. L. Rev. at 536 (1994).
34 Juidice, 430 U.S. 327, 333-34 (1977).
35 Id. at 334.
36 Id. at 335-36.
37 Id. at 334.
38 Moore, 442 U.S. at 419-420.
39 Id. at 430.
40 Younger, 401 U.S. at 44.
41 Moore, 442 U.S. at 430.
42 See Middlesex, 457 U.S. at 434.
43 See Moore, 442 U.S. at 425-26.
44 See Daniel Jordan Simon, Abstention Preemption: How the Federal Courts Have Opened the Door to the Eradication of “Our Federalism,” 99 Nw. U. L. Rev. 1355, 1360 (2005).
45 See Charles R. Wise & Robert K. Christensen, Sorting Out Federal and State Judicial Roles in State Institutional Reform: Abstention’s Potential Role, 29 Fordham Urb. L. J. 387, 389 (2001).
46 Ohio Civil Rights Comm’n, 477 U.S. at 623-24.
47 Id. at 626-27.
48 Id. at 629.
49 See Baird, Federal Court Abstention in Civil Rights Cases: Chief Justice Rehnquist and the New Doctrine of Civil Rights Abstention, 42 Buff. L. Rev. 501, 531 (1994).
50 Steffel, 415 U.S. at 462.
51 New Orleans Pub. Serv., Inc., 491 U.S. at 353-58.
52 Id. at 358.
53 Id. at 359.
54 Id. at 358.
55 Id. at 369-370.
56 R.R. Comm’n v. Pullman Co., 312 U.S. at 497.
57 Id. at 497-98.
58 Id. at 498.
59 Id. at 498-99.
60 Id. at 500.
61 Id. at 501.
62 Wright, Law of Federal Courts §52 at 325.
63 Burford v. Sun Oil Co., 319 U.S. at 316.
64 Id. at 316-17.
65 Id. at 332.
66 Id. at 333-34.
67 See Birdsong, Comity and Our Federalism in the Twenty-First Century: The Abstention Doctrines Will Always Be With Us — Get Over It!, 36 Creighton L. Rev. at 380 (2003).
68 See id.
69 Colorado River, 424 U.S. at 805.
70 Id. at 813-17.
71 Id. at 818.
72 Id. at 820.
73 Wright, Law of Federal Courts at 339.
Beth Shankle Anderson is an attorney practicing in the Tallahassee office of Theriaque, Vorbeck & Spain, where she focuses primarily on land use and environmental law. She graduated cum laude from Florida Coastal School of Law.
|Home | About | Journals | Submit | Contact Us | Français|
The Drosophila melanogaster ovary is a powerful yet simple system with only a few cell types. Cell death in the ovary can be induced in response to multiple developmental and environmental signals. These cell deaths occur at distinct stages of oogenesis and involve unique mechanisms utilizing apoptotic, autophagic and perhaps necrotic processes. In this review, we summarize recent progress characterizing cell death mechanisms in the fly ovary.
Programmed cell death (PCD) is an essential process in Drosophila development. Vast numbers of cells die during embryogenesis and imaginal disc differentiation, and entire structures are destroyed during pupal metamorphosis. Extensive cell death also occurs in the adult female during oogenesis as part of normal development and in response to poor environmental conditions.
The major forms of cell death are apoptosis, autophagic cell death, and necrosis [reviewed in 1]. Apoptosis is characterized by condensation and blebbing of nuclei and cytoplasm, whereas autophagic cell death is associated with autophagosomes, double membraned vesicles that surround cellular components. Necrosis is characterized by organelle swelling and lysis. All three forms of cell death have been reported in Drosophila. The majority of cell deaths during embryogenesis and imaginal disc differentiation occur by apoptosis, whereas some cell deaths during pupation are autophagic. To date, necrosis has only been described in mutant or pathological situations in Drosophila [2, 3].
Apoptosis in Drosophila is typically initiated by the expression of reaper (rpr), head involution defective (hid), grim, and/or sickle (skl), which encode inhibitor of apoptosis protein (IAP) binding proteins [reviewed in 4, 5]. These proteins inhibit DIAP1 (Drosophila IAP1, also known as Thread), which acts to suppress caspase activity in healthy cells. Once DIAP1 is inhibited, caspases are activated and apoptosis ensues. Analysis of caspase mutants has shown that the critical caspases during embryonic apoptosis are Dronc and Drice [reviewed in 4]. Dronc is an initiator caspase which interacts with the adaptor protein Ark, and Drice is an effector caspase that is activated by Dronc. Similarly, a Hid-Dronc-Drice cascade operates during eye differentiation later in development [6, 7].
Autophagic cell death in Drosophila is best characterized in the salivary gland which is degraded during pupal metamorphosis. PCD of the salivary glands utilizes components of apoptosis, including Rpr, Hid, and caspases [8, 9]. Additionally, autophagosomes form during salivary gland cell death, and cell death is disrupted in mutants defective for autophagy . Little is known about the genes involved in necrosis in Drosophila, however in other systems calcium signaling and lysosomal cathepsins have been shown to be important [reviewed in 11].
Most cell deaths in the Drosophila ovary occur by pathways distinct from those described in other tissues, indicating novel mechanisms of cell death in the ovary. Each Drosophila ovary is composed of approximately fifteen ovarioles, chains of developing egg chambers (Fig. 1) [reviewed in 12–14]. Egg chambers are sixteen-cell germline cysts surrounded by up to a thousand somatic follicle cells. Germline and somatic stem cells reside in the most anterior region of the ovariole, a region called the germarium (Fig. 1). Egg chambers move out of the germarium progressing through fourteen defined stages of oogenesis. Early in egg chamber development within the germarium, one of the germline cells is specified to differentiate as an oocyte, and the remaining fifteen cells develop as polyploid nurse cells. The nurse cells supply the oocyte with nutrients, organelles, mRNAs, and proteins needed throughout oogenesis and early embryonic development. The somatic follicle cells are required for proper axis specification of the oocyte and synthesis of yolk, vitelline membrane and chorion.
The first example of germline cell death occurs early in embryonic development when primordial germ cells (PGCs) that fail to coalesce into the gonad undergo PCD [reviewed in 15]. Interestingly, these “lost” PGCs undergo cell death independent of the major embryonic cell death regulators, rpr, hid and grim , similar to germline cell death in the adult (discussed below). Additionally, these cell deaths cannot be blocked by the expression of caspase inhibitors. Positive effectors of PGC death are wunen and wunen-2 which encode lipid phosphate phosphatases, p53 which is an ortholog of the mammalian tumor suppressor, and outsiders which encodes a putative monocarboxylate transporter [16, 17].
In the adult female fly, cell death occurs sporadically within the germarium and during mid-stages of oogenesis (stages 7–9) [reviewed in 18]. Cell death in these regions increases in response to poor nutrition, which can be induced experimentally by withholding yeast as a protein source. It is thought that these cell deaths are induced following a “checkpoint” where environmental and nutritional inputs determine whether an egg chamber will progress into vitellogenesis, the yolk deposition that occurs later in oogenesis . Interestingly, chemical exposure or developmental insults induce cell death specifically in mid-oogenesis, suggesting that this stage is poised to undergo PCD. Cell death also occurs late in oogenesis as part of normal oocyte development, as the fifteen nurse cells and follicle cells are eliminated by PCD. These distinct examples of cell death are reviewed individually in the following sections, with a focus on recent findings.
Ovaries from nutrient-deprived flies are significantly reduced in size compared to ovaries from flies that have been conditioned on food supplemented with yeast. This reduction in size is due to effects on cell proliferation as well as PCD . Based on the TUNEL assay, nutrient-deprived flies show an increase in apoptosis in the germarium, primarily germline cells in region 2 where follicle cells begin to surround a germline cyst (Fig. 2a) . Cell death is thought to serve as a mechanism to maintain the proper number of follicle cells that are needed to surround a germline cyst during oogenesis . These findings imply that nutrient sensing pathways regulate the checkpoint in region 2 of the germarium.
A primary pathway for nutrient sensing is the insulin-mediated phosphoinositide kinase-3 (PI3K) pathway. In Drosophila, there are seven insulin-like peptides (Dilps) that interact with the insulin receptor (InR) . Mutations in positive regulators of the Drosophila insulin-mediated PI3K pathway result in a reduction in body size and sterility [21–25]. This pathway may mediate nutrient responses in the germarium and thus negatively regulate cell death. Both InR and the insulin receptor substrate Chico have been shown to be required for follicle cell proliferation, although effects on cell death in the germarium have not been investigated [19, 26]. Target of Rapamycin (Tor) is a downstream target of the PI3K pathway, but may also be activated by other mechanisms . Homozygous tor escapers are reduced in size compared to wild-type and have increased acridine orange positive puncta in germaria, indicating that cell death is increased . The combined data from InR, Chico and Tor suggest that mutants in the insulin mediated PI3K pathway may mimic nutrient deprivation by affecting cell proliferation and cell death in germaria.
Recent findings indicate that nutrient deprivation leads to autophagy induction as well as cell death in the germarium, perhaps in a mechanism similar to the Drosophila salivary gland . The genes required for autophagy were identified in yeast and are well-conserved in Drosophila and other metazoans [reviewed in 29]. The formation of autophagosomes and autolysosomes has been visualized in germaria by punctate staining of LysoTracker, or Drosophila Atg8a or the human ortholog LC3 fused to fluorescent tags [30, 31]. Germaria from well-fed flies have low levels of these markers in germaria , whereas nutrient-deprived flies display puncta in region 2 of many germaria . Electron micrographs of degenerating germline cysts display numerous autophagosomes . Taken together, these findings indicate that germline cyst cells undergo autophagic cell death in region 2. It is important to note that region 2 is also the area where apoptosis has been observed .
Caspases, proteases associated with apoptotic cell death, are required during salivary gland degradation along with the autophagy machinery and a similar mechanism may be at work in the ovary. Active caspases can be detected in region 2 of the germarium using an anti-active caspase-3 antibody, and mutants lacking the effector caspase Dcp-1 display decreased levels of DNA fragmentation and autophagy in region 2 compared to wild-type . Interestingly, mutants of autophagy genes atg7 and atg1 have decreased levels of DNA fragmentation in region 2 of the germarium compared to wild-type [30, 31]. These data indicate that autophagy and the caspase Dcp-1 are required for nutrient deprivation-induced PCD in the germarium.
In addition to the germarium, mid-stage egg chambers from nutrient-deprived flies undergo cell death, characterized by nurse cell nuclear condensation and fragmentation, and engulfment by follicle cells (Fig. 2b,d) [reviewed in 18]. Numerous other stimuli can influence cell death at mid-oogenesis, including developmental abnormalities, chemical treatment, temperature, mating, and daylength [reviewed in 18]. Even modern hazards like cocaine exposure and cellular phone radiation can induce cell death in mid-oogenesis [32, 33].
A survey of known cell death genes has revealed a novel pathway in mid-oogenesis compared to other Drosophila tissues undergoing PCD. Cell death in most Drosophila tissues is highly regulated by the IAP binding proteins Rpr, Hid, Grim and Skl [reviewed in 4, 5]. Interestingly there is no apparent role for these cell death regulators in mid-oogenesis . When rpr, hid, grim and skl were removed simultaneously during nutrient deprivation, mid-stage egg chambers still resembled wild-type animals treated under the same conditions . Similarly, mutants of other cell death regulators ark, debcl, p53, and eiger, had no effect on mid-oogenesis cell death. This work implies that the known cell death regulators are not required for mid-oogenesis nutrient deprivation-induced cell death.
A role for the insulin-mediated PI3K pathway in the regulation of egg chamber survival at mid-oogenesis has been suggested by several studies. Heteroallelic mutant combinations of InR lead to immature ovaries and egg chambers that remain pre-vitellogenic [35, 36]. Similar phenotypes are seen in InR germline clones (GLCs), indicating that InR is required for germline development in a cell autonomous manner . Additionally, InR GLCs produce abnormal egg chambers considered to be degenerating . Homozygous mutant chico flies also produce abnormal egg chambers described as degenerating . Similarly, egg chambers from tor homozygous escapers fail to develop to post-vitellogenic stages . These findings suggest a negative regulation of apoptosis by the insulin pathway, but evidence that the terminal egg chambers in these mutants are apoptotic has yet to be shown.
Survival beyond mid-oogenesis is also regulated by the hormones 20-hydroxyecdysone (20E) and juvenile hormone (JH) [reviewed in 18, 37]. Increased levels of 20E are seen following nutrient deprivation and ectopic 20E can induce egg chamber degeneration, suggesting that 20E induces PCD . Paradoxically, GLCs of the ecdysone receptor, or its target E75, also degenerate in mid-oogenesis, indicating that signaling by 20E is required for egg chamber survival . These findings can be reconciled by a model that a proper balance between JH and 20E is required for survival in mid-oogenesis . Alternatively, a threshold level of 20E may determine the outcome in mid-oogenesis . Known 20E target genes E74, E75 and BR-C show dynamic expression changes in mid-oogenesis, with some isoforms increasing and others decreasing expression in response to nutrient-deprivation [39–41]. These target genes also regulate each other and can be pro-or anti-apoptotic [40, 41]. To identify additional target genes of nutrient-deprivation and hormonal signaling in the ovary, gene expression profiling via microarray has been carried out . Changes in expression were detected for cell death and hormone-related genes, as well as components of insulin-mediated PI3K signaling and stress response genes in the c-Jun N terminal kinase (JNK) pathway. InR is required for proper levels of JH and 20E , indicating that there is crosstalk between these pathways, as has been shown in other tissues .
Mid-stage egg chambers from nutrient-deprived flies display hallmarks of autophagy like those seen in the germarium. Degenerating mid-stage egg chambers accumulate autophagosomes, based on fluorescent markers (Fig. 2f) [30, 31]. Autophagosomes have also been observed in electron micrographs of degenerating mid-stage egg chambers from Drosophila virilis [45, 46]. Mutant atg7 flies or atg1 GLCs that have been nutrient-deprived show a decrease in LysoTracker staining at mid-oogenesis and reduced levels of TUNEL-positive staining, indicating a block in DNA fragmentation even though nurse cell chromatin condensed normally [30, 31]. These data suggest that autophagy may play a role in DNA fragmentation but not in chromatin condensation during PCD in mid-oogenesis.
During apoptosis, caspases play an essential role in the execution of cell death. Expression studies have revealed a high level of caspase activity in mid-oogenesis cell death (Fig. 2e) [47, 48]. There are three initiator and four effector caspases in Drosophila . Homozygous mutants of the effector caspase dcp-1 have a striking block in mid-oogenesis germline PCD in response to nutrient-deprivation (Fig. 2c) . The ovaries of these mutants accumulate a number of egg chambers in which the follicle cells have died and the nurse cells remain intact, a phenotype referred to as “bald” or “peas without pods” (pwop) [48, 50]. Similar to the germarium, there appears to be a requirement for the caspase Dcp-1 for the induction of autophagy in mid-oogenesis. In dcp-1 null mutants, LysoTracker and punctate GFP-LC3 staining are decreased in pwop egg chambers compared to degenerating wild-type egg chambers . Ectopic expression of dcp-1 is sufficient to induce mid-stage degeneration accompanied by autophagy . These results suggest that the caspase Dcp-1 acts to promote autophagic cell death of the germline in mid-oogenesis.
Further evidence for a caspase requirement in mid-oogenesis has come from studies of caspase inhibitors. Nutrient-deprived flies over-expressing the caspase inhibitors DIAP1 or p35, show the same phenotype as dcp-1 mutants, suggesting that DIAP1 normally keeps Dcp-1 in check during mid-oogenesis [47, 48, 50]. DIAP1 shows dynamic expression changes in mido-ogenesis, suggesting that its levels may be carefully regulated during this stage. diap1 mRNA and protein levels are reduced in mid-oogenesis even in healthy egg chambers [50, 51]. This down-regulation of diap1 may be what makes mid-oogenesis highly susceptible to cell death stimuli.
Multiple caspases usually participate in cell death, with a typical initiator-effector caspase cascade [reviewed in 4]. To determine which caspases might act upstream of dcp-1, the three initiator caspases were investigated. Mid-oogenesis cell death occurred normally in all of the single initiator caspase mutants . However, double strica; dronc mutants displayed a moderate pwop phenotype, indicating a redundant function for these two caspases in mid-oogenesis PCD. Unlike dcp-1, they did not show a complete block in cell death, suggesting that the effector caspase Dcp-1 can be activated by another mechanism. Taken together, these findings indicate that a novel caspase-dependent autophagic cell death pathway acts in mid-oogenesis PCD.
In late oogenesis at stage 11, nurse cells transfer their cytoplasmic contents to the oocyte through cytoplasmic bridges called ring canals, in a process called “dumping” (Fig. 3a–c) . Drastic cytoplasmic changes occur during dumping, including the formation of unique actin bundles that extend from the plasma membrane to the nuclear envelope . After dumping, the nurse cell nuclei and other remnants are removed through cell death (Fig. 3d). Nurse cell nuclear breakdown is initiated about the same time as dumping , but it is not known if the cytoskeleton changes that occur concomitantly with the demise of the nurse cells are regulated by the same mechanism. The degradation of nurse cell components as cell death occurs would probably be detrimental to the survival of the adjacent oocyte, and it is unknown how the oocyte is protected. These characteristics make developmental nurse cell death a unique process.
A number of mutants that disrupt nurse cell cytoplasm transfer have been described. Many of these “dumpless” mutants (Fig. 3f) disrupt cytoskeletal genes, which do not affect the initiation of nurse cell nuclear breakdown, although final DNA fragmentation of nurse cell nuclei is delayed [51, 53, 54]. Pathways that control dumping upstream of the cytoskeletal proteins are less clear. Genetic analysis over a decade ago implicated the BMP receptor Saxophone in nurse cell dumping , but further analysis has not been done. A “dumpless” phenotype was initially attributed to mutants of dcp-1, demonstrating a potential link between dumping and cell death, but this phenotype is now known to be caused by disruption of the neighboring gene pita [49, 56]. In subsequent studies, dcp-1 mutants were found to show a complete block in mid-oogenesis cell death but only a mild block in nurse cell nuclear clearance in late oogenesis [49, 50]. pita, also known as spotted-dick, encodes a Zn-finger transcription factor required for DNA replication , potentially implicating cell cycle regulation in the control of dumping. Consistent with this hypothesis, GLCs of the cell cycle regulator E2F produce a dumpless phenotype [58, 59]. However, both E2F and pita GLCs show additional defects, suggesting that their effects on dumping could be indirect.
The cell death mechanism that removes the nurse cells is different from canonical cell death mechanisms in the fly. Similar to mid-oogenesis cell death, the IAP binding proteins are not required for late oogenesis nurse cell death [34, 51]. Surprisingly, the requirement for caspases in nurse cell death appears to be minimal. In a wild-type fly, a small percentage of mature stage 14 egg chambers show the persistence of any nurse cell nuclei, whereas in flies overexpressing the caspase inhibitors DIAP1 or p35, up to a third of stage 14 egg chambers have some persisting nurse cell nuclei (Fig. 3e) . Similar frequencies of persisting nuclei have been observed in certain caspase mutant combinations as well as GLCs of ark [34, 50]. Contradictory results were obtained with a caspase peptide inhibitor , however these inhibitors are known to have off-target effects . Overall, these findings suggest that degradation of the nurse cells can occur largely independently of caspases and other known apoptosis genes . In general, mutants in the apoptotic cascade result in only a mild disruption to nurse cell PCD, suggesting that other cell death mechanisms are acting in conjunction with apoptosis, or compensating when apoptosis is inhibited.
The minor requirement for the caspases suggests there are other players that have not been identified. longitudinals lacking (lola), which encodes a BTB protein previously reported to be involved in axon guidance, was identified in a forward genetics screen for effectors of late oogenesis cell death . lola GLCs show a block in nurse cell chromatin condensation and DNA fragmentation, as well as effects on dumping. lola has been shown to interact with JIL-1, a chromosomal kinase, which affects the nuclear lamina [63, 64]. Mutants of lola or jil-1 show abnormal nuclear lamin morphology, suggesting a role for lola, jil-1, and nuclear lamins in chromatin condensation during developmental nurse cell death . lola GLCs also show defects in chromatin condensation and DNA fragmentation during mid-oogenesis PCD, suggesting that lola affects mechanisms common to both mid- and late oogenesis PCD.
Following chromatin condensation of nurse cell nuclei in late oogenesis, DNA fragmentation occurs [51, 56, 64, 65]. DNA fragmentation is generally thought to be a two step process during apoptotic cell death [66–68]. Caspases activate CAD (caspase activated DNase) by cleaving its inhibitor, ICAD. CAD then localizes to the nucleus and cleaves DNA between nucleosomes. DNase II, acting within engulfing cells, subsequently breaks down DNA into nucleotides. DNase II is an acidic DNase with the highest activity in acidic environments such as lysosomes. Disruption of Drosophila CAD blocks nucleosomal fragmentation but has no apparent effect on clearance of nurse cell nuclei in late oogenesis . However, DNase II mutants have a persisting nurse cell nuclei phenotype in late oogenesis and recent findings indicate that DNase II is required cell-autonomously in the dying nurse cells . This suggests that the two step model of DNA fragmentation can probably apply in nurse cell death, with a slight twist. Caspases may activate CAD to cleave chromatin between nucleosomes, followed by DNase II activity in the dying nurse cell. Considering that DNase II is an acid nuclease, the cell autonomous role for DNase II suggests a role for lysosomes or acidic conditions within the dying nurse cells.
Lysosomes are critical for autophagy, and the presence of autophagosomes during late oogenesis has been revealed by transmission electron microscopy of Drosophila virilis late stage egg chambers , suggesting that developmental nurse cell PCD occurs by autophagic cell death. Characterization of the autophagic machinery has not yet been reported in developmental nurse cell PCD, however mutants of the lysosomal gene spinster have a significant disruption to nurse cell PCD . It is important to note that lysosomes have been shown to be involved in necrosis as well as autophagic cell death . Necrosis has always been thought of as an accidental death that occurs when a cell is injured, and had been characterized more as a series of catastrophic events rather than an organized process [reviewed in 11]. In recent years, however, evidence for programmed necrosis is emerging. Examples in C. elegans, mammalian cell lines, primate ischemia models, and Dictyostelium have shown that necrosis follows a common set of events . These events include an influx of ions or misregulation of ion homeostasis, mitochondrial uncoupling leading to ROS generation and ATP depletion, mitochondrial swelling and perinuclear clustering, lysosomal rupture, and activation of non-caspase proteases such as calpains and lysosomal cathepsins . Interestingly, some of these cellular events have been shown to occur during late oogenesis. There is a release of calcium from nuclear stores early in the dumping process and the transfer of nurse cell mitochondria to the oocyte would be expected to leave the nurse cells largely devoid of an intracellular source of ATP . Further studies are necessary to determine whether developmental nurse cell PCD occurs by necrotic or autophagic PCD or a distinct mechanism.
In addition to germline cell death, cell death occurs in the somatic follicle cells. There are three types of follicle cells: stalk cells, polar cells and epithelial follicle cells [reviewed in 12]. Compared to germline cell death, follicle cell death is largely uncharacterized, with the exception of the polar cells. The polar cells are clusters of 2–5 follicle cells that are located at the most anterior and posterior region of each egg chamber. Early in oogenesis, excess numbers of polar cells undergo PCD, leaving precisely two polar cells at each end of the oocyte during the later stages of oogenesis . If the polar cells do not undergo PCD, anterior follicle cells that cover the nurse cells fail to thin and stretch properly. Additionally, a specialized group of epithelial follicle cells called the border cells fail to migrate towards the oocyte properly, which would be expected to lead to defective formation of the micropyle, the site of sperm entry . Death of the polar cells is mediated by a canonical apoptosis cascade utilizing Hid, Dronc and Drice . Interestingly, to date this is the only type of cell death in the ovary shown to use this cascade.
The epithelial follicle cells “disappear” following germline cell death in mid-oogenesis and after chorion deposition in late oogenesis. In late oogenesis, only a small subset of anterior follicle cells show any initiation of cell death while still in contact with the oocyte . These follicle cells have condensed chromatin and stain positively for acridine orange, although they do not display caspase-3 activity . These dying follicle cells also contain autophagosomes and autolysosomes. Remnants of the remaining follicle cells are found at the base of the ovary at the entrance of the lateral oviduct . These cells stain positively for acridine orange and show condensed chromatin, and ultimately appear to be engulfed by epithelial cells and macrophages at the oviduct entrance. Taken together, these findings suggest that follicle cell death could occur by an autophagic mechanism that does not utilize caspases, unlike the death of nurse cells in mid-oogenesis.
The genetic control of follicle cell death is largely unknown. A recent paper reports that a specific isoform of the ecdysone receptor (EcR-B1) is required for follicle cell survival in mid-oogenesis . RNAi knockdown of EcR-B1 leads to caspase activation and decreased levels of DIAP1. Given that caspase activation is not thought to occur during normal follicle cell death, these findings indicate that ecdysone signaling may act normally to prevent the apoptosis of follicle cells. EcR-B1-deficient follicle cells also show disruptions to the organization of the epithelial follicle cell monolayer, suggesting the effects on apoptosis could be indirect.
Insight into the novel mechanisms of cell death in both follicle cells and nurse cells comes from a recent study on endocycling cells . During oogenesis, both nurse cells and follicle cells exit mitosis and enter an endocycle where DNA replication continues in the absence of cell division, leading to polyploidy. Work from the Calvi group has shown that ectopic expression of Double-parked (Dup) activates DNA re-replication and apoptosis in most Drosophila cells. Follicle cells expressing Dup in early oogenesis undergo apoptosis, however later stage follicle cells that have entered the endocycle fail to undergo apoptosis despite experiencing re-replication and DNA damage. Subsequent analysis revealed that rpr, hid, grim and skl could not be induced in endocycling cells, suggesting that these loci are specifically repressed . Further support for this hypothesis comes from a recent study showing that these loci are epigenetically silenced by Polycomb-mediated repression in late embryogenesis . Taken together these findings suggest that expression of these IAP-binding proteins could be transcriptionally repressed by Polycomb in endocycling cells such as nurse cells and follicle cells. Thus the canonical cell death pathway that utilizes these IAP-binding proteins would not be initiated during PCD of endocycling nurse cells and follicle cells. These findings help to explain why follicle cells and nurse cells use alternative PCD pathways. However, reaper and hid transcripts have been detected in nurse cells at stage 10 , indicating that repression of these genes is at least partially reversible in late stage nurse cells. Perhaps the level of rpr and hid expression is insufficient to drive cell death of nurse cells, or other downstream targets of the rpr-hid cascade are not adequately expressed in late oogenesis.
There are intriguing similarities between mammalian and Drosophila ovarian cell death. As in flies, mammalian ovarian cell death occurs at multiple yet specific stages of oocyte development [reviewed in 82]. Lost primordial germ cells undergo PCD in mammals as they do in flies. Numerous follicles are destroyed in adult mammals by follicular atresia, a process comparable to mid-oogenesis PCD in flies. The structure of the insect egg chamber with the germline-derived nurse cells is not found in mammals, however mammalian oocytes develop in interconnected cysts early in oocyte development, and this stage is correlated with extensive germ cell death. Analogies have also been made between nurse cells and somatic granulosa cells in mammals . Similar to nurse cells, granulosa cells initiate the cell death process during follicular atresia and have been found to deliver cellular organelles to developing oocytes in some species. Germ cells are also lost at the “pachytene checkpoint” in both C. elegans and mammals [82, 83]. Interestingly, in Drosophila, pachytene occurs between stages 2 and 3 within the germarium , the same region found to exhibit sporadic PCD. Whether any of the PCD observed in the germarium is due to improper synapsis at pachytene remains to be shown. Excessive ovarian cell death in humans is associated with fertility disorders, chemotherapy-induced premature menopause, and poor outcome in assisted reproduction technologies, and is therefore of high clinical significance [reviewed in 82].
As in the Drosophila ovary, a balance between survival and death signals regulates follicular atresia in the mammalian ovary [reviewed in 82]. Cell death is induced by withdrawal of survival signals in conjunction with activation of pro-apoptotic signals. Interestingly, follicular atresia is influenced by hormones and the insulin/IGF signaling pathways in both flies and mammals. Similar to Drosophila, autophagic cell death and apoptosis may act cooperatively during follicle atresia in the rat and quail [82, 86]. Granulosa cell apoptosis during follicular atresia is mediated by both death receptors and the Bcl-2 proteins [reviewed in 82]. The involvement of multiple mechanisms for the selection of the healthiest oocytes is consistent with findings in the Drosophila ovary. Germline cell death is found in many, if not all, animals [18, 87], suggesting that it may play evolutionarily conserved roles in both development and germ cell selection.
The Drosophila ovary provides unique opportunities for the study of cell death. Cell death occurs at distinct stages and in response to diverse stimuli, but in only a small number of cell types. Most intriguingly, these cell deaths occur predominantly by unusual and still largely uncharacterized mechanisms. Genetic control over germline cell death in the germarium and follicle cell death is for the most part a black box, and systematic surveys of cell death mutants could be informative for these examples of PCD. Additionally, very little is known about how the follicle cells and other cells carry out engulfment of nurse cell and follicle cell remnants after PCD.
Cell death in the germarium and at mid-oogenesis is regulated by nutrient availability. A challenge for the future is to determine how the insulin and ecdysone signaling pathways might interface with the cell death and autophagic machinery in the ovary. In mid-oogenesis, the effector caspase Dcp-1 is essential for germline PCD and DIAP1 levels decrease in dying egg chambers [49, 50]. Interestingly, Drosophila salivary gland death utilizes both apoptosis and autophagy, and is also regulated by the PI3K signaling pathway and ecdysone, suggesting that there may be more parallels between ovarian and salivary gland cell death. In the case of the salivary gland, a decrease in PI3K signaling leads to a growth arrest that is necessary for the onset of cell death . It is unknown if such a link exists in the ovary.
However, unlike the salivary gland and other tissues in Drosophila, the ovary does not require initiator caspases or the cell death genes rpr, hid, grim and skl to bring about cell death [18, 34]. This implies that there are other mechanisms by which Dcp-1 and DIAP1 are regulated in mid-oogenesis. The insulin-mediated PI3K signaling pathway has been shown to regulate cell death in mammalian systems by direct phosphorylation and suppression of Bcl-2 pro-apoptotic family members and caspases [88, 89]. Perhaps a similar mechanism could regulate the activity of Dcp-1 or DIAP1. Another open question is how Dcp-1 regulates autophagy during cell death in the germarium and in mid-oogenesis. Dcp-1 may cleave and activate a component of the autophagic machinery directly, or activate a signaling protein that in turn triggers autophagy.
Developmental nurse cell death in late oogenesis has many unique and interesting characteristics. The nurse cells have a highly specialized function to provide essential components to the oocyte. Once the nurse cells have transferred their contents to the oocyte, their remnants, predominantly the large nurse cell nuclei, are removed. This presents a unique situation that may utilize distinct cell death mechanisms to reach this goal without damaging the oocyte. Canonical cell death components including caspases are only partially involved. Evidence for autophagic and necrotic mechanisms has also emerged but a big question remains as to whether these mechanisms are acting redundantly or in parallel to bring about the destruction of the nurse cell.
Finally, the upstream activators in developmental nurse cell death are still unknown. Several signaling pathways are present at the right time and place to be good candidates, and careful genetic dissection of these pathways may be enlightening. Alternatively, developmental nurse cell death may occur because of the loss of protective factors during the dumping process. In such a situation, the nurse cell nuclei die by extreme neglect, having lost most of their nutrients, proteins and organelles to the oocyte. Further study of cell death in the Drosophila ovary will provide insight into the diverse ways that cells die, potentially utilizing all three forms of cell death, apoptosis, autophagy and necrosis.
We thank Jeanne Peterson and Christy Li for comments on the manuscript, and Horacio Frydman and members of our lab for helpful discussions. Supported by NIH grant R01 GM60574 (KM). | <urn:uuid:b4befacd-090d-434b-91c3-235916384e5f> | CC-MAIN-2017-17 | http://pubmedcentralcanada.ca/pmcc/articles/PMC2810646/ | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121355.9/warc/CC-MAIN-20170423031201-00013-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.926824 | 7,652 | 2.609375 | 3 |
Businesses and individuals increasingly rely on computers and Internet-based networking. This brings many benefits, but also risks: when staff or business partners have constant access to internal networks from insecure locations, security becomes a major concern.
The Rise of Cybercrime
Cyberattacks generally refer to criminal activity involving the use of a computer network, normally conducted via the Internet. Internet users and organizations face an increased risk of becoming targets of cyberattacks. A 2013 Ponemon Institute study of organizations located in the United States found an 18 percent increase in successful attacks over the previous year.
Today, criminals have more advanced technology and greater knowledge of cyber security. Attacks may include financial scams, computer hacking, virus attacks and distribution, denial-of-service, theft of an organization’s information assets, posting of sensitive business data on the Internet, and malware.
Risks of Cybercrime
For businesses and corporations, the cost associated with cyberattacks is large. Stolen or deleted corporate data can inflict financial damage on the victim, damage the company’s reputation, and negatively affect people’s livelihoods. The risks are even higher for small companies, since their businesses may rely solely on project files or customer databases. The same Ponemon Institute study reported that in 2013, the average cost of cybercrime in the U.S. was $11.6 million annually, an increase of 26 percent from the previous year.
Organizations should follow basic guidelines in order to reduce the security threat to their data and devices. To prevent cyberattacks, companies should:
1. Use a Secure Connection to the Corporate Data
This generally involves implementing a Virtual Private Network (VPN). VPN technology provides protection for information that is being transmitted over the Internet by allowing users to form a virtual “tunnel” to securely enter an internal network to access resources, data and communications.
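Conceptually, the secure “tunnel” is a cryptographically protected channel, much like a TLS connection. As a rough, product-neutral sketch (not the API of any particular VPN), Python’s standard `ssl` module can build a client context that enforces the certificate verification a VPN gateway relies on:

```python
import ssl

# Build a client context with secure defaults: certificate
# verification and hostname checking are both enabled.
context = ssl.create_default_context()

# Refuse legacy protocol versions; TLS 1.2 is a common floor.
context.minimum_version = ssl.TLSVersion.TLSv1_2

print(context.verify_mode == ssl.CERT_REQUIRED)  # True: certificates must validate
print(context.check_hostname)                    # True: server identity is checked
```

An actual connection would then wrap a TCP socket with `context.wrap_socket(sock, server_hostname=...)`, so that all traffic inside the tunnel is encrypted and the remote endpoint’s identity is verified before any data is sent.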
2. Store Data Centrally
Centralized storage of data offers protection and increases speed, convenience and efficiency for accessing files. Sharing of files enables rapid and easy access to important data from virtually anywhere in the world. The mobility and control of data improve workflow effectiveness. Another crucial advantage of centralized data is cost. Although it is possible to store and back up data on multiple machines, it is considerably more cost effective to use central storage. For instance, data can be stored on a server within the corporate LAN, behind the firewall.
3. Use Modern Authentication Methods
Authentication is the process by which the parties at either end of a network connection verify each other’s identity. Verification is typically based upon something you know (such as a password), something you have (a smart card or token), or something you are (biometric techniques, including fingerprint and eye scans). Deployment of modern authentication methods, such as the Kerberos authentication protocol, provides confidentiality through encryption and ensures that no one can tamper with the data in a Kerberos message.
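For illustration, the “something you have” factor is usually a token that generates one-time codes. The sketch below implements the HOTP algorithm from RFC 4226, which underlies most hardware and app tokens, using only Python’s standard library; the secret shown is the RFC’s published test key, not a production value:

```python
import hmac
import hashlib
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """One-time code: HMAC-SHA1 over the counter, dynamically truncated."""
    msg = struct.pack(">Q", counter)                     # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # low nibble picks a 4-byte window
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"12345678901234567890"   # RFC 4226 Appendix D test key
print(hotp(secret, 0))             # 755224
print(hotp(secret, 1))             # 287082
```

In practice the shared secret is provisioned to both the token and the server, and each side computes the code independently; the widely used TOTP variant simply replaces the counter with the current time step.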
4. Use Reliable, Strong Encryption Technology
Encryption is the process of transforming information so that it cannot be deciphered by anyone except those holding special knowledge (generally referred to as a "key") that enables them to restore the information to its original, readable form. A VPN turns the Internet (an insecure environment) into a secure private network by providing strong encryption. In particular, an SSL VPN is best-suited for mobile apps.
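The key-based reversal described above can be illustrated with a one-time pad, in which each byte is XORed with a random key byte. This is a teaching toy only (production VPNs use vetted ciphers such as AES), but the roundtrip shows the core idea:

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each data byte with the corresponding key byte.

    Toy cipher for illustration only; real VPNs use vetted
    ciphers such as AES or ChaCha20.
    """
    return bytes(b ^ k for b, k in zip(data, key))

plaintext = b"confidential payroll record"
key = secrets.token_bytes(len(plaintext))   # random key, same length as the data

ciphertext = xor_bytes(plaintext, key)      # unreadable without the key
recovered = xor_bytes(ciphertext, key)      # the same operation reverses it

print(recovered == plaintext)               # True
```

Without the key, the ciphertext reveals nothing about the original bytes; with it, the exact plaintext is restored.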
5. Enforce Strong Passwords
Implementation of strong passwords is a basic security procedure; however, it is often overlooked. Complex, hard-to-crack passwords are a simple line of defense against a security breach. Password policies, which offer advice on proper password management, should be in place. Password best practices include:
• Avoid using dictionary words or common sequences, such as numbers or letters in sequential order or repetitive numbers or letters.
• Do not use personal information.
• Use special characters, such as * and #. Most passwords are case sensitive; therefore, use a mixture of upper case and lower case letters, as well as numbers.
• Choose a long password, as passwords become harder to crack with each added character.
• Create different passwords for different accounts and applications. That way, if one password is breached, the security of other accounts is not at risk.
• Never write down passwords and leave them unattended in a desk drawer or any other obvious place.
• Never communicate a password by telephone, e-mail or instant messaging.
• Never disclose a password to others, including people who claim to be from customer service.
• Change passwords whenever there is any doubt that a password may have been compromised.
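Several of the rules above can be enforced automatically at password-creation time. The following sketch reports which rules a candidate password violates; the length threshold and tiny word list are illustrative choices, not a standard:

```python
import re
import string

COMMON_WORDS = {"password", "letmein", "qwerty", "123456"}  # tiny illustrative list

def password_problems(pw: str, min_len: int = 12) -> list:
    """Return the violated rules; an empty list means the password passes."""
    problems = []
    if len(pw) < min_len:
        problems.append("too short")
    if pw.lower() in COMMON_WORDS:
        problems.append("common dictionary word")
    if not re.search(r"[a-z]", pw):
        problems.append("no lowercase letter")
    if not re.search(r"[A-Z]", pw):
        problems.append("no uppercase letter")
    if not re.search(r"[0-9]", pw):
        problems.append("no digit")
    if not any(c in string.punctuation for c in pw):
        problems.append("no special character")
    if re.search(r"(.)\1\1", pw):                 # three identical characters in a row
        problems.append("repeated character run")
    return problems

print(password_problems("qwerty"))             # fails several rules
print(password_problems("Blue#Otter7_Lamp"))   # []
```

A real policy engine would also check breached-password lists and rate-limit login attempts, but even this small filter rejects the most easily guessed choices.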
The growing popularity and convenience of digital networks have led to an increase in cyberattacks; consequently, keeping up to date with the most pressing concerns facing the organization is itself a challenge. Organizations can protect their highly sensitive information by following a safety plan and adopting reasonable security practices.
If you would like to learn more about VPN technology, and review some tips on critical security aspects, download our free e-book: How Do I Find the Best VPN Solution for My Company?
Controls are part of everyday life. Whether it is a workplace that requires a key fob or an identification badge, a password to log into the company network, or an access permission to use a copier, we encounter numerous controls and safeguards in the normal course of our daily lives.
Defining Control Activities
Control activities are actions taken to minimize risk. A risk is the probability of an event or action having adverse consequences on an organization, such as information assets that are not adequately safeguarded against loss.
Control activities occur throughout the organization and include diverse activities, including approvals, authorizations, verifications, reviews of operating performance, and security of assets.
Internal controls are a fundamental part of any organization’s financial and business policies and procedures. The advantages of internal controls are:
- Prevention of errors and irregularities; if these do occur, the inaccuracies will be detected in a timely manner
- Protection of employees from being accused of misappropriations, errors or irregularities by clearly outlining responsibilities and tasks
IT controls are a subset of internal controls, and refer to policies, procedures and techniques applied to computer-based systems. IT controls are essential to protect assets, highly sensitive information and customers. They support business management and governance, and provide general and technical controls over IT infrastructures.
Subdivisions of IT Controls
Generally, IT controls are divided into two main categories:
1. General Controls
These apply to all system components, processes and data for a specific organization. General control activities are conducted within the IT organization or the technology they support, which can be applied to each system that the organization depends upon. These controls facilitate confidentiality, integrity and availability, contribute to the safeguarding of data, and promote regulatory compliance. General controls make safe reliance on IT systems possible. Examples of such controls include access controls (physical security and logical access) and business continuity controls (disaster recovery and back-up).
2. Application Controls
These controls are business process controls, and contribute to the efficiency of individual business processes or application systems. Examples of application controls include access authorization, which is essential for security of the corporate network. This prevents users from downloading illegal material or viruses, and may also block unproductive or inappropriate applications. Other examples of application controls include segregation of duties and concurrent update control.
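Access authorization and segregation of duties can both be expressed as a simple role-to-permission mapping. The roles and actions below are made up purely for illustration; note that no single role can both create and approve an invoice, which is the essence of segregation of duties:

```python
# Hypothetical role/permission table; real systems keep this in a
# directory or database, but the authorization check is the same.
ROLE_PERMISSIONS = {
    "clerk":   {"create_invoice", "view_invoice"},
    "manager": {"approve_invoice", "view_invoice"},
    "auditor": {"view_invoice", "view_reports"},
}

def authorized(role: str, action: str) -> bool:
    """Deny by default: allow only actions the role explicitly grants."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(authorized("clerk", "create_invoice"))    # True
print(authorized("clerk", "approve_invoice"))   # False: duties are segregated
```

The deny-by-default lookup means an unknown role or an unlisted action is always refused, which is the safe failure mode for an application control.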
Modern IT Solutions
Virtual private network (VPN) technology enables a secure connection to the organization’s data to be made over insecure connections, such as the Internet, and is essential to providing comprehensive security, safety and flexibility to businesses. Furthermore, advanced VPN technology offers several services which help users maintain access to critical information. VPNs facilitate the implementation of IT controls. For instance, VPNs provide dynamic access portals, whereby network managers can define server access with application publishing in such a way that the user only sees his or her personal, customized portal.
Control activities occur throughout the organization, and IT controls are fundamental to protect information assets and mitigate business risks. Deployment of a modern virtual private network (VPN) technology facilitates the implementation and management of IT controls.
If you would like to learn more about VPN technology, and review some helpful tips on critical security aspects, download our free e-book: How Do I Find the Best VPN Solution for My Company?
The year 2013 is synonymous with cyber attacks and numerous data breaches. Individuals and organizations worldwide are now more aware of widespread surveillance and cyber threats. But what are the costs associated with business security breaches?
1. Direct Financial Loss
Attackers may specifically target customers’ credit card numbers, employees’ checking account numbers, and the company’s merchant account passwords. Especially in the financial services industry, indirect legal fees or fines resulting from the security incident can significantly increase the costs, independent of whether the criminal is brought to justice.
2. Violation of Privacy
Employees are trusted to keep personal information private. Likewise, customers trust the organization to keep their credit card numbers and credit histories confidential. If this privacy is violated, legal consequences arise.
3. Lower Competitive Advantage and Lost Sales
Theft, modification or destruction of proprietary sales proposals, business plans, product designs or other highly sensitive information can give competitors a marked advantage. Sales are also lost as a consequence of a cyber attack, and the repercussions ensue long after the incident takes place.
4. Damage of Corporate Reputation and Brand
Building and maintaining a corporate image and establishing trusted relationships with customers and business partners is critical to an organization. However, corporate credibility and business relationships can be considerably damaged if proprietary or private information is compromised.
5. Loss of Business Continuity
In the case of a service disruption caused by a data breach, the IT team must quickly address the problem to minimize system downtime and restore data from backup files. Nonetheless, when mission-critical systems are involved, any downtime can have catastrophic consequences. In other cases, when lost data must be meticulously reconstructed by hand, system availability can fall below acceptable levels.
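One small, concrete safeguard that supports fast and trustworthy restores is verifying backup integrity with a cryptographic checksum before the data is relied upon. A minimal sketch using Python’s standard library (the payload here is a stand-in for a real backup file):

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """SHA-256 digest of the data, recorded alongside the backup."""
    return hashlib.sha256(data).hexdigest()

backup = b"serialized customer records (stand-in for a real backup file)"
stored_digest = fingerprint(backup)             # recorded at backup time

# At restore time, recompute and compare before trusting the data.
print(fingerprint(backup) == stored_digest)     # True: backup is intact

tampered = backup + b"!"
print(fingerprint(tampered) == stored_digest)   # False: corruption detected
```

Any accidental corruption or deliberate tampering changes the digest, so a mismatch tells the IT team to fall back to another copy instead of restoring bad data.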
Business Network Protection
As discussed above, the consequences of security breaches are vast and long-lasting. Many organizations now use remote access solutions to maintain a high level of security for sensitive corporate information. In particular, many companies opt for SSL VPNs due to their flexibility: SSL VPNs are not restricted to employee remote access, but can also incorporate partners, contractors, and possibly even customers. The increasing number of hacking attacks and the growing sophistication of security threats demand advanced network security via a high-quality VPN as a component of a comprehensive business security policy.
If you are interested in how to secure your network from cyber attacks, we invite you to visit our website www.hobsoft.com. On our website you will be able to find data sheets of our VPN solutions as well as interesting e-books and whitepapers.
Author: Hazel Farrugia
Today, mobile workforces stay connected in and out of the office and use their devices for work and personal purposes. The ultimate goal of a remote working strategy is to increase productivity and reduce costs; indeed, studies by Best Buy, Dow Chemical and many others have proven that teleworkers are 35-40% more productive than their in-office counterparts.
The drafting and implementation of an organization-wide workplace strategy will ensure that end users at all levels of the organization will enjoy a positive experience. The following are five best practices that effectively boost remote workers’ productivity:
1. Maximize Employee Participation
Maximizing employee participation is the first step to maximizing employee productivity. Not all employees benefit equally from remote working; however, without a critical mass of users, the benefits will be limited. IT teams should not restrict solutions, such as mobile workplaces, to only those who “seem” to need them. Remote working allows employees to respond to colleagues and customers faster; therefore, IT teams and managers should not deter employees from working anywhere, anytime.
2. Ensure Employees Have the Productivity Tools they Require
Employees should be encouraged to use a wide range of productivity tools which do not pose network security risks. However, if IT teams are uncertain how to handle such employee requests, they generally allow employees to use these tools without providing adequate security, or block the use of the tools entirely. Regardless of the circumstances, IT teams should circumvented security risks by deploying security solutions that allow employees to utilize tools without compromising the network security.
3. Free Use of Personal Apps and Services
Whether the device is personally owned or provided by the company, employees should be able to use their personal apps and services. Blocking an employee from storing their personal information with a cloud service provider is significantly different from ensuring corporate data does not end up in the public cloud. IT teams should focus on controlling data rather than controlling devices.
4. Offer Self-Service Support for Everyday Activities
There is a common notion that mobile devices will result in an increase in support costs – however this is a misconception. Conversely, if the IT teams provide a self-service capability, particularly for routine activities, it usually results in decreased in support costs. IT teams should stop short of supporting personal apps and services, but should invariably offer to assist with supporting business apps.
5. Support Wide Range of Devices
For the mobile workplace program to be widely adopted, the program should support a wide range of devices. Though challenges may arise, such as Android’s variability regarding support for on-device encryption and other enterprise-level security and management controls, the overall benefit is net positive.
The Future of Remote Working
The current trend towards remote working is expected to become even more prevalent in the future. With the right practices and controls in place, employee productivity can be maximized, without putting the security of the network at risk.
If you would like to learn about the advantages and limitations of mobile workplaces, and find out how to develop a strategy for mobile workplaces with the help of VPNs, please download our free eBook “Home Offices Made Easy”.
Author: Hazel Farrugia
Remote access via virtual private networks (VPNs) is a major technological advancement reshaping organizations worldwide, including educational institutions. The IT solutions of all educational institutions, ranging from primary schools to universities, face unique challenges in order to provide a more advanced learning and working environment, while also maintaining security requirements and optimal IT efficiency.
Common Applications in an Educational Institution:
Educational institutions require numerous IT applications, which are managed by the network support teams. These include:
- email accounts for students and faculty
- secure email access
- intranet set up and functionality
- web and mail services
- storage and management of sensitive data
- online examination management and results posting
- secure intra-departmental data transfer
- secure remote access to server rooms and on-site data centers; and
- maximum security levels preventing hacker attacks, and enabling secure login and sensitive information transfer
In addition to providing a secure mechanism to access the above list of necessary applications, IT administrators are also responsible for minimizing network downtime, monitoring uptime, and keeping service costs under control. In order to provide this, remote access technology is the optimal solution.
Reasons for Using Remote Access:
1. 24/7 Accessibility
Remote access through VPNs provides cost-effective 24/7 data access to students and staff from anywhere.
2. Reduced Security Concerns
VPN technology allows secure remote access to educational resources and individual desktops for faculty and staff members through encrypted connections, via Web Secure Proxy and secured authentication methods.
Innovative remote access solutions implement a security strategy that also includes firewalls, anti-virus software and intrusion prevention services to protect vital and sensitive information within the network.
3. Reduced Investment in Technology Infrastructure
Due to the potential for mechanical failure, hardware solutions are prone to break downs. Initial costs and costs to repair cause hardware solutions to be significantly less viable than pure software solutions. Additionally, software solutions enable IT administrators to resolve several problems remotely, thereby further reducing costs and resource use. The implementation of a software based solution has the additional benefit of optimizing existing server resources, which reduces total cost of ownership.
4. High Availability
Access from the client requires a Web browser only. This allows for specialty software applications to be made more readily and widely available to the students, staff and faculty. This high application availability allows for e-learning programs and superior online delivery methods after school hours.
The total enrollment in public and private postsecondary institutions increased 47% between 1995 and 2010, and a further increase of 15% is expected between fall 2010 and fall 2020. The growth in the number of students attending educational institutions puts network administrators under pressure to increase the amount of PCs and network facilities in order to accommodate their staff and students. An increase in terminals necessitates an increase in the number of servers; since these servers are the pillar of the institution’s Network, it is important that they be consistently reliable, as network downtime implies an interruption of essential services.
High-quality VPNs allow for workload balancing of cluster servers, meaning the division of a computer/network’s workload between two or more computers/servers. This process facilitates the system’s optimum performance, which results in faster data access. Load balancing also prevents failover, which occurs when a user cannot access a database in a cluster - either because they cannot access the database itself or they cannot access the database server.
A VPN is highly scalable and supports many different platforms. VPN technology provides remote access via any device, such as desktop computers, notebooks and tablets, and all operating systems are supported, including Microsoft Windows, Apple MAC OS X, and Linux. In addition, this technology allows educational institutions to purchase resources as needed. If the institution experiences significant growth, it can easily increase the capacity of their remote access solutions. Conversely, if their needs decrease, they can scale down.
6. Single Sign-On
Single sign-on is a capability that enables secure authentication across many services with only one password. It allows users to be logged into multiple services once the user has signed in to one. Single sign-on streamlines the authentication process for the user, while simultaneously protecting the institution’s resources.
Remote access technology has proven beneficial to several organizations as it optimizes resources, decreases administrative costs, increases productivity and enhances the learning process. Today, remote access technology for educational institutions is considered an essential part of a comprehensive IT security infrastructure.
Author: Hazel Farrugia
In one of our last blog posts, we introduced the concept of WAN clustering (the use of multiple redundant computing resources housed in different geographical locations that form, what appears to be, a single, highly-available system), and its role in disaster recovery and business continuity. Part II takes a deeper dive into WAN clustering and its role in load balancing.
The Need for Load Balancing
In the Internet Age, the networking (connecting) of enterprise IT infrastructure to its customers or suppliers has become mission critical. Data centers full of server farms were created by the proliferation of servers for diverse applications. The complexity and challenges in scalability, manageability, and availability of server farms is one driving factor behind the need for intelligent switching. It is unacceptable for a network to fail or exhibit poor performance, as either will virtually shut down a business in the Internet economy. In order to ensure scalability and high availability for all components, load balancing emerged as a powerful tool to solve many of the issues associated with network failure and poor performance.
Load balancing is the division of computer/server/network workload amongst two or more computers/servers. Load balancing can be implemented with hardware, software or a combination of both.
In the case of load balancing Web traffic, there are several options. For Web serving, one option is to route each request to a different server host address in a domain name system (DNS) using the round-robin technique. Usually, if two servers are used to balance a work load, a third server is needed to determine to which server work is assigned. Another option is to distribute the servers over different geographic locations.
Benefits of Load Balancing
This technique offers a number of important benefits, including increased network utilization and maximized throughput; minimizing the load on individual systems and decreasing response time; improved user satisfaction, reliability and scalability.
Generally, load balancing is the primary reason IT teams opt for a clustering architecture. Companies whose websites receive large volumes of traffic also commonly select clustering architecture, so as to avoid a situation where a single server becomes overwhelmed. Workload balancing of cluster servers facilitates the system to attain optimum performance, resulting in faster data access.
Additionally, the process also prevents failover, which occurs when a user cannot access a database in a cluster, due either to inability to access the database itself or inability to access the database server.
Virtual Private Network (VPN) technology is also critical to an effective load balancing strategy. A fast, safe and secure transfer of critical business data among servers optimizes the user experience, while simultaneously giving employees/users anytime, anywhere access to critical information.
As implementation of web applications grows and user bases become more geographically diverse, load balancing becomes increasingly less of an option, and more of a requirement in IT planning and provisioning. Load balancing enables organizations to run uninterrupted operations when WAN clustering is supported by reliable, well-managed VPNs.
If you would like to learn more about WAN clustering, and explore how VPNs can help you to create an optimal WAN clustering solution for your needs, download this free eBook:
Effective WAN Clustering Relies on High-Quality VPNs
Author: Hazel Farrugia
More and more, “work” is being defined as something people do, rather than the place people go. Today’s organizations are shifting away from the usual nine-to-five workday, and progressing towards the trend of remote working (also called telecommuting). Remote working enables organizations to gain a competitive advantage from higher productivity, better work-life balance and decreased costs.
However, IT teams frequently face several problems related to mobile workplace deployments. The most common pitfalls are:
1. Ignoring Common Threats
Security risks posed by malware have been on the top agenda of many security teams; however, a more frequent threat nowadays is mobile phishing. Phishing occurs when identity thieves collect user information such as name and password, Social Security number, date of birth, ATM PIN or credit card information, for use in committing fraud or other illegalities. Since it is more difficult to identify fake URLs on a mobile device, it is more likely that remote workers will succumb to a phishing scam, than their in-office counterparts.
2. Taking a One-Size Fits All Approach
Managing mobile device security is more limited, and normally exerts a level of inconvenience for users. For instance, mobile virtualization can allow users to work remotely without any data on their devices; however this may be overkill for the employee who simply wants access to corporate email.
3. Failing to Educate Users
As more organizations adopt the mobile workplace strategy, managing the employees who use mobile technology has become more arduous. IT teams should educate employees to participate in keeping corporate data secure.
4. Assuming Users will Follow Security Policies
The organization should draft, write and implement comprehensive and reasonable security policies to efficiently manage and protect information. IT teams should focus on protecting the company's highly-sensitive information assets, rather than the devices used by remote workers themselves. IT teams must also educate users on why it is important for them to follow the policies put in place.
For any business which has implemented a remote workforce strategy, or those wishing to deploy such a strategy, it is important that IT teams overcome these problems in order to protect the company’s resources.
If you would like to learn more about mobile workplaces, and find out which security issues need to be addressed, you can download our free eBook “How VPNs Help Providing Secure Mobile Workplaces”. | <urn:uuid:03390390-2e71-4c0e-8c46-49cd70e8565d> | CC-MAIN-2017-17 | http://212.185.199.48/tag/secure/page/2 | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917119838.12/warc/CC-MAIN-20170423031159-00600-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.928843 | 4,992 | 3.375 | 3 |
THE BLOODY BATTLE THAT FORGED AN AUSTRALIAN LEGEND
"Into the jaws of Death, into the mouth of Hell"
Alfred, Lord Tennyson, from "The Charge of the Light Brigade".
"Into the Mouth of Hell" - the 39th Battalion crosses the Mountains
In June 1942, with his mind focussed unwaveringly on recovery of the Philippines from the Japanese, General Douglas MacArthur was planning to establish a forward airbase at Dobodura near the village of Buna on the northern coast of the Australian Territory of Papua. This airbase would enable Allied aircraft to strike at Japanese bases on the northern coast of the New Guinea mainland and at the major Japanese base at Rabaul on the island of New Britain. Rabaul in American hands would provide MacArthur with an important stepping-stone towards the Philippines.
In New Guinea, Australian soldiers had to battle not only the Japanese invaders but the unforgiving terrain and climate. The steep gradients in the rugged mountains and the dense rain forests made movement difficult, exhausting, and at times dangerous. When the rains fell, dirt tracks quickly dissolved into calf-deep mud that exhausted the soldiers after they had struggled only a few hundred metres, and sluggish streams in mountain ravines swelled into almost impassable torrents. In this image, an Australian "Digger" lends a hand to a mate. The painting "Through the Rain Forest" by official Australian war artist Sir William Dargie depicts Australian soldiers in New Guinea.
On 29 June 1942, General Blamey directed that militia troops of the Port Moresby garrison were to cross the mountains of the Owen Stanley Range and resist any attempt by the Japanese to seize a vital airstrip at the village of Kokoda. The Owen Stanley Range is the massive, rugged, central mountain feature of the island of New Guinea which separates the northern coast of Papua from the southern coast. The Kokoda airstrip was located on the northern foothills of the Owen Stanley Range, and situated just over half way between Port Moresby on the southern coast of Papua and the adjacent villages of Gona and Buna on the northern coast. Having secured the Kokoda airstrip, Blamey envisaged that these militia troops could be used later to protect the Allied airbase planned for Dobodura.
MacArthur and Blamey blind themselves to Japan's strategic goals in New Guinea
This was a dangerous mission based upon a faulty appreciation of Japan's strategic goals by MacArthur and Blamey. They knew that the Japanese had already established military bases on the northern side of the Owen Stanley Range at the coastal towns of Lae and Salamaua. They had received intelligence warnings that the Japanese would try to capture Port Moresby by crossing the Owen Stanley Range. Despite these warnings, both MacArthur and Blamey chose to believe that the Owen Stanley Range would prove impassable for a Japanese army.
There is no evidence that MacArthur and Blamey reached this conclusion on the basis of a survey of the terrain by the senior army commander at Port Moresby, Major General Morris. Despite having had ample time to do so in the preceding six months, and despite the fact that the Kokoda Track would have to be used by the Japanese if they wanted to capture Port Moresby by an overland attack, Major General Morris had made no attempt to acquaint himself with the difficulties that troops would face in negotiating the Kokoda Track. Morris was not alone in his inexcusable ignorance of the nature of the terrain between his force and the enemy. The senior commanders in Australia, Generals MacArthur and Blamey, were also inexcusably ignorant, and worse still, they chose to maintain their ignorance throughout the Kokoda campaign.
Although far removed in Australia from the realities of the harsh terrain and climate of the central New Guinea mountains, Blamey must have known that he was sending inadequately trained recruits across one of the most daunting natural barriers in the world and that, with the barest equipment, minimum rations, and inadequate means of supply and communication, these raw troops might have to fight the most formidable jungle troops in the world. With a wide and rugged mountain range between them and Port Moresby, the Australian militia troops could expect no ready help if they met a strong force of Japanese troops between Kokoda and Gona. Moreover, if forced to retreat, the militia troops would have a massive natural barrier between them and Port Moresby. Once in possession of Kokoda, their only realistic hope of quick supply and reinforcement was by means of the small village airstrip.
The sending of militia troops across the Owen Stanley Range in these circumstances must raise serious doubts about the military judgment of both MacArthur and Blamey and their fitness to hold office as senior commanders. If they honestly believed that the Owen Stanleys would prove impassable for tough Japanese troops, how could they reasonably expect inadequately trained and equipped militia troops to cross the Owen Stanleys and be capable of meeting the Japanese on equal terms?
Veteran troops of the Australian Imperial Force (AIF) 7th Division had been back in Australia since March 1942 when they were recalled from the Middle East. Provided that it was adequately supplied, one brigade of the 7th Division could have been sent by Blamey on this dangerous mission across the Owen Stanley Range to defend Kokoda. The threat of a Japanese invasion of the Australian mainland had greatly diminished after the Battle of Midway, and Blamey must have been aware of this. Despite this less threatening strategic situation in the South-West Pacific, it is difficult to avoid a conclusion that Blamey was preserving the 7th Division troops for possible defence of the Australian mainland, and that he regarded the young Australian militia troops in New Guinea as expendable. With a very clear indication of Japanese intentions towards Port Moresby at the Battle of the Coral Sea, it is also difficult to avoid a conclusion that Blamey was inexcusably blind to Japan's fierce determination to capture Port Moresby.
Major General Morris assigned the Kokoda mission to militia troops of the 39th Australian Infantry Battalion.
Composition of an infantry battalion on the Kokoda Track in 1942
For reference purposes, it may be convenient to repeat at this point the composition of an infantry battalion in 1942, because references will be made from time to time to the components of a battalion when dealing with land battles on the island of New Guinea.
In 1942, an Australian infantry battalion was composed of several companies, usually four rifle companies and a headquarters company, and designated respectively: A, B, C, D and HQ. Each rifle company was composed of three platoons which were identified by numbers starting from one. On the Kokoda Track, the number of troops in each of the components of an infantry battalion could vary significantly, and it is convenient to think in terms of a range of 450-550 soldiers when battalions are mentioned, about 100-110 for a rifle company, and about 30-35 for a rifle platoon.
B Company of the 39th Battalion is ordered to defend Kokoda airstrip against the Japanese
The dangerous task of defending the Kokoda airstrip from the Japanese was given to B Company of the 39th Infantry Battalion commanded by Captain Sam Templeton. This militia rifle company was composed of three platoons, and numbered in total about 94 officers and other ranks. A contingent of native troops of the Papuan Infantry Battalion (PIB), led by Major W.T. Watson, accompanied the militia troops. About 120 native carriers also accompanied the troops with additional rations and equipment on their backs.
The supply problem created by conditions on the Kokoda Track required the troops to carry very heavy packs and equipment. The minimum weight carried by each man was 18 kilograms (about 40 pounds), but with a .303 Lee Enfield rifle, and other battalion equipment passed around in rotation, the burden for each man could reach as much as 27 kilograms (about 60 pounds). Burdened with heavy packs, rifles, and ammunition, and wearing khaki uniforms more suited to desert warfare than a jungle killing ground like the Kokoda Track, Templeton and his troops set out on 7 July 1942 to climb the mountains between them and Kokoda. They could not have realised at the time that they were marching into history, and establishing the initial foundations of Australia's Kokoda tradition.
The Track between Port Moresby and the northern Coast of Papua
To appreciate the physical obstacles and supply problems which would face Australian and Japanese troops when they penetrated deeply into the rugged mountains of the Owen Stanley Range, it is useful to consider the nature of the terrain that the troops of each army would need to traverse as they moved towards the vital airstrip at Kokoda from the southern and northern coasts of Papua.
The Track between Port Moresby and Kokoda, also called "The Kokoda Track"
On the southern side, the Owen Stanley Range was approached in 1942 by a narrow dirt road which left Port Moresby and gradually ascended the foothills to the rubber plantations at Ilolo and Koitaki. Ilolo is situated about 40 kilometres (25 miles) from Port Moresby. See Kokoda map. After Ilolo, the road narrowed to a track which rose sharply to Owers Corner at a height of 610 metres (2000 feet) above sea level. The Kokoda Track commenced at Owers Corner. The narrow track crossed a succession of high mountain ridges, wound through thick jungle, clung precariously to steep mountain sides, plunged into deep, densely forested ravines between ridges, forded fast-flowing mountain streams, climbed peaks as high as 2,100 metres (6,800 feet) above sea level before reaching the last towering ridge on which lay the village of Isurava at a height of 1,372 metres (4,500 feet) above sea level. From Isurava, the track fell sharply over rough terrain to the village of Deniki. From Deniki it was a relatively easy three hour march to Kokoda.
Kokoda village is situated on a small plateau about 366 metres (1200 feet) above sea level on the northern foothills of the Owen Stanleys. The military significance of Kokoda in 1942 was largely related to its possession of a small airstrip. Being located roughly half way between Gona-Buna and Port Moresby, the airstrip at Kokoda was a vital acquisition and means of supply for troops of an invading Japanese army faced with crossing one of the most formidable mountain ranges in the world. It was an equally vital means of supply for a defending Australian army faced with the daunting task of blocking the passage of Japanese troops across the Owen Stanley Range to Port Moresby.
On this narrow track between Owers Corner and Kokoda, now variously known as the Kokoda Track or Kokoda Trail, an Australian legend would be forged when Australian soldiers confronted Japanese troops invading Australian territory in overwhelming force.
The Track between Gona and Kokoda
From the adjacent villages of Gona and Buna on the northern coast of Papua, narrow dirt tracks traversed coastal jungle and swamp before meeting at Igora and continuing as one track to the small village of Awala situated 56 kilometres (35 miles) inland. From Awala, the track rose gradually as it traversed the foothills of the massive Owen Stanley Range. The track crossed the wide, fast-flowing Kumusi River at Wairopi by means of a flimsy wire suspension bridge, and then passed through the small villages of Gorari and Oivi to reach Kokoda. In the oppressive coastal heat, it was a three day march from Gona-Buna to Kokoda for heavily laden troops.
From Kokoda, the narrow track passed through the village of Deniki and then rose sharply over rugged terrain to reach the small village of Isurava on top of the first of a series of towering Owen Stanley mountain ridges which lay between Kokoda and Port Moresby.
B Company's trek to Kokoda
Templeton and his troops did not realise that they had been sent into a green hell by a commander who was completely ignorant of the extremely rugged mountain conditions on the Kokoda Track and apparently lacking any foresight that they might find a Japanese army waiting to greet them at the other end of the track.
Major General Morris had no transport aircraft in Port Moresby at this time, and when B Company was deep in the mountains, further supplies of food and equipment would have to follow them over the Kokoda Track on the backs of more native carriers. Morris ordered Lieutenant H. T. Kienzle, a local European officer with lengthy experience working with native labour on plantations, to take a large number of native carriers and establish staging camps for his troops along the full length of the Kokoda Track. Kienzle had already traversed the Kokoda Track in March of that year and was aware of the obstacles and hazards that the Australian troops would face. Kienzle met B Company at Ilolo where the dirt road from Port Moresby ended. His native carriers were sent ahead of the troops to set up the staging camps.
To ease the burden for Templeton's troops and the native carriers on the Kokoda Track, Morris also arranged for the lugger Gili Gili to carry some supplies and equipment around the Papuan coast to Buna where they could be collected by B Company after it had crossed the central mountain range and secured the Kokoda airstrip.
General Morris' lack of appreciation of the ruggedness of the Kokoda Track, and the magnitude of the task he had set Templeton and B Company, can be gauged from the fact that he also ordered Lieutenant Kienzle to take one thousand native labourers and build a road along the whole length of the Kokoda Track by 26 August 1942.
When B Company reached Kokoda on 15 July 1942 the troops were exhausted by the journey. They rested at Kokoda, and were fed by Lieutenant Kienzle from his own nearby plantation. Kienzle then returned across the Kokoda Track to Port Moresby and reported to General Morris that the one way trek to Kokoda required eight days for heavily laden troops, and that native carriers could not carry sufficient food, equipment and other supplies to maintain troops when they were deep in the mountains or on the other side of the range. Kienzle pointed out that large-scale supply from the air would be essential to support troops in the Owen Stanley Range.
While his troops were resting at Kokoda, Captain Templeton pressed on with some of his native carriers to meet the lugger Gili Gili at the coastal village of Buna and collect the rest of his supplies and equipment. While Templeton was engaged in these essential military "housekeeping" tasks, the Japanese were about to change his life irrevocably.
Japanese Troops land on the northern Coast of Australian Papua
With the ultimate aim of capturing Port Moresby, expelling Allied forces from the island of New Guinea, and isolating Australia from its ally, the United States, Lieutenant General Hyakutake landed a Japanese Army advance force variously estimated at between 1,500 and 2,000 troops near the village of Gona on the north coast of the Australian Territory of Papua on 21 July 1942. The immediate aims of this advance force were to secure the coastal strip between Gona and the nearby village of Buna, reconnoitre the area between Gona and the Australian administrative post at Kokoda, seize Kokoda, and assess the practicability of using the Kokoda Track as a route for Japanese troops to capture Port Moresby. If the overland route was deemed practicable, a much larger Japanese force would quickly follow.
The advance force included a large number of Japanese Army construction engineers under the command of Colonel Yosuke Yokoyama. If the Kokoda route to Port Moresby was practicable, his task was to construct access routes, light bridges and supply depots for the movement of a large body of Japanese troops towards Port Moresby.
The advance force also included a battalion of troops from the 144th Regiment of Japan's elite South Seas Detachment or Nankai Shitai under the command of Lieutenant Colonel Tsukamoto and a company of elite Japanese marines of the 5th Sasebo Naval Landing Force. These combat troops were all battle-hardened veterans of jungle warfare in South-East Asia. They were trained to live and fight in the jungle, to blend with it, and to move quietly and efficiently through it without need for roads or tracks. Their task was to deal with any Australian troops who might be met on the route to Kokoda, or at Kokoda. Unlike the militia troops of the 39th Battalion whom they would soon face, these Japanese combat troops were heavily armed. They were equipped not only with small arms, but also with heavy machine-guns and mortars.
The young militia troops of the 39th Australian Infantry Battalion would prove their worth against Japan's best troops in the rugged mountains of New Guinea.
Despite receiving some initial attention from Allied aircraft, a beachhead was quickly established, and protected by anti-aircraft batteries. The Japanese troops then moved inland along a jungle track towards Kokoda.
MacArthur and Blamey are not unduly troubled by the Japanese landing in Papua
The landing of a large formation of Japanese troops on the northern coast of Papua at Gona did not trouble Generals MacArthur and Blamey unduly. Far away in Australia, they were busy planning the establishment of their forward airbase at Dobodura which was situated roughly 24 kilometres (15 miles) inland from the neighbouring coastal villages of Gona and Buna. MacArthur and Blamey viewed the Japanese landing at Gona as an annoying intrusion upon that planning. They appreciated that the Japanese might probe in the direction of Kokoda, but they believed that the main purpose of the Japanese in landing a large force at Gona was simply to establish a forward base at either Gona or Buna. MacArthur and Blamey lacked the flexibility of military vision that would have enabled them to foresee the possibility that Kokoda, and ultimately Port Moresby, were the real targets of the Japanese landing. To counter the Japanese beachhead at Gona-Buna, Blamey ordered Morris to send the remaining companies of the 39th Battalion across the Owen Stanley Range to join B Company at Kokoda.
On 23 July 1942, Major General Morris ordered the commander of the 39th Battalion, Lieutenant Colonel Owen, to fly to Kokoda and then prepare to defend it with his battalion, which was to be given the name "Maroubra Force". At this stage, Morris appears to have felt no sense of urgency about reinforcement of B Company at Kokoda. On that same day, C Company left Ilolo to undertake the long trek to Kokoda. The remaining companies of the 39th Battalion would follow C Company across the mountains as soon as preparations could be completed.
Templeton deploys B Company between Gona and Kokoda
In response to the Japanese landing at Gona, Captain Templeton returned from the coast to Awala on 22 July. He ordered 11 Platoon and native PIB troops to leave Kokoda and move down to Awala. He ordered 12 Platoon to move down from Kokoda to the village of Gorari which was situated about half way between Awala and Kokoda. 10 Platoon was to remain at Kokoda and guard the airstrip. Having deployed his small force between Gona and Kokoda, Templeton left Major W.T. Watson of the Papuan Infantry Battalion in command at Awala and returned to Kokoda to meet his Commanding Officer, Lieutenant Colonel Owen, who was expected to arrive by air from Port Moresby on 24 July.
On 23 July, the approach of Japanese troops unnerved a PIB patrol at Awala, and many of the native troops disappeared into the bush. As the Japanese continued to advance towards Kokoda, 11 Platoon fell back to the southern bank of the Kumusi River at Wairopi and it was joined by Major Watson with his officers, NCOs and a handful of PIB troops. The wire bridge across the river was then cut.
Having been informed that between 1,500 and 2,000 Japanese troops had landed at Gona, Templeton ordered 11 Platoon to fall back to Gorari if contact was made with Japanese troops. On the afternoon of 24 July, Japanese troops appeared on the Gona side of the Kumusi River and fire was exchanged across the river. 11 Platoon and Major Watson's PIB troops then withdrew and joined 12 Platoon at Gorari.
The Japanese advance towards Kokoda
In the early morning hours of 25 July, Lieutenant Colonel Owen and Captain Templeton joined 11 and 12 Platoons at Gorari. In an attempt to slow the Japanese advance until reinforcements could arrive from Port Moresby, Owen set up an ambush on the track leading from the Kumusi River to Gorari and then returned to Kokoda to await expected reinforcements. The Japanese were only briefly delayed by the ambush before advancing relentlessly on the small Australian rearguard. The tough, jungle warfare veterans of Japan's 144th Regiment appeared to find the jungle terrain of New Guinea no impediment, even though they were carrying heavier weapons than the Australians. With at least 500 of Japan's best troops pursuing them aggressively, the two Australian platoons staged a fighting rearguard withdrawal down the track to the village of Oivi where they intended to make a stand.
Lieutenant Colonel Owen already had C Company of his battalion in the mountains heading towards Kokoda, but these troops were still six days away and the vital airstrip at Kokoda was clearly under threat from the speed of the Japanese advance. Owen contacted Port Moresby on the night of 25 July. He pointed out that the situation was becoming threatening, and asked for two of his rifle companies to be flown to Kokoda on the following morning. It was only a twenty minute flight by air.
The Australian stand at Oivi
On the morning of 26 July, one platoon of D Company of the 39th Battalion was flown to Kokoda in two separate flights. As each half of the platoon arrived, the fifteen troops were sent on to reinforce the two platoons at Oivi. Apart from this one platoon of D Company troops, no more reinforcements for Owen's beleaguered men at Oivi arrived from Port Moresby by air.
At Oivi, the Japanese attacked the small Australian force aggressively, providing them with a taste of Japanese tactics which would be used repeatedly on the Kokoda Track, and which had already been used to devastating effect in the jungles of Malaya. The life of every Japanese soldier belonged to Emperor Hirohito, and was expendable. With the advantage of overwhelming numbers on the Kokoda Track, the Japanese could afford to expend troops in apparently suicidal frontal attacks. While the Australians were fighting desperately to contain this frontal attack, other Japanese troops would try to work their way around the flanks of the Australian position with a view to encircling it. If this tactic succeeded, the end usually followed swiftly for the encircled troops.
Templeton had only about 75 Australian militia troops and a handful of local PIB troops at Oivi, and he was facing several hundred Japanese troops, including a company of the crack 5th Sasebo marines. The second half of the D Company platoon flown to Kokoda that morning had not yet arrived at Oivi. The Japanese continued to attack aggressively throughout the afternoon of 26 July, always attempting to outflank and encircle the Australians. For Templeton, there was no thought of withdrawing despite the desperate plight of his small command. He knew that if the Japanese broke through, they would capture the airstrip at Kokoda and be well down the Kokoda Track towards Port Moresby before they met C Company moving up the track to Kokoda. At about 5.00 p.m., Templeton went off alone into the gloomy jungle between Oivi and Kokoda to look for and warn the second half of the D Company platoon that they might encounter Japanese troops between them and his defensive position. It was too late, the Japanese had already encircled Templeton's troops. There was a burst of gunfire from the direction in which Templeton had gone, and this brave officer failed to return.
Under heavy attack by the Japanese from every side, and with night falling, the small Australian force appeared to be facing annihilation. Major W.T. Watson of the Papuan Infantry Battalion (PIB) had assumed command when Templeton was lost. It was one of Major Watson's men, Lance Corporal Sanopa of the PIB, who saved them. Under cover of darkness, this resourceful Papuan, a former police constable, led the Australian and Papuan troops to safety by means of a creek below Oivi. As the track to Kokoda was now cut off by the Japanese, Lance Corporal Sanopa guided them across rugged terrain to the village of Deniki, which is south of Kokoda. | <urn:uuid:e4bf36c4-c019-4a75-b9b8-37a81ef39f1f> | CC-MAIN-2017-17 | http://pacificwar.org.au/KokodaCampaign/IntoHellMouth.html | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120001.0/warc/CC-MAIN-20170423031200-00307-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.973909 | 5,178 | 3.03125 | 3 |
Chapter I. The Pine Processionary: The Eggs And The Hatching
Jean-Henri Fabre The Life of the Caterpillar
Pine Processionary Caterpillar after eggs hatching
This caterpillar has already had his story told by Reaumur, Antoine Ferchault de Reaumur (1683–1757), inventor of the Reaumur thermometer and author of Memoires pour servire l'histoire naturelle des insectes. --Translator's Note. but it was a story marked by gaps. These were inevitable in the conditions under which the great man worked, for he had to receive all his materials by barge from the distant Bordeaux Landes. The transplanted insect could not be expected to furnish its biographer with other than fragmentary evidence, very weak in those biological details which form the principal charm of entomology. To study the habits of insects one must observe them long and closely on their native heath, so to speak, in the place where their instincts have full and natural play.
With caterpillars foreign to the Paris climate and brought from the other end of France, Reaumur therefore ran the risk of missing many most interesting facts. This is what actualy happened, just as it did on, a later occasion in the case of another alien, the Cicada. the Cicada or Cigale, an insect remotely akin to the Grasshopper and found more particularly in the south of France, cf. Social Life in the Insect World, by J. H. Fabre, translated by Bernard Miall: chaps. i to iv. --Translator's Note. Nevertheless, the information which he was able to extract from a few nests sent to him from the Landes is of the highest value.
Better served than he by circumstances, I will take up afresh the story of the Processionary Caterpillar of the Pine. If the subject does not come up to my hopes, it will certainly not be for lack of materials. In my harmas harmas was the enclosed piece of waste ground in which the author used to study his insects in their natural state.--Translator's Note. laboratory, now stocked with a few trees in addition to its bushes, stand some vigorous fir-trees, the Aleppo pine and the black Austrian pine, a substitute for that of the Landes. Every year the caterpillar takes possession of them and spins his great purses in their branches. In the interest of the leaves, which are horribly ravaged, as though there had been a fire, I am obliged each winter to make a strict survey and to extirpate the nests with a long forked batten.
You voracious little creatures, if I let you have your way, I should soon be robbed of the murmur of my once so leafy pines! Today I will seek compensation for all the trouble I have taken. Let us make a compact. You have a story to tell. Tell it me; and for a year, for two years or longer, until I know more or less all about it, I shall leave you undisturbed, even at the cost of lamentable suffering to the pines.
Having concluded the treaty and left the caterpillars in peace, I soon have abundant material for my observations. In return for my indulgence I get some thirty nests within a few steps of my door. If the collection were not large enough, the pine-trees in the neighbourhood would supply me with any necessary additions. But I have a preference and a decided preference for the population of my own enclosure, whose nocturnal habits are much easier to observe by lantern-light. With such treasures daily before my eyes, at any time that I wish and under natural conditions, I cannot fail to see the Processionary's Story unfolded at full length. Let us try.
And first of all the egg, which Reaumur did not see. In the first fortnight of August, let us inspect the lower branches of the pines, on a level with our eyes. If we pay the least attention, we soon discover, here and there, on the foliage, certain little whitish cylinders spotting the dark green. These are the Bombyx' eggs: each cylinder is the cluster laid by one mother.
The pine-needles are grouped in twos. Each pair is wrapped at its base in a cylindrical muff which measures about an inch long by a fifth or sixth of an inch wide. This muff, which has a silky appearance and is white slightly tinted with russet, is covered with scales that overlap after the manner of the tiles on a roof; and yet their arrangement, though fairly regular, is by no means geometrical. The general aspect is more or less that of an immature walnut-catkin.
The scales are almost oval in form, semitransparent and white, with a touch of brown at the base and of russet at the tip. They are free at the lower end, which tapers slightly, but firmly fixed at the upper end, which is wider and blunter. You cannot detach them either by blowing on them or by rubbing them repeatedly with a hair-pencil. They stand up, like a fleece stroked the wrong way, if the sheath is rubbed gently upwards, nd retain this bristling position indefinitely; they resume their original arrangement when the friction is in the opposite direction. At the same time, they are as soft as velvet to the touch. Carefully laid one upon the other they form a roof that protects the eggs It is impossible for a drop of rain or dew to penetrate under this shelter of soft tiles.
The origin of this defensive covering is self evident: the mother has stripped a part of her body to protect her eggs. Like the Eider duck, she has made a warm overcoat for them out of her own down. R?aumur had already suspected as much from a very curious peculiarity of the Moth. Let me quote the passage:
„The females,“ he says, "have a shiny patch on the upper part of their body, near the hind-quarters. The shape and gloss of this disk attracted my attention the first time that I saw it. I was holding a pin, with which I touched it, to examine its structure. The contact of the pin produced a little spectacle that surprised me: I saw a cloud of tiny spangles at once detach themselves. These spangles scattered in every direction: some seemed to be shot into the air, others to the sides; but the greater part of the cloud fell softly to the ground.
„Each of those bodies which I am calling spangles is an extremely slender lamina, bearing some resemblance to the atoms of dust on the Moths' wings, but of course much bigger. . . The disk that is so noticeable on the hind-quarters of these Moths is therefore a heap--and an enormous heap--of these scales. . . . The females seem to use them to wrap their eggs in; but the Moths of the Pine Caterpillar refused to lay while in my charge and consequently did not enlighten me as to whether they use the scales to cover their eggs or as to what they are doing with all those scales gathered round their hinder part, which were not given them and placed in that position to serve no purpose.“
You were right, my learned master: that dense and regular crop of spangles did not grow on the Moth's tail for nothing. Is there anything that has no object? You didn't think so; I do not think so either. Every thing has its reason for existing. Yes, you were well-inspired when you foresaw that the cloud of scales which flew out under the point your pin must serve to protect the eggs.
I remove the scaly fleece with my pincers and, as I expected, the eggs appear, looking like little white-enamel beads. Clustering closely together, they make nine longitudinal rows In one of these rows I count thirty-five eggs. As the nine rows are very nearly alike the contents of the cylinder amount in all to about three hundred eggs, a respectable family for one mother!
The eggs of one row or file alternate exactly with those in the two adjoining files, so as to leave no empty spaces. They suggest a piece of bead-work produced with exquisite dexterity by patient fingers. It would be more correct still to compare them with a cob of Indian corn, with its neat rows of seeds but a greatly reduced cob, the tininess of whose dimensions makes its mathematical precision all the more remarkable. The grains of the Moth's spike have a slight tendency to be hexagonal, because of their mutual pressure; they are stuck close together, so much so that they cannot be separated. If force is used, the layer comes off the leaf in fragments, in small cakes always consisting of several eggs apiece. The beads laid are therefore fastened together by a glutinous varnish; and it is on this varnish that the broad base of the defensive scales is fixed.
It would be interesting, if a favourable opportunity occurred, to see how the mother achieves that beautifully regular arrangement of the eggs and also how, as soon as she has laid one, all sticky with varnish, she makes a roof for it with a few scales removed one by one from her hind-quarters. For the moment, the very structure of the finished work tells us the course of the procedure. It is evident that the eggs are not laid in longitudinal files, but in circular rows, in rings, which lie one above the other, alternating their grains. The laying begins at the bottom, near the lower end of the double pine-leaf; it finishes at the top. The first eggs in order of date are those of the bottom ring; the last are those of the top ring. The arrangement of the scales, all in a longitudinal direction and attached by the end facing the top of the leaf, makes any other method of progression inadmissible.
Let us consider in the light of reflection the elegant edifice now before our eyes. Young or old, cultured or ignorant, we shall, on seeing the Bombyx' pretty little spike, exclaim:
And what will strike us most will be not the beautiful enamel pearls, but the way in wich they are put together with such geometrical regularity. Whence we can draw a great moral, to wit, that an exquisite order governs the work of a creature without consciousness, one of the humblest of the humble. A paltry Moth follows the harmonious laws of order.
If Micromegas eponymous hero of Voltaire's story of „the little great man,“ published in 1752 in imitation of Gulliver's Travels.---Translator's Note. took it into his head to leave Sirius once more and visit our planet, could he find anything to admire among us? Voltaire shows him to us using one of the diamonds of his necklace as a magnifying- glass in order to obtain some sort of view of the three-master which has run aground on his thumb-nail. He enters into conversation with the crew. A nail-paring, curved like a horn, encompasses the ship and serves as a speaking-trumpet; a tooth-pick, which touches the vessel with its tapering end and the lips of the giant, some thousand fathoms above, with the other, serves as a telephone. The outcome of the famous dialogue is that, if we would form a sound judgment of things and see them under fresh aspects, there is nothing like changing one's planet.
The probability then is that the Sirian would have had a rather poor notion of our artistic beauties. To him our masterpieces of statuary, even though sprung from the chisel of a Phidias, would be mere dolls of marble or bronze, hardly more worthy of interest than the children's rubber dolls are to us; our landscape-paintings would be regarded as dishes of spinach smelling unpleasantly of oil; our opera-scores would be described as very expensive noises.
These things, belonging to the domain of the senses, possess a relative ?sthetic value, subordinated to the organism that judges them. Certainly the Venus of Melos and the Apollo Belvedere are superb works; but even so it takes a special eye to appreciate them. Microm?gas, if he saw them, would be full pity for the leanness of human forms. To him the beautiful calls for something other than our sorry, frog-like anatomy.
Show him, on the other hand, that sort of abortive windmill by means of which Pythagoras, echoing the wise men of Egypt, teaches us the fundamental properties of the right-angled triangle. Should the good giant, contrary to our expectation, happen not to know about it, explain to him what the windmill means. Once the light has entered his mind, he will find, just as we do, that there is beauty there, real beauty, not certainly in that horrible hieroglyphic, the figure, but in the unchangeable relation between the lengths of the three sides; he will admire as much as we do geometry the eternal balancer of space.
There is, therefore, a severe beauty, belonging to the domain of reason, the same in every world, the same under every sun, whether the suns be single or many, white or red, blue or yellow. This universal beauty is order. Everything is done by weight and measure, a great statement whose truth breaks upon us all the more vividly as we probe more deeply into the mystery of things. Is this order, upon which the equilibrium of the universe is based, the predestined result of a blind mechanism? Does it enter into the plans of an Eternal Geometer, as Plato had it? Is it the ideal of a supreme lover of beauty, which would explain everything?
Why all this regularity in the curve of the petals of a flower, why all this elegance in the chasings on a Beetle's wing-cases? Is that infinite grace, even in the tiniest details, compatible with the brutality of uncontrolled forces? One might as well attribute the artist's exquisite medallion to the steam-hammer which makes the slag sweat in the melting.
These are very lofty thoughts concerning a miserable cylinder which will bear a crop of caterpillars. It cannot be helped. The moment one tries to dig out the least detail of things, up starts a why which scientific investigation is unable to answer. The riddle of the world has certainly its explanation other-where than in the little truths of our laboratories. But let us leave Microm?gas to philosophize and return to the commonplaces of observation.
The Pine Bombyx has rivals in the art of gracefully grouping her egg-beads. Among their number is the Neustrian Bombyx, whose caterpillar is known by the name of „Livery,“ because of his costume. Her eggs are assembled in bracelets around little branches varying greatly in nature, apple- and pear-branches chiefly. Any one seeing this elegant work for the first time would be ready to attribute it to the fingers of a skilled stringer of beads. My small son Paul opens eyes wide with surprise and utters an astonished „Oh!“ each time that he comes upon the dear little bracelet. The beauty of order forces itself upon his dawning attention.
Though not so long and marked above all by the absence of any wrapper, the ring of Neustrian Bombyx reminds one of the other's cylinder, stripped of its scaly covering. It would be easy to multiply these instances of elegant grouping, contrived now in one way, now in another, but always with consummate art. It would take up too much time, however. Let us keep to the Pine Bombyx.
The hatching takes place in September, a little earlier in one case, a little later in another. So that I may easily watch the new-born caterpillars in their first labours, I have placed a few egg-laden branches in the window of my study. They are standing in a glass of water which will keep them properly fresh for some time.
The little caterpillars leave the egg in the morning, at about eight o'clock. If I just lift the scales of the cylinder in process of hatching, I see black heads appear, which nibble and burst and push back the torn ceilings. The tiny creatures emerge slowly, some here and some there, all over the surface.
After the hatching, the scaly cylinder is as regular and as fresh in appearance as if it were still inhabited. We do not perceive that it is deserted until we raise the spangles. The eggs, still arranged in regular rows, are now so many yawning goblets of a slightly translucent white; they lack the cap-shaped lid, which has been rent and destroyed by the new-born grubs.
The puny creatures measure a millimetre inch.--Translator's Note. at most in length. Devoid as yet of the bright red that will soon be their adornment, they are pale-yellow, bristling with hairs, some shortish and black, others rather longer and white. The head, of a glossy black, is big in proportion. Its diameter is twice that of the body. This exaggerated size of the head implies a corresponding strength of jaw, capable of attacking tough food from the start. A huge head, stoutly clad in horn, is the predominant feature of the budding caterpillar.
These macrocephalous ones are, as we see, well-armed against the hardness of the pine-needles, so well-armed in fact that the meal begins almost immediately. After roaming for a few moments at random among the scales of the common cradle, most of the young caterpillars make for the double leaf that served as an axis for the native cylinder and spread themselves over it at length. Others go to the adjacent leaves. Here as well as there they fall to; and the gnawed leaf is hollowed into faint and very narrow grooves, bounded by the veins, which are left intact.
From time to time, three or four who have eaten their fill fall into line and walk in step, but soon separate, each going his own way. This is practice for the coming processions. If I disturb them ever so little, they sway the front half of their bodies and wag their heads with a jerky movement similar to the action of an intermittent spring.
But the sun reaches the corner of the window where the careful rearing is in progress. Then, sufficiently refreshed, the little family retreats to its native soil, the base of the double leaf, gathers into an irregular group and begins to spin. Its work is a gauze globule of extreme delicacy, supported on some of the neighbouring leaves. Under this tent, a very wide-meshed net, a siesta is taken during the hottest and brightest part of the day. In the afternoon, when the sun has gone from the window, the flock leaves its shelter, disperses around, sometimes forming a little procession within a radius of an inch, and starts browsing again.
Thus the very moment of hatching proclaims talents which age will develop without adding to their number. In less than an hour from the bursting of the egg, the caterpillar is both a processionary and a spinner. He also flees the light when taking refreshment. We shall soon find him visiting his grazing-grounds only at night.
The spinner is very feeble, but so active that in twenty-four hours the silken globe attains the bulk of a hazel-nut and in a couple of weeks that of an apple. Nevertheless, it is not the nucleus of the great establishment in which the winter is to be spent. It is a provisional shelter, very light and inexpensive in materials. The mildness of the season makes anything else unnecessary. The young caterpillars freely gnaw the logs, the poles between which the threads are stretched, that is to say, the leaves contained within the silken tent. Their house supplies them at the same time with board and lodging. This excellent arrangement saves them from having to go out, a dangerous proceeding at their age. For these puny ones, the hammock is also the larder.
Nibbled down to their veins, the supporting leaves wither and easily come unfastened from the branches; and the silken globe becomes a hovel that crumbles with the first gust of wind. The family then moves on and goes elsewhere to erect a new tent, lasting no longer than the first. Even so does the Arab move on, as the pastures around his camel-hide dwelling become exhausted. These temporary establishments are renewed several times over, always at greater heights than the last, so much so that the tribe, which was hatched on the lower branches trailing on the ground, gradually reaches the higher boughs and sometimes the very summit of the pine-tree.
In a few weeks' time, a first moult replaces the humble fleece of the start, which is pale-coloured, shaggy and ugly, by another which lacks neither richness nor elegance. On the dorsal surface, the various segments1 excepting the first three, are adorned with a mosaic of six little bare patches, of a bright red, which stand out a little above the dark background of the skin. Two, the largest, are in front, two behind and one, almost dot-shaped, on either side of the quadrilateral. The whole is surrounded by a palisade of scarlet. bristles, divergent and lying almost flat; The other hairs, those of the belly and sides, are longer and whitish.
In the centre of this crimson marquetry stand two clusters of very short bristles, gathered into flattened tufts which gleam in the sun like specks of gold. The length of the caterpillar is now about two centimetres three-quarters of an inch.--Translator's Note. and his width three or four millimetres. to .156 inch.--Translator's Note. Such is the costume of middle age, which, like the earlier one, was unknown to Reaumur. | <urn:uuid:ac213a9b-0808-4de9-91f8-735f00a7cff1> | CC-MAIN-2017-17 | http://www.efabre.net/virtual_library/life_ef_caterpillar/cater.htm | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121453.27/warc/CC-MAIN-20170423031201-00367-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.965436 | 4,638 | 2.578125 | 3 |
Remember back in the day when we could rely on safe drinking water in our homes? Well, those days are long gone. In general, more hazardous substances are present in public water than ever.
It should be noted that public water quality differs greatly by region, and your water may be perfectly clean. New York City drinking water, for example, has won awards in tests of taste, mineral content, and lack of contaminants.
Which is why it’s wise to be aware that some filter companies use scare tactics to make people believe they need filters when they may not need them at all.
Still, overall quality has declined not only in the USA but also in parts of Western Europe, regions previously known for their healthy, drinkable tap water. Decades of excavation and drilling activities, serious industrial pollution, animal medicines, and over-fertilization have contaminated tap water sources.
When even water supplier companies themselves are ringing alarm bells you know it’s time to take measures into your own hands and turn to using point-of-use devices.
“We, as water companies, know that every pipe is going to leak at some point.”
Government scientists now generally agree that many chemicals commonly found in drinking water pose serious risks at low concentrations.
Depending on where you live, tapping a glass of water from the kitchen faucet can vary from getting a healthy beverage to pouring a cocktail of various contaminants. In other words, there is truth to the horror stories about severely polluted tap water.
Health hazards that may lurk in your water include carcinogens; substances that affect the endocrine and nervous systems; volatile organic compounds (VOCs) such as pesticides and herbicides; chemical by-products created during water treatment; MTBE, a gasoline additive; and even rocket fuel, arsenic, and fecal waste.
Thus, your tap water may put you and your family at risk, which is why doing your research can be essential.
According to former E.P.A. administrator William K. Reilly:
“For years, people said that America has the cleanest drinking water in the world. That was true 20 years ago. But people don’t realize how many new chemicals have emerged and how much more pollution has occurred. If they did, we would see very different attitudes.”
Which is why public health experts and environmental organizations recommend for most people in the U.S. as well as many other countries to filter their tap water.
Which home water filter to choose?
There are numerous water filters on the market utilizing various techniques to filter the water. This makes choosing the right water purifier to treat the drinking water in your home a daunting task. This water filter buying guide will show you in four steps that getting the best filter for your needs is not as difficult as it may seem.
Things to consider when purchasing a home water filter system
Keep these three things in mind when deciding on a filter for your home.
- How effective is the filter? In other words, what does it remove, and to what extent does it remove those pollutants?
- Costs. How much does it cost to run the filter(s) on a day-to-day basis? Apart from the initial purchase, there are filter replacement costs, power usage costs, etc.
- Ease of use. How much time does it take to filter water? Do you just flip a switch, or does it involve labor? Some devices require maintenance, recharging, etc.
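To weigh the cost factor, it helps to annualize everything: the purchase price spread over the device's lifespan, plus cartridge replacements and any power draw. A minimal sketch of that arithmetic follows; every price, lifespan, and replacement interval below is a made-up placeholder, not a quote for any real product.

```python
# Rough annual cost-of-ownership comparison for water filters.
# All numbers are hypothetical placeholders -- substitute real prices
# and replacement intervals for the models you are comparing.

def annual_cost(purchase_price, lifespan_years, cartridge_price,
                cartridges_per_year, power_watts=0, hours_per_day=0,
                price_per_kwh=0.15):
    """Annualized cost: amortized purchase + cartridges + electricity."""
    amortized = purchase_price / lifespan_years
    cartridges = cartridge_price * cartridges_per_year
    kwh_per_year = power_watts * hours_per_day * 365 / 1000
    electricity = kwh_per_year * price_per_kwh
    return amortized + cartridges + electricity

# Hypothetical examples: a pitcher with frequent cartridge swaps versus
# an under-the-sink unit with a higher upfront price but rare swaps.
pitcher = annual_cost(purchase_price=30, lifespan_years=3,
                      cartridge_price=8, cartridges_per_year=9)
under_sink = annual_cost(purchase_price=200, lifespan_years=10,
                         cartridge_price=40, cartridges_per_year=2)

print(f"Pitcher:        ${pitcher:.2f}/year")
print(f"Under-the-sink: ${under_sink:.2f}/year")
```

With these placeholder numbers, the "inexpensive" pitcher is cheaper per year only as long as cartridge replacements stay infrequent, which matches the moderate-use caveat in the filter list below.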
Step 1. Determine capacity and purpose.
Decide why exactly you want to filter your home’s tap water. People buy filters for the following reasons.
- Safe drinking water. The main reason for buying a home water filter is to ensure your drinking water is safe and tasty.
- Better tasting tap water. The removal of calcium carbonate from tap water can improve taste considerably, especially when the water is used in a coffee machine or kettle.
- Safe showering and bathing. Other reasons are to avoid dry skin and frizzy hair due to chlorinated water, as well as to avoid inhaling chlorine and related chemicals or absorbing them through your skin while showering or bathing. Read more about the benefits of shower water filters.
- Soften hard water and prevent the damage it causes. You may want to bring a water filter into the home to extend the life of clothing, prevent scaling, spotting, and filming on dishes and surfaces, prevent damage to pipes and showerheads, and ensure your water lathers properly with soap.
The thought that the need for shower filters has been created by clever marketeers may occur but as it turns out studies do indicate that serious health risks are linked to chlorine and its byproducts in the water in your bathroom. Chlorine is also linked to kidney stone production.
Although ingestion is commonly considered to be the primary source of exposure to chloroform from tap water, inhalation and skin absorption exposure concentrations were found to be even higher. Source: PubMed.
Some more specific reasons to filter your tap water:
- In some areas the there’s more chlorine in tap water than in pool water,
- older plumbing systems may cause lead exposure, pipes may rust, and century old wooden water mains (rare but not uncommon) may cause contaminations,
- single-celled, chloride resistant protozoa, such as giardia and cryptosporidium can pose serious health threats
- millions of individual cases of waterborne diseases occur annually
- according to the NY Times, the 35-year-old federal law regulating tap water is so out of date that the water Americans drink can pose what scientists say are serious health risks — and still be legal.
If you have a single person household a filter pitcher may suffice or you may one of the following types of filters.
- pitcher filters – inexpensive if frequent filter replacement isn’t required (moderate use), filtering is slow.
- faucet-mounted filters – inexpensive – switching between filtered and unfiltered water is often possible – requires more frequent filter change than countertop or under the sink devices
- showerhead filters – replacement cartridges – two media filters: KDF and granulated carbon are most effective.
- under-the-sink filters – discrete placement, infrequent filter replacements, requires installation and in some cases plumbing alteration
- built-in refrigerator filters (cartridges)
- whole-house filter systems – combine a variety of media types and treat all of the water in your house
- whole-house water softener with an ion exchange filter to soften your water.
The next step is to find out which contaminants you need to filter out of your water.
Step 2. Determine what you want to filter out of your tap water
It would be great if a one-type-fits-all kind of water filter would exist. Sadly, this isn’t the case. Not every filter type will eliminate every contaminant so you will have to assess your tap water.
Doing your research in order to find out which contaminants to target takes some time and effort. It is, however, an important step since if you buy the wrong purifier you’ve not only wasted money but you’ll end up with equipment that isn’t making your water any safer.
In order to know what to filter you need you have to know where your water comes from. You can check the consumer confidence report (CCR),
- released by your water supplier online,
- or provided with your bill each year.
- You may also find the CCR posted on your local government website,
- or printed in your newspaper.
Apart from your annual water-quality report you can also check the NRDC report What’s On Tap?, contact a testing laboratory or your local health department for assistance.
This way you can find out how your local municipal water district has complied with existing regulation and find out about contaminants present in the water.
- in the US, locate your water authority at EPA.
- For more information about your drinking water in the UK, visit Water UK.
- In Canada, go to Health Canada.
- In Australia, check out the Australian Drinking Water Guidelines here.
How to make sense of all this info? Here are some sources that can help you interpret.
- The Centers for Disease control offers a guideline on Understanding Consumer Confidence Reports here.
- The “What’s in your water?” tool by the Environmental Working Group is another great source of information on local water supplies.
- Or use this guide to reading water quality reports at the Campaign for Safe and Affordable Drinking Water’s Web site.
Step 3. Testing your tap water quality
Testing will take some time, effort and cost you some additional money but it is important. A few reasons to test your home’s tap water:
- Reactions in the distribution system. Municipal water may pass tests at its source but public water may still pick up contaminants on the way to your house.
- Outdated legislation and filtering technology. Current pollution can be removed with very costly and complex membrane filtration but it is often questioned whether these purification techniques are sufficiently able to continue producing clean drinking water on a large scale. The head of a Dutch water company asked out loud if current filtering technology is still on par with the state of soil and water pollution.
- Unregulated chemicals. A wide range of substances are completely unregulated by the SDWA or your country’s respective laws. Consumer confidence reports in the US may only contain data on the 91 contaminants regulated by the SDWA.
- Presentation method of the facts. CCR’s might not indicate potentially harmful spikes of a contaminant in your tap water thus providing a false sense of safe levels. This has to do with how violations are calculated (e.g., as an annual average instead of individual measurements) and the frequency of monitoring also influence how good of a picture you get from your report.
- The fixtures in your home may also affect water chemistry.
- You may not need a water filter. Companies selling water filtration systems have been misleading consumers into thinking their tap water was polluted. Water filters are trendy as well. Don’t believe the hype or fearmongering and examine your own personal situation before you purchase filtration for your home.
So in order to be entirely sure you should go a step further by testing your water at your home. This way you will know exactly which contaminants are in it.
There are a few options:
- You can have your water tested independently by a state certified lab. Self-testing kits are available online or at your local home center. Send these to the laboratory for results.
- There are TDS meters to measure the overall ppm yourself.
- You can use test kits such as the First Alert WT1 Drinking Water Test Kit which do not require lab testing. On the other hand, such DIY kits are not incredibly accurate and don’t test for all harmful contaminants. These are merely for an indication. To get an idea. People often use such home tests to determine if their water needs a more thorough, professional analysis.
- You can also hire the services of an inspector to check the water quality at your home.
People often use PH-TDS meters to test their water. A TDS meter, TDS stands for Total Dissolved Solids, lets you measure ppm (parts per million) of contaminants in the water.
in the U.S., the Environmental Protection Agency (EPA) advises against consuming water containing more than 500 ppm of TDS. However, many health specialists think that ideal drinking water should be under 50 ppm or lower.
Note: there are specific concentration standards per contaminant. The EPA standard for arsenic for instance is 10 ppb (parts per billion).
Sediment are particles in the water you can see. If not noticeable in the water itself you may encounter sediment residu in the bottom of the toilet or dishwasher and behind the shields of the faucet aerator.
You can call the Safe Drinking Water Hotline for more information at 1-800-426-4791 or visit their EPA’s Safe water Web site.
Step 4. Decide which filter is best for your needs
There are so many types of water filters available for the home and all the different medias and technology can be really confusing. Especially when you first start with researching this matter.
Depending on the type and concentration of contaminants present in your water you may need completely different equipment or possibly a combination of equipment.
For example in case in your area fluoride is added to your water. Carbon filters do not remove fluoride from your water. Some quality carbon block filters however come with attachments that do remove fluoride.
The best water filter type.
Let’s try to make it a little less complicated by letting you know that most people opt for a
- carbon type water filter
- or a reverse osmosis filter.
Common motivations to choose either the one or the other type:
- The reason for choosing a carbon block or activated carbon filter is that even if your water tests well it is still likely treated by your water company with chlorine and its byproducts as disinfectants.
- If your water does not test well a reverse osmosis filter system is probably best for your needs since it will remove antibiotics, hormones, and other pollutants that are not removed by carbon filters.
Many choose to combine best of both worlds and purchase a multiple stage system that has a carbon block filter as one of the media and most commonly reverse osmosis as the other main filtering technology.
To be sure, call your local community water system to ask if they use chlorine and chloramines (a mixture of chlorine and ammonia).
If they do you will want a certified filter able to remove chlorine, chloramines and trihalomethanes (a carcinogenic by-product of chlorination).
The best filter type to remove these substances are activated carbon filters / carbon block filters. Carbon adsorption has numerous applications in removing impurities from water or air.
People commonly install a RO filter in the kitchen for safe drinking water and a carbon filter in the bathroom for safe washing and tooth brushing and face washing.
But I heard that carbon filters do not remove chloramine?
Fact is, some carbon filters can remove chloramine but others cannot. Some regular carbon filters do indeed remove chloramine. It is however a fact that more carbon and contact time are needed to clear your water from chloramine. In other words, you will need a larger, more potent carbon filter then when only chlorine is used by the water company.
Carbon filters are best at removing taste, odor, color, chlorine, sediment, and VOC’s
Carbon water filters
Water that flows through the positively charged highly absorbent carbon (charcoal) is filtered by a process called adsorption. Pollutants present in the water are trapped inside the millions of tiny pores between the molecules in the carbon substrate.
- Quality carbon block filters are our best option for removing organic chemicals like VOC’s, pesticides and herbicides according to EPA.
- Removes bad tastes and odors
- Most affordable filter technique.
- These filters will remove most pollutants of concern such as bacteria, parasites (e.g. Giardia and Cryptosporidium), chemicals, pesticides, heavy metals (e.g. lead, copper, mercury), radon; and volatile organic chemicals (VOC’s) such as methyl-tert-butyl ether (MTBE), dichlorobenzene and trichloroethylene (TCE).
- Do not remove essential minerals naturally present in water.
- They do not remove fluoride. You need a filter attachment for that.
- Activated carbon is not effective at removing dissolved inorganic compounds such as arsenic, hexavalent chromium, nitrate, perchlorate and fluoride.
- Simple countertop filters may not be able to remove dangerious contaminants such as the rocket fuel ingredient perchlorate.
- The more potent ones can be bulky and take up counter space.
Good to know about carbon water filters
Carbon filters come in two forms, granulated activated carbon and carbon block. (Pitcher water filters often use granulated carbon.)
Carbon filters vary considerably in effectiveness. This effectiveness depends in part on how quickly water flows through.
The most effective carbon block filters are Fibredyne block filters due to a larger surface area which leads to a higher sediment holding capacity. Then come carbon block and then granulated activated carbon filters.
Many carbon filters are either impregnated with silver or use secondary media such as silver or Kdf-55 to prevent bacteria growth within the filter.
When is a carbon block or activated carbon filter your best option?
In case your water tests well but is treated with chlorine. Or if you need a shower or bathroom filter to remove toxic chlorine vapors and prevent absorption of chlorine through the skin.
The top-reviewed Berkey BK4X2-BB Big Berkey Filtration System.
What about pitcher filters?
Pitcher filters such as the popular Brita pitcher have their pros and cons. These are affordable and easy ways to drink pollutant-free water but they do not remove that many toxins because of the use of granulated instead of solid carbon. Pur and Brita pitchers remove chlorine taste and odor but not actual chlorine.
Pitcher filters also have cartridges that need to be replaced making them less cost effective in the long run. You also have to fill them up which makes them less ideal when the household consists of several people. The same applies to faucet mount external filters, which use the same technology.
If you are looking for a pitcher filter, take a look at these quality devices.
- Zerowater Z Pitcher, a certified five-stage device, recommended by Treehugger, that removes almost everything out of the water.
Reverse osmosis water filters
The most popular and best solution used when carbon block filters do not suffice. Reverse Osmosis (RO) filters are considered the safest filters. In other words, they remove the most contaminants.
RO filters push water through a semi-permeable membrane that prevents particles larger than water molecules from passing through. The residue of contaminants held by the membrane is flushed away with additional water.
Although carbon filters are the least expensive filters reverse osmosis may very well be the most cost effective solution depending on your situation.
Especially when you need to bring down the PPM of your water considerably and filter out viruses, filtrates, nitrates, arsenic, fluoride, dissolved solids such as calcium, sodium, magnesium, and inorganic minerals, RO will be your best option. RO is often used in conjunction with other filtering technologies, specifically carbon.
“I have tried carbon and PP filters but none of them could reduce the PPM to below 200”
- Most effective according to the Environmental Working Group (EWG). Reverse osmosis has the strongest contaminant reduction capabilities and can remove inorganic compounds not removed by activated carbon, including arsenic, asbestos, sodium, heavy metals (e.g. copper and lead), hexavalent chromium, nitrate, perchlorate and fluoride.
- RO also removes viruses
- Removes bacteria (whereas silver-impregnated carbon can control bacterial growth)
- Generally recognized to provide the best tasting water
- Filter systems are often convenientlly installed under the sink with a spigot over the counter for access to the filtered water.
- Can be used to make sea water drinkable
- Water inefficient. More water is wasted than produced. Up to three to five gallons of water are wasted for every one gallon of clean, filtered water produced. Note: of course, waste water can be collected and used for other purposes such as watering the lawn.
- Requires adequate water pressure to work so can not be used in case the home water supply does not function.
- Filters can be costly and need to be replaced regularly.
- Does not reduce VOC’s, chlorine and chloramines, and pharmaceuticals or other endocrine disruptors.
- Removes nutrient minerals from your drinking water. Demineralized water is acidic.
Good to know about reverse osmosis systems
Often when manufacturers include activated carbon as secondary media in their reverse osmosis systems less quality carbon is used.
Commonly convenientlyl used in under-the-sink units thus taking up no kitchen counter space. RO systems generally incorporate a carbon filter or UV disinfection unit.
Some systems come with mineral filters that add the nutritional minerals back to the water after the RO process. Adding back essential minerals can also be done manually.
When is an RO system your best choice?
If you have the need to filter out contaminants that cannot be removed with a solid carbon block filter a RO filter system is likely to be best. It is the safest method since it removes the most contaminants.
If you are able to reuse the waste water is can be an ideal system for your high demand water filtering needs.
- The bestselling iSpring 75GPD 5-Stage Reverse Osmosis Water Filter System.
Distillation water filters
These filters heat water to create steam which then condenses leaving behind contaminants. Use of home distillation water filters isn’t as popular anymore. This type of filtration is mostly used by laboratories and industries.
- Reduces large particles like minerals and bacteria, viruses and chemicals that have a boiling point higher than water. Removes asbestos and heavy metals such as chromium, cadmium, copper, lead, and mercury.
- Does also remove fluoride, arsenic, barium, sodium and selenium.
- Distillers cannot remove many other chemicals including chlorine, trihalomethanes or volatile organic chemicals (VOCs) since they vaporize with the water and rise with the steam ending up in the filtered water. Filtration also does not remove endocrine disruptors.
- Distillation systems are often large and expensive, use lots of electricity and do not work during power outages.
- Remove necessary minerals from the water.
- EWG’s water filter guide does not include any filters based on this technology.
- If you do want a home distillation system the Waterwise 8800 Water Distiller Purifier is a well-reviewed system.
Other home water filter technologies
- Sediment filters typically trap larger particles like dirt, rust and sand. These employ filtration by capturing the larger particles, contaminants such as microorganisms and insoluble minerals that aren’t dissolved in water. This is often referred to as turbidity.
- Ozone filters. These filters basically push oxygen through ultraviolet light. This process creates ozone which is added to the water in the form of bubbles. The Ozone molecules release free radicals, oxygen atoms. These atoms are very toxic to most microorganisms present in water and thus, by a process called oxidation, disinfect the water. Only remove parasites, bacteria and other microorganisms and pathogens. Therefore for most home owners only useful as an additional medium in a multiple stage water purifier system.
- Ultraviolet filter systems. Remove bacteria and parasites. Class A systems are designed to protect against hazardous bacteria and viruses, including Cryptosporidium and Giardia, while class B systems make non-disease-causing bacteria inactive. UV filters do not remove chemical contaminants. EWG’s water filter guide does not include filters based on this technology.
- Mechanical and ceramic filters. Small holes in these filters trap contaminants such as cysts and sediments. They are often used in conjunction with other kinds of technologies, but sometimes are used alone. A major downside is that they cannot remove chemical contaminants.
- Ozone water filters. Ozone kills bacteria and other microorganisms but is not effective in removing chemical contaminants. EWG’s water filter guide does not include any filters based on this technology.
- Deionization, Water softening and Ion Exchange filters. They remove or exchange ions in order to remove ionic contaminants and/or to soften water.
- Water softening systems. Hard water leaves mineral residue. Water softener systems are installed to reduce water hardness and remove barium but do not remove most contaminants. Also called cation exchange softeners, they employ an ion exchange process to lower levels of calcium and magnesium which can build up in plumbing and fixtures. These systems add salts to the water which is why a reverse osmosis system for the drinking water is recommended. RO removes these salts.
Filters vary widely in quality. To be sure, pick a device that is certified by the National Sanitation Foundation (NSF) which is a reputable product evaluation company.
Do keep in mind that an NSF certification does not necessarily mean the filter will remove specific pollutants. It may also refer to taste improvement or other aspects since different certifications exist. Therefore it is recommended to look for filters labeled NSF/ANSI Standard and that are certified to remove the contaminant(s) of concern in your water.
NSF/ANSI 42 and NSF/ANSI 53 standards refer to carbon filters and NSF/ANSI 58 to RO filters.
If you are not sure if a device you are considering is an independently certified water filter with the appropriate certification you can search for NSF certified drinking water treatment units in their extensive database here.
The Water Quality Association (WQA), also run independent product testing programs. These organizations guarantee you that the devices remove the specific compound the manufacturer claims to remove at the level stated. The WQA provides an online listing of products that have been Gold Seal Certified or have passed testing to verify their claims.
Things to consider before buying
- Calculate what capacity in gallons per hour you need to filter the water in your home.
- Be aware of costs of maintenance such as filter replacements
- Decide if you need an indicator light which will alert you that filters need to be replaced
- Do note that pure water (i.e. water of which all impurities are removed) does not contain any of the normal minerals present in most water supplies. When truly pure water ingested it leaches beneficial nutrients out of the cells in your body in order to create equilibrium. Long term use of pure water can result in health risks, because calcium and other nutrients are slowly extracted from the bones.
- Know that filters in units that allow you to temporarily disable the faucet installed filter have a longer lifespan. This may come in handy in case you will be boiling water since you will then also eliminate toxins. Thus, when boiling water for tea, pasta, instant soup and so on may you may not need the filter. Provided that chlorine concentrations in the water are low because boiling also removes chlorine at low concentrations. It does also remove higher concentrations but this requires cooking it for up to 30 minutes making is very energy inefficient (and time consuming).
Something about bottled water
This is still far from common knowledge but bottled water is not a viable option for a plethora of reasons.
- It is expensive,
- Producers may be scamming you since almost half of the bottled water is plain tap water,
- Public water EPA standards do NOT apply to bottled water and it is overall less regulated than tap water
- A recent test by the EWG found DBPs, Tylenol, arsenic and more than 30 other pollutants lurking in bottled water,
- Plastic bottles often contain the harmful chemical BPA and may leach other hazardous (hormone-like) chemicals into the water
- The bottles are very environmental unfriendly. It is estimated that more than 5000 garbage trucks full of empty plastic bottles are thrown away each day.
Bottled water will become a thing of the past (it’s not for no reason the sale of plastic water bottles in now banned in California) They can be useful in certain circumstances but are by far no sustainable option.
For more in-dept info watch Tapped. Many switched from bottles to filtered tap water after watching this documentary exposing the water bottling industry.
Wrapping it up
All the terms and technologies can make choosing a home water filter confusing but it’s actually not that hard. First you find out what’s in your water, preferably by testing it.
Then you decide if you can suffice with a carbon block filter or need a reverse osmosis filter. If you do need an RO system you decide if adding one to the kitchen and carbon filters to the bathroom will meet your needs. Then you only have to think about whether or not you need a water softening system or not. | <urn:uuid:63eb899b-006e-4c2d-9470-d6e33e6624aa> | CC-MAIN-2017-17 | http://www.criticalcactus.com/best-home-water-filter-system/ | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118963.4/warc/CC-MAIN-20170423031158-00423-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.936011 | 5,966 | 2.703125 | 3 |
The flashcards below were created by a user on FreezingBlue Flashcards.
- Noun:A sudden brief burst of bright flame or light.
- Verb:Burn with a sudden intensity: "the blaze across the water flared".
- Synonyms:noun. blaze - flash; verb. flame - blaze - flash
- Adjective:Bad-tempered and unfriendly: "he left with a surly expression".
- Synonyms:morose - gruff - sullen - sulky
- Noun:Enjoyment or vigor in doing something; zest.
- Synonyms:relish - pleasure - enjoyment - delight - taste - zest
- Noun:Distress or embarrassment at having failed or been humiliated.
- Verb:Feel distressed or humiliated.
- Synonyms:noun. disappointment - grief - sorrow - annoyance - regret; verb. grieve - aggrieve
- Adjective:Given to sudden and unaccountable changes of mood or behavior.
- Synonyms:whimsical - wayward - fickle - freakish - crotchety
- Verb:Confirm or give support to (a statement, theory, or finding).
- Synonyms:confirm - bear out - affirm - verify - support - certify
- Adjective:Not readily letting go of, giving up, or separated from an object that one holds, a position, or a principle: "a tenacious grip". Not easily dispelled or discouraged; persisting in existence or in a course of action: "a tenacious legend".
- Synonyms:persistent - tough - stubborn - dogged - obstinate
- Adjective:Extremely delicate and light in a way that seems too perfect for this world. Heavenly or spiritual: "ethereal, otherworldly visions".
- Synonyms:airy - aerial - etherial
- Verb:Speak in a slow, lazy way with prolonged vowel sounds.
- Noun:A slow, lazy way of speaking or an accent with unusually prolonged vowel sounds: "a Texas drawl".
- Verb:Work extremely hard or incessantly.
- Noun:Exhausting physical labor: "a life of toil".
- Synonyms:verb. labour - labor - plod - slog; noun. labour - labor - drudgery
- Adjective:Having a pleasingly rich, sweet taste: "a luscious and fragrant dessert wine". Richly verdant or opulent.
- Synonyms:delicious - sweet - succulent - savoury - savory - tasty
- Adjective:Of imposing height. Of a noble or exalted nature: "lofty ideals".
- Synonyms:high - proud - sublime - exalted - noble - elevated
- Verb:Shine brightly, esp. with reflected light.
- Noun:A faint or brief light, esp. one reflected from something.
- Synonyms:verb. glitter - sparkle - glisten - shimmer - twinkle - glint; noun. glint - ray - flash - glimmer
- Adjective:(of a person or part of their body) Slightly fat: "his pudgy fingers".
- Synonyms:podgy - chubby - tubby - roly-poly - plump - dumpy
- Verb:Erase (a mark) from a surface: "words effaced by frost and rain"; "his anger was effaced when he stepped into the open air". Make oneself appear insignificant or inconspicuous.
- Synonyms:obliterate - erase - delete - expunge - rub out
- Verb:Write (something) in a hurried, careless way.
- Noun:An example of hurried, careless writing: "reams of handwritten scrawl".
- Synonyms:verb. scribble - scrabble - doodle - scratch; noun. scribble - doodle
- Adjective:Neatly skillful and quick in one's movements: "a deft piece of footwork". Demonstrating skill and cleverness.
- Synonyms:adroit - dexterous - skillful - skilful - clever
- Verb:Irritate intensely; infuriate.
- Synonyms:irritate - aggravate - provoke - nettle - enrage - anger
- Verb:Bend one's head or body forward and downward: "he stooped down".
- Noun:A posture in which the head and shoulders are habitually bent forward: "a tall, thin man with a stoop". A porch with steps in front of a house or other building.
- Synonyms:verb. bend - bow - incline; noun. slouch
- Verb:Extract (information) from various sources. Collect gradually and bit by bit.
- Synonyms:gather - collect - pick up - pick
- Adjective:Outstandingly bad; shocking. Remarkably good.
- Adjective:Using or expressing dry, esp. mocking, humor. (of a person's face or features) Twisted into an expression of disgust, disappointment, or annoyance.
- Adjective:Overcome with anger; extremely indignant. Relating to or denoting apoplexy (stroke): "an apoplectic attack".
- Noun:A shallow trough fixed beneath the edge of a roof for carrying off rainwater.
- Verb:(of a candle or flame) Flicker and burn unsteadily.
- Synonyms:ditch - channel - drain - gully - groove - trough
- Noun:Amusement, esp. as expressed in laughter: "his six-foot frame shook with mirth".
- Synonyms:gaiety - glee - merriment - joy - hilarity - rejoicing
- Verb:Discover (something) by guesswork or intuition: "his brother usually divined his ulterior motives".
- Adjective:(of something regarded as unpleasant) Continuing without pause or interruption: "the incessant beat of the music".
- Noun:A violent person, esp. a criminal.
- Noun:A dark mark or stain, typically one made by ink, paint, or dirt.
- Verb:Dry (a wet surface or substance) using an absorbent material: "Guy blotted his face with a dust rag".
- Synonyms:noun. stain - spot - smudge - smear - smirch - taint - blemish; verb. stain - spot - smear - smudge - soil
- Noun:A person employed as a caretaker of a building; a custodian.
- Synonyms:porter - doorkeeper - caretaker - doorman - concierge
- Web definitions:scold: someone (especially a woman) who annoys people by constantly finding fault.
- Verb:Sound loudly and harshly: "the ambulance arrived, siren blaring".
- Noun:A loud harsh sound.
- Verb:Shout or call out noisily and unrestrainedly: "“Move!” bawled the drill sergeant"; "lustily bawling out the hymns".
- Noun:A loud, unrestrained shout.
- Synonyms:verb. yell - shout - scream - cry - roar - vociferate - bellow
- Verb:Walk slowly and with heavy steps, typically because of exhaustion or harsh conditions.
- Noun:A difficult or laborious walk: "the long trudge back".
- Verb:Burn slowly with smoke but no flame.
- Noun:Smoke coming from a fire that is burning slowly without a flame.
- Web definitions:beating as a source of erotic or religious stimulation.
- Adjective:(of a person's face) Pale, typically because of poor health. Feeble or insipid.
- Synonyms:pale - wan - colorless - sallow - mealy - colourless
- Noun:Used to refer to a person or thing whose name one cannot recall, does not know, or does not wish to specify.
- Verb:Move along slowly, typically in a small irregular group, so as to remain some distance behind the person or people in front.
- Noun:An untidy or irregularly arranged mass or group of something: "a straggle of cottages".
- Adjective:Contemptibly lacking in courage; cowardly.
- Synonyms:cowardly - lily-livered - recreant - chicken-hearted
- Adjective:Not taut or held tightly in position; loose.
- Noun:The part of a rope or line not held taut; the loose or unused part. Coal dust or small pieces of coal.
- Verb:Loosen (something, esp. a rope). Adverb:Loosely.
- Synonyms:adjective. lax - loose - sluggish - slow - remiss - limp - languid; verb. slacken - relax
- Adjective:(of a person's complexion) Of an unhealthy yellowish color. Verb:Make sallow.
- Noun:A willow tree, esp. one of a low-growing or shrubby kind.
- Synonyms:adjective. pale - pallid - wan - yellowish - pasty; noun. willow - osier
- Adjective:Showing wild and apparently deranged excitement and energy: "manic enthusiasm". Frenetically busy; frantic: "the pace is manic as we near our deadline".
- Noun:A deep hoarse sound made by a frog.
- Verb:(of a frog) Make a characteristic deep hoarse sound.
- Synonyms:noun. caw; verb. caw
- Verb:(of a liquid) Flow or leak slowly through porous material or small holes.
- Noun:A place where petroleum or water oozes slowly out of the ground.
- Synonyms:verb. ooze - leak - percolate - trickle - drip - filter; noun. seepage - leakage
- Verb:(esp. of a machine or a bird's wings) Make a low, continuous, regular sound.
- Noun:A sound of such a type: "the whir of the projector".
- Synonyms:verb. whirr - buzz - hum - zoom - drone - whiz; noun. whirr - buzz - drone - hum - whizz - whiz - zoom
- Noun:A tiny spot: "the figure had become a mere speck".
- Verb:Mark with small spots: "their skin was specked with goose pimples".
- Synonyms:noun. spot - speckle - fleck - stain; verb. speckle - mottle - spot
- Adjective:(of glue, paint, etc.) Retaining a slightly sticky feel; not fully dry.Showing poor taste and quality: "his tacky decor".
- Synonyms:sticky - adhesive - gummy - gluey - viscous - glutinous
- Noun:A drug taken illegally for recreational purposes, esp. marijuana or heroin.
- Verb:Administer drugs to (a racehorse, greyhound, or athlete) in order to inhibit or enhance sporting performance.
- Synonyms:noun. narcotic - drug; verb. drug
Noun:A large windproof jacket with a hood, designed to be worn in cold weather.A hooded jacket made of animal skin, worn by Eskimos.
Noun:A cozy or comfortable place, esp. someone's private room or den.
- Noun:A person who drinks excessive amounts of cheap wine or other alcohol, esp. one who is homeless.
- Synonyms:drunkard - drinker - boozer
- Adjective:Looking exhausted and unwell, esp. from fatigue, worry, or suffering.Noun:A haggard hawk.
- Synonyms:emaciated - gaunt
- Noun:A lump or bundle of a soft material, used for padding, stuffing, or wiping: "a wad of cotton".
- Verb:Compress (a soft material) into a lump or bundle: "a wadded handkerchief".
- Synonyms:noun. bundle - sheaf; verb. pad
- Verb:Lift or carry (something heavy).Noun:The weight of someone or something.
- Synonyms:verb. raise - lift - heave - weigh; noun. weight - heaviness - ponderosity - burden
Noun:A lengthy and complicated procedure.A long, rambling story or statement.
- Verb:Employ endearments or flattery to persuade someone to do something or give one something: "you can wheedle your way onto a court".Coax or persuade someone to do something.
- Synonyms:cajole - coax - blandish - flatter - adulate
Noun:A person, esp. a lawyer, who uses unscrupulous, fraudulent, or deceptive methods in business.
- Verb:Move in a feeble or unsteady way.
- Noun:A feeble or unsteady gait.
- Synonyms:verb. stagger - wobble - falter - reel - waver - wabble - sway; noun. wobble - wabble - stagger
- Adjective:Safe to drink; drinkable.
- Noun:Dirt ingrained on the surface of something, esp. clothing, a building, or the skin.
- Verb:Blacken or make dirty with grime.
- Synonyms:noun. dirt - filth - dirtiness - muck - soot - squalor; verb. soil - stain - befoul - smirch - sully - foul - smear
- Verb:Make dim; blur: "you would blear your eyes with books".
- Adjective:Dim, dull, or filmy.
- Noun:A film over the eyes; a blur.
- Synonyms:verb. dim - blur - fog - bedim - haze - mist; adjective. bleary - cloudy - misty - dim - hazy
A soft felt hat.
- Adjective:Shabby and untidy or dirty.
- Synonyms:slovenly - untidy - unkempt
A tangled or knotted mass: "a snaggle of import restrictions".
Web definitions:bonyness: extreme leanness (usually caused by starvation or disease).
- Verb:Crush (something, typically paper or cloth) so that it becomes creased and wrinkled.
- Noun:A crushed fold, crease, or wrinkle.
- Synonyms:verb. rumple - crush - crease - wrinkle - crinkle; noun. crease - pucker - wrinkle - crinkle - fold
Liquid waste or sewage discharged into a river or the sea.
- Noun:An object or space used to contain something: "trash receptacles".An organ or structure that receives a secretion, eggs, sperm, etc.
- Synonyms:container - vessel
- Noun:An abrupt uncontrolled movement, esp. an unsteady tilt or roll.Leave an associate or friend abruptly and without assistance or support in a difficult situation.
- Verb:Make an abrupt, unsteady, uncontrolled movement or series of movements; stagger: "the car lurched forward".
- Synonyms:stagger - reel
- Noun:A small room or closet in which food, dishes, and utensils are kept.
- Synonyms:larder - storeroom
- Verb:Admit that something is true or valid after first denying or resisting it.Admit (defeat) in a contest: "he conceded defeat".
- Synonyms:admit - allow - grant - acknowledge - recognize - accept
- Verb:Formally declare one's abandonment of (a claim, right, or possession).Refuse to recognize or abide by any longer.
- Synonyms:relinquish - repudiate - disclaim - waive - abdicate
- Adjective:Lacking physical strength, esp. as a result of age or illness.(of a sound) Faint.
- Synonyms:weak - faint - frail - infirm - weakly - low - sickly
- Verb:Gather or collect (something, esp. information or approval): "garner evidence".
- Synonyms:store - collect - hoard - gather - accumulate - stock
- Noun:A person who engages in crime and violence; a hooligan or gangster.
- Synonyms:hooligan - roughneck - ruffian - rowdy - thug - hood
- A small, slender, carnivorous mammal (genus Mustela), esp. M. nivalis of northern Eurasia and northern North America, related to, but...
- Verb:Achieve something by use of cunning or deceit: "trying to weasel my way into his affections".
- Synonyms:noun. stoat; verb. equivocate - prevaricate
Noun:A person who is easily taken advantage of, esp. by being cheated or blamed for something.
Noun:A homeless and helpless person, esp. a neglected or abandoned child: "various waifs and strays".An abandoned pet animal.
- Noun:A state of depression: "I sat absorbed in my own blue funk".A style of popular dance music of US black origin, based on elements of blues and soul and having a strong rhythm that typically...
- Synonyms:fright - fear - dread - scare - alarm - coward - awe
- Noun:A person considered to be insignificant, esp. because they are small or young.
- Verb:(of a sheep, goat, or calf) Make a characteristic wavering cry: "the lamb was bleating weakly".
- Noun:The wavering cry made by a sheep, goat, or calf.
- Synonyms:verb. baa; noun. baa
Verb:Force (someone) to join a ship by drugging them.Coerce or trick (someone) into a position or into doing something.
- Noun:A drinking glass with a foot and a stem.A metal or glass bowl-shaped drinking cup, sometimes with a foot and a cover.
- Synonyms:cup - beaker - bowl - chalice - glass
Web definitions:addlebrained: stupid and confused; "blathering like the addlepated nincompoop that you are"; "a confused puddingheaded...".
- Adjective:(of a person) Famous and respected within a particular sphere or profession.Used to emphasize the presence of a positive quality: "the guitar's eminent suitability for studio work".
- Synonyms:distinguished - outstanding - notable - prominent - great
- Adjective:(of a metal or other material) Able to be hammered or pressed permanently out of shape without breaking or cracking.Easily influenced; pliable.
- Synonyms:pliable - pliant - supple - flexible - yielding - ductile
- Verb:Engage in petty argument or bargaining.Treat something casually or irresponsibly; toy with something.
- Synonyms:bargain - haggle
Noun:A small traveling bag or suitcase.
A young person in the 1950s and early 1960s belonging to a subculture associated with the beat generation.
- Verb:Irritate intensely; infuriate.
- Synonyms:irritate - aggravate - provoke - nettle - enrage - anger
bestubbled: having a short growth of beard; "his stubbled chin".
- Noun:A long bench with a back, placed in rows in the main part of some churches to seat the congregation.An enclosure or compartment containing a number of seats, used in some churches to seat a particular worshiper or group of worshipers.
Web definitions: A thick quantity of sputum, usually containing phlegm; Any thick, disgusting liquid.
Noun:A dog of no definable type or breed.Any other animal resulting from the crossing of different breeds or types.
- Noun:A large, heavy shoe.A foolish, awkward, or clumsy person.
- Synonyms:bumpkin - lout - boor
- Adjective:(of a man) Confident, stylish, and charming.
- Synonyms:affable - jovial
Surprise (someone) greatly; astonish.
excited in anticipation.
any exciting and complex play intended to confuse (dazzle) the opponent.
- Adjective:Unconventional and slightly disreputable, esp. in an attractive manner.
- Synonyms:rakish - dissolute - dissipated - vulgar
- Verb:Scrape or brush the surface of (a shoe or other object) against something.
- Noun:A mark made by scraping or grazing a surface or object.
- Synonyms:verb. shuffle; noun. shuffle
- Noun:A dry, rough protective crust that forms over a cut or wound during healing.
- Verb:Become encrusted or covered with a scab or scabs: "she rested her scabbed fingers on his arm".
A creeping grass (Digitaria and other genera) that can become a serious weed.
- Noun:A plant disease, esp. one caused by fungi such as mildews, rusts, and smuts.
- Verb:Infect (plants or a planted area) with blight: "a peach tree blighted by leaf curl".
- Synonyms:noun. rustverb. destroy - blast
Adjective:Unpleasant and of poor quality.Unwell.
a rattling sound as of hard things striking together; "a clattery typewriter"; "the clattery sound of dishes".
- Noun:A framework consisting of a horizontal beam supported by two pairs of sloping legs, used in pairs to support a flat surface.An open braced framework used to support an elevated structure such as a bridge.
(used colloquially) having the relationship of friends or pals.
- Verb:Raise (one's shoulders) and bend the top of one's body forward: "he hunched his shoulders"; "he hunched over his glass".
- Noun:A feeling or guess based on intuition rather than known facts: "acting on a hunch".
- Synonyms:verb. bend - crook - stoop - bow; noun. hunk - premonition - humpback
Verb:Make a continuous low humming sound.Speak tediously in a dull monotonous tone: "while Jim droned on".
- Noun:Minor crime, esp. that committed by young people.Neglect of one's duty.
- Synonyms:crime - offence - offense - criminality
1. To raise or lift, especially with great effort or force. 2. a. To throw (a heavy object) with great effort; hurl. 3. To utter with effort or pain. 4. To vomit (something).
1. To rise up or swell, as if pushed up; bulge: The sidewalk froze and heaved. 2. To rise and fall in turn, as waves.
- Adjective:Having a sensation of whirling and a tendency to fall or stagger; dizzy.
- Verb:Make (someone) feel excited to the point of disorientation.
- Synonyms:adjective. dizzy - vertiginous - light-headed - frivolous; verb. swim - whirl
1. To cause to flow or spurt out. 2. To utter volubly and tediously.
The flat part of either side of the head between the forehead and the ear.
Verb:Make a small hole in (something) with a sharp point; pierce slightly.Noun:An act of piercing something with a fine, sharp point.
- Noun:The production of a partial vacuum by the removal of air in order to force fluid into a vacant space or procure adhesion.
- Verb:Remove (something) using suction.
- Verb:Steal goods from (a place or person), typically using force and in a time of war or civil disorder.
- Noun:The violent and dishonest acquisition of property.
- Synonyms:verb. rob - loot - pillage - sack - despoil - maraud - ransack; noun. pillage - loot - spoil - booty - robbery - rapine - sack
- Noun:An animal's foot having claws and pads.
- Verb:(of an animal) Feel or scrape with a paw or hoof.
- Synonyms:claw - foot - hand - pad
- Verb:(of an animal or bird of prey) Spring or swoop suddenly so as to catch prey.Smooth down by rubbing with pounce or pumice.
- Noun:A sudden swoop or spring.A fine resinous powder formerly used to prevent ink from spreading on unglazed paper or to prepare parchment to receive writing.
- Synonyms:claw - talon
adamance: resoluteness by virtue of being unyielding and inflexible.
- Verb:Eat (something) hurriedly and noisily.(of a male turkey) Make a characteristic swallowing sound in the throat.
- Noun:The gurgling sound made by a male turkey.
- Verb:Thrust or spread (things, esp. limbs or fingers) out and apart: "her hands were splayed across his broad shoulders".
- Noun:A widening or outward tapering of something, in particular.
- Adjective:Turned outward or widened: "the girls were sitting splay-legged".
- Synonyms:verb. widen - spread - broaden; adjective. oblique - skew - slanting
Adjective:Sounding harsh and unpleasant.
- To utter in a grating voice
- To grate on (nerves or feelings).
Verb:Wind a line on to a reel by turning the reel.Bring something attached to a line, esp. a fish, toward one by turning a reel and winding in the line.
- Verb:Push, elbow, or bump against (someone) roughly, typically in a crowd.
- Noun:The action of jostling.
- Synonyms:verb. shove - push - hustle - thrust; noun. push - shove - thrust
- Muslim (specifically Sufi) religious man who has taken vows of poverty and austerity. Dervishes first appeared in the 12th century;...
- Synonyms:fakir
Carry, wield, or convey (something heavy or substantial): "help him tote the books"; "a gun-toting loner".
- Noun:The skin of a male deer.Grayish leather with a suede finish, traditionally made from such skin but now more commonly made from sheepskin.
- Synonyms:deerskin - doeskin - chamois
- Verb:Destroy utterly; wipe out.Cause to become invisible or indistinct; blot out.
- Synonyms:efface - erase - delete - wipe out - expunge - annihilate
Verb:Grind (one's teeth) together, typically as a sign of anger.(of teeth) Strike together; grind.
Verb:(of a liquid) Flow out in a rapid and plentiful stream, often suddenly.Send out in a rapid and plentiful stream.
- Noun:A vehicle on runners for conveying loads or passengers esp. over snow or ice, often pulled by draft animals.A sledgehammer.
- Verb:Carry (a load or passengers) on a sledge.
- Synonyms:noun. sled - sleigh - toboggan - sledgehammer; verb. sled - toboggan - sleigh
- Noun:A swordlike stabbing blade that may be fixed to the muzzle of a rifle for use in hand-to-hand fighting.
- Verb:Stab (someone) with a bayonet.
- Synonyms:sword bayonet
- Noun:A series of shots fired or missiles thrown all at the same time or in quick succession: "a fusillade of accusations".
- Synonyms:volley - firing - shooting
- Verb:Have a strong unpleasant smell: "the place stank like a sewer"; "his breath stank of drink".Fill a place with such a smell.
- Synonyms:stink - smell - reek
- Verb:(of a wheeled vehicle or its occupants) Move slowly and heavily, typically in a noisy or uneven way: "ten cars trundled past".
- Noun:An act of moving in such a way.
- Synonyms:verb. roll - wheel; noun. caster - roll - castor - truckle
Recover one's health and strength over a period of time after an illness or operation. | <urn:uuid:048cbd76-0bbe-42ff-a8c8-f98c2bef20e4> | CC-MAIN-2017-17 | https://www.freezingblue.com/flashcards/print_preview.cgi?cardsetID=182381 | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121000.17/warc/CC-MAIN-20170423031201-00543-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.775266 | 6,208 | 2.75 | 3 |
Fauna (фауна). The present fauna of Ukraine began to develop in the late Eocene epoch, if not earlier. Only a few fossil remains of land animals of the Eocene and Paleocene epochs have been discovered in Ukraine. Among the mammals at the end of the Paleocene were the piglike Anthracotheriidae and the hornless rhinoceros Chilotherium; among the birds were the cormorant, sea gull, stork, wild duck, and owl. The rivers were inhabited by crocodiles and the seas by whales (zeuglodons) and many forms of mollusk and fish related to modern representatives of these groups.
In the Neogene period, when the climate was subtropical, the Hipparion fauna flourished on the extensive steppes. It was represented mostly by herbivores such as the Hipparion, the ancestor of our horse, rhinoceros (Dicerorhinus orientalis and D. etruscus), mastodon (Anancus, Mastodon), elephant (Archidiskodon meridionalis), giant deer (Eucladocerus pliotarandoides), archaic camel (Paracamelus), saber-toothed tiger (Machairodus), and beaver-like trogontherium (Trogontherium cuvieri). Beside them lived families and species that still exist (or existed until recent times) both inside Ukraine—for example, the desman (Desmana moschata), pika (Ochotona), hedgehog, fox, brown bear, rabbit, great bustard, chicken, partridge—or outside Ukraine—for example, the monkey, giraffe, porcupine, ostrich, and marabou. In the late Pliocene most of the present invertebrate and vertebrate species were to be found in Ukraine.
During the several ice ages in the Pleistocene epoch much of Ukraine's territory was covered with glaciers. In the tundra bordering them lived species adapted to the cold climate. In the interglacial periods the forest fauna returned to the lands from which the ice sheets retreated. After each glaciation the Tertiary species diminished greatly: some died out and others (eg, the monkey, ostrich, and giraffe) migrated south, but many withstood the existing conditions. Thus, in the period of maximum glaciation, there was a characteristic mixed mammoth fauna in Ukraine. Beside its typical representatives—the mammoth, woolly rhinoceros, cave-dwelling bear and lion, spotted hyena—lived such arctic animals as the musk-ox (Ovibos moschatus), arctic fox (Alopex lagopus), and true lemming (Lemmus lemmus); such steppe animals as the saiga (Saiga tatarica), bobak marmot (Marmota bobak), ground squirrel or suslik (Citellus); and such forest animals as the beaver. Besides these, wild horses, bisons, deer, foxes, wolves, brown bears, hares, wild ducks, geese, woodcocks, and many other extant species inhabited Ukraine.
In the Middle Holocene the climate became similar to what it is today. Many species of the earlier period gradually perished (mammoth, rhinoceros); some species moved farther north (musk-ox). In the relatively short dry period that succeeded the retreat of the glaciers, the steppe species advanced far north.
Human activities—hunting, herding, agriculture—contributed to the impoverishment of the mammoth fauna. The cultivation of the steppes, destruction of the forests, and draining of the marshes contracted the habitat of many species and contributed to their migration from Ukraine or to their extinction. Within historical times reindeer ceased visiting Ukraine during their winter migration. In the 16th century the Asiatic wild ass (Equus hemionus) and in the 17th century the aurochs and bison became extinct. In the 19th century the wolverine and flying squirrel disappeared from the forest zone; the tarpan, saiga, and ruddy ground squirrel disappeared from the steppe and forest-steppe zone; the alpine marmot (Marmota marmota), willow grouse (Lagopus lagopus), and white hare (Lepus timidus) vanished from the Carpathian Mountains; and the wild boar disappeared from the Crimean Mountains. Some species have become rare: the little bustard (Otis tetrax), great bustard (Otis tarda), ruddy sheldrake (Tadorna ferruginea), and swan. Some valuable fish species—sturgeon and eel—have disappeared from the rivers. Some species, for example, the suslik, have spread with the cultivation of the steppes; these include pests (mouse, hamster). As well, humans have added to the variety of fauna by acclimatizing such new species as the nutria, raccoon, silkworm, and others.
Zoogeographically, the fauna of Ukraine belongs to the Euro-Siberian zone of the Palaearctic subregion of the Holarctic region. Only the Crimean Mountains and the southwest part of Caucasia belong to the Mediterranean subzone. Because of Ukraine's border locations and the absence of natural barriers, its fauna is intermediate between the fauna of Europe and Central Asia, and between the fauna of the forest belt and the steppe belt and that of the subtropics. The western boundaries of the habitats of many eastern species run through Ukraine; for example, of the yellow suslik (Citellus fulvus) and pygmy suslik (Citellus pygmaeus). The eastern and northern limits of the habitats of many western European and Mediterranean species are found in Ukraine: Bechstein's bat (Myotis bechsteini), the wildcat (Felis silvestris), the eel (Anguilla anguilla), the Baltic sturgeon (Acipenser sturio), and others. The Dnieper River is an important natural barrier to the east-west distribution of animals. The habitat of many northern species—elk, lynx, brown bear, white hare, capercaillie, black grouse, hazel hen, and others—extends as far south as Ukraine. The northern boundary of many southern species, including the lesser mouse-eared bat (Myotis oxygnatus), long-winged bat (Miniopterus schreibersi), and many insects, lies in Ukraine. There are few endemic species in Ukraine.
In general, there are about 28,000 species on the territory of Ukraine, among them more than 690 species of vertebrates (101 mammal, 350 bird, 21 reptile, 19 amphibian, and over 200 fish, 110 of which are freshwater species), over 1,500 protozoan species, over 700 species of worm, 400 crustacean species, 330 mollusk species, about 3,300 arachnid species, 20,000 other species of insects, and about 1,080 others.
The zoogeographical regions of Ukraine coincide with the natural biogeographical zones: Polisia, forest-steppe, steppe, semidesert, littoral, mountain (Carpathian Mountains, Crimean Mountains, and western Caucasia), and marine (the Black Sea and the Sea of Azov).
The fauna of Polisia is known for its variety of forest and swamp species. In the past Polisia had large numbers of elk, lynx, bear, capercaillie, black grouse, and hazel hen, which today can be found only in the remotest areas. Wild boars, wolves, foxes, roe deer, forest martens, and other species are quite common. Valuable fur-bearing animals such as the beaver (on reservations along the Teteriv River, the Horyn River, the Prypiat River, the Dnieper River, and the Desna River), mink, otter, and ermine have survived here. The red-backed mouse (Clethrionomys glareolus), field vole (Microtus agrestis), striped field mouse (Apodemus agrarius), and red-toothed shrew (Sorex) live in the forests, swamps, and meadows. Untilled, open fields are inhabited by the common mole, gray common vole (Microtus arvalis), wood mouse, and other species. Among the birds are found the capercaillie, black grouse, hazel hen, tit, rock dove, spotted eagle (Aquila clanga), short-toed eagle (Circaëtus ferox), and tree pipit (Anthus trivialis). Along rivers and swamps wild ducks, snipes, bald coots, black storks (Ciconia nigra), and common cranes (Grus grus) thrive. At crystalline cliffs and outcrops along the rivers the bee-eater (Merops apiaster) and rock thrush (Monticola saxatilis) can be found. Among the reptiles, the forest grass snake, mud tortoise, asp, and such lizards as blindworm (Anguis fragilis), fast lizard (Lacerta agilis), and viviparous lizard (Lacerta vivipara) are quite common. Amphibians such as the newt, common fire-bellied toad (Bombina bombina), bullfrog, and frog are widely distributed. Rivers and lakes sustain such fish as the carp, tench, and pike. Various species of beetles, bugs, mosquitoes, cicadas, and other insects live here.
The forest-steppe is a transitional zone in which steppe species live side by side with forest species. Certain forest species that inhabit wooded river banks penetrate far into the steppe, for example, the squirrel and the pine marten (Martes martes). Among the forest species that inhabit the forest-steppe are the roe deer, hazel mouse (Muscardinus avellanarius), forest dormouse (Dyromys nitedula), gray dormouse (Glis glis), red-backed mouse, and field vole. The steppe species that can be found deep in the forest-steppe are the steppe polecat (Mustela eversmanni), birch mouse (Sicista subtilis), mole rat (Spalax), and gray hamster (Cricetulus migratorius). The Left-Bank forest-steppe provides a habitat for the steppe lemming (Lagurus lagurus) and the large jerboa (Alactaga jaculus). The spotted suslik (Citellus suslica) is common in the entire forest-steppe, while the European suslik (Citellus citellus) lives in the southwestern section. The black-bellied hamster (Cricetus cricetus) and gray vole are fairly widespread. The red kite (Milvus milvus), stock dove (Columba oenas), ringdove (Columba palumbus), common turtledove (Streptopelia turtur), green woodpecker (Picus viridis), thrush nightingale (Luscinia luscinia), and other species inhabit oak forests. The common quail and partridge are widely distributed. Since the Dnieper River is a migration route and the forests, ponds, and fields of the forest-steppe are stations for migratory birds, wild geese, cranes, and various species of wild duck can be found here. Among reptiles, the Aesculapian snake, viper, common and tree snakes, mud tortoise, and lizard are the most common. The amphibians are represented by the pond frog (Rana esculenta), common newt, crested newt (Triturus cristatus), and others. Among the insects of the forest-steppe are such pests as the owlet moth (Agrotis segetum), sugar-beet weevil (Bothynoderes punctiventris), and lamellicorn (Lethrus apterus).
The Tysa Lowland of Transcarpathia is a unique region of the forest-steppe. Here are found species that are rare or non-existent in other regions of the forest-steppe: the greater horshoe bat (Rhinolophus ferrum-equinum), Ikonnikov bat (Myotis ikonnicovi), lesser mouse-eared bat, and long-winged bat. There are many Mediterranean species among the insects.
In the steppe the most typical species are rodents that are adapted to open plains and an arid climate. As in the forest-steppe, so in the steppe the western, central, and eastern sections differ from one another. The mammals that are typical of the steppe are the large jerboa, pygmy suslik, steppe vole, mole-vole (Ellobius talpinus), and three-toed sand jerboa (Scirtopoda tellum), all of which live east of the Dnieper River. The western steppe is inhabited by the spotted suslik, mole rat, and steppe polecat. The lesser mole rat (Spalax leucodon) is found in the southwest, while the marbled polecat (Vormela peregusna) is limited to the southern part of the steppe and is retreating gradually eastward into the Asiatic steppes. The bobak marmot, which formerly was common in the west, still survives on reservations along the Donets River. In the eastern steppe one can still encounter the corsac fox (Vulpes corsak) and long-eared hedgehog (Hemiechinus auritus). The birds that are typical of the steppe are rare today: the great bustard, calandra lark (Melanocorypha calandra), demoiselle crane (Anthropoides virgo), black-winged pratincole (Glareola nordmanni), tawny eagle (Aquila rapax), and little bustard. The most common reptiles are the yellow-bellied coluber (Coluber jugularis), steppe viper (Vipera renardi), four-striped snake (Elaphe quatuorlineata), and green lizard (Lacerta viridis). The steppe lizard (Eremias arguta) is less common. A great variety of insects, including Italian and Asiatic locusts, inhabit the steppe.
In the east the steppe changes into semidesert, where desert species from the Aral-Caspian deserts exist side by side with steppe species.
The steppe north of the Black Sea and the Sea of Azov belongs to the littoral region, in which water fowl and aquatic animals are prominent. Many bird species are found here: the herring gull (Larus argentatus), tern (Sterna), Kentish plover (Charadrius alexandrinus), avocet (Recurvirostra avosetta), and spoonbill (Platalea leucorodia). The Old World white pelican (Pelecanus onocrotalus) and the eastern glossy ibis (Plegadis falcinellus) nest at the mouth of the Danube River. Migratory birds pass through or winter in this region: in spring and fall innumerable geese and wild ducks come here. The muskrat, European otter, mink, ondatra (acclimatized), goose, wild duck, occasional swan, gadflies, gnats, and mosquitoes inhabit the river floodplains. In the limans and deltas is found a mixture of sea and freshwater fish species. Mullet, medusa, flounder, and other sea species live alongside migratory species.
In the Black Sea dolphins and the rare white-bellied seals (Monachus monachus) are found. The fish of the Sea of Azov and the coastal waters of the Black Sea are very similar. Yet there are some local species: the Azov herring, Azov anchovy, Azov percarina, sardelle, great plaice, sprats, and gobies are found in the Sea of Azov, and the Black Sea trout, Black Sea herring, seahorse, tunny, mackerel, sturgeon, and others are found in the Black Sea.
The Carpathian Mountains contain mostly forest fauna. The vertical distribution of animals is to a great extent related to the vegetation. High mountain species—the snow vole (Microtus nivalis), alpine shrew (Sorex alpinus), water pipit (Anthus spinoletta), alpine accentor (Prunella collaris), and certain insects—are confined to the subalpine and alpine zone of the Carpathians. Some taiga species are found in the mountain forest zone: lynx, capercaillie, hazel hen, black grouse, and others. But most of the fauna consists of Central European forest species, which appear in all regions of the Carpathians: the now rare wildcat and brown bear, and roe deer, forest marten, ermine, Carpathian squirrel, wild boar, wolf, fox, golden eagle, hawk, owl, rock pipit, woodcock, and others. There are many species of reptiles and amphibia (Carpathian newt, spotty salamander, smooth snake, etc) and many European mountain species of insects, mollusks, and other invertebrates.
The fauna of the Crimean Mountains, especially that of the southern coast of the Crimea, is Mediterranean. The fauna is insular in character: species that are typical of the forest or steppe belts are absent. Instead, there are many endemic subspecies. The forests are inhabited by the Crimean red deer and roe deer, Crimean mountain fox, badger, bats, and other species. The squirrel and mouflon are acclimatized. Of the bird species, we find the griffon vulture, tawny owl, Crimean jay, and tomtit. The Crimean scorpion (Euscorpius tauricus) and spider solifug (Galeodes araneides) represent the arachnids, and the scolopendra (Scolopendra cingulata) represents the centipedes. There are many Mediterranean species, which arrived in the Paleogene when the Crimea was connected to the Balkans and western Caucasia, for example, the Crimean gecko (Gymnodactylus kotschyi Danilevskii), Crimean cicada (Cicada taurica), Crimean beetle (Procerus tauricus), and Crimean mantis (Ameles taurica). Many Mediterranean species of mollusk are found here. Of the pests introduced from the outside, the phylloxera is harmful. Mediterranean fauna appears also on the Black Sea coast of western Caucasia.
The ichthyofauna of Ukrainian rivers consists mainly of members of the Cyprinidae family. The main European watershed between the Baltic Sea and the Black Sea limits the distribution of certain fish species. In the rivers of Ukraine that flow into the Baltic are found the eel and Baltic sturgeon. The rivers flowing into the Black Sea contain gobies (Gobiidae), vyrezub (Rutilus frisi), sterlet (Acipenser ruthenus), other sturgeons, and the Ukrainian lamprey (Lampetra mariae), along with other species. In the Tysa River, the Cheremosh River, and mountain tributaries of the Danube River are found the Danube salmon (Hucho hucho), striped ruff (Acerina schraetser), and little zingle (Aspro zingel). Mountain streams contain trout and grayling (Thymallus thymallus).
To protect what remains of the rich flora and fauna of Ukraine from extinction, a network of nature preserves (zapovidnyky) that provide full protection to wildlife have been set up, including four biosphere reserves (biosferni zapovidnyky) and seventeen national parks. There are also 2709 wildlife refuges (zakaznyky) that provide partial protection. Many species—the beaver, lynx, muskrat, bobak marmot, elk, great bustard, and others—are under special protection.
Khranevych, V. Narys favny Podillia (Vinnytsia 1925)
Barabash-Nykyforov, I. Narysy favny stepovoï Naddniprianshchyny (Dnipropetrovsk 1928)
Sharleman, M. Zooheohrafiia URSR (Kyiv 1937)
———. Ptakhy URSR (Kyiv 1938)
Myhulin, O. Zviri URSR (Kyiv 1938)
Zhars’kyi, E. ‘Tvarynnist’ Ukraïny,’ in Heohrafiia ukraïns’kykh i sumezhnykh zemel’, ed V. Kubiiovych (Cracow–Lviv 1943)
Voïnstvens’kyi, M.; Kistiakivs’kyi, O. Vyznachnyk ptakhiv URSR (Kyiv 1952)
Markevych, O.; Korotkyi, I. Vyznachnyk prisnovodnykh ryb URSR (Kyiv 1954)
Slastenenko, E. The Fishes of the Black Sea Basin (Istanbul 1955–6)
Fauna Ukraïny, 40 vols (Kyiv 1956–)
Kornieiev, O. Vyznachnyk zviriv URSR (Kyiv 1956)
Tatarynov, K. Zviri zakhidnykh oblastei Ukraïny (Kyiv 1956)
Sokur, I. Ssavtsi fauny Ukraïny ta ïkh hospodars’ke znachennia (Kyiv 1960)
Siroechkovskii, E.; Rogacheva, E. Zhivotnyi mir SSSR: Geografiia resursov (Moscow 1975)
[This article was updated in 2008.] | <urn:uuid:8f509d28-ea57-459d-8997-2669012bca9e> | CC-MAIN-2017-17 | http://www.encyclopediaofukraine.com/display.asp?linkpath=pages%5CF%5CA%5CFauna.htm | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917125841.92/warc/CC-MAIN-20170423031205-00370-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.871818 | 4,885 | 3.53125 | 4 |
Type of site: Internet encyclopedia project
Editor: German Wikipedia community
Launched: March 16, 2001
Founded in March 2001, it is the second-oldest and, with 2,057,600 articles, at present (2017) the fourth-largest edition of Wikipedia by number of articles, behind the English Wikipedia, the Swedish Wikipedia and the Cebuano Wikipedia. It has the second-largest number of edits. On 7 November 2011, it became the second edition of Wikipedia, after the English edition, to exceed 100 million page edits.
The German edition of Wikipedia was the first non-English Wikipedia subdomain, and was originally named deutsche.wikipedia.com. Its creation was announced by Jimmy Wales on 16 March 2001. One of the earliest snapshots of the home page, dated 21 March 2001 (revision #9), can be seen at the Wayback Machine site. Aside from the home page, creation of articles in the German Wikipedia started as early as April 2001, apparently with translations of Nupedia articles. The earliest article still available on Wikipedia's site is apparently Polymerase-Kettenreaktion, dated May 2001.
Andrew Lih wrote that the hacker culture in Germany and the Verein concept (the registered voluntary association under German law) solidified the German Wikipedia's culture. The geography of Europe facilitated face-to-face meetups among German Wikipedians.
Growth, coverage and popularity
On 27 December 2009, the German Wikipedia edition exceeded 1,000,000 articles, becoming the first edition after the English-language Wikipedia to do so. The millionth article was Ernie Wasson. In November 2008, 90% of the edition's articles had more than 512 bytes, 49% had more than 2 kilobytes, and the average article size was 3,476 bytes. By mid-2009 this edition had nearly 250,000 biographies, and by December 2006 it already contained more than 48,500 disambiguation pages.
Compared to the English Wikipedia, the German edition tends to be more selective in its coverage, often rejecting small stubs, articles about individual fictional characters, and similar material. Instead, there is usually a single article covering all the characters from a given fictional setting, and only when the setting is considered important enough (for example, all characters from Star Wars are listed in one article). A dedicated article about a single fictional entity generally exists only if the character in question has had a very significant impact on popular culture (for example, Hercule Poirot). Andrew Lih wrote that German Wikipedia users believe that "having no article at all is better than a very bad article." As a result, growth on the German Wikipedia leveled off before it did on the English Wikipedia, with accelerating growth in article count shifting to roughly constant growth in mid-2006. The number of users signing up for accounts began to decline steadily from 2007 through 2008.
In January 2005, Google Zeitgeist reported that "Wikipedia" was the eighth most-searched query on Google.de. In February 2005, Wikipedia reached third place behind Firefox and Valentine's Day. In June 2005, Wikipedia ranked first.
Language and varieties of German
Separate Wikipedias have been created for several other varieties of German, including Alemannic German (als:), Luxembourgish (lb:), Pennsylvania German (pdc:), Ripuarian (including Kölsch; ksh:), Low German (nds:) and Bavarian (bar:). These, however, are far less popular than the German Wikipedia.
The German Wikipedia is different from the English Wikipedia in a number of aspects.
- Compared to the English Wikipedia, different criteria of encyclopedic notability are applied through the judgment of the editors when deciding whether an article about a topic should be allowed. The notability criteria are more specific: each field has its own guidelines.
- There are no fair use provisions. Images and other media that are accepted on the English Wikipedia as fair use may not be suitable for the German Wikipedia. However, the threshold of originality for works of applied art is set much higher, which often allows the use of company logos and similar icons, too.
- The use of scholarly sources, in preference over journalistic and other types of sources, is more strongly encouraged. The German Verifiability (Belege) guideline classifies scholarly sources as inherently more reliable than non-academic sources; the latter's use is – in theory at least – only permitted if there is a lack of published academic sources covering a topic.
- In September 2005, Erik Möller voiced concern that "long term page protection is used excessively on the German Wikipedia": on 14 September 2005, 253 pages were fully protected (only editable by admins) for more than two weeks (compared to 138 in the English Wikipedia). This was the highest number of such protections among all Wikipedias. As of May 2008, the German Wikipedia still had the highest percentage of semi-protected articles (0.281%) among the ten largest Wikipedias (articles not editable by unregistered or recently registered users), but with respect to the fraction of fully protected articles (0.0261%) it actually ranked fourth, behind the Japanese, Portuguese and English Wikipedias.
- Vandalism and other abuse is often handled in a less formal way. Vandals may be blocked on their first edit and without warning if that edit clearly shows a lack of interest in actual encyclopedic work. This is especially true if the added text includes unlawful statements, such as Holocaust denial.
Similarly, the Checkuser function is rarely used to determine multiple accounts, as "suspicious" accounts are often blocked on sight.
- Articles on indisputably notable subjects may be deleted if they are deemed too short. While the requirements for minimal articles (called stubs) are equivalent, the German and the English Wikipedia differ greatly in the way they are put into practice.
- On 28 December 2005 it was decided to eliminate the Category "stub" (and the corresponding template identifying articles as stubs) from the German Wikipedia.
- Users do not have to create an account in order to start a new article.
- Unlike the French, Polish, Dutch, Italian, Swedish or many other Wikipedias, the German one does not contain large collections of bot-generated geographical stubs or similar articles.
- The German Wikipedia did not have an Arbitration Committee until May 2007. The committee currently plays only a minor role in Wikipedia politics.
- Categories are singular and are not differentiated for gender. Categories are usually introduced only for a minimum of ten entries and are not always subdivided even for larger numbers of items, so that current categories often describe only one property (e.g., nationality). Other categories are subdivided, but differently than in the English Wikipedia. For example, "chemists" are subdivided by century, not by nationality. A university professor, on the other hand, will usually be categorized according to where he or she teaches.
- The equivalent to the English Wikipedia's featured articles and good articles are exzellente Artikel (excellent articles) and lesenswerte Artikel (good articles; literally: articles "worth reading").
- In 2005, there was a discussion and poll resulting in the decision to phase out the use of local image uploads and to exclusively use Wikimedia Commons for images and other media in the future. The attempt to implement this lasted for about a year and the German "Upload file" page displayed a large pointer to Commons in this time, but since December 2006, there is again a local image upload page without any pointer to Wikimedia Commons. This was prompted by the deletion of images on Commons that are acceptable according to German Wikipedia policies.
- Starting in December 2004, German Wikipedians pioneered Persondata ("Personendaten"), a special format for metadata about persons (name, birth date and place, etc.), introduced in the English Wikipedia in December 2005. Initially, the main aim of this system was to aid the search features of the DVD edition of the German Wikipedia (see below). During its introduction in January 2005, Personendaten were added to some 30,000 biographical articles on the live Wikipedia, partly aided by a semi-automated tool. The template is currently deprecated and is no longer used on any pages.
- Like the Signpost in the English Wikipedia, the German Wikipedia also has its own internal newspaper, the Kurier. However, the Kurier is laid out on a single page and is not issued weekly but is continually updated by interested Wikipedians, with older articles being archived.
- In German, Wikipedia is pronounced [ˌvɪkiˈpeːdia].
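The kind of record the Persondata format described above captures can be modelled very simply. The sketch below is purely illustrative: the real Personendaten is a wikitext template, not Python, and the person shown here is fictitious.

```python
# Rough model of a Personendaten record. The German template uses
# fields such as NAME, ALTERNATIVNAMEN, KURZBESCHREIBUNG (short
# description), GEBURTSDATUM/GEBURTSORT (birth date/place) and
# STERBEDATUM/STERBEORT (death date/place). The dict representation
# and the person are illustrative stand-ins only.

record = {
    "NAME": "Mustermann, Max",
    "KURZBESCHREIBUNG": "deutscher Beispielautor",
    "GEBURTSDATUM": "1. Januar 1970",
    "GEBURTSORT": "Berlin",
}

# A DVD-style search feature could build a surname index from
# such structured records instead of scanning article prose:
index = {}
surname = record["NAME"].split(",")[0].strip()
index.setdefault(surname, []).append(record)

assert list(index) == ["Mustermann"]
```

Structured metadata like this is what made offline search in the DVD editions feasible without full-text indexing of every article.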
At Wikimania 2006, Jimbo Wales announced that the German Wikipedia would soon institute a system of "stable article versions" on a trial basis. The system went live in May 2008. Certain users are now able to mark article versions as "reviewed", indicating that the text contains no obvious vandalism. A note in the top right corner of the screen indicates to the reader whether or not the present version of an article has already been reviewed, and provides access to the most recent reviewed version or a more current, unreviewed version as needed.
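The selection rule behind stable versions can be sketched in a few lines. This is not the actual MediaWiki FlaggedRevs implementation, only a simplified model of the behaviour described above, with hypothetical data structures:

```python
# Simplified sketch of the "stable versions" (flagged revisions) idea:
# readers see the newest reviewed revision by default, while the
# newest unreviewed draft remains accessible. Illustrative only.

def version_to_show(revisions, prefer_reviewed=True):
    """revisions: list of dicts ordered oldest -> newest,
    each with an 'id' and a 'reviewed' flag."""
    if prefer_reviewed:
        reviewed = [r for r in revisions if r["reviewed"]]
        if reviewed:
            return reviewed[-1]      # newest reviewed revision
    return revisions[-1]             # fall back to the newest draft

history = [
    {"id": 101, "reviewed": True},
    {"id": 102, "reviewed": True},
    {"id": 103, "reviewed": False},  # pending review
]

assert version_to_show(history)["id"] == 102
assert version_to_show(history, prefer_reviewed=False)["id"] == 103
```

The design choice is that an unreviewed edit never silently reaches anonymous readers, at the cost of a visible lag between editing and publication.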
The first real-life meetup of Wikipedians took place in October 2003 in Munich. As a result of this meeting, regular round-table meetups (called "Wikipedia-Stammtisch") established themselves at various places in Germany, Austria and Switzerland. The round tables have become an important aspect of collegial exchange within the German-speaking community.
Each spring and autumn, the German Wikipedia organizes a writing contest, where a community-elected jury rates nominated articles. Prizes are sponsored by individual community members and companies. The first contest was held in October 2004 - the article Kloster Lehnin (Lehnin Abbey) was selected as the winner from 44 nominated articles. The second contest, held in March 2005, saw 52 contributions, and the third, in September 2005, 70. A trial to extend the contest to an international level met with limited success, with only the Dutch, English and Japanese Wikipedias participating.
For the March 2006 writing contest, the 150 nominated articles were split into three sections: history and society (56 nominations), arts and humanities (36), and science (46). The article on the brown bear (German: Braunbär) won, and 27 of the nominated articles reached featured status within a few weeks after the contest. In March 2007, the sixth contest was held, with the winner being the article on the Hague Convention for the Protection of Cultural Property in the Event of Armed Conflict (German: Haager Konvention zum Schutz von Kulturgut bei bewaffneten Konflikten).
In 2006, the University of Göttingen hosted the first Wikipedia Academy. The Academy was intended to familiarize the academic world with Wikimedia projects. In 2007, the second such meeting took place, organized in conjunction with the Akademie der Wissenschaften und der Literatur (Academy of Science and Literature) in Mainz as part of the German Jahr der Geisteswissenschaften (Year of the Humanities), which was decreed by the German Federal Ministry for Education and Research. A third meeting was organized on 20–21 June 2008 in Berlin, during the Jahr der Mathematik (Year of Mathematics); the meeting was hosted by the Berlin-Brandenburg Academy of Sciences and Humanities.
German Wikipedians have since organised the Foto-Workshop meeting of photographers, with participants from 10 countries.
Contacts with Brockhaus
In April 2004, a complete list of article titles from the leading German encyclopedia Brockhaus was uploaded to the German Wikipedia, in an apparent attempt to facilitate the creation of still missing articles. A representative of Brockhaus asked for and obtained the deletion of what was believed to be a copyright infringement. As a result of the developing email conversation, a group of five Wikipedians visited the "new media" group of Brockhaus in Mannheim on 1 July 2004. The friendly meeting saw a lively discussion of the differing approaches to writing an encyclopedia; it became clear that Brockhaus had closely observed Wikipedia for quite some time.
On 23 November 2006, the number of articles at German Wikipedia reached 500,000. As a response to this and to the perception that quality control was not keeping up with article creation, it was proposed to declare 10 December 2006 "Article-free Sunday", a day where participants voluntarily agree to post no new articles, but instead focus on improving existing ones. It was also proposed to declare 10 December "Counter-action to Article-free Sunday", a day where participants create missing articles and improve existing ones.
Subsidies from the German government
In June 2007, a project on renewable resources (WikiProjekt Nachwachsende Rohstoffe) was initiated, with the goal of writing and improving articles on the topic. The project ran for three years and was subsidized by the German Ministry of Agriculture with approximately €80,000 a year. It was organised and managed by the private company nova-Institut GmbH. nova-Institut and Wikimedia Deutschland e. V. funded the project with approximately €60,000 a year in addition, so the budget was approximately €420,000 in total.
These funds were mainly used to organise the project and also to search for experts in the field who have not contributed to Wikipedia yet. Nova may also have paid expense allowances to authors.
According to a 2013 Oxford University study, the article on Croatia was the most disputed article on the German Wikipedia. The top ten most disputed articles at the time also included Adolf Hitler, Scientology, and Rudolf Steiner. One of the largest disputes over a single sentence, however, concerned the Donauturm in Vienna. While the observation tower shares some architectural features with the Fernsehturm Stuttgart, it was never intended for TV broadcasting. The German Wikipedia had a rather lengthy (about 600,000 characters) discussion about the suitable title and categories, as authors (often Austrian) rejected the description of the Donauturm as a "TV tower". Der Spiegel's coverage of the issue quoted a participant as saying, "On good days, Wikipedia is better than any TV soap".
Reviews and research
In September 2004, the respected computer magazine c't compared the German Wikipedia with the Brockhaus Multimedia encyclopedia and the German edition of Microsoft's Encarta. On a scale from 0 to 5, Wikipedia "won" with a total score of 3.4. A few weeks later, the weekly newspaper Die Zeit also compared content from Wikipedia with other reference works and found that Wikipedia only has to "share its lead position in the field of natural science." The DVD version of spring 2005 received a rather negative review in July 2005 by Björn Hoffmann, a product manager at Bibliographisches Institut & F.A. Brockhaus.
In November 2005, the OpenUsability project, in cooperation with the Berlin-based Relevantive AG, conducted a usability test of the German Wikipedia. The study focused on finding information and included a set of recommendations for changing the MediaWiki interface. In February 2006, the OpenUsability project conducted a second test, which focused on the experience of new editors. The reports were published in English.
A second test by c't in February 2007 used 150 search terms, of which 56 were closely evaluated, to compare four digital encyclopedias: Bertelsmann Enzyklopädie 2007, Brockhaus Multimedial premium 2007, Encarta 2007 Enzyklopädie and Wikipedia. With respect to concerns about the reliability of Wikipedia, it concluded: "We did not find more errors in the texts of the free encyclopedia than in those of its commercial competitors".
In December 2007, the German magazine Stern published the results of a comparison between the German Wikipedia and the online version of the 15-volume edition of the Brockhaus Enzyklopädie. The test was commissioned to a research institute (the Cologne-based WIND GmbH), whose analysts assessed 50 articles from each encyclopedia (covering politics, business, sports, science, culture, entertainment, geography, medicine, history and religion) on four criteria (accuracy, completeness, timeliness and clarity), and judged Wikipedia articles to be more accurate on average (1.6 on a scale from 1 to 6, versus 2.3 for Brockhaus, with lower scores being better). Wikipedia's coverage was also found to be more complete and up to date; however, Brockhaus was judged to be more clearly written, while several Wikipedia articles were criticized as being too complicated for non-experts, and many as too lengthy.
CD November 2004
In November 2004, Directmedia Publishing GmbH started distributing a CD-ROM containing a German Wikipedia snapshot. Some 40,000 CDs were sent to registered customers of Directmedia. The price was 3 euros per CD.
The display and search software used for the project, Digibib, had been developed by Directmedia Publishing for earlier publications; it ran on Windows and Mac OS X (and now also on Linux). The Wikipedia articles had to be converted to the XML format used by Digibib.
To produce the CD, a dump of the live Wikipedia had been copied to a separate server, where a team of 70 Wikipedians vetted the material, deleting nonsense articles and obvious copyright violations. Questionable articles were added to a special list, to be reviewed later. The final CD contained 132,000 articles and 1,200 images.
The ISO image was distributed for free via eMule and BitTorrent. In December, the CHIP computer magazine placed the Wikipedia data on the DVD that it distributes with every issue. The Wikipedia materials are published under GFDL while the Digibib software may only be copied for non-commercial use, except the Linux version which is GPLed.
CD/DVD April 2005
A new release of Wikipedia content was published by Directmedia on 6 April 2005. This package consisted of a 2.7 GB DVD and a separate bootable CDROM (running a version of Linux with Firefox). The CDROM did not contain all the data, but was included to accommodate users without DVD-drives. The DVD used Directmedia's Digibib software and article format; everything could be installed to a hard drive. In addition, the DVD contained an HTML tree, as well as Wikipedia articles formatted for use with PDAs (specifically, the Mobipocket and TomeRaider formats).
The production of the DVD motivated the Personendaten project (see above).
The vetting process was similar to the one for the CD described above and took place on a separate MediaWiki server. The process took about a week and involved 33 Wikipedians, communicating on IRC. To prevent duplication of work, editors would protect every article that they had reviewed; links to protected articles were shown in green. Lists of potential spammed or vandalized articles had been produced ahead of time with SQL queries. Unacceptable articles were simply deleted on the spot. While the XML articles for the earlier CD version had been produced from HTML, this time a script was used to convert Wiki markup directly to the Digibib format. The final DVD contained about 205,000 articles, with every article linking to a list of contributors.
Directmedia sold 30,000 DVDs, at €9.90 each. This price included 16% taxes and a one-euro donation to Wikimedia Deutschland; production costs were about €2. The DVD image can also be downloaded for free.
DVD/book December 2005
The next edition of Wikipedia content was issued in December 2005 by the publisher Zenodot Verlagsgesellschaft mbH, a sister company of Directmedia. A 139-page book explaining Wikipedia, its history and policies was accompanied by a 7.5 GB DVD containing 300,000 articles and 100,000 images. The book with DVD is sold for €9.90; both are also available for free download.
The vetting process for this version was different and did not involve human intervention. A "white list" of trusted Wikipedians was assembled, the last 10 days of every article's history were examined, and the last version edited by a white-listed Wikipedian was chosen for the DVD. If no such version existed, the last version older than 10 days was used. Articles nominated for cleanup or deletion were not used.
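The automated vetting rule just described can be sketched as follows. The actual scripts used for the DVD are not public; the field names and data here are hypothetical, and the sketch only illustrates the stated selection logic:

```python
from datetime import datetime, timedelta

# Sketch of the December 2005 DVD vetting rule: within the last 10
# days of an article's history, pick the most recent version edited
# by a white-listed Wikipedian; if none exists, fall back to the
# last version older than 10 days. Illustrative only.

def pick_dvd_version(revisions, whitelist, now):
    """revisions: oldest -> newest, each with 'author' and 'timestamp'."""
    cutoff = now - timedelta(days=10)
    recent = [r for r in revisions if r["timestamp"] >= cutoff]
    for rev in reversed(recent):            # newest first
        if rev["author"] in whitelist:
            return rev                      # trusted recent revision
    older = [r for r in revisions if r["timestamp"] < cutoff]
    return older[-1] if older else None     # last version older than 10 days

now = datetime(2005, 12, 1)
history = [
    {"author": "alice",   "timestamp": datetime(2005, 11, 1)},
    {"author": "bob",     "timestamp": datetime(2005, 11, 25)},
    {"author": "mallory", "timestamp": datetime(2005, 11, 30)},
]

assert pick_dvd_version(history, {"bob"}, now)["author"] == "bob"
assert pick_dvd_version(history, set(), now)["author"] == "alice"
```

The appeal of this rule is that it needs no human review at publication time: recent, possibly vandalized edits are excluded unless a trusted editor has touched the article since.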
DVD December 2006/2007 and 2007/2008
The December 2006/2007 and 2007/2008 edition can be downloaded from dvd.wikimedia.org.
The December 2005 book about Wikipedia was the first in a series titled Wikipress. These books, published by Zenodot, consisted of a collection of Wikipedia articles about a common topic, selected and edited by so-called "Wikipeditors" who may receive compensation from Directmedia. The books were assembled on a separate server from those used for the regular German Wikipedia pages. Every Wikipress book was accompanied by an "edit card", a post card that readers could send in to edit the book's contents. Wikipress books about the Nobel Peace Prize, bicycles, Antarctica, the solar system, and Hip hop, amongst others, were released, and other books on topics as diverse as Whales, Conspiracy theories, Manga, Astrophysics, and the Red Cross were in the works. Due to lack of interest, the project was ended after a few books.
100 volume Wikipedia
The publisher Zenodot announced in January 2006 that it intended to publish the complete German Wikipedia in print: 100 volumes of 800 pages each, starting with the letter A in October 2006, followed by two volumes each month thereafter, ending with Z in 2010. The project, code-named WP 1.0, was to be supported by 25 editors employed by Zenodot as well as a scientific advisory board. Changes made to articles before publication would also be available for incorporation into the online Wikipedia.
In March 2006, Zenodot organized a "community day" to meet with Wikipedians and discuss the project. Groups of Wikipedians had already begun to polish articles with titles Aa-Af in selected topics. In late March it was announced that the project was put on hold and no books would be published in 2006; the reason given was that community support was lacking.
On 22 April 2008, the publisher Bertelsmann announced that it planned to publish a one-volume encyclopedia in September using content from the German-language Wikipedia. The volume was planned to include abbreviated entries for the 50,000 most commonly used search terms of the prior two years. The book is priced at 19.95 euros, with one euro from every sale going to the German chapter of the Wikimedia Foundation. It was released on 15 September 2008 in hardcover, containing 992 pages and many illustrations.
Legal issues and controversies
The German Wikipedia has been criticized for the deletion of articles because they seem "irrelevant" to those who deleted them, even though they seem expedient, meaningful, well written and extensive enough to other people. These discussions received press coverage in computer magazines as well as in mainstream media.
While everyone is free to use Wikipedia content, there are certain conditions, such as attribution, a copy of the license text and no non-free derivative works (see Creative Commons licenses and GNU Free Documentation License for details).
In March 2005, the German news magazine Der Spiegel published an article on the Rwandan Genocide in its online edition; it was a copy of Wikipedia's article. The article was taken down soon after and replaced with an apology.
In April 2005, the encyclopedia Brockhaus published an article about the new pope Josef Ratzinger in its online edition. Because of its close similarity to Wikipedia's article, suspicion arose right away that the Brockhaus article might have been plagiarism. The article was removed soon after but Brockhaus did not apologize or admit guilt (see the Wikipedia Signpost's coverage.)
Large-scale copyright infringement (2003–2005)
In mid-November 2005, it was discovered that an anonymous user had entered hundreds of articles from older encyclopedias that had been published in the 1970s and 1980s in East Germany. The articles were mainly on topics in philosophy and related areas. The user had started in December 2003.
A press release was issued and numerous editors started to remove the copyright protected materials. This was made difficult by the fact that the old encyclopedias were not online and not easily available from many West German libraries, and that the user had used numerous different IP addresses. The Directmedia DVD had to be updated.
Bertrand Meyer article hoax
On 28 December 2005, the article on computer scientist Bertrand Meyer (creator of the Eiffel programming language) was edited by an anonymous user, falsely reporting that Meyer had died four days earlier. The hoax was reported five days later by the Heise News Ticker and the article was immediately corrected. Major news media in Germany and Switzerland picked up the story, creating the German Wikipedia's version of the Seigenthaler incident. Meyer himself went on to publish a positive evaluation of Wikipedia, concluding, "The system succumbed to one of its potential flaws, and quickly healed itself. This doesn't affect the big picture. Just like those about me, rumors about Wikipedia's downfall have been grossly exaggerated."
In 2006, Wikimedia Deutschland, the German chapter of the US Wikimedia Foundation, was drawn into a legal dispute between the parents of the deceased German computer hacker Boris "Tron" Floricic and the Foundation. The parents did not wish Floricic's real name to be publicly mentioned, and in December 2005 they obtained a preliminary injunction in a Berlin court against the American Wikimedia Foundation, requiring removal of Floricic's name from Wikipedia. The name was not removed. On 19 January 2006 they obtained a second injunction, this time against Wikimedia Deutschland, prohibiting the address www.wikipedia.de (which is under the control of Wikimedia Deutschland) from redirecting to the German Wikipedia at de.wikipedia.org (which is controlled by the Wikimedia Foundation and hosts the actual encyclopedia) as long as Wikipedia mentioned Floricic's name. Wikimedia Deutschland complied and replaced the redirect with a note explaining the situation, but without mentioning the Tron case specifically. The German Wikipedia remained accessible through de.wikipedia.org during this time. One day later, Wikimedia Deutschland achieved a suspension of the injunction, and linked from the note at www.wikipedia.de to the German Wikipedia. On 9 February, the court invalidated the injunction, ruling that neither the rights of the deceased nor the rights of the parents were affected by publishing the name; this ruling was upheld on appeal, decided 12 May.
Lutz Heilmann controversy
In November 2008, Lutz Heilmann, a member of the German parliament, obtained a preliminary injunction against Wikimedia Deutschland e. V., forbidding the forwarding of www.wikipedia.de to de.wikipedia.org. According to Focus Online, Heilmann objected to claims that he had not completed his university degree, and that he had participated in a business venture involving pornography. The report also suggests that the Wikipedia article had been repeatedly altered in line with his claims by an anonymous user operating within the Bundestag building, but Heilmann denied having been involved in an edit war. Wikimedia Deutschland displayed a page explaining the situation. Heilmann announced on 16 November that he would drop the legal proceedings against Wikimedia Deutschland, regretting that many uninvolved users of the encyclopedia had been affected.
Superprotect and Media Viewer controversy
Reiss Engelhorn Museum
Parodies and forks
Ulrich Fuchs, a longtime contributor to the German Wikipedia, produced a fork known as Wikiweise in April 2005. It is ad-supported, uses its own software (but a similar wiki markup), admits only registered editors, and prominently displays the real names of every article's major contributors. It has since gone offline.
- Wikimedia list of Wikipedias and their statistics. Retrieved 12 April 2009.
- Jimmy Wales [Wikipedia-l] Alternative language Wikipedias, 16 March 2001
- Internet Archive's snapshots of German Wikipedia HomePage, 21 March 2001 12:10 (revision #9), and related revision history. Retrieved 4 November 2008.
- "Nupedia German-L Section" (6 April 2001), "Vergil" (16 April 2001),"Pylos" (17 April 2001).
- "Polymerase-Kettenreaktion" article on German Wiki showing edit dated May 2001
- Lih, p. 147.
- Statistics of German Wikipedia (English)
- Lih, p. 148.
- Erik Zachte (14 November 2011). "Wikimedia Traffic Analysis Report - Wikipedia Page Views Per Country - Trends". Wikimedia Statistics. Retrieved 19 January 2011.
- Wikipedia statistics
- ""Relevanzkriterien" (notability guidelines)". 13 January 2009. Retrieved 13 January 2009.
- ""Bildrechte" (image rights)". 10 January 2009. Retrieved 13 January 2009.
- Erik Möller: Wikipedia page protection report Wikitech-l mailing list, 14 September 2005
- "Longest page protections, September 2005 - Meta". Meta.wikimedia.org. Retrieved 20 December 2010.
- Tim 'avatar' Bartel: Entsperrung der Wikipedia WikiDE-l mailing list, 28 May 2008 07:45:55 GMT
- Wikipedia:Stub, de:Wikipedia:Stub
- German Wikipedia:Poll about the abolishment of the stub template, 28 December 2005
- German Wikipedia: Poll about uploading images exclusively in Wikimedia Commons
- German Wikipedia: Question regarding image upload and Wikimedia Commons
- Jakob Voss: Metadata with Personendaten and beyond (presentation at Wikimania 2005)
- German Wikipedia: Round tables
- International writing contest, March 2005.
- Writing contest (German)
- Exhibition "Fünf Jahre Wikipedia, exhibition charts and photos
- Wikipedia Academy web site (German)
- Report: Wikipedia meets Brockhaus
- nova-Institut (26 June 2007): Nachwachsende Rohstoffe in die Wikipedia! Press release. Retrieved 24 October 2007.
- Fachagentur Nachwachsende Rohstoffe (FNR) e. V. (October 2007): Projektbeschreibung: Nachwachsende Rohstoffe im Wikipedia-Online-Lexikon. Retrieved 24 October 2007.
- nova-Institut: Nachwachsende Rohstoffe in die Wikipedia! Project page. Retrieved 31 July 2008.
- Gross, Doug. "Wiki wars: The 10 most controversial Wikipedia pages." (Archive) CNN. July 24, 2013. Retrieved on July 26, 2013.
- Mathieu von Rohr: "Im Innern des Weltwissens", Der Spiegel (Internet section), 19 April 2010
- Experts report : passion outclasses flashy sex appeal (Wikipedia) 4 October 2004
- Usability test: Finding Information in the German Wikipedia - Test Results November 2005
- Usability Test Results Available: "Editing in Wikipedia", 7 March 2006
- Dorothee Wiegand: "Entdeckungsreise. Digitale Enzyklopädien erklären die Welt". c't 6/2007, 5 March 2007, p. 136-145. Original quote: "Wir haben in den Texten der freien Enzyklopädie nicht mehr Fehler gefunden als in denen der kommerziellen Konkurrenz"
- Wikipedia: Wissen für alle. Stern 50/2007, 6 December 2007, pp. 30-44
- Wikipedia schlägt Brockhaus Stern online, 5 December 2007 (summary of the test, German)
- K.C. Jones: German Wikipedia Outranks Traditional Encyclopedia's Online Version. InformationWeek, 7 December 2007
- Heise newsticker: Neue Wikipedia-DVD im Handel und zum Download, 9 December 2005 (German)
- "Hauptseite" (in German). Wikipress.de. Retrieved 20 December 2010.
- Heise newsticker: Wikipedia wird noch nicht gedruckt, 24 March 2006 (German)
- "Wikipedia to go book-based in Germany", Agence France-Presse, 23 April 2008
- "Further deletions of Linux distributions in Wikipedia proposed" article in a Linux computer magazine 10 April 2007
- "Wikipedia: The fight for relevance" in c't computer magazine 30 October 2009
- "Wikipedia: Dispute about arbitrary deletions" article in a Windows computer magazine 19 October 2009
- "Wikipedia: World champion in deleting?" on gulli.com news 27 December 2009
- Article about the planned deletion of a Wikipedia article about a TV celebrity on the news portal of T-Home (biggest ISP in Germany) 16 July 2010
- German Spiegel Copied Wikipedia 9 March 2005
- Report on copyright infringement
- Defense and illustration of Wikipedia, by Bertrand Meyer, January 2006
- "Spiegel online article (German)". Spiegel.de. 10 January 2006. Retrieved 20 December 2010.
- Wikipedia: Superprotect-Streit spitzt sich zu
- Wikimedia-Stiftung zwingt deutschen Nutzern Mediaviewer auf
- "Letter to Wikimedia Foundation: Superprotect and Media Viewer". meta.wikimedia.org. 2014-08-19. Retrieved 2016-08-21.
- "Superprotect". meta.wikimedia.org. 2015. Retrieved 2016-08-21.
- Michelle Paulson; Geoff Brigham (2015-11-23). "Wikimedia Foundation, Wikimedia Deutschland urge Reiss Engelhorn Museum to reconsider suit over public domain works of art". Wikimedia blog. Retrieved 2016-08-21.
- Benjamin Sutton (2015-12-08). "Museum Sues Wikimedia for Hosting Copyrighted Photos of Its Public-Domain Artworks". Hyperallergic. Retrieved 2016-08-21.
- Chip.de: Brockhaus für Kamele - Wikipedia-Parodien, 11 March 2008 (German)
- Lih, Andrew. The Wikipedia Revolution: How a Bunch of Nobodies Created the World's Greatest Encyclopedia. Hyperion, New York City. 2009. First Edition. ISBN 978-1-4013-0371-6 (alkaline paper).
|German edition of Wikipedia, the free encyclopedia|
|Wikimedia Commons has media related to German Wikipedia.|
- German Wikipedia mobile version (German)
- Meta: German Wikipedia
- Wikimedia Deutschland (German)
- Publication efforts on CD/DVD (German):
- WP 1.0, publication in book form (German): | <urn:uuid:fe4a2573-89a2-40ab-b6b1-8390bf9c7796> | CC-MAIN-2017-17 | https://en.wikipedia.org/wiki/Wikiweise | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917127681.84/warc/CC-MAIN-20170423031207-00488-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.923862 | 7,426 | 2.890625 | 3 |
Posted: September 2012
(Updated; originally posted July 2004)
|Richard F. Lockey, MD
Professor of Medicine, Pediatrics and Public Health
Director of the Division of Allergy and Immunology
Joy McCann Culverhouse Chair of Allergy and Immunology
University of South Florida College of Medicine and the James A. Haley Veterans' Hospital
Tampa, Florida, USA
This disease summary is provided for informational purposes for physicians only.
Anaphylaxis is an acute, potentially life-threatening hypersensitivity reaction, involving the release of mediators from mast cells, basophils and recruited inflammatory cells. Anaphylaxis is defined by a number of signs and symptoms, alone or in combination, which occur within minutes, or up to a few hours, after exposure to a provoking agent. It can be mild, moderate to severe, or severe. Most cases are mild but any anaphylaxis has the potential to become life-threatening.
Anaphylaxis develops rapidly, usually reaching peak severity within 5 to 30 minutes, and may, rarely, last for several days.
The term anaphylaxis is often reserved to describe immunological, especially IgE-mediated reactions. A second term, non-allergic anaphylaxis, describes clinically identical reactions that are not immunologically mediated. The clinical diagnosis and management are, however, identical.
The initial manifestation of anaphylaxis may be loss of consciousness. Patients often describe "a sense of doom." In this instance, the symptoms and signs of anaphylaxis are isolated to one organ system, but since anaphylaxis is a systemic event, in the vast majority of subjects two or more systems are involved.
Gastro-intestinal: Abdominal pain, hyperperistalsis with faecal urgency or incontinence, nausea, vomiting, diarrhea.
Oral: Pruritus of lips, tongue and palate, edema of lips and tongue.
Respiratory: Upper airway obstruction from angioedema of the tongue, oropharynx or larynx; bronchospasm, chest tightness, cough, wheezing; rhinitis, sneezing, congestion, rhinorrhea.
Cutaneous: Diffuse erythema, flushing, urticaria, pruritus, angioedema.
Cardiovascular: Faintness, hypotension, arrhythmias, hypovolemic shock, syncope, chest pain.
Ocular: Periorbital edema, erythema, conjunctival erythema, tearing.
Genito-urinary: Uterine cramps, urinary urgency or incontinence.
Severe initial symptoms develop rapidly, reaching peak severity within 3-30 minutes. There may occasionally be a quiescent period of 1–8 hours before the development of a second reaction (a biphasic response). Protracted anaphylaxis may occur, with symptoms persisting for days. Death may occur within minutes but rarely has been reported to occur days to weeks after the initial anaphylactic event.
1. IgE-Mediated Reactions
In theory, any food glycoprotein is capable of causing an anaphylactic reaction. Foods most frequently implicated in anaphylaxis are:
- Peanut (a legume)
- Tree nuts (walnut, hazel nut/filbert, cashew, pistachio nut, Brazil nut, pine nut, almond)
- Shellfish (shrimp, crab, lobster, oyster, scallops)
- Milk (cow, goat)
- Chicken eggs
- Seeds (cotton seed, sesame, mustard)
- Fruits, vegetables
Food sensitivity can be so severe that a systemic reaction can occur to particle inhalation, such as the odors of cooked fish or the opening of a package of peanuts.
A severe allergy to pollen, for example, ragweed, grass or tree pollen, can indicate that an individual may be susceptible to anaphylaxis or to the oral allergy syndrome (pollen/food syndrome) (manifested primarily by severe oropharyngeal itching, with or without facial angioedema) caused by eating certain plant-derived foods. This is due to homologous allergens found between pollens and foods. The main allergen of all grasses is profilin, which is a pan-allergen, found in many plants, pollens and fruits, and grass-sensitive individuals can sometimes react to many plant-derived foods.
Typical aero-allergen food cross-reactivities are:
- Birch pollen: apple, raw potato, carrot, celery and hazelnut
- Mugwort pollen: celery, apple, peanut and kiwifruit
- Ragweed pollen: melons (watermelon, cantaloupe, honeydew) and banana
- Latex: banana, avocado, kiwifruit, chestnut and papaya
Food-associated, exercise-induced anaphylaxis may occur when individuals exercise within 2-4 hours after ingesting a specific food. The individual is, however, able to exercise without symptoms, as long as the incriminated food is not consumed before exercise. The patient is likewise able to ingest the incriminated food with impunity as long as no exercise occurs for several hours after eating the food.
Antibiotics and Other Drugs
PENICILLIN, CEPHALOSPORIN, AND SULPHONAMIDE ANTIBIOTICS
Penicillin is the most common cause of anaphylaxis, for whatever reason, not just drug-induced cases. Penicillin and other antibiotics are haptens, molecules that are too small to elicit immune responses but which may bind to serum proteins and produce IgE antibodies. Serious reactions to penicillin occur about twice as frequently following intramuscular or intravenous administration versus oral administration, but oral penicillin administration may also induce anaphylaxis. Neither atopy, nor a genetic history of allergic rhinitis, asthma or eczema, is a risk factor for the development of penicillin allergy.
Muscle relaxants, for example, suxamethonium, alcuronium, vecuronium, pancuronium and atracurium, which are widely used in general anesthesia, account for 70-80% of all allergic reactions occurring during general anesthesia. Reactions are caused by an immediate IgE-mediated hypersensitivity reaction.
Hymenoptera venoms (bee, wasp, yellow-jacket, hornet, fire ant) contain enzymes such as phospholipases and hyaluronidases and other proteins which can elicit an IgE antibody response.
Latex is a milky sap produced by the rubber tree Hevea brasiliensis. Latex-related allergic reactions can complicate medical procedures, for example, internal examinations, surgery, and catheterization. Medical and dental staff may develop occupational allergy through use of latex gloves.
Examples of miscellaneous agents which cause anaphylaxis are insulin, seminal proteins, and horse-derived antitoxins, the latter of which are used to neutralize venom in snake bites. Individuals who have IgA deficiency may become sensitized to the IgA provided in blood products. Those selective IgA deficient subjects (1:500 of the general population) can develop anaphylaxis when given blood products, because of their anti-IgA antibodies (probably IgE-anti-IgA).
Elective Medical Procedures
2. Cytoxic and Immune Complex – Complement-Mediated Reactions
Whole Blood, Serum, Plasma, Fractionated Serum Products, Immunoglobulins, Dextran
Anaphylactic responses have been observed after the administration of whole blood or its products, including serum, plasma, fractionated serum products and immunoglobulins. One of the mechanisms responsible for these reactions is the formation of antigen-antibody reactions on the red blood cell surface or from immune complexes resulting in the activation of complement. The active by-products generated by complement activation (anaphylatoxins C3a, C4a and C5a) cause mast cell (and basophil) degranulation, mediator release and generation, and anaphylaxis. In addition, complement products may directly induce vascular permeability and contract smooth muscle.
Cytotoxic reactions can also cause anaphylaxis, via complement activation. Antibodies (IgG and IgM) against red blood cells, as occurs in a mismatched blood transfusion reaction, activate complement. This reaction causes agglutination and lysis of red blood cells and perturbation of mast cells resulting in anaphylaxis.
3. Non-immunologic Mast Cell Activators
Radiocontrast Media, Low-molecular Weight Chemicals
Mast cells may degranulate when exposed to low-molecular-weight chemicals. Hyperosmolar iodinated contrast media may cause mast cell degranulation by activation of the complement and coagulation systems. These reactions can also occur, but much less commonly, with the newer contrast media agents.
Narcotics are mast cell activators capable of causing elevated plasma histamine levels and non-allergic anaphylaxis. They are most commonly observed by anesthesiologists.
4. Modulators of Arachidonic Acid Metabolism
Aspirin, Ibuprofen, Indomethacin and other Non-steroidal Anti-inflammatory Agents (NSAIDs)
IgE antibodies against aspirin and other NSAIDs have not been identified. Affected individuals tolerate choline or sodium salicylates, substances closely structurally related to aspirin but different in that they lack the acetyl group.
5. Sulfiting Agents
Sodium and Potassium Sulfites, Bisulfites, Metabisulfites, and Gaseous Sulfur Dioxides
These preservatives are added to foods and drinks to prevent discoloration and are also used as preservatives in some medications. Sulfites are converted in the acid environment of the stomach to SO2 and H2SO3, which are then inhaled. They can produce asthma and non-allergic hypersensitivity reactions in susceptible individuals.
6. Idiopathic Causes
Exercise alone can cause anaphylaxis as can food-induced anaphylaxis, Exercise-induced anaphylaxis can occur during the pollinating season of plants to which the individual is allergic.
Catamenial anaphylaxis is a syndrome of hypersensitivity induced by endogenous progesterone secretion. Patients may exhibit a cyclic pattern of attacks during the premenstrual part of the cycle.
Flushing, tachycardia, angioedema, upper airway obstruction, urticaria and other signs and symptoms of anaphylaxis can occur without a recognizable cause. Diagnosis is based primarily on the history and an exhaustive search for causative factors. Serum tryptase and urinary histamine levels may be useful, in particular, to rule out mastocytosis.
A = Airway
Ensure and establish a patent airway, if necessary, by repositioning the head and neck, endotracheal intubation or emergency cricothyroidotomy. Place the patient in a supine position and elevate the lower extremities. Patients in severe respiratory distress may be more comfortable in the sitting position.
B = Breathing
Assess adequacy of ventilation and provide the patient with sufficient oxygen to maintain adequate mentation and an oxygen saturation of at least 91% as determined by pulse oximetry. Treat bronchospasm as necessary. Equipment for endotracheal intubation should be available for immediate use in event of respiratory failure and is indicated for poor mentation, respiratory failure, or stridor not responding immediately to supplemental oxygen and epinephrine.
C = Circulation
Minimize or eliminate continued exposure to causative agent by discontinuing the infusion, as with radio-contrast media, or by placing a venous tourniquet proximal to the site of the injection or insect sting. Assess adequacy of perfusion by taking the pulse rate, blood pressure, mentation and capillary refill time. Establish I.V. access with large bore (16- to 18-gauge) catheter and administer an isotonic solution such as normal saline. A second I.V. may be established as necessary. If a vasopressor, such as dopamine becomes necessary, the patient requires immediate transfer to an intensive care setting.
The same ABC mnemonic can be used for the pharmacologic management of anaphylaxis:
A = Adrenalin = epinephrine
Epinephrine is the drug of choice for anaphylaxis. It stimulates both the beta-and alpha-adrenergic receptors and inhibits further mediator release from mast cells and basophils. Animal and human data indicate that platelet activating factor (PAF) mediates life-threatening manifestations of anaphylaxis. The early use of epinephrine in vitro inhibits the release of PAF in a time-dependent manner, giving support to the use of this medication with the first signs and symptoms of anaphylaxis. The usual dosage of epinephrine for adults is 0.3-0.5 mg of a 1:1000 w/v solution given intramuscularly every 10-20 minutes or as necessary. The dose for children is 0.01 mg/kg to a maximum of 0.3 mg intramuscularly every 5-30 minutes as necessary. Lower doses, e.g., 0.1 mg to 0.2 mg administered intramuscularly as necessary, are usually adequate to treat mild anaphylaxis, often associated with skin testing or immunotherapy. Epinephrine should be given early in the course of the reaction and the dose titrated to the clinical response. For severe hypotension, 1 cc of a 1:10,000 w/v dilution of epinephrine given slowly intravenously is indicated. The patient's response determines the rate of infusion.
B = Benadryl (diphenhydramine)
Antihistamines are not useful for the initial management of anaphylaxis but may be helpful once the patient stabilizes. Diphenhydramine may be administered intravenously, intramuscularly or orally. Cimetidine offers the theoretical benefit of reducing both histamine-induced cardiac arrhythmias, which are mediated via H2 receptors, and anaphylaxis-associated vasodilation, mediated by H1 and H2 receptors. Cimetidine, up to 300 mg every 6 to 8 hours, may be administered orally or slowly I.V. Doses must be adjusted for children.
C = Corticosteroids
Corticosteroids do not benefit acute anaphylaxis but may prevent relapse or protracted anaphylaxis. Hydrocortisone (100 to 200 mg) or its equivalent can be administered every 6 to 8 hours for the first 24 hours. Doses must be adjusted for children.
Prevention of Anaphylaxis
Agents causing anaphylaxis should be identified when possible and avoided. Patients should be instructed how to minimize exposure.
Beta-adrenergic antagonists, including those used to treat glaucoma, may exacerbate anaphylaxis and should be avoided, where possible. Angiotensin-converting enzyme (ACE) inhibitors may also increase susceptibility to anaphylaxis, particularly with insect venom-induced anaphylaxis.
Epinephrine is the drug of choice to treat anaphylaxis. Individuals at high risk for anaphylaxis should be issued epinephrine syringes for self-administration and instructed in their use. Intramuscular injection is recommended since it results in prompt elevation of plasma concentrations and has prompt physiological effects. Subcutaneous injection results in delayed epinephrine absorption. Patients must be alerted to the clinical signs of impending anaphylaxis and the need to carry epinephrine syringes at all times and to use it at the earliest onset of symptoms. Unused syringes should be replaced immediately when they reach their use-by/expiration date, as epinephrine content and bioavailability of the drug decreases in proportion to the number of months past the expiration date.
Pre-treatment with glucocorticosteroids and H1 and H2 antihistamines is recommended to prevent or reduce the severity of a reaction where it is medically necessary to administer an agent known to cause anaphylaxis, for example, radio-contrast media.
Other important patient instructions include:
a) Personalized written anaphylaxis emergency action plan
b) Medical Identification (e.g., bracelet, wallet card)
c) Medical record electronic flag or chart sticker, and emphasis on the importance of follow-up investigations by an allergy/immunology specialist
The differential diagnosis for anaphylaxis includes:
- respiratory difficulty or circulatory collapse, including vasovagal reactions
- globus hystericus
- status asthmaticus
- foreign body aspiration
- pulmonary embolism
- myocardial infarction
- carcinoid syndrome
- hereditary angioedema
- overdose of medication
- cold urticaria
- cholinergic urticaria
- sulfite or monosodium glutamate ingestion
Upper airway obstruction, bronchospasm, abdominal cramps, pruritus, urticaria and angioedema are absent in vasovagal reactions. Pallor, syncope, diaphoresis and nausea usually indicate a vaso-vagal reaction but may occur in either condition.
If a reaction occurs during a medical procedure, it is important to consider a possible reaction to latex or medication used for or during anesthesia.
The prevalence of food-induced anaphylaxis varies with the dietary habits of a region. A United States survey reported an annual occurrence of 10.8 cases per 100,000 person years. By extrapolating this data to the entire population of the USA, this suggests approximately 29,000 food-anaphylactic episodes each year, resulting in approximately 2,000 hospitalizations and 150 deaths. Similar findings have been reported in the United Kingdom and France. Food allergy is reported to cause over one-half of all severe anaphylactic episodes in Italian children treated in emergency departments and for one-third to one-half of anaphylaxis cases treated in emergency departments in North America, Europe and Australia. It is thought to be less common in non-Westernized countries. A study in Denmark reported a prevalence of 3.2 cases of food anaphylaxis per 100,000 inhabitants per year with a fatality rate of approximately 5%.
Risk factors for food anaphylaxis include asthma and previous allergic reactions to the causative food.
Food-associated, exercise-induced anaphylaxis
This is more common in females, and over 60% of cases occur in individuals less than 30 years of age. Patients sometimes have a history of reacting to the food when younger and usually have positive skin tests to the food that provokes their anaphylaxis.
Anaphylaxis caused by radio-contrast media
Mild adverse reactions are experienced by approximately 5% of subjects receiving radio-contrast media. U.S. figures suggest that severe systemic reactions occur in 1:1000 exposures with death in 1:10,000-40,000 exposures.
One percent to 5% of courses of penicillin therapy are complicated by systemic hypersensitivity reactions. Point two percent is associated with anaphylactic shock, and mortality occurs in 0.02% of the cases. If a patient has a strongly positive skin test or circulating IgE antibody to penicillin, there is a 50-60% risk of an anaphylactic reaction upon subsequent challenge. In patients with a case history suggestive of penicillin allergy and negative skin tests, the risk of anaphylaxis is very low. Atopy and mold sensitivity are not risk factors for the development of penicillin allergy.
Anaphylaxis to muscle relaxants occurs in approximately 1 in 4,500 of general anesthesia, with fatalities occurring in 6% of these cases. Risk factors are female sex (80% of cases). Atopy is not a risk factor; previous drug allergy may be a risk factor. In patients with a history of anaphylaxis, skin tests to different muscle relaxants may be helpful. If the test result is positive, the muscle relaxant should not be used. A negative result provides evidence that the muscle relaxant can probably be administered safely.
Insect venom anaphylaxis
Studies from Australia, France, Switzerland and the USA suggest incidences of systemic reactions to Hymenoptera stings ranging from 0.4% to 4% of the population. In the USA, at least 40 deaths occur each year as a result of Hymenoptera stings.
Allergy / immunology specialists play a uniquely important role to confirm the etiology of anaphylaxis, prepare the patient for self administration of epinephrine, educate the patient and/or family about allergen avoidance, and rule out any underlying condition, such as mastocytosis, which can predispose a patient to develop anaphylaxis. Referral to an allergist / immunologist is indicated for patients with this disease. | <urn:uuid:8bc373af-23df-4481-9179-9dd6bda0db69> | CC-MAIN-2017-17 | http://www.worldallergy.org/professional/allergic_diseases_center/anaphylaxis/anaphylaxissynopsis.php | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917119225.38/warc/CC-MAIN-20170423031159-00482-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.893954 | 4,500 | 3.21875 | 3 |
Human beings have not one but two ears, positioned at roughly equal height on either side of the head. This well-known fact underlies many of the outstanding features of human auditory perception. Identifying faint signals in a noisy environment, understanding a specific talker in a group of people all speaking at the same time, enjoying the "acoustics" of a concert hall, and perceiving "stereo" with our hi-fi systems at home would hardly be possible with only one ear. In their effort to understand and to exploit the basic principles of human binaural hearing, engineers have laid the groundwork for a new branch of technology - now known as Binaural Technology. Binaural Technology offers a great number of applications capable of having a noticeable impact on society. One of these applications is the representation of the auditory sensory domain in so-called Virtual-Reality (VR) systems. To this end, physiologically adequate treatment of the prominent sensory modalities, including the auditory one, is mandatory.
Technically speaking, auditory representation in VR systems is implemented by means of a sound system. However, in contrast to conventional sound systems, the auditory representation is non-stationary and interactive, i.e., among other things, dependent on listeners' actions. This implies, for the auditory representation, that very complex, physiologically-adequate sound signals have to be delivered to the auditory systems of the listeners, namely to their eardrums.
One possible technical way to accomplish this is via transducers positioned at the entrances to the ear canals (headphones). Headphones are fixed to the head and thus move simultaneously with it. Consequently, head and body movements do not modify the coupling between transducers and ear canals (so-called head-related approach to auditory representation) - in contrast to the case where the transducers, e.g. loudspeakers, are positioned away from the head and where the head and body can move relative to the sound sources (room-related approach). In any real acoustical situation the transmission paths from the sources to the ear-drums will vary as a result of the listeners' movements in relation to the sound sources - the actual variation being dependent on the directional characteristics of both the sound sources and the external ears (skull, pinna, torso) and on the reflections and reverberation present.
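The listener-dependence described above can be made concrete with a small geometric sketch. Assuming a simple 2-D coordinate convention (x pointing forward, y to the listener's left; the function name and convention are illustrative, not from the text), a head-tracked binaural renderer must recompute each source's azimuth relative to the head whenever the listener or the source moves:

```python
import math

def relative_azimuth(listener_pos, listener_yaw_deg, source_pos):
    """Azimuth of a sound source relative to the listener's viewing
    direction, in degrees (positive toward the listener's left).
    This is the quantity an interactive binaural renderer must update
    with every head or source movement."""
    dx = source_pos[0] - listener_pos[0]
    dy = source_pos[1] - listener_pos[1]
    world_az = math.degrees(math.atan2(dy, dx))   # azimuth in world coordinates
    rel = world_az - listener_yaw_deg             # subtract head orientation
    return (rel + 180.0) % 360.0 - 180.0          # wrap into [-180, 180)

# A source directly to the left reads as 90 degrees; after the listener
# turns 90 degrees to the left, the same source lies straight ahead (0).
```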
Virtual-reality systems must take account of all these specific variations. Only if this task is performed with sufficient sophistication will the listeners accept their auditory percepts as real - and develop the required sense of presence and immersion.
1. Auditory simulation for achieving a full sense of presence
2. Binaural technology in virtual auditory environments
3. Architecture of an auditory VR system
As the title of a well-known journal that deals with virtual environments is 'Presence', the concept of presence is central to research in this field. Presence has been defined in a number of different ways, but all of these definitions include the feeling of 'being in the environment'. The following paragraphs summarise why auditory stimulation is of great importance for achieving the feeling of being in a virtual environment.
Starting from a report in which Ramsdell (1978) discusses the psychological experience of hearing loss, Gilkey and Weisenberger (1995) describe the implications of sudden deafness for a number of patients. These patients described the world as 'dead' and lacking movement, and through terms such as 'connected', 'part of' and 'coupling' they outlined the psychological effect that hearing has on the relationship between the observer and the environment. Ramsdell's interviews led him to divide auditory information into three different levels: the social level, the warning level and the primitive level.
The level of information typically thought of when considering auditory functioning is the social level: comprehending language, listening to music, etc. This social or symbolic level is the venue for communication with other persons. Sounds interpreted on the second level, the warning level, are, for example, the ringing of a telephone, the wail of a siren, etc.
On the third level, the primitive level, sounds serve neither as warning nor as symbol. They form the auditory background that surrounds us in everyday life. These sounds can be caused by interaction with objects (e.g. writing on a keyboard, footsteps) or can consist of incidental sounds made by objects in the environment (e.g. the ticking of a clock). Ramsdell describes how these sounds maintain our feeling of being part of a living world, and he comes to the conclusion that the loss of sound on the primitive level is the major cause of the feelings of depression and loss reported by deaf patients.
The loss of the primitive level of hearing can thus be recognised as having a significant impact on the sense of presence for suddenly deafened persons. It is a straightforward step to expect an analogous effect from an incomplete or missing representation of the auditory background within a virtual environment.
Presence in Virtual Environments
In most virtual-reality applications the visual feedback is considered to be the most important component. Often the expenditure on equipment for the visual system is higher by a factor of 10, or even 100, than that for the auditory system. This makes it obvious that auditory feedback plays a minor role in current implementations of virtual environments. As described above, the influence of auditory stimulation on presence in a virtual world is often underestimated, even though it is more critical to the sense of presence than visual stimulation. On the one hand, when a person closes their eyes, the sense of presence is not significantly altered; the absence of visual stimulation is experienced routinely in everyday life whenever we close our eyes. On the other hand, the absence of auditory stimulation cannot be considered a normal situation because, as Ramsdell points out, humans have no 'earlids'.

Since the feeling of presence cannot be achieved through one sense alone, it is a straightforward approach to couple different senses in order to reach an optimal feeling of presence within a virtual reality. This aim can be achieved by the use of multimodal virtual-reality systems. Within the project SCATIS (ESPRIT basic research project #6358), for example, an auditory-tactile virtual-environment system was built in order to create presence in a virtual world through more than one sense and to carry out research in the field of multi-modal psychophysics. Further research is needed to gain more information about how stimulation of different senses should be combined in order to achieve a feeling of presence. Concerning auditory stimulation, research is needed to specify the importance of the auditory background compared to other forms of auditory stimulation, as well as how exactly and in how much detail the auditory background has to be modelled in order to achieve a convincing auditory experience.
At its periphery, the human auditory system consists of the two ears, which sample the variation in time of the sound pressure at two different positions in the environment. Spatial hearing is performed by evaluating monaural cues, which are the same for both ears, as well as binaural ones, which differ between the two eardrum signals. In general, the distance between a sound source and the two ears differs for sound sources outside the median plane. This is one reason for the interaural time, phase and level differences that the auditory system can evaluate for perceiving direction. These interaural cues are mainly used for azimuth perception (left or right), which is usually quite accurate (with a resolution down to about 1 degree). Interaural level and time differences alone, however, do not allow unambiguous spatial perception. Monaural cues are mainly used for perceiving elevation; these are amplifications and attenuations in the so-called directional (frequency) bands. In particular, the external ear (consisting of head, torso, shoulders and pinnae) has a decisive impact on the eardrum signals: diffraction effects that depend on the direction of incidence occur when sound waves impinge on the head.
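The interaural time difference mentioned above can be approximated, for illustration, with Woodworth's classical spherical-head formula, ITD = (a/c)(sin θ + θ). The head radius used below is a typical assumed value, not one given in the text:

```python
import math

def itd_woodworth(azimuth_deg, head_radius=0.0875, c=343.0):
    """Approximate interaural time difference in seconds for a rigid
    spherical head, via Woodworth's formula ITD = (a/c)(sin θ + θ).
    azimuth_deg: source azimuth in degrees (0 = straight ahead,
    90 = fully lateral). head_radius in metres, c = speed of sound in m/s.
    """
    theta = math.radians(azimuth_deg)
    return (head_radius / c) * (math.sin(theta) + theta)

# A fully lateral source (90 degrees) yields the maximum ITD of roughly
# 0.65 ms for an average head.
```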
Binaural technology is used in virtual auditory environments to perform spatial hearing. The basics of this technology will be described in this chapter.
Binaural-Technology Basics (Blauert & Lehnert, 1994)
At this point it makes sense to begin the technological discussion with the earliest, but still a very important, category of application in Binaural Technology, namely, "binaural recording and authentic auditory reproduction." Authentic auditory reproduction is achieved when listeners hear exactly the same in a reproduction situation as they would hear in an original sound field, the latter existing at a different time and/or location. As a working hypothesis, Binaural Technology begins with the assumption that listeners hear the same in a reproduction situation as in an original sound field when the signals at the two ear-drums are exactly the same during reproduction as in the original field. Technologically speaking, this goal is achieved by means of so-called artificial heads, which are replicas of natural heads in terms of acoustics, i.e., they provide two self-adjusting ear filters just as natural heads do. Applications based on authentic reproduction exploit the capability of Binaural Technology to archive the sound field in a perceptually authentic way, and to make it available for listening at will, e.g., in entertainment, education, instruction, scientific research, documentation, surveillance, and telemonitoring. It should be noted here that binaural recordings can be compared in direct sequence (e.g., by A/B comparison), which is often impossible in the original sound situations.
Since the sound-pressure signals at the two ear-drums are the physiologically-adequate input to the auditory system, they are furthermore considered to be the basis for auditory-adequate measurement and evaluation, both in a physical and/or auditory way. Consequently, there is a further category of applications, namely "binaural measurement and evaluation". In physical binaural measurement, physically based procedures are used, whereas in the auditory case human listeners serve as measuring and evaluating instruments. Current applications of binaural measurement and evaluation can be found in areas such as noise control, acoustic-environment design, sound-quality assessment (for example, in speech technology, architectural acoustics and product-sound design), and in specific measurements on telephone systems, headphones, personal hearing protectors, and hearing aids. For some applications scaled-up or scaled-down artificial heads are in use, for instance for evaluating architectural scale models.
Since artificial heads are basically just a specific way of implementing a set of linear filters, one may think of other ways of developing such filters, e.g., electronically. For many applications this adds additional degrees of freedom, as electronic filters can be controlled at will over a wide range of transfer characteristics. This idea leads to yet another category of applications: "binaural simulation and displays." There are many current applications in binaural simulation and displays, and their number will certainly further increase in the future. The following list provides examples: binaural mixing, binaural room simulation, advanced sound effects (for example, for computer games), provision of auditory spatial-orientation cues (e.g., in the cockpit or for the blind), auditory display of complex data, and auditory representation in teleconference, telepresence and teleoperator systems.
Fig.1, showing Binaural-Technology equipment in an order of increasing complexity, is meant to illustrate some of the ideas discussed above. The most basic equipment is obviously the one shown in panel (a). The signals at the two ears of a subject are picked up by (probe) microphones in a subject's ear canal, then recorded, and later played back to the same subject after appropriate equalization. Equalization is necessary to correct linear distortions, induced by the microphones, the recorder and the headphones, so that the signals in the subject's ear canals during the playback correspond exactly to those in the pick-up situation. Equipment of this kind is adequate for personalized binaural recordings. Since a subject's own ears are used for the recording, maximum authenticity can be achieved.
Artificial heads (panel b) have practical advantages over real heads for most applications; for one thing, they allow auditory real-time monitoring of a different location. One has to realize, however, that artificial heads are usually cast or designed from a typical or representative subject. Their directional characteristics will thus, in general, deviate from those of an individual listener. This fact can lead to a significant decrease in perceptual authenticity. For example, errors such as sound coloration or front-back confusion may appear. Individual adjustment is only partly possible, namely, by specifically equalizing the headphones for each subject. To this end, the equalizer may be split into two components, a head equalizer (1) and a headphone equalizer (2). The interface between the two allows some freedom of choice. Typically, it is defined in such a way that the artificial head features a flat frequency response either for frontal sound incidence (free-field correction) or in a diffuse sound field (diffuse-field correction). The headphones must be equalized accordingly. It is clear that individual adjustment of the complete system, beyond a specific direction of sound incidence, is impossible in principle, unless the directional characteristics of the artificial head and the listener's head happen to be identical.
Panel (c) depicts the set-up for applications where the signals to the two ears of the listener are to be measured, evaluated and/or manipulated. Signal-processing devices are provided to work on the recorded signals. Although real-time processing is not necessary for many applications, real-time playback is mandatory. The modified and/or unmodified signals can be monitored either by a signal analyzer or by binaural listening. The most complex equipment in this context is represented in panel (d). Here the input signals no longer stem from a listener's ears or from an artificial head, but have been recorded or even generated without the participation of ears or ear replicas. For instance, anechoic recordings via conventional studio microphones may be used. The linear distortions which human ears superimpose on the impinging sound waves, depending on their direction of incidence and wave-front curvature, are generated electronically via a so-called ear-filter bank (electronic head). To be able to assign the adequate head transfer function to each incoming signal component, the system needs data about the geometry of the sound field. In a typical application, e.g. architectural-acoustics planning, the system contains a sound-field simulation based on the data of the room geometry, the absorption features of the materials involved, and the positions of the sound sources and their directional characteristics. The output of the sound-field modelling is fed into the electronic head, thus producing so-called binaural impulse responses. Subsequent convolution of these impulse responses with anechoic signals generates binaural signals like the ones the subject would observe in a corresponding real room. The complete method is often referred to as binaural room simulation.
To give subjects the impression of being immersed in a sound field, it is important that a sense of spatial constancy is provided perceptually. In other words, when the subjects move their heads around, the perceived auditory world should nevertheless maintain its spatial position. To this end, the simulation system needs to know the head position in order to be able to control the binaural impulse responses adequately. Head position sensors (trackers) have therefore to be provided. It is at this point that interactivity has to be introduced into the system - and the transition to the kinds of system which are referred to as Virtual-Reality systems, takes place.
Binaural room simulation (Strauss & Blauert, 1995)
As has already been mentioned above, the two ears represent the input ports of the human auditory system. Therefore, to place a subject into a virtual auditory environment it is necessary to present signals at both eardrums that are similar to those that would be present in a corresponding real environment. Obviously it is easiest to use headphones as an auditory display, because the binaural signals to be presented to the eardrums can be given directly to the headphone's transducer terminals after an adequate equalisation of the headphone transfer function. Disturbing crosstalk between the two audio channels cannot occur.
The tasks required to create a virtual auditory environment are described in the following.
Auralizing a single sound source is comparatively easy. Given the positions of the sound source and the subject's head, the distance and the direction of incidence can be calculated. Auralization is performed by convolving an anechoic sound signal with the corresponding head-related impulse responses (HRIRs) in real-time. The overall gain is adjusted according to the distance between the sound source and the listener. Absorption of sound in the air over long distances can be modelled either by an appropriate overall gain reduction or by a frequency-dependent gain reduction. Complex directivity characteristics of the sound source can also be implemented using appropriate prefiltering as a function of frequency and direction of emission. Monopole synthesis and spherical harmonic synthesis have been successfully examined for the purpose of efficiently storing directivity data (Giron, 1993).
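As a minimal sketch of this convolution step, assuming the HRIR pair for the source's direction of incidence has already been looked up (the 3-tap responses and the simple 1/r distance law below are purely illustrative, not measured data):

```python
import numpy as np

def auralize_source(anechoic, hrir_left, hrir_right, distance, ref_distance=1.0):
    """Render a mono anechoic signal as a binaural pair.

    The signal is convolved with the head-related impulse responses
    (HRIRs) for the source's direction of incidence; an overall 1/r
    gain models the level drop with distance beyond ref_distance.
    """
    gain = ref_distance / max(distance, ref_distance)
    left = gain * np.convolve(anechoic, hrir_left)
    right = gain * np.convolve(anechoic, hrir_right)
    return left, right

# Toy example: a single click rendered with dummy 3-tap HRIRs.
click = np.array([1.0, 0.0, 0.0])
hl = np.array([0.9, 0.3, 0.1])   # hypothetical left-ear HRIR
hr = np.array([0.5, 0.2, 0.05])  # hypothetical right-ear HRIR
L, R = auralize_source(click, hl, hr, distance=2.0)
```

In a real system the convolution would run block-wise in real time and the HRIR pair would be re-selected whenever the relative direction changes.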
The model described so far does not take into account any impact of the surrounding environment upon the listener's auditory perception. Yet many conclusions about the environment can be drawn from auditory perception alone. For example, the interior of a church sounds completely different from a small living room, independent of the type of signal by which the room is excited. The total absence of reflections, for example in an anechoic chamber, can even be an unpleasant sensation for people who are not used to such an environment. Furthermore, it is believed that reflection patterns are important cues for proper distance perception. The impact of reflective environments on the perceived sound can be modelled with the help of binaural room simulation algorithms (Lehnert, 1992). These algorithms make use of geometric acoustics: provided that the wavelength of the sound is short compared to the linear geometric dimensions of the surfaces in the room, and long compared to the roughness and bendings of these surfaces, the sound waves propagate almost in straight lines, in the form of sound rays that are reflected from surfaces according to the optical reflection law. Though this assumption does not hold for all perceivable wavelengths of sound, it has been shown (Pompetzki, 1993) that reasonable results can be achieved using geometric acoustics. Of course, wave effects like diffraction and diffuse reflections cannot be modelled exactly with geometric acoustics. That would require the acoustic wave equation for the sound pressure to be solved numerically with respect to the complex boundary conditions given by the reflective surfaces; because a great deal of computational effort is needed, especially for high frequencies, this method is currently not suitable for real-time applications.
Two appropriate methods are presently known for the modelling: the mirror-image method (Allen & Berkley 1979, Borish 1984) and different kinds of ray-tracing (Krokstadt et al. 1968). Although ray-tracing is initially not suitable for the computation of secondary sound sources, it has been shown (Lehnert 1993) that the results of a ray-tracing procedure can be post-processed so that they are identical with those of the mirror-image method.
According to the so-called mirror image model (Allen & Berkley, 1979), primary sound sources are mirrored at all geometric surfaces of the environment to obtain virtual secondary sound sources. The algorithm can be applied recursively to these secondary sources to obtain secondary sound sources of higher order. However, not all of the secondary sound sources found with this procedure are acoustically relevant, because the surfaces are generally not infinitely extended. Therefore most of the calculated reflections lie outside the boundaries of the corresponding walls, or the sound path is blocked by other walls. Much effort in terms of calculation power is necessary to filter out the relevant sound sources by performing visibility investigations. An alternative method for finding secondary sound sources is the ray-tracing algorithm. It is comparable to corresponding rendering algorithms applied in the field of computer graphics. Rays are sent out from each sound source in different directions and their propagation in the room is traced. Rays hitting surfaces of the environment are reflected according to the reflection law. Diffuse reflections can be modelled by adding random components to the reflection angle. All rays that hit a detection sphere around the receiver are acoustically relevant for the simulation. The positions of secondary sound sources can easily be found by backtracking these rays. However, when putting the ray-tracing algorithm into practice, missing or multiple detections have to be dealt with, owing to the finite number of rays and the finite size of the detection sphere around the receiver. The roles of the sound source and the receiver can also be reversed; this requires less effort if more than one sound source is present. If, in theory, calculation time were unlimited, both algorithms would find the same distribution of virtual sound sources.
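The geometric core of the mirror-image method can be sketched in a few lines. The following illustration mirrors a source position across each wall plane and recurses up to a given order; the visibility and obstruction tests described above, which prune the acoustically irrelevant images, are deliberately omitted, and the shoe-box scene at the end is a made-up example:

```python
import numpy as np

def mirror_source(source, wall_point, wall_normal):
    """Reflect a source position across the (infinite) plane through
    wall_point with unit normal wall_normal."""
    s = np.asarray(source, dtype=float)
    n = np.asarray(wall_normal, dtype=float)
    d = np.dot(s - np.asarray(wall_point, dtype=float), n)
    return s - 2.0 * d * n

def image_sources(source, walls, order):
    """Recursively generate image sources up to the given order.

    walls is a list of (point_on_wall, unit_normal) pairs.  Each entry
    of the result is (position, reflection_order); visibility tests
    are omitted in this sketch.
    """
    images = []
    def recurse(pos, depth, last_wall):
        if depth == order:
            return
        for i, (q, n) in enumerate(walls):
            if i == last_wall:          # skip immediate re-reflection
                continue
            img = mirror_source(pos, q, n)
            images.append((img, depth + 1))
            recurse(img, depth + 1, i)
    recurse(np.asarray(source, dtype=float), 0, None)
    return images

# Example: two parallel walls at x = 0 and x = 4, source at x = 1.
walls = [(np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])),
         (np.array([4.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]))]
imgs = image_sources(np.array([1.0, 0.0, 0.0]), walls, order=2)
```

Note how the recursion already shows the exponential growth in the number of candidate images with reflection order that makes the visibility filtering expensive.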
The ray-tracing algorithm is usually more efficient for finding high order reflections while the mirror image model is preferable when only low order reflections are required. A sequence of indices is assigned to each secondary sound source specifying the walls where the sound has been reflected on its way to the listener. Optionally, reflection angles for each reflection can be stored for simulating angle dependent reflection characteristics. The complete characteristics of the environment as a result of this sound field modelling process can therefore be represented by a spatial map of secondary sound sources (Lehnert and Blauert 1989) in a reflection free space.
A detailed description of the algorithms, their variations, their compatibility, and their performance with respect to room acoustical problems can be found in Lehnert & Blauert (1992a), Lehnert (1993).
A relatively large amount of literature is available on the application of computer models to room acoustical problems. A good overview of the current state of the art and a representative collection of contemporary papers may be found in two special issues of Applied Acoustics: Vol. 36, Nos. 3 and 4 (1992), "Special Issue on Auditory Virtual Environment and Telepresence", and Vol. 38, Nos. 2-4 (1993), "Special Issue on Computer Modelling and Auralization of Sound Fields in Rooms".

Each sound source can be auralized with the help of the head-related transfer functions (HRTFs) as described in an earlier section. Directivity characteristics of the source and wall reflection characteristics can be modelled using prefilters. Wall reflectance data can be found in the literature or obtained from direct measurements, usually carried out in reverberation chambers. The result of this primary filter process is a binaural impulse response for each virtual sound source. The binaural room impulse response is obtained by summing the binaural impulse responses of all virtual sound sources. It can be interpreted as the two sound pressure signals at the eardrums that would be measured if the sound source emitted an ideal impulse. The room impulse response completely describes the transfer characteristics of the environment between a sound source and the listener. For auralization purposes, anechoic audio signals have to be convolved with the binaural room impulse response. This convolution requires an enormous amount of calculation power that cannot reasonably be delivered in real-time, so simplifications are necessary. Exact auralization can be restricted to first and second order reflections, because reflections of higher order are hard to perceive separately.
These reflections can be modelled with conventional reverberation algorithms which only consider statistical properties of the late reflections. Direct sound and first/second order reflections can be auralized in real-time by distributing the corresponding virtual sound sources to several digital signal processors (DSPs) connected in a network. The system for auralizing one virtual sound source, shown in figure 4, is called an auralization unit. The delay is proportional to the distance between the source and the receiver and represents the time that the sound needs to reach the listener. The three prefilters used in SCATIS allow up to second order sound sources with directivity characteristics, or even up to third order reflections without directivity characteristics of the sound source, to be simulated. All the auralization units (32 in SCATIS) can work in parallel on several processors, and their results have to be added together to obtain the complete auralized signal.
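The core of one such auralization unit can be illustrated as a propagation delay plus a distance gain, with the outputs of all units summed into the complete signal. This is a deliberately stripped-down sketch: the directivity and reflectance prefilters and the HRIR stage of the real SCATIS units are omitted, and the sample rate and distances below are arbitrary example values:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, at room temperature

def auralization_unit(signal, distance, fs=44100):
    """One auralization unit: propagation delay plus 1/r distance gain.

    The delay corresponds to the travel time of sound from the
    (virtual) source to the listener; prefilters for directivity,
    wall reflectance and the HRIRs are omitted in this sketch.
    """
    delay = int(round(distance / SPEED_OF_SOUND * fs))
    gain = 1.0 / max(distance, 1.0)
    return np.concatenate([np.zeros(delay), gain * signal])

def mix_units(signals):
    """Sum the outputs of all auralization units into one signal."""
    n = max(len(s) for s in signals)
    out = np.zeros(n)
    for s in signals:
        out[:len(s)] += s
    return out

# Direct sound at 3.43 m and one reflection path of 6.86 m,
# rendered at a (toy) 1 kHz sample rate: 10 and 20 samples of delay.
direct = auralization_unit(np.array([1.0]), distance=3.43, fs=1000)
refl = auralization_unit(np.array([1.0]), distance=6.86, fs=1000)
total = mix_units([direct, refl])
```

Running the units independently, as here, is what makes the parallel DSP distribution described above straightforward.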
Architecture (Strauss & Blauert, 1995)
Figure 5 shows the general structure of an auditory virtual environment system. Its afferent pathway can easily be integrated into a complete virtual environment system with parallel afferent pathways, for example for visual or tactile rendering.
The head position and orientation are measured frequently by head-tracking hardware, usually based on modulated electromagnetic fields. Measured position and orientation data are passed on to the head renderer process, which buffers the data for immediate access by the central controller. Simple transformations, like offset additions, may be applied to the data in the head renderer process.
The central controller implements the protocol dynamics of a virtual environment application. It accepts events from all efferent renderers, evaluates them and reacts by sending appropriate events resulting from the evaluation to the afferent renderers. For example, in a multimodal application with an integrated hand gesture renderer, the central controller could advise the auditory renderer to move sound sources represented by objects that can be grasped in the virtual world. Auditory renderer and auralization hardware have already been described in the previous section.
Several kinds of events are relevant to the sound-field model and have to be treated in different manners (Blauert & Lehnert, 1994).
Since the sound field model can be assumed to be the most time-consuming part, any kind of optimisation is desirable. The splitting of movement into translation and rotation is somewhat arbitrary, since the head-position tracker will always deliver all six degrees of freedom at once. Looking at the tracker data, one will find that the head always makes very small translations even if the subject tries not to move. It is certainly not reasonable to recalculate the sound field model for these small variations.
Two steps can be used to optimise the behaviour. Firstly, a translation threshold value can be specified; translations below that threshold can simply be ignored. Secondly, an approximation for small translations, as they might occur for a seated subject, can be given: the positions of all the secondary sources are determined by the position of the sound source and the geometry of the reflecting surfaces. The dependency on the position of the receiver is only indirect and is included in the so-called visibility and obstruction tests. These tests do not influence the position of the secondary sources but only determine whether a given source is valid or not. For small translations it can be assumed that the validity does not change and the secondary sources will remain unchanged. Consequently, the translations can be modelled by simply recalculating the delays and the directions of incidence for the already given set of secondary sources. A similar approximation is possible for small translations of the sound source. Assuming that the visibility of the current set of secondary sources is not influenced by the translations, the new positions of the secondary sources can be calculated according to the trajectory of the primary source, a procedure which is relatively easy to perform. For larger translations these approximations are no longer valid and a complete re-execution of the sound field model is required.

If recalculating the sound field is so time consuming that the smoothness of the simulation is severely disturbed, it might be useful to execute the model in several frames. To this end, only a part of the sound field is modelled during one frame, and for the remaining part the approximation method is used. Updates are performed as soon as new results become available. Using this method, the display frame rate can be higher than the execution rate of the sound field model. The resulting error is that position updates of reflections are delayed by one or more frames. The perceptual effects of these errors can be kept small by scheduling the updates according to the perceptual relevance of a specified sound field component.
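The translation-handling policy described above can be condensed into a small decision function. The threshold values below are hypothetical, not taken from SCATIS; the point is only the three-way split between ignoring jitter, updating delays and directions for an unchanged set of secondary sources, and re-running the full model:

```python
import numpy as np

TRANSLATION_THRESHOLD = 0.05  # metres; hypothetical jitter threshold

def classify_head_translation(previous, current,
                              threshold=TRANSLATION_THRESHOLD):
    """Decide how the renderer should react to a head translation.

    - below the threshold: ignore (tracker jitter, tiny movements)
    - modest movement: keep the current secondary sources and only
      recompute their delays and directions of incidence
    - large movement: re-execute the full sound field model
    """
    shift = np.linalg.norm(np.asarray(current, float)
                           - np.asarray(previous, float))
    if shift < threshold:
        return "ignore"
    elif shift < 10 * threshold:   # hypothetical bound for "small"
        return "update-delays-and-directions"
    else:
        return "rerun-sound-field-model"
```

In a running system this classification would be applied once per tracker frame, before any sound-field work is scheduled.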
The task of an auditory renderer is to compute the spatial, temporal, and spectral properties of the sound field at the subject's position for a given virtual world model, taking into account all physical parameters that influence the sound field in physical reality. To this end, a sound field model needs to be established whose results can be auralized by the front end. A suitable form of describing these results is the spatial map of secondary sound sources (Lehnert & Blauert 1989): the sound field is modelled by a cloud of discrete sound sources that surrounds the listener in a free sound field. Recently (Møller 1993), an attempt has been made to define a standard format for the description of such a spatial map of secondary sound sources.
Most of the present sound-field-modelling systems are based on the methods of geometric acoustics. Two appropriate methods, the mirror-image method and ray-tracing have been described in an earlier chapter.
However, no attempt has been made up to now to apply these complex models to non-trivial environments in real-time systems. Real-time sound field modelling was first discussed by Lehnert (Lehnert & Blauert 1991, Lehnert 1992b).
In contrast to the digital generation of reverberation, which has a long history (e.g. Schroeder 1962), no experience with real-time sound field modelling is available. The key problem in applying detailed sound field models in virtual reality is, of course, the required computation time.
Both ray-tracing and the image method have been compared frequently concerning their efficiency (Hunt 1964, Stephenson 1988, Vorländer 1988). However, these results cannot easily be applied to the current problem, since only the early reflections have to be considered. Also, a wide variety of ray-tracing dialects exists, and the papers listed above did not deal with dialects suitable for the calculation of secondary sources.
Since the application in VR is very time critical, both methods need to be compared with respect to their achievable frame rate and their real-time performance. To this end, benchmarks have been performed by RUB using existing programs on a Sun 10/30 workstation with 10 MFlops of computational power. The test scenario was a room of moderate complexity with 24 surfaces, where eight first-order and 19 second-order reflections occurred for the specific sender-receiver configuration. This virtual environment may be considered as similar to a typical acoustical scenario for VREPAR. The resulting computation times, Tr, for the ray tracing could be approximated by
Tr = N · o · 0.1 ms, (1)
where N is the number of rays and o the maximum order up to which the rays are traced. For the mirror image method the computation time Tm can roughly be expressed as
Tm = 12(o-1) ms. (2)
If the reflections up to the second order are to be computed, the resulting rendering times are 12 ms for the image method and 60 ms for the ray-tracing method, where 300 rays were necessary to find all 8 reflections. However, if only 24 rays were traced, the resulting rendering time was less than 5 ms and still 16 out of 28 reflections were found. The results of these pilot experiments can be summarised as follows:
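Formulas (1) and (2) can be checked against the reported figures in a few lines. The per-ray, per-order constant in Eq. (1) is read here as 0.1 ms, which is the interpretation consistent with the reported 60 ms for 300 rays traced up to second order:

```python
def ray_tracing_time_ms(num_rays, max_order):
    """Ray-tracing computation time, Eq. (1): Tr = N * o * 0.1 ms."""
    return num_rays * max_order * 0.1

def mirror_image_time_ms(max_order):
    """Mirror-image computation time, Eq. (2): Tm = 12 * (o - 1) ms."""
    return 12.0 * (max_order - 1)

# Reproduce the reported figures for the 24-surface test room:
tm = mirror_image_time_ms(2)           # 12 ms for second-order reflections
tr_300 = ray_tracing_time_ms(300, 2)   # about 60 ms with 300 rays
tr_24 = ray_tracing_time_ms(24, 2)     # under 5 ms with only 24 rays
```

These numbers also make the trade-off concrete: the ray-tracing cost scales directly with the chosen number of rays, which is what makes it adjustable to a rendering-time budget.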
For the test scenario, the mirror image method showed better performance than the ray-tracing method. It is also safer, since it will always find all geometrically correct sound paths, whereas this cannot be guaranteed for the ray-tracing method, and it is difficult to predict the required number of rays. The ray-tracing method, on the other hand, has the advantage that even for very small rendering times reasonable results are still produced. It can easily be adapted to a given rendering time by adjusting the number of rays. This is very difficult for the mirror image method, since the algorithm is inherently recursive.
Ray-tracing will yield better results in more complex environments, since the dependency of the rendering time on the number of walls is linear and not exponential, as is the case for the mirror image method. Recently, an extension of the ray-tracing algorithm has been developed where the rendering time can be expected to be nearly independent of the number of walls. This extension facilitates the simulation of scenarios with nearly arbitrarily high degrees of complexity. It should be noted that this extension needs a certain amount of pre-processing of the room geometry, which probably cannot be done in real time. There is a considerable risk that this extension can only be applied to static scenarios, i.e. scenarios where the geometrical arrangement of sound-reflecting surfaces does not change.
As a conclusion, it seems difficult to give a clear preference to one of the two methods. There will most probably be scenarios where the mirror image method is superior, and others where the ray-tracing method offers better performance.
Please consult this chapter's references to get more information about these issues.
For any questions or requests, please contact firstname.lastname@example.org | <urn:uuid:1f5d9d6b-50a6-4e49-bcc2-505c05efa67a> | CC-MAIN-2017-17 | http://ww.cybertherapy.info/pages/sound.htm | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123484.45/warc/CC-MAIN-20170423031203-00369-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.926164 | 6,959 | 3.390625 | 3 |
First of all, land tenure rights and water rights are legal rights. As such they are capable of being asserted against the state and third parties in a court of law. In the case of a dispute, a right holder can legitimately expect a valid right to be upheld by a court and, where necessary, enforced through the machinery and coercive power of the state. Loss of, or damage to, a land right or a water right is prima facie subject to the payment of compensation, and the right to such compensation is enforceable in the courts.
Second, land tenure rights and water rights have the same basic purposes. From the perspective of society they permit the orderly allocation of valuable resources. From the perspective of the right holder, they confer the necessary security to invest in the resource or activities entailing its use. When rights are secure and tradable the holder may also be able to use them as collateral through a mortgage to raise credit.
Third, while most societies since ancient times have had their own rules concerning rights to use land and water, modern conceptions of formal land tenure rights and water rights are both overwhelmingly influenced by European conceptions of land and water as reflected through the two European legal traditions: the civil law tradition and the common law tradition.
The civil law tradition, sometimes described as the Romano-Germanic family, applies to most European countries (including the formerly socialist countries of Central and Eastern Europe), nearly all countries of Latin America, large parts of Africa, Indonesia and Japan, as well as to the countries of the former Soviet Union. The common law tradition emerged from the law of England. Examples of jurisdictions where the common law tradition applies include the United States, Canada, Australia, Singapore, New Zealand, India, Pakistan and the remaining African countries that are not in the civil law tradition, as well as other Commonwealth countries and a number of countries in the Middle East. The colonial period explains why European land and water law was received into the legal systems of so many countries, but it is not the only reason. A number of countries that were never occupied by the colonial powers looked to European and subsequently to North American law in revising or modernising their own legislation.
Having considered the status, purpose and background, just what are land tenure rights and water rights?
As regards the substance of land tenure rights, a definition of land tenure proposed by FAO seems a logical place to start. It is:
the relationship, whether legally or customarily defined between people, as individuals or groups, with respect to land.
The definition first suggests that land tenure rights are legal rights that define the relationship between people, whether as individuals or groups, and land. However, it then goes beyond formal legal rights to include customary rights. Thus an examination of land tenure rights that addresses only formal rights will risk omitting coverage of a large aspect of the concept of land tenure. However, rather than considering the nature of customary rights per se, this paper will examine their relationship with formal land tenure rights and formal land rights administration regimes.
Another definition notes that the expression land tenure is originally a legal term that means the right to hold land rather than the simple fact of holding it. The word tenure derives from the Latin term for holding or possession, and its use in this context derives from the English feudal period when, following their conquest of England in 1066, the Normans declared all previous land rights void and replaced them with grants from the new King. As such the concept applied to the terms on which land was held, in particular the rights and duties of the holder.
In practice, a combination of private land ownership and extensive individual rights has been a cornerstone of European, North American and Australian concepts of land tenure for the last two hundred years. As a result, the main focus of the European legal traditions has been on private property rights. While all legal systems envisage that some land may be owned by the state, or its equivalent, and many have special legal rules for such holdings, the primary focus of the European traditions has been individual private land ownership.
Both of the main European legal traditions distinguish between property rights relating to land and those that relate to other goods. Immovable property rights in the civil law tradition and real property rights (or realty) in the common law tradition that relate to land are distinguished from movable or personal property, sometimes described as chattels. As will be seen below, many ongoing reforms currently seek to promote the concept of private property rights, specifically rights of land ownership. But while important, ownership is not the only type of important land tenure right.
The other principal type of land holding envisaged under the European legal traditions is leasehold tenure, whereby land is rented by a tenant, someone other than the owner, for a specified period, usually in return for the payment of rent. The owner may be a private land owner or the state, and the rent payable can be either in money or in kind. While leases created in respect of certain types of land or premises may be subject to specific statutory provisions that restrict, for example, the level of rent that can be charged or the circumstances under which the lease can be determined or even extended, the parties to a lease are otherwise free to agree on the level of rent payable and indeed on the term of the lease, which may last from a few weeks to a thousand years. Such an agreement, the lease or lease agreement, will usually specify the use or uses to which the land will be put and will also specify the mutual obligations of the parties. Of course the parties to a lease must also comply with any prescribed legal formalities concerning the form or content of a lease.
Not all jurisdictions, however, permit the private ownership of land. For doctrinal reasons both socialist and nationalist states have often rejected the notion of private land ownership. For example, on achieving independence many African nations vested their land resources in the state or in the president. Land was nationalised in this way to assert the power of the state over traditional chiefs and to allow the appropriation of land for development in the belief that the state would be best placed to manage and distribute land in the interests of all. Under this kind of approach, individuals may typically be granted long term use rights, which usually do not attract the payment of rent, or long term leases which do. The legacy of this approach is still found in a number of African countries, such as Tanzania and Mozambique, where all land remains in state ownership, with individuals holding use rights.
While land reforms in many of the former socialist states of Eastern Europe and Central Asia have seen the introduction of freely tradable private land ownership rights, some states have taken a more cautious approach. Particularly as regards agricultural land, in some countries individuals are permitted only to hold use rights and, generally as a result of fears over land speculation and hoarding, there are restrictions on the sale or transfer of land in others even where ownership rights exist.
Even in countries that permit private land ownership, large areas of land may remain in state ownership. In some countries this is largely unproductive land; elsewhere it is, for example, forest land. Depending on the applicable legislation individuals may, or may not, be able to acquire legal rights to use such land. The amount of land under state ownership varies considerably from country to country.
Land tenure is, however, concerned with far more than ownership, lease and use rights. The unique and immovable nature of land means that it is frequently subject to numerous simultaneous uses, claims and legal rights. Take, for example, a single parcel of privately owned land. Part of this land may be subject to a lease. The remainder of the land may be subject to a legal charge or mortgage, whereby money is lent against the security provided by the land. An owner of an adjacent parcel of land may hold a right of way over part of the land parcel (an easement or servitude) or rights to use part of that parcel for a specific purpose, such as a right to graze livestock or to gather timber (a use right or right of usufruct). At the same the land parcel may benefit from a similar right over an adjacent parcel. Unknown to the owner, a third person - a squatter - may be in illegal and unauthorised occupation of a far corner of the land parcel. If nothing is done to remove him, after a certain period of time the squatter may eventually acquire legal rights over the land parcel, or part of it. Further questions may arise as to the relationship between the formal owner of the land parcel, often a male, and other family members. What interests, if any, do women and other members of the owners family hold in the land?
These kinds of relationships are all the subject of land tenure legislation, regulated either in the relevant code, in the civil law tradition, or in the other laws and on the basis of court decisions in the countries that follow the common law tradition. One way or another, such rules and principles have generally followed the spread of European concepts of land tenure.
Modern water rights, by contrast, are not subject to multiple subordinate rights, even though the water that is the subject such rights is quite likely to be subject to multiple uses. But what are water rights?
The first point to emphasise is that water rights, as the term is commonly understood, have nothing to do with the so-called right to water, a putative human right which is claimed to exist either as a right in itself or as an ancillary aspect of the right to food created by article 11 of the International Covenant on Economic, Social and Cultural Rights. Nor should water rights be confused with provisions contained in progressive constitutions such as the right of access to water found in that of South Africa.
Instead water rights are concerned with the removal (and subsequent use) of water from the natural environment or its use in that environment. In essence a water right is a legal right:
to abstract or divert and use a specified amount of water from a natural source;
to impound or store a specified quantity of water in a natural source behind a dam or other hydraulic structure; or
to use water in a natural source.
But water rights frequently go beyond an entitlement to a mere quantity of the simple chemical compound which is water: the flow of the water is also an important component of a water right.
A natural source includes a stream, river or lake, a reservoir created by the damming of a river, a swamp or pond as well as groundwater from a natural spring or a well. Historically, much of the focus of water law, and thus conceptions of water rights, has been based on rights to abstract and use water from streams and rivers, more specifically from the abundant and perennial streams rivers of Europe. This, as will be seen, has had, and indeed continues to have, implications for the export of European notions of water rights to countries with vastly different climatic and hydrological conditions. Furthermore, while groundwater is now commonly included in water rights regimes its particular features are such that it is considered separately below.
The main uses to which water abstracted on the basis of a water right is put are agriculture (for irrigation and livestock watering), industrial uses including its use as a coolant in thermal power stations, and for urban use including for domestic drinking water, household and commercial uses. Rights to impound water are either a precursor to abstraction (for example where water is held in a reservoir prior to its use for irrigation) or relate to the use of water for hydro-power generation.
As to their legal form, while in some jurisdictions (such as the western states of the United States of America in which the prior appropriation doctrine applies) water rights are still created by operation of law, water rights are mostly now created on the basis of a legal instrument issued by the state agency responsible for water resources management (the water administration). Such instruments are variously described in legislation as licences, permissions, authorisations, consents and concessions.
As to their substance, modern water rights are administrative use or usufructory rights. The question arises are they property rights? Arguably they are. The fact that they gain their existence from an administrative or regulatory procedure does not by itself preclude them from being property rights. After all, intellectual property rights in the form of trademarks and patents are usually acquired through an administrative procedure. A full discussion of this matter is beyond the scope of this paper. The key point to note is that although water rights are now generally created under public or administrative law on the basis of statutory provisions, they have, as will be seen, many but not all of the attributes of private property rights, such as land tenure rights. Indeed without such attributes, a water rights system simply would not be able to function effectively. Before looking at these features, in comparison with land tenure rights, several observations must first be made about water rights.
First of all, statute-based modern water rights are based on the concept of the hydrologic cycle, the notion that water in its natural state is in constant motion (see Box A). The effect is that water rights, in the sense described above, cannot be issued or regulated in isolation to other activities relating to watercourses.
Box A - The hydrologic cycle and the fugitive nature of water
With the exception of so-called fossil groundwater, described below, water is in a complex interlinked cycle of continuous movement. To start at the top of the cycle, as it were, water falls over both the sea and land as rain, hail or snow. Water evaporates from any wet surface including the sea which covers about 70% of the planet. As regards the water that falls over land, snow melt and rainwater runs off the surface into streams and rivers and thence down to the sea or some other terminus such as an inland lake.
Throughout this process some water enters into the soil where it is held as capillary water and returns directly to the atmosphere by way either of evaporation from the soil or through absorption by plants and then by transpiration. Finally some water percolates down into the geological strata that are aquifers. This is mostly by rainfall excess to plant requirements. Some of this water flows slowly to springs from where it rejoins the flow of surface water, or directly back to the sea. In this connection it should be noted that most ground water sources are linked with surface water bodies above them. Some however are not. Parts of so called confined aquifers pass beneath surface water bodies with which there is no direct physical link. Groundwater will be replenished provided the abstraction rate is not too fast.
While water may be temporarily removed from the cycle by human intervention - for the bottling of mineral water - sooner or later it is used and will flow as waste water back into a river, stream or sea.
The only real exception is so-called fossil ground water which is ancient water contained in aquifers that have no connection with surface waters. In its natural state such water is not in motion and as such is more similar to oil reserves: once extracted it will not be replaced. In some places particularly in arid regions a proportion of water contained in deeper aquifers can be thousands of years old, representing palaeo-recharge that occurred during past eras of wetter climates.
Thus a range of other activities that may have a negative impact on the quality and flow of water, and thus on existing water rights, are generally regulated either by the same water rights system, or in close co-ordination with it. These include:
the diversion, restriction or alteration of the flow of water within a water course;
the alteration of the bed, banks or characteristics of a water course, including the construction (and use) of structures on its banks and adjacent lands including those related to the use and management of water within a water course;
the extraction of gravel and other minerals from water courses and the lands adjacent to them;
the use of sewage water for irrigation;
fishing and aquaculture;
the discharge of wastes or pollutants to water courses.
The use of water, or the undertaking of any of these activities, without a formal right in circumstances where this is required, invariably constitutes an offence that may be punished in accordance with criminal or administrative law (depending on the jurisdiction). Activities that do not involve the abstraction of water from a water course, such as navigation or the impoundment of water for hydro-power generation and, in general, all in-stream uses of water resources (recreation, conservation of riverine and lacustrine wildlife habitats, fishing) are frequently described as non consumptive uses, in contrast to consumptive uses where water is abstracted and used off-stream, with limited or no return flows returned to the water course of origin. What is clear, though, is that a river may be simultaneously subject to numerous water and related rights much in the same way that an individual parcel is, even if the rights themselves are not formally affected.
In order to be able to establish this type of administrative rights regime, it is first necessary to bring a states water resources within the control of the state. This is done through a variety of different legal techniques varying from a declaration of state ownership, the inclusion of water within the public domain of the State, vesting water resources in the President of the State on behalf of its people, or bringing water resources under the superior use right of the state. Usually, such state ownership or control applies to all of the water resources within a states territory thus including both surface water, groundwater and even rainwater. In contrast to land tenure rights, notions of genuinely private ownership rights over water have therefore now largely gone from most jurisdictions.
Nevertheless it should be noted that water legislation typically provides a range of exemptions for activities that would otherwise require a water right. Indeed sometimes such entitlements are described in legislation in terms of rights. Typically, this is either done by reference to the type of activity, the volume of water used or a combination of both. For example, in Spain such uses are classified as common uses and include the use for drinking, bathing, and other domestic purposes as well as livestock watering. In Canada (Saskatchewan Province) the exemption derives from the size of the parcel to be watered, while regarding current water law reforms in England and Wales an exemption for abstractions of up to 20 cubic metres per day is proposed. There is no great theoretical justification for exempting such uses from formal water rights regimes. Instead, a value judgement is made by the legislature that takes account of the increased administrative and financial burden of including such uses within the formal framework, their relative value to individual users and their overall impact on the water resources balance.
This kind of de minimis exemption has no really direct equivalent in the context of land tenure regimes. The closest equivalent is probably a temporary licence or permission to cross or travel over state owned land, such as a highway or other public place. In any event such de minimis water rights are a curious type of residuary right. While they may be economically important to those who rely on them, it is hard to see how they provide much in the way of security. This issue is considered in more detail below.
Including the use of
state sanctioned force such as court bailiffs and ultimately fines and even
imprisonment for failure to comply with court orders.|
Some commentators have argued that the influence of technical assistance from experts from the common law tradition (primarily the United States) has led to the creation of a new hybrid tradition within the former socialist countries. Nevertheless, the form of post-socialist law is certainly that of the civil law tradition.
Some jurisdictions, such as Cameroon and South Africa, are influenced by both the civil law and common law traditions.
Land and water laws were not the only areas of European law that shape modern legal systems.
For example Japans 1896 Civil Code was heavily influenced by the German Civil Code.
Food and Agriculture Organization of the United Nations Land tenure and rural development FAO Land Tenure Studies No. 3 (2002) FAO Rome at page 7.
Bruce, J.W. Review of tenure terminology (1998) Tenure Brief No. 1, Land Tenure Cente, University of Wisconsin, Madison, p 1. One legacy of the Norman era is that strictly speaking all land in England and Wales is owned by the Queen, the best title that an individual can hold being the estate of the fee simple absolute. To all practical extents and purposes this is equivalent to ownership.
A pattern that their English descendants would in turn repeat in later centuries.
Hanstad, T Land Ownership in Prosterman, R. & Hanstad, T. Legal Impediments to Effective Rural Land Relations in Eastern Europe and Central Asia The World Bank, Washington D.C., 1999, at page 16.
As with the word tenure some care is needed with the word property. While it is frequently used to describe a thing that it is owned - as in the expression that is my property - from a semantic perspective property is not the actual thing that is owned but the subject of a relationship of ownership: property is the condition of being proper to or belonging to a person or persons.
For example the Crown or the Federal Government.
For example a number of jurisdictions in the civil law tradition include land assets among the domain or patrimony of the state.
In the common law a land parcel includes any buildings or structures attached to that land and they are thus included in the category or real property. Buildings and structures are similarly classed as immovable property in the civil tradition, although in some jurisdictions a building may be owned separately to the parcel of land around and below it.
Examples include tenancies concluded in respect of agricultural land, business premises and certain types of housing. The objectives of such restrictions vary. As in the case of the first two categories they are often to promote business continuity at least in the case of richer countries. As regards housing, the objectives are usually social in that the restrictions seek to protect poorer tenants against richer land lords. On the other hand, such type of social protection may also be found in respect of land leased for agricultural purposes, for example in the case of share-cropping whereby the rent is paid in kind out of the production from the land.
Some rental payments are for a nominal amount, a so-called pepper corn rent.
These might include an obligation on the part of the tenant to undertake periodic repairs to a building for example. Under the common law, the most important covenant on the part of the land lord is the covenant for quiet enjoyment whereby the tenant is, provided he pays the rent and complies with his obligations, entitled to enjoy the holding throughout the term of the tenancy without interference from the landlord.
An example from the common law will suffice. A lease must be for a specified determinable period of time, even if this period is indefinitely renewable. Thus a lease for the duration of the [second world] war was held to be void for uncertainty. LACE v CHANTLER KB 368.
Quan, J, Land Tenure, Economic Growth in Sub-Saharan Africa in Toulmin, C. & Quan, J. (eds) Evolving land rights, policy and tenure in Africa DFID/IIED/NRI London 2000, at page 33.
Forest legislation may in particular restrict or prohibit the acquisition of land tenure rights within forest areas.
In the United States, for example, although most land in most states is privately owned, in the Western states the Federal Government owns approximately half of all land, with individual states themselves owning a smaller but not insignificant share. The Federal Government owns more than half the land of the states of Alaska, Idaho, Nevada, Oregon and Utah). Huffman, J.L. Land Ownership and Environmental Regulation 25 Ecology Law Quarterly 591 (1999) pages 593 and 597.
These terms are largely synonymous: the former being used in the common law tradition and the latter in the civil law tradition.
Strictly speaking, of course, it is the land owner who enjoys such a right. Such a right is not personal to him but incidental to his ownership. In the language of the civil law tradition, the parcel of land that is subject to such a servitude is said to be burdened by it, to the benefit of the other parcel. The common law talks in terms of dominant tenements, which benefit from easements that negatively affect the servient tenement.
Indeed a further layer of complexity may be found in common law jurisdictions by reason of the concept of the trust, whereby the legal owner of an asset, such as land or a land right, may hold that resource in trust for the benefit of another person. The interest of the latter, an equitable interest may have important implications on how a formal land tenure right is exercised.
In the French Civil Code, for example, life interests (usufruit) are addressed in articles 578-624, the occupation of land (usage et habitation) in articles 625-636, easements (servitudes) in articles 637-710, pledges (nantissement or antichrèse) in articles 2071-2091 and acquisitive prescription or squatters rights (la prèscription) in articles 2219-2283.
Article 11 of the International Covenant on Economic, Social and Cultural Rights, provides that everyone has a right to an adequate standard of living for himself and his family including adequate food, clothing and housing. The Right to water was developed in General Comment 15 on the Covenant by the Committee on Economic, Social and Cultural Rights. Such General Comments constitute authoritative interpretations of the provisions of the Covenant to clarify the normative contents of rights, States parties and other actors obligations, violations and implementation of the rights at national level. Food and Agriculture Organization of the United Nations Agriculture, Food and Water FAO, Rome (2003), Annex One.
Article 24.
For practical reasons water in streams and rivers has tended to play a more important role than water in lakes and ponds as far as water rights are concerned as the gradient of flowing water makes it easier and cheaper to abstract. Water from a lake or pond must generally be pumped as the surrounding land will usually be above the level of the lake surface.
For example, apart from the rivers that form part of its northern, southern and north eastern borders, Namibia has only temporary rivers which may only last a few hours or days following periods of intense rainfall.
See discussion in Part Five below.
From a general legal perspective such terms are synonymous. Having said that, in those cases where the word concession is used in water legislation this generally relates to cases where a particularly long term of use is envisaged. The word concession is in any event a somewhat slippery term with several different meanings some of which are also used in the water sector. For example a person may hold a concession, in the sense of an exclusive right, to operate a pop-corn stand in cinema. Similarly, following the so-called French model, a private water supply company may hold a concession, in the sense of an exclusive right, to operate an urban water supply network. In a sense a water right that is described as a concession confers an exclusive right on the holder to use a given volume of water at a given location, but then this can said of any water right.
Joseph Sax, in the context of American water rights, is of no doubt that they are property rights even when created by permit. Sax, J.L. The Constitution, Property Rights and the Future of Water Law 61 University of Colorado Law Review 257 (1990).
Water evaporates from any surface.
McCaffrey, S., op cit, at page 23.
As in Albanias Water Law of 1995.
As in Argentinas Civil Code of 1869.
As in Ghanas Water Resources Commission Act of 1996 and in Zimbabwes Water Act of 1998.
As in Ugandas 1995 Water Resources Act and Victorias Water Act of 1989.
Although Spains recent water legislation omits fossil groundwater.
Article 13 of the Albanian Water Law, for example, provides that Everyone has the right to use surface water resources freely for drinking and other domestic necessities and for livestock watering without exceeding its use beyond individual and household needs....
Nevertheless water legislation usually provides that such free uses of water may also be subject to restriction in times of drought.
In the draft Water Bill that is currently subject to consultation. Similarly, agricultural irrigation is exempt from permit requirements in Kentucky and Maryland (up to 10,000 gallons a day) Getches, D.H, Water Law in a Nutshell West Publishing, St. Paul, Minn (1997) at page 57.
Such a right would not, however, be characterised as a land tenure right. | <urn:uuid:7f428b73-4b8a-4a0b-8c32-b7f68429a0cd> | CC-MAIN-2017-17 | http://www.fao.org/docrep/007/j2601e/j2601e02.htm | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917127681.50/warc/CC-MAIN-20170423031207-00133-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.948831 | 5,902 | 3.59375 | 4 |
From Wikipedia, the free encyclopedia
Gamma radiation, also known as gamma rays or hyphenated as gamma-rays (especially in astronomy, by analogy with X-rays) and denoted as γ, is electromagnetic radiation of high frequency (very short wavelength). On Earth, gamma rays are most often produced naturally by the decay of high-energy states in atomic nuclei (gamma decay). High-energy sub-atomic particle interactions resulting from cosmic rays are another important natural source, and such high-energy reactions are also the common artificial source of gamma rays. Other man-made mechanisms include electron-positron annihilation, neutral pion decay, fusion, and induced fission. Some rare natural sources are lightning strikes and terrestrial gamma-ray flashes, which produce high-energy particles from natural high-energy voltages. Gamma rays are also produced by astronomical processes in which very high-energy electrons are produced; such electrons generate secondary gamma rays by the mechanisms of bremsstrahlung, inverse Compton scattering and synchrotron radiation. Gamma rays are ionizing radiation and are thus biologically hazardous.
A classical gamma ray source, and the first to be discovered historically, is a type of radioactive decay called gamma decay. In this type of decay, an excited nucleus emits a gamma ray almost immediately on formation, although isomeric transition can produce inhibited gamma decay with a measurable and much longer half-life. Paul Villard, a French chemist and physicist, discovered gamma radiation in 1900, while studying radiation emitted from radium. Villard's radiation was named "gamma rays" by Ernest Rutherford in 1903.
Gamma rays typically have frequencies above 10 exahertz (>10¹⁹ Hz), and therefore have energies above 100 keV and wavelengths less than 10 picometers, less than the diameter of an atom. However, this is not a hard and fast definition but rather only a rule-of-thumb description for natural processes. Gamma rays from radioactive decay commonly have energies of a few hundred keV, and almost always less than 10 MeV. On the other side of the decay energy range, there is effectively no lower limit to gamma energy derived from radioactive decay. By contrast, energies from astronomical sources can be much higher, ranging over 10 TeV (this is far too large to result from radioactive decay).
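The rule-of-thumb thresholds above are tied together by the Planck relation, E = hν, and λ = hc/E. A minimal sketch of the conversions (function names are my own; the constants are CODATA values):

```python
# Photon energy/frequency/wavelength conversions for the gamma-ray
# thresholds quoted above. Note the quoted 100 keV / 10 EHz / 10 pm
# figures are round numbers, consistent only to within a factor of ~2.

H_PLANCK = 6.62607015e-34  # Planck constant, J*s
C_LIGHT = 2.99792458e8     # speed of light, m/s
EV = 1.602176634e-19       # joules per electron-volt

def frequency_hz(energy_ev: float) -> float:
    """Photon frequency from energy, via E = h*nu."""
    return energy_ev * EV / H_PLANCK

def wavelength_pm(energy_ev: float) -> float:
    """Photon wavelength in picometers, via lambda = h*c/E."""
    return H_PLANCK * C_LIGHT / (energy_ev * EV) * 1e12

print(f"100 keV -> {frequency_hz(100e3):.2e} Hz")   # ~2.42e19 Hz
print(f"100 keV -> {wavelength_pm(100e3):.1f} pm")  # ~12.4 pm
```

The exact conversion places 100 keV at about 2.4×10¹⁹ Hz and 12.4 pm, which is why the text stresses that these boundaries are descriptive rather than definitional.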
The distinction between X-rays and gamma rays has changed in recent decades. Originally, the electromagnetic radiation emitted by X-ray tubes almost invariably had a longer wavelength than the radiation emitted by radioactive nuclei (gamma rays). Older literature distinguished between X- and gamma radiation on the basis of wavelength, with radiation shorter than some arbitrary wavelength, such as 10⁻¹¹ m, defined as gamma rays. However, with artificial sources now able to duplicate any electromagnetic radiation that originates in the nucleus, as well as far higher energies, the wavelengths characteristic of radioactive gamma-ray sources and of other types now completely overlap. Thus, gamma rays are now usually distinguished by their origin: X-rays are by definition emitted by electrons outside the nucleus, while gamma rays are emitted by the nucleus. Exceptions to this convention occur in astronomy, where high-energy processes known to involve sources other than radioactive decay are still named as sources of gamma radiation. A notable example is the extremely powerful bursts of high-energy radiation normally referred to as long-duration gamma-ray bursts, which produce gamma rays by a mechanism not compatible with radioactive decay. These bursts of gamma rays, thought to be due to the collapse of stars called hypernovae, are the most powerful single events so far discovered in the cosmos.
Naming conventions and overlap in terminology
In the past, the distinction between X-rays and gamma rays was based on energy (or equivalently frequency or wavelength), with gamma rays being considered a higher-energy version of X-rays. However, modern high-energy (megavoltage) X-rays produced by linear accelerators ("linacs") for megavoltage treatment in cancer radiotherapy, usually have higher energy (typically 4 to 25 MeV) than do most classical gamma rays produced by radioactive gamma decay. Conversely, one of the most common gamma ray emitting isotopes used in diagnostic nuclear medicine, technetium-99m, produces gamma radiation of about the same energy (140 keV) as produced by a diagnostic X-ray machine, and significantly lower energy than therapeutic photons from linacs.
Because of this broad overlap in energy ranges, the two types of electromagnetic radiation are now usually defined by their origin: X-rays are emitted by electrons (either in orbitals outside of the nucleus, or while being accelerated to produce Bremsstrahlung-type radiation), while gamma rays are emitted by the nucleus or from other particle decays or annihilation events. There is no lower limit to the energy of photons produced by nuclear reactions, and thus ultraviolet and even lower energy photons produced by these processes would also be defined as "gamma rays".
In certain fields such as astronomy, higher-energy gamma rays and X-rays are still sometimes defined by energy, since the processes that produce them may be uncertain. Occasionally, high-energy photons in nature that are known not to be produced by nuclear decay are nevertheless referred to as gamma radiation. An example is "gamma rays" from lightning discharges at 10 to 20 MeV, which are known to be produced by the Bremsstrahlung mechanism.
Another example is gamma-ray bursts, which are named historically and are now known to be produced by processes too powerful to involve simple collections of atoms undergoing radioactive decay. A few astronomical gamma-ray sources are known to be explicitly nuclear in origin; a classic example is supernova SN 1987A, which emitted an "afterglow" of gamma-ray photons from the decay of newly made radioactive cobalt-56 ejected into space in a cloud by the explosion. However, many gamma rays produced in astronomical processes arise not from radioactive decay or particle annihilation, but in much the same manner as the production of X-rays, simply using electrons with higher energies. Astronomical literature tends to write "gamma-ray" with a hyphen, by analogy to X-rays, rather than in a way analogous to alpha rays and beta rays. This notation tends to subtly stress the non-nuclear source of many astronomical gamma rays.
Units of measure and exposure
The measure of gamma rays' ionizing ability is called the exposure:
- The coulomb per kilogram (C/kg) is the SI unit of ionizing radiation exposure, and is the amount of radiation required to create 1 coulomb of charge of each polarity in 1 kilogram of matter.
- The röntgen (R) is an obsolete traditional unit of exposure, which represented the amount of radiation required to create 1 esu of charge of each polarity in 1 cubic centimeter of dry air. 1 röntgen = 2.58×10⁻⁴ C/kg
- The gray (Gy), which has units of (J/kg), is the SI unit of absorbed dose, and is the amount of radiation required to deposit 1 joule of energy in 1 kilogram of any kind of matter.
- The rad is the (obsolete) corresponding traditional unit, equal to 0.01 J deposited per kg. 100 rad = 1 Gy.
- The sievert (Sv) is the SI unit of equivalent dose, which for gamma rays is numerically equal to the gray (Gy).
- The rem is the traditional unit of equivalent dose. For gamma rays it is equal to the rad or 0.01 J of energy deposited per kg. 1 Sv = 100 rem.
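The conversion factors listed above are simple fixed ratios. A minimal sketch (function names are my own, not from any standard library):

```python
# Conversions between the traditional and SI radiation units listed above.

def roentgen_to_coulomb_per_kg(r: float) -> float:
    """Exposure: 1 R = 2.58e-4 C/kg."""
    return r * 2.58e-4

def rad_to_gray(rad: float) -> float:
    """Absorbed dose: 100 rad = 1 Gy."""
    return rad * 0.01

def rem_to_sievert(rem: float) -> float:
    """Equivalent dose: 100 rem = 1 Sv."""
    return rem * 0.01

# For gamma rays the gray and sievert are numerically equal,
# so a 250 rad absorbed dose is 2.5 Gy and also 2.5 Sv equivalent:
print(rad_to_gray(250), rem_to_sievert(250))  # 2.5 2.5
```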
Shielding from gamma rays requires large amounts of mass, in contrast to alpha particles which can be blocked by paper or skin, and beta particles which can be shielded by foil. They are better absorbed by materials with high atomic numbers and high density, although neither effect is important compared to the total mass per area in the path of the gamma ray. For this reason, a lead shield is only modestly better (20–30%) as a gamma shield than an equal mass of another shielding material such as aluminium, concrete, water or soil; lead's major advantage is its density. Protective clothing, goggles and respirators can protect from internal contact with or ingestion of alpha or beta particles, but provide no protection from gamma radiation.
The higher the energy of the gamma rays, the thicker the shielding required. Materials for shielding gamma rays are typically measured by the thickness required to reduce the intensity of the gamma rays by one half (the half value layer or HVL). For example gamma rays that require 1 cm (0.4″) of lead to reduce their intensity by 50% will also have their intensity reduced in half by 4.1 cm of granite rock, 6 cm (2½″) of concrete, or 9 cm (3½″) of packed soil. However, the mass of this much concrete or soil is only 20–30% larger than that of lead with the same absorption capability. Depleted uranium is used for shielding in portable gamma ray sources, but again the savings in weight over lead is modest, and the main effect is to reduce shielding bulk. In a nuclear powerplant, shielding can be provided by steel and concrete in the pressure vessel and containment, while water also provides a shielding material for fuel rods in storage or transport into the reactor core. A loss of water or removal of a "hot" spent fuel assembly into the air would result in much higher radiation levels than under water.
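Because each half-value layer halves the intensity, the fraction transmitted through a shield falls off as 0.5 raised to the number of HVLs. A sketch using the illustrative thicknesses from the text (these HVL figures are the article's examples, not authoritative data for any particular gamma energy):

```python
# Half-value-layer (HVL) shielding estimate: each HVL of material
# halves the gamma intensity.

HVL_CM = {"lead": 1.0, "granite": 4.1, "concrete": 6.0, "soil": 9.0}

def transmitted_fraction(material: str, thickness_cm: float) -> float:
    """Fraction of gamma intensity surviving a shield of given thickness."""
    return 0.5 ** (thickness_cm / HVL_CM[material])

# The text's example: 1 cm of lead and 6 cm of concrete both halve intensity.
print(transmitted_fraction("lead", 1.0))      # 0.5
print(transmitted_fraction("concrete", 6.0))  # 0.5
# Ten half-value layers attenuate to about 0.1% of the original intensity:
print(transmitted_fraction("lead", 10.0))     # ~0.00098
```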
When a gamma ray passes through matter, the probability for absorption in a thin layer is proportional to the thickness of that layer. This leads to an exponential decrease of intensity with thickness:

I(d) = I0 · e^(−μd)

where μ = nσ is the absorption coefficient, measured in cm^−1, n the number of atoms per cm^3 in the material, σ the absorption cross section in cm^2 and d the thickness of material in cm. The exponential absorption holds only for a narrow beam of gamma rays; if a wide beam of gamma rays passes through a thick slab of concrete, the scattering from the sides reduces the absorption.
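A minimal numeric sketch of this exponential attenuation law. The absorption coefficient below is an assumed illustrative value, chosen so that one half value layer is exactly 1 cm (matching the lead example quoted earlier); real coefficients depend on material and photon energy.

```python
import math

def transmitted_fraction(mu: float, d: float) -> float:
    """Narrow-beam attenuation: I/I0 = exp(-mu * d), mu in cm^-1, d in cm."""
    return math.exp(-mu * d)

def half_value_layer(mu: float) -> float:
    """Thickness that halves the intensity: HVL = ln 2 / mu."""
    return math.log(2) / mu

# Assumed coefficient: if 1 cm of lead is one HVL for some gamma energy,
# then mu = ln 2 / 1 cm ≈ 0.693 cm^-1.
mu_lead = math.log(2) / 1.0

print(round(transmitted_fraction(mu_lead, 1.0), 3))  # 0.5   (one HVL)
print(round(transmitted_fraction(mu_lead, 3.0), 3))  # 0.125 (three HVLs)
print(round(half_value_layer(mu_lead), 3))           # 1.0   (cm)
```

Note that each additional half value layer multiplies the transmitted intensity by another factor of one half, which is just the exponential law restated.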
- Photoelectric effect: This describes the case in which a gamma photon interacts with and transfers its energy to an atomic electron, ejecting that electron from the atom. The kinetic energy of the resulting photoelectron is equal to the energy of the incident gamma photon minus the binding energy of the electron. The photoelectric effect is the dominant energy transfer mechanism for X-ray and gamma ray photons with energies below 50 keV (thousand electron volts), but it is much less important at higher energies.
- Compton scattering: This is an interaction in which an incident gamma photon loses enough energy to an atomic electron to cause its ejection, with the remainder of the original photon's energy being emitted as a new, lower energy gamma photon with an emission direction different from that of the incident gamma photon. The probability of Compton scatter decreases with increasing photon energy. Compton scattering is thought to be the principal absorption mechanism for gamma rays in the intermediate energy range 100 keV to 10 MeV. Compton scattering is relatively independent of the atomic number of the absorbing material, which is why very dense metals like lead are only modestly better shields, on a per weight basis, than are less dense materials.
- Pair production: This becomes possible with gamma energies exceeding 1.02 MeV, and becomes important as an absorption mechanism at energies over about 5 MeV (see illustration at right, for lead). By interaction with the electric field of a nucleus, the energy of the incident photon is converted into the mass of an electron-positron pair. Any gamma energy in excess of the equivalent rest mass of the two particles (1.02 MeV) appears as the kinetic energy of the pair and the recoil nucleus. At the end of the positron's range, it combines with a free electron. The entire mass of these two particles is then converted into two gamma photons of at least 0.51 MeV energy each (or higher according to the kinetic energy of the annihilated particles).
The secondary electrons (and/or positrons) produced in any of these three processes frequently have enough energy to produce much ionization themselves.
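The energy regimes quoted for these three processes can be collapsed into a rough classifier. The 50 keV and ~5 MeV boundaries are the approximate figures from the text; the real cross-over points depend strongly on the absorber's atomic number, so this is only an illustrative sketch.

```python
def dominant_interaction(energy_mev: float) -> str:
    """Rough regime boundaries taken from the approximate figures in the
    text; the true transitions depend on the absorbing material's Z."""
    if energy_mev < 0.05:      # photoelectric dominates below ~50 keV
        return "photoelectric effect"
    if energy_mev < 5.0:       # Compton dominates ~100 keV to a few MeV
        return "Compton scattering"
    # Pair production has a 1.02 MeV threshold but only becomes the
    # dominant mechanism above roughly 5 MeV (e.g. in lead).
    return "pair production"

print(dominant_interaction(0.03))   # photoelectric effect
print(dominant_interaction(1.0))    # Compton scattering
print(dominant_interaction(10.0))   # pair production
```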
High-energy (from 80 to 500 GeV) gamma rays arriving from far-distant quasars are used to estimate the extragalactic background light in the universe: the highest-energy rays interact more readily with the background light photons, and thus their density may be estimated by analyzing the incoming gamma ray spectra.
Gamma ray production
Gamma rays can be produced by a wide range of phenomena.
Radioactive decay (gamma decay)
Gamma rays from radioactive gamma decay are produced alongside other forms of radiation such as alpha or beta, and are produced after the other types of decay occur. The mechanism is that when a nucleus emits an α or β particle, the daughter nucleus is usually left in an excited state. It can then move to a lower energy state by emitting a gamma ray, in much the same way that an atomic electron can jump to a lower energy state by emitting infrared, visible, or ultraviolet light. Emission of a gamma ray from an excited nuclear state typically requires only 10^−12 seconds, and is thus nearly instantaneous, following types of radioactive decay that produce other radioactive particles. Gamma decay from excited states may also happen rapidly following nuclear reactions such as neutron capture, nuclear fission, or nuclear fusion.
In certain cases, the excited nuclear state following the emission of a beta particle may be more stable than average, and is termed a metastable excited state if its decay takes 100 to 1000 times longer than the average 10^−12 seconds. Such nuclei have half-lives that are easily measurable, and are termed nuclear isomers. Some nuclear isomers are able to stay in their excited state for minutes, hours, days, or occasionally far longer, before emitting a gamma ray. Isomeric transition is the name given to a gamma decay from such a state. The process of isomeric transition is therefore similar to any gamma emission, but differs in that it involves metastable excited states of the nuclei.
An emitted gamma ray from any type of excited state may transfer its energy directly to one of the most tightly bound electrons, causing it to be ejected from the atom, a process termed the photoelectric effect (it should not be confused with the internal conversion process, in which no real gamma ray photon is produced as an intermediate particle).
Gamma rays, X-rays, visible light, and radio waves are all forms of electromagnetic radiation. The only difference is the frequency and hence the energy of the photons. Gamma rays are generally the most energetic of these, although broad overlap with X-ray energies occurs. An example of gamma ray production is the beta decay of 60Co: the decay leaves the daughter nucleus 60Ni in an excited state, which then reaches its ground state by emitting gamma rays.
Another example is the alpha decay of 241Am to form 237Np; this alpha decay is accompanied by gamma emission. In some cases the gamma emission spectrum of the daughter nucleus is quite simple (e.g. 60Co/60Ni), while in other cases, such as 241Am/237Np and 192Ir/192Pt, the gamma emission spectrum is complex, revealing that a series of nuclear energy levels can exist. The fact that an alpha spectrum can have a series of different peaks with different energies reinforces the idea that several nuclear energy levels are possible.
Because a beta decay is accompanied by the emission of a neutrino which also carries energy away, the beta spectrum does not have sharp lines, but instead is a broad peak. Hence from beta decay alone it is not possible to probe the different energy levels found in the nucleus.
In optical spectroscopy, it is well known that an entity which emits light can also absorb light at the same wavelength (photon energy). For instance, a sodium flame can emit yellow light as well as absorb the yellow light from a sodium vapor lamp. In the case of gamma rays, this can be seen in Mössbauer spectroscopy. Here, a correction for the energy lost by the recoil of the nucleus is made and the exact conditions for gamma ray absorption through resonance can be attained.
This is similar to the Franck-Condon effect seen in optical spectroscopy.
Gamma rays from sources other than radioactive decay
Gamma radiation, like X-radiation, can be produced by a variety of phenomena. For example, when high-energy gamma rays, electrons, or protons bombard materials, the excited atoms within emit characteristic "secondary" (or fluorescent) gamma rays, which are products of temporary creation of excited nuclear states in the bombarded atoms (such transitions form a topic in nuclear spectroscopy). Such gamma rays are produced by the nucleus, but not as a result of nuclear excitation from radioactive decay.
Energy in the gamma radiation range, often explicitly called gamma-radiation when it comes from astrophysical sources, is also produced by sub-atomic particle and particle-photon interactions. These include electron-positron annihilation, neutral pion decay, bremsstrahlung, inverse Compton scattering and synchrotron radiation. In a terrestrial gamma-ray flash, a brief pulse of gamma radiation occurring high in Earth's atmosphere, gamma rays are thought to be produced by high-intensity static electric fields accelerating electrons, which then produce gamma rays through bremsstrahlung as they collide with atoms in the air.
High energy gamma rays in astronomy include a gamma ray background produced when cosmic rays (either high speed electrons or protons) interact with ordinary matter, producing both pair-production gamma rays at 511 keV and bremsstrahlung at energies of tens of MeV or more when cosmic ray electrons interact with nuclei of sufficiently high atomic number (see gamma ray image of the Moon at the beginning of this article, for illustration).
- Pulsars and magnetars. The gamma ray sky (see illustration at right) is dominated by the more common and longer-term production of gamma rays in beams that emanate from pulsars within the Milky Way. Sources from the rest of the sky are mostly quasars. Pulsars are thought to be neutron stars with magnetic fields that produce focused beams of radiation, and are far less energetic, more common, and much nearer (typically seen only in our own galaxy) than are quasars (or the rarer gamma ray burst sources discussed below). In a pulsar, which produces gamma rays for much longer than a burst, the relatively long-lived magnetic field produces focused beams of relativistic charged particles, which produce gamma rays when they strike gas or dust in the nearby medium and are deflected or stopped. This is a similar mechanism to the production of high energy photons in megavoltage radiation therapy machines (see bremsstrahlung). The "inverse Compton effect", in which charged particles (usually electrons) scatter low-energy photons up to higher energies (the gamma rays), is another possible mechanism of gamma ray production from relativistic charged particle beams. Neutron stars with a very high magnetic field (magnetars) are thought to produce astronomical soft gamma repeaters, another relatively long-lived neutron-star-powered source of gamma radiation.
- Quasars and active galaxies. More powerful gamma rays from much farther quasars and other active galaxies probably have a roughly similar linear particle accelerator-like method of production, with high energy electrons produced by the quasar, followed again by inverse Compton scattering, synchrotron radiation, or bremsstrahlung, to produce gamma rays in distant galaxies. As the black hole at the center of such a galaxy intermittently destroys stars and focuses charged particles derived from them into beams, these beams interact with gas, dust, and lower energy photons to produce X-ray and gamma ray radiation. These sources are known to fluctuate with durations of a few weeks, indicating their relatively small size (less than a few light-weeks across). The particle beams emerge from the rotational poles of the supermassive black hole at the galactic center, which is thought to form the power source of the quasar. Such sources of gamma and X-rays are the most commonly visible high intensity sources outside our own galaxy, since they shine not as bursts (see illustration), but relatively continuously when viewed with gamma ray telescopes. The power of a typical quasar is about 10^40 watts, of which only a small fraction is emitted as gamma radiation; much of the rest is emitted as electromagnetic waves at all frequencies, including radio waves.
- Gamma-ray bursts. The most intense sources of gamma rays known are also the most intense sources of any type of electromagnetic radiation presently known. They are rare compared with the sources discussed above. These intense sources are the "long duration burst" sources of gamma rays in astronomy ("long" in this context meaning a few tens of seconds). By contrast, "short" gamma ray bursts, which are not associated with supernovae, are thought to produce gamma rays during the collision of pairs of neutron stars, or of a neutron star and black hole, after they spiral toward each other by emission of gravitational waves; such bursts last two seconds or less, and are of far lower energy than the "long" bursts (they are often seen only in our own galaxy for this reason).
The so-called long duration gamma ray bursts are events in which energies of ~10^44 joules (as much energy as our Sun will produce in its entire lifetime) are released over a period of only 20 to 40 seconds, with high-efficiency conversion to gamma rays (on the order of 50% of total energy). The leading hypotheses for the mechanism of production of these highest-known-intensity beams of radiation are inverse Compton scattering and synchrotron radiation from high-energy charged particles. These processes occur as relativistic charged particles leave the region near the event horizon of the newly formed black hole during the supernova explosion and are focused for a few tens of seconds into a relativistic beam by the magnetic field of the exploding hypernova. The fusion explosion of the hypernova drives the energetics of the process. If the beam happens to be narrowly directed toward the Earth, it shines with high gamma ray power even at distances of up to 10 billion light years, close to the edge of the visible universe.
All ionizing radiation causes similar damage at a cellular level, but because rays of alpha particles and beta particles are relatively non-penetrating, external exposure to them causes only localized damage, e.g. radiation burns to the skin. Gamma rays and neutrons are more penetrating, causing diffuse damage throughout the body (e.g. radiation sickness), increasing the incidence of cancer rather than burns. External radiation exposure should also be distinguished from internal exposure, due to ingested or inhaled radioactive substances, which, depending on the substance's chemical nature, can produce both diffuse and localized internal damage. The most biologically damaging forms of gamma radiation occur in the gamma ray window, between 3 and 10 MeV, with higher energy gamma rays being less harmful because the body is relatively transparent to them. See cobalt-60.
Gamma rays travel to Earth across vast distances of the universe, only to be absorbed by Earth's atmosphere. Different wavelengths of light penetrate Earth's atmosphere to different depths. Instruments aboard high-altitude balloons and such satellites as the Compton Observatory provide our only view of the gamma spectrum sky.
Non-contact industrial sensors in the refining, mining, chemical, food, soaps and detergents, and pulp and paper industries commonly use gamma sources in applications measuring level, density, and thickness. Typically these use Co-60 or Cs-137 isotopes as the radiation source.
In the US, gamma ray detectors are beginning to be used as part of the Container Security Initiative (CSI). These US$5 million machines are advertised to scan 30 containers per hour. The objective of this technique is to screen merchant ship containers before they enter US ports.
Gamma radiation is often used to kill living organisms, in a process called irradiation. Applications of this include sterilizing medical equipment (as an alternative to autoclaves or chemical means), removing decay-causing bacteria from many foods or preventing fruit and vegetables from sprouting to maintain freshness and flavor.
Despite their cancer-causing properties, gamma rays are also used to treat some types of cancer, since the rays kill cancer cells also. In the procedure called gamma-knife surgery, multiple concentrated beams of gamma rays are directed on the growth in order to kill the cancerous cells. The beams are aimed from different angles to concentrate the radiation on the growth while minimizing damage to surrounding tissues.
Gamma rays are also used for diagnostic purposes in nuclear medicine in imaging techniques. A number of different gamma-emitting radioisotopes are used. For example, in a PET scan a radiolabeled sugar called fludeoxyglucose emits positrons that are converted to pairs of gamma rays that localize cancer (which often takes up more sugar than other surrounding tissues). The most common gamma emitter used in medical applications is the nuclear isomer technetium-99m, which emits gamma rays in the same energy range as diagnostic X-rays. When this radionuclide tracer is administered to a patient, a gamma camera can be used to form an image of the radioisotope's distribution by detecting the gamma radiation emitted (see also SPECT). Depending on what molecule has been labeled with the tracer, such techniques can be employed to diagnose a wide range of conditions (for example, the spread of cancer to the bones in a bone scan).
When gamma radiation breaks DNA molecules, a cell may be able to repair the damaged genetic material, within limits. However, a study by Rothkamm and Löbrich has shown that this repair process works well after high-dose exposure but is much slower in the case of a low-dose exposure.
The natural outdoor exposure in Great Britain ranges from 2 to 4 nSv/h (nanosieverts per hour). Natural exposure to gamma rays is about 1 to 2 mSv per year, and the average total amount of radiation received in one year per inhabitant in the USA is 3.6 mSv. There is a small increase in the dose, due to naturally occurring gamma radiation, around small particles of high atomic number materials in the human body caused by the photoelectric effect.
By comparison, the radiation dose from chest radiography (about 0.06 mSv) is a fraction of the annual naturally occurring background radiation dose. A chest CT delivers 5 to 8 mSv. A whole-body PET/CT scan can deliver 14 to 32 mSv depending on the protocol. The dose from fluoroscopy of the stomach is much higher, approximately 50 mSv (about 14 times the annual background).
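Using the figures quoted here (a 3.6 mSv average annual background dose, a chest X-ray at 0.06 mSv, stomach fluoroscopy at ~50 mSv, and an assumed mid-range 7 mSv for a chest CT), a one-off medical dose can be expressed as an equivalent time spent at the background rate:

```python
ANNUAL_BACKGROUND_MSV = 3.6  # average US figure quoted above

def years_of_background(dose_msv: float) -> float:
    """Express a one-off dose as equivalent years at background rate."""
    return dose_msv / ANNUAL_BACKGROUND_MSV

procedures = [
    ("chest X-ray", 0.06),         # figure from the text
    ("chest CT", 7.0),             # assumed mid-range of the quoted 5-8 mSv
    ("stomach fluoroscopy", 50.0), # figure from the text
]
for name, dose in procedures:
    print(f"{name}: {years_of_background(dose):.2f} years of background")
```

The fluoroscopy figure works out to roughly 14 years of background exposure, consistent with the "14 times the annual background" comparison in the text.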
An acute full-body equivalent single exposure dose of 1 Sv (1000 mSv) causes slight blood changes, but 2.0–3.5 Sv (2.0–3.5 Gy) causes a very severe syndrome of nausea, hair loss, and hemorrhaging, and will cause death in a sizable number of cases, about 10% to 35% without medical treatment. A dose of 5 Sv (5 Gy) is considered approximately the LD50 (lethal dose for 50% of the exposed population) for an acute exposure to radiation even with standard medical treatment, and a dose higher than 5 Sv (5 Gy) brings an increasing chance of death above 50%. Above 7.5–10 Sv (7.5–10 Gy) to the entire body, even extraordinary treatment, such as bone-marrow transplants, will not prevent the death of the individual exposed (see radiation poisoning). (Doses much larger than this may, however, be delivered to selected parts of the body in the course of radiation therapy.)
For low dose exposure, for example among nuclear workers, who receive an average yearly radiation dose of 19 mSv, the risk of dying from cancer (excluding leukemia) increases by 2 percent. For a dose of 100 mSv, that risk increase is 10 percent. By comparison, risk of dying from cancer was increased by 32 percent for the survivors of the atomic bombing of Hiroshima and Nagasaki.
- Alpha particle
- Beta particle
- Gamma camera
- Gamma-ray astronomy
- Gamma-ray burst
- Gamma spectroscopy
- Mössbauer effect
- Nuclear fission and fusion
- Radioactive decay
- ^ P. Villard (1900) "Sur la réflexion et la réfraction des rayons cathodiques et des rayons déviables du radium," Comptes rendus, vol. 130, pages 1010-1012. See also: P. Villard (1900) "Sur le rayonnement du radium," Comptes rendus, vol. 130, pages 1178-1179.
- ^ L'Annunziata, Michael F. (2007). Radioactivity: introduction and history. Amsterdam, Netherlands: Elsevier BV. pp. 55–58. ISBN 9780444527158.
- ^ Rutherford named γ rays on page 177 of: E. Rutherford (1903) "The magnetic and electric deviation of the easily absorbed rays from radium," Philosophical Magazine, Series 6, vol. 5, no. 26, pages 177-187.
- ^ Aharonian, F.; Akhperjanian, A.; Barrio, J.; Bernlohr, K.; Borst, H.; Bojahr, H.; Bolz, O.; Contreras, J. et al. (2001). "The TeV Energy Spectrum of Markarian 501 Measured with the Stereoscopic Telescope System of HEGRA during 1998 and 1999". The Astrophysical Journal 546 (2): 898. Bibcode 2001ApJ...546..898A. doi:10.1086/318321.
- ^ a b Dendy, P. P.; B. Heaton (1999). Physics for Diagnostic Radiology. USA: CRC Press. p. 12. ISBN 0750305916. http://books.google.com/?id=1BTQvsQIs4wC&pg=PA12.
- ^ Charles Hodgman, Ed. (1961). CRC Handbook of Chemistry and Physics, 44th Ed. USA: Chemical Rubber Co. p. 2850.
- ^ Feynman, Richard; Robert Leighton, Matthew Sands (1963). The Feynman Lectures on Physics, Vol.1. USA: Addison-Wesley. pp. 2–5. ISBN 0201021161.
- ^ L'Annunziata, Michael; Mohammad Baradei (2003). Handbook of Radioactivity Analysis. Academic Press. p. 58. ISBN 0124366031. http://books.google.com/?id=b519e10OPT0C&pg=PA58.
- ^ Grupen, Claus; G. Cowan, S. D. Eidelman, T. Stroh (2005). Astroparticle Physics. Springer. p. 109. ISBN 3540253122.
- ^ Shaw, R. W.; Young, J. P.; Cooper, S. P.; Webb, O. F. (1999). "Spontaneous Ultraviolet Emission from 233Uranium/229Thorium Samples". Physical Review Letters 82 (6): 1109–1111. Bibcode 1999PhRvL..82.1109S. doi:10.1103/PhysRevLett.82.1109.
- ^ Lightning produced "gammas" as Bremsstrahlung from 35 MeV lightning electrons
- ^ Bock, R. K.; et al (2008-06-27). "Very-High-Energy Gamma Rays from a Distant Quasar: How Transparent Is the Universe?". Science 320 (5884): pp 1752–1754. Bibcode 2008Sci...320.1752M. doi:10.1126/science.1157087. ISSN 0036-8075. PMID 18583607.
- ^ Announcement of first close study of a short gamma-ray burst.
- ^ Rothkamm, K; Löbrich, M (2003). "Evidence for a lack of DNA double-strand break repair in human cells exposed to very low x-ray doses". Proceedings of the National Academy of Sciences of the United States of America 100 (9): 5057–62. Bibcode 2003PNAS..100.5057R. doi:10.1073/pnas.0830918100. PMC 154297. PMID 12679524. http://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=154297.
- ^ Department for Environment, Food and Rural Affairs (Defra) UK Key facts about radioactivity, 2003
- ^ United Nations Scientific Committee on the Effects of Atomic Radiation Annex E: Medical radiation exposures – Sources and Effects of Ionizing – 1993, p. 249, New York, UN
- ^ Pattison, J. E.; Hugtenburg, R. P.; Green, S. (2009). "Enhancement of natural background gamma-radiation dose around uranium microparticles in the human body". Journal of the Royal Society Interface 7 (45): 603. doi:10.1098/rsif.2009.0300.
- ^ US National Council on Radiation Protection and Measurements – NCRP Report No. 93 – pp 53–55, 1987. Bethesda, Maryland, USA, NCRP
- ^ PET/CT total radiation dose calculations. Accessed June 23, 2011.
- ^ IARC – Cancer risk following low doses of ionizing radiation – a 15-country study – http://www.iarc.fr/ENG/Units/RCAa1.html
- Basic reference on several types of radiation
- Radiation Q & A
- GCSE information
- Radiation information
- Gamma ray bursts
- The Lund/LBNL Nuclear Data Search – Contains information on gamma-ray energies from isotopes.
- Mapping soils with airborne detectors
- The LIVEChart of Nuclides – IAEA with filter on gamma-ray energy, in Java or HTML
- Health Physics Society Public Education Website | <urn:uuid:fb886823-090c-428d-8a35-65b2bfed10bd> | CC-MAIN-2017-17 | http://wpedia.goo.ne.jp/enwiki/Gamma_rays | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121778.66/warc/CC-MAIN-20170423031201-00131-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.903836 | 7,181 | 4.0625 | 4 |
India Overview Essay, Research Paper
A Brief History of India
The roots of Indian civilization stretch back in time to pre-recorded history. The earliest human activity in the Indian sub-continent can be traced back to the Early, Middle and Late Stone Ages (400,000-200,000 BC). The first evidence of agricultural settlements on the western plains of the Indus is roughly contemporaneous with similar developments in Egypt, Mesopotamia and Persia.
The Indus Valley Civilization
This earliest known civilization in India, the starting point in its history, dates back to about 3000 BC. Discovered in the 1920s, it was thought to have been confined to the valley of the river Indus, hence the name given to it was Indus Valley civilization. This civilization was a highly developed urban one and two of its towns, Mohenjodaro and Harappa, represent the high watermark of the settlements.
The emergence of this civilization is as remarkable as its stability for nearly a thousand years. All the cities were well planned and were built with baked bricks of the same size; the streets were laid at right angles with an elaborate system of covered drains. There was a fairly clear division of localities and houses were earmarked for the upper and lower strata of society. There were also public buildings, the most famous being the Great Bath at Mohenjodaro and the vast granaries. Production of several metals such as copper, bronze, lead and tin was also undertaken and some remnants of furnaces provide evidence of this fact. The discovery of kilns to make bricks support the fact that burnt bricks were used extensively in domestic and public buildings. Evidence also points to the use of domesticated animals, including camels, goats, water buffaloes and fowls.
Trade seemed to be a major activity at the Indus Valley and the sheer quantity of seals discovered suggest that each merchant or mercantile family owned its own seal. These seals are in various quadrangular shapes and sizes, each with a human or an animal figure carved on it. Discoveries suggest that the Harappan civilization had extensive trade relations with the neighboring regions in India and with distant lands in the Persian Gulf and Sumer (Iraq). The Harappan society was probably divided according to occupations and this also suggests the existence of an organized government.
The Aryans and the Vedic Age
The Aryans are said to have entered India through the fabled Khyber pass, around 1500 BC. They intermingled with the local populace, and assimilated themselves into the social framework. They adopted the settled agricultural lifestyle of their predecessors, and established small agrarian communities across the state of Punjab. The Aryans are believed to have brought with them the horse, developed the Sanskrit language and made significant inroads into the religion of the times. All three factors were to play a fundamental role in the shaping of Indian culture. Cavalry warfare facilitated the rapid spread of Aryan culture across North India, and allowed the emergence of large empires. With work specialization, the internal division of the Aryan society developed along caste lines. Their social framework was composed mainly of the following groups: the Brahmana (priests), Kshatriya (warriors), Vaishya (agriculturists) and Shudra (workers).
With land becoming property and the society being divided on the basis of occupations and castes, conflicts and disorders were bound to arise. Organized power to resolve these issues therefore emerged, gradually leading to formation of full-fledged state systems, including vast empires like The Mauryan Empire, the Gupta Empire, and the Cholas, Pandyas, Cheras, Chalukyas and Pallavas in the south.
The Great Mughals
The most important Islamic empire was that of the Mughals, a Central Asian dynasty founded by Babur early in the sixteenth century. His son Humayun succeeded Babur and under the reign of Humayun’s son, Akbar the Great (1562-1605), Indo-Islamic culture attained a peak of tolerance, harmony and a spirit of enquiry. The nobles of his court belonged to both the Hindu and the Muslim faiths, and Akbar himself married a Hindu princess. Mughal culture reached its zenith during the reign of Akbar’s grandson Shahjehan, a great builder and patron of the arts. Shahjehan moved his capital to Delhi and built the incomparable Taj Mahal at Agra. Aurangzeb, the last major Mughal, extended his empire over all but the southern tip of India, though he was constantly harried by Rajput and Maratha clans.
The power that came closest to imperial pretensions was that of the Marathas. Starting from scratch, the non-Brahmin castes in the Maharashtra region had been organized into a fighting force by their legendary leader, Shivaji. By the third quarter of the 18th century, the Marathas had under their direct administration or indirect subjection enough Indian Territory to justify use of the term “the Maratha Empire”, though it never came near the dimensions of the Mughal Empire. The Marathas also never sought to formally substitute themselves for the Mughals; they often kept the emperor under their thumb but paid him formal obeisance. Soon, however, they were to fall to India’s final imperial power, the British.
Coming of the Europeans
The next arrival of overwhelming political importance was that of the Europeans. The great seafarers of northwest Europe, the British, French, Dutch and Portuguese, arrived early in the seventeenth century and established trading outposts along the coasts. Early in the 16th Century, the Portuguese had already established their colony in Goa; but their territorial and commercial hold in India remained rather limited.
The Years of ‘The Raj’
The newcomers soon developed rivalries among themselves and allied with local rulers to consolidate their positions against each other militarily. In time they developed territorial and political ambitions of their own and manipulated local rivalries and enmities to their own advantage. The ultimate victors were the British, who established political supremacy over eastern India after the Battle of Plassey in 1757. They gradually extended their rule over the entire subcontinent, either by direct annexation, or by exercising suzerainty over local rajas and nawabs.
Unlike all former rulers, the British did not settle in India to form a new local empire. The English East India Company continued its commercial activities and India became ‘the Jewel in the Crown’ of the British Empire, giving an enormous boost to the nascent Industrial Revolution by providing cheap raw materials, capital and a large captive market for British industry. In certain areas farmers were forced to switch from subsistence farming to commercial crops such as indigo, jute, coffee and tea. This resulted in several famines of unprecedented scale.
In the first half of the 19th century, the British extended their hold over many Indian territories. A large part of the subcontinent was brought under the Company’s direct administration. By 1857, “the British empire in India had become the British empire of India.” The means employed to achieve this were unrestrained and no scruple was allowed to interfere with the imperial ambition.
A century of accumulated grievances erupted in the Indian mutiny of sepoys in the British army, in 1857. The uprising, however, was eventually brutally suppressed. The rebellion also saw the end of the East India Company’s rule in India. An Act of British Parliament transferred power to the British Crown in 1858. The Crown’s viceroy in India was to be the chief executive.
The Freedom Struggle
The British Empire contained within itself the seeds of its own destruction. The British constructed a vast railway network across the entire land in order to facilitate the transport of raw materials to the ports for export. This gave intangible form to the idea of Indian unity by physically bringing all the peoples of the subcontinent within easy reach of each other.
Since it was impossible for a small handful of foreigners to administer such a vast country, they set out to create a local elite to help them in this task; to this end they set up a system of education that familiarized the local intelligentsia with the intellectual and social values of the West. Ideas of democracy, individual freedom and equality were the antithesis of the empire and led to the genesis of the freedom movement among thinkers like Raja Rammohan Roy, Bankim Chandra and Vidyasagar. With the failure of the 1857 mutiny, the leadership of the freedom movement passed into the hands of this class and crystallized in the formation of the Indian National Congress in 1885. At the turn of the century, the freedom movement reached out to the common unlettered man through the launching of the Swadeshi movement by leaders such as Bal Gangadhar Tilak and Aurobindo Ghose. But the full mobilization of the masses into an invincible force only occurred with the appearance on the scene of one of the most remarkable and charismatic leaders of the twentieth century, perhaps in history.
Mohandas Karamchand Gandhi was a British-trained lawyer of Indian origin from South Africa. He had won his political spurs organizing the Indian community there against the vicious system of apartheid. During this struggle, he had developed the novel technique of non-violent agitation which he called 'satyagraha', loosely translated as 'truth force'.
Under his leadership, the Congress launched a series of mass movements – the Non-Cooperation Movement of 1920–1922 and the Civil Disobedience Movement of 1930. The latter was triggered by the famous Salt March, when Gandhi captured the imagination of the nation by leading a band of followers from his ashram at Sabarmati on a 200-mile trek to the remote village of Dandi on the west coast, there to prepare salt in symbolic violation of British law. In August 1942, the Quit India movement was launched. It became evident that the British could maintain the empire only at enormous cost. At the end of the Second World War, the British initiated a number of constitutional moves to effect the transfer of power to the sovereign State of India. For the first and perhaps the only time in history, the power of a mighty global empire ‘on which the sun never set’ had been challenged and overcome by the moral might of a people armed only with ideals and courage.
India achieved independence on August 15, 1947. The progress and triumph of the Indian Freedom movement was one of the most significant historical processes of the twentieth century. Its repercussions extended far beyond its immediate political consequences. Within the country, it initiated the reordering of political, social and economic power. In the international context, it sounded the death knell of British Imperialism, and changed the political face of the globe.
The New State
Throughout history, India had absorbed, and modified to suit its needs, the best from all the civilisations with which it had come into contact. India chose to remain within the British Commonwealth of Nations. It also adopted the British system of parliamentary democracy, and retained the judicial, administrative, defense and educational structures and institutions set up by the British. India is today the largest and most populous democracy on earth, a pluralistic society of over 1 billion people. It is a country of contrasts. On the one hand, it is the twelfth largest industrial power in the world; on the other hand, it is the fifteenth poorest nation in per capita income. Urban and metropolitan India presents a picture of an affluent minority with hedonistic lifestyles coexisting with the grinding poverty of rural India.
India's population just crossed the billion milestone, more than three times that of the United States and about three-fourths of China. India's population is growing at a rate of 2 percent annually, whereas in China the growth rate is 1.2 percent annually. India is also more densely populated, with an average of 547 persons per square mile.
According to the 1990 census, about 25 percent of the population in India lived in urban areas (about 2,500 cities and towns); the remaining 75 percent lived in more than 500,000 villages. The drift to the urban areas is obviously revolutionary in its effects on Indian life. Many urban dwellers maintain their ties with village life. They take back to the villages new ideas of what is possible and desirable, and the villages themselves thereby change. The literacy rate lags in rural areas and among women, but for all of India it has risen from 16.6 percent in 1951 to 43.5 percent in 1990.
The prevailing impression of the larger Indian city today is one of overcrowding of immense and diverse populations, but at the same time of a teeming and vigorous life. Traffic is often chaotic, with animal transport (bullock carts, horse-drawn carriages and even camels in dry areas) mingling with modern vehicles. In dock areas there is still a good deal of human labor, and this, together with the frequent use of two or more persons for an apparently simple task, can be regarded as a form of disguised relief for India's underemployed population.
India is the birthplace of Hinduism, Buddhism, Jainism and Sikhism. As a secular state, however, India has no official religion and religious toleration is guaranteed under the constitution. Hindus constitute about 83 percent of the population, Muslims, 11 percent, Christians 3 percent, Sikhs 2 percent and Buddhists and Jains less than 1 percent each.
The caste system, a set of social and occupational classes into which individuals are born, is an important facet of Hinduism and thus a dominant feature of Indian life. Since independence, the government has been attempting to eliminate castes, but caste consciousness remains important in Indian politics, despite the fact that caste discrimination is unconstitutional. 'Harijans', the lowest caste (traditionally the untouchables), who constitute 15 percent of the population, and tribals, who constitute 7 percent, are given special privileges in the form of reservations for education and jobs.
Education is a concurrent responsibility of the national and state governments, with the national government laying down policy directions and the states implementing them. The system of education comprises primary, secondary (with vocational and technical courses) and collegiate institutions. Education is free and compulsory through age fourteen. Literacy has risen from 17 percent in 1950 to 48 percent in 1990; in that year, 52 percent of all men and 31 percent of all women were literate. Literacy is generally higher in urban areas.
About 200 different languages are spoken in India, and an appreciation of the linguistic divisions provides a key to a better understanding of the nation. Four principal language groups are recognized, of which the Aryan and Dravidian are the most important. Hindi (belonging to the Aryan linguistic group) is the official language of the country. English is also widely used in government and business. In addition, fourteen other languages have received official recognition in the constitution: Assamese, Bengali, Gujarati, Kashmiri, Marathi, Oriya, Punjabi, Sindhi and Urdu, belonging to the Aryan group; Kannada, Malayalam, Tamil and Telugu, belonging to the Dravidian group; and Sanskrit, which provides the root for many of the Indian languages but is no longer a spoken language. Many other languages are spoken by smaller groups; they are either regional variations or dialects.
A nation's material life reflects its economic advancement and the standard of living of its people. One indicator of material life is the overall consumption pattern. Currently, India's families spend over 60 percent of their income on food, with expenditures on health care being as little as 2.5 percent and household furnishings accounting for only 4.2 percent. However, these aggregate figures fail to reveal an important fact of life in India. While a large part of the population lives a mediocre life, a substantial proportion maintains a decent standard of living, and the number of people in the latter group is rising, which potentially makes India a promising market for a variety of goods and services. India's large population, physical terrain, climate and culture are all conducive to the development of a consumption society.
Government and Politics
India follows a parliamentary system of government both at the center and in the states. At the center, there are two houses of parliament: the 'Lok Sabha' (house of the people) and the 'Rajya Sabha' (council of states). The head of the country is the president, chosen by an electoral college of both houses of parliament and the state legislatures. The president calls the leader of the majority party in the Lok Sabha to form the government. The leader of the majority party becomes the Prime Minister and runs the government with the help of a cabinet of ministers appointed by the president on the advice of the Prime Minister. The constitution vests all executive power in the President, but the real relationship between the President and the council of ministers headed by the Prime Minister is analogous to that between the British Monarch, who exercises ceremonial powers, and the British cabinet, which actually governs. The country is divided into twenty-five States and seven Union Territories.
Despite India's enormous problems, it has the distinction of being a true democracy. Since independence, twelve general elections have been held, all on time and all as free as in North America and Western Europe, but with the distinction of larger voter participation despite still-low literacy and insufficient transportation. India has proved that democratic stability need not precede economic prosperity and that universal literacy is not a precondition for open and fair voting.
Many critics view India's political future as bleak. They point to the continued problem of dealing with the Sikh extremists in Punjab and the political turbulence in Kashmir as symbols of a nation in disarray. India sometimes seems fragile, but its strength lies in a large and apolitical army, a ponderous bureaucracy and a powerful commitment to political freedom at the grassroots level. India is a multi-ethnic nation with a population of over 1 billion people representing a multitude of racial, religious and ideological types and subtypes. It is beset by such problems as widespread poverty and communal disharmony. Yet it is the world's largest democracy, where ancient civilization coexists with modern technology.
The Legal System
The main sources of law in India are the constitution, statutes (legislation), customary law and case law. The statutes are enacted by the parliament, the state legislatures and union territory legislatures. There is also a vast body of subordinate legislation, which takes the form of rules, regulations and by-laws that are made by the central and state governments and the local authorities like the municipal corporations and municipalities.
In addition, local customs and conventions that are not against statute or morality or otherwise undesirable are, to a limited extent, also recognized and taken into account by the courts while they administer justice in certain cases. Also, people of different religions and traditions are governed by different sets of personal law with respect to matters relating to family affairs.
A single integrated system of courts administers both the central and state laws. The Supreme Court of India, located in New Delhi, is the highest body in the entire judicial system. Each state or group of states has a high court, under which there is a hierarchy of subordinate courts.
The president appoints the chief justice and the other judges of the Supreme Court. The Supreme Court's original jurisdiction extends to the enforcement of fundamental rights given by the constitution and to any dispute among states and the government. It has advisory jurisdiction in matters referred by the President of India. Its decisions are binding on all courts within the country.
While the judicial process is considered fair, a large backlog of cases to be heard and frequent adjournments have meant long delays before a case can be closed. Sometimes, matters of priority and public interest are dealt with expeditiously, but for the most part the judicial process is a lengthy one. It is for this reason that companies are increasingly seeking to solve their disputes through the process of arbitration. The arbitration laws were amended and updated in 1996 so that they could more closely conform to international practice. The Indian Council of Arbitration has recently been set up and currently has around 800 members. It has entered into arbitration service agreements with major international arbitral organizations in the United States and Europe.
India has always placed a high value on economic growth for creating a prosperous society. However, its economic policy has been characterized by the pursuit of multiple and sometimes contradictory objectives. These objectives are embedded in two values: self-reliance and social equity. Their pursuit led to the creation of perhaps the most regulated economy in the noncommunist world. The general guidelines of India's economic strategy are enunciated by the National Planning Commission, whose five-year plans establish development priorities, production goals and guidelines for allocating investments. A large public sector was created to ensure that strategic sectors of the economy remain responsive to state objectives. A comprehensive system of licensing was established to regulate industrial investment and production capacity. Trade policy became dominated by pervasive quantitative controls and some of the world's highest tariffs, and foreign investment and technology transfer have been closely regulated to safeguard the country's self-reliance.
Recent Economic Reforms
Reforms of India's economic policy can be traced back to the mid-1970s. While no political leader has been more closely identified with reform than the late Rajiv Gandhi, most of his reforms reflected an evolution in the thinking of India's policy-making community that began long before his rise to power. Many analysts trace the improvement in India's economic performance since the mid-1970s to the impact of previous reform initiatives. Rajiv Gandhi's identification with reform is in part explained by the fact that his reforms went further and were more systematic than those of his predecessors. He is also distinguished by his fascination with high technology. Rajiv Gandhi enunciated a sweeping rationale for reform, asserting that India had reached a watershed. Deploring the country's high-cost industry, with its technological obsolescence and inadequate attention to quality, he declared that India must address its shortcomings through greater efficiency, more competition and the absorption of new technology.
Greater Priority for Infrastructure
Since the late 1960s, infrastructural bottlenecks have acted as major constraints on development. Energy demand outpaced supply, leading to crippling power shortages. India's railroads were unable to meet the needs of freight transport. Communications were antiquated and highly inefficient. In recent years, increased emphasis has been placed on investment in infrastructure.
Relaxing Industrial Regulation
Economic reforms initiated under Rajiv Gandhi and further strengthened by the recent Bharatiya Janata Party government have brought significant relaxation in industrial regulation. While the measures taken have curbed the intrusiveness of the regulatory regime, they reflect a continuing belief in the need for state intervention to guide the economy. The new measures are intended as much to promote structural changes in India's industrial base by encouraging the development of backward areas, high-tech industries and economies of scale as to alleviate inefficiencies resulting from regulation.
Promoting the Development of Capital Markets
Policy reforms creating new incentives for equity and debenture issues have helped to make India's capital market an increasingly substantial source of investment finance. From 1980 to 1988, market capitalization more than tripled, from $7.5 billion to $23.8 billion. The value of equity traded increased from $2.8 billion to $12.2 billion.
Measures to Improve Technological Capabilities
Concern for improving India's technological capabilities preceded Rajiv Gandhi's rise to power. In 1984, a government White Paper on Technology Policy and the Report of the Committee on Trade Policies recommended measures to increase imports of modern technologies and to provide greater support for indigenous research and development. Rajiv Gandhi's government stressed the importance of India's technological modernization. In pursuit of this goal, he liberalized imports of capital goods, increased funding for research and development, reformed public sector research institutions and relaxed restrictions on foreign collaboration.
Recent reforms of India's trade policies have attempted to remove the disadvantages and disincentives that exporters suffered as a result of the Indian regulatory regime. Measures to curtail quantitative controls and reduce tariffs on the import of capital and intermediate goods have been an important element of this strategy. These policies have been designed to benefit an array of 'Thrust Industries' that the government has selected for export promotion. India's trade reform is predicated on a calculated risk that liberalization of imports in the short run will reduce trade deficits in the long run.
India's economic reforms have important marketing implications. First, deregulation should encourage competition in the market, providing alternative choices of products and services to consumers. Second, the emphasis on technology should raise the quality of goods available. Finally, improvements in the infrastructure should encourage the development of new marketing institutions and enhance the level of services offered.
Increased Independence of States
The economy is also becoming more federalist in nature, as states compete among themselves for much-needed foreign investment, especially in infrastructure. Economic reforms have meant that states have been left to fend for themselves and rely on market forces to attract foreign investment. However, initial fears that the less developed states would lose out in the race to attract foreign investment or multilateral lending have proved unfounded. States that have a relatively less-developed infrastructure or other such disadvantages have successfully wooed private investment through incentive schemes, professional marketing strategies, setting up a presence in the capital and state government visits to potential investor countries.
Software and Hardware Boom
The country is also seeing a computer software boom. India, which was a late entrant in the field, has become one of the largest emerging markets, with its domestic market growing at a compound rate of 45 percent. Its computer software industry has grown at a compound rate of 46 percent to reach the $1.2 billion mark in 1995. Exports, the mainstay of the software industry, grew by more than 38 percent. The market for computer hardware too has been growing at an impressive rate of 30 percent. The major demand for computer systems comes from private firms, with increasing demand from small office and home office end users.
Future of India's Economic Reforms
The change in government has not altered the course of Indian economic reforms. While inflation is still an important electoral issue, the relaxation of industrial regulation did not stir much controversy. Under India's current Prime Minister, trends toward the liberalization of domestic industrial policy and the promotion of domestic capital markets and exports are continuing. As a matter of fact, the new government appears to be adopting an even more liberal stand. It wants to integrate India into the world economy through large-scale liberalization by freeing foreign investment conditions, cutting down protection for Indian industry and streamlining bureaucratic procedures.
The role of foreign investment in the Indian economy is increasing. Far from being shunned as it was in the past, foreign investment is now recognized to be vital to the development of core sectors of the economy. Foreign direct investment increased dramatically from $150 million in 199- to $2.1 billion in 1996, and according to estimates, 83 percent of this investment went into core sectors of the economy. Total net foreign portfolio investment in 1996 was $2 billion, with $652 million raised through Euro-Issues. Thus in 1996, total foreign investment was estimated at around $4.5 billion.
Foreign Businesses in India
There are about 250 foreign companies that maintain branch offices in India. In addition, there are 100 Indian subsidiaries of foreign companies in which they hold majority ownership; about half of these are from the United Kingdom. India also approves about 1,000 collaborations between foreign and Indian companies annually. Some of these collaborations are one-time technology transfer or licensing agreements, while others are joint-venture operations to manufacture the product or service, in which the foreign partner can hold no more than 40 percent equity.
The United States, followed by Germany and Italy, remains India's major foreign collaborator. Industry-wide, electrical equipment, industrial machinery and chemicals are the three leading industries in which India seeks foreign collaboration. The tempo of foreign collaborations increased significantly in the 1990s as a result of sweeping changes in foreign investment policies.
Trends and events taking shape guarantee better things for India in the new millennium. Gradual liberalization is encouraging investment and consumption leading to overall economic prosperity. There is an even more salient factor that may drive the country?s economic growth upward in the future, providing opportunities for international companies that position themselves intelligently. The general turn away from statism and import substitution policies, and the embrace of economic orthodoxy and market-based policies should provide a powerful impetus for change. Politically, such bold new thinking is not easy to adopt overnight, but there are trends that indicate a gradual shift in attitude among Indian politicians, especially in the light of structural changes being pursued in Eastern Europe and the former Soviet Union.
Perceived Problems of Doing Business in India
India's colonial past, huge population and zealous concern for self-sufficiency put constraints on the extent and kind of business activities that foreign enterprises may pursue in India. Population exerts a strong pressure toward maintaining and enhancing high levels of employment, which encourages labor-intensive measures in the economy. The insistence on self-sufficiency does not allow total freedom of investment for foreign capital and makes protection for native industry obligatory; and the perception of regional power necessarily requires large defense expenditures for the foreseeable future. These factors should temper the comparison of India with the newly industrializing Pacific powers.
India in its economic endeavors decided to pursue a mixed economy, whereby both state-owned enterprises and private businesses have a role to play. Considering what India had in terms of infrastructure and basic industries, the government had to take the initiative to spur economic activity. At the same time, India pursued a course of nonalignment. Taken together, these facts led many multinational companies to conclude that India would eventually become a socialist country. Such a conclusion missed an important trait of Indian culture that attaches high importance to personal freedom. India adapted its policies to become a true democracy, a secular state comprised of people with different religious beliefs and significant regional differences, speaking 16 different languages. In a way, India seeks values similar to those held dear in the United States, but India must do so in its own way, given its environment and limitations such as a large population and limited resources. Despite the agreement in vision that India and the United States share, it is a pity that U.S. companies should perceive India differently when it comes to making business decisions.
Inadequacy of Infrastructure
India's infrastructure has been considered inadequate to sustain foreign investment. Of course, India's infrastructure is no match for the conditions in the industrialized countries, but despite its large population, India has a fairly good infrastructure to support foreign enterprises.
India has one of the best infrastructures among the developing countries in terms of transport, communications, commerce, banking, technical training institutes, trained manpower and supporting services. India's railroad network is the fourth largest in the world. All major cities are linked by air. There are five major ports, and more and more international airports are opening in smaller towns across the nation. In addition to regular postal service, voice and teleprinter communication through telephones, cables, fax services, cellular telephony and paging services are available.
Inadequate Property Protection
Intellectual property rights in India are considered insecure. Here again, there is a problem of perception. India has a highly developed body of law on this subject, providing substantial protection to foreign companies. Trademark law is a little different from that of the United States in that the first person to register a trademark, rather than the first person to use it, gets exclusive use of it. Therefore, expeditious and proper registration is the only effective way to protect trademark rights in India.
The duration of patent protection in India has also been a debatable issue. International companies allege that the duration is inadequate and short. The duration of 14 years that Indian law provides for is comparable to similar laws around Asia. Considering the pace at which technology is moving now, the period of 14 years may seem even more reasonable now than it did when the Indian Patents Act was passed.
In the field of investment regulations and practices, India has often been judged more by perceived situations than by established realities.
India provides unprecedented market opportunities. For international firms, the emerging Indian market holds both a threat and a promise. The threat is dramatically increased competition from both local companies and those from other nations. As for the promise, there is a growing market of more than 200 million consumers. In the last ten years, as India began its time-bending leap into the twenty-first century, millions of her people began an equally rapid transition from rural to urban, from agrarian to industrial, from feudal to contemporary society. With more of India's population traveling to the urban areas to shop every day, the demand for goods and services, from the most basic household commodities to sophisticated technical devices, is soaring. In coming years, as incomes continue to bolster the spending power of India's middle class, the opportunities for shrewd marketers will be unparalleled.
Vohra, R. (1997). The Making of India: A Historical Survey. Armonk, New York: M.E. Sharpe.
Jain, S.C. (1993). Market Evolution in Developing Countries: The Unfolding of the Indian Market. Binghamton, New York: International Business Press.
Thakur, R. (1994). The Politics and Economics of India's Foreign Policy. New York, New York: St. Martin's Press.
Desai, R. (1999). Indian Business Culture. Woburn, Massachusetts: Butterworth-Heinemann.
An overview of India as a growing market in the global business arena. Bibliography included in report.
Department of Materials Science, Faculty of Engineering, Tarbiat Modares University, Tehran, Iran
In the present study, we used five green plants for the microwave-assisted synthesis of alumina nanoparticles from aluminum nitrate. Structural characterization was carried out using X-ray diffraction, which showed a semi-crystalline and possibly amorphous structure. Fourier transform infrared (FTIR) spectroscopy was used to identify the Al–O bond and the functional groups responsible for the synthesis of the nanoparticles. FTIR confirmed the existence of the Al–O band and of bio-functional groups originating from the plant extracts. The morphology and size of the nanoparticles were investigated using scanning electron microscopy, transmission electron microscopy and atomic force microscopy. The nanoparticles were observed to have a near-spherical shape. The average size of clusters of nanoparticles varied with the synthesis route from 60 nm to 300 nm. AFM images showed that individual nanoparticles were less than 10 nm.
Nanoparticles are synthesized by various methods, each of which provides a certain level of control over properties such as structure, morphology and purity. Synthesis methods can be categorized into three main groups: liquid-phase methods, gas-phase synthesis, and methods based on surface growth under vacuum conditions. Cunha et al. utilized a sol-gel method with natural organic matter to produce alumina particles of diameter 52 ± 1 nm. Toshio Itoh et al. prepared γ-Al2O3 by the polyol method using PVP; the reflux temperature and PVP molecular weight were optimized to control particle size. γ-Al2O3 with particle sizes of 142 nm to 1.0 μm was successfully synthesized, and α-Al2O3 was produced by subsequent annealing. Tahmasebpour et al. investigated a polyacrylamide sol–gel method to produce α-Al2O3 nanoparticles. They found that at low heating rates the phase transformation is delayed and, as a consequence, finer particles result. It was also revealed that particle size is independent of solution concentration. Zaki et al. [5, 6] used the Pechini method, a modified sol-gel method, to develop α-Al2O3 nanoparticles. The main disadvantage of these conventional methods is the long preparation or reaction time, which is overcome by newly developed methods such as microwave hydrothermal assisted synthesis. Microwave radiation can be used as a powerful heat source for the synthesis of nanoparticles from the liquid phase in a short time. Kiranmala Laishram et al. developed a combustion method for the synthesis of α-Al2O3 via microwave heating. In their experiment, aluminum nitrate and urea, prepared in a 1:2.5 ratio, were dissolved in deionized water to form a clear solution and subsequently heated at 900 W for 3-5 minutes. After evaporation of the water, the urea acts as a fuel and the subsequent combustion synthesizes the alumina nanoparticles.
Nanoparticles of 18-20 nm were produced, which is comparable to peer practices such as low-temperature combustion synthesis. No calcination was needed for the γ-to-α phase transformation. Leyla Sharifi et al. investigated the sol-gel microwave synthesis of alumina from aluminum nitrate. They prepared a gel from a boehmite sol, and the dried gel was calcined in a microwave at 900 W. They found that during short heating times γ-Al2O3 nucleates first, and that by increasing the irradiation time more α-Al2O3 is produced; after 10 minutes of irradiation, α-Al2O3 is the dominant phase. Sutradhar et al. used polyol components in plant extracts to synthesize alumina nanoparticles: particles of 50-200 nm were produced from tea and coffee, and particles of 200-400 nm from triphala. Elimination of the capping agent or stabilizer and the use of green routes lay the foundation for biological applications of the nanopowders. In further studies of the microwave-assisted synthesis of nanoparticles, Sahu et al. produced LaAlO3, Ragupathi et al. synthesized nickel aluminate with a plant extract, and BaTiO3 was prepared by Katsuki et al. Nanocomposites of nHAp (nano-hydroxyapatite)–alumina and alumina–zirconia were produced in research by Radha et al. and Benavente et al., respectively. In the present study, we used microwave irradiation as a powerful heating source and green routes to maximize the purity and minimize chemical impurities in the product. Microwave-assisted synthesis using green extracts has been studied previously with plant extracts such as coffee, tea and triphala, sesame, Biophytum sensitivum, Aerva lanata, bamboo hemicelluloses and glucose, and Euphorbia nivulia. In the current research, several plant extracts are used as reducing agents, without a stabilizer, and the effect of plant type on the size and morphology of the nanoparticles is investigated.
MATERIALS AND METHODS
Extract preparation Syzygium aromaticum [19], Origanum vulgare [20], Origanum majorana [21], Theobroma cacao [22] and Cichorium intybus [23] were selected based on preliminary studies. 20 g of each plant was mixed with 100 ml of deionized water and boiled for 2 h. After cooling at ambient temperature, the extracts were centrifuged for 10 minutes and washed subsequently. The plant extracts were stored at 20-25 °C. Alumina nanoparticles synthesized with the mentioned plant extracts are labeled according to Table 1.
Table 1. Labeling of nanoparticles
Synthesis procedure Aluminum nitrate (>98%, Daejung, South Korea) and plant extract were mixed in a 1:4 weight ratio and then stirred for 10 minutes at room temperature. An 850 W LG microwave (model MS1040SM/00v, 2.45 GHz) was used as the heat source, and the solution was irradiated for 10 minutes at 610 W. The irradiated solutions were centrifuged and washed with ethanol and deionized water for 10 minutes. To reduce agglomeration, the powders were dispersed in deionized water and treated with ultrasonic vibration at 150 W for 5 minutes.
Characterization X-ray diffraction (XRD) measurements were recorded using an “XPERT” diffractometer with a Co Kα tube operating at 40 kV/40 mA and a proportional Xe-filled detector. The data were collected in the range of 20-90° with a step size of 0.040° and a counting time of 0.8 s. The XRD patterns were evaluated using the Joint Committee on Powder Diffraction Standards (JCPDS) cards for phase determination and were analyzed with the “HighScore Plus” program. FTIR spectra were recorded using a “PerkinElmer Frontier FT-IR” to identify the extract bio-components involved in the synthesis of the nanoparticles and the Al-O structure. The morphology and size of the nanoparticles were investigated with scanning electron microscopy (SEM, Philips XL30) operating at 25 kV. In a complementary study, the finest powder was also examined via transmission electron microscopy (TEM, Zeiss EM10C, 80 kV). The morphology and shape of the nanoparticles were also studied with an ARA AFM (model 0101/A) in non-contact imaging mode.
RESULTS AND DISCUSSION
Fig. 1 shows SEM images of the synthesized nanoparticles. All synthesized nanoparticles appear to have a near-spherical form, and nanoparticle clusters were between 60 and 300 nm. To investigate the size and morphology of the nanoparticles more precisely, AFM and TEM analyses were applied. The TEM image of ALNP-1, as an example (Fig. 2), revealed that the nanoparticles are nearly spherical with a 3-5 nm diameter, which was verified by the measured size of the nanoparticles in the AFM image (Fig. 3). A summary of the measured sizes of all synthesized nanoparticles, from AFM analysis, is presented in Table 2. The nanoparticles are clearly much smaller than the dimensions estimated from the SEM images, which may be rooted in agglomeration of the nanoparticles.
Fig 1. SEM image of a) ALNP-1 b) ALNP-2 c) ALNP-3 d) ALNP-4 e) ALNP-5
Fig 2. TEM image of ALNP-1
Fig 3. AFM image of ALNP-1
Table 2. Height of alumina nanoparticles measured with AFM analysis
X-ray diffraction patterns of the synthesized nanoparticles are presented in Fig. 4. The XRD pattern of the nanoparticles synthesized with Syzygium aromaticum extract is slightly different from the others, indicating poor crystalline structure. According to Fig. 4, the XRD pattern of ALNP-1 has three characteristic peaks that may be related to the hexagonal corundum phase according to JCPDS card 96-900-9672. These peaks were detected at 2θ = 40.99°, 50.72° and 67.80°, corresponding to the (104), (113) and (214) planes, respectively. Broadened peaks are indicative of very small crystallite size. The crystallite size was calculated according to the Debye-Scherrer formula, d = 0.89λ/(β cos θ), where d is the crystallite size, 0.89 is Scherrer's constant, λ is the wavelength of the X-rays, θ is the Bragg diffraction angle and β is the full width at half maximum (FWHM) of the diffraction peak. Using the Debye-Scherrer formula, the crystallite size of the nanopowders was evaluated to be about 3-9 nm. The XRD pattern of ALNP-3 might be assigned to cubic Al2O3 (JCPDS card 01-075-0278). The XRD patterns of the other nanoparticles show no significant characteristic peaks (Fig. 4). This may originate from an amorphous phase or from ultra-small particles of less than 5 nm [27, 28]. Moreover, it should be noted that distinguishing amorphous from nanocrystalline structure also depends on the line resolution, which is determined by the wavelength of the X-ray radiation.
Fig 4. XRD pattern of Alumina nanoparticles
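As a sketch of the Debye-Scherrer estimate described above: the Co Kα wavelength (~0.1789 nm) and the 2.0° FWHM used here are assumed illustrative inputs, not values reported in this work.

```python
import math

# Debye-Scherrer estimate d = K*lambda / (beta * cos(theta)), as in the text.
K = 0.89                 # Scherrer constant used in the paper
WAVELENGTH_NM = 0.1789   # Co K-alpha wavelength (assumed for the Co tube)

def scherrer_size(two_theta_deg, fwhm_deg):
    """Crystallite size in nm from a peak position (2-theta) and its FWHM, both in degrees."""
    theta = math.radians(two_theta_deg / 2.0)   # Bragg angle in radians
    beta = math.radians(fwhm_deg)               # FWHM in radians
    return K * WAVELENGTH_NM / (beta * math.cos(theta))

# Illustrative 2.0-degree FWHM for the (104) peak at 2-theta = 40.99 degrees
# (the FWHM value is assumed, not taken from the reported pattern).
d = scherrer_size(40.99, 2.0)
print(f"crystallite size ~ {d:.1f} nm")
```

With these assumed inputs the estimate lands near 5 nm, inside the 3-9 nm range reported above; narrower peaks would give proportionally larger crystallites.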
FTIR analysis was done to investigate the Al-O bond and structural properties. Fig. 5a illustrates the FTIR spectrum of the ALNP-1 nanoparticles. Alumina peaks appear from 467 to 922 cm-1. The band at 467 cm-1 is assigned to the AlO6 bending mode and that at 580 cm-1 is ascribed to the asymmetric stretch of AlO6. The broad band at 638 cm-1 indicates the AlO6 structure, and the 759 cm-1 peak is assigned to AlO4 symmetric stretching [30, 31]. The 834 and 922 cm-1 bands are possibly related to complex AlO4 and AlO6 interactive vibrations. Other peaks are located between 1000 and 1750 cm-1 and are related to bio-functional groups originating from the extract. These groups are assumed to be flavonoids, tannins and terpenoids attached to the nanoparticles, which play an important role in the synthesis and stabilization of the nanoparticles [9, 15]. The 1073 cm-1 peak is related to the C-N stretching frequency. The peak located at 1200 cm-1 is possibly due to the stretching vibration of polyol. The 1376 cm-1 peak is due to the presence of geminal methyl groups [9, 19]. The 1590 cm-1 peak could be assigned to adsorbed water [34, 35]. The peak located at 1446 cm-1 is supposed to originate from the C-O-H in-plane bend of hydroxyl groups. The small band located at 1697 cm-1 is possibly related to the C=C aromatic ring. The FTIR spectrum of ALNP-2 is presented in Fig. 5b. Only one peak, located at 609 cm-1, is related to alumina nanoparticles; it is indicative of the formation of octahedral AlO6 only. The 1104 cm-1 peak is due to the C-O-C stretching frequency, the 1487 cm-1 band originates from methylene (CH2) bending [36, 37] and, finally, 1614 cm-1 suggests C=C aromatic bending. The characteristic band at 711 cm-1 in the FTIR spectrum of ALNP-3 represents the Al-O bond (see Fig. 5c). Origanum majorana extract contains tannins, flavonoids, phenolic compounds and triterpenes. The characteristic bands at 1110 and 1629 cm-1 may represent the C-O stretch and aromatic ring, respectively. These peaks confirm the presence of flavanones as adsorbed functional groups.
The FTIR spectrum of ALNP-4 is shown in Fig. 5d; it has a broad band at 619 cm-1 that is indicative of symmetric stretching of AlO6. The bands observed at 1133 and 1645 cm-1 may be assigned to C-H or C-O stretch and the C=C aromatic ring, respectively. Fig. 5e presents the FTIR spectrum of ALNP-5. The 464, 519 and 813 cm-1 peaks are related to alumina nanoparticles; it can be suggested that both AlO4 and AlO6 were synthesized. There are also broad and small bands from about 1000 to 1750 cm-1, which are indicative of bio-functional groups attached to the nanoparticles. The peak located at 1032 cm-1 may represent the C-N bond and the 1262 cm-1 peak is ascribed to the C-O bond. The 1368 and 1452 cm-1 peaks may represent methyl and hydroxyl groups, respectively. There is a small band at 1515 cm-1 related to possible adsorption of water onto the surface [34, 35] and, finally, the 1655 cm-1 broad band is indicative of the aromatic ring.
Fig 5. FTIR analysis of a) ALNP-1b) ALNP-2c) ALNP-3d) ALNP-4e) ALNP-5
CONCLUSION
Syzygium aromaticum, Origanum vulgare, Origanum majorana, Theobroma cacao and Cichorium intybus were used as green routes for the microwave-assisted synthesis of alumina nanoparticles. The XRD pattern of the particles synthesized with Syzygium aromaticum showed a semi-crystalline structure, while the others showed no significant peaks, which might be assigned to the nano dimensions of the particles or their amorphous structure. FTIR studies of the nanoparticles showed peaks in the range of 450-1000 cm-1, assigned to AlO4 and AlO6 bonds, and peaks in the range of 1000-1750 cm-1, assigned to the bio-functional groups responsible for particle synthesis. SEM analysis showed clusters of nanoparticles in the 60-300 nm range, while TEM and AFM analyses revealed that the individual nanoparticles are less than 10 nm in size.
CONFLICT OF INTEREST The authors declare that there is no conflict of interest regarding the publication of this manuscript.
1. Kruis FE, Fissan H, Peled A. Synthesis of nanoparticles in the gas phase for electronic, optical and magnetic applications—a review. J Aerosol Sci. 1998;29(5):511-35.
2. da Costa Cunha G, Romão LPC, Macedo ZS. Production of alpha-alumina nanoparticles using aquatic humic substances. Powder Technol. 2014;254(0):344-51.
3. Itoh T, Uchida T, Matsubara I, Izu N, Shin W, Miyazaki H, et al. Preparation of γ-alumina large grain particles with large specific surface area via polyol synthesis. Ceram Int. 2015;41(3, Part A):3631-8.
4. Tahmasebpour M, Babaluo AA, Shafiei S, Pipelzadeh E. Studies on the synthesis of α-Al2O3 nanopowders by the polyacrylamide gel method. Powder Technol. 2009;191(1–2):91-7.
5. Zaki T, Kabel KI, Hassan H. Preparation of high pure α-Al2O3 nanoparticles at low temperatures using Pechini method. Ceram Int. 2012;38(3):2021-6.
6. Zaki T, Kabel KI, Hassan H. Using modified Pechini method to synthesize α-Al2O3 nanoparticles of high surface area. Ceram Int. 2012;38(6):4861-6.
7. Laishram K, Mann R, Malhan N. A novel microwave combustion approach for single step synthesis of α-Al2O3 nanopowders. Ceram Int. 2012;38(2):1703-6.
8. Sharifi L, Beyhaghi M, Ebadzadeh T, Ghasemi E. Microwave-assisted sol–gel synthesis of alpha alumina nanopowder and study of the rheological behavior. Ceram Int. 2013;39(2):1227-32.
9. Sutradhar P, Debnath N, Saha M. Microwave-assisted rapid synthesis of alumina nanoparticles using tea, coffee and triphala extracts. Adv Manuf. 2013;1(4):357-61.
10. Sahu PK, Behera SK, Pratihar SK, Bhattacharyya S. Low temperature synthesis of microwave dielectric LaAlO3 nanoparticles: effect of chloride on phase evolution and morphology. Ceram Int. 2004;30(7):1231-5.
11. Ragupathi C, Vijaya JJ, Kennedy LJ. Preparation, characterization and catalytic properties of nickel aluminate nanoparticles: A comparison between conventional and microwave method. J Saudi Chem Soc. Available from: http://dx.doi.org/10.1016/j.jscs.2014.01.006 (Accessed 11 February 2014).
12. Katsuki H, Furuta S, Komarneni S. Semi-continuous and fast synthesis of nanophase cubic BaTiO3 using a single-mode home-built microwave reactor. Mater Lett. 2012;83(0):8-10.
13. Radha G, Balakumar S, Venkatesan B, Vellaichamy E. Evaluation of hemocompatibility and in vitro immersion on microwave-assisted hydroxyapatite–alumina nanocomposites. Mater Sci Eng, C. 2015;50(0):143-50.
14. Benavente R, Salvador MD, Penaranda-Foix FL, Pallone E, Borrell A. Mechanical properties and microstructural evolution of alumina–zirconia nanocomposites by microwave sintering. Ceram Int. 2014;40(7, Part B):11291-7.
15. Joseph S, Mathew B. Microwave-assisted green synthesis of silver nanoparticles and the study on catalytic activity in the degradation of dyes. J Mol Liq. 2015;204(0):184-91.
16. Joseph S, Mathew B. Microwave assisted facile green synthesis of silver and gold nanocatalysts using the leaf extract of Aerva lanata. Spectrochim Acta, Part A. 2015;136, Part C(0):1371-9.
17. Peng H, Yang A, Xiong J. Green, microwave-assisted synthesis of silver nanoparticles using bamboo hemicelluloses and glucose in an aqueous medium. Carbohydr Polym. 2013;91(1):348-55.
18. Valodkar M, Nagar PS, Jadeja RN, Thounaojam MC, Devkar RV, Thakore S. Euphorbiaceae latex induced green synthesis of non-cytotoxic metallic nanoparticle solutions: A rational approach to antimicrobial applications. Colloids Surf, A:. 2011;384(1–3):337-44.
19. Raghunandan D, Bedre MD, Basavaraja S, Sawle B, Manjunath SY, Venkataraman A. Rapid biosynthesis of irregular shaped gold nanoparticles from macerated aqueous extracellular dried clove buds (Syzygium aromaticum) solution. Colloids Surf, B:. 2010;79(1):235-40.
20. Sankar R, Karthik A, Prabu A, Karthik S, Shivashangari KS, Ravikumar V. Origanum vulgare mediated biosynthesis of silver nanoparticles for its antibacterial and anticancer activity. Colloids Surf, B:. 2013;108(0):80-4.
21. Vera RR, Chane-Ming J. Chemical composition of the essential oil of marjoram (Origanum majorana L.) from Reunion Island. Food Chem. 1999;66(2):143-5.
22. Nasrollahzadeh M, Sajadi SM, Rostami-Vartooni A, Bagherzadeh M. Green synthesis of Pd/CuO nanoparticles by Theobroma cacao L. seeds extract and their catalytic performance for the reduction of 4-nitrophenol and phosphine-free Heck coupling reaction under aerobic conditions. J Colloid Interface Sci. 2015;448(0):106-13.
23. Bharathi K, Thirumurugan V, Kavitha M, Muruganadam G, Ravichandran K, Seturaman M. A comparative study on the green biosynthesis silver nano particles using dried leaves of boerhaavia diffusa l. And cichorium intybus l. With reference to their antimicrobial potential. World J Pharmaceut Sci. 2014;3(5):1415-27.
24. Li X, Guo X, Liu T, Zheng X, Bai J. Shape-controlled synthesis of Fe nanostructures and their enhanced microwave absorption properties at L-band. Mater Res Bull. 2014;59(0):137-41.
25. La Porta FA, Ferrer MM, De Santana YV, Raubach CW, Longo VM, Sambrano JR, et al. Synthesis of wurtzite ZnS nanoparticles using the microwave assisted solvothermal method. J Alloys Compd. 2013;556:153-9.
26. Cullity BD, Stock SR. Elements of X-ray Diffraction. 3rd ed. Pearson; 2001.
27. Liao X, Zhu J, Zhong W, Chen H-Y. Synthesis of amorphous Fe2O3 nanoparticles by microwave irradiation. Mater Lett. 2001;50(5–6):341-6.
28. Van Hoang V, Ganguli D. Amorphous nanoparticles—Experiments and computer simulations. Phys Rep. 2012;518(3):81-140.
29. Machala L, Zboril R, Gedanken A. Amorphous Iron (III) Oxide A Review. J Phys Chem B. 2007;111(16):4003-18.
30. Sivadasan A, Selvam IP, Potty SN. Microwave assisted hydrolysis of aluminium metal and preparation of high surface area γ Al2O3 powder. Bull Mater Sci. 2010;33(6):737-40.
31. Boumaza A, Favaro L, Lédion J, Sattonnay G, Brubach JB, Berthet P, et al. Transition alumina phases induced by heat treatment of boehmite: An X-ray diffraction and infrared spectroscopy study. J Solid State Chem. 2009;182(5):1171-6.
32. Saniger J. Al-O infrared vibrational frequencies of γ-alumina. Mater Lett. 1995;22(1–2):109-13.
33. Aromal SA, Philip D. Benincasa hispida seed mediated green synthesis of gold nanoparticles and its optical nonlinearity. Physica E. 2012;44(7–8):1329-34.
34. Sankar KV, Senthilkumar ST, Berchmans LJ, Sanjeeviraja C, Selvan RK. Effect of reaction time on the synthesis and electrochemical properties of Mn3O4 nanoparticles by microwave assisted reflux method. Appl Surf Sci. 2012;259(0):624-30.
35. Goharshadi EK, Hadadian M. Effect of calcination temperature on structural, vibrational, optical, and rheological properties of zirconia nanoparticles. Ceram Int. 2012;38(3):1771-7.
36. Hosseini SF, Zandi M, Rezaei M, Farahmandghavi F. Two-step method for encapsulation of oregano essential oil in chitosan nanoparticles: Preparation, characterization and in vitro release study. Carbohydr Polym. 2013;95(1):50-6.
37. Coates J. Interpretation of infrared spectra, a practical approach. Enc Anal Chem. 2000. Available from: 10.1002/9780470027318.a5606. (Accessed 15 SEP 2006)
38. Nowak K, Ogonowski J. Olejek majerankowy, jego charakterystyka i zastosowanie. Chemik. 2010;64(7-8):539-48.
39. Rinaldi R, Schuchardt U. On the paradox of transition metal-free alumina-catalyzed epoxidation with aqueous hydrogen peroxide. J Catal. 2005;236(2):335-45. | <urn:uuid:e142cb3d-d583-48f2-9a6a-cea8d2789349> | CC-MAIN-2017-17 | http://jns.kashanu.ac.ir/article_34331.html | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917119361.6/warc/CC-MAIN-20170423031159-00189-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.862284 | 5,547 | 2.625 | 3 |
Psoriasis is a non-contagious pathological condition of the skin which involves an inflammatory response (Ryan, Sheila, 2010). The disease has a great impact on the social relationships of the patient, since the skin is involved. The skin is the largest organ in the human body, covering almost 1.6-1.9 m² with a thickness of about 0.05-0.3 cm. Protection from UV radiation and microorganisms, temperature regulation, production of hormones and chemicals (e.g. vitamin D) and elimination of salts and water are among its important functions (Tortora and Grabowski, 2003). Apart from the functions mentioned above, the skin plays a crucial role in the exfoliation process. To understand psoriasis it is essential to know the anatomy of the skin and the process of the cell cycle. The skin consists of two layers, the epidermis and the dermis, which meet at the dermal-epidermal junction. The epidermis, 1-3 mm thick, consists of stratified squamous epithelium with five different layers. They are,
Stratum corneum: the outermost layer, consisting of dead squamous cells which are shed and replaced continually.
Stratum lucidum: cells are closely packed, clear and without nuclei; they contain a gel-like material called eleidin which is transformed into keratin.
Stratum granulosum: keratinisation begins in this layer with the help of keratohyalin.
Stratum spinosum: consists of 8-10 layers of irregularly shaped cells equipped with RNA for protein synthesis.
Stratum basale: a single layer of columnar cells. Mitosis of the epithelial cells commences here. The stratum spinosum together with the stratum basale is called the stratum germinativum.
New cells are produced in the stratum basale and move towards the stratum corneum, taking about 28-35 days to reach it; this is known as the regeneration or turnover time. In psoriatic skin, however, the actively dividing keratinocytes migrate from the stratum basale to the stratum corneum within a much shorter span of about 4-7 days (Tortora and Grabowski, 2003). As a result, immature keratinocytes reach the surface of the skin and keratin accumulates in the stratum corneum. This keratin is not desquamated, resulting in the formation of the scales of psoriasis (Ryan, Sheila, 2010). There is also an increased supply of blood to the dermis, which ultimately causes erythematous plaques and an inflammatory response.
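The turnover figures quoted above imply roughly a four- to nine-fold acceleration of epidermal cell turnover in psoriasis. A minimal arithmetic sketch (the day ranges come from the text; the script is purely illustrative):

```python
# Epidermal turnover times quoted above, in days.
normal_turnover = (28, 35)     # healthy skin
psoriatic_turnover = (4, 7)    # psoriatic skin

# Slowest plausible acceleration: shortest normal time / longest psoriatic time.
min_fold = normal_turnover[0] / psoriatic_turnover[1]
# Fastest plausible acceleration: longest normal time / shortest psoriatic time.
max_fold = normal_turnover[1] / psoriatic_turnover[0]

print(f"turnover accelerated roughly {min_fold:.0f}x to {max_fold:.0f}x")
```

This order-of-magnitude speed-up is what leaves keratinocytes too little time to mature before reaching the surface.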
Aetiology and causes:
The precise reason for the occurrence of the disease is unknown. However, scientists have discovered some factors that contribute to the condition. One of these is genetics, or the familial history of the patient. The occurrence has been found to be higher among close relatives, and twin studies support this: the incidence is higher in monozygotic twins than in dizygotic twins. Studies also suggest the involvement of different genes (Ryan, Sheila, 2010).
Environment, the next factor, plays a crucial role in the development of the disease through so-called triggering factors. Some of them are as follows,
1. Trauma: There is an increased tendency for psoriasis to occur at trauma sites, known as the Koebner phenomenon.
2. Alcohol: Worsens the condition, particularly in males.
3. Smoking: Increases the severity of the disease.
4. Drugs: The condition worsens with certain drugs such as β-blockers, anti-malarials, lithium, etc.
5. Stress: Emotional stress has a synergistic effect on psoriasis.
6. Infections: β-haemolytic streptococcal pharyngitis and HIV worsen the disease state.
The skin appears flaky, with silvery scales at the surface, predominantly on the elbows, knees and scalp (Tortora and Grabowski, 2003). There are different clinical manifestations of psoriasis. They are,
Plaque psoriasis: It is the most frequently occurring form of psoriasis. It can be found on any part of the body, with erythematous plaques and silvery scale.
Guttate psoriasis: It looks like pink (rain) drops on the body, hence the name guttate psoriasis, with a clinical manifestation of red papules. Streptococcal infection is a triggering factor for this form.
Scalp psoriasis: It can be present alone or along with other forms of psoriasis. The scales are silvery in colour and powdery in nature. Hair loss may also occur.
Flexural psoriasis: The plaques are smooth and shiny, usually present in the groin, axillae, navel region, etc.
Erythroderma: The majority of the body appears red due to red patches, with marked thickening and peeling of the skin.
Localized pustular psoriasis: Occurs on the palms and the soles of the feet. As the name indicates, there are pustules which gradually dry and form brown patches.
Generalised pustular psoriasis: groups of pustules are present with inflammation. This condition is associated with fever and pain. Systemic and topical steroids can aggravate the condition (Ryan, Sheila, 2010).
According to age of onset, psoriasis can be classified into Type I and Type II.
Type I: Manifests before 40 years of age and is associated with genetic factors.
Type II: Manifests after 40 years of age and is sporadic in nature (Mallon et al, 1998).
The occurrence of the disease shows geographical variation. Studies have been done by individual agencies and groups of researchers to identify the prevalence and distribution of the disease. Various criteria have been adopted for study, such as the age group of the patients, sex, type of psoriasis, age of onset, and the environmental factors that trigger the condition. Studies suggest that the onset of childhood psoriasis occurs before 20 years of age in 35% of patients, and that it is more common in females than in males (Kumar, Bhushan et al, 2004). A study done in the Middle East and Denmark agrees with this finding. In studies covering 5600 patients, 36% had a familial background.
Clinical studies by different dermatologists on the severity, morbidity and frequency of psoriasis, among patients aged 1-74, found that 5.8% of the total population in the USA were affected by psoriasis; in Australia the figure was 2.3%, in the UK a moderately high 15.8 ± 8.9%, and in Sweden 0.3% (Plunkett and Marks, 1998). Psoriasis is thought to be one of the main causes of morbidity in Western countries. Cardiac disease, hypertension and diabetes cause co-morbidity in psoriatic patients (Christophers, 2000).
Psychological depression is common in psoriatic patients. Symptoms such as itching are more frequent in psoriatic women. The other symptoms commonly involved are irritation, stinging or burning sensation, reduced sensitivity, pain and bleeding, and they vary with the clinical condition. One of the most common symptoms in psoriasis is pruritus, reported in 67-92% of patients (Sampogna et al, 2004).
Psoriasis is an autoimmune disease initiated by an activated T-cell immune system. Hyperproliferation of keratinocytes in the epidermis and lymphocytic activation in the dermis are considered the basis of psoriasis (Kormeili and Yamauchi, 2004). Even though the pathogenesis is not fully clear, the detection of T-lymphocytes in the early stage of disease and the response to therapies targeting the immune system suggest that activation of these cells is the driving factor responsible for the disease (Wojas-Pelc and Janusz, 2007).
Antigen-presenting cells, present in both the epidermis and the dermis, capture antigen when the skin is exposed to various endogenous and exogenous antigens. Activation and maturation of antigen-presenting cells induce the expression of counter-receptors on the cell surface which stimulate T cells. Activated antigen-presenting cells move to the lymph nodes and activate CD4+ or CD8+ T cells, which differentiate into Th1 and Tc1 phenotypes respectively under the influence of IFN-γ and interleukin-12 and generate type 1 cytokines such as IL-2 and TNF-α, associated with a higher level of cell-mediated immunity (Lee and Cooper, 2006). The CD8+ T-lymphocytes, of the cytotoxic type, migrate to and localize in the epidermis, whereas the CD4+ cells localize in the dermis (Kormeili and Yamauchi, 2004).
Increased numbers of T cells and dendritic cells are found in psoriatic lesions. In psoriasis these immune cells, together with keratinocytes, develop inflammation through exaggerated cytokine production. Psoriatic keratinocytes regularly produce a wide range of cytokines with different biological functions, such as IL-1, TNF-α, IL-6, IL-7, IL-8, IL-18, IL-23 and IL-20. TNF-α, thought to be the main pro-inflammatory cytokine, stimulates keratinocytes to produce other mediators involved in inflammation, such as reactive oxygen species and NO. Th1 cytokines are predominant in psoriasis, and IL-19, IL-20 and IL-22 induce keratinocyte hyperplasia (Wojas-Pelc and Janusz, 2007).
T-cell activation and there by cytokine production involves various pathways. CD4+ T-cell activated by Antigen Presenting Cell lead to CD40 ligand up-regulation on T-lymphocytes. CD40 ligand of T- lymphocytes ligate with antigen presenting cell CD40, generate CD80 and CD86 on T-lymphocytes. Moreover, CD2 of T- lymphocytes bind with LFA-3 which is present on Antigen Presenting Cell enhance the cytotoxic function of T -lymphocytes. Various cytokines are induced intern regulate further activation of T cell, production of cytokines and proliferation.Th1 cells brings cell mediated immunity by the release of Interleukin-2 and Interferon gamma. But the Th2 cells contribute to human response by the production of Interleukins IL-4, IL-5 and IL-10. Th1 is pro-inflammatory and Th2 is anti-inflammatory. INF-gamma may increment the Bcl-x levels and inhibits keratinocyte apoptosis. INF gamma stimulates macrophages and produce increased level of Tumor necrosis factor [TNF-α]. These cytokines are present at increased level in synovial fluid plaques of psoriatic patients (Kormeili and Yamauchi, 2004). Polymorphism of TNF alpha in promoter region mainly when adenine is replaced with guanine in position 308 and 238 leads to increased TNF alpha production (Ferreira et al, 2010).
Normal healthy skin is equipped with anti-oxidants such as glutathione peroxidase, keratinocyte catalase, ascorbic acid and superoxide dismutase to defend it against reactive oxygen species when it is exposed to various endogenous and exogenous pro-oxidant agents. A decreased level of anti-oxidants fails to provide this defence, and there is an elevated level of H2O2 in response to pro-oxidant agents such as sunlight, UVA radiation and free iron. Excess production of reactive oxygen species in psoriasis by keratinocytes as well as by activated inflammatory cells, especially neutrophils, combined with substandard functioning of the anti-oxidant system, leads to the production of oxygen free radicals that damage lipids, DNA and proteins. As reactive-oxygen-species-detoxifying enzymes, the haem oxygenases are expressed by normal keratinocytes and are responsible for protection against oxidative stress. Haem oxygenase produces biliverdin, bilirubin and carbon monoxide, which are known for their anti-oxidant, anti-inflammatory and cytoprotective properties. Over-expression of the haem oxygenase enzymes HO-1 and HO-2 has nevertheless been demonstrated in psoriasis, but insufficient ferritin synthesis to deactivate free iron enhances the production of reactive oxygen species and results in inflammation (Wojas-Pelc and Janusz, 2007).
The role of genetic factors in psoriasis is under debate. Predisposition to psoriasis in families was initially reported in 1801. Various twin studies show that the concordance rate is higher in identical twins than in fraternal twins, supporting the hereditary nature of psoriasis (Glaser et al, 2001). Human leukocyte antigens (HLA) have been postulated as genetic factors, in association with multiple class I and class II major histocompatibility complex (MHC) antigens. In type I psoriasis the Cw*0602 allele is present at a significantly increased rate (Mallon et al, 1998). Early-onset psoriasis usually develops at 16-22 years and is thought to have a strong genetic basis and hereditary association. Human leukocyte antigens of classes I and II such as HLA-B13, HLA-B17, HLA-B57, HLA-B39, HLA-Cw6 and HLA-DR7 are positively associated with psoriasis; HLA-Cw6 is implicated as the major risk factor. Chromosomal studies indicate the role of several loci, such as 17q, 4q, 6p21, 8q and 20p, in polygenic predisposition. Other susceptibility genes, IL12B and IL23R, have also been identified (Kormeili and Yamauchi, 2004).
As noted, psoriasis is an autoimmune disease manifested by dermal inflammation as well as excessive epidermal proliferation and plaque formation. The drugs administered do not normally eradicate the disease, but they may diminish the lesions. Therapy should be continued, and the doses and combinations administered according to the severity and form of the disease, which may vary from person to person.
The present pharmacological approach to this disease takes various forms, such as combination therapy with topical agents, emollients, antifungals, keratolytics, etc. Corticosteroids are the primary drugs: they give better results in mild to moderate disease and are also effective in the chronic stage. Therapy can be started with a potent steroid, which is gradually replaced by milder ones.
In general, the course of psoriasis is unpredictable. It can be controlled with aggressive therapy but cannot be cured completely, and exacerbations and remissions are common. Since it is an autoimmune disease involving T cells, medical management is oriented towards suppressing or controlling the count of these cells in the epidermis of the patient, as well as controlling epidermal cell turnover (K. D. Tripathi, 2008).
As mentioned above, the disease has different forms, so the pharmacological approach to treatment also differs with each form of the disease. The pharmacological management, treatment and clinical approach to psoriasis are outlined below.
Generally the medicines used are
Calcipotriol: a moderately potent topical vitamin D3 analogue (not a steroid); given along with a topical steroid it is found to be more effective. It binds to the intracellular vitamin D3 receptor and suppresses keratinocyte proliferation. DAIVONEX 0.005% ointment is applied to the psoriatic lesions twice daily (K. D. Tripathi, 2008).
Dithranol: a potent drug used in psoriasis, especially scalp psoriasis. It acts mainly through free radicals, but its actual target has not yet been identified. Being a mild irritant, therapy is started with a low concentration: the cream is applied daily and the strength gradually increased from 0.1% to 3% until mild irritation or a feeling of warmth is achieved. Short contact times are preferred, and the cream can be washed off after 30 to 45 minutes (Carine et al., 2001).
Coal tar: a crude preparation containing many phenolic compounds, which produce a phototoxic effect on exposure to sunlight. It is a classic treatment whose use has declined because of the drawbacks of the preparation, such as its strong smell, irritation and allergy. It is usually given in combination with salicylic acid, in an alcoholic preparation or in the form of an ointment (Carine et al., 2001).
Photochemotherapy (PUVA, psoralen plus ultraviolet A): a method of treatment using UVA rays in combination with photo-activated psoralen. Activation occurs by oxygen-dependent and oxygen-independent mechanisms. The therapy has its own demerits because it is carried out with UV rays, which may cause skin cancer and other manifestations such as immunological responses.
Tazarotene: a topical retinoid; 0.05% and 0.1% tazarotene gel applied twice daily works by modifying abnormal epidermal differentiation. It binds to the retinoic acid receptor, modifies gene function and stops the proliferation. It can be effective but may produce irritation, which can be minimized by applying it once a day.
Acitretin - a retinoid analogue, or synthetic retinoid, used in the treatment of psoriasis in combination with other psoriatic drugs. It acts by binding to the retinoic acid receptor and controls epidermal cell maturation as well as proliferation.
Demelanizing agents include:
Hydroquinone - as the name indicates, a demelanizing agent that helps prevent the formation of melanin in the body. In other words, it inhibits the enzymes involved in melanin formation, such as tyrosinase and other melanin-forming enzymes. It is applied as a 2-6% cream for a month. Since it is a weak demelanizing agent, there is a chance of repigmentation once the treatment is stopped or on exposure to sunlight.
Monobenzone - a derivative of hydroquinone with stronger action. It destroys melanocytes and produces a permanent depigmentation. The medication should be continued for a period of six months, and the patient should be careful about sunlight exposure.
Sunscreens - PABA and its esters are agents that protect the skin from sunlight, especially from those rays that are harmful to skin tissues. They scatter such rays and so protect the skin. Their protective ability is quantified by the sun protection factor (SPF); most commercial preparations have SPF 15. (K. D. Tripathi, 2008)
Even though the above medications are available, aggressive therapy tailored to each form of the disease gives better results than general application of these drugs.
Basic understanding of the pathogenesis of a disease condition is crucial for developing new approaches to control inflammatory progression. Interventional remedies have been adopted based on the cascade of events that leads to inflammation. They include:
- Inhibition of activated T-cell migration from epidermis to dermis.
- Disruption of the antigen-presenting cell pathway in T-cell activation.
- Inhibition of cytokine production and antagonism of cytokine action.
- Suppression of Th1 cytokine responses.
- Amplification of Th2 cytokine responses.
Recent and evolving immunomodulatory treatments include:
Alefacept - a recombinant fusion protein consisting of the terminal part of LFA-3 and the Fc part of human immunoglobulin. Alefacept blocks the interaction between LFA-3 on the antigen-presenting cell and CD2 by competitive inhibition: the LFA-3 portion of alefacept binds CD2 and interrupts the antigen-presenting cell pathway, so T cells are not activated and an immunosuppressant effect is produced. It is manufactured by Biogen, USA, was approved by the FDA in 2003, and is administered intramuscularly in moderate to severe plaque psoriasis.
Efalizumab - a humanized IgG1 monoclonal antibody that inhibits the binding of LFA-1 to ICAM-1, preventing signal transduction and thus leading to loss of leucocyte function. Manufactured by Genentech/Xoma, USA, and approved by the FDA in 2003. Subcutaneous injection once a week is preferred.
Etanercept - a recombinant fusion of the human TNF-α p75 receptor with the Fc portion of human IgG1. It inhibits TNF-α by binding it and preventing its interaction with receptors on the cell surface, inactivating TNF-α; it is used to treat psoriatic arthritis. Manufactured by Amgen, Thousand Oaks, USA. The adult dose is 50 mg/week, and for children (4-17 years) 0.8 mg/kg/week; it can be self-administered subcutaneously.
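As a quick sanity check on the weight-based pediatric figure quoted above, the weekly dose works out as a simple calculation. The sketch below assumes a cap at the 50 mg adult dose for heavy children; that cap is an assumption mirroring the adult figure, not something stated in the text, and the function name is illustrative only.

```python
def etanercept_weekly_dose_mg(weight_kg: float, age_years: int) -> float:
    """Weekly etanercept dose using the figures quoted in the text.

    Children (4-17 years): 0.8 mg/kg/week.
    Adults: flat 50 mg/week.
    Capping the pediatric dose at 50 mg is an assumption, not from the text.
    """
    if 4 <= age_years <= 17:
        return min(0.8 * weight_kg, 50.0)
    return 50.0

# e.g. a 30 kg ten-year-old: 0.8 * 30 = 24 mg/week
print(etanercept_weekly_dose_mg(30, 10))  # 24.0
```

Actual dosing is of course determined clinically; this only illustrates the arithmetic behind the mg/kg figure.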
Infliximab - a human-mouse chimeric monoclonal antibody that prevents the action of TNF-α. It is administered intravenously at 5 mg/kg or 10 mg/kg per week. Infliximab is well tolerated without any serious side effect. It can be used alone or in combination with methotrexate in the treatment of recalcitrant psoriasis.
Adalimumab - a human IgG1 monoclonal antibody that prevents TNF-α from interacting with the p55 and p75 receptors on the cell surface. Manufactured by Abbott Laboratories, USA, and approved by the FDA in 2002. 40 mg per week is administered subcutaneously.
Pimecrolimus (SDZ ASM 981) - a macrolactam derivative of ascomycin recently being investigated for the treatment of inflammatory disorders, known for its inhibition of T-cell activation and thereby of proliferation. The drug, for topical application, is marketed by Novartis Pharmaceuticals, Switzerland.
Rosiglitazone maleate - a well-known oral thiazolidinedione manufactured by GSK, UK. Although it is used in the treatment of type 2 diabetes, it is under investigation for psoriatic management. It is a selective agonist of peroxisome proliferator-activated receptor gamma (PPAR-γ), which is present in hepatic tissue and muscle. PPAR-γ prevents cytokine production and stimulates cell differentiation. Since PPAR-γ is highly expressed in keratinocytes, rosiglitazone, as a PPAR-γ agonist, produces decreased proliferation and differentiation of keratinocytes in psoriatic lesions.
Tazarotene - a retinoid administered orally for the management of plaque psoriasis. Tazarotenic acid, an active metabolite formed from tazarotene, is responsible for its action. Manufactured by Allergan, USA. (Lee and Cooper, 2006)
The advancement of molecular genetics has led to the identification of genes responsible for psoriasis. Explicating the role of the immune system can help develop improved clinical therapy, and with advances in pharmacogenetics the therapeutic strategies for psoriasis can be redesigned. Thus the side effects of anti-mitotic drugs are markedly reduced, and treatment is improved with maximum efficacy.
T-cell targeting therapy: therapies that target the T-cell immune system provide better treatment in psoriasis. An effective and safe therapy can be attributed to the IgG-LFA-3 fusion protein, which blocks the binding of T cells to LFA-3 via CD2. An antibody targeting CD11a, a subunit of LFA-1, and the CTLA4-Ig fusion protein that inhibits CD28-B7 binding are also emerging therapies in psoriasis. Using these therapies helps patients think ahead and lead life with conviction.
Cytokine-modulating therapies: treatment strategies involving monoclonal antibodies against TNF-α have led to well-tolerated and effective therapy for psoriatic arthritis and psoriasis. Intradermal administration of Mycobacterium vaccae has shown improvement in psoriasis by shifting the Th1 cytokine response toward Th2. Combination therapies such as low-dose rapamycin with low-dose cyclosporin show significant improvement in chronic plaque psoriasis. (Kirby and Griffiths, 2001)
Friday, July 26, 2013
Standing alongside current Vietnamese President Truong Tan Sang, Obama recently said, “At the conclusion of the meeting, President Sang shared with me a copy of a letter sent by Ho Chi Minh to Harry Truman. And we discussed the fact that Ho Chi Minh was actually inspired by the U.S. Declaration of Independence and Constitution, and the words of Thomas Jefferson. Ho Chi Minh talks about his interest in cooperation with the United States. And President Sang indicated that even if it's 67 years later, it's good that we're still making progress.”
Anybody who believes communism and our Declaration of Independence have anything in common is sadly mistaken.
To begin with, let’s peek at that “letter,” actually a telegram, that Ho sent to Truman in 1946. Why President Sang had to ‘bring’ Obama a copy, I can’t understand, since it is readily available in our own Government archives.
But he did, and as we can see, Ho used the ploy of “independence” in appealing for America to “interfere” with the French, who had held Vietnam as a colony long before World War Two and were reasserting their control afterward, calling for assistance in “support of [their] independence.”
In penning the Vietnamese Declaration of Independence in September 1945, Ho quoted, “All men are created equal; they are endowed by their Creator with certain unalienable Rights; among these are Life, Liberty, and the pursuit of Happiness.”
Of that he wrote, “This immortal statement was made in the Declaration of Independence of the United States of America in 1776. In a broader sense, this means: All the peoples on the earth are equal from birth, all the peoples have a right to live, to be happy and free.”
That has been a common ploy of the Communists ever since its beginnings, preach “independence” while actually enslaving.
Shortly before his death in September 1969, Ho tried a similar ploy with then-President Nixon in an exchange as Nixon was striving to bring hostilities to an end, Nixon and others not recognizing that the Tet Offensive of 1968 was a resounding Military Defeat for the Communist North Vietnamese, but allowing it to become a decisive Political Victory for them instead.
Outlining how “the United States must cease the war of aggression and withdraw their troops from South Vietnam, respect the right of the population of the South and of the Vietnamese nation to dispose of themselves, without foreign influence,” Ho also said, “This is the correct manner of solving the Vietnamese problem in conformity with the national rights of the Vietnamese people, the interests of the United States and the hopes for peace of the peoples of the world.”
The last U.S. troops left Vietnam on March 29, 1973 in accordance with the negotiated withdrawal outlined in the Paris Peace Accords, recognizing the right of both the South and the North to determine their own governance. Or so it said.
As we know, soon after, the American Congress all but stripped away support of an Independent South Vietnam while the Communist North received much support from both the Communist Soviet Union and China. Seeing that America was no longer going to support the South and its freedom, the Communist North launched an all-out bloody assault against the South, and we witnessed Communist tanks roll into the city of Saigon deep in the South as American helicopters hurriedly evacuated diplomatic personnel from the rooftop of our Embassy at the end of April 1975.
America’s Most Shameful Day. We sat and watched a free people fall to oppression as those South Vietnamese unable to board the last helicopters struggled to leave the country and reach our aircraft carriers, and a Naval Armada set sail for parts unknown, where they were unwanted.
Lauren Zanolli of George Mason University’s History News Network wrote in November 2006, “Historians have directly attributed the fall of Saigon in 1975 to the cessation of American aid. Without the necessary funds, South Vietnam found it logistically and financially impossible to defeat the North Vietnamese army. Moreover, the withdrawal of aid encouraged North Vietnam to begin an effective military offensive against South Vietnam. Given the monetary and military investment in Vietnam, former Assistant Secretary of State Richard Armitage compared the American withdrawal to ‘a pregnant lady, abandoned by her lover to face her fate.’ Historian Lewis Fanning went so far as to say that ‘it was not the Hanoi communists who won the war, but rather the American Congress that lost it’.”
Even though Ho was deceased by then, that was the fulfillment of his idea of being “deeply devoted to peace, a real peace with independence and real freedom.”
Not long after began what we called the exodus of the Vietnamese Boat People, as untold hundreds of thousands of freedom-seeking Vietnamese boarded any rickety craft they could to escape the “benevolent” Communists, while the formerly free South Vietnamese left behind were rounded up and assigned to “Reeducation Camps,” never to be heard from again.
How many perished in the South China Sea at the hands of pirates or the weather will never be known. But thanks to Vietnamese author Le Thi Anh, we do know about the bloodbath within the “benevolent” new Vietnam of Ho Chi Minh’s vision of a “unified Vietnam.”
Speaking next to Obama, Vietnam’s President Sang also said, “I also expressed my appreciation for the care that the U.S. has extended to the Vietnamese who came to settle in the United States and now they have become American citizens and contributing to the overall development of the U.S. And thanks to the support and assistance from the U.S. government as well as the American people, the Vietnamese-American community here in the U.S. has become more and more prosperous and successful in their life as well as work.”
Shameful that neither he nor Obama can make the same claim about Ho’s vision for Vietnam, a vision summed up by a young Vietnamese woman, born after the Fall of Saigon in 1975 and married to a pilot I served with during the war: Do You Really Want to Rear Your Child in a Socialist Society?
In 1787, Founding Father Thomas Jefferson famously wrote: “The tree of liberty must be refreshed from time to time with the blood of patriots and tyrants. It is its natural manure.”
Nowhere will you find any of our Founders working so diligently to enslave an entire nation and place the people under an oppressive rule as Ho Chi Minh envisioned for the people of Vietnam.
That Obama can now compare the murderous tyrant Ho Chi Minh to the founding of the United States of America, even with all of our imperfections, is a slap in the face to the American people, the Vietnamese-American community and especially to the memory of those 58,195 names etched on the Wall in Washington D.C. of those who paid the ultimate sacrifice in support of South Vietnamese freedom.
Posted by Lew Waters at 3:24 PM
Tuesday, July 16, 2013
One thing we know the left is masters at is creating new words that make the right’s efforts out to be overbearing, dumb and ridiculous, and that demean our traditional values at any cost.
For example, when we stood up as a ‘Tea Party’ movement, they swiftly labeled us as “Teabaggers,” knowing the word was a code word for a homosexual practice that most of us had never heard of. The steady use of it ended up demeaning our efforts and the willing lamestream media readily joined in.
The Tea Party movement today is a mere shadow of what we started out to be.
We now see the left spreading hysteria across the country over the acquittal of neighborhood watch captain George Zimmerman for the self-defense shooting death of 17-year-old Trayvon Martin in Sanford, Florida, as protests broke out almost immediately, invoking the name of the youth even though not one of those out in the streets crying and whining has a clue who or what he was or may have been.
Intersections and highways are being blocked, false information is being spread misidentifying people as having served on the jury, in an effort to accost them for rendering a verdict based on actual evidence rather than emotion, death threats are being sent out and the race hucksters are out in full force whipping low-information people into a frenzy.
All because Zimmerman was erroneously identified as a “White” person in the beginning instead of a “Hispanic” and Martin was a “Black” youth.
Race-Baiters, ignoring the thousands of Black youth murdered at the hands of other Blacks and even the massive death count over the years of Whites at the hands of Blacks, have latched onto this single incident in an effort to convince today’s Black Community that they are under constant threat from Whites, are held in bondage to Whites and that “Whitey” is the source of all of their troubles.
The only one who seems to present any “solution” to the false claims is New Black Panther King Samir Shabazz, who publicly calls on others to “kill some ‘cracker’ babies.”
The Race-Baiters only demand Zimmerman be imprisoned, as if that would resolve the plight of today’s inner-city Black youth.
And it is not restricted to Blacks, as the “Trayvoner” effort is full of liberal Whites consumed with White guilt over what happened in our country over 160 years ago, when most Blacks were held as slaves.
Now ignored in this “Trayvon” movement are all of the strides we have made in improving race relations and the many more doors open to today’s Blacks than ever before, if they are willing to apply themselves, get a good education and put out the effort.
More than ever seen before have done just that and live amongst Whites as equals and at times, ‘betters’ in that their success exceeded that of others, rightfully due to their extra efforts.
But listening to Race-Baiters like Al Sharpton, none of that is true or those that made it are “sell-outs” and “Uncle Toms” who forgot where they came from, meaning in large part, they fail to support Sharpton by donating money to him.
In the end, the “Trayvoning of America” is nothing more than another effort to drive a deeper divide between people in this country. It is the old tried and true “Divide and Conquer” seen throughout history by evil forces that swoop in, takeover and oppress the people, falsely selling them a lie of they will be “free.”
Photo by Allie Morris/PBS NewsHour http://tinyurl.com/n5qlzjw
The acquittal angered people, yes. But can you show me any high-profile trial ending in acquittal that did not anger some? People form opinions based upon erroneous reports, preconceived notions and seeing only some of the evidence presented.
But do they get twisted into a social movement in the streets? No.
Stand up, America. Stop letting them play on “White Guilt” and feed fears between Whites and Blacks.
Push back and do to the hucksters as they do to us. Label this effort what it really is.
The Trayvoning of America.
Posted by Lew Waters at 10:13 AM
Monday, July 15, 2013
My initial concerns about him receiving a fair trial were in the forefront of my thoughts during the trial, seeing many rulings from the judge go in favor of the prosecution, but in the end, justice prevailed and Zimmerman was acquitted of all charges.
At least, in a court of law he was acquitted. A heavily biased media whipped the Black Community and Liberals into a frenzy from the time they first made the case national news, editing audio to make Zimmerman sound racist, portraying Martin as an innocent 12-year-old simply buying Skittles and even trying to portray Zimmerman as White, changing that label to “White Hispanic” when his true ethnicity came out.
We were even recently informed that the Department of Justice, under Barack Obama and Eric Holder, lent a hand in organizing and carrying out protests in Sanford, Florida, once the Police initially determined Zimmerman acted within the law, defending himself.
Few people are falling for the claim the DOJ was in Sanford, Florida to “keep the peace.”
Thanks to the efforts of race baiters like Jesse Jackson, Al Sharpton, the lamestream media, the NAACP, the New Black Panthers and Barack Obama claiming “if I had a son, he’d look like Trayvon,” George Zimmerman was quickly convicted in the court of public opinion last year, enraging some now that the courts, viewing the actual evidence and not race-baiting hysteria, acquitted him on all counts.
Word was quick to come out of Washington D.C. that the Department of Justice is “weighing a civil rights case” against Zimmerman, responding to pressure from the NAACP and ignoring that in the initial investigation, the FBI found no evidence of racism on George Zimmerman’s part.
This is the same NAACP that is outraged over a Black woman in Houston, Texas being indicted for shooting and killing an unarmed White man over a minor traffic accident with conflicting claims of just what led up to the shooting.
Many legal experts are questioning just why he was prosecuted in the first place, given that the original prosecutor saw the case was weak and the evidence supported Zimmerman’s claims, not those of a grieving family or the race baiters, and especially not the false narrative created by what I label the ‘lamestream media,’ eager to sell copy at any cost with no regard for harm to others.
What really should be considered is just why this was even a national story and just why are many up in arms over the rightful verdict?
Florida’s Stand Your Ground law is not unique and in fact, the Black woman mentioned above is basing her actions on the same sort of law.
The Tampa Bay Times in Florida maintains a page of cases where Stand Your Ground has been used as a defense, showing that it was found justified 73 times, before and after the Zimmerman trial.
From a variety of cases, we see Black on Black, Black on White, White on White, White on Black and Hispanics as victims as well as shooters in a mix of cases.
Yet, only one case, the Trayvon Martin case, rises to national prominence and enrages people across the nation because the law saw the justice was to acquit the shooter?
Protesters carrying signs like “When Will We Be Seen as Human,” who never bat an eyelid at the massive numbers of young Blacks shot to death in major cities like Chicago on a daily basis, wail as if life as they knew it is over, even though they never met, never knew and likely would not have given Trayvon the time of day had they met him on the street.
Surely they would not have been outraged had he been shot to death by another Black person as we see so often in our large cities.
Trayvon is just a tool for a much larger agenda now as we see incidents of unrest breaking out and malcontents using him to call on death to others they seem to hate. Admitted Communist Van Jones even enters the fray by comparing Trayvon to Martin Luther King Jr. in a created image.
Not addressed by any of the race-baiters or protesters is the actual evidence; they rely instead on the racially biased reporting of the lamestream media, who reported it wrong all along. No one ever asks why, if Trayvon was merely walking home after buying his Skittles and iced tea, he was off the sidewalk in a neighborhood unfamiliar to him, where he was visiting his father after being suspended from school for multiple reasons, apparently walking in between homes, as first described in Zimmerman’s initial 911 call.
Left out too was Zimmerman, after being told it wasn’t necessary to pursue Trayvon as he reportedly ran away, being attacked by Trayvon as Zimmerman stopped the pursuit and was walking back toward his truck. Apparently Trayvon, who we learned through prosecution testimony saw Zimmerman as a “crazy assed cracker,” doubled back to confront Zimmerman, striking him first and knocking him to the ground, punching him in the face and pounding his head against the concrete sidewalk.
In spite of initial denials, it did come out that the autopsy showed injuries to Trayvon’s knuckles, indicating he was indeed striking Zimmerman, supported by photographs of injuries to Zimmerman’s face and the back of his head.
None of the race-baiters or protesters address any of the actual evidence, instead they listen to race-baiter extraordinaire Al Sharpton as he whips them into a frenzy with his call of “We Won't Stop Until Justice Is Served,” while not granting any justice whatsoever to George Zimmerman, who is now a marked man with many calls of “Kill Zimmerman” heard.
Race-baiters have latched onto this case to further drive divisions between the people of this country and even appear to want major race riots between Whites and Blacks where the only possible winners can be the race-baiters and politicians who have long wanted to change our country from a freedom based Republic into a Socialist Oligarchy.
Anarchists are using it to call for mayhem and even the killing of Police as they perform their duties.
In the end, there is no justification for the outrage, the unrest or even the continued persecution of a man who wanted to see burglaries stop in his neighborhood and who defended himself from what appears to have been a cocky teenager out of control with testosterone, thinking he was invincible (as many of us also once did) and chose the wrong person to attack, like many others did in the Tampa Bay Times page linked above.
The young lady mentioned above from Houston is entitled to her day in court as is the dead man’s family.
But like in the Zimmerman case, even though he never should have been prosecuted, let the evidence speak, not race-baiters or the agenda driven politicians who have tried to capitalize on the Zimmerman case.
Above all, we better find a way to shut out these hot heads and race hucksters promoting this division among us and come back together like we tried to do back in the 1960’s before they saw a way to build their own wealth by misinforming people and causing unnecessary unrest.
Posted by Lew Waters at 9:54 AM
Tuesday, July 09, 2013
Amidst audible groans and moans, the Columbia River Crossing light rail project died as the Senate Coalition majority gaveled the end of the second special session in Olympia, having defeated a last minute effort by Democrats to force the $10 Billion Transportation Bill to a vote.
Years of scheming, planning, fearmongering and wasting over $170 Million went up in smoke as proponents saw the bill that would have funded the CRC light rail project and other mega-projects around the state dissolve, due primarily to proponents steadfast refusal to listen to voters and taxpayers who have been rejecting Portland, Oregon’s light rail forced upon them since 1995.
The anger coming from elected officials in support was immediate and continues, many vowing to keep pushing the project in one manner or another, others just lashing out in angry outbursts directed at Washington State Senate Republicans who actually listened to and abided by the voices of those who elected them.
Continuing the false narrative long promoted by proponents, Oregon Governor Kitzhaber released a statement saying in part, “I am extremely disappointed that our legislative partners in the Washington State Senate failed to address the clear and present safety and economic need for this essential I-5 bridge.”
Yet, he is also the one who let it be known that he would kill any hopes of a new bridge if it did not include Portland’s financially failing light rail system being extended into Clark County.
He has yet to address how extending light rail improves bridge safety or helps Clark County’s economic needs, since it was seen early on that Clark County would foot the lion’s share of the projected Multi-Billion expense of construction and operations and maintenance of the light rail extension from well inside Oregon to the short distance proposed into Clark County.
Washington State Governor Inslee, who earlier traveled to our community to encourage proponents to “increase the decibels” and drown out concerned voters and taxpayers, issued his own statement saying, “I’m beyond disappointed in this inaction. The failure by the Senate’s Republican-led majority to act on the transportation plan stops us from making important investments in maintaining and preserving our roads and bridges and ensuring the safety the public deserves.”
We are among the states with the highest gasoline tax now and Gov. Inslee wished to jack up that tax by another 10.5¢ per gallon, tax bicycle sales over $500, increase license fees on cars and more to fund his ‘pie in the sky’ plans to cave to Oregon’s demands.
And like Gov. Kitzhaber, he too does not explain just how a short light rail extension does anything in regards to “preserving our roads and bridges and ensuring the safety the public deserves.”
Representative Jim Moeller (D. Portland / Vancouver), famous for suing constituents to have their votes invalidated when he doesn’t like the outcome and for seeking ways to bypass voters completely to impose his socialist will and tax increases upon them, showed his own mini-meltdown with “I’m very disappointed in the shortsightedness of some of my colleagues on this CRC issue. They simply failed to embrace the simple fact that transportation is about investing in the future. It’s about jobs and families and businesses and public safety” and “I will oppose any package that the state Senate proposes doesn’t include funding to begin anew on the CRC… Because the aging, outdated and dangerous bridges are still there and need for their replacement hasn’t changed.”
Yet, he too went along with the “no light rail, no bridge” meme promoted by Oregon, basically placing the very bridge he deems “unsafe” hostage to a financially failing and often unreliable light rail system from another state.
A couple days later, Moeller ranted, “As expected, there will be many an arm-chair engineer who will have their bridge ‘alternative’ ready. And as usual, they refuse to deal with the facts. Regardless, the only real culprit in this tragedy is the Washington Senate Republicans. It was them who refused to deal with the House Transportation Package,” completely ignoring his own refusal to support constituents and fellow legislators who have for several years opposed Portland’s light rail being forced upon them, even though he later wrote, “First, we must reach a compromise for the state to move forward. Secondly, Clark County must get something other than just a chance to pay more gas tax. And finally, salvaging some part of 10+ years of planning and efforts on this bridge is also important otherwise it truly was a waste to the both Oregon, Washington and the feds.”
Again, ignoring his own refusal to even consider accommodating constituents who have voted against light rail and any measure perceived to fund light rail since 1995.
Joining Moeller’s smoke screen and anger over voters and taxpayers winning one, fellow Democrats from the 49th district, Rep. Sharon Wylie and Senator Annette Cleveland chimed in with “It is easier to kill legislation than to build a future together. We came here to do what is right for our collective future. That may not be easy or always popular. The need to repair our infrastructure will still be here later and it will only get more urgent and expensive” from Wylie and “I am saddened today that not all of us in the Legislature are willing to be partners in the prosperity of our state. The very foundation for this prosperity is investment in our state’s transportation infrastructure…. The short-sightedness of refusing to invest in this state’s future will have repercussions for generations to come.”
Both women also ignore constituent opposition to light rail and mask their desire to impose ever increasing taxes and fees on an already struggling middle class as an “investment” and not the burden it would have been, considering Clark County has seen over four straight years now of double digit unemployment.
There too, neither woman has ever made any effort to actually lay out the benefit to Clark County citizens who would have been stuck in increased traffic congestion for at least eight years to build a new bridge to carry light rail since there is no other alternative bridge to cross the river except for one a few miles upriver.
Apparently displaying complete denial that the tax increase Democrats promoted is now dead, Sen. Tracy Eide, whose district sits some 150 miles north of Clark County, felt the need to chime in with “we still have until September to meet the deadline for paying Washington state’s share of the CRC. I hold out hope that those who have stymied the project so far will realize the stakes and join with the governor and us in moving our state forward in everyone’s best interests.”
Apparently she also refused to read the tea leaves and see that Clark County voters have steadfastly opposed and voted against this boondoggle project she promotes as “in everyone’s best interests.”
Do voters and taxpayers not matter to Democrats at all unless they cave to their demands?
Far from it only being elected Democrats upset over taxpayers opting to retain a little more of their paychecks in order to care more for their own families, a handful of citizens, Democrats too, added their outrage in stronger terms in Letter to the Editor submissions to the Lazy C.
Roy G. Wilson wrote on July 6, “If the Columbia River Crossing is really dead, it is the people of Vancouver, Clark County, and Washington state who are the real losers. Apparently they are being governed by losers.”
Ryan McAbel added the following day, “Can you believe the stupidity of the Washington state Senate? They are a bunch of overpaid blithering idiotic baboons that couldn’t find their own brain if they had a map and a compass…. I’m going to laugh my head off when in three years’ time if the old bridge collapses and they all look stupid.”
Former Democrat School Board candidate Bob Travis said, “Hopefully all you anti-CRC types are on the rickety old bridge when it collapses and you can swim you’re dumb selves back over to economically isolated Vancouver.”
Called on his vitriolic outburst by me, Travis added, “Light rail just works Lew. It makes sense. Perhaps that’s why you sheep can’t handle it. The leader of the whiners talking about whining? That’s too funny, even for Facebook! Not a sore loser at all Lew, because those of us that can see beyond today, the progressive thinkers, haven’t lost-we suffered a setback. As did Vancouver and Clark County.”
Yes, as usual, those of us who cherish our independence, freedom and liberty and stand up against the Socialist Democrat efforts to strip us of them are just too dumb to see what is really good for us.
Writing for the Oregonian, Elizabeth Hovde added her op-ed, “Without the CRC, get used to being stuck on this bridge,” never acknowledging that the real congestion problem is not the bridge itself, but the poorly planned and maintained freeway structure in Portland.
She laments the opposition to light rail: “we need to get the problem-solving about a river crossing away from political fights that mimicked on-the-ground wars between light-rail supporters and detractors.” She adds, “And given our region’s love and hatred of light rail, a replacement bridge should devote something less costly and less permanent to a crossing for a commuter option. Then should Vancouverites be smart enough to choose group transit, great. If they don’t, we don’t have an empty, expensive mass-transit component taking up a lane that dictated much of the bridge’s design,” and “A continued light-rail fight is only good for providing material to ‘Portlandia’ and other spoofs about where we live.”
Again, she ignores that the main opposition comes from voters and taxpayers in Clark County, joined by several who live in Oregon, as evidenced by a half-hour press conference held in January 2012 by 15 speakers and officials from both states.
Hovde almost hits on the problem in her op-ed with the ongoing distress over light rail, but stops short of acknowledging that it is Oregon’s governor who steadfastly refuses to budge, determined to force taxpayers into accepting their light rail even though voters and taxpayers have repeatedly rejected it. That left us no choice but to demand the entire project be killed and started over without the light rail component, which required a new bridge with unsafe clearance for river traffic, ignoring that one-fourth of all bridge collapses in America since 1940 were caused by collisions with water craft.
The Lazy C, the Columbian, the so-called ‘newspaper of record’ for the community, should be teetering on the brink of bankruptcy again, given their efforts to denigrate potential subscribers opposed to light rail, labeling us “Hounds of Whinerville,” “Ankle Biters,” “Cockroaches,” and “BANANA (build absolutely nothing anywhere near anything)” by editorial page editor John Laird.
Then too, this blog and Clark County Politics Blog led the way in exposing the Lazy C’s efforts to ignore or cover up discrepancies and questionable practices by the Columbia River Crossing project.
The Willamette Week seemed to be the only media source willing to engage the CRC properly with numerous articles written and published by Nigel Jaquiss.
Couv.com began with many exposés as well, but seemed to fall towards more of what I refer to as “fluff issues” along the way.
The demise of this seriously flawed project was also due to those who joined in much later, like CRCFacts.info, Stop CRC and more. It was truly a community, citizen-led effort.
Throughout it all, none of the proponents ever addressed just how their claims of an “unsafe bridge,” “easing congestion along the I-5 corridor” or “improving freight mobility” would be served by extending Portland’s unreliable light rail about one mile into Clark County at a cost of $3.4 billion before any interest on bonds was added. Nor did they address that Clark County residents could end up stuck paying $8 each way to cross the bridge, adding nearly $2,000 a year in expenses for the estimated 60,000 Clark County residents who work in Oregon, our elected officials having failed to adequately attract jobs to our community.
Light rail would have done absolutely nothing to change the driving habits seen daily as some whip across several lanes on the bridge to make the Hwy 14 exit or to get across a few seconds sooner.
It would do nothing to widen Portland’s narrow freeways or increase their ability to accommodate the growth in traffic seen over the years.
Governor Inslee, not too long ago encouraging a skeptical Cascade Bicycle Club to support the CRC, lamented to them, “We move a lot of freight, but we do not move a lot of freight by bicycles. A new bridge is the only way to keep the I-5 corridor moving.”
Somehow it escapes him that the light rail he joins Gov. Kitzhaber in demanding does nothing to move freight either, which is indicative of how the “debate” was promoted on false pretenses, claiming unrelated reasons and trying not to mention that this was solely a light rail project all along.
Even though defeated, proponents have vowed to begin anew, Jim Moeller saying, “We need to start again. The next plan has to engage the general public better, has to be more inspiring, less utilitarian, something that people want to leave behind … for their children, or their children’s children,” completely ignoring that he took the lead in shouting down citizen concerns, demanding light rail be included and furthering the false premise that the bridge may collapse any day.
Also ignored by Moeller and Vancouver’s Mayor Tim ‘the Liar’ Leavitt are the many alternative plans proposed by citizens as well as bridge engineers, such as the Third Bridge Now, the Common Sense Alternative, and another proposed by bridge architect Kevin Peterson in 2010.
To that end, all are invited to come out to the Academy, 400 E Evergreen Blvd, Vancouver, WA 98660 on July 11, 2013 from 4:30PM - 9:30PM for an open house and information meeting by the Economic Transportation Alliance.
If Jim Moeller is being honest in his call for “The next plan has to engage the general public better, has to be more inspiring, less utilitarian, something that people want to leave behind,” he and others will hopefully attend, get back in touch with constituents and learn what we want, what we will support and what we have to propose. Far from being the “arm-chair engineers” he accused us of being, we are all concerned citizens who want to see our community thrive and prosper, and not be used by Portland, Oregon to bail them out of their estimated $1.6 billion in unfunded liabilities.
Republican State Senator Don Benton says, “The main lesson I hope that government bureaucrats have learned from this is, you should listen to everybody from the beginning. The CRC has been less than honest with people about facts, about deadlines.”
But above all, we are citizens who fully realize that light rail can’t fix the stupid that has been pushed off on us for so long.
Posted by Lew Waters at 1:20 PM | <urn:uuid:2af7f21f-5bdb-455c-9bd7-c7ec73c6adec> | CC-MAIN-2017-17 | http://rightinaleftworld.blogspot.com/2013_07_01_archive.html | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917125841.92/warc/CC-MAIN-20170423031205-00370-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.970603 | 7,195 | 2.796875 | 3 |
Chlamydia trachomatis is an obligate intracellular bacterium that commonly causes sexually transmitted infection in men who have sex with men (MSM) and is a significant source of morbidity.1 There are 15 serovars of C. trachomatis, characterized based on antigenic variations of the major outer membrane protein, which are associated with factors such as sexuality, race, exhibition of symptoms, histopathology, and spontaneous resolution of infection.2–8 C. trachomatis serovars A–C are primarily associated with ocular disease such as trachoma; serovars D–J with sexually transmitted urogenital disease as well as conjunctivitis,9 and serovars L1–L3 are responsible for a painful and often serious condition known as lymphogranuloma venereum (LGV).10,11 The gene that encodes the major outer membrane protein, omp1, contains 4 variable domains (VDI–VDIV) whose sequence variations enable serovar prediction,12 allowing rapid molecular screening of populations.
The estimated prevalence of rectal C. trachomatis infection in MSM in the United Kingdom and United States ranges from 6.0% to 8.5%, being primarily asymptomatic (52%–86%). Data from studies of MSM in the United Kingdom suggest that urethral infections are less prevalent (3.3%–5.4%) and largely symptomatic (67.7%–68.5%).13–18 Based on studies conducted in the United States and Sweden, the most common serovars identified in MSM are G and D (45.2%–47.9% and 26.9%–29.6%, respectively), followed by serovar J.14,19 Little data exists regarding C. trachomatis infection in Australian MSM, though a study of HIV-negative MSM reported comparable incidence rates of urethral and anal C. trachomatis infection of 7.43 and 4.98 per 100 person-years, respectively.20 The only Australian data on C. trachomatis serovars among MSM was derived from 39 samples collected from men who frequented sex-on-premises (SOP) venues with the predominant serovars being D (53.8%), G (25.6%), and J (10.2%).5 This article presents results from the largest study conducted on C. trachomatis serovars and genotypic variants found in Australian MSM.
MATERIALS AND METHODS
Study Population and Sample Collection
A total of 612 C. trachomatis positive specimens from MSM visiting the Melbourne Sexual Health Centre (MSHC) and Sydney Sexual Health Centre (SSHC) between July 2004 and August 2008 were analyzed. Samples were deidentified, so matching of sites from the same patient was not possible. MSHC collected samples from 2004 to 2006, while SSHC collected samples from 2005 to 2008, with no samples obtained by this centre during 2006. Ethical approval for this study was obtained from the South East Sydney Illawarra Area Human Research Ethics Committee and The Alfred Human Research Ethics Committee. Demographic information including age, place of residence, reported overseas sexual contact during the previous 12 months, whether site specific symptoms were present, HIV status, Neisseria gonorrhoeae coinfection, prior C. trachomatis infection (self-reported), and self-reported number of sexual partners during the previous 3 and 12 months was collected.
Serovar and Genotype Determination of C. trachomatis Infections
The methodology used for the initial C. trachomatis testing, quantitation and subsequent determination of C. trachomatis serotypes via omp1 gene sequencing and qPCR screening has been described previously.21 To determine C. trachomatis genotype, partial omp1 gene sequences used for serovar determination were aligned using ClustalW22 with the SeaView and MEGA 4 programs.23,24 Sequences differing by ≥1 bp were binned as distinct genotypes of that particular serovar, with multiple genotypes of a serovar given an arbitrary numeral (e.g., Di, Dii). The GenBank accession numbers for all sequences generated from this study, as well as reference sequences, are presented in Table 3.
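The binning rule just described, in which identical aligned sequences share a genotype and any difference of ≥1 bp starts a new genotype given an arbitrary suffix (Di, Dii, ...), can be sketched in a few lines. This is an illustrative reading of the rule, not the study's actual pipeline, and the sequences below are invented:

```python
# Toy sketch of the genotype-binning rule: within each serovar, aligned
# omp1 sequences differing by >= 1 bp are treated as distinct genotypes
# and labelled with an arbitrary roman-numeral suffix (Di, Dii, ...).
# All sequences are invented for illustration.

ROMAN = ["i", "ii", "iii", "iv", "v"]

def bin_genotypes(samples):
    """samples: list of (serovar, aligned_sequence) tuples.
    Returns a dict mapping (serovar, sequence) to a genotype label."""
    seen = {}     # serovar -> list of distinct sequences, in order seen
    labels = {}   # (serovar, sequence) -> label such as "Dii"
    for serovar, seq in samples:
        variants = seen.setdefault(serovar, [])
        if seq not in variants:        # >= 1 bp difference -> new genotype
            variants.append(seq)
        labels[(serovar, seq)] = serovar + ROMAN[variants.index(seq)]
    return labels

labels = bin_genotypes([
    ("D", "ATGGCT"),   # first D sequence  -> Di
    ("D", "ATGGCT"),   # identical         -> Di again
    ("D", "ATGACT"),   # 1 bp difference   -> Dii
    ("G", "TTGGCA"),   # first G sequence  -> Gi
])
print(sorted(set(labels.values())))   # -> ['Di', 'Dii', 'Gi']
```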
Statistical Analysis

All analyses were restricted to individuals with a successfully assigned C. trachomatis serovar classification. Demographic and sexual health variables including C. trachomatis serovar (pooled and by year) were stratified by city, and compared using a chi-square test (categorical variables) or a Wilcoxon-Mann-Whitney test (non-normal continuous variables). City, age, number of sexual partners, concurrent gonorrhea, HIV status, prior C. trachomatis infection, overseas sexual contact, and the presence of symptoms were assessed as predictors of the predominant serovars using logistic regression. An additional logistic regression analysis was performed to identify predictors of the presence of symptoms for all samples, and stratified by site of infection. Results were reported as odds ratios (OR) with 95% confidence intervals (CI). The mean log concentrations of C. trachomatis for each serovar/site were compared using an ANOVA test. All analyses were performed using Stata 11.25
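For reference, the standard (Woolf) calculation of an unadjusted odds ratio and its 95% CI from a 2×2 table can be sketched with the standard library alone. Note the adjusted ORs in this study come from multivariable logistic regression in Stata, so this simplified calculation will not reproduce them, and the counts below are made up for illustration:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and 95% CI (Woolf method) for a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of ln(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Made-up counts for illustration only
or_, lo, hi = odds_ratio_ci(30, 70, 20, 80)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```

A CI straddling 1.0 (as here) corresponds to a non-significant association, which is how the non-predictors in the analyses below should be read.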
RESULTS

A total of 612 C. trachomatis positive samples (344 anal swabs, 265 urine samples) were evaluated from MSM patients aged 18 to 73 at MSHC (Melbourne) and SSHC (Sydney) between 2004 and 2008. Of these samples, 571 (93.3%) were able to be successfully assigned a C. trachomatis serovar classification, 521 by omp1 gene sequencing, 528 by qPCR, and 482 by both methods. Anal swabs comprised 323 of these samples (Melbourne n = 158, Sydney n = 165) and there were 249 urine samples (Melbourne n = 147, Sydney n = 102). Overall, participants in Melbourne (n = 304) and Sydney (n = 267) were similar in age (range, 18–73 years), HIV status, overseas sexual partners during the previous 12 months, and number of partners in the previous year (Table 1). MSM in Melbourne were more likely to present with symptoms and have concurrent gonorrhea, while their Sydney counterparts were more likely to have reported a prior chlamydial infection (P < 0.05).
C. trachomatis Serovar Distribution
The most common serovars detected were of the B complex group (serovars B, D, E, L2: 43.9%) followed by the Intermediate group (F, G: 36.7%) and C complex group (H, I, J, K: 19.4%), which are based on phylogenetic divisions of the omp1 gene.11 The most prevalent serovar was D (35.2%), followed by G (32.7%) and J (17.7%) (Table 1). The distribution of serovars was similar in Melbourne and Sydney, with the exception of serovar D (40% vs. 30%, P = 0.02) and serovar E (6% vs. 14%, P < 0.001). Only 4 cases of serovar H were identified in samples from Sydney, and the single cases of serovars B, I, and L2 (L2b) were from Melbourne. C. trachomatis serovars A, C, L1, and L3 were not detected in this study. The proportion of each C. trachomatis serovar was similar irrespective of anatomical sampling site (data not shown), with the exception of serovar F, which was more common in urine samples compared to anal swabs (7.3% vs. 2.8%, respectively, P = 0.01).
There was a marked shift in the distribution of C. trachomatis serovar positivity between 2004 and 2008, with only serovar G remaining constant over this time period irrespective of city. For Melbourne, the proportion of serovar D samples increased (P < 0.01) in each consecutive year samples were collected, while the proportion of serovar J decreased (P = 0.02). In Sydney, a similar trend was observed for serovar D (P < 0.01); however, there was evidence of decreasing prevalence for serovars E and F (P < 0.01). No change was observed for serovar J (Table 2).
Quantification of C. trachomatis Load
Of the 528 samples assigned to serovars by qPCR, 429 (81.3%) C. trachomatis infections were quantified (including 4 mixed infections), with an average of 1.48 × 10⁴ genome copies/anal swab (log10 = 4.17) and 3.72 × 10³ copies/mL (log10 = 3.57) in urine samples. The mean log concentrations and variance were similar across serovars for each site (ANOVA, P = 0.17), with detectable levels in anal swabs consistently higher than in samples of 1 mL of urine (Fig. 1). The mean and median concentrations shown for serovar F varied due to the low number of samples. The C. trachomatis load was not associated with a prior C. trachomatis infection, the presence of symptoms, or any demographic information pertaining to the individual, including age (data not shown).
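The log values quoted above are simple base-10 logarithms of the mean copy numbers reported in the text, which can be checked directly:

```python
import math

# Mean C. trachomatis loads reported in the text
anal_copies_per_swab = 1.48e4   # genome copies per anal swab
urine_copies_per_ml = 3.72e3    # genome copies per mL of urine

print(round(math.log10(anal_copies_per_swab), 2))  # 4.17
print(round(math.log10(urine_copies_per_ml), 2))   # 3.57
```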
We identified 11 cases (2.6% of total quantified) of mixed C. trachomatis infection, 9 from Sydney (4.6% of total quantified for Sydney: 2 D + J, 2 G + J, 2 J + K, and 1 each of D + E, E + J, and H + K) and 2 from Melbourne (0.9% of total quantified for Melbourne: E + J and F + D). Mixed infections were equally distributed amongst urine and anal swab samples. Four of these infections had quantifiable C. trachomatis load; 2 were anal swab samples from Sydney (5.8 × 10⁵ genome copies/swab serovar G and 2.3 × 10⁴ copies/swab serovar J; 3.5 × 10⁵ copies/swab serovar E and 2.1 × 10⁵ copies/swab serovar J), 1 was a urine sample from Sydney (2.0 × 10³ copies/mL serovar D and 1.3 × 10⁵ copies/mL serovar J), and 1 was a urine sample from Melbourne (3.1 × 10⁵ copies/mL serovar F and 5.0 × 10⁵ copies/mL serovar D).
Predictors of Specific C. trachomatis Serovar Positivity
After adjusting for other demographic and sexual health factors, living in Sydney and having a previous C. trachomatis infection were predictors for serovar E infection (city OR: 2.75, 95% CI: 1.41–5.36; previous CT infection OR: 2.32, 95% CI: 1.19–4.52). Conversely, individuals with serovar J infection were less likely to have had a prior C. trachomatis infection (OR: 0.50, 95% CI: 0.26–0.95), and presenting with symptoms was significantly associated with serovar J positivity (OR: 1.90, 95% CI: 1.19–3.05). Interestingly, the only significant predictor of serovar G was previous sexual contact overseas (OR: 0.52, 95% CI: 0.32–0.85), suggesting that this exposure might have a protective effect against infection with this serovar. There were no significant predictors of serovar D or F positivity (data not shown).
Predictors of Symptoms
Symptoms were more common in MSM with urethral C. trachomatis infection compared to anal infection (68% vs. 28%, respectively, P < 0.001), and from Melbourne compared to Sydney (53% vs. 37%, respectively, P < 0.01) (data not shown). In a multivariable model, the significant risk factors for the presentation of symptoms were increasing age, gonorrheal coinfection, city, and a history of prior C. trachomatis infection (Table 3). When restricting the analysis to anal infections, the associations with city and history of prior C. trachomatis infection remained the same (city OR: 0.42, 95% CI: 0.23–0.74; prior C. trachomatis OR: 2.21, 95% CI: 1.09–4.45), while the association with a concurrent gonorrheal infection strengthened (OR: 5.95, 95% CI: 2.97–11.92). Age was no longer a significant risk factor. When the analysis was restricted to urethral infections, age remained a significant risk factor (18–25 years = reference; 26–30 years = OR: 1.74, 95% CI: 0.81–3.72; 31–35 years = OR: 2.88, 95% CI: 1.13–7.39; 36–40 years = OR: 2.59, 95% CI: 1.07–6.25; 40+ years = OR: 5.10, 95% CI: 1.67–15.65), as did gonorrheal coinfection (OR: 3.79, 95% CI: 1.06–13.55). City and prior C. trachomatis infection were no longer significant risk factors.
The frequency of reporting a history of C. trachomatis infection was most common in MSM between 26 and 40 years of age (25.51%), and less common in those over 40 years of age (16.92%), and those 18 to 25 years of age (11.95%) (P = 0.008). There was no association between age group and gonorrheal coinfection. In addition, there was no association between the presence of symptoms and the concentration of C. trachomatis detected, and equally no association was found comparing symptoms to those samples where no concentration was attainable.
C. trachomatis Genotype Distribution
Of the 22 genotypes identified in this study, 14 were able to be assigned a 100% match to sequences on Genbank, corresponding to 96.9% of the infections genotyped (n = 521) (Table 4). In comparison to the most abundant genotype for each serovar, there were 13 variant serovars, comprising 7.5% (n = 39) of the total number of samples, and 4 of these were solely silent mutations. The majority of variations identified in this study occurred in the conserved regions of the omp1 gene. Phylogenetic analysis of the partial amino acid coding sequences of the omp1 gene shows the largely homogenous distribution of genotypes in this study that group within their predicted clades.11 There was no significant difference between these genetic variants found in urine versus rectal samples and given the overall low level of genetic variability, there was no discernable difference between the C. trachomatis serovars and genotypic variants when compared to the demographic factors in this study (data not shown).
DISCUSSION

This was the largest study of its kind in MSM to date and depicts a high self-reported rate of prior C. trachomatis infection and median number of sexual partners over 12 months. The rate of HIV infection in this population was consistent with previous studies for the Sydney MSM community (8.0%), although it was nearly double for Melbourne (9.2% vs. 5.0%).26 The proportion of individuals who exhibited symptoms for urethral and rectal C. trachomatis infection in our study was consistent with UK clinic-based MSM data,13,18 but was higher than findings from a community-based cohort study in Sydney.20 Interestingly, more anal infections from Melbourne showed symptoms than those from Sydney, as did those with a prior C. trachomatis infection. The age of the patient and prior C. trachomatis infections were associated, and both correlated with the likelihood of symptoms being present. This correlates with previous research on C. trachomatis infection in Australian heterosexual men showing symptoms were more likely to occur in those who were older with a history of C. trachomatis infection.27 This suggests that increasing age may be a surrogate marker for a prior C. trachomatis infection, and symptom recognition improves with recurrent infections, although further work is warranted to explore the possibilities of persistence and/or an immune-mediated response. The age of the patient was more predictive of symptoms when looking at urethral infections, although symptoms declined for this group over the age of 40.
There appeared to be no association between C. trachomatis serovar (or genotypic variant) and anatomical site of infection, with the exception of serovar F, which was more common in urethral infections, albeit with low sample numbers. The single LGV case identified in this study was from an anal swab collected from a symptomatic HIV-infected Melbourne MSM, supporting findings indicating a low prevalence of LGV in Australian MSM, primarily in those who are HIV-positive with no evidence of a subclinical pool.28
Of the anal swab and urine samples collected and typed, the predominant serovars were D, G, and J, consistent with other international studies.14,19 Compared to the Australian MSM SOP study of 39 samples collected by Lister et al in 2001–2002,5 our proportionate yield of serovar D was lower (35.2% vs. 55.1%), possibly attributable to our larger sample size or a more concentrated pool of serovar D in SOP venues. The majority of isolates in our study were 100% identical to that of the SOP samples (genotypes B, Di, Dvii, Ei, Gi, and Ji), and the genotypic variations in both studies were primarily associated with conserved regions of the omp1 gene, though our study revealed more amino acid substitutions (our study = 13/24 substitutions; SOP = 4/12 substitutions).
The predominant serovars from our study differed over time, with the proportion that were serovar D consecutively increasing from year to year in both Melbourne and Sydney while serovar G remained constant. Interestingly, serovar G was also inversely associated with sexual contact overseas, and it would be of interest to see the distribution of the dominant genotype of this serovar in MSM communities around the world. Patients infected with serovar J were more likely to exhibit symptoms and less likely to be associated with a prior C. trachomatis infection. Serovar J was also present in the majority of mixed infections. The overall prevalence of mixed C. trachomatis serovar infections in this study was relatively low, 2.6%, with a far lower rate in samples from MSHC (0.9%) compared to SSHC (4.6%), although these figures do not include multiple infections of the same serovar, and most were near the detection limit of the assays used. The majority of cases required a nested step for qPCR detection, suggesting a low C. trachomatis load in mixed infections.
There was a higher load of C. trachomatis detected in anal swabs than urine samples, although we did not observe a relationship between copy number and the presence of symptoms. This highlights the possibility that many MSM may be asymptomatic and yet carry a high load of C. trachomatis, suggesting subclinical infectivity and low correlation of the presence of symptoms and infection, particularly with rectal C. trachomatis. This is further supported by Annan et al stating that there is a reservoir of undiagnosed rectal C. trachomatis infections in the MSM community,13 and by Lister et al emphasizing there is no clear association between rectal symptoms and presence of infection.29 The log mean and variance of the C. trachomatis concentrations detected in urine samples were comparable to those cited by Michel et al30 (P = 0.37), despite different methodologies used, and to our knowledge this is the first paper to report C. trachomatis loads for MSM rectal infections. The low level of genetic variability detected within the observed C. trachomatis serovars indicates a conserved genetic pool that is of interest, particularly as nearly a quarter of the MSM involved in our study reported overseas sexual contact during the previous 12 months and the distribution of these serovars changed markedly from year to year. The changes in prevalence were not consistent between Melbourne and Sydney, and further differences were identified between these 2 cities, with Melbourne MSM more likely to be symptomatic and have a gonorrheal coinfection while Sydney MSM were more likely to report a prior C. trachomatis infection. Additionally, serovar E infections were more common in Sydney, particularly in 2005. It is unclear whether these trends can be generalized to a wider population of MSM in each city, or if variations occur within subpopulations. Therefore, it is important to assess whether our findings are reproducible in other samples from MSM in Melbourne and Sydney.
Further work is also warranted to investigate the host relationships between the detected serovars and anal/urethral infections, and to assess the distribution of C. trachomatis serovars over time among MSM in the community. In conclusion, this study gives the most comprehensive overview to date of the predominant genetic variants of C. trachomatis among MSM in Australia.
REFERENCES

1. Satterwhite CL, Joesoef MR, Datta SD, et al. Estimates of Chlamydia trachomatis infections among men: United States. Sex Transm Dis 2008; 35:S3–S7.
2. Batteiger BE, Lennington W, Newhall WJ, et al. Correlation of infecting serovar and local inflammation in genital chlamydial infections. J Infect Dis 1989; 160:332–336.
3. Geisler WM, Black CM, Bandea CI, et al. Chlamydia trachomatis ompA genotyping as a tool for studying the natural history of genital chlamydial infection. Sex Transm Infect 2008; 84:541–544; discussion 544–545.
4. Geisler WM, Suchland RJ, Whittington WL, et al. The relationship of serovar to clinical manifestations of urogenital Chlamydia trachomatis infection. Sex Transm Dis 2003; 30:160–165.
5. Lister NA, Tabrizi SN, Fairley CK, et al. Variability of the Chlamydia trachomatis omp1 gene detected in samples from men tested in male-only saunas in Melbourne, Australia. J Clin Microbiol 2004; 42:2596–2601.
6. van de Laar MJ, van Duynhoven YT, Fennema JS, et al. Differences in clinical manifestations of genital chlamydial infections related to serovars. Genitourin Med 1996; 72:261–265.
7. Workowski KA, Stevens CE, Suchland RJ, et al. Clinical manifestations of genital infection due to Chlamydia trachomatis in women: Differences related to serovar. Clin Infect Dis 1994; 19:756–760.
8. Workowski KA, Suchland RJ, Pettinger MB, et al. Association of genital infection with specific Chlamydia trachomatis serovars and race. J Infect Dis 1992; 166:1445–1449.
9. Garland SM, Malatt A, Tabrizi S, et al. Chlamydia trachomatis conjunctivitis: Prevalence and association with genital tract infection. Med J Aust 1995; 162:363–366.
10. Wang SP, Kuo CC, Barnes RC, et al. Immunotyping of Chlamydia trachomatis with monoclonal antibodies. J Infect Dis 1985; 152:791–800.
11. Stothard DR, Boguslawski G, Jones RB. Phylogenetic analysis of the Chlamydia trachomatis major outer membrane protein and examination of potential pathogenic determinants. Infect Immun 1998; 66:3618–3625.
12. Yuan Y, Zhang YX, Watkins NG, et al. Nucleotide and deduced amino acid sequences for the four variable domains of the major outer membrane proteins of the 15 Chlamydia trachomatis serovars. Infect Immun 1989; 57:1040–1049.
13. Annan NT, Sullivan AK, Nori A, et al. Rectal Chlamydia—a reservoir of undiagnosed infection in men who have sex with men. Sex Transm Infect 2009; 85:176–179.
14. Geisler WM, Whittington WL, Suchland RJ, et al. Epidemiology of anorectal chlamydial and gonococcal infections among men having sex with men in Seattle: Utilizing serovar and auxotype strain typing. Sex Transm Dis 2002; 29:189–195.
15. Ivens D, Macdonald K, Bansi L, et al. Screening for rectal chlamydia infection in a genitourinary medicine clinic. Int J STD AIDS 2007; 18:404–406.
16. Kent CK, Chaw JK, Wong W, et al. Prevalence of rectal, urethral, and pharyngeal chlamydia and gonorrhea detected in 2 clinical settings among men who have sex with men: San Francisco, California, 2003. Clin Infect Dis 2005; 41:67–74.
17. Manavi K, McMillan A, Young H. The prevalence of rectal chlamydial infection amongst men who have sex with men attending the genitourinary medicine clinic in Edinburgh. Int J STD AIDS 2004; 15:162–164.
18. Ward H, Alexander S, Carder C, et al. The prevalence of lymphogranuloma venereum infection in men who have sex with men: Results of a multicentre case finding study. Sex Transm Infect 2009; 85:173–175.
19. Klint M, Lofdahl M, Ek C, et al. Lymphogranuloma venereum prevalence in Sweden among men who have sex with men and characterization of Chlamydia trachomatis ompA genotypes. J Clin Microbiol 2006; 44:4066–4071.
20. Jin F, Prestage GP, Mao L, et al. Incidence and risk factors for urethral and anal gonorrhoea and chlamydia in a cohort of HIV-negative homosexual men: The Health in Men Study. Sex Transm Infect 2007; 83:113–119.
21. Stevens MP, Twin J, Fairley CK, et al. Development and evaluation of an ompA qPCR assay for Chlamydia trachomatis serovar determination. J Clin Microbiol 2010; 48:2060–2065.
22. Thompson JD, Gibson TJ, Higgins DG. Multiple sequence alignment using ClustalW and ClustalX. Curr Protoc Bioinformatics 2002; Chapter 2: Unit 2.3.
23. Gouy M, Guindon S, Gascuel O. SeaView version 4: A multiplatform graphical user interface for sequence alignment and phylogenetic tree building. Mol Biol Evol 2010; 27:221–224.
24. Tamura K, Dudley J, Nei M, et al. MEGA4: Molecular Evolutionary Genetics Analysis (MEGA) software version 4.0. Mol Biol Evol 2007; 24:1596–1599.
25. StataCorp. Stata Statistical Software [computer program]. Release 11. College Station, TX: StataCorp LP; 2009.
26. Prestage G, Ferris J, Grierson J, et al. Homosexual men in Australia: Population, distribution and HIV prevalence. Sex Health 2008; 5:97–102.
27. Chen MY, Rohrsheim R, Donovan B. The differing profiles of symptomatic and asymptomatic Chlamydia trachomatis-infected men in a clinical setting. Int J STD AIDS 2007; 18:384–388.
28. Lee DM, Fairley CK, Owen L, et al. Lymphogranuloma venereum becomes an established infection among men who have sex with men in Melbourne. Aust N Z J Public Health 2009; 33:94.
29. Lister NA, Chaves NJ, Phang CW, et al. Clinical significance of questionnaire-elicited or clinically reported anorectal symptoms for rectal Neisseria gonorrhoeae and Chlamydia trachomatis amongst men who have sex with men. Sex Health 2008; 5:77–82.
30. Michel CE, Sonnex C, Carne CA, et al. Chlamydia trachomatis load at matched anatomic sites: Implications for screening strategies. J Clin Microbiol 2007; 45:1395–1402.
May 8, 1945: Victory in Europe Day. The day the German military laid down its arms and surrendered to the closest assigned base or port.
Eleven days later the final German submarine surrendered.
She was the largest U-boat left in Germany’s arsenal, and carried a number of high-profile passengers. The media went crazy over the Luftwaffe officers processing down the gangplank in their smart leather overcoats and visible medals. Unlike previous surrendered submarines, no one was permitted to speak with these Germans on pain of death. Still, the images made for great newsreels, and this was the fourth U-boat in port anyway. The media was more than happy to take photos and footage and leave the interrogating to the professionals.
That frenzy obscured the deeper, more terrifying story of what the U-boat had been doing, and what she was carrying when Germany surrendered.
Yanagi: The Secret Submarine Highway
Unknown to many people then (and even now), Japan and Germany ran a submarine highway between their two countries. U-boats and Japanese submarines transited between the two nations by rounding the Cape of Good Hope and crossing the Indian Ocean.
Before the war, exchanges took place openly using cargo vessels. Then the parties resorted to the Trans-Siberian Railroad across technically-neutral Russia. That ended after the Nazis invaded in June 1941 and discovered the Russians were no pushovers. The final resort, submarines, began making the trek. Soon, blueprints for U-boats and jet engines, Enigma machines, blueprints for Japanese weapons, experts in a variety of fields, even critical rare supplies slinked around Africa, back and forth.
Many of these long-haul trips ended in disaster, with the submarines more often than not getting sunk on either the German or Japanese ends of the voyage by Allied submarines. Still, the transport continued.
And this is where the U-234 comes in. Originally a large minelaying submarine, the 234 was refitted in late 1944 as a transport submarine for the Japan run. Her mine shafts were converted into cargo holds, and she was fitted with a snorkel so she would not have to surface as she crept past the British Isles and out to the open sea.
By the time U-234 sailed on April 15, 1945, Germany was in a desperate state. It’s likely that U-234 was Germany’s last gasp to assist the only remaining ally standing against the Allies.
Besides carrying 6-9 months of fuel and provisions, U-234's cargo included:
- An Me 262, disassembled and crated (the world's first operational jet; by some accounts, there were two aboard)
- Components for the V-2 rocket/missile
- A Henschel glide bomb
- New electric torpedoes (which left no wake or warning when fired)
- 26 tons of mercury
- 7 tons of optical glass
- 74 tons of lead
- Technical blueprints and plans of various weapons (according to some accounts, not just a few drawings, but 6,615 POUNDS worth of them)
- Over 1 ton of mail for various German diplomats, technicians, and experts already working in Japan
- A number of sealed barrels, weighing in at 1,200 pounds
The High-Profile Passenger List: Possibly even more dangerous than the cargo.
- Luftwaffe General Ulrich Kessler, to be assigned to Tokyo as an air force attaché, helping the Japanese create and train a jet squadron using the crated craft and drawings on 234.
- Oberleutnant (1st Lt.) Erich Menzel of the Luftwaffe. Attaché to Kessler, Menzel was a skilled navigator and bombardier, with combat experience against British, American, and Russian troops.
- Colonel Sandrattz von Sandrart of the Luftwaffe. An anti-aircraft specialist, he was assigned to boost Japan's defense systems against the constant bomber attacks.
- Colonel Kay Neishling of the Luftwaffe. A naval judicial and investigative officer, he was heading to Japan to root spies out of the German diplomatic corps.
- Fregattenkapitan (Lt. Cmdr.) Gerhard Falcke, fluent in Japanese, was an architect and construction engineer who was to oversee building the new factories for jets and ships.
- Kptlt. (Lt. Cmdr.) Richard Bulla. A former crewmate of 234's captain, Bulla's expertise lay in new armaments and weapons, and the latest in carrier aviation.
- Oberleutnant (Lt.) Heinrich Hellendoorn, an artillery officer, was to serve as Germany's observer.
- Franz Ruf, civilian, an industrial machinery specialist tasked with designing aircraft components and other small devices.
- August Bringinwald, civilian, who had helped oversee the jet's production and was to do the same in Japan.
- Heinz Schliege, civilian scientist: a radar, infrared, and countermeasures specialist, his mission was to help the Japanese manufacture many of the smaller devices depicted in the blueprints. He was also the custodian of the blueprints, ordered to destroy them and then kill himself if 234 was captured.
And two Japanese naval officers:
- Cmdr. Hideo Tomonaga, an aviator turned submarine specialist who had come to Germany aboard the I-29 in 1943.
- Cmdr. Genjo Shogi, an aircraft specialist who had spent years in Europe as a military attaché in several countries.
The Japanese officers oversaw the loading of all the equipment for their military. The sealed barrels were of particular interest, and they painted “U-235” on them. The U-boat sailors laughed at this, believing the Japanese officers had already forgotten their proper hull number: 234. The officers had not forgotten; they were marking the barrels with the symbol for the isotope inside: uranium-235.
The cargo manifest, known only to a few onboard, revealed these barrels contained 560 kilos of uranium oxide, aka “yellowcake” uranium. To this day, there are debates about what this was meant for, but no matter what, its successful arrival could have prolonged the Pacific War, or even forced a stalemate.
U-234 departed on her mission on April 15, 1945, commanded by Lt. Johann “Dynamite” Fehler, one of the top U-boat commanders remaining in Germany. And yet, due to the high mortality of the U-boat service, this was Fehler's first submarine combat mission.
Despite the lofty goal of reaching Japan in three months time, many of the crew doubted they would succeed. As second watch officer Karl Ernst Phaff later said, “[We believed] ..the chances were fifty-fifty. In reality, they were much worse, but that we did not know. Because losses were never revealed.”
Another said, “It was clear that the war was lost, our morale was non-existent.”
Nonetheless, U-234 headed out, her bow pointed for Japan. Fehler's first order of business: use a different route than the one he'd been assigned, in case the Allies were listening and had set an ambush. It was a wise move; he avoided the first ambush, by a British sub, and 234 made it to the Atlantic.
It was cold in the north Atlantic waters, unless you were in the engine rooms. The extra 12 people made a cramped situation even more so. Since 234 had to sneak around Britain, she ran deep most of the day, only coming close enough to the surface to expose her snorkel when she had to run her diesels. The air was typical submarine air: foul.
Still, many of the crew remembered the initial part of the trip as going as well as it could. The Japanese officers Tomonaga and Shogi were particularly remembered as gracious and friendly, inviting many of the crew to visit their homes and families once the U-234 got to Japan.
Meanwhile, back in Germany, the two fronts, Russia in the east and the Allies advancing from France, closed rapidly. On April 30, Hitler and his new wife Eva Braun committed suicide, and their bodies were burnt by their comrades to keep them out of Allied hands. Many others in the Nazi high command committed suicide or went into hiding (most of whom were captured; some were not).
May 8 was Victory in Europe Day. All German military units were ordered to surrender. Millions of war survivors rejoiced.
Far out in the Atlantic, the U-234 missed the first announcement of Germany's total surrender and continued on course for Japan. Two days later, Capt. Fehler heard a shortwave radio transmission from Submarine Admiral Karl Donitz: “My U-boat men, six years of war lie behind us. You have fought like lions. An enemy with oppressive material advantage has contained us in our exceedingly small territory. From this remaining base a continuation of our struggle is impossible. U-boat men, unbroken and immaculate, you must lay down your arms after a heroic fight. Long live Germany.”
Fehler and his crew could not believe it. They tuned into foreign radio stations, including English-speaking ones. Each one announced Germany's utter defeat.
Still Fehler refused to believe it. While this was his first U-boat command, he had been an officer aboard the infamous raider Atlantis, whose modus operandi was to disguise itself as a friendly merchant vessel to lure British ships within range before revealing her camouflaged guns. He knew all too well the power of a convincing radio message.
They managed to raise a fellow U-boat, the U-873, which had been en route to the Caribbean. 873's commander was Friedrich Steinhoff, a dedicated (some said fanatical) Nazi. He confirmed the news: Germany was defeated, Hitler was dead (Donitz was, in fact, the leader of the German government, such as it was in these days), and all submarines were to surrender at the nearest Allied port. 873 herself was just 24 hours from Portsmouth, New Hampshire, where she would join the already-surrendered U-805; the U-1228 was following the 873 by about 24 hours.
Now what? All German naval vessels currently at sea were ordered to surrender as soon as possible. U-boats in the North Atlantic were to head to the closest of the four approved ports: Britain, Gibraltar, Canada, or the USA.
But U-234 was caught in a strange trap. She was nearly equidistant from them all, and with enough fuel and provisions to go…pretty much anywhere they wanted. No matter what they did, it would likely be days before they COULD surrender to anyone.
Some of the crew argued to go home. Others, to Argentina or the Caribbean. Whatever the decision, the crew had to decide where to go, surface, and fly the black surrender flag before radioing their position and course for Allied intercept. Anything else, and the 234 could be sunk as a pirate.
All Fehler wanted to do was return home, and he reasoned that he might get home faster surrendering to the Americans.
Now for the final wrinkle: the two Japanese officers.
Surrender and Death
Japan and the Allies were still in open war. Once the U-234 was captured, these men would be taken as POWs, and high value ones at that, fully trained and versed in both German and Japanese technology, plans, and tactics.
As part of the surrender, Captain Fehler had to arrest the Japanese officers and lock them in a cabin, but he wanted to reassure them that he wouldn't simply turn them over.
“I informed the two Japanese about the situation. And I gave them my word that I would try my best not [to] let them fall into allied hands, but to try to put them ashore somewhere in neutral territory, as Spain, Portugal, Canary Islands or somewhere else. Apparently, they did not trust my word, or believed the idea was not feasible…” -Letter from Capt. Fehler
But until they chose an option, they would have to remain confined to quarters, under guard.
Had this been a Japanese submarine, it is very likely the sub's crew would have scuttled her and gone down with her rather than be captured with such valuable information. But this was not the German way.
So the Japanese officers took their own lives. According to Tomonaga's widow, they chose to overdose on sleeping pills rather than any ritualized seppuku or bloodletting, out of consideration for the boat and crew. They left behind a suicide note. They also left behind wives and children in Japan who had not seen their fathers in years, and now never would.
Funeral, then Surrender
Capt. Fehler later remembered the next morning:
“When they were discovered on the next morning, nothing could be done for them anymore. We kept their bodies on board for 20 more hours until daybreak the next morning. I had them sewed up in canvas hammocks and they were given over board in the proper seaman’s way with prayer and covered by Japanese flags…We had to carry the bodies to the engine [room] as there we had sufficient space to sew them up in their canvas coffins.”-Letter from Capt. Fehler
It was May 13. Germany had been defeated for a week. Occupation troops for Europe were being assigned, and calculations for Operation Downfall (the invasion of Japan) were being made. Non-occupation troops would be shipped to the Pacific as fast as possible to push for the war's end.
In the western Atlantic, the race was on. Between Allied Intelligence before the surrender, and information gained after, both Canada and America knew the 234 was one of the most valuable submarines left at sea. Destroyers from both countries were out, intercepting and escorting enemy subs to Nova Scotia, Maine and Massachusetts. Whoever intercepted 234 first would gain her, her cargo, and her passengers. 234 radioed her position and course, with the orders to report in again within 24 hours.
Aboard the 234, Fehler, for whatever reason, jettisoned some of the cargo: acoustic torpedoes, Enigma machines, and classified documents were thrown overboard. His choices of what to discard and what to retain were never explained, even by Fehler. The sealed containers marked U-235 remained in their hold.
May 14: The Canadians radioed 234 first, demanding she report her position, speed, and course again. Fehler radioed a position farther north than they actually were and a speed of 8 knots heading west for Halifax. Canada sent ships to intercept, while Fehler, at almost 16 knots, fled southwest to America.
In America, the destroyer SUTTON, escorting the U-1228, which had also surrendered, was re-routed back to sea to intercept the 234. The destroyer SCOTT remained with the 1228.
In an almost comical moment, SUTTON came upon the Canadian ships WASKESIEU and LAUZON in the search area based on 234's initial report. For over 11 hours the three ships cooperated in a search grid, until the Canadian Navy reported 234's (supposed) position to the north.
The two Canadian ships departed, leaving SUTTON behind. Four hours later, SUTTON's radar picked up the 234 running on the surface. At 2241 (10:41 pm) the SUTTON overtook the 234. Once the ship and sub sized each other up, they discovered they were nearly the same size; if anything, the 234 was bigger.
It would take five days to get back to the States, but the 234 was captured, along with her valuable cargo. 234's captain, officers, passengers, and most of the crew were transferred to SUTTON, and a skeleton crew was left aboard to help the transferred American sailors sail the 234 back to the States.
When the Canadians angrily radioed again, demanding the 234 confirm her position and course and not slip away again, it was an American sailor who answered!
A Cruel Irony
The media went crazy over the high-ranking German personnel who disembarked. The prisoners were so top-secret that the Navy forbade the press to come within speaking distance of anyone. The Marines on guard duty were ordered to shoot anyone who tried. Nonetheless, the crew and passengers were paraded down the dock to the waiting bus in full sight of the cameras.
The furor over the high-value prisoners, especially the Luftwaffe general, neatly hid the cargo within the boat. The cargo manifest released after the war mentioned the tech drawings, weapons, medical supplies, lead, mercury, steel…but no uranium.
Truth was, at this time, no one knew how much uranium would be needed to make an atomic bomb. Special Allied units were collecting uranium anywhere and everywhere it had been abandoned in Germany's disorganized retreat. The U-234's cargo was an incredible coup.
In the days following the surrender, Watch Officer Ernst Pfaff, in charge of the manifest, was ordered to oversee the opening of the sealed containers in a closed room in front of a number of military men and one civilian. The civilian seemed to be in charge, or at least was treated with great respect. Later, Pfaff learned this man's name: Robert Oppenheimer. History calls him the father of the atomic bomb.
No one knows for certain what happened to the uranium oxide once it vanished from the records, but many historians believe it was purified into almost 16 pounds of weapons-grade uranium. And that 16 pounds could have become 10-15% of the warhead of “Little Boy”.
Thus, part of the cargo meant to help Japan win the war, became part of its destruction.
Fallout and Epilogue
To this day, historians are divided about whether the cargo or the passengers of the 234 were more dangerous. Had 234 been sent to Japan in January, not April, and had she made it, it is possible the war could have concluded in a very different way. The Japanese could either have had the components of a “dirty bomb” of their own to use, or even new jets and the ability to make fuel for them.
As had happened after WWI, all the captured U-Boats were thoroughly dismantled and inspected for new technologies. Not shockingly, German sub tech like Snorkels appeared within a few years aboard American diesel boats.
The U-234 was sunk as a target by the USS GREENFISH (SS-351) on 20 November 1947 off Cape Cod.
For More Information:
A great article about U-234, with sketches done by men aboard the escorting ships: https://www.uscg.mil/history/articles/authors/thiesen/SeaHistory142%20EliotWinslow1.pdf
Hitler's Last U-Boat, documentary (2001)
Tales from the Atomic Age by Paul W. Frame. Originally published in the May 1997 issue of Health Physics Society Newsletter. Accessed May 14, 2015 from: https://www.orau.org/ptp/articlesstories/u234.htm
“Lieutenant Eliot Winslow, Kapitänleutnant Johann-Heinrich Fehler, and the Surrender of the Nazi's Top-Secret Submarine, U-234.” Originally published in Sea History, 142, Spring 2013: https://www.uscg.mil/history/articles/authors/thiesen/SeaHistory142%20EliotWinslow1.pdf
Letter from Capt. Fehler, pg. 2. Accessed: http://greyfalcon.us/the%20U.htm
Wikipedia entries on U-234; USS SUTTON; USS SCOTT; Karl Donitz
Scalia, Joseph M. Germany's Last Mission to Japan: The Failed Voyage of U-234. Naval Institute Press, 2000.
War Diary of USS SUTTON (DE-771), 4/1/45 – 5/31/45. US Archives via fold3.com
Billings, Richard N. Battleground Atlantic: How the Sinking of a Single Japanese Submarine Assured the Outcome of World War II. Penguin Group, 2006.
Johann-Heinrich Fehler: http://www.sharkhunters.com/EPFehler.htm
In case this route seems overly long: it was the only one that avoided the Suez Canal and the heavily-patrolled Straits of Gibraltar, two choke points where they would be seen.
Russia and Germany signed the German-Soviet Nonaggression Pact in 1939. Then Hitler decided to throw it aside.
Among these drawings were plans for everything the U-234 carried, plus building plans for the needed factories, plans for the newest German ships and submarines, bombsights, analog computers for bombsights, and airplane-mounted radars.
Hitler and the Nazis had an atomic bomb program of their own, so this could have been weapons-grade uranium meant to help the Japanese complete their project. On the other hand, it is also possible that it was a catalyst for a type of synthetic aviation fuel. As we saw with the Musashi post, Japan's military was suffering for a number of reasons, but lack of fuel was one of the greatest problems, and this would have helped.
By some accounts, this was not the first time Uranium had been shipped to Japan from Germany.
- In April 1944, the I-29, by some accounts, was loaded with uranium bound for Japan. She was sunk in the Balintang Channel, Luzon Strait, on 26 July 1944 by the American submarine SAWFISH.
- On August 1, 1944, the I-52 was loaded with nearly 1,000 pounds of uranium oxide in Lorient, France (part of the Nazi dominions). Due to Allied advances from Normandy, the I-52 was directed to finish her loading and provisioning in Norway. She departed for Japan instead and rendezvoused with the U-530 on June 22 for a top-off of fuel and provisions. The radio traffic between the two boats tipped Allied Intelligence to their location, and five destroyers were dispatched to attack. U-530 escaped; I-52 did not. In 1995, her wreck was located in 17,000 feet of water, 1,200 miles west of the Cape Verde Islands.
- Some accounts say that the U-1224 (re-commissioned in the Japanese Navy as RO-501) and the U-862 (surrendered to the Japanese Navy at Singapore and re-commissioned as the I-502) were also involved in shipping uranium to the Far East, but these accounts have fewer records at this time. At any rate, U-234's shipment was at least the third try; they were that desperate for the stuff.
He had gained his nickname because no matter what his assignment, he tried to find some way to incorporate explosives—much to the chagrin of his commanding officers.
By this time, 70% of U-boats and 75% of U-boat sailors had already been lost. And the Allies were not letting up.
Some people wonder why Europeans and the Japanese had such different views of surrender and POW treatment in war. While much has been made of the samurai code of “bushido,” which lionized death before the shame of surrender, not much has been written about the history that shaped the opposite European point of view, probably because to a European or American, finding no shame in surrender makes intrinsic sense. But the concept of surrendering to POW status has a long history.
There were a few forces shaping attitudes to battle and warfare in Europe, among them the Christian ethic of a “Just War”: you are trying to force your will on your opponent, but once that happens, killing and destruction for their own sake were horrific, and a warrior could not be honorable if he reveled in such things. So if someone, or an army, or a town surrendered to you, you had won the “Just War” as far as they were concerned, and no further killing was necessary. Besides, there was money to be made at this point.
What you really wanted to do was capture as many people of status as you could. You may not have killed them, but you had them, and if their families, towns, duchies, or country wanted them back, they were going to have to pay a hefty fee. (You might settle for a POW exchange if they had a bunch of yours they were trying to ransom to you, but really, everyone just wanted the ransom money.)
This ransom, depending on who they had captured, could ruin a family, a town, a county, or an entire country's economy, which was kind of the point. You'd be ridiculously wealthy, and they'd be too poor to engage in war with you again for a number of years, which maintained the peace you'd imposed on them anyway.
A famous example: when King Richard Lionheart was captured in 1192, his captor, Holy Roman Emperor Henry VI of Germany, demanded 65,000 pounds of silver in ransom. That was, at the time, three times England's annual GDP. Everyone in England (plus the Aquitaine region of France, which was part of England at that time), from the nobility to the serfs to the formerly-exempt clergy, was heavily taxed, and it was up to Queen Eleanor of Aquitaine (Richard's mother) and Prince John Lackland (Richard's brother and legal regent of England) to raise the funds by enforcing these taxes…plus inventing new ones, plus confiscating Church treasures, plus selling properties. (And now you know where the high taxes in the Robin Hood tales come from. In those stories, Prince John wasn't evil so much for levying those taxes as for seriously considering paying Henry VI a discounted rate if he KEPT Richard…which actually happened!) It took two years to finally raise enough funds. Holding a high-ranking prisoner could be a lucrative business.
And of course while you’re holding the King of England (or Earl, Duke, Count, or all the men of a certain town,) you had to treat them relatively well so they survive to the payment of the ransom. To be captured was not shameful in Europe, it was part of the “business” of war in a way.
Did the murder of POWs happen in Europe? It did. One has to look no farther than Henry V of England killing POWs after the Battle of Agincourt in 1415. He spared the highest nobles he had (for the ransom). (Keep in mind that at this point, while the English army included noble men-at-arms (knights in armor), its bulk was peasant longbowmen. On the French side, peasants were never trusted with any sort of weaponry, so the whole army was, in fact, made up of feuding noble contingents who were as busy fighting each other as the English. This means that while there were few high-value POWs to be taken from the English army, EVERYONE in the French army was, literally, worth taking…unless you were in a tight spot.)
In this particular case, however, Henry V had more French POWs than he had English soldiers under his command, and it was feared the POWs would figure this out, re-arm themselves, and fight their way free, leaving the English, already deep in French territory, vulnerable or dead. In addition, there were still free French troops in reserve nearby, making rallying calls; if the POWs started a fight, these reserves could join in and kill the English.
This was an unusual practice, and while Henry's decision is highly criticized now, there appear to be few contemporary chroniclers, even French ones, who called him out at the time for excessive brutality in this massacre. That being said, the English knights refused, point blank, to take part in the slaughter, which they viewed as un-chivalric, indicating that it went against some understood morals of the time. The prisoners were therefore killed by English archers, who were peasants. By some accounts, the killing stopped once the free French troops fled, so the tactic may have been a form of psychological warfare in a tight spot. Records show that Henry V ended up shipping hundreds of POWs home to wait for ransom, which proves the rule: in Europe, battle was meant to take prisoners and bankroll their release, not kill for the sake of killing.
It was this kind of battle, and this type of battle “ethic” (of a sort), that led to the high proportion of POWs taken in European battles relative to Japanese ones. Thus most of the German military (many of whose enlisted men were Nazi in name only) were not fanatical enough to want to commit suicide; they'd be returning home, and there was no shame in that.
To be captured in Japan was to be shamed before your family, your community, and your nation.
In Japan, this disdain for POWs was in some ways a relatively new phenomenon. The Japanese had fought in the Russo-Japanese War (1904-1905) and in WWI (1914-1918, when Japan fought with the Allies to keep the Pacific clear of the German Imperial Navy). They took prisoners and were taken prisoner in turn. These early 20th-century POWs were treated with respect, and most were repatriated.
Under traditional bushido (“The Way of the Warrior”), surrender was apparently allowable, but death in battle was lionized. Suicide afterward was permitted as a way to gain honor, especially if you were one of the few survivors; thus you could, in a way, join your dead brothers-in-arms in both memory and legend. Samurai and their families who performed ritualized suicide to join their masters were highly honored in Japanese society.
But as surrender was allowable, though less honorable than death or suicide, POWs were treated with some respect, hence the better treatment of the Russo-Japanese War and WWI POWs.
In the aftermath of WWI, bushido apparently developed in two ways: 1) it became something all people could attain through right behavior, allowing even the lowest-born Japanese person to be honored like a samurai by following bushido closely enough; and 2) the “new” bushido was much harsher. Under it, surrender was no longer merely “less honorable” but outright “dishonorable.” If you were captured, suicide was the only way to restore lost honor. A surrendered person, under this new bushido, was essentially selfish: it meant your own life was more valuable to you than protecting your family, community, and nation. Therefore, many Japanese military personnel (and civilians, as the Allies advanced) preferred death to capture or surrender. The surrendered were, in a sense, less than human.
Which is why, in a reflection of this philosophy, Allied POWs and captured civilians were treated so poorly. (And no, I'm not excusing this treatment, just revealing some of the reasoning behind it.)
And why the Japanese officers aboard the U-234 chose to commit suicide, rather than surrender.
Text of the suicide note left by Cmdrs. Tomonaga and Syozi, according to Paul Tidwell and Richard Billings, authors of The Secret of I-52:
“ It was a great pleasure for us to be able to be together at all times with you and your boat, whether in life or death.
But because of fate, about which we can do nothing, it has become a necessity for us to separate ourselves from you and your boat.
We thank you for your constant companionship and request the following of you:
- Let us die quietly. Put the corpses in the high sea.
- Divide our private possessions among your crew and please take the largest part yourself also.
- Inform Japan of the following as soon as possible:
“Cmdr (Freg. Kapt) Genzo Syozi
” ” ” Hideo Tomonaga
committed suicide on May 1945 on board U-234.”
In closing we express our gratitude for the friendliness of you and your crew and we hope that everything will go well for the Commanding Officer and all of you.
(signed) Genzo Syozi
(signed) Hideo Tomonaga”
Italy had surrendered to the Allies back on September 8, 1943, and Germany on May 8, 1945. As recently as February 1945, at the Yalta Conference, Roosevelt and Churchill talked about 1947 as the year the war would end with victory over Japan. At that time, they figured they would have to transport every capable man, vessel, and weapon to the Pacific and fight the Japanese (likely civilians as well as military) inch by inch across the home islands. While multiple countries were pursuing what would become atomic weapons, no one had yet gotten to the point where one could be used.
Even though the Germans gave up peaceably, not everything went according to plan. While small arms were being collected from the Germans aboard the U-234, Radioman 3c Monroe Konemann was shot in the small of the back when “a German pistol went off in the hand of an American sailor.” U-boat doctor Franz Walter treated Konemann but quickly saw he needed surgery (no room on a U-boat) and another doctor's assistance. Walter and Konemann were transferred to the hastily summoned frigate FORSYTH, which had joined SUTTON during the boarding of U-234. FORSYTH's doctor, Ralph Samson of Columbus, Ohio, worked on Konemann with Walter, and the patient was soon stable enough to be moved. FORSYTH was detached from the U-234's escort and took Konemann to a hospital in Newfoundland. Sadly, Konemann died of internal hemorrhaging ten days later. Still, Walter's efforts to save him were well noted by both the SUTTON and FORSYTH crews.
One goal of mine as a future educator is to create an environment where each student feels a sense of belonging. I have noticed that students who have been in the United States for many years tend to stick with other peers similar to them. Students who are recently new to the United States seem to be segregated from the dominant race as well as their own. It is important for me as a teacher to make sure everybody feels comfortable. One way I plan on doing this is to introduce the class to a project I have been thinking about. The project, “Sailing around the World,” will provide each student an opportunity to teach the class about the culture, country, and demographics of an area in the world that they are from. Students can work in groups with people from similar backgrounds. Each day I will have the ship “dock” in a different part of the world. The students will also be required to tell the class how the ship was able to get there (wind currents, ocean currents, etc.). Another idea I have is for students to pick out of a scientist from their part of the world. The students can write a brief paper that describes who the scientist is, where he is from, and what he is famous for. I think it is important for students to see that scientists and other famous people come from all over the world. On the first day of school, I am going to ask each student to put their name on a placard and draw pictures of their hobbies, what they’re proud of, or what they simply like to do with their free time. After I have learned all of the students’ names, I will hang up their placard on the wall in the class so that all of the students can see what their peers enjoy doing. Not only will this give the student a sense of pride, but students can also see what other students like to do and perhaps form a new friendship. It is also important that students know a little about each of their peers. 
I am going to develop a classroom website where each student will have to post during the first week a brief history of their life. Each student will be required to read and to post on at least five different student’ blogs. By reading the blogs, students can also identify similarities in other students’ blogs and develop relationships. Without using the blogs as a method to communicate, students would probably not be able to see what they have in common with the students outside their immediate group of friends. It is extremely important for me as the teacher to have background information about my students, as well as the students knowing about their peers. Catherine Little stated in her article that she was terrified of animals. I think her teacher could have prevented a traumatic experience by simply having a little background information.
Sunday, October 21, 2012
Monday, October 15, 2012
My lesson planning is designed so that students can retain the information presented. Students learn better and more efficiently when their axons are firing and with the release of endorphines. One of the reasons we get students out of their seats to participate and engage in activity is to get their endorphines flowing. Students can make better connections to material that is taught in an integrated way, rather than as isolated bits of information. Brain-based learning research has shown that the brain grows and adapts in response to external stimuli. When developing lesson plans, teachers must design learning around student interests and make learning contextual. Teachers should structure learning around real problems, encouraging students to also learn in settings outside the classroom. In my classroom, students are taught using many interdisciplinary connections which include science, computers, language arts, fine arts, geography, global history, and health.
Sunday, October 14, 2012
Classroom Management Plan
The classroom strategies I will bring into my classroom are based on the basic philosophies of experimentalists and reconstructionists. My overall philosophy of classroom management is to not just utilize one or two discipline strategies, but to use a variety of different strategies. In my opinion, each strategy has pros and cons, so I think as a teacher it will be in my best interest to use multiple disciplines to create a fair and balanced atmosphere. Out of all of the disciplines, I think the one I identify mostly with is the synergetic discipline. I like the idea of teachers working with students to create an energetic and exciting atmosphere, and when misbehavior does occur, I think it is extremely important to take care of it gently and respectfully. A few other disciplines I identified with include, Positive Classroom, Noncoercive, Discipline with Dignity, and Beyond Discipline. My main focus is to create a synergetic classroom environment by including students in decision-making, having students take responsibility for their own actions, and as a teacher to remain calm and respectful while dealing with misbehavior.
I believe the best preventative approach to misbehavior in the classroom is to have great lesson plans that keep the students engaged and working on assignments until the end of class.
1. The most important strategy I believe in the preventive approach to classroom management is establishing rules to guide the class. Not only do the students need to know the rules of the class, but also as a teacher it is important that I hold class discussions on the rules, their implications, and their consequences (Coloroso, 1994). Teachers cannot assume that the students will read the class syllabus and go over the rules. Instead, teachers should assume that the students would not read the syllabus and rules, and should take a little class time to discuss inappropriate behaviors and the consequences.
2. Not only is it a good idea to go over the rules during the first few days of school, but also it is also important to ask for student input on what the consequences should be for breaking the rules (Glasser, 1985). Another important part in establishing the rules is staying consistent in enforcing the rules and consequences (Glasser, 1985). Teachers need to stay consistent in combating disruptive behavior or students will take this as a sign of weakness and continue with the inappropriate behavior. Normally, it takes one student to get in trouble for students to comprehend that they don’t want to make the same mistake.
3. As a teacher, it is also very important that the teacher is more like a leader and not as a boss (Glasser, 1985). Anybody who has had a boss that “tells” everybody what to do instead of asking knows that telling somebody to do something is not the best approach. A good educator knows how to effectively communicate with students and to explain to them the importance of learning the assignment for the day. Teachers should ask their students to do only work that is useful and try to eliminate busy work (Glasser, 1985). Students will have a better attitude about learning when they know how it will be useful to them later in life.
4. In order to prevent misbehavior, teachers should concentrate on removing the causes of misbehavior (Charles, 2000). One of the problems I have seen in my class is students using their iPods. In order to be effective in combating the problem with iPods, teachers should explain on the first day of school that iPod use in class would not be tolerated. I would explain to the class that if I see earphones or an iPod out, I would immediately take it away and turn it into the VP. After the teacher sets these guidelines, it is very important that they stay consistent.
5. Another great way I will use to prevent misbehavior is to reward positive behavior. During my lessons this year, if students are misbehaving I will stop until I have everybody’s attention. While I am waiting, I will thank the students who are sitting quietly. I have noticed that just by thanking the students will encourage other students to behave more appropriately. I think the use of incentive programs to motivate responsible behavior is a great way to create a positive atmosphere (Jones, 1970s). Students in my classroom will also be rewarded for their good behavior by gaining participation points for the day. In my class, participation is worth twenty percent of the student’s grade, so it is very important that they behave appropriately on a regular basis.
The supportive approach is used to get students back on task. Teachers can use body language to gain students’ attention to get back on track, or they can simply use appropriate lesson planning.
1. I believe the most important aspect to the supportive approach is to always treat the students with dignity (Curwin & Mendler, 1983). When students have a lack of judgment and make a mistake, it is crucial that the teacher still treats the student with dignity and respect. One of the biggest problems a teacher can have is if a student shuts down because they feel like they were embarrassed by the teacher in front of their peers.
2. A supportive approach that I use inside my classroom is to send an individual a secret signal so that other students don’t know (Albert, 1996). As I stated above, it is very important that the teacher does not embarrass any students. Most of the times I will either shake my head towards a particular student or just give them the “eye.” Our classroom is also set up where I can move around in between desks so that I can stand near the inappropriate behavior while I give the lesson. Students often stop the inappropriate behavior when the teacher is standing close by.
3. One of the best supportive approaches is that the curriculum must be organized to meet students’ needs for survival, belonging, power, fun, and freedom (Glasser, 1985). As I stated earlier, I think one of the most important aspects to classroom management is having effective lesson plans. The curriculum needs to be taught where students are continuously challenged and engaged. The lessons also have to be designed in which they aren’t too challenging or boring for students or else there is a possibility where students will just shut down.
4. Providing efficient help to individual students is another great way to combat disruptive behavior (Jones, 1970s). It is crucial that the teacher provides assistance to the “helpless hand raiser.” Students tend to get a little restless when they don’t understand the material, so it is extremely important that the teacher walks around and assists students who need additional help.
5. Give students the opportunity to solve their own problems and ask how they plan on doing so (Coloroso, 1994). I believe this aspect gives the students the opportunity to reflect on the disruptive behavior and it gives them the opportunity to empathize with the teacher. Put them in the teacher’s shoes. How would they feel if their class was disrupted, and what would they do? It is important that students understand that there are reasonable consequences for their actions. The main goal is to get the students to think about what they did and how they would correct the inappropriate behavior.
The corrective approach is how the teacher handles students when they violate the rules. Effective corrective discipline should not intimidate students or be a struggle in power. Corrective discipline should focus on how to stop the disruptive behavior from happening again in the future.
1. Reasonable consequences are when teacher and student jointly agree on a set of reasonable logical consequences (Coloroso, 1994). I agree with this approach that the “punishment has to fit the crime.” I think teachers get this idea that if the punishment is severe, the student won’t misbehave anymore. I could not agree with this more. Going back to the leader vs. boss, most students want to please their teacher if they respect them. Students that are severely punished for a simple mistake would lose all respect for the teacher.
2. Secondly, if the misbehavior is minor enough, I think the teacher should defer discussion to later time and let the anger pass (Curwin & Mendler, 1983). If both the teacher and the student are “fired” up, words could be said out of anger. Minor misbehaviors should be dealt with after school or in between classes out of the view of others. It is sometimes important to let the student calm down for a few minutes before a discussion about a punishment ensues. The student will be most likely be angry and the teacher’s main concern is to diffuse the situation so that it doesn’t cause a bigger disruption.
3. When sitting down with the student, it is extremely important to discuss how the problem started, how the rules were broken, and how to prevent future occurrences (Glasser, 1985). Sometimes the teacher does not get to see the entire disruption, so it is important to discuss with the student exactly what happened. The student may not understand what rule they broke and in order for the student to learn from the misbehavior is to first identify what that behavior was. The teacher and the student should then discuss how the behavior could be prevented in the future.
4. If the behavior in class is a serious infraction, use the Three R’s of reconciliatory justice: restitution, resolution, and reconciliation. That means they need to fix what was done wrong, figure out how to keep it from happening again, and heal with the people they have harmed (Coloroso, 1994). It is extremely important that if a serious infraction takes place during a lesson that the teacher intervenes and takes disciplinary actions immediately. The number one priority for every teacher should be to protect each and every student. Since the student will more than likely remain in the classroom, it is extremely important that all parties involve heal together and come up with a plan to prevent future instances.
5. Since the main goal of disruptions and misbehavior in class is to prevent them from happening again, it is crucial that the teacher finds the first opportunity to recognize a student’s positive behavior after the student receives a consequence (Canter, 1976). As the teacher you want to build the student’s confidence back up after they have been disciplined. At times, students will act out just to get the teacher’s attention. Instead of the student always drawing negative attention, it is very important that the teacher commends the student when they are behaving well in class. Most students want their teachers to see them as “cool” or a nice student, so I believe the more positive attention the teacher gives the class, the more the class will act more positively in return.
The atmosphere of a classroom plays a vital role in student success. Students need to be able to walk into a classroom environment that is welcoming. Studies have shown that students’ achievement levels were lower in schools that modeled more of a prison environment than a learning environment. What type of message are we sending to our children when we send them to schools that are unkept? To me it shows students that we do not care about their welfare or well-being when we send our children to battered and weathered schools. Students should walk into an inviting atmosphere, halls filled with students’ work, bathrooms in good condition, a welcoming office staff, and students helping staff in a variety of roles (Kohn, 1996). Desks in the classrooms should be arranged in groups where students can collaborate with one another and discuss the lesson content. Classroom discussion should include students often addressing one another directly, emphasis on thoughtful exploration of complicated issues, and where students ask questions at least as often as the teacher does (Kohn, 1996).
Start Where Your Students Are:
It is extremely important that teachers recognize what environment works best for their students. Teachers may assume that when they explain something to their students, the students will think the same about it as they do. For example, when a teacher tells the students that they have to do well on a certain test because it will look better for college, the students may not care enough or realize the importance at that time. Now if the teacher knows the students are competitive, they could present the students with a friendly competition. It is also important for teachers to take time and reflect. Why aren’t the students doing their homework? Is it because I am assigning too much? The last thing students want to do when they get home from school is to sit and do twenty pages of notes for one class. Teachers also tend to make too big a deal when students make a mistake (Jackson, 2010). I liked the idea Cynthia had. As a teacher, I would try to make it a learning opportunity and at the same time allow students to redeem themselves. Everybody makes mistakes and students should not have built-up anxiety over a homework assignment. Teachers should instruct their classes to the classroom strength. If the classroom works better as a group, or broken into smaller groups to learn content, the teacher should let them do so. In contrast, if the classroom as a whole likes to work on instruction independently, the teacher should try to have a quieter classroom environment.
The synergetic classroom atmosphere promotes the best learning environment in my eyes. I think the two disciplines I will use the most are the synergetic and noncoercive models. I like how the noncoercive discipline suggests that teaching a quality curriculum is essential to good discipline. The number one priority of mine is to let the students understand that they have a voice in my classroom. By preparing well-developed lessons, my classroom will be fun, engaging, exciting, which will deter students from acting inappropriately.
Thursday, October 4, 2012
Noah Barringer The Sun and Star Factories– Box Format
1. TITLE OF THE LESSON
The Sun and Star Factories
2. CURRICULUM AREA & GRADE LEVEL
Earth Science Grades 9-12
3A. STUDENT INFORMATION: English Language Learners
Maria, 11th grade, CELDT level 2, Mexican-American, first language Spanish, father is a migrant worker and mother is a housekeeper, works well in small groups.
1.) Readiness Level
Maria can read and write at an early intermediate level. She needs assistance with scientific terms. She also struggles with conversational English.
2.) Learning Profile
Kinesthetic and visual.
Maria is interested in social interactions with friends and group work with similar peers.
3B. STUDENT INFORMATION: Students w/ Special Needs
Orion, 9th grade, gifted and talented. First language is English, and he lives at home with his parents, only child
1.) Readiness Level
Reads and writes at least two grades ahead. Extremely intelligent in science and math. Currently in Algebra.
2.) Learning Profile
He works well alone, does not like working in groups. Visual and textual learner. Does not like getting out of his seat and participating in activities.
Science, drawing, and reading
A. Enduring Understanding
The Sun is a major source of the Earth’s energy. It is essential that students understand the different parts of the Sun and what elements the Sun is made of. Students will learn about the different parts of the Sun and how heavier elements are made within stars through nuclear fusion.
B. Essential Questions
If helium, hydrogen, and lithium were the only elements in our Universe after the Big Bang, how do other elements form? How does the Sun give off energy? Which part/s of the Sun does nuclear fusion take place in?
C. Reason for Instructional Strategies and Student Activities
My classes are composed of mostly sophomores with a few students that are in other grades. At the beginning of every chapter or new lesson, I focus on the new vocabulary words they will learn. Students will watch a brief video clip and PowerPoint presentation and then participate in an activity that will demonstrate how nuclear fusion takes place in stars. The activities for the lesson will meet every student’s learning style.
5. CONTENT STANDARD(S)
1e. Students know the Sun is a typical star and is powered by nuclear reactions, primarily the fusion of hydrogen to form helium.
2c. Students know the evidence indicating that all elements with an atomic number greater than that of lithium have been formed by nuclear fusion in stars.
I.E.d. Formulate explanations by using logic and evidence.
6. ELD STANDARD(S)
Respond to messages by asking simple questions or by briefly restating the message.
Identify the main idea and some supporting details of oral presentations, familiar literature, and key concepts of subject-matter content.
Apply knowledge of text connections to make inferences.
Use decoding skills and knowledge of both academic and social vocabulary to read independently.
Demonstrate sufficient knowledge of English syntax to interpret the meaning of idioms, analogies, and metaphors.
7. LEARNING GOAL(S) - OBJECTIVE(S)
After watching the video clip and PowerPoint presentation about the Sun and nuclear fusion (cognitive), students will be able to label and define the different parts of the Sun, describe the process of nuclear fusion, and explain how heavier elements are made by filling out a graphic organizer and participating in a stand-up activity. (language development)(psychomotor)
A. Diagnostic/Entry Level
Students will be assessed by reading and following the directions on the graphic organizer worksheet.
B. Formative-Progress Monitoring
As students complete their graphic organizers, I will circulate around the room to check for student understanding of new definitions. I will also be assessing proper pronunciation of the words when students are required to say them out loud while breaking the words up into syllables. Students will also answer questions on a graphic organizer and participate in a stand-up activity. The teacher will make sure the students follow directions to the activity and do it correctly.
Students will write a paragraph or two on the different parts of the Sun, where nuclear fusion takes place, and what the product of nuclear fusion is while using the new vocabulary words. The graphic organizers will be stamped and graded before they place them in their composition books.
9A. EXPLANATION OF DIFFERENTIATION FOR
ENGLISH LANGUAGE LEARNERS
1.) Content/Based on Readiness, Learning Profile or Interest
Maria is at a CELDT level 2(Early Intermediate), so I will be focusing on the Intermediate level content for Maria. She will use a graphic organizer to follow along the PowerPoint presentation with the teacher to define and label the different parts of the Sun. She will also draw the Sun using a diagram and label the correct parts. After the class writes the definition of the word and draws a picture, all students are required to say the word out loud and clap at the different syllables (e.g. pho (clap) to (clap) syn (clap) the (clap) sis (clap) for photosynthesis) (SDAIE strategy). Maria will also watch a short video clip explaining nuclear fusion and our Sun.
2.) Process/Based on Readiness, Learning Profile or Interest
Students are arranged into groups of about 4. Maria will sit with other bilingual and English speaking students. I circulate while the students are defining, drawing, and saying the words out loud to help students learn the correct meaning and pronunciation of words.
Product/Based on Readiness, Learning Profile or
I will be circulating around the room and students can see me if they need additional assistance. I check the graphic organizers for completion and to make sure they are correct before students glue them into their composition books. I observe the students while participating in the stand-up activity to assess comprehension. I assist the students and read the question orally on a test/quiz to help them understand what the question is asking.
9B. EXPLANATION OF DIFFERENTIATION FOR
STUDENTS WITH SPECIAL NEEDS
1.) Content/Based on Readiness, Learning Profile or Interest
Orion will be sitting in his group and filling out his graphic organizer while following along with the PowerPoint. He will write the definition of the Sun’s different parts, draw a picture of the Sun and label the parts. After the students are done, the class as a whole will be required to say the word out loud and clap at the different syllables.
2.) Process/Based on Readiness, Learning Profile or Interest
Orion does not like working in groups, so he will be able to work on his graphic organizer by himself. After the students complete the vocabulary portion of the lesson they will be required to participate in an activity that gets everybody out of their seats. Orion does not need to participate if he does not want to. He may just observe if he wants.
3.) Product/Based on Readiness, Learning Profile or Interest
Orion may see me if he needs extra clarification or help with any part of the lesson. I will be circulating around the room and students can see me if they need additional assistance. I check the graphic organizers for completion and to make sure they are correct before students glue them into their composition books.
10. INSTRUCTIONAL STRATEGIES
(Describe what the teacher does. Include differentiation strategies.)
A. Anticipatory Set/Into
On the day prior to the lesson, the teacher had students research information regarding our Sun and planets for a project. When students arrive to class, the teacher will go over the schedule and objectives for the day. The students will then work on the warm-up question for the day, which is “Write down as many things you know about our Sun as you can.” While the students are answering the question, I will pass out an H, He, C, O, and Fe to each group. The letters will be glued to a Popsicle stick. Following the warm-up question, the teacher will show a quick video clip about the different parts of the Sun and explain what nuclear fusion is. (10 min)
The teacher will pass out a graphic organizer to each student. After students receive the graphic organizer, the teacher will go through each individual page with the students and explain what is expected of them. The teacher will give a brief PowerPoint presentation on the different parts of the Sun and nuclear fusion. The first few slides will consist of the Sun and it’s parts, and students will follow along, labeling and defining the different parts of the Sun on their diagram. The next slide will be a big explosion, which will simulate the Big Bang. The following slides will explain how nuclear fusion occurs and the life cycle of a star.
C. Guided Practice/Through
The teacher will go through the PowerPoint slides with students, saying and defining each word. The teacher will also show pictures of each word, and identify where each word goes on the Sun diagram. Words will be said out loud by the teacher. The words will be broken into syllables (clapping at each syllable) so that students will know how to pronounce each word correctly (SDAIE strategy). Following the slides, the students will hear a big explosion from the speakers. The teacher will tell the students that the Big Bang just happened and that all of the Hydrogens need to stand up. After they stand up, the teacher will tell them they need to pair up (fuse together). The next step the teacher will tell them that the Hydrogens are running out of fuel, and that the Heliums need to stand up and fuse together. The class will continue doing this until they get to iron. After the class gets to iron, the teacher will have the students with the iron placards accumulate to the center of the class. Finally, the teacher will tell the students that on the count of three the star will explode and that they all have to scatter. (30 min)
D. Independent Practice/Through
The teacher will say and define each new vocabulary word. The teacher will describe each word and explain what part of the Sun it is and what occurs in that specific area of the Sun. (10 min)
At the end of class, the teacher will require the students to paste their graphic organizers into their composition books. The teacher will go over the main ideas and points from the lesson at the end of class. The teacher will ask students to identify the six parts of the Sun, and what nuclear fusion does. (5 min)
The following week, the teacher will quiz the students on new vocabulary words during a test review Jeopardy game in which students will be put into teams to answer specific questions about the new terms. (1 hour)
11. STUDENT ACTIVITIES
(Describe what the students does. Include differentiation activities.)
A. Anticipatory Set/Into
The students worked on a project the day before to answer questions regarding our Sun and planets. The students also answered a warm-up question, “Write down as many things you know about our Sun as you can,” once they arrived to class. Following the warm-up question of the day, students will watch a brief YouTube video clip about the Sun and nuclear fusion. (10 min)
The students will receive a graphic organizer from the teacher. After the teacher gives the students the graphic organizer they will go through the organizer as the teacher gives them instructions on what they will do. The students will then watch a brief PowerPoint presentation about the different parts of the Sun and nuclear fusion.
C. Guided Practice/Through
Students will follow along while the teacher is going through the PowerPoint slides, saying and defining each word. Students will also view pictures of each word and observe where each word goes on the Sun diagram. Students will write the definitions to each word on their graphic organizer. After students are finished writing the definition to the new word, the teacher will instruct the students to say each word as a class (clapping at each syllable of each word). After the students have said the word (clapping at each syllable) two times, the students will be required to say the word as they normally would (SDAIE strategy). Following the slides, the students will hear a big explosion from the speakers. The students will be told that the Big Bang just happened and that all of the Hydrogens need to stand up. After they stand up, students will be told that they need to pair up (fuse together). For the next step, the teacher will tell the Hydrogens that they are running out of fuel, and that the Helium need to stand up and fuse together. The process will continue until the class gets to iron. After the class gets to iron, the teacher will have the students with the iron placards all accumulate in the center of the class. Finally, the students will be told that on the count of three the star will explode and that they all have to scatter. (30 min)
D. Independent Practice/Through
Students will draw and label the parts to the Sun diagram on their graphic organizers, using the model on the PowerPoint slide as a guide. Students will continue to work on their graphic organizers until they are completed. (10 min)
The students will be required to paste the graphic organizers they completed into their composition books. The students will write a summary to the questions the teacher asks. (Parts of the Sun, What nuclear fusion does, etc.) (5 min)
The following week, students will participate in a quiz on new vocabulary words during a test review Jeopardy game in which they will be put into teams to answer specific questions about the new terms. (1 hour)
PowerPoint, graphic organizer, video
Reflection: I think the lesson was designed pretty well. All of my students were able to understand the lesson and complete the graphic organizer. Maria was able to work with a bilingual peer and do well. The lesson was designed for each learning style. At the beginning of class, I asked a warm-up question and showed a quick YouTube clip to gain the students’ interest. I placed the Sun diagram with the different parts labeled and defined on PowerPoint so that all students could see the words while we said them out loud as a class. After we completed the vocabulary part of the lesson, the students had a lot of fun with a nuclear fusion activity. Following the activity, students worked together in think, pair, share, as well as finishing their graphic organizer. Right before the end of class, I was able say the definition of one of the new vocabulary words and call on volunteers. The students will also required to write a quick summary on what they learned for the day and what objectives they covered. Clapping at the syllables was a great SDAIE strategy. Not only could all of the students see the word and definition, but I also said the word and definition out loud, showed them a picture of the word and had them draw one of their own, and finally clapping at the syllables in order to pronounce the word correctly. Overall, I was very pleased with the class working together in groups and being able to do well on this assignment. | <urn:uuid:2f6106b6-6bc5-4b1f-a281-8055bc99054f> | CC-MAIN-2017-17 | http://noahbarringeredss511.blogspot.com/2012_10_01_archive.html | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122739.53/warc/CC-MAIN-20170423031202-00072-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.960547 | 6,756 | 3.328125 | 3 |
What Was 2016 About? Who We Are and What Values We Cherish.tags: 2016 election, Donald Trump
Mark Byrnes is professor of history at Wofford College in Spartanburg, SC.
Not all presidential elections are created equal. Every election is a choice, of course, but the choices are not equally consequential. In some cases, the country seems largely set on what to do, and is debating little more than how to do it (Kennedy-Nixon in 1960). In others, there are more substantial questions of what we as a nation should do (Reagan-Carter in 1980). The most consequential ones, however, come down to the question of who we are as a people, how we define America as a state.
I would argue that 2016 was the last of these.
It was so because Donald Trump made it so.
The 2008 campaign easily could have been one of those, with the Democrats choosing the first African-American major party nominee, with all that choice symbolized about what kind of country this is. While there were certainly moments in the campaign that threatened to veer in that direction, the Republican nominee, Sen. John McCain, stopped his campaign from exploiting that approach. When a woman at one of his town hall meetings said she thought Obama was “an Arab,” McCain stopped her: “No, ma'am. He's a decent family man [and] citizen that I just happen to have disagreements with on fundamental issues and that's what this campaign's all about. He's not [an Arab].” McCain was given the chance to make it a campaign that said I am one of “us” and he is one of “them,” and he insisted it should instead be a campaign about issues.
Those two words—“No, ma’am”—made clear that McCain was determined not to take the low road. He would talk about what we should do, not who we are. He would say “no” to his supporters when they went down that other road. They are also the words Donald Trump never uttered in his campaign rallies, no matter what vile shouts his deliberate rabble-rousing provoked.
Long before he became a candidate, Trump took the low road by becoming the most famous “birther” in America, again and again claiming that he was finding proof that Barack Obama was not born in the US, asserting that Obama was secretly some non-American “other.” What McCain disavowed, Trump took up—with glee. McCain thought there were things more important than winning, an attitude Trump clearly views with utter disdain. To Trump, decency is for losers.
Trump’s birtherism was more than just a way to attract attention (though that may have been its chief attraction for him personally). It was in practice an attempt to repudiate the vision of America that Obama’s presidency represented, an America that defines itself by core beliefs that are available to all people, no matter their race, ethnicity, or religion—rather than by an immutable national type of person.
It is no coincidence that Trump then literally began his campaign by demonizing Mexicans as criminals and rapists. His opening salvo against Mexicans set the tone that he never abandoned: these “other” people are different, they are not good, they do not belong here, they are not “us.” His attack on Judge Curiel demonstrated this perfectly. He said the judge could not be fair to him in the Trump University case because “he’s Mexican.” The fact that the judge was born and raised in the United States did not matter to Trump. “He’s Mexican. I’m building a wall.” For Trump, Curiel’s ethnic heritage was who he was. His birthplace, his profession, his devotion to the law and the Constitution were all irrelevant to Trump. The judge’s identity was his ethnicity, and it was Mexican, not American.
He added to the ethnic dimension a religious one by calling for a ban on Muslims coming into the US. He did not call for a ban on extremists or terrorists. He called for a ban on everyone who adhered to a specific religion. He told CNN: “I think Islam hates us.” Not some Muslims, not even some people from some countries that are predominantly Muslim. “Islam hates us,” he said—ignoring the many American Muslims who are “us.” What that lays bare is that for Trump, Muslims are not “us.” For Trump, they may be here, but they don’t really belong here, because they are not really of “us.”
His positions and policies (and the rhetoric he used to promote them) made it clear that his slogan—“Make America Great Again”—meant that the US should be defined in racial, ethnic, and religious terms: as a predominantly white, Christian country again. His unabashed bigotry throughout his campaign challenged every American to decide: is this who we are? Is America defined by racial, ethnic, and religious traits or is it not?
As I see it, there have long been two competing visions of what the United States is: a country based on an idea or a nation like all the others.
The first argues that the United States is not any particular ethnicity, language, culture, or religion—some of the traits that usually comprise a “nation.” Instead, the United States is fundamentally an idea, one whose basic tenets were argued in the Declaration of Independence and given practical application in the Constitution. At its core, America is the embodiment of the liberalism that emerged from the Enlightenment, which took as a self-evident truth that all people are equal, that all people are fundamentally the same, no matter where they live. They all have basic rights as humans, rights that no government can grant or deny, but only respect or violate. Because this fundamental liberal idea erased the traditional lines that divided people based on race, ethnicity, or religion, it was a “universalist” (or, to use a common term of derision among Trump supporters, “globalist”) concept. It was open to everyone, everywhere. By extension, the American idea (and America itself) was open to everyone, everywhere.
Unlike the situation in other “nations,” since America was an idea, one could become an American by learning about and devoting oneself to that idea. This fact is embodied today in the citizenship test given to those wishing to become Americans: it is a civics test, with questions about American history and government. The final step is taking an oath of allegiance, in which one pledges to support and defend not the “homeland” but the Constitution. The oath is not to territory or blood, but to what we believe and how we do things: to become an American means to believe in certain ideas and commit to living by them.
The other concept of the state is older and more traditional. The United States is a territory, a piece of land. It is also a particular group of people with unique, identifiable national traits that set them apart from others. Trump’s constant refrain about “the wall” perfectly captures this sense of territory in concrete terms. He says that the borders are absolutely essential to defining the nation: “A nation without borders is not a nation at all.” After the Orlando shooting, Trump tied the idea of the nation explicitly to immigration. Eliding the fact that the killer himself was born in the US, he noted that his parents were immigrants and said: “If we don't get tough and if we don't get smart, and fast, we're not going to have our country anymore. There will be nothing, absolutely nothing left.” Immigrants, he suggested, will destroy the country.
This is why the border must be, in his words, “strong” or “secure.” Keeping “our” country means keeping the wrong people out. Otherwise there will be “people who don’t belong here.” While in theory this could be merely about a given immigrant’s legal status, Trump’s rhetoric and proposals give the lie to that—the Orlando killer’s parents were not “illegal” after all, but they were Afghans and Muslims. The wall won’t be on the border with Canada, either. He singles out Mexicans and Muslims, which has the effect of defining who exactly the people who do “belong here” are—those who are white and Christian. Trump’s nonsensical promise that “we are going to start saying ‘Merry Christmas’ again” signals that he will make America Christian again. He told Tony Perkins: “I see more and more, especially, in particular, Christianity, Christians, their power is being taken away.” The passive voice masks who precisely is doing the taking away, but it is not hard to imagine who he means: it must be non-Christians, maybe secularists, maybe Muslims. Either way, “them,” and not “us.” (It is also noteworthy that he says Christians had “power”—which suggests a previous supremacy that’s been lost.)
By striking these themes, Trump has appealed to this traditional, more tribal concept of what America is, or should be: not an idea based on universal principles, but a state rooted in a particular place and with a specific, dominant identity comprised of racial, ethnic, and religious traits that should never change.
The irony is that in doing so, Trump is effectively saying the United States is not really distinctive, at least not in the way it usually thinks of itself. It is a nation like all other nations. Trump has, in fact, explicitly rejected American exceptionalism: “I don't think it's a very nice term. We're exceptional; you're not…. I don't want to say, ‘We're exceptional. We're more exceptional.’ Because essentially we're saying we're more outstanding than you.” While he couched this in business terms, claiming that since the US was being bested in trade it could not claim to be better, he was openly and consciously rejecting a basic tenet of Republican orthodoxy since at least Ronald Reagan. Coming from the standard bearer of the 2016 Republican Party, which has beat the “American exceptionalism” drum relentlessly (especially in the Obama years), that is rather stunning—but it also makes sense from another perspective.
Jelani Cobb wrote recently in the New Yorker that Trump’s political rise represents the “death of American exceptionalism.” He states: “The United States’ claim to moral primacy in the world, the idea of American exceptionalism, rests upon the argument that this is a nation set apart.” By emulating the “anti-immigrant, authoritarian, and nationalist movements we’ve witnessed in Germany, the U.K., Turkey, and France,” Cobb argues, Trump forfeits that American “claim to moral superiority.”
I agree with Cobb, but I think it goes even deeper than he suggests: it is a rejection of the idea-based definition of what America is and a reversion to an older, European one. American exceptionalism not only encompassed a moral claim, not only set the United States apart from other nations. It even—or maybe especially—set the US apart from those places from which most of its founding generation fled: the states of Europe. Here in America, the thinking went, the people will create something new and different, based on first principles and following the dictates of reason, unrestrained by tradition, culture, religion—by anything but the best ideas. In Thomas Paine’s famous words, “we have it in our power to begin the world over again.” The United States would show the world what could be accomplished when free people creating a new state had the chance to write on John Locke’s tabula rosa. (It should go without saying that this was never literally true, but rather an ideal to which people aspired.)
In doing so, Americans were effectively saying: “We are not our European ancestors. We are different. They are tribal, we are not.” For most of the 19th century and well into the 20th, American isolationism was based on the foundational idea that the US, despite its ancestry, was decidedly not European. It would not be ruled by Europe and it would not be drawn into Europe’s tribal squabbles. The US was different—and better. It may have been borne of Europe, but it would supersede it and show it a better way.
More often than not in recent decades, it has been American conservatives who have shown disdain for Europe, sneering at the idea that the US should look to Europe for ideas or leadership of any kind: in law, in public policy, in diplomacy. But scratch the surface and what we see is not contempt for Europe per se but for liberalism as it has developed in Europe since the end of World War II. As right-wing, anti-liberal movements have grown in Europe, so has American conservatism’s appreciation for what Europe has to teach Americans.
As Cobb points out, what is striking about Trump is how much his program resembles that of right-wing extremists in European states who reject that better way America sought to offer in favor of the old European way. Trump’s program is not uniquely American. Arguably, it is following an ancient pattern set in Europe that is rearing its ugly head again in the 21st century. (Trump himself said his election would be “Brexit times 10”—bigger, but not original.) Trump is following more than he is leading, copying a formula that has had some success elsewhere, one that is far from uniquely American. It is, if anything, uniquely European—in the worst sense.
Recently the New York Times had an article on how the far-right European movements have adopted Vladimir Putin as their hero, for his defense of “traditional values.” It quotes an American white Christian nationalist praising Putin: “I see President Putin as the leader of the free world.” (His definition of “free” must be markedly different from the one that has dominated in American political culture, but the framing is telling. Theirs is not the freedom of the Enlightenment, but rather freedom from the threat of the non-western or non-traditional “other.”)
Most American pundits, still caught in a cold-war paradigm, marveled at Trump’s embrace of Putin, and could not understand how it failed to discredit him as it seemingly should have (even this past weekend’s stories on the CIA’s conclusion that Russia sought to help Trump in the election has yet to leave a mark on him). Those critics failed to see that a new paradigm has completely eclipsed that of the cold war. They missed the fact that, despite his KGB pedigree, Putin has transformed himself into “a symbol of strength, racial purity and traditional Christian values in a world under threat from Islam, immigrants and rootless cosmopolitan elites.” In the new paradigm, these are the new enemies, the real enemies of the 21st century. Communists have been vanquished. Islamists, immigrants, globalists, “others” of all kinds, have taken their place. The cold war was a battle of ideologies; this is a battle of identities.
If this take is correct, the combination of Trump’s willingness to jettison American exceptionalism and his embrace of Putinism as “real” leadership portends a significant transformation of what it means to be an American. Rather than a country built on ideas and principles, which defines itself by its devotion to those principles, Trump’s America is simply one (albeit the most powerful) of the many western tribes beating back the “uncivilized” hordes that threaten to undermine the white, Christian traditional identity of the west. In such a world, embracing Putin as a partner makes sense—even if he does have journalists and other political enemies murdered or imprisoned. Embracing anti-liberal autocrats and dictators in order to destroy ISIS becomes not a necessary evil, but a positive good, a desirable state of affairs, a restoration of an ancient European unity against the infidel.
Implicit in this view is a rejection of Enlightenment liberalism. Once you jettison the commitment to an idea and embrace a politics based on racial, ethnic, and religious identity, showing a reckless disregard for democratic norms and processes (as Trump reflexively does) is natural, since those things have no inherent value. How we do things does not matter—all that matters is who we are and what we must do to protect that essential identity. Since American identity is not defined by principles of any kind, it is not important to have principles of any kind. The only standard by which to judge right and wrong is success in defending the homeland from the “other.” So Trump can blithely pledge to restore “waterboarding and a hell of a lot worse than waterboarding” with no qualms whatsoever. After all, he asserts, “torture works.”
Trump has made clear repeatedly that that is his only standard: what works. When asked by the Wall Street Journal after the election whether he had gone too far with his rhetoric during the campaign, he said flatly: “No. I won.” His worldview is entirely instrumental: what works is right, what fails is wrong. Nothing could be more fundamentally opposed to a commitment to liberal process, which values process as a good in itself, as the glue that holds together people with different views and beliefs.
When Marxists, following the logic of economic determinism, claimed that class created identity, fascists countered with racial determinism: the blood determined identity. What has always set liberalism apart from these extremist ideologies is the belief that people create their own identities. As rational beings, we can create who we are by deciding what we believe. We are not merely the products of race, or ethnicity, or class. We are who we choose to be.
What made this election so consequential is that it posed the question of who Americans are as a people as clearly as it has been since 1860. Hillary Clinton’s campaign recognized this with its slogan: “Stronger Together.” Trump’s strategy was to encourage white Christian nationalism, and Clinton’s was to say we cannot go back to some tribal concept of American identity. What has disturbed so many of us about Trump’s elevation to the presidency is not simply that our candidate didn’t win. It is that the choice that 46.2% of the voters made is so antithetical to our vision of what America can and should be. It threatens a reversion to a more primitive tribalism that has proved so horrifically destructive in the past. We know the history. We know the danger. That is why this was no normal election and this will be no normal presidency. This country is about to be tested as it has not been since the 1860s, and the outcome is not at all clear.
comments powered by Disqus
- Before Ivanka Trump, other presidential daughters also wielded influence at the White House
- South Carolina Republican: scrap slave memorial if Confederate monument goes
- A 130,000-Year-Old Mastodon Threatens to Upend Human History
- Trump just promised the biggest tax cut in history
- An African Diaspora group at Columbia University draped a KKK hood over Thomas Jefferson
- Accused plagiarist Matthew Whitaker wins arbitration case against City of Phoenix over police contract
- Niall Ferguson says the liberal international order has passed its peak
- Nathaniel Philbrick wins the $50,000 2017 George Washington Prize
- In an interview Jill Lepore explains how she writes and the writers she admires most
- Trump is no Hitler – he’s a Mussolini, says Oxford historian | <urn:uuid:0a449414-77d8-4355-bf9a-108ff8886c09> | CC-MAIN-2017-17 | http://m.hnn.us/blog/153856 | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122955.76/warc/CC-MAIN-20170423031202-00485-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.974976 | 4,208 | 2.53125 | 3 |
必修二 Unit 1 In search of the amber room

Frederick William I, the king of Prussia, could never _________ (imagine) that this greatest gift _______ the Russian people would have such an _________ (amaze) history. This gift was the Amber Room, ______ was given this name because several tons of amber ________ (use) to make ______. The amber which was selected had a beautiful yellow-brown colour _____ honey. The design of the room was in the fancy style popular in those days. It was also _____ treasure _________ (decorate) with gold and jewels, ______ took the country’s best artists about ten years to make.

In fact, the room was not made to be a gift. It was designed for the palace of Frederick I. ________, the next king of Prussia, Frederick William I, ______ whom the amber belonged, decided not to keep _____. In 1716 he gave it to Peter the Great. In return, the Czar sent him a troop of his best soldiers. So the Amber Room became part of the Czar’s winter palace in St Petersburg. About four meters long, the room served ______ a small reception hall for important visitors.

Later, Catherine II had the Amber Room moved to a palace outside St Petersburg _______ she spent her summers. She told her artists to add more details to it. In 1770 the room was completed the way she _______ (want). Almost six hundred candles lit the room, and its mirrors and pictures shone like gold.

Sadly, _____ the Amber Room was considered one of _____ wonders of the world, it is now _______ (miss). In September 1941, the Nazi army was near St Petersburg. This was a time ______ the two countries were at war. Before the Nazis could get to the summer palace, the Russians were able to remove some furniture and small art objects from the Amber Room. However, some of the Nazis _________ (secret) stole the room itself. In less than two days 100,000 pieces were put inside twenty-seven wooden boxes. There is no doubt ______ the boxes were then put ____ a train for Konigsberg, which was at that time _____ German city on the Baltic Sea. After that, _______ happened to the Amber Room ________ (remain) a mystery.
Recently, the Russians and Germans ____________ (build) a new Amber Room at the summer palace. _____ studying old photos of the former Amber Room, they have made the new one look like the old one. In 2003 it was ready for the people of St Petersburg ______ they celebrated the 300th birthday of their city.

Units 1

1. 在日本,人们在进屋之前须脱鞋子。
In Japan, people should _________ their shoes ______ they come inside.
2. 这个问题值得讨论。
The problem _______________________.
3. 除非我们净化我们的环境,否则人类可能将无法生存。
_____ we clean up our environment, human beings may not ______.
4. 这是目前为止我所看过的最好的电影中的一部。
This is one of the best films ___ I ____ ever seen.
5. 他是否能通过考试仍有待证实。(remain)
_________________________ he will pass the exam or not.
6. 我宁可在家里清洁家具也不出去购物。
I _____________ stay at home to clean the furniture ___________________ go out shopping.
7. 你知道这花瓶是属于谁的吗?
Do you know whom this vase ____________?
8. 她给我们食物和衣服,没有要求任何回报,这一切都让我们很感激。
She gave us food and clothes and _________ nothing ______________, _______ made us very grateful.
9. 他收到了一份礼物,难怪他那么开心。
He had received a gift. _________________ he was so happy.
10. 虽然他自己并没有觉得做了什么特殊的事情,他的同事们却对他交口称赞 (think highly of)。
His colleagues _______________________ him though he himself didn’t think he had done anything special.
Unit 2 An interview

Pausanias, ______ was a Greek writer about 2000 years ago, has come on a magical journey on March 18th, 2007 ______ (find) out about the present-day Olympic Games. He is now interviewing Li Yan, _____ volunteer for the 2008 Olympic Games.

P: My name is Pausanias. I lived in _____ you call “Ancient Greece” and I used to write about the Olympic Games a long time ago. I’ve come to your time to find _____ about the present-day Olympic Games because I know _____ in 2004 they were held in my hometown. May I ask you some questions about the modern Olympics?
L: Good heavens! Have you really come from so long ago? But of course you can ask ______ questions you like. What would you like to know?
P: How often do you hold your Games?
L: Every four years. There are two main sets of Games - the Winter and the Summer Olympics, and both are held every four years ______ a regular basis. The Winter Olympics are usually held two years before the Summer Games. Only athletes ______ have reached the agreed standard for their event will _______ (admit) as competitors. They may come from anywhere in the world.
P: Winter Games? How can the runners enjoy __________ (compete) in winter? And what about the horses?
L: Oh, no! There are no running races ______ horse riding events. Instead there are competitions like skiing and ice skating ______ need snow and ice. That’s _______ they are called the Winter Olympics. It is in the Summer Olympics ______ you have the running races, together with swimming, sailing and all the team sports.
P: I see. Earlier you said that the athletes are invited from all over the world. Do you mean the Greek world? Our Greek cities used to compete _______ each other just for the honour of winning. No other countries could join in, _____ could slaves or women!
L: Nowadays any country can take part ______ their athletes are good enough. There are over 250 sports and each one has its own standard.
Women are not only allowed, _____ play a very important role in gymnastics, athletics, team sports and…
P: Please wait _____ minute! All those events, all those countries and even women taking part! Where are all the athletes ________ (house)?
L: For each Olympics, a special village is built for them to live ______, a main reception building, several stadiums for competitions, and a gymnasium as well.
P: That sounds very expensive. Does anyone want to host the Olympic Games?
L: As a matter of fact, every country wants the opportunity. It’s a great _________ (responsible) but also a great honour to be chosen. There’s as much competition among countries to host the Olympics as to win Olympic medals. The 2008 Olympics will be held in Beijing, China. Did you know that?
P: Oh yes! You must be very proud.
L: Certainly. And after that the 2012 Olympics will be held in London. They have already started ______ (plan) for it. A new village for the athletes and all the stadiums will be built to the east of London. New medals will be designed of course and…
P: Did you say medals? So even the olive wreath has been replaced! Oh dear! Do you compete for prize money too?
L: No, we don’t. It’s still all about being able to run ________ (fast), jump higher and throw further. That’s the motto of the Olympics, you know - “Swifter, Higher and Stronger.”
P: Well, that’s good news. How interesting! Thank you so much for your time.

The story of Atlanta

Atlanta was _____ Greek princess. She was very beautiful and could run _____ (fast) than ____ man in Greece. But she was not allowed to run and win glory for herself in the Olympic Games. She was so angry _____ she said to her father that she would not marry anyone ______ could not run faster than her. Her father said that she must marry, ____ Atlanta made a bargain ________ him. She said to him, “These are my rules. _______ a man says he wants to marry me, I will run against him. If he cannot run as fast as me, he will be killed.
No one will ________ (pardon).” Many kings and princes wanted to marry Atlanta, _____ when they heard of her rules they knew _____ was hopeless. So
many of them sadly went home, but others stayed to run the race. There was a man called Hippomenes _____ was amazed when he heard of Atlanta’s rules. “Why are these men so foolish?” he thought. “Why will they let ______ be killed because they cannot run as fast as this princess?” However, when he saw Atlanta come out of her house to run, Hippomenes changed ______ mind. “I will marry Atlanta—or die!” he said.

The race started and _______ the men ran very fast, Atlanta ran faster. As Hippomenes watched he thought, “How can I run as fast as Atlanta?” He went to ask the Greek Goddess of Love _______ help. She ________ (promise) to help him and gave him three golden apples. She said, “Throw an apple in front of Atlanta when she is running past. When she stops to pick ______ up, you will be able to run past her and win.” Hippomenes took the apples and went to the King. He said, “I want to marry Atlanta.” The King was sad to see ________ man die, but Hippomenes said, “I will marry her—or die!” So the race began.

Units 2

1. 在奥运会上所放飞的鸽子象征着和平。
At the Olympic Games, the doves released at the opening ceremony ___________ peace.
2. 那些支持主席的人必须参加一场辩论。(stand for)
Those people _________________ president must take part in a debate.
3. 当午餐的铃声敲响的时候,学生们一个接一个走出教室。
When the bell _____ for lunch, the students went out of the classroom ________________________.
4. 我们用电脑取代了老式的加法计算器。
We've _________ the old adding machine _____ a computer.
5. 经理不在时,他负责这个商店。(in charge of)
He was left _____________ the shop while the manager was away.
6. 事实上,最大的荣誉是作为最后一名火炬手把火炬带入到举办奥运会的赛场。
As a matter of fact, the greatest ____ is to be the last athlete ____ carries the torch into the stadium _____ the Olympic Games will be held.
7. 他最终向警察承认他也加入了犯罪活动。
He finally ____________ the police that he ____________ part in the crime as well.
8. 作为学校的校长,他每天必须处理许多的问题。
As principal of the school, he must _____________ many problems every day.
9.
他训练了很长的一段时间,所以获得比赛的胜利是他应得的。
He’s been training for a long time, so he _________________________ win the race.
10. 我不喜欢讲价,幸运的是我不需要做了,因为这双鞋的确非常划算。
I don’t like to bargain and luckily I didn’t have to because these shoes were _________________.

Unit 3 Who am I

Over time I __________ (change) quite a lot. I began ______ a calculating machine in France in 1642. ________ I was young I could simplify difficult sums. I developed very slowly and _______ took nearly two hundred years ______ I was built as an analytical machine by Charles Babbage. After I was programmed by an operator ______ used cards with holes, I could “think” _________ (logical) and produce an answer quicker than _______ person. At that time it was considered a __________ (technology) revolution and the start of my “artificial intelligence”.

In 1936 my real father, Alan Turing, wrote a book _______ how I could be made to work as a “universal machine” to solve any difficult mathematical problem. From then _______, I grew rapidly ________ in size and in brainpower. By the 1940s I _______ (grow) as large as a room, and I wondered ________ I would grow any larger. ________, this reality also worried my designers. As time went ______, I was made smaller. First as a PC (personal computer) and then as a laptop, I have been used in offices and homes since the 1970s.

These changes only became possible _______ my memory improved. First it was stored in tubes, then on transistors and
later on very small chips. As a result I totally changed my shape. As I have grown older I have also grown smaller. Over time my memory has developed so much _______, like an elephant, I never forget anything I have been told! And my memory became so large that even I couldn’t believe ______! But I was always so _______ (lone) standing there ______ myself, until in the early 1960s they gave me a family ________ (connect) by a network. I was able to share my knowledge with ________ through the World Wide Web.

Since the 1970s many new applications have been found for me. I have become very important ________ communication, finance and trade. I have also been put in robots and used to make mobile phones as well as help with medical operations. I have even been put into space rockets and sent to explore the Moon and Mars. Anyhow, my goal is to provide humans _____ a life of high quality. I am now truly filled with _______ (happy) that I am a _______ (devote) friend and helper of the human race!

Andy - the android

……

My first football competition was in Nagoya, Japan several years ago. Last year our team went to Seattle, Washington in the USA. We won second place. ___________ (personal), I think the team ____ won first place cheated. They had developed a new type of program just before the competition. So we need to encourage our programmer to improve our intelligence too. We are determined to create an even ______ (good) system.

In a way our programmer is like our coach. She programs ______ with all the possible moves she has seen while _______ (watch) human games. Then she prepares reliable moves to use ______ a new situation arises. In this way I can make up new moves _____ (using) “my artificial intelligence”. I would really like to play against a human team, ______ I have been programmed to act just like them. After all, with the help of my electronic brain _______ never forgets anything, ______ (use) my intelligence is _______ I’m all about!

Units 3

1.
在朋友的帮助下,我最终完成了这项工作。 Finally, I was able to finish the work _______________ my friends.
2. 我们需要探索不同的方式来实现我们的人生目标。 We need to _______ different ways of ______________________.
3. 他下载了很多程序,以致他的电脑无法负荷而崩溃了。 He had downloaded so many programs onto his computer that it ___________.
4. 由于新一轮抛售的波浪,股市出现了进一步的下跌。(as a result of, there be, a further fall) _______________ a fresh wave of selling, there was a further ____ in the stock market.
5. 政府已经采取措施解决广州的交通问题。 The government __________________________ the traffic problems in Guangzhou.
6. 如果出现的任何困难,请打电话给我。 If any problems _______, call me.
7. 在现代的社会,几乎所有的东西都电子化了。 Almost everything is electronic ________________.
8. She will be ________________________, signing copies of her latest novel.
9. 这个母亲正看护着她熟睡的孩子。 The mother is ______________ her sleeping child now.
10. 事实上每个族群的文化虽有不同和独特的习俗,却有共通的做人宗旨。(in reality) In reality, all cultures have different and unique practices, but they also ________________ of doing good.

Unit 4 How Daisy learned to help wildlife
Daisy had always longed _____ help ________ (endanger) species of wildlife. One day she woke up ______ found a flying carpet by her bed. “Where do you want to go?” it asked. Daisy responded immediately. “I’d like to see some endangered wildlife,” she said. “Please take me to a distant land _______ I can find the animal that gave fur to make this sweater.”

At once the carpet flew away and took ______ to Tibet. There Daisy saw an antelope _______ (look) sad. It said, “We’re ________ (kill) for the wool beneath our stomachs. Our fur is being used to make sweaters for people _____ you. As a result, we are now an endangered species.” At that Daisy cried, “I’m sorry I didn’t know that. I wonder _____ is being done to help you. Flying carpet, please show me a place where there’s some wildlife protection.”

The flying carpet travelled so fast that next minute they were in Zimbabwe. Daisy turned _______ and found that she ____________ (watch) by an elephant. “Have you come to take my photo?” it asked. In relief Daisy burst into _______ (laugh). “Don’t laugh,” said the elephant, “We used to be an endangered species. Farmers hunted us ______ mercy. They said we destroyed their farms, and money from tourists only went to the large tour companies. So the government decided to help. They allowed tourists to hunt only ______ certain number of animals if they paid the farmers. Now the farmers are happy and our numbers are increasing. So good things are being done here to save local wildlife.” Daisy smiled. “That’s good news. It shows the importance of wildlife protection, but I’d like to help as the WWF suggests.”

The carpet rose again and almost at once they were in a thick rainforest. A monkey watched them _____ it rubbed itself. “What are you doing?” asked Daisy. “I’m protecting myself ______ mosquitoes,” it replied. “When I find a millipede insect, I rub it over my body. It contains a powerful drug which _______ (affect) mosquitoes. You should pay more attention to the rainforest where I live and appreciate ______ the animals live together. No rainforest, no animals, no drugs.”

Daisy was _______ (amaze). “Flying carpet, please take me home so I can tell WWF and we can begin producing this new drug. Monkey, please come and help.” The monkey agreed. The carpet flew home. As they landed, things began to disappear. Two minutes later everything had gone—the monkey, too. So Daisy was not able to make her new drug. But ______ an experience! She had learned so much! And there was always WWF…

Animal extinction

Many animals have disappeared during the long history of the earth. The ______ famous of these animals are dinosaurs. They lived on the earth tens of millions of years ago, long _______ humans came into being, and their future seemed secure at that time. There were many different kinds of dinosaur and a number of them used to live in China. The eggs of twenty-five species have been found in Xixia County, Nanyang, Henan Province. Not long ago a rare new species of bird-like dinosaur _________ (discover) in Chaoyang County, Liaoning Province. ______ scientists inspected the bones, they were surprised _______ (find) that these dinosaurs could not only run like the others _____ also climb trees. They learned this from the way the bones were joined together.

Dinosaurs died out _______ (sudden) about 65 million years ago. Some scientists think ______ came after an unexpected incident when a huge rock from space hit the earth and put too much dust into the air. Others think the earth got too hot for the dinosaurs to live on any more. Nobody knows for sure why and ______ dinosaurs disappeared from the earth in such _____ short time. We know many other wild plants, animals, insects and birds have died out more recently. According _____ a UN report, some 844 animals and plants _________ (disappear) in the last 500 years. The dodo is one of _______.
It lived on the Island of Mauritius and was a very ________ (friend) animal. Please listen to a short story of the dodo and how _____ disappeared from the earth.

Unit 4 Exercises
1. 由于人类的过度猎捕,许多动物都已经灭绝了。 ____________ over-hunting by human beings, many animals have ______________.
2. 我们都想过平静安定的生活。 We all want to _____________________.
3. 他不肯学习,面临着期末考试不及格的危机。 He refuses to study and is __________________________ his final exams.
4. 飞机安全着陆之后,一些人喜笑颜开,而另一些人欣慰地哭了。 After the plane landed safely, some people burst into laughter while others ____________________.
5. 开车的时候,你应该注意路标。 While driving, you should ____________________ the road signs.
6. 这种药物含有一些可能会影响你健康的化学成分。 This medicine contains some chemicals that may negatively _______________________.
7. 没人确切知道这一风俗是何时开始形成的。 No one knows exactly when this custom first _______________________.
8. 如果你想获得成功的话,你必须学会感恩。 You must ____________ others if you want to succeed.
9. 政府提出了一项新的政策,旨在保护城市里一些重要的历史遗迹。 The government has ________________ a new policy aimed at ____________ the city’s important historic sites.
10. 根据气象报告,今天将会是一个晴朗的日子。 _______________________, it will be sunny today.

Unit 5 The band that wasn’t

Have you ever wanted to be part of a band as a famous singer or musician? Have you ever dreamed _____ playing in front of thousands of people at a concert, at ______ everyone is clapping and appreciating your music? Do you sing karaoke and pretend you are a famous singer like Song Zuying or Liu Huan? To be honest, a lot of people attach great importance to ___________ (become) rich and famous. But just how do people form _____ band?

Many musicians meet and form a band ______ they like to write and play their own music. They may start as a group of high-school students, for _______ practising their music in someone’s house is the first step to fame. Sometimes they may play to passers-by in the street or subway so that they can earn some extra money ______ themselves or to pay for their instruments. Later they may give performances in pubs or clubs, for _______ they are paid in cash. Of course they hope to make records in a studio and sell millions of copies to become millionaires!

However, there was one band that started _______ a different way. It _______ (call) the Monkees and began as a TV show. The musicians were to play jokes on each other as well as play music, most of which was based ______ (loose) on the Beatles.
The TV organizers had planned to find four musicians who could act as well as sing. They put _____ advertisement in a newspaper ________ (look) for rock musicians, but they could only find _____ who was good enough. They had to use actors for the other three members of the band. As some of these actors could not sing well enough, they had to rely on other musicians to help them. So during the broadcasts they just pretended to sing. Anyhow their performances were ________ (humor) enough to be copied by other groups. They were so popular that their fans formed clubs in order to get more familiar _____ them. Each week on TV, the Monkees would play and sing songs _______ (write) by other musicians. _______, after a year or so in which they became more serious about their work, the Monkees started to play and sing their own songs like a real band. Then they produced their own records and started touring and playing their own music. In the USA they became even _______ (popular) than the Beatles and sold even more records. The band broke up about 1970, but happily they reunited in the mid-1980s. They produced a new record in 1996, _____ which they celebrated their former time as a real band.

Freddy the frog

Not long after Freddy and the band became famous, they visited Britain ______ a brief tour. Fans showed their devotion by _______ (wait) for hours to get tickets for their concerts. Freddy was now quite confident ______ he went into a concert hall. He enjoyed singing and all the congratulations afterwards! His most exciting _______ (invite) was to perform on a TV programme called “Top of the Pops”. He had to go to London, wear an expensive suit and give a performance to a TV camera. It felt very strange. But as soon as the programme was over, the telephones which were in the same room started ringing. Everybody was asking when they could see Freddy and his band again. They were ______ (true) stars.
Then things went wrong. Freddy and his band could not go out anywhere _______ being followed. Even when they wore sunglasses or beards people recognized them. Fans found them even when they went to the toilet. They tried to hide in the reading rooms of libraries, but it was _______ (use). Someone was always there! Their personal life was regularly discussed by people who did not know them but ______ (talk) as if they were close friends. At last, feeling very upset and ________ (sense), Freddy and his band realized that they must leave the country before it became too painful for them. So they left Britain, to ______ they were never to return, and went back to the lake.

Unit 5 Exercises
1. 我梦想将来的某一天能周游全世界。 I __________________________________ one day in future.
2. 说实话,我要在公共场合表演还是太紧张了。 _________________, I’m _____________ to perform in public.
3. 有时候他们在街头或地铁里为过路人演奏来挣些额外的钱。 Sometimes they may play to passers-by in the street or subway so that they can ____________________.
4. 他总是跟他的同班同学开玩笑。 He always _______________ his classmates.
5. 我很独立,不喜欢依赖别人来获得帮助。 I am ______________ and don’t like to ____________ others for help.
6. 在我还没来得及熟悉这个乐队之前,他们就解散了。 Before I had the chance to _________________________ the band, they ___________.
7. 除了自信,她还非常漂亮也非常敏感。 ______________________ being confident, she is also ___________ and sensitive.
8. 然而,在一年左右之后,门基乐队开始演唱他们自己的歌曲了。 However, after a year __________, the Monkees started to play and sing __________ songs.
9. 所有的付款都必须以现金的方式,不允许使用信用卡。 All __________ must be made _____________ and no credit cards are ________.
10. 我将把你的友谊视为是最重要的。 I value your friendship ______________ else.
Bird Conservation International (2006) 16. © BirdLife International 2006. Printed in the United Kingdom.

How to manage human-induced mortality in the Eagle Owl Bubo bubo

JOSÉ A. MARTÍNEZ, JOSÉ E. MARTÍNEZ, SANTI MAÑOSA, IÑIGO ZUBEROGOITIA, JOSÉ F. CALVO

Summary
The Eagle Owl Bubo bubo, which feeds mainly on rabbits and partridges, has been persecuted widely for causing damage to game interests. Although it is a protected species throughout Europe, there is a noteworthy gap in the scientific literature on the causes of mortality in this top predator. Here, we assess the relative importance and the geographical and temporal variation of human-related causes of death by reviewing 1,576 files of individuals admitted to wildlife rescue centres in Spain, a stronghold for Eagle Owls. The main known cause of death was interaction with powerlines, followed by persecution and collisions with game fences and cars. There were within-year variations in the distribution of persecution, electrocution and collisions with game fences. Some man-induced causes of mortality were seen to depend on both the geographical region and the period of the year; moreover, mortality within each region was also year-dependent. Since there are strong socio-economic and ethical components involved, management guidelines are discussed bearing in mind such points of view.

Introduction
The Eagle Owl (Bubo bubo) is one of several birds singled out by governments and hunters as the cause of problems to game interests (Kenward 2002). It is a top avian European predator (Mikkola 1983) and it is known to live at high and increasing densities throughout Spain (Martínez and Zuberogoitia 2003a, Penteriani et al. 2005). Several studies have pointed to the importance of rabbits and Red-legged Partridges in the diet of the Eagle Owl in Spain (Hiraldo et al. 1976, Donázar and Ceballos 1984, Serrano 1998, 2000, Martínez and Zuberogoitia 2001, Martínez and Calvo 2001).
However, the extent of predation is still largely unknown: for example, it remains to be determined whether Eagle Owls reduce the number of young rabbits or partridges to the point of reducing pre-harvest (autumn) hunting bags (Redpath and Thirgood 1999). Small game hunting is a socio-economically important activity (Lucio and Purroy 1992, Villafuerte et al. 1998), and hunters blame Eagle Owls (among others) for depleting their bags, which on many occasions are the result of expensive restocking operations. Consequently, Eagle Owls are persecuted across the Iberian peninsula, and are locally culled (Zuberogoitia et al. 1998, Martínez et al. 2003b). Persecution was deemed responsible for the extinction of the Eagle Owl in large areas of Europe, such as northern Germany in 1830, the Netherlands in the late nineteenth century, Luxembourg in 1903, Belgium in 1943, central and western Germany in the 1960s (Niethammer and Kramer 1964, Herrlinger 1973) and the north of Spain (Zuberogoitia et al. 2003), although electrocution and collision with
powerlines emerged as a new, more worrying cause of mortality during the last century (Marchesi et al. 2002, Mañosa 2002). Energy demands have increased exponentially and the number of avian fatalities due to dangerous pole design or siting lines in environmentally sensitive areas continues to increase, with consequent effects on bird populations (Mañosa 2002, Sergio et al. 2004a). However, to our knowledge, there is no specific agency in Europe (equivalent to the PIREA in the United States) which deals with developing cost-effective approaches for evaluating and resolving the impact of energy generation, transmission and use on bird populations.

The aims of this study were: (a) to ascertain the main causes of mortality of Eagle Owls in Spain; (b) to detect possible elements affecting spatio-temporal patterns of such human-induced mortality; and (c) to propose management guidelines in an attempt to reduce such mortality.

Methods
We collected records of dead or fatally injured Eagle Owls from bird rehabilitation centres and birding associations across Spain over the study period (n = 1,576). Three variables were considered for analysis: cause of death (persecution, electrocution, other causes), region (South: Andalusia; East: Catalonia, Community of Valencia and Region of Murcia; Centre: Community of Madrid, Castilla-León, Castilla-La Mancha and Extremadura; North: Galicia, Asturias and Basque Country) and year. Due to the small sample size, data from Extremadura were pooled with those from Castilla-La Mancha. Not all these variables were available for every entry and, therefore, sample size varies between analyses. We also considered within-year variations in owl mortality. Although some studies divide the year into 3-month periods to study seasonal patterns in mortality (Rubolini et al.
2001), such division does not match the annual cycle of the Eagle Owl in southern latitudes (courtship: October–January, 4 months; laying: February–March, 2 months; post-fledging dependence period and dispersal: April–September, 6 months; Martínez and Zuberogoitia 2003b; authors' unpublished data). Therefore, we studied variations of the main causes of death per month.

We tested for possible interactions between causes of death, region and year by means of log-linear models (Real et al. 2001). Models were selected using the backwards stepwise method. Factors were retained or not according to the likelihood ratio χ². Then, we built contingency tables for the interacting variables achieving statistical significance (α = 0.05) by χ² tests. We considered that the observed cell frequencies were significantly different from the expected frequencies when the absolute value of the standardized residuals was greater than z_α/2.

Results
Causes of mortality
Within the three major causes of death, mortality was distributed as follows (Table 1): (1) powerlines (20.1%), i.e. electrocution (16.3%), collision (1.8%) and unknown causes related with powerlines (2%); (2) persecution (19.2%), with shooting (11.8%) prevailing over nest robbery or captivity (6.2%) and poisoning (1.2%); and (3) other
Table 1. Causes of death of Eagle Owls in several regions of Spain between 1989 and the end of the study period (numeric cell values not recoverable in this copy; the table structure is listed below).
Columns (regions): South (Andalusia); East (Community of Valencia, Catalonia, Region of Murcia); Centre (Community of Madrid, Castilla-La Mancha, Castilla-León, Extremadura); North (Galicia, Asturias, Basque Country); Total.
Rows (causes):
Persecution: Shooting; Nest robbery or captivity; Poison; Total.
Powerlines: Collision; Electrocution; Unknown; Total.
Others: Drowning; Accidentally trapped; Game fences; Car crash; Trauma (unknown origin); Starvation; Other anthropogenic causes; Other natural causes; Unknown; Total.
Grand total: 1,576.
causes (60.6%), the most frequent being traumas of unknown origin (19.3%), collision with game fences (5.9%) and collision with cars (4.3%) (Table 1).

Geographical distribution of mortality
Powerlines were responsible for the highest number of deaths in Castilla-León (54.5%), Castilla-La Mancha (22.3%), Catalonia (22.2%) and Andalusia (21.3%). Persecution was the main cause of death in the Community of Madrid (27.0%), Community of Valencia (24.4%) and Region of Murcia (24.3%). In the Basque Country powerlines and persecution totalled 47.1%.

Within-year variations in causes of death
There were significant monthly variations in mortality resulting from persecution (Figure 1), electrocution (Figure 2) and collision with game fences (Figure 3; χ², d.f. = 22, P < 0.001). Moreover, within each of the three above-cited causes of death there was a significant monthly variation (persecution: χ², d.f. = 11, P < 0.001; electrocution: χ², d.f. = 11; collisions with game fences: χ², d.f. = 11).

Interactions between causes of death, region and year
A log-linear model allowed us to analyse the 1,196 records for which complete information was available for cause of death, region and year, showing significant interactions between region and year, region and cause of death, and year and cause of death (Table 2). Low and high frequencies of persecution in Andalusia and in Eastern Spain, respectively, high frequencies of powerline impact in the Centre, as well as the

Figure 1. Monthly variation of Eagle Owl persecution in Spain.
Figure 2. Monthly variation of Eagle Owl electrocution in Spain.

Figure 3. Monthly variation of Eagle Owl collision with game fences and with cars in Spain.

relatively high frequencies of other causes in the South, were responsible for the significance of the region × cause interaction (Table 3; χ², d.f. = 6, P < 0.001). The significance of the year × cause interaction was due mainly to the increase in recorded powerline mortality (Table 4; χ², d.f. = 28, P < 0.001). The frequencies of the three causes of death were remarkably high from 2000 onwards. The significance of the region × year interaction was due to a higher number of
Table 2. Marginal association χ² values of the three factorial independence tests between cause, region and year (d.f. and partial χ² values not recoverable in this copy; all terms had P < 0.001).
Factors: Region × year; Region × cause; Year × cause; Region; Year; Cause.

Table 3. Contingency table relating region and cause of death.
                              South   East   Centre   North
Persecution                     55*   153*     89*      6*
Interaction with powerlines     99*   134*     80       4
Others                         310*   415*    215*     15
*Significant difference between observed and expected frequencies (P < 0.05).

Table 4. Contingency table relating year and cause of death (year labels and counts not recoverable in this copy; cells marked * differed significantly from the expected frequencies, P < 0.05).
Columns: Year; Persecution (%); Interaction with powerlines (%); Others (%); Total.

casualties recorded in the South, East and some areas of Central Spain in the most recent period than in previous years. An exception was the Community of Valencia, where high numbers were generally maintained throughout the study period. This might also mirror to a certain extent the distribution and abundance of Eagle Owls in Spain, with low densities in the north and abundant populations elsewhere (Martínez and Zuberogoitia 2003a).

Discussion
The samples presented in reviews on the causes of mortality, such as the present study, do not represent a cross-section of all deaths (Newton et al. 1997, Mañosa 2002), and it
is therefore desirable to carry out further studies aimed at gathering specific information (such as in Sergio et al. 2004a). However, compilation studies provide valuable quantitative information on the causes of mortality of wild bird populations, particularly as regards human-related causes (Mikkola 1983, Newton et al. 1997, Martínez et al. 2001, Real et al. 2001, Mañosa 2002). For example, this study showed that the killing of Eagle Owls is still a common practice throughout Spain, where the legal protection of birds of prey seems to have had a limited effect. As shown by the cause × year interaction (Table 4), a moderate number of owls were registered as killed between 1996 and 1999, but this figure rose again from 2000 onwards. A similar trend has been found for several raptors in Spain throughout the 1990s, with persecution peaking in … and reaching a minimum in … (Mañosa 2002). Shooting was consistently the main cause of mortality in the north of Spain during the 1990s for Peregrine Falcons (Falco peregrinus) (Zuberogoitia et al. 2002). It is also possible that more care has been put into concealing casualties after law enforcement (Mañosa 2002), leading to the underestimation of the actual extent of persecution.

The Eagle Owl's main prey in Spain are rabbits and Red-legged Partridges. Therefore, the conflict which results in the killing of this predator might be especially acute in areas where game shooting relies on re-stocking operations. This will be particularly true in areas where habitat alteration and game stock mismanagement occur. Re-stocking is a widespread practice (e.g. in eastern Spain) as a consequence of decreased hunting bags due to epizootics (Martínez and Zuberogoitia 2001, Martínez and Calvo 2001), habitat degradation and overhunting (Arques 2000), which would help to explain the high incidence of persecution recorded in these areas (Table 3). It is generally believed that killing raptors is opportunistic, i.e.
it takes place during the hunting season and is not deliberately aimed at reducing raptor predation (Viñuela and Arroyo 2002). However, our finding that 12.6% of the shooting occurred outside the hunting season (March to July) indicates that killing birds of prey is proactive to a remarkable extent (Figure 1). The hypothesis that cropping avian predators is still proactive in Spain is further supported by several studies. For example, Martínez et al. (2001) found that 11.5% (n = 329) of the raptors hunted in the Community of Valencia were shot outside the hunting season. Up to 47% of Barn Owls (Tyto alba) and 21% of Bonelli's Eagles (Hieraaetus fasciatus) killed were shot when hunting is not allowed (Martínez and López 1995, Real et al. 2001, respectively). The Eagle Owl's tendency to breed repeatedly in the same nests would make it more prone to being killed by gamekeepers or hunters (authors' personal observations).

Many birds of prey die due to secondary poisoning, i.e. a non-desired effect of the use of products used for pest control (Mañosa 2002, Whitfield et al. 2003, Mateo et al. 2004, Sergio et al. 2005). However, intentional poisoning in Spain is frequent (e.g. 70 Egyptian Vultures Neophron percnopterus between 1995 and 1998; Del Moral and Marti 2002) and can be especially suspected when the target species is not a carrion-eater, such as the Bonelli's Eagle (Real et al. 2001, Mañosa 2002) or the Eagle Owl. Poisoning occurred throughout the year at low frequencies (Figure 1), but the lack of funding to run expensive analyses to detect phytosanitary substances and other poisons may mask the real impact of this practice on raptors.

Alternatively, the apparent reduction in the frequency of persecution in the second half of the 1990s could be related to an increase in powerline casualties (Table 4). Quantitatively, electrocution is the main cause of death of Eagle Owls in Spain (Table 1) and is an important cause in Europe (Table 5). In a non-exclusive way, this
Table 5. Main causes of mortality of Eagle Owls reported in Europe (sample sizes and percentages not recoverable in this copy; the studies compared are listed below).
Columns: Country; No. of individuals; Causes of mortality (%): Interaction with powerlines, Persecution, Car crash, Others; Source.
Finland: Saurola (1979). France: Blondel and Badan (1976); Choussy (1971). Germany: Wickl (1979); Radler and Bergerhausen (1988). Italy: Penteriani and Pinchera (1990); Marchesi et al. (2002); Rubolini et al. (2001). Spain: Beneyto and Borau (1996); Martínez et al. (1992); Hernández (1989); Martínez et al. (1996); this study (n = 1,576). Sweden: Olsson (1979). Switzerland: Haller (1978).

could also be due to better line monitoring or to an increase in the length of powerlines (Penteriani 1998, Janss and Ferrer 1999, Sergio et al. 2004a). The region × cause interaction (Table 3) suggests that although dangerous poles and power distribution lines will always present a risk of death for raptors, physiognomic factors that increase avian use or concentrate birds in the vicinity of hazardous poles can significantly add to this risk and create a population-level effect (Sergio et al. 2004a). Our results seem to support this hypothesis in several ways. The Eagle Owl is a sit-and-wait hunter (Mikkola 1983) and, consequently, may frequently use poles in areas where they are the most suitable perches. This characteristic of Eagle Owl hunting behaviour can increase the number of fatalities due to electrocution (Benson 1980), as already demonstrated for Eagle Owls in an Italian study (Sergio et al. 2004a). Because the poles that provide the best view over the widest areas are potentially very attractive perch-sites during hunting, this could explain the high frequency of electrocuted owls from Central and Southern Spain (Table 3), where the terrain is largely undulating and agricultural (Real et al. 2001).
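The regional contrasts invoked here can be checked directly from the counts published in Table 3. The sketch below reproduces in Python the χ² test and standardized-residual screening described in the Methods; note that the paper does not state which residual formula it used, so the adjusted (Haberman) residual shown here is an assumption, and the original analysis was presumably run in a statistics package.

```python
from math import sqrt

# Observed counts from Table 3 (cause of death x region).
observed = {
    "Persecution":                 [55, 153, 89, 6],
    "Interaction with powerlines": [99, 134, 80, 4],
    "Others":                      [310, 415, 215, 15],
}
regions = ["South", "East", "Centre", "North"]

rows = list(observed.values())
row_tot = [sum(r) for r in rows]
col_tot = [sum(c) for c in zip(*rows)]
n = sum(row_tot)

chi2 = 0.0
adj_resid = {}
for i, (cause, row) in enumerate(observed.items()):
    for j, obs in enumerate(row):
        exp = row_tot[i] * col_tot[j] / n
        chi2 += (obs - exp) ** 2 / exp
        # Adjusted (Haberman) standardized residual; |r| > 1.96 flags a cell
        # that departs from independence at alpha = 0.05 (two-tailed).
        denom = sqrt(exp * (1 - row_tot[i] / n) * (1 - col_tot[j] / n))
        adj_resid[(cause, regions[j])] = (obs - exp) / denom

# d.f. = (3 - 1) * (4 - 1) = 6; the critical value at P = 0.001 is 22.46.
print("chi2 =", round(chi2, 2), "d.f. =", (len(rows) - 1) * (len(regions) - 1))
for key, r in adj_resid.items():
    if abs(r) > 1.96:
        print(key, round(r, 2))
```

Run as-is this gives χ² ≈ 24.5 with d.f. = 6, i.e. P < 0.001, consistent with the reported region × cause test, and it flags, among others, the deficit of persecution records in the South (negative residual for Andalusia).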
Moreover, high prey abundance may contribute to an increased electrocution risk by sustaining locally high raptor populations and exposing more birds to hazardous pole designs (Woodbridge and Garrett 1993), as is probably the case on the border between the Community of Valencia and the Region of Murcia (Table 3). Among the other known causes of death, it is worth mentioning collisions with game fences and cars (Figure 2), the former recorded as an increasing menace (Tucker and Heath 1994, Heath et al. 2000). The frequency of collisions with game fences could be underestimated if some of the deaths attributed to traumas had been caused by impact with game fences (Table 1). Eagle Owls would be prone to impacts when flying low after their prey (Muñoz-Cobo and Azorit 1996). The Eagle Owl prefers open areas on the perimeter of mountains in shrubland or close to agro-pastoral landscapes (Marchesi et al. 2002, Penteriani et al. 2002, Martínez et al. 2003b, Sergio et al. 2004b), which largely overlap with hunting areas in Spain. Fencing off hunting estates
was also frequent before our study period, when it accounted for most of the known causes of Eagle Owl deaths in certain areas of Southern Spain (31.7%; Muñoz-Cobo and Azorit 1996). There seems to be some slight between-cause variation in the seasonal pattern of mortality. Persecution and interaction with powerlines peaked between October and February (Figures 1 and 2; Rubolini et al. 2001, Sergio et al. 2004a), i.e. between courtship and laying, and mostly adult birds died. This finding may support the hypothesis that human-induced mortality can create deleterious population effects by eliminating territorial individuals (Sergio et al. 2004a).

Management implications: shooting
Theoretical law enforcement by itself has had no noticeable effect on reducing the number of casualties of birds of prey (Mañosa 2002). Even if the law were strictly applied, problems such as habitat and game mismanagement would still remain to be dealt with. However, a set of ecological, sociological or economic tools exists that can be promoted to reduce the conflict surrounding illegal killing (Kenward 2002).

Ecological tools
Eagle Owls may respond functionally and numerically to variations in the abundance of their main prey (Martínez and Zuberogoitia 2001, Martínez and Calvo 2001). Additionally, they may or may not prey upon other raptors as a consequence of such variations (Serrano 2000, Martínez and Zuberogoitia 2001, Martínez and Calvo 2001) or due to intra-guild effects (Sergio et al. 2003). Therefore, further studies are needed to determine the type of response of the Eagle Owl to changing prey densities and to locate areas where detrimental population effects, if any, on prey or raptors occur. Zoning with quotas (Watson and Thirgood 2001) could also be implemented.
This would require further political commitment because: (a) effective control of persecution and regular monitoring of shooting would have to be carried out in restricted and non-restricted areas, respectively; and (b) previous research would be needed to designate such areas.

Sociological tools
While there is mounting evidence that raptor persecution persists in Spain, there is a lack of consensus between hunters and conservationists about how to use such information (Herranz-Barrera 2001). If both parties could come to an understanding, research-based educational campaigns among hunters and conservationists should be implemented. These campaigns must deal with the spectrum of conservation possibilities, whose limits may be shooting raptors on the one hand or refusing to treat them as renewable resources on the other (Kenward et al. 1991, Thirgood et al. 2000).

Economic tools
One of the aims of the agri-environmental schemes of the Common Agricultural Policy (CAP) is to protect biodiversity. Thorough evaluation of how resources are
allocated and tests on the effectiveness of such policies in promoting the sustainability of rabbits and red-legged partridges and their habitats are needed because: (a) they are the main prey for Eagle Owls in Iberia and (b) they are a major economic issue (Lucio and Purroy 1992, Villafuerte et al. 1998). However, Spain has not yet endorsed the collection of baseline data for this appraisal (Kleijn and Sutherland 2003). Joint initiatives between national institutions and hunters aimed at restoring agro-pastoral mosaics and prey stocks locally have provided acceptable solutions for raptors, game, conservationists and hunters (Sánchez 2004), stressing the need to reinforce control over the implementation of the CAP in Spain.

Management implications: powerlines
There is a consensus of opinion that electrocution hot-spots should be mapped and accounted for (Sergio et al. 2004a). Reducing the risk of death of birds of prey through interaction with powerlines has mostly involved a posteriori actions, i.e. mitigating the impact of existing designs, improving the design of existing structures or replacing dangerous poles (Janss and Ferrer 1999, Mañosa 2001, Rubolini et al. 2001). However, abiding by the current environmental impact laws (EC Directive 85/337/EEC) and developing strategic environmental assessments of plans and programmes of development would prove a better approach to account for the negative impact of powerlines and other hazards to birds of prey (Díaz et al. 2001, Martínez et al. 2003a). Hence, with regard to killing through inadequate pole design or siting lines in inadequate areas, the power corporations, the environmental companies that produced flawed environmental impact reports, or the managers who passed on such reports could be considered responsible for the offence (Martínez et al. 2003a).
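The hot-spot mapping recommended above can be reduced to a very simple first pass: bin georeferenced casualty records into grid cells and flag the densest cells for pole retrofitting. The function below is a hypothetical sketch of that step; the coordinates, the 10-km cell size and the count threshold are all invented for illustration and do not come from the study.

```python
from collections import Counter

def electrocution_hotspots(records, cell_km=10.0, min_count=3):
    """Bin (x, y) casualty coordinates (km) into square grid cells and
    return the cells holding at least min_count records."""
    counts = Counter((int(x // cell_km), int(y // cell_km)) for x, y in records)
    return {cell: n for cell, n in counts.items() if n >= min_count}

# Hypothetical casualty coordinates; four records cluster in one 10-km cell,
# which would be the priority area for corrective pole designs.
records = [(12, 33), (15, 38), (11, 31), (18, 35), (52, 70), (87, 12)]
print(electrocution_hotspots(records))
```

In practice a screening like this would be followed by the pole-by-pole risk assessment the text describes (design, siting, surrounding habitat), rather than treated as a result in itself.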
The results of the present study suggest that law enforcement concerning bird protection is still far from being efficient in some areas of Spain. The statistical significance of the region × cause interaction underlines the fact that area- and species-specific mitigation and remediation measures should be developed, all in a framework of biologically meaningful spatial and temporal scales. Maintaining low levels of what currently seem to be secondary causes of mortality is of special interest because this mortality is additive to the main, increasing cause of loss across Europe: habitat deprivation (Tucker and Evans 1997).

Acknowledgements

We thank the following rehabilitation centres (CR) and associations for supplying data: CR de Albacete, AMUS (Badajoz), CR de Rapaces Nocturnas BRINZAL (Madrid), CR de Buitrago Lozoya (Madrid), CR del Zoo de Jérez (Cádiz), CR de Cañada Real, CR Donosita (Guipúzcoa), CR La Granja (Valencia), CRFS El Valle (Murcia), FAPAS (Asturias), CR Forn del Vidre (Castellón), GER (Castellón), CR de Jaén, CR Martioda (Diputación Foral de Álava), CR de Torreferrusa (Barcelona), CR de Guadalajara, CRFS El Ardal (Cuenca), CRAS Los Guindales (Soria), CR de Cotorredondo (Pontevedra), CR de Orense, CR Los Villares (Córdoba), CREA Las Almohallas (Almería), CR Delta Ebre (Tarragona), CR Vallcalent (Lérida), CR Santa Faz (Alicante) and CRFS O Veral (Lugo). We also thank Vincenzo Penteriani and Fabrizio Sergio for valuable comments on the original manuscript.
11 Eagle Owl mortality 275

References

Arques, J. (2000) Ecología y gestión cinegética de una población de conejos en el Sur de la provincia de Alicante. Unpublished PhD thesis, Universidad de Alicante, Spain.
Beneyto, A. and Borau, J. A. (1996) El Búho real (Bubo bubo) en Cataluña (NE de España). Pp. in J. Muntaner and J. Mayol, eds. Biology and conservation of Mediterranean raptors. Madrid: Sociedad Española de Ornitología (Monograph no. 4).
Benson, P. C. (1980) Large raptor electrocution and power pole utilization: a study in six western states. Raptor Res. 14:
Blondel, J. and Badan, O. (1976) La biologie du Hibou grand-duc en Provence. Nos Oiseaux 33:
Choussy, D. (1971) Etude d'une population de Grand-ducs Bubo bubo dans le Massif Central. Nos Oiseaux 31:
Del Moral, J. C. and Marti, R. (2002) El Alimoche Común en España y Portugal. I Censo Coordinado. Año. Madrid: SEO/BirdLife (Monograph no. 8).
Díaz, M., Illera, J. C. and Hedo, D. (2001) Strategic environmental assessment of plans and programs: a methodology for biodiversity evaluations. Environ. Manage. 28:
Donázar, J. A. and Ceballos, O. (1984) Datos sobre status, distribución y alimentación del búho real (Bubo bubo) en Navarra. Pp. in Rapinyaires Mediterranis II. Barcelona.
Haller, H. (1978) Zur populationsökologie des Uhus Bubo bubo im Hochgebirge: Bestand, Bestandsentwicklung und Lebensraum in den Rätischen Alpen. Ornithol. Beob. 75:
Heath, M., Borggreve, C. and Peet, N. (2000) European bird populations. Estimates and trends. Cambridge, U.K.: BirdLife International.
Hernández, M. (1989) Mortalidad del búho real en España. Quercus 40:
Herranz-Barrera, J. (2001) Efectos de la depredación y del control de predadores sobre la caza menor en Castilla-La Mancha. PhD thesis, Universidad Autónoma de Madrid, Madrid.
Herrlinger, E. (1973) Die Wiedereinbürgerung des Uhus Bubo bubo in der Bundesrepublik Deutschland. Bonn. Zool. Monogr. 4:
Hiraldo, F., Parreño, F. F., Andrada, J. and Amores, F. (1976) Variation in the food habits of the European Eagle Owl (Bubo bubo). Doñana Acta Vert. 3:
Janss, G. F. E. and Ferrer, M. (1999) Mitigation of raptor electrocution on steel power poles. Wildlife Soc. Bull. 27:
Kenward, R. E. (2002) Management tools for reconciling bird hunting and biodiversity. Unpublished report to REGHAB Project. European Commission. Available from http://
Kenward, R. E., Marcström, V. and Karlbom, M. (1991) The goshawk Accipiter gentilis as predator and renewable resource. Gibier Faune Sauvage 8:
Kleijn, D. and Sutherland, W. J. (2003) How effective are European agri-environment schemes in conserving and promoting biodiversity? J. Appl. Ecol. 40:
Lucio, A. J. and Purroy, F. J. (1992) Caza y conservación de aves en España. Ardeola 39:
Mañosa, S. (2001) Strategies to identify dangerous electricity pylons for birds. Biodiv. Conserv. 10:
Mañosa, S. (2002) The conflict between game bird hunting and raptors in Europe. Unpublished report to REGHAB Project. European Commission. Available from http://www.uclm.es/irec/reghab/informes_3.htm
Marchesi, L., Sergio, F. and Pedrini, P. (2002) Costs and benefits of breeding in human-altered landscapes for the eagle owl Bubo bubo. Ibis 144:
Martínez, J. A. and López, G. (1995) Dispersal and causes of mortality of the Barn Owl (Tyto alba) in Spain. Ardeola 42:
Martínez, J. A. and Zuberogoitia, I. (2001) The response of the eagle owl (Bubo bubo) to an outbreak of the rabbit haemorrhagic disease. J. Ornithol. 142:
Martínez, J. A. and Zuberogoitia, I. (2003a) Búho real Bubo bubo. Pp. in R. Marti and J. C. del Moral, eds. Atlas de las aves reproductoras de España. Madrid: Dirección General de Conservación de la Naturaleza-Sociedad Española de Ornitología.
Martínez, J. A. and Zuberogoitia, I. (2003b) Factors affecting the vocal behaviour of eagle owls Bubo bubo: effects of season, density and territory quality. Ardeola 50:
Martínez, J. A., Izquierdo, A., Izquierdo, J. and López, G. (1996) Causas de mortalidad de las rapaces nocturnas en la Comunidad Valenciana. Quercus 126:
Martínez, J. A., Izquierdo, I. and Zuberogoitia, I. (2001) Causes of admission of raptors in rescue centres of the East of Spain and proximate causes of mortality. Biota 2:
Martínez, J. A., Martínez, J. E., Zuberogoitia, I., García, J. T., Carbonell, R., De Lucas, M. and Díaz, M. (2003a) La evaluación de impacto ambiental sobre las poblaciones de aves rapaces: problemas de ejecución y posibles soluciones. Ardeola 50:
Martínez, J. A., Serrano, D. and Zuberogoitia, I. (2003b) Predictive models of habitat preferences for the Eurasian eagle owl: a multiscale approach. Ecography 26:
Martínez, J. E. and Calvo, J. F. (2001) Diet and breeding success of the Eagle Owls in southeastern Spain: effect of rabbit haemorrhagic disease. J. Raptor Res. 35:
Martínez, J. E., Sánchez, M. A., Carmona, D., Sánchez, J. A., Ortuño, A. and Martínez, R. (1992) The ecology and conservation of the Eagle Owl Bubo bubo in Murcia, south-east Spain. Pp. in C. A. Galbraith, I. R. Taylor and S. Percival, eds. The ecology and conservation of European owls. Peterborough, U.K.: Joint Nature Conservation Committee.
Mateo, R., Blanco, G. and Jiménez, B. (2004) Riesgos tóxicos para las aves rapaces en España. In Actas del XVII Congreso Español de Ornitología. Madrid: SEO/BirdLife.
Mikkola, H. (1983) Owls of Europe. Calton, U.K.: T. & A. D. Poyser.
Muñoz-Cobo, J. and Azorit, C. (1996) Amenazas de los cercados para la fauna. Ecosistemas 16:
Newton, I., Wyllie, I. and Dale, L. (1997) Mortality causes in British Barn Owls (Tyto alba), based on 1,101 carcases examined during . Pp. in J. R. Duncan, D. H. Johnson and T. H. Nicholls, eds. Biology and conservation of owls of the Northern Hemisphere. General Technical Report NC-190. St Paul, MN: USDA Forest Service.
Niethammer, G. and Kramer, H. (1964) Zum Aussterben des Uhus in der Eifel. Der Falke 11:
Olsson, V. (1979) Studies on a population of Eagle Owls, Bubo bubo (L.), in southeast Sweden. Viltrevy 11:
Penteriani, V. (1998) L'impatto delle linee elettriche sull'avifauna. Rome: WWF Italia (Scientific Series no. 4).
Penteriani, V. and Pinchera, F. (1990) Censimento del Gufo reale, Bubo bubo, in un'area dell'appennino abruzzese. Riv. It. Ornitol. 60:
Penteriani, V., Gallardo, M. and Roche, P. (2002) Landscape structure and food supply affect the Eagle Owl (Bubo bubo) breeding performance: a case of population heterogeneity. J. Zool. 357:
Penteriani, V., Delgado, M. M., Maggio, C., Aradis, A. and Sergio, F. (2005) Development of chicks and pre-dispersal behaviour of young in the Eagle Owl Bubo bubo. Ibis 147:
Radler, K. and Bergerhausen, W. (1988) On the life history of a reintroduced population of Eagle Owls (Bubo bubo). Pp. in D. K. Garcelon and G. W. Roemer, eds. Proceedings of the International Symposium on Raptor Reintroduction. Arcata, CA: Institute for Wildlife Studies.
Real, J., Grande, J. M., Mañosa, S. and Sánchez-Zapata, J. A. (2001) Causes of death in different areas for Bonelli's eagle Hieraaetus fasciatus in Spain. Bird Study 48:
Redpath, S. and Thirgood, S. (1999) Numerical and functional responses in generalist predators: hen harriers and peregrines on a Scottish grouse moor. J. Anim. Ecol. 68:
Rubolini, D., Bassi, E., Bogliani, G., Galeotti, P. and Garavaglia, R. (2001) Eagle owl Bubo bubo and power line interactions in the Italian Alps. Bird Conserv. Int. 11:
4. Learning by any other name: Communication Research Traditions in Learning
4.5 Social-Cultural perspective
Even as psychologically oriented research was gaining attention and dominance in the field (i.e., during the 1940s and 1950s), theorists had begun to explore the influence of social relationships on communication. Whereas psychological theories saw messages filtered through individuals' cognitions, this perspective argues that communication occurs only through social interaction. One's definition of and experience with objects, events, other people, and even oneself is determined through a network of interpersonal relationships. That is, the meanings we form are products of social "negotiation" with other people. These relationships determine both the symbols we use to communicate and the meanings of those symbols (Mead, 1934; Blumer, 1939, 1969). In essence, the symbols, objects, events, and self-images that make up our world are the creation of shared meaning through social communication. This model clearly demonstrated the linkage between communication theory and social psychology and explored the potential of media as a unifying force in society: rather than focusing on the filtering of messages solely through cognitive constructs, researchers became interested in the ways in which messages were mediated by interpersonal networks. This section will describe the contributions of research traditions that emphasize the social and cultural dimensions of the communication process.
4.5.2 Elements of Communication
Social-cultural perspectives present a significant reframing of the communication process. Many of the elements presented by technical and psychological models are conceptualized in very different ways (Fisher, 1978; Swanson & Delia, 1976). Senders and receivers, for example, become "participants" or "interactants," stressing their mutually dependent roles as communicators. Each interactant's perception of self, others, and the situation, working within a framework of shared culture, knowledge, and language, is a major influence on communicative episodes. This reframing of senders and receivers takes Schramm (1955) and Osgood (1954) even further in the view of socially defined interaction.
Messages, in the social-cultural view, are products of negotiation: All participants must arrive at shared meaning for successful communication. Heath and Bryant (1992) state that the message, in this case, is the effect of the sender's behavior on the receiver. They cite Whorf (1956) and his colleague Sapir, who hypothesized that the rules of one's language system contain the society's culture, world view, and collective identity. This language, in turn, affects the way we perceive the world. In short, words define reality; reality does not give us objective meaning. This presents a problematic conception of feedback, because it is difficult to tell when feedback is truly a response to a message and not just another message in and of itself (Heath & Bryant, 1992).
The most compelling application of social-cultural perspectives to mass communication has been in the conceptualization of audience. McQuail (1983) points out that one meaning for "mass" audience has been an "aggregate in which individuality is lost" (Oxford English Dictionary, 1971). Blumer (1969), on the other hand, preferred to distinguish between the "mass" and smaller groups of "publics," "crowds," and "groups." Increasingly, media use occurs in these smaller aggregates of audience members, each with a particular medium or content form that serves preexisting interests, goals, or values.
These groups form through "boundary properties" (such as demographic characteristics like political affiliation) and "internal structures" (such as belief or value systems) that arise through attention to particular media content and the possibility of interaction about that content (Ennis, 1961). Within such audience groups, three types of internal structures reveal the social character of audience experiences with media (McQuail, 1983). The first, social differentiation, refers to basic differences in audience members' interests, attention, and perceptions of various issues and topics.
A second internal structure is the extent of social interaction within the group. Four factors are included here. Sociability refers to the extent to which media use is primarily a social occasion and secondarily a communicative event between individuals (e.g., how much interaction is permitted while watching television in a group). A second factor is the social uses to which media are put: groups such as families often employ media for various social purposes (e.g., teaching children about values, avoiding arguments) (Lull, 1980). A third factor governing the extent of interaction is the degree of social isolation that may result from excessive media use (especially television). Finally, the presence of para-social relationships (e.g., a viewer's perceived relationship with a favorite TV or radio personality) may indicate the social interaction made possible between media users and easily recognized characters.
A third internal structure in the social character of audience experience with mass media is the control norms that a society holds for its mass media. This refers to the value systems and social norms that regulate media use, types of appropriate content for each medium, and audience expectations of media performance. For example, Americans may come to expect objective news reporting on television, but may not consider a graphic portrayal of murder appropriate for their evening newscast. The types of programming we expect to see may be identified with the medium itself.
4.5.3 Assumptions and Research Focus
The idea that communication is a product of social relationships is the most pervasive assumption of the social-cultural perspective. Several other assumptions guide this philosophical stance, however (Fisher, 1978). Establishment of self is achieved primarily through symbolic communication with others. This means that until one acquires the cognitive or empathic ability to "take the role of the other," the self does not exist--nor does meaningful social activity. Such activity takes place only by assuming the role of others or the generalized other. This process of role taking is a collective sharing of selves; it cannot be centered in media structures. It is not an individual act but one clearly dependent on social interaction for its purpose and existence. The concepts of self, roles, and collective meaning creation, then, are the focus of a great deal of investigation within social-cultural communication theories.
4.5.4 Discussion of Representative Research
4.5.4.1. Two-Step Flow Research. A prime example of social-cultural research is the two-step flow model of mass communication (Katz & Lazarsfeld, 1955). A landmark study examined voters in Erie County, Ohio, during the 1940 presidential election, focusing on the content of political media messages and social interaction about the election. The study (Lazarsfeld, Berelson & Gaudet, 1948) was based on a 6-month panel survey of voting behaviors and decision making. The study sought to chart various influences on voting decisions, including the emerging medium of radio. Findings demonstrated only limited media impact. People who reported making an initial decision or changing their minds did so after speaking with others about the election. Often these "opinion leaders" received a great deal of information from mass media. The study reframed the one-way, direct-effects model of mass communication processes to account for this "two-step flow" in media influence. The first step reflects the role of opinion leaders in a community who seek out media content related to politics. In the second step, they filter and pass along political information to their social contacts. Media effects, then, were achieved by reaching opinion leaders, not mass audiences.
These findings were later elaborated in a subsequent panel study of women in Decatur, Illinois. Researchers examined the role of opinion leaders on more subtle, day-to-day issues (for example, fashions and household products) (Katz & Lazarsfeld, 1955). The hypothesis was that on less significant topics, the two-step flow would prove to be an even more dynamic and powerful process than with phenomena such as presidential elections. The findings confirmed this expectation, again noting the existence of a two-step flow of information.
Both of these studies demonstrated clearly that mediating factors intervened in the media effects process. They were among the first to identify social factors that intervened between message and audience response based on the earlier stimulus-response model. Within this theoretical framework, however, the flow of information is still linear and universal. In other words, the media message remains relatively intact. Opinion leaders, often only those wealthy enough to own radio or television and subscribe to magazines, were conduits of media messages.
4.5.4.2. Research on Social Context of Media Use. Another research tradition that falls under the general category of social-cultural research is the body of literature examining social contexts of media use, such as family and home media use (see also 11.5.4). A great deal of research has examined parent-child coviewing of media. According to one study (Desmond, Singer, Singer, Calam & Colimore, 1985), parental mediation in the media-child relationship takes three forms: (1) critical comments about programs or the medium in general, (2) interpretive comments that explain content or media to younger children, and (3) rule making/disciplinary intervention that forcibly regulates the child's viewing habits. Parental interpretation and rule making were framed as a major influence on children's viewing and comprehension of media content. One study (St. Peters, Fitch, Huston, Eakins & Wright, 1991) found that when such coviewing did take place, it was predicted more by the adult's personal viewing habits than the child's. In other words, children and parents coviewed more adult than children's programming. Further, parents' participation in regulating viewing declined as children grew older, and parental guidance or mediation with content was not related to coviewing. Dorr, Kovaric, and Doubleday (1989) echoed the finding that coviewing was largely a coincidence of viewing habits and preferences. They also found weak evidence for the positive consequences of such coviewing, but questioned the value of this concept as an indicator of parental mediation of content.
Such concerns were also discussed by Bryce and Leichter (1983) on a methodological level. They argued that quantitative measures of viewing habits and coviewing may not capture more routine or subtle processes of family viewing that mediate potential effects. They proposed using ethnographic methods (see 40.2) to study the unintentional and nonverbal behaviors that mediate television effects, as well as assessing those mediating behaviors that take place away from television. Jordan (1992) used ethnographic and depth interview techniques for just such a purpose. She concluded that family routines, use and definition of time, and the social roles of family members all played a part in the use of media. Children learned at least as much, if not more, from these daily routines than any formal efforts to regulate media use.
Corder-Bolz (1980) proposed that groups and institutions such as family, peers, school, and church should be considered as primary socializing agents that both provide social information (e.g., facts, ideas, and values) and respond to social communication about this information. McDonald (1986) pointed out that peer coviewing is more frequent and influential among young viewers. Media were defined by Corder-Bolz as the group of "secondary socializing agents" that can provide social information but cannot enforce their messages with child viewers. Media, then, can provide social facts, ideas, and values, but this information's influence is limited to the extent that the child's environment presents no competing messages or that the viewer uncritically adopts such views from media content. Thus, external factors limit the potential impact of content.
Desmond et al. (1985) studied the cognitive skills necessary to comprehend and interpret television content and the effects of family communication on these skills. In their sample of kindergarten and first-grade children, comprehension of and beliefs about the reality of television content were linked to parental mediation styles and general patterns of discipline. Children who watched low levels of TV, in environments that included family control of television, TV-related rules, and strong discipline, were better able to discern reality from fantasy in programming. Those who were raised with TV-specific rules, positive communication between child and mother, and a pattern of explanation of content from adults and older siblings were better able to gain knowledge from television content and about television techniques (e.g., camera zooms and slow motion). Further, this study found that family environmental variables influence the amount of television children viewed. Heavy viewers in this study grew up in homes where parents were heavy viewers and did not mediate viewing often. Family communication was considered the critical variable that determined a child's ability to comprehend televised material and develop the cognitive skills necessary to understand and interpret content.
The research on families and media use suggests that, especially in early childhood, family members are a prime influence on the images children form of media. The amount of and motivations for media use are part of the family's daily social routine (Bryce & Leichter, 1983). Further, other family members' responses to media content serve to shape the developing child's own responses (Corder-Bolz, 1980; Desmond et al., 1985). Such influences likely originate with both family and peers with older, school-aged children. As these children encounter media within classroom contexts, new images of mass media must compete with the definitions and expectations shaped by home media use.
4.5.4.3. Learner-Centered Studies. In addition, a series of learner-centered studies has begun to emerge from research on instructional media applications. Many of these studies address contextual and social factors that influence the communication process. Thus, they are included in the discussion of social-cultural research. One important research tradition began with a strong psychological orientation, exploring students' attitudes toward the individual media systems as determinants of the amount and kinds of learning experienced. Clark (1982, 1983) identified three fundamental dimensions of people's expectations about the media: preference, difficulty, and learning. Salomon used the notion of media expectations as the foundation of a series of studies (1981, 1983, 1984) based on the learner's preconceptions about a given media activity and the relationship of those expectations to learning outcomes. His conception of the model relied on predicted relationships among three constructs: the perceived demand characteristics of the activity, the individual's perceived self-efficacy for using a particular medium, and the amount of mental effort the individual invested in processing the presentation. Oltman (1983) elaborated on Salomon's model by suggesting that older students may be especially familiar with certain media characteristics or the meaning of certain media codes. This familiarity may increase their perceived self-efficacy with a medium and form attitudes about the medium's impact on their thinking about both the content and the medium. It is clear that this approach assumes an active processor who approaches media activities in an individualistic but relatively sophisticated manner.
However, an additional concept missing from Salomon's model is the notion of a kind of cultural identity or stereotype associated with individual media systems and its role in influencing learning outcomes. In his research he failed to disentangle individual and cultural perceptions of media experiences. Both contributed to the kinds of outcomes he examined. That is, individuals' expectations about media experiences are based, at least in part, on the cultural identity of a medium. For example, television in the U.S. is considered primarily an entertainment medium. Though Salomon did not address the significance of a medium's cultural identity in his model, later research attempted to disentangle media perceptions and expectations to include some understanding of the broad cultural identity of media systems. Thus, the model has been included in the discussion under the social-cultural perspective. Despite its original emphasis only on the learner and the psychological orientation of the model, subsequent studies evolved to embrace a stronger social-cultural approach.
According to Salomon's original model, the relationships among these three constructs--perceived demand characteristics, perceived self-efficacy, and amount of invested mental effort--would explain the amount of learning that would result from media exposure. For example, he compared students' learning from reading a book with learning from a televised presentation of the same content. Salomon found more learning from print media, which he attributed to the high perceived demand characteristics of book learning. Students confronted with high demands, he argued, would invest more effort in processing instructional content. Conversely, students would invest the least effort, he predicted, in media perceived to be the easiest to use, thus resulting in lower levels of learning.
In a test of this model, Salomon and Leigh (1984) concluded that students preferred the medium they found easiest to use; the easier it was to use, the more they felt they learned from it. However, measures of inference making suggested that these perceptions of enhanced learning from the "easy" medium were misleading. In fact, students learned more from the "hard" medium, the one in which they invested more mental effort. A series of studies extended Salomon's work to examine the effect of media predispositions and expectations on learning outcomes. Several studies used the same medium, television, to deliver the content but manipulated instructions to viewers about the purpose of viewing. The treatment groups were designed to yield one group with high investments and one with low investments of mental effort.
Though this research began as an extension of traditional research on learning in planned, instructional settings, it quickly evolved to include consideration of context as an independent variable related to learning outcomes. Krendl and Watkins (1983) demonstrated significant differences between treatment groups following instructions to students to view a program and compare it to other programs they watched at home (entertainment context), as opposed to viewing in order to compare it to other videos they saw in school (educational context). This study reported that students instructed to view the program for educational purposes responded to the content with a deeper level of understanding. That is, they recalled more story elements and included more analytical statements about the show's meaning or significance when asked to reconstruct the content than did students in the entertainment context.
Two other studies (Beentjes, 1989; Beentjes & van der Voort, 1991) attempted to replicate Salomon's work in another cultural context, the Netherlands. In these studies children were asked to indicate their levels of mental effort in relation to two media (television and books) and across content types within those media. The second study asked children either watching or reading a story to reproduce the content in writing. Beentjes concluded, "the invested mental effort and the perceived self-efficacy depend not only on the medium, but also on the type of television program or book involved" (1989, p. 55). Bordeaux and Lange (1991) supported these findings in a study of home television viewing. Children and parents were surveyed about the former's active cognitive processing of program content. The researchers concluded that the amount of mental effort invested varied as a function of viewer age and the type of program being viewed. These studies raise the possibility of profound cultural differences in response to various media and genres. Though few studies have examined the notion of cultural differences, clearly the learner-centered approach must investigate the existence and nature of cultural factors related to the understanding of media experiences and learning outcomes.
A longitudinal study emerging from the learner-centered studies (Krendl, 1986) asked students to compare media (print, computer, and television) activities on Clark's (1982, 1983) dimensions of preference, difficulty, and learning. That is, students were asked to compare the activities on the basis of which activity they would prefer, which they would find more difficult, and which they thought would result in more learning. Results suggested that students' judgments about media activities were directly related to the particular dimension to which they were responding. Media activities have multidimensional, complex sets of expectations associated with them. The findings suggest that simplistic, stereotypical characterizations of media experiences (for example, books are hard) are not very helpful in understanding audiences' responses to media.
These studies begin to merge the traditions of mass communication research on learning and studies of the learning process in formal instructional contexts. The focus on individuals' attitudes toward, and perceptions of, various media has begun to introduce a multidimensional understanding of learning in relation to media experiences. Multiple factors influence the learning process-mode of delivery, content, context of reception, as well as individual characteristics such as perceived self-efficacy and cognitive abilities.
One additional approach (Becker, 1985) points to the perspectives offered by poststructural reader theories that define the learner as a creator of meaning. The student interacts with media content and actively constructs meaning from texts, previous experience, and outside influences (e.g., family and peers) rather than passively receiving and remembering content. According to this approach, cultural and social factors are seen as active forces in the construction of meaning.
Abelman (1989) offered a similar perspective in his study of experiential learning, within the context of computer-mediated instruction. The emphasis in this research is on cooperative or collaborative learning; students are seen in partnership with teachers, each other, and delivery systems. The idea is that media can create "microworlds" where students can have some direct experience with new, sophisticated ideas (see 188.8.131.52). Abelman described a program called "Space Shuttle Commander that teaches principles of motion through student-computer interaction in a simulated space environment. In effect, the student and the computer form a learning partnership.
Jonassen (1985) and Rowntree (1982) have pointed out that such perspectives force us to ask how the student controls learning rather than letting our concerns about the technology drive the research agenda. The concern with technology clearly describes early research on educational media, which took an ad hoc approach to measuring learning outcomes in relation to instructional treatments for each new advance in technology. | <urn:uuid:b77d4b50-a4f7-437c-80d1-99db6d38168f> | CC-MAIN-2017-17 | http://www.aect.org/edtech/ed1/04/04-05.html | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122174.32/warc/CC-MAIN-20170423031202-00602-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.953143 | 4,515 | 3.359375 | 3 |
The Judicial Overreach – Facts, Issues and Prospects for Harmonious Growth of Institutions in a Democracy – II
Article 14 (Right to Equality), read with Article 19 (Right to Freedom), Article 25 (Freedom of Religion) and Article 21 (Protection of Life and Personal Liberty), is among the fundamental rights of citizens listed in Part III of the Constitution. Remedies for the enforcement of these rights are judicial and are embodied in Article 32, which guarantees the citizen the right to move the Supreme Court by appropriate proceedings for the enforcement of these rights. Under Article 32, citizens are entitled to relief provided it is shown to the satisfaction of the Court that their fundamental rights have been violated. The relief takes the form of writs, that is, orders issued by the Supreme Court or, under Article 226, by the High Courts, depending on the nature of the case. These writs are: 'habeas corpus', requiring the authority to produce a detainee; 'mandamus', to compel performance of a statutory duty of a public nature when the authority refuses to perform it; 'quo warranto', to prevent a public office being held by a person without legal authority; and 'certiorari', a demand for the record that may lead to judicial review of administrative action.
Together these constitute, in simple language, the 'writ jurisdiction' of the Supreme Court and the High Courts (note: courts subordinate to the High Courts do not enjoy this jurisdiction) and the scheme of judicial review under Articles 32 and 226 of the Constitution.
Further, Article 141 lays down that "the law declared by the Supreme Court shall be binding on all courts", which means the law made while interpreting statutes. The orders might contain many observations, and not all these comments have "precedential" value; only the enunciation of law by the court on issues directly raised, or arising out of the application of law to a particular case, expressed in Latin as 'ratio decidendi', is treated as binding precedent and hence as judge-made law.
A reader may ask: why such use of Latin expressions?
The answer is that, for historical reasons, only two systems of jurisprudence have proved outstanding enough for adoption, namely the 'Civil Law of Rome' and the 'Common Law of England', and the two are often intertwined, especially in matters of procedure; hence the use of Latin terms. In continental Europe, Latin America, Japan and even Scotland, Roman law is followed, whereas all countries which for some time came under British rule seem to have adopted the Common Law of England.
Reverting to the role of the Supreme Court, we must note that while under Article 142 the Supreme Court enjoys unfettered independent jurisdiction to pass any order in the public interest to do complete justice, under Article 143 the President has the power to consult the Supreme Court "on a question of law or fact" which is of such public importance that the President feels it expedient to obtain the opinion of the Supreme Court; on receiving such a Presidential reference, "the Supreme Court may, after such hearings as it thinks fit, report to the President its opinion thereon". Article 144 lays down that all authorities, civil and judicial, shall act in aid of the Supreme Court.
From this position it is clear that the courts have a constitutional mandate to define, preserve and protect the public interest. The instrument of Public Interest Litigation (PIL), an important feature of Indian democracy, owes its origin to the writ jurisdiction of the courts. Thus the Supreme Court interpreted the Right to Life to include 'the right to clean air and safe water', and it has defined PIL as a proceeding in which an individual or a group seeks relief in the interest of the general public and not for its own purpose (SP Gupta vs Union of India, AIR 1982 SC 149). The implication of this important order is far-reaching. The strength of democracy lies in the idea of the interest of the general public, which depends on the degree of advancement of the society and its capacity to absorb innovative thoughts based on science and on experience the world over. This makes public interest varied and ever expanding in its scope: concern for the environment, conservation of wildlife, and clean air and water were brought within the public agenda only some time back, but today climate change has placed the environment at the core of development philosophy and public interest, and mass education at the foundation of human development. In India, public interest is often confused with sectional, communal or even private interest in a dynastic party system. There is thus an emergent need and role for the judiciary to protect the genuine public interest and to prevent the State from being hijacked by interests inimical to the progress of society, as experienced in some Sub-Saharan and Latin American countries in the recent past. This problem will however remain, because in a democracy 'public interest' is defined by the political class, who may take a narrow and short-term view on political considerations which could go against the real public interest.
In such a situation, the role of the judiciary becomes critical for the survival of democracy and for social and economic development. We may note that whatever little of the rule of law is left in Pakistan today is almost entirely due to the role of its judiciary, which, despite the constraints imposed on it, has often stood up for citizens' rights and liberties.
The constitutional position of the judiciary is unquestionably critical for the preservation of the basic structure of the democratic polity, the rule of law and the rights and liberties of citizens; hence the vast scope of judicial review. The role of the higher courts as final adjudicators of disputes involving citizens and corporate bodies, often involving the State, and as supervisors of the functioning of subordinate courts is nonetheless the substantive part of their work, that is, the delivery of justice. And since the maxim 'justice delayed is justice denied' holds good for all societies, a balancing act between these two roles is essential for efficient management of time, without which the delivery of justice might well suffer, much against the public interest. The issue of the huge pendency of cases in all courts, estimated at over 1 crore or so in the entire country, merits close examination, as the pendency is also attributed to courts' directions on matters which fall within the executive domain. To cite some recent examples: a High Court has asked the government to augment the strength of the local police force, though the Court can hardly judge whether the state of law and order is a result of understaffing or of a total lack of accountability of the police force; and a division bench asked a State government its reasons for granting parole to an otherwise respectable citizen who had been imprisoned for an offence, overlooking the constitutional position that parole, pardon and the like are the sole prerogative of the State, since the judiciary's task is over once it sentences a person. The argument that if the executive fails the judiciary must step in is dangerous because of the difficulty of appropriately defining failure; and if the judiciary fails, then who will step in? The Executive?
There is at present a grave danger of the judiciary collapsing under its own weight. According to the Union law ministry, 60,000 cases are pending in the Supreme Court, 4 million in the High Courts and 27 million in the subordinate courts.
Adding more judges and filling up the existing vacancies is one means of dealing with the pendency, and not the only one, as stated in a report of the Law Commission, which found that there was no correlation between the rate of disposal and the actual strength of the bench. Observers of the Indian judicial system have pointed out that 'the arrears of civil cases in the Apex Court itself outnumber criminal cases by a ratio of more than 4:1, and in the High Courts by 3:1'. There is thus a view that, barring title suits and disputes over immovable property, other civil cases need not be entertained by the courts at all, as these could be left to other fora where non-judges could preside, 'leaving the judiciary to concentrate on criminal and serious civil cases'. In fact the Arbitration Act 1996, which succeeded the Arbitration Act 1940, was designed to minimize judicial intervention in most civil cases (in line with the 1940 Act mandating time-bound disposal of civil cases) and envisages an alternative civil dispute resolution forum where it would not be mandatory to have judges or ex-judges presiding. Observers point out, however, that the Arbitration Act has not been able to achieve the desired object because it brought 'conciliation' also within its ambit, and hence the conciliation process has been made subject to the courts' interpretation. It may be noted that the UN prepared a Model Law for all countries to adopt in order to put in place an 'alternative civil dispute resolution' mechanism, on the basis of which the 1996 Act was enacted; but for the reasons stated above the object has not been realized, and the spirit of the Model Law has had no influence on the working of the system. The order of the Apex Court causing an enquiry into the affairs of the BCCI, a private entity, by a former CJI could be cited as an instance of encroachment into matters of the executive.
Even then the problem does not lend itself to a straightforward solution, even in a situation of self-imposed isolation of the judiciary, because very often the public finds executive indifference and wilful inaction on matters of public interest, when the only course left is to seek the courts' intervention. It is indeed a catch-22 situation in a democracy, that is, a situation in which a desired outcome is impossible to attain because of a set of inherently contradictory rules or conditions; and the only way out is to create 'a will' in the sense Rousseau formulated it in his "Du Contrat Social", a social contract to build up strong institutions within all three wings to address public interest and public policy, so as to free the judiciary of the excessive burden of its 'judicial review' and PIL-related functions and enable it to pay due attention to the equally important matter of disposal of cases and delivery of justice. Keeping this broad perspective in view, it will be useful to take a practical view of some recent controversies in this regard, namely over the collegium system of appointment of judges of the Supreme Court and the High Courts, implementation of the Lokpal and Lokayuktas Act 2013, and the implications of the orders of the Supreme Court in the Singur land acquisition case in West Bengal to protect the rights of farmers and in the Vedanta case in Orissa to preserve the rights of the indigenous people. This is attempted in the following two sections:
The current stand-off between the Union government and the Supreme Court is about the collegium system in vogue, under which judges appoint judges, and the President approves the same and appoints them under Article 217 of the Constitution. The Supreme Court collegium is headed by the CJI with four other senior-most judges, while a High Court collegium is headed by the Chief Justice of that court with its senior-most judges. The collegium scrutinizes the names, prepares a list and recommends the list to the government. Note that the latter has no role in making the select list; though it could raise objections or seek clarification, if the collegium sticks to the list the government has no option but to refer the list to the President for action under Article 217, read with the relevant Article 124(2), for appreciation of the legal position.
The collegium system is the outcome of judge-made law, for no provision of the Constitution, nor any Act of Parliament, provides for it; it has its genesis in the three 'Judges Cases'. Briefly stated, the first case, SP Gupta vs UOI in 1981, held that the primacy of the CJI in the appointment of judges is not built into the Constitution, and thus that the term 'consultation' did not mean 'concurrence', meaning in plain language that the President was not bound to go by the recommendation, which tilted the balance in favour of the executive in the matter of appointment of judges. The second judgement of the Supreme Court, in the case of Advocates-on-Record vs the UOI (Union of India), overruled the first judgement, introduced the collegium system as it exists today, and accorded primacy to the CJI, in that the CJI's recommendation should normally be acted upon by the President unless there is an adverse report of the IB in respect of a recommended person. The third case arose out of a reference made by President KR Narayanan under Article 143 of the Constitution as to the meaning of the term 'consultation', that is, whether consultation with the CJI alone on the appointment of judges would suffice or whether it required consultation with a number of judges. On this the Supreme Court laid down nine guidelines for the functioning of the forum which constitutes the collegium system. Not being happy with the system, the government constituted the NJAC in August 2014 by the 99th amendment to the Constitution, supplanting the collegium system. However, in October 2015 the Supreme Court struck down the NJAC Act 2014 and restored the collegium system. In the meantime the vacancies in the Supreme Court and the High Courts have gone up to 42.7 per cent, that is, 461 posts out of 1,079 sanctioned posts are vacant, causing a further rise in the number of pending cases. The stand-off continues to date, as the Supreme Court has reiterated the list of 43 names for appointment rejected by the government.
This stand-off continues, and the Union Finance Minister is on record stating that due to the encroachment of the judiciary into the legislative domain, "the edifice of the Indian legislature is being destroyed". While the aforesaid view is unexceptionable, there is some truth in the point that the collegium system lacks eligibility criteria and is therefore non-transparent, and that the Executive should have some say in the matter of the selection of judges. However, the 2015 ruling of the Supreme Court offered the government an alternative in the form of a Memorandum of Procedure, which the Supreme Court asked the government to prepare and which is still not in place as an agreed procedure. Consequently the government has been slow in approving the names and the Supreme Court unwilling to accept the government's stand, and so a crisis situation has arisen, with arrears mounting.
The constitutional architecture of parliamentary democracy, one must realize, is delicate and requires mature understanding by those who head each of the three wings of their respective roles; an assertion that one wing is supreme, for whatever reason, is bound to damage this architecture and upset the balance, as happened when a state of national emergency was declared in 1975-76 following a chain of events that started with an order of the Allahabad High Court setting aside the election of Smt. Indira Gandhi to the Lok Sabha. The continuing stand-off between the Supreme Court and the government on the issue of the NJAC is thus rightly viewed by Christopher Jaffrelot, a keen India watcher, as an "ill-judged conflict", and it is hurting citizens, especially the litigant public, rather deeply, as they are paying the price for delay in the disposal of cases.
On 23rd November 2016, the Supreme Court pulled up the Central government, in the course of hearing a petition in the nature of a PIL, for not amending the Lokpal Act 2013 to recognize the leader of the single largest opposition party as leader of the opposition in Parliament for the purpose of constituting the selection committee under section 4(1) of the Act, a lacuna which stands in the way of the appointment of a Lokpal even though the Act was notified on 1st January 2014. The Honourable bench of the Apex Court observed that the institution of Lokpal was intended to bring probity to public life, and that the institution must therefore work: "We will not allow a situation where the institution is rendered redundant." These are indeed very strong words, and there is strength in the argument of the petitioner, the NGO Common Cause, that if the government felt strongly about it, it could have issued an ordinance to amend the aforesaid section of the Lokpal Act. The court proceedings reported in the press suggest that the Attorney General of India put forward the view that, since the government had introduced a bill to make the necessary amendments, the judiciary could not issue any directions to Parliament, a view the honourable court found unacceptable; hence the court's anguish at executive failure, expressed rather strongly. Whether this is a case of judicial activism, of overreach, or a reasoned response to the matter is a question that has been agitating people's minds, particularly since 2011, when the movement against corruption was launched by Anna Hazare, resulting in the enactment of the Lokpal Act 2013; it may be recalled that this law was the outcome of lengthy consultation between the movement's leaders and the central government. We may also recall that a Lokpal bill was first introduced in 1968, under Mrs Indira Gandhi, and since then it has been part of the anti-corruption agenda.
Against this background, the Act's non-implementation for over two and a half years on a mere technicality, which could have been resolved by promulgating an ordinance, leaves reasonable doubt about the government's commitment to the Act and its stated objects. The Supreme Court's view on the subject will therefore be shared by many and taken as a proper response and directive, not as an instance of judicial activism. Leaving aside this observation of the Supreme Court, the reader may note that the entire scheme of the Lokpal under the Act, as embodied in Chapter VI, is designed to create a kind of superior oversight body to ensure probity in the functioning of the highest executive office and of all other public servants associated with the Central Government, through the exercise of wide-ranging powers of enquiry and prosecution. The Lokpal will therefore really be a power structure above the government. How such a powerful oversight body will fit into the Westminster type of parliamentary democracy that India has adopted is worth serious study, for the institution of the Ombudsman, from which the idea of the Lokpal has been derived, exists in the Scandinavian countries but not in England, the source of our constitutionalism.
Let us now look at two landmark orders of the Supreme Court: first, the Singur land acquisition case, in which the West Bengal Industrial Development Corporation acquired land for the setting up of a car manufacturing plant, to be transferred to Tata Motors on lease for the project; and second, the acquisition of land for a bauxite-aluminium project at the Niyamgiri hills in Orissa by the UK-based mining company Vedanta Resources, for which forest lands were acquired by a State PSU, the Orissa Mining Corporation, even though all 12 gram sabhas of the area exercised their power under the Forest Rights Act 2006 to pass resolutions rejecting the proposed land acquisition. The similarities between these two cases are remarkable, as both relate to the exercise by the State of what is called in law 'eminent domain', that is, the power of the sovereign State to take private property for public use or public purpose.
In the Singur case the Supreme Court noted a serious infirmity and procedural irregularity: the somewhat lengthy procedure prescribed under the Land Acquisition Act, 1894 for the acquisition of land for companies, for projects serving a public purpose, was not followed. Such a purpose could include an industrial unit like the Tata project to manufacture 'Nano' cars in Singur, as it would generate employment, income and so on. Instead of following the aforesaid procedure, the State invoked the emergency clause for acquisition, which can only be invoked in situations caused by emergencies such as earthquakes and flash floods, or to meet defence needs, none of which applied to Singur, and all the more so because the land was to be acquired by the State Industrial Development Corporation. The Court also noted that the mandatory enquiry into the objections filed by the affected farmers was not conducted by the Collector, who ordered summary rejection, presumably at the instance of the Government of West Bengal; hence a case of denial of justice and of procedural irregularities was established. The impugned order of acquisition was thus set aside.
In the other case, Orissa Mining Corporation vs the Ministry of Environment and Forests, the Supreme Court set aside the order of acquisition on the following grounds: the subject land was forest land at the Niyamgiri hills, and the tribals, being 'traditional forest dwellers' under the Forest Rights Act, and their gram sabhas were well within their rights to hold that they did not want mining. Thus in its April 2013 order the Supreme Court held that the project violated the Forest Rights Act, the international conventions for the protection of indigenous communities, and Articles 25 and 26 of the Constitution of India.
Thus what is common to these two cases is 'eminent domain', the power of the State to acquire land, which, the Court held, is not absolute or unfettered: it has to be tempered by the duty of the State to preserve the rights of the indigenous tribes in the Orissa case, and, in Singur, the farmers' rights to their livelihoods and their procedural rights to put across why they objected to the land acquisition and to have those objections given reasoned consideration by the State. These rights are inalienable; the State therefore cannot take them away in the name of development, and such State actions are deemed 'arbitrary'. By setting aside these orders the Apex Court has established case laws to be viewed as laws preserving the public interest. From this angle, the order of the Supreme Court of 2 December 2016, passed upon hearing a batch of petitions challenging the legality of various aspects of the 'demonetisation' order dated 8.11.2016 and asking the Central Government to spell out the measures taken to ease the suffering of the people arising from the shortage of cash, particularly in rural areas, is not an instance of 'activism' or of 'interference' in the Executive's domain but of the Court acting well within its constitutional mandate of protecting citizens' rights. In a strictly legal sense this mandate includes such interventions as in the demonetisation case, or the order banning the sale of firecrackers in Delhi in view of grave air pollution posing a threat to the lives of all citizens, especially children. The Supreme Court has since formed a Constitution Bench to examine the question of the legality of the demonetisation and has raised certain objections. A civil service aspirant may be well advised to follow this case to understand how the system of governance actually works.
This discussion would remain incomplete unless the role of the NGT and some of its recent landmark decisions were examined, for a mature appreciation of how the doctrine of 'separation of powers' is actually operating in India. The NGT was established under the NGT Act 2010, enacted by Parliament "for the effective and expeditious disposal of cases relating to environmental protection and conservation of forests and other natural resources including enforcement of any legal right relating to environment and giving relief and compensation for damages to persons and property and for matters connected therewith or incidental thereto". The NGT is thus a specialized environment court, structured as a tribunal: a court is a more formal structure, headed by a judge, with elaborate rules for the conduct of its procedure, whereas a tribunal can include both a court and an administrative hearing board with expert members possessing the necessary knowledge and experience, the emphasis being on resolving a matter through a straightforward approach to the rules of enquiry and evidence, without quibbling over rules and procedures. The NGT is thus founded on the concept of alternative jurisprudence, a departure from 'legalistic adjudication' towards a problem-solving role through an interdisciplinary approach, essential for dealing with the complex and multidisciplinary issues of environmental conservation and ecology, and it functions as an 'alternative dispute resolution' mechanism. One must also note that, for want of this practical approach, the National Environment Appellate Authority set up under the NEAA Act 1997 was a non-starter, and the tribunal envisaged under the National Environment Tribunal Act 1995 was never even established. The constitution of the NGT was therefore a bold and historic step, in line with the UN's Rio Declaration on Environment and Development of June 1992. After the establishment of the NGT, all environment-related cases pending with the Supreme Court were transferred to it.
The NGT enjoys original jurisdiction under Section 14 of the NGT Act and also appellate jurisdiction over its regional branches and the other authorities specified under Section 18. The Green Bench of the Supreme Court retains its original jurisdiction, and appeals against the orders of the NGT lie with the Supreme Court under Section 22 of the NGT Act.
There is an interesting 'global rights' angle to this subject: over the last three decades there has been a convergence of human rights law and environmental rights law, because the latter bundle of rights is now considered to be an enforcement of human rights within the broad right to life, which now includes the right to a healthy environment. It may even be stated that the concept of human rights, enshrined in the UN Declaration of Human Rights 1948, the UN Covenant on Economic, Social and Cultural Rights (1966) and the more recent UN Declaration on the Right to Development, found its conceptual integrity in the right to a healthy environment and to clean air and water as central to the right to life.
A Welsh Buccaneer
This Article © John Weston from Data Wales
Reprinted here with permission for our Welsh History research pages
According to some accounts Henry Morgan was born at Llanrumney (in Welsh, Llanrhymny). In those days, Llanrumney was a manor in the ancient Hundred of Newport in Monmouthshire but nowadays is thought of as a suburb of Cardiff. The manor had been the property of the ancient family of Kemeys but an heiress married Henry Morgan of Machen in the 16th century and the Morgans were here for six generations. However, towards the end of his life Henry Morgan is said to have bought an estate in Jamaica and named it Penkarne. The manor of Pencarn (again in the Hundred of Newport) was itself associated with the Morgans for many centuries. An ancestor Owen, son of the Lord of Caerlleon, lived there in the 12th century. Sir Thomas Morgan of Pencarn became known as “the warrior” after commanding English forces overseas in the 1580s and 1590s. His nephew, Sir Matthew Morgan was wounded at the siege of Rouen in 1591. Matthew’s brother, Sir Charles also served overseas with distinction and became a member of the Privy Council of King Charles I.
A brief note of his career (revised March 2000)

Of the generally available literature on Henry Morgan, I have found Dudley Pope’s “Harry Morgan's Way” (Secker and Warburg, London 1977) to be the most satisfactory and I have followed this in describing Morgan’s exploits. Dudley Pope consulted British and Spanish archives and brought his wide knowledge of maritime history to the topic. It is worth remembering that Morgan’s raids were carried out in his capacity as a “privateer”. Like commanders in many colonial outposts of the time, he was authorized to act as an agent of his country at a time when official government forces were often not available so far from home. His reports to the governor of Jamaica and papers between Jamaica and London survive. His own official reports of his exploits are usually laconic in the extreme and seem to reveal a naturally modest man, not comfortable with the sometimes rather flowery prose of his day. As he once wrote, “I ... have been more used to the pike than the book..."
There is little doubt that the detailed descriptions of his famous raids on Spanish possessions are based on the writings of a Dutch man known as Esquemeling who took part in some of these raids and published his account as De Americaensche Zee-Rovers. This was translated for the Spanish market and entitled Piratas de la America y Luz.... An English translation followed and this was called Bucaniers (sic) of America ... Wherein are contained ... the Unparallel’d Exploits of Sir Henry Morgan, our English (sic) Jamaican Hero. Who sacked Puerto Velo, burnt Panama etc..... This (and another English translation) incorporated material which Esquemeling seems to have included with an eye to his Dutch and Spanish readers many of whom would have been antagonistic towards Morgan. When the English translations were read by Morgan he promptly sued the publishers who eventually settled out of court. Each paid him 200 pounds in damages and issued new editions with apologetic prefaces. The original books had accused Morgan of atrocities but he seems to have been most upset by passages which stated that he had arrived in the West Indies as an indentured servant, like so many of the early settlers. The new prefaces pointed out that Morgan “was a gentleman’s son of good quality in the county of Monmouth, and never was a servant to anybody in his life, unless unto his Majesty...". It is well known that Welsh people were particularly proud of their pedigrees and in this respect Morgan was true to type.
Henry Morgan was born around 1635. He arrived at the West Indian island of Barbados in 1655 as a junior officer in an expedition sent out by Oliver Cromwell and commanded by General Venables (the naval commander was Vice-Admiral Penn, whose eldest son gave his name to the American state). This was the time of the Commonwealth. King Charles I had been executed and Cromwell’s head appeared on the coinage. One of Morgan’s uncles, Major General Sir Thomas Morgan, fought for Cromwell while the other, Colonel Edward Morgan, supported the crown. After the restoration of the monarchy, Uncle Edward was sent out to Jamaica as lieutenant governor. The Venables expedition had by now captured the island of Jamaica with its large natural harbour and strategic position. Henry, already famous in Jamaica, courted and married his uncle’s oldest surviving daughter Mary Elizabeth, and her sisters became wed to two of his trusted friends. Henry remained faithful to his wife until his death in 1688, but they were not blessed with children.
Henry learned much from Commodore Christopher Mings when he sailed as part of the flotilla which first attacked and plundered Santiago (Cuba) and in 1663 when he commanded a vessel in the attack on the Mexican coast. In this, 1100 men described as privateersmen, buccaneers and volunteers sailed more than 1000 miles to attack Campeche. The town, defended by two forts and regular Spanish troops, fell after a day of fighting and the buccaneers took fourteen Spanish ships from the port as prizes.
Why did the English authorities seem to encourage the activities of the buccaneers? The answer lies in the fact that people in power in London knew that Britain’s future prosperity rested on her ability to expand trading markets. The Spanish had claimed the New World and Spain had become dependent upon the gold and silver it produced. They sought to control trade and limit it to Spanish ships. At the time in question, it was not unknown for the Spaniards to capture British ships in the West Indies and to enslave their crews. The Spanish Armada had sailed to attack England only seventy or so years before, and the perceived threat from Spanish Catholicism was probably greater than the more recent worry about eastern European communism. England had no colonies where slaves toiled in gold mines and knew that only the outposts of the enfeebled Spanish Empire prevented it from exploiting new opportunities for trade.
Why were buccaneers so called? The original boucaniers were the native inhabitants of the West Indies who had developed a method of preserving meat by roasting it on a barbecue and curing it with smoke. Their fire pit and grating were called a boucan and the finished strips of meat were also known as boucan. In time, the motley collection of international refugees, escaped slaves, transported criminals and indentured servants who roamed along the coasts of the islands became known as buccaneers and the term came to describe an unscrupulous adventurer of the area.
In 1663, Henry Morgan was one of five captains who left the old Port Royal in Jamaica and set a course for New Spain. They were not to return for about 18 months. Although his fellow captains were experienced privateers, it seems likely that Morgan became leader of the expedition because of his background as a soldier. It might be as well to remind readers that the renowned exploits of the buccaneers took place on land. In most cases, ships were simply used to carry them to a safe landing from which they could march to attack a fortified town. Battles on the high seas were not liable to be so rewarding so these were generally not sought. It is also worth pointing out that whereas Morgan seemed to lead a charmed life in the face of danger on land, at sea he was rather unlucky. One ship exploded beneath him when his crew, the worse for drink, lit candles near the gunpowder stores and on another occasion his ship struck a reef near shore and he had to be rescued from a rock.
On the expedition mentioned above, the small fleet sailed from Jamaica and rounded the Yucatan peninsula to the Gulf of Mexico. They landed at Frontera and marched 50 miles inland to attack Villahermosa. After sacking this town they found that their own ships had been captured by the Spanish so they had to themselves capture two Spanish ships and four coastal canoes in which to continue their epic voyage. They sailed and paddled 500 miles against an adverse current to return around the Yucatan peninsula and continued along the coast of Central America. They landed on the coast of modern Nicaragua and again struck inland to attack a rich town called Granada. This was taken in a surprise raid and the official report said that more than a thousand of the Indians “joined the privateers in plundering and would have killed the (Spanish) prisoners, especially the churchmen...”.
Morgan and his men returned to Jamaica with great riches. As Dudley Pope points out, by 1665 Morgan had taken the lead in the most audacious buccaneering expedition ever known in the West Indies. He could have settled to the comfortable life of a planter, and this might have been expected after his marriage to Mary Morgan, but it was felt that Jamaica was threatened and it seems Morgan was asked to organize the island’s militia and defenses. This task completed, in 1668 he gathered a fleet of a dozen privateers at a rendezvous in the tiny islands south of Cuba known as the South Cays. Seven hundred men crewed vessels we would regard as very small these days. The largest was perhaps the Dolphin, a Spanish prize. She was of fifty tons, carried 8 guns and was perhaps 50 feet along the deck. Some of the vessels were merely large open boats with some shelter for the crews and provisions. They would have a single mast and could be rowed when necessary.
It was decided to attack the town of El Puerto del Principe, which despite its name was 45 miles inland from the Cuban coast. In Morgan’s words “we marched 20 leagues to Porto Principe and with little resistance possessed ourselves of the same. ... On the Spaniard’s entreaty we forbore to fire the town, or bring away prisoners, but on delivery of 1,000 beeves, released them all.” This raid did not provide much plunder and on their return to the coast most of the French captains decided to join up with their countryman, the bloodthirsty privateer L’Ollonais, at Tortuga. Thus, in May of 1668 Morgan sailed with his remaining force south, across the Caribbean to a place near the present-day Panama Canal, called a council of war and announced his intention to attack the heavily defended harbour of Portobelo. He was soon to write “we took our canoes, twenty-three in number and rowing along the coast, landed at three o’clock in the morning and made our way into the town, and seeing that we could not refresh ourselves in quiet we were enforced to assault the castle...” When they had captured the fort of San Geronimo they made their way to the dungeon and there found eleven English prisoners covered with sores caused by the chafing of their heavy chains. The story of the plundering and further attack on a fort in the centre of Portobelo is too long to be told here but it made Morgan’s name as a daring and successful leader. So much coin was plundered that Spanish pieces of eight became additional legal currency in Jamaica.
Later in 1668, Morgan sailed with ten vessels to Cow Island off the coast of Hispaniola (modern Haiti). Here the Oxford, a warship sent out for the defense of Jamaica by the British government, found the French privateer ship Le Cerf Volant. The British master of a ship from Virginia had accused the French vessel of piracy, so the Cerf Volant was arrested and condemned as a prize by the Jamaica Court of Admiralty. After the Oxford was blown up (in an explosion said to have killed 250 people) while Morgan dined in the great cabin, the Cerf Volant ultimately became his flagship, under the new name of Satisfaction. After cruising east along the coast of Hispaniola and attacking coastal towns along the way, Morgan turned south to sail across the Caribbean again, making for Maracaibo in the Gulf of Venezuela. This he took, together with the more southerly town of Gibraltar. On their return journey, the privateers were bottled up at the lake of Maracaibo by several large Spanish warships and a reinforced fort. Morgan had to use great ingenuity to escape. While doing so he added to his treasure yet again.
In 1670 Morgan assembled an expedition of 36 ships and over 1800 men at a safe anchorage off Hispaniola. At a meeting with his captains, English and French, it was decided to attack Panama, the legendary Spanish City of the Indies. All the riches of the mines of Peru passed through here on the way to Spain and the city was known to be full of rich merchants and fine buildings. The task confronting Morgan was extremely difficult and dangerous. There was no Panama Canal and his force would have to take the Caribbean island of Old Providence, sail from there to land at Chagres and cross the isthmus to Panama through thick jungle and across high mountains. Even England’s hero Sir Francis Drake had failed in a similar undertaking many decades before. After many battles and privations Panama finally fell. The city burned after some houses were fired by the defenders and after the buccaneers left the ruins were overgrown with vegetation. Ultimately a new city was built miles away at Perico. (If you are interested in a more informed account of Morgan’s activities in Panama, Sean P. Kelley knows the country and describes Morgan’s exploits there within his resource on Colonial Panama.)
Morgan returned to Jamaica minus his ship the Satisfaction, which had been wrecked on a reef, but his fleet docked at Port Royal with hundreds of slaves and chests of gold, silver and jewels. Under the strict agreement that governed the division of the spoils in those days, Dudley Pope estimates that Morgan would have made 1,000 British pounds (around 1,600 USD) from the Panama expedition, and it is known that ordinary seamen pocketed 200 pieces of eight (worth 50 pounds or 80 dollars).
By the time that the sack of Panama was known in London, politics had taken a turn. There were those who sought to conciliate Spain, especially since reports from some European capitals suggested that she was near to declaring war on England. It was thought prudent to arrest Modyford, the governor of Jamaica, and later to arrest Morgan. In 1672 Morgan sailed for London in the Welcome, a leaky naval frigate. He arrived in a country which differed greatly from the one he had left seventeen years before. Then it had been Puritan; now the monarchy had been restored and London was once more a city of theatre, fashion, corruption and fascinating figures. Some of Christopher Wren’s new classically inspired churches already adorned the city and the diarist Samuel Pepys became secretary to the Board of Admiralty in 1673. There is no record of Morgan having been detained and he seems to have spent three years in London at his own expense but free to meet the people he chose. He became friendly with the second duke of Albemarle (Morgan’s uncle had fought with the duke’s father in the Civil Wars) and it seems that this friendship brought Morgan to the notice of King Charles II. In time, England’s attitude to Spain changed and when the King became aware that the English colony of Jamaica was under threat again, he asked Morgan for advice about the defence of the island, knighted him and wondered if he might like to return there as Lieutenant Governor.
At the age of 45, Sir Henry was acting governor of Jamaica, Vice-Admiral, Commandant of the Port Royal Regiment, Judge of the Admiralty Court and Justice of the Peace. Dudley Pope sketches a picture of a tall and generally lean man but one who now exhibited a paunch. He was known to drink heavily and to be fond of the company of his old comrades in the rum shops of Port Royal. He seems to have worked to transform the island’s fortifications and he survived various political upheavals while expanding his estate. In 1687 the duke of Albemarle arrived in Jamaica to take up his post as the new governor. Christopher Monck’s private yacht was of a type never seen in those waters and the merchant ships which accompanied it carried 500 tons of his possessions and stores as well as around a hundred servants. His wife, Lady Elizabeth Cavendish, formerly the toast of London society, had, by the age of 27, become mentally unstable, and to attend on her the duke had brought out young Dr. Hans Sloane. Sloane’s name was to become famous in many fields but for those with an interest in the history of the buccaneers he is always remembered for his notes on the last days of Sir Henry Morgan. Sloane attempted to treat Morgan, finding him yellow of complexion and swollen, but it seems that Morgan’s frame did not respond. At one time Sloane describes Morgan as having sought the advice of a black doctor who plastered him all over with clay and water, but even this treatment failed and he signed his will in June of 1688. On the 25th of August he died.
For many of us today, Henry Morgan is little more than the name of a “pirate” of yore, but I now see signs of Morgan being re-evaluated as one of Britain’s most successful military strategists and as a man with the leadership qualities of an Alexander. He gained the loyalty of the buccaneers, who followed him without question, and the respect of kings and princes. Of all the great figures in Welsh history he must be counted among the most attractive and able.
The antecedents of the buccaneer, Sir Henry Morgan.
Most writers on Henry Morgan are rather vague as to his background. His roots in the Welsh county of Monmouthshire are not disputed but there is uncertainty about his parentage. This may be a result of deliberate obfuscation, on his part, during his lifetime. He lived in troubled times and it is possible that, at times, he was concerned that his reputation would not enhance the ancient Morgan pedigrees.
He says: “The eldest son of Thomas of Machen Plas was another Rowland. Rowland’s second son Henry and Henry’s son Thomas were to Llanrhumney. (The first Henry had married Catherine Kemeys, the heiress of the manor of Llanrhumney. J.W.) Thomas of Llanrhumney’s third son Robert was living in London in 1670. It was Robert’s son Henry who became notorious as Sir Henry Morgan the Buccaneer.”
This requires further investigation, but Robert Morgan did have a brother called Edward and it is known that the buccaneer’s uncle, when Lt. Governor of Jamaica, was styled Colonel Edward Morgan. Note: Llanrhymny is a place name fraught with problems. It is difficult for non-Welsh speakers to pronounce; the spelling changes in English (Llanrhumney or Llanrumney) and even the Welsh spelling is disputed. “Llan” implies an ancient site with religious significance but there are those who say the first syllable should be “Lan” (derived from the Welsh “Nant”) giving the meaning “the bank of the River Rhumney”. Oh - it’s also quite hard to type and get right, try it!
Welsh buccaneer Henry Morgan in a coloured picture.
Sir Henry Morgan (Welsh: Harri Morgan, c. 1635 – 25 August 1688) was a Welsh privateer, landowner and, later, Lieutenant Governor of Jamaica. From his base in Port Royal, Jamaica, he raided settlements and shipping on the Spanish Main, becoming wealthy as he did so. With the prize money from the raids he purchased three large sugar plantations on the island.
Much of Morgan's early life is unknown. He was born in south Wales, but it is not known how he made his way to the West Indies, or how he began his career as a privateer. He was probably a member of a group of raiders led by Sir Christopher Myngs in the early 1660s. Morgan became a close friend of Sir Thomas Modyford, the Governor of Jamaica. When diplomatic relations between the Kingdom of England and Spain worsened in 1667, Modyford gave Morgan a letter of marque, a licence to attack and seize Spanish vessels. Morgan subsequently conducted successful and highly lucrative raids on Puerto Principe (now Camagüey in modern Cuba) and Porto Bello (in modern Panama). In 1668 he sailed for Maracaibo and Gibraltar, both on Lake Maracaibo in modern-day Venezuela. He raided both cities and stripped them of their wealth before destroying a large Spanish squadron as he escaped.
In 1671 Morgan attacked Panama City, landing on the Caribbean coast and traversing the isthmus before he attacked the city, which was on the Pacific coast. The battle was a rout, although the privateers profited less than in other raids. To appease the Spanish, with whom the English had signed a peace treaty, Morgan was arrested and summoned to London in 1672, but was treated as a hero by the general populace and the leading figures of government and royalty including Charles II.
Morgan was appointed a Knight Bachelor in November 1674 and returned to Jamaica shortly afterward to serve as the territory's Lieutenant Governor. He served on the Assembly of Jamaica until 1683 and on three occasions he acted as Governor of Jamaica in the absence of the post-holder. A memoir published by Alexandre Exquemelin, a former shipmate of Morgan's, accused the privateer of widespread torture and other offences; Morgan brought a libel suit against the book's English publishers and won, although the black picture Exquemelin painted of Morgan has affected history's view of the Welshman. He died in Jamaica on 25 August 1688. His life was romanticised after his death and he became the inspiration for pirate-themed works of fiction across a range of genres.
Henry Morgan - Wikipedia
Origin of the Welsh Red Dragon
The dragon has long been a symbol of Wales. It features (in its proper red colour) on the national flag and is often to be found marking goods of Welsh origin. How did this exotic oriental beast find its way to Wales? The dragon was perhaps first seen in Wales in Roman times. The Romans were thought to have gained knowledge of the dragon from their Parthian enemies (in lands later to become part of the great Persian Empire) and it is to be seen carved on Trajan’s column. It is probable that the dragon had been seen in the West much earlier than this, as a result of Alexander the Great’s epic journey which commenced in 334 BC. Alexander marched as far as northern India and, after his death, the break-up of his mighty empire saw an increase in trade with Africa and India and, for the first time, commerce with China.
The Roman draco was a figure fixed by the head to the top of a staff, with body and tail floating in the air, and was the model for the dragon standard used by the Anglo-Saxons. In the Bayeux Tapestry, this device is depicted as the standard of King Harold, although written records seem to disagree. In 1190 “the terrible standard of the dragon” was borne before the army of Richard Coeur-de-Lion in an attack at Messina.
The seventh century Welsh hero Cadwaladr carried the dragon standard and the dragon had become a recognized symbol of Wales by the time Welsh archers were serving in the English army at the battle of Crecy in 1346. It is said that a dragon banner was thrown over the Black Prince when he was unhorsed at Crecy, in protection while his enemies were beaten off. The future King Henry VII carried the dragon banner at the Battle of Bosworth Field in 1485. This battle signaled the end of the War of the Roses between Lancastrian and Yorkist factions and led to unification. Henry later decided that the red dragon should figure on the official flag of Wales.
The Men of Harlech
The song Men of Harlech is something of an unofficial anthem in Wales. Every Welsh person knows the tune and despite the variety of lyrics over the years, the martial air has become identified with the country’s determination to retain its identity. Harlech Castle in north Wales, one of the “iron ring” of castles intended to subdue Wales in medieval times, remains as a picturesque reminder of the ultimate futility of the invader’s ambition.
Outside of Wales, the song has become well known as a result of the film Zulu which told the story of a small detachment of soldiers and their epic stand against a huge Zulu army in southern Africa. The soldiers were from a regiment which recruited in south east Wales and the borders and their heroism came to be compared with the bold exploits of their ancestors in ancient days. The song would have been known in Wales before the Zulu War but was it actually sung at the battle of Rorke’s Drift? The curator of the unit’s regimental museum thinks it unlikely, since the song (first published in 1860) was only officially adopted by the regiment in 1881, whereas the action depicted in the film took place in 1879. Although we have the 1860 lyrics of the song, we do not have the version from the film. This was written especially for the film and enquirers are advised to seek the copyright holders (whoever they may be).
Just who were the Men of Harlech and how did they come to be associated with a bloody battle in Africa? The answer is to be sought through the mists of time and the story starts in the year 1283 when King Edward I ordered a mighty castle to be built at Harlech on the coast of Merionethshire in north Wales. This was just one of a ring of great castles designed to prevent the Welsh from challenging the sovereignty of England. The task of designing and building the castle was given to the Master of the King’s Work in Wales, James of St. George. This man, one of the great military engineers of history, built a castle of the concentric type defended at the back by the sea and at the front by massive towers and walls up to twelve feet thick.
The defences of Harlech Castle were first tested in 1294 when a 37-strong garrison fought off Welsh besiegers led by Madog. In the next century the castle became neglected but was repaired before the occasion of the revolt led by Owain Glyndwr. After a long and grim siege Harlech was captured by Owain in 1404. The revolt could not be sustained, however, and the castle was recovered for the crown in 1408.
Eventually famine forced surrender and Dafydd handed the castle to Lord Herbert and his brother Sir Richard Herbert on honourable terms. King Edward IV at first refused to honour the terms of the settlement but Sir Richard Herbert, out of respect for the bravery of the defenders, offered his own life in exchange for Dafydd’s rather than see his promise broken. These defenders were the Men of Harlech commemorated in the song.
Harlech Castle enjoyed 200 years of peace but became a testament to the genius of the designer, Master James, when it endured a further long siege in the first part of the Civil War. It finally surrendered to Oliver Cromwell’s forces in 1647. The South Wales Borderers and Monmouthshire Regimental Museum has paintings depicting the actions in the Zulu War. The regimental chapel in Brecon Cathedral holds the Queen’s Colour, the banner which was recovered from a nearby river after the battle of Isandhlwana. Lieutenants Melvill and Coghill were cut down in its defence and were posthumously awarded Britain’s highest military honour, the Victoria Cross. The bravery of the defenders of Rorke’s Drift was also recognised when Lieutenant Bromhead and six soldiers were awarded the Victoria Cross. They might not have sung the song, not all of them were Welsh, but no one would dispute that they were Men of Harlech.
I have located the Monmouthshire resting place of one of the soldiers who won the Victoria Cross at Rorke’s Drift and will add a photograph of this when time permits. Welsh people have always taken the song Men of Harlech on their wanderings around the world but the film Zulu introduced it to lots of people who simply enjoyed the song as a traditional, rousing, martial air. Mr. B. M. of California wrote with a delightful anecdote of his student days at Pomona College. I asked permission to publish it here since his note also reminded me of William Randolph Hearst’s connection with Wales.
Edmund Tudor, who died in 1456, was the father of King Henry VII, who won the crown of England at the battle of Bosworth Field in 1485. The partial Welsh ancestry of the Tudor monarchs was ultimately responsible for better relations between England and Wales.
The illustration is based on the brass at St. David’s Cathedral in Pembrokeshire, west Wales. Edmund’s tomb was originally at the church of the Grey Friars, Carmarthen but was moved at the time of the dissolution of the monasteries. In 1641 and in 1644 Puritan parliaments ordered the removal and defacement of images, crosses, pictures and monuments. Churches around the land were desecrated and the tomb was attacked. The picture represents the restoration of around 1872 but I have removed the great helm upon which Edmund’s head rests in the original and also the hound at his feet. This was done simply to make the outlines of the armour more easily visible. In his “Welsh Monumental Brasses” J. M. Lewis implies that the restoration is probably quite accurate but points out that the tomb must date from after the accession of Edmund’s son as king. The armour pictured is that of the 1480s whereas Edmund died in 1456.
These Articles © Data Wales | <urn:uuid:2cd57716-ca56-4551-859f-e0aa41d5402c> | CC-MAIN-2017-17 | http://landoflegendslv.com/01library/05research/01con/02BH/Welsh/CaptainMorgan/morgan01.html | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917127681.50/warc/CC-MAIN-20170423031207-00134-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.984236 | 6,434 | 2.8125 | 3 |
A Better World - Part Two - 5
Modern and Progressive Social and Cultural Norms
The political and administrative norms and practices in society should be modern, secular and progressive. This requires completely purging the state and administration of religion, ethnicity, nationalism, racialism and any ideology or institution that contradicts the absolute equality of all in civil rights and before the law, or stifles freedom of thought, criticism and scientific enquiry. Religion and nationalism are by nature discriminatory and reactionary trends, incompatible with human freedom and progress. Religion specifically, even if it remains a private affair of the individual, is a barrier to human emancipation and development.
The establishment of a modern secular state and political system is merely the first step towards complete emancipation from religious, national, ethnic, racial and sexual bigotry and prejudice.
The Worker-communist Party calls for the immediate implementation of the following:
Religion, nationality and ethnicity
1 - Freedom of religion and atheism. Complete separation of religion from the state. Omission of all religious and religiously-inspired notions and references from all laws. Religion to be declared a private affair of the individual. Removal of any reference in laws and in identity cards and official papers to the person's religion. Prohibition of ascribing people, individually or collectively, to any ethnic group or religion in official documents, in the media, and so on.
2 - Complete separation of religion from education. Prohibition of teaching religious subjects and dogmas or religious interpretation of subjects in schools and educational establishments. Any law and regulation that breaches the principle of secular non-religious education must be immediately abolished.
3 - Prohibition of any kind of financial, material or moral support by the state or state institutions to religion and religious activities, institutions and sects. The state to have the duty to eradicate religion from the various spheres of social life by informational means and by raising the public's level of education and scientific knowledge. Omission of any kind of reference in the official calendar to religious occasions and dates.
4 - Prohibition of violent and inhuman religious ceremonies. Prohibition of any form of religious activity, ceremony or ritual that is incompatible with people's civil rights and liberties and the principle of the equality of all. Prohibition of any form of religious manifestation that disturbs people's peace and security. Prohibition of any form of religious ceremony or conduct that is incompatible with the laws and regulations regarding health, hygiene, environment and prevention of cruelty to animals.
5 - Protection of children and persons under 16 from all forms of material and spiritual manipulation by religions and religious institutions. Prohibition of attracting persons under 16 to religious sects or religious ceremonies and locations.
6 - All religious denominations and sects to be officially registered as private enterprises. Subjection of religious establishments to enterprise laws and regulations. Auditing, by legal authorities, of the books and accounts and transactions of religious bodies. Subjection of these institutions to the tax laws which apply to other business enterprises.
7 - Prohibition of any physical or psychological coercion for acceptance of religion.
8 - Prohibition of religious, ethnic, traditional, local, etc. customs that infringe on people's rights, equality and freedom, their enjoyment of the civil, cultural, political and economic rights recognised under the law, and their free participation in social life.
9 - Confiscation and repossession of all property, wealth and buildings that the religious establishments have acquired by force or through the state and various foundations under the Islamic regime. These to be placed in the hands of popularly-elected bodies for the benefit of the public.
10 - Prohibition of ascribing individuals or groups to a particular nationality, in public, in the media, in offices, etc. without their express permission.
11 - Omission of any reference to the person's nationality in identity cards, official documents, and official business.
12 - Prohibition of incitement of religious, national, ethnic, racial, or sexual hatred. Prohibition of forming political organisations which openly and officially proclaim superiority of one group of people over others on the basis of their nationality, ethnicity, race, religion, or sex.
Cohabitation, family, marriage and divorce
1 - The right of every couple over 16 to live together by their own choice. Any form of coercion of individuals by any person or authority in choice of their partner, in cohabitation (or marriage) or in separation (or divorce) is prohibited.
2 - Simple registration is sufficient for cohabitation to be recognised officially and be covered by family laws, if the parties so wish. Secularization of marriage. Prohibition of religious rituals and recitals at state ceremonies for registration of marriage. Holding or not holding special ceremonies, religious or secular, for marriage has no bearing on its validity or status before the law.
3 - Prohibition of any form of financial transaction in marriage, such as fixing a Mehriyye, Shirbaha, Jahizieh (various cash and kind payments by the two parties), and so on, as terms and preconditions of marriage.
4 - Prohibition of Ta'addod Zowjat (Islamic right of multiple marriages for men) and Seegheh (Islamic rent-a-wife).
5 - Equal rights for woman and man in the family, in the choice of residence, in care and education of children, in decisions concerning the family's property and finances, and in all matters concerning cohabitation. Abolition of man's special status as the head of the household in all laws and regulations, and equal rights for woman and man in supervision of the family's affairs.
6 - Unconditional right of separation (divorce) for woman and man. Equal rights and obligations for woman and man in the custody and care of children after separation.
7 - Equal right of partners during separation with respect to property and resources that have been acquired or used by the family, during cohabitation.
8 - Abolition of the automatic transfer of father's family name to children. The decision on the child's surname to be left to the mutual agreement of the parents. If no agreement is reached, the child takes the mother's surname. References to parents' names to be omitted from identity cards and other official identity documents, such as passport, driving license, etc.
9 - Material and moral support by the state to single parents. Special support to mothers who have separated or borne their children outside marriage, in the face of economic difficulties or reactionary cultural and ethical pressures.
10 - Abolition of all anachronistic and reactionary laws and regulations that treat the sexual relationship of men or women with persons other than their espouses as a crime.
Children
1 - Every child's right to a happy, secure and creative life.
2 - Society is responsible for ensuring the well-being of every child irrespective of her family's means and circumstances. The state is obliged to ensure a uniform, and the highest possible, standard of welfare and development opportunities for children.
3 - Allowances and free medical, educational and cultural services to ensure a high standard of living for children and youngsters regardless of family circumstances.
4 - Placing all children without a family or familial care under the guardianship of the state, and providing for their life and education in modern, caring, progressive and well-equipped centres.
5 - Creation of well-equipped, modern nurseries to ensure that all children are provided with a creative educational and social environment regardless of family circumstances.
6 - Equal rights for all children, whether born in or outside marriage.
7 - Prohibition of professional employment for children and youngsters under 16.
8 - Prohibition of abuse of children at home, in school and the society at large. Strict prohibition of corporal punishment. Prohibition of subjecting children to psychological pressure and intimidation.
9 - Decisive legal action against sexual abuse of children. Sexual abuse of children is deemed a grave crime.
10 - Prosecution and punishment of anyone who in any way and under any pretext impedes children, whether boys or girls, from enjoying their civil and social rights, such as education, recreation, and participation in children's social activities.
1 - Free and consensual sexual relationship is the undeniable right of anyone who has reached the age of consent. The legal age of consent for both women and men is 15. Sexual relationship of adults (persons over the age of consent) with under-age persons, even if consensual, is illegal, and the adult party is prosecuted under the law.
2 - All adults, women or men, are completely free in deciding over their sexual relationship with other adults. Voluntary relationship of adults with each other is their private affair and no person or authority has the right to scrutinise it, interfere with it or make it public.
3 - Everyone, especially the youth and adolescents, should receive sexual education, and instruction on contraceptive methods and safe sex. Sexual education should be a compulsory part of high school curricula. The state is responsible to rapidly raise the population's scientific awareness of sexual matters and the rights of the individual in sexual relationship, by putting out information, setting up clinics and advisory services accessible to all concerned, special radio and TV programmes, and all other effective methods.
4 - Contraceptives and VD prevention devices should be freely and easily available to all adults.
Few phenomena display the inherent contempt for human life in the present system, and the incompatibility of the existing class society and exploitative relations with human life and well-being, as starkly as abortion, i.e. the deliberate elimination of the human embryo under cultural and economic pressures. Abortion is a testimony to the self-alienation of people and their vulnerability in the face of the deprivations and hardships that the existing class society imposes on them.
The worker-communist party is against the act of abortion. The party fights for the creation of a society where no pressures or circumstances would drive people to performing or accepting this act.
At the same time, as long as the adverse social circumstances do drive a large number of women to resorting to backstreet abortions, the worker-communist party in order to prevent abuse by profiteers and ensure protection of women's health calls for the introduction of the following measures:
1 - Legalization of abortion up to the twelfth week of pregnancy.
2 - Abortion after the twelfth week to be legally permitted if there is danger to the health of the mother (until that time when Caesarean section and the saving of the foetus is possible given the latest medical expertise). Such cases to be ascertained by the competent medical authorities.
3 - Wide and freely available facilities for pregnancy tests. Instruction of people in their use to ensure quick detection of unwanted pregnancies.
4 - Free abortion and free post-abortion care in licensed clinics by gynaecologists.
5 - The decision whether to have or not to have an abortion rests with the woman alone. The state has the duty, however, to inform her before her final decision, of the dissuasive arguments and recommendations of the scientific authorities and social counsellors as well as of the financial, material and moral commitments of the state to her and her child.
To reduce the number of abortions, the worker-communist party also calls for the introduction of the following urgent measures to prevent unwanted pregnancies and to free women from economic, cultural and moral pressures:
1 - Broad sexual education of people on contraceptives and on the importance of the issue. Widely accessible advisory services.
2 - Wide and free access to contraceptives.
3 - Allocation of adequate funding and resources to help the women who are considering having an abortion because of economic constraints. The state should stress its duty and readiness to take care of the child should the mother decide to give birth to the child.
4 - Resolute campaigns against prejudices and moral pressures that drive women to abortion. Active state support to women against such pressures, prejudices and intimidations.
5 - Campaign against the ignorant, religious, male-chauvinistic and backward attitudes that hinder the growth of people's sexual awareness and, specifically, impede women's and young people's wide use of contraceptives and safe-sex devices.
The fight against drug addiction and drug trafficking
1 - Strict prohibition of sale and purchase of narcotics and the prosecution and severe sentencing of those responsible for the illicit production, and trafficking of drugs.
2 - Helping the fight against drug addiction by eliminating the social and economic grounds that push people to drugs, and protection of drug addicts from pushers and drug-trafficking networks.
3 - Decriminalization of the life of drug addicts. Helping drug users off drugs, through:
a - Creation of state clinics that meet the needs of drug users on the condition that they agree to take part in rehabilitation programmes.
b - Legalisation of the possession of some drugs in quantities needed for personal use. Free hypodermic needles and syringes to be made available through chemists and clinics to all those who need them, to protect drug users from diseases such as AIDS and hepatitis and to contain the spread of such diseases.
c - Prohibition of any form of exile, incarceration or isolation of drug users on the grounds of their addiction. Drug addiction per se is not a crime.
The fight against prostitution
Active fight against prostitution by eliminating its economic, social and cultural grounds, and decisive action against prostitution-organising networks, middlemen and racketeers.
Strict prohibition of organisation of prostitution, dealing, broking, and profiting by the work of prostitutes.
Decriminalization of the life and work of prostitutes. Helping prostitutes to regain their social dignity and self-esteem and freeing their lives from criminal networks and gangs, through:
1 - Legalising sale of sex by the individual as self-employment. Extending the protection of laws and law-enforcement authorities to prostitutes against the mob, racketeers, extortioners, pimps, etc.
2 - Issuing of work permits to those who work as self-employed prostitutes. Upholding their honour and prestige as respectable members of society, and helping them to organise in their own union.
3 - Free special preventive and therapeutic medical services to prostitutes to protect them from diseases and injuries resulting from employment in this profession.
4 - Consistent educational work, encouragement and practical help by responsible state organs to help prostitutes give up prostitution and receive vocational training for work in other areas.
Principles of trials
1 - The accused is innocent until proven guilty.
2 - Trials must take place free of provocation and pre-judgments and under fair conditions. The location of the trial, the judge and the composition of the jury must be so determined as to ensure such conditions.
3 - The accused and their counsels have the right to know and study all the proofs, evidence and witnesses of the prosecution or the plaintiff prior to the trial.
4 - The verdict of the court is appealable, at least once, by the accused, the prosecution or by both parties to the lawsuit.
5 - Prohibition of stirring up public preconceptions about the trial and about the persons involved while the trial is in progress.
6 - Prohibition of trial under circumstances where the pressure of public opinion has denied or compromised the chance of an impartial trial.
7 - The testimony of police carries the same weight as that of other witnesses.
8 - Judges and courts must be totally independent of the process of enquiry and investigation. The legal correctness of the investigation procedure should be supervised and approved by special judges.
9 - In the penal laws, abuse and violation of the person's body and mind, violence against children, so-called crimes of passion committed against women, domestic violence, hate crimes against specific groups of people, and crimes involving violence and intimidation in general, should be treated as much more serious offences than violation of property rights and wealth, both state and private. Vindictive and so-called exemplary punishments should be replaced by punishments meant to be corrective and to shield society from the recurrence of the crime.
Rights of the accused and offenders
1 - A person may be held only for a maximum of 24 hours without being charged. The place of detention should not be a prison but part of the usual quarters of law-enforcement authorities.
2 - Before the arrest, detainees should be informed of their rights.
3 - Everyone has the right to call in a lawyer or witnesses to their arrest and interrogation. Everyone has the right to make two phone calls to their lawyer or relatives, or anyone else they wish, within the first hour of detention.
4 - The law-enforcement authorities do not have the right, before charging a person, to take fingerprints or photographs of the individual or to perform medical checks or DNA tests on the individual without his/her permission.
5 - Upon arrest, the detainees' next of kin or anyone else they decide should be immediately notified of their detention.
6 - Acts of torture, intimidation, humiliation or psychological pressure against detainees, the accused or the convicted is strictly forbidden and is deemed a serious crime.
7 - Obtaining confession by threat or inducement is prohibited.
8 - Peaceful resistance to arrest, peaceful attempt to escape from prison, or evading arrest are not crimes in themselves.
9 - The law-enforcement authorities do not have the right to question or search people or enter their private premises without their permission or the authorization of competent judicial authorities.
10 - Coroner's office, forensic and technical labs responsible for the examination of physical evidence, should be independent of the law-enforcement organs. These institutions work directly under the judiciary.
11 - The police complaints tribunal should be independent of the police and law-enforcement authorities. The findings of the tribunal should be made public.
12 - Files and information kept by law-enforcement bodies on any individual should be readily accessible to him/her for study.
13 - Prisoners are covered by the labour law and the general social welfare and health care laws.
14 - Prisons should be administered by institutions independent of the police and law-enforcement organs and under the direct supervision of the judiciary.
15 - The right of elected inspectors to visit prisons as they see fit and without notice.
Abolition of the death penalty
The death penalty must be immediately abolished. Execution or any form of punishment that involves violation of the body (mutilation, corporal punishment, etc.) is prohibited under all circumstances. Life imprisonment must also be abolished.
Respect for the dignity of people
1 - Prohibition of openly or implicitly grading the dignity and social worth of people on the basis of rank, position, religion, nationality, citizenship, sex, level of income, appearance, physical features, education, and so on.
2 - Prohibition of libel and defamation.
3 - Prohibition of performing medical, pharmaceutical or environmental experiments and tests on individuals without their knowledge and express consent. Prohibition of any violation of the person's physical integrity (such as sterilization, removal or transplantation of organs and limbs, genetic manipulation, abortion, circumcision, and so on) without the knowledge and consent of the individual.
4 - Prohibition of the use of academic, religious, state or military titles and appellations (such as General, Ayatollah, Doctor, Reverend, and so on) outside the appropriate professional environment. In official and state communication every person must be referred to only by his/her first name and surname. Prohibition of the use of derogatory titles and terms in describing various social groups, by any authority or instance, state or private.
5 - Prohibition of designating first and second class, deluxe and standard, etc. sections in public transport, railways, airlines, state hotels, leisure centres, holiday resorts, and so on. Such services must be available to all at a uniform and highest possible standard.
The mass media
Public access to popular press and broadcast media. Creation of public radio and TV networks and sharing of broadcast time among the various organisations and associations of people, such as councils, parties, societies, etc. Total abolition of media censorship - political or otherwise.
National and local languages
Prohibition of a compulsory official language. The state may designate one of the current languages in the country as the main language of administration and education, providing that the speakers of other languages enjoy the necessary facilities in the political, social and educational life and that everyone's right to use their mother tongue in all social activities and to enjoy all public facilities is protected.
Changing the Farsi alphabet
In order to help bridge the gap that separates Iranian society from the forefronts of scientific, industrial and cultural progress in the world today, and in order to help people benefit from the results of this progress and take a more direct and active part in it, the official Farsi alphabet should be systematically changed to Latin.
The party also calls for:
1 - English language to be taught from early school age with the aim of making it a prevalent language of education and administration.
2 - The Western calendar (the official calendar in use internationally today) to be officially recognised and to be used in official documents alongside the local calendar. | <urn:uuid:76b327a2-e882-420b-9ada-0f4462b26906> | CC-MAIN-2017-17 | http://hekmat.public-archive.net/en/0600en.html | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120844.10/warc/CC-MAIN-20170423031200-00483-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.938376 | 4,302 | 2.703125 | 3 |
7.6.1 Completion and Finalization
[This subclause defines completion of the execution of constructs and entities. A master is the execution of a construct that includes finalization of local objects after it is complete (and after waiting for any local tasks; see 9.3), but before leaving. Other constructs and entities are left immediately upon completion.]
The execution of a construct or entity is complete when the end of that execution has been reached, or when a transfer of control (see 5.1) causes it to be abandoned. Completion due to reaching the end of execution, or due to the transfer of control of an exit_statement, return statement, goto_statement, or requeue_statement, or of the selection of a terminate_alternative, is normal completion. Completion is abnormal otherwise [when control is transferred out of a construct due to abort or the raising of an exception].
Discussion: Don't confuse the run-time concept of completion with the compile-time concept of completion defined in 3.11.1.
After execution of a construct or entity is complete, it is left, meaning that execution continues with the next action, as defined for the execution that is taking place. Leaving an execution happens immediately after its completion, except in the case of a master: the execution of a body other than a package_body; the execution of a statement; or the evaluation of an expression, function_call, or range that is not part of an enclosing expression, function_call, range, or simple_statement other than a simple_return_statement. A master is finalized after it is complete, and before it is left.
For the finalization of a master, dependent tasks are first awaited, as explained in 9.3.
Then each object whose accessibility level is the same as that of the
master is finalized if the object was successfully initialized and still
exists. [These actions are performed whether the master is left by reaching
the last statement or via a transfer of control.] When a transfer of
control causes completion of an execution, each included master is finalized
in order, from innermost outward.
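The innermost-outward finalization of nested masters can be made visible with a small sketch. The type Traced below is hypothetical (it plays the role of Some_Controlled_Type used later in this subclause); only Ada.Finalization.Controlled and its Finalize hook are language-defined. Leaving each block statement (a master) finalizes the controlled objects of that master, innermost master first, and within a master in reverse creation order:

```ada
with Ada.Finalization;
with Ada.Text_IO;

procedure Master_Demo is
   --  Hypothetical controlled type used only to make finalization visible.
   type Traced is new Ada.Finalization.Controlled with record
      Name : Character := '?';
   end record;

   overriding procedure Finalize (Obj : in out Traced) is
   begin
      Ada.Text_IO.Put_Line ("Finalize " & Obj.Name);
   end Finalize;
begin
   declare
      A : Traced := (Ada.Finalization.Controlled with Name => 'A');
   begin
      declare
         B : Traced := (Ada.Finalization.Controlled with Name => 'B');
         C : Traced := (Ada.Finalization.Controlled with Name => 'C');
      begin
         null;
      end;  --  inner master left: finalizes C, then B (reverse creation order)
   end;     --  outer master left: finalizes A
end Master_Demo;
```

Because each object is initialized directly by an aggregate (built in place), no anonymous objects are created and only the three Finalize calls shown in the comments occur.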
Ramification: As explained in 3.10.2, the set of objects with the same accessibility level as that of the master includes objects declared immediately within the master, objects declared in nested packages, objects created by allocators (if the ultimate ancestor access type is declared in one of those places), and subcomponents of all of these things. If an object was already finalized by Unchecked_Deallocation, then it is not finalized again when the master is left. Note that any object whose accessibility level is deeper than that of the master would no longer exist; those objects would have been finalized by some inner master. Thus, after leaving a master, the only objects yet to be finalized are those whose accessibility level is less deep than that of the master.
To be honest: Subcomponents of objects
due to be finalized are not finalized by the finalization of the master;
they are finalized by the finalization of the containing object.
Reason: We need to finalize subcomponents
of objects even if the containing object is not going to get finalized
because it was not fully initialized. But if the containing object is
finalized, we don't want to require repeated finalization of the subcomponents,
as might normally be implied by the recursion in finalization of a master
and the recursion in finalization of an object.
To be honest: Formally, completion and
leaving refer to executions of constructs or entities. However, the standard
sometimes (informally) refers to the constructs or entities whose executions
are being completed. Thus, for example, “the subprogram call or
task is complete” really means “the execution of the
subprogram call or task is complete.”
For the finalization of an object:

If the full type of the object is an elementary type, finalization has no effect;

Reason: We say “full type” in this and the following bullets as privacy is ignored for the purpose of determining the finalization actions of an object; that is as expected for Dynamic Semantics rules.
If the full type of the object is a tagged type, and the tag of the object
identifies a controlled type, the Finalize procedure of that controlled
type is called;
If the full type of the object is a protected type, or if the full type of the object is a tagged type and the tag of the object identifies a protected type, the actions defined in 9.4 are performed;
If the full type of the object is a composite type, then after performing the above actions, if any, every component of the object is finalized in an arbitrary order, except as follows: if the object has a component with an access discriminant constrained by a per-object expression, this component is finalized before any components that do not have such discriminants; for an object with several components with such a discriminant, they are finalized in the reverse of the order of their declarations;
Reason: This allows the finalization
of a component with an access discriminant to refer to other components
of the enclosing object prior to their being finalized.
To be honest:
The components discussed here are all of the components that the object
actually has, not just those components that are statically identified
by the type of the object. These can be different if the object has a class-wide type.
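The controlled-type case above relies on the user overriding the hooks declared in Ada.Finalization. A minimal sketch follows; the Guard type and the printed messages are illustrative, while Initialize, Adjust, and Finalize are the language-defined hooks:

```ada
with Ada.Finalization;
with Ada.Text_IO;

procedure Hooks_Demo is
   --  Hypothetical controlled type; only Ada.Finalization is language-defined.
   type Guard is new Ada.Finalization.Controlled with record
      Id : Natural := 0;
   end record;

   overriding procedure Initialize (G : in out Guard) is
   begin
      --  Run after default initialization of a Guard object.
      Ada.Text_IO.Put_Line ("Initialize");
   end Initialize;

   overriding procedure Adjust (G : in out Guard) is
   begin
      --  Run on the target after its value is copied in an assignment.
      Ada.Text_IO.Put_Line ("Adjust");
   end Adjust;

   overriding procedure Finalize (G : in out Guard) is
   begin
      --  Run when the object is finalized: when its master is left,
      --  when its old value is replaced by an assignment, or by an
      --  instance of Unchecked_Deallocation.
      Ada.Text_IO.Put_Line ("Finalize");
   end Finalize;

   X, Y : Guard;  --  default-initialized: Initialize is called for each
begin
   X := Y;        --  typically finalizes X's old value, copies, then adjusts
end Hooks_Demo;   --  leaving the master finalizes Y, then X (reverse creation order)
```

The comment on the assignment is hedged deliberately: as the Bounded Errors discussion later in this subclause notes, an implementation may introduce an anonymous object in some assignments, adding matched Adjust/Finalize pairs.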
If the object has coextensions (see 3.10.2), each coextension is finalized after the object whose access discriminant designates it.

In the case of an aggregate or function call that is used (in its entirety) to directly initialize a part of an object, the coextensions of the result of evaluating the aggregate or function call are transferred to become coextensions of the object being initialized and are not finalized until the object being initialized is ultimately finalized, even if an anonymous object is created as part of the operation.
Immediately before an instance
of Unchecked_Deallocation reclaims the storage of an object, the object
is finalized. [If an instance of Unchecked_Deallocation is never applied
to an object created by an allocator
the object will still exist when the corresponding master completes,
and it will be finalized then.]
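This rule can be sketched as follows. The Resource type is hypothetical; Ada.Finalization and Ada.Unchecked_Deallocation are the language-defined facilities involved:

```ada
with Ada.Finalization;
with Ada.Text_IO;
with Ada.Unchecked_Deallocation;

procedure Dealloc_Demo is
   --  Hypothetical controlled type used to observe finalization.
   type Resource is new Ada.Finalization.Controlled with null record;

   overriding procedure Finalize (R : in out Resource) is
   begin
      Ada.Text_IO.Put_Line ("finalized");
   end Finalize;

   type Resource_Access is access Resource;
   procedure Free is
      new Ada.Unchecked_Deallocation (Resource, Resource_Access);

   P : Resource_Access := new Resource;
   Q : Resource_Access := new Resource;
begin
   Free (P);  --  P.all is finalized immediately before its storage is
              --  reclaimed; P is set to null afterwards.
end Dealloc_Demo;
--  Q.all was never freed, so it still exists when the corresponding
--  master completes, and it is finalized then, as the bracketed rule states.
```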
The finalization of a master performs finalization of objects created by declarations in the master in the reverse order of their creation. After the finalization of a master is complete, the objects finalized as part of its finalization cease to exist, as do any types and subtypes defined and created within the master.
Ramification: Note that a deferred constant
declaration does not create the constant; the full constant declaration
creates it. Therefore, the order of finalization depends on where the
full constant declaration occurs, not the deferred constant declaration.
An imported object is not created by its declaration.
It is neither initialized nor finalized.
Implementation Note: An implementation
has to ensure that the storage for an object is not reclaimed when references
to the object are still possible (unless, of course, the user explicitly
requests reclamation via an instance of Unchecked_Deallocation). This
implies, in general, that objects cannot be deallocated one by one as
they are finalized; a subsequent finalization might reference an object
that has been finalized, and that object had better be in its (well-defined) finalized state.
Each nonderived access type T has an associated collection, which is the set of objects created by allocators of T, or of types derived from T. Unchecked_Deallocation removes an object from its collection. Finalization of a collection consists
of finalization of each object in the collection, in an arbitrary order.
The collection of an access type is an object implicitly declared at
the following place:
The place of the implicit declaration determines when allocated objects
are finalized. For multiple collections declared at the same place, we
do not define the order of their implicit declarations.
Finalization of allocated objects is done according to the (ultimate ancestor) access type, not according to the storage pool in which they are allocated. Pool finalization might reclaim storage (see 13.11, “Storage Management”), but has nothing (directly) to do with finalization of the pool elements.
Note that finalization is done only for objects that still exist; if
an instance of Unchecked_Deallocation has already gotten rid of a given
pool element, that pool element will not be finalized when the master is left.
Note that we talk about the type of the allocator here. There may be access values of a (general) access type pointing at objects created by allocators for some other type; these are not (necessarily) finalized at this point.
For a named access type, the first freezing point (see 13.14) of the type.
The freezing point of the ultimate ancestor access type is chosen because
before that point, pool elements cannot be created, and after that point,
access values designating (parts of) the pool elements can be created.
This is also the point after which the pool object cannot have been declared.
We don't want to finalize the pool elements until after anything finalizing
objects that contain access values designating them. Nor do we want to
finalize pool elements after finalizing the pool object itself.
For the type of an access parameter, the call that
contains the allocator
For the type of an access result, within the master
of the call (see 3.10.2
To be honest:
We mean at a place within the master consistent with the execution of
the call within the master. We don't say that normatively, as it is difficult
to explain that when the master of the call need not be the master that
immediately includes the call (such as when an anonymous result is converted
to a named access type).
For any other anonymous access type, the first
freezing point of the innermost enclosing declaration.
The master of an object is the master enclosing its creation whose accessibility
level (see 3.10.2
) is equal to that of the
object, except in the case of an anonymous object representing the result
of an aggregate
or function call. If such an anonymous object is part of the result of
evaluating the actual parameter expression for an explicitly aliased
parameter of a function call, the master of the object is the innermost
master enclosing the evaluation of the aggregate
or function call, excluding the aggregate
or function call itself. Otherwise, the master of such an anonymous object
is the innermost master enclosing the evaluation of the aggregate
or function call, which may be the aggregate
or function call itself.
This paragraph was deleted.
This effectively imports all of the special rules for the accessibility level of renames, allocators, and so on, and applies them to determine where objects created in them are finalized. For instance, the master of a rename of a subprogram is that of the renamed subprogram.
In 3.10.2 we assign an accessibility level to the result of an aggregate or function call that is used to directly initialize a part of an object based on the object being initialized. This is important to ensure that any access discriminants denote objects that live at least as long as the object being initialized. However, if the result of the aggregate or function call is not built directly in the target object, but instead is built in an anonymous object that is then assigned to the target, the anonymous object needs to be finalized after the assignment rather than persisting until the target object is finalized (but not its coextensions). (Note that an implementation is never required to create such an anonymous object, and in some cases is required to not have such a separate object, but rather to build the result directly in the target.)
The special case for explicitly aliased parameters of functions is needed
for the same reason, as access discriminants of the returned object may
designate one of these parameters. In that case, we want to lengthen
the lifetime of the anonymous objects as long as the possible lifetime
of the result.
We don't do a similar change for other kinds of calls, because the extended
lifetime of the parameters adds no value, but could constitute a storage
leak. For instance, such an anonymous object created by a procedure call
in the elaboration part of a package body would have to live until the
end of the program, even though it could not be used after the procedure
returns (other than via Unchecked_Access).
Note that the lifetime of the master given to anonymous objects in explicitly
aliased parameters of functions is not necessarily as long as the lifetime
of the master of the object being initialized (if the function call is
used to initialize an allocator
for instance). In that case, the accessibility check on explicitly aliased
parameters will necessarily fail if any such anonymous objects exist.
This is necessary to avoid requiring the objects to live as long as the
access type or having the implementation complexity of an implicit coextension.
Bounded (Run-Time) Errors
It is a bounded error for a call on Finalize or Adjust
that occurs as part of object finalization or assignment to propagate
an exception. The possible consequences depend on what action invoked
the Finalize or Adjust operation:
Ramification: It is not a bounded error
for Initialize to propagate an exception. If Initialize propagates an
exception, then no further calls on Initialize are performed, and those
components that have already been initialized (either explicitly or by
default) are finalized in the usual way.
It also is not a bounded error for an explicit call to Finalize or Adjust
to propagate an exception. We do not want implementations to have to
treat explicit calls to these routines specially.
For an Adjust invoked as part of assignment operations other than those invoked as part of an assignment_statement, other adjustments due to be performed might or might not be performed, and then Program_Error is raised. During its propagation, finalization might or might not be applied to objects whose Adjust failed.

For an Adjust invoked as part of an assignment_statement, any other adjustments due to be performed are performed, and then Program_Error is raised.
In the case of assignments that are part of initialization, there is
no need to complete all adjustments if one propagates an exception, as
the object will immediately be finalized. So long as a subcomponent is
not going to be finalized, it need not be adjusted, even if it is initialized
as part of an enclosing composite assignment operation for which some
adjustments are performed. However, there is no harm in an implementation
making additional Adjust calls (as long as any additional components
that are adjusted are also finalized), so we allow the implementation
flexibility here. On the other hand, for an assignment_statement
it is important that all adjustments be performed, even if one fails,
because all controlled subcomponents are going to be finalized. Other
kinds of assignment are more like initialization than assignment_statements, so we include them as well in the permission.
Even if an Adjust invoked as part of the initialization of a controlled
object propagates an exception, objects whose initialization (including
any Adjust or Initialize calls) successfully completed will be finalized.
The permission above only applies to objects whose Adjust failed. Objects
for which Adjust was never even invoked must not be finalized.
For a Finalize invoked as part
of a call on an instance of Unchecked_Deallocation, any other finalizations
due to be performed are performed, and then Program_Error is raised.
Discussion: The standard does not specify if storage is recovered in this case. If storage is not recovered (and the object continues to exist), Finalize may be called on the object again (when the master of the allocator's collection is finalized).
For a Finalize invoked due to reaching the end of
the execution of a master, any other finalizations associated with the
master are performed, and Program_Error is raised immediately after leaving the master.
Discussion: This rule covers both ordinary objects created by a declaration, and anonymous objects created as part of evaluating an expression. All contexts that create objects that need finalization are defined to be masters.
For a Finalize invoked by the transfer of control of an exit_statement, return statement, goto_statement, or requeue_statement, Program_Error is raised no earlier than after the finalization of the
master being finalized when the exception occurred, and no later than
the point where normal execution would have continued. Any other finalizations
due to be performed up to that point are performed before raising Program_Error.
For example, upon leaving a block_statement due to a goto_statement,
the Program_Error would be raised at the point of the target statement
denoted by the label, or else in some more dynamically nested place,
but not so nested as to allow an exception_handler
that has visibility upon the finalized object to handle it. For example,
procedure Main is
begin
    <<The_Label>>
    Outer_Block_Statement : declare
        X : Some_Controlled_Type;
    begin
        Inner_Block_Statement : declare
            Z : Some_Controlled_Type;
            Y : Some_Controlled_Type;  -- created last, so finalized first
        begin
            goto The_Label;
        exception
            when Program_Error => ... -- Handler number 1.
        end Inner_Block_Statement;
    exception
        when Program_Error => ... -- Handler number 2.
    end Outer_Block_Statement;
exception
    when Program_Error => ... -- Handler number 3.
end Main;
will first cause Finalize(Y) to be called. Suppose that Finalize(Y) propagates
an exception. Program_Error will be raised after leaving Inner_Block_Statement,
but before leaving Main. Thus, handler number 1 cannot handle this Program_Error;
it will be handled either by handler number 2 or handler number 3. If
it is handled by handler number 2, then Finalize(Z) will be done before
executing the handler. If it is handled by handler number 3, then Finalize(Z)
and Finalize(X) will both be done before executing the handler.
For a Finalize invoked by a transfer of control
that is due to raising an exception, any other finalizations due to be
performed for the same master are performed; Program_Error is raised
immediately after leaving the master.
If, in the above example,
were replaced by a raise_statement
then the Program_Error would be handled by handler number 2, and Finalize(Z)
would be done before executing the handler.
We considered treating this case
in the same way as the others, but that would render certain exception_handler
useless. For example, suppose the only exception_handler
is one for others
in the main subprogram. If some deeply nested
call raises an exception, causing some Finalize operation to be called,
which then raises an exception, then normal execution “would have
continued” at the beginning of the exception_handler
Raising Program_Error at that point would cause that handler's code to
be skipped. One would need two nested exception_handler
to be sure of catching such cases!
On the other hand, the exception_handler
for a given master should not be allowed to handle exceptions raised
during finalization of that master.
For a Finalize invoked by a transfer of control
due to an abort or selection of a terminate alternative, the exception
is ignored; any other finalizations due to be performed are performed.
Ramification: This case includes an asynchronous
transfer of control.
To be honest:
violates the general principle that it is always possible for a bounded
error to raise Program_Error (see 1.1.5
“Classification of Errors
If the execution of an allocator
propagates an exception, any parts of the allocated object that were
successfully initialized may be finalized as part of the finalization
of the innermost master enclosing the allocator
Reason: This allows deallocating the
memory for the allocated object at the innermost master, preventing a
storage leak. Otherwise, the object would have to stay around until the
finalization of the collection that it belongs to, which could be the
entire life of the program if the associated access type is library level.
The implementation may finalize objects created by allocator
for an access type whose storage pool supports subpools (see 13.11.4
as if the objects were created (in an arbitrary order) at the point where
the storage pool was elaborated instead of at the first freezing point
of the access type.
This allows the finalization
of such objects to occur later than they otherwise would, but still as
part of the finalization of the same master. Accessibility rules in 13.11.4
ensure that it is the same master (usually that of the environment task).
Implementation Note: This permission
is intended to allow the allocated objects to "belong" to the
subpool objects and to allow those objects to be finalized at the time
that the storage pool is finalized (if they are not finalized earlier).
This is expected to ease implementation, as the objects will only need
to belong to the subpool and not also to the collection.
The rules of Clause 10 imply that immediately prior to partition termination,
Finalize operations are applied to library-level controlled objects (including
those created by allocator
of library-level access types, except those already finalized). This
occurs after waiting for library-level tasks to terminate.
Discussion: We considered defining a
pragma that would apply to a controlled type that would suppress Finalize
operations for library-level objects of the type upon partition termination.
This would be useful for types whose finalization actions consist of
simply reclaiming global heap storage, when this is already provided
automatically by the environment upon program termination.
19 A constant is only constant between
its initialization and finalization. Both initialization and finalization
are allowed to change the value of a constant.
20 Abort is deferred during certain operations
related to controlled types, as explained in 9.8
Those rules prevent an abort from causing a controlled object to be left
in an ill-defined state.
21 The Finalize procedure is called upon
finalization of a controlled object, even if Finalize was called earlier,
either explicitly or as part of an assignment; hence, if a controlled
type is visibly controlled (implying that its Finalize primitive is directly
callable), or is nonlimited (implying that assignment is allowed), its
Finalize procedure should be designed to have no ill effect if it is
applied a second time to the same object.
Discussion: Or equivalently, a Finalize
procedure should be “idempotent”; applying it twice to the
same object should be equivalent to applying it once.
A user-written Finalize procedure
should be idempotent since it can be called explicitly by a client (at
least if the type is "visibly" controlled). Also, Finalize
is used implicitly as part of the assignment_statement
if the type is nonlimited, and an abort is permitted to disrupt an assignment_statement
between finalizing the left-hand side and assigning the new value to
it (an abort is not permitted to disrupt an assignment operation between
copying in the new value and adjusting it).
Either Initialize or Adjust, but not both, is applied to (almost) every
controlled object when it is created: Initialize is done when no initial
value is assigned to the object, whereas Adjust is done as part of assigning
the initial value. The one exception is the object initialized by an
(both the anonymous object created for an aggregate, or an object initialized
by an aggregate
that is built-in-place); Initialize is not applied to the aggregate
as a whole, nor is the value of the aggregate
or object adjusted.
of the following use the assignment operation, and thus perform value
explicit initialization of a stand-alone object
) or of a pool element (see 4.8
default initialization of a component of a
stand-alone object or pool element (in this case, the value of each component
is assigned, and therefore adjusted, but the value of the object as a
whole is not adjusted);
function return, when the result is not built-in-place (adjustment of
the result happens before finalization of the function);
predefined operators (although the only one
that matters is concatenation; see 4.5.3
generic formal objects of mode in
); these are defined in terms of constant
), when the result is not built-in-place
(in this case, the value of each component, and the parent part, for
is assigned, and therefore adjusted, but the value of the aggregate
as a whole is not adjusted; neither is Initialize called);
also use the assignment operation, but adjustment never does anything
interesting in these cases:
By-copy parameter passing uses the assignment
operation (see 6.4.1
), but controlled objects
are always passed by reference, so the assignment operation never does
anything interesting in this case. If we were to allow by-copy parameter
passing for controlled objects, we would need to make sure that the actual
is finalized before doing the copy back for [in
The finalization of the parameter itself needs to happen after the copy
back (if any), similar to the finalization of an anonymous function return
object or aggregate
loops use the assignment operation
), but since the type of the loop parameter
is never controlled, nothing interesting happens there, either.
Objects initialized by function results and aggregate
that are built-in-place. In this case, the assignment operation is never
executed, and no adjustment takes place. While built-in-place is always
allowed, it is required for some types — see 7.5
— and that's important since
limited types have no Adjust to call.
Finalization of the parts of a protected object
are not done as protected actions. It is possible (in pathological cases)
to create tasks during finalization that access these parts in parallel
with the finalization itself. This is an erroneous use of shared variables.
Implementation Note: One implementation
technique for finalization is to chain the controlled objects together
on a per-task list. When leaving a master, the list can be walked up
to a marked place. The links needed to implement the list can be declared
(privately) in types Controlled and Limited_Controlled, so they will
be inherited by all controlled types.
Another implementation technique, which we refer
to as the “PC-map” approach essentially implies inserting
exception handlers at various places, and finalizing objects based on
where the exception was raised.
PC-map approach is for the compiler/linker to create a map of code addresses;
when an exception is raised, or abort occurs, the map can be consulted
to see where the task was executing, and what finalization needs to be
performed. This approach was given in the Ada 83 Rationale as a possible
implementation strategy for exception handling — the map is consulted
to determine which exception handler applies.
If the PC-map approach is used, the implementation
must take care in the case of arrays. The generated code will generally
contain a loop to initialize an array. If an exception is raised part
way through the array, the components that have been initialized must
be finalized, and the others must not be finalized.
It is our intention that both of these implementation
methods should be possible.
Wording Changes from Ada 83
Finalization depends on the concepts of completion and leaving, and on
the concept of a master. Therefore, we have moved the definitions of
these concepts here, from where they used to be in Clause 9
These concepts also needed to be generalized somewhat. Task waiting is
closely related to user-defined finalization; the rules here refer to
the task-waiting rules of Clause 9
Inconsistencies With Ada 95
Ada 2012 Correction:
Changed the definition
of the master of an anonymous object used to directly initialize an object,
so it can be finalized immediately rather than having to hang around
as long as the object. In this case, the Ada 2005 definition was inconsistent
with Ada 95, and Ada 2012 changes it back. It is unlikely that many compilers
implemented the rule as written in Amendment 1, so an inconsistency is
unlikely to arise in practice.
Wording Changes from Ada 95
Fixed the wording to say that anonymous objects aren't
finalized until the object can't be used anymore.
Added wording to clarify what happens when Adjust
or Finalize raises an exception; some cases had been omitted.
Revised the definition of master to include expression
in order to cleanly define what happens for tasks and controlled objects
created as part of a subprogram call. Having done that, all of the special
wording to cover those cases is eliminated (at least until the Ada comments
start rolling in).
We define finalization of the collection
here, so as to be able
to conveniently refer to it in other rules (especially in 4.8
Clarified that a coextension is finalized at the same time as the outer
object. (This was intended for Ada 95, but since the concept did not
have a name, it was overlooked.)
Inconsistencies With Ada 2005
Better defined when objects allocated
from anonymous access types are finalized. This could be inconsistent
if objects are finalized in a different order than in an Ada 2005 implementation
and that order caused different program behavior; however programs that
depend on the order of finalization within a single master are already
fragile and hopefully are rare.
Wording Changes from Ada 2005
Removed a redundant rule, which is now covered by
the additional places where masters are defined.
Clarified the finalization rules so that there is
no doubt that privacy is ignored, and to ensure that objects of class-wide classwide
interface types are finalized based on their specific concrete type.
Allowed premature finalization of parts of failed
This could be an inconsistency, but the previous behavior is still allowed
and there is no requirement that implementations take advantage of the
Added a permission to finalize an object allocated from a subpool later
Added text to specially define the master of anonymous objects which
are passed as explicitly aliased parameters (see 6.1
of functions. The model for these parameters is explained in detail in
Ada 2005 and 2012 Editions sponsored in part by Ada-Europe | <urn:uuid:eeef2423-2820-40c4-832d-dfb238c8b697> | CC-MAIN-2017-17 | http://ada-auth.org/standards/aarm12_w_tc1/html/AA-7-6-1.html | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121000.17/warc/CC-MAIN-20170423031201-00543-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.895613 | 6,476 | 2.859375 | 3 |
Spanish Republican Air Force
Initially divided into two branches, Military Aeronautics (Aeronáutica Militar) and Naval Aeronautics (Aeronáutica Naval), the Republican Air Force became the Air Forces of the Spanish Republic, Fuerzas Aéreas de la República Española (FARE), also known as the Arma de Aviación, after it was reorganized following the restructuring of the Republican Armed Forces in September 1936, at the beginning of the Spanish Civil War. This defunct Air Force is largely remembered for the intense action it saw during the Civil War, from July 1936 until its disbandment in 1939.
The Spanish Republican Air Force was popularly known as "La Gloriosa" (The Glorious One). But, according to some historians, the command structure of the Spanish loyalist forces was marred by ineptitude and lack of decision-making throughout the Civil War. Starting from the crucial first weeks of the conflict in July 1936, the rebel side was able to undertake a massive airlift of troops from Spanish Morocco using mostly the slow Ju 52, without any Spanish Republican interference. This was the world's first long-range combat airlift and the military planes on the Spanish Republican side failed to check it.
The Battle of Guadalajara and the defence of the skies over Madrid against Nationalist bombing raids during the capital's long siege would be the only scenarios in which the loyalist air force performed effectively. In other important republican military actions, such as the Segovia Offensive, the Battle of Teruel and the decisive Battle of the Ebro, where the Aviación Nacional relentlessly strafed the loyalist positions with accurate low-level attacks, the republican military airplanes were practically absent from the skies. Moreover, when they did appear and attack, they did so in a disorganized and inadequate manner that mostly failed to achieve positive effects.
Most of the Spanish Republican planes that survived the conflict were repainted with the markings of the Aviación Nacional after the defeat of the Spanish Republic in the Iberian battlefields.
Like all the branches of the Spanish Republican Armed Forces, the Spanish Republican Air Force went through two clear phases during its existence:
- The pre-Civil War phase, before the coup of July 1936 that would fracture the Spanish military institution
- The Civil War phase, when the forces that remained loyal to the established republican government were reorganized as dictated by the pressing needs of the moment.
The first years
At the time of the democratic municipal elections that led to the proclamation of the Spanish Republic, the Spanish Air Force (Aeronáutica Española) comprised two branches: the Aeronáutica Militar, the air arm of the Spanish Republican Army, and the Aeronáutica Naval, the naval aviation of the Spanish Republican Navy. Its fleet consisted mainly of French planes, some of them remnants of the Rif War (1920–1926). Once the Republican Government was established, General Luis Lombarte Serrano replaced the pro-monarchist General Alfredo Kindelán as commander of the air force, but he was quickly succeeded by Commander Ramón Franco, younger brother of the later dictator Francisco Franco and a national hero who had earlier made a trans-Atlantic flight in the Plus Ultra seaplane.
Aviation was developing in those years in Spain; in 1931 Captain Cipriano Rodríguez Díaz and Lieutenant Carlos de Haya González flew non-stop to Equatorial Guinea, then a Spanish colonial outpost.
In 1933, under Captain Warlela, systematic cadastral surveys of Spain were carried out using modern methods of aerial photography. The following year Spanish engineer Juan de la Cierva took off from and landed on the seaplane carrier Dédalo with his autogyro C-30P. In 1934 Commander Eduardo Sáenz de Buruaga became the new commander of the air force, and in the same year a major restructuring of the Spanish military air wing took place.
Following a Government decree dated 2 October 1935, the Dirección General de Aeronáutica was placed under the authority of the War Ministry (Ministerio de la Guerra) instead of under the Prime Minister of Spain, and in 1936 the Air Force's regional units were restructured. Accordingly, the Navy-based Escuadra model was replaced by Región Militar divisions, which are still in use in today's Spanish Air Force.
Five years after the proclamation of the Spanish Republic, a section of the Republican Army in Spanish Morocco rebelled under the orders of General Francisco Franco. The rebellion succeeded only in fracturing Spain, and Franco pressed on with a bloody war of attrition: the Spanish Civil War.
During the Civil War the Air Force of the Spanish republican government would have to fight against the better equipped Aviación Nacional, created by the fraction of the army in revolt and their powerful Italian Fascist and Third Reich supporters.
The Spanish Civil War
After the 18 July 1936 coup d'état, the Republican government lost the military planes stationed at aerodromes under rebel control. The loyalist areas of Spain nevertheless retained a great part of the 60 Breguet XIX, 27 Vickers Vildebeest and 56 Hispano-Nieuport Ni-52 planes that the Spanish Air Force had before the hostilities, since the Republic controlled the majority of the territory. Confronted with a war of attrition, the Spanish Republican government purchased from France in the same month 14 Dewoitine D.371, 10 Dewoitine D.373 and 49 Potez 540, among other military aircraft, for 12 million francs. All these planes were largely obsolete at the time, so that in the first four months after the start of the hostilities, the only aircraft of the Republican government that could be considered modern were three Douglas DC-2s, purchased in March 1935 for LAPE, the Republican airline. These were requisitioned by the Spanish Republican Air Force and used as military transports.
Within a month of the military coup, the help Francisco Franco received from Nazi Germany (the Condor Legion) and Fascist Italy (the Aviazione Legionaria) gave the rebels the upper hand in airpower over Spain. The first German and Italian bombers arrived to reinforce the rebel air force as early as July 1936, and Fiat CR.32 and Heinkel He 51 fighter planes began operating in August. With the support of the Aviazione Legionaria and the Condor Legion, the rebel side gained full control of the air.
In September 1936 the Navy and Air Ministry (Ministerio de Marina y Aire) and the Air Undersecretariat (Subsecretaría del Aire), both part of the National Defence Ministry (Ministerio de la Defensa Nacional), were established under the command of Indalecio Prieto as minister. For identification purposes the Republican tricolor roundel was replaced by red bands, an insignia that had previously been used on Aeronáutica Naval aircraft during the monarchy in the 1920s, before the time of the Republic. In the same month the first serious air combat took place over Madrid, when Italian bombers attacked the city in a massive bombing operation.
The western democracies, such as France, the United Kingdom and the United States, did not help the young Spanish Republic. Afraid of the "Communist threat", Neville Chamberlain and Léon Blum were ready to sacrifice Spain, as they later sacrificed Czechoslovakia, in the belief that Hitler could be appeased. In the void thus created, only the Soviet Union helped the Spanish government effectively. At the end of October, four months after the rebels had been supplied with German and Italian aircraft by Adolf Hitler and Benito Mussolini, the first Tupolev SB bombers, nicknamed "Katiuska", arrived from Russia. One month later the first Soviet fighter planes arrived to alleviate the shortage of operational planes on the loyalist side: the Polikarpov I-15, nicknamed "Chato" (snub-nosed), and the Polikarpov I-16, nicknamed "Mosca" (housefly) by the loyalists and "Rata" (rat) by the rebels. The Polikarpov R-5 and R-Z reconnaissance bombers were known as "Natacha" in the Spanish Republican Air Force.
The Republican air arm was restructured again in May 1937. The new structure comprised two branches, the Arma de Aviación and the Subsecretaría de Aviación, and unified the former Aeronáutica Militar and Aeronáutica Naval. Some sources give this date as the creation of the Spanish Republican Air Force, although it had already been operating as an air force before then. The Republican Air Force kept this structure until its disbandment two years later. Many planes belonging to the fleet of the Spanish Republican airline LAPE (Líneas Aéreas Postales Españolas) were requisitioned by the Spanish Republican Air Force and used as military transports.
Many innovative, and often lethal, aerial bombing techniques were tested by the German expeditionary forces of the Condor Legion against loyalist areas on Spanish soil with the permission of Generalísimo Franco. The pilots of the Spanish Republican Air Force were unable to check these modern-warfare attacks. Their planes were mostly obsolete and often in a bad state of disrepair. The ungainly French Potez 540, a highly vulnerable plane that proved a failure in Spanish skies during the Civil War, was dubbed the 'Flying Coffin' (Spanish: Ataúd Volante) by loyalist pilots. The rebel side, however, claimed that both air forces were almost equal, since the Soviet Union was helping the loyalist air force; but the fact was that:
... on the other side, the fabled military support provided by the Soviet Union was too little and too late – and generally of poor quality. In addition, whilst the Nationalists received vast supplies on credit from the US and Britain, Stalin's assistance came with strings attached.
The Spanish Republican Air Force was unable to counteract the deadly low-level attack and close infantry-support tactics developed by Wolfram von Richthofen during the Civil War. As an air force it became practically ineffective after the Battle of the Ebro in 1938, when the spine of the Spanish Republican Armed Forces was broken. The Spanish Republican Air Force was finally disbanded after the decisive rebel victory of 1 April 1939.
The last Republican military airport in Catalonia was in Vilajuiga, from where on 6 February 1939 Commander Andrés García La Calle led a great part of the planes of the Spanish Republican Air Force to France. The orders had been given in haste by the beleaguered authorities of the doomed Republican Government who wanted to prevent the aircraft from falling into the enemy's hands. The planes landed in Francazal near Toulouse, where the French authorities impounded them, arrested the Spanish Republican pilots and swiftly interned them in concentration camps.
The Escuadrilla España
The Escuadrilla España or Escuadra España (Squadron España; French: Escadrille Espagne), also known as the Escuadrilla Internacional, was a Spanish Republican Air Force unit organized by the French writer André Malraux. Though largely ineffective, the squadron became something of a legend after the writer's claim of having nearly annihilated part of the rebel army in the Battle of the Sierra Guadalupe at Medellín, Extremadura. The Escuadrilla España reached a maximum of 130 members and flew a total of 23 combat missions before it was disbanded in February 1937.
During the 1930s, André Malraux was active in the anti-fascist Popular Front in France. Upon hearing the news of General Franco's rebellion, which marked the beginning of the Spanish Civil War, he put himself at the service of the Spanish Republic. Although French President Albert Lebrun opposed direct assistance to the threatened fellow republic, Léon Blum, then Prime Minister of France, decided to help the Spanish Republicans discreetly, and Malraux helped to organize aid to the Republican air force through his contacts with highly placed figures in the French Air Ministry, such as Jean Moulin, the future French Resistance leader. Thus 20 Potez 540, 5 Bloch 210, 10 Breguet XIX, 17 Dewoitine D.371, 2 Dewoitine D.500/510, 5 Amiot 143, 5 Potez 25 and 6 Loire 46 planes were sent to Spain at the beginning of the conflict. Thirteen more Dewoitine D.371 are mentioned by Jules Moch in his book Rencontres avec Léon Blum, and the Amiot 143 ended up not being delivered, for the aircraft constructor Félix Amiot, who would later become a Nazi collaborator, sympathized with the enemies of Republican Spain in the civil war.
The French planes, however, were no match for the enemy aircraft. The slow Potez 540, some of them badly equipped, rarely survived three months of air missions, reaching only about 80 knots against enemy fighters flying at more than 250 knots. Few of the fighters proved to be airworthy, and they were deliberately delivered without guns or gun-sights. The French Ministry of Defense feared that modern types of planes would easily be captured by the Germans fighting for Franco, and supplying lesser models was a way of maintaining official "neutrality". In the end the French planes were surpassed by the more modern types introduced on both sides in late 1936, and many of them crashed or were shot down. The crash of the Spanish Republican Air Force serial 'Ñ' Potez 540, shot down by rebel planes over the Sierra de Gúdar range of the Sistema Ibérico near Valdelinares, inspired André Malraux to make his film L'espoir.
In order to give the whole operation an official character, the Spanish Republican War Ministry gave André Malraux the rank of lieutenant colonel, even though he was not a pilot and had not even done military service. This title gave Malraux authority as squadron leader of the Escuadrilla España, for he was answerable only to General Ignacio Hidalgo de Cisneros, commander-in-chief of the Spanish Ministerio del Aire. The writer thus helped to hire crews for the planes, mainly volunteers and professional pilots who had served with Aéropostale. After the pilots and the planes arrived in Madrid in August 1936, Malraux himself took charge of the organization of the squadron.
Malraux was given considerable autonomy: in Albacete he recruited his own personnel, who escaped the control of the International Brigades run by the hard-line Stalinist André Marty, who tried to impose discipline. The only thing that held together the writer's motley group of pilots, gunners, mechanics, airfield assistants and guards was their common antifascist resolve.
Malraux had to pay a heavy price for his freedom of action, though. The Escuadrilla España suffered a chronic shortage of spare parts and supplies, and the number of planes in combat condition was steadily reduced by accidents, poor quality and combat losses. André Marty, unhappy with the group's autonomy, plotted to bring the Escuadrilla España under his command. The situation was finally resolved by integrating the squadron into the regular Spanish armed forces. Once the contracts of the professional pilots were terminated, the Escuadrilla España became part of the official Republican Air Force, losing its former status but taking the name Escuadrilla Malraux in honor of its founder. The losses, however, escalated, and after covering the flight from enemy-occupied Málaga, the last two bombers were shot down and the Escuadrilla Malraux was formally dissolved.
Even after France joined the Non-Intervention Committee, Malraux helped the Spanish Republic to acquire military aircraft through third countries. The Spanish Republican government circulated photos of Malraux standing next to some Potez 540 bombers, suggesting that France was on its side at a time when France and the United Kingdom had declared official neutrality. Malraux, however, was not there at the behest of the French Government. Aware of the Republicans' inferior armaments, of which outdated aircraft were just one part of the problem, he toured the United States to raise funds for the Spanish Republican cause. In 1938 he published L'Espoir (Man's Hope), a novel influenced by his Spanish war experiences.
Malraux has often been criticized by opponents for his involvement or motivations in the Spanish Civil War. Comintern sources, for example, described him as an 'adventurer'. The professional pilots of the Escuadrilla España charged exorbitant rates to the Republican Government for their services. Other biographical sources, including fellow combatants, praise Malraux's leadership and sense of camaraderie. At any rate, Malraux's participation in such an historical event as the Spanish Civil War inevitably brought him adversaries, as well as supporters, resulting in a polarization of opinion.
Soviet pilots in Spain
With the Spanish Republic internationally isolated by the Non-Intervention agreements, the Soviet Union assisted the beleaguered Republican government by providing weapons and pilots. Some of the most effective pilots in Spain were young men from the Soviet Union. The Spanish Republican Air Force lacked modern planes and experienced pilots. Unlike most other foreign pilots in the service of the Spanish Republican Air Force, the Russian pilots were technically volunteers: they received no incentives, such as combat bonuses, to supplement their modest wages.
Many Soviet airmen arrived in the fall of 1936, along with the new aircraft that the Spanish Republic had purchased from Russia. After the western democracies refused military assistance to the established Spanish Government in the name of so-called "Non-Intervention", the Soviet Union and Mexico were practically the only nations that helped Republican Spain in its struggle. Much as Hitler did with his rearmament of the Third Reich, Joseph Stalin saw the acquisition of first-hand combat experience in Spain by Soviet pilots and technicians as essential to his plans for the capability and combat readiness of the Soviet Air Forces. Much emphasis was therefore placed on detailed reporting of the results of testing the new Russian military equipment and air-warfare techniques.
The first planes to come to Spain were Tupolev SB bombers; the fighters would arrive later. Their first action was a morale-lifting bombing raid on the Talavera de la Reina military airfield, used by the legionary Nazi and Italian planes that dropped their bombs over Madrid every day. This action made the Russian pilots very popular among the people of Madrid. The Katiuska pilots took advantage, for the time being, of their aircraft's higher speed, but the plane was vulnerable and its fuel tanks easily caught fire when hit. Furthermore, when the Condor Legion introduced the faster Messerschmitt Bf 109 fighters later in the war, the SB squadrons suffered heavy losses.
Anatol Serov, nicknamed "Mateo Rodrigo", established the Escuadrilla de Vuelo Nocturno fighter squadron along with Mikhail Yakushin. This night-flight section would use I-15 Chatos that had modified exhaust pipes, so that the flames in front would not impair the pilot's night vision. M. Yakushin would become the leader of the Night Fighter Squadron that would be quite effective against the Condor Legion Ju 52 night bombing raids.
There were about 300 Russian pilots in or around Madrid by the end of November 1936. The improved defensive capacity of the Spanish Republic boosted the morale of the areas of Spain under loyalist control. The Russian pilots gave their best performance in the Battle of Guadalajara, routing the Italian Aviazione Legionaria and pounding the Fascist militias incessantly from the air.
Following the demands of the Non-Intervention Committee, Soviet pilots were phased out in the fall of 1938 and trained Spanish airmen took their places after having been trained at the flying schools of Albacete, Alicante, Murcia, El Palomar, Alhama, Los Alcázares, Lorca or El Carmolí that had been set up by the Soviet military.
From about 772 Russian airmen that served the Spanish Republican Air Force for over two years, a total of 99 lost their lives. Little gratitude or recognition were shown to the surviving pilots despite their effort and, to compound their sad lot, many would later become victims of the Stalin Purges after their return to the USSR.
The training of pilots, as well as other air force personnel, was trusted to the Instruction Services (Servicios de Instrucción). All the different units of the Instruction Services depended from the Ministerio de Marina y Aire. During the Civil War the instruction bases and centres were scattered throughout the republican zone:
- The High-speed Flying School (Escuela de Vuelo de Alta Velocidad), located at the El Carmolí air base in the Campo de Cartagena.
- The Bomber School (Escuela de bombardeo), located at the Santiago de la Ribera and Los Alcázares air bases.
- The Multiple-engined Aircraft School (Escuela de polimotores), located at Santiago de la Ribera and Los Alcázares as well.
- The Aircraft Mechanics School (Escuela de mecánicos), located at Godella, Valencia Province.
- The Weaponry School (Escuela de Armeros).
Distinguished Air Aces
|Lev L. Shestakov||Russia||4ª Escuadrilla de Moscas||39||His total victory count may be 42|
|Sergei I. Gritsevets||Russia||5ª Escuadrilla de Caza||30||Nicknamed "Sergio"|
|Manuel Zarauza Clavero||Spain||3ª & 4ª Escuadrilla de Caza||23||Reputed to be the most skilled Spanish pilot on the Mosca.
Exiled in the USSR and KIFA over Baku on 12/October/1942
|Leopoldo Morquillas Rubio||Spain||3ª & 2ª Escuadrilla de Caza||21||Also in Escuadrilla Vasca|
|Pavel Rychagov||Russia||1ª Escuadrilla de Chatos||20||Nicknamed "Pablo Palancar".
Arrested and executed in Stalin's 1941 purge
|Anatol Serov||Russia||1ª Escuadrilla de Chatos||16||Nicknamed "Mateo Rodrigo"
He established the Escuadrilla Vuelo Nocturno night-flight squadron
|Vladimir Bobrov||Russia||13||Flew more than 100 combat missions|
|Andrés García La Calle||Spain||1ª Escuadrilla de Chatos||11||Supreme commander of the fighter squadrons of the Spanish Republic in Dec. 1938|
|Manuel Aguirre López||Spain||1ª Escuadrilla, Grupo 21||11||Also in 3ª Escuadrilla de Moscas|
|Abel Guidez||France||Escuadrilla España||10|
|José María Bravo Fernández||Spain||1ª & 3ª Escuadrilla de Caza||10||Became Commander of 3ª Escuadrilla de Caza and Grupo 21.
Exiled in the USSR he took part in World War II as a Soviet pilot.
Some documents ascribe him 23 victories
|Juan Comas Borrás||Spain||3ª Escuadrilla de Caza||10||Also in Escuadrilla Lacalle & Esc. Vasca. KIA on 24/Jan/1939|
|Emilio Ramirez Bravo||Spain||4ª Escuadrilla de Caza||10||Also in Escuadrilla Lacalle|
|Miguel Zambudio Martinez||Spain||Escuadrilla Vasca||10||Also in 3ª Escuadrilla de Caza, 26 Grupo de Caza|
|Antonio Arias Arias||Spain||1ª, 3ª & 4ª Escuadrilla de Caza||9||Exiled in Russia, Arias fought in the Soviet Air Forces during World War II.
He returned to Madrid as an old man in 1990 and retired.
|Vicente Beltrán Rodrigo||Spain||1ª Escuadrilla de Chatos||9||Also in 3ª Escuadrilla, Grupo 21, shot down in Battle of the Ebro.
Exiled in Russia, joined the Soviet Air Forces. Returned to Spain in 1958
|Frank Glasgow Tinker||United States||1ª Escuadrilla de Chatos||8||Part of the Yankee Squadron|
|Sabino Cortizo Bertolo||Spain||5ª & 3ª Escuadrilla de Caza||8||KIA on 21/Jan/1939|
|José Falcón San Martín||Spain||5ª & 3ª Escuadrilla de Caza||8||Also in Escuadrilla Vuelo Nocturno|
|Pavel Agafonov||Spain||Escuadrilla Palancar||8||Nicknamed "Ahmed Amba". Returned to the USSR in April 1937|
|Felipe del Río Crespo||Spain||Escuadrilla Vasca||7||Also in Escuadrilla Norte (1937); KIA on 23/Apr/1937|
|Juan Lario Sanchez||Spain||4ª & 2ª Escuadrilla de Caza||7|
|Jan Ferák||Czechoslovakia||Escuadrilla España||7||Dewoitine D.372 pilot|
|Francisco Meroño Pellicer||Spain||1ª & 6ª Escuadrilla de Caza||7||Also in Escuadrilla Norte (1937)|
|Orrin B. Bell||USA||1ª Escuadrilla de Chatos||7||Shot down 7 He 51 over the Córdoba-Granada front|
|Andrés Fierro Menú||Spain||1ª Escuadrilla de Chatos||7||Had engine trouble during a mission protecting Tupolev SB bombers;
was taken prisoner following emergency landing at Almenar airfield.
After escaping he reached the USSR where he joined the Soviet Air Forces
|Francisco Tarazona Torán||Mexico||1ª & 3ª Escuadrilla de Caza||6||Claims 8 victories in his autobiographical book.|
|José Pascual Santamaria||Spain||1ª Escuadrilla de Caza||6||Also in Escuadrilla Norte (1937)|
|Ivan Trofimovich Yeryomenko||Russia||1ª Escuadrilla de Chatos||6||Nicknamed "Ramón", "Antonio Aragón". or "Alexandrio"
Leader of the 1ª Escuadrilla between May and October 1937.
|Evgeny Nikolayevitch Stepanov||Russia||1ª Escuadrilla de Chatos||6||Knew how to use the "aerial ramming" technique.
Shot down over Ojos Negros and made prisoner on 17/Jan/1938
Returned to Russia and fought in World War II in the Soviet Air Forces
|Božidar "Boško" Petrović||Yugoslavia||2ª Escuadrilla, Grupo 12||5||Nicknamed "Fernandez Garcia"|
|Rafael Magrina Vidal||Spain||2ª Escuadrilla de Caza||5||KIA on 16/Jul/1937|
|Julio Pereiro Peréz||Spain||2ª, 4ª & 5ª Escuadrilla de Caza||5|
|Harold E. Dahl||USA||Escuadrilla Lacalle||5||Nicknamed "Rubio". Also in 1ª Escuadrilla de Caza|
|Sergei Fyodorovich Tarkhov||Russia||1ª Escuadrilla de Caza||5||Nicknamed "Capitán Antonio". Some authors claim Tarkhov flew a Chato.
However, he was most likely a Mosca pilot.
|William Labussière||France||1ª Escuadrilla de Chatos||5||Fought also in World War II|
|James Peck||USA||1ª Escuadrilla de Chatos||5||One of the few African-American pilots in the Spanish Republican Air Force. 4 victories unconfirmed|
|Albert J. Baumler||USA||Escuadrilla Tarkhov||4||Also in Escuadrilla Lacalle and 1ª Escuadrilla de Caza|
|Benjamin Leider||USA||Escuadrilla Lacalle||3||Nicknamed "Ben Landon".
A true volunteer refusing payment for his services to the Spanish Republic
|Jesús García Herguido||Spain||1ª Escuadrilla de Chatos||3||Nicknamed "Dimoni Roig". KIA on 6/Jan/1937|
|Manuel Orozco Rovira||Spain||4ª Escuadrilla de Chatos||3||Became a lieutenant on 22/Feb/1938|
|Josip Križaj||Yugoslavia||Escuadrilla España, 2ª Escuadrilla Lafayette, 1ª Escuadrilla, grupo 71||3||Dewoitine D.371 pilot nicknamed "José Antonio Galiasso"
Victories not confirmed.
|Title||Colonel||Lieutenant Colonel||Commandant||Captain||Lieutenant||Junior Officer|
Aircraft, insignia and historical documents
The 2 first pages of the book Some Still Live by Frank Glasgow Tinker Jr.
Spanish Republican Air Force 2a Escuadrilla, Grupo 24 standard and pilot's summer uniform. La Sénia Museum
- Spanish Air Force
- Spanish Civil War
- LAPE (Líneas Aéreas Postales Españolas), Spanish Republican Airline
- List of aircraft of the Spanish Republican Air Force
- List of Spanish Civil War air aces
- Deutschland incident (1937)
- Aviazione Legionaria
- Condor Legion
- German re-armament
- Some Still Live
- Yankee Squadron
- Timofey Khryukin
- Antonio Arias Arias, Arde el Cielo: Memorias de un Piloto de Caza Participante en la Guerra de España (1936-1939) y en la Gran Guerra Patria de la URSS (1941-1945). Edited by A. Delgado Romero, 1995. Silla, Valencia. (Memoirs of a Spanish Republican Air Force fighter pilot and squadron leader, who later fought for the Soviet Union during WW2).
- Carmen Calvo Jung, Los Últimos Aviadores de la República ISBN 9788497815444
- BLOCH 200/210
- Breguet Br.413
- Br.460 B4
- "Spanish Civil War Aircraft". Retrieved 2012-04-14.
- Some authors favor the name Arma de Aviación, claiming that the term Fuerzas Aéreas de la República Española (FARE) was only used later by some pilots such as Francisco Tarazona in their memories (Francisco Tarazona Torán, Yo fui piloto de caza rojo.)
- Memoria Republicana - SBHAC
- Antony Beevor (2006) . The Battle for Spain. Orion. ISBN 978-0-7538-2165-7.
- Per photograph caption pg.146 and also text pg.201, Air Power, Budiansky, Stephen, Penguin Group, London England 2005
- Chris Goss et al. Luftwaffe Fighter-Bombers Over Britain: The German Air Force's Tip and Run Campaign, 1942-43, Stackpole, ISBN 978-0-8117-0691-9, p. 26
- Aircraft that took part in the Spanish Civil War
- Hispano Suiza E-30
- Memoria republicana — SBHAC. Estructura orgánica de las FARE
- Ejército del Aire - 1936
- Gerald Howson, Arms for Spain: Untold Story of the Spanish Civil War, John Murray Publishers Ltd, 1998, ISBN 978-0-7195-5556-5
- 11-III-1935 Llega a Barajas el primer Douglas DC-2 para las Líneas Aéreas Postales Españolas (LAPE)
- Blackburn T.1/T.2 Swift/Dart with 1927 Aeronáutica Naval markings
- Blackburn T.3 Velos with 1927 Aeronáutica Naval markings
- Ejército del Aire - 1939
- Pierre Renouvin & René Rémond, Léon Blum, chef de gouvernement. 1936-1937, Presses de la Fondation nationale des sciences politiques, coll. 'Références', 1981
- Stalin and the Spanish Civil War - Soviet Hardware Supplied to the Republic
- Unidades de la FARE que actuaron con I-15
- Polikarpov RZ Natacha
- LAPE Poster with Airline Network
- EL Potez 54 en la Guerra Civil Española
- Biplane fighter aces
- Potez 540/542
- Review: Antony Beevor, The Battle for Spain: the Spanish Civil War 1936-1939
- Edward Jablonski, Terror from the Sky: Airwar, Vol. 1, Garden City, NY: Doubleday & Co. 1972, p. 15
- Cate, pp.228-242
- Hugh Thomas, The Spanish Civil War; New revised edition (2011)
- Aircraft that didn't participate in the Spanish Civil war
- Ángel Viñas, La Soledad de la República
- Spanish Potez 540
- Air Aces - Semyon Desnitsky
- Cate, p.235
- John Sturrock (9 August 2001). "The Man from Nowhere". The London Review of Books. 23 (15).
- Beevor, p.140
- Beevor, id.
- Derek Allan, Art and the Human Adventure, André Malraux's Theory of Art (Amsterdam: Rodopi, 2009). pp. 25-27.
- Soviet Pilots in the Spanish Civil War
- Los chatos nocturnos - ADAR
- John O'Connell, The Effectiveness of Airpower in the 20th Century: Part One (1914 - 1939), iUniverse, ISBN 978-0595430826, p. 125
- Soviet Air Force (VVS) Reference List
- I-16 in Spanish Civil War
- Russian War Heroes - Sergei I. Gritsevets
- Spanish Civil War - U.S.S.R. Air Aces
- José María Bravo Fernández, el último gran 'as' de la República. El País
- Lista incompleta de aviones
- Spanish Civil War Air Aces - Spain
- Jan Ferák - ¡No pasaran!
- III.díl. Španělsko. Fašismus 1936 až 1939. Č 9
- Air Aces: Francisco Tarazona Torán
- Francisco Tarazona Torán, Yo fui piloto de caza rojo. Editorial San Martín, Madrid 1974 ISBN 978-84-7140-069-7
- Biplane fighter aces - Božidar Petrović
- ADAR - Sergei Fyodorovich Tarkhov
- Abraham Lincoln Brigade - James Peck
|Wikimedia Commons has media related to Spanish Republican Air Force.|
- Ejército del Aire, how to get to the Museum
- Museo del Aire de Madrid non-official page (Spanish)
- Polikarpovs dans la guerre d'Espagne
- Cuatro Vientos, Madrid - Polikarpov planes in the Museo del Aire
- Fuerzas Aéreas de la República Española (Spanish)
- Asociación de Aviadores de la República (Spanish)
- List of Spanish Republican Air Force pilots (incomplete)
- Enlaces Republicanos (Spanish)
- Axis History - Bibliography
- La ayuda material a la República Española (Spanish)
- The War Between the Wars - Smithsonian
- Aerial Warfare and the Spanish Civil War
- Spanish Republican Air Force emblems
- La Senia Town Hall - Types of aeroplanes which were in the aviation field
- Aviacion en la Guerra Civil Española
- Republican pilots
- Biography of Vicente Monclús Guallar, republican pilot imprisoned in the USSR | <urn:uuid:4d86040f-13c6-42ed-beb9-039e67b0456d> | CC-MAIN-2017-17 | https://en.wikipedia.org/wiki/Republican_Air_Force | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121355.9/warc/CC-MAIN-20170423031201-00013-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.901397 | 8,067 | 3.3125 | 3 |
Myth of the flat Earth
During the early Middle Ages, virtually all scholars maintained the spherical viewpoint first expressed by the Ancient Greeks. From at least the 14th century, belief in a flat Earth among the educated was almost nonexistent, despite fanciful depictions in art, such as the exterior of Hieronymus Bosch's famous triptych The Garden of Earthly Delights, in which a disc-shaped Earth is shown floating inside a transparent sphere.
According to Stephen Jay Gould, "there never was a period of 'flat Earth darkness' among scholars (regardless of how the public at large may have conceptualized our planet both then and now). Greek knowledge of sphericity never faded, and all major medieval scholars accepted the Earth's roundness as an established fact of cosmology." Historians of science David Lindberg and Ronald Numbers point out that "there was scarcely a Christian scholar of the Middle Ages who did not acknowledge [Earth's] sphericity and even know its approximate circumference".
Historian Jeffrey Burton Russell says the flat-Earth error flourished most between 1870 and 1920, and had to do with the ideological setting created by struggles over biological evolution. Russell claims "with extraordinary few exceptions no educated person in the history of Western Civilization from the third century B.C. onward believed that the Earth was flat", and ascribes popularization of the flat-Earth myth to histories by John William Draper, Andrew Dickson White, and Washington Irving.
In Inventing the Flat Earth: Columbus and Modern Historians, Jeffrey Russell describes the Flat Earth theory as a fable used to impugn pre-modern civilization and creationism.
James Hannam wrote:
The myth that people in the Middle Ages thought the Earth is flat appears to date from the 17th century as part of the campaign by Protestants against Catholic teaching. But it gained currency in the 19th century, thanks to inaccurate histories such as John William Draper's History of the Conflict Between Religion and Science (1874) and Andrew Dickson White's A History of the Warfare of Science with Theology in Christendom (1896). Atheists and agnostics championed the conflict thesis for their own purposes, but historical research gradually demonstrated that Draper and White had propagated more fantasy than fact in their efforts to prove that science and religion are locked in eternal conflict.
Early modern period
French dramatist Cyrano de Bergerac, in chapter 5 of his Comical History of the States and Empires of the Moon (published posthumously in 1657, two years after his death), quotes St. Augustine as saying "that in his day and age the Earth was as flat as a stove lid and that it floated on water like half of a sliced orange." Robert Burton, in his The Anatomy of Melancholy, wrote:
Virgil, sometimes bishop of Saltburg (as Aventinus anno 745 relates) by Bonifacius bishop of Mentz was therefore called in question, because he held antipodes (which they made a doubt whether Christ died for) and so by that means took away the seat of hell, or so contracted it, that it could bear no proportion to heaven, and contradicted that opinion of Austin [St. Augustine], Basil, Lactantius that held the Earth round as a trencher (whom Acosta and common experience more largely confute) but not as a ball.
Thus, there is evidence that accusations of flat-Earthism, though somewhat whimsical, were used to discredit opposing authorities several centuries before the 19th. (Burton ends his digression with a legitimate quotation of St. Augustine: "Better doubt of things concealed, than to contend about uncertainties, where Abraham's bosom is, and hell fire.")

Another early mention in literature is Ludvig Holberg's comedy Erasmus Montanus (1723). Erasmus Montanus meets considerable opposition when he claims the Earth is round, since all the peasants hold it to be flat. He is not allowed to marry his fiancée until he cries "The earth is flat as a pancake".

In Thomas Jefferson's Notes on the State of Virginia (1784), framed as answers to a series of questions ("queries"), Jefferson uses the query regarding religion to attack the idea of state-sponsored official religions. In the chapter, Jefferson relates a series of official erroneous beliefs about nature forced upon people by authority. One of these is the episode of Galileo's struggles with authority, which Jefferson erroneously frames in terms of the shape of the globe:
Government is just as infallible too when it fixes systems in physics. Galileo was sent to the inquisition for affirming that the Earth was a sphere: the government had declared it to be as flat as a trencher, and Galileo was obliged to abjure his error. This error however at length prevailed, the Earth became a globe, and Descartes declared it was whirled round its axis by a vortex.
The 19th century was a period in which the perception of an antagonism between religion and science was especially strong. The disputes surrounding the Darwinian revolution contributed to the birth of the conflict thesis, a view of history according to which any interaction between religion and science would almost inevitably lead to open hostility, with religion usually taking the part of the aggressor against new scientific ideas.
Irving's biography of Columbus
In 1828, Washington Irving's highly romanticised biography, A History of the Life and Voyages of Christopher Columbus, was published and mistaken by many for a scholarly work. In Book II, Chapter IV of this biography, Irving gave a largely fictional account of the meetings of a commission established by the Spanish sovereigns to examine Columbus's proposals. One of his more fanciful embellishments was a highly unlikely tale that the more ignorant and bigoted members of the commission had raised scriptural objections to Columbus's assertions that the Earth was spherical.
The issue in the 1490s was not the shape of the Earth, but its size, and the position of the east coast of Asia, as Irving in fact points out. Historical estimates from Ptolemy onwards placed the coast of Asia about 180° east of the Canary Islands. Columbus adopted an earlier (and rejected) distance of 225°, added 28° (based on Marco Polo's travels), and then placed Japan another 30° further east. Starting from Cape St. Vincent in Portugal, Columbus made Eurasia stretch 283° to the east, leaving the Atlantic as only 77° wide. Since he planned to leave from the Canaries (9° further west), his trip to Japan would only have to cover 68° of longitude.
Columbus mistakenly assumed that the mile referred to in the Arabic estimate of 56⅔ miles for the size of a degree was the much shorter Italian mile of 1,480 metres (0.92 mi). His estimate for the size of the degree, and hence for the circumference of the Earth, was therefore about 25% too small. The combined effect of these mistakes was that Columbus estimated the distance to Japan to be only about 5,000 km (or only to the eastern edge of the Caribbean) while the true figure is about 20,000 km. The Spanish scholars may not have known the exact distance to the east coast of Asia, but they believed that it was significantly further than Columbus's projection; and this was the basis of the criticism in Spain and Portugal, whether academic or amongst mariners, of the proposed voyage.
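The compounding of these errors can be checked with a little arithmetic. A minimal sketch follows; the ~28°N sailing latitude and the ~202° true westward span from the Canaries (~18°W) to Japan (~140°E) are assumed round figures for illustration, not values taken from the sources cited above:

```python
import math

# Columbus's degree: 56 2/3 "miles", mistakenly taken to be Italian
# miles of ~1.48 km each (conversion given in the text above).
degree_km_columbus = (56 + 2 / 3) * 1.48        # ~83.9 km per degree
degree_km_actual = 40_075 / 360                 # ~111.3 km per degree

# Circumference: Columbus's figure comes out about 25% too small.
circ_columbus = 360 * degree_km_columbus        # ~30,200 km
error = 1 - circ_columbus / 40_075              # ~0.25

# Longitude to cover, per the figures above: 283 degrees of Eurasia
# leaves a 77-degree-wide Atlantic; starting 9 degrees further west
# at the Canaries leaves 68 degrees to Japan.
assert 283 + 77 == 360 and 77 - 9 == 68

# Distances along a parallel shrink by cos(latitude); ~28 N and the
# ~202-degree true westward span are the assumed figures noted above.
cos_lat = math.cos(math.radians(28))
columbus_distance = 68 * degree_km_columbus * cos_lat    # ~5,000 km
true_distance = 202 * degree_km_actual * cos_lat         # ~19,900 km
```

Running this reproduces the contrast drawn in the paragraph above: roughly 5,000 km by Columbus's reckoning against roughly 20,000 km in fact.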
The disputed point was not the shape of the Earth, nor the idea that going west would eventually lead to Japan and China, but the ability of European ships to sail that far across open seas. The small ships of the day (Columbus's three ships varied between 20.5 and 23.5 m – or 67 to 77 feet – in length and carried about 90 men) simply could not carry enough food and water to reach Japan. The ships barely reached the eastern Caribbean islands. Already the crews were mutinous, not because of some fear of "sailing off the edge", but because they were running out of food and water with no chance of any new supplies within sailing distance. They were on the edge of starvation. What saved Columbus was the unknown existence of the Americas precisely at the point he thought he would reach Japan. His ability to resupply with food and water from the Caribbean islands allowed him to return safely to Europe. Otherwise his crews would have died, and the ships foundered.
Advocates for science
In 1834, a few years after the publication of Irving's book, Jean Antoine Letronne, a French academic of strong antireligious ideas, misrepresented the church fathers and their medieval successors as believing in a flat Earth in his On the Cosmographical Opinions of the Church Fathers. Then in 1837, the English philosopher of science William Whewell, in his History of the Inductive Sciences, identified Lactantius, author of Institutiones Divinae (c. 310), and Cosmas Indicopleustes, author of Christian Topography (c. 548), as evidence of a medieval belief in a flat Earth. Lactantius had been ridiculed much earlier by Copernicus in De revolutionibus of 1543 as someone who "speaks quite childishly about the Earth's shape, when he mocks those who declared that the Earth has the form of a globe".
Other historians quickly followed Whewell, although they could identify few other examples. The American chemist John William Draper wrote a History of the Conflict between Religion and Science (1874), employing the claim that the early Church fathers thought the Earth was flat as evidence of the hostility of the Church to the advancement of science. The story of widespread religious belief in the flat Earth was repeated by Andrew Dickson White in his 1876 The Warfare of Science and elaborated twenty years later in his two-volume History of the Warfare of Science with Theology in Christendom, which exaggerated the number and significance of medieval flat Earthers to support White's model of warfare between dogmatic theology and scientific progress. As Draper and White's metaphor of ongoing warfare between the scientific progress of the Enlightenment and the religious obscurantism of the "Dark Ages" became widely accepted, it spread the idea of medieval belief in the flat Earth.
The widely circulated engraving of a man poking his head through the firmament surrounding the Earth to view the Empyrean, executed in the style of the 16th century, was published in Camille Flammarion's L'Atmosphère: Météorologie Populaire (Paris, 1888, p. 163). The engraving illustrates the statement in the text that a medieval missionary claimed that "he reached the horizon where the Earth and the heavens met". In its original form, the engraving included a decorative border that places it in the 19th century. In later publications, some of which claimed that the engraving dates to the 16th century, the border was removed.
20th century and onward
Since the early 20th century, a number of books and articles have documented the flat-Earth error as one of a number of widespread misconceptions in popular views of the Middle Ages. Both E. M. W. Tillyard's The Elizabethan World Picture and C. S. Lewis's The Discarded Image broadly survey how the universe was viewed in medieval and Renaissance times, and both discuss at length how the educated classes knew the world was round. Lewis draws attention to the fact that in Dante's Divine Comedy, an epic voyage through hell, purgatory, and heaven, the Earth is spherical, with gravity directed towards its center. As the Devil is frozen in a block of ice at the center of the Earth, Dante and Virgil climb down the Devil's torso, but up from the Devil's waist to his feet, since his waist is at the center of the Earth.
Jeffrey Burton Russell rebutted the notion that belief in a flat Earth was ever prevalent, in a monograph and two papers. Louise Bishop states that virtually every thinker and writer of the 1000-year medieval period affirmed the spherical shape of the Earth.
Although the misconception had been frequently refuted in historical scholarship since at least 1920, it persisted in popular culture and in some school textbooks into the 21st century. An American schoolbook by Emma Miller Bolenius, published in 1919, offers this introduction to the suggested reading for Columbus Day (12 October):
When Columbus lived, people thought that the earth was flat. They believed the Atlantic Ocean to be filled with monsters large enough to devour their ships, and with fearful waterfalls over which their frail vessels would plunge to destruction. Columbus had to fight these foolish beliefs in order to get men to sail with him. He felt sure the earth was round.
Previous editions of Thomas Bailey's The American Pageant stated that "The superstitious sailors [of Columbus's crew] ... grew increasingly mutinous ... because they were fearful of sailing over the edge of the world"; however, no such historical account is known.
A 2009 survey of schoolbooks from Austria and Germany showed that the Flat Earth myth became dominant in the second half of the 20th century and persists in most historical textbooks for German and Austrian schools.
As recently as 1983 Daniel Boorstin published a historical survey, The Discoverers, which presented the Flammarion engraving on its cover and proclaimed that "from AD 300 to at least 1300 ... Christian faith and dogma suppressed the useful image of the world that had been so ... scrupulously drawn by ancient geographers." Boorstin dedicated a chapter to the flat earth, in which he portrayed Cosmas Indicopleustes as the founder of Christian geography. The flat earth model has often been incorrectly supposed to be church doctrine by those who wish to portray the Catholic Church as being anti-progress or hostile to scientific inquiry. This narrative has been repeated even in academic circles, such as in April 2016, when Boston College theology professor and ex-priest Thomas Groome erroneously stated that "the Catholic Church never said the earth is round, but just stopped saying it was flat."
The 1937 popular song They All Laughed contains the couplet "They all laughed at Christopher Columbus/When he said the world was round". In the Warner Bros. Merrie Melodies cartoon Hare We Go (1951) Christopher Columbus and Ferdinand the Catholic quarrel about the shape of the Earth; the king states the Earth is flat. In Walt Disney's 1963 animation The Sword in the Stone, wizard Merlin (who has traveled into the future) explains to a young Arthur that "man will discover in centuries to come" that the Earth is round, and rotates.
Historiography of the flat Earth myth
Historical writers have identified a number of historical circumstances that contributed to the origin and widespread acceptance of the flat-Earth myth. American historian Jeffrey Burton Russell traced the nineteenth-century origins of what he called the Flat Error to a group of anticlerical French scholars, particularly to Antoine-Jean Letronne and, indirectly, to his teachers Jean-Baptiste Gail and Edme Mentelle. Mentelle had described the Middle Ages as twelve ignorant centuries of "profound night", a theme exemplified by the flat-Earth myth in Letronne's "On the Cosmographical Opinions of the Church Fathers".
Historian of science Edward Grant saw fertile ground for the development of the flat-Earth myth in a more general assault upon the Middle Ages and upon scholastic thought, which can be traced back to Francesco Petrarch in the fourteenth century. Grant sees "one of the most extreme assaults against the Middle Ages" in Draper's History of the Intellectual Development of Europe, which appeared a decade before Draper presented the flat-Earth myth in his History of the Conflict Between Religion and Science.
Andrew Dickson White's motives were more complex. As the first president of Cornell University, he had advocated that it be established without any religious ties but be "an asylum for science". In addition, he was a strong advocate for Darwinism, saw religious figures as the main opponents of Darwinian evolution, and sought to project that conflict of theology and science back through the entire Christian era. But as some historians have pointed out, the nineteenth-century conflict over Darwinism incorporated disputes over the relative authority of professional scientists and clergy in the fields of science and education. White made this concern manifest in the preface to his History of the Warfare of Science and Theology in Christendom, where he explained the lack of advanced instruction in many American colleges and universities by their "sectarian character".
The flat-Earth myth, like other myths, took on artistic form in the many works of art displaying Columbus defending the sphericity of the Earth before the Council of Salamanca. American artists depicted a forceful Columbus challenging the "prejudices, the mingled ignorance and erudition, and the pedantic bigotry" of the churchmen. Abrams sees this image of a Romantic hero, a practical man of business, and a Yankee go-getter as crafted to appeal to nineteenth-century Americans.
Russell suggests that the flat-earth error was able to take such deep hold on the modern imagination because of prejudice and presentism. He specifically mentions "the Protestant prejudice against the Middle Ages for being Catholic ... the Rationalist prejudice against Judeo-Christianity as a whole", and "the assumption of the superiority of 'our' views to those of older cultures".
See also
- List of common misconceptions
- T and O map
- Mappa mundi
- Armillary sphere
- Pope Sylvester II
- Modern flat Earth societies
- Russell 1991, p. 3.
- Russell 1997.
- Gombrich 1969, pp. 162–170.
- Gould 1997.
- Lindberg & Numbers 1986, pp. 338–354.
- Russell 1991.
- Russell 1993.
- James Hannam. "Science Versus Christianity?".
- The Other World: The Societies and Governments of the Moon, translated by Donald Webb
- Second Partition, Section 2, Member 3 "Air Rectified. With a Digression of the Air" The Anatomy of Melancholy
- Jefferson, Thomas. Notes on the State of Virginia, Query regarding RELIGION. Electronic Text Center, University of Virginia Library.
- David B. Wilson writes about the development of the conflict thesis in "The Historiography of Science and Religion" Wilson 2002.
- Irving 1861.
- Russell 1991, pp. 51–56.
- Irving 1861, p. 90.
- Ptolemy, Geography, book 1:14.
- Morison 1942, p. 65.
- Nunn & Edwards 1924, pp. 27–30.
- Nunn & Edwards 1924, pp. 1–2, 17–18.
- Morison 1942, pp. 209, 211.
- Letronne 1883.
- Gould 1997, p. 42.
- Garwood 2007, pp. 10–11.
- White 1876, pp. 10–22.
- Garwood 2007, pp. 12–13.
- Garwood 2007, pp. 13–14.
- Bishop 2008, p. 99.
- Bolenius 1919 quoted in Garwood 2007.
- Loewen 1996, p. 56.
- Bernhard 2014.
- Boorstin 1983, p. 100.
- Boorstin 1983, pp. 108–109.
- Russell 1993, pp. 344–345.
- Grant 2001, pp. 329–345.
- Grant 2001, p. 335.
- Draper 1874, pp. 63–65, 154–5, 160–161.
- Lindberg & Numbers 1986, pp. 338–352.
- Turner 1978.
- White 1917, p. vii.
- Abrams 1993, p. 89.
- Russell 1993, p. 347.
- Abrams, Ann Uhry (1993), "Visions of Columbus: The 'Discovery' Legend in Antebellum American Paintings and Prints", American Art Journal, 25 (1/2): 74–101, JSTOR 1594601
- Bernhard, Roland (2014). "Kolumbus und die Erdkugel" [Columbus and the Globe]. Damals (in German). Vol. 46 no. 7. pp. 45–46.
- Bishop, Louise M. (2008), "The Myth of the Flat Earth", in Harris, Stephen J.; Grigsby, Bryon Lee, Misconceptions about the Middle Ages, Routledge, ISBN 978-0-415-77053-8
- Bolenius, Emma Miller (1919), The Boys' and Girls' Reader: Fifth Reader, Houghton Mifflin
- Boorstin, Daniel (1983), The Discoverers, New York: Random House Publishing Group, ISBN 978-0-394-40229-1
- Draper, John William (1874), History of the Conflict between Religion and Science, New York: D. Appleton and Company
- Garwood, Christine (2007), Flat Earth: the history of an infamous idea, Macmillan, ISBN 0-312-38208-1
- Gombrich, E. H. (1969), "Bosch's "Garden of Earthly Delights": A progress report", Journal of the Warburg and Courtauld Institutes, 32: 162–170, JSTOR 750611
- Gould, Stephen J. (1997), "The late birth of a flat earth", Dinosaur in a Haystack: Reflections in Natural History (PDF) (1st pbk. ed.), New York: Three Rivers Press, pp. 38–50, ISBN 0-517-88824-6
- Gould, Stephen J. (2011) , "Columbus and the Flat Earth: An Example of the Fallacy of Warfare between Science and Religion", Rocks of Ages: Science and Religion in the Fullness of Life (e-book ed.), New York: Random House LLC, ISBN 978-0-307-80141-8
- Grant, Edward (2001). God and Reason in the Middle Ages. Cambridge University Press. ISBN 978-0-521-00337-7.
- Irving, Washington (1861), The Works of Washington Irving, University of Michigan Library, retrieved 2008-08-19
- Letronne, Antoine-Jean (1883), "Des Opinions cosmographiques des Pères de l'Église", in Fagnan, Edmond, Œuvres choises de A.-J. Letronne, 2, Géographie et Cosmographie (in French), 1, Paris: Ernest Leroux, pp. 382–414
- Lindberg, David C.; Numbers, Ronald L. (1986), "Beyond War and Peace: A Reappraisal of the Encounter between Christianity and Science", Church History, Cambridge University Press, 55 (3): 338–354, doi:10.2307/3166822, JSTOR 3166822
- Loewen, James. W. (1996), Lies My Teacher Told Me: Everything Your American History Textbook Got Wrong, Touchstone Books, ISBN 978-0-684-81886-3
- Members of the Historical Association (1945), Common errors in history, General Series, G.1, London: P.S. King & Staples for the Historical Association
- Morison, Samuel Eliot (1991) , Admiral of the Ocean Sea. A Life of Christopher Columbus, Little, Brown & Co., ISBN 0-316-58478-9
- Nunn, George E.; Edwards, Clinton R. (1992) , The Geographical Conceptions of Columbus, Milwaukee, Wisconsin, U.S.A.: American Geographical Society Golda Meir Library, ISBN 1-879281-06-6
- Russell, Jeffrey Burton (1991), Inventing the Flat Earth: Columbus and modern historians, New York: Praeger, ISBN 0-275-95904-X
- Russell, Jeffrey Burton (1993), "The Flat Error: The Modern Distortion of Medieval Geography", Mediaevalia, 15: 337–353
- Russell, Jeffrey Burton (1997), "The Myth of the Flat Earth", Studies in the History of Science, American Scientific Affiliation, retrieved 2007-07-14
- Turner, Frank M. (September 1978), "The Victorian Conflict between Science and Religion: A Professional Dimension", Isis, 69 (3): 356–376, doi:10.1086/352065, JSTOR 231040
- White, Andrew Dickson (1876), The Warfare of Science, New York: D. Appleton and Company
- White, Andrew Dickson (1917) , A History of the Warfare of Science with Theology in Christendom, New York: D. Appleton and Company
- Wilson, David B. (2002), "The Historiography of Science and Religion", in Ferngren, Gary B., Science and Religion: A Historical Introduction, Johns Hopkins University Press, ISBN 0-8018-7038-0 | <urn:uuid:5bcb7b4d-5153-443c-ac5a-45bb0980ba70> | CC-MAIN-2017-17 | https://en.wikipedia.org/wiki/Myth_of_the_Flat_Earth | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123560.51/warc/CC-MAIN-20170423031203-00134-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.924256 | 5,237 | 3.671875 | 4 |
This essay has been submitted by a student. This is not an example of the work written by our professional essay writers.
In 1991, Wilson and Grim (1991) sought to explain why African Americans have a higher incidence of hypertension. Their hypothesis stated that an event in single genetic selection during slave capture and transport favored slaves with salt sensitive genes because they were able to retain a sufficient hydration level for survival. This "slavery hypertension hypothesis" has been widely reported and accepted in popular media, but it has met with considerable controversy in scientific circles. Most experts have rejected this hypothesis as incompatible with documented historical data as well as with principles of population genetics, and today's view is that it is most likely incorrect. This paper traces debates around Wilson and Grim's (1991) hypothesis, paying particular attention to the implicit ideologies of proponents and opponents in regard to the heightened occurrence of hypertension among African Americans.
Medical science defines hypertension as resting blood pressure above 140 mm Hg during the heart stroke (systolic value) and 90 mm Hg between strokes (asystolic value). Normal blood pressure is 120/80. Doctors distinguish two types of hypertension: "essential" hypertension, of unknown etiology, which accounts for 90-95% of cases. In secondary hypertension, which accounts for about 5-10% of cases, to which the causes are known - for example, cancers of the adrenal glands, or loss of limbs.
Essential hypertension is a multi-factorial disease. Factors correlated with its incidence include lifestyle, diet, exercise, weight, age, race, gender, and general health (e.g., diseases such as diabetes, or metabolic syndrome). There are multiple genes known to be associated with essential hypertension risk, e.g., alleles for G-protein, AGT-235, and ACE I/D (Poston 2001), but the genetic basis of essential hypertension is currently not completely understood.
African Americans have higher rates of hypertension and suffer from more severe complications than non-Hispanic whites (Ferdinand, 2007). In the 2003-2004 National Health and Nutrition Examination Survey (NHANES, 2004, survey) trials, the percentage of participants having high blood pressure (>140/90 mm Hg or taking medication) was found to be 41.4% for African American females vs. 28.0% for non-hispanic white females, and 39.0% for African American males vs. 28.5 for non-hispanic white males. "With an estimated 81 percent of African American women and 78 percent of men age 50 and older having hypertension, the disease constitutes an epidemic" (Schoenberg, 2002, p.458). The differences in morbidity and mortality from hypertension-related diseases between African Americans and the general population are even higher than these relative rates of incidence would suggest. Overall death rates from hypertension for African Americans are 49.9 for men and 40.6 for women, vs. 17.9 national average (per 100,000); African Americans have 1.8 times the rate of stroke, 4.2 times the rate of end stage renal disease, 1.7 times the rate of heart failure, and 1.5 times the rate of coronary disease mortality. Thus, hypertension-related illnesses not only have a higher incidence but also take a greater toll on African Americans than on the Caucasian Americans.
Salt uptake is known to have an effect in essential hypertension; however, the size of the effect varies from individual to individual. The physiological pathway through which salt uptake influences blood pressure is the Renin-Angiotensin-Aldosterone System (RAAS) (Fournie-Zaluski,2004, p.7775). Angiotensin I is a peptide synthesized in the liver, which is converted to Angiotensin II in a process regulated by ACE and renin. The angiotensins determine the amount of water and salt that is retained in the body by regulating kidney activity. An increased level of Angiotensin II leads to increased re-absorption of sodium in the kidney tubules. In order to maintain hypostasis of electrolytes, the body then increases effective circulating volume, which leads to higher blood pressure.
It has been demonstrated that ethnicity is strongly correlated with the effects of dietary sodium reduction on blood pressure (Bray, 2004). In the 2004 'Dietary Approaches to Stop Hypertension' (DASH) sodium trial, patients were placed on a 2100 kcal/mole per day diet using three levels of sodium (150, 100, and 50 mmol per day). For thirty days, patients either consumed a special DASH diet or a typical American diet. At the end of the trial, the observed spread in systolic blood pressure (SBP) between the highest and lowest sodium levels was significantly higher for African Americans than for non-African Americans:
DASH diet (mm Hg)
American diet (mm Hg)
Non-African American Women
Non-African American Men
African American Men
African American Women
Wilson (1986) suggested a hypothesis for the prevalence of hypertension in African Americans, which was expanded and re-published in a joint paper with Grim in 1991 (Wilson, 1991). Their hypothesis, which became known as the Wilson-Grim slavery hypertension hypothesis, stated that hypertension in African Americans is higher than in Africans of comparable genetic origins because the slave trade selected for high salt retention and was a "genetic bottleneck." Death rates among slaves were high during transport, and dehydration was the main cause of death. Wilson and Grim cited death rates during slave shipping from Africa to the Americas (the so-called "Middle Passage") of about 30% (Wilson, 1991, pgs. I-125). Persons with a higher natural ability to retain salt were less sensitive to dehydration and thus more likely to survive the slave transport to the Americas, as well as the continued environment of heavy physical labor under hot conditions in the plantations. Wilson and Grim assumed that salt was scarce in the regions of Africa where the slaves originated, and that thus there were genes for its efficient use already present in the population (Wilson, 1991, pgs. I-123).
As further proof of a difference in gene distributions, Wilson and Grim experimentally determined differences in hypertension rates between African Americans and Africans. They measured blood pressures in a West African village where salt intake was similar to that of African Americans. Hypertension rates in Africans living today in the regions where North American slaves originated are about 2.7 times lower than the rates for African Americans (Fackelmann, 1991).
In a popularization of Wilson and Grim's hypothesis, Diamond (1991) provided a reckoning of the "successive winnowing" of slave numbers along the stations of enslavement; of 100 slaves captured/sold in Africa, ~25 died during forced marches to the coast, ~12 died in camps waiting to be shipped ("barackooning"), ~5 died while ships were going up and down the African coast, filling their holds, ~10 died on the Middle Passage to the Americas, ~10 died during the first three years of plantation life ("seasoning"); sometimes many more - generally, it was cheaper for plantation owners to buy new slaves than to provide better living conditions (Diamond, 1991, p.4). At most, 30% of the original number of captured slaves survived to pass on their genes.
As main causes of death Diamond (1991) cites dehydration from sweating and lack of water, diarrhea ("fluxes") of all etiologies, seasickness/vomiting, other diseases, punishment/execution, and suicide/loss of will to live. Diamond's quoted death rates for the Middle passage are much lower than Wilson and Grim's. However, in an embellishment on the original hypothesis, he assumed a high death rate for the initial "forced marches" to the coast (Diamond, 1991, p.4).
Supporters of the Slavery Hypertension Hypothesis have pointed to anecdotal supporting evidence. Dimsdale (2000) enlists a passage from 'Moby Dick' describing sharks following slave ships to feed on the bodies thrown overboard to demonstrate the high death rates in the Middle Passage. A 1794 illustration of a slave trader licking the face of a slave in Africa was interpreted as checking whether the slave's skin was salty (Dimsdale, 2000). More salt on the slave's skin would mean a higher chance of death during transport, making the slave less valuable. While Dimsdale (2000) acknowledges that the Slavery Hypertension Hypothesis is "sheer speculation" because of lack of DNA evidence for a genetic basis, he concludes that "nevertheless, the hypothesis remains an intriguing one" (Dimsdale, 2000).
The Slavery Hypertension hypothesis was widely popularized by Diamond and others, and it found its way not only into the newspapers but also into textbooks, medical journals, and review articles (Armelagos 2005:120; Kaufman 2001). An example how widely the hypothesis has been accepted in the general population is given by a 2007 "Ask Dr. Oz" segment on "Oprah". When an audience member asked: "Why do I sweat so much?" Dr. Mehmet Oz (wearing scrubs) explained that excessive sweating can result, among other reasons, from hypertension. He then turned to Oprah to ask: "Do you know why African Americans have high blood pressure?" Oprah answered: "African Americans who survived (the slave transport) were those who could hold more salt in their body." Dr. Oz rejoiced: "That's perfect!"
Unfortunately, from a scientific point of view, the Slavery Hypertension hypothesis is far from perfect. Almost every single one of Wilson and Grim's (1991) assumptions and conclusions almost immediately drew withering criticism from historians and geneticists.
Curtin (1992), a historian of the slave trade on whose work Wilson and Grim drew extensively, methodically disassembled almost every number that they assumed. His criticism concluded that their hypothesis lacks supporting evidence and "runs counter to what evidence we do have" (Curtin, 1992, p.1686). In particular, Curtin noted that:
The regions from which American slaves came were not salt-scarce.
Slaves for the Americas were not marched for months, they came from near the coast.
Wilson&Grim's death rate numbers for the Middle Passage were too high.
Diarrhea/water loss were not key death factors for slaves.
Curtin (1992) seems to have been particularly incensed by Jared Diamond's popularization of Wilson and Grim's hypothesis (Diamond, 1991), which he perceived to present the hypothesis as proven and to "selectively mis-represent the evidence" (Armelagos, 2005).
Geneticists have argued that population dynamics do not allow the Wilson/Grim hypothesis: even the death rates during slave transport cannot influence gene distributions to that extent. However, this view is not uncontested: Fatimah Jackson supported the concept of the genetic bottleneck in a 1991 paper, and further postulated that stress experienced by slave populations lead to increased genetic variability (Jackson, 1991). Others have argued that African Americans have ~15-20% admixture of Caucasian genes, so any genetic effects should be diluted. A 2001 study comparing African Americans and African-born immigrants examined the known alleles associated with increased hypertension risk (G-protein, AGT-235, and ACE I/D) found that the AGT-235 homozygous T genotype was more prevalent among African-born immigrants, the opposite of what would be expected from the Slavery Hypertension Hypothesis (Poston, 2001). However, it is clear that our understanding of the genetic basis for hypertension is at best incomplete. Luft (2001) lists seven genes known to be associated with hypertension but concludes that "In terms of genetically explaining blood pressure variance for specific genes, we have a long way to go" (Luft, 2001, p. 503).
Kaufman was particularly outspoken in criticizing Wilson and Grim's hypothesis in a series of articles and letters. He initially attacked Dimsdale's (2000) summary of the hypothesis as a "careless repetition of the old 'Slavery Hypothesis' yarn," calling it a "medical myth" and "pseudoscientific canard" that "unwittingly plays into the hands of racial essentialists and biological determinists," and relegating it to "fantasy," not "sensible and respectable science" (Kaufman, 2001). Dimsdale (2001) responded to this by noting that "Race and ethnicity are too important to be ignored or politicized" (Dimsdale, 2001). In subsequent publications (Kaufman & Hall, 2003), Kaufman (2001) was particularly concerned with the ideology that he perceived to underlie the Slavery Hypertension Hypothesis, and accused its proponents to foster the notion that "Blacks" are "inherently different by harboring genetic defects or physiological abnormalities" (Armelagos, 2005, p.121). Certainly Wilson and Grim's initial language referring to "defective kidneys" and "renal defects" (Wilson 1991: I-123) was ill chosen in that regard, although they note in the same article that "it would be more accurate to state that American blacks simply respond differently, sometimes better and sometimes worse (depending on the circumstances), to sodium than do whites" (Wilson, 1991, pgs.I-126). Kaufman (2001) also questioned the use of race (as defined by skin color) or ethnicity as physiologically useful criteria, despite the epidemiological studies quoted above that have shown measurable differences, but it is not clear what he would suggest to use in its place.
It could be said that Kaufman's (2001) attacks added a political and ideological dimension to the Slavery Hypertension debate, but is more accurate to say that they only brought a previously hidden dimension into full view. In their 2003 paper, Kaufman and Hall attacked the concept of "genetic determinism" and "essential black abnormality" they saw as underlying Wilson and Grim's work (Kaufman & Hall, 2003). They saw it as an example of a worldview that "blames the victim and displaces economic or cultural factors from our understanding of the underlying etiology of the disease" (Armelagos, 2005, p.121). While Grim and Robinson (2003) interpreted this criticism as being called racist (Grim & Robinson 2003), racism was not really implied in it. Instead it is a difference in worldview between the reductionism of the exact sciences and the approach of the social sciences that attempts to synthesize scientific evidence and social context into a broader picture.
Both approaches have their inherent fallacies: the 'pure science' approach may focus on pieces of the puzzle, maintaining that no area of inquiry is off limits to study by unbiased and objective scientists. Never mind that there is no such thing: scientists cannot escape the mental context and preconceived notions of the societies the live in, and objectivity is a more of a lofty goal than reality. Too often in the history of science, prejudice has masqueraded as scientific conclusion. One example that comes to mind is the notion of female intellectual inferiority widely accepted in the 19th century which was based on no more evidence that lower average brain weights of women, yet was widely accepted because all scientists were male and everybody (male) already 'knew it was true.' On the other hand, preoccupation with the social consequences of a result can taint scientific inquiry and prevent a clear view of what is true or not. Kaufman (2001) disavows "genetic determinism" without disproving it based on what would be considered 'hard science.' In the final analysis, the difference in worldviews comes down to whether there is such a thing as an objective truth, dissociated from social context, or not.
Singer (1996) addresses these and other issues. He specifically refers to "Cartesianism, that Western tendency to see independent parts as composing and determining a summary whole" (Singer, 1996, p.499), and argues that the a simple one-way Darwinian adaptive response of organisms to the environment (a view that he calls "adaptionism") needs to be replaced with a dialectical view in which organisms shape the environment even as they are adapting to it. Since the human environment is significantly determined by societal and economic forces, he argues that humans undergo "unnatural selection" (Singer, 1996, p.506). He then goes on to re-capitulate Wilson and Grim's (1991) hypothesis as an example of the genetic component of such a process, and further embellishes their emphasis on the effect of gastrointestinal disorders on slave mortality with a reference to cholera. However, cholera was "completely absent from the Atlantic basin during the period of the slave trade to North America" (Curtin, 1992, p.1684).
Setting aside this small factual inaccuracy, Singer's (1996) argument that environment and organisms co-evolve is undoubtedly correct, although it is not clear exactly who would argue otherwise. Natural history is full of examples where organisms have fundamentally changed the environment and then were in turn changed by it: one example that comes to mind is the evolution of early oxygen generating organisms that fundamentally changed the earth's atmosphere and enabled animal life, but then could not compete with new life forms and became extinct. Singer's (1996) argument has more merit with respect to modern humans and the economical and societal aspects of their evolution, and it is indeed true that the natural sciences may overlook societal contributions in their effort to establish simple cause and effect relationships. However, it is not clear whether a truly dialectical perspective is needed or whether the recognition that humans have increasingly shaped their own environment and that today their natural environment is largely man-made would suffice. In any case, it would appear that the slave trade is a poor example against the adaptionist view, since slaves were essentially powerless to change their environment, so that any adaptations caused by slavery could only be one-way.
Beyond such general philosophical considerations, the Slavery Hypertension hypothesis undoubtedly has a lot of issues as a 'hard science' hypothesis. In order to analyze Wilson and Grim's (1991) hypothesis more closely, it is necessary to examine the history and economics of the slave trade. The slave trade from Africa began around 1517 and ended in 1888, when Brazil outlawed slavery (Diamond, 1991, p.4). The slave trade to the United States effectively ended in 1807. Originally in Portuguese hands, the slave trade became part of the "triangular trade", in which ships carried cotton and other plantation crops from the Americas to Europe, manufactured goods from Europe to Africa, and slaves from Africa to the Americas in the infamous "Middle Passage" (Boddy-Evans). Overall, about ten to twelve million slaves were brought to the Americas from Africa.
Slaves originated mostly from the West coast of Africa: Senegambia, Upper Guinea, the Windward and Gold Coasts, the Bights of Benin and Biafra, and West Central Africa. A smaller region of slave origins is found in South East Africa (Boddy-Evans, 2010). In the time of interest for slave transport to North America, slaves were captured or bought mostly in regions no more than 100 miles from the coast. Diamond's (1991) death rate of 25% during "forced marches" to the coast seems to be excessive for this distance.
Conditions during slave transport were appalling but death rates were lower than the 30% quoted by Wilson and Grim (1991). The duration of ships' voyages for the Middle Passage decreased from about three to six months at the beginning of the slave trade to six to eight weeks towards its end. The best existing estimates give death rates of about 24% around 1680, close to Wilson and Grim's number, but dropping to less than 6% around 1790 (Curtin, 1992, p.1684). For all slaves transported to North America, the number-averaged death rate is estimated to be about 10 to 12%, close to Diamond's number but much lower than the one assumed by Wilson and Grim.
As an aside, death rates for the ships' crews were not very different from those of slaves. This seems to indicate that slave ship captains were not completely indifferent to slave mortality figures but followed their economic interest in providing a level of care reasonably possible under the technical and logistical constraints of their time.
Wilson and Grim's (1991) argument that African populations with similar salt intake have lower incidence of hypertension can also not be considered conclusive since it does not take other differences between the populations into account, in particular prevalence of obesity, diabetes and metabolic syndrome between the populations, differences in diets and lifestyles, and the impact of psychological factors (low socioeconomic status, effects of discrimination).
Dressler (1993) looked at health effects in the African American community. He developed the idea of status incongruence, which here means having a more expensive lifestyle than people think one is entitled to. African Americans with darker skin color and higher lifestyle have about three times higher hypertension rates than those with lower skin color and lifestyle. However, any social effects of skin color are convoluted with possible genetic contributions because lighter skin color could mean more Caucasian genes. However, the hypertension rates for African Americans with higher education (>12 years of school) are about a factor of two higher than for those with lower education (£ 12 years) and there it is almost impossible to imagine that slave transport has any impact on whether a person goes to school longer or not. Dressler s (YR) study indicates that other factors can have very large influences on hypertension, as large as the observed difference between African Americans and Caucasians.
Armelagos (2005) gave an extensive and somewhat unbiased historical review of both sides of the "Slavery Hypertension story". Armelagos (2005) came down squarely in the camp of the hypothesis' opponents, stating that "there is no indication of a genetic bottleneck or evidence of 'racial' differences that are genetically determined" and that "It is time to discard the Slavery Hypertension Hypothesis and begin to examine the issue from a biological and social perspective that reflects a more realistic approach to the disparities that exist in e prevalence of hypertension" (Armelagos, 2005, p. 119). His conclusion appears to be shared by most experts today, and it is certainly true that the factual basis of the hypothesis is disputed or weak. However, Armelagos (2005) accepts the arguments of the hypothesis' opponents, and nagging questions remain.
Curtin's criticism was particularly damaging in view of the fact that Wilson and Grim drew so heavily on his work. While most of Curtin's arguments are valid, some stretch the evidence:
His view that diarrhea was not a factor is based on plantation and British army records for African conscripts, which show pulmonary diseases and fevers as main killers. However, these death rates are much lower (<5%), and thus not representative of the situation of slaves during transport. It is also disputed by other historians who state that "gastrointestinal disturbances were responsible for the greatest annual death rates in slaves during Middle Passage transit" (Dimsdale 2001:235, quoting Steckel and Jensen 1986).
He argues that vomiting due to seasickness is a 48 hour long phenomenon that cannot have a genetic impact generations later. However, it can if it leads to a permanent effect (death from dehydration).
His argument that the regions from which slaves originated were no salt scarce is limited to recent history and does not rule out a genetic contribution left over from pre-historic times.
Armelagos (2005) also does not address Dimsdale's analysis (Dimsdale 2001:235) of Eltis et al.'s (1999) work on mortality rates during the Middle Passage (Eltis 1999) that Kaufman (2001) drew on when questioning Wilson and Grim's numbers. The Eltis database contains information on only 5,130 of their 27,233 voyage data set, and their death rates are imputed. While Dimsdale's (2001) argument that this means that "the death rates in the Middle Passage are uncertain to everyone" is somewhat defensive and self-serving, it contains a kernel of truth.
Lastly, the fact remains that hypertension in African Americans is real, but it seems that it cannot completely be explained by other known factors even after adjusting for health-related behaviors. This may reflect our ignorance, but it leaves a loophole for the genetic contribution postulated by the Slavery Hypertension hypothesis,
Kaufman notes that "the seductive nature of Dr. Grim's fairytale" is in itself an interesting sociological phenomenon" (Kaufman 2001). The Wilson and Grim slave transport/hypertension hypothesis is seductive because it provides a simple, mono-causal explanation for a complex phenomenon. It also fits in well with the concept of 'Darwinian medicine' that has been successful in explaining the prevalence of sickle cell disease as a protective factor against malaria, or of Tay-Sachs disease against tuberculosis. As in these cases, adaptation to a specific environment carries a cost, and it becomes maladaptive under altered circumstances. The difference to the Slavery Hypertension hypothesis is that the molecular genetics base for these diseases is comparatively simple and well understood. The genetic basis for salt sensitivity is not well understood, and it can be assumed to be far more complex.
The widespread and ready acceptance of the Slavery Hypertension Hypothesis in the African American community is on the face of it a puzzling phenomenon when viewed in the context of Kaufman's assertions that it perpetuates myths of Black inferiority. However, it fits in with the "cult of victimology" (a term coined by John McWhorter) (McWhorter) practiced by some in the African American community in which all ills befalling them are rooted in slavery or discrimination, and it exonerates African Americans from any behavioral or diet-based contributions to their hypertension disease load.
Today, the Slavery Hypertension Hypothesis is widely seen as disproved. However, a final answer will have to await a complete understanding of the genetic basis for essential hypertension, and a comparison of the disease markers in the African American and original African populations. If such data were to show that there is indeed a genetic difference in genes regulating salt metabolism between African and African American populations, the Slavery Hypertension Hypothesis would have to be resurrected. Until these data are available, it must be considered unproven and at variance with much of the historical record. Dare we say it should at least be taken with a large grain of salt? | <urn:uuid:c4e0f5c7-5f22-4c37-b217-fc80c8297d27> | CC-MAIN-2017-17 | https://www.ukessays.com/essays/history/hypertension-in-african-americans-and-the-middle-passage-history-essay.php | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118831.16/warc/CC-MAIN-20170423031158-00365-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.957253 | 5,516 | 3.15625 | 3 |
Presentation on theme: "California Common Core Training Version 1.25" (presentation transcript)
Slide 1: California Common Core Training, Version 1.25
CHILD MALTREATMENT IDENTIFICATION, PART II: Sexual Abuse and Exploitation
Slide 2: Learning Objectives/Overview
- Present the historical background, legal definitions, and dynamics of child sexual abuse;
- Discuss characteristics of the perpetrator, victim, and non-offending caretaker;
- Identify physical, behavioral, and emotional indicators of child sexual abuse;
- Examine the dynamics involved in sexual abuse and sexual exploitation;
- Practice identifying child sexual abuse when allegations occur.
Slide 3: History: Current U.S. "Discovery" Cycle
- Some publications for specific disciplines began to appear in the early 1980s.
- "Stranger danger" was believed to be the most common form of child sexual molestation.
- During this same time period, the McMartin/Manhattan Beach, Country Walk, and Jordan, MN multiple-victim cases involving preschoolers were publicized.
4History: System Responses Mandatory ReportingSpecialized Investigative Units FormedGovernment Sponsored Trainings DevelopedJoint Investigations Between CPS & LE BegunIncreased Criminalization of IncestChild Advocacy Center Concept BeginsState and Federal Laws Enhanced or DevelopedResearch on Child Sexual Abuse BeginsChild Interviewing Protocols Developed
Slide 5: And Now For You…
Possible personal difficulties in working cases with sexual aspects:
- Emotional reactions are expected and normal
- Matters dealing with sexualized behaviors are very personal and value-laden
- Sexual abuse victimization history
- Parenthood
- Personal feelings concerning sexuality, sexually motivated behaviors, and children and sexuality

Slide 6: Questions For You…
- Where and under what conditions were you taught, or did you learn, about sex, sexuality, and what constitutes appropriate or inappropriate sexual behavior?
- Who informed you?
- What were your emotions, and what caused them?
- Girls were told by? Boys were told by?
- Cultural differences?

Slide 7: Question for you…
What are some ways that your own views of sexuality may impact your handling of situations involving the sexual abuse of children?

Slide 8: Exercise: Body Part Identification
All terms or phrases are considered:
- Clinical/"proper"
- Slang/euphemisms
- Cultural
Write terms on post-its and place them on the appropriate body part. The team with the most post-its up wins.
Slide 9: General Definition Components
- Sexual contact that is accomplished by threats or threat of force, regardless of the ages of participants
- All sexual contact between an adult and a child, regardless of whether there is deception or the child understands the sexual nature of the activity
- Sexual contact between a teenager and a younger child can also be abusive if there is a significant disparity in age, development, or size, rendering the child victim incapable of giving informed consent. (Ryan, 1991)
See California Penal Code (a, b, c).

Slide 10: Continuum of Behaviors
- Non-contact sexual acts such as exposure, voyeurism, showing or producing pornography, or masturbation or other sexual acts in front of the child
- Touching of the sexual or erogenous zones, or touching designed for the sexual gratification of the perpetrator or for the furtherance of sexual activity
- Penetration of vagina, anus, or mouth

Slide 11: Legal Definitions for Sexual Assault and Sexual Exploitation
- California Welfare & Institutions Code (WIC) Section 300(d)
- California Penal Code (PC) Sections

Slide 12: Informed Consent
The dimensions of informed consent:
1. Know what is being requested
2. Have a thorough understanding of the consequences of the behavior
3. Have an equal power base in the relationship
4. Be able to say no without repercussions
Abel, G. G., Becker, J. V., & Cunningham-Rathner, J. (1984). Complications, consent, and cognitions in sex between children and adults. International Journal of Law and Psychiatry, 7.
Slide 13: Prevalence
- 9.7% of maltreatment reports involve sexual abuse (2004)
- American females sexually abused or exploited in some manner before 18: 1 in 3 to 4
- American males sexually abused or exploited in some manner before 18: 1 in 7 to 10 (underreporting a major issue)
- 90-95% of sexual abuse is perpetrated by someone the child knows.
*Child abuse reporting systems and clinical programs tend to over-represent intrafamilial cases. Based on general population surveys, abuse by parent figures constitutes between 6 and 16% of all cases, and abuse by any relative comprises more than 1/3 of cases.

Slide 14: Prevalence
The challenge of the numbers: all are estimates and have limitations.
- Different studies use different definitions.
- Child abuse reporting and clinical programs tend to over-represent intrafamilial cases.
- Cases reported by official agencies meet a particular standard; many cases never get reported, so these data sources underestimate the number of victims.
- Numbers are reported for different time periods.

Slide 15: Key Questions: Child Welfare
- Does the allegation involve intra-familial abuse?
- Is the child safe?
- Did abuse occur, per WIC Section 300?* (Also refer to Trainee Content re: When Consensual Sexual Intercourse is Deemed Child Abuse in California)
- Is the caregiver able/willing to protect the child?
- Is there a viable safety plan to allow the child to stay in the home?

Slide 16: Key Questions: Law Enforcement
- Is the child safe?
- Did a crime occur, per the Penal Code?
- Is the alleged perpetrator safe in the community?
Slide 17: Sexual Behaviors
- Research has demonstrated a consistent relationship between sexual abuse and sexual behaviors in pre-adolescent children.
- HOWEVER, a broad range of sexual behaviors has been observed in children who do not have a history of sexual abuse.
- It is important to be aware of what is normal sexual development, including related behaviors, interactions, and feelings for the growing child.

Slide 18: Exercise: Sexual Behavior Cards
What do you think is:
- NATURAL/HEALTHY,
- PROBLEMATIC/OF CONCERN, or
- ABUSIVE/SEEK PROFESSIONAL HELP?

Slide 19: When do sexual behaviors need to be addressed?
- Is the behavior putting the child at risk for physical harm, disease, or exploitation?
- Is the behavior interfering with the child's development, learning, or social or family relationships?
- Is the behavior violating a rule?
- Is the behavior causing the child to feel confused, embarrassed, or bad?
- Is the behavior causing others to feel uncomfortable?
- Is the behavior abusive because it involves lack of informed consent, some type of coercion, or lack of equality?

Slide 20: Importance of Context
Observers of children's normal sexual behaviors note:
- It is curious in nature.
- Children involved in normal sex play are generally of similar age, size, and developmental status.
- Children participate on a voluntary basis.
- It is balanced by curiosity about other aspects of their lives.
- It does not usually leave children with deep feelings of anger, shame, fear, or anxiety.
- The affect of children regarding their sexual behavior is generally light-hearted and spontaneous.

Slide 21: Adolescent Sexual Experience Quiz
What do you know? Are these statements TRUE or FALSE?
Slide 22: Child Sexual Abuse in a Cultural Context
- Acceptance and manifestation of sex and sexuality within cultures
- Appropriate and inappropriate sexual behaviors and participants
- Sanctions
- Sexual orientation, gender identification
- Assignment of responsibility and/or "blame"

Slide 23: Cultural Aspects of Shame in Child Sexual Abuse
- Responsibility for the abuse
- Failure to protect
- Fate
- Damaged goods
- Virginity
- Predictions of a shameful future
- Promiscuity, homosexuality, sexual offending
- Re-victimization
- Layers of shame

Slide 24: Gender & Sexual Orientation Issues
- Double standard for males and females
- Sexual orientation

Slide 25: Elements to Consider in Identifying Child Sexual Abuse
Commonly referred to as "indicators." Four broad areas:
- Reporting (including aspects of the allegation and disclosure)
- Physical (including medical indicators)
- Behavioral (including emotional indicators for the victim)
- Familial (including family and caregiver dynamics)

Slide 26: Remember: Presence of Indicators ≠ Abuse
Slide 27: Reporting Elements
- Credibility of the report (and the reporter)
- Type and credibility of the child's disclosure
- Corroboration of disclosure/report
- Statements about prior unreported sexual abuse
- History of CWS involvement

Slide 28: Physical Elements
- Presence of illness or injury(ies)
- Report of past illness or injury(ies)
- Explanation of illness or injury(ies)
- Developmental abilities of alleged victim
- Developmental abilities of alleged perpetrator
- Medical assessment findings

Slide 29: Physical Elements: Medical Assessments
When?
- In all cases in which the most recent episode of abuse/assault occurred within the last 72 hours
- When penetration is disclosed, regardless of time
- To assess any injury, pain, or physical complaints of the child
- When the child would benefit from a medical opinion
Know your county's protocols!

Slide 30: Behavioral Elements
- History of sexually abusive behavior by someone in the home or with access to the child
- Developmentally or socially inappropriate sexual knowledge and/or sexual behavior by alleged victim
- Self-protective behavior by alleged victim
- Indicators of emotional distress by alleged victim
- Coaching or grooming behaviors
Slide 31: Behavioral Elements: Emotional Distress
Trauma-related indicators:
- Physiological reactivity/hyperarousal (hypervigilance, panic and startle responses, etc.)
- Retelling and replaying of trauma, and post-traumatic play
- Intrusive, unwanted images and thoughts, and activities intended to reduce or dispel them
- Sleeping disorders with fear of the dark and nightmares
- Dissociative behaviors (forgetting the abuse, placing self in dangerous situations related to the abuse, inability to concentrate, etc.)

Slide 32: Behavioral Elements: Emotional Distress
Anxiety-related indicators:
- Obsessive cleanliness
- Self-mutilating or self-stimulating behaviors
- Changed eating habits (anorexia, overeating, avoiding certain foods)

Slide 33: Behavioral Elements: Emotional Distress
Depression-related indicators:
- Lack of interest in participating in normal physical activities; loss of pleasure in enjoyable activities
- Social withdrawal and the inability to form or maintain meaningful peer relations
- Profound grief in response to losses of innocence, childhood, and trust in oneself and in adults
- Suicide attempts
- Low self-esteem, poor body image, negative self-perception, distorted sense of one's own body

Slide 34: Behavioral Elements: Emotional Distress
Other indicators:
- Personality changes
- Temper tantrums
- Running away from home
- Premature participation in sexual relationships
- Aggressive behaviors
- Regressive behaviors in young children (thumb sucking or bedwetting)
- Poor school attendance and performance
- Somatic complaints
- Accident proneness and recklessness

Slide 35: Familial Elements
- Isolation of the child (inhibits reporting and makes the child more vulnerable)
- Coercion/threats made to the child to prevent disclosure
- Current caregiver's substance abuse
- Opportunity for the abuse to occur
Slide 36: Myths and Facts about the Forensic Medical Examination
- The medical examination will confirm if there was sexual abuse.
- If sexual abuse occurred, there will be findings.
- Exams can confirm if a girl is a virgin or not.
- The examination will likely be traumatic for the child.
- The exam mimics an adult gynecologic exam.
- If a child's pediatrician did an exam, that is sufficient.

Slide 37: Myths and Facts about the Forensic Medical Examination
- The medical examination will confirm if there was sexual abuse. (Myth)
- If sexual abuse occurred, there will be findings.
- Exams can confirm if a girl is a virgin or not.
- The examination will likely be traumatic for the child.
- The exam mimics an adult gynecologic exam.
- If a child's pediatrician did an exam, that is sufficient.

Slide 39: Sgroi's Five Stages in CSA
- Engagement
- Sexual interaction
- Secrecy
- Disclosure
- Suppression

Slide 40: Summit's Child Sexual Abuse Accommodation Syndrome
- Secrecy
- Helplessness
- Entrapment and accommodation
- Delayed, conflicting, and unconvincing disclosure
- Retraction

Slide 41: How do we see Sgroi's Stages and Summit's Child Sexual Abuse Accommodation Syndrome in Johnny's disclosure?
Slide 42: What Is the Evidence? Child Disclosures of Sexual Abuse
Summary of research findings (Olafson & Lederman, 2006):
1. The majority of CSA victims do not disclose their abuse during childhood.

Slide 43: Olafson & Lederman (2006), cont'd
2. When children do disclose sexual abuse during childhood, it is often after long delays.
3. Prior disclosure predicts disclosure during formal interviews.
4. Gradual or incremental disclosure of child sexual abuse occurs in many cases, so that more than one interview may become necessary.
5. Experts disagree about whether children will disclose sexual abuse when they are interviewed. However, when both suspicion bias and substantiation bias are factored out, studies show that 42% to 50% of children do not disclose sexual abuse when asked during formal interviews.

Slide 44: Olafson & Lederman (2006), cont'd
6. School-age children who do disclose are most likely to first tell a caregiver about what happened to them.
7. Children first abused as adolescents are more likely to disclose than are younger children, and they are more likely to confide first in another adolescent than in a caregiver.
8. When children are asked why they did not tell about the sexual abuse, the most common answer is fear.
9. Recantation rates range from 4% to 22%.
10. Lack of maternal or paternal support is a strong predictor of children's denial of abuse during formal questioning.
11. Many unanswered questions about children's disclosure patterns remain, and further multivariate research is warranted.

Slide 45: Olafson & Lederman (2006), cont'd
Additional factors that affect children's disclosure of sexual abuse:
- Abuse by a family member may inhibit disclosure;
- Dissociative and post-traumatic symptoms may contribute to non-disclosure;
- Modesty, embarrassment, and stigmatization may contribute to non-disclosure.
Slide 46: Non-Offending Parent/Caregiver Reactions
Reactions you may see:
- Denial
- Anger
- Bargaining
- Depression
- Resolution
BUT: change and movement between these reactions can and will happen!

Slide 47: Why don't moms believe?
Anger, disbelief, denial, shame, guilt, self-blame, hurt, betrayal, confusion and doubt, own abuse history, jealousy, sexual inadequacy or rejection, minimization, revenge, financial or other fears, religious concerns, protecting the perpetrator, hatred, repulsion.

Slide 48: Why don't moms protect?
Behaviors can be viewed on a continuum:
- Knows nothing
- Has knowledge and does nothing
- May "sense" something isn't right, but doesn't ask
- Recognizes potentially abusive behaviors; ineffectual or no protection
- Recognizes potentially abusive behaviors; acts to reduce risk or intervene

Slide 49: Why don't moms protect?
Growing evidence shows that when mothers are incapacitated in some way, children are more vulnerable to abuse. This may take a variety of forms:
- Absent due to divorce, sickness, or death;
- Emotional disturbances, psychologically absent;
- Their own intimidation, fear, or abuse;
- A large power imbalance with the perpetrator that undercuts her ability to be an ally for her children.
Slide 50: Perpetrator Dynamics
Rule 1: They don't look or act the way you'd expect.
- No profile of offender
- Have a public self vs. a private self
Rule 2: The rules of logic do not apply.
- Need-based cognitive distortions
- They come to believe their own distortions

Slide 51: Perpetrator Continuum
Situational:
- Do not have a true preference for children
- May molest for a wide variety of reasons
- More likely to be aroused by adult pornography
- Frequently molest readily available children to whom they have easy access
- Victims are young, vulnerable, accessible, less likely to be believed, and easy to manipulate or threaten

Slide 52: Perpetrator Continuum
Preferential:
- Primary sexual orientation is toward children
- Over-represented in the higher SES groups
- Behavior tends to be scripted, compulsive, and primarily fantasy-driven
- More specific sexual preferences as to age, gender, body type
- Pornography usually focuses on the themes of their sexual preference (children)
Refer to Behavioral Analysis of Sex Offenders handout.

Slide 54: Finkelhor's Four Pre-Conditions to CSA
1. Motivation of the perpetrator to sexually abuse
2. Overcoming internal inhibitors against acting out abuse
3. Overcoming external inhibitions against acting out abuse
4. Overcoming the resistance of the child to the attempted abuse
Slide 55: Information Gathering (Tab 3, pages 61-63)
The slide compares three interview types by goals, methods, training, and documentation:
- Child Welfare. Goals: information gathering; safety and risk assessment; protective capacity; case management; court proceedings. Methods: engagement; empathic, strength-based. Training: social work. Documentation: written summary.
- Forensic. Goals: objective fact-finding for legal proceedings and for all members of the MDIT. Methods: child: research-based protocols; adults: varies from empathetic to confrontational. Training: specialized. Documentation: detailed written, signed statements; audiotaped or videotaped.
- Clinical. Goals: information gathering for psychosocial assessment and treatment. Methods: empathic, strength-based, subjective, unstructured, supportive. Documentation: brief notes, confidential.

Slide 56: Information Gathering with a Child (Child Welfare Perspective)
- Who
- Where (body parts, geographical)
- What
- How
- Documentation
- Clarification
- Closure
- Explanation of next steps

Slide 57: What is a Forensic Interview?
- A forensic interview is conducted with the expectation that it will become part of a court proceeding.
- It is intended for a judicial audience and governed by rules of evidence.
- Its goal is to obtain facts for a court trial or hearing.
- The forensic interviewer strives to maintain a neutral and objective stance, to facilitate the child's recall of events they witnessed or experienced, and to ascertain the child's competence to give accurate and truthful information.

Slide 58: Examples of General Questions
Which is better?
- "Do you know why you're here today?" or "Tell me why you're here today."
- "Do you know why we're talking today?" or "Tell me why we're talking today."

Slide 59: Avoid These Questions!
- Leading questions (the answer is quite clear in the question itself): "Your mother rubs your private parts, doesn't she?"
- Coercive statements (the interviewer offers the child something in return for an appropriate response): "You can't go home until you tell me who did this." "If you tell me who did this to you, I'll buy you some ice cream."
Slide 60: Information Gathering: Child Victims (also refers to section on Impact of Abuse)
- Guilt
- "Damaged goods" belief
- Blurred physical boundaries
- Sexualized behaviors
- Ability to say "NO"
- Difficulty in talking about "taboo" material
- Embarrassment, shame, anger, fear
- Location of interview
- Degree of privacy
- Rapport with interviewer
- Previous decision to disclose
- Questioning style of the interviewer
- Presence of a witness (supportive or otherwise)
- Response of other adults to previous disclosures of maltreatment

Slide 61: Information Gathering: Non-Offending Parent
- Expect denial, disbelief, minimization, projection of blame, and possibly hostility toward you.
- Choose an interview location where the perpetrator has little or no power.
- Explore observations, time frames, relationships, mental health issues (depression), use of medications, sexual abuse/activity history, possibility of DV/emotional abuse, support system, etc.
- Prepare for it to take some time before attitude or belief changes.

Slide 62: Information Gathering: Non-Offending Parent
- Assess dependency issues and drug/alcohol use.
- Assess ability to emotionally support the child or children.
- Anything you tell them, you need to provide in writing.
- Assess ability to carry through the safety plan and investigative requirements (willingness and/or cognitive or logistical ability).
- Be prepared to allow ventilation time.
- Always leave the door open for further conversations.
- Really LISTEN to what their primary concerns are.

Slide 63: Issues With Non-Abused Siblings
- May be angry with the victim for telling (decisional balance) and for the consequences of disclosure.
- May develop negative behaviors or withdrawal as they cope with the situation.
- Parent(s) may develop and enforce rules to reduce the risk of siblings being victimized, causing resentment and rebellion.
- Need to be included in any treatment plan.

Slide 64: Information Gathering: Perpetrator
- Law enforcement involvement
- Who, where, and what are you interviewing for?
- Denial is the first response
- Minimization of behaviors
- Justification
- Blame onto victim or spouse
- "Sick and sympathy"
Slide 65: Day Two. Welcome Back! Questions, Comments, Clarifications…

Slide 66: Assessment
- Physical and behavioral indicators
- Child's disclosure
- Evidence discovery
- Collateral information
- Child/family/perpetrator history
- Alternative hypotheses/confirmatory bias
- Source monitoring
- Perpetrator admission or confession

Slide 67: Cultural Considerations
- What is the general cultural perception of the act(s)?
- How best to structure the approach to the child and family
- Relationship with authority/government entities
- Shame for the child, parent(s), community
- Language proficiency; taboo topics or words

Slide 68: Analyzing the Child's Statement
- Multiple events/elements of progression
- Explicit sexual knowledge
- Richness of details/idiosyncratic details
- Internal logic/feasibility
- Secrecy
- Presence of pressure, coercion, enticement
- Child's perspective of events*

Slide 69: Alternative Hypotheses
Reasonable alternative explanations for what the child is describing, or for other elements uncovered through the investigative process.

Slide 70: Validation of the Referral
Looking at the totality of the information gathered from all sources:
- Does it fit professional knowledge of the dynamics of child sexual abuse?
- Is there a secondary gain for one of the principals?
- Is there medical validation/support?
- Is there physical evidence to support the allegation?
- Is there prior history?
Slide 72: Information Gathering: Child Victims (repeat of Slide 60, for Day Two review)
Slide 73: Case Management Considerations
- Separating the family: perpetrator from family; child/children from family
- Collaboration and monitoring: multidisciplinary team functioning; developing and monitoring treatment plan(s)
- Visitation/reunification: if, when, and how

Slide 74: Treatment Considerations for Victims
Treatment approaches: supportive; symptom-focused; abuse-focused. Visit this website for Evidence Based Practice:
Treatment issues:
- Foster healthy expression of feelings related to the abuse
- Reframe/correct distorted thinking about the abuse
- Assist the child in understanding the nature and impact of the abuse
- Reduce behavioral symptoms and emotional distress
- Sex education; assertiveness; self-esteem; empowerment; personal safety

Slide 75: Treatment Issues for the NOP
Treatment approaches: supportive; psychoeducational; abuse-focused.
Treatment issues:
- Enhance safety/reduce risk!
- Believe the abuse occurred
- Hold the perpetrator responsible
- Empathy/support for the child
- Identification of their own role in the abuse
- Resolution of own abuse/victimization issues
- Foster independence

Slide 76: Treatment Issues for Perpetrator
Treatment approaches: cognitive; behavioral; relapse prevention; offense-specific.
Treatment issues:
- Accept responsibility for behaviors
- Develop/demonstrate empathy for victims and others
- Modification of thinking errors/cognitive distortions
- Identify and reduce/control deviant sexual arousal
- Resolution of own childhood abuse/victimization

Slide 77: Embedded Evaluation. Time to see what you have learned so far!

Slide 78: Closure
THANK YOU AND GOOD LUCK TO YOU IN YOUR CHILD WELFARE WORK WITH CHILDREN AND THEIR FAMILIES.
Indians and Folklife in the Florida Parishes of Louisiana
By H. F. "Pete" Gregory
Throughout North America, the American Indian has been a powerful source of folk culture, and the Florida Parishes are no exception. What is notable about the area are the number, and kind, of Indian influences that did, and still do, affect it.
The most obvious Indian influence on the region is to be seen in the place names: Abita Springs, Amite River, Bogalusa, Boguefalaya, Manchac, Tangipahoa, Tchefuncte, and Tunica, to list only a few. Indian names are common in Louisiana, but somehow they seem to be especially concentrated between the Mississippi and Pearl River. The majority of these names can be traced to one of the Muskoghean languages: Acolapissa, Tangipahoa, Houma, or Choctaw proper. A few may reflect the influence of the Mobilian jargon, a language used in contact situations between whites, Indians, and Blacks. Tunica, a tribal name, was left on the loessial hills where that tribe displaced the Houma in the 1600s. Eventually a red pole was raised on the bluffs to separate tribal territory, probably distinguishing the Houma from the Bayougoula to the southwest. This pole yielded the place names Istrouma (from the Choctaw for "red stick," isti huma), and the French version of that, Baton Rouge, the name of what is now the state capital. From city streets and high schools to the names of bayous and creeks, the imprint of Native Americans on the region is apparent.
When Europeans first began to describe the tribes in the area, the Tunica, having displaced the Houma, had settled in the loess-capped hills along the Mississippi north of Tunica Bayou and Thompson Creek. The Houma shifted south in 1709 but retained control of their hunting grounds on the Amite River, crossing Lake Pontchartrain from their villages on the Mississippi River and upper Bayou Lafourche to hunt there.
According to the eighteenth century Ross Map, the Acolapissa had villages along the shore of Lake Pontchartrain. One was near New Orleans, but at least three others were shown between Pearl River and Lake Maurepas. The Tangipahoa ranged north, up the river that took their name.
These tribes were frequently targeted by colonial powers for political or military purposes. The Tunica grew wealthy from their role as middlemen in the salt and livestock trade with the Caddo and Latino populations in Texas and northwest Louisiana. They were also a mercenary tribe, serving first for the French in the Natchez war and later augmenting the Spanish troops under Bernardo de Galvez at the Battle of Baton Rouge.
Early in the eighteenth century, the French attempted to establish trade relations to the west, with Mexico, and moved the Natchitoches, a Caddoan-speaking tribe, to the Acolapissa north of Lake Pontchartrain. The two tribes gradually intermarried. When, in 1714, the French took the Natchitoches back to their old settlements in northwestern Louisiana, the Acolapissa reacted violently and killed some Natchitoches in retaliation.
From first contact, the Europeans in the Florida Parishes seem to have taken to the woods with the native peoples. Hunting and fishing are as integral to the lifestyle of Florida Parish folks as they were in the early eighteenth century. Gathering wild foods--dewberries (early May), wild strawberries (early May), and mayhaws--are still integral parts of the scenes in the Florida Parishes.
The Choctaw, closely allied to the French in their southern settlements, had considered the region their country since at least the 1720s. Their presence led to a close interaction with white Creoles in Mandeville, Bayou Lacombe, and even New Orleans proper. Creolist George Reinecke has noted that there was a special market where Choctaw from north of Lake Pontchartrain traded heavily when they came to the city. Early magazines, such as Harper's Weekly, illustrated Choctaw men and women selling their basketry and herbs at the French Market. Dressed partly in European garb, but also sporting beads, silver, and bright colors, they added an exotic touch to the market.
White Creole families had close ties across the Okwa Chitto, as Lake Pontchartrain was called by the Choctaw. Some, like Simon Favre, were part Choctaw themselves. Favre served as a Choctaw translator for the Spanish government, and today his family name is still found at Coushatta Choctaw community in Mississippi.
Creole families used Choctaw cane basketry for clothes hampers and storage containers. Various hanging baskets held cutlery and other objects, usually in the kitchen. The Creoles also integrated the Choctaw herbal lore into their cookery, especially filé, and frequently roamed the woods of Bonfouca with Choctaw peers. Cane blowguns, called sabracane, were often used by Creole families, and some have been handed down for generations. Choctaw and mixed-blood families still make them today. The Creoles also developed an affinity for Choctaw stickball. Not only did they attend the Indian games, but they formed their own teams (with streamlined, lighter ballsticks) and allowed their slaves to play on a field near Algiers. Raquette, as the Frenchmen called it, was one of the more popular Louisiana sports.
The French Creoles also incorporated the Choctaw into their literature, poetry, and paintings. In no other part of Louisiana are Indians so graphically depicted, sometimes with realism, often romantically.
As British Americans entered the region, the Creoles held them in great disrespect, and much of their own ethnocentrism was absorbed by their Choctaw neighbors and kinsmen. Certainly the gross mistreatment of Choctaws by Americans, as depicted in the writings of Dominique Rouquette, had something to do with the nature of Choctaw-Anglo interaction. The Choctaws had been killed, abused, and ridiculed by the Americans. However, the Creole planters wisely advised them not to retaliate, for fear the British Americans would annihilate the Indians completely. So, by the 1850s, Choctaws were living on the grounds of Creole plantations, protected in part by the patronage of wealthy planters. They hunted, traded, and sometimes were used to track down runaway slaves. Creole literature, especially such ante-bellum French novels as Dr. Alfred Mercier's Habitation de Saint-Ybars, offers realistic pictures of the Choctaw in such situations.
The Creoles also developed a culture hero in Adrian-Emmanuel Rouquette. Rouquette became a priest and moved among the Choctaw living north of Lake Pontchartrain. His Creole biographer, Dagmar LeBreton, noted he was called "Chata Ima," Choctaw-like, by the Indians. Rouquette collected a vocabulary and artifacts, wore his hair long and dressed "Indian style," according to Albert Gatschet, the linguist from the Smithsonian who met him in the 1880s. Chata Ima eventually entered into Northshore tradition. The mission to the Choctaws continued until 1925, the last priest being Father Francis Baley O.S.B. Some Choctaws objected to the education provided because it eroded the Choctaw language, according to Thomas Colvin.
John Peterson, based on the early descriptions of Cora Bremer, noted the Choctaw reticence to interact with whites at the turn of the century. Dominique Rouquette notes they held day-labor jobs in disdain, equating them with slavery. Both note they kept their traditions and managed to hold on to their language. Jobs cutting and rafting logs insulated them from "bosses," and allowed them to maximize their knowledge of the ecology.
Both Bremer and folklorist Andre Lafargue noted that the Choctaw maintained their traditional "annual" meetings. Such gatherings were usually held in the spring when the swamps were flooded, and were well attended by Choctaw from Alabama, Louisiana, and Mississippi. These gatherings were versions of the traditional Choctaw "corn feast." Lafargue noted they consisted of ritualized animal dances, sun worship, and "calling for rain." These rituals were always held at night, as opposed to the midday corn ceremonials once held across the southeast. Claude Medford, Jr., has suggested that holding them at night freed the celebrants from white interference, although sympathetic whites were sometimes welcome and invited.
By the early decades of the twentieth century only a handful of Choctaw survived at Bayou Lacombe. Anthropologist David I. Bushnell describes them living in shotgun houses--the architecture of poverty--set amid pine stumps. The Choctaw, like everyone else in the piney woods, were left stranded in the cut-over woods. Abandoned by the sawmills, they fell back on old skills: basketry, hide tanning, subsistence farming, and trapping or hunting to survive.
In spite of their dire economic straits, they maintained their traditional beadwork, silversmithing, and distinct female hairstyles. Most outstanding was their basketry, made of cane primarily, but also utilizing the cortical surface of palmetto stems. As cane brakes began disappearing at the turn of the century, impacted by logging, forest fires, and open-range cattle and hogs, more and more palmetto basketry appeared. By 1939 only six fullbloods were counted at Bayou Lacombe, but some still spoke only Choctaw.
By the 1930s only Mathilde Favre Johnson still made baskets at Lacombe, according to her avid student Tom Colvin. With few young Choctaw left in the region, Sanville and Mathilde Johnson taught a relative, Hazel Cousin, and a young non-Indian, Tom Colvin, how to make the traditional crafts. Colvin has helped the Jena Band of Choctaw revitalize their cane basketry and spends his own time and funds trying to perpetuate Choctaw crafts and tradition. This Choctaw/non-Indian relationship is a mirror image of those in old Creole days, the time of Chata Ima.
There are scattered descendants of the Bayou Lacombe settlement living in New Orleans, while other relations live in southern Mississippi. Certain families at Bay St. Louis, Mississippi, are closely related. At Bayou Lacombe "Creolized" families of French, British American, and Blacks still participate in some Choctaw customs. The so-called "swamp communities" near Lake Maurepas live virtually as did the Acolapissa and Choctaw. Dugout pirogues were built there, and people trap, catch crabs, and fish the lake with nets, traps, and lines. Somehow the buzz of interstate traffic has passed them by. Their independence has survived.
Baton Rouge, the state capitol, now has Huey P. Long's phallic-shaped capitol standing where the Indians' isti huma or red stick marked the boundaries between their tribes. The industrial complex lights the night sky and spreads north toward St. Francisville and the bluffs of the Tunica Hills. North Baton Rouge is filled with streets bearing Indian names. The high school is called Istrouma, named after the red stick itself. The giant petrochemical plants in north Baton Rouge attracted workers from all across the Florida Parishes, mainly people of British American descent. Still, as is common in the Upland South, many of these families boasted Indian connections, some stretching back to colonial days. Moreover, Indians from other parts of the United States--Sioux, Apache, Choctaw, Creek, and others--came seeking jobs. So, contrary to other cities in Louisiana, a large urban Indian community has sprung up near Istrouma in Baton Rouge, and in Baker, north of the city.
In the 1960s some of the families in north Baton Rouge, out of sympathy for Indians across the country, started an organization known as the Indian Angels. The Indian Angels organized annual pow-wows, secular dances across the nation by both urban and rural tribesmen, where Indians participate in round dances, war dances or feather dances, and "straight" dances, which have spread from the Great Plains to virtually every tribe in North America. The Angels went across Louisiana looking for Indians, holding monthly meetings wherever they could. Their activities encouraged Indians to lobby politically, and the "Esso Indians" or "Angels" came to be well known across the state. Dressed in Plains Indian costumes, they danced on the steps of the Capitol itself. They participated in the coalition of Eastern Native-Americans and helped make the U.S. government aware of the Louisiana Indians. Their American Indian Store in North Baton Rouge serves as a pan-tribal gathering place and stocks crafts from a variety of sources.
In Baker, a more conservative faction of local Indians began their own "tribe," the Louisiana Band of Choctaws. They, along with a few families in North Baton Rouge, have worked diligently to resurrect their Choctaw heritage. Even the fullbloods involved in these organizations are from different communities, and few if any children speak the native languages. To compensate, these "new" Indians hold their annual pow wow at Poverty Point State Commemorative Area, teach the Choctaw language to their children in school rooms, and travel to Mississippi and Oklahoma to "reconnect" with their heritage. Choctaw dance and music have gradually begun to replace pow-wow music, and traditions--once thought forgotten--have been revitalized. From Baton Rouge to Bogalusa, Florida Parish Indians have participated in these activities. The Louisiana Band of Choctaw has extended cultural programs into economic and political action. These groups have, as some anthropologists have suggested, provided an excellent avenue for maintaining and sharing Indian identity. For displaced people, rural-urban migrants, and tribal isolates, the urban groups may well represent some of the mechanisms of cultural maintenance and transferral that stabilized and regulated the old Indian communities.
The drums beat in the Istrouma gymnasium, at Police Academy Camp, the Teamsters Union Hall, or at Poverty Point. Old songs are still sung, and new songs come into being. Young boys learn the "feather dances" of the Kiowa or Sioux, but others dance the Horse Dance or Bear Dance of their Choctaw ancestors.
At Mandeville, an old Choctaw lady and a young non-Indian split palmetto stems and cane, quietly plaiting the baskets that have served generations of Florida Parish families. They keep the reciprocity of Creole days.
Out at Bayou Lacombe, an old man still builds bird traps and makes cane blowguns--the old ways perish but slowly. However, in the city, away from their origins, the young people are awakening, looking about, and straining to find a way to keep their cultures alive.
After two centuries of white contact, Indian culture in the Florida Parishes is still an active, viable part of the region's folk traditions.
"A Choctaw Pack Basket." Papers of the Denver Art Museum. n.d., 22-24.
Bowman, Greg, and Roper-Curry, Janel. The Houma People of Louisiana: A Story of Indian Survival. Dulac: United Houma Nation, 1970.
Brain, Jeffrey P. "The Tunica Treasure." Lower Mississippi Survey Bulletin No. 2, Cambridge: Harvard University, 1970.
Brain, Jeffrey P. "Trudeau, An 18th Century Tunica Village." Lower Mississippi Valley Survey Bulletin No. 3. Cambridge: Harvard University, 1973.
Brain, Jeffrey P. "From the Words of the Living: The Indian Speaks." Clues to America's Past. Washington: National Geographic Society, 1975.
Brain, Jeffrey P. "On the Tunica Trail." Louisiana Archaeological Survey and Antiquities Commission, Anthropological Study No. 1. Baton Rouge: Department of Culture, Recreation & Tourism, 1977.
Brain, Jeffrey P. "Tunica Treasure." Papers of the Peabody Museum of Archaeology and Ethnology, Harvard University - Volume 71. Cambridge: Peabody Museum, 1979.
Bremer, Cora. The Chata Indians of Pearl River. New Orleans: Picayune Job Press, 1907.
Bremer, Cora. Archives, American Museum of National History. Letter to Frank Boas, 1907.
Bushnell, David I., Jr. "The Choctaw of Bayou Lacombe, St. Tammany Parish, Louisiana." Bureau of American Ethnology Bulletin No. 48. Washington, D.C.: Government Printing Office, 1909.
Bushnell, David I., Jr. "Myths of the Louisiana Choctaw." American Anthropologist 12 (1910):526-535.
Bushnell, David I., Jr. "Some New Ethnologic Data from Louisiana." Journal of the Washington Academy of Sciences 12 (1922):303-307.
Butler, Mabel Johnston. Archives, Eugene P. Watson Library, Northwestern State University, Natchitoches, Louisiana. Letter to Caroline Dormon, 1933.
Colvin, Thomas A. Cane and Palmetto Basketry of the Choctaw of St. Tammany Parish, Lacombe, Louisiana. Edited by Melba Efer Colvin. Mandeville: Private, 1978.
Cushman, H. B. History of the Choctaw, Chickasaw, and Natchez Indians. Edited by Angie Debo. Stillwater, Oklahoma: Redlands Press, 1962.
Drechsel, Emanuel and Drechsel, T. Haunani Makuakane. The Ethno-history of the 19th Century Louisiana Indians. New Orleans: Jean Lafitte National Park, 1982.
Friends of the Cabildo. Louisiana Indians, 12,000 Years. New Orleans: The Presbytere, 1966.
Juneau, Donald. "The Judicial Extinguishment of the Tunica Indian Tribe." Southern University Law Review 7(1) (1980): 43-99.
Lafargue, Andre'. "Louisiana Linguistic and Folklore Backgrounds." Louisiana Historical Quarterly 24 (1941): 744-755.
LeBreton, Dagmar R. Chata-Ima, The Life of Adrian-Emmanuel Rouquette. Baton Rouge: Louisiana State University Press, 1947.
McDermott, John Francis. Tixier's Travels on the Osage Prairies. Norman: University of Oklahoma Press, 1940.
McWilliams, Richebourg Gaillard. Fleur de Lys and Calumet. Baton Rouge: Louisiana State University Press, 1953.
"Mathilde Johnson, Last Full Blooded Choctaw in Lacombe." St. Tammany News Banner, Covington, Louisiana, 18 February 1976.
Peterson, John R. "Louisiana Choctaw Life at the End of the Nineteenth Century." Four Centuries of Southern Indians. Edited by Charles Hudson. Athens: University of Georgia Press, 1975.
Rouquette, Dominque. "The Choctaws with Data on the Chickasaw Tribe and Other Sketches." New Orleans: Tulane University Library, WPA Archives Survey, 1845-50.
"Six Choctaw Remain of Proud Bayou Lacombe Tribe." Times Picayune. 2 July 1939.
Smithsonian Institution, Washington. National Anthropological Archives. Albert S. Gatschet. "Report on Cata'ba, Cha'ta, and Sheti'masha Indians," 1881-82.
Smithsonian Institution, Washington. National Anthropological Archives. Albert S. Gatschet. "Choctaw Vocabulary: Dialect of Southeastern Louisiana: Collect in Various Settlements of Tangipahoa and St. Tammany Parishes, Louisiana," 1881-82.
Swanton, John R. "Mythology of the Indians of Louisiana and the Texas Gulf Coast." Journal of American Folklore 20 (1907): 285-289.
Swanton, John R. "Indian Tribes of the Lower Mississippi and Adjacent Coast of the Gulf of Mexico." Bureau of American Ethnology Bulletin No. 43. Washington, D.C.: Government Printing Office, 1911.
Swanton, John R. "Source Material for the Social and Ceremonial Life of the Choctaw Indians." Bureau of American Ethnology Bulletin No. 103. Washington, D.C.: Government Printing Office, 1931.
Van Doren, Mark. Travels of William Bartram. New York: Dover Publications, 1928. | <urn:uuid:17c71660-9a78-4513-917d-495ca2ae2d7e> | CC-MAIN-2017-17 | http://www.louisianafolklife.org/lt/virtual_books/fla_parishes/book_florida_indians.html | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120101.11/warc/CC-MAIN-20170423031200-00012-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.939123 | 4,420 | 3.40625 | 3 |
Constructing The Ideal Computer Game
Orson Scott Card Editor, COMPUTE! Books
Last month, in Part I, we explored the general notion of the ideal, involving computer game. This article now concludes with some hands-on, specific programming for an Atari version of the example game.
Laying Track At The Expert Level
If you are playing the expert game, there are a lot of track-laying options open to you, for you are allowed to create switches.
Simple Switches. To create switches, hold down the joystick button when you push or pull the joystick. You will get the following results.
If, with the button held down, you push the joystick in the direction that would normally lay a straight track unit, a Y-switch will be laid:
push straight ahead
pull toward you
If, with the button held down, you push the joystick in the direction that would normally curve the track to one side or the other, one spur of the switch will go straight ahead, while the other spur will curve in the direction you pushed.
push straight ahead
pull toward you
Laying Complex Switches. The most complicated switching operation is when you want the track to branch from another direction. If, with the button held down, you push the joystick back in the direction you came from, which would normally let you re-lay the last track unit, a low hum comes from the television.
Push the button and then push the joystick back in the direction you came from.
While that low hum is sounding, the program will wait for you to push the joystick in one of the three valid directions (straight or curved to either side). The new switch will branch from whatever direction you chose.
push straight ahead
Now a high-pitched sound will come from the television. This means that the program is waiting for you to choose one of the two remaining valid directions. The switch will branch toward the direction you choose.
pull toward you
The high-pitched sound will end. You can then change your mind, of course, and lay a different switch or a simple track unit – nothing is definite until you push START. But while those tones are sounding, you can choose only valid switching options, until you have completed the switch.
As you can see, there are only three possible switches – a left switch, a right switch, and a Y-switch. All switch units are laid by pressing down the button while moving the joystick. Only when you want a switch to branch from another direction does it take more than one step to lay a switch unit.
This sounds harder, and it is – but it also gives you more freedom when you come to track you have already laid. You still can do only crossovers and curved bypasses of the other player's track, but you can now join the spur you are working on to another segment of your own track.
For instance, say you are laying a unit of track in the square shown below.
your old track
At the beginner level, you could lay only a straight unit, creating a crossover. But at the expert level, you can also choose a left curve or a right curve, which would create one of the following switches:
Please notice that you don't have to push a button to create one of these switches. In fact, the program will ignore the button if you are about to cross an existing track segment, for each switch can only branch into two spurs.
This means that every switch that creates a new spur must end with a switch that rejoins the spur to the main line.
To keep things from getting too cluttered in your layout, you can create a total of only eight switch-pairs if you are playing alone, or four switch-pairs for each player in a two-player game. So if you try to push the button to create a ninth (or fifth) switch, the program will ignore the button.
How can you tell a spur from the main line? The only difference is the way the spur ends. If the spur ends by joining directly to the beginning of the very first track unit laid, it is the main line. If the spur ends by creating a switch to join it to any track segment, then that spur is not the main line.
"Railroader" keeps track of how many spurs there are, and will not let you join the last spur back to the main line with a switch, unless you have already joined the main line back to the first track unit. And if you press OPTION with any spurs left open, without being joined back to the main line, Railroader will automatically make one spur the main line by joining it to the first track segment, and then will join all the other spurs to the nearest segment of the main line by using switches.
• Choosing Which Spur to Build On. When you have more than one spur, of course, you get to decide which spur you are adding to. You do this by pressing the SELECT button at the beginning of your turn. Railroader remembers the location of every uncompleted spur end, and each time you press SELECT the cursor moves from one spur end to the next. Even if you have already laid a track unit in that turn, but have not yet pressed START, you can press SELECT and Railroader will erase the unit you just laid, then move the cursor square to the end of the next uncompleted spur.
• Crossovers and Bypasses. Just because you can join one track to another with switches at the expert level doesn't mean you have to. You can still create a crossover or curving bypass by pushing the joystick in the direction that would normally lay those track units.
• Erasing with Switches. What about erasing track units by pushing the joystick back in the direction you came from? You can still do that, but when you come to a switch, Railroader will not let you erase it until you have erased all of both spurs leading away from that switch. When you have erased all of one spur, up to the switch, then push SELECT until you are at the uncompleted end of the other spur, and erase that line of track up to the switch. Now Railroader will let you erase the switch. (Notice, though, that this works only if the spur has not been completed. If you come to a switch whose other end is already joined to the main line, pushing SELECT won't get you to the uncompleted end of that spur, since it has no uncompleted end.)
• Illegal Moves. Now that you can use switches to join onto existing lines of track, there are fewer illegal moves to worry about, right? Unfortunately, it isn't so. You still can't join your spur to the other player's track. And now you can't cross over or bypass any track unit that contains a switch, either your own or the other player's! This means that you will end up erasing more often, as you or the other player occasionally get one of your spurs in a box.
• Ending the Expert-level Session. Just push OPTION. If you left any loose ends, Railroader will clean them up, just as in the beginning level. If you left a spur in a box, however, from which Railroader can't legally escape without erasing, the program will put the cursor at the uncompleted end of that spur, so you can erase that line of track back to a point where either you or Railroader can legally complete the spur.
Running The Trains
When you end your track-laying session (or if you chose "Run Trains" instead of "Lay Track" at the beginning of the game), Railroader will ask you whether you want to use the layout you just created or load one from cassette or diskette. If you choose diskette, you will be asked the file name.
When Railroader saves a layout, the file that holds the data also remembers whether there was one player or two. When you decide to run trains on a layout, you do not get to choose one or two players – Railroader will run two trains if there are two tracks, one train if there is only one track.
If there is only one train, it is twice as long as each of the trains in a two-player game. (Since two trains use up twice as much CPU time as one train, this makes it so that one- and two-train games run at the same speed.) You cannot stop or speed up, but you can slow down your train by holding down your joystick button. When you let go, the train immediately resumes normal speed.
You can control the switches with your joystick. Of course, if the spur you are on is merely joining onto another line, with no choice of direction, you have no choice. But if your train could go either way, Railroader remembers whether you last pushed your joystick left or right. Other directions are ignored. If you last pushed left, your train will take the left-hand track at every switch it comes to until you push right. It doesn't matter when you push the joystick, except that once your engine has passed the switch, Railroader will not change that switch; instead, the program will assume you have changed the next switch.
Of course, if the train layout you are playing on was created at the beginner level, there are no switches. There will probably be crossovers and bypasses, however, which will make running the train more interesting.
If there are two players, Railroader keeps a score. You get one point for each track unit you pass through (which encourages you to stay at top speed); two points for each switch you cross over, and ten points if your opponent crashes into you. (You get no points for crashing into your opponent.) Only relative scores are kept - the difference between your scores. Your engines change color, depending on which of you is ahead. The leader has a brighter, warmer-colored engine; the other player has a darker engine, in cooler colors. The actual number of the difference in scores between the two players is not displayed until the end. This means that when you are playing noncompetitively, or with young children, they do not have to be aware of "winning" or "losing" - the color changes can be purely decorative.
The game ends when one player or the other pushes OPTION, or when the difference between the two players is greater than 255.
Programming Hints: Creating The Screen
The easiest way to create the train layout is to use an alternate character set with a multicolor character mode, if your computer will allow it, though direct pixel manipulation will also work. On the Atari, for instance, you would probably use ANTIC mode 4, which provides a screen 24 characters high and 40 characters wide (just like Graphics 0). You might then divide the screen into four-character by four-character blocks, giving you a grid of six blocks vertically by ten blocks horizontally. (Any arrangement that comes out even will do.) Obviously, these blocks correspond to the "square" track units.
Individual characters might look like the seven characters depicted in Figure 1.
These characters might be combined into an up-right curving block of track as shown in Figure 2.Figure 1. Seven Multicolor Characters
You might notice that the four corners of every block are never used, and depending on the track layout within each block, many other characters are blank. You could fill these blank spaces with almost anything. In fact, since the place where the corners of four blocks join will always be blank, you might put buildings, foliage, water, or practically anything into these spaces before the game begins, giving a sense of the space remaining to be filled.
How Many Characters Will It Take?
Surprisingly few characters will be needed to create the track itself. On the Atari, for instance, if the rails are drawn using color register 2 at location 710, then the second player's track can use the same characters, but entered in inverse mode. In inverse mode, the color of the rails will come from color register 3 at location 711.
There are two possible straight tracks: vertical and horizontal. Each requires two characters. The four possible curves (up-left, up-right, down-left, and down-right) require 12 more characters. There are 12 switches - four Y-switches, four left-hand switches, and four right-hand switches - but they might be able to use some pieces from the curves and straight tracks, so that only 32 new characters would be needed to make them. Bypasses and crossovers require another eight characters.
That means that 68 characters are required to make every essential track element - leaving you 60 characters for drawing buildings, foliage, ponds, or anything else you might want to add.
Putting Together The Blocks
How many total blocks would you need? For one player, you would need two straightaways, four curves, one crossover, two bypasses, four Y-switches, four left-hand switches, and four right-hand switches. For two players, double that and add six new blocks for situations where two different-colored tracks are present on the same block (two crossovers and four bypasses). That gives you a total of 48 blocks, each consisting of 16 characters.
Blocks could be stored as a two-dimensional or three-dimensional numeric array, and your program could POKE them into screen memory:
500 FOR 1 = 0 TO 3 510 FOR J = 0 TO 3 520 POKE SCREEN + PLACE + (40*1) + J,BLOCK (UPLEFT,I,J) 530 NEXT J : NEXT I : RETURN
In this subroutine, BLOCK is a three-dimensional array, in which the first subscript defines which block it is, the second defines the row of the block, and the third defines the character on the row. The characters in Block 7 would be defined like this:Figure 2. Block Of 16 Characters Forming An Up-Right Curve
BLOCK(7,0,0) BLOCK(7,0,1) BLOCK(7,0,2) BLOCK(7,0,3) BLOCK(7,1,0) BLOCK(7,1,1) BLOCK(7,1,2) BLOCK(7,1,3) BLOCK(7,2,0) BLOCK(7,2,1) BLOCK(7,2,2) BLOCK(7,2,3) BLOCK(7,3,0) BLOCK(7,3,1) BLOCK(7,3,2) BLOCK(7,3,3)
ULEFT is the variable holding the number of the block that draws an up-left curve. SCREEN holds the address of the start of screen memory. PLACE holds the offset of the block's starting address from SCREEN: 40 is added to PLACE for each new line, and 1 for each new character.
The same sort of thing could be done with string arrays, using POSITION and PRINT commands:
500 FOR I = 0 TO 3 510 POSITION COLUMN, LINE +I 520 PRINT BLOCK$(ULEFT, I) 530 NEXT I : RETURN
Atari users could dimension one long string –DIM BLOCK$(767) – and then use POSITION and PRINT commands like this:
500 FOR I = 0 TO 3 510 POSITION COLUMN, LINE +I 520 PRINT BLOCK$(ULEFT + (I*4), ULEFT + (1*4) + 3) 530 NEXT I : RETURN
You don't have to settle for the 24-row by 40-column screen, either. Even with coarse scrolling, instant vertical wraparound can be achieved by making the last 24 rows of screen memory identical with the first 24 rows, and then page-flipping instead of scrolling at the very top and bottom of screen memory. As players lay track at the top or bottom of the screen, they might notice a slight delay as the program POKEs the blocks into two places in screen memory instead of one, but during the actual scrolling there will be little if any hesitation.
Moving The Train
If you want to have a smoothly moving train, you'll heed to use player/missile graphics. You'll get best results with machine language subroutines for movement. The train can still be run 1 with BASIC, however, and the illusion of speed can be maintained if you move the train in increments of, say, half a screen character – two horizontal pixels or four vertical pixels at a time, each way. Movement is a little jerky, but it is fast.
Animation will be a little tricky. On straight tracks it is simple enough – you need only four positions for each car – two, if the front and back of the car are identical, so that it doesn't matter which way it is facing. If your engine and train cars are identical, except for color, it is all the simpler, since one shape will control each position for all the cars.
There is nothing wrong with using only straight vertical and horizontal movements – the curving tracks are abrupt enough so that the train won't "leave" the track. However, for smooth movement you may want intermediate positions:
Another animation technique is to use part of your character set to generate trains, with characters representing track sections with train cars on them. By POKEing "train car" characters into screen memory and then restoring the old values afterward, you can get longer, four-colored trains – but with jerkier movement.
You will also need to decide how to handle collisions. Stop one train? Let them pass through each other? Design an explosion?
The answers to these and many other questions are best left to your own creativity. After all, there are hundreds of ways to design elegant programs to bring this game to life. Solving the problems to create your version of Railroader is half your fun.
The other half is making layout after layout. No two games will ever be the same; and as generations of model railroaders can tell you, actually running the trains is just an extra, like the orchestra doing a quick encore when the concert is over.
After you've carried out this game design (no doubt improving on it many times along the way), you might try one of these variations:
• Traffic. One player designs a system of one-way and two-way streets, setting up stoplights. Then up to five players use paddles to drive cars on the streets, getting "tickets" for disobeying laws and losing even more points for crashing, while the program systematically changes the red and green traffic lights.
• Treasure Map. Using a font of old-fashioned map characters, a player designs a treasure map; when the game is played, the program randomly or systematically assigns certain treasures and dangers to certain locations.
• Houseplan. The player uses the joystick to build the walls of a house, and the keyboard to put in doors and windows and furnish the house.
Does It Matter?
After all, it's only a game. It's only play. It's only supposed to make money, isn't it? Like the movies. The success of a game is measured in dollars per week. It couldn't possibly be art.
But it is art. Computer games are created by human beings, using the computer, the television screen, and the sound speaker as their medium. And like other artists, computer gamemakers – let's call them videowrights – find that their medium is at once limiting and liberating.
The videowright has only a tiny fraction of the painter's palette to work with. The scan lines and color clocks of the TV set force the videowright to paint in discrete dots, while memory limitations discourage extravagant use of color and images. Yet painters cannot make their paintings move.
Novelists and playwrights can create far deeper characters, far more intricate plots than the videowright, but novelists cannot make you see, and playwrights cannot bring off the fantastic milieux of the videogame.
Above all, the videowright can create an art that the audience takes part in. When you play a videogame, you become part of the act. It's as if you went to the movies and, without stopping the flow of the film, you got to decide what Clint Eastwood or Katharine Hepburn would say next; as if you went to the theatre, and were given a script and put into the play; as if you went to a concert and got to control the program as it went along.
Despite their differences, all the arts have some things in common. I believe that this is the most important:
The audience voluntarily comes to dwell in the world that the artist has created.
Playing Joust and Dig-Dug is more than racking up points. It's dwelling for a time in a world that you can't visit any other way. There are dangers; there are laws; there are strategies for survival; there are rewards for achievement. There is a beginning, an ending. You have more than one chance to make good.
Audience Or Artist
My children are still so young that they don't know that it takes years of training to dance or sing or act out plays or write books. Geoffrey is halfway through writing a novel. Emily improvises plays all day. When the kids like the music they hear, they dance. When they want to sing, they sing, and never mind the melody. And we have enough drawings and paintings to paper a good-sized office building.
We wouldn't dream of telling children that baseball and basketball were only for grownups – they can only go to the ballpark and watch. It's no better to limit them to being in the audience of videogames. Even though it's the most participatory of the arts, the barrier between maker and audience shouldn't be so vast.
Of course, people don't always want to be creative. More often than not, I prefer to play. I like dwelling in some of those worlds that videowrights have made for me.
But when I want a more creative kind of entertainment, I'd like to be able to sit down at the computer and build, the way my children and I build with wooden blocks and plastic bricks. I can always write my own program if I want to, of course. But that's like cutting down a tree and sawing it into blocks and sanding them in order to play with building blocks. Doing it once is fine, but you wouldn't want to have to do it every time. | <urn:uuid:a223cd30-b433-4223-bb26-f8b53a22c0f0> | CC-MAIN-2017-17 | http://atarimagazines.com/compute/issue39/CONSTRUCTING_THE_IDEAL_COMPUTER_GAME.php | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122996.52/warc/CC-MAIN-20170423031202-00192-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.944911 | 4,731 | 3.34375 | 3 |
Nature did not construct human beings to stand alone. . . . Those who have never known the deep intimacy and intense companionship of happy mutual love have missed the best thing that life has to give. Love is something far more than desire for sexual intercourse; it is the principal means of escape from loneliness which afflicts most men and women throughout the greater part of their lives. (Russell 1929, 122–123)
To shed light on Bertrand Russell's proposition that love is the principal means of escape from loneliness, this entry will examine the links between loneliness and the family. In thinking about loneliness from a family and life-cycle perspective, several questions come to mind. What is the relationship between marriage and loneliness? Is loneliness passed from parents to their children and, if so, how? From birth to death, are there predictable fluctuations in loneliness due to parents' and their children's life stages? Is it true, as is frequently depicted, that the loss of intimate relationships leads to loneliness? Since the mid-1970s social scientists have published a growing number of studies addressing these questions (Ernst and Cacioppo 1999).
Concept and Prevalence
Contemporary social scientists have defined loneliness as the unpleasant experience that occurs when a person's network of social relationships is deficient in some important way, either quantitatively or qualitatively (Peplau and Perlman 1982, p. 4). According to this conceptualization, loneliness stems from a discrepancy between the level of social contact a person needs or desires and the amount she or he has. The deficits can be in the person's intimate relationships, as Russell's quote implies, leading to emotional loneliness, or in the individual's broader network of relationships, leading to social loneliness (Weiss 1973). In either case, loneliness is a subjective experience—people can be alone without being lonely or lonely in a crowd.
Loneliness is widely prevalent. Although loneliness appears to occur in virtually all societies, its intensity varies by culture. In an eighteen-country survey (Stack 1998), the United States was in the top quarter of countries in terms of average levels of loneliness, perhaps reflecting in part the individualistic, competitive nature of life in the United States. Individuals in European social democracies such as the Netherlands and Denmark were least lonely. Sociologists have associated national differences in loneliness with differences in social integration. The Dutch, for example, are socially well-integrated: they have more people in their social networks, are more involved in civic organizations and volunteer work, and receive more emotional support.
Loneliness and Marriage
One cultural universal found in a multinational study (Stack 1998) was that married men and women are less lonely than their unmarried counterparts. Cohabitation also buffered individuals from loneliness but not as much as marriage. When the unmarried are categorized into subgroups (never married, separated or divorced, widowed), the results vary somewhat by study. The general tendency appears to be for single people to be less lonely than the divorced or widowed (Perlman 1988, Table 3). In at least one Dutch study, single parents were also a group high in loneliness. Overall, loneliness seems to be more a reaction to the loss of a marital relationship rather than a response to its absence.
Differences in loneliness as a function of marital status can be explained either in terms of selection or in terms of what marital relationships provide. If selection is operating, it means that the people who marry are different and would avoid loneliness even without marrying. This explanation is difficult to test definitively, although it is challenged to some extent by the relatively low levels of loneliness among never married respondents. The second view implies that the more the marital relationships provide, the less lonely the partners should be. Consistent with this explanation, low marital satisfaction is associated with greater loneliness. Similarly, compared with individuals who confide in their spouses, married individuals who talk most openly about the joys and sorrows of their lives with somebody besides their spouse are more prone to being lonely. One can conclude from the evidence that when marriages are working well, they provide partners with ingredients that buffer them from loneliness.
Parents, Children, and Loneliness
Social scientists frequently debate questions of heredity versus the environment. In the origins of loneliness, both appear to have a role. Consistent with there being an inherited component to loneliness, in a 2000 study (McGuire and Clifford 2000) both siblings and twins had some similarity in their levels of loneliness, but the similarity was greater for identical twins than for either fraternal twins or singleton siblings.
Researchers have also checked for an association between parents and their children in the likelihood of being lonely. Working with older parents (85 or older) and their mid-life children, M. V. Long and Peter Martin (2000) did not find evidence of intergenerational similarity. In contrast, J. Lobdell and Daniel Perlman (1986) administered questionnaires to 130 female undergraduates and their parents. As expected, they demonstrated that the parents' loneliness scores were modestly correlated with those of their daughters. Of course, such an association could be explained by either genetic or environmental factors.
To explore possible psychosocial factors, Lobdell and Perlman also had the university students in their study rate their parents' marriages and childrearing practices. Lonely students depicted their parents as having relatively little positive involvement with them. This is one of several studies showing the cold, remote picture of parent-child relations reported by lonely young adults. They also saw their parents as having lower than average marital satisfaction. This finding complements other studies showing that children whose parents divorce are at risk for loneliness, especially if the divorce occurs early in the child's life. These findings can be interpreted within an environmental framework. In sum, the work on the origins of loneliness suggests that genetic and family factors each play a role in levels of loneliness, although nonfamilial environmental influences are likely also critical.
The parental contribution to children's loneliness is not simply a one-time input. Instead, loneliness bidirectionally intertwines with parent-child relations over the life-cycle. A first noteworthy lifespan phenomenon is that in the transition to parenthood, women who are lonely during their pregnancy are at higher risk for postpartum depression.
In infancy, children are highly dependent upon their parents and caretakers. As they get older, peer relations become more important. Along with this shift comes a shift in what type of relations are most closely linked with loneliness. In the middle elementary years, it is the quality of children's relationships with their mothers. In late adolescence, it is the quality of university students' relationships with their peers.
Concerning more mature children, Pauline Bart (1980) has analyzed how children's leaving home affects middle-aged mothers. She concluded that women who adopt the traditional role of being homemakers devoted to their children are prone to experience greater loneliness and depression when their children leave home than are women less invested in a maternal, homemaker role.
For many people, one perceived benefit of having children in the first place is the notion that they will provide comfort and support in old age. As far as loneliness goes, there is evidence challenging this view. Tanya Koropeckyj-Cox (1998) looked at older adults with and without children. Contrary to common belief, the results did not show a clear advantage of having children. A second line of research has examined whether family or friends are more strongly associated with avoiding loneliness in old age. Martin Pinquart and Silvia Sorensen's (2001) meta-analysis, a technique for statistically combining the results of several studies, shows the primary role of friends, as opposed to family members, in buffering seniors from loneliness.
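As a concrete illustration of the meta-analytic technique mentioned above, one common approach pools correlation coefficients across studies via Fisher's z-transformation, weighting each study by its sample size. The sketch below is a minimal fixed-effect version; the study correlations and sample sizes are invented for illustration, not data from Pinquart and Sorensen.

```python
import math

def pool_correlations(studies):
    """Fixed-effect pooling of correlation coefficients via Fisher's z.

    `studies` is a list of (r, n) pairs: a correlation and its sample size.
    Each r is transformed with z = atanh(r), weighted by n - 3 (the inverse
    variance of z), averaged, and transformed back with tanh.
    """
    weighted_z = sum((n - 3) * math.atanh(r) for r, n in studies)
    total_weight = sum(n - 3 for _, n in studies)
    return math.tanh(weighted_z / total_weight)

# Hypothetical studies relating friendship quality to (lower) loneliness:
studies = [(-0.30, 120), (-0.25, 80), (-0.40, 200)]
pooled = pool_correlations(studies)  # roughly -0.34, weighted toward the larger study
```

The larger third study pulls the pooled estimate toward its correlation, which is exactly the behavior a sample-size-weighted synthesis is meant to have.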
Relationship Endings and Loneliness
Having examined separation and loss in parent-child relationships, what happens when these phenomena occur in romantic relations? As young adult dating relationships end, presumably both partners experience a decline in the social aspects of their lives. But in many couples, one person initiates the breakup whereas the other is "left behind." Charles Hill, Zick Rubin, and Letitia Peplau (1976) found that the initiators suffered significantly less loneliness than the partners who were spurned. Perhaps having control over such life changes helps reduce the distressing effects of losing a partner.
After their young adult dating experiences, many individuals marry and eventually end those unions via divorce. In one study (Woodward, Zabel, and Decosta 1980) fifty-nine divorced persons were asked when, and under what circumstances, they felt lonely. For these respondents, the period of greatest loneliness occurred before (rather than after) the divorce decree became final. Both ex-husbands and ex-wives felt lonely when they felt out of place at a particular social event or excluded by others. For ex-wives, loneliness was also triggered when (1) they wanted to join an activity but were unable to do so; (2) they had no one with whom to share decision-making responsibilities and daily tasks; (3) they felt stigmatized by being divorced; and (4) they had financial problems.
A University of Tulsa study involving seventy-four men and women compared the divorce experiences of lonely versus nonlonely individuals. Lonely individuals blamed more of the marriage's problems on their former spouse. They also had more difficulties in their relationships with their ex-partners. They argued more over childrearing, felt less affection, and had less friendly interactions. In terms of adjusting to separation, lonely respondents drank more, experienced greater depression, and felt more cut-off from their friends. They spent more time with their children and were less likely to become romantically involved with a new partner.
For many North Americans, marriage lasts "till death do us part." If relationships end via death of a spouse, U.S. Census data show a 5 to 1 sex ratio with women predominantly being the individual left widowed. Helena Lopata (1969) has identified several ways that widows miss their husbands. For example, when their spouse dies, women lose a) a partner who made them feel important; b) a companion with whom they shared activities; c) an escort to public encounters as well as a partner in couple-based socializing; and d) a financial provider who enabled them to participate in more costly activities and enjoy a more expensive lifestyle. With such losses, it is not surprising that loneliness is a major problem in bereavement.
Robert Hansson and his associates (1986) found a general tendency for greater loneliness to be associated with a maladaptive orientation toward widowhood. Prior to the death of their husbands, the lonely widows engaged in less behavioral rehearsal (e.g., finding jobs, getting around on their own) for widowhood and instead engaged in more rumination about the negative consequences of their spouse's impending death. At the time of their spouse's death, subsequently lonely widows experienced more negative emotions and felt less prepared to cope. Lonely widows were also less likely to engage in social comparison with widowed friends.
If a spouse dies unexpectedly, loneliness is especially pronounced. To overcome loneliness, widows typically turn to informal supports (e.g., friends, children, and siblings) as opposed to formal organizations or professionals (e.g., their church, psychotherapists). In widowhood as in other transitions, time heals: feelings of loneliness are greatest shortly after the loss of a spouse but decline over the months and years. As widows continue their lives, the quality of their closest friendship is more likely to be associated with their experiences of loneliness than is the quantity or quality of their closest kin relationships (Essex and Nam 1987).
In sum, the findings from contemporary social science research indicate that married individuals are less likely to be lonely. However, the picture is more complex than Russell's simple suggestion that love, at least as provided by marital and kin relations, provides a surefire escape from loneliness. At some ages and positions in life, kin relationships appear to be a less important aspect of the loneliness equation than friendships or other factors. Parents not only protect their children from loneliness but can also contribute to it. If siblings are close, they tend to be less lonely (Ponzetti and James 1997). Throughout adulthood, unsatisfying marriages and the endings of intimate relationships are associated with greater loneliness. Thus, it is not simply relationships but what happens in them that counts.
Bart, P. (1980). "Loneliness of the Long-Distance Mother." In The Anatomy of Loneliness, ed. J. Hartog, J. R. Audy, and Y. A. Cohen. New York: International Universities Press.
Ernst, J. M., and Cacioppo, J. T. (1999). "Lonely Hearts: Psychological Perspectives on Loneliness." Applied and Preventive Psychology 8:1–22.
Essex, M. J., and Nam, S. (1987). "Marital Status and Loneliness among Older Women: The Differential Importance of Close Family and Friends." Journal of Marriage and the Family 49:93–106.
Hansson, R. O.; Jones, W. H.; Carpenter, B. N.; and Remondet, J. H. (1986). "Loneliness and Adjustment to Old Age." International Journal of Aging and Human Development 24:41–53.
Hill, C. T.; Rubin, Z.; and Peplau, L. A. (1976). "Breakups before Marriage: The End of 103 Affairs." Journal of Social Issues 32:147–168.
Koropeckyj-Cox, T. (1998). "Loneliness and Depression in Middle and Old Age: Are the Childless More Vulnerable?" Journal of Gerontology: Series B: Psychological Sciences and Social Sciences 53B:S303–S312.
Lobdell, J., and Perlman, D. (1986). "The Intergenerational Transmission of Loneliness: A Study of College Females and Their Parents." Journal of Marriage and the Family 48:589–595.
Long, M. V., and Martin, P. (2000). "Personality, Relationship Closeness, and Loneliness of Oldest Old Adults and Their Children." Journal of Gerontology: Series B: Psychological Sciences and Social Sciences 55B:P311–P319.
Lopata, H. Z. (1969). "Loneliness: Forms and Components." Social Problems 17:248–261.
McGuire, S., and Clifford, J. (2000). "Genetic and Environmental Contributions to Loneliness in Children." Psychological Science 11:487–491.
Peplau, L. A., and Perlman, D., eds. (1982). Loneliness: A Sourcebook of Current Theory, Research, and Therapy. New York: Wiley.
Perlman, D. (1988). "Loneliness: A Life Span, Developmental Perspective." In Families and Social Networks, ed. R. M. Milardo. Newbury Park, CA: Sage.
Pinquart, M., and Soerensen, S. (2001). "Influences on Loneliness in Older Adults: A Meta-Analysis." Basic and Applied Social Psychology 23:245–266.
Ponzetti, J. J., Jr., and James, C. (1997). "Loneliness and Sibling Relationships." Journal of Social Behavior and Personality 12:103–112.
Russell, B. (1929). Marriage and Morals. New York: Liveright.
Stack, S. (1998). "Marriage, Family and Loneliness: A Cross-National Study." Sociological Perspectives 41:415–432.
Weiss, R. S. (1973). Loneliness: The Experience of Emotional and Social Isolation. Cambridge, MA: MIT Press.
Woodward, J. C.; Zabel, J.; and Decosta, C. (1980). "Loneliness and Divorce." Journal of Divorce 4:73–82.
"Loneliness." International Encyclopedia of Marriage and Family. Encyclopedia.com. (April 25, 2017). http://www.encyclopedia.com/reference/encyclopedias-almanacs-transcripts-and-maps/loneliness
Social scientists agree that loneliness stems from the subjective experience of deficiencies in social relationships and that these deficiencies are unpleasant, aversive, and exceptionally common. Objectively deficient social relationships (i.e., social isolation) do not necessarily correspond with feeling lonely. Thus, it is common to appear alone but not feel lonely, and to feel lonely within a seemingly rich social relationship network. This potential paradox highlights an important distinction between quantitative and qualitative aspects of social relationships.
Theoretical perspectives on loneliness differ concerning the nature of loneliness, whether loneliness stems from internal or situational causes, and where such causes occur developmentally. Psychoanalysts view loneliness as a pathological result of internal factors rooted in childhood. Sociologists view loneliness as a normative event stemming from societal influences that occur throughout development. From cognitive perspectives, loneliness occurs when people perceive discrepancies between their desired and actual patterns of relationships, with desired patterns stemming from previous relationships and comparisons of one’s own relationships to those of similar others (i.e., social comparison). Although different perspectives contribute uniquely to our understanding, the cognitive perspective serves as the dominant model for studying loneliness.
The measurement of loneliness depends inherently on theoretical conceptualizations of the construct. Unidimensional views of loneliness posit a common core of experience that varies in intensity regardless of the antecedents or causes of feeling lonely. Many unidimensional scales exist, but the UCLA Loneliness Scale is the most commonly used. This measure assesses loneliness via self-report without using the terms lonely or loneliness, thereby reducing social desirability influences. Extensive psychometric work indicates that it is a reliable and valid measure of loneliness.
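As a concrete illustration of how such a unidimensional self-report scale is typically scored, the sketch below totals item ratings, reverse-coding positively worded items so that a higher total always means greater loneliness. The item count, response range, and reverse-scored item numbers here are illustrative assumptions, not the published UCLA Loneliness Scale scoring key.

```python
def score_loneliness(responses, reverse_items):
    """Total a self-report loneliness scale.

    `responses` maps item number -> rating on a 1-4 scale ("never" to
    "often"). Items listed in `reverse_items` are positively worded
    (e.g., "I feel part of a group of friends") and are reverse-coded
    as 5 - rating, so a higher total always indicates more loneliness.
    """
    total = 0
    for item, rating in responses.items():
        if not 1 <= rating <= 4:
            raise ValueError(f"item {item}: rating {rating} out of range")
        total += (5 - rating) if item in reverse_items else rating
    return total

# Illustrative 5-item example; items 2 and 4 assumed positively worded.
answers = {1: 3, 2: 1, 3: 4, 4: 2, 5: 2}
score = score_loneliness(answers, reverse_items={2, 4})  # 3 + 4 + 4 + 3 + 2 = 16
```

Reverse-coding is also what lets the scale avoid the words "lonely" or "loneliness" while still summing to a single intensity score, consistent with the unidimensional view described above.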
Multidimensional measures involve assessments of perceived quality and quantity of social interaction across multiple domains such as romantic, friendship, family, and community relationships. Multidimensional views of loneliness have become more common in the research literature, spurring intensive psychometric work on these types of scales. Two multidimensional views of loneliness have received considerable research attention. The first breaks down loneliness into distinct components that reflect stable enduring traits or transitory states tied strongly to situation/context. Studies examining these components using specialized scales typically yield test-retest correlations that are higher (indicating greater stability) for measures assessing trait loneliness than for measures assessing state loneliness, suggesting that this distinction is valid. A second multidimensional view involves a distinction between social loneliness that results from a lack of relationships that provide a sense of belonging (e.g., friendships) and emotional loneliness that occurs when people lack relationships that foster deep connection or feelings of attachment (e.g., romantic relationships). Many scales designed to measure social and emotional loneliness exist, and research findings suggest that this distinction is also a valid one.
Studies examining associations between personality characteristics and loneliness consistently show that extroverted people report less loneliness, whereas highly neurotic people often feel lonely. Low self-esteem, shyness, and pessimism also correspond to higher levels of loneliness. It remains unclear whether these personality traits lead to loneliness by limiting social contact and preventing the formation and maintenance of quality relationships, whether feeling lonely biases self-assessment of personality, or whether personality predisposes one to develop few relationships, and the subsequent lack of relationships reciprocally influences one’s personality. There remains a need for continued research that delineates causal paths among personality characteristics and loneliness.
Most children understand that being alone does not necessarily mean one is lonely, and that people can feel lonely when they do not appear to be alone. Adolescents experience more loneliness than do other individuals due to necessary restructuring of social groups to include friendships and other social relationships outside of the family during transitions to elementary, junior high, and high school. Family environments also influence the development of relationships in that children surrounded by parental conflict exhibit social anxiety and avoidance that contribute to loneliness.
Young adults face many contextual events, such as moving out of the home or going away to college, that require the development of new social ties. During these transitions, interpersonal difficulties may hinder the development of new social relationships, leaving one feeling lonely. Shy people who maintained a few high-quality relationships during high school may suddenly feel very lonely away at college when their shyness interferes with opportunities to make new friends. Although these types of life transitions often foster feelings of temporary loneliness that subside over time for most people, some individuals remain chronically lonely, suggesting that interpersonal difficulties or personality characteristics contribute to feelings of loneliness across the life span.
During adulthood, contextual transitions including college graduations and the establishment of careers present challenges to existing social networks and the need to form new relationships. New obstacles include individualistic or competitive work environments that make the formation and maintenance of satisfying relationships difficult. As adults, people place less emphasis on friendships than on intimate or romantic relationships. Although intimate relationships provide protection from loneliness, relationship quality is vital as adults in strained or unsatisfying relationships often report feeling lonely.
Elderly individuals face a number of challenges to maintaining their social networks, including the loss of relationships with coworkers through retirement, reduced contact with adult children, and the deaths of spouses or friends. Decreased functional mobility, cognitive impairment, and physical illness strain existing relationships and impede the establishment of new relationships. Despite these challenges, the elderly are less lonely on average than are college students, and increases in loneliness occur only among individuals eighty years and older. Married men and women report less loneliness than do elderly widows and widowers and men and women who are divorced or never married. Spouses in elderly couples provide functional support in addition to companionship and emotional connection, suggesting multiple ways in which marriage serves to protect against feelings of loneliness. For those without spouses, friendships with similar others provide more protection against loneliness than do relationships with adult children and neighbors.
Consistent links between loneliness, life satisfaction, and anxiety exist, and loneliness is associated with depression independently of age, gender, physical health, cognitive impairment, network size, and social activity involvement.
In addition, loneliness influences well-being and feelings of hopelessness independently of associations with social isolation and perceived social support. Loneliness also relates to physical health, as evidenced by its consistent associations with alcohol abuse, admission of the elderly to nursing homes, suicide, and mortality.
Although loneliness uniquely influences physical health, potential causes for these connections have received varying degrees of support. One view posits that loneliness affects health through maladaptive behaviors including smoking, drinking, poor exercise habits, and substandard dietary practices; however, lonely and nonlonely people rarely differ in the frequency of such behaviors. Alternative models argue that loneliness influences physical reactions to stress including cardiovascular activation, cortisol production, immunocompetence deficiencies, and sleep disruptions that link directly to development of cardiovascular disease, susceptibility to disease and infection, and diminished restorative processes that maintain overall resilience. Emerging findings provide initial support for these links, suggesting promising avenues for future research.
Strategies for coping with loneliness include changing actual relationships, expectations about relationships, or reducing the importance of relationships. Attempts to change one’s social relationships are active coping strategies wherein feelings of loneliness motivate people to form new relationship ties. Changing expectations about social relationships involves cognitive restructuring of how people view the social relationships of others. Attempts to reduce the importance of social relationships or engage in diversionary activities are passive coping strategies that often do little to alleviate loneliness.
Researchers have begun to explore the success of intervention programs in reducing feelings of loneliness, and promising findings have emerged. Social skills training for children and young adults provide the tools needed to effectively initiate, develop, and maintain satisfying social relationships. College orientation, mentoring, and buddy-pairing programs provide social contact with similar others in hopes of fostering relationship development during a transitional period when loneliness is quite common. Finally, interventions that effectively reduce loneliness among older adults target specific groups (e.g., divorcées or widows) and provide social contact opportunities with similar others as well as information that is useful for maintaining established social relationships.
Brashears, Matthew E., Miller McPherson, and Lynn Smith-Lovin. 2006. Social Isolation in America: Changes in Core Discussion Networks over Two Decades. American Sociological Review 71 (June): 353–375.
Peplau, Letitia A., and Daniel Perlman, eds. 1982. Loneliness: A Sourcebook of Current Theory, Research, and Therapy. New York: Wiley-Interscience.
Russell, Daniel W. 1996. The UCLA Loneliness Scale (Version 3): Reliability, Validity, and Factor Structure. Journal of Personality Assessment 66 (1): 20–40.
Russell, Daniel W., Carolyn E. Cutrona, Arlene de la Mora, and Robert B. Wallace. 1997. Loneliness and Nursing Home Admission among Rural Older Adults. Psychology and Aging 12 (4): 574–589.
Steptoe, Andrew, Natalie Owen, Sabine R. Kunz-Ebrecht, and Lena Brydon. 2004. Loneliness and Neuroendocrine, Cardiovascular, and Inflammatory Stress Responses in Middle-Aged Men and Women. Psychoneuroendocrinology 29: 593–611.
Weiss, Robert S. 1973. Loneliness: The Experience of Emotional and Social Isolation. Cambridge, MA: MIT Press.
W. Todd Abraham
Daniel W. Russell
"Loneliness." International Encyclopedia of the Social Sciences. Encyclopedia.com. (April 25, 2017). http://www.encyclopedia.com/social-sciences/applied-and-social-sciences-magazines/loneliness
Definition and theory
Loneliness is an affective emotional condition experienced when a person feels apart from familiar social supports. It is a psychosocial condition that is differentially experienced within different cultures, and most research on it has been conducted in the United States and Europe. Among those not in institutions, studies show that fewer than 20 percent of older persons experience loneliness. There is some evidence to suggest that loneliness increases with the increasing totality of institutionalization. Over the life span, loneliness seems to vary curvilinearly by age: it is highest among adolescents, declines into late middle age, then increases again with advancing older age.
Theoretical conceptualizations of loneliness can be categorized as: 1) the social needs approach, with foundations in the social developmental approaches and the social support perspectives; 2) the behavioral-personality approach; and 3) the cognitive processes approach. Common to these approaches are three points. Loneliness is: 1) a subjective emotional experience that may be unrelated to actual social isolation, that is, aloneness; 2) an aversive psychological condition; and 3) caused by some form of social relationship deficit.
Measurement instruments to assess loneliness include scales developed by Russell, Peplau, and Cutrona (1980), van Tilburg and de Jong Gierveld (1999), and Vincenzi and Grabosky (1989).
The loneliness related to emotional isolation results from the absence of a person with whom one is emotionally connected. The loneliness experienced is a psychological state characterized by feelings of loss, distress, separation, and isolation. Loneliness resulting from social isolation is related to a person's perceived isolation from those around him or her. The emotional condition of loneliness in this regard is influenced by a deficit in the quantity of relationships, and/or the lack of relatedness to the social environment.
Loneliness and selected factors
Background issues related to loneliness include the following:
- Gender. Gender is a more consistent predictor of loneliness than is age. Studies show either that gender has no effect on loneliness or that women are lonelier than men.
- Race and ethnicity. Race and ethnicity have not been systematically examined with regard to loneliness. Cross-racial, or cross-ethnic, comparisons of loneliness and its antecedents have not yet been conducted in a manner that lends any clarity to interpretation.
- Urban/rural residence. It is commonly held that urban elders are more lonely and isolated than their rural counterparts, though research has not consistently confirmed this stereotype.
- Health. The overall weight of the evidence points to a reasonably strong and consistent association between poorer physical and/or mental health, and greater loneliness.
Interpersonal relationships also factor into an individual's potential loneliness.
- Spouse. Results indicate greater loneliness in the absence of a mate. Severe loneliness appears to be unusual among married men, somewhat more prevalent among married women, and quite prevalent among unmarried individuals of either sex.
- Children. Studies of the relationship between adult children and loneliness show conflicting results. Most have found no association between frequency of contact with children and loneliness. The commitment in the relationship seems to be more important than the actual contact.
- Friends. Research shows that close friends exert a positive influence on the emotional well-being of older persons. Older persons who have contact with their friends, and especially those who are satisfied with these relationships, are less lonely.
Both essential aspects of loneliness, that is, the loneliness associated with social isolation and the loneliness associated with emotional isolation, can be experienced as an affective emotional state in which a person feels apart from other persons and from familiar support networks. In turn, this can lead to a realization that social contacts are diminishing, lacking, or not at a level that is emotionally supportive or satisfying.
Larry C. Mullins
See also Depression; Social Support.
Andersson, L. "A Model of Estrangement—Including a Theoretical Understanding of Loneliness." Psychological Reports 58 (1986): 683.
Hall-Elston, C., and Mullins, L. "Social Relationships, Emotional Closeness and Loneliness among Older Meal Participants." Social Behavior and Personality 27 (1999): 503.
Johnson, D., and Mullins, L. "Growing Old and Lonely in Different Societies: Toward a Comparative Perspective." Journal of Cross-Cultural Gerontology 1 (1987): 257.
Marangoni, C., and Ickes, W. "Loneliness: A Theoretical Review with Implications for Measurement." Journal of Social Psychology 116 (1989): 269.
Peplau, L., and Perlman, D., ed. Loneliness: A Sourcebook of Current Theory, Research and Therapy. New York: Wiley, 1982.
Russell, D.; Peplau, L.; and Cutrona, C. "The Revised UCLA Loneliness Scale: Concurrent and Discriminant Validity Evidence." Journal of Personality and Social Psychology 39 (1980): 472.
Van Tilburg, T., and de Jong Gierveld, J. "Cesuurbepaling van de Eenzaamheidsschaal [Cutting Scores on the De Jong Gierveld Loneliness Scale]." Tijdschrift voor Gerontologie en Geriatrie 30 (1999): 158.
Vincenzi, H., and Grabosky, F. "Measuring the Emotional/Social Aspects of Loneliness and Isolation." In Loneliness: Theory, Research, and Applications. Edited by M. Hojat and R. Crandall. Newbury Park, Calif.: Sage, 1989. Pages 257–270.
Weiss, R. Loneliness: The Experience of Emotional and Social Isolation. Cambridge, Mass.: MIT Press, 1973.
"Loneliness." Encyclopedia of Aging. Encyclopedia.com. (April 25, 2017). http://www.encyclopedia.com/education/encyclopedias-almanacs-transcripts-and-maps/loneliness
Sensors 2014, 14(4), 6910-6921; doi:10.3390/s140406910
Abstract: This paper reports research on novel thin films with a periodical microstructure: optical elements that combine piezoelectric and surface plasmon resonance effects. The results show that incorporating Ag nanoparticles into these piezoelectric-plasmonic elements shifts the dominant peak in the visible light spectrum. This optical window is essential in the design of optical elements for sensing systems. The novel optical elements are tunable: under a defined bias they change their main grating parameters (depth and width), which in turn alters the distribution of diffraction efficiencies. Such elements open new avenues in the design of more sensitive and multifunctional microdevices.
Recently, a main challenge for researchers has been the design and manufacture of more efficient, reliable and accurate low-cost sensing devices. Surface Plasmon Resonance (SPR) of noble metal (such as gold, silver, etc.) nanoparticles has led to the development of many sensors with unique properties in the past few decades. The advantages of these plasmonic materials stem largely from the distinctive optical properties of the immobilized nanoparticles [1,2]. The immobilization of metal nanoparticles in a polymer matrix gives the sensing element processability and transparency. By combining metal nanoparticles with polymers, it is possible to obtain exactly the right combination of electronic and optical properties, which can be useful in solving problems such as lack of sensitivity, accuracy, calibration, background signal, hysteresis, long-term stability, dynamic response, and biocompatibility . Past research has confirmed that novel nanomaterials with a high specific surface area (surface-to-volume ratio) or high aspect ratio (i.e., length-to-diameter ratio) can lead to significant improvements in sensitivity and response time, owing to enhanced electron transport . Nanotubes and nanofibers are among the novel structures used to amplify the sensing signal in immunosensors for medical purposes , in acoustic and optical sensors for the detection of various solvents , in novel optical pressure sensors for tactile robotic systems , in imaging systems , etc. Such sensors have had a huge impact in the food industry , bio-industry , medicine , environmental control , etc., because of their capability to give a continuous, reversible, selective and fast response to the presence of a specific compound in a complex mixture of components, without perturbing the system.
The essence of piezoelectric sensors and biosensors is the piezoelectric effect. Piezoelectric sensors are mainly used in biosensing applications because of their simple construction and robust performance . However, the small size and fragile nature of some piezoelectric materials still leads to very complicated sensing element designs. Single materials can sometimes offer good spatial resolution, but because of their brittleness the resulting sensing elements, with their rigid substrates, complex wiring or fragile parts, are not reliable enough for robotics in medicine, the military, automation, etc. For example, one of the major drawbacks of incorporating high-aspect-ratio nanomaterials, such as carbon nanotubes, into polymeric matrices is the increase in viscosity. To overcome these shortcomings, composite materials on the nanometric scale have been proposed by many researchers. In this paper, a combination of specific materials (e.g., polyvinylidene fluoride (PVDF), lead zirconate titanate (PZT), barium titanate (BaTiO3), poly(methyl methacrylate) (PMMA), etc.) and their properties suggests new ways for sensing devices to avoid limitations of fragility, size, form, sensitivity, reliability and response speed. Such films are more flexible and lightweight, with low acoustic impedance and a large working area compared to ceramics. Thus, the selection of suitable composite materials is one of the main goals in the design of sensing elements for quantitative and qualitative analysis.
An underexplored aspect of today's research on novel materials and elements is the possibility of surface modification in sensors for advanced applications. This paper discusses results obtained by implementing piezoelectric materials and a periodical microstructure exhibiting SPR effects in a single novel deformable material: a tunable optical element. Electrical tuning has great advantages compared to simple methods such as tuning by cooling, heating or external pressure. The designed elements are rather simple to integrate in small microsystems and are low-cost and flexible. The materials and technological processes were chosen to be cost-effective (using low-cost microfabrication methods on a single substrate) so that the designed sensing system is economically competitive with conventional methods. The combination of specific properties in one element can significantly influence the results of analyte studies and of medical, biomedical, optical or environmental applications. The tunable optical elements can be miniaturized, which is essential when designing, e.g., communication applications (where beam switching is needed), displays with small pixels, monochromatic light sources or laboratory equipment for the investigation of biological systems (when exposed to microstructure shape changes).
2. Experimental Section
2.1. Chemicals and Reagents
5% polyvinylidene fluoride (PVDF, average MW = 34,000) and barium titanate (BaTiO3) nanoparticles were taken in defined compositional ratios and mixed with 5% PMMA (average MW = 15,000). Silver nitrate (AgNO3, analytical reagent) from Sigma Aldrich (Dorset, UK) was incorporated into the PVDF-based mixture. All reagents were of analytical grade and used as received, without further purification. Deionized water was prepared with a Millipore water purification system. Conductive silver paste was used for the formation of electrodes.
2.2. Formation of Elements
Thin films were produced on pre-treated glass substrates. The substrates were sonicated for 10 min in acetone, chemically etched in a warm chromic acid solution (K2Cr2O7 + H2SO4 + H2O) for 10 min, and dried in an air stream. A spin-coating technique was used for processing the thin film layers by means of a Dynapert Precima (Colchester, UK) centrifuge. Films of even thickness were obtained at a spin speed of 2,000 rpm with the spinning duration adjusted to 30 s. After spin coating, the films were dried in an oven at 90 °C for 10 min and then exposed to UV light from a 1.2 kV Hg lamp (Philips, Aachen, Germany; 230–320 nm).
Periodical microstructures were fabricated on the thin films by a low-cost, high-throughput replication technology: hot embossing. This process allows simultaneous, single-step formation of gratings on various microstructures with high accuracy. The thin film layer and the master grating were placed together, fixed in a metallic mandrel between plates, and put in a furnace at 110 °C for 12 min (see Table 1).
The hot embossing procedure in polymers was discussed in previous reports .
2.3. Design Concept
The designed elements combine the properties of piezoelectricity and SPR. They are rather complicated in structure but simple in design (Figure 1). The development of the novel elements includes five stages: (a) synthesis of biologically compatible materials into high-aspect-ratio thin films; (b) formation of an electrode on the substrate; (c) formation of the thin films; (d) formation of the gratings; and (e) formation of the electrode on top of the thin film. Thus, a layer of piezoelectric material is formed between an upper and a lower electrode. When a voltage is applied between the electrodes, they are attracted to each other by Coulomb forces. The periodical microstructure is implemented in the material such that the deformations occurring under applied voltage influence the periodic structure and the diffraction angle of the grating. Two different types of tunable optical piezoelectric-plasmonic elements (PVDF-PMMA-Ag and PVDF-PMMA-BaTiO3-Ag) were designed, and the corresponding investigation results are presented below.
2.4. Analytical Equipment
The influence of silver nanoparticles on the optical properties of the PVDF-based solutions was confirmed by UV-VIS spectroscopy (UV/VIS/NIR AvaSpec-2048, Surrey, UK). The optical absorbance of the novel elements was measured in the 200–700 nm wavelength range (measurement accuracy 0.2%).
The morphology and elastic properties of novel thin films were investigated by atomic force microscopy (AFM) using a NT-206 instrument (Microtestmachines Co., Minsk, Belarus). A NSC11/15 V type non-contact silicon cantilever probe (Micromasch Inc., Wetzlar, Germany, force constant 3 N/m, resonant frequency 60 kHz) was used for measurements. Image processing and analysis of scanning probe microscopy data was performed using a Windows-based program Surface View 2.0. Statistical evaluation of the surface morphology was performed for a scanning area of 12 μm × 12 μm.
A PMT-3 (OKB Spectr, Saint Petersburg, Russia) microhardness tester was used for evaluation of the microhardness of the formed thin films. It consists of a standard tester with integrated flat-field optics and, as the main measuring tool, a diamond stylus probe. The substrate starts to contribute to the measured hardness at penetration depths of the order of 0.07–0.20 times the coating thickness; five to seven loads ranging from 0.049 to 0.392 N were used. The indentation period was 15 s; 10 to 15 indentations were taken for each load. The coefficient of variation was not higher than ±8%. The diagonal of the Vickers diamond pyramidal indent was kept parallel to the prime flat for both substrate orientations. The thickness of the thin films on the substrates was at least 10 times larger than the indentation depth, so the influence of the substrate material was avoided. The average diagonal of the imprinted rhombus, d (in m), was calculated from independent measurements made for every load, P (in N). The Vickers microhardness, HV (in GPa), was calculated using Formula (1):
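Formula (1) itself did not survive in the text. For reference, the standard Vickers relation consistent with the quantities defined above (load P and mean indent diagonal d) is:

```latex
HV = 1.8544\,\frac{P}{d^{2}}
```

The constant 1.8544 = 2·sin(136°/2) follows from the geometry of the Vickers pyramid; with P in N and d in m the result is in Pa and is divided by 10⁹ to give GPa.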
3. Results and Discussion
PVDF polymer and BaTiO3 nanoparticles have been studied extensively, mainly in relation to their piezoelectric properties. A significant problem in the synthesis of novel elements from piezoelectric materials is their miscibility with the amorphous phase, their crystallization, and the molecular origin of the interactions. To overcome this limitation, the piezoelectric materials were synthesized with the polymer PMMA. This is an original way to force PVDF to crystallize in the piezoelectric phase. The results also proved that no poling is needed for the thin films when PMMA is introduced, i.e., the piezoelectric elementary cells have a uniform orientation.
3.1. Evaluation of Optical Properties
The optical properties of PVDF-PMMA-Ag and PVDF-PMMA-BaTiO3-Ag were confirmed by UV-VIS. For better evaluation of how silver nanoparticles influence the optical response, the novel thin films with Ag were compared with thin films fabricated under the same conditions but without Ag. Results are given in Figure 2. Here, the sharp absorption edge of pure PVDF was observed at a wavelength of 240 nm with an absorbance of 2.1 a.u.; this peak indicates the semicrystalline nature of PVDF. For PMMA synthesized with PVDF, a shift of the band edge toward higher wavelengths, with different absorption intensity, was observed at 290–320 nm with an absorbance of 2.3 a.u. Adding BaTiO3 to PVDF-PMMA leads to broadening and shifting of the peaks, i.e., two dominant peaks are observed at 290 nm and 380 nm with absorbances of 2.1 a.u. and 1.25 a.u., respectively. Significant changes occurred when PVDF-PMMA and PVDF-PMMA-BaTiO3 were synthesized with silver nanoparticles: sharp dominant SPR peaks were observed at 372 nm (absorbance 3.6 a.u.) for PVDF-PMMA-Ag and at 424 nm (absorbance 3.2 a.u.) for PVDF-PMMA-BaTiO3-Ag. The dominant peaks indicate the formation of inter-/intra-chain interactions between PMMA and PVDF. Also, the shift of the absorption edge in the PVDF-based films reflects the variation of the optical energy band gap.
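For orientation, absorbance relates to transmitted intensity through A = log10(I0/I), assuming the spectrometer reports decadic absorbance (the usual convention, though not stated in the paper). The reported peak values therefore correspond to very strong attenuation:

```python
# Convert reported absorbance values (a.u.) to transmitted intensity fractions,
# assuming decadic absorbance: A = log10(I0/I)  ->  T = I/I0 = 10**(-A)
for A in (2.1, 3.2, 3.6):
    T = 10 ** (-A)  # fraction of light transmitted at the peak wavelength
    print(f"A = {A}: transmitted fraction = {T:.4%}")
```

An absorbance of 3.6 a.u. thus means only about 0.025% of the incident light passes through, which is why these peaks dominate the spectra so clearly.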
Incorporation of Ag nanoparticles into the piezoelectric films shifts the dominant peak in the visible light spectrum. The dominant peaks at 372 nm and 424 nm in Figure 2 are defined as SPR peaks and are attributed to silver nanoparticle agglomeration, i.e., intensive formation of small silver particles in the polymer matrix . This optical window is essential for the design of active optical elements for sensing systems, because for most of them the working range is in the visible light spectrum from 400 to 700 nm. PVDF blends with Ag exhibit a well-defined window in the 372–424 nm wavelength range. The diameter of the silver nanoparticles was approximately 50–60 nm.
3.2. Evaluation of Surface Morphology and Microhardness
Surface morphology and the elastic behavior of the novel sensing elements with Ag were analyzed by atomic force microscopy. 3-D views of the surface morphology and of the grating on the novel PVDF-PMMA-Ag element are shown in Figure 3a,b. On its surface, very small islands, i.e., crystalline structures, are observed, with a surface roughness of 0.8 nm and irregularities of average depth 3 nm and width 0.6 nm. These favorable surface parameters allow a well-defined, regular periodical microstructure to be imprinted onto the surface of the PVDF-PMMA-Ag thin film (Figure 3b): the grating's average width was 4 μm and its average depth 0.769 μm. The element formed from PVDF-PMMA-BaTiO3-Ag has a rougher surface of about 8.1 nm, i.e., the BaTiO3 nanoparticles reduced the smoothness of the element (irregularities of average depth 76 nm and width 37 nm were observed) (Figure 3c). Consequently, the periodical microstructure imprinted onto the surface of this thin film was less similar to the master grating, with an average depth of 0.73 μm and a width of 3.9 μm (Figure 3d).
Previous research proved that the addition of PMMA to PVDF-based thin films significantly reduces the surface roughness of the composite material, and the thin film becomes more elastic. The surface elastic behavior of the novel elements was evaluated from load–distance curves (AFM measurements). The contact between tip and thin film surface is elastic, and no material transfer occurs at these loads, since the adhesion forces measured with increasing external loads remain constant. Moreover, the degree of deformation of the roughness structures influences the adhesion at the contact. This deformation is dominated by the attractive portion of the forces between the film surface atoms and the contact tip. Thus, adhesion is one of the basic mechanisms of friction and also influences the deformation of the thin films when periodical microstructures are imprinted. The measurement data are given in Table 2.
Microhardness measurements were performed with the PMT-3 microhardness tester, using five loads ranging from 0.049 to 0.392 N. The indentation period was 15 s; 10 to 15 indentations were taken for each load. The Vickers and absolute microhardness of the elements were evaluated. PVDF-PMMA-Ag has a rather elastic surface, with a Vickers microhardness of 0.647 ± 0.0026 GPa; the BaTiO3 nanoparticles increase the microhardness of the PVDF-PMMA-BaTiO3-Ag element by about 48% (Table 2). The PSR model was used to analyze the variation of microhardness with load, i.e., to calculate the load-independent value, the absolute microhardness HA, according to Formula (2). For the PVDF-PMMA-Ag element the absolute microhardness was 1.027 GPa, and for the PVDF-PMMA-BaTiO3-Ag element it was almost three times larger (about 3.143 GPa). Thus, the addition of BaTiO3 nanoparticles reduces the elasticity but at the same time increases the microhardness of the elements' surface. The evaluation of mechanical properties is significant for many phenomena beyond tribology, such as coating performance, wettability, and micro/nanotechnology.
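Formula (2) is likewise not reproduced in the text. A common form of the proportional specimen resistance (PSR) model writes the applied load as P = a1·d + a2·d², with the load-independent (absolute) hardness obtained from the quadratic coefficient a2. The sketch below uses synthetic, hypothetical data, not the authors' measurements:

```python
import numpy as np

# Synthetic (hypothetical) PSR data generated from P = a1*d + a2*d**2
a1_true, a2_true = 1.0, 4000.0                       # N/mm and N/mm^2, illustrative only
d = np.array([0.006, 0.009, 0.012, 0.015, 0.018])    # indent diagonals, mm
P = a1_true * d + a2_true * d**2                     # applied loads, N

# PSR model rearranged: P/d = a1 + a2*d, so a linear fit of P/d vs d
# yields the surface term a1 (slope-independent) and volume term a2
a2_fit, a1_fit = np.polyfit(d, P / d, 1)

# Absolute hardness estimate from a2 (1.8544 is the Vickers geometry factor);
# with these units a2 is in N/mm^2 = MPa, so divide by 1000 for GPa
H_A = 1.8544 * a2_fit / 1000.0
print(round(a1_fit, 3), round(a2_fit, 1), round(H_A, 4))
```

Because the synthetic data are exact, the fit recovers a1 and a2 precisely; with real indentation data the fit quality indicates how well the PSR model describes the load dependence.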
3.3. Evaluation of Piezoelectric Properties and Diffraction Efficiencies
There are a few ways to determine the piezoelectric effect and its magnitude in a piezoelectric material. One way is to observe the converse piezoelectric effect by applying a certain voltage to the material. In this paper, the indirect piezoelectric effect of the thin films was observed with the help of a vibrometer and a Pulse LabShop instrument. The drawback of this method is that observing the change in grating deformation is quite difficult: the deformations are very small, and special high-accuracy measuring devices have to be used. The amplitude–frequency dependence of the PVDF-PMMA-Ag and PVDF-PMMA-BaTiO3-Ag elements is given in Figure 4.
When an AC voltage of 130 mV (Figure 4) is applied to the piezoelectric thin film and the frequency is varied, it can be seen that there is a specific frequency at which the film starts to vibrate at higher amplitudes. This frequency, the so-called resonant frequency, was 40 Hz with a displacement of 607 nm for the PVDF-PMMA-Ag piezoelectric-plasmonic film, and 44 Hz with a displacement of 1,235 nm for PVDF-PMMA-BaTiO3-Ag. When too high a voltage is applied (higher than 310 mV), no response is registered and the element stops oscillating, i.e., the piezoelectric effect disappears. The designed elements are low-frequency elements, and this feature allows them to be integrated in systems where energy can be harvested from motion, i.e., wireless sensor networks or human monitoring devices where energy is harvested from human motion. The power density of such devices is limited to a frequency range below 50 Hz [18,19], and the design of elements resonating in this frequency range with a large bandwidth is a step toward novel solutions for low-frequency monitoring devices.
Further, the designed novel elements were investigated by applying a defined bias and observing the changes of the gratings' geometry parameters: grating width and depth. Using AFM, the changes of the microstructure relief with applied voltage were evaluated at each step and are given in Figure 5.
The obtained results showed that applying a voltage from 0 to 3 V gives only very small changes in the grating geometries of the elements: ∼12 nm in grating depth and ∼0.05 μm in grating width. Increasing the voltage up to 10 V, the grating width varies by ∼30 nm and the depth by ∼0.1 μm in both optical elements. Descriptive statistics of the experimental results (Table 3) show that PVDF-PMMA-Ag has a higher standard deviation, indicating that its measured grating depth and width are spread over a larger range of values than those of PVDF-PMMA-BaTiO3-Ag. The element also shows a rather large positive linear association of grating width and depth with the applied bias.
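The Table 3 statistics (sample standard deviation, variance, and the R-squared of a linear fit against bias) can be reproduced from raw measurements in a few lines. The data below are hypothetical placeholders, not the authors' measurements:

```python
import numpy as np

# Hypothetical grating-width measurements (μm) at biases 0-20 V (N = 8)
bias  = np.array([0.0, 3.0, 5.0, 8.0, 10.0, 13.0, 16.0, 20.0])
width = np.array([4.00, 4.02, 4.05, 4.10, 4.13, 4.20, 4.24, 4.31])

std = width.std(ddof=1)   # sample standard deviation
var = width.var(ddof=1)   # sample variance

# R-squared of a linear fit of width against applied bias
slope, intercept = np.polyfit(bias, width, 1)
pred = slope * bias + intercept
r2 = 1 - np.sum((width - pred) ** 2) / np.sum((width - width.mean()) ** 2)

print(round(std, 5), round(var, 5), round(r2, 3))
```

An R-squared close to 1 indicates a strong positive linear association between grating width and bias, which is how the "large positive linear assoc." entries in Table 3 are to be read.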
The results stated above prove that when a voltage is applied to the piezoelectric material, it stretches the grating, changing its periodicity. This also affects the diffraction angle of the grating. These changes were registered by the diffractometer. The distribution of diffraction efficiency was observed in the 0, ±1, ±2 and ±3 order maxima. The results showed that the designed optical elements are based on high-efficiency reflection-type gratings, i.e., the elements diffract the maximum amount of light into the first-order maxima and minimize the amount of light in the zero and higher orders. The efficiency characteristics were registered and are given in Figure 6.
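The link between grating period and diffraction angle follows from the grating equation d·sin θ = m·λ. The sketch below assumes a 633 nm probe wavelength (a common He-Ne laser line, not stated in the paper) and the ~4 μm period reported above:

```python
import math

WAVELENGTH_UM = 0.633  # assumed He-Ne probe wavelength in μm (not given in the paper)

def diffraction_angle(period_um, order):
    """Angle (degrees) of a given diffraction order, or None if the order is evanescent."""
    s = order * WAVELENGTH_UM / period_um  # sin(theta) from d*sin(theta) = m*lambda
    return math.degrees(math.asin(s)) if abs(s) <= 1 else None

# Compare an unstrained grating with a slightly stretched one
for period in (4.00, 4.10):
    angles = [diffraction_angle(period, m) for m in range(4)]
    print(period, [round(a, 3) for a in angles])
```

Stretching the grating from 4.00 to 4.10 μm shifts every non-zero order to a slightly smaller angle, which is the mechanism by which the applied bias redirects the diffracted beams.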
Measurements of the diffraction efficiencies of PVDF-PMMA-Ag show that the applied bias significantly changes the efficiency distribution among the diffraction maximum orders: the zero-order efficiency may change from 15% to 30% depending on whether the applied bias is 5 V or 10 V, and the first-order efficiencies by about 8%–11%. The diffraction efficiencies of the PVDF-PMMA-BaTiO3-Ag element vary by about 10% in the zero order and about 7% in the first orders when the voltage varies from 5 to 10 V. These results show that the designed novel optical elements may be integrated in applications where control over the diffraction angle of a light beam is needed.
The results prove the relevance of novel elements based on the piezoelectric and surface plasmon resonance effects in a single chip. Simply put, the optical, mechanical, piezoelectric and surface properties become significantly important when dealing with composite materials used to construct smaller, faster, cheaper and more efficient optical elements, which can be routinely employed as active optical components in MEMS and MOEMS. Novel elements may be constructed for easy experimental monitoring of the element's reaction when the surface of the biosensor is in contact with a liquid analyte to be investigated. The integration of piezoelectric and optical techniques in a single element is an excellent tool for the in situ study of the kinetics and equilibrium constants of relevant surface processes in biomedicine, medicine, pharmacy, etc.
4. Conclusions

The research results with the designed novel elements showed that surface plasmon resonance in piezoelectric–polymeric materials is not a typical phenomenon, and that the implementation of metal (Ag) nanoparticles increases the absorbance at the specified wavelengths. These tunable elements can be miniaturized and integrated where beam switching is needed: in displays with small pixels, as monochromatic light sources, or in laboratory equipment for the investigation of biological systems. The low-frequency working range of the elements is essential when designing human monitoring devices, where frequency limitations apply. Moreover, it is possible to select appropriate elements based on the piezoelectric and SPR effects, working in the wavelength range between 372 and 424 nm, in order to increase the functionality and sensitivity of the overall system.
Results of optical elements with piezoelectric and SPR properties proved the relevance of voltage-driven periodical microstructures which change their geometrical parameters when the voltage is applied, i.e., the average grating depth can vary by approximately 30 nm and the average width of the grating by about 100 nm.
These investigations open the gates to more significant challenges in the design of microsystems—the combination of SPR with piezoelectric effects for identification techniques, pushing the sensitivity towards the single-molecule detection limit, and, most importantly, practical development of sensing elements for routine use in humans' everyday life.
Acknowledgments

This research work was funded by the EU Structural Funds project "Microsensors, Microactuators and Controllers for Mechatronics Systems" (Go-Smart) (No. VP1-3.1-ŠMM-08-K-01-015).
Author Contributions

Both authors contributed equally to the experiments related to the design of the novel tunable optical elements. Each author contributed to the investigations of surface morphology and of the optical, mechanical and piezoelectric properties. The general conclusions were reached jointly, and the main experimental results are explained in detail in this paper.
Conflicts of Interest
The authors declare no conflict of interest.
- Chou, K.S.; Lu, Y.C.; Lee, H.H. Effect of alkaline ion on the mechanism and kinetics of chemical reduction of silver. Mater. Chem. Phys. 2005, 94, 429–433. [Google Scholar]
- Jain, P.K.; Huang, X.; El-Sayed, I.H.; El-Sayed, M.A. Review of some interesting Surface Plasmon Resonance-enhanced properties of noble metal nanoparticles and their applications to biosystems. Plasmonics 2007, 2, 107–118. [Google Scholar]
- Justino, C.I.L.; Rocha-Santos, T.A.P.; Duarte, A.C. Review of analytical figures of merit of sensors and biosensors in clinical applications. Trends Anal. Chem. 2010, 29, 1172–1183. [Google Scholar]
- Pramanik, S.; Pingguan-Murphy, B.; Osman, N.A.A. Developments of immobilized surface modified piezoelectric crystal biosensors for advanced applications. Int. J. Electrochem. Sci. 2013, 8, 8863–8892. [Google Scholar]
- Sun, X.; Qiao, L.; Wang, X. A novel immunosensor based on Au nanoparticles and polyaniline/multiwall carbon nanotubes/chitosan nanocomposite film functionalized interface. Nano-Micro Lett. 2013, 5, 191–201. [Google Scholar]
- Penza, M.; Cassano, G.; Aversa, P.; Cusano, A.; Cutolo, A.; Giordano, M.; Nicolais, L. Carbon nanotube acoustic and optical sensors for volatile organic compound detection. Nanotechnology 2005, 16, 2536. [Google Scholar]
- Massaro, A.; Spano, F.; Lay-Ekuakille, A.; Cazzato, P.; Cingolani, R.; Athanassiou, A. Design and characterization of nanocomposite pressure sensor implemented in tactile robotic system. IEEE Trans. Instrum. Meas. 2011, 60, 2967–2975. [Google Scholar]
- Soloperto, G.; Conversano, F.; Greco, A.; Casciaro, E.; Ragusa, A.; Lay-Ekuakille, A.; Casciaro, S. Assessment of the enhancement potential of halloysite nanotubes for echographic imaging. Proceedings of the IEEE International Symposium on Medical Measurements and Applications Proceedings (MeMeA), Gatineau, QC, Canada, 4–5 May 2013; pp. 30–34.
- Acharya, G.; Chang, C.L.; Holland, D.P.; Thompson, D.H.; Savran, C.A. Rapid detection of S-adenosyl homocysteine using self-assembled optical diffraction gratings. Angew. Chem. Int. Ed. 2007, 46, 1–4. [Google Scholar]
- Ramanaviciene, A.; German, N.; Kausaite-Minkstimiene, A.; Voronovic, J.; Kirlyte, J.; Ramanavicius, A. Comparative study of Surface Plasmon Resonance, electrochemical and electroassisted chemiluminescence methods based immunosensor for the determination of antibodies against human growth hormone. Biosens. Bioelectron 2012, 36, 48–55. [Google Scholar]
- Aslan, K.; Lakowicz, R.; Geddes, C.D. Plasmon light scattering in biology and medicine: New sensing approaches, visions and perspectives. Curr. Opin. Chem. Biol. 2005, 9, 538–544. [Google Scholar]
- Snitka, V.; Bruzaite, I.; Lendraitis, V. Porphyrin nanotubes film for optical gas sensing. Microelectron. Eng. 2011, 88, 2459–2462. [Google Scholar]
- Abad, J.M.; Pariente, F.; Hernández, L.; Abruña, H.D.; Lorenzo, E. Determination of organophosphorus and carbamate pesticides using a piezoelectric biosensor. Anal. Chem. 1998, 70, 2848–2855. [Google Scholar]
- Guobiene, A. Formation and Analysis of Periodic Structures in Polymer Materials. Ph.D. Thesis, Kaunas University of Technology, Kaunas, Lithuania, June 2005. [Google Scholar]
- Lamovec, J.; Jovic, V.; Randjelovic, D.; Aleksic, R.; Radojevic, V. Analysis of the composite and film hardness of electrodeposited nickel coatings on different substrates. Thin Solid Films. 2008, 516, 8646–8654. [Google Scholar]
- Stevenson, M.E.; Bradt, R.C. Micron and sub-micron level hardness testing for failure analysis. J. Fail. Anal. Prev. 2001, 1, 37–42. [Google Scholar]
- Tudos, A.J.; Schasfoort, R.B.M. Introduction to Surface Plasmon Resonance. In Handbook of Surface Plasmon Resonance; Royal Society of Chemistry: Cambridge, UK, 2008; Chapter 1, pp. 1–14. [Google Scholar]
- Jiang, Y.; Shiono, S.; Hamada, H.; Fujita, T.; Higuchi, K.; Maenaka, K. Low-frequency energy harvesting using a laminated PVDF cantilever with a magnetic mass. Power MEMS 2010, 375–378. [Google Scholar]
- Markose, S.; Raja, S.R.P.; Jain, A.; Elias, B. Experimental study on dimension effect of PVDF film on energy harvesting. Int. J. Adv. Res. Electr. Electron. Instrum. Eng. 2013, 2, 270–278. [Google Scholar]
Table 1. Hot embossing parameters.

| Parameter | Value |
| --- | --- |
| Master grating periodicity (depth) | 700 nm |
| Master grating dimensions (length) | 2 mm |
| Grating lines | Parallel to the short edge |
| Pressure in metallic mandrel | 12 MPa |
Table 2. Microhardness of the elements.

| Element | Vickers microhardness, HV (Formula (1)) | Absolute microhardness, HA (Formula (2)) |
| --- | --- | --- |
| PVDF-PMMA-Ag | 0.647 ± 0.0026 GPa | 1.027 GPa |
| PVDF-PMMA-BaTiO3-Ag | 1.223 ± 0.0029 GPa | 3.143 GPa |
Table 3. Descriptive statistics of the grating parameters (N = 8 measurements, 0–20 V).

| Parameter | Element | Std. deviation | Variance | R-squared value |
| --- | --- | --- | --- | --- |
| Grating width, μm | PVDF-PMMA-Ag | 0.11370 | 0.013 | 0.676 (large positive linear assoc.) |
| Grating width, μm | PVDF-PMMA-BaTiO3-Ag | 0.08396 | 0.007 | 0.341 (small positive linear assoc.) |
| Grating depth, nm | PVDF-PMMA-Ag | 40.44021 | 1,635.411 | 0.766 (large positive linear assoc.) |
| Grating depth, nm | PVDF-PMMA-BaTiO3-Ag | 18.33030 | 336.000 | 0.743 (large positive linear assoc.) |
© 2014 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license ( http://creativecommons.org/licenses/by/3.0/). | <urn:uuid:e9666a4d-1c59-4855-851f-508fbe30dcaf> | CC-MAIN-2017-17 | http://mdpi.com/1424-8220/14/4/6910/htm | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917119637.34/warc/CC-MAIN-20170423031159-00542-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.887525 | 6,837 | 2.578125 | 3 |
In molecular biology and biotechnology, a fluorescent tag, also known as a label or probe, is a molecule that is attached chemically to aid in the labeling and detection of a biomolecule such as a protein, antibody, or amino acid. Generally, fluorescent tagging, or labeling, uses a reactive derivative of a fluorescent molecule known as a fluorophore. The fluorophore selectively binds to a specific region or functional group on the target molecule and can be attached chemically or biologically. Various labeling techniques such as enzymatic labeling, protein labeling, and genetic labeling are widely utilized. Ethidium bromide, fluorescein and green fluorescent protein are common tags. The most commonly labeled molecules are antibodies, proteins, amino acids and peptides, which are then used as specific probes for the detection of a particular target.
- 1 History
- 2 Methods for tracking biomolecules
- 3 Use of tags in fluorescent labeling
- 4 Recent advancements in cell imaging
- 5 Advantages
- 6 See also
- 7 Notes
- 8 External links
History

The development of methods to detect and identify biomolecules has been motivated by the goal of improving the study of molecular structure and interactions. Before fluorescent labeling technology existed, radioisotopes were used to detect and identify molecular compounds. Since then, safer methods have been developed that use fluorescent dyes or fluorescent proteins as tags or probes to label and identify biomolecules. Although fluorescent tagging has only recently been utilized in this way, fluorescence itself has been known for much longer.
Sir George Stokes formulated the Stokes law of fluorescence in 1852, which states that the wavelength of fluorescence emission is greater than that of the exciting radiation. Richard Meyer coined the term fluorophore in 1897 to describe a chemical group associated with fluorescence. Fluorescein had been created as a fluorescent dye by Adolph von Baeyer in 1871, and the method of staining was developed and utilized with the advent of fluorescence microscopy in 1911.
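Stokes' observation can be made quantitative: the shift between excitation and emission is usually reported either in nanometers or as an energy difference in wavenumbers. The excitation/emission values below are typical textbook figures for fluorescein in water and are used purely for illustration:

```python
# Typical (illustrative) fluorescein values: excitation ~494 nm, emission ~521 nm
ex_nm, em_nm = 494.0, 521.0

shift_nm = em_nm - ex_nm              # Stokes shift in wavelength
shift_cm = 1e7 / ex_nm - 1e7 / em_nm  # same shift as an energy gap in wavenumbers (cm^-1)

assert em_nm > ex_nm                  # Stokes' law: emission is red-shifted
print(f"Stokes shift: {shift_nm:.0f} nm ({shift_cm:.0f} cm^-1)")
```

The emitted photon carries less energy than the absorbed one; the difference is lost to vibrational relaxation before emission.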
Within the past century, ethidium bromide and its variants were developed in the 1950s, and in 1994 fluorescent proteins (FPs) were introduced as labels. Green fluorescent protein (GFP) was discovered by Osamu Shimomura in the 1960s and was developed as a tracer molecule by Douglas Prasher in 1987. FPs led to a breakthrough in live cell imaging, with the ability to selectively tag genetic protein regions and observe protein functions and mechanisms. For this breakthrough, Shimomura shared the 2008 Nobel Prize in Chemistry.
In recent years, new methods for tracking biomolecules have been developed including the use of colorimetric biosensors, photochromic compounds, biomaterials, and electrochemical sensors. Fluorescent labeling is also a common method in which applications have expanded to enzymatic labeling, chemical labeling, protein labeling, and genetic labeling.
Methods for tracking biomolecules
There are currently several labeling methods for tracking biomolecules. Some of the methods include the following.
Isotope markers

Isotope markers are most commonly used for proteins. In this case, amino acids with stable isotopes of carbon, nitrogen, or hydrogen are incorporated into polypeptide sequences. These polypeptides are then put through mass spectrometry. Because these isotopes produce an exact, defined mass change in the peptides, it is possible to tell from the spectrum which peptides contained the isotopes. By doing so, one can extract the protein of interest from several others in a group. Isotopic compounds also play an important role as photochromes, described below.
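The "exact defined change" is simple arithmetic: each substituted nitrogen, for example, shifts the peptide mass by the fixed difference between the 15N and 14N isotopic masses. The following sketch is illustrative only (the peptide and the minimal nitrogen-count table are hypothetical examples, not from the source):

```python
# Illustrative sketch: expected mass shift from uniform 15N labeling of a peptide.
# Per-residue nitrogen counts are the standard values (one backbone nitrogen per
# residue plus any side-chain nitrogens); the example peptide is made up.

N15_MINUS_N14 = 0.9970349  # mass difference (Da) between 15N and 14N

# Extra side-chain nitrogens beyond the one backbone nitrogen per residue
EXTRA_SIDECHAIN_N = {"R": 3, "K": 1, "H": 2, "N": 1, "Q": 1, "W": 1}

def nitrogen_count(peptide: str) -> int:
    """Total nitrogen atoms in a peptide (1 backbone N per residue + side chain)."""
    return sum(1 + EXTRA_SIDECHAIN_N.get(res, 0) for res in peptide)

def n15_mass_shift(peptide: str) -> float:
    """Mass shift (Da) if every nitrogen is replaced by 15N."""
    return nitrogen_count(peptide) * N15_MINUS_N14

if __name__ == "__main__":
    peptide = "ARK"  # hypothetical tryptic peptide: A has 1 N, R has 4, K has 2
    print(nitrogen_count(peptide))           # 7 nitrogens in total
    print(round(n15_mass_shift(peptide), 3))
```

In a spectrum, peaks separated by exactly this computed shift identify the labeled peptide among unlabeled ones.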
Colorimetric biosensors

Biosensors are attached to a substance of interest. Normally, this substance would not be able to absorb light, but with the attached biosensor, light can be absorbed and emitted on a spectrophotometer. Additionally, biosensors that are fluorescent can be viewed with the naked eye. Some fluorescent biosensors also have the ability to change color in changing environments (e.g. from blue to red). A researcher can then inspect and gather data about the surrounding environment based on the color visible from the biosensor-molecule hybrid species.
Colorimetric assays are normally used to determine the concentration of one species relative to another.
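Quantification in colorimetric assays typically rests on the Beer-Lambert law, A = εlc (absorbance equals molar absorptivity times path length times concentration). A minimal sketch, with entirely hypothetical absorbance and absorptivity values:

```python
# Minimal sketch of Beer-Lambert quantification: A = epsilon * l * c.
# The absorbance and molar absorptivity numbers below are hypothetical.

def concentration(absorbance: float, epsilon: float, path_cm: float = 1.0) -> float:
    """Concentration (mol/L) from absorbance, molar absorptivity (L/mol/cm),
    and cuvette path length (cm)."""
    return absorbance / (epsilon * path_cm)

def relative_concentration(a_sample: float, a_reference: float) -> float:
    """Concentration ratio of two species measured with the same chromophore
    and path length (epsilon and l cancel out)."""
    return a_sample / a_reference

if __name__ == "__main__":
    # Hypothetical dye with epsilon = 25,000 L/mol/cm in a 1 cm cuvette
    print(concentration(absorbance=0.50, epsilon=25000.0))  # 2e-05 mol/L
    print(relative_concentration(0.50, 0.25))               # 2.0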
Photochromic compounds

Photochromic compounds can switch between a range of colors. Their ability to display different colors lies in how they absorb light: different isomeric forms of the molecule absorb different wavelengths, so each isomeric species can display a different color based on its absorption. These include photoswitchable compounds, proteins that can switch from a non-fluorescent state to a fluorescent one in a given environment.
The most common organic molecule to be used as a photochrome is diarylethene. Other examples of photoswitchable proteins include PADRON-C, rs-FastLIME-s and bs-DRONPA-s, which can be used in plant and mammalian cells alike to watch cells move into different environments.
Biomaterials

Fluorescent biomaterials are a possible way of using external factors to observe a pathway more visibly. The method involves fluorescently labeling peptide molecules that alter an organism's natural pathway. When such a peptide is inserted into the organism's cell, it can induce a different reaction. This method can be used, for example, to treat a patient and then visibly see the treatment's outcome.
Electrochemical sensors

Electrochemical sensors can be used for label-free sensing of biomolecules. They detect changes and measure current between a probed metal electrode and an electrolyte containing the target analyte. A known potential is applied to the electrode via a feedback circuit, and the resulting current is measured. For example, one electrochemical sensing technique involves slowly raising the voltage so that chemical species at the electrode are oxidized or reduced. Cell current is plotted against voltage, which can ultimately identify the quantity of chemical species consumed or produced at the electrode. Fluorescent tags can be used in conjunction with electrochemical sensors for ease of detection in a biological system.
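The step from measured current to "quantity of chemical species consumed or produced" is Faraday's law: integrate the current over time to get charge (Q = ∫ i dt), then divide by zF, where z is electrons transferred per molecule and F is the Faraday constant. A hedged sketch with a synthetic current trace:

```python
# Sketch: moles of analyte converted at an electrode, from a current trace.
# Q = integral of i dt (rectangle rule over evenly spaced samples),
# n = Q / (z * F). The current samples below are synthetic, not real data.

FARADAY = 96485.0  # Faraday constant, C/mol

def charge(currents_amps, dt_seconds):
    """Total charge (C) from evenly spaced current samples, rectangle rule."""
    return sum(currents_amps) * dt_seconds

def moles_converted(currents_amps, dt_seconds, z=1):
    """Moles of analyte oxidized/reduced, for z electrons per molecule."""
    return charge(currents_amps, dt_seconds) / (z * FARADAY)

if __name__ == "__main__":
    trace = [0.001] * 10            # ten samples of 1 mA, 1 s apart
    print(charge(trace, 1.0))       # 0.01 C of charge passed
    print(moles_converted(trace, 1.0, z=1))
```

In practice the current would come from a potentiostat sweep rather than a fixed list, but the charge-to-moles arithmetic is the same.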
Fluorescent labels

Of the various methods of labeling biomolecules, fluorescent labels are advantageous in that they are highly sensitive even at low concentration and non-destructive to the folding and function of the target molecule.
Green fluorescent protein is a naturally occurring fluorescent protein from the jellyfish Aequorea victoria that is widely used to tag proteins of interest. GFP emits a photon in the green region of the light spectrum when excited by the absorption of light. The chromophore is an oxidized tripeptide, Ser65-Tyr66-Gly67, located within a β-barrel; GFP catalyzes this oxidation itself, requiring only molecular oxygen. GFP has been modified by changing the wavelength of light absorbed to produce other colors of fluorescence. YFP (yellow fluorescent protein), BFP (blue fluorescent protein) and CFP (cyan fluorescent protein) are examples of GFP variants, produced by genetic engineering of the GFP gene.
Synthetic fluorescent probes can also be used as fluorescent labels. Advantages of these labels include a smaller size with more variety in color. They can be used to tag proteins of interest more selectively by various methods including chemical recognition-based labeling, such as utilizing metal-chelating peptide tags, and biological recognition-based labeling utilizing enzymatic reactions. However, despite their wide array of excitation and emission wavelengths as well as better stability, synthetic probes tend to be toxic to the cell and so are not generally used in cell imaging studies.
Fluorescent labels can be hybridized to mRNA to help visualize interaction and activity, such as mRNA localization. An antisense strand labeled with the fluorescent probe is attached to a single mRNA strand, and can then be viewed during cell development to see the movement of mRNA within the cell.
A fluorogen is a ligand (fluorogenic ligand) that is not itself fluorescent, but becomes fluorescent when bound by a specific protein or RNA structure. For instance, Y-FAST is a variant of photoactive yellow protein that was engineered to bind chemical mimics of the GFP tripeptide chromophore. Likewise, the Spinach aptamer is an engineered RNA sequence that can bind GFP chromophore chemical mimics, thereby conferring conditional and reversible fluorescence on RNA molecules containing the sequence.
Fluorescent labeling is known for its non-destructive nature and high sensitivity. This has made it one of the most widely used methods for labeling and tracking biomolecules. Several techniques of fluorescent labeling can be utilized depending on the nature of the target.
In enzymatic labeling, a DNA construct is first formed using a gene of interest and the DNA of a fluorescent protein. After transcription, a fluorescently tagged hybrid RNA is formed. The object of interest is attached to an enzyme that can recognize this hybrid DNA. Usually fluorescein or biotin is used as the fluorophore.
Chemical labeling, or the use of chemical tags, utilizes the interaction between a small molecule and a specific genetic amino acid sequence. Chemical labeling is sometimes used as an alternative to GFP. Synthetic proteins that function as fluorescent probes are smaller than GFP and can therefore function as probes in a wider variety of situations. Moreover, they offer a wider range of colors and photochemical properties. With recent advancements in chemical labeling, chemical tags are preferred over fluorescent proteins due to the architectural and size limitations of the fluorescent protein's characteristic β-barrel; alterations to a fluorescent protein tend to lead to loss of its fluorescent properties.
Protein labeling uses a short tag to minimize disruption of protein folding and function. Transition metals are used to link specific residues in the tags to site-specific targets such as the N-termini, C-termini, or internal sites within the protein. Examples of tags used for protein labeling include biarsenical tags, Histidine tags, and FLAG tags.
Fluorescence in situ hybridization, or FISH, is an example of a genetic labeling technique that utilizes probes specific for chromosomal sites along the length of a chromosome, also known as chromosome painting. Multiple fluorescent dyes, each with a distinct excitation and emission wavelength, are bound to a probe, which is then hybridized to chromosomes. A fluorescence microscope can detect the dyes present and relay the information to a computer, which can reveal the karyotype of a cell. This technique allows abnormalities such as deletions and duplications to be revealed.
Recent advancements in cell imaging
In recent years, chemical tags have been tailored to advanced imaging technologies more than fluorescent proteins have, largely because chemical tags can localize photosensitizers closer to the target proteins. Proteins can then be labeled and detected with imaging techniques such as super-resolution microscopy, Ca2+ imaging, pH sensing, hydrogen peroxide detection, chromophore-assisted light inactivation, and multi-photon light microscopy. In vivo imaging studies in live animals have been performed for the first time with the use of a monomeric protein derived from a bacterial haloalkane dehalogenase, known as the HaloTag. The HaloTag covalently links to its ligand and allows for better expression of soluble proteins.
Although fluorescent dyes may not match the sensitivity of radioactive probes, they can show the real-time activity of molecules in action. Moreover, radiation exposure and the handling precautions it requires are no longer a concern.
With the development of fluorescent tagging, fluorescent microscopy has allowed the visualization of specific proteins in both fixed and live cell images. Localization of specific proteins has led to important concepts in cellular biology such as the functions of distinct groups of proteins in cellular membranes and organelles. In live cell imaging, fluorescent tags enable movements of proteins and their interactions to be monitored.
Latest advances in methods involving fluorescent tags have led to the visualization of mRNA and its localization within various organisms. Live cell imaging of RNA can be achieved by introducing synthesized RNA that is chemically coupled with a fluorescent tag into living cells by microinjection. This technique was used to show how the oskar mRNA in the Drosophila embryo localizes to the posterior region of the oocyte.
Communicating with palliative care patients nearing the end of life, their families and carers
Pharmacists are becoming increasingly involved in palliative care and can be a source of important information and support for patients at the end of life. Health professionals caring for patients with advanced illness should develop skills for communicating with patients, and their families and carers.
It can be difficult to initiate discussions with patients about nearing the end of life. Pharmacists and healthcare professionals require specialised communication skills to support patients, their families and carers during this time.
Effective communication with patients lies at the heart of good healthcare, is associated with increased levels of satisfaction and may improve clinical outcomes. Sensitive, honest and empathic communication may, in part, relieve the burden of difficult treatment decisions, and the physical and emotional complexities of death and dying, and lead to positive outcomes for people nearing the end of life (EoL) and their companions. By contrast, poor communication is associated with distress and complaints, and is aggravated by resource constraints and increasing work pressures owing to an ageing population with chronic conditions and complex care needs. An emerging body of empirical work that utilises conversation analysis details the unique challenges encountered by healthcare professionals when communicating with patients nearing the EoL. However, the precise nature of EoL interaction in pharmacy settings is unknown: a focus on traditional counselling interactions in hospital- or community-based pharmacy means that the explicit details of EoL discussions between pharmacists and their patients, relatives and carers have been largely undocumented.
This article focuses on the communication skills associated with the pharmacist’s extended role in palliative care, with specific reference to the skills involved in communicating with people who are approaching EoL, their families and carers. This is an increasingly relevant topic in light of pharmacists’ involvement in palliative care services both nationally and globally.
Role of the pharmacist in palliative care
The medical needs of palliative care patients and their carers are often complex and protracted. Pharmacists’ expert pharmaceutical knowledge and broad skill set — coupled with their widespread accessibility and trusted relationships with patients and carers — mean that they are well suited to better integration within the palliative care team. The pharmacist’s role is extending beyond the more traditional elements of dispensing activity and management of minor ailments towards the provision of greater psychosocial support for patients. This creates opportunities for further interaction, not only with patients, but also with loved ones, relatives and carers.
Pharmacists should have a clear understanding of the unique challenges associated with palliative care-related interactions, such as ongoing assessment, clarification of clinical options where there is uncertainty, pharmacotherapeutic monitoring of the patient’s condition, as well as offering advice and support for patients, relatives and carers. Pharmacists need to be mindful that this is an emotionally difficult time for patients, relatives and carers which, for the latter two groups, may extend beyond the patient’s death and into the bereavement period. Pharmacists must, therefore, adapt their communication skills to demonstrate compassion and empathy. Moreover, they will need to work closely with the multi-disciplinary team, including the palliative care teams in outpatient, inpatient and hospice services, to ensure that patient care is not delivered in therapeutic silos, but is integrated throughout.
How pharmacists should communicate and have open discussions with the patient, carers and their families
EoL is seen as the last year of a person’s life with an advanced, progressive illness or, more likely, a number of comorbidities. The government’s EoL strategy sets out its recommendations governing the six key elements that characterise the EoL pathway (see ‘Figure 1: Department of Health: End of life (EoL) care pathway’).
Figure 1: Department of Health: End of Life (EoL) care pathway
The six key elements of an EoL pathway. It is important that support for carers and families, information for patients and carers, and spiritual care services are provided alongside these steps.
The point at which discussions around EoL care are undertaken varies from person to person, but open and honest communication that is sensitive to the situation, commences early and continues through the patient’s journey, is crucial. Consistent with the key skills that are fundamental to effective communication in any clinical encounter, the aims of such discussions include:
- Eliciting the patient’s level of understanding, main problem(s) or concern(s) about their medicines (especially those that are anticipatory) and any impact (physical, emotional or social) that these are having on the patient;
- Determining how much information the patient wishes to receive and providing this to ensure medicine optimisation;
- Ascertaining whether the patient would like more support to engage in conversations about medicines or EoL care with other family members or carers.
An e-learning programme produced by Health Education England for End of Life Care (e-ELCA) includes sessions focusing on communication skills development. Clinical guidance further elaborates on the skills required, including the importance of an individualised, patient-centred approach in which patients are treated respectfully, with kindness, dignity, compassion and understanding, and in which their needs, fears or concerns about EoL are listened to and discussed in a sensitive, non-judgmental manner.
Through timely identification of patients who are at risk of deteriorating and dying, and in the last year of life, healthcare professionals can tailor information and provide opportunities for patients to consider their choices for care. Patients want choices relating to seven key themes (see ‘Box 1: Patients’ preferences for choices at the end of life’).
Box 1: Patients’ preferences for choices at the end of life (EoL)
The Department of Health ran a two-month public engagement exercise to gather people’s views on choice in EoL care, from which seven main themes emerged.
What choices are important to me at the end of life and after my death?
- I want to be cared for and die in a place of my choice;
- I want involvement in, and control over, decisions about my care;
- I want access to high quality care given by well trained staff;
- I want access to the right services when I need them;
- I want support for my physical, emotional, social and spiritual needs;
- I want the right people to know my wishes at the right time;
- I want the people who are important to me to be supported and involved in my care.
Source: Department of Health. What’s important to me: a review of choice in EoL care. London: Department of Health; 2015. https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/407244/CHOICE_REVIEW_FINAL_for_web.pdf
It may be helpful to signpost some patients to advice about drawing up an advance statement of wishes and preferences or an advance decision to refuse treatment, or to information about how to appoint someone with a lasting power of attorney (LPA) should they lose capacity to make their own decisions (see ‘Additional resources’).
Pharmacists will also need to develop strategies for supporting the patient’s relatives or carers, who may be involved in enabling the patient to make choices or communicate their wishes. General Medical Council (GMC) guidance states that family members may be directly involved in the patient’s treatment or care, and in need of relevant information to help them care for, recognise and respond to changes in the patient’s condition. In cases where patients may lack capacity, pharmacists can facilitate talk about the patient’s likely wishes, beliefs and values regarding treatment. However, pharmacists will need to be skilful to ensure that relatives or carers do not think themselves, or have the burden of being, the decision-maker if they lack the delegated authority of an attorney (LPA).
In a summary of evidence-based techniques for communicating with patients and family members, LeBlanc and Tulsky identify core communication strategies that include:
- Using open-ended questions;
- Soliciting agendas early in the interaction;
- Asking permission to raise particular topics;
- Using verbal and non-verbal expressions of empathy;
- Using praise;
- Using ‘wish’ statements;
- Aligning with the hopes of patients, loved ones and carers through the use of ‘hope for the best’ phraseology.
Explaining clinical options, and risks and benefits
Patients with advanced illness are usually receiving complicated treatment regimes consisting of multiple doses of medicines and formulations. Pharmacists will need to explore the best ways to communicate such information, especially where the patient’s first language is not English, the patient is a child, or the patient is suffering from a condition that reduces their cognitive function. Such communication might need to involve supplementing written information with symbols or visual images (e.g. pictures of tablets), using jargon-free explanations, as well as providing options for regular contact.
When discussing treatments, pharmacists should: help patients (and relatives or carers) understand the purpose of a medicine and how it is optimally used; elicit and answer questions; discuss the risks, benefits and consequences of treatments; minimise side effects and maximise safety; allay fears about opioid medications and their use; offer adherence strategies (e.g. dosette boxes) and strategies to reduce stress (e.g. prescription collection and delivery); and review medications to highlight potential interactions and those that may no longer be of value.
When discussing the risks and benefits of treatment and the use of strong opioids, pharmacists should be aware of the recommendations and strategies in current clinical guidance (see boxes 2 and 3).
Box 2: National Institute for Health and Care Excellence (NICE). Clinical guideline [CG138]. Patient experience in adult NHS services: improving the experience of care for people using adult NHS services
Enabling patients to actively participate in their care: shared decision-making
1.5.24 When offering any investigations or treatments:
- Personalise risks and benefits as far as possible;
- Use absolute risk rather than relative risk (for example, the risk of an event increases from 1 in 1000 to 2 in 1000, rather than the risk of the event doubles);
- Use natural frequency (for example, 10 in 100) rather than a percentage (10%);
- Be consistent in the use of data (for example, use the same denominator when comparing risk: 7 in 100 for one risk and 20 in 100 for another, rather than 1 in 14 and 1 in 5);
- Present a risk over a defined period of time (months or years) if appropriate (for example, if 100 people are treated for 1 year, 10 will experience a given side effect);
- Include both positive and negative framing (for example, treatment will be successful for 97 out of 100 patients and unsuccessful for 3 out of 100 patients);
- Be aware that different people interpret terms such as rare, unusual and common in different ways, and use numerical data if available;
- Think about using a mixture of numerical and pictorial formats (for example, numerical rates and pictograms).
Source: Reproduced with permission of National Institute for Health and Care Excellence. https://www.nice.org.uk/guidance/cg138/chapter/1-guidance
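The numerical framing rules in Box 2 are mechanical enough to express as a small sketch. The helper names and sentence wording below are illustrative, not part of the NICE guidance; the sketch simply shows natural-frequency phrasing with both positive and negative framing, and absolute rather than relative risk:

```python
# Illustrative sketch of the Box 2 framing rules (hypothetical helpers, not
# from the NICE guideline): natural frequencies ("3 out of 100" rather than
# 3%), both positive and negative framing, and absolute risk over a shared
# denominator rather than relative statements such as "the risk doubles".

def frame_risk(events: int, denominator: int = 100) -> str:
    """Sentence giving both positive and negative framing for an event rate."""
    successes = denominator - events
    return (f"Treatment will be successful for {successes} out of {denominator} "
            f"patients and unsuccessful for {events} out of {denominator} patients.")

def absolute_risk_change(baseline: float, new: float, denominator: int = 1000) -> str:
    """Express a risk change in absolute terms over a common denominator."""
    return (f"The risk increases from {round(baseline * denominator)} in {denominator} "
            f"to {round(new * denominator)} in {denominator}.")

if __name__ == "__main__":
    print(frame_risk(3))
    print(absolute_risk_change(0.001, 0.002))
```

Keeping the denominator fixed (100 or 1000 throughout) is what makes the comparison in the fourth bullet consistent.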
Box 3: National Institute for Health and Care Excellence (NICE). Clinical Guideline [CG140]. Palliative care for adults: strong opioids for pain relief
1.1.1 When offering pain treatment with strong opioids to a patient with advanced and progressive disease, ask them about concerns such as:
- Side effects;
- Fears that treatment implies the final stages of life.
1.1.2 Provide verbal and written information on strong opioid treatment to patients and carers, including the following:
- When and why strong opioids are used to treat pain;
- How effective they are likely to be;
- Taking strong opioids for background and breakthrough pain, addressing:
- How, when and how often to take strong opioids;
- How long pain relief should last.
- Side effects and signs of toxicity;
- Safe storage;
- Follow-up and further prescribing;
- Information on who to contact out of hours, particularly during initiation of treatment.
1.1.3 Offer patients access to frequent review of pain control and side effects.
Source: Reproduced with permission of National Institute for Health and Care Excellence. https://www.nice.org.uk/guidance/cg140/chapter/1-recommendations
Managing patients who are not emotionally ready to discuss their future care
Patients with advanced illness are often required to make complex decisions about treatment and care that may require consideration of risks and benefits, or decisions about whether a particular course of action should be discontinued. GMC guidance advises healthcare professionals to ensure that patients have opportunities to discuss the options available to them and what treatment (or refusal) is likely to involve. While some patients will be able to engage in EoL discussions, others may be less willing to think about future issues or may find the prospect of thinking about deterioration and dying too distressing. All healthcare professionals and pharmacists will find it challenging to engage such patients in discussion about future treatment and ‘what if?’ scenarios. Pharmacists will need to develop strategies that enable them to deal with these discussions. As recent research has shown, this requires skilful techniques that rely on the patient’s use of verbal or non-verbal cues in order to explore the concerns of patients, their relatives or carers in ways that allow patients to raise EoL considerations when they are ready. Where patients wish to nominate a relative or carer to have these discussions, it is important to support their wishes and establish their consent.
Managing expectations among patients, relatives and carers
Particular difficulties may arise in settings where families or carers are involved in planning ongoing care, such as in the event of a loss of capacity or in cases where treatment options for children or young people need to be discussed. The stresses associated with life-limiting and life-threatening illness can give rise to differing opinions over treatment options that can have implications for the quality of care (e.g. misunderstanding prognosis, advice and clarity over when to administer anticipatory medicines). Managing expectations or disagreements takes time, and patients, their families and carers will need support to arrive at important decisions. Advice for recognising, acknowledging and managing divergent views and potential conflict has been suggested as a useful negotiating strategy in working towards a consensus.
Managing uncertainty: hoping for the best but preparing for the worst
For an individual patient, the likely benefits and harms of palliative treatments aimed at maintaining disease stability, slowing deterioration or reducing symptoms are uncertain. While research may provide pharmacists with medical information about the number needed to treat or number needed to harm to achieve a defined outcome for the individual patient, this is unpredictable. Enabling patients to appreciate this uncertainty, while supporting their need for hope, is a tricky communication task and professionals, in their communication and relationships with patients, can enhance or diminish their sense of hope (see ‘Box 4: Contrasting communication behaviours’).
Box 4: Contrasting communication behaviours
Communication behaviours that contribute to a patient’s sense of hope:
- Being present and taking time to talk;
- Giving information in a sensitive, compassionate manner;
- Answering questions;
- Being nice, friendly and polite.
Behaviours that have a negative influence on hope:
- Giving information in a cold way;
- Being mean or disrespectful;
- Having conflicting information from professionals.
Behaviours that demonstrate the valuing of patients as individuals and the enormity of their life situation can really help. These behaviours are not simply ‘being nice’. They are important therapeutic interventions that promote a sense of hope and preserve a sense of dignity, something that many patients fear the loss of.
While a focus on unrealistic hopes (e.g. cure or more time) may not be helpful, a balance of hoping for a realistic best (e.g. that the pain will go away with this treatment), while preparing for deterioration (e.g. “what we will do if this isn’t sufficient is…”), is the best strategy.
Understanding and respect for cultural, religious or spiritual beliefs
A person’s cultural, religious or spiritual beliefs provide them with a framework and ideas about health and illness, through which healthcare decisions are made.
Pharmacists need cultural competence to ensure treatment is aligned with these preferences and wishes. They can achieve this through seeking clarification from patients and family about aspects of their care that may be influenced by their cultural, religious or spiritual beliefs. Consider, for example, asking the patient about:
- Faith: What is important to them? What rituals, books or places for prayer/meditation do they need?
- Diet: Are there any preferred foods or foods that are considered taboo? (e.g. the gelatin in some capsules is not appropriate for vegetarians, and pork-derived products are not halal in Islam).
- Touch and respect: Consider any issues that gender differences may provoke. Would the patient like to discuss matters alone or with someone else present?
It can be helpful to have insight into traditional practices of different faiths and how this influences dying and bereavement. Several useful sessions about working with cultural diversity can be found in the e-learning for End of Life Care (e-ELCA) programme. Public Health England have recently produced a guide for healthcare professionals working with diverse communities.
Recognising the changing needs of the patient and those close to them
As patients become sicker, their needs and those of their companions increase. Pharmacists can make a huge difference to the stress families experience, and to their satisfaction, by offering practical support. Strategies that anticipate needs and offer solutions are appreciated. Remaining professional while also demonstrating understanding and compassion helps; some families find it harder to cope with their emotions when people are ‘too nice’ (e.g. asking them how they are). For example:
- Would it help if I put the tablets for each time and day in an easy to remember pack?
- I can arrange for the GP to send prescriptions directly to me, if that helps you. I can also dispose of the old medicines if you bring them in.
Effective communication with patients is central to the goal of medicines optimisation. Following often simple strategies and techniques on how to manage difficult situations will improve pharmacists’ confidence and ensure patients’, carers’ and families’ medicine needs at the EoL are fully acknowledged and met.
- NHS. Health Education England. e-Learning for End of Life Care (e-ELCA)
- NHS Choices. Planning ahead for the end of life
- NHS. Improving Quality. Planning for your future care – a guide
- Public Health England. Faith at end of life: a resource for professionals, providers and commissioners working in communities
- National Institute for Health and Care Excellence (NICE). NICE guideline [NG31]. Care of dying adults in the last days of life
Citation: The Pharmaceutical Journal DOI: 10.1211/PJ.2017.20202154
i. PETROLEUM AND ITS PRODUCTS
Petroleum has been known throughout historical time. It was used in mortar, for coating walls and boat hulls, and as a fire weapon in defensive warfare. By the middle of the 19th century, the Industrial Revolution had brought about a search for new fuels in order to power the wheels of industry. Moreover, due to the social changes, people in the industrial countries wished to be able to work and read after dark and, therefore, needed cheap oil for lamps to replace the expensive whale oil or the malodorous tallow candles. Some believed that rock oil from surface seepages would be a suitable raw material for good quality illuminating oil.
In 1854, Benjamin Silliman, Jr., the son of the great American chemist and himself a distinguished professor of chemistry at Yale University, took an outside research project given by a group of promoters headed by George Bissel, a New York lawyer and James Townsend, president of a Bank in New Haven, Connecticut, to analyze the properties of the rock oil and to see if it could be used as an illuminant (Yergin, p. 20). In his research report to the group, Silliman wrote that the petroleum sample (collected by skimming the seepages on streams) could be brought to various levels of boiling and thus distilled into several fractions, all composed of carbon and hydrogen. One of the fractions was a very high quality illuminating oil. Although others in Britain and Canada had already produced clean burning lamp fuel from rock oil, it was the publication of Silliman’s report that provided the impetus to the search for crude oil in the deeper strata of the earth’s surface.
The modern petroleum industry began in 1859, when “Colonel” Edwin L. Drake, hired by the same group of promoters, set up operations about two miles down Oil Creek from Titusville in Pennsylvania on a farm that contained an oil spring. Drake and his team built a derrick and assembled the necessary equipment to drill a well that, towards the end of August 1859, struck oil at the depth of 69 feet. Many wells were then drilled in the region, and kerosene, the chief product, soon replaced whale oil and tallow candles in lamps. Little use other than lamp fuel was made of petroleum until the development of internal combustion engines (automobiles and airplanes). Today, the world is highly dependent on petroleum for motive power, lubrication, fuel, synthetics, dyes, solvents, and pharmaceuticals (Yergin, p. 26).
Origin of petroleum. There are two theories for the origin of petroleum: inorganic and organic. The inorganic theory, which has some supporters (amongst chemists and astronomers rather than geologists), holds that petroleum has a magmatic origin. Mendele’ev (Mendeléeff, I, p. 552) suggested that the mantle contained iron carbide, which could react with percolating water to form methane and other hydrocarbons, a reaction analogous to the production of acetylene from calcium carbide and water.
The organic theory, which has far more supporters, starts from the estimate that the carbon in the earth’s crust weighs about 2.6 × 10²⁸ grams (Hunt, 1977, pp. 100-16). Some 82% of this carbon is locked up as carbonate in limestone and dolomite (CaCO3 and CaMg(CO3)2). About 18% occurs as organic carbon in coal, oil and gas (M. Schedlowski, R. Eichmann, and C. E. Junge, “Evolution des irdischen Sauerstof Budgets und Entwicklung de Erdatmosphare,” Umschau., 22, 1974, pp. 703-707). The key reaction is the conversion of inorganic carbon into hydrocarbon by photosynthesis. This process happens within plants, or within animals that eat the plants. When plants and animals die, their organic matter is normally oxidized into carbon dioxide and water. In certain exceptional circumstances, however, the organic matter may be buried in sediment and preserved, albeit in a modified state, in coal, oil and gas through complex chemical processes. The occurrence of petroleum reserves in sedimentary rocks is strong evidence for this theory.
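The quoted split of crustal carbon reduces to a line of arithmetic; a sketch using the rounded estimates cited in the text:

```python
# Crustal carbon budget from the estimates quoted above (Hunt, 1977).
TOTAL_CARBON_G = 2.6e28       # total carbon in the earth's crust, grams
CARBONATE_FRACTION = 0.82     # locked up in limestone and dolomite
ORGANIC_FRACTION = 0.18       # organic carbon in coal, oil and gas

carbonate_g = CARBONATE_FRACTION * TOTAL_CARBON_G
organic_g = ORGANIC_FRACTION * TOTAL_CARBON_G
print(f"carbonate carbon: {carbonate_g:.2e} g")
print(f"organic carbon:   {organic_g:.2e} g")
```

Even the smaller organic share, a few times 10²⁷ grams, dwarfs the carbon actually recoverable as fuel.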
Geological requisites for an oil and gas field. The first requisite for an oil or gas field is a reservoir: a rock formation porous enough to contain oil or gas and permeable enough to allow their movement through it. Oil and gas occur in sedimentary rock formations laid down in ancient riverbeds or beaches, and occasionally in dune sands or deep-sea sands. Where limestones are porous and permeable, they also form reservoirs, as in reefs built up by corals and in places worked by waves and tidal currents. In carbonate formations of limestone (CaCO3) and dolomite (CaMg(CO3)2), which are more brittle and soluble than sandstones, secondary porosity is found in fractures, solution channels and vugs. The more prolific Iranian petroleum reservoirs are made of fractured carbonates.
The second requisite is a source of hydrocarbons. Most source rocks are fine-grained shales deposited in quiet water and containing an appreciable amount (more than 0.5%) of mostly insoluble organic matter called kerogen. Kerogen is an intermediate organic compound formed from the remains of animals and plants buried under sediments, transformed over time by the action of bacteria and the overburden pressure of sediments, and preserved from oxidation by the lack of oxygen under water or sediment. Depending on the depth of burial, the corresponding temperature, and its nature, kerogen changes to gas, mainly methane, or to heavier hydrocarbons (oil), particularly if it contains more fatty material. As the shale (source rock) becomes compact, pressure squeezes out water and hydrocarbons, which migrate until the fluids find a porous medium.
The third requisite is a trap to hold the oil and/or gas. Because oil and gas are lighter than the water saturating the earth’s crust, they would rise to the surface and escape unless stopped by an impermeable layer (cap rock) or impermeable surroundings. Traps are divided into three categories: structural, stratigraphic and combination. Structural traps are formed by the deformation of the earth’s crust, most commonly as anticlines.
One half of the petroleum in the world is obtained from rocks of the Cenozoic era (present to 63 million years ago). The Paleozoic era (255-580 million years ago) ranks next in production, followed by the Mesozoic era (63-255 million years ago). Most of the oil in Iran is obtained from rocks of the Tertiary and Cretaceous systems (27-145 million years ago; for the geological time scale, see Table 1; Link, 1983, p. 7). For the location and true-scale cross-sections of the anticline traps of some of the Iranian reservoirs, see Figure 3 and Figure 4 (Selley, p. 283).

Geophysical exploration. Traps are sought by geophysical methods, of which the seismograph is the most useful. Sound waves are produced at the earth’s surface by explosions or by a heavy, vibrating weight, and at sea by air guns near the seabed (offshore). Part of the wave energy is reflected back to the surface, where it is detected by sensitive receivers and recorded. The recorded data are then processed mathematically and portrayed graphically as a sort of geological section. Seismographs, while giving subsurface geological cross-sections, will not determine whether oil is present in the structure.
Two other, cheaper geophysical techniques are the gravity and magnetic methods, but they are seldom able to locate individual traps. Rather, they are used regionally to ascertain the shape of sedimentary basins. Surface geology and geochemical studies of shallow cores also help to show the possibility of the presence of hydrocarbon traps.
Exploratory drilling. The presence of oil and gas is confirmed only by drilling into the prospective rock formation. The first well drilled in a new area is called an exploratory, or wildcat, well, because the chance of finding oil or gas is speculative. Step-out wells are drilled to extend the boundary of a known producing area in an attempt to discover new pools or define the limits of known reservoirs. Development wells are drilled to produce oil or gas from a reservoir that has been located by exploratory wells.
Before drilling can begin, it is usually necessary in the United States to obtain the right to drill by securing a lease from the landowner. Outside of the United States, the subsurface usually belongs not to the landowner but to the government, as is the case in Iran.
DRILLING FOR PETROLEUM
Cable tool drilling. For many years in the early days, oil and gas wells were drilled by cable tools, but cable tool drilling has been replaced practically everywhere by rotary drilling.
Rotary drilling. The rotary drilling method consists of revolving a steel bit at the end of a string of pipe called the drill pipe. The most common bits consist of three cones with teeth made of hard metal, sometimes with embedded industrial diamond or carborundum. The rotation of the bit grinds the rock into fingernail-size cuttings. The drill pipe and bit are rotated by the rotary table at the surface of the well. Liquid drilling fluid (mud) is pumped down the hollow drill pipe and out of jet orifices onto the bit. The mud then returns to the surface through the space between the drill pipe and the wall of the well, carrying the cuttings, which are removed once the mixture passes through surface screens. The drilling mud’s functions are to remove cuttings and to cool and lubricate the bit. Mud also exerts pressure at the bottom of the hole so that water, oil, or gas in porous rocks cannot enter the hole (Figure 5; Figure 6; Selley, pp. 38-41).
When a good sample of the formation is desired, a special hollow bit is used and a short section of the formation (core) can be retrieved. Cores are necessary to determine the kind of rock and to measure its porosity and permeability, and fluid saturation.
Directional drilling. In directional drilling, an oil well is drilled at an angle rather than straight down. Crews use tools such as whipstocks and turbo-drills to guide the bit along a slanted path. This method is often used in offshore operations, because many wells can be drilled directionally from one platform.
Horizontal drilling. When the angle of directional drilling reaches 90 degrees, horizontal wells are drilled, a technique successfully applied in certain reservoirs. Horizontal wells are most effective for oil production when the producing formation is not thick and has good vertical permeability. This type of drilling has been used in some of the fractured carbonate reservoirs of Iran in recent years (A. R. Zahedani, “Optimization of Horizontal Drilling in Iranian Oilfields,” M.Sc. thesis, Sharif University of Technology, 2003, Iran; Jamshid-Nezhad, pp. 43-46).
Experimental methods of drilling include the use of electricity, intense cold, and high-frequency sound waves. Each of these methods is designed to shatter the rock at the bottom of the hole. Petroleum engineers are also testing a drill whose bit has a rotating surface: by means of remote control, drillers could rotate the bit to expose a fresh cutting surface, avoiding the need to pull the drill pipe out of the hole each time the bit is changed.
Logging. Graphical records called logs show the position and character of the geological formations encountered as the well is drilled. One type of log is made by examining samples of the cuttings, taken every 3-9 meters. However, rotary drilling mixes the cuttings considerably, and the heavy mud conceals shows of oil or gas. Consequently, the geologist must depend on logs taken by geophysical methods. The most useful logs measure the formation’s natural radioactivity, resistivity, and acoustic velocity. Other physical properties may also be measured.
Offshore drilling. Drilling offshore has become increasingly important, as large petroleum reserves have been discovered in the ocean. Modern offshore drilling began in the Gulf of Mexico, where some producing fields are 100 miles (160 km) from the coast. Offshore fields are found in many parts of the world including the Persian Gulf, the Atlantic and the Pacific coasts, the Caspian Sea and the North Sea. Depending on water depth, offshore drilling rigs may be mounted on bottom-supported vessels such as jack-ups or submersibles, or on floating vessels such as ships or semi-submersibles. Jack-ups typically have three long legs that are lowered to the bottom when the rig is in place and the rig is lifted out of the water. Drilling ships and semi-submersibles are held over the well location with anchor chains and anchors. If an oil or gas field is discovered, the mobile drill vessel is moved away, and a fixed, permanent platform is installed with a drill mounted on it. As many as 30 wells can be drilled from a platform, deviated from the vertical in different directions so as to penetrate the producing formation in a desired pattern. The platforms are large structures with living quarters for the personnel, who are served by special ships and helicopters.
Completing the well. The usual method of completing a well is to drill through the producing formation. The drill pipe is withdrawn and a larger-diameter pipe called casing is run into the hole, section by section, to the bottom. A measured amount of cement is pumped down the inside of the casing, followed by mud. The cement rises to fill the annular space between the casing and the hole. The mud is replaced by water or a fluid that will not damage the producing zone, prior to perforation of the producing formation. Perforation is usually done by creating holes with jet or gun perforators. After the formation is perforated, the fluid in the borehole is removed, and the oil or gas from the formation is free to enter the well. If the formation has been damaged by prior operations or the permeability of the producing rock is too low, the well has to be stimulated. There are two general techniques of well stimulation: acidizing and fracturing.
PRODUCTION OF PETROLEUM
Petroleum is recovered in the same way as underground water is obtained. Like water, if the pressure at the bottom of the well is high enough, the oil will flow to the surface. Otherwise pumps or other artificial means have to be used. Once the oil reaches the surface through the wellhead equipment, it passes through separators in which the oil is separated from gas and water. If natural pressure provides the energy required for the free flow of the oil to the surface, production is called primary recovery. If artificial techniques are used, the process is called enhanced recovery.
Primary recovery: The natural energy used in recovering petroleum comes chiefly from gas or water in reservoir rocks. The gas may be dissolved in the oil or separated at the top in the form of gas cap. Water that is heavier than oil collects below the petroleum.
Depending on the source, the energy in the reservoir is called (i) solution-gas drive, (ii) gas-cap drive, or (iii) water drive (Allen and Roberts, p. 20).
Solution-gas drive. The oil in nearly all reservoirs contains dissolved gas. The impact of production on this gas is similar to what happens when a can of soda is opened. The gas expands and moves towards the opening, carrying liquid with it. Solution-gas drive brings only small amounts of oil to the surface.
Gas-Cap Drive. In many reservoirs, gas is trapped in a cap above the oil as well as dissolved in it. As oil is produced from the reservoir, the gas expands and drives the oil toward the well.
Water-drive. Like gas, water in the reservoir is held in place mainly by underground pressure. If the volume of the water is sufficiently large, the reduction of pressure that occurs during production of oil will cause the water to expand. The water then displaces the petroleum, making it flow to the well.
Enhanced recovery. This includes a variety of means designed to increase the amount of oil that flows into the producing well. Depending on the stage of production in which they are used, these methods are generally classified as secondary recovery or tertiary recovery.
Secondary and tertiary recovery. Many oilfields that were produced by the solution-gas drive mechanism until they became uneconomical have been revived by water flooding. Water is injected into specially drilled wells, forcing the oil to the producing wells. After water flooding, about 50% of the original oil still remains in place. This would constitute an enormous reserve, if recovery were possible. Many methods of tertiary enhanced recovery have been researched and field-tested. Certain fluids will recover most of the residual oil when injected into the rock. These include such solvents as propane and butane, and such gases as carbon dioxide and methane, all of which will dissolve in the oil and form a bank of lighter liquid, which picks up the oil droplets left behind in the rock and drives them to the producing wells.
Moreover, surfactants (detergents) in water reduce the interfacial forces between oil and water and make the oil easier to move. Thickening agents may be added to the injected water, and viscous emulsions of oil and water have been used. Some of these methods seem promising in laboratory and pilot tests, but they have been generally uneconomical in the field.
In Venezuela, and in Alberta in Canada, where primary methods recover only about 15% of the heavy oil initially in the reservoir, the only commercially successful enhanced recovery method to date has been steam injection. Another thermal recovery method that shows promise but has not generally been successful is in situ combustion (G. Moore, Department of Chemical and Petroleum Engineering, University of Calgary, authority on in situ combustion, personal communication, 2004). Large amounts of air are injected into the reservoir, and the oil is ignited. The hot products of combustion vaporize the oil and water ahead of the burning zone and drive them toward the producing wells. In Iran at present, the technique of secondary recovery for carbonate reservoirs is confined to gas injection for reservoir pressurization and, to a limited extent, water flooding in one of the offshore fields in the Persian Gulf (A. Badakhshan et al., 1993; P. A. Bakes and A. Badakhshan, 1988).
PETROLEUM COMPOSITION AND CLASSIFICATION
Petroleum exploration is largely concerned with the search for oil and gas, two members of the chemically and physically diverse group of compounds called hydrocarbons. Physically, hydrocarbons grade from gases and liquids (crude oil) through plastic substances (bitumen) to solids (tar sand, oil shale and hydrates).
Gas. Petroleum gas or natural gas is defined as a mixture of hydrocarbons and varying amount of non-hydrocarbons that exist either in gaseous phase or in solution with crude oil in underground reservoirs. Natural gas is classified into dissolved, associated, and non-associated gas. Dissolved gas is in solution in crude oil in the reservoir. Associated gas, commonly known as gas-cap gas, overlies and is in contact with crude oil in the reservoir. Non-associated gas is in reservoirs that do not contain significant amount of crude oil. Apart from hydrocarbon gases, non-hydrocarbon gases also exist in the reservoirs in varying amounts. The non-hydrocarbon gases are nitrogen, hydrogen, carbon dioxide, hydrogen sulfide, and rare gases such as helium.
In general, hydrocarbon reservoirs, depending on their phase behavior underground, are classified as undersaturated, saturated, retrograde condensate, dry gas and wet gas reservoirs (Allen and Roberts, pp. 43-46).
Crude oil. Crude oil is defined as a mixture of hydrocarbons that exists in the liquid phase in its natural underground state and remains liquid at normal conditions after passing through surface separators. In appearance crude oils vary in color from straw yellow, green, and brown to dark brown or black, and they vary in viscosity. The density of crude oil, or its API (American Petroleum Institute) gravity, is a good indicator of its quality and is the major basis for its pricing.
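The API gravity just mentioned is computed from the oil’s specific gravity at 60 °F by the standard relation °API = 141.5/SG − 131.5. A minimal sketch (the sample SG values are illustrative, not taken from this article):

```python
def api_gravity(specific_gravity: float) -> float:
    """API gravity from specific gravity at 60 degF: API = 141.5/SG - 131.5."""
    return 141.5 / specific_gravity - 131.5

# Water (SG = 1.0) is 10 degAPI by definition; lighter crudes score higher.
for name, sg in [("water", 1.00), ("heavy crude", 0.95), ("light crude", 0.85)]:
    print(f"{name}: {api_gravity(sg):.1f} degAPI")
```

Higher API gravity generally means a lighter, more valuable crude, which is why the scale serves as a basis for pricing.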
Chemistry. Crude oil consists largely of hydrocarbons (compounds of carbon and hydrogen, in three sub-groups) and hetero-compounds containing minor amounts of oxygen, nitrogen and sulfur, together with trace amounts of metals such as vanadium and nickel. These compounds, present in varying amounts in different crude oils, adversely affect the quality and price of a crude oil, cause difficulty in refining and, if not removed from petroleum products, cause environmental pollution upon utilization.
The hydrocarbon subgroups are paraffins, naphthenes and aromatics. Paraffins are saturated hydrocarbons, either straight-chain or branched-chain (iso-paraffins). The paraffins in crude oil range from pentane to very high molecular weight compounds, and they are the major constituents of crude oils (about 50%). Naphthenes, such as cyclopentane and cyclohexane, are the second major group and make up to 40% of crude oils. Aromatics, unsaturated cyclic hydrocarbons, begin with benzene, the smallest such molecule. The aromatic hydrocarbons are liquid at normal conditions. They are present in relatively minor amounts (about 10%) in light crude oils, but their proportion increases with the density of the crude.
CRUDE OIL CLASSIFICATION
Broadly speaking the classifications fall into two categories:
(i) Those proposed by chemical engineers interested in refining of the crude oil and
(ii) Those devised by geologists, and geochemists, as an aid to understanding, the source, maturation, history, etc., of crude oil occurrence.
(i) Is concerned with the quantities of various hydrocarbons present in the crude oil and their physical properties.
(ii) Is concerned with the molecular structure of the crude oil. One of the first schemes was developed by the U.S. Bureau of Mines, in which crude oils are classified into paraffinic, naphthenic, aromatic intermediate, aromatic asphaltic and aromatic naphthenic types according to their distillate fractions at different temperatures and pressures. Tissot and Welte have given another classification (Selley, pp. 30-33) that has the advantage of demonstrating the maturation paths of oil in the subsurface (Table 2).
The quality and quantity of products produced by crude oil depend on its initial type. For example, a paraffinic type is better for producing kerosene and diesel oil, but not so suitable for producing gasoline. Aromatic types are good for producing gasoline and naphthenic oils give better lubricating oils.
Tar sands and oil shales (plastic and solid hydrocarbon). Besides crude oil and gas, vast reserves of energy are also locked in the tar sands and oil shales. Terminology and classification of plastic and solid hydrocarbons are shown on Figure 7 (Abraham, p. 432).
The solid and heavy viscous hydrocarbons occur as lakes or pools on the earth’s surface and are disseminated in veins and pores near the surface. Notable examples of such hydrocarbons (inspissated deposits) occurring as seeps are known all over the world, particularly in Oklahoma, Venezuela, Trinidad, Burma, Iran, and Iraq, and in other localities in the Middle East.
Tar sands. Heavy viscous oil deposits occur at or near the earth’s surface in many parts of the world. Table 3 shows the vast reserves of tar sand deposits worldwide (Hills, 1974, p. 263).
Two basic approaches to extracting oil from tar sands are in practice: surface mining and subsurface extraction. In the first case, strip-mining technology is used and the oil is separated from the quarried tar sand with hot water or steam. Where the overburden is too thick, two types of in-place extraction are used: the injection of solvent (vapex) to dissolve the oil (R. M. Butler, Chemical and Petroleum Engineering Department, University of Calgary, authority on vapex, personal communication), and the use of heat, in the form of steam (steam stimulation or steam flooding) or in-situ combustion, to reduce the viscosity of the oil and extract it (G. Moore, Department of Chemical and Petroleum Engineering, University of Calgary, authority on in situ combustion, personal communication, 2004). In these operations the oil flows to the well bore, where it is pumped to the surface.
Oil shale. Oil shale is a fine-grained sedimentary rock that yields oil on heating. It differs from tar sands in that in tar sands the oil is free and occurs in the pores, whereas in oil shale the oil is contained within the complex structure of kerogen, from which it may be distilled. The reserves of oil shale are widely distributed in the world and are estimated at 30 trillion barrels of oil, of which only about 2% is accessible using present-day technology (Yen and Chillingarian, p. 292; Dinneern, pp. 181-98).
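The accessibility figure quoted above reduces to a one-line calculation; a sketch using the rounded numbers from the text:

```python
TOTAL_SHALE_OIL_BBL = 30e12   # ~30 trillion barrels in place (Yen and Chilingarian)
ACCESSIBLE_FRACTION = 0.02    # ~2% recoverable with present-day technology

accessible_bbl = TOTAL_SHALE_OIL_BBL * ACCESSIBLE_FRACTION
print(f"accessible shale oil: {accessible_bbl / 1e9:.0f} billion barrels")
```

That is, roughly 600 billion barrels, still a very large resource despite the small recoverable fraction.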
As in the case of tar sands, there are two basic methods of winning oil from shale: (i) by retorting shale quarried at the surface or (ii) by underground in-situ extraction. The cost of extraction of oil from oil shale is very high at present (Hunt, 1979, p. 617).
Figure 8. Map of oil resources of the Persian Gulf region.
Figure 9. Map of oil resources of the Caspian Sea region.
Gas hydrate. Gas hydrates are compounds of frozen water that contain gas molecules; the cage-like ice structures themselves are referred to as clathrates. Physically, hydrates look similar to white, powdery snow. Gas hydrates occur only under very specific pressure-temperature conditions: they are stable at high pressures and low temperatures. They occur in shallow arctic sediments and in deep oceanic deposits, and gas hydrates in arctic permafrost have been described from Alaska and Siberia (Holder et al., 1976, pp. 981-88; Makogon et al., “Detection of a Pool of Natural Gas in Solid State,” Dokl. Akad. Nauk SSSR 196, 1971, pp. 197-200).
Herbert Abraham, Asphalts and Allied Substances, Their Occurrence, Modes of Production, Uses in the Arts and Methods of Testing, 2 vols., New York, 1945.
T. O. Allen and A. P. Roberts, Production Operation, Tulsa, 1993.
A. Badakhshan et al., “The impact of Gas Injection on the Oil Recovery of a Giant Naturally Fractured Carbonate Reservoir,” in the Proceedings of the Fourth Petroleum Conference of the South Saskatchewan Section, The Petroleum Society of the Canadian Institute of Mining (CIM), 18-20 October, 1993, n.p. Philip A. Bakes and A. Badakhshan, “The Application of Some New Surfactants to the Water Flooding of Carbonate Reservoirs,” in the Proceedings of the 39th Annual Technical Meeting of the Petroleum Society of the Canadian Institute of Mining (CIM), Calgary, June 12-16, 1988; Paper no. 88-39-121.
G. V. Dinneern “Retorting Technology of Oil Shales,” In Teh Fu Yen and George V. Chilingarian, eds., Oil Shales, Amsterdam, 1976, pp. 181-98.
M. Jamshid-Nezhad, “Horizontal Drilling Proves Zone-Specific Application in Iranian Carbonate Reservoirs,” in Oil and Gas Journal, Dec.9, 2002, pp. 43-46.
L. V. Hills, “Oil Sands, Fuel of the Future,” in TheManual of the Canadian Society of Geology 3, 1974, p. 263.
G. D. Holder, D. L. Katz and J. H. Hand, “Hydrate Formation in Subsurface Environments,” Bulletin of American Association of Petroleum Geologists 60, 1976, pp. 981-88.
John Meacham Hunt, “Distribution of Carbon as Hydrocarbons and Asphaltic Compounds in Sedimentary Rocks,” in Bulletin of American Association of Petroleum Geologists 61, 1977. pp. 100-16.
Ibid, Petroleum Geochemistry and Geology, San Francisco, 1979.
Peter K. Link, Basic Petroleum Geology, Tulsa, 1983.
D. I. Mendeléeff (Mendele’ev), The Principles of Chemistry, I, tr. from Russian by G. Kamensky, ed. T. A. Lawson, 6th ed., London and New York, 1897.
Richard C. Selley, Elements of Petroleum Geology, New York, 1985.
Teh Fu Yen and George V. Chilingarian, eds., Oil Shale, Developments in Petroleum Science series, No. 5., New York, 1976.
Daniel Yergin, The Prize: The Epic Quest for Oil, Money, and Power, New York, 1991.
(A. Badakhshan and F. Najmabadi)
Originally Published: July 20, 2004
Last Updated: July 20, 2004 | <urn:uuid:f606511b-3c51-4f50-b797-bb668e04286d> | CC-MAIN-2017-17 | http://www.iranicaonline.org/articles/oil-industry-i | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122174.32/warc/CC-MAIN-20170423031202-00603-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.943656 | 6,237 | 3.671875 | 4 |
U.S. Dollar and Assets - America’s ‘Exorbitant’ Privilege Will Continue
Currencies / US Dollar
Apr 19, 2013 - 10:28 PM GMT
In July 1944, delegates from 44 nations met at Bretton Woods, New Hampshire - the United Nations Monetary and Financial Conference - and agreed to “peg” their currencies to the U.S. dollar, the only currency strong enough to meet the rising demands for international currency transactions.
Member nations were required to establish a parity of their national currencies in terms of the US dollar, the "peg", and to maintain exchange rates within plus or minus one percent of parity, the "band."
What made the dollar so attractive to use as an international currency was that each US dollar was based on 1/35th of an ounce of gold, and the gold was to be held in the US Treasury. The value of gold being fixed by law at 35 US dollars an ounce made the value of each dollar very stable.
The US dollar, at the time, was considered better than gold for many reasons:
- The strength of the U.S. economy
- The fixed relationship of the dollar to gold at $35 an ounce
- The commitment of the U.S. government to convert dollars into gold at that price
- The dollar earned interest
- The dollar was more flexible than gold
There’s a lesson, never learned, that reverberates throughout monetary history: when governments, any governments, come under financial pressure they cannot resist printing money and debasing their currency to pay their debts.
Let’s fast-forward a few years…
The Vietnam War was going to cost the US $500 billion. The stark reality was that the US simply could not print enough money to cover its war costs; its gold reserve held only $30 billion, most of which was already backing existing US dollars, and the government refused to raise taxes.
In the 1960s President Lyndon B. Johnson's administration declared war on poverty and put in place its Great Society programs:
- Head Start
- Job Corps
- Food stamps
- Funded education
- Job training
- Direct food assistance
- Direct medical assistance
More than four million new recipients signed up for welfare.
During the Nixon administration welfare programs underwent major expansions. States were required to provide food stamps. Supplemental Security Income (SSI) consolidated aid for aged, blind, and disabled persons. The Earned Income Credit provided the working poor with direct cash assistance in the form of tax credits, and welfare rolls kept growing.
Bretton Woods collapsed in 1971 when Nixon severed the link between the dollar and gold – a move known as the Nixon Shock because the decision was made without consulting the other signatories of Bretton Woods; even his own State Department wasn’t consulted or forewarned. The US dollar was now a fully floating fiat currency, and the government had no problem printing more money. With gold finally demonetized, the US Federal Reserve (Fed) and the world’s central banks were now free from having to defend their gold reserves and a fixed dollar price of gold.
The Fed could finally concentrate on achieving its mandate - full employment with stable prices - by employing targeted levels of inflation. The Fed’s ‘Great Experiment’ had begun – the objective being a leveling out of the business cycle by keeping the economy in a state of permanent boom - gold's "chains of fiscal discipline" had been removed.
But there was a problem – because of the massive printing of US dollars to cover war and welfare costs, Nixon worried about the strength of his country’s currency. How do you keep the U.S. dollar as the world’s reserve currency, and keep demand for it strong, if you first remove gold’s backing and then print it into oblivion?
Recognizing that the US, and the rest of the world, was going to need and use more oil, a lot more oil, and that Saudi Arabia wanted to sell the world’s largest economy (by far the US) more oil, Nixon and Saudi Arabia came to an agreement in 1973 whereby Saudi oil could only be purchased in US dollars. In exchange for Saudi Arabia's willingness to denominate their oil sales exclusively in U.S. dollars, the United States offered weapons and protection of their oil fields from neighboring nations.
Nixon also abolished the International Monetary Fund’s (IMF) international capital constraints on American domestic banks. This allowed Saudi Arabia and other Arab producers to recycle their petrodollars into New York banks.
Global oil sales in U.S. dollars caused an immediate and strong global demand for US dollars – the ‘Petrodollar’ was born.
By 1975 all OPEC members had agreed to sell their oil only in US dollars in exchange for weapons and military protection.
“In a nutshell, any country that wants to purchase oil from an oil producing country has to do so in U.S. dollars. This is a long standing agreement within all oil exporting nations, aka OPEC, the Organization of Petroleum Exporting Countries. The UK for example, cannot simply buy oil from Saudi Arabia by exchanging British pounds. Instead, the UK must exchange its pounds for U.S. dollars…
This means that every country in the world that imports oil—which is the vast majority of the world's nations—has to have immense quantities of dollars in reserve. These dollars of course are not hidden under the proverbial national mattress. They are invested. And because they are U.S. dollars, they are invested in U.S. Treasury bills and other interest bearing securities that can be easily converted to purchase dollar-priced commodities like oil. This is what has allowed the U.S. to run up trillions of dollars of debt: the rest of the world simply buys up that debt in the form of U.S. interest bearing securities.” Christopher Doran, Iran and the Petrodollar Threat to U.S. Empire
As developed economies grew and prospered, and as developing economies took center stage with their massive urbanization and infrastructure development plans, their need for oil grew – and so too did the need for new U.S. dollars; as demand grew, the currency strengthened. The U.S. dollar quickly became the currency for global trades in almost every commodity and most consumer goods; it wasn’t used just for oil purchases anymore. Countries all over the world bought, had to buy, more and more dollars to have a reserve of currency with which to buy oil and ‘things.’ Countries began storing their excess US dollar capacity in US Treasury Bonds, giving the US a massive amount of credit from which to draw.
There’s no disputing the U.S. greenback is the world's currency - the dollar is the currency of denomination of half of all international debt securities and makes up 60 percent of countries’ foreign reserves.
The Petrodollar replaced the Gold Standard
Currently the only source of backing for the U.S. dollar is the fact that oil is priced in only U.S. dollars and the world must use the Petrodollar to make their nation’s oil purchases or face the weight of the U.S. military and economic sanctions. Many countries also use their Petrodollar surplus for international trade - most international trade is conducted in U.S. dollars.
It’s very obvious that the United States economy, and the global economy, are both intimately tied to the dollar's dual role as the world’s reserve currency and as the Petrodollar.
“Trade between nations has become a cycle in which the U.S. produces dollars and the rest of the world produces things that dollars can buy; most notably oil. Nations no longer trade to capture comparative advantage but to capture needed dollar reserves in order to sustain the exchange value of their domestic currencies or to buy oil. In order to prevent speculative attacks on their currencies, those nations’ central banks must acquire and hold dollar reserves in amounts corresponding to their own currencies in circulation. This creates a built-in support for a strong dollar that in turn forces the world’s central banks to acquire and hold even more dollar reserves, making the dollar stronger still.” Harvey Gold, Iran’s Threat to the U.S. – Nuclear or the Demise of the Petrodollar?
It’s also very obvious that if global Petrodollar demand were ever to crumble the use of the U.S. dollar as the world’s reserve currency would abruptly end.
- Energy costs would rise substantially. Americans, because their dollar is the world’s reserve currency and they control it, have been buying oil and gasoline for a fraction of what the rest of the world pays.
- There would be substantially less demand for dollars and U.S. government debt. All nations that buy oil and hold U.S. dollars in their reserves would have to replace them with whatever currency oil is going to be priced in - the resulting sell-off would weaken the U.S. currency dramatically.
- Interest rates would rise. The Federal Reserve would have to increase interest rates to reduce the dollar supply.
- Foreign funds would literally run from U.S. stock markets and all dollar denominated assets.
- The military establishment would collapse.
- There would be a 1930s-like bank run.
- The dollar exchange rate would fall. The current-account trade deficit would become unserviceable.
- The U.S. budget deficit would go into default. This would create a severe global depression because the U.S. would not be able to pay its debts.
As to why some might think the Petrodollar is history, consider:
- Several countries have attempted to, or have already moved away from the petrodollar system – Iraq, Iran, Libya, Syria, Venezuela, and North Korea.
- Other nations are choosing to use their own currencies for inter-country trade: China/Russia; China/Brazil; China/Australia; China/Japan; India/Japan; Iran/Russia; China/Chile; China/the United Arab Emirates (UAE); China/Africa; and Brazil/Russia/India/China/South Africa (the BRIC nations, plus South Africa, now the BRICS).
- Countries have long stored their excess US dollar capacity in US Treasury Bonds. But with interest rates kept excessively low for so long a period of time, and with no relief in sight, the rate of return on U.S. interest-bearing securities has been so low that it’s not worth holding them to generate any kind of return on foreign reserves – the very same reserves countries want to hold to buy oil.
- The U.S. does not need its Arab Petrodollar partners as much since the invasion of Iraq with its immense oil resources (second largest in the world) and discovery of how to obtain oil from unconventional sources – shale oil, oil sands etc. Saudi Arabia and other OPEC countries in the region might be less needy for U.S. protection now that Iraq has been neutralized and Iran is in the crosshairs.
- Russia is the number one oil exporter, China is the number two consumer of oil and imports more oil from the Saudis than the U.S. does. Chinese and Russian trade is currently around US$80 billion per year. China has agreed to lend the world’s largest oil company, Russia’s Rosneft, two billion dollars, to be repaid in oil.
- U.S. federal debt is close to 17 trillion dollars and is 90 percent of GDP. The deficit is a horrendous 7 percent of GDP. Political infighting and bickering has made cooperation nearly impossible and effective measures just aren’t being taken. The Federal Reserve is increasing its reserves by over a trillion dollars a year, the Fed is out of tools, its measures are not working. The ‘recovery’ is false, jobs are scarce and 6.2 million Americans have dropped out of the workforce.
Valéry Giscard d'Estaing referred to the benefits the United States has due to its own currency being the international reserve currency as an "exorbitant privilege."
“Reserve currency status has two benefits. The first benefit is seigniorage revenue—the effective interest-free loan generated by issuing additional currency to nonresidents that hold US notes and coins... The second benefit is that the United States can raise capital more cheaply due to large purchases of US Treasury securities by foreign governments and government agencies…The major cost is that the dollar exchange rate is an estimated 5 to 10 percent higher than it would otherwise be because the reserve currency is a magnet to the world's official reserves and liquid assets. This harms the competitiveness of US exporting companies and companies that compete with imports...
There is no realistic prospect of a near-term successor to the dollar. Although the euro is already a secondary reserve currency, MGI finds that the eurozone has little incentive to push for the euro to become a more prominent reserve currency over the next decade. The small benefit to the eurozone of slightly cheaper borrowing and the cost of an elevated exchange rate today broadly cancel each other. The renminbi may be a contender in the longer term—but today China’s currency is not even fully convertible.” McKinsey Global Institute, An exorbitant privilege? Implications of reserve currencies for competitiveness
There has lately been a lot of talk about the demise of the Petrodollar. Fortunately, or unfortunately (depending on which side of the debate you’re on), there exists no viable alternative to the U.S. dollar – not today, not tomorrow, not for a very long time.
The EU is a wasteland; will the deeply flawed euro even survive?
“The euro’s major weakness comes from its political base. If the entire 27-country strong European Union (EU) were backing the euro, its long-term international standing would be considerably enhanced. With only half of the E.U countries backing it, the euro zone is vulnerable in the future to a possible dissolution under the pressures of economic hardships. This is more so since the statutes of the European Central Bank are unduly rigid, not only freezing exchange rates between member states, which is OK, but also de facto freezing their fiscal policies, while the central bank itself has the goal of fighting inflation as its only objective. It seems that the objective of supporting economic growth was left out of its statutes, with the consequence that it may be unable to ride successfully future serious economic disturbances.” Prof Rodrigue Tremblay, Nothing in Sight to Replace the US Dollar as an International Reserve Currency
Well what about China you ask?
One of the preconditions of reserve currency status is relaxing capital controls so foreigners can reinvest their accumulated yuan back into a country’s markets. China has strict capital controls in place. If they were relaxed to the level needed, then market-driven money flows, not China’s Communist leaders, would drive exchange and interest rates. Communist leaders would be facing the thing they fear the most – instability – because they would lose control over two of their main economic levers.
China has well over US$3.2 trillion in its foreign reserves. They’ve accumulated this massive amount of money over the years by maintaining the yuan’s semi-fixed peg to the dollar.
Think about it: the euro crisis makes the US dollar the preferred safe haven, which lowers US borrowing costs. This in turn means China has to continue to lend to the US in order to hold up the value of its current reserves, pushing US borrowing costs down further.
A massive shift as many envision – China out of the U.S. dollar - would destroy the dollar and cause the instability the Communist Party fears so much. Why would China deliberately destroy its own wealth and why would Chinese communist leaders set themselves up for discord among its citizens?
“China, itself as a country, has a very limited moral international stance. It is still a totalitarian, authoritarian and repressive state regime that does not recognize basic human rights, such as freedom of expression and freedom of religion, and which crushes its linguistic and religious “minority nationalities. It is a country that imposes the death penalty, even for economic or political crimes…Only a fundamental political revolution in China could raise this country to a world political and monetary status. This is most unlikely to happen in the foreseeable future and, therefore, no Chinese currency is likely to play a central role in financing international trade and investment.” Prof Rodrigue Tremblay, Nothing in Sight to Replace the US Dollar as an International Reserve Currency
U.S. assets are free from default risk, free from political risk, the U.S. has never imposed capital controls and has only frozen funds once – Iran’s in 1978.
Fact: the United States of America, and only the United States of America, controls the fate of the Petrodollar. Not communist China, not Russia or Saudi Arabia or the EU.
The questionable ‘exorbitant’ privilege (the interest-free loans and U.S. Treasury purchases by foreign governments, versus the loss of business competitiveness and all that entails) bestowed upon America for having the world’s reserve currency is going to continue for the foreseeable future. This fact, and what it means to the U.S. and the world, should be on all our radar screens. Is it on yours?
If not, maybe it should be.
By Richard (Rick) Mills
If you're interested in learning more about the junior resource and bio-med sectors please come and visit us at www.aheadoftheherd.com
Site membership is free. No credit card or personal information is asked for.
Richard is host of Aheadoftheherd.com and invests in the junior resource sector.
His articles have been published on over 400 websites, including: Wall Street Journal, Market Oracle, SafeHaven , USAToday, National Post, Stockhouse, Lewrockwell, Pinnacledigest, Uranium Miner, Beforeitsnews, SeekingAlpha, MontrealGazette, Casey Research, 24hgold, Vancouver Sun, CBSnews, SilverBearCafe, Infomine, Huffington Post, Mineweb, 321Gold, Kitco, Gold-Eagle, The Gold/Energy Reports, Calgary Herald, Resource Investor, Mining.com, Forbes, FNArena, Uraniumseek, Financial Sense, Goldseek, Dallasnews, Vantagewire, Resourceclips and the Association of Mining Analysts.
Copyright © 2013 Richard (Rick) Mills - All Rights Reserved
Legal Notice / Disclaimer: This document is not and should not be construed as an offer to sell or the solicitation of an offer to purchase or subscribe for any investment. Richard Mills has based this document on information obtained from sources he believes to be reliable but which has not been independently verified; Richard Mills makes no guarantee, representation or warranty and accepts no responsibility or liability as to its accuracy or completeness. Expressions of opinion are those of Richard Mills only and are subject to change without notice. Richard Mills assumes no warranty, liability or guarantee for the current relevance, correctness or completeness of any information provided within this Report and will not be held liable for the consequence of reliance upon any opinion or statement contained herein or any omission. Furthermore, I, Richard Mills, assume no liability for any direct or indirect loss or damage or, in particular, for lost profit, which you may incur as a result of the use and existence of the information provided within this Report.
Richard (Rick) Mills Archive
© 2005-2016 http://www.MarketOracle.co.uk - The Market Oracle is a FREE Daily Financial Markets Analysis & Forecasting online publication. | <urn:uuid:917a2ba7-b8fd-454a-8055-4693ac175478> | CC-MAIN-2017-17 | http://www.marketoracle.co.uk/Article40043.html | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121165.73/warc/CC-MAIN-20170423031201-00601-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.948904 | 4,103 | 3.546875 | 4 |
The Kaiser Permanente Northern California research program on genes, environment, and health (RPGEH) pregnancy cohort: study design, methodology and baseline characteristics
© The Author(s). 2016
Received: 16 January 2016
Accepted: 8 November 2016
Published: 29 November 2016
Exposures during the prenatal period may have lasting effects on maternal and child health outcomes. To better understand the effects of the in utero environment on children’s short- and long-term health, large representative pregnancy cohorts with comprehensive information on a broad range of environmental influences (including biological and behavioral) and the ability to link to prenatal, child and maternal health outcomes are needed. The Research Program on Genes, Environment and Health (RPGEH) pregnancy cohort at Kaiser Permanente Northern California (KPNC) was established to create a resource for conducting research to better understand factors influencing women’s and children’s health. Recruitment is integrated into routine clinical prenatal care at KPNC, an integrated health care delivery system. We detail the study design, data collection, and methodologies for establishing this cohort. We also describe the baseline characteristics and the cohort’s representativeness of the underlying pregnant population in KPNC.
While recruitment is ongoing, as of October 2014, the RPGEH pregnancy cohort included 16,977 pregnancies (53 % from racial and ethnic minorities). RPGEH pregnancy cohort participants consented to have blood samples obtained in the first trimester (mean gestational age 9.1 weeks ± 4.2 SD) and second trimester (mean gestational age 18.1 weeks ± 5.5 SD) to be stored for future use. Women were invited to complete a questionnaire on health history and lifestyle. Information on women’s clinical and health assessments before, during and after pregnancy and women and children’s health outcomes are available in the health system’s electronic health records, which also allows long-term follow-up.
This large, racially- and ethnically-diverse cohort of pregnancies with prenatal biospecimens and clinical data is a valuable resource for future studies on in utero environmental exposures and maternal and child perinatal and long term health outcomes. The baseline characteristics of RPGEH Pregnancy Cohort demonstrate that it is highly representative of the underlying population living in the broader community in Northern California.
Keywords: Pregnancy cohort; Resource; Biorepository; Maternal health
Exposures before and during pregnancy contribute to the immediate and future health outcomes of both women and their children. Emerging evidence supports the notion that the prenatal period is a critical developmental window during which in utero exposures may have lasting effects on a child’s future health [1, 2]. Biological programming occurs during fetal life in response to in utero exposure to nutrient substrates, hormones, growth factors, cytokines, environmental conditions or toxins, and other exposures. Evidence also shows that women who develop pregnancy complications are at increased risk of developing chronic diseases later in life [4–7]. However, the mechanisms underlying many of these findings remain unclear, and further research is needed to advance our understanding of how the in utero environment impacts the short- and long-term health of both the woman and her child.
Large studies with multiple measurements of biomarkers during pregnancy are needed to better measure perinatal exposures and to understand the etiologically relevant period of the effects of exposures on perinatal outcomes. To fully understand how the in utero environment influences the short- and long-term health of women and their children, large representative study populations with comprehensive information on a broad range of factors, including biomarkers, medical conditions, medications, nutrition, physical activity and environmental exposures, are needed.
The Kaiser Permanente Northern California (KPNC) Research Program on Genes, Environment, and Health (RPGEH) has established a large pregnancy cohort that integrates biospecimens with rich and accurate clinical and health data available from the electronic health record (EHR), creating a unique resource available to advance research on women’s and children’s health. The establishment of this pregnancy cohort within an integrated health care delivery system with an EHR has the additional advantage of enabling accurate assessment of short- and long-term maternal and child health outcomes and the rapid translation of clinically meaningful findings into clinical practice. This report describes the design and methods used to establish this pregnancy cohort and its biorepository in KPNC. We present preliminary data on the baseline characteristics of the cohort to demonstrate its racial-ethnic diversity and the prevalence of several perinatal complications of interest, as well as its representativeness with regard to the underlying population of pregnancies at KPNC. We further discuss possible use of this large cohort including the ability to efficiently follow it prospectively through the EHR to answer pressing questions regarding women’s and children’s health.
The aim of this project is to establish a large pregnancy cohort that integrates biospecimens with rich and accurate clinical and health data, creating a resource to advance scientific research on women’s and children’s health. The pregnancy cohort can be linked to short- and long-term maternal and child health outcomes to facilitate the rapid translation of clinically meaningful findings into clinical practice.
The KPNC Division of Research started the Research Program on Genes, Environment and Health (RPGEH) in 2007 to develop a genetic epidemiology population resource which integrates data from multiple sources from consenting KPNC adult members, including biospecimens, clinical data from the EHR, lifestyle and risk factor data from surveys, and environmental exposure data from both laboratory and geographic information systems. One component of the RPGEH is the RPGEH Pregnancy Cohort.
Establishment of RPGEH pregnancy cohort
The Division of Research worked closely with KPNC clinical partners to develop facility-based recruitment procedures and laboratory blood processing workflows that could be easily integrated as part of routine prenatal medical care. The entire recruitment process was designed to become an integrated and routine part of the clinical prenatal intake process. To avoid disruption of clinical workflows, all RPGEH program-related processes (e.g., questions from patients, and follow-up) are handled by research staff. The recruitment and biospecimen collection protocol processes are described below.
KPNC provides integrated health care to over 3.6 million members through 7,000 physicians, > 240 medical office buildings and 22 hospitals. The KPNC service area spans 14 counties of the greater Bay Area, as well as the California Central Valley from Sacramento to Fresno, and includes urban and rural areas. The population is highly representative of the demographic characteristics of the entire population from this geographic area. The membership is racially and socio-economically diverse. KPNC is vertically integrated such that all care is provided in a closed system and documented in an EHR. EHR data are clinical records, not claims data, and thus are robust with regard to data quality and completeness. The membership of reproductive-aged women (15–44) includes women with KP commercial insurance (varying copays, varying deductible levels), MediCal, and other California state subsidized programs. Within KPNC, there are 16 delivery hospitals and approximately 38,000 pregnancies each year.
Recruitment of participants
Biospecimen collection and storage process
Women who consent have blood drawn for research purposes into one 8.5 mL serum separator tube (SST) and one 6.0 mL ethylenediaminetetraacetic acid (EDTA) tube at the same time as the clinically ordered blood tests at their local KPNC laboratory, at two points during the pregnancy: in the first trimester during a standard first trimester panel or genetic screening (~10–13 weeks, 6 days) and during the second trimester either along with standard genetic screening (~15–20 weeks) or with the gestational diabetes screening (~24–28 weeks). The blood tubes are couriered as part of the normal KPNC laboratory system to the Regional Laboratory, where they are transferred to the RPGEH Biorepository (see description below) and further processing occurs.
The RPGEH Research Biorepository
The RPGEH Biorepository is a state-of-the-art research biorepository staffed with research laboratory personnel who are responsible for maintaining the laboratory space, checking in, processing and storing samples, and retrieving aliquots for studies. Equipment includes an ABF 500 automated blood fractionation robot unit, an RTS A4 temperature- and humidity-controlled robotic ambient storage unit for archiving DNA using Biomatrica DNA stable storage medium, and a walk-in −80 °C freezer. A custom-developed Laboratory Information Management System (LIMS) tracks specimens at each step and is linkable to RPGEH operations and clinical information databases.
Once at the Biorepository, serum from the SST is aliquoted into four 0.8 mL cryovials. The EDTA tube is centrifuged and plasma is aliquoted into two 0.8 mL cryovials, while 1.0 mL of buffy coat is aspirated and placed in a cryovial. All cryovials are stored at −80 °C.
Information on participants in the Pregnancy Cohort is obtained from several sources of rich clinical data (resources are described below).
Information obtained from the EHR during the first prenatal visit
As part of routine prenatal care, all pregnant women complete a prenatal questionnaire during the first trimester or shortly after the pregnancy is clinically confirmed. This questionnaire includes questions on parity, gravidity, prior delivery and birth history, reproductive history, menstrual history, prior medical history, social circumstances (e.g., stress, domestic violence, etc.), and an Adult Outcomes Questionnaire (AOQ) which includes the PHQ-9 [9, 10] depression screener and the Generalized Anxiety Disorder scale (GAD-2) as well as functioning items. The information from the Prenatal Questionnaire is recorded in the KPNC EHR for access to extensive health and reproductive history on the cohort. Several other sources of pre-pregnancy information are available in the EHR including pre-pregnancy body mass index (BMI) if a woman had been a KPNC member prior to conception.
Early start substance use data
In addition to the Prenatal Questionnaire, a self-administered Early Start Program Prenatal Substance Use Screening Questionnaire is completed at entry into prenatal care. Early Start is an integrated prenatal program to intervene when a pregnant woman reports alcohol, tobacco and other drug use during pregnancy. The questionnaire asks about substance use before pregnancy and since pregnancy began, including alcohol, smoking, and prescription drug use.
Clinical data available in KPNC EHR
KPNC maintains complete databases that capture all encounters including hospitalizations, outpatient visits, radiology/imaging, laboratory tests, and prescription medications, and combines these data for presentation to clinicians as part of the EHR. Data captured in these databases include inpatient and outpatient diagnostic information, imaging reports, laboratory tests and results, pharmacy dispenses including dosages and days of supply, and surgery outcomes, among others. All vital signs, including weight, height, blood pressure and physical activity, are recorded in the EHR. As noted above, these data are clinical information maintained in an EHR, not claims data, and enable the detailed examination of diagnoses and treatments before, during and after pregnancy. In addition, when an infant is born, he/she is issued a unique medical record number (MRN) that is used for all care associated with the infant. It is linkable to the mother’s unique MRN, which allows identification of the mother-infant pair. This allows us to link women to their infants and to examine infant growth and outcomes at birth and during childhood, along with other health outcomes, including the mother’s.
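The mother-infant linkage described above is, in essence, a join on the mother's MRN. As a rough, hypothetical illustration only (the record structures, field names, and values here are invented, not KPNC's actual data model), such a linkage can be sketched as:

```python
# Hypothetical sketch of mother-infant record linkage via the mother's MRN.
# All identifiers and fields below are illustrative, not real KPNC data.

mothers = {
    "M001": {"pre_pregnancy_bmi": 27.4},
    "M002": {"pre_pregnancy_bmi": 22.1},
}

infants = {
    "I101": {"birth_weight_g": 3450, "mother_mrn": "M001"},
    "I102": {"birth_weight_g": 2980, "mother_mrn": "M002"},
}

def link_mother_infant_pairs(mothers, infants):
    """Join each infant record to its mother's record using the mother's MRN."""
    pairs = []
    for infant_mrn, infant in infants.items():
        mother = mothers.get(infant["mother_mrn"])
        if mother is not None:  # skip infants whose mother is not in the cohort
            pairs.append({
                "infant_mrn": infant_mrn,
                "mother_mrn": infant["mother_mrn"],
                "birth_weight_g": infant["birth_weight_g"],
                "pre_pregnancy_bmi": mother["pre_pregnancy_bmi"],
            })
    return pairs

pairs = link_mother_infant_pairs(mothers, infants)
```

In practice such joins would be performed within the EHR's relational databases; the sketch only shows why a shared, stable identifier (the MRN pair) makes longitudinal mother-child analyses straightforward.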
RPGEH pregnancy cohort questionnaire
To obtain more detailed information not captured in the EHR, each participant is invited to complete the RPGEH Pregnancy Cohort questionnaire. The RPGEH questionnaire ascertains information about a variety of socio-demographic, lifestyle and environmental factors not routinely captured in the EHR, including diet, physical activity, multivitamin use and self-reported health history before and during the study pregnancy (Additional file 1).
Environmental exposure data
Over 98 % of the RPGEH pregnancy cohort has been successfully geocoded and can be linked to contextual or environmental data, including spatiotemporal data that exist in public access databases. These data come from commercial sources, non-profit agencies, and local, regional, state and national government agencies. Data from these various sources are being incorporated into a KPNC geographic information system (GIS) database using ArcGIS software (Redlands, CA). The database will include data on retail food outlets, green space, infrastructure (roads, educational facilities, health delivery centers, and public assistance facilities), traffic density, air pollution, pesticide use, toxic sites, toxic release inventories, and other factors. Other relevant information, currently located at other agencies but available for linkage, includes water quality, centers of social congregation (e.g., religious or spiritual institutions, senior centers, youth activity centers, etc.), and crime data. California has some of the most complete publically available geospatial data across these environmental factors anywhere in the world.
Below we describe the sources used for determining the clinical outcomes of the RPGEH Pregnancy Cohort participants and non-participants for this preliminary report.
Clinical outcomes during pregnancy/in utero exposure to maternal metabolism
Women’s body mass index and gestational weight gain
Through the EHR we are able to capture a woman’s body mass index prior to pregnancy as well her gestational weight gain trajectory and total gestational weight gain, allowing us to assess possible determinants of gestational weight gain, as well as to define the sequelae of in utero exposure to maternal obesity and excessive gestational weight gain (i.e., over nutrition) or inadequate gestational weight gain (i.e., undernutrition) in relation to the current Institute of Medicine guidelines on child health.
Pregestational diabetes and gestational diabetes mellitus (GDM) and impaired glucose tolerance
Pregestational diabetes is obtained from the KPNC Diabetes Registry and GDM is obtained from the KPNC pregnancy glucose tolerance and GDM Registry . These registries allow for the identification of GDM based on objective glucose measurement defined according to laboratory glucose values meeting the Carpenter and Coustan diagnostic criteria .
Preeclampsia/Hypertensive disorder of pregnancy
Preeclampsia and hypertensive disorders of pregnancy were also obtained from the EHR and were defined according to the following ICD-9 codes: pre-existing hypertension 642.0–642.2, gestational hypertension 642.3, preeclampsia or eclampsia 642.4–642.7. The validity of these ICD-9 codes to diagnose hypertensive disorders of pregnancy has previously been reported .
Clinical outcomes at birth
Gestational age is based on the estimated date of delivery recorded in the EHR, which is determined by the woman’s self-reported last menstrual period (LMP), or by first trimester ultrasound if different from the LMP-based calculation by more than 1 week. Preterm birth was defined as birth at <37 weeks’ gestation. We also examined the degree of preterm birth using the following definitions: extreme preterm (<28 weeks’ completed gestation), severe preterm (28–31 weeks' completed gestation), moderate preterm (32–33 weeks' completed gestation) and late preterm (34–36 weeks' completed gestation) .
Infant size for gestational age
Infant birthweight was obtained from the EHR. Large for gestational age was defined as birthweight >90th percentile and small for gestational age was defined as birthweight <10th percentile for the underlying KPNC population’s race-ethnicity and gestational age–specific birthweight distribution .
Cesarean delivery information was obtained from the KPNC neonatal and infant cohort and is defined according to ICD-9 codes 654.2× for delivery mode recorded in the EHR.
Recruitment to date and prevalence of outcomes of interest
Characteristics of RPGEH Prenatal Cohort Participants Compared to Non-Participants in Kaiser Permanente Northern California
Overall Denominator (n = 93,409)
Participants (n = 16,977)
Non-Participants (n = 76,432)
40 or older
Pre-pregnancy BMI (kg/m2)
20, 758 (25.9)
Gestational age at initiation of prenatal care
Length of enrollment before pregnancy (years)
Length of enrollment after pregnancy (years)
Perinatal Outcomes of the RPGEH Prenatal Cohort Participants Compared to Non-Participants in Kaiser Permanente Northern California
Overall Denominator (n = 80,086)
Participants (n = 14,757)
Non-Participants (n = 65,329)
Glucose Tolerance Status
< 37 weeks
≥ 37 weeks
Extreme Preterm (<28 weeks)
Severe Preterm (28–31 week)
Moderate Preterm (32–33 weeks)
Late Preterm (34–36 weeks)
≥ 37 weeks
Gestational weight gain in relation to the IOM weight gain guidelines
Infant birth weight (grams)a
3,361 ± 556
3,400 ± 565
3,350 ± 553
Infant birth weight (grams)
Very low birth weight (<1,500)
Low birth weight (1,500–2,500)
Normal birth weight (2,501–3,999)
Delivery room death
Substance use before and during pregnancy among RPGEH Prenatal Cohort Participants compared to Non-Participants in Kaiser Permanente Northern California
Overall Denominator (n = 87,928)
Participants (n = 16,584)
Non-Participants (n = 71,344)
Early Start Data
Smoking in 12 Months before Pregnancy
Monthly or Less
Smoking since Pregnancy
Monthly or Less
Alcohol in 12 Months before Pregnancy
Monthly or less
Alcohol since Pregnancy
Monthly or Less
Use of Prescription Drug in 12 Months before Pregnancya
Use of Prescription Drug since Pregnancy
Use of Illegal Drug in 12 Months before Pregnancyb
Use of Illegal Drug since Pregnancy
The use of RPGEH Pregnancy Cohort specimens and data are governed by the guiding principles of use and access established by the RPGEH. These principles include: 1) promote good science for the benefit of the public; 2) protect participant confidentiality and privacy; honor commitments made to participants and act within the scope of their consent; and preserve the trust that KPNC members have in KPNC; 3) comply with applicable legal and regulatory requirements; 4) consider whether the Resource is the best or only resource to address proposed research questions; 5) conserve limited materials or resources for high-value research, such as biospecimens, which can be exhausted, and use of biospecimens that are rare or of higher value because of the data associated with them; 6) ensure that an investigator at the KPNC Division of Research (DOR) is involved in the research question and the conduct of the study to ensure the right and appropriate use of the resources. Applications for use of RPGEH Pregnancy Cohort samples and data are submitted and reviewed by the RPGEH Access Review Committee (ARC). The ARC meets three times a year to review applications for use of RPGEH data and specimens. The ARC includes DOR investigators, plus external stakeholders and investigators to address specific content and methodological issues as required by the projects under consideration. The ARC governs access to and use of all RPGEH data and specimens by requestors. Applications for access will be subject to three phases of review, and the ARC’s decisions are made based on a formalized set of criteria that can be reviewed.
Statistical analyses, power and sample size considerations
Based on our expected cohort size of 25,000 women we computed power for hypothetical case-control studies. We assumed all available cases will be included and controls will be sampled at a ratio of 5:1. We computed the minimum detectable odds ratio (OR) for a two-sided test at level 0.05 and 80 % power for several outcomes with different prevalences. For the outcome of small for gestational age (prevalence 9.3 %) a case-control study will be powered to detect an OR of 1.15. For the outcome of gestational hypertension (prevalence 4.1 %) a case-control study will be powered to detect an OR of 1.22. For the rare outcome of very low birthweight (prevalence 0.7 %) a case-control study will be powered to detect an OR of 1.57.
This report provides a brief overview of the establishment of the KPNC RPGEH Pregnancy Cohort and its biorepository, which were created to provide a resource for women’s and children’s health research. The KPNC RPGEH Pregnancy Cohort is uniquely integrated into routine clinical prenatal care within the KPNC health care system setting and can be linked with data from the EHR. KPNC contains a racially and ethnically diverse population, thereby increasing the likelihood of obtaining a highly representative sample with generalizable findings.
The establishment of this valuable resource has the potential to address many key questions related to women’s and children’s health and is particularly timely, in light of the recent dissolution of The National Children’s Study. The National Children’s Study (NCS) was developed after a 1990s White House Task Force highlighted the paucity of evidence evaluating the links between environmental exposures, development, and health outcomes in children and adults. The Children’s Health Act of 2000 initiated the conduct of a national longitudinal study of environmental influences (including physical, chemical, biological, and psychosocial) during pregnancy on child health and development. A recent report explains that this study was dissolved due to feasibility and oversight issues [21, 22] and suggests that funding agencies support smaller focused studies designed as tailored explorations as well as cohorts to facilitate longitudinal biospecimen collection and banking.
This large pregnancy cohort, derived from a diverse base population, can be used to generate sets of cases and controls for future clinical research studies, as demonstrated by our preliminary data. The availability of rich clinical data from the EHR, the questionnaire data, and existing perinatal research programs provide detailed phenotypic information that will further facilitate the conduct of perinatal epidemiology and translational studies. The RPGEH Pregnancy Cohort, coupled with the state of the art KPNC Biorepository for long-term storage of serum, plasma and DNA samples and an ability to follow both women and their child long term for future health outcomes in the EHR, provides a truly unique and valuable resource for improving our understanding of women and children’s health.
Our preliminary data on the RPGEH Pregnancy Cohort demonstrate that at least 18.2 % of pregnant women participated, and the cohort is highly representative of the underlying KPNC pregnant population in terms of both maternal demographics and key perinatal outcomes. Pregnancy cohort participants were KNPC members on average 10 years before their index pregnancy and remained members on average 2.7 years after pregnancy to date, and most are still currently KPNC members. Thus, there is a unique ability to examine exposures even years before pregnancy and to follow women and their infants for years after delivery. While participating women were slightly more likely to be non-Hispanic white and less likely to be Asian, this pattern is frequently observed in cohort studies with multiethnic populations such as KPNC women of reproductive age. Overall, the RPGEH Pregnancy Cohort is extremely diverse, with 53 % of participants from non-white racial ethnic minority groups, and Asian women comprise 23 % of the cohort. This is especially significant as Asian women have previously been reported as less likely to participate in reproductive and biospecimen research [22, 23]. The racial-ethnic diversity of this population provides important potential for studies examining racial-ethnic disparities in diseases and health care delivery. Given the recruitment efforts integration within clinical care, it is possible that not all pregnant women at participating medical facilities were invited to participate in the pregnancy cohort; therefore, 18.2 % is likely an underestimate of the overall participation rate.
The prevalence of several perinatal complications was similar between RPGEH cohort participants and the underlying populations of women delivering in KPNC. Cohort participants were slightly less likely to have gestational diabetes mellitus (GDM) and infants of participants were slightly more likely to be macrosomic relative to non-participants. The lower prevalence of GDM among RPGEH participants is probably due in part to the fact that participants were less likely to be Asian and more likely to be non-Hispanic white; in this setting, Asian woman have the highest prevalence of GDM [15, 24] and non-Hispanic white women have the lowest prevalence of GDM.
The fetal origins of adult disease hypothesis posits that “fetal programming” occurs when maternal metabolic nutrition, environment and hormonal milieu during development permanently programs the structure and physiology of organs and hence the future health of the offspring . While there is some epidemiologic evidence supporting the “fetal programming” hypothesis, more longitudinal, observational studies examining the effects of a broad range of environmental and biological factors assessed in utero are needed to clarify the extent to which fetal programming contributes to adult diseases. In addition, a woman’s health status during pregnancy may also influence her future health . For example, women diagnosed with pregnancy-related hypertension and/or preeclampsia, gestational diabetes and preterm birth are at higher risk for hypertension, diabetes and cardiovascular disease later in life . Therefore, given the rich health data in the KPNC EHR, the RPGEH Pregnancy Cohort will also allow for a lifecourse research approach .
The resource is available to be used by Kaiser Permanente researchers as well as outside investigators who wish to collaborate with a Kaiser Permanente researcher to conduct biomarker, genetic, environmental and gene environment interaction studies. The RPGEH Pregnancy Cohort has the unique ability to connect biospecimens collected at two time points during pregnancy with detailed short- and long-term environmental and clinical data on both women and their children, enabling research of immediate perinatal complications as well as longer term maternal, child, and adult outcomes.
Access review committee
Division of Research
Electronic health record
Gestational diabetes mellitus
Kaiser Permanente Northern California
Last menstrual period
National children’s study
Research Program on Genes Environment and Health
Serum separator tube
This work was supported by grant RC2 AG036607 from the National Institutes of Health, grants from the Robert Wood Johnson Foundation, and Kaiser Permanente Northern California Community Benefit. We are grateful to the Kaiser Permanente Northern California Members who have generously agreed to participate in the Research Program on Genes, Environment and Health.
This work was supported by grant RC2 AG036607 from the National Institutes of Health, grants from the Robert Wood Johnson Foundation, and Kaiser Permanente Northern California Community Benefit. Drs. Hedderson, Avalos, Ferrara and Croen received support from UG3OD0 23289 for this work.
Availability of data and material
The datasets generated during and/or analysed during the current study are not publicly available due to the fact that the data used for this study contain protected health information (PHI). Kaiser Permanente IRB policies prohibit releasing PHI. Data are available from the Kaiser Permanente Division of Research for researchers who meet the criteria for access to confidential data from the corresponding author on reasonable request.
MMH. Overseeing the data analysis, interpretation of the data and drafting the manuscript and revising it critically for important intellectual content. AF. Drafting the manuscript and revising it critically for important intellectual content. LAA. Drafting the manuscript and revising it critically for important intellectual content. SKV. Made significant contributions to the design and drafting the manuscript. EPG. Drafting the manuscript and revising it critically for important intellectual content. DKL. Drafting the manuscript and revising it critically for important intellectual content. AA. Involved in acquisition of the data and drafting the manuscript. SW Involved in acquisition of the data and drafting the manuscript. SR Involved in acquisition of the data and drafting the manuscript. CS. Made significant contributions to the conception and design and drafting the manuscript. LAC. made significant contributions to the design and drafting the manuscript. TF. Drafting the manuscript and revising it critically for important intellectual content. FX. Acquisition of data and data analysis. VC. Acquisition of data and data analysis. All read and approved the final manuscript.
The authors declare that they have no competing interests.
Consent for publication
Ethics approval and consent to participate
We obtained ethics approval and consent from the human subjects committee of the Kaiser Foundation Research Institute; the project reference number is CN-05CScha-04-H. Ethical approval covers all sites included in the study. All study participants provided written informed consent and all data assessment tools and components for the RPGEH pregnancy cohort have been approved by the human subjects committee of the Kaiser Foundation Research Institute.
Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
- Dabelea D, Crume T. Maternal environment and the transgenerational cycle of obesity and diabetes. Diabetes. 2011;60:1849–55.View ArticlePubMedPubMed CentralGoogle Scholar
- Gluckman PD, Hanson MA, Cooper C, Thornburg KL. Effect of in utero and early-life conditions on adult health and disease. N Engl J Med. 2008;359:61–73.View ArticlePubMedPubMed CentralGoogle Scholar
- Ben-Shlomo Y, Kuh D. A life course approach to chronic disease epidemiology: conceptual models, empirical challenges and interdisciplinary perspectives. Int J Epidemiol. 2002;31:285–93. 11980781.View ArticlePubMedGoogle Scholar
- Bellamy L, Casas JP, Hingorani AD, Williams DJ. Pre-eclampsia and risk of cardiovascular disease and cancer in later life: systematic review and meta-analysis. BMJ. 2007;335:974.View ArticlePubMedPubMed CentralGoogle Scholar
- Bellamy L, Casas JP, Hingorani AD, Williams D. Type 2 diabetes mellitus after gestational diabetes: a systematic review and meta-analysis. Lancet. 2009;373:1773–9.View ArticlePubMedGoogle Scholar
- Catov JM, Newman AB, Roberts JM, et al. Preterm delivery and later maternal cardiovascular disease risk. Epidemiology. 2007;18:733–9.View ArticlePubMedGoogle Scholar
- Fraser A, Nelson SM, Macdonald-Wallis C, et al. Associations of pregnancy complications with calculated cardiovascular disease risk and cardiovascular risk factors in middle age: the Avon Longitudinal Study of Parents and Children. Circulation. 2012;125:1367–80.View ArticlePubMedPubMed CentralGoogle Scholar
- Krieger N. Overcoming the absence of socioeconomic data in medical records: validation and application of a census-based methodology. Am J Public Health. 1992;82:703–10.View ArticlePubMedPubMed CentralGoogle Scholar
- Spitzer RL, Williams JB, Kroenke K, Hornyak R, McMurray J. Validity and utility of the PRIME-MD patient health questionnaire in assessment of 3000 obstetric-gynecologic patients: the PRIME-MD Patient Health Questionnaire Obstetrics-Gynecology Study. Am J Obstet Gynecol. 2000;183:759–69.View ArticlePubMedGoogle Scholar
- Spitzer RL, Kroenke K, Williams JB. Validation and utility of a self-report version of PRIME-MD: the PHQ primary care study. Primary Care Evaluation of Mental Disorders. Patient Health Questionnaire. JAMA. 1999;282:1737–44.View ArticlePubMedGoogle Scholar
- Kroenke K, Spitzer RL, Williams JB, Monahan PO, Lowe B. Anxiety disorders in primary care: prevalence, impairment, comorbidity, and detection. Ann Intern Med. 2007;146:317–25. 17339617.View ArticlePubMedGoogle Scholar
- Armstrong MA, Lieberman L, Carpenter DM, et al. Early Start: an obstetric clinic-based, perinatal substance abuse intervention program. Qual Manag Health Care. 2001;9:6–15.View ArticlePubMedGoogle Scholar
- Weight Gain During Pregnancy. Reexamining the Guidelines. Washingtion: National Academies Press; 2009.Google Scholar
- Selby JV, Newman B, King MC, Friedman GD. Environmental and behavioral determinants of fasting plasma glucose in women. A matched co-twin analysis. Am J Epidemiol. 1987;125:979–88.View ArticlePubMedGoogle Scholar
- Ferrara A, Kahn HS, Quesenberry C, Riley C, Hedderson MM. An increase in the incidence of gestational diabetes mellitus: Northern California, 1991–2000. Obstet Gynecol. 2004;103:526–33.View ArticlePubMedGoogle Scholar
- Committee opinion no. 504: screening and diagnosis of gestational diabetes mellitus. Obstet Gynecol. 2011;118:751–3.View ArticleGoogle Scholar
- Hedderson MM, Darbinian JA, Sridhar SB, Quesenberry CP. Prepregnancy cardiometabolic and inflammatory risk factors and subsequent risk of hypertensive disorders of pregnancy. Am J Obstet Gynecol. 2012;207:68–9. 22727352.View ArticlePubMedPubMed CentralGoogle Scholar
- Goldenberg RL, Culhane JF, Iams JD, Romero R. Epidemiology and causes of preterm birth. Lancet. 2008;371:75–84. 18177778.View ArticlePubMedGoogle Scholar
- Ehrlich SF, Crites YM, Hedderson MM, Darbinian JA, Ferrara A. The risk of large for gestational age across increasing categories of pregnancy glycemia. Am J Obstet Gynecol. 2011;204.
- Escobar GJ, Fischer A, Kremers R, Usatin MS, Macedo AM, Gardner MN. Rapid retrieval of neonatal outcomes data: the Kaiser Permanente Neonatal Minimum Data Set. Qual Manag Health Care. 1997;5:19–33.View ArticlePubMedGoogle Scholar
- National Institutes of Health. National Children’s Study (NCS) Working Group: Final Report - December 12, 2014. 2014.Google Scholar
- Talaulikar VS, Hussain S, Perera A, Manyonda IT. Low participation rates amongst Asian women: implications for research in reproductive medicine. Eur J Obstet Gynecol Reprod Biol. 2014;174:1–4.View ArticlePubMedGoogle Scholar
- Lee CI, Bassett LW, Leng M, et al. Patients’ willingness to participate in a breast cancer biobank at screening mammogram. Breast Cancer Res Treat. 2012;136:899–906.View ArticlePubMedPubMed CentralGoogle Scholar
- Hedderson M, Ehrlich S, Sridhar S, Darbinian J, Moore S, Ferrara A. Racial/ethnic disparities in the prevalence of gestational diabetes mellitus by BMI. Diabetes Care. 2012;35:1492–8.View ArticlePubMedPubMed CentralGoogle Scholar
- Barker DJ. The origins of the developmental origins theory. J Intern Med. 2007;261:412–7. 17444880.View ArticlePubMedGoogle Scholar
- Saade GR. Pregnancy as a window to future health. Obstet Gynecol. 2009;114:958–60. 20168094.View ArticlePubMedGoogle Scholar
- Callahan T, Stampfel C, Cornell A, et al. From Theory to Measurement: Recommended State MCH Life Course Indicators. Matern Child Health J. 2015;19:2336–47. 26122251.View ArticlePubMedPubMed CentralGoogle Scholar | <urn:uuid:77b5083f-a190-488e-8f44-60e33f8a39ce> | CC-MAIN-2017-17 | https://bmcpregnancychildbirth.biomedcentral.com/articles/10.1186/s12884-016-1150-2 | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118743.41/warc/CC-MAIN-20170423031158-00012-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.874483 | 7,746 | 2.71875 | 3 |
The two preceding chapters have drawn a general picture of the origin and evolution of the quasars, as seen in the light of the Reciprocal System of theory, and have presented sufficient evidence from observation to show that this theoretical picture is a true and accurate representation of the physical phenomena. This accomplishes the specific objectives of the work, which were, first, to produce the general explanation of the quasars that conventional theory has been unable to discover, and second, by so doing to demonstrate the ability of the Reciprocal System to account for the phenomena of the far-out regions of the universe in the same comprehensive and accurate way in which it explains the basic physical relations that have been the primary subject matter of previous publications. Further development of the details of the behavior of the quasars and associated objects is a task for the astronomers, who have the facilities for gathering the additional observational information that will be required. There are, however, a few conclusions with respect to some of these details that can be drawn from the data already available, and since these points will serve, to some degree, as further confirmation of the findings of the previous chapters, they will be discussed briefly before concluding the presentation.
In essence, this chapter will be a sort of catch-all for the things that should be said, but did not fit into the previous discussion. Because of the heterogeneous character of this material no attempt will be made at a systematic order of presentation, except that we will deal first with those items that are most directly connected with the subject matter of Chapters VIII and IX, and will then turn our attention to the various lines of inquiry that are opened up by the results of this initial phase of the supplementary investigations. It should be noted that some of the conclusions reached in this portion of the work are less firmly based than those of the two preceding chapters, and may require a certain amount of modification in the future as the accumulation of observational data continues.
One significant, and hitherto unexplained, item that is clarified by the theoretical development is the existence, in some quasars, of absorption spectra that are redshifted by different amounts. The stellar explosions that initiate the chain of events leading to the ejection of a quasar from the galaxy of origin reduce these stars mainly to kinetic and radiant energy. The remainder is broken down into dust and gas particles. A portion of this material penetrates into the sections of the galaxy surrounding the sector where the explosions are taking place, and when one such section is ejected as a quasar it contains some of this fast-moving dust and gas. Inasmuch as the maximum particle speeds are above the requirement for escape from the gravitational attraction of the individual stars, a part of this material ultimately assumes the form of a cloud of dust and gas around the quasar—an atmosphere, we might call it—and the radiation passes through this cloud, giving rise to absorption lines. This material is moving at nearly the same speed as the quasar itself, and the absorption redshift is therefore approximately equal to the emission redshift.
From a consideration of the various factors involved, we may deduce that in many instances the fragment of the original galaxy that is ejected as a quasar contains stars of such an advanced age that they reach the destructive limit and explode while the quasar is moving outward. This not only increases the amount of dust and gas, but may also release sufficient energy to increase the speeds of some of the particles by one or more additional units of motion in time. If one unit is added to the original single unit, the two time units that are now effective are equivalent to 8-2, or 6, space units, and the explosion speed of the particles involved therefore becomes 3 z½ rather than 3.5 z½. The quasar radiation passing through particles moving at this speed acquires an absorption spectrum with a redshift z + 3z½. Further additions to the explosion speeds of the gas and dust particles have a similar effect, the general equation applicable to n units in time being ½(8-n) z½ equivalent space units. It should be noted, however, that in the general situation two of the single units of motion in time are required to increase the two-dimensional speed of objects moving faster than light by one unit. This does not apply to the first unit, since the unit speed in space due to the normal recession is also a unit speed in time (that is, one unit of space per unit of time), hence one single unit of motion in time results in a change from unit one-dimensional speed to unit two-dimensional speed. Beyond this point two time units must be added to increase n by one unit. Where the available energy amounts only to the equivalent of one additional time unit we therefore find intermediate values of the absorption; i.e., 3.25 z½, and so on. The additional explosions occurring within the quasar have no effect on the speed of the quasar as a whole, and that speed remains constant regardless of the changes that are taking place in the motions of the constituent particles. This quasar speed never exceeds one time unit (redshift factor 3.5) because the portions of the galaxy of origin overlying the nucleus where the explosions are taking place are not able to offer sufficient resistance to permit the pressure in the interior to build up to the point where it would eject the quasar at a two-unit speed. Indeed, as we will see later, the ejection may take place even before the pressure is high enough to produce a one-unit speed in time, in which case no quasar is formed. Of course, if the galaxies of origin were larger a higher pressure would be possible, but, as pointed out earlier, the limiting age of matter establishes a galactic age limit, which automatically limits galactic size, and the existence of larger galaxies is therefore precluded.
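The arithmetic of the redshift-factor relation can be put in a brief computational form. The sketch below is illustrative only; the function names and the sample emission value are mine, not the author's. It computes the factor ½(8-n) for n equivalent units of motion in time, the total redshift z + f·z½ produced at a given factor f, and inverts the emission relation z_em = z + 3.5 z½ to recover the recession component z:

```python
import math

def redshift_factor(n):
    """Redshift factor for n equivalent units of motion in time: (8 - n) / 2.

    n = 1 gives 3.5 (the quasar as a whole); n = 2 gives 3.0, etc.
    Intermediate energy increments yield intermediate factors (3.25, ...).
    """
    return (8 - n) / 2.0

def total_redshift(z_recession, factor):
    """Total (emission or absorption) redshift: z + factor * sqrt(z)."""
    return z_recession + factor * math.sqrt(z_recession)

def recession_from_emission(z_emission, factor=3.5):
    """Invert z_em = z + factor * sqrt(z) for the recession redshift z,
    solving the quadratic in sqrt(z)."""
    sqrt_z = (-factor + math.sqrt(factor ** 2 + 4.0 * z_emission)) / 2.0
    return sqrt_z ** 2

# Illustrative (assumed) emission redshift of 1.14: the recession
# component is 0.09, and the predicted absorption redshift at the
# next factor down (3.0) is 0.09 + 3.0 * 0.3 = 0.99.
z = recession_from_emission(1.14)            # -> 0.09
z_abs = total_redshift(z, redshift_factor(2))  # -> 0.99
```

This is the computation carried out, in effect, in columns 1–3 of Table IV: subtract the 3.5 z½ term from the emission redshift to obtain the recession component, then add back f·z½ at each lower factor.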
Unfortunately, the amount of observational information available for the purpose of checking these theoretical conclusions is very limited. In fact, only one of the quasars thus far investigated has a system of absorption redshifts that is extensive enough to enable a good comparison with the theory. As can be seen from Table IV, however, the results of this one full-scale test of the theory are very satisfactory. Column 1 of the table gives the number of equivalent units of spatial motion added to the total quasar motion (and redshift) by the explosion speed (the redshift factor, as it was called in the preceding paragraph). Column 2 is the corresponding excess redshift. In column 3 the recession redshift, which was subtracted from the emission redshift to arrive at the excess redshift under the conditions applying to the quasar as a whole (redshift factor 3.5), is added back again to give us the total redshift at the particle speed which is attained by reason of the energy released in the additional stellar explosions. This is the theoretical absorption redshift at the specified redshift factor. The values in column 4 are the observed redshifts, the emission values being distinguished by the notation “em.”
As the table shows, seven absorption redshifts have been reported for the quasar PKS 0237-23, of which two are listed as only “possible.” There are five theoretical absorption redshifts in the range of redshift factors from 3.5 to 2.0, and all of these are represented in the list of seven observed values, within the margin that can be ascribed to additional motion of a random character. In one instance, two of the observed redshifts are close enough to be identified with the same theoretical value. The only reported value that cannot be theoretically accounted for is one of the “possible” redshifts (1.595).
The three other quasars included in the table have absorption redshifts that agree with the values calculated on the basis of the first increment of particle motion; that is, with a redshift factor of 3.25. All but one of the remaining absorption redshifts reported to date (1970) are close to the emission values and thus readily understandable theoretically. The single exception is a 0.6128 measurement for the quasar PHL 938. If the interpretation of the spectrum that arrives at this result proves to be valid, the absorption in this instance must be due to some phenomenon other than that responsible for all of the other absorption redshifts that have thus far been measured. Although the amount of observational information available for correlation with the theoretical deductions is small, the agreement is so close that it constitutes a rather strong case in favor of the validity of the theoretical development. But we do not have to rely entirely on this mathematical correlation, as the theoretical conclusion that the absorption phenomenon is due to additional stars reaching the destructive limit and exploding during the outward progress of the quasar can be verified by other means, inasmuch as such explosions have further consequences which we can subject to examination.
An interesting point in this connection is that the 1.891 absorption redshift given for PHL 5200 in Table IV is a relatively recent value that was not found in earlier observations of this quasar, and E. M. Burbidge suggests that a change may have occurred in the emission from this object.41 While the evidence is not sufficient to establish conclusively that such an event did take place, this is just the kind of a thing that the theory predicts: the appearance of a new lower absorption redshift either because of additional stellar explosions in the quasar or because some high-speed material already present has moved out to where it can cause absorption.
Explosions of this kind could occur relatively soon after the original ejection, as some stars of a very advanced age might be present in the ejected fragment of the original galaxy. Obviously, however, the probability of reaching a limiting age increases with time, and the older quasars are therefore the most likely to have complex absorption spectra. Because of the outward progression, the nearby regions, with the possible exception of those that are very near, contain only relatively young individuals, whereas the more distant regions contain not only the young quasars that have originated there, but also some older ones that have moved outward since their origin. The average age of the visible quasars thus increases with distance. Furthermore, there is an additional selection effect because the secondary explosions in the older quasars increase the intensity of the radiation and thus enable locating them at distances where the younger and fainter units cannot be detected. On the basis of these considerations we may deduce that extensive absorption systems should be found preferentially at the maximum distances.
Some observers have concluded that there is a general correlation between distance and the presence of absorption spectra. The Burbidges, for instance, point out that 7 of the 14 quasars with z greater than 1.9 in their Table 3.1 have absorption lines, and they conclude that “at present the admittedly poor statistics suggest that the presence of absorption lines is strongly correlated with large redshift.”42 Other investigators have contested this assertion. It should therefore be noted that our theoretical development indicates that absorption lines with redshifts approximately equal to the emission redshift are possible for all quasars, but that the multiple redshift systems should be confined mainly to the more distant regions. In this connection it is significant that all of the quasars listed in the upper section of the Burbidge Table 3.7, the quasars with mare than one absorption line, have redshifts in the neighborhood of 2.00.
The various changes that take place during the aging of the quasars necessarily affect their spectral characteristics, and it should therefore be possible to gain some further insight into this aging process by analyzing the quasar spectra. A detailed analysis of this kind is a complex task that is beyond the scope of this work, but we can get a good indication of the general situation without having the complete picture. We can expect, for instance, that the evolution of the quasars from the early to the later stages will be accompanied by color changes. If we can identify certain specific color characteristics that vary systematically with the quasar age, this will be sufficient for present purposes, even though we are not able, as matters now stand, to produce a full explanation of the origin of these changes.
A study of the various possibilities has indicated that the color measurement most suitable for this application is the difference between the quasar magnitude as measured through an ultraviolet filter and that measured through a blue filter, the U-B index, as the astronomers call it. In normal stars, those that fall on the so-called “main sequence,” color is related to the optical magnitude, and the U-B index is positive; that is, more energy is received in the blue range. The index is also positive in ordinary galaxies, which are composed mainly of such stars. By reason of the inversion that takes place when the speed of light is exceeded, the theoretical development indicates that in the quasars the color should be related to the magnitude of the radio emission rather than to the optical magnitude, and the U-B index should be negative, indicating that more energy is received in the ultraviolet range. All of the U-B values quoted herein will be negative, and should be so understood. Figure 8 demonstrates that the U-B value is, in fact, independent of the optical magnitude; as the theory predicts. For purposes of this diagram the absolute magnitude has been calculated on the basis of the l/d relation previously established, taking an excess redshift value of 1.00 as a standard. The radio flux measurements that will be used later will be converted to the absolute basis in the same manner. There is a large scatter in the diagram, which includes the values for all of the quasars in the Burbidge Table 3.1 within the indicated magnitudes, and the amount of scatter increases somewhat with the magnitude, but it is evident that in this magnitude range,
the range with which we will be mainly concerned, the average U-B value is constant, and hence there is no systematic relation between U-B and optical magnitude.
In examining the other side of the theoretical proposition, the conclusion that the U-B value should be related to the radio flux, we find the situation complicated by the existence of two distinct classes of quasars, a fact that has not hitherto been recognized. For purposes of classification we will establish dividing lines at U-B = 0.60 and at an absolute radio flux (R.F.) of 6.0 measured at 178 MHz. All quasars with U-B values of less than 0.60 will be placed in Class I. Those which have higher U-B but low R.F. (below 6.0) are continuous with the low U-B quasars in their properties and will also be placed in Class I. The high R.F: high U-B quasars form a discontinuous group with quite different properties, and they will constitute Class II.
Figure 9 shows the relation between U-B and R.F. for those of the Class I quasars listed in the Burbidge Table 5.1 for which the necessary information is available. The scales of the diagram are inverted in order to conform to the conventional practice of showing increas-
ing age from left to right and decreasing activity from top to bottom. When the quasar is first ejected from the galaxy of origin its constituents are in a state of extreme activity and its radio flux is abnormally high. Only one of the quasars included in the present study is still in this very early stage. This is 3C 196, which has U-B = 0.43 and absolute R.F. = 48.3. Its quasar distance is 0.817, which makes it the most distant Class I quasar in the Burbidge table.
After the initial spurt of activity dies down to some extent, the quasar can be found in the zone designated “early” in the upper left of the diagram. As it ages and its activity drops still farther it moves to the right (toward lower R.F.) and downward (toward higher U-B). Ultimately it passes the zero flux line and enters the radio quiet zone.
The quasars that are included in the group under consideration were originally located by radio observation and later identified optically. As Fred Hoyle puts it, “To the optical astronomer, radio data serves like a good dog on a hunt.” The capabilities of the radio facilities available at the time the observational work was done therefore
establish the limit to which the observations could be carried; that is, these facilities were capable of detecting a Class I quasar of the earliest type shown on the diagram at a certain maximum quasar distance, in the neighborhood of 0.900. It then follows that at distances less than this maximum the same facilities were capable of detecting less powerful radio sources, and the range of observed U-B values should therefore widen as the distance decreases. Figure 10 shows that this is true. The curved line in this diagram is a theoretical cut-off limit based on a linear relation between U-B and radio emission. This relation assumed for the purposes of the diagram is probably not accurate, but it is close enough to show that the observed Class I quasars are, in fact, subject to an observational limit of the nature required by the theory.
This limit is dependent entirely on the kind of equipment and techniques available, as the distribution of these Class I quasars in space is theoretically uniform from a large scale standpoint. As more powerful and versatile equipment becomes available, the cut-off limit will therefore move outward to greater distances. Considerable progress in this respect has already been made even in the short time since the Burbidge list which we are using for our analysis was published in 1967. At the moment, however, the fact in which we are interested is that, with the observational facilities available prior to the compilation of the 1967 list, Class I quasars—fast moving galactic fragments that are in essentially the same condition, aside from a decrease in activity, in which they were originally ejected from their galaxies of origin—could not be detected at distances greater than about 0.900, and those of this class that are old enough to have U-B indexes above 0.60 could not be detected beyond a distance of approximately 0.700.
The foregoing discussion is rather elementary, and it may seem to be belaboring the obvious point that stronger sources can be detected at greater distances. Its significance lies in the fact that there are other quasars with U-B indexes above 0.60 that can be detected beyond a quasar distance of 0.700. Indeed, we can follow them all the way out to the ultimate limit at 2.00. It is clear, then, that we have here a class of quasars that are not in the same condition in which they were originally ejected. In order to move into the observational range, these more distant quasars must undergo some process that releases a substantial amount of additional radiant energy at radio wavelengths.
We have already deduced that such a process exists. From a consideration of the absorption redshifts, we have concluded that secondary explosions occur in the older quasars due to the fact that some of the constituent stars reach the age that corresponds to the destructive limit. Obviously, this is just the kind of a process that is required in order to explain the emergence of a second class of quasars at distances beyond the observational limit of the Class I objects. A very important point here is that a secondary explosion of this kind is a natural sequel to the explosion in the galaxy of origin. That original explosion was initiated when the oldest stars in the galaxy reached the age limit. The stars in the ejected fragment that became the quasar were younger, but many of them were also well advanced in age, and after another long period of time some of them must also arrive at the destructive limit.
The original stellar explosions occurred outside the portion of the galaxy that was ejected as a quasar, and the radio emission from the Class I quasar is mainly a result of the extremely violent push that caused the ejection. On the other hand, the secondary explosions occur in the body of the quasar itself, and the emission from the Class II
quasars therefore comes directly from the exploding stars. This difference in origin is reflected in the relation between the U-B index and the radio flux, and hence we are able to utilize this relation to draw a definite distinction between the two classes. Figure 11 is a plot of U-B vs. R.F. for the Class II quasars in the Burbidge table. As can be seen, the points representing these objects fall entirely outside the section of the diagram occupied by the quasars of Class I. There is no indication in this diagram that the Class II quasars follow any kind of an evolutionary pattern, but we will give this question some consideration later in another connection.
The quasar 3C 273 is of particular interest. This is definitely a Class II quasar, according to the criteria that have been defined, but its distance is far out of line with that of all other known objects in its class. No other Class II quasar in the group now under examination has a recession redshift of less than 0.052, equivalent to a quasar distance of about 0.800, whereas the quasar distance of 3C 273 is only 0.156. Ordinarily we can consider that when we measure the redshift of an object we are also determining its maximum possible age, as this age cannot be greater than the time required to move out to its present position. On this basis we would interpret the low redshift of 3C 273 as an indication that it is an unusually young Class II quasar. This could be true. It was pointed out in the earlier discussion that the secondary explosions may occur relatively soon after the original ejection, inasmuch as some of the stars in the galactic fragment ejected as a quasar may already be near the age limit at the time of ejection. Very young Class II quasars are therefore definitely possible, but the absence of Class II quasars between 3C 273 and distance 0.800 suggests that they must be very rare.
But 3C 273 is not necessarily young. It may be very much older than the 0.156 distance would indicate, as the general relation between redshift and age does not hold good at the very short distances where the magnitude of the possible random motion is comparable to that of the recession. Two galaxies that are separated by a distance in the neighborhood of their mutual gravitational limit can maintain this separation indefinitely, and the width of the zone in which the relative motion can be little or none at all is increased considerably if there is random motion with an inward component. Hence 3C 273 may have spent a long time somewhere near its present position and may be just as old as the quasars at distances around 0.800.
The observational evidence available at the moment is not adequate to enable making a definite decision between these alternatives, but where we have a choice between attributing a seeming abnormality to a chance coincidence that has resulted in an object of an unusual type being located very close to us, or attributing it to another kind of abnormality which we know that the object in question does possess—its proximity—the latter is clearly entitled to the preference pending the accumulation of further evidence. We therefore conclude tentatively that 3C 273 is about as old as the Class II quasars in the vicinity of distance 0.800. The position of 3C 273 in Figure 11 is indicated by a triangle. As can be seen from the diagram, this quasar is among the weaker radio emitters in its class (although we receive a large radio flux from it because it is so close), but so far as its properties are concerned it is not abnormal, or even a borderline case. Its proximity therefore provides a unique opportunity to observe a member of a class of objects that can otherwise be found only at great distances.
While each quasar as a whole is moving at a speed in excess of the speed of light, and the same is true of most of the constituent stars, the particles of matter of which these stars are composed are mainly moving at less-than-unit speeds in the early stages of the existence of the quasar. The radiation from these stars therefore has the normal characteristics of stellar radiation even though the ultra high speed of the quasar results in a distribution of this radiation only in two dimensions. At the time of ejection, however, the quasar also contains some matter that is moving at speeds in excess of unity, and the radiation from this matter is emitted in two dimensions only; that is, it is polarized. Some of this polarized radiation is depolarized in passing through the force fields of the surrounding stars, but a portion of it gets through unchanged, and we therefore find that an appreciable portion of the radiation received from a young Class I quasar, or from any Class II quasar, is polarized.
The percentage of polarization of the radiation as received is not an accurate measure of the magnitude of the original two-dimensional emission because of the great variability in the amount of depolarization, which depends on a number of factors, including the density and other properties of the matter present along the line of travel of the radiation. Some indication of the extent of this variability in the depolarization can be gained by examination of the polarization of the pulsed radiation received from the pulsars. For reasons which will be explained in Chapter XII, the radiation from these objects should be completely polarized as emitted, and lower polarization measurements therefore constitute evidence of depolarization.
Studies of the pulsar PSR 0833, the second youngest object of this kind thus far discovered, show the radiation as received to be 100 percent polarized,43 indicating that there is no modification on the way out of the region of origin. R. N. Manchester reports that in PSR 2022-I-51 the “polarization of the leading component is essentially complete,”44 and he finds the polarization of two other pulsars to be as high as 90 percent. In most instances, however, a substantial amount of depolarization is indicated. The maximum polarization measured in the pulsed radiation from the pulsar NP 0532 in the Crab Nebula, for example, is a little over 25 percent,45 and some of the measurements on other pulsars have produced still lower values.
It can be anticipated that there will be a similar variability in the depolarization occurring in the quasars, but because of the large number of individual radiation sources in each quasar, the average depolarization of quasar radio emission should be relatively constant. We may therefore conclude that the polarization P of the radiation received from the quasars is equal to the polarization of the emitted radiation Po multiplied by a constant factor kl, the depolarization factor; that is, P = klPo.
Radiation from thermal sources has a small component extending into the radio region, but this thermal radiation accounts for only a negligible portion of the radio emission from the quasars and associated objects. Almost all of the radiation that is received from these objects at radio wavelengths is radiation of the inverse, or cosmic, type that originates at speeds greater than that of light. Processes that would result in radiation of wavelength l/n (in natural units) if they took place at speeds less than unity produce radiation of wavelength n when they occur at speeds greater than unity. The natural unit of distance, 0.456×105 cm, is in the wavelength range of visible light. The cosmic equivalent of thermal radiation is therefore in the ultraviolet and x-ray range (which explains the negative U-B index of the quasars) and the cosmic gamma rays are received at radio wavelengths.
Thus it is wholly unnecessary to postulate the existence of complicated processes involving highly improbable physical conditions in order to explain this radiation. The radio emission is a perfectly normal result of a normal physical phenomenon, as any such common and ubiquitous product must be. It is a natural consequence of violent explosions that disrupt atomic structures, differing from the similar emission of x-rays and gamma rays in violent events only in that it is a product of the inverse process.
Inasmuch as motion at speeds in excess of that of light takes place in only two dimensions, radiation, which is a motion, is confined to these two dimensions, and all radiation emitted from atoms moving at these speeds is completely polarized. The total radiation from a quasar, or any other galaxy, that does not contain any appreciable number of very old stars, is practically constant over the relatively short active lifetime of a Class I quasar. The amount of polarized radiation from an average quasar of this class at any time during its active period is therefore proportional to the polarization; that is, Ep = k2P0. We have previously found that P0 = P/k1. Substituting the latter value for P0 in the energy equation, we then obtain EP = (k2/k1) P = kP. The average quasar has a specific distribution of radiation frequencies, and for such a quasar this energy equation is applicable to any given range of frequencies, as well as to the total radiation. We thus arrive at the conclusion that the energy received from a Class I quasar at radio wavelengths, the radio flux, is proportional to the polarization.
A direct comparison between these two quantities produces results that are consistent with this theoretical conclusion, but because of the large scatter in the diagram due to uncertainties in the polarization measurements and the lack of integrated values of the polarization, interpretation of the results obtained in this manner is somewhat ambiguous. The best way of establishing the validity of the theoretical finding appears to be a demonstration that the decrease in polarization of the Class I quasars with increasing age (as indicated by the U-B index) follows the same path as the decrease in radio flux (Figure 9).
Figure 12 duplicates Figures 9 and 11, substituting polarization for radio flux. It covers all of the quasars from the 3C catalog listed in the Burbidge Table 5.1 for which the necessary data are available. Polarization values are not given in the Burbidge work and they have therefore been taken from other sources, mainly the measurements at 21.2 cm reported by Bologna, et al,46 In view of the large variations in the polarization measurements by different investigators it has seemed advisable to have the benefit of a second set of values, and since no other measurements at the same wavelength are available, the results at 6 cm reported by Sastry, et al,47 have been averaged in with the 21.2 cm values in the following manner: The average polarization at 21.2 cm was compared to that at 6 cm for those quasars on which both measurements are available, and it was found to be 0.673 of the 6 cm average. The 6 cm values as reported were therefore reduced by the factor 0.673 and then averaged with the corresponding 21.2 cm values for purposes of Figure 12. Where no 6 cm measurement was reported, the 21.2 result was utilized without modification. This method of combining the two sets of data is based on the assumption that the radio spectra of the quasars conform to a general pattern within a reasonable margin of variation. This should be true for all but the exceptional objects, and the combination should therefore give us values which are more reliable than either set of observations individually.
Within the accuracy of the observations, the evolutionary path of the Class I quasars in this diagram, represented by the open circles, is identical with the trend of the R.F. values for these same quasars in Figure 9. In the early stages, when the U-B index is low, the effects of the forces exerted during the ejection are still very much in evidence, the polarization is consequently high, and the quasar is located in the upper left of the diagram. As it ages and the violent activity subsides, the polarization decreases, moving the quasar toward the right, and the U-B index increases, moving the quasar downward on the diagram, following the same course as the radio flux in Figure 9.
The Class II quasars, represented by the filled circles, occupy a different region of the diagram, quite distinct from the Class I region, just as they did in Figure 11. Actually two of the quasars are on the wrong side of the dividing line, but some discrepancies can be expected, in view of the uncertainties in the polarization measurements. Both of the deviant cases are in the group for which we have only the 21.2 cm measurement. As in Figure 11, the quasar 3C 273 occupies a normal position in the diagram, indicated by a triangle. This location is well away from the boundary line, where there is no doubt as to the proper classification of the quasar.
Like the previous comparison of R.F. and U-B in Figure 11, the Figure 12 diagram gives no indication of a Class II evolutionary pattern, except that the cut-off line at the right of the diagram drops sharply as it approaches zero polarization. This lack of a definite trend is quite understandable. In Class I there is a specific initial point. At the time of ejection, the violent activity is at a maximum, and as the quasar ages this activity gradually decreases, together with the visible indicators of that activity. In Class II, however, the activity is not initiated by a single event, but by a series of explosions of individual stars which extends over a very long period of time. We have seen that, aside from possible exceptions such as 3C 273, the explosions do not occur in any substantial numbers earlier than the age corresponding to a quasar distance of about 0.800, but they can begin at any later time. The radiation from the Class II quasars therefore consists of a mixture of components of different ages.
It is possible, nevertheless, to arrive at some conclusions about the evolution of these objects on the basis of the theoretical deductions that have been verified in application to other phenomena, including the Class I quasars. The polarization, as we have seen, is an indication of the amount of violent activity, and for the Class II quasars a relatively high polarization means that the large-scale series of stellar explosions which raises the quasar from the radio-quiet to the Class II status has only recently begun. We may therefore distinguish the recent arrivals in Class II from those of longer standing by their higher polarization. Inasmuch as the Class II quasars do not make their appearance at all (with the usual exception of 3C 273) before a quasar distance of about 0.800, we can expect that the distance range immediately above 0.800 will contain a preponderance of quite recent entries into Class II. This expectation is borne out by the fact that three of the four quasars below a distance of 0.900 have polarizations well above the average for the total group included in the study, and the quasar with the shortest distance also has the highest polarization. However, the quasar 3C 186 at distance 0.984 almost equals this maximum, indicating that here, too, the onset of the secondary explosions is quite recent, and there is still another quasar in this group that has a polarization above average and is located as far out as 1.272. This evidence suggests that for the quasar population as a whole the secondary explosions are a continuing phenomenon throughout the entire range beyond quasar distance 0.800.
Such a conclusion is completely in accord with the theory, inasmuch as the growth process of the galaxies results in a continuous distribution of stellar ages from the oldest downward. As soon as the oldest of the constituent stars of a galaxy or a galactic fragment (aside from a few survivors of previous explosive events) arrives at the destructive limit a regular succession of explosions follows. We may also corroborate this conclusion observationally by examining the relation between the quasar distance and the magnitude of the radio flux. We have already found that the average age of the quasars increases with distance. If the explosions persist throughout the range
of distances of the Class II quasars then there should be a gradual build-up of energy, and the average output of radiation from these objects should increase with distance rather than remaining constant in the manner of the Class I quasars. Figure 13 shows that such an increase actually takes place in the radio flux from the 3C quasars listed in the Burbidge Table 5.1. There is a range of variation at each distance, as would be expected, but the minimum R.F. more than doubles between 0.800 and 2.000, and the maximum value increases in an almost parallel line.
With these findings as to the nature of the developments during the life period of the Class II quasars, we have now completed the task of tracing the progress of the quasars from the time they are ejected from the galaxy of origin to the time that they acquire unit speed in the two inactive dimensions and disappear into the region of motion in time. In order to place the quasar in its proper perspective, however, it should be emphasized that the existence of this object is not an isolated phenomenon—something that may happen under some special conditions—it is a segment of the great cycle of physical existence, something that eventually happens, in one way or another, to all matter. The quasar state is an integral part of the physical cycle; it is the connecting link between the old in the material sector of the universe and the new in the inverse, or cosmic, sector. | <urn:uuid:df2acded-71ef-4c5f-924a-840242a57f18> | CC-MAIN-2017-17 | http://library.rstheory.org/books/qp/10.html | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917127681.84/warc/CC-MAIN-20170423031207-00488-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.953566 | 7,771 | 3.265625 | 3 |
Saturday, September 25, 2010
Article below from The Evening Post, 16 February 1934.

The Faith flew from Muriwai Beach on 17 February 1934, carrying mail from New Zealand to Australia. This was the first official trans-Tasman airmail flight, the forerunner of our airmail service today.
Sunday, May 23, 2010
While searching Papers Past tonight for anything to do with the Kaipara, I came across a few snippets about the North Kaipara Lighthouse. The photo above clearly shows the lighthouse with a small house behind it, whereas current photos show the lighthouse standing by itself with nothing but bush, beach and sand surrounding it. The history of this lighthouse was found at Historic.Org.NZ, which says this:
The Kaipara North Head Lighthouse is one of only a small number of remaining timber lighthouses in New Zealand. Constructed in 1883-84, it was erected to guide shipping across the treacherous Kaipara bar. The Kaipara, on the west coast of Northland, is New Zealand's largest natural harbour and became the focus for the widespread exploitation of kauri timber around its shores in the late nineteenth and early twentieth centuries. Such timber was a major construction material, exported throughout New Zealand and overseas. Erected on a sandstone outcrop, the structure lay some 8 km from the nearest settlement at Pouto, in an area dominated by large drifting sand dunes. It was part of a larger complex, which included residences for the two permanent lighthouse keepers and their families, as well as a signal station and other ancillary buildings. Earlier Maori occupation had occurred in the immediate vicinity, reflecting an ongoing relationship between human settlement and the maritime environment.
The lighthouse was originally 13.4m tall, with a tapering six-sided tower, three storeys high. It was topped by a large lantern, and erected with a small basement in its concrete footings to allow a weight mechanism for the clockwork light to be used. Unlike other similar towers, prefabricated and imported from Britain, the structure was constructed of local materials. Most notably, the walls of the tower were designed with an external and internal skin of local kauri, while the core was erected of basalt rubble from Mt Eden, Auckland. This has enabled the building to withstand extreme winds. The structure was designed by John Blackett (1818-1893), an important individual in New Zealand's nineteenth-century engineering history, possibly assisted by Captain Robert Johnson. Johnson had put forward the earliest scheme for national lighthouse coverage of New Zealand's shores in the 1860s, stimulating a major programme of lighthouse construction by the colonial government that lasted into the 1880s. Timber structures allowed such work to be carried out at a lesser cost, and the Kaipara light proved to be the last of the coastal lighthouses erected during this period. The lighthouse and its auxiliary buildings were built at a cost of £5,571.
The light in the lantern was imported from Britain, and was first exhibited in December 1884. The cramped interior of the lighthouse was mostly used for storage and also contained trap doors in its centre to accommodate the weights associated with the operation of the light. Although the light was visible for up to 37 km in good conditions, crossing the bar remained treacherous, and within a few months of it starting operation the lighthouse keepers played host to shipwreck victims from three vessels: the Anabell, Mary Annison and Mathieu. Extreme conditions were suffered at the station as well as at sea, with shifting sands and erosion being a major problem. This eventually led to the keepers and their families moving to nearby Pouto from the early 1900s, with an extra keeper being taken on to allow two keepers to remain at the station on rotation. Many of the station buildings succumbed to the wind or were relocated at this time.
At the outbreak of the Second World War (1939-1945), the light - which was by then of gas automatic type - was extinguished, and in 1944 the original lantern was removed and eventually set up at Cape Saunders, where it remains. A smaller lantern from Cape Foulwind was subsequently installed on the top of the tower, incorporating a new automatic light. The light was relit on the same day in November 1947 that the Kaipara was formally closed as a port of entry, although the harbour still remained open for local traffic. The lighthouse was finally closed in 1955 or early 1956, after which it became derelict. A preservation society was formed in the early 1970s to restore the building, supported by the local community at Pouto. The building and the associated station site were subsequently placed under the management of the New Zealand Historic Places Trust, who still operate the place as a historic property. The lighthouse remains popular, with an estimated 35,000 tourists visiting the site annually in the late 1990s, in spite of its remote location.
The Kaipara North Head Lighthouse is historically significant as a prominent reminder of the role that shipping and coastal transport has played in the social and economic development of New Zealand. It has archaeological value as the site of an associated lighthouse station. The place has considerable architectural significance as one of few remaining timber lighthouses in New Zealand, as the only 1880s example believed to be constructed locally of New Zealand materials, and for its association with John Blackett - an important nineteenth-century engineer. The Kaipara North Head Lighthouse has cultural significance for its close connections with the local community at Pouto.
What happened to the little houses around it? Doc.Govt.NZ says that when the lighthouse was automated in 1952 they were all barged to various other locations. I wonder if the current owners of these houses realize their history?
Sunday, April 18, 2010
Posted using ShareThis
By Bob McNeil
A century-old steamboat that once provided adult entertainment for lonely fishermen is being restored in Northland.
In its time the SS Miverva has hauled kauri logs across the Kaipara Harbour and has been used as a platform for viewing America’s Cup races.
Now it is being restored to its original self as a steam ferry – minus the brothel onboard.
With at least 100 years behind her, the Minerva is a shipload of stories to tell. Originally built to ferry more than 200 passengers between Auckland and Clevedon, the ship eventually became one of the largest pleasure launches on the Kaipara Harbour.
“It’s fairly well authenticated that she stayed out there [the Chatham Islands] as an entertainment centre for horny fishermen,” says shipwright John Clode.
Now the Minerva has been stripped back to her hull. The timber is in surprisingly good condition, but the wheelhouse and bridge will have to be re-built.
The plan is to have the old ferry linked with the Kaw Kawa vintage railway, where steam – rather than steamy excursions – will be unrivalled anywhere else in the world.
Monday, April 5, 2010
Vintage engines, antique motor mowers and historic aircraft will be brought to life next week in an interactive experience for the Karaka Museum.(story from 3 news)
The Karaka Historical Society, along with the Vintage Engine Restorers Auckland (VERA), will host the Karaka Vintage Day on March 28 to showcase the machinery used to shape New Zealand during the last century.
Money raised from the event will be donated to the Karaka Museum’s building fund to help house the artifacts previously on display at the Karaka War Memorial Hall.
As well as engines and equipment, historical buffs will have the opportunity to view classic cars, military vehicles and even a vintage Tiger Moth aeroplane flyover at the biennial event.
Iconic Kiwi brand Masport will also be represented with an impressive array of machinery, from milking machines to lawnmowers, manufactured by the company over the past 100 years.
The spokesperson for the Karaka Historical Society Rob Higham says the Vintage Day event, which started in 2000, attracts a large number of people from around the country.
“We try and ensure there is something for the whole family. The vintage displays are a real draw card along with the parade of vehicles, machinery and tractors. Children can enjoy horse and wagon rides while their parents can spend time perusing the vendor stalls for books, jewellery, clothes, art and collectibles.”
Higham says the new Karaka Museum is being constructed to display much of the vintage memorabilia, as its former home at the Karaka War Memorial Hall was too small to display the large collection.
The General Manager of Masport, Steve Hughes, says the company is proud to be showcasing products from its 100 year history at the Karaka Vintage Day.
“Partnering with the Karaka Historical Society and VERA was an obvious connection,” he says. “We are looking forward to contributing and celebrating our rich Kiwi history with both groups and the public on this great occasion.”
The Karaka Vintage Day 2010 will be held on Sunday, March 28th at the corner of Linwood and Blackbridge Roads, Karaka. Gates open at 9.30am.
Friday, April 2, 2010
Does anyone know any history about it?
Thursday, March 18, 2010
We meet Isabella Matheson-Curry. The formation of the Parish owes so much to her and her husband Phillip Edward Curry, that any history must include a tribute to them. Isabella's first husband was John Gilmer Matheson, who settled in Wellsford in 1885, on a block running approximately from what is now Batten Street, down Rodney Street to Station Road, and roughly up a line ending below the present town water supply reservoir.
John Mateson was then a single man, but married Isabella Kirton in 1891. The couple had five daughters. The youngest, Linda, is now a resident of Heritage Rest Home which was once the farm house where she was born more than 90 years ago.
John died in 1902, and Isabella struggled along, managing the farm and her five children, until 1908, when she married Phillip Curry, an Australian working on the railway project which was by the underway. Isabella and Phillip had three more daughters. The names of the Matheson girls appear in the roll of the Old Wellsford School: Ida 1898, Thelma 1900, Edith 1902, Ettie 1906 and Linda 1909. They were not then Catholic; it was Phillip Curry who brought the Catholic faith to the Matheson household.
At that time Wellsford was mostly farmland. There were no railways, shops or public hall, and virtually no roads - just a handful of farmhouses, and the Curry house, important to this story, was one of them. It still stands where it always did, but today we know it as the heritage Rest Home. Between times it had other owners, notably Fred and Aileen Preece and their family, staunchly Catholic.
Over the years, Isabella and Phillip donated or sold land for several community projects, especially the public hall now called the Community Centre. More important to this story was the gifting of two lots of land to the Catholic Bishop of Auckland. the first gift was made in 1918, and a further block, intended for a school, was donated in 1925.
(information here - photo by M Brookfield)
Tuesday, March 16, 2010
It's about a young lady called Chloe Mainwell who lives in England during the Victorian era who travels to France to stay with relatives so she can make her debut into society but an incident with her uncle leaves her parents with the impression that she has been spoiled and therefore they push her into a loveless marriage with Thomas Yates who has recently bought land and has a farm in New Zealand on the Kaipara.
Chloe starts off as a somewhat spoiled girl without any idea of how difficult life can be but gradually through circumstances that happen she grows up and learns how to be tough and eventually learns to love the land she once hated.
Kate Stirling has also written another novel which is vintage rather than Victorian in nature called Thunder Children also about early life in the Kaipara - if you can I urge you to see if your library has a copy of these books as they are well worth reading.
(image from coolslaw)
Monday, February 1, 2010
As per the previous entry about Captain John Austen here he is in the Schooner "Marion" sending a letter to a friend about his experience in a hurricane. He hasn't had much luck with shipping vessels has he?
Article from Evening Post 10th April 1880 - Papers Past.
Wednesday, January 27, 2010
John married Anne Willcox on the 12th October 1858 in the St Barnabas Church in Auckland. Anne was 17 years old whereas John is listed as 28 on the marriage certificate but was actually 36. He is also listed as a "widower". The marriage seems to have been a happy one and they had 9 children, 5 boys, and 4 girls, (information from Capt Austen's blog) one of which was my great grandfather Alfred Edward Austen.
John Austen is listed in paper's past as being a well known sea captain and various advertisements for cargo proves this, however one particular article from NZ Historical Data states that the ship Sea Breeze was wrecked through fault of Mr Austen:
Ship "Sea Breeze"
However the West Coast Times 24th January 1872 at Papers Past says this:AJHR 1872 Return of Wrecks
Date of Casualty : 25 Oct 1871
Name of Master : John AUSTEN
Age of Vessel : 10 years
Rig : Schooner
Register Tonnage : 70
Number of Crew : 6
Number of Passengers : 1
Nature of Cargo : Guano
Nature of Casualty : Stranded; total loss
Number of Lives Lost : None
Place of Accident : Reef at NW end of Staarbuck Island
Wind Direction : ESE
Wind Force : 5
Finding of Court of Inquiry
Master blamed for wreck. Loss believed to have been caused either by drunkenness (as in case of "Marwell", lost by him on Tiri Tiri
about 3 years ago, and for which his certificate of service was taken away) or from a desire to show off the capabilities of his
vessel, which had the reputation of a smart sailor.
It sounds like Captain Austen is free of blame and that the heavy rollers on the starboard side of the ship were so huge that they rocked the vessel over which cause it to shipwreck.
In fact several newspapers such as the Evening Post 1880 state that Captain Austen went on to take charge of other vessels like the Schooner "Marion" and the Wanganui Herald says as captain he left for Onehunga in the boat "Glenelg". Back then I would've thought that if he'd been found responsible for drunkenly wrecking a boat thereby endangering lives and cargo, he would have been fined and/or jailed.
There are more newspaper articles and journeys by Mr John Austen but I will share those in another post.
Sunday, January 24, 2010
Australia and its surrounding islands were settled by colonists from the British Isles in the late eighteenth and early nineteenth centuries, beginning with a penal colony established on the site of the modern city of Sydney in 1788. Tasmania (known as Van Diemen's Land in the eighteenth and nineteenth centuries and now part of Australia) was also established as a British penal colony in the early 1800s. Transportation of convicts continued through the mid-1800s. Many free immigrants also settled in Australia and Tasmania, especially during the 1850s when they were attracted by the wool industry and a series of gold rushes.
The first Europeans to settle in New Zealand were Christian missionaries who came in the 1800s to convert the native Maori. The Maori initially welcomed European settlers, but as more and more flooded in, displacing the Maori, conflicts erupted into the Land Wars of the 1860s and 1870s. Native Australians, dubbed Aborigines by European settlers, did not fare well as colonization spread, but modern novelists recognize the positive aspects of their culture.
- Quote from Historical Novels.
My personal review is that I really enjoyed this book. I've been researching my family tree for a while now and found it interesting to know that my immigrant ancestors could have or would have lived similarly in early New Zealand. I liked reading how the main character William Pollard, an escaped convict escaped from the ship he was being transported on and swam to the nearby Bay of Islands - back then was called Kororareka, met up with the local Maoris and married one of them. Although this is a fictional novel the author captures the history well.
This is definitely worth reading and is one of my favourites - I highly recommend it.
Friday, January 22, 2010
The late Alexander Chapman of Hokianga, a notion of whose decease appeared in our issue (N.Z. Herald) of 16 March, was one of the few remaining who were contemporary with the earliest settlers in New Zealand.
He was born at Dunbar, Scotland, on the 2nd of February, 1805. His father was a lieutenant in the navy, and died in India when Alexander was quite a child. His mother with her two sons, then removed to London.
At the age of eleven years Alexander went with Sir Edward Parry's expedition to the Arctic regions, and retained some lively recollections of the severity of the cold there.
After his return he was indentured for seven years to William Yateman, shipbuilder, Deptford. About 1828 he arrived in Sydney, N.S.W., and in 1830 came to New Zealand with the late Mr. G.F. Russell to superintend, at Horeke, Hokianga, the construction of the first large ship in New Zealand, viz, the Sir George Murray, of 400 tons. When completed, Chapman along with several Ngapuhi chiefs took passage in her to Sydney. Shortly afterwards he returned to Hokianga and returned to his trade.
Being of frugal habits, Chapman saved enough money to enable him to live in easy circumstances in his old age. In 1858 he took his daughter (his only child, now the wife of Mr. George Martin, pilot of Hokianga Heads) to Scotland to be educated, returning himself to New Zealand the same year.
Chapman had vivid remembrances of the "early days' if Hokianga, and among his papers is a very interesting account of what he calls "The Battle of Pork."
It appears that some natives on the Mangamuka River had looted the house of a European named Ryan. Mr G.F. Russell, at the head of fifty Europeans and about 400 natives, armed with muskets and a small cannon from Te Horeke, started for Mangamuka to punish the offenders..
On arriving near the pa of the offending natives, the latter, frightened at the formidable appearance of the attacking party, fled, leaving behind them their canoes, two muskets, a quanity of potatoes, and 150 pigs. So the whole affair was accomplished without bloodshed, save that the pigs killed to satisfy the whetted appetites of the triumphant warriors. Would that subsequent battles would be no more sanguinary.
For the last 20 years Mr. Chapman lived with his daughter at Omapere, surrounded by his grandchildren. About a fortnight before his death he caught a severe cold, which with natural causes hastened his decease. The Rev. T.A. Joughin was with him just before he died, and to him Mr. Chapman expressed the happiness he experiences through faith in Christ. Thus, his end was "quietness and assurance." He was interred in the Pakanae Cemetery on Sunday afternoon, 10th March. Of the two 'old hands" in this district only two remain now - R. Hardiman, over 80, and Frank Bowyer, nearly 100 years.
Thursday, January 21, 2010
Wednesday, January 20, 2010
Monday, January 18, 2010
Here's some of the names you can find in the book along with stories of how they came to be in the area they lived in.
- Yvonne Rust
- Sister Ivy Driffill
- Ethel Maude Sands
- Iritana Rangi Kamara Randell
- Mary Ann Matthews
- Daisy Schepens
- Dame Whina Cooper
- Violet Pau
- Caroline Bedlington
- Marie King
- Hannah Chiffinch Hare
- Susannah Cullen
- Anka Matich
If you're looking for photos you can find some here at the Whangarei Library website, the book has been privately published. | <urn:uuid:16511d88-f9a5-4b82-8fea-e5b022456380> | CC-MAIN-2017-17 | http://northlandhistory.blogspot.co.nz/2010/ | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120349.46/warc/CC-MAIN-20170423031200-00425-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.977872 | 4,327 | 2.671875 | 3 |
Hillary Clinton’s Comprehensive Agenda on Mental Health
Today, Hillary Clinton announced her comprehensive plan to support Americans living with mental health problems and illnesses—by integrating our healthcare systems and finally putting the treatment of mental health on par with that of physical health. Nearly a fifth of all adults in the United States, more than 40 million people, are coping with a mental health problem. Close to 14 million people live with a serious mental illness such as schizophrenia or bipolar disorder. Moreover, many of these individuals have additional complicating life circumstances, such as drug or alcohol addiction, homelessness, or involvement with the criminal justice system. Veterans are in acute need of mental health care, with close to 20% of those returning from the Iraq and Afghanistan wars experiencing post-traumatic stress or depression. And the problem is not limited to adults: an estimated 17 million children in the United States experience mental health problems, as do one in four college students.
Americans with mental health conditions and their families need our support. The economic impact of mental illness is enormous, at nearly $200 billion per year nationwide in lost earnings, and the human cost is worse. Too many Americans are being left to face mental health problems on their own, and too many individuals are dying prematurely from associated health conditions. We must do better. To date in this campaign, Hillary has set out policies that will direct support to individuals with mental health problems and their families—including a detailed agenda to support military service members and veterans, an initiative to end America’s epidemic of drug and alcohol addiction, and a robust caregivers’ agenda. Today, she is building on those proposals with a comprehensive agenda on mental health. Hillary’s plan will:
- Promote early diagnosis and intervention, including launching a national initiative for suicide prevention.
- Integrate our nation’s mental and physical health care systems so that health care delivery focuses on the “whole person,” and significantly enhance community-based treatment.
- Improve criminal justice outcomes by training law enforcement officers in crisis intervention, and prioritizing treatment over jail for non-violent, low-level offenders.
- Enforce mental health parity to the full extent of the law.
- Improve access to housing and job opportunities.
- Invest in brain and behavioral research and developing safe and effective treatments.
As a down payment on this agenda, Hillary will convene a White House Conference on Mental Health during her first year as President. Her goal is that within her time in office, Americans will no longer separate mental health from physical health when it comes to access to care or quality of treatment. The next generation must grow up knowing that mental health is a key component of overall health and that there should be no shame, stigma, or barriers to seeking out care.
Early Diagnosis and Intervention
Most mental health conditions have their origins in childhood and adolescence. But today, two-thirds of children with mental health problems receive no treatment at all, and children in high-risk groups – such as those in juvenile justice settings, in the child-welfare system, or whose mothers experienced depression during or after pregnancy – are particularly underserved. The consequences of delayed and inadequate treatment for children and young adults with mental health problems play out over decades. For instance, adolescents with serious mental illness are about three times more likely to drop out of school and twice as likely to die prematurely as peers without such problems. Hillary is committed to expanding early diagnosis and treatment of mental health conditions, and preventing them when possible. As president, she will:
- Increase public awareness and take action to address maternal depression, infant mental health, and trauma and stress in the lives of young children. Hillary will ensure that the public health and early education communities receive needed information and action steps to address maternal depression, infant mental health, and trauma and stress. New studies show that as many as 1 in 5 women develop symptoms of depression, anxiety, or mental health disorders in the year after giving birth. The U.S. Preventive Services Task Force now recommends that women be screened for depression during pregnancy and after giving birth. We also know that infant mental health depends on children forming close and secure relationships with the adults in their lives, and that too many children are growing up in environments that cause them to experience trauma or develop stress. Hillary will build on innovative state Medicaid practices to increase screenings for maternal depression, infant mental health, and toxic stress, with the goal of these screenings becoming standard practice in Medicaid.
- Scale up efforts to help pediatric practices and schools support children facing behavioral problems. Hillary believes we must redouble our efforts around early screening and intervention – and that means training pediatricians, teachers, school counselors, and other service providers throughout the public health system, to identify mental health problems at an early age and recommend appropriate support. There are many promising state and local programs aimed at early detection and intervention of mental health problems, such as Positive Parenting Program, Nurse Family Partnership, Typical or Troubled Program, Mental Health First Aid, Incredible Years, and Massachusetts’ Child Psychiatry Access Project (MCPAP). Hillary will fund promising programs like these by increasing the set-aside in the Mental Health Block Grant for early intervention from 5% to 10% of the annual budget, and she will move this from set-aside funding to a stand-alone program.
- Help providers share information and best practices. People experience mental illness in a variety of ways, with symptoms often differing even with the same illness. The Early Psychosis Intervention Network (EPINET) is a platform at the National Institute of Mental Health, which serves as a centralized source of information, data, and best practices to providers and clinicians who treat psychosis. Hillary will support EPINET and other efforts like it that enable mental health practitioners to share information, and she will build on what works.
- Ensure that college students have access to mental health services. Mental health and well-being are integral to campus success. Hillary will encourage every college to put in place preventive services, comprehensive treatment and coverage of services, and an interdisciplinary team (including but not limited to school leadership, faculty, students, and personnel from counseling, health services, student affairs, and the office supporting students with disabilities) to oversee the campus’s mental health policies and programming. Hillary will also strengthen support for under-resourced schools that serve a disproportionate number of low- and middle-income students and communities of color, and she will help those schools improve coordination of care with local clinical providers.
Federal Support for Suicide Prevention
Suicides, which are usually fueled by mental illness, are rising among numerous population groups, from adolescents and college students to veterans and older adults. The overall rate of suicide increased by 24 percent between 1999 and 2014, and is now at its highest level in 30 years. Over 40,000 Americans die of suicide every year, making it the tenth-leading cause of death nationally. As the former director of NIMH, Dr. Tom Insel, often notes, suicides have 11 victims: the person who dies, and at least 10 people close to them who will never be the same. Hillary believes that suicide is a critical issue that she will prioritize as president. She will:
- Create a national initiative around suicide prevention across the lifespan that is headed by the Surgeon General: As president, Hillary will move toward the goal of “Zero Suicide” that has been promoted by the Department of Health and Human Services. She will direct all relevant federal agencies, including HHS, the VA, and the Department of Education, to research and develop plans for suicide prevention in their respective settings, and create a cross-government initiative headed by the Surgeon General to coordinate these efforts. She will also launch a citizen input and feedback mechanism, to enable outside groups to comment on agency recommendations, and explore how we can harness technology to reach out to people who need support.
- Encourage evidence-based suicide prevention and mental health programs in high schools. In 2013, a survey of high school students revealed that 17 percent considered attempting suicide in the last year, with 8 percent actually attempting it. The suicide rate among American Indian/Alaska Native adolescents is even higher, at 1.5 times the national average. There are effective ways to respond. It is critical that school districts emphasize evidence-based mental health education, so that students, teachers, and school nurses are aware of the warning signs and risk factors of mental illness and how to address them. The Model School District Policy on Suicide Prevention, released by four leading mental health organizations, includes concrete recommendations that school districts can follow. Hillary will direct the Department of Education to emphasize mental health literacy in middle and high schools and will work with regional and national PTA, school counselor associations, and associations of secondary school principals to encourage school districts to adopt this model policy.
- Provide federal support for suicide prevention on college campuses. Hillary believes that every college campus should have a comprehensive strategy to prevent suicide, including counseling, training for personnel, and policies that enable students to take leave for mental health. Such multi-layered approaches have a proven track record of decreasing suicides. For instance, the Air Force launched an initiative in 1996 that brought together multiple intervention programs and reduced the suicide rate among Air Force personnel by nearly a third in under a decade. Groups such as the Jed Foundation, American Foundation for Suicide Prevention, the Suicide Prevention Resource Center, and Active Minds have created frameworks around suicide prevention tailored for colleges and universities. Hillary will dramatically increase funding for campus suicide prevention, investing up to $50 million per year to provide a pathway for the country’s nearly 5,000 colleges – whether private or public, two-year or four-year – to implement these frameworks on behalf of students.
- Partner with colleges and researchers to ensure that students of color and LGBT students are receiving adequate mental health coverage. Evidence suggests that the psychological needs of students of color are disproportionately unmet, impeding their ability to adapt to college life. LGBT students face added burdens as well, with gay youth being four times more likely than their straight peers to attempt suicide. Hillary will direct the Departments of Education and Health and Human Services to work with universities, researchers and community programs to determine how best to meet and respond to the challenges these students face and to provide specialized counseling.
Integrate our Healthcare Systems and Expand Community-Based Treatment
Demand for mental health services far outpaces supply, and our health care system lacks the treatment infrastructure and behavioral health workforce necessary to provide adequate care. Of adults experiencing any mental illness today, nearly 60% are untreated. Of those with a serious condition, 40% are untreated. We need to close the treatment gap, and ensure that there is no wrong door to access care. Hillary’s plan will:
- Foster integration between the medical and behavioral health care systems (including mental health and addiction services), so that high-quality treatment for behavioral health is widely available in general health care settings. The responsibility for mental health care and substance use disorders is increasingly falling on general care providers. One study finds that today, over a third of patients with mental disorders who use the health care system are treated by primary care providers. While some primary care providers offer excellent care, many do not receive dedicated training in treating mental illnesses. In addition, the medical and behavioral health care systems are highly segmented—to the point that when a patient has both conditions, providers have trouble collaborating and jointly managing the patient’s treatment plan. Hillary believes we should break down the barriers between medical and behavioral health care, and move the system so that it focuses on the whole person. System integration could yield $26-$48 billion in savings annually to the health care system, with $7-$10 billion coming from Medicaid alone. That is why Hillary will:
- Expand reimbursement systems for collaborative care models in Medicare and Medicaid. Collaborative care is a model of integrated care that treats mental health and substance use conditions in primary care settings. A team of health care professionals, including a primary care doctor, a care manager, and a behavioral health specialist, works together to coordinate the patient’s care. These integrative approaches not only produce better medical outcomes and patient satisfaction, they also result in significant savings to the health care system. Hillary will expand reimbursement structures in Medicare and Medicaid for collaborative care by tasking the Center for Medicare and Medicaid Innovation to create and implement such new payment models. She will also issue recommendations on best practices for private plans.
- Promote the use of health information technology to foster coordination of care. Hillary will adjust payment systems in Medicare, Medicaid, and under the Public Health Service Act, to allow for reimbursement of tele-psychiatry and other telehealth services delivered through primary care and hospital settings.
- Promote the use of peer support specialists. Peer support specialists have been shown to provide needed, cost-effective services for individuals with mental health conditions and addiction. Hillary will support initiatives to include peers in clinical care teams in primary care settings, mental health specialty care settings, hospitals, and Accountable Care Organizations. She will encourage all 50 states to reimburse peer services in state Medicaid programs, which 30 states do currently, and continue providing the Consumer and Consumer Supporter Technical Assistance Center grants.
- Encourage states to allow same-day billing. Many state Medicaid programs prohibit payments for mental health services and primary care services furnished to the same individual on the same day. This results in unnecessary obstacles to care and segmentation of health care practices. Hillary will issue best practices guidance to states, encouraging them to lift this restriction.
- Support the creation of high-quality, comprehensive community health centers in every state. A 2014 law established a demonstration program in eight states, under which new benefits would be available to health centers certified by the federal government as Certified Community Behavioral Health Clinics (CCBHCs). To be a CCBHC, a clinic must provide a range of physical and mental health services, including emergency psychiatric care, treatment for mental health and substance use disorders, and peer support. In return, the clinic can receive reimbursement at rates similar to those received by federally-qualified health centers. Hillary will invest $5 billion over the next ten years to scale up this demonstration project and help bring it to every state in America. This will vastly expand community-based treatment, by enabling thousands of health centers across the country (i.e., FQHCs, CMHCs, etc.) to upgrade to an integrated center.
- Launch a nationwide strategy to address the shortage of mental health providers. The United States is already experiencing shortages in its mental health workforce, and those shortages are projected to worsen in the coming years. For example, there are only 8,300 practicing child and adolescent psychiatrists today, which is one provider per 38,000 children. Moreover, there is an increasing need for mental health professionals to be trained in cultural competency, so that they can deliver effective care to different populations. Hillary will launch a national strategy to bolster our mental health workforce, pulling together the Substance Abuse and Mental Health Services Administration, Health Resources and Services Administration, Center for Medicare and Medicaid Services, Indian Health Service, Department of Education, and public and private partners. This cross-governmental initiative will aim to: recruit more persons into the mental health fields; expand resources for mental health training, from loan forgiveness programs, to scholarships, to grants for training programs and additional GME funding; disseminate telehealth systems so that providers can reach underserved populations remotely; and expand culturally competent care.
Improve Outcomes in the Criminal Justice System
Today, our criminal justice system is increasingly becoming the “front line” of engagement with individuals with mental health problems. Law enforcement officers routinely have to intervene or respond to unfolding situations that involve individuals with mental illness. As many as 1 in every 10 police encounters may be with individuals with some type of mental health problem. And our county jails today house more individuals with mental illness than our state and local psychiatric hospitals. Hillary believes that while greater investments in prevention and community-based treatment for behavioral healthcare will minimize these encounters with the criminal justice system, there are also specific steps we should take to improve outcomes for those individuals who do end up interfacing with law enforcement. She will:
- Dedicate new resources to help train law enforcement officers in responding to encounters involving persons with mental illness, and increase support for law enforcement partnerships with mental health professionals.Even though an increasing number of police encounters or use-of-force incidents involve people with mental health problems, law enforcement officers receive minimal training in how to handle such situations. According to one study, the average police officer receives only 8 hours of training for crisis intervention, which is far below the recommended amount. Hillary will ensure adequate evidence-based training for law enforcement on crisis intervention and referral to treatment, so that officers can properly and safely respond to individuals with mental illness during their efforts to enforce the law.
- Prioritize treatment over punishment for low-level, non-violent offenders with mental illnesses. Over half of prison and jail inmates today have a mental health problem, and up to 65% of the correctional population meets the medical criteria for addiction. Many of these individuals are first-time or nonviolent offenders, whose prospects for recovery and reentry would be far enhanced were they to participate in diversionary programs rather than serve time in jail. Hillary will increase investments in local programs such as specialized courts, drug courts, and veterans’ treatment courts, which send people to treatment and rehab instead of the criminal justice system. She will also direct the Attorney General to issue guidance to federal prosecutors, instructing them to prioritize treatment over incarceration for low-level, non-violent offenders. Finally, she will work to strengthen mental health services for incarcerated individuals and ensure continuity of care so that they get the treatment they need.
Enforcing Mental Health Parity
The Mental Health Parity and Addiction Equity Act of 2008, which Hillary proudly co-sponsored, requires that mental health benefits under group health plans be equal to benefits for other medical conditions. The Affordable Care Act built on this important law by requiring that insurance plans offered in the individual and small group markets offer mental health coverage as an essential health benefit. But while the right laws are on the books, they are too often ignored or not enforced. Millions of Americans still get turned away when seeking treatment for mental illness, even when the interventions are well-established and evidence-based. A recent report published by the National Alliance on Mental Illness suggested that a patient seeking mental health services is twice as likely to be denied coverage by a private insurer as a patient seeking general medical care. As part of her commitment to fully enforcing the mental health parity law, Hillary will:
- Launch randomized audits to detect parity violations, and increase federal enforcement.Hillary will ensure that the Departments of Labor and HHS have the authority they need to conduct randomized audits of insurers, to determine whether they are complying with the parity law. She will direct both agencies to bring appropriate enforcement actions against insurers, and to make their enforcement actions more transparent so that the general public is more aware when insurers violate the law.
- Enforce disclosure requirements so that insurers cannot conceal their practices for denying mental health care.The parity legislation provided the Departments of Labor and HHS the power to demand key information from insurers on the medical management decisions they use to deny care for behavioral health care. This information is essential for the government and patients to be able to identify and prove parity violations. Hillary will direct the DOL and HHS to fully enforce the disclosure requirements—requiring that plans specifically disclose how their non-quantitative treatment limitations comply with the parity law—and she will work to ensure that public insurers are subject to the same transparency.
- Strengthen federal monitoring of health insurer compliance with network adequacy requirements. The list of providers that health insurers give to beneficiaries should adequately reflect the providers who are in-network and provide care to patients with that insurance. Hillary will ensure that insurers provide up-to-date lists on mental health provider networks, so patients know where to get care.
- Create a simple process for patients, families, and providers to report parity violations and improve federal-state coordination on parity enforcement. Hillary will direct the Departments of Labor and HHS to issue clear, easy-to-follow guidance on where to report parity complaints, and to publish data on complaints the agencies received and how they responded. She will also ensure that patients and families are aware of consumer hotlines that they can call to understand their rights under the parity law, and navigate the complaint and appeals processes. Finally, she will direct officials to work with the National Association of Insurance Commissioners as well as state leaders, patient advocates, and other key stakeholders to set milestones and hold one another accountable to improve parity enforcement across-the-board.
Housing and Job Opportunities
Hillary supports a full range of housing and employment support for individuals with mental health problems, to help them lead independent and productive lives. As president, Hillary will:
- Expand community-based housing opportunities for individuals with mental illness and other disabilities. Hillary will launch a joint initiative among the Departments of Housing and Urban Development (HUD), Health and Human Services, and Agriculture to create supportive housing opportunities for thousands of people with mental illnesses and disabilities, who currently reside in or are at risk of entering institutional settings. As the Supreme Court held in the Olmstead decision, individuals with mental or physical disabilities should not be segregated in institutional settings when community-based services can be accommodated. Hillary’s new program will provide dedicated Housing Choice Vouchers and other critical assistance to individuals with mental illnesses or disabilities, enabling such persons to live independently while paying no more than 30% of their adjusted monthly income in housing costs. Public housing authorities will administer the new housing subsidies, while HUD will work with HHS and USDA as well as state mental health agencies to identify qualifying individuals. Hillary will dedicate an average of $100 million to this initiative per year over the next ten years. This funding builds on her stated commitment to expand support for community-based housing through the HUD Section 811 program, authorized by the Supportive Housing Investment Act of 2010.
- Expand employment opportunities for people with mental illness. Research has shown that supported employment helps people with mental illness avoid hospitalization, while also giving them the opportunity to earn money and contribute to society. The employment rate for people with serious mental illness is below 20 percent, even though many of these adults want to work and more than half could succeed with appropriate job supports. Hillary will work with private employers and state and local mental health authorities to share best practices around hiring and retaining individuals with mental health problems, and in adopting supported employment programs. That includes expanding HHS’s “Transforming Lives Through Supported Employment” program, which already assists states and communities in providing supported jobs to people with mental illness. Another area of focus will be encouraging employment for individuals with mental illness within the mental health sector itself, including as peer support specialists and recovery coaches.
- Expand protection and advocacy support for people with mental health conditions. Hillary will support and expand funding for the Protection and Advocacy for Individuals with Mental Illness (PAIMI) Program to ensure advocacy services for individuals with mental health conditions. These services make a critical difference for those who need reasonable accommodations for housing, employment, and other support and services.
Brain and Behavioral Science Research
We are still in the early stages of unraveling the mysteries of human brain development and behavior. Hillary believes we need a pioneering, multi-sector effort to transform our knowledge of this field—from mapping the human brain to generating new insights into what drives our behavior to investing in clinical and services research to understand the interventions that work best and how to deliver them to patients. Combining neurobiological research with behavioral, clinical, and services research will help us develop new therapies to help patients today while laying the foundation for future breakthroughs. Through it all, Hillary believes we must ensure that the resulting data and insights are widely available to researchers. As president, Hillary will:
- Significantly increase research into brain and behavioral science research. As part of a broad new investment in medical research, Hillary will provide new funding for the National Institutes of Health; build on cross-collaborative basic research efforts like the BRAIN initiative; scale up critical investments in clinical, behavioral, and services research; and integrate research portfolios with pioneering work on conditions like PTSD and traumatic brain injury already underway at DoD, the VA, and HHS. Together, these efforts will transform the landscape of funding for brain and behavioral research, and improve clinicians’ ability to detect and treat mental illness at the earliest stages.
- Develop new links with the private and non-profit sectors. Hillary will work with her biomedical research team to forge new links with the private and nonprofit sectors. In addition to the NIH, pioneering work in these fields is taking place at foundation-funded centers, academic institutions, and private firms. As she scales up investments in brain and behavioral research, Hillary will ensure that federal government efforts are aligned with those of other sectors to ensure that progress occurs as quickly as possible.
- Commit to brain and behavioral science research based on open data. Hillary understands that we must not only improve funding of brain and behavioral research but ensure that findings are widely shared. Beyond promoting research partnerships across sectors, Hillary believes that the way we fund research must change to fully embrace open science and data. The open science principles put forth by One Mind offer a useful guide, and the success of the Human Connectome Project serves as an important model. Hillary will work with leaders in the research community to structure grants in a way that promotes timely access to results for all researchers while preserving patient privacy.
Hillary is committed to delivering on the above agenda and to ensuring that mental health is treated like the national priority it already is. As a down-payment on her agenda, Hillary will convene a White House Conference on Mental Health within her first year in office, to highlight the issue, identify successful interventions, and discuss barriers that must be removed to improve today’s system.
Hillary has also laid out policies that offer additional support to individuals with mental health problems and their families, beyond today’s announcement. Earlier this year, she released a robust Caregivers Agenda, to support family members and workers who care for individuals with health conditions, including mental illness. She also set out a $10 billion Initiative to Combat America’s Deadly Epidemic of Drug and Alcohol Addiction, which provides incentives to every state to dramatically expand its prevention and treatment programs for substance use disorders. And she has released a detailed Veterans Agenda that outlines a robust plan for tackling the issues facing veterans, our service members, and their families, including expanding access to mental health care and treatment, ending the epidemic of veteran suicide, and reducing homelessness.
Hillary Clinton’s Record
The comprehensive mental health agenda Hillary released today builds on her record of fighting for better services for Americans with mental illnesses. In the U.S. Senate, she co-sponsored the Campus Care and Counseling Act, which established critical mental health support and early suicide prevention for college students across the country. She supported a $500 million increase in mental health care for veterans, co-sponsored the Joshua Omvig Veterans Suicide Prevention Act, and worked across the aisle to make sure their mental health needs would not be forgotten in policy recommendations to the Department of Veterans Affairs. And she strongly supported the enactment of mental health parity laws, which have helped ensure that millions of Americans with mental illness do not lose access to the services that they need because of financial restrictions or arbitrary treatment limits. This record reflects Hillary’s strong belief that mental illness must be treated no differently from other medical conditions and her commitment to the needs of Americans and their families coping with mental illness.
Behavioral Health Trends in the United States: Results from the 2014 National Survey on Drug Use and Health, September 2015, http://www.samhsa.gov/data/sites/default/files/NSDUH-FRR1-2014/NSDUH-FRR1-2014.pdf).
Center for Behavioral Health Statistics and Quality, Behavioral Health Trends in the United States: Results from the 2014 National Survey on Drug Use and Health (2015). The range of conditions includes depression, which the CDC estimates will soon become the second leading cause of disability in the world, PTSD, which affects nearly 8 million Americans, anxiety, and bipolar disease and schizophrenia.
Nearly half of the people in treatment for drug or alcohol addiction also have a co-occurring mental health problem, as do more than half of incarcerated individuals. See SAMSHA, National Survey of Substance Abuse Treatment Services, at 3 (2013), http://www.samhsa.gov/data/substance-abuse-facilities-data-nssats/reports; NIH, http://www.nimh.nih.gov/health/statistics/prevalence/inmate-mental-health.shtml (using DOJ reports) A quarter of those who are homeless have a mental health problem. HUD, The Annual Homeless Assessment Report to Congress, at 18 (2010), https://www.hudexchange.info/resources/documents/2010HomelessAssessmentReport.pdf.
Child Mind Institute (2015), Children’s Mental Health Report,
NAMI, Mental Health on Campus (http://www.bestcolleges.com/resources/top-5-mental-health-problems-facing-college-students/)
Insel, T.R (2008) Assessing the Economic costs of Serious Mental Illness. American Journal of Psychiatry. 165(6), 663-665.
Child Mind Institute (2015), supra.
Pam Belluck, “New Findings on Timing and Range of Maternal Mental Illness,” New York Times 15 Jun. 2014.
Efforts to identify the most promising programs are ongoing. See Nathaniel Counts and Paul Gionfriddo, “New Initiative Explores the Intersection of Education and Mental Health,” Health Affairs Blog, 23 Aug. 2016 http://healthaffairs.org/blog/2016/08/23/new-initiative-explores-the-intersection-of-education-and-mental-health/.
Suicide is the second leading cause of death for people aged 15-34. See CDC (2015), http://www.cdc.gov/violenceprevention/pdf/suicide-datasheet-a.pdf.
Veterans commit suicide at rates 50% higher than the general population. See http://www.annalsofepidemiology.org/article/S1047-2797(14)00525-0/. Veterans can be in a state of heightened suicide risk even 30 years after their active service ends.
See, e.g., The high suicide rate among elderly white men, Wash. Post (Dec. 8, 2014).
Sabrina Tavernise, “U.S. Suicide Rate Surges to a 30-Year High,” New York Times 22 Apr. 2016.
American Suicide Foundation, http://afsp.org/about-suicide/suicide-statistics/.
APA “By the Numbers,” 7-27-2015.
American Psychiatric Association Report: S. P. Melek, D. T. Norris, and J. Paulus, Economic Impact of Integrated Medical-Behavioral Healthcare: Implications for Psychiatry (Denver, Colo.: Milliman Inc., April 2014.)
AMA/AACAP resources.
The Obama Administration took similar actions to expand the pipeline of mental health providers in the wake of the tragedy at Sandy Hook Elementary. In 2013, the Department of Health and Human Services announced $30 million in grants to training programs at hospitals and universities across the country, in order to train 4,000 new mental health and substance abuse health professionals. See http://www.hhs.gov/about/news/2014/09/22/hhs-announces-99-million-in-new-grants-to-improve-mental-health-services-for-young-people.html. Hillary will evaluate the success of those investments in making future awards.
Arun Rath, “When Cop Calls Involve the Mentally Ill, Training is Key,” NPR 14 Jun. 2014, http://www.npr.org/2014/06/14/322008371/when-cop-calls-involve-the-mentally-ill-training-is-key
APA “By the Numbers,” 7-27-2015.
“Re-engineering Training on Police Use of Force,” Police Executive Research Forum, Aug. 2015, http://www.policeforum.org/assets/reengineeringtraining1.pdf
Michael Olive, “Despite Laws, Mental Health Still Getting Short Shrift,” 7 May 2015, http://www.pewtrusts.org/en/research-and-analysis/blogs/stateline/2015/5/07/despite-laws-mental-health-still-getting-short-shrift
NAMI, Road to Recovery: Employment and Mental Illness, at 3-4 (2014), https://www.nami.org/About-NAMI/Publications-Reports/Public-Policy-Reports/RoadtoRecovery.pdf | <urn:uuid:aae9d8fd-3884-4cbd-a5c3-ae0d4cace314> | CC-MAIN-2017-17 | https://www.hillaryclinton.com/briefing/factsheets/2016/08/29/hillary-clintons-comprehensive-agenda-on-mental-health/ | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118831.16/warc/CC-MAIN-20170423031158-00364-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.935026 | 6,849 | 2.96875 | 3 |
Bombay Presidency 1805 1/5 Rupee - KM# 277
KM# 277 1/5 RUPEE
2.3200 g. Mint: Mumbai. Obverse: Persian "Zarb" mint name, Shah Alam (II) julus. Reverse: "T" between scales.
Note: Mint name: Mumbai, struck at Calicut for Tellicherry.
Mumbai finds mention in Mughal coins
Aurangzeb attacked Mumbai because the British challenged his authority by starting their own mint.
Mughal emperor Akbar’s gold mohar used as a pendant
Mumbai | 9th Nov 2013
Jahangir’s gold mohar and a coin depicting the symbol of the sun.
Mumbai was known as Mumbai even before the British named the city Bombay. Mughal emperor Aurangzeb had sent his army to attack Mumbai in 1693 because the British had started a mint there in an attempt to overrule his authority as India's emperor. A Mughal coin researcher, Dr Mahesh Kalra, 40, revealed these unknown facts to The Sunday Guardian.
Kalra, an assistant professor at the Alkesh Dinesh Mody Institute of Mumbai University, is also the curator of a coin museum run inside the institute's premises. He has studied the writing on Mughal coins and plans to put all his studies together in the form of a book.
According to him, Mughal coins mentioned the city's name as Mumbai in Persian script. The East India Company started a mint in Mumbai in 1683 and minted coins in the names of English kings, again in Persian. When Aurangzeb came to know about this, he felt threatened and ordered his lieutenant Siddhi Jouhar to attack Mumbai. The Mughals and the British clashed at today's Bhendi Bazaar in south Mumbai. Their rivalry did not end even after Aurangzeb's death in 1707. Emperor Farrukhsiyar started a series of coins called "Zarb Mumbai" (Struck at Mumbai) to teach the British a lesson. These coins were in use until 1768.
He said that the Mughals minted the mohar or ashrafi in gold, the rupee in silver and the paisa in copper. While a mohar weighed 11 gm, an ashrafi weighed 20 gm. "The emperors used to gift the gold coins to their nobles and ambassadors to show their prosperity. Some people made pendants of the gold coins and started wearing them. One can find these gold pendants with some families in Uttar Pradesh."
Kalra, a homeopath by profession, left his lucrative practice of 12 years in the high-profile Lokhandwala area of Andheri to study history. He completed his masters in numismatics and archaeology and studied Brahmi, India's most ancient script. He also learnt Greek and Persian so that he could understand the writing on the Mughal coins. He studied the coins and visited three prestigious coin museums — the British Museum, the Fitzwilliam Museum at Cambridge University and the Ashmolean Museum at Oxford University — to share his knowledge with the British. The British Museum selected him for its International Training Programme (ITP) for curators. Today, Kalra is among the few curators who are authorities on Mughal coins. He has also registered for a doctorate on Mughal coins with Mumbai University. "I am trying to attract more and more people towards the coins museum. The coins not only reflect the currency pattern of the past but also shed light on the cultural and political situation then," Kalra said.
East India Company coins, with a brief history and rulers
Bombay (Mumbai), 1700, showing a church and daily life; 'carrying an umbrella was a status symbol'.
Bombay port and fort before 1700; below, a view of Bombay in 1800.
Baia (Bombay in Portuguese); 16th-century English fort in Bombay.
View of Bombay from Malabar Hill towards the Bombay Fort/port, with Chowpatty Beach seen on the left [below, the same view now].
Pen and ink drawing of Sewri Fort in Bombay looking across to Trombay Island by William Miller (1795-1836) in 1828.The image is inscribed: 'Suree from below the Band hill. Bandalah. W.M. December 1828'.
View from inside Bombay Fort towards the Church Gate of the fort; at the end, the first shop is Patek watches and the second shop is the Times of India, then known as the Bombay Times (1860). Taxis of 1860: a palkhee (palanquin) and a horse can be seen waiting outside the Times office.
This view of Churchgate Street, now known as Vir Nariman Road, inside the Fort area of Bombay was taken in the 1860s to form part of an album entitled 'Photographs of India and Overland Route'. Churchgate Street runs from Horniman Circle at the east end to what was originally named Marine Drive at the edge of the Back Bay. Churchgate Station, the old General Post Office (now the Telegraph Office) and the Cathedral Church of St Thomas, the oldest still-functioning structure in the city, are all located along its length. However, Churchgate Station and the Post Office were later additions to the street and would not have been in existence at the time of this photograph.
Government House, Parell. Artist: Jose M. Gonsalves (fl. 1826-c. 1842). Medium: coloured lithograph. Date: 1833.
Plate two from J. M. Gonsalves' "Views at Bombay". This building at Parel in Bombay was originally a Portuguese Franciscan friary, completed in 1673 and taken over by Governor Boone in 1719 as a country residence. In 1771, when Hornby first resided here, it became the new Government House in place of the original one in the Fort area. The banqueting hall and ballroom were housed in the shell of the original vaulted chapel. In 1899 the Plague Research Laboratory founded by W M Haffkine was established here. Since 1925 it has been known as the Haffkine Institute and the original grounds now contain a number of medical institutions.
[The British were Protestant Christians, and when they took over Bombay from the Catholic Portuguese a number of Catholic churches were destroyed (for example, the one opposite the Church Gate of the Fort); many Catholic institutions were converted to other uses, as was the bungalow above.]
General view of the exterior of the Times of India offices, Mumbai by E.O.S. and Company, 1890. This print is from an album put together for the occasion of the newspaper's Diamond Jubilee (60 years) which was celebrated in November 1898. The newspaper was established in the 1830s following Lord Metcalfe's Act of 1835 which removed restrictions on the liberty of the Indian press. On the 3rd November 1838 the 'Bombay Times and Journal of Commerce' was launched in bi-weekly editions, on Saturdays and Wednesdays. It contained news of Europe, America and the sub-continent and was conveyed between India and Europe via regular steam ships. From 1850 the paper appeared in daily editions and in 1861 the 'Bombay Times' became the 'Times of India'. By the end of the 19th century the paper employed 800 people and had a wide circulation in India and Europe.
coins and more: Did you know Series (15): Fort St. George Chennai Museum (Part II):
i) Setting up of the First English Mint at Fort St. George, Madras in 1640 and its minting activity.
ii) Coins of the Arcot Nawabs.
iii) Coins of the Bengal and Bombay Presidencies.
iv) Coins of the Mughals, Nayaks, Travancore State and Mysore State.
i) Setting up of the First English Mint at Fort St. George, Madras in 1640 and its minting activity:
- The “firman” (Royal Charter/Edict) granted to the East India Company by the Nayak ruler Venkatadri Nayak in 1639 gave the Company the privilege of minting its own coins, in perpetuity.
- In 1640 the first English Mint in India was established by Francis Day at Fort St. George. The mint was run on contract by various “Dubashes”, but the gold and other precious metals were imported from England by the East India Company.
- By the 1650s the Company, on finding various irregularities in the mint's functioning, decided to take over the running of the Mint itself, and English supervisors were appointed to oversee the process of coin minting.
- The Madras Mint struck coins for territories in and around the Company's possessions and for the Northern Circars for nearly 200 years after it was set up.
- The initial coinage consisted of dump coins for the Southern Hindu territories, followed by close imitations of the Mughal coins of the Subah of Arcot.
- In 1692, the Mint obtained permission to mint silver coins (rupees) for the Mughals.
- In 1695, to cater to the growing demands on the Fort St. George Mint a bigger facility was built in the Fort.
- Around 1670, the earliest coins issued for the East India Company were small silver pieces. These coins were undated, with two interlinked "C"s indicating the reign of King Charles II.
- During the 18th century, silver coins were minted bearing the East India Company's bale mark (an orb and a cross) inscribed C.C.E. (Charter Company of England) or G.C.E. (Governor and Company of Merchants Trading into the East Indies). All these issues were meant for use within the Company's Factory and surrounding areas and for exchange with European traders. These coins were therefore not meant for circulation in the territories governed by the Indian rulers.
- In 1742, a second mint was established at Chintadripet. In the same year, the Madras Mint was given a contract by Nawab Sadatulla Khan of the Subah of Arcot to strike the Arcot Rupee and Arcot coins of smaller denominations. These coins were poorly struck, with dies bigger than the blanks used; hence only part of the inscriptions is seen on them. The coins bear the name of Alamgir II with the sixth year of reign and carry a "Lotus" mint mark. This undated series continued for about 50 years. Subsequent issues bore the Hijri date "1172", equivalent to 1758 A.D., irrespective of the year of minting.
- In 1792, the Chintadripet Mint was relocated to Fort St. George, and the two mints became the gold and silver mints respectively, well versed in minting star pagodas (which replaced the Madras pagoda), Arcot Rupees, and Madras and Arcot fanams and doudous (or Doodoos).
- In 1807, minting machines imported from Britain, based on the best available technology of the time, were introduced and began producing silver coins in the European style with oblique milling.
- One series of coins minted at the Madras Mint was based on the Hindu standard coinage and consisted of one and two Pagodas in gold, and half pagodas, quarter pagodas and fanams in silver. The copper coins consisted of Cash denominations.
- Another series was based on the Mughal Empire coinage with gold mohurs and fractions of mohurs, i.e. half, one-third, and quarter mohurs.
- The Madras Mints also issued a rupee series in silver, including two rupee, one rupee, half rupee, quarter rupee, one-eighth rupee and one-sixteenth rupee coins. The copper coins were of lower denominations and included 4, 2 and 1 paise and 4 and 2 pies (the copper coins were also denominated as half and one dub, or one-ninety-sixth and one-forty-eighth of a rupee).
- The copper coins also included Faloos (Dub), with inscriptions in Persian on one side and Tamil and Telugu inscriptions on the other side indicating its value in Dub units.
- The coins in circulation had unrelated denominational values and their exchange values were briefly as under:
3360 Cash was equal to 42 Fanam
42 Fanams were equal to 1 Pagoda
1 Pagoda was equal to 3 ½ Rupees
3 ½ Rupees was equal to 168 Faloos (Dub)
1 Rupee was equal to 48 Faloos (Dub)
1 Faloos (Dub) was equal to 20 Cash
1 Fanam was equal to 4 Faloos (Dub)
4 Faloos was equal to 80 Cash
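The exchange values above form a consistent system once every denomination is expressed in Cash, the smallest unit. A small sketch of the conversions (rates taken from the list above; the dictionary and function names are ours, purely for illustration):

```python
# Value of each Madras denomination expressed in Cash, the smallest unit.
# Rates are those given in the list above.
IN_CASH = {
    "cash": 1,
    "faloos": 20,    # 1 Faloos (Dub) = 20 Cash
    "fanam": 80,     # 1 Fanam = 4 Faloos = 80 Cash
    "rupee": 960,    # 1 Rupee = 48 Faloos = 960 Cash
    "pagoda": 3360,  # 1 Pagoda = 42 Fanams = 3360 Cash
}

def convert(amount, src, dst):
    """Convert between denominations via their common value in Cash."""
    return amount * IN_CASH[src] / IN_CASH[dst]
```

The published rates check out against one another: `convert(1, "pagoda", "rupee")` gives 3.5 and `convert(1, "pagoda", "faloos")` gives 168, matching the list.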
- After 1818, the Rupee was made a Standard coin with a fixed weight of 180 grains and lower denominations being proportionately reduced in weight.
- From 1812 to 1835, the Madras Mint struck coins with the "Lotus" mint mark and indented cord milling, while the Calcutta Mint issued coins with the "Rose" mint mark from 1823 to 1835.
- Interestingly, from 1830 to 1835 (before the introduction of the Standard Coinage), the Calcutta Mint struck coins with a Rose mint mark but with a small crescent added on the reverse (rupee and half rupee coins) and on the obverse (1/4 rupee coins).
The Madras Mints pass into history:
- The Fort St. George Mints lost their prominence when two bigger mints were opened at Calcutta and later in Mumbai and the Uniform Coinage was introduced in 1835. Initially the Madras Mints assisted the mints at Calcutta and Bombay with Uniform Coinage issues, but their coinage output was relatively small and they finally closed down in 1869.
After Musalipatnam, the next settlement in the south was Madras, in 1620. Trading activities there grew at a very rapid rate. The Company purchased the land where Fort St. George stands today, and the first English mint was established in 1640 by one Francis Day.
The firman granted to the East India Company by Venkatdri Naik in 1639 permitted it to "perpetually enjoy the privilege of mintage." This mint was run on contract by various dubashes - Komati Chetties all - but used gold imported by the Company. In the 1650s, the Company decided it would run the mint itself and appointed English supervisors.
The Madras mint struck coins for the Company's territories in and around the Northern Circars for nearly 200 years. The initial coins were dump coinage similar to those of the neighbouring Hindu territories, followed by close imitations of the Mughal coins of the Subah of Arcot.
In 1692, the mint was permitted to mint the silver rupees of the Mughals. A new mint was built in the Fort in 1695, and then rebuilt in 1727 in the northwest corner of the Fort, by what became known as the Mint Bastion. In 1742, a second mint was established in Chintadripet. The same year, the Fort mint was permitted to strike the Arcot rupee and Arcot coins of lower denominations. In 1792, the Chintadripet mint was moved to the Fort and the two mints became the gold and silver mints, minting star pagodas, which were replacing the Madras Pagodas, Arcot rupees and Madras and Arcot Fanams and doodoos.
The Company decided to establish two bigger mints at Bombay and Calcutta in 1815. From 1835 to 1867 the Madras mint also struck uniform coinage for circulation. It assisted these larger mints and, since its capacity was insignificant, was finally closed down in 1869 to make way for the government press in the same premises. But Mint Street - once Thanga Salai - remains a Madras name.
Coins : Madras Presidency
Early Coins: Dump Coins
The earliest coins of the Company in Madras were small silver pieces issued from their factory at Fort St. George in about the 1670s. These coins were undated, with two interlinked C's on the reverse (assigned to the reign of King Charles II).
During the 18th century silver coins were minted bearing the Company's bale mark (an orb and a cross) inscribed C.C.E (Charter Company of England) and in some cases G.C.E (Governor and company of merchants trading into the East Indies). All these issues were meant for use within the company's factory and surrounding areas and also for exchange with European traders. They were not meant for circulation in the interior of the country.
In 1742 the Company obtained permission from Nawab Sadatulla Khan of the Subah of Arcot to coin rupees in imitation of those struck at the imperial mint at Arcot. These coins were poorly struck with dies bigger than the blanks used. Hence, only a part of the inscriptions is generally visible. They bear the name of Alamgir II with the sixth year of reign and have a 'Lotus' mint mark. This undated series continued for about 50 years. Subsequent issues had the Hegira date '1172', equivalent to 1758 A.D., irrespective of the year of minting. The R.Y. 6 also appears on all issues.
Machine Struck Coins
In 1807 new machinery was introduced and the mint produced two series of silver coins in European style with oblique milling. One series, based on the Hindu standard, consisted of one and two pagodas in gold, and half and quarter pagodas and fanams in silver. The copper coins consisted of Cash denominations.
The other series, based on the Moghul standard, comprised gold mohurs and fractions of mohurs: ¼, ⅓ and ½. They issued rupees together with fractions down to ⅛ and 1⁄16 rupee in silver. Madras also issued 2 rupee coins. Although minted in 1807 and later, all bear the frozen date "1172" A.H.
Copper coins in this series were Faluce (Dub) with inscriptions in Persian on one side and Tamil and Telugu inscriptions on the other side indicating the value in Dub units.
In Madras, there were copper coins for 2, 4 pies, 1, 2 and 4 paisa, with the first two denominated as ½ and 1 dub or 1⁄96 and 1⁄48 rupee. Madras also issued the Madras Fanam until 1815.
Although the two systems of coins were in circulation at the same time, they were unrelated.
3360 Cash = 42 Fanams = 1 Pagoda = 3½ Rupees = 168 Faluce (Dub)
1 Rupee = 48 Faluce (Dub)
1 Faluce (Dub) = 20 Cash; 1 Fanam = 4 Faluce (Dub) = 80 Cash
After 1818, the Rupee was made the standard coin and its weight was fixed at 180 grains, with smaller pieces in proportion. The pagodas and Fanams were demonetized from that year.
The next issues were:
1. 1812-1835: Struck at Madras Mint with 'Lotus' mint mark and indented cord milling.
2. 1823-1825: Struck at Calcutta with 'Rose' mint mark and upright milling.
3. 1830-1835: Struck at Calcutta with 'Rose' mint mark and upright milling but with a small crescent added on the reverse (rupee and half rupee coins) and on obverse (1/4 rupee coins).
The Northern Circars was a former division of Madras Presidency and consisted of present-day Indian states of Andhra Pradesh and Orissa. The territory derived its name from Circar or Sarkar, an Indian term applied to the component parts of a Subah or province, each of which is administered by a deputy governor. These Northern Circars were five in number, Chicacole, Rajahmundry, Ellore, Kondapalli and Guntur.
By a treaty signed in 1768, the Nizam acknowledged the validity of Shah Alam's grant and resigned the Circars to the Company, receiving as a mark of friendship an annuity of 50,000. Guntur, as the personal estate of the Nizam's brother Basalat Jang, was excepted during his lifetime under both treaties. Finally, in 1823, the claims of the Nizam over the Northern Circars were bought outright by the Company, and they became a British possession.
The Northern Circars were governed as part of Madras Presidency until India's independence in 1947, after which the presidency became India's Madras state. The northern, Telugu-speaking portion of Madras state, including the Northern Circars, was detached in 1953 to form the new state of Andhra Pradesh.
Coins for use of Northern Circars division of the Madras Presidency with headquarters at Musalipatnam were:-
SILVER: 4 Annas and 2 Annas
COPPER: 1/48 Rupee and 1/96 Rupee
Old Mint - Kolkata, 1700s
Caroline Newton of Baldwin's forwarded this press release about a new book on the coins of the Bengal Presidency. -Editor

Caroline adds:
It is the first of three volumes written by Dr. Paul Stevens that document the coins of the East India Company. The first volume explores the coins and mints of the Bengal Presidency from 1757, when the EIC first acquired the right to mint coins there, until 1835, when a uniform coinage was introduced into British India. The three volumes will, without doubt, become the new industry reference works.

THE COINAGE OF THE HON. EAST INDIA COMPANY – PART 1: THE COINS OF THE BENGAL PRESIDENCY
Dr. Paul Stevens' first book, 'The Coins of the Bengal Presidency', is an essential reference work for anyone interested in this period of Indian history, British colonial history and East India Company coinage. The first of a planned three-part series on the coins of the East India Company, this will be the new standard reference work for the next generation of numismatists.
Written by the highly respected numismatist Dr. Paul Stevens, who became interested in Indian coinage during the 1970s and was particularly fascinated by the coins produced by the British for use in India, the book is the result of many hours spent poring over primary source documents in the British Library. It explores the coins and mints of the Bengal Presidency from 1757, when the EIC first acquired the right to mint coins there, until 1835, when a uniform coinage was introduced into British India.
The book is divided into ten chapters, each dealing with a different period or location of the coinage. Each chapter consists of a short summary followed by a very detailed exploration of the information found mainly in the archives of the EIC. This part contains extensive archival extracts, which should prove useful to both numismatists and historians studying the EIC. Next, within each chapter, there is a detailed catalogue of the coins discussed, and finally there is a list of references that will ensure that the original sources can easily be found.
At the end of the book there are some useful appendices: an AH/AD/RY concordance; a glossary of Indian words and abbreviations found in the extracts from the records; a concordance of Pridmore numbers with the Stevens catalogue numbers; and the mint names and rulers’ names as they actually appear on the coins. Many other fascinating pieces of information could also be mentioned but they are all in the book and that is where they should be sought.
Baldwin’s is delighted to be offering 142 lots of British Indian coins from the collection once formed by Dr. Stevens in their 3rd November Argentum auction. Lots 230 – 372 from the catalogue contain a variety of beautiful coins from the Madras, Bombay and Bengal presidencies, including lot 340 (pictured here) a 1794, Gilt Proof 2-Pice.
The Coins of the Bengal Presidency, published by Baldwin’s, is priced at £50 plus postage and can be ordered through the Baldwin’s website at www.baldwin.co.uk . The catalogue for the Baldwin’s Autumn Argentum auction with full listing of all 142 coins can be found online at www.baldwin.co.uk/a312
Wayne Homren, Editor
5.1 The need for an input-oriented measure
5.2 Fisheries-specific issues in the measurement of capacity and capacity utilization
5.3 Empirical approaches for assessing capacity
Primal measures of capacity and capacity utilization focus exclusively on output levels. However, in fisheries - and particularly in capacity reduction programs - the need is for information on capacity and capacity utilization which is based on output levels but expressed in terms of effort or inputs. Nations downsizing their fishing fleets need to know the levels of capital stock and inputs which should be reduced to achieve their goals and objectives. Moreover, nations face multiple capital goods (heterogeneous capital) and need a means to reduce this entire capital stock to a single measure in physical units. There thus appears to be a need to develop standardized units of capital which equate to potential output levels. For example, removing a 50 GRT vessel with 300 horsepower reduces potential catch by 5,000 metric tons.
Dual-based economic measures can fill this need by directly providing the measure of capacity and capacity utilization and the corresponding optimal capital stock (in physical units) in a fishery. Catches can be freely chosen or subject to total allowable catches. However, the approach requires extensive data, which are usually unavailable.
In this next section, we present several empirical approaches for assessing capacity. These approaches can be classified in several ways. First, the approaches can be either primal or economic, the latter with an explicit economics optimizing basis. Second, the approaches can take output levels as exogenously fixed, such as with total allowable catches, or freely chosen, and then determine the corresponding appropriate level of capital stock. The approaches can alternatively take the flows of variable inputs and the stock of capital (including number of operating units) as given and then determine the corresponding level of maximum output (i.e. how much could be produced if all operating units were technical efficient). Because both capacity and capital utilization are short-run in nature, all classifications take the stocks of the resource and capital as given, where the latter is explicitly specified as a stock and not a flow of services.
Determining the maximum output possible given inputs and the resource stock raises the question of what the optimal number of operating units, input levels, and configuration of any given fleet should be. For example, the literature on fisheries repeatedly stresses that such a structure leads to overcapitalization: too many resources chasing too few fish, production that is wasteful or not at minimum cost, and a society that does not receive the maximum net benefit from the fishery. Thus, to say that a fishery is overcapitalized or has excess harvesting capacity has no meaning unless there is some desired level of output. Alternatively, the potential maximum level of output given input levels and number of operating units can be determined and compared to observed landings or production; a simple comparison will then allow the nature of capital utilization or excess harvesting capacity to be easily determined.
We also present, in Appendix XI, a wide variety of examples of using different approaches to calculate different measures of capacity and capacity utilization in fisheries. In addition to presenting the different approaches and measures, we also provide empirical analysis of the various measures of capacity and capacity utilization given the types of data often available on fisheries. These empirical examples contained in Appendix XI may be quite useful to nations having various data limitations and desiring to develop measures of capacity and capacity utilization.
The general industry and fisheries cases discussed above refer to production capacity in mostly physical terms. That is, given resources, what should the maximum output be and how close is the industry to producing the maximum output. Given input and output prices, it may make complete sense for a firm or industry to produce at less than the total maximum physical output. Alternatively, a physical measure of capacity may suggest over- or undercapitalization while an economic measure may suggest that a firm is operating at full capacity. The distinction is thus critical for assessing capital utilization and overcapitalization in fisheries.
In the case of fisheries, however, there are reasons for assessing capacity output and capital and capacity utilization relative to excess harvesting capacity rather than simply relative to short and long-run investment decisions as is the case for assessing CU in conventional industries. Fish stocks may be sustained at certain levels. Harvesting in excess of given levels can cause serious declines in resource levels. Moreover, production or harvesting activities in fisheries are subject to technological externalities (i.e., the catch of one vessel reduces the catch and raises the harvesting cost of another vessel). Simply put, the natural resource-the fish stock-imposes limits on the possible catch. Allowing capital utilization or harvesting capacity to be in excess of the level necessary to efficiently harvest the resource generates considerable economic waste and may easily allow overharvesting of the resource.
There are numerous other aspects to also consider when developing measures of capacity in fisheries. As illustrated in Appendix X, for example, larger vessels than apparently necessary for harvesting activities could be constructed to enhance skippers' flexibility to deal with adverse weather. Thus, any analysis of capacity and capacity utilization based on strictly a static and certain analysis and either vessel attributes or total capital might suggest overcapitalization. Alternatively, the capital stock may appear unnecessarily large because vessel owners added considerable amenities to the vessel to make crew more comfortable.
In addition, there is the issue of how to assess capacity, input- or output-based, relative to spatial aggregation and resource availability. Many fisheries of the world involve more than one geographic area (e.g., cod are harvested from the Gulf of Maine, Georges Bank, Southern New England, and occasionally more southerly areas; similarly, sea scallops are harvested on Georges Bank and several Mid-Atlantic resource areas). Conventional economic theory and principles offer little guidance about the level of spatial aggregation and the examination of capacity and CU.
It is important that capacity be related to the resource level, particularly if managers are interested in capacity and CU because of management and regulatory concerns. Even in the absence of regulatory concerns, any conclusions regarding capacity and CU must clearly distinguish between those related to controllable factors versus those derived from resource levels. For example, a CU < 1 implies overcapitalization or that observed output divided by maximum output is less than one; what if the resource was quite low during the period that observed output was less than maximum output? Random events such as a hurricane could have caused the resource to become unavailable during part of the year; subsequently, observed catch could be quite a bit lower than maximum catch.
5.3.2 Factor requirements function
5.3.3 The frontier approach
5.3.4 Dual economic approach
5.3.5 The data envelopment analysis (DEA) approach
5.3.6 Maximum potential effort
Since managers and policy makers appear to be primarily concerned about physical capacity and overcapitalization (e.g., level of inputs relative to catch), then measures of capacity, regardless of input- or output-based or physical or economic, should ideally convey information about catch, fishing mortality, costs, industry and fleet structure (e.g., number of large vs. small boats), employment, and profits. Such measures of capacity should also ideally convey information about net benefits to society, provided policy is at least somewhat concerned with net benefits to society.
From a practical perspective, the measurement and assessment of productive capacity and capital utilization or harvesting capacity could proceed along several lines of thought. The levels of capital, labour, energy, materials, other inputs, and catch could be determined in a dynamic setting which maximizes net benefits to society subject to biological constraints and the underlying form of the technology. Such an approach would allow policy makers to compare optimal levels to actual levels and subsequently assess the necessary reductions. This is an appropriate approach, but one which would likely be severely limited by inadequate data, extreme uncertainty associated with resource conditions and the technology, and the various goals and objectives of resource management. It also raises the questions of a steady-state versus a bang-bang solution, determination of the appropriate social rate of discount, the nature of industry structure in response to capacity reduction programs, and the possible need for a cautious approach which ensures no excess harvesting.
Given the typical availability of data and the usual concerns of management agencies, it is useful to consider capacity and utilization in terms of productive capability. A physical approach allows resource managers to assess the potential maximum harvest level of a fleet-with and without externalities, the level of productive capacity or excess harvesting capability, input utilization, and the redundancy of capital. With additional analysis and vessel-level data, the physical measure can also be further considered to determine which vessels and level of inputs are unnecessary for a given harvest level.
In this section, alternative approaches to measuring capacity and capacity utilization are considered and offered as viable measures of capacity and CU: (1) peak-to-peak; (2) factor requirements function when there are total allowable catch (TAC) limits, or a revenue function when outputs are unconstrained and freely chosen; (3) frontier production function and output; (4) dual economic based; (5) data envelopment analysis and frontier; and (6) maximum potential effort based on ideal, empirical and practical, and fishing power or fixed effects. The applicability of these measures in large part hinges upon the availability of data, especially cost data for variable inputs and a capital rental or services price, and the degree of technical sophistication.
Of the various approaches for assessing capacity and capacity utilization, the peak-to-peak method has the widest applicability since its data requirements are most parsimonious. The peak-to-peak method does not require cost data and can be calculated with the broad types of data collected world-wide by FAO, such as the catch and number of vessels demonstrated in Garcia and Newton.
The peak-to-peak method is based on a trend-through-peaks approach that is thought to reflect maximum attainable output given the stocks of capital and fish. Peaks in production per unit of capital stock are used as indicators of full capacity, and linearly interpolated capital-output ratios between peak years are then employed, along with data on the capital stock, to estimate capacity output for between-peak years (Morrison, 1985). The most recent year estimates of capacity output are obtained by extrapolating the most recent output-capital ratio peak and multiplying by an appropriate series on the capital stock. Combined fishery capacity is measured as the revenue-share weighted sum of detailed fisheries measures of capacity and capacity utilization. In this case, the level of technology in a particular time period is determined by the average rate of change in productivity between peak years. The output-capital ratios can be adjusted by a technological trend for technical progress. The utilization rate is subsequently calculated as the ratio of observed to potential output. Appendix VII provides additional discussion.
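The interpolation scheme just described can be sketched in a few lines. This is an illustrative implementation under simplifying assumptions (a single capital series, simple interior-peak detection, and the nearest peak ratio held constant outside the first and last peaks); it is not the method's official code.

```python
def peak_to_peak_capacity(output, capital):
    """Peak-to-peak capacity estimate.

    Peaks in the output/capital ratio are treated as full-capacity
    years; the ratio is linearly interpolated between peaks, held at
    the nearest peak value outside them, and capacity output is the
    interpolated ratio times the capital stock.
    """
    ratio = [y / k for y, k in zip(output, capital)]
    n = len(ratio)
    # Interior peaks: years whose ratio is at least as high as both neighbours.
    peaks = [t for t in range(1, n - 1)
             if ratio[t] >= ratio[t - 1] and ratio[t] >= ratio[t + 1]]
    if not peaks:  # fall back to the single best year
        peaks = [max(range(n), key=lambda t: ratio[t])]
    full = list(ratio)
    for t in range(peaks[0]):            # hold the first peak ratio backwards
        full[t] = ratio[peaks[0]]
    for t in range(peaks[-1] + 1, n):    # extrapolate the most recent peak ratio
        full[t] = ratio[peaks[-1]]
    for a, b in zip(peaks, peaks[1:]):   # interpolate between successive peaks
        for t in range(a, b + 1):
            w = (t - a) / (b - a)
            full[t] = ratio[a] + w * (ratio[b] - ratio[a])
    capacity = [r * k for r, k in zip(full, capital)]
    cu = [y / c for y, c in zip(output, capacity)]  # utilization = observed/capacity
    return capacity, cu
```

With a constant fleet and catches of 80, 100, 90, 120 and 100, for example, the peak years receive CU = 1 and the trough year is rated at roughly 82 percent of its interpolated capacity.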
While the peak-to-peak approach is the most widely applicable and least demanding of data of all the methods for examining capacity utilization, it is also quite limited. For one thing, it completely ignores the biological characteristics of fish (e.g., the concept of maximum sustainable yield). It also fails to directly link back to input utilization-variable factors and capital, which is a major focal point for assessing capacity and capacity utilization in fisheries.
There are some possible approaches for mitigating the limitations of the peak-to-peak approach discussed in Appendix VII. A sustainable yield function could be estimated. The maximum sustainable yield could be estimated and compared to observed harvest levels over time to estimate capacity and capacity utilization. Alternatively, fishery independent data could be used to estimate maximum yields. The level of effort corresponding to the maximum sustainable yield could subsequently be calculated or estimated; the estimated or calculated level of effort would be the level of effort associated with capacity output and capacity utilization.
The need to modify the peak-to-peak approach really becomes a question of what information management desires. If management desires information on actual capacity and capacity utilization in the strict economic sense (i.e., what is the potential harvest given the size of the fleet and the potential utilization of inputs in the absence of resource constraints), the peak-to-peak approach provides relatively useful, but limited, information for determining the optimum utilization of fishery resources.
The factor requirements function can assess the minimal stock of capital or effort required to produce total allowable catches. The approach also gives the utilization of the capital stock or effort. As with the peak-to-peak method, the factor requirements function is a primal or physical approach. The minimum data requirements are capital stock and other physical quantities of inputs and exogenously determined output levels. This approach may be most promising at the industry or fishery level using aggregate data. A sufficient number of observations would be necessary for satisfactory statistical estimation.
The factor requirements function is one way to describe the technology subject to a fixed or quasi-fixed input (e.g., the vessel). The factor requirements function depicts the production possibilities set and relates the minimal amount of an input required to produce a vector of outputs:
Z = g(Y1, Y2, ..., YM), where Z is the fixed input and Yi is the ith output. The function g, or the production possibilities set, defines the combinations of outputs which are technically feasible given the input bundle or fixed input (Z). The total allowable catches are exogenously fixed and Z is an endogenous stock. Z is a measure of vessel size or of the entire input bundle. The inputs may be assumed to be in fixed proportions (Leontief separability) so that an aggregate input can be specified. Alternatively, a separate aggregator function for Z may be estimated if Z is assumed to be formed in a two-stage optimization process (Squires, 1987).
The estimated factor requirements function gives the minimum input bundle (effort) or stock of capital Z required to harvest the fixed outputs or total allowable catches Y. The ratio of estimated Z to actual Z would give a measure of capital utilization as defined by Berndt (1990), i.e. the ratio of the desired stock of capital or effort to the actual stock.
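Under the simplest possible specification, the idea can be sketched as follows: fit a log-linear factor requirements function for a single output, then take the ratio of fitted (required) capital to actual capital as the utilization measure. The log-linear form, the single output, and all function names are our simplifying assumptions, purely for illustration:

```python
import math

def fit_factor_requirements(z, y):
    """OLS fit of a log-linear factor requirements function ln Z = a + b ln Y
    (single output; the multi-output case adds one regressor per output)."""
    lz = [math.log(v) for v in z]
    ly = [math.log(v) for v in y]
    n = len(z)
    my = sum(ly) / n
    mz = sum(lz) / n
    b = (sum((u - my) * (v - mz) for u, v in zip(ly, lz))
         / sum((u - my) ** 2 for u in ly))
    a = mz - b * my
    return a, b

def capital_utilization(z, y, a, b):
    """Berndt-style utilization: required (fitted) capital over actual capital."""
    return [math.exp(a) * yv ** b / zv for zv, yv in zip(z, y)]
```

A fleet whose capital exactly tracks the fitted requirements function gets a utilization of 1.0 in every year; values below 1 flag capital that is redundant relative to the catch actually taken.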
Yet another variant is to estimate the factor requirements function as a stochastic frontier using the two-stage routine of Battese and Coelli (which allows incorporation of variable and fixed inputs other than vessel size, the age of the capital stock to capture vintage, socio-economic information, and other variables), and to calculate an input-oriented measure of technical efficiency. The ratio of the minimum possible Z to actual Z would again give a measure of capital utilization, defined as the ratio of the desired stock of capital to the actual stock of capital.
When outputs are endogenous, i.e. freely chosen, an alternative to obtaining information on the factor requirements function is to estimate a dual revenue function. Other options include estimating dual variable cost and profit functions in which the vessel capital stock is fixed, and solving for capital in terms of input prices and output levels (dual cost function) or input and output prices (dual profit function). With any of the dual functions, it is possible to determine the minimal amount of capital stock or fixed factor required to produce a given vector of outputs.
A second primal or physical measure of capacity and capacity output, an alternative to the peak-to-peak approach, is the frontier approach. With the frontier approach, the maximum output possible (i.e. the capacity output Y*) given input levels is estimated. Capacity utilization is the ratio of observed output to maximum potential output: Y/Y*. The frontier may be estimated at the firm level or for the fleet. The more aggregation, the less precision one obtains. The basic data requirements are observations on physical levels of output, variable inputs, capital, resource abundance levels if available (dummy variables can be used also), and any available information on capital vintage, fleet size, socio-economic characteristics, etc.
There are two basic options to consider: (1) a nonparametrically determined frontier, and (2) a short-run stochastic production frontier. Since the two approaches provide the same type of information and programs are readily available to estimate the stochastic frontier, we focus additional attention on using the stochastic production frontier to determine capacity and capacity utilization. With some modification, we may estimate the frontier and assess the relationship between maximum output and input levels. A particularly useful modification is that developed by Battese and Coelli (1993), in which an error term for technical inefficiency, U, is expressed as a function of variables which might influence inefficiency. Because capacity and capacity utilization are inherently short-run concepts, capital should be specified as a stock rather than as a flow of services.
The stochastic production frontier relates maximum output to inputs while using two error terms. One error term is the traditional normal error term in which the mean is zero and the variance is constant. The other error term represents technical inefficiency, that is, deviations from the best-practice frontier (which is the maximum output possible given the inputs). When the technical inefficiency error term is 0, maximum output is produced given inputs and the resource stock. When the technical inefficiency error is greater than 0, maximum output is not obtained. Technical efficiency is estimated via maximum likelihood of the production function subject to the two error terms. Appendix XIV gives additional discussion.
The primal capacity output equals the frontier output for which the technical inefficiency error term equals 0.0. A primal-based measure of capacity utilization may then be determined by calculating the ratio of observed output to the frontier output, either for the individual firm or for the industry. The industry capacity output simply equals the sum of the frontier outputs; industry capacity utilization equals the sum of observed outputs divided by the industry capacity output.
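A full stochastic frontier requires maximum likelihood estimation of the composed error, but the mechanics of a frontier-based CU measure can be illustrated with the simpler corrected-OLS (deterministic) variant: fit an average production function, shift the intercept up by the largest residual so every observation lies on or below the frontier, and take CU = Y/Y*. The one-input Cobb-Douglas form and all names here are our illustrative assumptions, not the Battese-Coelli estimator itself:

```python
import math

def cols_frontier(y, x):
    """Corrected-OLS production frontier (a deterministic stand-in for the
    stochastic frontier): fit ln y = a + b ln x by OLS, then raise the
    intercept by the largest residual so the frontier envelops the data."""
    ly = [math.log(v) for v in y]
    lx = [math.log(v) for v in x]
    n = len(y)
    mx = sum(lx) / n
    my = sum(ly) / n
    b = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
    a = my - b * mx
    a_frontier = a + max(v - (a + b * u) for u, v in zip(lx, ly))
    y_star = [math.exp(a_frontier + b * u) for u in lx]  # capacity output Y*
    cu = [obs / cap for obs, cap in zip(y, y_star)]      # firm-level CU = Y/Y*
    industry_cu = sum(y) / sum(y_star)                   # fleet-level CU
    return y_star, cu, industry_cu
```

By construction the best-practice vessel sits exactly on the frontier (CU = 1) and every other CU lies below it; industry CU aggregates exactly as in the text, as the sum of observed outputs over the sum of frontier outputs.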
Although both the nonparametric and stochastic production frontier approaches can be used to assess capacity, capacity utilization, and overcapitalization in fisheries, there are some limitations. The frontier approach requires specification of an underlying functional relationship between catch and inputs; there is thus always the risk of misspecification error. Then there is the issue that information obtained from the frontier approach is estimated; there is a risk of over- or under-estimating full capacity utilization. The frontier approach also requires extensive data if the precision of the estimates is to be high and management desires to examine industry structure relative to a capital reduction program (e.g., small vs. large vessels and their optimal configuration). In comparison to other approaches, however, the stochastic frontier approach is perhaps the easiest to use given limited data (particularly economic data) and, along with the dual economic approach, is one of the few approaches which adequately recognize the stochastic nature of fisheries. It also easily accommodates resource levels (entered either as resource stock levels or as dummy variables).
The stochastic production frontier typically permits assessment of maximal output subject to input levels; as such, it appears to be an output-oriented measure. The stochastic frontier is, in fact, a base or nonorienting measure. That is, the assessment of efficiency is not conditional on holding all inputs or all outputs constant. Utilizing the one-stage routine of Battese and Coelli (1993), however, facilitates an assessment of maximal output from an input-based perspective. With this approach, the inefficiency error term, and subsequently the maximal output, is specified as a function of inputs. Thus, it is possible to consider the input reduction coinciding with a fixed maximum or frontier output.
A major criticism of the stochastic production frontier approach is its inability to adequately handle multiple outputs. Appendix XIV discusses alternative approaches to accommodating multiple outputs.
The dual economic approach to assessing production technologies - by econometrically estimating cost, revenue or profit functions - is one of the ideal ways to assess capacity and its utilization and even endogenous capital utilization, whether outputs are unconstrained or constrained by total allowable catches. The approach can be specified at either the vessel or industry level. To be meaningful for capacity reduction programs, the capital stock should be expressed in physical terms, such as vessel sizes or numbers, rather than in monetary values.
The approach readily accommodates multiple products (Segerson and Squires, 1990) and multiple resource stocks, although only a single stock of capital can be accommodated and all evaluations are conditional upon the existing resource stocks. Dynamic adjustment of the capital stock (Morrison, 1985b) and individual transferable quotas (Squires and Kirkley, 1996) can also be accommodated. Different behavioural objectives are also accommodated: cost minimization (Berndt and Fuss, 1986; Hulten, 1986; Morrison, 1985a), profit maximization (Squires, 1987; Dupont, 1991; Segerson and Squires, 1992), or revenue maximization (Segerson and Squires, 1992, 1995; Squires and Kirkley, 1996; Just and Weninger, 1996). Constraints on catch (giving exogenous outputs) and fishing time (Dupont, 1991)can be included. Endogenous capital utilization was analyzed by Epstein and Denny (1980), Kim (1988), and Nadiri and Prochaska (1996).
The versatility and comprehensiveness of the analysis, however, makes substantial demands on data and sophistication of the econometric analysis. Panel or longitudinal data are also readily accommodated. Cost data are required on the variable inputs and a rental or services price for the capital stock and input, and output prices or revenues are often required. A sufficient number of observations is also required to realize enough degrees of freedom and there is always a concern over the representativeness of the data sample and its timeliness.
With either freely chosen output levels or exogenously fixed total allowable catches, the dual economic approach gives a single-valued cost-based measure of capacity and capacity utilization (and even endogenous capital utilization). The optimum capital stock (expressed in either physical or monetary value terms) can be found corresponding to endogenous outputs or exogenously fixed total allowable catches for one or more species. The capacity output may be estimated from the tangency between the short-run and long-run total unit or average costs. The primal measure of capacity utilization is then the ratio of the observed output to the capacity output. A dual economic measure, which is particularly well suited with multiple outputs, can also be derived on the basis of the cost gap between actual and capacity output, as discussed in Appendix II.
Of the various approaches, the DEA approach perhaps is the easiest and offers the most promising and flexible method to determine capacity and capacity utilization. Estimates are obtained of capacity output and utilization rates of the capital stock, variable inputs, and capacity, and also of technical and economic efficiency. DEA could be used to measure overcapacity defined as the ratio of the frontier (maximum possible) output unrestricted to total allowable catches to the total allowable catches. The approach is one of the easiest for fishery managers to understand. The approach accepts virtually all data possibilities, ranging from some of the most parsimonious (input and output quantities) to the most complete (a full suite of cost data), although as always the case, more complete data improves an analysis. Because DEA is a form of mathematical programming, constraints are readily accommodated, including socio-economic concerns such as minimum employment, total allowable catches, restrictions on fishing time, and others. The DEA approach can also accommodate the growing international problem of bycatch and its impact upon capacity and the different utilization rates. Appendix XV provides additional discussion.
With the DEA approach, it is possible to determine the combination of variable inputs, outputs, the fixed factors, and the characteristics of the firms which maximize output, minimize input, or optimize relative to revenue, costs, or profits. In the case of fisheries, managers may want to determine how many vessels should be in a fishery, their characteristics, the respective level of input utilization or days at sea, the gear type, the crew size, and the level of output which is allocatively or technically efficient.
The determination of capacity and capacity utilization may be done at the individual firm level or relative to fleet performance. Relative to fisheries and the needs of resource managers, the preferred solution may best be relative to individual vessel-level production. By rearranging observations in terms of maximum efficiency, the number and characteristics of operating units could be determined by simply adding output of each vessel (or unit of analysis such as vessel size class) until the total equalled a specified TAC.
There are two primary orientations of the DEA approach: output and input.8 The input based measure considers how inputs may be reduced relative to a desired output level, such as total allowable catches. The output-based measure indicates how output could be expanded to reach the maximum physical (primal capacity) level, given the input levels. Both the input- and output-based measures provide information for assessing capacity.
The input-based measure directly provides the input levels and number of operating units consistent with a TAC. That is, it would allow the determination of the optimal vessel or fleet configuration and actual vessels which should be in a fishery given a total allowable catch.
The output-based measure allows fishery managers to identify the level of output and vessels which would maximize output subject to given input levels and resource constraints. The ratio of observed total output to either the maximum physical output or maximum economic output gives a primal measure of capacity utilization. Moreover, given a TAC, the output-based measure could yield a precautionary level of total inputs and number of vessels which yield maximum technical or economic efficiency subject. In addition, the DEA approach would permit identification of which vessels to target to remove from a fishing fleet. In actuality, if sufficient data are available and the goal is the elimination of inefficient operating units, DEA permits identification of those vessels which should be eliminated and without requiring an actual assessment of capital utilization or harvesting capacity.
There is on-going debate about the applicability and usefulness of the DEA approach vs. the stochastic frontier approach. The DEA is purely deterministic and thus cannot accommodate the stochastic nature of fisheries. The criticism of nonstochasticity can be easily overcome through the use of bootstrapping DEA. The DEA approach relative to the frontier approach easily permits an assessment of a multiple-input, multiple-output technology. It also does not have the problem of zero-valued outputs or inputs which are typical of many multispecies fisheries (e.g., some vessels or trips harvest only a few of many species and thus have zero-valued output for some species). Neither the stochastic frontier nor the dual-based approaches can easily accommodate the zero-valued dependent variable problem. Even in the absence of zero valued-outputs, the stochastic frontier approach cannot easily handle multiple outputs. Canonical ridge regression as used by Vinod permits estimation of a primal with multiple outputs but does not address the zero-valued output. The stochastic frontier also requires the assumption of some underlying functional form and thus offers the possibility for specification error.
The DEA approach also recognizes or can accommodate both discretionary and nondiscretionary inputs and outputs. It also can facilitate temporal analysis using what is called a Windows technique (Charnes et al., 1994). Technical change as well as network or dynamic assessments can easily be accommodated with DEA (Fare and Grosskopf, 1996).
What about the issue of overcapitalization and economic waste? A physical-based measure of CU is really inappropriate for determining excess productive capacity and overcapitalization; the underlying economic responses to demand and supply conditions are not considered. Nevertheless, the physical measure does provide some information about excessive production possibilities relative to the resource. Moreover, the DEA, dual, and stochastic frontier approaches can all be modified or used to assess the underlying maximum output given economic conditions. DEA particularly may be used since most available DEA programs have an economic-based assessment as an option.
While the previous discussions of various approaches for measuring capacity and capacity utilization (CU) are consistent with economic premises of capacity and offer considerable information about capacity and CU, the approaches and their measures may not be consistent with available data or the needs of management agencies. In this section, we consider other measures based on ideal, empirical and practical concerns, and fishing power or fixed effects. In essence, we attempt to recognize the fact that data on fisheries are typically limited and it is necessary to utilize best information available to determine capacity and CU. Moreover, we return to the direct measure of maximum potential effort (available fishing effort) as identified in Clark, Clarke, and Munro (1979), Hannesson (1987), Mace (1997), Christy (1996), and numerous other researchers (see the review in Section III of this report).
We commence and qualify our discussion by offering what appears to be the objective of fisheries management for many nations: The essential problem is to determine the amount of fishing and hence the number of vessels required to meet catch allocations in a total fishery which is extremely heterogeneous with respect to the vessels and gears concerned and the mix of species each of them might take both by area and by season. This should be coupled with the economic implications of any proposed degree of change. The problem has two stages, first to determine the amount of fishing appropriate to the reduced number of resources available, and second to explore further modifications that may become desirable within each of the resources in the development of conservation polices. (Garrod and Shepherd, 1981, p. 325). We also proceed along the lines of Valatin (1992, p.1), It is likely to be necessary to control both capacity and its utilization, in order to ensure that capacity reductions are not offset by remaining vessels expanding their effort, such that the effort of the fleet as a whole is unchanged. We thus need to determine the maximum potential effort of an existing fleet to address these concerns.
A starting point is a measure of overall fishing effort of the fleet which recognizes the likely heterogeneous nature of vessels and gear. That is, we need to develop a measure of total fishing effort. Unfortunately, there is no universally accepted measure of fishing effort (Kirkley and Strand, 1981, 1987; Kirkley and DuPaul, 1995). In a broad sense, fishing effort, actually nominal effort, is defined as the product of time and fishing power exerted by a vessel or fleet. Fishing effort also may be thought of in terms of area of sea screened in the course of fishing (Valatin ,1992; Clark, 1985; Dickie, 1955).
It has been common practice by fishery researchers to try and develop a standardized measure of effort for the fleet; this has been necessary in order to adequately assess the relationship between fishing mortality (F) and fishing effort (f): F = q f, where q is a catchability coefficient, F is total fishing mortality, and f is total nominal effort standardized to a homogeneous measure of fishing effort. With effort so defined, Fleet capacity can be defined as the fleet's capability to catch fish. Capacity then may be defined in terms of an aggregate of those physical attributes of the vessels in the fleet, which are considered important determinants of the fleet's ability to catch fish. (Valatin, 1992, p.3). As discussed in Appendix VI, The European Union's Multi-Annual Guidance Programme uses aggregate tonnage and aggregate engine power as measures of fleet capacity. Similarly, as discussed in greater detail in Appendix VI, the U.K. has a system of vessel capacity units (VCUs) which equals the following:
vessel capacity units = Si lengthi (m) * Si breadthi (m) + 0.45 * Si poweri
where i = 1,...,N the number of operating units or vessels.
The number of vessel capacity units as defined above is indicative of total capacity and consistent with the notion of a fixed stock of capital. This measure alone, however, is not indicative of the potential total catch of a fleet. To determine the potential total catch, we must consider the maximum possible flow or utilization of the variable inputs together with the fixed capital (i.e., vessel, gear, etc.) and the resource conditions (limits on total catch).
One ad-hoc approach is to consider the individual vessel frontier and subsequently aggregate to the fleet. Utilizing a one-stage frontier routine, it may be possible to estimate the frontier or maximum output and determine the number of vessels, their size, and other factor usage over which technical efficiency declines. For example, we have a frontier function:
Cit = f(Lit,Fit, Kit,Tit,Nit,eit)where L is labour, K is capital, F is fuel, N is the resource stock, T is a technological trend, and eit is an error term composed of a normally distributed error (v) and a truncated or one-sided error (u). The value of u provides an indication of technical inefficiency or how far production is away from the maximum possible output.
Estimates, however, are only indicative of the data available and responses observed. Thus, estimates of the above frontier would be conditional upon technical and congestion externalities. By modifying the above specification to be consistent with Battese and Coelli's (1993) one-stage routine, technical inefficiency can be specified as a function of number of operating units, total days at sea, and/or total usage of other factors. It is thus possible to determine the combination of total operating units, days, etc., for which production is efficient and void of the influences of excess capital utilization.
With a sufficiently long time series of activities on the vessels in the fleet, the frontier-based estimates could be used to determine the maximum output in each year of the fleet. A peak-to-peak approach could then be used to determine the capacity output and corresponding number of operating units. Moreover, by ordering the catch and effort of each vessel in a fleet in accordance with maximal to minimal efficiency, it would be a simple matter of summing frontier output of each firm until the sum equalled a total allowable catch. Such a solution, however, may fail to recognize that vessels could have fished more days or added inputs to increase their catchability.
With a more detailed analysis of days, length of trip, and input usage, the possibility of a vessel to increase its level of fishing in a given time period could be determined. Subsequently, a total potential frontier output could be estimated for each vessel. Again, the observations could be ordered according to maximum efficiency and the potential frontier output for each vessel could be summed until the sum equalled the total allowable catch. By using the potential total frontier output and establishing the number of units, inputs, and/or effort to those levels corresponding to the TAC, a precautionary approach would be implemented. It is precautionary because it is unlikely that the vessels would substantially increase their efficiency to the maximum and vessels would likely expand their fishing activities to the maximum possible.
The above ad-hoc procedure focuses purely on the primal or physical aspects of capacity output and input. Alternatively, dual frontier cost, revenue, and profit functions could be estimated and used to determine the maximum output, number of operating units, and input usage.
Alternatively, the DEA approach could also be used to determine the maximum output corresponding to the maximum physical levels or the economic optimum levels of output, input, and number of operating units. With either the DEA or stochastic frontier approach, it becomes irrelevant as to which size, hull material, engine horsepower represents redundant capital. The DEA approach identifies which operating units are not efficient and away from the respective frontier. Groupings, however, based on vessel characteristics could be formed and used as a basis for targeting reduction of fishing units.
Only limited data may be available, such as data on catch, area fished, days at sea, days fished, and vessel and gear types or characteristics. Nonetheless, there remain viable options for determining the maximum potential effort, capacity and capacity utilization. The maximum effort can be estimated by considering days at sea, fishing power, and options for more fully utilizing days during a production period (e.g., a year). A short-run stochastic production frontier or nonparametric frontier can be estimated to determine maximum possible production at the trip or monthly level. Alternatively, DEA can be applied to determine maximum output and industry restructuring. The particularly useful aspect of DEA is that the approach allows identification of the operating units which are not producing at maximum efficiency. | <urn:uuid:1711fbd2-1612-451b-abf1-9485775013c7> | CC-MAIN-2017-17 | http://www.fao.org/docrep/003/X2250E/x2250e0a.htm | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120881.99/warc/CC-MAIN-20170423031200-00190-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.909977 | 7,437 | 2.6875 | 3 |
Bernardo Aparicio García
It too often goes unnoticed that there are strong comedic elements within King Lear. This may seem an outrageous claim with regard to one of Shakespeare’s famous tragedies, but its validity relies upon the classical understanding of the terms “comedy” and “tragedy.” Simply described—tragedies end with death; comedies end with marriage. The reality, however, is much more complex. The basic classical outline of a tragic play is well known: characters who start out with an enviable position in life are brought low through some inexorable chain of events that, in the words of Aristotle, provokes feelings of “pity and fear” in the viewer. King Lear appears to follow precisely such a structure. After all, it is a tale of fathers betrayed by their children: a rightful king is brought to madness by the treason of two beloved daughters while a kindly father meets a pitiable end due to the perfidy of a trusted son. One might therefore think of King Lear as a deeply affecting yet conventional tragedy—a work that is successful to the extent that it epitomizes the virtues of the tragic form. 1 Even so, the vision of King Lear as masterful but simple tragedy—a beautiful though disquieting wail at the suffering of the world, a story of loss and disillusionment that elicits pity and fear—is an incomplete picture.
What, then, confounds us in King Lear? If there is no greater evil than biological death, Lear is at his lowest point by the end of the play. There are, however, things worse than death. This is a play about love. It begins with the alienation of a beloved and affectionate daughter by a peevish, controlling father. It concludes with death, and, more importantly, with the transformation of souls and the recovery of the love that was lost. This is the uniting of souls that makes King Lear more complex than the classical structure of tragedy. Through the relationship between Lear and Cordelia we discover that there is more than just loss in love, and that although redemption may not be clearly discernible to worldly eyes, it is nevertheless a palpable reality.
From the play’s troubling first scene, Shakespeare singles out love as a particularly important theme—one with an essential role in moving the play’s action forward. King Lear has decided to abdicate the throne and divide his kingdom between his three daughters, ostensibly in hopes of shaking off “all Cares and Business” from his old age and preventing “future Strife” in the form of a power struggle after his death. 2 Lear does not intend to divide his kingdom in three equal parts; rather, he decides to portion out his lands and riches in proportion to the intensity of each of his daughters’ love for him. In doing so, Lear commits a tragic mistake and reveals a fundamental flaw in his understanding of love, a flaw with consequences that reverberate across the entire play.
Lear’s error is that, in portioning out his kingdom, he has implicitly adopted a quantitative, contractual concept of love. His paternal affection is not a gift but an exchange; its value lies not in the giving, but in the gratefulness, praise, or devotion that he may gain by it. This is not to deny that Lear has truly affectionate feelings for his daughters, but only to say that it is an affection marred by its being made a currency—one that can grow or diminish as a function of merit, that can be used to purchase material and emotional goods. As the drama soon demonstrates, this is not a tenable view of either paternal or filial love.
The untenability of his position is yet another atypical aspect in this tragic play when compared with others of Shakespeare’s works. In Macbeth, for example, the play is carefully structured so that the audience should not lose sympathy with the protagonist until the point when their loyalties shift to desiring the doom that pursues him. Lear loses sympathy almost immediately because of his perverse demands. His character trajectory, therefore, is primed for redemption and recovery. Such a transformation can only be effected through purgative suffering. In a plot hauntingly reminiscent of Greek tragedy, Lear is the stubborn catalyst of his own earthly destruction.
In order to quantify his daughters’ love, Lear organizes an assembly at which each of them must publicly profess the intensity of their feelings for him. In truth, Lear is fondest of his daughter Cordelia and plans to grant her the largest portion of his kingdom; thus, the “contest” is in reality little more than a ceremony. Goneril and Regan, the king’s eldest daughters, readily acquiesce to their father’s wish. They represent Lear’s perspective on love taken to its logical conclusion: devoid of practically all sincerity of feeling, they are skilled at a calculated flattery aimed at obtaining rewards. Cordelia, however, is profoundly troubled by her father’s imposition of this contest for the kingdom and his love. In this way, the quality of her feelings mirrors her father’s better nature—the genuine affection that lies beneath Lear’s unfortunate views and actions. “What shall Cordelia speak?” she ponders. “Love, and be Silent,” she concludes. 3
King Lear believes it is possible to quantify and portion out love without corrupting it, but Cordelia disagrees. She sees through her father’s mistake, and, with perhaps a measure of unconscious pride in her own righteousness, she decides to teach her father a lesson. When Lear inquires what she will say after her sisters’ excessively lofty speeches, she simply responds with “Nothing, my Lord.” 4 Lear is shocked at this response and urges her to say more. “Nothing will come of Nothing,” he says, further confirming that he puts a price tag on the love he offers. But Cordelia is steadfast in her purpose, and she replies with a remark that, in light of her father’s ideas, takes on the character of a subtle rebuke. In a cutting declaration, she takes Lear’s conception of love and turns it back on him: “Happily when I shall wed, / That Lord whose Hand must take my Plight shall carry / Half my Love with him, half my Care and Duty.” 5 It is this phrase, perhaps more than any other, that triggers Lear’s wrath against Cordelia and sets him on the tragic path that will bring about his downfall.
This interpretation of Cordelia’s words—that they are loaded with an almost righteous sarcasm—helps resolve two conundrums detected by many readers. First, it makes her father’s irrational rage more understandable, for it is profoundly painful to hear hard truths about oneself from the mouth of a loved one. Lear’s fury stems both from his frustrated expectations and his frustration as a father at being corrected by his child. Second, this interpretation accounts for the apparent discrepancy between the notion of love Cordelia’s words express and the heroic charity that she demonstrates later in the play. Thus, while Cordelia never intended to set in motion the tragic chain of events that unfolds throughout the subsequent acts, it is hard to miss that through her words she wished, unconsciously or not, to teach her father a lesson.
At this point, it is worthwhile to notice a possible parallel reading of the strange disconnect between Cordelia’s words and actions—one that is more historically minded. Cordelia’s words to her father—the king—invite questions about the relationship between a subject and his sovereign. For example, can loyalty and love be quantified? What is love for one’s sovereign? Is divided loyalty necessarily treasonous? Such questions were especially pertinent at the turn of the seventeenth century and are bound to arise in light of a re-emerging wealth of literary scholarship on Catholicism during the Tudor period. The writings of Park Honan, Anthony Holden, Michael Wood, Clare Asquith, and Stephen Greenblatt are most notable in this respect. Still, many of these names are considered controversial and, while it is important to notice the existence of this debate, the intention and aim of this essay do not allow or invite a categorical description of the sundry views. An understanding of the historical context in which Shakespeare wrote can add a sense of timeliness to an already tense drama, but an interpretation that engages the text directly is ultimately more profitable. After all, it is more interesting to discover what Cordelia says to her father about the nature of love than to make conjectures about what Shakespeare may or may not have been covertly saying to his monarch.
There is much that has to happen before Lear will be ready to learn his lesson. Having disowned and banished Cordelia, Lear divides the kingdom between Goneril and Regan, the two daughters that have supposedly merited his affection. In accordance with his unfortunate view of paternal and filial love, the king expects that his daughters will support him in his retirement, allowing him to live in wealth and comfort, free from the cares and troubles of a head of state. Gratitude, of course, can be a natural consequence of love, and there is nothing unusual in Lear expecting that his daughters will care for him in his old age. However, it is one thing to love and expect gratitude as a happy result, and quite another to love with the aim of receiving such gratitude. The latter approach, Lear’s contractual understanding of love, can easily mar the authenticity of a relationship—for true love seeks the good of another, not another’s goods.
Unfortunately, as Lear’s method for dividing up his kingdom suggests, this is the idea of love with which he has raised his daughters. Lear may be genuine in his feelings for Goneril and Regan, but in teaching them by example that love can be quantified and expressed in units of goods and privileges, he has taught them to love nothing but the goods and privileges themselves. The result is two moral and emotional monsters who lack the capacity to care for anyone without goods or privileges to bestow. Once King Lear has turned over the kingdom to Goneril and Regan and he ceases to be of practical use to them, he soon discovers the utter untenability of his position. Instead of the care and gratitude that his “theory” of love predicted, he confronts a harsh reality of rejection and betrayal. The chasm between his expectations and what actually takes place in the real world, not to mention the pain of a broken, beaten heart, drives the old king to madness.
Lear is at his lowest point when Cordelia meets him next. Separated from his daughters, bereft of his train of knights and attendants, and even homeless, the king has fallen into despair and descended into an emotional and psychological hell. When Cordelia approaches him, Lear’s words about his current state allude to images of Hell and Hades found in the Bible and Greek mythology. “You do me wrong to take me out o’th’ Grave,” he says, “Thou art a Soul in Bliss, but I am bound / Upon a Wheel of Fire, that mine own Tears / Do scald like molten Lead.” 6 The implication in Lear’s description of himself as a damned, despairing soul is that Cordelia appears as an image of divine love. Lear’s description of Cordelia in this passage as a “Soul in Bliss” and as a “Spirit” who comes bringing “Fair Day Light” is consistent with the words of the Gentleman found in act four, scene three. Speaking of Cordelia’s tears when she learns of her father’s plight, he describes how “she shook / The Holy Water from her Heavenly Eyes” and “once or twice she heav’d the name of ‘Father.’” 7 The religious images and allusions in these passages are reminiscent of Dante’s Beatrice coming to lead the poet into Paradiso. They also recall popular depictions of the Virgin Mary, and even Christ’s passion (when he calls up to his Father in anguish) and descent into Hell.
These descriptions of Cordelia are thoroughly appropriate when one realizes the kind of love she brings. Hers is a charity—in the full, theological sense of the word—totally unlike her father’s idea of love. Cordelia, unlike Lear, does not believe that it is possible to quantify and portion out love. After a time of separation from her father, the plot finally gives her the opportunity to articulate with actions her vision of love—a filial love that is not a product of her father’s relative merits, or of the lands and titles that he may choose to give or take away. After all, she already owes him nothing less than her life, and in this she discerns an unconditional duty to love, which is very different from a contract. This is not the joyless duty of a man forced to perform an unpleasant job, but a free and honest response to Lear’s paternal love. In the former case, even if the job were not unpleasant, a contractual duty would in some sense always be joyless, for it does not exist for its own sake but only as a function of some other good. Cordelia’s gratitude is not a measured and proportional payment for favors received, but a gift freely and irrevocably given. This explains her response when an anguished Lear tells her, “I know you do not love me, for your Sisters / Have, as I do remember, done me wrong; / You have some Cause, they have not.” 8 Cordelia’s reply, “No Cause, no Cause,” is not only a statement that she has forgiven and forgotten, but actually a revelation of the character of her love. 9 In saying “No cause” Cordelia means precisely that: she has no cause not to love her father because her love, once given, is not the kind that can be revoked.
If this kind of love sounds too unearthly, it is because that is precisely what it is. It is no wonder that Shakespeare compares Cordelia to a heavenly being. But this unworldly love is necessary, for it is the only kind that has strength enough to introduce a thread of comedy into this renowned tragedy of tragedies. Cordelia’s love succeeds in retrieving Lear from his psychological hell. Moreover, it achieves a reconciliation of which Lear had explicitly despaired, and which it is reasonable to assume Cordelia did not expect. She asks her father to “hold [his] hand in Benediction” over her. 10 Lear, in turn, recognizes his error and begs Cordelia to “forget and forgive,” for he is “Old / And Foolish.” 11 In this manner, Lear and Cordelia bridge the separation that lies at the heart of the entire tragedy. Indeed, they do not merely close the breach. Cordelia’s love actually transforms the relationship between father and daughter, not to mention Lear himself.
This is the only instance in the play of a fully successful reconciliation. It is also the only example of Cordelia’s type of otherworldly charity at work. However, the play suggests that this form of love is truly not “of this world.” Having emerged from Hell, Lear is eager to recreate Heaven on Earth. Hence the speech in act five, scene three, in which he describes his vision of life in prison with Cordelia as a sort of earthly paradise. His words are a beautiful assertion of the fullness of his reconciliation with his daughter, but they are also misguided and unrealistic. Lear still needs to learn that man can make no Heaven of this earth.
The result of the king’s attempt to build a premature Heaven for himself is that Cordelia is hanged in her cell by the treacherous Edmund’s order. The question, then, is what effect this has on Lear before he dies at the end of the play. Does he return to “Hell”? Or has the retrieval of his daughter’s love redeemed him? Does he die in despair or hope? Madness or sanity? The answer is by no means clearly cut, but we may venture an inference in light of our previous interpretation. Lear clearly dies of a broken heart, overwhelmed with sorrow as he holds the body of his beloved daughter. The plan he conceived for Heaven on Earth has miscarried. Lear totters before an abyss of despair as he contemplates that Cordelia will “come no more, / Never, never, never, never, never.” 12 Indeed, Lear has despaired of finding ultimate happiness in the world, but that is not necessarily the whole story. In his final vision, Lear believes he sees life in Cordelia. “Do you see this?” he asks, “Look on her? Look her Lips, / Look there, look there.” 13 Lear leaves the world with an image of hope in his heart; he is no longer the despondent, helpless creature that appears in act four.
Is this a mad hope? Perhaps that is the essential question Shakespeare presents in King Lear. Finding an answer is the very business of life, and therefore beyond the scope of this essay. However, the fact that Lear’s love had to rise from its worldly state to survive suggests a similar process as regards his hope. The love Cordelia has taught him does not pass away with her body. This adds hints of the eternal to his mysterious final vision. And even if Lear dies in madness, he dies reconciled. Certainly, the old king suffers at the end, but one might humbly propose, alongside G.K. Chesterton in his Tremendous Trifles, that “he is a [sane] man who can have a tragedy in his heart, and a comedy in his head.”
This interpretation of the relationship between King Lear and Cordelia makes sense of many parts in the play that at first glance appear as pieces of different puzzles. King Lear is not a pure tragedy; it remains largely inscrutable if one attempts to understand it as such. As the story develops, an element of comedy become essential to the plot: the restoration of the relationship whose breach lies at the heart entire drama. Lear and Cordelia die after they have been reconciled, but the descriptions of their deaths are not nearly as harrowing as Lear’s earlier torture within his psychological Hell. The reason is that this is a play about love, not about the cruelty of mortality. The drama aims to provoke horror at the possibility of life without love, not at the reality that life has an end. By the end, Lear and Cordelia have restored and transformed their love, and it can no longer be shaken by the buffets of the world. Though father and daughter die, they are not defeated. And while King Lear is adamant that no ultimate redemption is possible within the world, it subtly dares to suggest that such redemption may nevertheless exist.
Bernardo Aparicio García is president of Dappled Things. He is an alumnus of the University of Pennsylvania and is currently studying the Great Books at St. John’s College in Annapolis.
1. [There is nothing pejorative in this use of the word “conventional,” one might add; the Iliad is quite a conventional epic, if only because it is the model for all subsequent Western epics.]↩
2. [Shakespeare, William, King Lear, ed. John F. Andrews (London: J.M. Dent, 1993), I.i.40-46.]↩
3. [Ibid., I.i.63.]↩
4. [Ibid., I.i.89.]↩
5. [Ibid., I.i.101-103.]↩
6. [Ibid., IV.vii.43-46.]↩
7. [Ibid., IV.iii.26-32.]↩
8. [Ibid,. IV.vii.71-73]↩
9. [Ibid., IV.vii.73.]↩
10. [Ibid., IV.vii.56.]↩
11. [Ibid., IV.vii.82-83.]↩
12. [Ibid., V.iii.304-305.]↩
13. [Ibid., V.iii.307-308.]↩ | <urn:uuid:07d27084-2e18-46fc-bd32-feae61010b0a> | CC-MAIN-2017-17 | http://dappledthings.org/4397/a-mad-hope/ | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121644.94/warc/CC-MAIN-20170423031201-00072-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.966615 | 4,324 | 3.296875 | 3 |
THE problem of
transportation was almost as important to the lumber industry as the
problem of production itself. The era of railroads had not yet begun
and the isolated mills at the mouths of the rivers emptying into
Green Bay, Big Bay de Noc, and Bay de Noquette,— practically the
entire northern lake region,— depended upon boats to bring them
supplies an(l to take their output to market.
The importance of
navigation on the lakes, although it was the great highway between
the East and West over which the grain from the rapidly growing
prairie states was carried in exchange for the manufactured products
of the older cities along the seaboard, was not generally recognized
by the federal government. The harbors were in wretched condition
and lights and buoys to guide the mariner and warn him of dangerous
passages were few.
When I came west in
1845 there was only nine feet of water in Milwaukee harbor and
conditions at Chicago were just as bad. Neither were there any tugs
to assist a vessel to a safe berth. In Chicago, for many years,
ships were pulled out of the river to the lake by hand, a head wind
necessitating the use of a windlass. What little aid had been
extended by the federal government in improving these conditions was
withdrawn in 1842 or 1843 when the Democratic administration, then
in power, suspended all appropriations for river and harbor work. As
a result every sailor on the lakes became a Whig and afterwards a
The idea that the
lakes were little more than a "goose pond" prevailed in Congress for
some years later. I remember hearing Captain Blake, a veteran of the
battle of Lake Erie, who had achieved notoriety in these waters in
the early days for his profanity and red waistcoats, expressing the
fervent hope, when he had a United States Senator aboard as a
passenger, that he might run into a gale to convince the
unsuspecting legislator of the hazards of inland navigation. Even at
the "Soo," the great gateway from Lake Superior, no improvements had
been made and freight was transferred around the rapids on a small
Sailing a ship was
not unlike blazing a. way through the forest. With conditions
wretched as they were the navigator was practically without charts
and the master figured his course as nearly as he could, estimating
the leeway and varying influence of the winds. By comparison with
the difficulties that confronted its the lot of the sailors of the
present day is an easy one. With compasses and lights the course of
their vessels is as plain as the tracks of a railroad, and the
steam-driven propellers keep the ship to it without variation and
bring her to harbors equipped with all the aids modern ingenuity has
been able to devise.
Among the trips we
made in the forties was one, which I still have vividly in mind,
from Racine to Escanaba on a vessel laden with hay for the lumber
camps. After setting sail we saw neither light nor land but followed
our uncharted course very much as instinct guided us. Through
Death's Door, the narrow passage from Lake Michigan into Green Bay,
we groped, feeling our way with the lead line, and headed cautiously
for the mouth of the Flat Rock or Escanaba River. Proceeding
blindly, sounding as we went, we came about in five feet of water,
stirring up sawdust from one of the mills. From this position we
retreated cautiously to deeper water, lowered a boat, pulled ashore
in the dense fog and with the aid of a compass found our general
bearings. I returned to the ship and when the fog lifted detected a
vessel lying close by. To our intense relief we discovered that we
were in the right anchorage.
At no time during
these early voyages did it seem that we were free from threatened
danger. The officers were constantly on the alert. During these
years I made several trips with Captain Davis, of the "Champion," as
a passenger. While the boat was under way he never took off his
clothes so that he might be prepared to answer a call to the deck at
To make our situation
worse the Green Bay region was largely inaccessible, except through
the dangerous passage, Death's Door, and a long detour was necessary
to clear the peninsula. This disadvantage was overcome to some
extent later by the construction of the Sturgeon Bay Canal. At the
mills also, where there were no harbor facilities, loading the
vessels was difficult as they were anchored some distance off shore
and the cargoes had to be taken out in scows or rafts. Gradually we
improved these conditions and the problem was eventually solved by
the construction of harbors and the building of railroads.
As I have said I had
already had a glimpse of the sea at St. John, New Brunswick, and at
Boston and Bangor, and was on the verge of embarking on Captain
Eustis's ship for Nova Scotia as a cabin boy when fate stepped in
and decreed otherwise. No one could have lived in Maine during the
early part of the last century without having given ear in some
measure to the call of the sea. Nearly half the people of New
England were sailors when I was a youth, a condition which
maintained for the United States an enviable position as a maritime
power and led to the upbuilding of a great merchant marine before
the Civil War. Whaling, too, was a great industry. Many of the men
who attended Harvard and other universities went on a whaling voyage
for two or three years before taking up their professions or,
possibly, for lack of opportunities to take them up, or sailed for a
time before the mast. One of these, a Harvard graduate, I came in
contact with on a trip from Milwaukee to Escanaba. As a common
seaman he received wages of sixteen dollars a month.
At Bangor, when I was
a youth and the spell of the sea was upon me, I had laid awake many
nights and in the calm security of my bed pictured myself as the
master of a vessel on the lee shore in a gale of wind. The small
catalogue of nautical terms at my command T used with extraordinary
facility and issued orders with decisiveness and despatch. In these
imaginative predicaments I never lost a ship.
My interest in
seamanship was revived by my experiences on the lakes which had an
element of danger sufficient to stimulate a young man's passion for
adventure and the time came when I wanted to try sailing as a
reality. Frequently in the summer time I was a passenger on the
vessels on which we transported lumber from Escanaba to Milwaukee.
On more than one occasion I was permitted to take the wheel.
Besides, at Escanaba on Sundays, the only time I had to myself, I
availed myself of every opportunity to practice sailing and became
so proficient that as early as 1,848 I had acquired something of a
reputation as a sailor.
At this period the
Mackinaw boat was the most common type of small vessel in use and
was deemed the most effective for all sorts of weather. They were
particularly seaworthy and, if properly handled, could survive any
gale on the lakes. These and sailing vessels carrying both
passengers and freight were the only means of transportation we had
on Green Bay and between Green Bay points and Milwaukee and Chicago
during the navigating season from April 1st to November 30th. During
the remainder of the year the bay was frozen and to communicate with
Green Bay city we went on the ice. I also had opportunity to
exercise my ingenuity in sailing a Mackinaw boat at the "Soo," where
I went to enter lands for the company. Louis Dickens, one of the
pioneer merchants, was always willing to turn over his vessel to me
and I made numerous short excursions in the waters in the vicinity.
Several years later, in 1838, I brought to Marinette a very good
Mackinaw boat which, in a heavy sea, I ran over the bar into the
My desire to sail
before the mast had always met with the unrelenting opposition of
Mr. Sinclair and, as a result of his maneuvering, my experiences in
that direction were confined to a single trip. He evidently
proceeded upon the theory that an unlimited dose of sailing would
cure me of any weakness I had for the sea, or the lakes, as it
happened to be, and he took occasion to administer it on a trip from
Milwaukee to Escanaba and return on the large schooner "Champion."
The owner of the
vessel was Mr. George Dousman who, as I have said before, was
engaged in the shipping and warehouse business in Milwaukee. He was
a friend of Mr. Sinclair, and by his direction the conspiracy having
been arranged beforehand— Captain Davis, the commander of the
"Champion," made it a point to show me no favors on my first voyage
as a real sailor and to accord me no more consideration than was
given the other men. When we left Milwaukee I went into the
forecastle with the crew and performed the duties allotted to me. We
reached Escanaba without mishap, took on cargo of lumber and
returned to Milwaukee.
Mr. Sinclair, by
accident or design, was a passenger on the return trip. Having
doubtless kept me under scrutiny and thinking that his plan had
succeeded by this time, lie asked me one morning where I had slept.
"In the forecastle
with the men," I replied.
"Didn't you feel mean
with a lot of drunken sailors?" he added.
"No," I said. "Here
in the vessel I am a sailor before the mast as they are and I can't
prevent their drinking."
Perhaps I was
somewhat defiant but, to myself, I was ready to admit that I had my
fill of sailing, at least as an occupant of the forecastle. None the
less I completed the voyage. When we arrived at Milwaukee and the
men prepared to unload the boat, which was the custom at that time,
I needed no further discouragement and told Captain Davis that he
might put a man in my place. I accept my own counsel but I had made
up my mind that failing was not a desirable avocation and that the
wages of sixteen dollars a month during the summer and from twenty
or twenty-six dollars on the last trip in the fall were very small
compensation for the hard work and discomforts to which one was
Later Mr. Sinclair
talked to me about my lost aspirations in a more kindly vein and
advised me not to choose a career of this kind. When he asked me to
get into the carriage and go with his family to .Janesville, I went
without urging, but I made no confession of defeat, keeping that
part of the situation to myself.
The experience was
the extent of my career as an ordinary seaman but I had not yet
(lone with sailing. A short time after going to the farm Mr.
Sinclair sent me to the ''Soo" to enter lands while he vent to the
mills at Escanaba. There he purchased the schooner "Galhinipper" -
the boat which I had hauled out on the ways several years before -
and instructed me to hire Captain Johnson Henderson, or anyone else
I might select, to take command of her. I made the arrangement with
Henderson, who earned lumber from Escanaba for two or three years,
until he lost the vessel, and went with him as mate.
The first few trips
were uneventful but in the early part of September, 1850, while on
way to Escanaba, with the boat light, we ran into a storm. There
were eight passengers aboard, a yawl in tow and a horse on deck all
bound for Bailey's Harbor. The yawl could not be taken aboard
because the schooner was very "crank" when unladen and had capsized
two years before at Presque Isle on Huron.
A terrific gale came
up and, while fighting the storm from Friday morning to Sunday
afternoon, we drifted from what is now called Algotua, then known as
Wolf River, twelve miles south of Sturgeon Bay, to a point ten miles
south of Racine. The yawl parted its painter and went adrift to the
east side of the lake; the horse died at midnight on Sunday, when we
were off Milwaukee harbor, and the passengers, who had despaired of
ever seeing land again, were back where they had started. The storm
which we had happily survived was said to be one of the most severe
that ever swept Lake Michigan. At Milwaukee just as we were about to
embark upon the momentous voyage I had met a Captain Davis, the
owner of a vessel called the "General Thornton" who was preparing to
cross the lake for Manistee where he was to take on a cargo. He also
ran into the storm, his ship went down and all of the crew were
drowned. He saved his own life by lashing himself to a spar on which
he floated for six days and eleven hours before he was picked up.
Famished and exhausted he sought to keep himself alive by sucking
the blood from his arm. A number of years later some of the old lake
captains wrote to me asking for information concerning Captain
Davis, who was a Welshman and an interesting character, but I knew
nothing of him except that he had gone to Chicago where he was
employed in a sail loft. After that I had lost trace of him.
After I had made a
few trips on the "Gallinipper" as mate the company commissioned me
to buy horses, oxen and supplies, another ruse of Mr. Sinclair's to
divert my attention from sailing. Mrs. Sinclair, whose maternal
interest in me had not diminished, also pleaded with me to give it
up as a career. None the less I was still absorbed in it and during
the following year, 1851, I purchased a half interest in the "Gallinipper"
on July 5, when she was on her way to Escanaba. This was not. a
fortunate venture. On July 7, when off Sheboygan the vessel capsized
and sank, a total loss although all of the crew were saved. The
transaction not having been recorded with the under-writers I saved
my outlay for the purchase. The untoward experience did not deter me
front making further ventures of the same kind. Shortly afterward I
went to Milwaukee and bought the controlling half interest in the
schooner "Cleopatra" from Captain William Porter. The ship was under
charter, the Sinclair and Wells Company owning one half. I came in
on her to Escanaba and about sunrise went ashore to make
arrangements for taking on a load of lumber. Mr. Sinclair was just
sitting down to breakfast.
"Where did you come
from?" he asked in surprise.
"On what vessel?"
"Did you buy Captain
Porter out?" he asked, evidently suspecting what had happened.
"Yes," I said.
"Who is captain of
the vessel now?"
"They call me captain
when I am aboard," I replied.
interchange obviously convinced Mr. Sinchair, who was a man of much
determination, that I had taken up sailing in earnest and he
capitulated, making no further effort to dissuade me from my course.
''Sit down," he said, "and eat your breakfast." That concluded the
I made nine trips on
the ''Cleopatra" during the summer and autumn of 1851, and netted in
profits six hundred dollars. Freight rates, however, were low and
the returns small compensation for the outlay and the hard work, not
to speak of the risk encountered. The following year I made only one
trip, after which I put another man in my place and went back East
and was married. I had definitely and finally arrived at the
conclusion that sailing was too hazardous an occupation and offered
no attractions as a permanent career. In 1853 I sold my interest in
the vessel, deciding that I wanted no more of it as seaman, officer
or vessel owner.
Although I had
abandoned this course the experience I had on the water was none the
less valuable, and I never lost interest in this aspect of activity
on the lakes. The development of water transportation, particularly
in connection with the commercial growth of Wisconsin and Michigan,
is a fascinating story, especially to one such as I who has seen the
sailing vessels and side-wheelers of the mid-century give way to the
great freighters plying between the commercial centers which once
seemed mere villages on the fringe of a vast wilderness. The
outlines of the earlier period are growing dimmer as they recede
with the years, but I am not likely to forget, however little it may
interest later generations, that to the men who then sailed the
lakes are due the honors of pioneering no less than the men who
brought the unbroken prairie to bear and laid open the wealth of the
In some of the less
important nautical incidents of this time I played a small part. The
first steamboat that came into the Escanaba River was the
"Trowbridge," a small side-wheeler, built in Milwaukee and used to
carry passengers ashore from the Buffalo boats. In 1845 the vessel
carried an excursion party to Green Bay and ran into the Escanaba to
obtain wood sufficient to carry it back to Milwaukee or Washington
harbor, the latter a place frequented by steamers on the route from
Buffalo to Green Bay. The next steamer to enter the Escanaba was the
"Queen City," a vessel drawing only forty inches, which I took up as
far as the water-mills in 1858. Isolated as we were, excursions of
this kind were about the only diversion to which we had recourse on
days of leisure, and women and children as well as the men were only
too glad to avail themselves of an opportunity to take a trip of
this kind. A number of times I commandeered all the vessels at hand
on which practically the entire population of the small settlements
embarked for an outing.
The "Morgan L.
Martin," a Fox River boat, was the first vessel oil Menominee River
to tow scows and rafts of lumber from the mills to vessels at anchor
outside the bar. At this time, 1860, the Menominee had no harbor
improvements, and there was only from three and one-half to four
feet of water at the mouth. It was the practice of the mill owners
to pull the scows and rafts out to the waiting vessels by hand, a
process which cost from five to ten dollars, low as wages were at
the time. Not long after I took charge of the null of the N.
Ludington Company at Marinette, we decided to experiment with a tug
and purchased the "Morgan L. Martin" with this end in view. In this
we were successful. We found that we could reduce the cost of towing
more than one-half. During the first year we charged $1.50 to tow a
scow both ways. The next year the price was increased to $2.00. The
mill owners, who looked upon the experiment with scepticism, soon
came to the conclusion that the tug was indispensable, and we added
two or three more to our equipment.
The 'Morgan L.
Martin" was of so light a draft, thirty inches, that it could
venture into streams which were not considered navigable for steam
vessels. I took her into Cedar River for the first trip that had
ever been made by a boat of her type in that region, and into Ford
River where the water was very shallow. But the most unusual
achievement of the vessel was a trip four miles tip the White Fish
River to the water-mill, which I took in 1860 with one hundred and
fifty people from Flat Rock and Masonville, one Sunday afternoon.
This was hailed as an extraordinary nautical event, the first and
probably the only occasion when a vessel of considerable size had
gone so far up the river. As we threaded our way cautiously up the
narrow stream the echo of our whistle reached the ears of Peter
Murphy, the superintendent of the White Fish property, who was in
one of the waterwheel pits making repairs. When we neared the mill
he emerged covered with grease and astonished beyond measure at the
unfamiliar sight. For a time, he said, he was almost convinced that
the boat was approaching overland from Lake Superior on the Grand
Island trail. In celebration of the event he wished to serve dinner
for the entire party, but I persuaded him instead to accompany us in
his boat back to Masonville. When we came to turn about we found it
necessary to shovel away a portion of the river bank to give us
adequate space and Burleigh Perkins, one of the pioneers of the
region, and some other men edged the steamer around with handspikes. | <urn:uuid:1cb190e1-2337-4db6-9349-c1a1eeb3a17d> | CC-MAIN-2017-17 | http://www.electricscotland.com/history/stephenson/chapter7.htm | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121665.69/warc/CC-MAIN-20170423031201-00425-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.980482 | 4,642 | 3.296875 | 3 |
(You can download a .ZIP file containing an up-to-date version of these files)
There are lots of glossaries on the Net; for example, a similar list to this one can be found in the Big Dummy's Guide at http://www.eff.org/papers/eegtti/eeg_271.html
Microsoft's rival to Java. ActiveX is a programming device that allows developers to add interactive content to Web pages. However, there are some doubts over its security, and it will only work on PCs. Not that either concern has prevented Microsoft's growth in the past.
"As far as I know"
Probably the finest and most powerful search engine on the Web. Visit http://www.altavista.digital.com/ and try it. Read the advanced help on how to improve and refine your queries, and learn to use the recently added Live Topics to categorize your results.
A derogatory term for someone who's not too smart or Net-savvy. Rhymes with "loser".
Derived from the fact that AOL offer cheap and simple to use Internet access, and consequently were amongst the first to introduce a large number of clueless newbies into the system, a feat they've managed for several years now :)
Over time the term has been joined by other terms derived from ISPs who offer free email services (e.g. hotmail).
"Asymmetric Digital Subscriber Line" (ADSL). An up-and-coming standard for modems that receive faster than they transmit. Ideal for squeezing the last drop of bandwidth out of a home telephone line.
A term used to describe the amount of access one has to a given Internet resource by analogy to radio bandwidths. The more bandwidth available the faster a given amount of data can be transferred, and hence the greater the amount of data that can be transferred.
As more graphics, audio and video arrive on the Net, so the demand for bandwidth increases. Consequently one of the few cardinal sins most frowned upon by the Internet community is to waste bandwidth, a resource scarcer than water in some parts.
Private individuals on modem lines have the least available bandwidth. This means they are least likely to download graphics, large software packages etc.
Universities often have the greatest access to bandwidth, and may think nothing of video lecturing over the Internet.
When designing a web page, it is vital to bear in mind the bandwidth that your desired audience is likely to have. If you make your content too large, they are likely to literally switch off. This is one reason you will often see a text alternative offered for a site.
Not strictly part of the Internet. These are usually machines that you connect to via a modem line. Depending on what is on offer, it may be free, charge a membership fee, or use premium-rate telephone lines.
Most browsers allow you to bookmark a favorite URL in order that you can easily find it again next time you run the browser.
"By the way"
"Call for votes". New USENET newsgroups are often created through the process of stating a charter, and then calling for a vote on whether the newsgroup should be created. This is quite common when large newsgroups decide to split into smaller sub-groups. A certain minimum and majority are required for the group to become "official" and thus accepted by most news feeds.
Special programs that reside on a web server. Usually these handle particular requests "submitted" from an HTML form. The normal practice is to execute some calculation and dynamically construct an HTML page that is sent back to the client browser as a response.
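As a sketch of the idea, here is a hypothetical, minimal CGI-style script in Python. The form field `name` and the greeting are invented for illustration, and real CGI scripts of the period were more often written in Perl or C; the mechanism, though, is the same: read the submitted data, compute a result, and print a header followed by a dynamically built HTML page.

```python
# Hypothetical CGI-style script: reads a form field from the query
# string and emits a dynamically constructed HTML page.
import os
from urllib.parse import parse_qs

def build_response(query_string):
    """Build the header and HTML body a CGI script would print."""
    fields = parse_qs(query_string)
    name = fields.get("name", ["stranger"])[0]
    body = "<html><body><h1>Hello, %s!</h1></body></html>" % name
    # CGI output starts with a header, then a blank line, then the page.
    return "Content-Type: text/html\n\n" + body

if __name__ == "__main__":
    # A web server passes the submitted form data in QUERY_STRING.
    print(build_response(os.environ.get("QUERY_STRING", "")))
```

The server runs the script once per request, so everything the browser sees is generated fresh each time.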
"Cascading Style Sheets". These are web documents (usually with a .css extension) used to add styling to HTML documents. The basic idea is that HTML tags are used to mark up the structure of a document, and a style sheet is used to layer fonts, colours etc onto the text associated with each tag.
The idea is to separate form from content, and allow users and authors to specify their own preferred stylings.
CSS is starting to be supported with the V4.0 browsers, although the support in those browsers is far from complete. Fuller support is expected in the V5.0 browsers.
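A minimal illustration of the idea (the file name and rules here are invented): an HTML document links to a style sheet with a tag such as `<link rel="stylesheet" type="text/css" href="site.css">`, and the sheet then layers presentation onto the document's tags:

```css
/* site.css -- a hypothetical style sheet.
   The HTML marks up structure (h1, p); these rules add presentation. */
h1 { color: navy; font-family: sans-serif; }
p  { line-height: 1.4; font-family: serif; }
```

Change the style sheet and every page that links to it changes its appearance, without touching the HTML itself.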
Where a mailing list has high volume, it can become difficult to cope with the large number of posts that result.
In such cases, you may be offered the mailing list in digest form, that is all the posts to the list are collated and sent to you as a single larger mail every so often.
A name given to an Internet node. Not all nodes have names.
See Domain Names for a fuller description.
A "proper" name for smileys
"Frequently Asked Questions", or rather, their answers. Because the ratio of newbies to old hands is permanently high, people have taken to compiling lists of typical questions and their answers. The idea being that a newbie gets presented with the FAQ, reads it, and then doesn't ask the same questions again (which the old hands are sick of by this time).
Most FAQs are written by enthusiasts, and although their accuracy cannot be guaranteed, they are usually a veritable mine of information, well worth seeking out.
Most FAQs are released regularly on a fortnightly or weekly basis. In addition to this you can find them at RTFM and in the various .answers newsgroups.
Another location for finding FAQs is http://www.cis.ohio-state.edu:80/hypertext/faq/usenet/FAQ-List.html
A gateway between your machine(s) and the Internet. Commonly used by companies to limit or monitor external access to their machines and, on occasion, to control what their employees can access over the Net.
The act of flaming someone is the act of responding in a highly critical, sarcastic, or ridiculing manner. The name presumably derives from "to shoot down in flames". Anyone who posts an offensive article can expect to get flamed, probably by more than one person.
Vicious arguments between two or three sides often become known as flame wars.
A followup is a post to a newsgroup in response to an existing article. The combination of the original article and all its subsequent followups is known as a thread.
Freeware is any software you can (legitimately) get for free. See also shareware and postcardware
File Transfer Protocol. Originally a means of connecting to an Internet node and logging in to download files from a server.
Nowadays FTP is commonly accessed via web browsers. See the section on Netiquette for a fuller description.
"For what it's worth"
This is the symbol for the Ctrl-H or backspace key. People use it (usually) humourously as if they started to say one thing, and changed it to something else (usually more polite). For example
"I'll have to ask the drago^H^H^H^H^H wife."
Each message sent over the Internet has a header. These are usually hidden from you, but in the case of email it can be useful to check these occasionally when you wish to check a message's authenticity.
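For illustration only (every name and address below is invented), the start of a typical email header might look like this:

```
From: fred@example.com
To: jane@example.com
Subject: Meeting on Friday
Date: Sat, 4 Dec 1999 10:15:00 +0000
Message-ID: <19991204.1234@mail.example.com>
Received: from mail.example.com by relay.example.net
```

The Received: lines in particular record the route a message took, which makes them useful when checking whether a message really came from where it claims.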
These are pieces of software that extend the capabilities of your browsers, usually by handling different file types.
A helper application is an application launched independently from the browser. An example might be Word for windows which can be configured to "help" display .DOC files.
A plug-in is a piece of software designed to integrate with the browser. Plug-ins are increasingly being used to handle audio and visual content of pages inside the browser.
"Hope that helps"
Hypertext Markup Language. The language used to define web pages: their layout, the images to be shown, and hyperlinks.
An introduction to this vast topic can be found in Creating your own web pages
In a browser these are the highlighted links which, when selected, will cause the browser to go to the linked resource. These days links can be added to text or pictures.
You probably got here by clicking on one :)
"Internet Relay chat". A software program that allows you to "chat" to people over the Internet in real time. That is, you type a message in, and they see it as you type it.
"If I remember correctly"
The low-level protocol which makes up the backbone of the Internet. The IP allows messages to be routed from one node to another via whatever happens to be available at the time. It is this "pass-the-parcel" approach to networking that effectively makes Internet access so cheap. One simply needs to connect to the nearest point of contact, usually a local phone call away.
"In my (humble) opinion"
The phrase coined by Al Gore to describe the Internet.
Another phrase coined to describe the Internet.
That's Intranet with an 'a', not Internet with an 'e'.
The adoption of Internet software and standards to meet an organisation's internal networking needs.
See the section on Intranets for a fuller discussion.
"Internet Service Provider". An organisation that provides some routing nodes for the internet, and sells access to the Internet, usually for a monthly or annual fee.
"I seem to recall" or "I seem to remember"
A programming language designed to run in a secure and platform-independent "Java virtual machine" (JVM). Such virtual machines can be embedded inside Internet browsers (amongst other things), making Java an ideal choice for programming software that can be distributed and run over the Internet.
Has since spawned a whole industry of Java-related puns.
A scripting language originally developed by Netscape. Microsoft have tried to get Visual Basic Script accepted as a rival scripting language.
Unsolicited email. See also Spam
"Laughs out loud". Used to signify amusement at something being quoted.
Lurking is the act of joining a newsgroup or mailing list and just listening without contributing. This is a perfectly respectable thing to do, and lurkers probably account for 90% of the readership in some cases.
It's claimed that famous rock stars lurk on their fans' mailing lists to find out what people really think of them. TV script writers often do the same.
The act of breaking one's silence in this context is called de-lurking.
Mailing lists are single-topic discussions carried out through email. As such they represent a less public and more universally accessible form of newsgroups (since not everyone has access to news).
See Mailing Lists for more details.
The <META> tag can be used to simulate "header lines" when a HTML page is passed to a browser. These tags must be placed in the <HEAD>..</HEAD> portion of the page.
The syntax is
<META name="some name" content="some value">
where "some name" is the header line you're emulating, and "some value" is the value you want it to have.
An example might be:
<META name="description" content="This is Fred Bloggs's page"> <META name="keywords" content="Fred,Bloggs,wholesale,butcher">
When indexed by a search engine the description you supply is used in preference to the normal showing of the first n lines of the page (which are not always so clear).
Similarly the keywords you supply are used in indexing the page.
Controlling these attributes increases your chances of being correctly found by someone using a search engine... the preferred method of browsing these days.
The act of quoting an entire article just to add a one-line comment. So called because people used to quote lengthy articles just to add a "me too" to the opinion expressed in the original.
This is seriously bad Netiquette as it is one of the purest wastes of bandwidth known to Netkind.
A site that keeps exact copies of popular files for download. Mirror sites allow the pressure for these popular files to be spread geographically round the Internet and the globe.
Some newsgroups and mailing lists are "moderated". In these cases all articles posted to the group are checked by a Moderator.
The moderator is free to
- reject the article. This is usually only done if the article violates the group's charter (e.g. is off-topic like Spam, or is too long)
- edit the article. This is relatively unusual, but the moderator may correct spelling, factual errors, or remove information supplied in a previous post.
- accept the article. In this case the article is forwarded to the newsgroup and mailing list, and enters the public domain.
Moderation is a good way of improving the Signal to Noise ratio in a group, but is hard work for the moderators who frequently do the job voluntarily and are unpaid.
A derogatory term for someone who is keen on computer technology. In the early days, it was only the nerds who could drive the software.
The Net-etiquette. Basically the dos and don'ts of Internet usage. See the section Netiquette for a fuller description.
Newsgroups are like discussion groups dedicated to different areas of interest.
See the chapter on News and Usenet for a fuller discussion.
A mildly derogatory term for anyone new to the Internet.
By its very nature the Net always has a high proportion of "new" people. New people have the property of making all the mistakes that all us "old hands" made 18 months ago, and would never admit to. They're also cannon fodder for all the "get rich quick" pyramid letters that saturate the Net.
However, they're not all bad. It was newbies always asking the same questions that forced people to create the various FAQs that exist.
Newbies need know only two things... what Netiquette is (so they can avoid the mistakes we all made), and what FAQs are (so they won't ask the same questions as we did).
These terms describe whether or not you are currently connected to a remote machine. In the context of the Internet "online" means you are connected to the Internet, whilst "off-line" means you are not.
These terms are more important when you use a modem to gain connection as being "online" usually entails an active telephone line which in many parts of the world costs money, and in all parts of the world reduces other people's chances of getting a connection.
If you do connect in this way, then it is important to choose software that allows you to do as much as possible off-line. Activities that take time, and which are best done off-line if possible, include :-
- Composing email. Most email packages allow this.
- Reading Usenet posts. Many newsreaders can support this.
- Modifying your web pages. All editors will allow this.
Increasingly the term "on-line" is becoming synonymous with being available through the Internet.
Postcardware is any software you can (legitimately) get for free, but the author would like to receive a postcard in thanks. See also shareware and freeware.
The author of this guide offers some of his own software as postcardware - see AscToTab. If you like this guide, feel free to send me a postcard at the address listed there.
Posting is the act of adding an article to a newsgroup. Each article is known as a "post". Newsgroup articles are arranged into threads all on the same subject within the group.
If an article is of interest to more than one group, it can be simultaneously posted to multiple groups. This is known as "cross-posting".
Excessive cross-posting (i.e. posting to too many groups at once) is discouraged and borders on being Spam.
Standard email names for people who are in charge of the email/web at a given site. With the rise of Spam, "abuse" has almost become another standard site address.
Portal sites are sites that want you to use them as your start page whenever you start up your browser. Many of the older search engines (My Yahoo, My Excite etc) are becoming portal sites. Other companies are setting up sites (e.g. www.netscape.net is a growing portal site, helped along by the "My Netscape" button added to later copies of their popular browser).
To attract you to use them they will offer free (sometimes web-based) email access, free web space, bookmark management as well as useful directory and search services.
In return you get to read all the adverts they display. Advertising has proved to be the first big money spinner on the web for sites that can attract large enough traffic.
In email or newsgroup posts it is common practice to "quote" from the item you are replying to. This can help the reader understand what points in the original you are responding to.
Quotes are signalled by placing a character in front of the quoted line, most commonly a ">". Some mail and news packages do this for you automatically.
If you quote a quoted reply, this ends up with two characters in front, e.g.

>> What's the terminal velocity of a swallow?
> African or European?
When quoting, only quote selected parts that are relevant. If nothing from the original is relevant, then don't quote anything. That way you avoid getting flamed for me too posts.
Increasingly the Web is being searched by "robots". These are pieces of software that read a web page, process it in some way (usually by analysing the content to see if it's of interest), and then following the links from that page onto the next web page.
This process is fully automated. Each site can set up a policy file (usually robots.txt in the top directory) indicating how the site wishes to restrict such access. Restrictions can be complete, or on a per-directory basis.
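As a hypothetical sketch (the directory names are invented), a simple robots.txt policy file might contain:

```
User-agent: *
Disallow: /private/
Disallow: /tmp/
```

The User-agent line says which robots the rules apply to ("*" meaning all of them), and each Disallow line marks a directory the site does not want visited or indexed.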
"Rolls on floor laughing". Signals great amusement at what has just been read.
"Request For Comment". These are actually a (large) series of documents describing all aspects of the internet. In so much as the Internet has any fixed rules and standards, these are the documents that describe them.
A full list of RFCs can be found at http://www.cis.ohio-state.edu/hypertext/information/rfc.html amongst others
In NetSpeak "Read the F*****G manual". Directed at people who ask questions without first seeking answers. People new to the Net or a newsgroup should first acquaint themselves with the prevailing habits of the group or list they have just joined, and if at all possible read the FAQS.
Then you can ask questions.
The phrase RTFM has now spawned a web site, dedicated to keeping copies of all the "manuals" you should be reading.
See the section on the RTFM site for more details.
Basically the same as RTFM, but for FAQs.
Shareware is any software you can "try before you buy". Normally such software only works for a limited time, or has some features missing. The shareware is then distributed freely and widely so that as many people as possible can try it.
Often shareware is much cheaper than straight commercial software. Sadly, not all shareware is of a good standard. However, there are some very good shareware programs, e.g. WinZip, Eudora, Forte's Free Agent etc.
Once you decide you want to keep it, you register the software (usually by paying for it) and get a fully-featured version.
This file is generated by the author's own shareware program AscToHTM.
See also freeware and postcardware.
An old electrical engineering term, describing the ratio of desirable information (signal) to undesirable information (noise). It gets pretty low at times on the Net.
Most email and news-reading packages allow you to add a few lines at the end of your messages as a signature.
See the Signatures section for details.
Little text pictures used to add mood to your informal text. The standard smiley is :) To understand this, rotate your head 90 degrees to the left to see a smiling face. Get it :^) ?
People also used to use <g> and <bg> to signal grin and big grin. This is less common these days. On the other hand, as HTML markup becomes better understood, people are using that style, such as
<smug> I told you so </smug>
(In HTML, tags appear in <>, and a / often signals the end of a markup tag.)
You can find one of a number of unofficial smiley dictionaries at http://www.eff.org/papers/eegtti/eeg_286.html. This isn't definitive, but will give you a flavour.
Ordinary postal mail in the "real" world. So called because it takes days to arrive compared to minutes in the case of email.
Because of the Internet's popularity, junk mail, chain letters and other undesirable forms of (self-)advertising are common.
Spamming is the act of spreading a message much wider than it would normally deserve to go. This usually takes the form of posting the same message to a (very) large number of newsgroups, or emailing it to a large number of people.
Some low-volume groups contain more spam than genuine content. There are a few counter measures you can take :-
- Try to find moderated as opposed to unmoderated newsgroups. These will ignore all spam messages, and will often be a guarantee of quality of posting.
- NEVER post a followup to a spam message. If the original message was posted to 100 newsgroups, so will your followup be. If you must followup, trim the newsgroup list.
- Complain to the postmaster concerned. This is getting harder to do, as spammers now routinely fake their sending email address.
- Ignore it.
The act of browsing the web, clicking on link after link, basically riding the wave of where your interest takes you.
A series of emails or newsgroup postings on the same topic. Although the order in which articles are threaded can sometimes be approximate, following the thread is like following a discussion, and can be a very useful way of picking up arguments on a topic.
Trolling is the act of deliberately posting a contentious post in a newsgroup with the intention of provoking a hostile response and starting a long thread or flame war.
These are sometimes very subtle and mischievous, but more often are simply offensive. It can be difficult at times to tell whether or not a post is a deliberate troll.
The process of transferring files from your computer to a remote computer that you are accessing.
Downloads are when you transfer files "down" to your computer, usually because you want to take a copy of something.
Uploads are the reverse, usually when you want to release something to a publicly accessible location (such as a new version of your HTML files).
See Downloading files and Publishing HTML for more details
"Unique Resource Location" or "Uniform Resource Location". Basically this is an internet address and takes the form
<Access Type>://<Domain address>/<resource address>
Where access type is http:, ftp: etc, the domain address identifies the Internet node to be contacted, and the resource address is the identifier at that node for what you want.
The actual details will vary according to the type of resource being accessed.
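As a modern aside (this uses Python's standard urllib library, which is not part of this guide), the three parts can be pulled out of a URL programmatically:

```python
from urllib.parse import urlparse

# Break an example URL into the three parts described above.
parts = urlparse("http://www.example.com/docs/index.html")

print(parts.scheme)  # the access type: "http"
print(parts.netloc)  # the domain address: "www.example.com"
print(parts.path)    # the resource address: "/docs/index.html"
```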
A fuller description of URLs can be found in RFC 1738, e.g. at http://www.cis.ohio-state.edu/htbin/rfc/rfc1738.html
The collective name given to all the Internet's newsgroups and the community of people that use them.
Usenet itself has become such a rich source of material that search engines like Altavista and Dejanews allow you to locate posts in it.
Dejanews in particular is excellent for searching for old discussions.
See News and Usenet
A term used to describe the inter-linked resources on the Internet, usually all that is browser accessible. Largely interchangeable these days with the Internet itself.
"What you see is what you get". A phrase usually applied to editors that attempt to present your data in a form identical to how it will appear to the user.
There are a number of so-called WYSIWYG HTML editors, but since HTML browsers are free to layout screens as they see fit, the resultant HTML is often very rigid.
A popular search engine that divides all its indexed pages into categories.
"Your mileage may vary" Used to indicate that the author has just expressed a possibly contentious opinion which they therefore feel you may wish to take a different view.
For example, I find Altavista to be the best search engine on the Net, but YMMV.
A way of compressing several files into a single, compressed file. This makes passing software and other information around the Internet much simpler and more efficient (because the files are smaller).
Sometimes the files are made into a self-extracting .exe file, but more usually you need special software to pack and unpack the .zip files.
Most people use Winzip which can be downloaded as shareware from www.winzip.com
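As an aside for programmers (this example uses Python's standard zipfile module, not WinZip itself), creating and reading back a .zip archive takes only a few lines:

```python
import zipfile

# Write a compressed archive containing a single text file.
with zipfile.ZipFile("archive.zip", "w", compression=zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("readme.txt", "hello from the archive")

# Re-open the archive, list its contents and read the file back.
with zipfile.ZipFile("archive.zip") as zf:
    print(zf.namelist())                   # ['readme.txt']
    print(zf.read("readme.txt").decode())  # hello from the archive
```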
© 1997-1999 John A Fotheringham and
Last Minor Update : 4 December '99 | <urn:uuid:71b928e5-20fc-409d-977a-c352eab1da96> | CC-MAIN-2017-17 | http://jafsoft.com/misc/course/course_10.html | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121267.21/warc/CC-MAIN-20170423031201-00307-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.926397 | 5,343 | 2.890625 | 3 |