The modern world exists in a state of cultural, political, and economic globalization. During the fifteenth and sixteenth centuries two nations, Portugal and Spain, pioneered the European discovery of sea routes that became the first channels of interaction between all of the world's continents, thus beginning the process of globalization in which we all live today. This essay examines the two pioneering nations, their motivations, their actions, and the consequences of their colonization. The Age of Exploration marked the highest point of Portuguese imperial power and wealth. At the beginning of the fifteenth century Portugal had an economy dependent on maritime trade with Northern Europe. Although Portugal lacked the wealth of other European powers, it would lead European society in the exploration of sea routes to the African continent, the Atlantic islands, and to Asia and South America over the course of the sixteenth century. Several factors contributed to Portugal becoming the European pioneer in sea exploration. The first was its geographical position along the west coast of the Iberian Peninsula, which allowed for the natural development of a seafaring tradition.
The second was the development of a complex maritime economy in which the port cities of Lisbon and Oporto became the commercial centers of the country. The third critical factor that made Portugal a pioneer in exploration was its monarchy. Portugal benefited from a relatively stable monarchy whose kings encouraged maritime trade and shipping ventures. The Crown gave every possible incentive by implementing tax privileges and insurance funds to protect the investments of ship owners and builders. Portugal was fortunate to have kings who recognized the kingdom's dependency on overseas trade and assisted in its expansion in every possible way. The stability of the monarchy was essential to sustainable economic growth, and it gave the kingdom a seventy-year head start over the Spanish, who were distracted by a civil war and the Reconquista of Granada. It was not until Columbus' voyage in 1492 that the Spanish were finally in a position to challenge Portugal's predominance in exploration.
Europe's interaction with the Caribbean began in 1492 with the Spanish-sponsored voyages of Christopher Columbus. Columbus' voyages to the Caribbean incorporated two differing traditions of expansion. The first was influenced by his experience in the Portuguese merchant system. This background allowed Columbus to view his task as mainly one of discovery, to be followed by the establishment of commercial outposts and trading centers that would tap into local resources. The primary goal of this system was the quick exploitation of the local area with minimum investment.
The second tradition was the Castilian pattern of conquest and settlement. The primary goal of this system was the conquest and eventual settlement of new lands for the purpose of long-term exploitation. The difference between these two traditions created expectations that brought Columbus into immediate conflict with the Spanish settlers who accompanied him. The Crown was called upon on several occasions to mediate between Columbus and the settlers, usually deciding in its countrymen's favor. This Spanish pattern of conquest and settlement became the standard for Spanish exploration in the New World. Upon discovering a new territory, the Spanish expeditions were usually, but not always, greeted by friendly inhabitants.
During this initial stage the Europeans would survey the area and the people to determine their potential for exploitation. Within a short period of time the inhabitants would grow to resent the Spanish, who helped themselves to "the natives' food, women and gold." Such abuses were common in Spanish cross-cultural contact and provoked violent reactions by various indigenous populations. Once native resistance was crushed, the Spanish forced the villages to grow cash crops, pay tribute, and mine for their precious gold. The Spanish command was brutal and violent. The Spanish ventures in the Caribbean had to recoup their sponsors' initial investment, and this led to an obsession with discovering gold deposits.
Once these deposits were found, the Spanish had to secure sufficient laborers to mine them. The exhaustion of gold deposits and laborers in the Caribbean led to the full-scale occupation and exploitation of Puerto Rico, Jamaica, and Cuba during 1508–1511. Each occupation followed the same pattern of discovery, local conquest, settlement, exhaustive exploitation, and finally a push into the frontier for new natural resources and slaves. The men who led these campaigns were known as the conquistadors.
They adapted the Reconquista pattern of military expedition and settlement, often exploiting pre-existing indigenous rivalries in order to divide and conquer with extreme efficiency. The quest for gold brought the conquistadors to the mainland of Central America, where they would repeat the conquest pattern that had been so effective in the Caribbean to defeat the Aztec and Inca Empires. The Spanish would bring back the things they found in the New World to trade with Spain; this was later known as the "Columbian Exchange".
The Columbian Exchange has been one of the most significant events in the history of world ecology, agriculture, and culture. The term is used to describe the enormous, widespread exchange of plants, animals, foods, human populations (including slaves), communicable diseases, and ideas between the Eastern and Western hemispheres that occurred after 1492. In that year, Christopher Columbus' first voyage launched an era of large-scale contact between the Old and New Worlds that resulted in this ecological revolution. The Columbian Exchange greatly affected almost every society on earth, bringing destructive diseases that depopulated many cultures. Nearly 80 percent of the native population of the Americas was wiped out by the introduction of European diseases.
The contact between the two areas also circulated a wide variety of new crops and livestock. Maize and potatoes became very important crops in Eurasia by the 18th century. Manioc and the peanut flourished in tropical Southeast Asian and West African soils that otherwise would not produce large yields or support large populations. This exchange of plants and animals transformed European, American, African, and Asian ways of life. One third of the crop value within the United States depends on foods that were first grown in the Americas. Before regular communication had been established between the two hemispheres, the varieties of domesticated animals and infectious diseases, such as smallpox, were strikingly larger in the Old World than in the New. This led, in part, to the devastating effects of Old World diseases on Native American populations. The Indian population was terrified of smallpox, "for a sorer disease cannot befall them; they fear it more than the plague; for usually they that have this disease have them in abundance, and for want of bedding and linen and other helps, they fall into lamentable condition, as they lie on their hard mats, the pox breaking and mattering, and running one into another, their skin cleaving to the mats they lie on; when they turn them, a whole side will flay off at once, and they will be all of a gore blood, most fearful to behold; and then being very sore, what with cold and other distempers, they die like rotten sheep."
The smallpox epidemics probably resulted in the largest death toll for Native Americans. Scarcely any society on earth remained unaffected by this global ecological exchange. The Columbian Exchange explains why Indian nations collapsed and European colonies thrived after Columbus’s arrival in the New World in 1492.
The Columbian Exchange helped the European nations quickly become the wealthiest and most powerful in the world. New food crops and animals became part of the economy, including the potato. They imported goods from America along with pre-existing products. The European population increased by around 20%, which resulted in faster urbanization. They exported wheat, sugar, rice, coffee beans, horses, cows, and pigs to the Americas. In the Americas, massive populations were infected with smallpox, measles, diphtheria, whooping cough, and influenza. In the Aztec empire, the population declined by twenty million people within the first century of Spanish rule, leaving only about one million people. Over the three-hundred-year period, one hundred million people died.
New agricultural products and domestic animals arrived during the diffusion of the Columbian Exchange. Large numbers of people migrated from Europe and Asia. Enslaved Africans were traded for different kinds of goods. The native people were forced to work in mines by the Spanish, and would eventually lose their land to Europe as a whole. The Americas exported corn, potatoes, beans, cocoa beans, tobacco, gold, and silver to Europe. The Europeans in America acquired slaves to provide free labor, which equaled 100% profit for the settlers.
Slaves were used to produce goods for export to Europe, such as tropical foods, sugar, and cotton. The colonies' role in the triangle trade was to employ a free workforce that produced goods at 100% profit; exporting those goods helped them gain enough money to buy more slaves or invest in other things. They kick-started their economy through agriculture and then had the wealth to invest in the industrial revolution and prevail. This was the Western world at its beginnings. Africa was also affected by the Columbian Exchange, which marked the start of the slave trade. The Europeans would export humans, gold, and other natural minerals out of Africa.
The Africans would exchange humans for raw materials, which caused the continent's population to decline sharply and communities to become ghost towns. The African people were captured, enslaved, and traded to Europe. "As soon as the wretched Africans, purchased at the fairs, fall into the hands of the black traders, they experience in earnest those dreadful sufferings which they are doomed in future to undergo. And there is not the least room to doubt, but that even before they can reach the fairs, great numbers perish from cruel usage, want of food, traveling through inhospitable deserts, etc." The African people fled into the mountains or deep forests for safety from capture. Africa was constantly at war with itself; it had rich leaders and poor populations, because the leaders or kings were the ones trading for themselves, while the populations lived in fear of becoming slaves as the leaders exported humans in exchange for guns, alcohol, and other goods. Africa's part in the triangle trade was devastating to the continent's economy; eventually all kingdoms involved in the trade collapsed, allowing Europeans to easily take over. This was Africa's demise in its beginnings. In the Columbian Exchange, ecology became destiny.
Powerful environmental forces, understood by no one alive at the time and by very few people even today, determined who would thrive and who would die. And that may be the most shocking truth revealed to those who take the time to understand the Columbian Exchange: we, as humans, cannot always control our own destinies. The most important historical actors in this story are not Christopher Columbus or Moctezuma or Hernán Cortés. They are the smallpox virus, the pig, the potato, and the kernel of corn.
The history of the European voyages of exploration may be conveniently divided into two areas: the drive to the East, which was pioneered by the Portuguese, and the expansion westwards across the Atlantic to the New World, which was initially led by the Portuguese but eventually dominated by the Spanish. The two differ in that Europe had known of India and China for centuries, whereas the existence of America was totally unsuspected. When Columbus landed in the Bahamas in 1492, America was still viewed as little more than a barrier between Europe and the true prize of the Indies in the East. The odds were against the European explorers, who sailed in frail wooden ships and were guided by crude and primitive instruments into uncharted seas, where shoals were unmarked and shifting currents were a perpetual mystery. These challenges of exploration were great, but so were the motivations: a lust for gold and glory, missionary zeal for converting the "savage", and the desire to gain knowledge.
Although Portugal and Spain had both developed seafaring traditions over the previous three centuries, neither one was prepared, at the beginning of the fifteenth century, for the imminent age of discovery, exploration, and worldwide commercial development that required knowledge and skills that greatly surpassed the existing resources available. The following sections examine the progress made by the Europeans in addressing these logistical obstacles to their efforts at exploration.
A brain tumor occurs when abnormal cells form within the brain. There are two main types of tumors: cancerous (malignant) tumors and benign (non-cancerous) tumors. Cancerous tumors can be divided into primary tumors, which start within the brain, and secondary tumors, which most commonly have spread from tumors located outside the brain, known as brain metastasis tumors. All types of brain tumors may produce symptoms that vary depending on the part of the brain involved. These symptoms may include headaches, seizures, problems with vision, vomiting and mental changes. The headache is classically worse in the morning and goes away with vomiting. Other symptoms may include difficulty walking, speaking or with sensations. As the disease progresses, unconsciousness may occur.
(Image: Brain metastasis in the right cerebral hemisphere from lung cancer, shown on magnetic resonance imaging)
|Other names|Intracranial neoplasm, brain tumour|
|Symptoms|Vary depending on the part of the brain involved: headaches, seizures, problems with vision, vomiting, mental changes|
|Risk factors|Neurofibromatosis, exposure to vinyl chloride, Epstein–Barr virus, ionizing radiation|
|Diagnostic method|Computed tomography, magnetic resonance imaging, tissue biopsy|
|Treatment|Surgery, radiation therapy, chemotherapy|
|Medication|Anticonvulsants, dexamethasone, furosemide|
|Prognosis|Average five-year survival rate 33% (US)|
|Frequency|1.2 million nervous system cancers (2015)|
The cause of most brain tumors is unknown. Uncommon risk factors include exposure to vinyl chloride, Epstein–Barr virus, ionizing radiation, and inherited syndromes such as neurofibromatosis, tuberous sclerosis, and von Hippel-Lindau Disease. Studies on mobile phone exposure have not shown a clear risk. The most common types of primary tumors in adults are meningiomas (usually benign) and astrocytomas such as glioblastomas. In children, the most common type is a malignant medulloblastoma. Diagnosis is usually by medical examination along with computed tomography (CT) or magnetic resonance imaging (MRI). The result is then often confirmed by a biopsy. Based on the findings, the tumors are divided into different grades of severity.
Treatment may include some combination of surgery, radiation therapy and chemotherapy. If seizures occur, anticonvulsant medication may be needed. Dexamethasone and furosemide are medications that may be used to decrease swelling around the tumor. Some tumors grow gradually, requiring only monitoring and possibly needing no further intervention. Treatments that use a person's immune system are being studied. Outcome varies considerably depending on the type of tumor and how far it has spread at diagnosis. Although benign tumors only grow in one area, they may still be life-threatening due to their location. Glioblastomas usually have very poor outcomes, while meningiomas usually have good outcomes. The average five-year survival rate for all brain cancers in the United States is 33%.
Secondary, or metastatic, brain tumors are about four times as common as primary brain tumors, with about half of metastases coming from lung cancer. Primary brain tumors occur in around 250,000 people a year globally, making up less than 2% of cancers. In children younger than 15, brain tumors are second only to acute lymphoblastic leukemia as the most common form of cancer. In Australia, the average lifetime economic cost of a case of brain cancer is $1.9 million, the greatest of any type of cancer.
Signs and symptoms
The signs and symptoms of brain tumors are broad. People may experience symptoms regardless of whether the tumor is benign (not cancerous) or cancerous. Primary and secondary brain tumors present with similar symptoms, depending on the location, size, and rate of growth of the tumor. For example, larger tumors in the frontal lobe can cause changes in the ability to think. However, a smaller tumor in an area such as Wernicke's area (small area responsible for language comprehension) can result in a greater loss of function.
Headaches as a result of raised intracranial pressure can be an early symptom of brain cancer. However, isolated headache without other symptoms is rare, and other symptoms including visual abnormalities may occur before headaches become common. Certain warning signs for headache exist which make the headache more likely to be associated with brain cancer. These are, as defined by the American Academy of Neurology: "abnormal neurological examination, headache worsened by Valsalva maneuver, headache causing awakening from sleep, new headache in the older population, progressively worsening headache, atypical headache features, or patients who do not fulfill the strict definition of migraine". Other associated signs are headaches that are worse in the morning or that subside after vomiting.
The brain is divided into lobes and each lobe or area has its own function. A tumor in any of these lobes may affect the area's performance. The symptoms experienced are often linked to the location of the tumor, but each person may experience something different.
- Frontal lobe: Tumors may contribute to poor reasoning, inappropriate social behavior, personality changes, poor planning, lower inhibition, and decreased production of speech (Broca's area).
- Temporal lobe: Tumors in this lobe may contribute to poor memory, loss of hearing, and difficulty in language comprehension (Wernicke's area is located in this lobe).
- Parietal lobe: Tumors here may result in poor interpretation of languages, difficulty with speaking, writing, drawing, naming, and recognizing, and poor spatial and visual perception.
- Occipital lobe: Damage to this lobe may result in poor vision or loss of vision.
- Cerebellum: Tumors in this area may cause poor balance, muscle movement, and posture.
- Brain stem: Tumors on the brainstem can cause seizures, endocrine problems, respiratory changes, visual changes, headaches and partial paralysis.
A person's personality may be altered due to the tumor damaging lobes of the brain. Since the frontal, temporal, and parietal lobes control inhibition, emotions, mood, judgement, reasoning, and behavior, a tumor in those regions can cause inappropriate social behavior, temper tantrums, laughing at things which merit no laughter, and even psychological symptoms such as depression and anxiety. More research is needed into the effectiveness and safety of medication for depression in people with brain tumors.
Personality changes can have damaging effects such as unemployment, unstable relationships, and a lack of control.
Epidemiological studies are required to determine risk factors. Aside from exposure to vinyl chloride or ionizing radiation, there are no known environmental factors associated with brain tumors. Mutations and deletions of tumor suppressor genes, such as P53, are thought to be the cause of some forms of brain tumor. Inherited conditions, such as Von Hippel–Lindau disease, tuberous sclerosis, multiple endocrine neoplasia, and neurofibromatosis type 2 carry a high risk for the development of brain tumors. People with celiac disease have a slightly increased risk of developing brain tumors. Smoking has been suggested to increase the risk but evidence remains unclear.
Although studies have not shown any link between cell phone or mobile phone radiation and the occurrence of brain tumors, the World Health Organization has classified mobile phone radiation on the IARC scale into Group 2B – possibly carcinogenic. The claim that cell phone usage may cause brain cancer is likely based on epidemiological studies which observed a slight increase in glioma risk among heavy users of wireless and cordless phones. When those studies were conducted, GSM (2G) phones were in use. Modern, third-generation (3G) phones emit, on average, about 1% of the energy emitted by those GSM (2G) phones, and therefore the finding of an association between cell phone usage and increased risk of brain cancer is not based upon current phone usage.
Human brains are surrounded by a system of connective tissue membranes called meninges that separate the brain from the skull. This three-layered covering is composed of (from the outside in) the dura mater, arachnoid mater, and pia mater. The arachnoid and pia are physically connected and thus often considered as a single layer, the leptomeninges. Between the arachnoid mater and the pia mater is the subarachnoid space which contains cerebrospinal fluid (CSF). This fluid circulates in the narrow spaces between cells and through the cavities in the brain called ventricles, to support and protect the brain tissue. Blood vessels enter the central nervous system through the perivascular space above the pia mater. The cells in the blood vessel walls are joined tightly, forming the blood–brain barrier which protects the brain from toxins that might enter through the blood.
Tumors of the meninges are meningiomas and are often benign. Though not technically a tumor of brain tissue, they are often considered brain tumors since they protrude into the space where the brain is, causing symptoms. Since they are usually slow-growing tumors, meningiomas can be quite large by the time symptoms appear.
The brains of humans and other vertebrates are composed of very soft tissue and have a gelatin-like texture. Living brain tissue has a pink tint on the outside (gray matter) and is nearly completely white on the inside (white matter), with subtle variations in color. The three largest divisions of the brain are:
- the cerebral hemispheres (cerebrum)
- the brainstem
- the cerebellum
These areas are composed of two broad classes of cells: neurons and glia. These two types are equally numerous in the brain as a whole, although glial cells outnumber neurons roughly 4 to 1 in the cerebral cortex. Glia come in several types, which perform a number of critical functions, including structural support, metabolic support, insulation, and guidance of development. Primary tumors of the glial cells are called gliomas and often are malignant by the time they are diagnosed.
The thalamus and hypothalamus are major divisions of the diencephalon, with the pituitary gland and pineal gland attached at the bottom; tumors of the pituitary and pineal gland are often benign.
Although there is no specific or singular symptom or sign, the presence of a combination of symptoms and the lack of corresponding indications of other causes can be an indicator for investigation towards the possibility of a brain tumor. Brain tumors share characteristics and obstacles in diagnosis and therapy with tumors located elsewhere in the body. However, they create specific issues that follow closely from the properties of the organ they are in.
The diagnosis will often start by taking a medical history, noting medical antecedents and current symptoms. Clinical and laboratory investigations will serve to exclude infections as the cause of the symptoms. Examinations in this stage may include the eyes, otolaryngological (or ENT) and electrophysiological exams. The use of electroencephalography (EEG) often plays a role in the diagnosis of brain tumors.
Brain tumors, when compared to tumors in other areas of the body, pose a challenge for diagnosis. Commonly, radioactive tracers are taken up in large volumes in tumors due to the high activity of tumor cells, allowing for radioactive imaging of the tumor. However, most of the brain is separated from the blood by the blood–brain barrier (BBB), a membrane that exerts strict control over what substances are allowed to pass into the brain. Therefore, many tracers that easily reach tumors in other areas of the body would be unable to reach brain tumors unless the tumor has disrupted the BBB. Disruption of the BBB is well imaged via MRI or CT scan, and is therefore regarded as the main diagnostic indicator for malignant gliomas, meningiomas, and brain metastases.
Swelling or obstruction of the passage of cerebrospinal fluid (CSF) from the brain may cause (early) signs of increased intracranial pressure, which translates clinically into headaches, vomiting, or an altered state of consciousness, and in children changes to the diameter of the skull and bulging of the fontanelles. More complex symptoms such as endocrine dysfunctions should prompt doctors not to exclude brain tumors.
A bilateral temporal visual field defect (due to compression of the optic chiasm) or dilation of the pupil, and the occurrence of either slowly evolving or suddenly appearing focal neurologic symptoms, such as cognitive and behavioral impairment (including impaired judgment, memory loss, lack of recognition, spatial orientation disorders), personality or emotional changes, hemiparesis, hypoesthesia, aphasia, ataxia, visual field impairment, impaired sense of smell, impaired hearing, facial paralysis, double vision, or more severe symptoms such as tremors, paralysis on one side of the body (hemiplegia), or (epileptic) seizures in a patient with a negative history for epilepsy, should raise the possibility of a brain tumor.
Medical imaging plays a central role in the diagnosis of brain tumors. Early imaging methods – invasive and sometimes dangerous – such as pneumoencephalography and cerebral angiography have been abandoned in favor of non-invasive, high-resolution techniques, especially magnetic resonance imaging (MRI) and computed tomography (CT) scans, though MRI is typically the reference standard used. Neoplasms will often show as differently colored masses (also referred to as processes) in CT or MRI results.
- Benign brain tumors often show up as hypodense (darker than brain tissue) mass lesions on CT scans. On MRI, they appear either hypointense (darker than brain tissue) or isointense (same intensity as brain tissue) on T1-weighted scans, or hyperintense (brighter than brain tissue) on T2-weighted MRI, although the appearance is variable.
- Contrast agent uptake, sometimes in characteristic patterns, can be demonstrated on either CT or MRI scans in most malignant primary and metastatic brain tumors.
- Pressure areas where the brain tissue has been compressed by a tumor also appear hyperintense on T2-weighted scans and might indicate the presence of a diffuse neoplasm due to an unclear outline. Swelling around the tumor, known as peritumoral edema, can also show a similar result.
This is because these tumors disrupt the normal functioning of the BBB and lead to an increase in its permeability. More recently, advancements have been made to increase the utility of MRI in providing physiological data that can help to inform diagnosis and prognosis. Perfusion-weighted imaging (PWI) and diffusion-weighted imaging (DWI) are two MRI techniques that reviews have shown to be useful in classifying tumors by grade, which was not previously viable using only structural imaging. However, these techniques cannot alone diagnose high- versus low-grade gliomas, and thus the definitive diagnosis of brain tumor should only be confirmed by histological examination of tumor tissue samples obtained either by means of brain biopsy or open surgery. The histological examination is essential for determining the appropriate treatment and the correct prognosis. This examination, performed by a pathologist, typically has three stages: intraoperative examination of fresh tissue, preliminary microscopic examination of prepared tissues, and follow-up examination of prepared tissues after immunohistochemical staining or genetic analysis.
Tumors have characteristics that allow determination of their malignancy and of how they will evolve, and establishing these characteristics allows the medical team to decide on the management plan.
Anaplasia or dedifferentiation: loss of differentiation of cells and of their orientation to one another and to blood vessels, a characteristic of anaplastic tumor tissue. Anaplastic cells have lost total control of their normal functions and many have deteriorated cell structures. Anaplastic cells often have abnormally high nuclear-to-cytoplasmic ratios, and many are multinucleated. Additionally, the nuclei of anaplastic cells are usually unnaturally shaped or oversized. Cells can become anaplastic in two ways: neoplastic tumor cells can dedifferentiate to become anaplastic (the dedifferentiation causes the cells to lose all of their normal structure/function), or cancer stem cells can increase their capacity to multiply (i.e., uncontrollable growth due to failure of differentiation).
Neoplasia: the (uncontrolled) division of cells. As such, neoplasia is not problematic but its consequences are: the uncontrolled division of cells means that the mass of a neoplasm increases in size, and in a confined space such as the intracranial cavity this quickly becomes problematic because the mass invades the space of the brain pushing it aside, leading to compression of the brain tissue and increased intracranial pressure and destruction of brain parenchyma. Increased intracranial pressure (ICP) may be attributable to the direct mass effect of the tumor, increased blood volume, or increased cerebrospinal fluid (CSF) volume, which may, in turn, have secondary symptoms.
Necrosis: the (premature) death of cells, caused by external factors such as infection, toxin, or trauma. Necrotic cells send the wrong chemical signals, which prevents phagocytes from disposing of the dead cells, leading to a buildup of dead tissue, cell debris, and toxins at or near the site of the necrotic cells.
Arterial and venous hypoxia, or the deprivation of adequate oxygen supply to certain areas of the brain, occurs when a tumor makes use of nearby blood vessels for its supply of blood and the neoplasm enters into competition for nutrients with the surrounding brain tissue.
More generally a neoplasm may cause release of metabolic end products (e.g., free radicals, altered electrolytes, neurotransmitters), and release and recruitment of cellular mediators (e.g., cytokines) that disrupt normal parenchymal function.
Tumors can be benign or malignant, can occur in different parts of the brain, and may be classified as primary or secondary. A primary tumor is one that has started in the brain, as opposed to a metastatic tumor, which is one that has spread to the brain from another area of the body. The incidence of metastatic tumors is approximately four times greater than primary tumors. Tumors may or may not be symptomatic: some tumors are discovered because the patient has symptoms, others show up incidentally on an imaging scan, or at an autopsy.
Grading of the tumors of the central nervous system commonly occurs on a 4-point scale (I-IV) created by the World Health Organization in 1993. Grade I tumors are the least severe and commonly associated with long term survival, with severity and prognosis worsening as the grade increases. Low grade tumors are often benign, while higher grades are aggressively malignant and/or metastatic. Other grading scales do exist, many based upon the same criteria as the WHO scale and graded from I-IV.
The most common primary brain tumors are:
These common tumors can also be organized according to tissue of origin as shown below:
|Tissue of origin|Children|Adults|
|Astrocytes|Pilocytic Astrocytoma (PCA)|Glioblastoma Multiforme (GBM)|
Secondary tumors of the brain are metastatic and have invaded the brain from cancers originating in other organs. This means that a cancerous neoplasm has developed in another organ elsewhere in the body and that cancer cells have leaked from that primary tumor and then entered the lymphatic system and blood vessels. They then circulate through the bloodstream, and are deposited in the brain. There, these cells continue growing and dividing, becoming another invasive neoplasm of the primary cancer's tissue. Secondary tumors of the brain are very common in the terminal phases of patients with an incurable metastasized cancer; the most common types of cancers that bring about secondary tumors of the brain are lung cancer, breast cancer, malignant melanoma, kidney cancer, and colon cancer (in decreasing order of frequency).
Secondary brain tumors are more common than primary ones; in the United States there are about 170,000 new cases every year. Secondary brain tumors are the most common tumors found in the intracranial cavity. The skull bone structure can also be subject to a neoplasm that by its very nature reduces the volume of the intracranial cavity and can damage the brain.
Brain tumors or intracranial neoplasms can be cancerous (malignant) or non-cancerous (benign). However, the definitions of malignant or benign neoplasms differ from those commonly used in other types of cancerous or non-cancerous neoplasms in the body. In cancers elsewhere in the body, three malignant properties differentiate benign tumors from malignant forms of cancer: benign tumors are self-limited and do not invade or metastasize. Characteristics of malignant tumors include:
- uncontrolled mitosis (growth by division beyond the normal limits)
- anaplasia: the cells in the neoplasm have an obviously different form (in size and shape). Anaplastic cells display marked pleomorphism. The cell nuclei are characteristically extremely hyperchromatic (darkly stained) and enlarged; the nucleus might have the same size as the cytoplasm of the cell (nuclear-cytoplasmic ratio may approach 1:1, instead of the normal 1:4 or 1:6 ratio). Giant cells – considerably larger than their neighbors – may form and possess either one enormous nucleus or several nuclei (syncytia). Anaplastic nuclei are variable and bizarre in size and shape.
- invasion or infiltration (medical literature uses these terms as synonymous equivalents. However, for clarity, the articles that follow adhere to a convention that they mean slightly different things; this convention is not followed outside these articles):
- Invasion or invasiveness is the spatial expansion of the tumor through uncontrolled mitosis, in the sense that the neoplasm invades the space occupied by adjacent tissue, thereby pushing the other tissue aside and eventually compressing it. These tumors often appear clearly outlined in imaging.
- Infiltration is the behavior of the tumor either to grow (microscopic) tentacles that push into the surrounding tissue (often making the outline of the tumor undefined or diffuse) or to have tumor cells "seeded" into the tissue beyond the circumference of the tumorous mass; this does not mean that an infiltrative tumor does not take up space or does not compress the surrounding tissue as it grows, but an infiltrating neoplasm makes it difficult to say where the tumor ends and the healthy tissue starts.
- metastasis (spread to other locations in the body via lymph or blood).
Of the above malignant characteristics, some elements do not apply to primary neoplasms of the brain:
- Primary brain tumors rarely metastasize to other organs; some forms of primary brain tumors can metastasize but will not spread outside the intracranial cavity or the central spinal canal. Due to the BBB, cancerous cells of a primary neoplasm cannot enter the bloodstream and get carried to another location in the body. (Occasional isolated case reports suggest spread of certain brain tumors outside the central nervous system, e.g. bone metastasis of glioblastoma multiforme.)
- Primary brain tumors generally are invasive (i.e. they will expand spatially and intrude into the space occupied by other brain tissue and compress those brain tissues); however, some of the more malignant primary brain tumors will infiltrate the surrounding tissue.
In 2016, the WHO restructured its classification of some categories of gliomas to include distinct genetic mutations that have been useful in differentiating tumor types, prognoses, and treatment responses. Genetic mutations are typically detected via immunohistochemistry, a technique that visualizes the presence or absence of a targeted protein via staining. The main markers and what they indicate are listed below, followed by a simplified sketch of the decision logic.
- Mutations in IDH1 and IDH2 genes are commonly found in low grade gliomas
- Mutation of the IDH genes combined with loss (co-deletion) of chromosome arms 1p and 19q indicates the tumor is an oligodendroglioma
- Loss of TP53 and ATRX characterizes astrocytomas
- The genes EGFR, TERT, and PTEN are commonly altered in gliomas and are useful in differentiating tumor grade and biology
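As a rough illustration of how these markers narrow down the diagnosis, the sketch below encodes the rules listed above as a small Python function. It is a simplification for illustration only, assuming hypothetical boolean marker flags; the actual WHO classification also depends on histology and on markers not listed here.

```python
# Illustrative sketch only: a simplified rule-of-thumb mapping of the molecular
# markers mentioned above to glioma subtypes. The flag names are hypothetical;
# real classification combines these markers with histological grading.

def classify_glioma(markers: dict) -> str:
    """markers example: {"IDH_mutant": True, "1p19q_codeleted": False,
    "TP53_mutant": False, "ATRX_lost": False}"""
    if markers.get("IDH_mutant"):
        if markers.get("1p19q_codeleted"):
            return "oligodendroglioma (IDH-mutant, 1p/19q co-deleted)"
        if markers.get("TP53_mutant") or markers.get("ATRX_lost"):
            return "astrocytoma (IDH-mutant)"
        return "IDH-mutant glioma; subtype not determined by these markers alone"
    return "IDH-wild-type glioma (e.g. many glioblastomas); further testing needed"


if __name__ == "__main__":
    print(classify_glioma({"IDH_mutant": True, "1p19q_codeleted": True}))
```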
Anaplastic astrocytoma, Anaplastic oligodendroglioma, Astrocytoma, Central neurocytoma, Choroid plexus carcinoma, Choroid plexus papilloma, Choroid plexus tumor, Colloid cyst, Dysembryoplastic neuroepithelial tumour, Ependymal tumor, Fibrillary astrocytoma, Giant-cell glioblastoma, Glioblastoma multiforme, Gliomatosis cerebri, Gliosarcoma, Hemangiopericytoma, Medulloblastoma, Medulloepithelioma, Meningeal carcinomatosis, Neuroblastoma, Neurocytoma, Oligoastrocytoma, Oligodendroglioma, Optic nerve sheath meningioma, Pediatric ependymoma, Pilocytic astrocytoma, Pinealoblastoma, Pineocytoma, Pleomorphic anaplastic neuroblastoma, Pleomorphic xanthoastrocytoma, Primary central nervous system lymphoma, Sphenoid wing meningioma, Subependymal giant cell astrocytoma, Subependymoma, Trilateral retinoblastoma.
A medical team generally assesses the treatment options and presents them to the person affected and their family. Various types of treatment are available depending on tumor type and location, and may be combined to produce the best chances of survival:
- Surgery: complete or partial resection of the tumor with the objective of removing as many tumor cells as possible.
- Radiotherapy: the most commonly used treatment for brain tumors; the tumor is irradiated with beta rays, X-rays, or gamma rays.
- Chemotherapy: a treatment option for cancer, however, it is not always used to treat brain tumors as the blood-brain barrier can prevent some drugs from reaching the cancerous cells.
- A variety of experimental therapies are available through clinical trials.
Survival rates in primary brain tumors depend on the type of tumor, age, functional status of the patient, the extent of surgical removal and other factors specific to each case.
Standard care for anaplastic oligodendrogliomas and anaplastic oligoastrocytomas is surgery followed by radiotherapy. One study found a survival benefit for the addition of chemotherapy to radiotherapy after surgery, compared with radiotherapy alone.
The primary and most desired course of action described in the medical literature is surgical removal (resection) via craniotomy. Minimally invasive techniques are becoming the dominant trend in neurosurgical oncology. The main objective of surgery is to remove as many tumor cells as possible, with complete removal being the best outcome and cytoreduction ("debulking") of the tumor otherwise. A Gross Total Resection (GTR) occurs when all visible signs of the tumor are removed and subsequent scans show no apparent tumor. In some cases access to the tumor is impossible, which impedes or prohibits surgery.
Many meningiomas, with the exception of some tumors located at the skull base, can be successfully removed surgically. Most pituitary adenomas can be removed surgically, often using a minimally invasive approach through the nasal cavity and skull base (trans-nasal, trans-sphenoidal approach). Large pituitary adenomas require a craniotomy (opening of the skull) for their removal. Radiotherapy, including stereotactic approaches, is reserved for inoperable cases.
Several current research studies aim to improve the surgical removal of brain tumors by labeling tumor cells with 5-aminolevulinic acid that causes them to fluoresce. Postoperative radiotherapy and chemotherapy are integral parts of the therapeutic standard for malignant tumors. Radiotherapy may also be administered in cases of "low-grade" gliomas when a significant tumor reduction could not be achieved surgically.
Multiple metastatic tumors are generally treated with radiotherapy and chemotherapy rather than surgery and the prognosis in such cases is determined by the primary tumor, and is generally poor.
The goal of radiation therapy is to kill tumor cells while leaving normal brain tissue unharmed. In standard external beam radiation therapy, multiple treatments of standard-dose "fractions" of radiation are applied to the brain. This process is repeated for a total of 10 to 30 treatments, depending on the type of tumor. This additional treatment provides some patients with improved outcomes and longer survival.
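To make the idea of fractionation concrete, the following sketch multiplies a per-fraction dose by a fraction count to get the total delivered dose. The 2 Gy-per-fraction value is a commonly cited conventional figure used here purely as an assumption; the 10-to-30 fraction range comes from the paragraph above.

```python
# Minimal sketch of dose-fractionation arithmetic (illustrative values only).

def total_dose(dose_per_fraction_gy: float, num_fractions: int) -> float:
    """Total physical dose delivered over a fractionated course."""
    return dose_per_fraction_gy * num_fractions

# Assuming ~2 Gy per fraction, a conventional value not stated in the text:
for fractions in (10, 20, 30):
    print(f"{fractions} fractions x 2 Gy = {total_dose(2.0, fractions):.0f} Gy total")
```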
Radiosurgery is a treatment method that uses computerized calculations to focus radiation at the site of the tumor while minimizing the radiation dose to the surrounding brain. Radiosurgery may be an adjunct to other treatments, or it may represent the primary treatment technique for some tumors. Forms used include stereotactic radiosurgery, such as Gamma knife, Cyberknife or Novalis Tx radiosurgery.
Radiotherapy is the most common treatment for secondary brain tumors. The amount of radiotherapy depends on the size of the area of the brain affected by cancer. Conventional external beam "whole-brain radiotherapy treatment" (WBRT) or "whole-brain irradiation" may be suggested if there is a risk that other secondary tumors will develop in the future. Stereotactic radiotherapy is usually recommended in cases involving fewer than three small secondary brain tumors. Radiotherapy may be used following, or in some cases in place of, resection of the tumor. Forms of radiotherapy used for brain cancer include external beam radiation therapy, the most common, and brachytherapy and proton therapy, the last especially used for children.
People who receive stereotactic radiosurgery (SRS) and whole-brain radiation therapy (WBRT) for the treatment of metastatic brain tumors have more than twice the risk of developing learning and memory problems than those treated with SRS alone.
Postoperative conventional daily radiotherapy improves survival for adults with good functional well-being and high grade glioma compared to no postoperative radiotherapy. Hypofractionated radiation therapy has similar efficacy for survival as compared to conventional radiotherapy, particularly for individuals aged 60 and older with glioblastoma.
Patients undergoing chemotherapy are administered drugs designed to kill tumor cells. Although chemotherapy may improve overall survival in patients with the most malignant primary brain tumors, it does so in only about 20 percent of patients. Chemotherapy is often used in young children instead of radiation, as radiation may have negative effects on the developing brain. The decision to prescribe this treatment is based on a patient's overall health, type of tumor, and extent of the cancer. The toxicity and many side effects of the drugs, and the uncertain outcome of chemotherapy in brain tumors, put this treatment further down the line of treatment options, with surgery and radiation therapy preferred.
UCLA Neuro-Oncology publishes real-time survival data for patients with a diagnosis of glioblastoma multiforme. They are the only institution in the United States that displays how brain tumor patients are performing on current therapies. They also show a listing of chemotherapy agents used to treat high-grade glioma tumors.
Genetic mutations have significant effects on the effectiveness of chemotherapy. Gliomas with IDH1 or IDH2 mutations respond better to chemotherapy than those without the mutation. Loss of chromosome arms 1p and 19q also indicates a better response to chemoradiation.
The prognosis of brain cancer depends on the type of cancer diagnosed. Medulloblastoma has a good prognosis with chemotherapy, radiotherapy, and surgical resection, while glioblastoma multiforme has a median survival of only 12 months even with aggressive chemoradiotherapy and surgery. Brainstem gliomas have the poorest prognosis of any form of brain cancer, with most patients dying within one year, even with therapy that typically consists of radiation to the tumor along with corticosteroids. However, one type, focal brainstem glioma in children, appears to have an exceptional prognosis, and long-term survival has frequently been reported.
Prognosis is also affected by presentation of genetic mutations. Certain mutations provide better prognosis than others. IDH1 and IDH2 mutations in gliomas, as well as deletion of chromosome arms 1p and 19q, generally indicate better prognosis. TP53, ATRX, EGFR, PTEN, and TERT mutations are also useful in determining prognosis.
Glioblastoma multiforme (GBM) is the most aggressive (grade IV) and most common form of a malignant brain tumor. Even when aggressive multimodality therapy consisting of radiotherapy, chemotherapy, and surgical excision is used, median survival is only 12–17 months. Standard therapy for glioblastoma multiforme consists of maximal surgical resection of the tumor, followed by radiotherapy between two and four weeks after the surgical procedure to remove the cancer, then by chemotherapy, such as temozolomide. Most patients with glioblastoma take a corticosteroid, typically dexamethasone, during their illness to relieve symptoms. Experimental treatments include targeted therapy, gamma knife radiosurgery, boron neutron capture therapy and gene therapy.
Oligodendrogliomas are incurable but slowly progressive malignant brain tumors. They can be treated with surgical resection, chemotherapy, radiotherapy or a combination. For some suspected low-grade (grade II) tumors, only a course of watchful waiting and symptomatic therapy is opted for. These tumors show a high frequency of co-deletions of the p and q arms of chromosome 1 and chromosome 19 respectively (1p19q co-deletion) and have been found to be especially chemosensitive with one report claiming them to be one of the most chemosensitive tumors. A median survival of up to 16.7 years has been reported for grade II oligodendrogliomas.
Acoustic neuromas are non-cancerous tumors. They can be treated with surgery, radiation therapy, or observation. Early intervention with surgery or radiation is recommended to prevent progressive hearing loss.
Figures for incidences of cancers of the brain show a significant difference between more- and less-developed countries (the less-developed countries have lower incidences of tumors of the brain). This could be explained by undiagnosed tumor-related deaths (patients in extremely poor situations do not get diagnosed, simply because they do not have access to the modern diagnostic facilities required to diagnose a brain tumor) and by deaths caused by other poverty-related causes that preempt a patient's life before tumors develop or tumors become life-threatening. Nevertheless, statistics suggest that certain forms of primary brain tumors are more common among certain populations.
The incidence of low-grade astrocytoma has not been shown to vary significantly with nationality. However, studies examining the incidence of malignant central nervous system (CNS) tumors have shown some variation with national origin. Since some high-grade lesions arise from low-grade tumors, these trends are worth mentioning. Specifically, the incidence of CNS tumors in the United States, Israel, and the Nordic countries is relatively high, while Japan and Asian countries have a lower incidence. These differences probably reflect some biological differences as well as differences in pathologic diagnosis and reporting. Worldwide data on incidence of cancer can be found at the WHO (World Health Organisation) and is handled by the IARC (International Agency for Research on Cancer) located in France.
In the United States in 2015, approximately 166,039 people were living with brain or other central nervous system tumors. It was projected that 2018 would see 23,880 new cases of brain tumors and 16,830 resulting deaths, accounting for 1.4 percent of all cancers and 2.8 percent of all cancer deaths. The median age of diagnosis was 58 years, while the median age of death was 65. Diagnosis was slightly more common in males, at approximately 7.5 cases per 100,000 people, while the rate for females was about 2 fewer, at 5.4. Deaths as a result of brain cancer were 5.3 per 100,000 for males and 3.6 per 100,000 for females, making brain cancer the 10th leading cause of cancer death in the United States. The overall lifetime risk of developing brain cancer is approximately 0.6 percent for both men and women.
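As a rough check on how such figures relate, the sketch below derives a crude incidence per 100,000 from the projected 2018 case count quoted above, assuming a US population of roughly 327 million in 2018 (an assumption not stated in the text). The sex-specific rates quoted above are typically age-adjusted, so they are not directly comparable to this crude figure.

```python
# Minimal sketch: deriving a crude rate per 100,000 from a case count.

new_cases_2018 = 23_880            # projected new brain/CNS tumor cases (from the text)
us_population_2018 = 327_000_000   # assumed approximate US population (illustrative)

crude_incidence_per_100k = new_cases_2018 / us_population_2018 * 100_000
print(f"Crude incidence: {crude_incidence_per_100k:.1f} per 100,000")  # ~7.3 per 100,000
```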
Brain, other CNS or intracranial tumors are the ninth most common cancer in the UK (around 10,600 people were diagnosed in 2013) and the eighth most common cause of cancer death (around 5,200 people died in 2012).
In the United States more than 28,000 people under 20 are estimated to have a brain tumor. About 3,720 new cases of brain tumors are expected to be diagnosed in those under 15 in 2019. Higher rates were reported in 1985–1994 than in 1975–1983. There is some debate as to the reasons; one theory is that the trend is the result of improved diagnosis and reporting, since the jump occurred at the same time that MRIs became available widely, and there was no coincident jump in mortality. Central nervous system tumors make up 20–25 percent of cancers in children.
The average survival rate for all primary brain cancers in children is 74%. Brain cancers are the most common cancer in children under 19 and result in more deaths in this group than leukemia. Younger people do less well.
In children under 2, about 70% of brain tumors are medulloblastomas, ependymomas, and low-grade gliomas. Less commonly, and seen usually in infants, are teratomas and atypical teratoid rhabdoid tumors. Germ cell tumors, including teratomas, make up just 3% of pediatric primary brain tumors, but the worldwide incidence varies significantly.
In the UK, 429 children aged 14 and under are diagnosed with a brain tumour on average each year, and 563 children and young people under the age of 19 are diagnosed.
Vesicular stomatitis virus
Led by Prof. Nori Kasahara, researchers from USC, now at UCLA, reported in 2001 the first successful example of applying retroviral replicating vectors to transduce cell lines derived from solid tumors. Building on this initial work, the researchers applied the technology to in vivo models of cancer and in 2005 reported a long-term survival benefit in an experimental brain tumor animal model. Subsequently, in preparation for human clinical trials, this technology was further developed by Tocagen (a pharmaceutical company primarily focused on brain cancer treatments) as a combination treatment (Toca 511 & Toca FC). This has been under investigation since 2010 in a Phase I/II clinical trial for the potential treatment of recurrent high-grade glioma including glioblastoma multiforme (GBM) and anaplastic astrocytoma. No results have yet been published.
Efforts to detect signs of brain tumors in the blood are in the early stages of development as of 2019.
- "Adult Brain Tumors Treatment". NCI. 28 February 2014. Archived from the original on 5 July 2014. Retrieved 8 June 2014.
- "General Information About Adult Brain Tumors". NCI. 14 April 2014. Archived from the original on 5 July 2014. Retrieved 8 June 2014.
- "Chapter 5.16". World Cancer Report 2014. World Health Organization. 2014. ISBN 978-9283204299. Archived from the original on 19 September 2016.
- "Cancer of the Brain and Other Nervous System - Cancer Stat Facts". SEER. Retrieved 22 July 2019.
- Vos, Theo; Allen, Christine; Arora, Megha; Barber, Ryan M.; Bhutta, Zulfiqar A.; Brown, Alexandria; Carter, Austin; Casey, Daniel C.; Charlson, Fiona J.; Chen, Alan Z.; Coggeshall, Megan; Cornaby, Leslie; Dandona, Lalit; Dicker, Daniel J.; Dilegge, Tina; Erskine, Holly E.; Ferrari, Alize J.; Fitzmaurice, Christina; Fleming, Tom; Forouzanfar, Mohammad H.; Fullman, Nancy; Gething, Peter W.; Goldberg, Ellen M.; Graetz, Nicholas; Haagsma, Juanita A.; Hay, Simon I.; Johnson, Catherine O.; Kassebaum, Nicholas J.; Kawashima, Toana; et al. (October 2016). "Global, regional, and national incidence, prevalence, and years lived with disability for 310 diseases and injuries, 1990-2015: a systematic analysis for the Global Burden of Disease Study 2015". Lancet. 388 (10053): 1545–1602. doi:10.1016/S0140-6736(16)31678-6. PMC 5055577. PMID 27733282.
- Wang H, Naghavi M, Allen C, Barber RM, Bhutta ZA, Carter A, et al. (GBD 2015 Mortality and Causes of Death Collaborators) (October 2016). "Global, regional, and national life expectancy, all-cause mortality, and cause-specific mortality for 249 causes of death, 1980-2015: a systematic analysis for the Global Burden of Disease Study 2015". Lancet. 388 (10053): 1459–1544. doi:10.1016/S0140-6736(16)31012-1. PMC 5388903. PMID 27733281.
- Longo, Dan L (2012). "369 Seizures and Epilepsy". Harrison's principles of internal medicine (18th ed.). McGraw-Hill. p. 3258. ISBN 978-0-07-174887-2.
- "Benign brain tumour (non-cancerous)". nhs.uk. 20 October 2017. Retrieved 29 July 2019.
- Merrell RT (December 2012). "Brain tumors". Disease-a-Month. 58 (12): 678–89. doi:10.1016/j.disamonth.2012.08.009. PMID 23149521.
- World Cancer Report 2014. World Health Organization. 2014. pp. Chapter 1.3. ISBN 978-9283204299.
- "Brain Tumour Facts 2011" (PDF). Brain Tumour Alliance Australia. Archived from the original (PDF) on 25 January 2014. Retrieved 9 June 2014.
- "Brain Tumors". Archived from the original on 12 August 2016. Retrieved 2 August 2016.
- "Mood Swings and Cognitive Changes | American Brain Tumor Association". www.abta.org. Archived from the original on 2 August 2016. Retrieved 3 August 2016.
- "Coping With Personality & Behavioral Changes". www.brainsciencefoundation.org. Archived from the original on 30 July 2016. Retrieved 3 August 2016.
- Kahn K, Finkel A (June 2014). "It IS a tumor -- current review of headache and brain tumor". Current Pain and Headache Reports. 18 (6): 421. doi:10.1007/s11916-014-0421-8. PMID 24760490. S2CID 5820118.
- "Nosebleeds & Headaches: Do You Have Brain Cancer?". Advanced Neurosurgery Associates. 19 November 2020. Retrieved 26 November 2020.
- Gregg N, Arber A, Ashkan K, Brazil L, Bhangoo R, Beaney R, et al. (November 2014). "Neurobehavioural changes in patients following brain tumour: patients and relatives perspective" (PDF). Supportive Care in Cancer. 22 (11): 2965–72. doi:10.1007/s00520-014-2291-3. PMID 24865878. S2CID 2072277.
- "Coping With Personality & Behavioral Changes". www.brainsciencefoundation.org. Archived from the original on 30 July 2016. Retrieved 27 July 2016.
- "Mood Swings and Cognitive Changes | American Brain Tumor Association". www.abta.org. Archived from the original on 15 August 2016. Retrieved 27 July 2016.
- Warnick, MD, Ronald (August 2018). "Brain Tumors: an introduction". Mayfield Brain and Spine Clinic.
- "Changes in Vision - Brain Tumour Symptoms". www.thebraintumourcharity.org. Archived from the original on 10 February 2018. Retrieved 9 February 2018.
- "Brain Tumors". Children's Hospital of Wisconsin. 6 March 2019.
- Jones, Caleb. "Brain Tumor Symptoms | Miles for Hope | Brain Tumor Foundation". milesforhope.org. Archived from the original on 14 August 2016. Retrieved 3 August 2016.
- Beevers, Zachary; Hussain, Sana; Boele, Florien W.; Rooney, Alasdair G. (17 July 2020). "Pharmacological treatment of depression in people with a primary brain tumour". The Cochrane Database of Systematic Reviews. 7: CD006932. doi:10.1002/14651858.CD006932.pub4. ISSN 1469-493X. PMC 7388852. PMID 32678464.
- Krishnatreya M, Kataki AC, Sharma JD, Bhattacharyya M, Nandy P, Hazarika M (2014). "Brief descriptive epidemiology of primary malignant brain tumors from North-East India". Asian Pacific Journal of Cancer Prevention. 15 (22): 9871–3. doi:10.7314/apjcp.2014.15.22.9871. PMID 25520120.
- Kleihues P, Ohgaki H, Eibl RH, Reichel MB, Mariani L, Gehring M, Petersen I, Höll T, von Deimling A, Wiestler OD, Schwab M (1994). "Type and frequency of p53 mutations in tumors of the nervous system and its coverings". Molecular Neuro-oncology and Its Impact on the Clinical Management of Brain Tumors. Recent results in cancer research. 135. Springer. pp. 25–31. ISBN 978-3540573517.
- Hodgson TS, Nielsen SM, Lesniak MS, Lukas RV (September 2016). "Neurological Management of Von Hippel-Lindau Disease". The Neurologist (Review). 21 (5): 73–8. doi:10.1097/NRL.0000000000000085. PMID 27564075. S2CID 29232748.
- Rogers L, Barani I, Chamberlain M, Kaley TJ, McDermott M, Raizer J, et al. (January 2015). "Meningiomas: knowledge base, treatment outcomes, and uncertainties. A RANO review". Journal of Neurosurgery (Review). 122 (1): 4–23. doi:10.3171/2014.7.JNS131644. PMC 5062955. PMID 25343186.
- Hourigan CS (June 2006). "The molecular basis of coeliac disease". Clinical and Experimental Medicine (Review). 6 (2): 53–9. doi:10.1007/s10238-006-0095-6. PMID 16820991. S2CID 12795861.
- "Brain Cancer Causes, Symptoms, Stages & Life Expectancy". MedicineNet. Retrieved 24 February 2020.
- Frei P, Poulsen AH, Johansen C, Olsen JH, Steding-Jessen M, Schüz J (October 2011). "Use of mobile phones and risk of brain tumours: update of Danish cohort study". BMJ. 343: d6387. doi:10.1136/bmj.d6387. PMC 3197791. PMID 22016439.
- "IARC classifies radiofrequency electromagnetic fields as possibly carcinogenic to humans" (PDF). World Health Organization press release N° 208 (Press release). International Agency for Research on Cancer. 31 May 2011. Archived (PDF) from the original on 1 June 2011. Retrieved 2 June 2011.
- Moore, Keith L. (September 2017). Clinically oriented anatomy. Agur, A. M. R.,, Dalley, Arthur F., II (Eighth ed.). Philadelphia. ISBN 9781496347213. OCLC 978362025.
- "Meningioma Brain Tumor". neurosurgery.ucla.edu. Retrieved 29 July 2019.
- "Neurons & Glial Cells | SEER Training". training.seer.cancer.gov. Retrieved 29 July 2019.
- Ostrom QT, Gittleman H, Farah P, Ondracek A, Chen Y, Wolinsky Y, et al. (November 2013). "CBTRUS statistical report: Primary brain and central nervous system tumors diagnosed in the United States in 2006-2010". Neuro-Oncology. 15 Suppl 2 (Suppl 2): ii1-56. doi:10.1093/neuonc/not151. PMC 3798196. PMID 24137015.
- "Adult Central Nervous System Tumors Treatment (PDQ®)–Patient Version - National Cancer Institute". www.cancer.gov. 11 May 2020. Retrieved 29 January 2021.
- Herholz K, Langen KJ, Schiepers C, Mountz JM (November 2012). "Brain tumors". Seminars in Nuclear Medicine. 42 (6): 356–70. doi:10.1053/j.semnuclmed.2012.06.001. PMC 3925448. PMID 23026359.
- Iv M, Yoon BC, Heit JJ, Fischbein N, Wintermark M (January 2018). "Current Clinical State of Advanced Magnetic Resonance Imaging for Brain Tumor Diagnosis and Follow Up". Seminars in Roentgenology. 53 (1): 45–61. doi:10.1053/j.ro.2017.11.005. PMID 29405955.
- Margiewicz S, Cordova C, Chi AS, Jain R (January 2018). "State of the Art Treatment and Surveillance Imaging of Glioblastomas". Seminars in Roentgenology. 53 (1): 23–36. doi:10.1053/j.ro.2017.11.003. PMID 29405952.
- MedlinePlus Encyclopedia: Necrosis
- "What you need to know about brain tumors". National Cancer Institute. Archived from the original on 27 January 2012. Retrieved 25 February 2012.
- Park BJ, Kim HK, Sade B, Lee JH (2009). "Epidemiology". In Lee JH (ed.). Meningiomas: Diagnosis, Treatment, and Outcome. Springer. p. 11. ISBN 978-1-84882-910-7.
- "Brain Tumors - Classifications, Symptoms, Diagnosis and Treatments". www.aans.org. Retrieved 29 January 2021.
- "Classifications of Brain Tumors". AANS. American Association of Neurological Surgeons. Archived from the original on 24 April 2017. Retrieved 23 April 2017.
- MedlinePlus Encyclopedia: Metastatic brain tumor
- Frappaz D, Mornex F, Saint-Pierre G, Ranchere-Vince D, Jouvet A, Chassagne-Clement C, et al. (1999). "Bone metastasis of glioblastoma multiforme confirmed by fine needle biopsy". Acta Neurochirurgica. 141 (5): 551–2. doi:10.1007/s007010050342. PMID 10392217. S2CID 40327650.
- Nicolato A, Gerosa MA, Fina P, Iuzzolino P, Giorgiutti F, Bricolo A (September 1995). "Prognostic factors in low-grade supratentorial astrocytomas: a uni-multivariate statistical analysis in 76 surgically treated adult patients". Surgical Neurology. 44 (3): 208–21, discussion 221–3. doi:10.1016/0090-3019(95)00184-0. PMID 8545771.
- Lecavalier-Barsoum M, Quon H, Abdulkarim B (May 2014). "Adjuvant treatment of anaplastic oligodendrogliomas and oligoastrocytomas". The Cochrane Database of Systematic Reviews (5): CD007104. doi:10.1002/14651858.cd007104.pub2. PMC 7388823. PMID 24833028.
- Spetzler RF, Sanai N (February 2012). "The quiet revolution: retractorless surgery for complex vascular and skull base lesions". Journal of Neurosurgery. 116 (2): 291–300. doi:10.3171/2011.8.JNS101896. PMID 21981642.
- "Brain & Spinal Tumors: Surgery & Recovery | Advanced Neurosurgery". Advanced Neurosurgery Associates. Retrieved 8 October 2020.
- Paul Brennan (4 August 2008). "Introduction to brain cancer". cliniclog.com. Archived from the original on 17 February 2012. Retrieved 19 December 2011.
- "Radiosurgery treatment comparisons – Cyberknife, Gamma knife, Novalis Tx". Archived from the original on 20 May 2007. Retrieved 22 July 2014.
- "Treating secondary brain tumours with WBRT". Cancer Research UK. Archived from the original on 25 October 2007. Retrieved 5 June 2012.
- "Whole Brain Radiation increases risk of learning and memory problems in cancer patients with brain metastases". MD Anderson Cancer Center. Archived from the original on 5 October 2008. Retrieved 5 June 2012.
- "Metastatic brain tumors". International RadioSurgery Association. Archived from the original on 16 June 2012. Retrieved 5 June 2012.
- Khan, Luluel; Soliman, Hany; Sahgal, Arjun; Perry, James; Xu, Wei; Tsao, May N. (21 May 2020). "External beam radiation dose escalation for high grade glioma". The Cochrane Database of Systematic Reviews. 5: CD011475. doi:10.1002/14651858.CD011475.pub3. ISSN 1469-493X. PMC 7389526. PMID 32437039.
- Perkins, Allen; Liu, Gerald (2016). "Primary Brain Tumors in Adults: Diagnosis and Treatment". American Family Physician. 93 (3): 211–217B. PMID 26926614.
- "How Our Patients Perform: Glioblastoma Multiforme". UCLA Neuro-Oncology Program. Archived from the original on 9 June 2012. Retrieved 5 June 2012.
- Dalvi A. "Normal Pressure Hydrocephalus Causes, Symptoms, Treatment". eMedicineHealth. Emedicinehealth.com. Archived from the original on 22 February 2012. Retrieved 17 February 2012.
- "Brain Stem Gliomas in Childhood". Childhoodbraintumor.org. Archived from the original on 9 March 2012. Retrieved 17 February 2012.
- Sasmita AO, Wong YP, Ling AP (February 2018). "Biomarkers and therapeutic advances in glioblastoma multiforme". Asia-Pacific Journal of Clinical Oncology. 14 (1): 40–51. doi:10.1111/ajco.12756. PMID 28840962.
- "GBM Guide – MGH Brain Tumor Center". Brain.mgh.harvard.edu. Archived from the original on 16 February 2012. Retrieved 17 February 2012.
- Tai CK, Kasahara N (January 2008). "Replication-competent retrovirus vectors for cancer gene therapy" (PDF). Frontiers in Bioscience. 13 (13): 3083–95. doi:10.2741/2910. PMID 17981778. Archived from the original (PDF) on 19 March 2012.
- Murphy AM, Rabkin SD (April 2013). "Current status of gene therapy for brain tumors". Translational Research. 161 (4): 339–54. doi:10.1016/j.trsl.2012.11.003. PMC 3733107. PMID 23246627.
- Ty AU, See SJ, Rao JP, Khoo JB, Wong MC (January 2006). "Oligodendroglial tumor chemotherapy using "decreased-dose-intensity" PCV: a Singapore experience". Neurology. 66 (2): 247–9. doi:10.1212/01.wnl.0000194211.68164.a0. PMID 16434664. S2CID 31170268. Archived from the original on 20 July 2008.
- "Neurology". Neurology. Archived from the original on 19 February 2012. Retrieved 17 February 2012.
- "Acoustic Neuroma (Vestibular Schwannoma)". www.hopkinsmedicine.org. Retrieved 19 July 2019.
- "UpToDate". www.uptodate.com. Retrieved 19 July 2019.
- Bondy ML, Scheurer ME, Malmer B, Barnholtz-Sloan JS, Davis FG, Il'yasova D, et al. (October 2008). "Brain tumor epidemiology: consensus from the Brain Tumor Epidemiology Consortium". Cancer. 113 (7 Suppl): 1953–68. doi:10.1002/cncr.23741. PMC 2861559. PMID 18798534.
- "Cancer Stat Facts: Brain and Other Nervous System Cancer". National Cancer Institute. 31 March 2019.
- Jallo GI, Benardete EA (January 2010). "Low-Grade Astrocytoma". Archived from the original on 27 July 2010. Cite journal requires
- "CANCERMondial". International Agency for Research on Cancer. Archived from the original on 17 February 2012. Retrieved 17 February 2012.
- "What are the key statistics about brain and spinal cord tumors?". American Cancer Society. 1 May 2012. Archived from the original on 2 July 2012.
- "2018 CBTRUS Fact Sheet". Central Brain Tumor Registry of the United States. 31 March 2019. Archived from the original on 14 February 2019. Retrieved 14 February 2019.
- "Brain, other CNS and intracranial tumours statistics". Cancer Research UK. Archived from the original on 16 October 2014. Retrieved 27 October 2014.
- "Quick Brain Tumor Facts". National Brain Tumor Society. Retrieved 14 February 2019.
- "CBTRUS - 2018 CBTRUS Fact Sheet". www.cbtrus.org. Archived from the original on 14 February 2019. Retrieved 14 February 2019.
- Hoda, Syed A; Cheng, Esther (6 November 2017). "Robbins Basic Pathology". American Journal of Clinical Pathology. 148 (6): 557. doi:10.1093/ajcp/aqx095. ISSN 0002-9173.
- Chamberlain MC, Kormanik PA (February 1998). "Practical guidelines for the treatment of malignant gliomas". The Western Journal of Medicine. 168 (2): 114–20. PMC 1304839. PMID 9499745.
- "Childhood Brain Cancer Now Leads to More Deaths than Leukemia". Fortune. Retrieved 14 February 2019.
- Gurney JG, Smith MA, Bunin GR. "CNS and Miscellaneous Intracranial and Intraspinal Neoplasms" (PDF). SEER Pediatric Monograph. National Cancer Institute. pp. 51–57. Archived (PDF) from the original on 17 December 2008. Retrieved 4 December 2008.
In the US, approximately 2,200 children and adolescents younger than 20 years of age are diagnosed with malignant central nervous system tumors each year. More than 90 percent of primary CNS malignancies in children are located within the brain.
- Rood BR. "Infantile Brain Tumors". The Childhood Brain Tumor Foundation. Archived from the original on 11 November 2012. Retrieved 23 July 2014.
- Echevarría ME, Fangusaro J, Goldman S (June 2008). "Pediatric central nervous system germ cell tumors: a review". The Oncologist. 13 (6): 690–9. doi:10.1634/theoncologist.2008-0037. PMID 18586924.
- "About childhood brain tumours". Archived from the original on 7 August 2016. Retrieved 16 June 2016.
- Bloch, O (2015). Immunotherapy for malignant gliomas. Cancer Treatment and Research. 163. pp. 143–58. doi:10.1007/978-3-319-12048-5_9. ISBN 978-3-319-12047-8. PMID 25468230.
- Auer R, Bell JC (January 2012). "Oncolytic viruses: smart therapeutics for smart cancers". Future Oncology. 8 (1): 1–4. doi:10.2217/fon.11.134. PMID 22149027.
- Garber K (March 2006). "China approves world's first oncolytic virus therapy for cancer treatment". Journal of the National Cancer Institute. 98 (5): 298–300. doi:10.1093/jnci/djj111. PMID 16507823.
- Logg CR, Tai CK, Logg A, Anderson WF, Kasahara N (May 2001). "A uniquely stable replication-competent retrovirus vector achieves efficient gene delivery in vitro and in solid tumors". Human Gene Therapy. 12 (8): 921–32. doi:10.1089/104303401750195881. PMC 8184367. PMID 11387057.
- Tai CK, Wang WJ, Chen TC, Kasahara N (November 2005). "Single-shot, multicycle suicide gene therapy by replication-competent retrovirus vectors achieves long-term survival benefit in experimental glioma". Molecular Therapy. 12 (5): 842–51. doi:10.1016/j.ymthe.2005.03.017. PMC 8185609. PMID 16257382. Archived from the original on 12 September 2010.
- "A Study of a Retroviral Replicating Vector Administered to Subjects With Recurrent Malignant Glioma". Clinical Trials.gov. July 2014. Archived from the original on 26 November 2011.
- van der Pol Y, Mouliere F (October 2019). "Toward the Early Detection of Cancer by Decoding the Epigenetic and Environmental Fingerprints of Cell-Free DNA". Cancer Cell. 36 (4): 350–368. doi:10.1016/j.ccell.2019.09.003. PMID 31614115.
| https://en.m.wikipedia.org/wiki/Brain_tumour | 21
20 | Introduction to Demand • In the United States, the forces of supply and demand work together to set prices. • Demand is the desire, willingness, and ability to buy a good or service. • Demand can refer to the demand of: • one individual consumer OR • the total demand of all consumers in the market (market demand).
Introduction to Demand • A demand schedule is a table that lists the various quantities of a product or service that someone is willing to buy over a range of possible prices.
Introduction to Demand • A demand schedule can be shown as points on a graph. • The graph lists prices on the vertical axis and quantities demanded on the horizontal axis. • Each point on the graph shows how many units of the product or service an individual will buy at a particular price. • The demand curve is the line that connects these points.
Introduction to Demand • The demand curve slopes downward. • This shows that people are normally willing to buy less of a product at a high price and more at a low price. • According to the law of demand, quantity demanded and price move in opposite directions.
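To make the law of demand concrete, here is a minimal sketch in Python; the schedule numbers are invented for illustration and are not from the slides. It stores a demand schedule and checks that quantity demanded never rises as price rises.

```python
# Hypothetical demand schedule: price in dollars -> quantity demanded (units).
demand_schedule = {1: 10, 2: 8, 3: 6, 4: 4, 5: 2}

# The law of demand predicts the schedule slopes downward:
# quantity demanded should never rise when price rises.
prices = sorted(demand_schedule)
downward_sloping = all(
    demand_schedule[low] >= demand_schedule[high]
    for low, high in zip(prices, prices[1:])
)
print("Law of demand holds for this schedule:", downward_sloping)
```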
Introduction to Demand • We buy products for their utility- the pleasure, usefulness, or satisfaction they give us. • What is your utility for the following products? (Measure your utility by the maximum amount you would be willing to pay for this product) • Do we have the same utility for these goods?
Introduction to Demand • One reason the demand curve slopes downward is diminishing marginal utility • The principle of diminishing marginal utility says that our additional satisfaction tends to go down as we consume more and more units. • To make a buying decision, we consider whether the satisfaction we expect to gain is worth the money we must give up.
Introduction to Supply • A supply schedule is a table that shows the quantities producers are willing to supply at various prices
Introduction to Supply • Supply refers to the various quantities of a good or service that producers are willing to sell at all possible market prices. • Supply can refer to the: • output of one producer OR • total output of all producers in the market (market supply).
Introduction to Supply • A supply schedule can be shown as points on a graph. • The graph lists prices on the vertical axis and quantities supplied on the horizontal axis. • Each point on the graph shows how many units of the product or service a producer (or group of producers) would willing sell at a particular price. • The supply curve is the line that connects these points.
Introduction to Supply • As the price for a good rises, the quantity supplied rises and the quantity demanded falls. As the price falls, the quantity supplied falls and the quantity demanded rises. • The law of supply holds that producers will normally offer more for sale at higher prices and less at lower prices.
Introduction to Supply • Businesses provide goods and services hoping to make a profit. • Profit is the money a business has left over after it covers its costs. • Businesses try to sell at prices high enough to cover their costs with some profit left over. • The higher the price for a good, the more profit a business will make after paying the cost for resources.
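As a hedged illustration of that point (the cost figures below are invented, not taken from the slides), a producer facing rising marginal costs keeps producing units only while the price covers the cost of the next unit, so higher prices call forth both more output and more profit:

```python
# Invented marginal costs of producing the 1st, 2nd, 3rd, ... unit.
marginal_cost = [2, 3, 5, 8, 12, 17]

def quantity_supplied(price):
    # Produce every unit whose marginal cost is covered by the price.
    return sum(1 for cost in marginal_cost if cost <= price)

for price in (4, 9, 20):
    qty = quantity_supplied(price)
    profit = price * qty - sum(marginal_cost[:qty])
    print(f"price {price}: supply {qty} units, profit {profit}")
```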
Equilibrium • When the quantity demanded and the quantity supplied are equal at a given price, the market reaches its equilibrium price • Equilibrium is ideal for producers because more of their product gets sold • Equilibrium is ideal for consumers because the right amount of product is available for the people who want it
Surplus • If supply exceeds demand there is a surplus • Price is too high so goods and services exchanged are limited by demand
Shortage • If demand exceeds supply, there will be a shortage • Price is too low so goods and services exchanged are limited by supply
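The relationship between equilibrium, surplus, and shortage can be illustrated with a small Python sketch. The linear demand and supply functions below are assumptions chosen only for illustration, not figures from the slides:

```python
def quantity_demanded(price):
    return 100 - 2 * price   # hypothetical linear demand

def quantity_supplied(price):
    return 10 + 4 * price    # hypothetical linear supply

def market_state(price):
    qd, qs = quantity_demanded(price), quantity_supplied(price)
    if qs > qd:
        return f"surplus of {qs - qd} units (price too high)"
    if qd > qs:
        return f"shortage of {qd - qs} units (price too low)"
    return "equilibrium: quantity demanded equals quantity supplied"

# Equilibrium where 100 - 2P = 10 + 4P, i.e. P = 15 and Q = 70.
for price in (10, 15, 20):
    print(f"price {price}: {market_state(price)}")
```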
Changes in Demand • Prices of related goods affect demand • Substitute goods: a substitute is a product that can be used in the place of another. • The price of the substitute good and demand for the other good are directly related • For example, when the price of Coke rises, the demand for Pepsi rises • Complementary goods: a complement is a good that goes well with another good. • When goods are complements, there is an inverse relationship between the price of one and the demand for the other • For example, when the price of peanut butter rises, the demand for jam falls
Elasticity • Demand elasticity- a market in which a change in price causes a significant change in the quantity demanded (luxury goods with substitutes) • Demand inelasticity- a market in which a change in price will not cause a significant change in the quantity demanded (a necessary good with no substitutes)
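A rough numerical sketch of that distinction, using the midpoint (arc) elasticity formula with made-up quantities and prices:

```python
def price_elasticity(q1, q2, p1, p2):
    """Midpoint formula: % change in quantity over % change in price."""
    pct_q = (q2 - q1) / ((q1 + q2) / 2)
    pct_p = (p2 - p1) / ((p1 + p2) / 2)
    return pct_q / pct_p

# Elastic demand (luxury good with substitutes): a $1 price rise cuts sales sharply.
elastic = price_elasticity(q1=100, q2=60, p1=10, p2=11)
# Inelastic demand (necessity with no substitutes): the same rise barely moves sales.
inelastic = price_elasticity(q1=100, q2=98, p1=10, p2=11)

print(f"elastic demand:   {elastic:.2f}  (|elasticity| > 1)")
print(f"inelastic demand: {inelastic:.2f}  (|elasticity| < 1)")
```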
Changes in supply These occur due to one or more of the following: • Cost of resources • Amount available • Productivity • Technology • Role of Government: Regulations, taxes, and subsidies- gov't money to help production • Expectations
Pure competition • Identical products- no difference in product sold • commodity- an essential product that is the same regardless of who makes or sells it
Changes in Supply Changes in any of the factors other than price causes the supply curve to shift either: • Decrease in Supply shifts to the Left (Less supplied at each price) OR • Increase in Supply shifts to the Right (More supplied at each price)
Introduction to Supply • The reason the supply curve slopes upward is due to costs and profit. • Producers purchase resources and use them to produce output. • Producers will incur costs as they bid resources away from their alternative uses.
Changes in Demand • Demand Curves can also shift in response to the following factors: • Buyers (# of): changes in the number of consumers • Income: changes in consumers’ income • Tastes: changes in preference or popularity of product/ service • Expectations: changes in what consumers expect to happen in the future • Related goods: complements and substitutes • BITER: factors that shift the demand curve
Changes in Demand • Change in the quantity demanded due to a price change occurs ALONG the demand curve • An increase in the Price of Cupcakes from $3 to $4 will lead to a decrease in the Quantity Demanded of cupcakes from 6 to 4.
Changes in Demand • Several factors will change the demand for the good (shift the entire demand curve) • As an example, suppose consumer income increases. The demand for Widgets at all prices will increase.
Changes in Demand • Demand will also decrease due to changes in factors other than price. • As an example, suppose cupcakes become less popular to own.
Changes in Demand Changes in any of the factors other than price causes the demand curve to shift either: • Decrease in Demand shifts to the Left (Less demanded at each price) OR • Increase in Demand shifts to the Right (More demanded at each price)
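The distinction between a movement along the demand curve (a price change) and a shift of the whole curve (a change in a non-price factor such as income) can be sketched as follows; the cupcake demand function is hypothetical, chosen only to match the $3/$4 cupcake example above:

```python
def quantity_demanded(price, income_index=1.0):
    """Hypothetical cupcake demand; a higher income index shifts the curve right."""
    return max(0, round(income_index * (12 - 2 * price)))

# Movement ALONG the curve: only the price changes.
print(quantity_demanded(3))                     # 6 cupcakes at $3
print(quantity_demanded(4))                     # 4 cupcakes at $4

# SHIFT of the curve: income (a non-price factor) rises, so quantity
# demanded changes at every price.
print(quantity_demanded(3, income_index=1.5))   # 9 cupcakes at $3
print(quantity_demanded(4, income_index=1.5))   # 6 cupcakes at $4
```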
Changes in Supply • Change in the quantity supplied due to a price change occurs ALONG the supply curve • If the price of Cupcakes fell to $2, then the Quantity Supplied would fall to 4 Cupcakes.
Changes in Supply • Supply Curves can also shift in response to the following factors: • Subsidies and taxes: government subsides encourage production, while taxes discourage production • Technology: improvements in production increase ability of firms to supply • Other goods: businesses consider the price of goods they could be producing • Number of sellers: how many firms are in the market • Expectations: businesses consider future prices and economic conditions • Resource costs: cost to purchase factors of production will influence business decisions • STONER: factors that shift the supply curve
Changes in Supply • Supply can also decrease due to factors other than a change in price. • As an example, suppose that a large number of Widget producers go out of business, decreasing the number of suppliers.
1. The income of the Pago-Pagans declines after a typhoon hits the island. [Graph: demand curves D and D1]
2. Pago-Paga is named one of the most beautiful islands in the world and tourism to the island doubles. [Graph: demand curves D and D1]
3. The price of Frisbees decreases. (Frisbees are a substitute good for boomerangs) [Graph: demand curves D and D1]
4. The price of boomerang t-shirts decreases, which I assume all of you know are a complementary good. [Graph: demand curves D and D1]
5. The Boomerang Manufacturers decide to add a money-back guarantee on their product, which increases the popularity for them. [Graph: demand curves D and D1]
6. Many Pago-Pagans begin to believe that they may lose their jobs in the near future. (Think expectations!) [Graph: demand curves D and D1]
7. Come up with your own story about boomerangs and the Pago-Pagans. Write down the story, draw the change in demand based on the story, and explain why demand changed. [Graph: demand curve D]
1. The government of Pago-Paga adds a subsidy to boomerang production. [Graph: supply curves S and S1]
2. Boomerang producers also produce Frisbees. The price of Frisbees goes up. [Graph: supply curves S and S1]
3. The government of Pago-Paga adds a new tax to boomerang production. [Graph: supply curves S and S1]
4. Boomerang producers expect an increase in the popularity of boomerangs worldwide. [Graph: supply curves S and S1]
5. The price of plastic, a major input in boomerang production, increases. [Graph: supply curves S and S1]
6. Pago-Pagan workers are introduced to coffee as Pago-Paga becomes integrated into the world market and their productivity increases drastically. [Graph: supply curves S and S1]
7. Come up with your own story about boomerangs and the Pago-Pagans. Write down the story, draw the change in supply based on the story, and explain why supply changed. [Graph: supply curve S]
Supply and Demand at Work • Markets bring buyers and sellers together. • The forces of supply and demand work together in markets to establish prices. • In our economy, prices form the basis of economic decisions.
Supply and Demand at Work • The supply and demand schedules can be combined into one chart. | https://fr.slideserve.com/gizela/supply-demand-and-market-equilibrium | 21
62 | Treaty of Versailles
The Treaty of Versailles (French: Traité de Versailles; German: Versailler Vertrag, pronounced [vɛʁˈzaɪ̯ɐ fɛɐ̯ˈtʁaːk]) was the most important of the peace treaties that brought World War I to an end. The Treaty ended the state of war between Germany and the Allied Powers. It was signed on 28 June 1919 in the Palace of Versailles, exactly five years after the assassination of Archduke Franz Ferdinand, which had directly led to the war. The other Central Powers on the German side signed separate treaties. Although the armistice, signed on 11 November 1918, ended the actual fighting, it took six months of Allied negotiations at the Paris Peace Conference to conclude the peace treaty. The treaty was registered by the Secretariat of the League of Nations on 21 October 1919.
Full name: Treaty of Peace between the Allied and Associated Powers and Germany
Signed: 28 June 1919
Location: Hall of Mirrors in the Palace of Versailles, Paris, France
Effective: 10 January 1920
Condition: Ratification by Germany and three Principal Allied Powers
Of the many provisions in the treaty, one of the most important and controversial required "Germany [to] accept the responsibility of Germany and her allies for causing all the loss and damage" during the war (the other members of the Central Powers signed treaties containing similar articles). This article, Article 231, later became known as the War Guilt clause. The treaty required Germany to disarm, make ample territorial concessions, and pay reparations to certain countries that had formed the Entente powers. In 1921 the total cost of these reparations was assessed at 132 billion gold marks (then $31.4 billion or £6.6 billion, roughly equivalent to US$442 billion or UK£284 billion in 2021). At the time economists, notably John Maynard Keynes (a British delegate to the Paris Peace Conference), predicted that the treaty was too harsh—a "Carthaginian peace"—and said the reparations figure was excessive and counter-productive, views that, since then, have been the subject of ongoing debate by historians and economists. On the other hand, prominent figures on the Allied side, such as French Marshal Ferdinand Foch, criticized the treaty for treating Germany too leniently.
The result of these competing and sometimes conflicting goals among the victors was a compromise that left no one satisfied, and, in particular, Germany was neither pacified nor conciliated, nor was it permanently weakened. The problems that arose from the treaty would lead to the Locarno Treaties, which improved relations between Germany and the other European powers, and the re-negotiation of the reparation system resulting in the Dawes Plan, the Young Plan, and the indefinite postponement of reparations at the Lausanne Conference of 1932. The treaty has sometimes been cited as a cause of World War II: although its actual impact was not as severe as feared, its terms led to great resentment in Germany which powered the rise of the Nazi Party.
Although it is often referred to as the "Versailles Conference", only the actual signing of the treaty took place at the historic palace. Most of the negotiations were in Paris, with the "Big Four" meetings taking place generally at the French Ministry of Foreign Affairs on the Quai d'Orsay.
First World War
On 28 June 1914, the heir to the throne of Austria-Hungary, the Archduke Franz Ferdinand of Austria, was assassinated by a Serbian nationalist. This caused a rapidly escalating July Crisis resulting in Austria-Hungary declaring war on Serbia, followed quickly by the entry of most European powers into the First World War. Two alliances faced off, the Central Powers (led by Germany) and the Triple Entente (led by Britain, France and Russia). Other countries entered as fighting raged widely across Europe, as well as the Middle East, Africa and Asia. In 1917, two revolutions occurred within the Russian Empire. The new Bolshevik government under Vladimir Lenin in March 1918 signed the Treaty of Brest-Litovsk that was highly favourable to Germany. Sensing victory before American armies could be ready, Germany now shifted forces to the Western Front and tried to overwhelm the Allies. It failed. Instead, the Allies won decisively on the battlefield and forced an armistice in November 1918 that resembled a surrender.
US entry and the Fourteen Points
On 6 April 1917, the United States entered the war against the Central Powers. The motives were twofold: German submarine warfare against merchant ships trading with France and Britain, which led to the sinking of the RMS Lusitania and the loss of 128 American lives; and the interception of the German Zimmermann Telegram, urging Mexico to declare war against the United States. The American war aim was to detach the war from nationalistic disputes and ambitions after the Bolshevik disclosure of secret treaties between the Allies. The existence of these treaties tended to discredit Allied claims that Germany was the sole power with aggressive ambitions.
On 8 January 1918, President Woodrow Wilson issued the nation's postwar goals, the Fourteen Points. It outlined a policy of free trade, open agreements, and democracy. While the term itself was not used, self-determination was assumed. It called for a negotiated end to the war, international disarmament, the withdrawal of the Central Powers from occupied territories, the creation of a Polish state, the redrawing of Europe's borders along ethnic lines, and the formation of a League of Nations to guarantee the political independence and territorial integrity of all states. It called for a just and democratic peace uncompromised by territorial annexation. The Fourteen Points were based on the research of the Inquiry, a team of about 150 advisors led by foreign-policy advisor Edward M. House, into the topics likely to arise in the expected peace conference.
Treaty of Brest-Litovsk, 1918
After the Central Powers launched Operation Faustschlag on the Eastern Front, the new Soviet Government of Russia signed the Treaty of Brest-Litovsk with Germany on 3 March 1918. This treaty ended the war between Russia and the Central Powers and detached 3,400,000 square kilometres (1,300,000 square miles) of territory and 62 million people from Russia. The loss amounted to one third of the Russian population, around one third of the country's arable land, three-quarters of its coal and iron, one third of its factories (totalling 54 percent of the nation's industrial capacity), and one quarter of its railroads.
During the autumn of 1918, the Central Powers began to collapse. Desertion rates within the German army began to increase, and civilian strikes drastically reduced war production. On the Western Front, the Allied forces launched the Hundred Days Offensive and decisively defeated the German western armies. Sailors of the Imperial German Navy at Kiel mutinied, which prompted uprisings in Germany, which became known as the German Revolution. The German government tried to obtain a peace settlement based on the Fourteen Points, and maintained it was on this basis that they surrendered. Following negotiations, the Allied powers and Germany signed an armistice, which came into effect on 11 November while German forces were still positioned in France and Belgium.
The terms of the armistice called for an immediate evacuation of German troops from occupied Belgium, France, and Luxembourg within fifteen days. In addition, it established that Allied forces would occupy the Rhineland. In late 1918, Allied troops entered Germany and began the occupation.
Both Germany and Great Britain were dependent on imports of food and raw materials, most of which had to be shipped across the Atlantic Ocean. The Blockade of Germany (1914–1919) was a naval operation conducted by the Allied Powers to stop the supply of raw materials and foodstuffs reaching the Central Powers. The German Kaiserliche Marine was mainly restricted to the German Bight and used commerce raiders and unrestricted submarine warfare for a counter-blockade. The German Board of Public Health in December 1918 stated that 763,000 German civilians had died during the Allied blockade, although an academic study in 1928 put the death toll at 424,000 people.
The blockade was maintained for eight months after the Armistice in November 1918, into the following year of 1919. Foodstuffs imports into Germany were controlled by the Allies after the Armistice with Germany until Germany signed the Treaty of Versailles in June 1919. In March 1919, Churchill informed the House of Commons, that the ongoing blockade was a success and "Germany is very near starvation." From January 1919 to March 1919, Germany refused to agree to Allied demands that Germany surrender its merchant ships to Allied ports to transport food supplies. Some Germans considered the armistice to be a temporary cessation of the war and knew, if fighting broke out again, their ships would be seized. Over the winter of 1919, the situation became desperate and Germany finally agreed to surrender its fleet in March. The Allies then allowed for the import of 270,000 tons of foodstuffs.
Both German and non-German observers have argued that these were the most devastating months of the blockade for German civilians, though disagreement persists as to the extent and who is truly at fault. According to Dr. Max Rubner, 100,000 German civilians died due to the continuation of the blockade after the armistice. In the UK, Labour Party member and anti-war activist Robert Smillie issued a statement in June 1919 condemning continuation of the blockade, claiming 100,000 German civilians had died as a result.
Talks between the Allies to establish a common negotiating position started on 18 January 1919, in the Salle de l'Horloge at the French Foreign Ministry on the Quai d'Orsay in Paris. Initially, 70 delegates from 27 nations participated in the negotiations. Russia was excluded due to their signing of a separate peace (the Treaty of Brest-Litovsk) and early withdrawal from the war. Furthermore, German negotiators were excluded to deny them an opportunity to divide the Allies diplomatically.
Initially, a "Council of Ten" (comprising two delegates each from Britain, France, the United States, Italy, and Japan) met officially to decide the peace terms. This council was replaced by the "Council of Five", formed from each country's foreign ministers, to discuss minor matters. French Prime Minister Georges Clemenceau, Italian Prime Minister Vittorio Emanuele Orlando, British Prime Minister David Lloyd George, and United States President Woodrow Wilson formed the "Big Four" (at one point becoming the "Big Three" following the temporary withdrawal of Vittorio Emanuele Orlando). These four men met in 145 closed sessions to make all the major decisions, which were later ratified by the entire assembly. The minor powers attended a weekly "Plenary Conference" that discussed issues in a general forum but made no decisions. These members formed over 50 commissions that made various recommendations, many of which were incorporated into the final text of the treaty.
France had lost 1.3 million soldiers, including 25% of French men aged 18–30, as well as 400,000 civilians. France had also been more physically damaged than any other nation: the so-called zone rouge (Red Zone), the most industrialized region and the source of most coal and iron ore in the north-east, had been devastated, and in the final days of the war mines had been flooded and railways, bridges and factories destroyed. Clemenceau intended to ensure the security of France by weakening Germany economically, militarily and territorially, and by supplanting Germany as the leading producer of steel in Europe. British economist and Versailles negotiator John Maynard Keynes summarized this position as attempting to "set the clock back and undo what, since 1870, the progress of Germany had accomplished."
Clemenceau told Wilson: "America is far away, protected by the ocean. Not even Napoleon himself could touch England. You are both sheltered; we are not". The French wanted a frontier on the Rhine, to protect France from a German invasion and compensate for French demographic and economic inferiority. American and British representatives refused the French claim and after two months of negotiations, the French accepted a British pledge to provide an immediate alliance with France if Germany attacked again, and Wilson agreed to put a similar proposal to the Senate. Clemenceau had told the Chamber of Deputies, in December 1918, that his goal was to maintain an alliance with both countries. Clemenceau accepted the offer, in return for an occupation of the Rhineland for fifteen years and that Germany would also demilitarise the Rhineland.
French negotiators required reparations to make Germany pay for the destruction caused throughout the war and to decrease German strength. The French also wanted the iron ore and coal of the Saar Valley to be annexed to France. The French were willing to accept a smaller amount of reparations than the Americans would concede, and Clemenceau was willing to discuss German capacity to pay with the German delegation before the final settlement was drafted. In April and May 1919, the French and Germans held separate talks on mutually acceptable arrangements on issues like reparation, reconstruction and industrial collaboration. France, along with the British Dominions and Belgium, opposed mandates and favored annexation of former German colonies.
Britain had suffered heavy financial costs but suffered little physical devastation during the war, but the British wartime coalition was re-elected during the so-called Coupon election at the end of 1918, with a policy of squeezing the Germans "'til the pips squeak". Public opinion favoured a "just peace", which would force Germany to pay reparations and be unable to repeat the aggression of 1914, although those of a "liberal and advanced opinion" shared Wilson's ideal of a peace of reconciliation.
In private Lloyd George opposed revenge and attempted to compromise between Clemenceau's demands and the Fourteen Points, because Europe would eventually have to reconcile with Germany. Lloyd George wanted terms of reparation that would not cripple the German economy, so that Germany would remain a viable economic power and trading partner. By arguing that British war pensions and widows' allowances should be included in the German reparation sum, Lloyd George ensured that a large amount would go to the British Empire.
Lloyd George also intended to maintain a European balance of power to thwart a French attempt to establish itself as the dominant European power. A revived Germany would be a counterweight to France and a deterrent to Bolshevik Russia. Lloyd George also wanted to neutralize the German navy to keep the Royal Navy as the greatest naval power in the world; dismantle the German colonial empire with several of its territorial possessions ceded to Britain and others being established as League of Nations mandates, a position opposed by the Dominions.
Before the American entry into the war, Wilson had talked of a "peace without victory". This position fluctuated following the US entry into the war. Wilson spoke of the German aggressors, with whom there could be no compromised peace. On 8 January 1918, however, Wilson delivered a speech (known as the Fourteen Points) that declared the American peace objectives: the rebuilding of the European economy, self-determination of European and Middle Eastern ethnic groups, the promotion of free trade, the creation of appropriate mandates for former colonies, and above all, the creation of a powerful League of Nations that would ensure the peace. The aim of the latter was to provide a forum to revise the peace treaties as needed, and deal with problems that arose as a result of the peace and the rise of new states.
Wilson brought along top intellectuals as advisors to the American peace delegation, and the overall American position echoed the Fourteen Points. Wilson firmly opposed harsh treatment of Germany. While the British and French wanted to largely annex the German colonial empire, Wilson saw that as a violation of the fundamental principles of justice and human rights of the native populations, and favored them having the right of self-determination via the creation of mandates. The promoted idea called for the major powers to act as disinterested trustees over a region, aiding the native populations until they could govern themselves. In spite of this position, and in order to ensure that Japan did not refuse to join the League of Nations, Wilson favored turning over the former German colony of Shandong, in Eastern China, to Japan rather than returning the area to Chinese control. Further confounding the Americans was US internal partisan politics. In November 1918, the Republican Party won the Senate election by a slim margin. Wilson, a Democrat, refused to include prominent Republicans in the American delegation, making his efforts seem partisan and contributing to a risk of political defeat at home.
Vittorio Emanuele Orlando and his foreign minister Sidney Sonnino, an Anglican of British origins, worked primarily to secure the partition of the Habsburg Empire, and their attitude towards Germany was not as hostile. Generally speaking, Sonnino was in line with the British position while Orlando favored a compromise between Clemenceau and Wilson. Within the negotiations for the Treaty of Versailles, Orlando obtained certain results such as the permanent membership of Italy in the security council of the League of Nations and a promised transfer of British Jubaland and the French Aozou strip to the Italian colonies of Somalia and Libya respectively. Italian nationalists, however, saw the war as a mutilated victory because of what they considered to be the meagre territorial gains achieved in the other treaties directly affecting Italy's borders. Orlando was ultimately forced to abandon the conference and resign. Orlando refused to see World War I as a mutilated victory, replying to nationalists calling for greater expansion that "Italy today is a great state....on par with the great historic and contemporary states. This is, for me, our main and principal expansion." Francesco Saverio Nitti took Orlando's place in signing the Treaty of Versailles.
Treaty content and signing
In June 1919, the Allies declared that war would resume if the German government did not sign the treaty they had agreed to among themselves. The government headed by Philipp Scheidemann was unable to agree on a common position, and Scheidemann himself resigned rather than agree to sign the treaty. Gustav Bauer, the head of the new government, sent a telegram stating his intention to sign the treaty if certain articles were withdrawn, including Articles 227, 230 and 231. In response, the Allies issued an ultimatum stating that Germany would have to accept the treaty or face an invasion of Allied forces across the Rhine within 24 hours. On 23 June, Bauer capitulated and sent a second telegram with a confirmation that a German delegation would arrive shortly to sign the treaty. On 28 June 1919, the fifth anniversary of the assassination of Archduke Franz Ferdinand (the immediate impetus for the war), the peace treaty was signed. The treaty had clauses ranging from war crimes, the prohibition on the merging of the Republic of German Austria with Germany without the consent of the League of Nations, freedom of navigation on major European rivers, to the returning of a Koran to the king of Hedjaz.
The treaty stripped Germany of 65,000 km2 (25,000 sq mi) of territory and 7 million people. It also required Germany to give up the gains made via the Treaty of Brest-Litovsk and grant independence to the protectorates that had been established. In Western Europe Germany was required to recognize Belgian sovereignty over Moresnet and cede control of the Eupen-Malmedy area. Within six months of the transfer, Belgium was required to conduct a plebiscite on whether the citizens of the region wanted to remain under Belgian sovereignty or return to German control, communicate the results to the League of Nations and abide by the League's decision. To compensate for the destruction of French coal mines, Germany was to cede the output of the Saar coalmines to France and control of the Saar to the League of Nations for 15 years; a plebiscite would then be held to decide sovereignty. The treaty restored the provinces of Alsace-Lorraine to France by rescinding the treaties of Versailles and Frankfurt of 1871 as they pertained to this issue. France was able to make the claim that the provinces of Alsace-Lorraine were indeed part of France and not part of Germany by disclosing a letter sent from the Prussian King to the Empress Eugénie that Eugénie provided, in which William I wrote that the territories of Alsace-Lorraine were requested by Germany for the sole purpose of national defense and not to expand the German territory. The sovereignty of Schleswig-Holstein was to be resolved by a plebiscite to be held at a future time (see Schleswig Plebiscites).
In Central Europe Germany was to recognize the independence of Czechoslovakia (which had actually been controlled by Austria) and cede parts of the province of Upper Silesia. Germany had to recognize the independence of Poland and renounce "all rights and title over the territory". Portions of Upper Silesia were to be ceded to Poland, with the future of the rest of the province to be decided by plebiscite. The border would be fixed with regard to the vote and to the geographical and economic conditions of each locality. The province of Posen (now Poznań), which had come under Polish control during the Greater Poland Uprising, was also to be ceded to Poland. Pomerelia (Eastern Pomerania), on historical and ethnic grounds, was transferred to Poland so that the new state could have access to the sea and became known as the Polish Corridor. The sovereignty of part of southern East Prussia was to be decided via plebiscite while the East Prussian Soldau area, which was astride the rail line between Warsaw and Danzig, was transferred to Poland outright without plebiscite. An area of 51,800 square kilometres (20,000 square miles) was granted to Poland at the expense of Germany. Memel was to be ceded to the Allied and Associated powers, for disposal according to their wishes. Germany was to cede the city of Danzig and its hinterland, including the delta of the Vistula River on the Baltic Sea, for the League of Nations to establish the Free City of Danzig.
Article 119 of the treaty required Germany to renounce sovereignty over former colonies and Article 22 converted the territories into League of Nations mandates under the control of Allied states. Togoland and German Kamerun (Cameroon) were transferred to France. Ruanda and Urundi were allocated to Belgium, whereas German South-West Africa went to South Africa and Britain obtained German East Africa. As compensation for the German invasion of Portuguese Africa, Portugal was granted the Kionga Triangle, a sliver of German East Africa in northern Mozambique. Article 156 of the treaty transferred German concessions in Shandong, China, to Japan, not to China. Japan was granted all German possessions in the Pacific north of the equator and those south of the equator went to Australia, except for German Samoa, which was taken by New Zealand.
The treaty was comprehensive and complex in the restrictions imposed upon the post-war German armed forces (the Reichswehr). The provisions were intended to make the Reichswehr incapable of offensive action and to encourage international disarmament. Germany was to demobilize sufficient soldiers by 31 March 1920 to leave an army of no more than 100,000 men in a maximum of seven infantry and three cavalry divisions. The treaty laid down the organisation of the divisions and support units, and the General Staff was to be dissolved. Military schools for officer training were limited to three, one school per arm, and conscription was abolished. Private soldiers and non-commissioned officers were to be retained for at least twelve years and officers for a minimum of 25 years, with former officers being forbidden to attend military exercises. To prevent Germany from building up a large cadre of trained men, the number of men allowed to leave early was limited.
The number of civilian staff supporting the army was reduced and the police force was reduced to its pre-war size, with increases limited to population increases; paramilitary forces were forbidden. The Rhineland was to be demilitarized, all fortifications in the Rhineland and 50 kilometres (31 miles) east of the river were to be demolished and new construction was forbidden. Military structures and fortifications on the islands of Heligoland and Düne were to be destroyed. Germany was prohibited from the arms trade, limits were imposed on the type and quantity of weapons and prohibited from the manufacture or stockpile of chemical weapons, armoured cars, tanks and military aircraft. The German navy was allowed six pre-dreadnought battleships and was limited to a maximum of six light cruisers (not exceeding 6,000 long tons (6,100 t)), twelve destroyers (not exceeding 800 long tons (810 t)) and twelve torpedo boats (not exceeding 200 long tons (200 t)) and was forbidden submarines. The manpower of the navy was not to exceed 15,000 men, including manning for the fleet, coast defences, signal stations, administration, other land services, officers and men of all grades and corps. The number of officers and warrant officers was not allowed to exceed 1,500 men. Germany surrendered eight battleships, eight light cruisers, forty-two destroyers, and fifty torpedo boats for decommissioning. Thirty-two auxiliary ships were to be disarmed and converted to merchant use. Article 198 prohibited Germany from having an air force, including naval air forces, and required Germany to hand over all aerial related materials. In conjunction, Germany was forbidden to manufacture or import aircraft or related material for a period of six months following the signing of the treaty.
In Article 231 Germany accepted responsibility for the losses and damages caused by the war "as a consequence of the ... aggression of Germany and her allies." The treaty required Germany to compensate the Allied powers, and it also established an Allied "Reparation Commission" to determine the exact amount which Germany would pay and the form that such payment would take. The commission was required to "give to the German Government a just opportunity to be heard", and to submit its conclusions by 1 May 1921. In the interim, the treaty required Germany to pay an equivalent of 20 billion gold marks ($5 billion) in gold, commodities, ships, securities or other forms. The money would help to pay for Allied occupation costs and buy food and raw materials for Germany.
To ensure compliance, the Rhineland and bridgeheads east of the Rhine were to be occupied by Allied troops for fifteen years. If Germany had not committed aggression, a staged withdrawal would take place; after five years, the Cologne bridgehead and the territory north of a line along the Ruhr would be evacuated. After ten years, the bridgehead at Coblenz and the territories to the north would be evacuated and after fifteen years remaining Allied forces would be withdrawn. If Germany reneged on the treaty obligations, the bridgeheads would be reoccupied immediately.
Part I of the treaty, in common with all the treaties signed during the Paris Peace Conference, was the Covenant of the League of Nations, which provided for the creation of the League, an organization for the arbitration of international disputes. Part XIII organized the establishment of the International Labour Office, to regulate hours of work, including a maximum working day and week; the regulation of the labour supply; the prevention of unemployment; the provision of a living wage; the protection of the worker against sickness, disease and injury arising out of his employment; the protection of children, young persons and women; provision for old age and injury; protection of the interests of workers when employed abroad; recognition of the principle of freedom of association; the organization of vocational and technical education and other measures. The treaty also called for the signatories to sign or ratify the International Opium Convention.
The delegates of the Commonwealth and British Government had mixed thoughts on the treaty, with some seeing the French policy as being greedy and vindictive. Lloyd George and his private secretary Philip Kerr believed in the treaty, although they also felt that the French would keep Europe in a constant state of turmoil by attempting to enforce the treaty. Delegate Harold Nicolson wrote "are we making a good peace?", while General Jan Smuts (a member of the South African delegation) wrote to Lloyd George, before the signing, that the treaty was unstable and declared "Are we in our sober senses or suffering from shellshock? What has become of Wilson's 14 points?" He did not want the Germans to be made to sign at the "point of the bayonet". Smuts issued a statement condemning the treaty and regretting that the promises of "a new international order and a fairer, better world are not written in this treaty". Lord Robert Cecil said that many within the Foreign Office were disappointed by the treaty. The treaty received widespread approval from the general public. Bernadotte Schmitt wrote that the "average Englishman ... thought Germany got only what it deserved" as a result of the treaty, but public opinion changed as German complaints mounted.
Former Prime Minister Ramsay MacDonald, following the German re-militarisation of the Rhineland in 1936, stated that he was "pleased" that the treaty was "vanishing", expressing his hope that the French had been taught a "severe lesson".
Status of British Dominions
The Treaty of Versailles was an important step in the status of the British Dominions under international law. Australia, Canada, New Zealand and South Africa had each made significant contributions to the British war effort, but as separate countries, rather than as British colonies. India also made a substantial troop contribution, although under direct British control, unlike the Dominions. The four Dominions and India all signed the Treaty separately from Britain, a clear recognition by the international community that the Dominions were no longer British colonies. "Their status defied exact analysis by both international and constitutional lawyers, but it was clear that they were no longer regarded simply as colonies of Britain." By signing the Treaty individually, the four Dominions and India also were founding members of the League of Nations in their own right, rather than simply as part of the British Empire.
The signing of the treaty was met with roars of approval, singing, and dancing from a crowd outside the Palace of Versailles. In Paris proper, people rejoiced at the official end of the war, the return of Alsace and Lorraine to France, and that Germany had agreed to pay reparations.
While France ratified the treaty and was active in the League, the jubilant mood soon gave way to a political backlash for Clemenceau. The French Right saw the treaty as being too lenient and saw it as failing to achieve all of France's demands. Left-wing politicians attacked the treaty and Clemenceau for being too harsh (the latter turning into a ritual condemnation of the treaty, for politicians remarking on French foreign affairs, as late as August 1939). Marshal Ferdinand Foch stated "this (treaty) is not peace. It is an armistice for twenty years", a criticism of the failure to annex the Rhineland and of compromising French security for the benefit of the United States and Britain. When Clemenceau stood for election as President of France in January 1920, he was defeated.
Reaction in Italy to the treaty was extremely negative. The country had suffered high casualties, yet failed to achieve most of its major war goals, notably gaining control of the Dalmatian coast and Fiume. President Wilson rejected Italy's claims on the basis of "national self-determination." For their part, Britain and France—who had been forced in the war's latter stages to divert their own troops to the Italian front to stave off collapse—were disinclined to support Italy's position at the peace conference. Differences in negotiating strategy between Premier Vittorio Orlando and Foreign Minister Sidney Sonnino further undermined Italy's position at the conference. A furious Vittorio Orlando suffered a nervous collapse and at one point walked out of the conference (though he later returned). He lost his position as prime minister just a week before the treaty was scheduled to be signed, effectively ending his active political career. Anger and dismay over the treaty's provisions helped pave the way for the establishment of Benito Mussolini's dictatorship three years later.
Portugal entered the war on the Allied side in 1916 primarily to ensure the security of its African colonies, which were threatened with seizure by both Britain and Germany. To this extent, she succeeded in her war aims. The treaty recognized Portuguese sovereignty over these areas and awarded her small portions of Germany's bordering overseas colonies. Otherwise, Portugal gained little at the peace conference. Her promised share of German reparations never materialized, and a seat she coveted on the executive council of the new League of Nations went instead to Spain—which had remained neutral in the war. In the end, Portugal ratified the treaty, but got little out of the war, which cost more than 8,000 Portuguese troops and as many as 100,000 of her African colonial subjects their lives.
In the United States, the Republican Party, led by Henry Cabot Lodge, controlled the Senate after the election of 1918, and the senators were divided into multiple positions on the Versailles question. It proved possible to build a majority coalition, but impossible to build a two-thirds coalition that was needed to pass a treaty.
A discontented bloc of 12–18 "Irreconcilables", mostly Republicans but also representatives of the Irish and German Democrats, fiercely opposed the treaty. One bloc of Democrats strongly supported the Versailles Treaty, even with reservations added by Lodge. A second group of Democrats supported the treaty but followed Wilson in opposing any amendments or reservations. The largest bloc, led by Senator Lodge, comprised a majority of the Republicans. They wanted a treaty with reservations, especially on Article 10, which involved the power of the League of Nations to make war without a vote by the US Congress. All of the Irreconcilables were bitter enemies of President Wilson, and he launched a nationwide speaking tour in the summer of 1919 to refute them. But Wilson collapsed midway with a serious stroke that effectively ruined his leadership skills.
The closest the treaty came to passage was on 19 November 1919, when Lodge and his Republicans formed a coalition with the pro-treaty Democrats and came close to a two-thirds majority for a treaty with reservations; but Wilson rejected this compromise, and enough Democrats followed his lead to end the chances of ratification permanently. Among the American public as a whole, Irish Catholics and German Americans were intensely opposed to the treaty, saying it favored the British.
After Wilson's presidency, his successor Republican President Warren G. Harding continued American opposition to the formation of the League of Nations. Congress subsequently passed the Knox–Porter Resolution bringing a formal end to hostilities between the United States and the Central Powers. It was signed into law by President Harding on 2 July 1921. Soon after, the US–German Peace Treaty of 1921 was signed in Berlin on 25 August 1921, and two similar treaties were signed with Austria and Hungary on 24 and 29 August 1921, in Vienna and Budapest respectively.
Edward House's views
Wilson's former friend Edward Mandell House, present at the negotiations, wrote in his diary on 29 June 1919:
I am leaving Paris, after eight fateful months, with conflicting emotions. Looking at the conference in retrospect, there is much to approve and yet much to regret. It is easy to say what should have been done, but more difficult to have found a way of doing it. To those who are saying that the treaty is bad and should never have been made and that it will involve Europe in infinite difficulties in its enforcement, I feel like admitting it. But I would also say in reply that empires cannot be shattered, and new states raised upon their ruins without disturbance. To create new boundaries is to create new troubles. The one follows the other. While I should have preferred a different peace, I doubt very much whether it could have been made, for the ingredients required for such a peace as I would have were lacking at Paris.
Many in China felt betrayed as the German territory in China was handed to Japan. Wellington Koo refused to sign the treaty, and the Chinese delegation at the Paris Peace Conference was the only one not to sign the Treaty of Versailles at the signing ceremony. The sense of betrayal led to great demonstrations in China such as the May Fourth Movement. There was immense dissatisfaction with Duan Qirui's government, which had secretly negotiated with the Japanese in order to secure loans to fund its military campaigns against the south. On 12 June 1919, the Chinese cabinet was forced to resign, and the government instructed its delegation at Versailles not to sign the treaty. As a result, relations with the West deteriorated.
On 29 April, the German delegation under the leadership of the Foreign Minister Ulrich Graf von Brockdorff-Rantzau arrived in Versailles. On 7 May, when faced with the conditions dictated by the victors, including the so-called "War Guilt Clause", von Brockdorff-Rantzau replied to Clemenceau, Wilson and Lloyd George: "We can sense the full force of hatred that confronts us here. ... You demand from us to confess we were the only guilty party of war; such a confession in my mouth would be a lie." Because Germany was not allowed to take part in the negotiations, the German government issued a protest against what it considered to be unfair demands, and a "violation of honour", soon afterwards withdrawing from the proceedings of the peace conference.
Germans of all political shades denounced the treaty—particularly the provision that blamed Germany for starting the war—as an insult to the nation's honour. They referred to the treaty as "the Diktat" since its terms were presented to Germany on a take-it-or-leave-it basis. Germany's first democratically elected head of government, Philipp Scheidemann, resigned rather than sign the treaty. In an emotional and polemical address to the National Assembly on 12 May 1919, he called the treaty a "horrific and murderous witch's hammer", and exclaimed:
After Scheidemann's resignation, a new coalition government was formed under Gustav Bauer. President Friedrich Ebert knew that Germany was in an impossible situation. Although he shared his countrymen's disgust with the treaty, he was sober enough to consider the possibility that the government would not be in a position to reject it. He believed that if Germany refused to sign the treaty, the Allies would invade Germany from the west—and there was no guarantee that the army would be able to make a stand in the event of an invasion. With this in mind, he asked Field Marshal Paul von Hindenburg if the army was capable of any meaningful resistance in the event the Allies resumed the war. If there was even the slightest chance that the army could hold out, Ebert intended to recommend against ratifying the treaty. Hindenburg—after prodding from his chief of staff, Wilhelm Groener—concluded the army could not resume the war even on a limited scale. But rather than inform Ebert himself, he had Groener inform the government that the army would be in an untenable position in the event of renewed hostilities. Upon receiving this, the new government recommended signing the treaty. The National Assembly voted in favour of signing the treaty by 237 to 138, with five abstentions (there were 421 delegates in total). This result was wired to Clemenceau just hours before the deadline. Foreign minister Hermann Müller and colonial minister Johannes Bell travelled to Versailles to sign the treaty on behalf of Germany. The treaty was signed on 28 June 1919 and ratified by the National Assembly on 9 July by a vote of 209 to 116.
The disenfranchised and often colonized 'non-white' world held high expectations that a new order would open up an unheralded opportunity to have a principle of racial equality recognized by the leading global powers. Japanese diplomacy carried bitter memories of Yellow Peril rhetoric and of the arrogance, underwritten by assumptions about a White Man's Burden, that had characterized Western states' treatment of Japanese nationals; these memories were aggravated by rising discrimination against Japanese businessmen, severe immigration restrictions on Asiatics, and court judgments hostile to Japanese interests. Japan's delegation, among whose plenipotentiaries figured Baron Makino and Ambassador Chinda Sutemi, was led by its elder statesman Saionji Kinmochi.
Versailles represented a chance to overturn this imposed inferiority, whose tensions had grown particularly acute in Japan's relationship with the United States during World War I. Confidence in their growing industrial strength and their conquest of Germany's Far Eastern possessions, together with their proven fidelity to the Entente, would, it was thought, allow the Japanese finally to take their rightful place among the victorious Great Powers. They solicited support especially from the American delegation to obtain recognition for the principle of racial equality at the League of Nations Commission. Their proposals to this end were consistently rebuffed by British, American and Australian diplomats, who were all sensitive to their respective countries' internal pressures. Wilson himself was an enactor of segregationist policies in the United States, Balfour considered Africans inferior to Europeans – equality was only true of people within particular nations – while William Hughes, adopting a "slap the Jap" attitude, was a vocal defender of a White Australia policy.
Japan's attempt, supported by the Chinese emissary Wellington Koo among others, to incorporate a Racial Equality Proposal in the treaty enjoyed broad support, but was effectively defeated when the United States, Great Britain and Australia rejected it, despite a powerfully persuasive speech delivered by Makino.
Japan itself, both prior to and during World War I, had embarked on a vigorous expansion of continental colonialism, whose aims were justified in terms of an ideological vision of Asians, such as Koreans and Chinese, being of the same culture and race (dōbun dōshu: 同文同種), though its vision of those countries was paternalistic and geared to subordinating them to Japan's interests. Aspiring to be accepted as a world actor with similar status to the traditional Western powers, Japan envisaged an Asian Monroe Doctrine, in which Japan's proper sphere of geostrategic interests in Asia would be recognized. Some years earlier, Japan had secured both British and French support for its claims to inherit the rights that Germany had exercised both in China and in the Pacific north of the Equator. American policy experts, unaware of these secret agreements, nonetheless suggested that Japan had adopted a Prussian model that would imperil China's own search for autonomy, and these considerations influenced Wilson.
On 5 May 1921, the Reparation Commission established the London Schedule of Payments and a final reparation sum of 132 billion gold marks to be demanded of all the Central Powers. This was the public assessment of what the Central Powers combined could pay, and was also a compromise between Belgian, British, and French demands and assessments. Furthermore, the Commission recognized that the Central Powers could pay little and that the burden would fall upon Germany. As a result, the sum was split into different categories, of which Germany was only required to pay 50 billion gold marks (US$12.5 billion); this was the Commission's genuine assessment of what Germany could pay, and it allowed the Allied powers to save face with the public by presenting a higher figure. Furthermore, payments made between 1919 and 1921 were taken into account, reducing the sum to 41 billion gold marks.
In order to meet this sum, Germany could pay in cash or kind: coal, timber, chemical dyes, pharmaceuticals, livestock, agricultural machines, construction materials, and factory machinery. Germany's assistance with the restoration of the university library of Leuven, which was destroyed by the Germans on 25 August 1914, was also credited towards the sum. Territorial changes imposed by the treaty were also factored in. The payment schedule required US$250 million within twenty-five days and then US$500 million annually, plus 26 per cent of the value of German exports. The German Government was to issue bonds at five per cent interest and set up a sinking fund of one per cent to support the payment of reparations.
In February and March 1920, the Schleswig Plebiscites were held. The people of Schleswig were presented with only two choices: Danish or German sovereignty. The northern, Danish-speaking area voted for Denmark, while the southern, German-speaking area voted for Germany, resulting in the province being partitioned. The East Prussia plebiscite was held on 11 July 1920. There was a 90 per cent turnout, with 99.3 per cent of the population wishing to remain with Germany. Further plebiscites were held in Eupen-Malmedy and Neutral Moresnet. On 20 September 1920, the League of Nations allotted these territories to Belgium. These latter plebiscites were followed by a boundary commission in 1922, and the new Belgian-German border was recognized by the German Government on 15 December 1923. The transfer of the Hultschin area of Silesia to Czechoslovakia was completed on 3 February 1921.
Following the implementation of the treaty, Upper Silesia was initially governed by Britain, France, and Italy. Between 1919 and 1921, three major outbreaks of violence took place between German and Polish civilians, resulting in German and Polish military forces also becoming involved. In March 1921, the Inter-Allied Commission held the Upper Silesia plebiscite, which was peaceful despite the previous violence. The plebiscite resulted in c. 60 per cent of the population voting for the province to remain part of Germany. Following the vote, the League of Nations debated the future of the province. In 1922, Upper Silesia was partitioned: Oppeln, in the north-west, remained with Germany while Silesia Province, in the south-east, was transferred to Poland.
Memel remained under the authority of the League of Nations, with a French military garrison, until January 1923. On 9 January 1923, Lithuanian forces invaded the territory during the Klaipėda Revolt. The French garrison withdrew, and in February the Allies agreed to attach Memel as an "autonomous territory" to Lithuania. On 8 May 1924, after negotiations between the Lithuanian Government and the Conference of Ambassadors and action by the League of Nations, the annexation of Memel was ratified. Lithuania accepted the Memel Statute, a power-sharing arrangement to protect non-Lithuanians in the territory and its autonomous status while responsibility for the territory remained with the great powers. The League of Nations mediated between the Germans and Lithuanians on a local level, helping the power-sharing arrangement last until 1939.
On 13 January 1935, 15 years after the Saar Basin had been placed under the protection of the League of Nations, a plebiscite was held to determine the future of the area. 528,105 votes were cast, with 477,119 votes (90 per cent of the ballot) in favour of union with Germany; 46,613 votes were cast for the status quo, and 2,124 votes for union with France. The region returned to German sovereignty on 1 March 1935. When the result was announced, 4,100 people, including 800 refugees from Germany, fled to France.
In late 1918, American, Belgian, British, and French troops entered the Rhineland to enforce the armistice. Before the treaty, the occupation force stood at roughly 740,000 men. Following the signing of the peace treaty, the numbers drastically decreased, and by 1926 the occupation force numbered only 76,000 men. As part of the 1929 negotiations that would become the Young Plan, Gustav Stresemann and Aristide Briand negotiated the early withdrawal of Allied forces from the Rhineland. On 30 June 1930, after speeches and the lowering of flags, the last troops of the Anglo-French-Belgian occupation force withdrew from Germany.
The British Second Army, with some 275,000 veteran soldiers, entered Germany in late 1918. In March 1919, this force became the British Army of the Rhine (BAOR). The total number of troops committed to the occupation rapidly dwindled as veteran soldiers were demobilized and replaced by inexperienced men who had finished basic training following the cessation of hostilities. By 1920, the BAOR consisted of only 40,594 men, and the following year it had been further reduced to 12,421. The size of the BAOR fluctuated over the following years, but never rose above 9,000 men. The British did not adhere to all obligated territorial withdrawals as dictated by Versailles, on account of Germany not meeting her own treaty obligations. A complete withdrawal was considered, but rejected in order to maintain a presence to continue acting as a check on French ambitions and to prevent the establishment of an autonomous Rhineland Republic.
The French Army of the Rhine was initially 250,000 men strong, including at its peak 40,000 African colonial troops (Troupes coloniales). By 1923, the French occupation force had decreased to roughly 130,000 men, including 27,126 African troops. Troop numbers peaked again at 250,000 during the occupation of the Ruhr, before decreasing to 60,000 men by 1926. Germans viewed the use of French colonial troops as a deliberate act of humiliation, and used their presence to create a propaganda campaign dubbed the Black Shame. This campaign lasted throughout the 1920s and 1930s, although it peaked in 1920 and 1921. For example, a 1921 German Government memo detailed 300 acts of violence by colonial troops, which included 65 murders and 170 sexual offenses. Historical consensus is that the charges were exaggerated for political and propaganda purposes, and that the colonial troops behaved far better than their white counterparts. An estimated 500–800 children, known as the Rhineland Bastards, were born as a result of fraternization between colonial troops and German women; they would later be persecuted.
The United States Third Army entered Germany with 200,000 men. In June 1919, the Third Army demobilized, and by 1920 the US occupation force had been reduced to 15,000 men. Wilson further reduced the garrison to 6,500 men before Warren G. Harding's inauguration in 1921. On 7 January 1923, after the Franco–Belgian occupation of the Ruhr, the US Senate legislated the withdrawal of the remaining force. On 24 January, the American garrison started its withdrawal from the Rhineland, with the final troops leaving in early February.
The German economy was so weak that only a small percentage of reparations was paid in hard currency. Nonetheless, even the payment of this small percentage of the original reparations (132 billion gold marks) still placed a significant burden on the German economy. Although the causes of the devastating post-war hyperinflation are complex and disputed, Germans blamed the near-collapse of their economy on the treaty, and some economists estimated that the reparations accounted for as much as one-third of the hyperinflation.
In March 1921, French and Belgian troops occupied Duisburg, Düsseldorf, and other areas which formed part of the demilitarized Rhineland, according to the Treaty of Versailles. In January 1923, French and Belgian forces occupied the rest of the Ruhr area as a reprisal after Germany failed to fulfill reparation payments demanded by the Versailles Treaty. The German government answered with "passive resistance", which meant that coal miners and railway workers refused to obey any instructions by the occupation forces. Production and transportation came to a standstill, but the financial consequences contributed to German hyperinflation and completely ruined public finances in Germany. Consequently, passive resistance was called off in late 1923. The end of passive resistance in the Ruhr allowed Germany to undertake a currency reform and to negotiate the Dawes Plan, which led to the withdrawal of French and Belgian troops from the Ruhr Area in 1925.
In 1920, the head of the Reichswehr Hans von Seeckt clandestinely re-established the General Staff, by expanding the Truppenamt (Troop Office); purportedly a human resources section of the army. In March, 18,000 German troops entered the Rhineland under the guise of attempting to quell possible unrest by communists and in doing so violated the demilitarized zone. In response, French troops advanced further into Germany until the German troops withdrew.
German officials conspired systematically to evade the clauses of the treaty, by failing to meet disarmament deadlines, refusing Allied officials access to military facilities, and maintaining and hiding weapon production. As the treaty did not ban German companies from producing war material outside of Germany, companies moved to the Netherlands, Switzerland, and Sweden. Bofors was bought by Krupp, and in 1921 German troops were sent to Sweden to test weapons. The establishment of diplomatic ties with the Soviet Union, via the Genoa Conference and the Treaty of Rapallo, was also used to circumvent the Treaty of Versailles. Publicly, these diplomatic exchanges were largely concerned with trade and future economic cooperation, but secret military clauses were included that allowed Germany to develop weapons inside the Soviet Union. They also allowed Germany to establish three training areas for aviation, chemical and tank warfare. In 1923, the British newspaper The Times made several claims about the state of the German armed forces: that they had equipment for 800,000 men, that army staff were being transferred to civilian positions in order to obscure their real duties, and that the German police force was being militarized through exploitation of the Krümper system.
The Weimar Government also funded domestic rearmament programs covertly, with the money camouflaged in "X-budgets" worth up to an additional 10 per cent of the disclosed military budget. By 1925, German companies had begun to design tanks and modern artillery. During that year, over half of Chinese arms imports were German, worth 13 million Reichsmarks. In January 1927, following the withdrawal of the Allied disarmament committee, Krupp ramped up production of armor plate and artillery. Production increased so that by 1937, military exports had risen to 82,788,604 Reichsmarks. Production was not the only violation: "volunteers" were rapidly passed through the army to create a pool of trained reserves, and paramilitary organizations were encouraged, alongside the illegally militarized police. Non-commissioned officers (NCOs) were not limited by the treaty, so this loophole was exploited, and the number of NCOs came vastly to exceed the number needed by the Reichswehr.
In December 1931, the Reichswehr finalized a second rearmament plan that called for 480 million Reichsmarks to be spent over the following five years: this program sought to provide Germany with the capability of creating and supplying a defensive force of 21 divisions supported by aircraft, artillery, and tanks. This coincided with a 1 billion Reichsmark programme that planned for additional industrial infrastructure that would be able to permanently maintain this force. As these programs did not require an expansion of the military, they were nominally legal. On 7 November 1932, the Reich Minister of Defense Kurt von Schleicher authorized the illegal Umbau Plan for a standing army of 21 divisions based on 147,000 professional soldiers and a large militia. Later in the year, at the World Disarmament Conference, Germany withdrew in order to force France and Britain to accept German equality of status. London attempted to get Germany to return with the promise of all nations maintaining an equality in armaments and security. The British later proposed and agreed to an increase in the Reichswehr to 200,000 men and to Germany having an air force half the size of the French one. A reduction of the French Army was also negotiated.
In October 1933, following the rise of Adolf Hitler and the founding of the Nazi regime, Germany withdrew from the League of Nations and the World Disarmament Conference. In March 1935, Germany reintroduced conscription, followed by an open rearmament programme and the official unveiling of the Luftwaffe (air force), and signed the Anglo-German Naval Agreement that allowed a surface fleet 35 per cent of the size of the Royal Navy. The resulting rearmament programmes were allotted 35 billion Reichsmarks over an eight-year period.
On 7 March 1936, German troops entered and remilitarized the Rhineland. On 12 March 1938, following German pressure that brought about the collapse of the Austrian Government, German troops crossed into Austria, and the following day Hitler announced the Anschluss: the annexation of Austria by Germany. The following year, on 23 March 1939, Germany annexed Memel from Lithuania.
Historians are split on the impact of the treaty. Some saw it as a good solution in a difficult time; others saw it as a disastrous measure that would provoke the Germans to seek revenge. The actual impact of the treaty is also disputed.
In his book The Economic Consequences of the Peace, John Maynard Keynes referred to the Treaty of Versailles as a "Carthaginian peace", a misguided attempt to destroy Germany on behalf of French revanchism, rather than to follow the fairer principles for a lasting peace set out in President Woodrow Wilson's Fourteen Points, which Germany had accepted at the armistice. He stated: "I believe that the campaign for securing out of Germany the general costs of the war was one of the most serious acts of political unwisdom for which our statesmen have ever been responsible." Keynes had been the principal representative of the British Treasury at the Paris Peace Conference, and in his passionate book he deployed arguments that he and others (including some US officials) had used at Paris. He believed the sums being asked of Germany in reparations were many times more than it was possible for Germany to pay, and that these would produce drastic instability.
French economist Étienne Mantoux disputed that analysis. During the 1940s, Mantoux wrote a posthumously published book titled The Carthaginian Peace, or the Economic Consequences of Mr. Keynes in an attempt to rebut Keynes' claims. More recently economists have argued that the restriction of Germany to a small army saved it so much money it could afford the reparations payments.
It has been argued – for instance by historian Gerhard Weinberg in his book A World at Arms – that the treaty was in fact quite advantageous to Germany. The Bismarckian Reich was maintained as a political unit instead of being broken up, and Germany largely escaped post-war military occupation (in contrast to the situation following World War II). In a 1995 essay, Weinberg noted that with the disappearance of Austria-Hungary and with Russia withdrawn from Europe, Germany was now the dominant power in Eastern Europe.
The British military historian Correlli Barnett claimed that the Treaty of Versailles was "extremely lenient in comparison with the peace terms that Germany herself, when she was expecting to win the war, had had in mind to impose on the Allies". Furthermore, he claimed, it was "hardly a slap on the wrist" when contrasted with the Treaty of Brest-Litovsk that Germany had imposed on a defeated Russian SFSR in March 1918, which had taken away a third of Russia's population (albeit mostly of non-Russian ethnicity), one-half of Russia's industrial undertakings and nine-tenths of Russia's coal mines, coupled with an indemnity of six billion marks. Eventually, even under the "cruel" terms of the Treaty of Versailles, Germany's economy had been restored to its pre-war status.
Barnett also claims that, in strategic terms, Germany was in fact in a superior position following the Treaty than she had been in 1914. Germany's eastern frontiers faced Russia and Austria, who had both in the past balanced German power. Barnett asserts that its post-war eastern borders were safer, because the former Austrian Empire fractured after the war into smaller, weaker states, Russia was wracked by revolution and civil war, and the newly restored Poland was no match for even a defeated Germany. In the West, Germany was balanced only by France and Belgium, both of which were smaller in population and less economically vibrant than Germany. Barnett concludes by saying that instead of weakening Germany, the treaty "much enhanced" German power. Britain and France should have (according to Barnett) "divided and permanently weakened" Germany by undoing Bismarck's work and partitioning Germany into smaller, weaker states so it could never have disrupted the peace of Europe again. By failing to do this and therefore not solving the problem of German power and restoring the equilibrium of Europe, Britain "had failed in her main purpose in taking part in the Great War".
The British historian of modern Germany, Richard J. Evans, wrote that during the war the German right was committed to an annexationist program which aimed at Germany annexing most of Europe and Africa. Consequently, any peace treaty that did not leave Germany as the conqueror would be unacceptable to them. Short of allowing Germany to keep all the conquests of the Treaty of Brest-Litovsk, Evans argued, there was nothing that could have been done to persuade the German right to accept Versailles. Evans further noted that the parties of the Weimar Coalition, namely the Social Democratic Party of Germany (SPD), the social liberal German Democratic Party (DDP) and the Christian democratic Centre Party, were all equally opposed to Versailles, and that it is false to claim, as some historians have, that opposition to Versailles also equalled opposition to the Weimar Republic. Finally, Evans argued that it is untrue that Versailles caused the premature end of the Republic, instead contending that it was the Great Depression of the early 1930s that put an end to German democracy. He also argued that Versailles was not the "main cause" of National Socialism and that the German economy was "only marginally influenced by the impact of reparations".
Ewa Thompson points out that the treaty allowed numerous nations in Central and Eastern Europe to liberate themselves from oppressive German rule, a fact that is often neglected by Western historiography, more interested in understanding the German point of view. In nations that found themselves free as the result of the treaty — such as Poles or Czechs — it is seen as a symbol of recognition of wrongs committed against small nations by their much larger aggressive neighbours.
Resentment caused by the treaty sowed fertile psychological ground for the eventual rise of the Nazi Party, but the German-born Australian historian Jürgen Tampke argued that it was "a perfidious distortion of history" to claim that the terms prevented the growth of democracy in Germany and aided the growth of the Nazi Party; he held that its terms were not as punitive as is often assumed and that German hyperinflation in the 1920s was partly a deliberate policy to minimise the cost of reparations. As an example of the arguments against the Versaillerdiktat, he quotes Elizabeth Wiskemann, who heard two officers' widows in Wiesbaden complaining that "with their stocks of linen depleted they had to have their linen washed once a fortnight (every two weeks) instead of once a month!"
The German historian Detlev Peukert wrote that Versailles was far from the impossible peace that most Germans claimed it was during the interwar period, and that, though not without flaws, it was actually quite reasonable to Germany. Rather, Peukert argued that it was widely believed in Germany that Versailles was a totally unreasonable treaty, and that it was this "perception" rather than the "reality" of the Versailles treaty that mattered. Peukert noted that because of the "millenarian hopes" created in Germany during World War I, when for a time it appeared that Germany was on the verge of conquering all of Europe, any peace treaty the Allies imposed on the defeated German Reich was bound to create a nationalist backlash, and there was nothing the Allies could have done to avoid that backlash. That said, Peukert commented that the policy of rapprochement with the Western powers that Gustav Stresemann carried out between 1923 and 1929 was a constructive policy that might have allowed Germany to play a more positive role in Europe, and that it was not true that German democracy was doomed to die in 1919 because of Versailles. Finally, Peukert argued that it was the Great Depression, and the simultaneous turn to a nationalist policy of autarky within Germany, that finished off the Weimar Republic, not the Treaty of Versailles.
French historian Raymond Cartier states that millions of ethnic Germans in the Sudetenland and in Posen-West Prussia were placed under foreign rule in a hostile environment, where harassment and violations of rights by the authorities were documented. Cartier asserts that, of the 1,058,000 Germans in Posen-West Prussia in 1921, 758,867 fled their homelands within five years because of Polish harassment. These sharpening ethnic conflicts led to public demands to reattach the annexed territory in 1938 and became a pretext for Hitler's annexations of Czechoslovakia and parts of Poland.
According to David Stevenson, since the opening of French archives, most commentators have remarked on French restraint and reasonableness at the conference, though Stevenson notes that "[t]he jury is still out", and that "there have been signs that the pendulum of judgement is swinging back the other way."
The Treaty of Versailles resulted in the creation of several thousand miles of new boundaries, with maps playing a central role in the negotiations at Paris. The plebiscites initiated due to the treaty have drawn much comment. Historian Robert Peckham wrote that the issue of Schleswig "was premised on a gross simplification of the region's history. ... Versailles ignored any possibility of there being a third way: the kind of compact represented by the Swiss Federation; a bilingual or even trilingual Schleswig-Holsteinian state" or other options such as "a Schleswigian state in a loose confederation with Denmark or Germany, or an autonomous region under the protection of the League of Nations." In regards to the East Prussia plebiscite, historian Richard Blanke wrote that "no other contested ethnic group has ever, under un-coerced conditions, issued so one-sided a statement of its national preference". Richard Debo wrote "both Berlin and Warsaw believed the Soviet invasion of Poland had influenced the East Prussian plebiscites. Poland appeared so close to collapse that even Polish voters had cast their ballots for Germany".
In regards to the Silesian plebiscite, Blanke observed "given that the electorate was at least 60% Polish-speaking, this means that about one 'Pole' in three voted for Germany" and "most Polish observers and historians" have concluded that the outcome of the plebiscite was due to "unfair German advantages of incumbency and socio-economic position". Blanke alleged "coercion of various kinds even in the face of an allied occupation regime" occurred, and that Germany granted votes to those "who had been born in Upper Silesia but no longer resided there". Blanke concluded that despite these protests "there is plenty of other evidence, including Reichstag election results both before and after 1921 and the large-scale emigration of Polish-speaking Upper Silesians to Germany after 1945, that their identification with Germany in 1921 was neither exceptional nor temporary" and "here was a large population of Germans and Poles—not coincidentally, of the same Catholic religion—that not only shared the same living space but also came in many cases to see themselves as members of the same national community". Prince Eustachy Sapieha, the Polish Minister of Foreign Affairs, alleged that Soviet Russia "appeared to be intentionally delaying negotiations" to end the Polish-Soviet War "with the object of influencing the Upper Silesian plebiscite". Once the region was partitioned, both "Germany and Poland attempted to 'cleanse' their shares of Upper Silesia" via oppression resulting in Germans migrating to Germany and Poles migrating to Poland. Despite the oppression and migration, Opole Silesia "remained ethnically mixed."
Frank Russell wrote that, in regards to the Saar plebiscite, the inhabitants "were not terrorized at the polls" and the "totalitarian [Nazi] German regime was not distasteful to most of the Saar inhabitants and that they preferred it even to an efficient, economical, and benevolent international rule." When the outcome of the vote became known, 4,100 (including 800 refugees who had previously fled Germany) residents fled over the border into France.
Military terms and violations
During the formulation of the treaty, the British wanted Germany to abolish conscription but be allowed to maintain a volunteer Army. The French wanted Germany to maintain a conscript army of up to 200,000 men in order to justify their own maintenance of a similar force. Thus the treaty's allowance of 100,000 volunteers was a compromise between the British and French positions. Germany, on the other hand, saw the terms as leaving them defenseless against any potential enemy. Bernadotte Everly Schmitt wrote that "there is no reason to believe that the Allied governments were insincere when they stated at the beginning of Part V of the Treaty ... that in order to facilitate a general reduction of the armament of all nations, Germany was to be required to disarm first." A lack of American ratification of the treaty or joining the League of Nations left France unwilling to disarm, which resulted in a German desire to rearm. Schmitt argued "had the four Allies remained united, they could have forced Germany really to disarm, and the German will and capacity to resist other provisions of the treaty would have correspondingly diminished."
Max Hantke and Mark Spoerer wrote that "military and economic historians [have] found that the German military only insignificantly exceeded the limits" of the treaty before 1933. Adam Tooze concurred, writing "To put this in perspective, annual military spending by the Weimar Republic was counted not in the billions but in the hundreds of millions of Reichsmarks"; for example, the Weimar Republic's 1931 program of 480 million Reichsmarks over five years compares with the Nazi Government's 1933 plan to spend 4.4 billion Reichsmarks per year. P. M. H. Bell argued that the British Government was aware of later Weimar rearming and lent public respectability to the German efforts by not opposing them, an opinion shared by Churchill. Norman Davies wrote that "a curious oversight" of the military restrictions was that they "did not include rockets in its list of prohibited weapons", which gave Wernher von Braun an area in which to research, eventually resulting in "his break [that] came in 1943" and leading to the development of the V-2 rocket.
Rise of the Nazis
The treaty created much resentment in Germany, which was exploited by Adolf Hitler in his rise to power at the helm of Nazi Germany. Central to this was belief in the stab-in-the-back myth, which held that the German army had not lost the war but had been betrayed by the Weimar Republic, which negotiated an unnecessary surrender. The Great Depression exacerbated the issue and led to a collapse of the German economy. Though the treaty may not have caused the crash, it was a convenient scapegoat. Germans viewed the treaty as a humiliation and eagerly listened to Hitler's oratory, which blamed the treaty for Germany's ills. Hitler promised to reverse the depredations of the Allied powers and recover Germany's lost territory and pride, which has led to the treaty being cited as a cause of World War II.
- Aftermath of World War I
- Decree on Peace
- Free State Bottleneck
- International Opium Convention, incorporated into the Treaty of Versailles
- Little Treaty of Versailles
- Minority Treaties
- Neutrality Acts of 1930s
- Treaty of Rapallo (1920)
- Treaty of Saint-Germain-en-Laye (1919) with Austria; Treaty of Neuilly-sur-Seine with Bulgaria; Treaty of Trianon with Hungary; Treaty of Sèvres with the Ottoman Empire (Davis 2010:49).
- See the Reparations section.
- Similar wording was used in the treaties signed by the other defeated nations of the Central Powers: Article 177 of the Treaty of Saint-Germain-en-Laye with Austria; Article 161 of the Treaty of Trianon with Hungary; Article 121 of the Treaty of Neuilly-sur-Seine with Bulgaria; and Article 231 of the Treaty of Sèvres with Turkey.
- see The Treaty of Saint-Germain-en-Laye, The Treaty of Trianon, The Treaty of Neuilly, and The Treaty of Sèvres.
- President Woodrow Wilson speaking on the League of Nations to a luncheon audience in Portland OR. 66th Cong., 1st sess. Senate Documents: Addresses of President Wilson (May–November 1919), vol. 11, no. 120, p. 206.
- "wir kennen die Wucht des Hasses, die uns hier entgegentritt ... Es wird von uns verlangt, daß wir uns als die allein Schuldigen am Kriege bekennen; ein solches Bekenntnis wäre in meinem Munde eine Lüge." (Weimarer Republik n.d.)
- "The whole purpose of the league", began Makino, was "to regulate the conduct of nations and peoples toward one another, according to a higher moral standard than has reigned in the past, and to administer justice throughout the world." In this regard, the wrongs of racial discrimination have been, and continue to be, the source of "profound resentment on the part of large numbers of the human race", directly affecting their rights and their pride. Many nations fought in the recent war to create a new international order, he said, and the hopes of their nationals now have risen to new heights with victory. Given the objectives of the league, the wrongs of the past, and the aspirations of the future, stated Makino, the leaders of the world gathered in Paris should openly declare their support for at least "the principle of equality of nations and just treatment of their nationals" (Lauren 1978, p. 270).
- On 8 March 1936, 22,700 armed policemen were incorporated into the army in 21 infantry battalions (Bell 1997, p. 234).
- Gustav Krupp later claimed he had duped the Allies throughout the 1920s and prepared the German military for the future (Shuster 2006, p. 116).
- "The Treaty includes no provisions for the economic rehabilitation of Europe—nothing to make the defeated Central Empires into good neighbours, nothing to stabilize the new States of Europe, nothing to reclaim Russia; nor does it promote in any way a compact of economic solidarity amongst the Allies themselves; no arrangement was reached at Paris for restoring the disordered finances of France and Italy, or to adjust the systems of the Old World and the New. The Council of Four paid no attention to these issues, being preoccupied with others—Clemenceau to crush the economic life of his enemy, Lloyd George to do a deal and bring home something which would pass muster for a week, the President to do nothing that was not just and right. It is an extraordinary fact that the fundamental economic problems of a Europe starving and disintegrating before their eyes, was the one question in which it was impossible to arouse the interest of the Four. Reparation was their main excursion into the economic field, and they settled it as a problem of theology, of polities, of electoral chicane, from every point of view except that of the economic future of the States whose destiny they were handling." (Keynes 1919)
- Raymond Cartier, La Seconde Guerre mondiale, Paris, Larousse Paris Match, 1965, quoted in Groppe 2004.
- Slavicek 2010, p. 114.
- Slavicek 2010, p. 107.
- Boyer et al. 2009, p. 153.
- Tucker & Roberts 2005, pp. xxv, 9.
- Tucker & Roberts 2005, p. 1078.
- Wiest 2012, pp. 126, 168, 200.
- BBC History Magazine 2017.
- "Leon Trotsky: Soviet government documents (1918)". www.marxists.org.
- Tucker & Roberts 2005, p. 429.
- Cooper 2011, pp. 422–424.
- Simkins, Jukes & Hickey 2003, p. 265.
- Tucker & Roberts 2005, p. 225.
- Truitt 2010, p. 114.
- Beller 2007, pp. 182–95.
- Bessel 1993, pp. 47–48.
- Hardach 1987, pp. 183–84.
- Simkins 2002, p. 71.
- Tucker & Roberts 2005, p. 638.
- Schmitt 1960, p. 101.
- Schmitt 1960, p. 102.
- Weinberg 1994, p. 8.
- Boyer et al. 2009, p. 526.
- Edmonds 1943, p. 1.
- Martel 1999, p. 18.
- Grebler 1940, p. 78.
- Mowat 1968, p. 213.
- Fuller 1993.
- Marks 2013, p. 650.
- March 1919 Brussels agreement.
- Paul 1985, p. 145.
- Marks 2013, p. 651.
- Proceedings of the National Assembly 1919, pp. 631–635.
- Deutsche Allgemeine Zeitung 1919.
- Roerkohl 1991, p. 348.
- Rudloff 1998, p. 184.
- Rubner 1919, p. 15.
- Common Sense (London) 5 July 1919.
- Bane 1942, p. 791.
- Slavicek 2010, p. 37.
- Lentin 1985, p. 84.
- Weinberg 1994, p. 12.
- Slavicek 2010, pp. 40–1.
- Venzon 1999, p. 439.
- Lentin 2012, p. 22.
- Slavicek 2010, p. 43.
- Lentin 2012, p. 21.
- Layne 1996, p. 187.
- Keynes 1920, p. 34.
- Keylor 1998, p. 43.
- Keylor 1998, p. 34.
- Lentin 1992, p. 28.
- Lentin 1992, pp. 28–32.
- Slavicek 2010, pp. 43–44.
- Trachtenberg 1982, p. 499.
- Thomson 1970, p. 605.
- Haigh 1990, p. 295.
- Slavicek 2010, p. 44.
- Brezina 2006, p. 21.
- Yearwood 2009, p. 127.
- Wilson 1917.
- Trachtenberg 1982, p. 490.
- Cooper 2011, pp. 454–505.
- Slavicek 2010, p. 48.
- Slavicek 2010, pp. 46–7.
- Slavicek 2010, p. 65.
- da Atti Parlamentari, Camera dei Deputati, Discussioni
- Slavicek 2010, p. 73.
- Reinach 1920, p. 193.
- Peckham 2003, p. 107.
- Frucht 2004, p. 24.
- Martin 2007, p. lii.
- Boemeke, Feldman & Glaser 1998, p. 325.
- Ingrao & Szabo 2007, p. 261.
- Brezina 2006, p. 34.
- Tucker & Roberts 2005, p. 437.
- Benians, Butler & Carrington 1959, p. 658.
- Tucker & Roberts 2005, p. 1224.
- Roberts 1986, p. 496.
- Shuster 2006, p. 74.
- Martel 2010, p. 156.
- Lovin 1997, pp. 9, 96.
- Stevenson 1998, p. 10.
- Lentin 2012, p. 26.
- Bell 1997, p. 26.
- Schmitt 1960, p. 104.
- Bell 1997, p. 22.
- Scott 1944, pp. 34–49.
- Slavicek 2010, p. 75.
- Sontag 1971, p. 22.
- Tucker & Roberts 2005, p. 426.
- Tucker 1999, p. 191.
- Ripsman 2004, p. 110.
- Henig 1995, p. 52.
- de Meneses n.d.
- Bailey 1945.
- Widenor 1980.
- Stone 1973.
- Cooper 2011, ch 22–23.
- Duff 1968, pp. 582–598.
- Wimer & Wimer 1967, pp. 13–24.
- The New York Times 1921.
- Schiff 1996.
- Dreyer 2015, p. 60.
- EB: May Fourth Movement.
- Arnander & Wood 2016.
- Château de Versailles 2016.
- Probst 2019.
- W-R: "shrivelled hand" speech.
- Pinson 1964, pp. 397 ff.
- Lauren 1978, pp. 257–278.
- Kawamura 1997, pp. 507–511.
- Marks 1978, pp. 236–237.
- Ferguson 1998, p. 414.
- Marks 1978, pp. 223–234.
- Kramer 2008, p. 10.
- Martin 2007, p. xiii.
- Martin 2007, p. xii.
- Ther & Siljak 2001, p. 123.
- Bartov & Weitz 2013, p. 490.
- Bullivant, Giles & Pape 1999, pp. 43–44.
- Albrecht-Carrie 1940, p. 9.
- Steiner 2007, p. 75.
- Lemkin, Schabas & Power 2008, p. 198.
- Russell 1951, pp. 103–106.
- Pawley 2008, p. 84.
- Liverman 1996, p. 92.
- Pawley 2008, p. 2.
- Collar 2012, p. 78.
- Pawley 2008, p. 117.
- Mommsen & Foster 1988, p. 273.
- Pawley 2008, pp. 181–182.
- Jacobson 1972, p. 135.
- Williamson 2017, pp. 19, 245.
- Edmonds 1943, p. 147.
- Williamson 2017, pp. 246–247.
- Pawley 2008, p. 94.
- McDougall 1978, p. 155.
- Appiah & Gates 2005, p. 781.
- Baker 2004, p. 21.
- Mommsen & Foster 1988, p. 129.
- Pawley 2008, p. 87.
- Nelson 1975, pp. 251–252.
- Kiger n.d.
- EB: Ruhr occupation.
- Zaloga 2002, p. 13.
- Geyer 1984.
- Shuster 2006, pp. 112, 114.
- Shuster 2006, p. 116.
- Bell 1997, p. 133.
- Tucker & Roberts 2005, p. 967.
- Shuster 2006, p. 120.
- Hantke & Spoerer 2010, p. 852.
- Kirby 1984, p. 25.
- Kirby 1984, p. 220.
- Mowat 1968, p. 235.
- Tooze 2007, p. 26.
- Bell 1997, p. 229.
- Bell 1997, p. 78.
- Corrigan 2011, p. 68.
- Fischer 1995, p. 408.
- Tooze 2007, p. 53.
- Bell 1997, pp. 233–234.
- Bell 1997, p. 254.
- Bell 1997, p. 281.
- TNA: The Great War 1914 to 1918 n.d.
- Keynes 1920.
- Markwell 2006.
- Hantke & Spoerer 2010, pp. 849–864.
- Reynolds 1994.
- Weinberg 2008, p. 16.
- Barnett 2002, p. 392.
- Barnett 1986, p. 316.
- Barnett 1986, p. 318.
- Barnett 1986, p. 319.
- Evans 1989, p. 107.
- Thompson n.d.
- BBC Bitesize.
- Tampke 2017, pp. vii, xii.
- Peukert 1992, p. 278.
- Stevenson 1998, p. 11.
- Kent 2019, pp. 275–279.
- Altic 2016, pp. 179–198.
- Ingrao & Szabo 2007, p. 262.
- Debo 1992, p. 335.
- Schmitt 1960, pp. 104–105.
- Schmitt 1960, p. 108.
- Tooze 2007, pp. 26, 53–54.
- Davies 2007, p. 416.
- Wilde 2020.
- Signatures and Protocol
- President Wilson's "Fourteen Points" Speech
- Articles 227–230
- Article 80
- Part XII
- Article 246
- Articles 33 and 34
- Articles 45 and 49
- Section V preamble and Article 51
- Articles 81 and 83
- Article 88 and annex
- Article 94
- Article 99
- Articles 100–104
- Article 22 and Article 119
- Article 156
- Part V preamble
- Articles 159, 160, 163 and Table 1
- Articles 173, 174, 175 and 176
- Articles 161, 162, and 176
- Articles 42, 43, and 180
- Article 115
- Articles 165, 170, 171, 172, 198 and tables No. II and III.
- Articles 181 and 190
- Articles 185 and 187
- Articles 198, 201, and 202
- Article 231
- Treaty of Saint-Germain-en-Laye, Article 177
- Treaty of Trianon, Article 161
- Treaty of Neuilly-sur-Seine, Article 121
- Treaty of Sèvres, Article 231
- Articles 232–235
- Article 428
- Article 429
- Article 430
- Part I
- Constitution of the International Labour Office Part XIII preamble and Article 388
- Article 295
- Albrecht-Carrie, Rene (1940). "Versailles Twenty Years After". Political Science Quarterly. 55 (1): 1–24. doi:10.2307/2143772. JSTOR 2143772.
- Altic, Mirela (2016). Liebenberg, Elri; Demhardt, Imre & Vervust, Soetkin (eds.). The Peace Treaty of Versailles: The Role of Maps in Reshaping the Balkans in the Aftermath of WWI. Cham: Springer. pp. 179–198.
- Appiah, Anthony & Gates, Henry Louis, eds. (2005). Africana: The Encyclopedia of the African and African American Experience (2nd ed.). Oxford University Press. p. 781. ISBN 978-019517055-9.
- Arnander, Christopher & Wood, Frances (2016). "Introduction". The Betrayed Ally, China in the Great War. Pen and Sword. ISBN 978-147387501-2.
- Bailey, Thomas A. (1945). "Woodrow Wilson and the Great Betrayal". New York: The Macmillan Company – via Internet Archive.
- Baker, Anni (2004). American Soldiers Overseas: The Global Military Presence. Perspectives on the Twentieth Century. Praeger; First Edition. ISBN 978-027597354-4 – via Internet Archive.
- Bane, S.L. (1942). The Blockade of Germany after the Armistice. Stanford University Press. p. 791.
- Barnett, Correlli (1986). The Collapse of British Power. Prometheus Books. ISBN 978-039103-439-6.
- Barnett, Correlli (2002). The Collapse of British Power. "Pride and Fall" sequence. London: Pan. p. 392. ISBN 978-033049181-5.
- Bartov, Omer & Weitz, Eric D., eds. (2013). Shatterzone of Empires: Coexistence and Violence in the German, Habsburg, Russian and Ottoman Borderlands. Indiana University Press. ISBN 978-025300635-6.
- Bell, P.M.H. (1997) [First published 1986]. The Origins of the Second World War in Europe (2nd ed.). Pearson. ISBN 978-058230-470-3 – via Internet Archive.
- Beller, Steven (2007). A Concise History of Austria. Cambridge Concise Histories. Cambridge University Press. ISBN 978-052147-886-1 – via Internet Archive.
- Benians, Ernest Alfred; Butler, James & Carrington, C. E., eds. (1959). Cambridge History of the British Empire Volume 3, The Empire Commonwealth 1870-1919 (volume 3). Cambridge University Press. ISBN 978-052104-512-4.
- Bessel, Richard (1993). Germany After the First World War. Oxford University Press, USA. ISBN 978-019821-938-5.
- Boemeke, Manfred F.; Feldman, Gerald D. & Glaser, Elisabeth, eds. (1998). Versailles: A Reassessment after 75 Years. Publications of the German Historical Institute. Cambridge University Press. ISBN 978-052162-132-8.
- Boyer, Paul S.; Clark, Clifford E.; Hawley, Sandra; Kett, Joseph F & Rieser, Andrew (2009). The Enduring Vision: A History of the American People, Volume 2: From 1865. Cengage Learning. ISBN 978-054722-278-3.
- Brezina, Corona (2006). The Treaty of Versailles, 1919: A Primary Source Examination of the Treaty That Ended World War I. Primary Sources of American Treaties. Rosen Central. ISBN 978-140420-442-3 – via Internet Archive.
- Bullivant, Keith; Giles, Geoffrey & Pape, Walter, eds. (1999). Germany and Eastern Europe: Cultural Identities and Cultural Differences. Yearbook of European Studies. Rodopi Bv Editions. ISBN 978-904200688-1.
- "Clemenceau an Deutschland: "Die Stunde der Abrechnung ist da."" [Clemenceau to Germany: "The day of reckoning is here"]. Die Weimarer Republik: Deutschlands erste Demokratie (in German). Weimarer Republik e.V. n.d. Retrieved 21 January 2021.
- Collar, Peter (2012). The Propaganda War in the Rhineland: Weimar Germany, Race and Occupation after World War I. London: I.B. Tauris. p. 78. ISBN 978-184885946-3.
- Cooper, John Milton (2011). Woodrow Wilson: A Biography. Vintage Books. pp. 422–424. ISBN 978-030727790-9.
- Corrigan, Gordon (2011). The Second World War: A Military History. Thomas Dunne Books. ISBN 978-031-257709-4 – via Internet Archive.
- Davies, Norman (2007). Europe at War 1939-1945: No Simple Victory. Pan Books. ISBN 978-033035-212-3.
- Davis, Robert T., ed. (2010). U.S. Foreign Policy and National Security: Chronology and Index for the 20th Century. Volume 1. Santa Barbara, California: Praeger Security International. p. 49. ISBN 978-0-313-38385-4.
- Debo, Richard K. (1992). Survival and Consolidation: The Foreign Policy of Soviet Russia, 1918-1921. Mcgill Queens University Press, First Edition. ISBN 978-077350828-6.
- "Die Finanzierung des Lebensmittels" [Paying for food imports]. Deutsche Allgemeine Zeitung (in German). 2 February 1919.
- Dreyer, June Teufel (2015). China's Political System. Routledge. p. 60. ISBN 978-131734964-8.
- Duff, John B. (1968), "The Versailles Treaty and the Irish-Americans", The Journal of American History, 55 (3): 582–598, doi:10.2307/1891015, JSTOR 1891015
- Edmonds, J.E. (1987) [First published 1943]. The Occupation of the Rhineland 1918–29. HMSO. ISBN 978-0-11-290454-0.
- Evans, Richard J. (1989). In Hitler's Shadow: West German Historians and the Attempt to Escape from the Nazi Past (First ed.). Pantheon Books. ISBN 978-067972-348-6 – via Internet Archive.
- Ferguson, Niall (1998). The Pity of War: Explaining World War I. Allen Lane. ISBN 978-0-713-99246-5.
- Fischer, Klaus P. (1995). Nazi Germany: A New History. Constable. p. 408. ISBN 978-009474910-8.
- Folly, Martin & Palmer, Niall (2010). Historical Dictionary of U.S. Diplomacy from World War I through World War II. Historical Dictionaries of Diplomacy and Foreign Relations. Scarecrow Press. ISBN 978-081085-606-6.
- Frucht, Richard, ed. (2004). Eastern Europe: An Introduction to the People, Lands, and Culture. ABC-CLIO. ISBN 978-157607-800-6 – via Internet Archive.
- Fuller, J.F.C. (1993). The Second World War, 1939-45 A Strategical And Tactical History. Da Capo Press. ISBN 978-030680506-6.
- Geyer, Michael (1984). Deutsche Rüstungspolitik 1860 bis 1980 (in German). Frankfurt: Suhrkamp. ISBN 978-351811246-5.
- "The Great War 1914 to 1918". The National Archives. Retrieved 7 April 2020.
- Grebler, Leo (1940). The Cost of the World War to Germany and Austria-Hungary. Yale University Press. p. 78.
- Groppe, Pater Lothar (28 August 2004). "Die "Jagd auf Deutsche" im Osten: Die Verfolgung begann nicht erst mit dem "Bromberger Blutsonntag" vor 50 Jahren". Preußische Allgemeine Zeitung (in German). Retrieved 22 September 2010.
'Of the 1,058,000 Germans who were still living in Posen and West Prussia in 1921,' Cartier writes, 'by 1926 some 758,867 had emigrated under Polish pressure. After further harassment, the ethnic German population was estimated by the Warsaw Interior Ministry on 15 July 1939 at fewer than 300,000 people.'
- Haigh, Christopher, ed. (1990). The Cambridge Historical Encyclopedia of Great Britain and Ireland. Cambridge University Press. ISBN 978-052139-552-6 – via Internet Archive.
- Hantke, Max & Spoerer, Mark (2010), "The imposed gift of Versailles: the fiscal effects of restricting the size of Germany's armed forces, 1924–9" (PDF), Economic History Review, 63 (4): 849–864, doi:10.1111/j.1468-0289.2009.00512.x, S2CID 91180171 – via MPRA: Munich Personal RePEc Archive
- Hardach, Gerd (1987). The First World War, 1914–1918. Penguin. ISBN 978-014022-679-9.
- "HARDING ENDS WAR; SIGNS PEACE DECREE AT SENATOR'S HOME. Thirty Persons Witness Momentous Act in Frelinghuysen Living Room at Raritan". The New York Times. 3 July 1921.
- Henig, Ruth (1995) [First published 1984]. Versailles and After: 1919–1933. London: Routledge. p. 52. ISBN 978-041512710-3.
- Ingrao, Charles & Szabo, Franz A.J., eds. (2007). The Germans and the East. Purdue University Press. ISBN 978-155753-443-9.
- Jacobson, Jon (1972). Locarno Diplomacy: Germany and the West, 1925–1929. Princeton University Press. p. 135. ISBN 069105190-9.
- Kawamura, Noriko (November 1997). "Wilsonian Idealism and Japanese Claims at the Paris Peace Conference". Pacific Historical Review. 66 (4): 503–526. doi:10.2307/3642235. JSTOR 3642235.
- Kent, Alexander (2019). "A Picture and an Argument: Mapping for Peace with a Cartography of Hope". The Cartographic Journal. 56 (4): 275–279. doi:10.1080/00087041.2019.1694804.
- Keylor, William R. (1998). The Legacy of the Great War: Peacemaking, 1919. Boston and New York: Houghton Mifflin. p. 34. ISBN 0-669-41711-4. Archived from the original on 4 October 2013.
- Keynes, John Maynard (1919). The Economic Consequences of the Peace. Ch VI. – via Internet Archive.
- Keynes, John Maynard (1920). The Economic Consequences of the Peace. Harcourt Brace and Howe.
- Kiger, Patrick (n.d.). "The Treaty of Versailles Punished Defeated Germany with These Provisions". HISTORY.
- Kirby, William C. (1984). German and Republican China. Stanford University Press. ISBN 978-080471-209-5.
- Kramer, Alan (2008). Dynamic of Destruction: Culture and Mass Killing in the First World War. The Making of the Modern World. Penguin. ISBN 978-1-846-14013-6.
- Lauren, Paul Gordon (Summer 1978). "Human Rights in History: Diplomacy and Racial Equality at the Paris Peace Conference". Diplomatic History. 2 (3): 257–278. doi:10.1111/j.1467-7709.1978.tb00435.x. JSTOR 24909920.
- Layne, Christopher (1996). "Kant or Cant: The Myth of the Democratic Peace". In Brown, Michael E.; Lynn-Jones, Sean M. & Miller, Steve E. (eds.). Debating the Democratic Peace. International Security Readers. MIT Press. ISBN 978-026252-213-7.
- "Lebensmittelabkommen in Brüssel" (in German). Das Bundesarchiv.
- Lemkin, Raphael; Schabas, William A. & Power, Samantha (2008). Axis Rule in Occupied Europe: Laws of Occupation, Analysis of Government, Proposals for Redress. Foundations of the Laws of War. The Lawbook Exchange, Lrd 2 edition. ISBN 978-158477-901-8.
- Lentin, Antony (1985) [First published 1984]. Guilt at Versailles: Lloyd George and the Pre-history of Appeasement. Routledge. p. 84. ISBN 978-0-416-41130-0.
- Lentin, Antony (1992), "Trick or Treat? The Anglo-French Alliance, 1919", History Today, vol. 42 no. 12, pp. 28–32, ProQuest 1299048769
- Lentin, Antony (2012), "Germany: a New Carthage?", History Today, vol. 62 no. 1, pp. 20–27, archived from the original on 31 January 2015
- Liverman, Peter (1996). Does Conquest Pay?: The Exploitation of Occupied Industrial Societies. Princeton University Press. p. 92. ISBN 069102986-5.
- Lovin, Clifford R. (1997). A School for Diplomats: the Paris Peace Conference of 1919. University Press of America. ISBN 978-076180-755-1.
- Marks, Sally (1978), "The Myths of Reparations", Central European History, 11 (3): 231–255, doi:10.1017/S0008938900018707, JSTOR 4545835
- Marks, Sally (2013). "Mistakes and Myths: The Allies, Germany, and the Versailles Treaty, 1918–1921". Journal of Modern History. 85 (3): 632–659. doi:10.1086/670825. JSTOR 10.1086/670825. S2CID 154166326.
- Markwell, Donald (2006). John Maynard Keynes and International Relations: Economic Paths to War and Peace. Oxford University Press. ISBN 978-019829236-4.
- Martel, Gordon, ed. (1999). Origins of the Second World War Reconsidered (2nd ed.). London: Routledge. ISBN 978-0-415-16325-5 – via Internet Archive.
- Martel, Gordon, ed. (2010). A Companion to Europe 1900–1945. Hoboken NJ: Wiley-Blackwell. ISBN 978-1-444-33840-9.
- Martin, Lawrence (2007) [First published 1924]. The Treaties of Peace, 1919-1923. The Lawbook Exchange. ISBN 978-158477-708-3.
- "May Fourth Movement". Encyclopaedia Britannica.
- McDougall, Walter A (1978). France's Rhineland Policy, 1914–1924: The Last Bid for a Balance of Power in Europe. Princeton Legacy Library. Princeton University Press. p. 155. ISBN 978-069105268-7.
- McDougall, Walter A. (1979), "Political Economy versus National Sovereignty: French Structures for German Economic Integration after Versailles", The Journal of Modern History, 51 (1): 4–23, doi:10.1086/241846, JSTOR 1877866, S2CID 144670397
- de Meneses, Filipe Ribeiro (n.d.). "Post-war Settlement (Portugal)". In Rollo, Maria Fernanda & Pires, Ana Paula (eds.). 1914-1918 Online. International Encyclopedia of the First World War. doi:10.15463/ie1418.10521.
- Mommsen, Hans & Foster, Elborg (1988). The Rise and Fall of Weimar Democracy. University of North Crolina Press. ISBN 978-080784721-3.
- Mowat, C. L., ed. (1968). Volume XII: The Shifting Balance of World Forces 1898-1945. The New Cambridge Modern History. Cambridge University Press. ISBN 978-052104-551-3.
- Nelson, Keith L. (1975). Victors divided: America and the Allies in Germany, 1918-1923. University of California Press.
- Paul, C. (1985). The politics of hunger: the allied blockade of Germany, 1915–1919. Athens, Ohio: Ohio University Press. p. 145. ISBN 978-0-8214-0831-5.
- Pawley, Margaret (2008). The Watch on the Rhine: The Military Occupation of the Rhineland. I.B. Tauris. ISBN 978-184511457-2.
- Peckham, Robert Shannan, ed. (2003). Rethinking Heritage: Cultures and Politics in Europe. I.B.Tauris. ISBN 978-186064-796-3.
- Peukert, Detlev (1992). The Weimar Republic: The Crisis of Classical Modernity. Translated by Richard Deveson. Hill & Wang. p. 278. ISBN 978-080909674-9.
- Pinson, Koppel S. (1964). Modern Germany: Its History and Civilization (13th printing ed.). New York: Macmillan. pp. 397 ff. ISBN 0-88133-434-0.
- Probst, Robert (28 June 2019). "'Wir kennen die Wucht des Hasses'" [We can feel the strength of hatred]. Süddeutsche Zeitung (in German). Retrieved 20 January 2021.
- Reinach, Joseph (1920). "Le rôle de l'impératrice Eugénie en septembre et octobre 1870". Revue d'Histoire du XIXe siècle – 1848 (in French). Société d'Histoire de la Révolution de 1848: 193.
- Reynolds, David (20 February 1994). "Review of "A World at Arms: A Global History of World War II"". The New York Times.
- Ripsman, Norrin M. (2004). Peacemaking by Democracies: The Effect of State Autonomy on the Post-World War Settlements. Pennsylvania State University Press. ISBN 978-027102-398-4.
- Roberts, A.D., ed. (1986). The Cambridge History of Africa: Volume 7 c. 1905-c. 1940. Cambridge University Press. ISBN 978-052122-505-2.
- Roerkohl, Anne (1991). Hungerblockade und Heimatfront: Die kommunale Lebensmittelversorgung in Westfalen während des Ersten Weltkrieges [The hunger blockade and the home front: communal food supply in Westphalia during World War I] (in German). Stuttgart: Franz Steiner. p. 348.
- Rubner, Max (10 April 1919). "Von der Blockde und Aehlichen". Deutsche Medizinische Wochenschrift. Berlin. 45 (15): 15. doi:10.1055/s-0028-1137673.
- Rudloff, Wilfried (1998). Die Wohlfahrtsstadt: Kommunale Ernährungs-, Fürsorge, und Wohnungspolitik am Beispiel Münchens 1910-1933 (in German). Göttingen: Vandenhooeck & Ruprecht. p. 184.
- "Ruhr occupation". Encyclopaedia Britannica.
- Russell, Frank M. (1951). The Saar: Battleground and Pawn (First ed.). Stanford University Press.
- "Scheidemann: "Welche Hand müßte nicht verdorren, die sich und uns in diese Fesseln legt?"" [Scheidemann: "Which hand would not shrivel, that shackled itself and us in such a way?"]. Die Weimarer Republik: Deutschlands erste Demokratie (in German). Weimarer Republik e.V. n.d. Retrieved 4 February 2021.
- Schiff, Judith Ann (1 August 1996). "Bibliographical Introduction to "Diary, Reminiscences and Memories of Colonel Edward M. House"". Yale University Library and Social Science Statistical Laboratory. Archived from the original on 23 December 2009.
- Schmitt, Bernadotte (1960), "The Peace Treaties of 1919-1920", Proceedings of the American Philosophical Society, 104 (1): 101–110, JSTOR 985606
- Scott, F. R. (January 1944). "The End of Dominion Status". The American Journal of International Law. 38 (1): 34–49. doi:10.2307/2192530. JSTOR 2192530.
- Shuster, Richard (2006). German Disarmament After World War I: The Diplomacy of International Arms Inspection 1912–1931. Strategy and History. Routledge. ISBN 978-041535808-8.
- Simkins, Peter (2002). The First World War: Volume 3 The Western Front 1917-1918. Osprey Publishing. ISBN 978-184176-348-4.
- Simkins, Peter; Jukes, Geoffrey & Hickey, Michael (2003). The First World War: The War to End All Wars. Osprey Publishing. ISBN 978-184176-738-3.
- Slavicek, Louise Chipley (2010). The Treaty of Versailles. Milestones in Modern World History. Chelsea House Publications. ISBN 978-160413-277-9.
- Sontag, Richard (1971). A Broken World, 1919-1939. Michigan: Harper and Row – via Internet Archive.
- Steiner, Barry H. (2007). Collective Preventive Diplomacy: A Study in International Conflict Management. Suny Series in Global Politics. State University of New York Press. ISBN 978-079145988-1.
- Stevenson, David (1998). "France at the Paris Peace Conference: Addressing the Dilemmas of Security". French Foreign and Defence Policy, 1918–1940: The Decline and Fall of a Great Power. Routledge Studies in Modern European History. New York: Routledge. ISBN 978-0-415-15039-2.
- Stone, Ralph A. (1973). The Irreconcilables: The Fight Against the League of Nations. W. W. Norton & Co. ISBN 978-039300671-1.
- Tampke, Jürgen (2017). A Perfidious Distortion of History. Melbourne: Scribe. pp. vii, xii. ISBN 978-192532-1-944.
- Ther, Philipp & Siljak, Ana, eds. (2001). Redrawing Nations: Ethnic Cleansing in East-Central Europe, 1944-1948. The Harvard Cold War Studies Book Series. Rowman & Littlefield Publishers. ISBN 978-074251094-4.
- Thompson, Ewa (n.d.). "The Surrogate Hegemon in Polish Postcolonial Discourse" (PDF). Rice University. Retrieved 10 October 2020.
- Thomson, David (1970). Europe Since Napoleon. Penguin Books. p. 605.
- Tooze, Adam (2007) [First published 2006]. The Wages of Destruction: The Making and Breaking of the Nazi Economy. Penguin Books. ISBN 978-014100348-1.
- Trachtenberg, Marc (1982), "Versailles after Sixty Years", Journal of Contemporary History, 17 (3): 487–506, doi:10.1177/002200948201700305, JSTOR 260557, S2CID 154283533
- "The Treaty of Versailles, 1919". Château de Versailles. 22 November 2016. Archived from the original on 6 November 2020. Retrieved 2 March 2021.
- Truitt, Wesley B. (2010). Power and Policy: Lessons for Leaders in Government and Business. Praeger. ISBN 978-031338-240-6.
- Tucker, Spencer C., ed. (1999) [First published 1996]. European Powers in the First World War: An Encyclopedia. Garland Reference Library of the Humanities. Routledge. ISBN 978-081533-351-7.
- Tucker, Spencer C. & Roberts, Priscilla (2005). The Encyclopedia of World War I: A Political, Social, and Military History. ABC=CLIO. ISBN 978-185109-420-2.
- Venzon, Anne Cipriano, ed. (1999). The United States in the First World War: An Encyclopedia. Military History of the United States. Routledge. ISBN 978-081533-353-1.
- Verhandlung der verfassungsgebenden Nationalversammlung: Stenographische Berichte und Drucksachen. Volume 24. German National Assembly. 1919. pp. 631–635.
|volume=has extra text (help)
- Weinberg, Gerhard L. (1994). A World at Arms: A Global History of World War II. Cambridge University Press. ISBN 0-52144-317-2 – via Internet Archive.
- Weinberg, Gerhard L. (2008) [First published 1995]. Germany, Hitler, and World War II: Essays in Modern German and World History. Cambridge University Press. p. 16. ISBN 978-052156626-1.
- "Why the Nazis achieved power". BBC Bitesize.
- "Why was the Zimmermann Telegram important?". BBC History Magazine. 17 January 2017. Retrieved 11 January 2019.
- Widenor, William C. (1980). Henry Cabot Lodge and the Search for an American Foreign Policy. University of California Press. ISBN 0-520-04962-4.
- Wiest, Andrew (2012). The Western Front 1917–1918: From Vimy Ridge to Amiens and the Armistice. pp. 126, 168, 200. ISBN 978-190662613-6.
- Wilde, Robert (29 January 2020). "How the Treaty of Versailles Contributed to Hitler's Rise". ThoughtCo. Retrieved 5 October 2020.
- Williamson, David G (2017). The British in Interwar Germany: The Reluctant Occupiers, 1918–30 (2nd ed.). New York: Bloomsbury Academic. pp. 19, 245. ISBN 978-147259582-9.
- Wilson, Woodrow (22 January 1917). "Peace Without Victory (speech to Senate)". Digital History.
- Wimer, Kurt & Wimer, Sarah (1967). "The Harding Administration, the League of Nations, and the Separate Peace Treaty". The Review of Politics. 29 (1): 13–24. doi:10.1017/S0034670500023706. JSTOR 1405810.
- Yearwood, Peter J. (2009). Guarantee of Peace: The League of Nations in British Policy 1914-1925. Oxford University Press. ISBN 978-019922-673-3.
- Zaloga, Steven (2002). Poland 1939: The Birth of Blitzkrieg. Campaign. Illustrated by Howard Gerrard. Osprey Publishing. ISBN 978-184176408-5.
- Andelman, David A. (2008). A Shattered Peace: Versailles 1919 and the Price We Pay Today. New York/London: J. Wiley. ISBN 978-0-471-78898-0.
- Birdsall, Paul (1941). Versailles twenty years after. Allen & Unwin.
- Cooper, John Milton (2010). Breaking the Heart of the World: Woodrow Wilson and the Fight for the League of Nations. Cambridge University Press. ISBN 978-052114765-1.
- Demarco, Neil (1987). The World This Century. London: Collins Educational. ISBN 0-00-322217-9.
- Graebner, Norman A. & Bennett, Edward M. (2011). The Versailles Treaty and Its Legacy: The Failure of the Wilsonian Vision. New York: Cambridge University Press. ISBN 978-110700821-2.
- Herron, George D. (2015) [First edition published 1921]. The Defeat in the Victory (Reproduction ed.). Boston: Palala Press; originally published by Cecil Palmer. ISBN 978-134346520-6.
- Lloyd George, David (1938). The Truth About the Peace Treaties (2 volumes). London: Victor Gollancz.
- Published in the US as Memoirs of the Peace Conference
- Macmillan, Margaret (2001). Peacemakers. London: John Murray. ISBN 0-7195-5939-1.
- Parker, R.A.C (April 1956). "The First Capitulation: France and the Rhineland Crisis of 1936". World Politics. 8 (3): 355–373. doi:10.2307/2008855. JSTOR 2008855.
- Sharp, Alan (2011). Consequences of Peace: The Versailles Settlement: Aftermath and Legacy 1919–2010. Haus Publishing. ISBN 978-190579174-3.
- Sharp, Alan (2018). Versailles 1919: A Centennial Perspective. Haus Publishing. ISBN 978-191220809-8.
- Sharp, Alan (2018). The Versailles Settlement: Peacemaking After the First World War, 1919–1923 (Third ed.). Palgrave. ISBN 978-113761139-0.
- Webster, Andrew (2018). "Treaty of Versailles (1919)". In Martel, Gordon (ed.). The Encyclopedia of Diplomacy. 4. Wiley-Blackwell. pp. 1–15. ISBN 978-111888791-2.
- Wheeler-Bennett, Sir John (1972). The Wreck of Reparations, being the political background of the Lausanne Agreement, 1932. New York: H. Fertig.
|Wikimedia Commons has media related to Treaty of Versailles.|
|Wikisource has original text related to this article:|
|Wikisource has original text related to this article:|
- Documents relating to the Treaty from the Parliamentary Collections
- Treaty of Versailles Resource Guide from the Library of Congress
- Photographs of the document
- The consequences of the Treaty of Versailles for today's world
- Text of Protest by Germany and Acceptance of Fair Peace Treaty
- Woodrow Wilson Original Letters on Treaty of Versailles, Shapell Manuscript Foundation
- My 1919 – A film from the Chinese point of view, the only country that did not sign the treaty
- "Versailles Revisted" (Review of Manfred Boemeke, Gerald Feldman and Elisabeth Glaser, The Treaty of Versailles: A Reassessment after 75 Years. Cambridge, UK: German Historical Institute, Washington, and Cambridge University Press, 1998), Strategic Studies 9:2 (Spring 2000), 191–205
- Map of Europe and the impact of the Versailles Treaty at omniatlas.com | https://library.kiwix.org/wikipedia_en_top_maxi/A/Treaty_of_Versailles | 21 |
15 | The supply and demand model can be applied to real-world events.
Price ceilings and price floors are government-imposed limits on market prices.
Viewing the economy through the supply/demand lens lets you apply supply and demand analysis to real-world events.
There are three events.
Try to explain what happened using supply and demand curves.
Figure 5-1 has some diagrams to help you in the process.
In each, be careful to explain which curve, or curves, shifted and how that affected equilibrium price and quantity.
Event 1: Half a million acres of farmland in California are not being used.
Event 2: Sand is a key ingredient in the process of fracking for oil and natural gas in the United States.
Event 3: A growing middle class in China and India has increased the demand for food products such as soy and palm, and U.S. farmers have decided to grow more corn and less soy.
Figure 5-1 shows three shifts of supply and demand; match them with the three events, and then see whether your analysis matches mine.
Weather is a supply-side factor: the unused California acreage means that supply shifted in.
With less supplied at every price, the invisible hand of the market pushes the price up until the quantity demanded once again equals the quantity supplied.
The process of fracking uses large amounts of sand, so oil and gas producers in the U.S. increased their demand for sand.
With quantity demanded exceeding quantity supplied at the old price, sellers raised their price.
Let's review the examples that we've been through.
A curve shifts only when something other than the good's own price affects demand or supply; a change in the good's own price produces a movement along the curve, not a shift.
When both curves are shifting, you can get a change in price but little change in quantity, or a change in quantity but little change in price.
To keep things straight when working out the effects of supply or demand curve shifts, first draw the initial demand and supply curves and identify the initial equilibrium price and quantity.
Then shift whichever curve, or curves, the event affects (for supply, do the same thing as for demand).
Where the new curves intersect gives the new equilibrium price and quantity, which you compare with the initial ones.
If only the good's own price has changed, no curve shifts at all; the change is simply a movement along the existing curves.
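To make the procedure concrete, here is a minimal numerical sketch in Python (my own illustration, not part of the original chapter). The linear demand and supply curves and all the numbers are made-up assumptions; the point is simply to show an initial equilibrium and what happens when one curve shifts, as in the sand example.

```python
# Illustrative only: linear demand Qd = a - b*P and supply Qs = c + d*P with made-up numbers.

def equilibrium(a, b, c, d):
    """Solve a - b*P = c + d*P for the equilibrium price and quantity."""
    p = (a - c) / (b + d)
    q = a - b * p
    return p, q

# Initial curves (hypothetical): Qd = 100 - 2P, Qs = 10 + 4P
p0, q0 = equilibrium(100, 2, 10, 4)   # -> P = 15, Q = 70

# An event such as the fracking boom shifts demand out: the demand intercept
# rises from 100 to 130 while the supply curve is unchanged.
p1, q1 = equilibrium(130, 2, 10, 4)   # -> P = 20, Q = 90

print(f"initial equilibrium: P={p0:.1f}, Q={q0:.1f}")
print(f"after demand shifts out: P={p1:.1f}, Q={q1:.1f} (both price and quantity rise)")
```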
Now consider the market for milk and the discovery of a growth hormone that raises milk production by roughly 20 percent; I began the chapter with a variation of this exercise.
On the left-hand side of Table 5-1 are combinations of observed movements in price and quantity; your job is to decide what shifts of demand and supply could have produced each result.
Each combination can be traced to demand shifting out or in, supply shifting out or in, or both curves shifting at once.
The table summarizes how shifts in supply and demand affect equilibrium price and quantity; when both curves shift, the relative size of the shifts determines how much price and quantity change.
Here are the answers I came up with.
If demand shifts in (to the left), both price and quantity decline.
If demand shifts out (to the right), both price and quantity rise.
If supply shifts in (to the left), price rises and quantity falls.
If supply shifts out (to the right), price falls and quantity rises: we move out along the demand curve, so the lower the price, the greater the quantity demanded.
If demand and supply shift at the same time, the direction of either price or quantity is clear, but what happens to the other depends on the relative size of the two shifts.
The combinations are presented diagrammatically as a summary.
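Under the same made-up linear curves as in the sketch above, the four single-curve shifts can be tabulated directly; this is only an illustrative check of the qualitative pattern just described, not anything taken from Table 5-1 itself.

```python
# Illustrative check of the four single-curve shifts using hypothetical linear curves.

def equilibrium(a, b, c, d):
    # Demand: Qd = a - b*P ; Supply: Qs = c + d*P
    p = (a - c) / (b + d)
    return p, a - b * p

p0, q0 = equilibrium(100, 2, 10, 4)          # made-up baseline
cases = {
    "demand shifts out": (130, 2, 10, 4),
    "demand shifts in":  (70, 2, 10, 4),
    "supply shifts out": (100, 2, 40, 4),
    "supply shifts in":  (100, 2, -20, 4),
}
for name, params in cases.items():
    p1, q1 = equilibrium(*params)
    print(f"{name:18s}: price goes {'up' if p1 > p0 else 'down'}, "
          f"quantity goes {'up' if q1 > q0 else 'down'}")
```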
Sometimes people don't like the market-determined price.
If the invisible hand were the sole factor determining prices, they would simply have to accept it; but prices are also determined by social and political forces.
When prices fall, sellers look to government for ways to hold prices up; when prices rise, buyers look to government for ways to hold prices down.
Let's start with an example of a price being held down.
Many different models are used by economists.
When I discuss a real-world market as fitting a model, I use a pedagogical license.
The conclusions that come out of a model are given by the assumptions.
It is necessary to consider which assumptions of the model fit the situation one is describing.
When World War II ended, a price ceiling on housing rents in Paris created a shortage of housing.
Had rents been allowed to rise to the market level of about $17 per month, the shortage would have been eliminated.
A ceiling matters only when it is set below the equilibrium price: in that case quantity demanded exceeds quantity supplied and there is excess demand.
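As a rough numerical sketch (my own made-up rental market, not the Paris figures), a ceiling set below the equilibrium rent leaves quantity demanded above quantity supplied:

```python
# Hypothetical rental market: Qd = 1200 - 40*R, Qs = 200 + 60*R, where R is the monthly rent.
def qd(r): return 1200 - 40 * r
def qs(r): return 200 + 60 * r

r_eq = (1200 - 200) / (40 + 60)   # equilibrium rent = 10
ceiling = 4                        # legal maximum rent, set below the equilibrium

shortage = qd(ceiling) - qs(ceiling)
print(f"equilibrium rent {r_eq:.0f}, quantity {qd(r_eq):.0f}")
print(f"at the ceiling of {ceiling}: demanded {qd(ceiling)}, supplied {qs(ceiling)}, "
      f"shortage {shortage}")
```

The computed shortage is the gap that nonprice rationing then has to fill.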
In the real world, that excess demand shows up in a variety of ways.
Rent controls exist in a number of American cities as well as other cities around the world.
The first half of the 20th century saw the introduction of many of the laws governing rent.
For example, consider Paris.
Rents were frozen by the Paris government during World War II to ease the financial burden on families whose wage earners had been sent off to fight.
When the soldiers returned after the war the controls were kept in place, since lifting them was felt to be an unfair burden on veterans.
The figure shows the resulting situation: because rent was held below the market level, there was an enormous shortage of apartments.
Since they got low-cost apartments, the shortage didn't bother those who were renting.
It created a lot of hardship for people who didn't have apartments.
Many families moved in with friends.
Others lived on the streets because they couldn't find housing.
The rent controls caused problems even for people who did have apartments.
Owners cut back on maintenance, and existing buildings weren't kept in repair because rental properties weren't profitable; 20 percent of the private bathrooms had no running water.
It was even harder for people who didn't have apartments.
Because the market price was not allowed to ration apartments, alternative methods of rationing developed.
Would-be tenants raced to claim any apartment that came free, some moving their furniture in before anyone else could.
When the situation became bad enough, the rent controls were finally lifted.
Rent controls are more than just a historical curiosity.
Similar phenomena have existed in New York City in the past.
A couple paid $450 a month for a two-bedroom Park Avenue apartment with a solarium and two terraces, while another individual paid $3,500 a month for a studio apartment shared with two roommates.
The apartment vacancies in New York City were 1.2 percent.
Anything under 5 percent is considered a housing emergency.
Key money is payments made by would-be tenants to current tenants or landlords to get apartments.
Try to demonstrate, with supply and demand curves, the situation that likely caused them.
Then check your answers against mine.
Take the first item.
The couple lived in a rent-controlled apartment while the individual with roommates didn't.
The housing emergency was caused by rent control.
Below-market rents created excess demand and very few vacancies.
Nonprice rationing explains how the actress Mia Farrow came to have a rent-controlled apartment.
Because apartments in New York City's strictly controlled market were no longer rationed by price, other methods of rationing the existing stock developed.
Many new residents discovered that illegal payments to landlords were the only way to get a rent-controlled apartment.
Key money is a black market payment.
Individuals were willing to pay more than the controlled price because of the limited supply of apartments.
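A small sketch (hypothetical numbers, not New York data) shows why key money arises: at the quantity landlords are willing to supply under the ceiling, would-be tenants value an apartment at far more than the controlled rent, and the gap is what side payments can capture.

```python
# Hypothetical inverse demand: the most a tenant will pay per month at a given quantity.
def demand_price(quantity):
    return 30 - 0.025 * quantity

controlled_rent = 5            # legal maximum rent (made-up)
quantity_at_cap = 440          # apartments supplied at that rent (made-up)

willingness = demand_price(quantity_at_cap)
gap = willingness - controlled_rent
print(f"willingness to pay at the restricted quantity: {willingness:.0f} per month")
print(f"controlled rent {controlled_rent}: room for roughly {gap:.0f} per month of key money, "
      "typically collected up front as a lump sum")
```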
Landlords used other methods of rationing the limited supply of apartments, such as instituting first-come, first-served policies or selecting tenants based on gender, race, or other personal characteristics, even though such discrimination was illegal.
In some cases in New York City, the rent was so low that developers were able to buy the building from the landlord, tear it down, and replace it with a new apartment building.
No community would institute rent controls if they only had bad effects.
They are implemented with good intentions: to cope with sudden increases in demand for housing that would otherwise cause rents to explode and force many poor people out of their apartments.
But as buildings begin to degrade, the number of people looking to rent and unable to find apartments increases.
A price floor has the opposite problem: when a minimum price is set above the equilibrium price, quantity supplied exceeds quantity demanded and the result is excess supply.
The minimum wage is an example of a price floor; a numerical sketch follows below.
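Here is the mirror-image sketch for a price floor, using a made-up labor market rather than actual minimum-wage data; a wage floor above the equilibrium wage produces excess supply of labor, that is, unemployment.

```python
# Hypothetical labor market: demand Ld = 900 - 60*W, supply Ls = 100 + 40*W, W = hourly wage.
def labor_demanded(w): return 900 - 60 * w
def labor_supplied(w): return 100 + 40 * w

w_eq = (900 - 100) / (60 + 40)    # equilibrium wage = 8
floor = 10                         # legal minimum wage, set above the equilibrium

surplus = labor_supplied(floor) - labor_demanded(floor)
print(f"equilibrium wage {w_eq:.0f}, employment {labor_demanded(w_eq):.0f}")
print(f"at the floor of {floor}: supplied {labor_supplied(floor)}, "
      f"demanded {labor_demanded(floor)}, excess supply (unemployment) {surplus}")
```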
The minimum wage has been raised many times.
The federal minimum wage was $7 per hour.
Twenty-nine states had minimum wages that were higher than the federal minimum.
Almost all of the workers who receive the minimum wage are unskilled and/or part-time.
Most full-time adult workers are paid above the minimum wage. | https://knowt.io/note/707e742f-41a7-4438-bfc5-78bac0a5a930/6----Part-1-Describing-Supply-and-Demand | 21 |
25 | How do companies decide what price to charge for their sleek new gadgets? Why are some people willing to pay more for a product than others? How do your decisions play into how corporations price their products? The answer to all of these questions and many more is microeconomics. Read on to find out what microeconomics is and how it works.
What Is It?
Microeconomics focuses on the role consumers and businesses play in the economy, with specific attention paid to how these two groups make decisions. These decisions include when a consumer purchases a good and for how much, or how a business determines the price it will charge for its product. Microeconomics examines smaller units of the overall economy; it is different than macroeconomics, which focuses primarily on the effects of interest rates, employment, output and exchange rates on governments and economies as a whole. Both microeconomics and macroeconomics examine the effects of actions in terms of supply and demand.
Microeconomics breaks down into the following tenets:
- Individuals make decisions based on the concept of utility. In other words, the decision made by the individual is supposed to increase that individual's happiness or satisfaction. This concept is called rational behavior or rational decision-making.
- Businesses make decisions based on the competition they face in the market. The more competition a business faces, the less leeway it has in terms of pricing.
- Both individuals and consumers take the opportunity cost of their actions into account when making their decisions.
Total and Marginal Utility
At the core of how a consumer makes a decision is the concept of individual benefit, also known as utility. The more benefit a consumer feels a product provides, the more that consumer is willing to pay for the product. Consumers often assign different levels of utility to different goods, creating different levels of demand. Consumers have the choice of purchasing any number of goods, so utility analysis often looks at marginal utility, which shows the satisfaction that one additional unit of a good brings. Total utility is the total satisfaction the consumption of a product brings to the consumer.
Utility can be difficult to measure and is even more difficult to aggregate in order to explain how all consumers will behave. After all, each consumer feels differently about a particular product. Take the following example:
Think of how much you like eating a particular food, such as pizza. While you might be really satisfied after one slice, that seventh slice of pizza makes your stomach hurt. In the case of you and pizza, you might say that the benefit (utility) that you receive from eating that seventh slice of pizza is not nearly as great as that of the first slice. Imagine that the value of eating that first slice of pizza is set to 14 (an arbitrary number chosen for the sake of illustration).
Figure 1, below, shows that each additional slice of pizza you eat increases your total utility because you feel less hungry as you eat more. At the same time, because the hunger you feel decreases with each additional slice you consume, the marginal utility—the utility of each additional slice—also decreases.
[Figure 1 (table) and Figures 2 and 3 (graphs) show, slice by slice, the total utility and marginal utility of eating pizza.]
Notice the difference between the two measures: total utility keeps rising with each slice, while marginal utility falls.
The decreasing satisfaction the consumer feels from additional units is referred to as the law of diminishing marginal utility. While the law of diminishing marginal utility isn't really a law in the strictest sense (there are exceptions), it does help illustrate how resources spent by a consumer, such as the extra dollar needed to buy that seventh piece of pizza, could have been better used elsewhere.
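A short sketch can make the computation explicit. The total-utility numbers below are made up (only the first slice's value of 14 comes from the example above); marginal utility is just the difference between successive totals, and it falls even though total utility keeps rising.

```python
# Made-up cumulative satisfaction after each slice of pizza (first slice = 14, as in the text).
total_utility = [14, 26, 36, 44, 50, 54, 56]

marginal_utility = [total_utility[0]] + [
    total_utility[i] - total_utility[i - 1] for i in range(1, len(total_utility))
]

for slice_no, (tu, mu) in enumerate(zip(total_utility, marginal_utility), start=1):
    print(f"slice {slice_no}: total utility {tu}, marginal utility {mu}")
# Marginal utility falls (14, 12, 10, ...) while total utility rises:
# that is diminishing marginal utility.
```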
For example, if you were given the choice of buying more pizza or buying a soda, you might decide to forgo another slice in order to have something to drink. Just as you were able to indicate in a chart how much each slice of pizza meant to you, you probably could also indicate how you felt about combinations of different amounts of soda and pizza. If you were to plot out this chart on a graph, you'd get an indifference curve, a diagram depicting equal levels of utility (satisfaction) for a consumer faced with various combinations of goods.
Figure 4 shows the combinations of soda and pizza, which you would be equally happy with.
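The article gives no formula for utility, so as an assumption for illustration take U(pizza, soda) = sqrt(pizza x soda); the sketch below traces one indifference curve by finding, for each amount of pizza, how much soda keeps utility at the same level.

```python
import math

# Assumed utility function for illustration only: U = sqrt(pizza * soda).
def soda_needed(pizza, target_utility):
    """Soda required to stay on the indifference curve U = target_utility."""
    return target_utility ** 2 / pizza

target = 6.0
for pizza in [2, 3, 4, 6, 9, 12]:
    soda = soda_needed(pizza, target)
    print(f"pizza {pizza:>2}, soda {soda:5.1f} -> utility {math.sqrt(pizza * soda):.1f}")
# Every pair printed yields the same utility, so together they trace out one indifference curve.
```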
When consumers or businesses make the decision to purchase or produce particular goods, they are doing so at the expense of buying or producing something else. This is referred to as the opportunity cost. If an individual decides to use a month's salary for a vacation instead of saving, the immediate benefit is the vacation on a sandy beach, but the opportunity cost is the money that could have accrued in that account in interest, as well as what could have been done with that money in the future.
When illustrating how opportunity costs influence decision making, economists use a graph called the production possibility frontier (PPF). Figure 5 shows the combinations of two goods that a company or economy can produce. Points within the curve (Point A) are considered inefficient because the maximum combination of the two goods is not reached, while points outside of the curve (Point B) cannot exist because they require a higher level of efficiency than what is currently possible. Points outside the curve can only be reached by an increase in resources or by improvements to technology. The curve represents maximum efficiency.
The graph represents the amount of two different goods that a firm can produce, but instead of always seeking to produce along the curve, a firm might choose to produce within the curve's boundaries. The firm's decision to produce less than what is efficient is determined by demand for the two types of goods. If the demand for goods is lower than what can be efficiently produced, then the firm is more likely to limit production. This decision is also influenced by the competition faced by the firm.
A well-known example of the PPF in practice is the "guns and butter" model, which shows the combinations of defense spending and civilian spending that a government can support. While the model itself oversimplifies the complex relationships between politics and economics, the general idea is that the more a government spends on defense, the less it can spend on non-defense items.
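The following sketch encodes a made-up concave frontier in the guns-and-butter spirit; the particular function 2*guns**2 + butter**2 = 100 is an assumption chosen only so that points inside, on, and outside the curve can be classified the way Points A and B are described above.

```python
import math

# Made-up production possibility frontier: combinations (guns, butter) satisfying
# 2*guns**2 + butter**2 <= 100 are attainable with the economy's fixed resources.
def classify(guns, butter, budget=100.0):
    used = 2 * guns ** 2 + butter ** 2
    if math.isclose(used, budget):
        return "efficient: on the frontier"
    if used > budget:
        return "unattainable: outside the frontier (like Point B)"
    return "inefficient: inside the frontier (like Point A)"

for guns, butter in [(3.0, 5.0), (6.0, 6.0), (5.0, math.sqrt(50.0))]:
    print(f"guns {guns:.1f}, butter {butter:.2f} -> {classify(guns, butter)}")
# Along the frontier, producing more guns requires giving up some butter: the opportunity
# cost of one good is the amount of the other good forgone.
```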
Market Failure and Competition
While the term market failure might conjure up images of unemployment or a massive economic depression, the meaning of the term is different. Market failure exists when the economy is unable to efficiently allocate resources. This can result in scarcity, a glut or a general mismatch between supply and demand. Market failure is frequently associated with the role that competition plays in the production of goods and services, but can also arise from asymmetric information or from a misjudgment in the effects of a particular action (referred to as externalities).
The level of competition a firm faces in a market, as well as how this determines consumer prices, is probably the more widely-referenced concept. There are four main types of competition:
- Perfect Competition: A large number of firms produce a good, and a large number of buyers are in the market. Because so many firms are producing, there is little room for differentiation between products, and individual firms cannot affect prices because they have a low market share. There are few barriers to entry in the production of this good.
- Monopolistic Competition: A large number of firms produce a good, but the firms are able to differentiate their products. There are also few barriers to entry.
- Oligopoly: A relatively small number of firms produce a good, and each firm is able to differentiate its product from its competitors. Barriers to entry are relatively high.
- Monopoly: One firm controls the market. The barriers to entry are very high because the firm controls the entire share of the market.
The price that a firm sets is determined by the competitiveness of its industry, and the firm's profits are judged by how well it balances costs to revenues. The more competitive the industry, the less choice the individual firm has when it sets its price.
The Bottom Line
We can analyze the economy by examining how the decisions of individuals and firms alter the types of goods that are produced. Ultimately, it is the smallest segment of the market - the consumer—who determines the course of the economy by making choices that best fit the consumer's perception of cost and benefit. | https://www.investopedia.com/articles/economics/08/understanding-microeconomics.asp | 21 |
26 | Byzantine Empire, the eastern half of the Roman Empire, which survived for a thousand years after the western half had crumbled into various feudal kingdoms and which finally fell to Ottoman Turkish onslaughts in 1453.
The Virgin Mary holding the Christ Child (centre), Justinian (left) holding a model of the Hagia Sophia, and Constantine (right) holding a model of the city of Constantinople; mosaic from the Hagia Sophia, 9th century.
The very name Byzantine illustrates the misconceptions to which the empire’s history has often been subject, for its inhabitants would hardly have considered the term appropriate to themselves or to their state. Theirs was, in their view, none other than the Roman Empire, founded shortly before the beginning of the Christian era by God’s grace to unify his people in preparation for the coming of his Son. Proud of that Christian and Roman heritage, convinced that their earthly empire so nearly resembled the heavenly pattern that it could never change, they called themselves Romaioi, or Romans. Modern historians agree with them only in part. The term East Rome accurately described the political unit embracing the Eastern provinces of the old Roman Empire until 476, while there were yet two emperors. The same term may even be used until the last half of the 6th century, as long as men continued to act and think according to patterns not unlike those prevailing in an earlier Roman Empire. During those same centuries, nonetheless, there were changes so profound in their cumulative effect that after the 7th century state and society in the East differed markedly from their earlier forms. In an effort to recognize that distinction, historians traditionally have described the medieval empire as Byzantine.
The latter term is derived from the name Byzantium, borne by a colony of ancient Greek foundation on the European side of the Bosporus, midway between the Mediterranean and the Black Sea. The city was, by virtue of its location, a natural transit point between Europe and Asia Minor (Anatolia). Refounded as the “new Rome” by the emperor Constantine I in 330, it was endowed by him with the name Constantinople, the city of Constantine. The derivation from Byzantium is suggestive in that it emphasizes a central aspect of Byzantine civilization: the degree to which the empire’s administrative and intellectual life found a focus at Constantinople from 330 to 1453, the year of the city’s last and unsuccessful defense under the 11th (or 12th) Constantine. The circumstances of the last defense are suggestive too, for in 1453 the ancient, medieval, and modern worlds seemed briefly to meet. The last Constantine fell in defense of the new Rome built by the first Constantine. Walls that had held firm in the early Middle Ages against German, Hun, Avar, Slav, and Arab were breached finally by modern artillery, in the mysteries of which European technicians had instructed the most successful of the Central Asian invaders: the Ottoman Turks.
Marble head of Constantine I, the only surviving piece of a giant statue that was made about 300 CE.
The fortunes of the empire were thus intimately entwined with those of peoples whose achievements and failures constitute the medieval history of both Europe and Asia. Nor did hostility always characterize the relations between Byzantines and those whom they considered “barbarian.” Even though the Byzantine intellectual firmly believed that civilization ended with the boundaries of his world, he opened it to the barbarian, provided that the latter (with his kin) would accept baptism and render loyalty to the emperor. Thanks to the settlements that resulted from such policies, many a name, seemingly Greek, disguises another of different origin: Slavic, perhaps, or Turkish. Barbarian illiteracy, in consequence, obscures the early generations of more than one family destined to rise to prominence in the empire’s military or civil service. Byzantium was a melting-pot society, characterized during its earlier centuries by a degree of social mobility that belies the stereotype, often applied to it, of an immobile caste-ridden society.
A source of strength in the early Middle Ages, Byzantium’s central geographical position served it ill after the 10th century. The conquests of that age presented new problems of organization and assimilation, and those the emperors had to confront at precisely the time when older questions of economic and social policy pressed for answers in a new and acute form. Satisfactory solutions were never found. Bitter ethnic and religious hostility marked the history of the empire’s later centuries, weakening Byzantium in the face of new enemies descending upon it from east and west. The empire finally collapsed when its administrative structures could no longer support the burden of leadership thrust upon it by military conquests.
The empire to 867
The Roman and Christian background
Unity and diversity in the late Roman Empire
The Roman Empire, the ancestor of the Byzantine, remarkably blended unity and diversity, the former being by far the better known, since its constituents were the predominant features of Roman civilization. The common Latin language, the coinage, the “international” army of the Roman legions, the urban network, the law, and the Greco-Roman heritage of civic culture loomed largest among those bonds that Augustus and his successors hoped would bring unity and peace to a Mediterranean world exhausted by centuries of civil war. To strengthen those sinews of imperial civilization, the emperors hoped that a lively and spontaneous trade might develop between the several provinces. At the pinnacle of that world stood the emperor himself, the man of wisdom who would shelter the state from whatever mishaps fortune had darkly hidden. The emperor alone could provide that protection, since, as the embodiment of all the virtues, he possessed in perfection those qualities displayed only imperfectly by his individual subjects.
The Roman formula of combating fortune with reason and therewith ensuring unity throughout the Mediterranean world worked surprisingly well in view of the pressures for disunity that time was to multiply. Conquest had brought regions of diverse background under Roman rule. The Eastern provinces were ancient and populous centres of that urban life that for millennia had defined the character of Mediterranean civilization. The Western provinces had only lately entered upon their own course of urban development under the not-always-tender ministrations of their Roman masters.
Each of the aspects of unity enumerated above had its other side. Not everyone understood or spoke Latin. Paralleling and sometimes influencing Roman law were local customs and practices, understandably tenacious by reason of their antiquity. Pagan temples, Jewish synagogues, and Christian baptisteries attest to the range of organized religions with which the official forms of the Roman state, including those of emperor worship, could not always peacefully coexist. And far from unifying the Roman world, economic growth often created self-sufficient units in the several regions, provinces, or great estates.
Given the obstacles against which the masters of the Roman state struggled, it is altogether remarkable that Roman patriotism was ever more than an empty formula, that cultivated gentlemen from the Pillars of Hercules to the Black Sea were aware that they had “something” in common. That “something” might be defined as the Greco-Roman civic tradition in the widest sense of its institutional, intellectual, and emotional implications. Grateful for the conditions of peace that fostered it, men of wealth and culture dedicated their time and resources to glorifying that tradition through adornment of the cities that exemplified it and through education of the young who they hoped might perpetuate it.
Upon that world the barbarians descended after about 150 CE. To protect the frontier against them, warrior emperors devoted whatever energies they could spare from the constant struggle to reassert control over provinces where local regimes emerged. In view of the ensuing warfare, the widespread incidence of disease, and the rapid turnover among the occupants of the imperial throne, it would be easy to assume that little was left of either the traditional fabric of Greco-Roman society or the bureaucratic structure designed to support it.
Neither assumption is accurate. Devastation was haphazard, and some regions suffered while others did not. In fact, the economy and society of the empire as a whole during that period was the most diverse it had ever been. Impelled by necessity or lured by profit, people moved from province to province. Social disorder opened avenues to eminence and wealth that the more-stable order of an earlier age had closed to the talented and the ambitious. For personal and dynastic reasons, emperors favoured certain towns and provinces at the expense of others, and the erratic course of succession to the throne, coupled with a resulting constant change among the top administrative officials, largely deprived economic and social policies of recognizable consistency.
The reforms of Diocletian and Constantine
The definition of consistent policy in imperial affairs was the achievement of two great soldier-emperors, Diocletian (ruled 284–305) and Constantine I (sole emperor 324–337), who together ended a century of anarchy and refounded the Roman state. There are many similarities between them, not the least being the range of problems to which they addressed themselves: both had learned from the 3rd-century anarchy that one man alone and unaided could not hope to control the multiform Roman world and protect its frontiers; as soldiers, both considered reform of the army a prime necessity in an age that demanded the utmost mobility in striking power; and both found the old Rome and Italy an unsatisfactory military base for the bulk of the imperial forces. Deeply influenced by the soldier’s penchant for hierarchy, system, and order, a taste that they shared with many of their contemporaries as well as the emperors who preceded them, they were appalled by the lack of system and the disorder characteristic of the economy and the society in which they lived. Both, in consequence, were eager to refine and regularize certain desperate expedients that had been adopted by their rough military predecessors to conduct the affairs of the Roman state. Whatever their personal religious convictions, both, finally, believed that imperial affairs would not prosper unless the emperor’s subjects worshiped the right gods in the right way.
Statue of Diocletian's tetrarchy, red porphyry, c. 300 CE, taken to Venice 1258.
The means they adopted to achieve those ends differ so profoundly that one, Diocletian, looks to the past and ends the history of Rome; the other, Constantine, looks to the future and founds the history of Byzantium. Thus, in the matter of succession to the imperial office, Diocletian adopted precedents he could have found in the practices of the 2nd century CE. He associated with himself a coemperor, or Augustus. Each Augustus then adopted a young colleague, or Caesar, to share in the rule and eventually to succeed the senior partner. That rule of four, or tetrarchy, failed of its purpose, and Constantine replaced it with the dynastic principle of hereditary succession, a procedure generally followed in subsequent centuries. To divide administrative responsibilities, Constantine replaced the single praetorian prefect, who had traditionally exercised both military and civil functions in close proximity to the emperor, with regional prefects established in the provinces and enjoying civil authority alone. In the course of the 4th century, four great “regional prefectures” emerged from those Constantinian beginnings, and the practice of separating civil from military authority persisted until the 7th century.
Contrasts in other areas of imperial policy are equally striking. Diocletian persecuted Christians and sought to revive the ancestral religion. Constantine, a convert to the new faith, raised it to the status of a “permitted religion.” Diocletian established his headquarters at Nicomedia, a city that never rose above the status of a provincial centre during the Middle Ages, whereas Constantinople, the city of Constantine’s foundation, flourished mightily. Diocletian sought to bring order into the economy by controlling wages and prices and by initiating a currency reform based upon a new gold piece, the aureus, struck at the rate of 60 to the pound of gold. The controls failed and the aureus vanished, to be succeeded by Constantine’s gold solidus. The latter piece, struck at the lighter weight of 72 to the gold pound, remained the standard for centuries. For whatever reason, in summary, Constantine’s policies proved extraordinarily fruitful. Some of them—notably hereditary succession, the recognition of Christianity, the currency reform, and the foundation of the capital—determined in a lasting way the several aspects of Byzantine civilization with which they are associated.
Yet it would be a mistake to consider Constantine a revolutionary or to overlook those areas in which, rather than innovating, he followed precedent. Earlier emperors had sought to constrain groups of men to perform certain tasks that were deemed vital to the survival of the state but that proved unremunerative or repellent to those forced to assume the burden. Such tasks included the tillage of the soil, which was the work of the peasant, or colonus; the transport of cheap bulky goods to the metropolitan centres of Rome or Constantinople, which was the work of the shipmaster, or navicularius; and services rendered by the curiales, members of the municipal senate charged with the assessment and collection of local taxes. Constantine’s laws in many instances extended or even rendered hereditary those enforced responsibilities, thus laying the foundations for the system of collegia, or hereditary state guilds, that was to be so noteworthy a feature of late-Roman social life. Of particular importance, he required the colonus (peasant) to remain in the locality to which the tax lists ascribed him.
The 5th century: Persistence of Greco-Roman civilization in the East
Whether innovative or traditional, Constantine’s measures determined the thrust and direction of imperial policy throughout the 4th century and into the 5th. The state of the empire in 395 may, in fact, be described in terms of the outcome of Constantine’s work. The dynastic principle was established so firmly that the emperor who died in that year, Theodosius I, could bequeath the imperial office jointly to his sons, both of whom were young and incompetent: Arcadius in the East and Honorius in the West. Never again would one man rule over the full extent of the empire in both its halves. Constantinople had probably grown to a population of between 200,000 and 500,000; in the 5th century the emperors sought to restrain rather than promote its growth. After 391 Christianity was far more than one among many religions: from that year onward, imperial decree prohibited all forms of pagan cult, and the temples were closed. Imperial pressure was often manifest at the church councils of the 4th century, with the emperor assuming a role he was destined to fill again during the 5th century in defining and suppressing heresy.
Gold coin depicting Valentinian II (obverse side) and Valentinian II with Theodosius I (reverse side).
Economic and social policies
The empire’s economy had prospered in a spotty fashion. Certain provinces, or parts of provinces such as northern Italy, flourished commercially as well as agriculturally. Constantinople, in particular, influenced urban growth and the exploitation of agricultural frontiers. Balkan towns along the roads leading to the great city prospered, while others not so favoured languished and even disappeared. Untilled land in the hilly regions of northern Syria fell under the plow to supply foodstuffs for the masses of Constantinople. As the 4th century progressed, not only did Constantine’s solidus remain indeed solid gold, but evidence drawn from a wide range of sources suggests that gold in any form was far more abundant than it had been for at least two centuries. It may be that new sources of supply for the precious metal had been discovered: those perhaps were in spoils plundered from pagan temples or perhaps were from mines newly exploited in western Africa and newly available to the lands of the empire, thanks to the appearance of camel-driving nomads who transported the gold across the Sahara to the Mediterranean coastline of North Africa.
The extreme social mobility noted in the late 3rd and early 4th centuries seems less characteristic of the second half of the latter century. Certainly the emperors continued their efforts to bind men collectively to their socially necessary tasks, but the repetition of laws tying the colonus to his estate, the navicularius to his ship, and the curialis to his municipal senate suggests that those edicts had little effect. Indeed, it would be a mistake to conclude from such legislation that Roman society was universally and uniformly organized in castes determined in response to imperial orders. There was always a distinction between what an emperor wanted and what he could obtain, and, as the foregoing survey has suggested, there were distinctions between the provinces as well.
Even before the end of the first quarter of the 5th century, those provincial differences were visible, and in no small degree they help to explain the survival of imperial government and Greco-Roman civilization in the East while both eventually perished in the West. Throughout the Eastern provinces, population levels seem to have remained higher, and the emperors in Constantinople never had to search (at least until the 6th century) for men to fill the ranks of their armies. As might be expected in those eastern lands in which urban civilization was several centuries old, cities persisted and, with them, a merchant class and a monetary economy. Eastern merchants, known in the sources as Syrians, assumed the carrying trade between East and West, often establishing colonies in the beleaguered cities of the latter region.
Most important, the emperor in the East never lost access to, or control over, his sources of manpower and money. An older and probably more-wealthy senatorial class, or aristocracy, in the West consolidated its great estates and assumed a form of protection or patronage over the labouring rural classes, depriving the state of desperately needed military and financial services. The senatorial class in the East seems to have been of more-recent origin, its beginnings to be found among those favourites or parvenus who had followed Constantine to his new capital. By the early 5th century, their wealth seems to have been, individually, much less than the resources at the disposal of their Western counterparts; their estates were far more scattered and their rural dependents less numerous. They were thus less able to challenge the imperial will and less able to interpose themselves between the state on the one hand and its potential soldiers or taxpayers on the other. | https://googleweblight.com/sp?hl&geid=NSTNR&u=https://www.britannica.com/place/Byzantine-Empire | 21 |
17 | A duopoly (from Greek δύο, duo "two" and πωλεῖν, polein "to sell") is a type of oligopoly where two firms have dominant or exclusive control over a market. It is the most commonly studied form of oligopoly due to its simplicity. Duopolies sell to consumers in a competitive market where the choice of an individual consumer can not affect the firm. The defining characteristic of both duopolies and oligopolies is that decisions made by sellers are dependent on each other.
Duopoly models in economics and game theory
- The Cournot model, in which each firm takes the other's output as a fixed amount and chooses its own output accordingly (a numerical sketch follows below).
- The Bertrand model, in which each firm assumes the other will not change its price in response to its own price cuts; when both firms use this logic, they reach a Nash equilibrium.
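A minimal numerical sketch of the Cournot case (my own illustration; the linear inverse demand P = a - b(q1 + q2), the constant marginal cost c, and all the numbers are assumptions) shows the two firms' repeated best responses converging to the Cournot-Nash outputs (a - c) / (3b):

```python
# Illustrative Cournot duopoly with inverse demand P = a - b*(q1 + q2) and marginal cost c.
a, b, c = 120.0, 1.0, 30.0

def best_response(q_other):
    # Maximize (a - b*(q + q_other) - c) * q  =>  q = (a - c - b*q_other) / (2*b)
    return max((a - c - b * q_other) / (2 * b), 0.0)

q1 = q2 = 0.0
for _ in range(50):                       # repeated best responses converge
    q1, q2 = best_response(q2), best_response(q1)

price = a - b * (q1 + q2)
print(f"Cournot outputs: q1 = {q1:.1f}, q2 = {q2:.1f} (theory: {(a - c) / (3 * b):.1f} each)")
print(f"market price: {price:.1f}, marginal cost: {c:.1f}")
# In the Bertrand version the firms set prices instead; with identical costs,
# undercutting drives the equilibrium price all the way down to marginal cost c.
```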
Characteristics of duopoly
- Existence of only two sellers
- Interdependence: if one firm changes its price or promotional scheme, the other firm has to respond in order to remain competitive.
- Presence of monopoly elements: so long as products are differentiated, the firms enjoy some monopoly power, as each product will have some loyal customers
- There are two popular models of duopoly, i.e., Cournot's Model and Bertrand's Model.
Like a market, a political system can be dominated by two groups, which exclude other parties or ideologies from participation. One party or the other tends to dominate government at any given time (the Majority party), while the other has only limited power (the Minority party). According to Duverger's law, this tends to be caused by a simple winner-take-all voting system without runoffs or ranked choices. The United States and many Latin American countries have two-party government systems.
Examples in business
A commonly cited example of a duopoly is that involving Visa and MasterCard, who between them control a large proportion of the electronic payment processing market. In 2000 they were the defendants in a U.S. Department of Justice antitrust lawsuit. An appeal was upheld in 2004.
Examples where two companies control an overwhelming proportion of a market are:
- Woolworths and Coles in the Australian supermarket market
- Myer and David Jones in the Australian upmarket department store market
- Airbus and Boeing in the largest commercial aircraft market in the world
- Husqvarna and Stihl in the chainsaw market
- Nvidia and AMD in the GPU market
- Intel and AMD in the desktop CPU market
- Norfolk Southern Railway and CSX Transportation operate a duopoly on freight rail traffic in the Eastern United States, and the Union Pacific Railroad and BNSF Railway operate a duopoly on freight rail traffic elsewhere in the United States.
- Google's Android and Apple's iOS make up over 99% of the mobile operating system market
- Doppelmayr Garaventa Group and HTI Group consisting of Poma & Leitner in the market for ropeways, transport commonly used in mountainous regions, ski resorts, cities and amusement parks.
In Finland, the state-owned broadcasting company Yleisradio and the private broadcaster Mainos-TV had a legal duopoly (in the economists' sense of the word) from the 1950s to 1993. No other broadcasters were allowed. Mainos-TV operated by leasing air time from Yleisradio, broadcasting in reserved blocks between Yleisradio's own programming on its two channels. This was a unique phenomenon in the world. Between 1986 and 1992 there was an independent third channel but it was jointly owned by Yle and MTV; only in 1993 did MTV get its own channel.
Duopoly is also used in the United States broadcast television and radio industry to refer to a single company owning two outlets in the same city.
This usage is technically incompatible with the normal definition of the word and may lead to confusion, inasmuch as there are generally more than two owners of broadcast television stations in markets with broadcast duopolies. In Canada, this definition is therefore more commonly called a "twinstick".
- "Complaint - ATR - Department of Justice". www.usdoj.gov. Retrieved 9 April 2018.
- "Credit Card Antitrust Suit". mit.edu. Retrieved 9 April 2018.
- "Amex is suing Visa and Mastercard". 15 November 2004. Retrieved 9 April 2018 – via news.bbc.co.uk.
- Kramer-Miller, Ben (June 25, 2013). "Norfolk Southern Corp. Looks Like A Solid Investment". Seeking Alpha. Retrieved October 7, 2014.
- 99.6 percent of new smartphones run Android or iOS. The Verge. 16 February 2017.
- "Mobile Operating System Market Share Worldwide". StatCounter Global Stats. Retrieved 2020-06-12. | https://en.wikipedia.org/wiki/Duopoly | 21 |
14 | After an in-depth analysis of the causes and the consequences of climate change, the United Nations understood that it was necessary to take action to prevent the deterioration of the planet’s climatic and environmental conditions.
It was necessary to force nations to respect the environment, limiting emissions of greenhouse gases and other harmful gases, without causing too much damage to the economy and industrial production.
To this end, starting in 1992, the United Nations organized international conventions on climate change in order to ratify various treaties about environmental protection.
From 1995 onwards, the United Nations has held annual conferences on climate change in order to assess the progress made and the new challenges on this issue.
The United Nations Framework Convention on Climate Change
Among the first measures against global warming, in 1992 the United Nations held the first United Nations Framework Convention on Climate Change in Rio de Janeiro, also known as the “Earth Summit”.
This agreement was ratified by all 196 countries belonging to the United Nations. The signatory states of the “Earth Summit” were divided into three groups:
- industrialized countries; former socialist countries and the European Union;
- industrialized countries that are not part of the first group;
- developing countries.
Developing countries could become “industrialized” once they met the necessary economic requirements; however, these countries were not required to comply with the agreements until the industrialized nations had invested economic resources and technological means in them. Even then, technological progress remained subordinate to the fight against poverty and social inequality.
The main objective was to reduce the greenhouse gas emissions of industrialized nations by 2000, and the signatories agreed that the nations had different responsibilities.
The ratification obliged governments to pursue the non-binding goal of reducing emissions.
The treaty came into force in 1994, and from that moment the negotiations for the Kyoto Protocol began.
The Kyoto Protocol
As a consequence of the 1992 “Earth Summit”, the 1997 Kyoto Protocol introduced a legally binding obligation to reduce greenhouse gas emissions (while the 1992 agreements were not legally binding).
The protocol came into force in 2005, and was ratified by 192 countries.
The United States, however, despite having signed the protocol, never ratified it.
The treaty obliged nations to reduce emissions by at least 8% compared to the levels recorded in 1990, dividing the commitments into two operational periods: 2008-2012 and 2013-2020.
The signatory states were divided into two groups, “Annex B nations” (which included the United States, Canada, European Union, Switzerland, Australia, Japan, Ukraine, Belarus, New Zealand and the Russian Federation) and “non-Annex B nations” (all the other nations not included in the first group).
Countries adhering to the Kyoto Protocol could use flexible mechanisms to reduce emissions at the lowest possible economic cost and obtain “certified emission reductions” (or “CERs”).
The CERs are generated by avoiding CO2 emissions with the implementation of sustainable development projects, and are measured in “metric tons of CO2 equivalent”.
Let us consider an example to clarify this matter.
An industrialized nation carries out a sustainable development project in a developing nation, such as the building of a solar power plant, and thanks to this power plant the developing nation has avoided emitting 300 tons of CO2 during the year: these 300 tons of CO2 are the certified emission reductions earned by the industrialized nation, which in turn will have the right to release 300 tons of CO2 into the environment, or resell the CERs on the market.
The Kyoto protocol provides three flexible mechanisms for the acquisition of CERs:
- Clean Development Mechanism: allows industrialized countries to carry out projects in developing countries, both for sustainable development and socioeconomic development, generating CERs for countries that invest in such projects;
- Joint Implementation: allows industrialized countries to implement projects to reduce greenhouse gas emissions in another country of the same group and to use the resulting CERs, jointly with the host country;
- Emission Trading: allows the exchange of CERs between countries: a country that has reduced its greenhouse gas emissions beyond its assigned quota can sell these “credits” to a country that has not achieved its goals in reducing its greenhouse gas emissions.
The purpose of flexible mechanisms is to reduce greenhouse gas emissions at the lowest possible costs, safeguarding both the environment and economic investments.
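The accounting behind these mechanisms can be illustrated with a toy model. The sketch below is purely illustrative: the country figures, cap and project sizes are invented, and real Kyoto accounting involves several classes of units beyond CERs. It simply shows how credits earned from projects abroad or bought through emission trading offset a country's own emissions when checking compliance with its cap.

```python
# Toy model of Kyoto-style CER accounting (illustrative figures only).
# Units: metric tons of CO2 equivalent.

def compliant(cap, emitted, cers_earned, cers_bought, cers_sold):
    """A country complies if its emissions do not exceed its cap
    once CERs earned from projects and traded credits are counted."""
    allowance = cap + cers_earned + cers_bought - cers_sold
    return emitted <= allowance

# Hypothetical industrialized country: cap of 90,000 t, actual emissions 95,000 t.
# It funds a solar plant abroad that avoids 300 t (as in the example above) and
# buys 5,000 t of credits from a country that over-achieved its own target.
print(compliant(cap=90_000, emitted=95_000,
                cers_earned=300, cers_bought=5_000, cers_sold=0))  # True
print(compliant(cap=90_000, emitted=95_000,
                cers_earned=300, cers_bought=0, cers_sold=0))      # False
```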
The UN also considered an extension of the Kyoto Protocol, the “Doha Amendment”, adopted in 2012, which would have extended the Protocol’s validity until 2020 by covering a second commitment period starting in 2013.
However, the Doha Amendment has never formally entered into force, since it required ratification by at least three-quarters of the parties to the Protocol (144 nations), but only 124 actually ratified it, including Italy.
The United States did not ratify the Kyoto Protocol, causing a great deal of controversy in the public opinion and among the United Nations: the United States made this choice to avoid economic losses.
Despite the decision of the US federal government, some US states spontaneously complied with the Kyoto protocol, by enacting laws to reduce pollution.
Canada’s withdrawal sparked controversy as well: this country ratified the Kyoto Protocol, pledging to reduce greenhouse gas emissions by 6% compared to 1990 levels by 2012; however, it not only failed to achieve this goal, but actually increased its emissions by 17%.
In order to avoid the heavy economic sanctions envisaged in the event of non-compliance with emission reduction obligations, at the end of 2011 Canada withdrew its membership from the Kyoto protocol.
Criticism of the Kyoto Protocol
However, the Kyoto protocol has received many criticisms in several respects.
The fact that developing countries such as China and India were exempt from the obligation to reduce their emissions, despite being responsible for a large share of greenhouse gas emissions, raised many concerns; this is one of the reasons that prompted US President George W. Bush not to ratify the treaty.
The Protocol also concentrated almost all of its efforts on reducing carbon dioxide emissions, neglecting other harmful substances such as hydrofluorocarbons and ozone.
The Kyoto Protocol was accused of establishing short-term goals, while the solution to the problem of global warming requires decades, if not centuries.
The “CERs” also aroused a great deal of criticism on the grounds that they would not reduce greenhouse gas emissions at all, but would instead stimulate the sale of emission rights to the highest bidder, who in turn acquired the right to pollute.
The Paris Agreement
Over the years, many international conferences on climate change have been held in order to assess the goals achieved and to study new solutions to counter this issue.
However, over time climate change has worsened and caused greater damage than before.
After the failed Copenhagen Accord of 2009, whose objectives and assumptions were judged by the international community as “ineffective” and “insufficient”, the Paris Agreement entered into force in November 2016 and was signed by 195 countries, but only 185 actually ratified it.
There are three main goals envisaged in the Paris Agreement:
- avoid the 2-degree increase in global temperature foreseen for the year 2100, setting a maximum threshold increase of 1.5 degrees;
- reduce greenhouse gas emissions;
- fund alternative energy sources.
Every country has the obligation to draw up a yearly detailed report on its own efforts and objectives achieved in terms of reducing greenhouse gas emissions.
Unlike the Kyoto Protocol, the goals to be achieved are not established by the United Nations: each country is free to set its own, without any legal obligations (except that of drawing up annual reports).
There are no distinctions between industrialized and developing nations, and the countries that ratify the treaty will have to contribute to environmental protection, while the Kyoto protocol provided for such obligations only for industrialized nations.
According to the principle of “common but differentiated responsibilities and respective capabilities”, each nation must make a contribution based on its own greenhouse gas emissions and economic capacities, differentiating the objectives and trying to create more effective pollution reduction plans.
As happened in Kyoto, the United States withdrew from the agreement, causing outrage in the international community.
Criticism of the Paris Agreement
Some studies published in the prestigious journal Nature have shown that, as of 2017, the industrialized nations had not yet implemented the policies of the Paris Agreement, or that the measures taken were insufficient.
According to critics, the lack of legal constraints on emission reductions would give governments the possibility of not respecting the treaty, thus questioning the effectiveness of the Paris Agreement.
To overcome this problem, the industrialized nations asked for the intervention of international institutions to monitor greenhouse gas emissions, while the developing nations, in turn, requested and obtained the right for each country to monitor its own emissions independently.
The Agreement has not set a firm date by which CO2 emissions will have to cease or diminish substantially, with the consequence that nations could take too long to reduce air pollution.
Finally, there is the unresolved issue of the international routes of planes and ships: the treaty does not make clear which country should be considered responsible for those emissions.
European laws on environmental protection
To fulfill the commitments made by the ratification of international treaties, the European Union has promulgated laws and directives to preserve the environment, defining certain terms within which the objectives must be achieved.
European Decision 406/2009/EC obliges the member states to reduce greenhouse gas emissions by a certain percentage, compared to the emissions recorded in the year 2005, by 2020: Italy must reduce its own emissions by 13%.
Directive 28/2009 promotes the use of renewable energy sources (wind power, solar power, geothermal power, biomass, etc.), forcing member states to meet a percentage of their energy needs through clean sources by 2020: Italy must reach 17%, but renewable sources already make up about 40% of the total energy requirement.
Furthermore, although the Doha amendment did not come into force, the European Union complies with its provisions, prolonging the period of application of the Kyoto protocol until 2020, by means of Decision 1339/2015.
These directives serve to achieve the 20-20-20 goals, namely a 20% reduction in greenhouse gases compared to the emissions recorded in 1990, 20% of European energy from renewable sources, and a 20% improvement in energy efficiency, all by 2020.
By 2030, the European Union is committed to reducing emissions by 40% compared to the levels registered in 1990, obtaining 32% of total energy from renewable sources, and achieving a 32.5% improvement in energy efficiency.
Having analyzed the objectives set by the European Union, one may ask how many of them, and which, have actually been achieved.
The European Environment Agency is the institution that monitors the impact of human activity on nature. In a 2017 report it established that, although emissions of harmful gases have been greatly reduced in the last few decades, air quality in metropolitan areas is low and concentrations of harmful gases exceed the limits set by law.
Among these gases are ozone (which, although essential in the ozone layer, is harmful to living forms when breathed), nitrogen dioxide and particulate matter, which cause respiratory diseases and serious damage to the respiratory system, if inhaled for prolonged periods.
Italian laws on environmental protection
Laws on environmental protection can be found in the Italian legal system as well, besides the application of European norms and international treaties.
One of the first Italian regulations on environmental protection is the law 1497 of 1939, which protects natural landscapes, as well as gardens, villas and parks that “stand out for their uncommon beauty”, punishing whoever damages them.
In 1948 the Italian Constitution was promulgated, and in article 9 there’s a provision for the “protection of the landscape”, which establishes that: “The Republic promotes the development of culture and scientific and technical research. It protects the landscape and the historical and artistic heritage of the Nation.”
In 1966 the Parliament approved one of the first laws against air pollution, law 615, also known as the “anti-smog law”, which regulated the emissions of cars and of thermal and industrial plants; these were not to exceed a certain emission threshold, expressed in kilocalories per hour (the amount varied according to the size of the city and the type of plant considered), under penalty of a monetary fine.
In 1986 the law 349 established the Ministry of the Environment, which still deals with environmental protection.
Law 61/1994 established the ARPAs (Agenzie Regionali per la Protezione dell’Ambiente, Regional Agencies for Environmental Protection), which have the task of monitoring, processing and disseminating data on the state of health of the various regions.
In 2008, thanks to law 133, ISPRA (Istituto Superiore per la Protezione e la Ricerca Ambientale, Higher Institute for Environmental Protection and Research) was established, which has functions similar to those of ARPAs, and cooperates with the European Environment Agency and other international institutions.
As for criminal law, in 2015 a new section was added to the Italian Penal Code: Title VI bis, titled “On crimes against the environment”, which extends the cases of article 452 (“crimes against public health”).
Article 452-bis punishes with imprisonment from two to six years and a fine from 10 thousand to 100 thousand euros “anyone who illegally causes significant and measurable impairment or deterioration of: water or air, or of extended portions […] of the soil and subsoil […]”.
Article 452-ter punishes with imprisonment anyone who causes injuries to third parties following pollution actions.
Article 452-quater provides for the crime of “environmental disaster”, that is “the irreversible alteration of the balance of an ecosystem”.
To counter the ongoing climate change, in 2016 the Ministry of the Environment approved the “National Strategy for adaptation to climate change”, in order to minimize the damage caused by climate change, protecting the health of citizens and preserving the territory. The strategy also promotes the establishment of a “permanent information campaign”, which deals with informing citizens, and of a “National Observatory”, which identifies territorial priorities in terms of environmental protection.
This shows that even Italy is working to hinder the causes and effects of climate change and pollution.
Environmental associations, political parties and Greta Thunberg
There are hundreds of environmental associations that are fighting to protect the environment and preserve nature, but WWF and Greenpeace are among the oldest and best known.
In 1961 the WWF (World Wildlife Fund) was founded, and since the 1990s it has also dealt with issues concerning pollution and climate change.
In the early 1970s, following protests against US nuclear tests carried out in Alaska, Greenpeace was established. Like the WWF, the association initially aimed to safeguard biodiversity, and from the early 1990s it also focused on the issues of sustainable development, clean energy and climate change.
In 1980 Legambiente (a portmanteau of the words “lega”, which means “league”, and “ambiente”, “environment”) was established in Italy, with the same objectives as the other environmental associations. Following the Chernobyl disaster of 1986, this organization promoted the referendum that led to the ban on nuclear energy in Italy in 1987.
Between the 1970s and 1980s, environmentalist parties (so-called “green parties”) were founded in many nations; together with the associations, they promoted laws to reduce air pollution and protect biodiversity, in addition to organizing information campaigns for citizens. These parties, although with different names, leaders and slogans, still exist today.
The action of environmental associations and environmentalist political parties has had enormous influence on societies in every part of the world, and therefore there is currently an ever greater awareness and sensitivity towards environmental issues and climate change.
The group most concerned about environmental issues is young people, who will inherit the planet and will have to cope with climate change and its consequences.
Currently in Europe there are peaceful demonstrations in defense of the environment, led by the Swedish student Greta Thunberg, who first decided to call a “school strike” for the climate: this initiative has aroused the attention of the media, politicians and even the United Nations.
Greta Thunberg delivered a speech at the United Nations Climate Change Conference in Katowice (Poland), in which she highlighted how climate change is causing extinction and the loss of animal and plant biodiversity. Recently she also had a meeting with Pope Francis, with whom she briefly discussed the future of the planet.
Thanks to widespread media coverage, Greta’s initiatives have highlighted the problems linked to climate change, which are destined to worsen quickly if humanity does not take immediate action, and have pressed world leaders to ratify and promptly execute all the treaties, agreements, resolutions, directives and amendments that they have signed.
Although we are risking an environmental disaster, fortunately the restoration of our planet is still possible, but only if everyone cooperates to achieve this goal.
Quiz 12: The Costs of Production
Accounting profit: Accounting profit refers to the financial gain given by the difference between total revenue and total (explicit) cost. A person spends $10 to prepare lemonade and sells it for $60; hence, the accounting profit of the person is $60 - $10 = $50.
Economic profit: Economic profit refers to the profit that remains after the opportunity cost of the chosen action is also subtracted. If, instead, the person had mowed his neighbor's lawn, he might have earned $40. Hence, the economic profit is $50 - $40 = $10.
Hence, option 'a' is correct.
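As a quick numerical check of this answer, the sketch below simply restates the arithmetic using the figures given above ($60 of revenue, $10 of explicit cost, $40 of forgone lawn-mowing earnings):

```python
# Accounting profit counts only explicit costs; economic profit also
# subtracts the opportunity cost of the next best alternative.
revenue = 60           # lemonade sales ($)
explicit_cost = 10     # ingredients ($)
opportunity_cost = 40  # forgone earnings from mowing the lawn ($)

accounting_profit = revenue - explicit_cost                   # 50
economic_profit = revenue - explicit_cost - opportunity_cost  # 10

print(accounting_profit, economic_profit)  # 50 10
```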
Opportunity cost: In economics, the opportunity cost refers to the benefits given up by an individual or business when one alternative is chosen over another.
Fixed cost: It refers to the constant cost of production, which remains the same irrespective of changes in the level of output.
Variable cost: It refers to the cost that changes with the level of output; it is high at higher levels of output and low at lower levels. The sum of fixed cost and variable cost is equal to the total cost.
Marginal cost: It is the additional cost of producing one additional unit of output.
Average total cost: It results when the total cost is divided by the number of units produced.
a. The amount given up in order to take an action is equivalent to the value of the next best alternative; this is the opportunity cost of the action chosen. Hence, the first blank is filled with opportunity cost.
b. When marginal cost is below average total cost, average total cost tends to fall; when marginal cost is above average total cost, average total cost tends to rise. Hence, the second blank is filled with average total cost.
c. A cost that remains fixed whether the quantity produced increases or decreases is a fixed cost, for instance plant and machinery cost, installation cost, rent, and insurance premiums. Hence, the third blank is filled with fixed cost.
d. A cost that increases with the number of units produced is a variable cost; it depends on the quantity produced, for instance labor cost and material cost. In the short run, cream and sugar are intermediate goods used in the production of ice cream: as the demand for ice cream increases, the demand for cream and sugar increases and the industry must spend more on these goods. The cost of cream and sugar therefore rises with ice cream output, which implies that these costs are variable costs. Hence, the fourth blank is filled with variable cost.
e. The gap between total revenue and total cost is equal to the profit; that is, profit equals total revenue minus total cost. Hence, the fifth blank is filled with total cost.
f. The cost of producing an additional unit of output is called the marginal cost. Hence, the sixth blank is filled with marginal cost.
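The rule in part (b) can be verified with a small cost schedule. The numbers below are invented purely to illustrate the relationship: average total cost falls while marginal cost is below it and starts rising once marginal cost exceeds it.

```python
# Illustrative total-cost schedule: a fixed cost plus a variable cost
# that eventually rises steeply (diminishing returns).
total_cost = [50, 60, 68, 78, 92, 112, 140]  # cost of producing 0..6 units

for q in range(1, len(total_cost)):
    mc = total_cost[q] - total_cost[q - 1]   # marginal cost of unit q
    atc = total_cost[q] / q                  # average total cost at q units
    print(f"q={q}  MC={mc:>3}  ATC={atc:6.2f}")
# ATC falls as long as MC < ATC, and begins to rise once MC > ATC.
```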
Total revenue, total cost, and profit are interrelated. Economists assume that the goal of a firm is to maximize profit. Profit is a firm's total revenue minus its total cost. Total revenue is the amount that the firm receives for the sale of its output. Total cost is the amount that the firm pays to buy inputs. Thus, in order to determine the profit of a firm, one should know the total revenue and the total cost. Therefore, they are interrelated.
Literature review and Introduction
1.1 Hearing loss
The ability to detect vibrations and perceive sound through the ears is termed “hearing”. Any individual unable to hear as well as someone with normal hearing (hearing thresholds of 25 dB or better) is said to have hearing loss (http://www.who.int/mediacentre/factsheets/fs300/en/, 2017).
Table: Classification of Severity of Hearing Impairment (http://www.rehabcouncil.nic.in/writereaddata/hi.pdf, 2012)
Niparko (2012) classified hearing loss as:
- conductive hearing loss, caused by lesions in the tympanic membrane, external auditory canal, or middle ear that prevent the transmission of sound to the inner ear;
- sensorineural hearing loss, caused by lesions present either in the inner ear or the auditory nerve; and
- mixed loss, caused by chronic infection, extreme head injury, or genetic disorders, or occurring when a transient hearing loss is accompanied by sensorineural hearing loss (Listen Hear! New Zealand, 2017).
Hearing loss is among the most frequent sensory impairments, affecting 1 in 500 newborns and 1 in 300 children by the age of 4. Approximately 70% of genetic hearing loss is non-syndromic (i.e., hearing loss is the only phenotype) and 30% syndromic (e.g., with additional clinical findings such as the changes in pigmentation of the hair, skin, and eyes seen in individuals with Waardenburg syndrome) (Peters et al., (2002)).
In New Zealand, hearing loss is relatively common, affecting one in six New Zealanders. In 2016, the number of people with hearing loss was estimated at 880,350, representing ~18.9% of the population. Its prevalence amongst males (472,961 people) was also greater than amongst females (407,388) (Listen Hear! New Zealand, 2017) (https://www.nfd.org.nz/our-work/education-and-prevention/).
Table: The prevalence of hearing loss among the population aged ≥65 years, by Region (DJ Exeter, B Wu, AC Lee, & Searchfield, 2015).
|People living in rural areas 2013 (%)|
|Bay of Plenty||8,794||12,033||15,841||80.13||18.33|
1.2 Anatomy of the human ear
The ear functions as a sensory system to detect sound. It is vulnerable to thermal injury, not only because of its protruding morphology and site but also because of its thin subcutaneous tissue layers and very thin skin (Bos, Doerga, Breugem, & van Zuijlen, (2016)). The human ear is anatomically composed of three parts: the outer, middle, and inner ear. The outer and middle ear work to transmit sound waves, while in the inner ear sound stimuli are transduced into nerve impulses carried by the auditory nerve (Alters, (2000)). The inner ear contains the cochlea, which holds sensory hair cells that can be damaged by genetic or environmental factors and are unable to regenerate naturally; thus, hearing has to be restored by a cochlear implant (Introduction to Psychology, (2008)).
Figure: Parts of an ear (“http://keywordsuggest.org/gallery/381978.html,”).
The inner ear houses the sensory organs required for hearing (the cochlea) and balance (the semi-circular canals). The cochlea is a snail-shaped bony structure filled with two fluids, endolymph and perilymph. The sensory receptor, called the organ of Corti, is embedded inside the cochlea and holds the hair cells, which are the nerve receptors for hearing (“http://www.asha.org/public/hearing/Inner-Ear/,”).
1.3 Underlying causes of hearing loss
The causes of hearing loss can be congenital or acquired, sudden or progressive, and temporary or permanent. The causes of hearing loss according to the World Health Organization (2015) are as follows:
1) Congenital causes – due to hereditary or non-hereditary factors, prenatal exposure in utero to maternal disease or inappropriate drug use such as aminoglycosides, cytotoxic drugs, antimalarial drugs, and diuretics; severe jaundice in the neonatal period, maternal rubella, syphilis or birth asphyxia, and low birth weight resulting from premature birth.
2) Acquired causes-
- Noise exposure / Noise-induced hearing loss (NIHL) – single instances of extreme noise, and prolonged exposure to noise (machinery and explosions) can lead, respectively, to sudden or gradual sensorineural hearing loss, because of damage to the sensory cells. NIHL is commonly associated with occupational-related noise in industries such as agriculture, manufacturing and construction and may occur with noisy leisure pursuits.
- Ageing – presbycusis, also called age-related hearing loss, occurs progressively with age due to degeneration of the cochlea and/or auditory nerve, aggravated by prolonged exposure to loud sound;
- Diseases and disorders – hearing loss resulting from a variety of different conditions, such as chronic ear infections, autoimmune disorders, measles, mumps, meningitis, otitis media, drug-resistant tuberculosis, head or ear injury, and cancer;
- Physical trauma –caused by injuries either to the ear or the brain.
- Cerumen accumulation –build-up of cerumen (earwax) or other foreign bodies in the ear canal, can lead to temporary hearing loss (Listen Hear! New Zealand, 2017) (http://www.who.int/mediacentre/factsheets/fs300/en/, 2017).
1.3a Hearing loss at cellular level
Cochlear cells are also damaged when traumatized by auricular overstimulation, which compromises their function. Clinical implications of acoustic trauma often include auditory symptoms, such as hearing loss, reduced hearing sensitivity, or problems with clarity. Sometimes patients also experience symptoms like tinnitus (ringing in the ear unrelated to outside sound) and hyperacusis (intolerance to environmental sound). Tinnitus affects 10-12% of the general population in New Zealand, and about 1% of people have a disturbing degree of severity. Patients with a primary complaint of tinnitus showed a prevalence of hyperacusis of ~40%, and those with a primary complaint of hyperacusis showed a prevalence of tinnitus as high as 86% (Yang et al., (2016)) (“http://www.tinnitus.org.nz/,”) (Baguley, (2003)).
Cochlear damage by noise can be due to mechanical, biological, and molecular stresses (these include inflammation, oxidative stress, energy exhaustion and excitotoxicity). These stress responses further aggravate cell death by apoptosis and necrosis. Investigations in the past have identified multiple genes associated with transcriptional control and various molecular pathways, such as the JNK pathway, phospho-MEK1/ERK1/2/p90, p38/MAPK signalling pathway, RSK signalling pathway, etc (Yang et al., (2016)).
Congenital deformities, such as an underdeveloped or absent external ear (microtia), add to the conditions caused by accidents or disease. The most conventional medical care is to substitute the damaged ear with a prosthesis or cartilage (silicone ear implant). However, these techniques are not ideal: they are expensive and require patient-specific customization. Therefore, 3D printing is becoming more relevant in auricular research as a way to create tissue-engineered constructs or prostheses (Shafiee & Atala, (2016)).
Non-syndromic hearing loss follows a Mendelian inheritance pattern for a single gene mutation. A mutation in the transcription factor TFCP2L3 leads to non-syndromic autosomal-dominant age-related hearing loss (DFNA28), and the function of this factor is still a mystery (Peters et al., (2002)). About 50% of cases of hearing impairment arising from the disruption of gap junctions are attributed to another gene, Connexin 26 (GJB2) (Shalit & Avraham., (2008)). Autosomal-recessive (AR) inheritance accounts for 80% of non-syndromic genetic hearing loss and is normally prelingual (present at birth), while autosomal-dominant (AD) inheritance accounts for the remaining 20% and is mostly post-lingual, typically leading to progressive sensorineural hearing loss (SNHL) of variable severity, usually beginning at 10 to 40 years of age (Peters et al., (2002)). X-linked (designated “DFNX#”) and mitochondrial inheritance account for 1% to 2% of non-syndromic hearing loss. Patients with mitochondrial inheritance tend to develop progressive SNHL beginning at 5-50 years, with a variable degree of hearing loss (Chang, (2015)). Examples include genes encoding components of the hair cells and the nerves (PMCA2 and otoferlin), cytoskeletal proteins (myosin VI, myosin VIIA, and myosin XVA), and proteins important for potassium recycling in the organ of Corti (connexin, KCNQ4, Pendrin, and Claudin 14) (Shalit & Avraham., (2008)).
1.4 Models used in auditory research and current technology
Animal models such as mice (Mus musculus), hamsters (Cricetinae), rats (Rattus rattus) and zebrafish (Danio rerio) are used to study the genetics of hearing loss. An estimated 115 million or more animals are used in laboratory experiments each year worldwide; however, the precise number remains unknown (“http://www.hsi.org/,”).
In particular, a Tfcp2l3 mutant zebrafish developed in 2011 showed impaired otic development and hearing ability (Han et al., (2011)). Knock-out or mutant mice have been instrumental in illuminating the functions and processes of the inner ear, including membrane formation, signalling pathways, growth factors, and hair cell projections (stereocilia and kinocilia) (Friedman, Dror, & Avraham, (2007)).
With advances in medical technology, the number of animals used in research each year has increased. Besides ethical concerns, the use of animals in preclinical drug testing is very laborious, time consuming, and expensive. Such disadvantages have propelled researchers to find new substitutes that compensate for these drawbacks and reduce the number of animals used (Sánchez-Romero, Schophuizen, Giménez, & Masereeuw, (2016)). Another approach to re-establishing hearing could be “stem cell therapy”, which involves the surgical placement of stem cells within the cochlea, allowing them not only to integrate with the remaining cochlear cells but also to develop into hair cells (https://hearinglosscure.stanford.edu/research/stem-cell-therapy/). Researchers at Stanford University studied expression of the ATOH1 gene, which was found to be active in developing hair cells. Their findings emphasized that activation of ATOH1 in the human ear, using various tools of gene therapy, could possibly induce cells to develop into new hair cells (https://hearinglosscure.stanford.edu/research/gene-therapy/). The researchers have also pioneered a new technology, Volumetric Optical Coherence Tomography Vibrometry, that enables non-invasive imaging of sound-induced vibrations within the cochlea (vibrations that are unusually small (<1 nm) but critical to normal hearing) (https://hearinglosscure.stanford.edu/research/targeted-neural-stimulation/).
Age-related degeneration of the sensory cells in the mammalian cochlea produces severe symptoms that worsen later in life (Yang et al., (2016)). Hence, it is important to understand the underlying cellular and molecular mechanisms in order to prevent auditory loss, especially through gene expression and protein analysis.
1.5 Vitelline Membrane (layer) VM
The vitelline membrane of hen eggs, separating the yolk from the egg white, consists of two distinct layers of different compositions and structures, the inner layer (lamina perivitellina) and the outer layer (lamina extravitellina). The two vitelline layers are synthesised in two different organs: the inner layer is formed in the ovary before ovulation, whereas the outer layer is formed in the upper oviduct after ovulation (Kido, Morimoto, Kim, & Doi, (1992)).
Figure: Transmission electron microscopy (TEM) of VM: OVM: outer VM; CM: continuous membrane; IVM: inner VM. (Li et al., (2017))
The embryo of Caenorhabditis elegans is surrounded by a concealed inner VM and a distinct outer chitinous eggshell. When only the eggshell was gently perforated with a laser, the membrane resealed over time; however, it lost this resealing ability when larger holes were made in the eggshell. This leads to impaired gastrulation, in which the gut precursor cells fail to migrate and embryonic development arrests. This underscores the critical role of the intact vitelline membrane in safeguarding the “micro-environment” required for pattern formation throughout the embryo (Schierenberg & Junkersdorf, (1992)).
In another study, the VM domain was characterized by the presence of three characteristically spaced cysteine residues (CX7CX8C). The VM proteins, through their VM domains, become integrated into large disulphide-linked networks during late oogenesis; such linkages are frequently utilized in extracellular matrices to stabilize other non-covalent interactions. The regulation of disulphide bond formation, balancing early elasticity with late stability in the extracellular region, may be beneficial for proper vitelline membrane morphogenesis (Wu, Manogaran, Beauchamp, & Waring, (2010)).
In Drosophila melanogaster, the eggshell includes a vitelline membrane that undergoes cross-linking during oogenesis, and the vitelline membrane proteins are bound together by disulphide links. It has been observed that treating the VM with reducing agents leads to membrane solubilization. Besides the VM’s structural role, it may constitute a repository for products of follicle cells involved in embryonic patterning. Vitelline membrane defects have also been detected in some mutants of the germ-line dependent genes fs (1) polehole [fs (1) ph] and fs (1) Nasrat [fs (1) Nas]. Mutations in these genes fall into two major phenotypic classes: either a fragile eggshell with consequent early arrest of embryonic development, or defects only at the termini of the embryo, characteristic of genes involved in the Tor signalling pathway (Cernilogar, Fabbri, Andrenacci, Taddei, & Gargiulo, (2001)).
1.6 Vitelline Membrane Outer Protein 1 (VMO1) and history
VMO1 is one of the proteins of the outer layer of the egg vitelline membrane and was first identified by Back et al. (1982). It is accompanied by other proteins such as lysozyme, ovomucin, and a second vitelline membrane outer protein (VMO2), which was described by Kido et al. (1992) and was later found to be 100% identical to mature β-defensin-11. All the VM proteins participate in, and are responsible for, maintaining the structure of the membrane (Guérin-Dubiard & Nau, (2007)) (Mann, (2008)).
Schäfer et al. (1998) indicated that in eggs stored under non-refrigerated conditions the VMO1 and VMO2 proteins disintegrate, leading to vitelline membrane distortion. This finding may help explain why Guérin-Dubiard et al. (2006) were able to detect VMO1 in egg white for the first time. They determined the molecular weight of VMO1 to be 17.6 kDa, consistent with Schäfer et al.'s SDS-PAGE analysis. VMO1 was also suggested to have a pI near 10, since it was found in the alkaline area of the 2-D gel analysis (Guérin-Dubiard & Nau, (2007)). Chromatographic analysis identified VMO-1 as a spot in conjunction with lysozyme; however, there is no experimental evidence for interactions between these two proteins in egg white (Guérin-Dubiard et al., (2006)).
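As an aside, physicochemical values of the kind cited here (a molecular weight of ~17.6 kDa and a pI near 10) can also be approximated directly from a protein sequence. The sketch below uses Biopython's ProtParam module on a short placeholder sequence; the sequence shown is not the real VMO1 sequence, and the computed values are theoretical estimates rather than the gel-derived figures quoted above.

```python
# Rough sequence-based estimates of molecular weight and isoelectric point.
# Requires Biopython (pip install biopython). The sequence below is a dummy
# placeholder, NOT the actual VMO1 sequence.
from Bio.SeqUtils.ProtParam import ProteinAnalysis

dummy_sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQVKVKALPDAQFEVVHSLAKWKR"

analysis = ProteinAnalysis(dummy_sequence)
print(f"Molecular weight: {analysis.molecular_weight() / 1000:.1f} kDa")
print(f"Theoretical pI:   {analysis.isoelectric_point():.2f}")
```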
The crystal structure of VMO1 was originally determined using the multiple isomorphous replacement anomalous scattering (MIRAS) method at 3 Å resolution. The main chain of this protein folds into three β-sheets that form Greek key motifs, and sequence analyses revealed three repeats of a 53-residue component corresponding to these Greek key motifs. VMO-I could also synthesize N-acetyl chito-oligosaccharides (n = 14 or 15) from hexasaccharides of N-acetylglucosamine; this activity is similar to the transferase activity of lysozyme but lacks the hydrolysis activity. The true physiological function remains poorly understood (Shimizu, Vassylyev, Kido, Doi, & Morikawa, (1994)).
Figure: Ribbon representation of VMO-I
1.6a Chicken VMO1
Chick embryo development comprises three phases. During days 0 to 7, the germ layers (ectoderm, mesoderm, and endoderm), responsible for tissue and organ differentiation, are formed during gastrulation; between days 3 and 7, functional organs and extra-embryonic membranes (amnion, allantois, chorion, and yolk sac) develop. The second phase, from days 8 to 18, marks the development of the chorio-allantoic membrane (CAM) and the completion of the embryo; at day 10, the chick embryo is fully formed. The final phase, days 19-21, represents emergence (Cordeiro & Hincke, (2015)).
Proteins present in the eggshell membranes (ESMs) of fertilized eggs on particular days of chick embryo development are represented in the figure below. The black circles correspond to the presence of the VMO1 gene, emphasizing its role in extracellular structure organization during developmental stages 2 and 3. The ESMs are polymeric scaffolds that accelerate embryo development and offer protection against invading viruses (Cordeiro & Hincke, (2015)).
Figure: (Cordeiro & Hincke, (2015))
Table: Heat-map illustrating the comparative analysis of the ESM proteins with different characters traced in the fertilized versus unfertilized eggs at various days (0, 3, 7, 11, 15 and 19) of incubation (Cordeiro & Hincke, (2015)).
Intensity of red colour depicts the fold increase in abundance of ESM proteins within fertilized eggs in comparison to the unfertilized condition. Intensity of green illustrates the fold increase in abundance of ESM proteins in unfertilized eggs, in comparison to embryonated eggs. Intensity of grey indicates ESM proteins that are present in both fertilized and unfertilized conditions at a similar level. White cells in the figure indicate the absence of particular proteins at that specific day of incubation. The article also confirms the expression of the VMO1 gene at all stages, amongst 17 other genes marked by a circle (Cordeiro & Hincke, (2015)).
Table: Official gene symbols correspond to eggshell membranes proteins from fertilized eggs (Cordeiro & Hincke, (2015))
Lee et al. (2015) analysed the expression pattern and functional activity of VMO-1 in the laying hen oviduct using reverse transcription polymerase chain reaction (RT-PCR), quantitative RT-PCR (qRT-PCR), and RNA interference (RNAi). The group inferred that microRNAs (miRNAs) such as miR-1651-3p, miR-1552-3p, and gga-miR-1623 influence the expression of VMO-1 through its 3′-UTR, suggesting post-transcriptional regulation. VMO-1 gene knockdown experiments have also revealed repression of ovomucin upon VMO1 silencing, while other studies have identified estrogen as an inducing factor for VMO-1 mRNA expression in vitro.
Figure: Expression of VMO-1 in chickens.
In the study by Lee et al. (2015), the expression of VMO-1 was examined in numerous organs of both male and female chickens, as seen in panel A; the VMO1 gene was highly expressed in the female oviduct. RT-PCR and qRT-PCR analyses of the chicken oviduct, comprising the infundibulum, isthmus, magnum, and shell gland (panel B), localized the expression of this gene to the magnum.
Lim and Song (2015) proposed that VMO1 makes a crucial contribution to the morphogenesis of the oviduct in the presence of estrogen and during moulting; they also affirmed that the onset of VMO1 expression is associated with carcinogenesis in laying hens. The CA-125 biomarker used for diagnosing early-stage ovarian cancer in women was cross-reactive with biomarkers for ovarian cancer in laying hens.
1.6b Mouse/rat Vmo1
Differentially expressed mRNAs confer on cells the specialized functions and structures that may be confined to particular cell types or present in small copy numbers. The intricate morphology of the mammalian inner ear makes it a complicated model in which to detect such mRNAs. For instance, the sensory cells of the inner ear, the inner and outer hair cells of the organ of Corti, make up less than ∼5% of this tissue. An average cell expresses about 250,000 mRNAs, so a typical low-abundance message specific to the hair cells would be expected at a rate of roughly 1 in 1 million transcripts. Before its discovery, the mouse Vmo1 gene was originally annotated as an in silico-predicted gene, GM741 (GeneID 327956; RefSeq XM_282996) (Peters et al., (2007)).
Table: Molecular pathways identified (Yang et al., (2016)).
| Molecular pathways | Rat cochlea | Mouse cochlea |
| --- | --- | --- |
| Complement and coagulation cascades | + | – |
| Cytokine receptor interaction | + | + |
| Chemokine signalling pathway* | + | + |
| Cell adhesion molecules | + | – |
| Toll-like receptor signalling pathway | + | + |
| NOD-like receptor signalling pathway | + | + |
| p53 signalling pathway | + | – |
| Adipocytokine signalling pathway | + | – |
| Fc gamma R-mediated phagocytosis | + | – |
| Cytosolic DNA-sensing pathway | – | + |
| RIG-I-like receptor signalling pathway | – | + |
Evidence for the presence of Vmo1 in the mouse ear came from two experiments conducted by Peters et al. (2007). First, mRNA was obtained from adult liver, kidney, pancreas, retina, brain, testes, and inner ear, along with mouse genomic DNA; an RT-PCR experiment then revealed the presence of Vmo1 in the inner ear. The evidence was further supported by in situ hybridization, performed to detect the tissue-specific localization of the mRNA, which was found in Reissner's membrane.
Figure: In situ hybridization in cross sections of the mouse inner ear. (A) Antisense probe for Vmo1 (B) Control probe for Vmo1.
Figure: RT-PCR experiments confirming expression of predicted transcripts in the inner ear.
1.6c VMO1 and miscellaneous
The presence of VMO1 has also been reported in various exocrine glands and/or secretions, for example the pancreas, urine, breast, cerebrospinal fluid, and respiratory secretions, and also in minuscule quantities in human tears, while comparatively higher amounts are found in camel tears (Wang et al., (2014)).
Proteins present in the tear film often play an important defensive role in thwarting pathogens, while maintaining the integrity of the tear film and modulating wound healing. In recent research, proteomics studies have highlighted potential biomarkers among the tear proteins for systemic and ocular diseases. Camels often live under harsh environmental conditions, and a large amount of a VMO1 homolog has been demonstrated in camel tears (Chen et al., (2011)). Thus, characterizing their tear components can shed light on the mechanisms underlying tear film stabilization and on other variants of this gene's product.
Figure: Interaction model between VMO1 (red) and lysozyme (blue). This model, generated using computer-assisted programs, was beneficial in deciphering how these two molecules associate through various amino acid interactions. Under this particular model, Glutamine (Q) 153, Glutamic acid (E) 110, and Q 70 of VMO1 bind to Arginine (R) 59, R59 or Serine (S) 100, and Aspartic acid (D) 85 of lysozyme through hydrogen bonding (Wang et al., (2014)).
Intraembryonic hematopoietic stem cells cluster in the floor of the dorsal aorta and in the vitelline and umbilical arteries. Here, Notch signalling has been demonstrated to be functionally important, particularly for yolk sac-derived haematopoiesis; it acts as a mediator balancing the proliferation and differentiation of progenitors (lineage-restricted intermediates) (Cortegano et al., (2014)).
A VMO1 homolog was found to be one of the most highly transcribed genes in two dog samples, from a Bichon Frise and a Golden Retriever, at 3548 and 2723 FPKM (fragments per kilobase of exon per million fragments mapped), respectively (Galibert, Azzouzi, Quignon, & Chaudieu, (2016)).
One study used two bioinformatics tools specifically to determine the functional relevance of the differentially expressed genes (DEGs): the Database for Annotation, Visualization, and Integrated Discovery (DAVID v6.7) and Ingenuity Pathway Analysis (IPA). Among the DEGs identified in the rat cochleae, over 400 genes were upregulated, one of which was Vmo1 with a 19.08-fold change (Yang et al., (2016)).
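The fold-change figure quoted for Vmo1 can be related to underlying expression values with a short calculation. The sketch below is illustrative only: the count values are invented, and real DEG pipelines (such as those feeding DAVID or IPA) also apply normalization and statistical testing before reporting fold changes.

```python
import math

# Hypothetical normalized expression values (noise-exposed vs control cochleae).
noise_exposed = [410.0, 385.0, 402.0]
control       = [21.0, 20.5, 21.5]

def fold_change(treated, baseline):
    """Ratio of mean expression in the treated group to the control group."""
    return (sum(treated) / len(treated)) / (sum(baseline) / len(baseline))

fc = fold_change(noise_exposed, control)
print(f"fold change = {fc:.2f}, log2 fold change = {math.log2(fc):.2f}")
# A gene reported at ~19-fold, like Vmo1 here, would show fc close to 19.
```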
Comparative analysis of the chicken VMO-1 protein-coding sequence and the human, mouse, rat, and bovine VMO-1 proteins using a multiple sequence alignment tool revealed a high degree of homology: 55%, 53%, 48%, and 54%, respectively (Lee et al., (2015)).
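The homology percentages quoted above come from a multiple sequence alignment; once sequences have been aligned, percent identity reduces to counting matching positions. The sketch below shows that final counting step on two short, already-aligned dummy fragments (not real VMO-1 sequences); producing the alignment itself would normally be done with a dedicated alignment tool such as the one used in the cited study.

```python
def percent_identity(aligned_a, aligned_b):
    """Percent identity between two pre-aligned sequences of equal length.
    Positions where either sequence has a gap ('-') are not counted as matches."""
    assert len(aligned_a) == len(aligned_b)
    matches = sum(1 for a, b in zip(aligned_a, aligned_b)
                  if a == b and a != "-")
    return 100.0 * matches / len(aligned_a)

# Dummy aligned fragments, purely for illustration.
chicken_fragment = "MKAVLLTLLV-ACAQA"
human_fragment   = "MKTVLLALLVGACTQA"
print(f"{percent_identity(chicken_fragment, human_fragment):.1f}% identity")
```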
Alters, S. ((2000)). Biology: understanding life (3rd Edition, google books. ed.). Sudbury, MA: Jones and Bartlett.
Baguley, D. M. ((2003)). Hyperacusis. Journal of the Royal Society of Medicine, 96(12), 582-585.
Bos, E. J., Doerga, P., Breugem, C. C., & van Zuijlen, P. P. ((2016)). The burned ear; possibilities and challenges in framework reconstruction and coverage. Burns, 42(7), 1387-1395.
Cernilogar, F. M., Fabbri, F., Andrenacci, D., Taddei, C., & Gargiulo, G. ((2001)). Drosophila vitelline membrane cross-linking requires the fs (1) Nasrat, fs (1) polehole and chorion genes activities. Development genes and evolution, 211(12), 573-580.
Chang, K. W. ((2015)). Genetics of Hearing Loss—Nonsyndromic. Otolaryngologic Clinics of North America, 48(6), 1063-1072.
Chen, Z., Shamsi, F. A., Li, K., Huang, Q., Al-Rajhi, A. A., Chaudhry, I. A., & Wu, K. ((2011)). Comparison of camel tear proteins between summer and winter. Molecular vision, 17, 323-331.
Cordeiro, C. M., & Hincke, M. T. ((2015)). Quantitative proteomics analysis of eggshell membrane proteins during chick embryonic development. Journal of proteomics, 130, 11-25.
Cortegano, I., Melgar-Rojas, P., Luna-Zurita, L., Siguero-Alvarez, M., Marcos, M. A., Gaspar, M. L., & De la Pompa, J. L. ((2014)). Notch1 regulates progenitor cell proliferation and differentiation during mouse yolk sac hematopoiesis. Cell death and differentiation, 21(7), 1081-1094.
DJ Exeter, B Wu, AC Lee, & Searchfield, G. (2015). The projected burden of hearing loss in New Zealand (2011-2061) and the implications for the hearing health workforce. The New Zealand Medial Journal, Volume 128, Number 1419.
Friedman, L. M., Dror, A. A., & Avraham, K. B. ((2007)). Mouse models to study inner ear development and hereditary hearing loss. International Journal of Developmental Biology, 51(6-7), 609-631.
Galibert, F., Azzouzi, N., Quignon, P., & Chaudieu, G. ((2016)). The genetics of canine olfaction. . Journal of Veterinary Behavior: Clinical Applications and Research, 16, 86-93.
Guérin-Dubiard, C., & Nau, F. ((2007)). Minor proteins. Bioactive Egg Compounds, 93-98.
Guérin-Dubiard, C., Pasco, M., Mollé, D., Désert, C., Croguennec, T., & Nau, F. ((2006)). Proteomic analysis of hen egg white. Journal of Agricultural and Food Chemistry, 54(11), 3901-3910.
Han, Y., Mu, Y., Li, X., Xu, P., Tong, J., Liu, Z., & Meng, A. ((2011)). Grhl2 deficiency impairs otic development and hearing ability in a zebrafish model of the progressive dominant hearing loss DFNA28. Human molecular genetics, 20(16), 3213-3226.
http://www.asha.org/public/hearing/Inner-Ear/. American Speech-Language-Hearing Association (ASHA).
http://www.hsi.org/. Humane Society International.
http://www.rehabcouncil.nic.in/writereaddata/hi.pdf. (2012). Hearing Impairment. In M. R. R. (Editor) (Ed.), Status of Disability in India. India: Rehabilitation Council of India.
http://www.who.int/mediacentre/factsheets/fs300/en/. (2017). Deafness and hearing loss.
https://hearinglosscure.stanford.edu/research/gene-therapy/. Stanford Initiative to Cure Hearing Loss.
https://hearinglosscure.stanford.edu/research/stem-cell-therapy/. Standford Initiative to Cure Hearing Loss.
https://hearinglosscure.stanford.edu/research/targeted-neural-stimulation/. Standford Initiative to Cure Hearing Loss.
https://www.nfd.org.nz/our-work/education-and-prevention/. The National Foundation for the Deaf Inc. .
Introduction to Psychology. ((2008)). (2nd Edition, Google books ed.). Cape Town: UCT Press.
Kido, S., Morimoto, A., Kim, F., & Doi, Y. ((1992)). Isolation of a novel protein from the outer layer of the vitelline membrane. Biochemical journal, 286(1), 17-22.
Lee, S. I., Ji, M. R., Jang, Y. J., Jeon, M. H., Kim, J. S., Park, J. K., & Byun, S. J. ((2015)). Characterization and miRNA-mediated posttranscriptional regulation of vitelline membrane outer layer protein I in the adult chicken oviduct. In Vitro Cellular & Developmental Biology-Animal, 51(3), 222-229.
Li, Q., Li, W., Li, X., Liu, L., Zhang, Y., Guo, Y., & Zheng, J. ((2017)). The Distribution Characteristics and Applications for Maternal Cells on Chicken Egg Vitelline Membrane. Scientific Reports, 7., 1-11.
Lim, W., & Song, G. ((2015)). Differential expression of vitelline membrane outer layer protein 1: hormonal regulation of expression in the oviduct and in ovarian carcinomas from laying hens. Molecular and cellular endocrinology, 399, 250-258.
Listen Hear! New Zealand. (2017). Retrieved from Australia:
Mann, K. ((2008)). Proteomic analysis of the chicken egg vitelline membrane. Proteomics, 8(11), 2322-2332.
Peters, L. M., Anderson, D. W., Griffith, A. J., Grundfast, K. M., San Agustin, T. B., Madeo, A. C., & Morell, R. J. ((2002)). Mutation of a transcription factor, TFCP2L3, causes progressive autosomal dominant hearing loss, DFNA28. Human molecular genetics, 11(23), 2877-2885.
Peters, L. M., Belyantseva, I. A., Lagziel, A., Battey, J. F., Friedman, T. B., & Morell, R. J. ((2007)). Signatures from tissue-specific MPSS libraries identify transcripts preferentially expressed in the mouse inner ear. Genomics, 89(2), 197-206.
Sánchez-Romero, N., Schophuizen, C. M., Giménez, I., & Masereeuw, R. ((2016)). In vitro systems to study nephropharmacology: 2D versus 3D models. European journal of pharmacology., 790, 36-45.
Schierenberg, E., & Junkersdorf, B. ((1992)). The role of eggshell and underlying vitelline membrane for normal pattern formation in the early C. elegans embryo. Roux's archives of developmental biology, 202(1), 10-16.
Shafiee, A., & Atala, A. ((2016)). Printing technologies for medical applications. Trends in molecular medicine, 22(3), 254-265.
Shalit, E., & Avraham., K. B. ((2008)). Genetics of hearing loss. Auditory Trauma, Protection, and Repair. Springer US, 9-47.
Shimizu, T., Vassylyev, D. G., Kido, S., Doi, Y., & Morikawa, K. ((1994)). Crystal structure of vitelline membrane outer layer protein I (VMO-I): a folding motif with homologous Greek key structures related by an internal three-fold symmetry. The EMBO journal, 13(5), 1003-1010.
Wang, Z., Chen, Z., Yang, Q., Jiang, Y., Lin, L., Liu, X., & Wu, K. ((2014)). Vitelline Membrane Outer Layer 1 Homolog Interacts With Lysozyme C and Promotes the Stabilization of Tear FilmVMO1 Promotes Stable Tear Film. Investigative ophthalmology & visual science, 55(10), 6722-6727.
Wu, T., Manogaran, A. L., Beauchamp, J. M., & Waring, G. L. ((2010)). Drosophila vitelline membrane assembly: a critical role for an evolutionarily conserved cysteine in the “VM domain” of sV23. Developmental biology, 347(2).
Yang, S., Cai, Q., Vethanayagam, R. R., Wang, J., Yang, W., & Hu, B. H. ((2016)). Immune defense is the primary function associated with the differentially expressed genes in the cochlea following acoustic trauma. Hearing research, 333, 283-294.
Natural-gas condensate is a low-density mixture of hydrocarbon liquids that are present as gaseous components in the raw natural gas produced from many natural gas fields. Some gas species within the raw natural gas will condense to a liquid state if the temperature is reduced to below the hydrocarbon dew point temperature at a set pressure.
The natural gas condensate is also referred to as simply condensate, or gas condensate, or sometimes natural gasoline because it contains hydrocarbons within the gasoline boiling range. Raw natural gas may come from any one of three types of gas wells:
- Crude oil wells—Raw natural gas that comes from crude oil wells is called associated gas. This gas can exist separate from the crude oil in the underground formation, or dissolved in the crude oil. Condensate produced from oil wells is often referred to as lease condensate.
- Dry gas wells—These wells typically produce only raw natural gas that does not contain any hydrocarbon liquids. Such gas is called non-associated gas. Condensate from dry gas is extracted at gas processing plants and, hence, is often referred to as plant condensate.
- Condensate wells—These wells produce raw natural gas along with natural gas liquid. Such gas is also called associated gas and often referred to as wet gas.
There are many condensate sources worldwide and each has its own unique gas condensate composition. However, in general, gas condensate has a specific gravity ranging from 0.5 to 0.8, and is composed of hydrocarbons such as propane, butane, pentane, hexane, etc. Natural gas compounds with more carbon atoms (e.g. pentane, or blends of butane, pentane and other hydrocarbons with additional carbon atoms) exist as liquids at ambient temperatures. Additionally, condensate may contain additional impurities such as:
- Hydrogen sulfide (H2S)
- Thiols traditionally also called mercaptans (denoted as RSH, where R is an organic group such as methyl, ethyl, etc.)
- Carbon dioxide (CO2)
- Straight-chain alkanes having from 2 to 12 carbon atoms (denoted as C2 to C12)
- Cyclohexane and perhaps other naphthenes
- Aromatics (benzene, toluene, xylenes, and ethylbenzene)
Separating the condensate from the raw natural gas
There are literally hundreds of different equipment configurations for the processing required to separate natural gas condensate from a raw natural gas. The schematic flow diagram to the right depicts just one of the possible configurations.
The raw natural gas feedstock from a gas well or a group of wells is cooled to lower the gas temperature to below its hydrocarbon dew point at the feedstock pressure and that condenses a large part of the gas condensate hydrocarbons. The feedstock mixture of gas, liquid condensate and water is then routed to a high pressure separator vessel where the water and the raw natural gas are separated and removed. The raw natural gas from the high pressure separator is sent to the main gas compressor.
The gas condensate from the high pressure separator flows through a throttling control valve to a low pressure separator. The reduction in pressure across the control valve causes the condensate to undergo a partial vaporization referred to as a flash vaporization. The raw natural gas from the low pressure separator is sent to a "booster" compressor which raises the gas pressure and sends it through a cooler and on to the main gas compressor. The main gas compressor raises the pressure of the gases from the high and low pressure separators to whatever pressure is required for the pipeline transportation of the gas to the raw natural gas processing plant. The main gas compressor discharge pressure will depend upon the distance to the raw natural gas processing plant and it may require that a multi-stage compressor be used.
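The partial ("flash") vaporization that occurs across the throttling valve can be estimated with a standard vapor-liquid equilibrium split. The sketch below solves the classic Rachford-Rice equation for an assumed feed composition and assumed equilibrium K-values; all numbers are illustrative placeholders rather than data for any particular well stream, and a real design calculation would obtain K-values from an equation of state at the separator's actual temperature and pressure.

```python
# Illustrative Rachford-Rice flash split for a condensate-like feed.
# z: feed mole fractions, K: assumed equilibrium ratios (y_i = K_i * x_i).
components = ["methane", "ethane", "propane", "n-butane", "n-pentane+"]
z = [0.60, 0.12, 0.10, 0.08, 0.10]
K = [6.0, 2.2, 1.1, 0.45, 0.12]

def rachford_rice(beta):
    """Residual of the Rachford-Rice equation for vapor fraction beta."""
    return sum(zi * (Ki - 1.0) / (1.0 + beta * (Ki - 1.0)) for zi, Ki in zip(z, K))

# Solve for the vapor fraction by bisection on (0, 1); the residual is
# positive at beta=0 and negative at beta=1 for a two-phase feed.
low, high = 1e-9, 1.0 - 1e-9
for _ in range(100):
    mid = 0.5 * (low + high)
    if rachford_rice(mid) > 0.0:
        low = mid
    else:
        high = mid
beta = 0.5 * (low + high)

x = [zi / (1.0 + beta * (Ki - 1.0)) for zi, Ki in zip(z, K)]  # liquid composition
y = [Ki * xi for Ki, xi in zip(K, x)]                          # vapor composition
print(f"vapor fraction = {beta:.3f}")
for name, xi, yi in zip(components, x, y):
    print(f"{name:11s}  liquid x = {xi:.3f}   vapor y = {yi:.3f}")
```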
At the raw natural gas processing plant, the gas will be dehydrated, and acid gases and other impurities will be removed from the gas. Then the ethane (C2), propane (C3), butanes (C4), and pentanes (C5), plus higher molecular weight hydrocarbons referred to as C5+, will also be removed and recovered as byproducts.
The water removed from both the high and low pressure separators may need to be processed to remove hydrogen sulfide (H2S) before the water can be disposed of underground or reused in some fashion.
Some of the raw natural gas may be re-injected into the producing formation to help maintain the reservoir pressure, or for storage pending later installation of a pipeline.
Drip gas, so named because it can be drawn off the bottom of small chambers (called drips) sometimes installed in pipelines from gas wells, is another name for natural-gas condensate, a naturally occurring form of gasoline obtained as a byproduct of natural gas extraction. It is also known as "condensate", "natural gasoline", "casing head gas", "raw gas", "white gas" and "liquid gold". Drip gas is defined in the United States Code of Federal Regulations as consisting of butane, pentane, and hexane hydrocarbons. Within set ranges of distillation, drip gas may be extracted and used to denature fuel alcohol. Drip gas is also used as a cleaner and solvent as well as a lantern and stove fuel.
Use as a diluent in heavy oil production
Because condensate is typically liquid in ambient conditions and also has very low viscosity, condensate is often used to dilute highly viscous heavier oils that cannot otherwise be efficiently transported via pipelines. In particular, condensate is frequently mixed with bitumen from oil sands to create dilbit. The increased use of condensate as diluent has significantly increased its price in certain regions.
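How much condensate is needed to bring a heavy oil down to a pipeline viscosity specification can be illustrated with the Refutas viscosity-blending-number method. The sketch below is only a rough illustration: the bitumen and condensate viscosities and the pipeline limit are assumed round values at a single reference temperature, not measured data for any actual dilbit blend.

```python
import math

def vbn(visc_cst):
    """Refutas viscosity blending number for a kinematic viscosity in cSt."""
    return 14.534 * math.log(math.log(visc_cst + 0.8)) + 10.975

def visc_from_vbn(v):
    """Invert the Refutas relation back to kinematic viscosity in cSt."""
    return math.exp(math.exp((v - 10.975) / 14.534)) - 0.8

# Illustrative values only: a heavy bitumen and a light condensate diluent at
# the same reference temperature, plus an assumed pipeline viscosity limit.
visc_bitumen = 100_000.0   # cSt (assumed)
visc_condensate = 1.0      # cSt (assumed)
pipeline_limit = 350.0     # cSt (assumed spec)

# Scan condensate mass fraction until the blended viscosity meets the limit.
for w in [i / 100 for i in range(0, 101)]:
    blend_vbn = w * vbn(visc_condensate) + (1 - w) * vbn(visc_bitumen)
    if visc_from_vbn(blend_vbn) <= pipeline_limit:
        print(f"~{w:.0%} condensate by mass brings the blend to "
              f"{visc_from_vbn(blend_vbn):.0f} cSt")
        break
```

With these assumed inputs the blend reaches the limit at roughly a quarter condensate by mass, which illustrates why diluent demand can be large relative to bitumen output and why condensate prices respond to it.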
Historical use in vehicles
Some very early internal combustion engines—such as the first types made by Karl Benz, and early Wright brothers aircraft engines—used natural gasoline, which could be either drip gas or a similar range of hydrocarbons distilled from crude oil. Natural gasoline has an octane rating of about 30 to 50, sufficient for the low-compression engines of the early 20th century. By 1930, improved engines and higher compression ratios required higher-octane, refined gasolines to produce power without knocking or detonation.
Beginning in the Great Depression, drip gas was used as a replacement for commercial gasoline by people in oil-producing areas. "In the days of simple engines in automobiles and farm tractors it was not uncommon for anyone having access to a condensate well to fill his tank with 'drip,'" according to the Oklahoma Historical Society. Sometimes it worked fine. "At other times it might cause thundering backfires and clouds of foul-smelling smoke."
Certain manufacturers made farm tractors specifically designed to run on heavy, low-octane fuels which were commonly called "distillate" or "tractor fuel". Often the tractors were referred to as "all-fuel". The most important factor in burning heavy fuels in a spark-ignition engine is proper fuel vaporization. Tractors designed to run on those fuels usually used a "hot" intake air manifold that allowed exhaust heat to warm the manifold and carburetor to aid vaporization. Given the poor vaporization at cold temps, all-fuel tractors were started on gasoline and switched over to the heavy fuel. They were equipped with a small gasoline tank and a large fuel tank, both of which fed into a common valve supplying the fuel to the carburetor.
The engine would be started on gasoline and the tractor would then be worked until the engine was sufficiently warm to change over. At that point, the fuel valve would be turned to switch the fuel supply from the gasoline tank to the fuel tank and the heavy fuel would flow to the carburetor. Radiator shutters or curtains were typically used to keep the engine sufficiently hot for efficient operation. Coolant temperatures in the 200 degree F range were normal. John Deere two-cylinder all-fuel tractors worked very well with heavy fuel, as their long piston strokes, slow engine speeds and low compression ratios allowed for efficient operation. Most were also equipped with thermosiphon cooling systems that used no water pumps. Natural convection allowed the water to flow up and out of the engine block and into the top of the radiator, where it cooled and fell back to continue the cycle.
Woody Guthrie's autobiographical novel Seeds of Man begins with Woody and his uncle Jeff tapping a natural gas pipeline for drip gas. Drip gas also gets a mention in Badlands, the Terrence Malick movie.
Drip gas was sold commercially at gas stations and hardware stores in North America until the early 1950s. The white gas sold today is a similar product, but it is produced at refineries with the benzene removed.
In 1975, the New Mexico State Police's drip gas detail – three men in pickup trucks – began patrolling oil and gas fields, catching thieves and recovering barrels of stolen gas. The detail stopped its work in 1987.
The use of drip gas in cars and trucks is now illegal in many states. It is also harmful to modern engines due to its low octane rating, high heat of combustion and lack of additives. It has a distinctive smell when used as a fuel, which allowed police to catch people using drip gas illegally.
- International Energy Glossary (a page from the website of the Energy Information Administration)
- Natural gas processing (a page from the website of the Energy Information Administration)
- U.S. Crude Oil Production Forecast- Analysis of Crude Types (PDF), Washington, DC: U.S. Energy Information Administration, 29 May 2014, p. 7,
A final point to consider involves the distinction between the very light grades of lease condensate (which are included in EIA's oil production data) and hydrocarbon gas liquids (HGL) that are produced from the wellhead as gas but are converted to liquids once separated from methane at a natural gas processing plant. These hydrocarbons include ethane, propane, butanes, and hydrocarbons with five or more carbon atoms – referred to as pentanes plus, naphtha, or plant condensate. Plant condensate can also be blended with crude oil, which would change both the distribution and total volume of oil received by refineries.
- "Diluent and Dilbit". Oil Sands Research and Information Network. University of Alberta. Retrieved 29 January 2014.
- Natural Gas Condensate Marathon Oil Company MSDS
- Natural Gas Condensate Phillips Petroleum Company MSDS
- Condensate (Alaska) ConocoPhillips of Alaska MSDS
- Natural Gas Condensate Amerada Hess Corporation MSDS
- Simplified Process Flow Diagram
- Mamdouh R. Gadallah and Ray L. Fisher (2004). Applied Seismology: A Comprehensive Guide to Seismic Theory and Application. PennWell Corporation. ISBN 1-59370-022-9.
- New Mexico State Police Association (2000). New Mexico State Police, 1933-2000 (1st ed.). Turner Publishing Company. ISBN 1-56311-587-5.
- "Authorized Materials for Fuel Alcohol" (PDF). Alcohol and Tobacco Tax and Trade Bureau. Retrieved 2008-03-06.
- Lewis, Jeff. "Diluent shortages could make for sticky situation for Alberta bitumen". Financial Post. Retrieved 29 January 2014.
- Oklahoma Historical Society, Encyclopedia of Oklahoma History and Culture.
- International Fuel Names
- New Mexico State Police, 1933-2000.
- "Drip Gas Was A Real Gas for Me As A Kid" by Jack Cawthon, June 9, 2004.
- Burning Drip Gas in Horntown, Oklahoma, by Clayton Adair.
- Processing raw natural gas
- Preparing raw natural gas for sales
- Natural Gas Processing (part of the US EPA's AP-42 publication and includes a schematic diagram)
Plymouth Colony was the first English settlement in Massachusetts (1620-1691).
- 1 Overview
- 2 Colonists Origins
- 3 Nottinghamshire Separatists
- 4 Pilgrim Stay in Holland
- 5 Colony Organizers
- 6 Immigrant Ships
- 7 Founding of Plymouth Colony
- 8 Immigrant Arrivals to Plymouth Colony
- 9 Expansion Site of Plymouth Colony
- 10 Colony Economics
- 11 Historical Genealogical Documents
- 12 References
- 13 Footnotes
Plymouth Colony (sometimes "New Plymouth") was an English colonial venture in North America from 1620 until 1691. The first settlement was at New Plymouth, a location previously surveyed and named by Captain John Smith. The settlement, which served as the capital of the colony, is today the modern town of Plymouth, Massachusetts. At its height, Plymouth Colony occupied most of the southeastern portion of the modern state of Massachusetts.
Founded by a group of separatists who later came to be known as the Pilgrims, Plymouth Colony was, along with Jamestown, Virginia, one of the earliest colonies to be founded by the English in North America and the first sizable permanent English settlement in the New England region. Aided by Squanto, a Native American, the colony was able to establish a treaty with Chief Massasoit which helped to ensure the colony's success. The colony played a central role in King Philip's War, one of the earliest and bloodiest of the Indian Wars. Ultimately, the colony was annexed by the Massachusetts Bay Colony in 1691.
Plymouth holds a special role in American history. Rather than being entrepreneurs like many of the settlers of Jamestown, the citizens of Plymouth were fleeing religious persecution and searching for a place to worship their God as they saw fit. The social and legal systems of the colony were thus closely tied to their religious beliefs. Many of the people and events surrounding Plymouth Colony have become part of American mythology, including the North American tradition known as Thanksgiving and the monument known as Plymouth Rock. Despite the colony's relatively short history, it has become an important symbol of what is now labeled "American".
- English Prosperity - England, under the reigns of Elizabeth I and James I, had enjoyed nearly sixty years free from any major war or tumult.
- Religious Freedom - Several key Reformation figures had challenged many of the old traditions of uniformity under the state-managed church system. They sought an opportunity to worship God as they thought best.
- Thirst for Adventure - Englishmen sought out fame and adventure in a new world.
- Relief from Feudalism - Much of the populace sought escape from the old tenant-farmer system and craved the opportunity to own land of their own.
- Visions of Wealth - Capitalists of the British Empire saw an opportunity for speedy and ample profit from new world discoveries.
Many English adventurers would become pioneers of English colonization in America. But the vast majority of the Plymouth colonists were Pilgrims seeking religious freedom during the Great Reformation.
The late 16th and early 17th centuries witnessed the height of the Great Christian Reformation, when many devout Christians revolted against the established doctrine and organization of the medieval state church.
In the early 16th century, King Henry VIII, seeking a divorce, split England away from the Roman Pope and created the Anglican Church. This led to conflict in subsequent years between Catholics, Anglicans, and Protestant Reformers who challenged one another's concepts and forms of religion.
The 1555 Peace of Augsburg ended a number of religious wars in Europe and recognized various Protestant churches established by reigning monarchs. This set the background for a second religious revolt against the established state church.
English Separatists Origins
By the late 1500s in England, a number of groups advocated complete separation from the Anglican Church. These groups were called "Separatists" and included the Pilgrims, Puritans, and others.
These groups were numerous but mostly small in number. They sought the right to act totally independently of the established church and covenanted with God to more fully keep his Divine Law.
When King James I started his reign in 1603, various forms of these groups, called Separatists (later called Pilgrims), tried to break away from the Church of England. They were heavily persecuted by the state.
They were a mix of laborers, artisans, tenant farmers, and occasional middle-class gentry. They kept in tight-knit groups and accepted persecution as a test of their faith.
Conspicuous for their radical thought and peculiar worship, they tended to attract the unwanted attention of secular and ecclesiastical authorities. Most Separatist groups congregated in eastern England, notably Norfolk.
The Pilgrims of Plymouth Colony were a small group of Separatists that had their origins in Nottinghamshire and the surrounding counties. They met at the Scrooby Manor house.
Major figures of the Nottinghamshire Group are:
- William Brewster (1527-1590) - "Father Brewster", receiver and bailiff for the Scrooby Manor house. Spiritual leader.
- William Brewster (1567-1644) - future leader of Pilgrims to Amsterdam and America. Acting postmaster at Scrooby.
- Richard Clayton - spiritual leader
- John Robinson - spiritual leader
This group was discovered by church authorities from Yorkshire and their leaders were cast into prison for up to a year (circa 1604?).
"taken and clapt up in prison." (and after their release) "their houses were besett and wacht night and day and hardly escaped their hand."
Pilgrim Stay in Holland
Researchers have been carefully checking historical records in Leiden, Holland to see which members originated there and for key life events from 1607 to 1630. Life there was not pleasant either, and they were soon desirous of removing to a land where they could live totally independent of others. Their attention soon turned to America.
The first pilgrims arrived in 1607 and most of the remainder in 1608. The first group left for Plymouth Colony in 1620 and most of the remainder came in the next ten years afterwards.
Move to Holland
The English persecution caused most of the Pilgrims to migrate to Amsterdam in 1607/1608. In 1608 part of this group migrated to Leiden, Holland to escape intense persecution. Although Holland had religious freedom, they faced economic and language hardships there.
However, the Pilgrims began to become very uncomfortable with life in Holland, where moral and religious standards seemed to be more lax than in England (not to mention the different language and traditions). They were concerned about what kind of environment this would create for their children. Labor was hard, educational opportunities for their children were few, and they stood to lose their language and traditions. They feared the many temptations there would be a menace to the habits, morals, and faith of the younger members of their flock.
Formation of Virginia Company of Plymouth
In 1606, King James grants two charters for settlement in North America. Note that there is an overlap in their charter areas, but neither company was allowed to start a settlement within 100 miles of an existing settlement of the other.
- Virginia Company of London - a group of London Merchants that are authorized to colonize between 34 & 41 North Latitude. In 1607 they found the Jamestown Settlement at 37.12 North Latitude.
- Virginia Company of Plymouth - a group of merchants from Plymouth are authorized to colonize between 38 and 45 North Latitude. In 1620 they finance the Mayflower Expedition which lands at 41.56 North Latitude.
About this time English explorers and fishermen were actively navigating the coast of America and had already established one settlement at Jamestown (1607). They included a famous founder of Jamestown, John Smith, who explored the coast in 1614 and gave the region its name. Their maps and reports gave new hope to the Pilgrims, and they started devising plans to go to a new home devoid of any other civilized people.
They sent two members to England to secure a patent from the Virginia Company of London. Because of dissensions in that group they turned instead to the newly reorganized Virginia Company of Plymouth and obtained a charter as The New England Council. This contract called for them to settle in what was then called Northern Virginia, the region later known as New England.
The financial burden of this mission required them to enlist help from other London merchants (principally Thomas Weston), creating a voluntary joint-stock company. The Pilgrims were to labor at trade, trucking, and fishing for seven years and return all profits to the London merchants. The conditions were quite discouraging, but there appeared to be no other alternative. These poor terms would cause much hardship in the first couple of years and were renegotiated soon after.
The following list of people played a very influential role in organizing the Mayflower trip:
- Robert Cushman (1577-1625) - Leiden Separatist - attempted to sail in 1620 on the Speedwell. Arrived in Plymouth in 1621 on the Fortune but promptly returned to England to coordinate Colony efforts from there. A giant monument to his memory stands at Burial Hill in Plymouth. His son Thomas Cushman (1608-1691) succeeded Bradford as a leading elder of the Colony.
Voyage of the Mayflower
Many boarded the Speedwell at Delftshaven. The Leiden Separatists had bought the ship in Holland. They then sailed it to Southampton, England, to meet the Mayflower, which had been chartered by the merchant investors. In Southampton they joined with other Separatists and the additional colonists hired by the investors.
The two ships began the voyage on August 5, 1620, but the Speedwell leaked badly and had to return to Dartmouth to be refitted, at great expense and loss of time. On the second attempt, the two ships sailed about 100 leagues beyond Land's End in Cornwall, but the Speedwell was again found to be leaky. Both vessels returned to Plymouth, where the Speedwell was sold. It would later be revealed that there was in fact nothing wrong with the ship. The crew had sabotaged it in order to escape the year-long commitment of their contract.
Eleven people from the Speedwell (including Francis and John Cooke) boarded the Mayflower, leaving 20 people (including Robert Cushman and Philippe de Lannoy) to return to London, while a combined company of 103 continued the voyage. For a third time, the Mayflower headed for the New World. She left Plymouth on September 6, 1620 and entered Cape Cod Harbor on November 11, 1620. However, it was not until 21 Dec 1620 that they decided on their place of landing, and four days later they erected their first building at Plymouth Colony.
- Christopher Jones - Captain of the Mayflower.
- Mayflower - History of the Mayflower
- List of Mayflower passengers -
Other Ships 1621-1638
Plymouth Colony Immigrant Ship passenger lists (1620-1638). (Does not include ships landing at Salem, Boston, Maine, or Virginia, or the numerous fishing vessels and pirate ships that may have stopped by.) See also PackRat Pro for all ships:
- List of Fortune 1621 passengers to Plymouth
- Sparrow : May 1622
- List of Anne 1623 passengers to Plymouth
- List of Little James 1623 passengers
- Charity : 1624
- Jacob 1625
- White Angel 1628
- Pilgrim (#4) : April 1629
- Hand Maiden : 1630
Founding of Plymouth Colony
The Mayflower arrived off Cape Cod on 9-Nov-1620 and attempted to sail for Manhattan, but was driven back by strong winds. On 11-Dec-1620, the settlers put ashore at Plymouth Rock.
While in the harbor, 41 leading settlers signed the Mayflower Compact before putting ashore. This document provided for an early democratic government of the colony and peace between Pilgrims and strangers.
John Carver was elected the 1st governor, but he died in the spring of 1621. His successor was William Bradford. Captain Myles Standish was appointed the militia leader.
- NEWS 25-Nov-2016: Remains of Pilgrims' Plymouth settlement unearthed (Boston Globe)
The Harsh Winter of 1620/21
The miseries suffered by the Pilgrims in that first year were due not to any inhospitable climate, but to the lateness of the year in which they landed as well as insufficient planning of their provisions. Of the 102 passengers, many died during the harsh winter of 1620/21. When the next ship, the Fortune, arrived in Nov 1621, only 52 settlers were left at Plymouth Rock. The nearby Wampanoag Indians taught the Pilgrims how to plant corn for survival.
During the first winter in the New World, the Mayflower colonists suffered greatly from diseases like scurvy, lack of shelter and general conditions onboard ship. 45 of the 102 emigrants died the first winter and were buried on Cole's Hill. Additional deaths during the first year meant that only 53 people were alive in November 1621 to celebrate the first Thanksgiving. Of the 18 adult women, 13 died the first winter while another died in May. Only four adult women were left alive for the Thanksgiving.
Just before landing ashore, the Mayflower Pilgrims signed the Mayflower Compact. This acted as a form of constitution for the group and was loosely based on existing congregational compacts that were a common part of their religion.
This contract allowed for one governor and one assistant who were elected annually. The compact was modified several times. In 1668 it was modified to allow voting by any property owner who was in good standing with the community.
The Pilgrims of Plymouth Colony get credit for establishing the congregational form of church organization and worship that became prevalent throughout Massachusetts and Connecticut.
The primary economy of early settlers was based on agriculture, fishing, salt-making and trading with Indians.
Since the settlers had no money, they relied on corn (at six shillings per bushel) as their primary currency of exchange.
Plantation Buyout (1624-27)
Whether due to poor bookkeeping or other issues, the original investors complained about the lack of profits from the colony. The Pilgrims were most anxious to bring over more of the Leiden group and family members, but could not get financing to send them over.
In 1626, Isaac Allerton (1586-1658) negotiates a new contract with the investors. This proves inadequate and in 1627, several pilgrims (called Undertakers) arrange a buyout of their contract with the investors. There were about 300 settlers at this time.
During this time, some of the Non-Separatists migrate to Virginia because of disputes with the Separatists over the use of Colony funds to bring over more Separatists and over other matters of religion.
Isaac de Rasieres Report 1627
A year or two after his visit in 1627, a Dutch Trader, Isaac de Rasieres, wrote down a description of Plymouth Colony. He described a community of about 50 families in a town along the slope of a hill with a wooden fortress and cannon at the top. The lower room of the fortress was a church. Clapboard houses lined the principal street (Leyden Street) down to the sea. At the cross street (Main Street) was the home of the governor and a stockade with four cannon.
Rasieres also made the classic report of witnessing the colonists marching together to church. Laws of morality were strictly enforced amongst the colonists and any Indians living in the community. Rasieres noted that the Indians at Plymouth were better behaved than elsewhere.
1629 saw the arrival of more immigrants from England and Holland. The Mayflower II came in Aug 1629 with 35 migrants. The Lyon came in May 1630. The Talbot came in May 1629 with a number of servants.
In 1629 the Higginson fleet stopped by on its way to form the colony at Salem. In 1630 the Winthrop fleet stopped by on its way to Boston Harbor to start the Massachusetts Bay Colony there. These two colonies would quickly surpass Plymouth Colony in size. Also note that those two groups were Puritans, a religious sect quite different from the Separatists at Plymouth.
The Separatist Church in Plymouth did not have an ordained minister to administer sacraments until 1629. Until then William Brewster was the presiding Elder.
Pestilence of 1633
1629-1630 saw the last surge of arrivals of Leiden Pilgrims to Plymouth.
29-Oct-1630 Handmaid arrives at Plymouth with 60 immigrants including John and Samuel Eddy.
A great pestilence afflicted Plymouth Colony, other colonies, and nearby Indian encampments, wherein many died. At Plymouth this included some 20 adults and an unknown number of children.
1634 Kennebec River Dispute
The Kennebec Dispute of 1634 was a deadly fight between traders of Plymouth Colony and the nearby Piscataqua Colony over Indian trading rights on the Kennebec River in southern Maine territory. Afterwards, two prominent leaders of Plymouth (John Alden (c1599-1687) and John Howland (1592-1672)) were implicated by the Massachusetts Bay Colony but eventually released.
Immigrant Arrivals to Plymouth Colony
Most of the immigrants to Plymouth Colony between 1620 and 1628 came from the Pilgrim community in Leyden, Holland.
The merchants in London would also send out adventurers to help expand the colony, but they usually left quickly for Virginia or back to England, because they had a hard time adapting to the religious way of life of the Pilgrims.
- Mayflower - Mayflower Passenger List (1620) - Written 30 years after the fact by Gov William Bradford - but has proven to be very accurate.
- Fortune - Fortune Passenger List (Nov-1621) - Arrived 1621 with 35 additional settlers - new immigrants and additional family members of the Mayflower settlers
- Anne - Arrived - July 1623 - ditto - see 1623 Division of Land Census
- Little James - Arrived Late July 1623 - ditto- see 1623 Division of Land Census
- Charitie - Arrives April 1624 - ditto
- Handmaid - Arrives 1630-Oct-29 - 60 passengers including John and Samuel Eddy.
- Francis - Arrives April 1634
The Fortune - 1621
The Mayflower left Plymouth to return home on 5-Apr-1621. Just before the arrival of the Fortune, the Mayflower Pilgrims celebrated the first Thanksgiving in America.
On 9-Nov-1621, the ship Fortune arrives off Cape Cod with 35 more settlers, but it takes a couple of weeks for them to find Plymouth Colony. Many of these settlers are family members of the earlier arrivals.
This group includes Mr. Robert Cushman, a Pilgrim leader who preaches a sermon, then leaves for England on 13 Dec 1621.
During this time period the settlers struggled with low food supplies. Various fishing ships and trade ships from Virginia occasionally visit Plymouth.
1623 Pilgrim Immigrants
- See also 1623 Plymouth Land Census
In July 1623 came the ship Anne and, one week later, the Little James. Per the 1623 Division of Land Census we can estimate 90 new arrivals from these two ships, about 60 Pilgrims and 30 strangers. These included new Pilgrims and family members of previous arrivals.
In April 1624, the Charitie makes its 2nd visit to Plymouth with more Pilgrim Settlers. (Does the 1624 census increase over 1623 census tell us the names of these arrivals?)
After the arrival of the Anne and Little James, the colonists implemented the 1623 Division of Land. This document is a valuable census of the approximately 180 persons living in the colony at that time. The original Mayflower Compact put all property in common, but eventually there were complaints of the industrious settlers supporting the lazier ones. This division granted land for private use to each head of household. The colonists still ran some operations in common.
Expansion Site of Plymouth Colony
In July 1622, two ships (the Swan and the Charitie) arrive at Plymouth with a different group of adventurers. They stay a couple of months before moving to establish a nearby settlement at Weymouth (or Wessagusset). This group is financed by Mr. Weston. They are joined by a third ship (the Sparrow). The group fares badly with the Indians and is forced to abandon the settlement after a rescue by the Plymouth militia.
In Sept 1623, the ship Katherine arrives with a group of settlers financed by Sir Ferdinando Gorges. They stop briefly at Plymouth before continuing onwards to the Weymouth Settlement.
Founding Nearby Towns 1633-1643
During this time period a number of towns were founded nearby by the Plymouth settlers
Also during this time rival colonies are growing at Boston, Salem, New Amsterdam (Manhattan/Dutch) and Canada (French).
For the first years, the colony struggled to survive. However, colonists did start to gain substantial wealth from the trade in beaver pelts. The luxurious fur of these common water rodents became a highly sought-after good in Europe.
Historical Genealogical Documents
- 1st Pierce Patent (1620) - Document to legalize Plymouth Colony - dated 1620. This document appears to be lost.
- 2nd Pierce Patent (1621) - 2nd Document to legalize Plymouth Colony
- Bradford Patent (1629) - 3rd Document to legalize Plymouth Colony
- Immigrant_Ships_To_America/First_Families/Mayflower - Mayflower Passenger List - Written 30 years after the fact by Gov William Bradford - but has proven to be very accurate.
- Mayflower Compact (11-NOV-1620) - Cooperative agreement signed by most settlers.
- 1623 Division of Land - Early Colony Census
- 1626 Purchasers - Corporate Stock Agreement signed by some colonists
- 1627 Division of Cattle (http://www.histarch.uiuc.edu/plymouth/cattlediv.html) - Early Colony Census
- 1633 Tax Roll -
- 1634 Tax Roll -
- 1643 ATBA Militia Roll - Able to Bear Arms List
- Mourt's Relation - A Journal of the Pilgrims at Plymouth, 1622, Part I
- William Bradford's Of Plymouth Plantation: 1620-1647 - Journal of the Governor
- Mayflower and Her Passengers by Caleb Johnson - Genealogical research on all Mayflower pilgrims. 292 pages (publ. 2006).
- Wikipedia History of Plymouth Colony
- Book: Plymouth Colony - Its History & People 1620-1691 by Eugene Aubrey Stratton - good genealogical history with many biographical sketches.
- Genealogy Trails - Mayflower settlers and marriages for two generations
- Writings of Governor Winslow - the author of several works concerning Plymouth Colony, which are now considered among the most important primary source materials about Plimoth still in existence. These include Good Newes from New England (1624); Hypocrisie Unmasked (1646); New England's Salamander Discovered (1647); and The Glorious Progress of the Gospel Amongst The Indians of New England (1649). It is believed that he also wrote Mourt's Relation with William Bradford in 1622, although he did not sign the work.
Please do not list people here - but instead use the passenger lists above and/or start a page Resided in Plymouth Colony.
A carbon sink is any reservoir, natural or otherwise, that accumulates and stores some carbon-containing chemical compound for an indefinite period and thereby lowers the concentration of CO2 in the atmosphere. Globally, the two most important carbon sinks are vegetation and the ocean. Public awareness of the significance of CO2 sinks has grown since passage of the Kyoto Protocol, which promotes their use as a form of carbon offset. There are also different strategies used to enhance this process. Soil is an important carbon storage medium. Much of the organic carbon retained in agricultural areas has been depleted due to intensive farming. "Blue carbon" designates carbon that is fixed via the ocean ecosystems. Mangroves, salt marshes and seagrasses make up a majority of ocean plant life and store large quantities of carbon. Many efforts are being made to enhance natural sequestration in soils and the oceans. In addition, a range of artificial sequestration initiatives are underway, such as changed building construction materials, carbon capture and storage, and geological sequestration.
An increase in atmospheric carbon dioxide means an increase in global temperature. The amount of carbon dioxide varies naturally in a dynamic equilibrium with photosynthesis of land plants. The main natural sinks are soil, vegetation (through photosynthesis), and the oceans.
Carbon sources include the combustion of fossil fuels (coal, natural gas, and oil) by humans for energy and transportation and farmland (by animal respiration), although there are proposals for improvements in farming practices to reverse this.
The Kyoto Protocol is an international agreement that aimed to reduce carbon dioxide emissions and the presence of greenhouse gases (GHG) in the atmosphere. The essential tenet of the Kyoto Protocol was that industrialized nations needed to reduce their emissions. Because growing vegetation takes in carbon dioxide, the Kyoto Protocol allows Annex I countries with large areas of growing forests to issue Removal Units to recognize the sequestration of carbon. The additional units make it easier for them to achieve their target emission levels. It is estimated that forests absorb large quantities of CO2 each year, through photosynthetic conversion into starch, cellulose, lignin, and other components of wooden biomass. While this has been well documented for temperate forests and plantations, the fauna of the tropical forests place some limitations for such global estimates.
Some countries seek to trade emission rights in carbon emission markets, purchasing the unused carbon emission allowances of other countries. If overall limits on greenhouse gas emission are put into place, cap and trade market mechanisms are purported to find cost-effective ways to reduce emissions. There is as yet no carbon audit regime for all such markets globally, and none is specified in the Kyoto Protocol. National carbon emissions are self-declared.
In the Clean Development Mechanism, only afforestation and reforestation are eligible to produce certified emission reductions (CERs) in the first commitment period of the Kyoto Protocol (2008–2012). Forest conservation activities or activities avoiding deforestation, which would result in emission reduction through the conservation of existing carbon stocks, are not eligible at this time. Also, agricultural carbon sequestration is not yet eligible.
Soils represent a short to long-term carbon storage medium, and contain more carbon than all terrestrial vegetation and the atmosphere combined. Plant litter and other biomass including charcoal accumulates as organic matter in soils, and is degraded by chemical weathering and biological degradation. More recalcitrant organic carbon polymers such as cellulose, hemi-cellulose, lignin, aliphatic compounds, waxes and terpenoids are collectively retained as humus. Organic matter tends to accumulate in litter and soils of colder regions such as the boreal forests of North America and the Taiga of Russia. Leaf litter and humus are rapidly oxidized and poorly retained in sub-tropical and tropical climate conditions due to high temperatures and extensive leaching by rainfall. Areas where shifting cultivation or slash and burn agriculture are practiced are generally only fertile for two to three years before they are abandoned. These tropical jungles are similar to coral reefs in that they are highly efficient at conserving and circulating necessary nutrients, which explains their lushness in a nutrient desert. Much organic carbon retained in many agricultural areas worldwide has been severely depleted due to intensive farming practices.
Grasslands contribute to soil organic matter, stored mainly in their extensive fibrous root mats. Due in part to the climatic conditions of these regions (e.g. cooler temperatures and semi-arid to arid conditions), these soils can accumulate significant quantities of organic matter. This can vary based on rainfall, the length of the winter season, and the frequency of naturally occurring lightning-induced grass-fires. While these fires release carbon dioxide, they improve the quality of the grasslands overall, in turn increasing the amount of carbon retained in the humic material. They also deposit carbon directly to the soil in the form of char that does not significantly degrade back to carbon dioxide.
Organic matter in peat bogs undergoes slow anaerobic decomposition below the surface. This process is slow enough that in many cases the bog grows rapidly and fixes more carbon from the atmosphere than is released. Over time, the peat grows deeper. Peat bogs hold approximately one-quarter of the carbon stored in land plants and soils.
Under some conditions, forests and peat bogs may become sources of CO2, such as when a forest is flooded by the construction of a hydroelectric dam. Unless the forests and peat are harvested before flooding, the rotting vegetation is a source of CO2 and methane comparable in magnitude to the amount of carbon released by a fossil-fuel powered plant of equivalent power.
See also: Biosequestration.
Current agricultural practices lead to carbon loss from soils. It has been suggested that improved farming practices could return the soils to being a carbon sink. Present worldwide practices of overgrazing are substantially reducing many grasslands' performance as carbon sinks. The Rodale Institute says that regenerative agriculture, if practiced on the planet's roughly 3.6 billion acres of tillable land, could sequester up to 40% of current CO2 emissions. They claim that agricultural carbon sequestration has the potential to mitigate global warming. When using biologically-based regenerative practices, this dramatic benefit can be accomplished with no decrease in yields or farmer profits. Organically managed soils can convert carbon dioxide from a greenhouse gas into a food-producing asset.
In 2006, U.S. carbon dioxide emissions, largely from fossil fuel combustion, were estimated at nearly 6.5 billion tons. If a 2,000 lb/acre per year sequestration rate was achieved on all 434 million acres of cropland in the United States, nearly 1.6 billion tons of carbon dioxide would be sequestered per year, mitigating close to one quarter of the country's total fossil fuel emissions.
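The arithmetic behind the "close to one quarter" figure can be reproduced in a few lines. The sketch below assumes the 2,000 lb/acre rate refers to carbon, which is then converted to CO2 with the 44/12 mass ratio; with that reading, the numbers in the paragraph are mutually consistent.

```python
# Reproduce the cropland-sequestration arithmetic from the text.
# Assumption: the 2,000 lb/acre/year rate is carbon, converted here to CO2.

LB_PER_SHORT_TON = 2000.0
CO2_PER_C = 44.01 / 12.01          # mass ratio of CO2 to carbon

rate_lb_c_per_acre = 2000.0        # lb of carbon per acre per year (from text)
cropland_acres = 434e6             # US cropland acreage (from text)
us_emissions_tons_co2 = 6.5e9      # 2006 US CO2 emissions (from text)

carbon_tons = rate_lb_c_per_acre * cropland_acres / LB_PER_SHORT_TON
co2_tons = carbon_tons * CO2_PER_C

print(f"Carbon sequestered: {carbon_tons/1e9:.2f} billion tons/yr")
print(f"As CO2:             {co2_tons/1e9:.2f} billion tons/yr")
print(f"Share of emissions: {co2_tons/us_emissions_tons_co2:.0%}")
```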
See also: Biosequestration.
Forests can be carbon stores, and they are carbon dioxide sinks when they are increasing in density or area. In Canada's boreal forests as much as 80% of the total carbon is stored in the soils as dead organic matter. A 40-year study of African, Asian, and South American tropical forests by the University of Leeds showed that tropical forests absorb about 18% of all carbon dioxide added by fossil fuels. For the last three decades, the amount of carbon absorbed by the world's intact tropical forests has fallen, according to a study published in 2020 in the journal Nature. The total carbon stock in forests decreased from 668 gigatonnes in 1990 to 662 gigatonnes in 2020. However, another study finds that the leaf area index has increased globally since 1981, which was responsible for 12.4% of the accumulated terrestrial carbon sink from 1981 to 2016. The CO2 fertilization effect, on the other hand, was responsible for 47% of the sink, while climate change reduced the sink by 28.6%.
In 2019 they took up a third less carbon than they did in the 1990s, due to higher temperatures, droughts and deforestation. The typical tropical forest may become a carbon source by the 2060s. Truly mature tropical forests, by definition, grow rapidly, with each tree producing at least 10 new trees each year. Based on studies by FAO and UNEP, it has been estimated that Asian forests absorb about 5 tonnes of carbon dioxide per hectare each year. The global cooling effect of carbon sequestration by forests is partially counterbalanced in that reforestation can decrease the reflection of sunlight (albedo). Mid-to-high-latitude forests have a much lower albedo during snow seasons than flat ground, thus contributing to warming. Modeling that compares the effects of albedo differences between forests and grasslands suggests that expanding the land area of forests in temperate zones offers only a temporary cooling benefit.
In the United States in 2004 (the most recent year for which EPA statistics are available), forests sequestered 10.6% (637 megatonnes) of the carbon dioxide released in the United States by the combustion of fossil fuels (coal, oil, and natural gas; 5,657 megatonnes). Urban trees sequestered another 1.5% (88 megatonnes). To further reduce U.S. carbon dioxide emissions by 7%, as stipulated by the Kyoto Protocol, would require the planting of "an area the size of Texas [8% of the area of Brazil] every 30 years". Carbon offset programs are planting millions of fast-growing trees per year to reforest tropical lands, for as little as $0.10 per tree; over their typical 40-year lifetime, one million of these trees will fix 1 to 2 megatonnes of carbon dioxide. In Canada, reducing timber harvesting would have very little impact on carbon dioxide emissions because of the combination of harvest and stored carbon in manufactured wood products along with the regrowth of the harvested forests. Additionally, the amount of carbon released from harvesting is small compared to the amount of carbon lost each year to forest fires and other natural disturbances.
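The tree-planting figures quoted above imply rough unit economics that can be checked directly. The sketch below uses only the numbers in that sentence (ten cents per tree, one million trees fixing 1 to 2 megatonnes of CO2 over about 40 years), so the result is only as reliable as those figures.

```python
# Implied unit economics of the tree-planting figures quoted in the text.

cost_per_tree = 0.10            # dollars (from text)
trees = 1_000_000               # one million trees (from text)
lifetime_years = 40             # typical lifetime (from text)
co2_fixed_tonnes = (1e6, 2e6)   # 1 to 2 megatonnes per million trees (from text)

for total in co2_fixed_tonnes:
    cost_per_tonne = cost_per_tree * trees / total
    per_tree_per_year = total / trees / lifetime_years * 1000  # kg CO2/tree/yr
    print(f"{total/1e6:.0f} Mt case: ${cost_per_tonne:.2f} per tonne CO2, "
          f"~{per_tree_per_year:.0f} kg CO2 per tree per year")
```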
The Intergovernmental Panel on Climate Change concluded that "a sustainable forest management strategy aimed at maintaining or increasing forest carbon stocks, while producing an annual sustained yield of timber fibre or energy from the forest, will generate the largest sustained mitigation benefit". Sustainable management practices keep forests growing at a higher rate over a potentially longer period of time, thus providing net sequestration benefits in addition to those of unmanaged forests.
Life expectancy of forests varies throughout the world, influenced by tree species, site conditions and natural disturbance patterns. In some forests, carbon may be stored for centuries, while in other forests, carbon is released with frequent stand replacing fires. Forests that are harvested prior to stand replacing events allow for the retention of carbon in manufactured forest products such as lumber. However, only a portion of the carbon removed from logged forests ends up as durable goods and buildings. The remainder ends up as sawmill by-products such as pulp, paper and pallets, which often end with incineration (resulting in carbon release into the atmosphere) at the end of their lifecycle. For instance, of the 1,692 megatonnes of carbon harvested from forests in Oregon and Washington from 1900 to 1992, only 23% is in long-term storage in forest products.
See also: Iron fertilization, Enhanced weathering and Ocean nourishment. One way to increase the carbon sequestration efficiency of the oceans is to add micrometre-sized iron particles in the form of either hematite (iron oxide) or melanterite (iron sulfate) to certain regions of the ocean. This has the effect of stimulating growth of plankton. Iron is an important nutrient for phytoplankton, usually made available via upwelling along the continental shelves, inflows from rivers and streams, as well as deposition of dust suspended in the atmosphere. Natural sources of ocean iron have been declining in recent decades, contributing to an overall decline in ocean productivity (NASA, 2003). Yet in the presence of iron nutrients plankton populations quickly grow, or 'bloom', expanding the base of biomass productivity throughout the region and removing significant quantities of CO2 from the atmosphere via photosynthesis. A test in 2002 in the Southern Ocean around Antarctica suggests that between 10,000 and 100,000 carbon atoms are sunk for each iron atom added to the water. More recent work in Germany (2005) suggests that any biomass carbon in the oceans, whether exported to depth or recycled in the euphotic zone, represents long-term storage of carbon. This means that application of iron nutrients in select parts of the oceans, at appropriate scales, could have the combined effect of restoring ocean productivity while at the same time mitigating the effects of human caused emissions of carbon dioxide to the atmosphere.
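The quoted range of 10,000 to 100,000 carbon atoms sunk per iron atom can be turned into a rough mass estimate. The one-gigatonne carbon target in the sketch below is simply an illustrative round number, not a figure from the experiments.

```python
# Rough mass of iron implied by the quoted C:Fe atom ratios (illustrative).

ATOMIC_MASS_C = 12.011    # g/mol
ATOMIC_MASS_FE = 55.845   # g/mol

target_carbon_tonnes = 1e9            # 1 Gt of carbon, an illustrative target
for atoms_c_per_fe in (10_000, 100_000):
    moles_c = target_carbon_tonnes * 1e6 / ATOMIC_MASS_C    # tonnes -> grams -> mol
    moles_fe = moles_c / atoms_c_per_fe
    fe_tonnes = moles_fe * ATOMIC_MASS_FE / 1e6
    print(f"{atoms_c_per_fe:>7,} C per Fe atom -> ~{fe_tonnes:,.0f} tonnes of iron")
```

On these assumptions, sinking a gigatonne of carbon would call for something on the order of tens to hundreds of thousands of tonnes of iron, which is why iron fertilization is discussed as a leverage point despite the ecological uncertainties noted below.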
Because the effect of periodic small scale phytoplankton blooms on ocean ecosystems is unclear, more studies would be helpful. Phytoplankton have a complex effect on cloud formation via the release of substances such as dimethyl sulfide (DMS) that are converted to sulfate aerosols in the atmosphere, providing cloud condensation nuclei, or CCN. But the effect of small scale plankton blooms on overall DMS production is unknown.
Other nutrients such as nitrates, phosphates, and silica as well as iron may cause ocean fertilization. There has been some speculation that using pulses of fertilization (around 20 days in length) may be more effective at getting carbon to ocean floor than sustained fertilization.
There is some controversy over seeding the oceans with iron however, due to the potential for increased toxic phytoplankton growth (e.g. "red tide"), declining water quality due to overgrowth, and increasing anoxia in areas harming other sea-life such as zooplankton, fish, coral, etc.
Since the 1850s, a large proportion of the world's grasslands have been tilled and converted to croplands, allowing the rapid oxidation of large quantities of soil organic carbon. However, in the United States in 2004 (the most recent year for which EPA statistics are available), agricultural soils including pasture land sequestered 0.8% (46 megatonnes) as much carbon as was released in the United States by the combustion of fossil fuels (5,988 megatonnes). The annual amount of this sequestration has been gradually increasing since 1998.
Methods that significantly enhance carbon sequestration in soil include no-till farming, residue mulching, cover cropping, and crop rotation, all of which are more widely used in organic farming than in conventional farming. Because only 5% of US farmland currently uses no-till and residue mulching, there is a large potential for carbon sequestration. Conversion to pastureland, particularly with good management of grazing, can sequester even more carbon in the soil.
Terra preta, an anthropogenic, high-carbon soil, is also being investigated as a sequestration mechanism. By pyrolysing biomass, about half of its carbon can be reduced to charcoal, which can persist in the soil for centuries, and makes a useful soil amendment, especially in tropical soils (biochar or agrichar).
Controlled burns on far north Australian savannas can result in an overall carbon sink. One working example is the West Arnhem Fire Management Agreement, started to bring "strategic fire management across 28,000 km² of Western Arnhem Land". Deliberately starting controlled burns early in the dry season results in a mosaic of burnt and unburnt country which reduces the area of burning compared with stronger, late dry season fires. In the early dry season there are higher moisture levels, cooler temperatures, and lighter wind than later in the dry season; fires tend to go out overnight. Early controlled burns also result in a smaller proportion of the grass and tree biomass being burnt. Emission reductions of 256,000 tonnes of CO2 have been made as of 2007.
For carbon to be sequestered artificially (i.e. not using the natural processes of the carbon cycle) it must first be captured, or it must be significantly delayed or prevented from being re-released into the atmosphere (by combustion, decay, etc.) from an existing carbon-rich material, by being incorporated into an enduring usage (such as in construction). Thereafter it can be passively stored or remain productively utilized over time in a variety of ways.
For instance, upon harvesting, wood (as a carbon-rich material) can be immediately burned or otherwise serve as a fuel, returning its carbon to the atmosphere, or it can be incorporated into construction or a range of other durable products, thus sequestering its carbon over years or even centuries.
A very carefully designed and durable, energy-efficient and energy-capturing building has the potential to sequester (in its carbon-rich construction materials), as much as or more carbon than was released by the acquisition and incorporation of all its materials and than will be released by building-function "energy-imports" during the structure's (potentially multi-century) existence. Such a structure might be termed "carbon neutral" or even "carbon negative". Building construction and operation (electricity usage, heating, etc.) are estimated to contribute nearly half of the annual human-caused carbon additions to the atmosphere.
Natural-gas purification plants often already have to remove carbon dioxide, either to avoid dry ice clogging gas tankers or to prevent carbon-dioxide concentrations exceeding the 3% maximum permitted on the natural-gas distribution grid.
Beyond this, one of the most likely early applications of carbon capture is the capture of carbon dioxide from flue gases at power stations (in the case of coal, this coal pollution mitigation is sometimes known as "clean coal"). A typical new 1000 MW coal-fired power station produces around 6 million tons of carbon dioxide annually. Adding carbon capture to existing plants can add significantly to the costs of energy production; scrubbing costs aside, a 1000 MW coal plant will require the storage of about 50 million barrels of carbon dioxide a year. However, scrubbing is relatively affordable when added to new plants based on coal gasification technology, where it is estimated to raise energy costs for households in the United States using only coal-fired electricity sources from 10 cents per kW·h to 12 cents.
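Those figures imply a rough cost per tonne of CO2 captured for the gasification case. The capacity factor in the sketch below is an assumption; the other inputs are the numbers given in the paragraph above.

```python
# Rough implied capture cost for a new gasification plant (illustrative).
# The 80% capacity factor is an assumption; other figures come from the text.

plant_mw = 1000.0
capacity_factor = 0.80                 # assumed
hours_per_year = 8760.0
extra_cost_per_kwh = 0.02              # 10 -> 12 cents per kWh (from text)
co2_tons_per_year = 6e6                # ~6 million tons per year (from text)

kwh_per_year = plant_mw * 1000.0 * hours_per_year * capacity_factor
extra_cost_per_year = kwh_per_year * extra_cost_per_kwh
print(f"Extra cost: ${extra_cost_per_year/1e6:.0f} million/yr "
      f"-> ${extra_cost_per_year/co2_tons_per_year:.0f} per ton of CO2")
```

With an assumed 80% capacity factor this works out to a little over $20 per ton of CO2, which should be read only as a rough, illustrative figure.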
According to an international team of interdisciplinary scientists in a 2020 study, broad-base adoption of mass timber and their substitution for steel and concrete in new mid-rise construction projects over the next few decades has the potential to turn timber buildings into a global carbon sink, as they store the carbon dioxide taken up from the air by trees that are harvested and used as engineered timber. Noting the demographic need for new urban construction for the next thirty years, the team analyzed four scenarios for the transition to mass-timber new mid-rise construction. Assuming business as usual, only 0.5% of new buildings worldwide would be constructed with timber by 2050 (scenario 1). This could be driven up to 10% (scenario 2) or 50% (scenario 3), assuming mass timber manufacturing would increase as a material revolution replacing cement and steel in urban construction by wood scales up accordingly. Lastly, if countries with current low industrialization level, e.g., Africa, Oceania, and parts of Asia, would also make the transition to timber (including bamboo), then even 90% timber by 2050 (scenario 4) is conceivable. This could result in storing between 10 million tons of carbon per year in the lowest scenario and close to 700 million tons in the highest scenario. The study found that this potential could be realized under two conditions. First, the harvested forests would need to be sustainably managed, governed, and used. Second, wood from demolished timber buildings would need to be reused or preserved on land in various forms.
See main article: Carbon capture and storage.
Currently, capture of carbon dioxide is performed on a large scale by absorption of carbon dioxide onto various amine-based solvents. Other techniques are currently being investigated, such as pressure swing adsorption, temperature swing adsorption, gas separation membranes, cryogenics and flue capture.
In coal-fired power stations, the main alternatives to retrofitting amine-based absorbers to existing power stations are two new technologies: coal gasification combined-cycle and oxy-fuel combustion. Gasification first produces a "syngas" primarily of hydrogen and carbon monoxide, which is burned, with carbon dioxide filtered from the flue gas. Oxy-fuel combustion burns the coal in oxygen instead of air, producing only carbon dioxide and water vapour, which are relatively easily separated. Some of the combustion products must be returned to the combustion chamber, either before or after separation, otherwise the temperatures would be too high for the turbine.
Another long-term option is carbon capture directly from the air using hydroxides. The air would literally be scrubbed of its CO2 content. This idea offers an alternative to non-carbon-based fuels for the transportation sector.
Examples of carbon sequestration at coal plants include converting carbon from smokestacks into baking soda, and algae-based carbon capture, circumventing storage by converting algae into fuel or feed.
Another proposed form of carbon sequestration in the ocean is direct injection. In this method, carbon dioxide is pumped directly into the water at depth, and expected to form "lakes" of liquid CO2 at the bottom. Experiments carried out in moderate to deep waters (350 m and deeper) indicate that the liquid CO2 reacts to form solid CO2 clathrate hydrates, which gradually dissolve in the surrounding waters.
This method, too, has potentially dangerous environmental consequences. The carbon dioxide does react with the water to form carbonic acid, H2CO3; however, most (as much as 99%) remains as dissolved molecular CO2. The equilibrium would no doubt be quite different under the high pressure conditions in the deep ocean. In addition, if deep-sea bacterial methanogens that reduce carbon dioxide were to encounter the carbon dioxide sinks, levels of methane gas may increase, leading to the generation of an even worse greenhouse gas. The resulting environmental effects on benthic life forms of the bathypelagic, abyssopelagic and hadopelagic zones are unknown. Even though life appears to be rather sparse in the deep ocean basins, energy and chemical effects in these deep basins could have far-reaching implications. Much more work is needed here to define the extent of the potential problems.
Carbon storage in or under oceans may not be compatible with the Convention on the Prevention of Marine Pollution by Dumping of Wastes and Other Matter.
An additional method of long-term ocean-based sequestration is to gather crop residue such as corn stalks or excess hay into large weighted bales of biomass and deposit it in the alluvial fan areas of the deep ocean basin. Dropping these residues in alluvial fans would cause the residues to be quickly buried in silt on the sea floor, sequestering the biomass for very long time spans. Alluvial fans exist in all of the world's oceans and seas where river deltas fall off the edge of the continental shelf such as the Mississippi alluvial fan in the gulf of Mexico and the Nile alluvial fan in the Mediterranean Sea. A downside, however, would be an increase in aerobic bacteria growth due to the introduction of biomass, leading to more competition for oxygen resources in the deep sea, similar to the oxygen minimum zone.
The method of geo-sequestration or geological storage involves injecting carbon dioxide directly into underground geological formations. Declining oil fields, saline aquifers, and unmineable coal seams have been suggested as storage sites. Caverns and old mines that are commonly used to store natural gas are not considered, because of a lack of storage safety.
CO2 has been injected into declining oil fields for more than 40 years, to increase oil recovery. This option is attractive because the storage costs are offset by the sale of additional oil that is recovered. Typically, 10–15% additional recovery of the original oil in place is possible. Further benefits are the existing infrastructure and the geophysical and geological information about the oil field that is available from the oil exploration. Another benefit of injecting CO2 into oil fields is that CO2 is soluble in oil. Dissolving CO2 in oil lowers the viscosity of the oil and reduces its interfacial tension, which increases the oil's mobility. All oil fields have a geological barrier preventing upward migration of oil. As most oil and gas has been in place for millions to tens of millions of years, depleted oil and gas reservoirs can contain carbon dioxide for millennia. Identified possible problems are the many 'leak' opportunities provided by old oil wells, the need for high injection pressures and acidification which can damage the geological barrier. Other disadvantages of old oil fields are their limited geographic distribution and depths, which require high injection pressures for sequestration. Below a depth of about 1000 m, carbon dioxide is injected as a supercritical fluid, a material with the density of a liquid, but the viscosity and diffusivity of a gas.
Unmineable coal seams can be used to store CO2, because CO2 adsorbs to the coal surface, ensuring safe long-term storage. In the process it releases methane that was previously adsorbed to the coal surface and that may be recovered. Again the sale of the methane can be used to offset the cost of the CO2 storage. Release or burning of methane would of course at least partially offset the obtained sequestration result – except when the gas is allowed to escape into the atmosphere in significant quantities: methane has a higher global warming potential than CO2.
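The roughly 1,000 m threshold mentioned above follows from simple hydrostatics combined with the CO2 critical point (about 7.38 MPa and 31.1 °C). In the sketch below the water density, surface temperature, and geothermal gradient are assumed round values, so the exact depth at which conditions become supercritical varies from site to site.

```python
# Why ~1,000 m: hydrostatic pressure and temperature vs. the CO2 critical point.
# Surface temperature, water density, and geothermal gradient are assumed values.

CO2_CRIT_P_MPA = 7.38
CO2_CRIT_T_C = 31.1

WATER_DENSITY = 1000.0       # kg/m3 (fresh water; brine would be slightly higher)
G = 9.81                     # m/s2
SURFACE_T_C = 15.0           # assumed surface temperature
GEOTHERMAL_GRADIENT = 0.03   # degC per metre (assumed ~30 degC/km)

for depth_m in (500, 800, 1000, 1500):
    pressure_mpa = 0.101 + WATER_DENSITY * G * depth_m / 1e6
    temp_c = SURFACE_T_C + GEOTHERMAL_GRADIENT * depth_m
    supercritical = pressure_mpa > CO2_CRIT_P_MPA and temp_c > CO2_CRIT_T_C
    print(f"{depth_m:>5} m: {pressure_mpa:5.1f} MPa, {temp_c:4.1f} degC, "
          f"supercritical: {supercritical}")
```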
Saline aquifers contain highly mineralized brines and have so far been considered of no benefit to humans except in a few cases where they have been used for the storage of chemical waste. Their advantages include a large potential storage volume and relatively common occurrence, reducing the distance over which CO2 has to be transported. The major disadvantage of saline aquifers is that relatively little is known about them compared to oil fields; to keep the cost of storage acceptable, geophysical exploration may be limited, resulting in larger uncertainty about the structure of a given aquifer. Another disadvantage is that as the salinity of the water increases, less CO2 can be dissolved into aqueous solution. Unlike storage in oil fields or coal beds, no side product will offset the storage cost. Leakage of CO2 back into the atmosphere may be a problem in saline-aquifer storage; however, current research shows that several trapping mechanisms immobilize the CO2 underground, reducing the risk of leakage.
A major research project examining the geological sequestration of carbon dioxide is currently being performed at an oil field at Weyburn in south-eastern Saskatchewan. In the North Sea, Norway's Equinor natural-gas platform Sleipner strips carbon dioxide out of the natural gas with amine solvents and disposes of this carbon dioxide by geological sequestration. Sleipner reduces emissions of carbon dioxide by approximately one million tonnes a year. The cost of geological sequestration is minor relative to the overall running costs. As of April 2005, BP is considering a trial of large-scale sequestration of carbon dioxide stripped from power plant emissions in the Miller oilfield as its reserves are depleted.
In October 2007, the Bureau of Economic Geology at The University of Texas at Austin received a 10-year, $38 million subcontract to conduct the first intensively monitored, long-term project in the United States studying the feasibility of injecting a large volume of CO2 for underground storage. The project is a research program of the Southeast Regional Carbon Sequestration Partnership (SECARB), funded by the National Energy Technology Laboratory of the U.S. Department of Energy (DOE). The SECARB partnership will demonstrate CO2 injection rate and storage capacity in the Tuscaloosa-Woodbine geologic system that stretches from Texas to Florida. Beginning in fall 2007, the project will inject CO2 at the rate of one million tons per year, for up to 1.5 years, into brine up to 10,000 feet below the land surface near the Cranfield oil field about 15 miles east of Natchez, Mississippi. Experimental equipment will measure the ability of the subsurface to accept and retain CO2.
Mineral sequestration aims to trap carbon in the form of solid carbonate salts. This process occurs slowly in nature and is responsible for the deposition and accumulation of limestone over geologic time. Carbonic acid in groundwater slowly reacts with complex silicates to dissolve calcium, magnesium, alkalis and silica and leave a residue of clay minerals. The dissolved calcium and magnesium react with bicarbonate to precipitate calcium and magnesium carbonates, a process that organisms use to make shells. When the organisms die, their shells are deposited as sediment and eventually turn into limestone. Limestones have accumulated over billions of years of geologic time and contain much of Earth's carbon. Ongoing research aims to speed up similar reactions involving alkali carbonates.
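As an illustrative worked example of this weathering-and-precipitation pathway (the equations are added here for clarity and do not appear in the original text; the simple silicate wollastonite, CaSiO3, stands in for the more complex natural silicates), carbonic acid first liberates dissolved calcium, which then precipitates as carbonate:

CaSiO3 + 2 CO2 + H2O → Ca2+ + 2 HCO3− + SiO2
Ca2+ + 2 HCO3− → CaCO3 + CO2 + H2O

The net effect is CaSiO3 + CO2 → CaCO3 + SiO2, locking one molecule of CO2 into solid carbonate for each unit of silicate weathered.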
Several serpentinite deposits are being investigated as potentially large-scale CO2 storage sinks, such as those found in New South Wales, Australia, where the first mineral carbonation pilot plant project is underway. Beneficial re-use of magnesium carbonate from this process could provide feedstock for new products developed for the built environment and agriculture without returning the carbon to the atmosphere, and so act as a carbon sink.
One proposed reaction is that of the olivine-rich rock dunite, or its hydrated equivalent serpentinite with carbon dioxide to form the carbonate mineral magnesite, plus silica and iron oxide (magnetite).
Serpentinite sequestration is favored because of the non-toxic and stable nature of magnesium carbonate. The ideal reactions involve the magnesium endmember components of the olivine (reaction 1) or serpentine (reaction 2), the latter derived from earlier olivine by hydration and silicification (reaction 3). The presence of iron in the olivine or serpentine reduces the efficiency of sequestration, since the iron components of these minerals break down to iron oxide and silica (reaction 4).
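The numbered reactions referred to above are not reproduced in the text as it survives here. The following idealised equations are a plausible reconstruction using the standard magnesium endmember formulas (and, for reaction 4, the iron endmember fayalite); the exact forms intended by the original should be treated as assumptions:

Reaction 1 (olivine carbonation): Mg2SiO4 + 2 CO2 → 2 MgCO3 + SiO2
Reaction 2 (serpentine carbonation): Mg3Si2O5(OH)4 + 3 CO2 → 3 MgCO3 + 2 SiO2 + 2 H2O
Reaction 3 (hydration and silicification of olivine to serpentine): 3 Mg2SiO4 + SiO2 + 4 H2O → 2 Mg3Si2O5(OH)4
Reaction 4 (breakdown of the iron component to magnetite and silica): 3 Fe2SiO4 + 2 H2O → 2 Fe3O4 + 3 SiO2 + 2 H2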
One study in 2009 found that the fraction of fossil-fuel emissions absorbed by the oceans may have declined by up to 10% since 2000, indicating that oceanic sequestration may be sublinear. Another 2009 study found that the fraction of CO2 emissions absorbed by terrestrial ecosystems and the oceans has not changed since 1850, indicating undiminished capacity.
One study in 2020 found that 32 tracked Brazilian non-Amazon seasonal tropical forests declined from a carbon sink to a carbon source in 2013 and concludes that "policies are needed to mitigate the emission of greenhouse gases and to restore and protect tropical seasonal forests".
The Court of King's Bench, formally known as The Court of the King Before the King Himself, was a court of common law in the English legal system. Created in the late 12th to early 13th century from the curia regis, the King's Bench initially followed the monarch on his travels. The King's Bench finally joined the Court of Common Pleas and Exchequer of Pleas in Westminster Hall in 1318, making its last travels in 1421. The King's Bench was merged into the High Court of Justice by the Supreme Court of Judicature Act 1873, after which point the King's Bench was a division within the High Court. The King's Bench was staffed by one Chief Justice (now the Lord Chief Justice of England and Wales) and usually three Puisne Justices.
In the 15th and 16th centuries, the jurisdiction and caseload of the King's Bench, one of the two principal common law courts along with the Common Pleas, were significantly challenged by the rise of the Court of Chancery and equitable doctrines. To recover, the King's Bench undertook a scheme of revolutionary reform, creating less expensive, faster and more versatile types of pleading in the form of bills as opposed to the more traditional writs. Although the reforms did not immediately stem the tide, they helped the King's Bench to recover and increase its workload in the long term. While there was a steep decline in business from 1460 to 1540, as the new reforms began to take effect the King's Bench's business was significantly boosted; between 1560 and 1640, it rose tenfold. The Common Pleas became suspicious of the new developments, as legal fictions such as the Bill of Middlesex damaged its own business, and fought against the King's Bench in a reactionary and increasingly conservative way; an equilibrium between the courts was eventually reached in the 17th century and lasted until the merger in 1873.
The King's Bench's jurisdiction initially covered a wide range of criminal matters, any business not claimed by the other courts, and any cases concerning the monarch. Until 1830, the King's Bench acted as a court of appeal for the Exchequer of Pleas and Common Pleas, while its own decisions required the sign-off of Parliament. From 1585, the Court of Exchequer Chamber served as a court of appeal for King's Bench decisions.
Originally, the sole "court" was the curia regis, one of the three central administrative bodies along with the Exchequer and Chancery, from which the Court of Chancery formed. This curia was the King's court, composed of those advisers and courtiers who followed the King as he travelled around the country. It was not a dedicated court of law, but a descendant of the witenagemot. In concert with the curia regis, eyre circuits staffed by itinerant judges dispensed justice throughout the country, operating on fixed paths at certain times. These judges were also members of the curia, and would hear cases on the King's behalf in the "lesser curia regis". Because the curia travelled with the King, it caused problems with the dispensation of justice; if the King went out of the country, or, as Richard I did, spent much of his reign outside it, the curia followed. To remedy this a central "bench" was established, with the Court of Common Pleas, initially split from the Exchequer of Pleas, receiving official recognition in Magna Carta so that common pleas could be heard in "some fixed place". There were thus two common law courts: the curia, which followed the King, and the Common Pleas, which sat in Westminster Hall. The curia eventually became known as the King's Bench, with the King himself required for the court to sit.
There is some controversy over whether the original fixed court was the Common Pleas or King's Bench. In 1178, a chronicler recorded that when Henry II:
learned that the land and the men of the land were burdened by so great a number of justices, for there were eighteen, chose with the counsel of the wise men of his Kingdom five only, two clerks and three laymen, all of his private family, and decreed that these five should hear all complaints of the Kingdom and should do right and should not depart from the king's court but should remain there to hear the complaints of men, with this understanding that, if there should come up among them any question which could not be brought to a conclusion by them, it should be presented to a royal hearing and be determined by the king and the wiser men of the kingdom.
This was originally interpreted as the foundation of the King's Bench, with the Court of Common Pleas not coming into existence until the signing of the Magna Carta. The later theory was that Henry II's decree created the Court of Common Pleas, not the King's Bench, and that the King's Bench instead split from the Common Pleas at some later time. The first records of an independent King's Bench come from 1234, when distinct plea rolls are found for each court. Modern academics give 1234 as the founding date for the King's Bench as a fully independent tribunal, considering it part of the law reform which took place from 1232 to 1234. Under Edward I, the presence of the King in the court became more and more irregular, and by 1318 the court sat independent of the monarch. Its last travels around the country were in 1414 to Leicestershire, Staffordshire and Shropshire, and a visit to Northamptonshire in 1421. From then onwards, the King's Bench became a fixed court rather than one that followed the King. Like the Common Pleas, the King's Bench sat in Westminster Hall until its dissolution.
During the 15th century, the traditional superiority of the common law courts was challenged by ecclesiastical courts and the equitable jurisdiction of the Lord Chancellor, exercised through the Court of Chancery. These courts were more attractive to litigants because of their informality and the simple method used to arrest defendants. The bills of complaint and subpoena used by the Chancery made court procedure far faster, and from 1460 to 1540 there was a steep decline in the number of cases in the common law courts, coinciding with a sharp increase in cases in the newer courts. This loss of business was quickly recognised by the King's Bench, which was urged by Fairfax J in 1501 to develop new remedies so that "subpoenas would not be used as often as they are at present". From 1500 the King's Bench began reforming to increase its business and jurisdiction, with the tide finally turning in its favour by 1550.
The recovery of the King's Bench was thanks to its use of Chancery-like procedure; centrally, the system of bills. Prior to this, a writ would have to be issued, with different writs depending on the issue. If A wished to sue B for trespass, debt and detinue, the court would have to issue an individual writ for each action, with associated time delays and costs for A, and then ensure that B appeared in court. Bills, on the other hand, were traditionally used against court officials and the court's prisoners; as such, the defendant was assumed to already be in the court's custody and presence in court was not needed. Thus a legal fiction arose; if A wished to sue B for trespass, debt and detinue, he would have a writ issued for trespass. B would be arrested as a result, and the debt and detinue actions undertaken by bill after he had been detained. Eventually it became even more fictitious; if A wished to sue B merely for debt and detinue, a trespass writ would be obtained and then quietly dismissed when B was detained in custody. This was originally undertaken through getting a writ of trespass from the Chancery, but eventually a shorter workaround was used; since the King's Bench retained criminal jurisdiction over Middlesex, the trespass (which was fictitious anyway) would be said to have occurred there, allowing the King's Bench to issue a bill of arrest on its own. This became known as the Bill of Middlesex, and undermined the jurisdiction of the Court of Common Pleas, which would normally deal with such civil cases.
The advantages of this method were that bills were substantially cheaper and, unlike writs, did not tie the plaintiff down; once the case came to court the bill could be amended to include any action or actions the plaintiff wanted to enforce. In addition, by avoiding the Chancery writ the process was cheaper still. The result of this was substantial; between 1560 and 1640, the King's Bench's business rose tenfold. This period also saw a substantial broadening of the remedies available in the common law. The main remedy and method was action on the case, which justices expanded to encompass other things. In 1499 it enabled the enforcement of parol promises, which rendered Chancery subpoenas obsolete; later developments included the recovery of debts, suing for defamatory words (previously an ecclesiastical matter) and action on the case for trover and conversion. Most of this reform took place under Fineux CJ, who did not live to see the results of his work; it took over 100 years for the reforms to fully reverse the decline in business.
While these reforms succeeded in forming an equilibrium between the old common law courts and the new courts, they were viewed with suspicion by the Common Pleas, which became highly reactionary to the changes the King's Bench attempted to introduce. While the King's Bench was more revolutionary, the Common Pleas became increasingly conservative in its attempts to avoid ceding cases. The disparity between the reformist King's Bench and conservative Common Pleas was exacerbated by the fact that the three Common Pleas prothonotaries could not agree on how to cut costs, leaving the court both expensive and of limited malleability while the King's Bench became faster, cheaper and more varied in its jurisdiction.
The troubles during this period are best illustrated by Slade's Case. Under the medieval common law, claims seeking the repayment of a debt or other matters could only be pursued through a writ of debt in the Common Pleas, a problematic and archaic process. By 1558 the lawyers had succeeded in creating another method, enforced by the Court of King's Bench, through the action of assumpsit, which was technically for deceit. The legal fiction used was that by failing to pay after promising to do so, a defendant had committed deceit, and was liable to the plaintiff. The conservative Common Pleas, through the appellate court the Court of Exchequer Chamber, began to overrule decisions made by the King's Bench on assumpsit, causing friction between the courts. In Slade's Case, the Chief Justice of the King's Bench, John Popham, deliberately provoked the Common Pleas into bringing an assumpsit action to a higher court where the Justices of the King's Bench could vote, allowing them to overrule the Common Pleas and establish assumpsit as the main contractual action. After the death of Edmund Anderson, the more activist Francis Gawdy became Chief Justice of the Common Pleas, which briefly led to a less reactionary and more revolutionary Common Pleas.
The struggle continued even after this point. The Interregnum granted some respite to the Common Pleas, as fines on original writs were abolished, hurting the King's Bench; but in 1660 the fines were reinstated and "then the very attorneys of the Common Pleas boggled at them and carried all their finable business to the King's Bench". In 1661 the Common Pleas attempted to reverse this by pushing for an Act of Parliament to abolish latitats based on legal fictions, forbidding "special bail" in any case where "the true cause of action" was not expressed in the process. The King's Bench got around this in the 1670s; the Act did not say that the process had to be true, so the court continued to use legal fictions, simply ensuring that the true cause of action was expressed in the process, regardless of whether or not it was correct. The Bill of Middlesex disclosed the true cause of action, satisfying the 1661 statute, but did not require a valid complaint. This caused severe friction within the court system, and Francis North, Chief Justice of the Common Pleas, eventually reached a compromise by allowing such legal fictions in the Common Pleas as well as the King's Bench.
The unintended outcome of these compromises was that by the end of Charles II's reign, all three common law courts had a similar jurisdiction over most common pleas, with similar processes. By the 18th century, it was customary to speak of the "twelve justices" of the three courts, not distinguishing them, and assize cases were shared equally between them. In 1828, Henry Brougham complained that:
[t]he jurisdiction of the Court of King's Bench, for example, was originally confined to pleas of the Crown, and then extended to actions where violence was used – actions of trespass, by force; but now, all actions are admissible within its walls, through the medium of a legal fiction, which was adopted for the purpose of enlarging its authority, that every person sued is in the custody of the marshal of the court and may, therefore, be proceeded against for any personal cause of actions. Thus, by degrees, this court has drawn over to itself actions which really belong to...the Court of Common Pleas. The Court of Common Pleas, however...never was able to obtain cognizance of – the peculiar subject of King's Bench jurisdiction – Crown Pleas... the Exchequer has adopted a similar course for, though it was originally confined to the trial of revenue cases, it has, by means of another fiction – the supposition that everybody sued is a debtor to the Crown, and further, that he cannot pay his debt, because the other party will not pay him, – opened its doors to every suitor, and so drawn to itself the right of trying cases, that were never intended to be placed within its jurisdiction.
The purpose of Brougham's speech was to illustrate that three courts of identical jurisdiction were unnecessary, and further that it would create a situation where the best judges, lawyers and cases would eventually go to one court, overburdening that body and leaving the others near useless. In 1823, 43,465 actions were brought in the King's Bench, 13,009 in the Common Pleas and 6,778 in the Exchequer of Pleas. Not surprisingly, the King's Bench judges were "immoderately over burdened", the Common Pleas judges were "fully occupied in term, and much engaged in vacation also" and the Barons of the Exchequer were "comparatively little occupied either in term or vacation".
In response to this and the report of a committee investigating the slow pace of the Court of Chancery, the Judicature Commission was formed in 1867, and given a wide remit to investigate reform of the courts, the law, and the legal profession. Five reports were issued, from 25 March 1869 to 10 July 1874, with the first (dealing with the formation of a single Supreme Court of Judicature) considered the most influential. The report disposed of the previous idea of merging the common law and equity, and instead suggested a single Supreme Court capable of using both. In 1870 the Lord Chancellor, Lord Hatherley, attempted to bring the recommendations into law through an Act of Parliament, but did not go to the trouble of consulting the judiciary or the leader of the Conservatives, who controlled the House of Lords. The bill ran into strong opposition from lawyers and judges, particularly Alexander Cockburn. After Hatherley was replaced by Lord Selborne in September 1872, a second bill was introduced after consultation with the judiciary; although along the same lines, it was far more detailed.
The Act, finally passed as the Supreme Court of Judicature Act 1873, merged the Common Pleas, Exchequer, Queen's Bench and Court of Chancery into one body, the High Court of Justice, with the divisions between the courts to remain. The Queen's Bench thus ceased to exist as a separate court, holding its last session on 6 July 1875, and continued only as the Queen's Bench Division of the High Court. The existence of the same courts as divisions of one unified body was a quirk of constitutional law, which prevented the compulsory demotion or retirement of Chief Justices. Thus all three Chief Justices (Lord Chief Justice Sir Alexander Cockburn, Chief Justice of the Common Pleas Lord Coleridge and Chief Baron of the Exchequer Sir Fitzroy Kelly) continued in post. Kelly and Cockburn died in 1880, allowing for the abolition of the Common Pleas Division and Exchequer Division by Order in Council on 16 December 1880. The High Court was reorganised into the Chancery Division, Queen's Bench Division and the Probate, Divorce and Admiralty Division.
Due to a misunderstanding by Sir Edward Coke in his Institutes of the Lawes of England, academics thought for a long time that the King's Bench was primarily a criminal court. This was factually incorrect; no indictment was tried by the King's Bench until January 1323, and no record of the court ordering the death penalty is found until halfway through Edward II's reign. The court did have some criminal jurisdiction, with a royal ordinance in 1293 directing conspiracy cases to be brought to the King's Bench and the court's judges acting in trailbaston commissions around the country. A. T. Carter, in his History of English Legal Institutions, defines the early King's Bench jurisdiction as "to correct all crimes and misdemeanours that amounted to a breach of the peace, the King being then plaintiff, for such were in derogation of the Jura regalia; and to take cognizance of everything not parcelled out to the other courts". By the end of the 14th century much of the criminal jurisdiction had declined, although the court maintained a criminal jurisdiction over all cases in Middlesex, the county where Westminster Hall stood. The King's Bench's main jurisdiction was over "pleas of the crown"; cases which involved the King in some way. With the exception of revenue matters, which were handled by the Exchequer of Pleas, the King's Bench held exclusive jurisdiction over these cases.
The Court of King's Bench did act as an appellate body, hearing appeals from the Court of Common Pleas, eyre circuits, assize courts and local courts, but was not a court of last resort; its own records were sent to Parliament to be signed off on. The creation of the Court of Exchequer Chamber in 1585 created a court to which King's Bench decisions could be appealed, and with the expansion of the Exchequer Chamber's jurisdiction in 1830 the King's Bench ceased to be an appellate court. Thanks to the Bill of Middlesex and other legal fictions, the King's Bench gained much of the Common Pleas's jurisdiction, although the Common Pleas remained the sole place where real property claims could be brought.
The head of the court was the Chief Justice of the King's Bench, a position established by 1268. From the 14th century onwards, the Chief Justice was appointed by a writ, in Latin until 1727 and in English from then on. The Chief Justice was the most senior judge in the superior courts, having superiority over the Chief Justice of the Common Pleas and Chief Baron of the Exchequer, and from 1612 the Master of the Rolls. Unlike other Chief Justices, who were appointed to serve "during the King's Pleasure", the appointment as Chief Justice of the King's Bench "did not usually specify any particular tenure". This practice ended in 1689, when all of the Chief Justices became appointed to serve "during good behaviour". The initial salary was £40 a year, with an additional £66 in 1372 and an increase to a total of £160 in 1389. An ordinance of 1646 set a fixed salary of £1,000, increased to £2,000 in 1714, £4,000 in 1733, and finally peaked at £10,000 a year in 1825. Pension arrangements were first made in 1799, peaking at £4,000 a year in 1825. The position remains to this day; after the dissolution of the Court of King's Bench, the Chief Justice has instead been the Lord Chief Justice of England and Wales, now the head of the Judiciary of England and Wales.
A Chief Justice of the King's Bench was assisted in his work by a number of Justices of the King's Bench. Occasionally appointed before 1272, the number fluctuated considerably between 1 and 4; from 1522, the number was fixed at 3. Provisions for a fourth were established in 1830, and a fifth in 1868. Following the dissolution of the Court of King's Bench, the remaining Justices became Justices of the Queen's Bench Division of the High Court of Justice. Justices were originally paid £26 a year, increasing to £66 in 1361, and £100 in 1389. An ordinance of 1645 increased this to £1,000, with the salary peaking at £5,500 in 1825. As with the Chief Justice, pension arrangements were formally organised in 1799, starting at £2,000 a year and peaking at £3,500 in 1825.
In jurisdictions following the English common law system, equity is the body of law which was developed in the English Court of Chancery and which is now administered concurrently with the common law. In common law jurisdictions, the word "equity" "is not a synonym for 'general fairness' or 'natural justice'", but refers to "a particular body of rules that originated in a special system of courts."
In common law, a writ is a formal written order issued by a body with administrative or judicial jurisdiction; in modern usage, this body is generally a court. Warrants, prerogative writs, and subpoenas are common types of writ, but many forms exist and have existed.
The Court of Chancery was a court of equity in England and Wales that followed a set of loose rules to avoid the slow pace of change and possible harshness of the common law. The Chancery had jurisdiction over all matters of equity, including trusts, land law, the estates of lunatics and the guardianship of infants. Its initial role was somewhat different: as an extension of the Lord Chancellor's role as Keeper of the King's Conscience, the Court was an administrative body primarily concerned with conscientious law. Thus the Court of Chancery had a far greater remit than the common law courts, whose decisions it had the jurisdiction to overrule for much of its existence, and was far more flexible. Until the 19th century, the Court of Chancery could apply a far wider range of remedies than common law courts, such as specific performance and injunctions, and had some power to grant damages in special circumstances. With the shift of the Exchequer of Pleas towards a common law court and loss of its equitable jurisdiction by the Administration of Justice Act 1841, the Chancery became the only national equitable body in the English legal system.
The Queen's Bench (French: Cour du banc de la Reine), or, during the reign of a male monarch, the King's Bench, is the superior court in a number of jurisdictions within some of the Commonwealth realms. The original King's Bench, founded in 1215 in England, was one of the ancient courts of the land and is now a division of the High Court of Justice of England and Wales. In the Commonwealth, the term Queen-on-the-Bench, or King-on-the-Bench, is a title sometimes used to refer to the monarch in their ceremonial role within the justice system, as the fount of justice, in that justice is carried out in their name.
The Judicature Acts are a series of Acts of Parliament, beginning in the 1870s, which aimed to fuse the hitherto split system of courts in England and Wales. The first two Acts were the Supreme Court of Judicature Act 1873 and the Supreme Court of Judicature Act 1875, with a further series of amending acts.
The Exchequer of Pleas, or Court of Exchequer, was a court that dealt with matters of equity, a set of legal principles based on natural law and common law in England and Wales. Originally part of the curia regis, or King's Council, the Exchequer of Pleas split from the curia in the 1190s to sit as an independent central court. The Court of Chancery's reputation for tardiness and expense resulted in much of its business transferring to the Exchequer. The Exchequer and Chancery, with similar jurisdictions, drew closer together over the years until an argument was made during the 19th century that having two seemingly-identical courts was unnecessary. As a result, the Exchequer lost its equity jurisdiction. With the Judicature Acts, the Exchequer was formally dissolved as a judicial body by an Order in Council on 16 December 1880.
The forms of action were the different procedures by which a legal claim could be made during much of the history of the English common law. Depending on the court, a plaintiff would purchase a writ in Chancery which would set in motion a series of events eventually leading to a trial in one of the medieval common law courts. Each writ entailed a different set of procedures and remedies which together amounted to the "form of action".
A court of equity, equity court or chancery court is a court that is authorized to apply principles of equity, as opposed to those of law, to cases brought before it.
In tort law, detinue is an action to recover for the wrongful taking of personal property. It is initiated by an individual who claims a greater right to immediate possession of the property than the current possessor. For an action in detinue to succeed, a claimant must first prove that he had a better right to possession of the chattel than the defendant, and second that the defendant refused to return the chattel once demanded by the claimant.
Assumpsit, or more fully, action in assumpsit, was a form of action at common law used to enforce what are now called obligations arising in tort and contract; and in some common law jurisdictions, unjust enrichment.
A Serjeant-at-Law (SL), commonly known simply as a Serjeant, was a member of an order of barristers at the English and Irish bar. The position of Serjeant-at-Law, or Sergeant-Counter, was centuries old; there are writs dating to 1300 which identify them as descended from figures in France before the Norman Conquest, thus the Serjeants are said to be the oldest formally created order in England. The order rose during the 16th century as a small, elite group of lawyers who took much of the work in the central common law courts.
The writs of trespass and trespass on the case are the two catchall torts from English common law, the former involving trespass against the person, the latter involving trespass against anything else which may be actionable. The writ is also known in modern times as action on the case and can be sought for any action that may be considered as a tort but is yet to be an established category.
The writ of quominus, or writ of quo minus, was a writ and legal fiction which allowed the Court of Exchequer to obtain a jurisdiction over cases normally brought in the Court of Common Pleas. The Exchequer was tasked with collecting the King's revenue, and the legal fiction worked by having the plaintiff in a debt case claim that he was a debtor to the king, and that the defendant's debt prevented him paying the King. As such, the defendant would be arrested, and the case heard by the Exchequer. The writ's predecessors were in use from at least 1230, and it was in common use during the 16th century. The use continued into the 19th century, until all original writs were abolished in 1883.
The Court of Common Pleas, or Common Bench, was a common law court in the English legal system that covered "common pleas"; actions between subject and subject, which did not concern the king. Created in the late 12th to early 13th century after splitting from the Exchequer of Pleas, the Common Pleas served as one of the central English courts for around 600 years. Authorised by Magna Carta to sit in a fixed location, the Common Pleas sat in Westminster Hall for its entire existence, joined by the Exchequer of Pleas and Court of King's Bench.
Justice of the King's Bench, or Justice of the Queen's Bench during the reign of a female monarch, was a puisne judicial position within the Court of King's Bench, under the Chief Justice. The King's Bench was a court of common law which modern academics argue was founded independently in 1234, having previously been part of the curia regis. The court became a key part of the Westminster courts, along with the Exchequer of Pleas and the Court of Common Pleas ; the latter was deliberately stripped of its jurisdiction by the King's Bench and Exchequer, through the Bill of Middlesex and Writ of Quominus respectively. As a result, the courts jockeyed for power. In 1828 Henry Brougham, a Member of Parliament, complained in Parliament that as long as there were three courts unevenness was inevitable, saying that "It is not in the power of the courts, even if all were monopolies and other restrictions done away, to distribute business equally, as long as suitors are left free to choose their own tribunal", and that there would always be a favourite court, which would therefore attract the best lawyers and judges and entrench its position. The outcome was the Supreme Court of Judicature Act 1873, under which all the central courts were made part of a single Supreme Court of Judicature. Eventually the government created a High Court of Justice under Lord Coleridge by an Order in Council of 16 December 1880. At this point, the King's Bench formally ceased to exist.
The Bill of Middlesex was a legal fiction used by the Court of King's Bench to gain jurisdiction over cases traditionally in the remit of the Court of Common Pleas. Hinging on the King's Bench's remaining criminal jurisdiction over the county of Middlesex, the Bill allowed it to take cases traditionally in the remit of other common law courts by claiming that the defendant had committed trespass in Middlesex. Once the defendant was in custody, the trespass complaint would be quietly dropped and other complaints would be substituted.
Slade's Case was a case in English contract law that ran from 1596 to 1602. Under the medieval common law, claims seeking the repayment of a debt or other matters could only be pursued through a writ of debt in the Court of Common Pleas, a problematic and archaic process. By 1558 the lawyers had succeeded in creating another method, enforced by the Court of King's Bench, through the action of assumpsit, which was technically for deceit. The legal fiction used was that by failing to pay after promising to do so, a defendant had committed deceit, and was liable to the plaintiff. The conservative Common Pleas, through the appellate court the Court of Exchequer Chamber, began to overrule decisions made by the King's Bench on assumpsit, causing friction between the courts.
The Exchequer of Ireland was a body in the Kingdom of Ireland tasked with collecting royal revenue. Modelled on the English Exchequer, it was created in 1210 after King John of England applied English law and legal structure to his Lordship of Ireland. The Exchequer was divided into two parts; the Superior Exchequer, which acted as a court of equity and revenue in a way similar to the English Exchequer of Pleas, and the Inferior Exchequer, which directly collected revenue from those who owed The Crown money, principally rents for Crown lands. The Exchequer primarily worked in a way similar to the English legal system, holding a similar jurisdiction. Following the Act of Union 1800, which incorporated Ireland into the United Kingdom, the Exchequer was merged with the English Exchequer in 1817 and ceased to function as an independent body, although the Irish Court of Exchequer, like other Irish courts, remained separate from the English equivalent.
The High Court of Justice in Ireland was the court created by the Supreme Court of Judicature Act (Ireland) 1877 to replace the existing court structure in Ireland. Its creation mirrored the reform of the courts of England and Wales five years earlier under the Judicature Acts. The Act created a Supreme Court of Judicature, consisting of a High Court of Justice and a Court of Appeal.
Certain former courts of England and Wales have been abolished or merged into or with other courts, and certain other courts of England and Wales have fallen into disuse.
An animated display showing the territory controlled by Rome and Carthage during the period of the Punic Wars and the territorial changes during them.
The First Punic War broke out on the island of Sicily in 264 BC. It was regarded as "the longest and most severely contested war in history" by the Ancient Greek historian Polybius. The fighting, which consisted predominantly of naval warfare, largely took place on the waters of the Mediterranean surrounding Sicily. The conflict began because Rome's imperial ambitions interfered with Carthage's ownership claims over the island of Sicily. Carthage was the dominant power of the western Mediterranean at the time, with an extensive maritime empire, while Rome was a rapidly expanding state that had a powerful army but a weak navy. The conflict lasted 23 years and brought substantial materiel and human losses on both sides; the Carthaginians were ultimately defeated by the Romans. By the terms of the peace treaty, Carthage paid large war reparations to Rome and Sicily fell under Roman control, becoming a Roman province. Taking control of Sicily further entrenched Rome's position as the leading power in the western Mediterranean. The end of the war also sparked a significant, but unsuccessful, mutiny within the Carthaginian Empire referred to as the Mercenary War. The First Punic War officially came to an end in 241 BC.
The Second Punic War began in 218 BC and witnessed Hannibal's crossing of the Alps and invasion of mainland Italy. This expedition enjoyed considerable early success, but after 14 years the survivors withdrew. There was also extensive fighting in Iberia (modern Spain and Portugal); on Sicily; on Sardinia; and in North Africa. The successful Roman invasion of the Carthaginian homeland in Africa in 204 BC led to Hannibal's recall. He was defeated in the Battle of Zama in 202 BC and Carthage sued for peace. A treaty was agreed in 201 BC which stripped Carthage of its overseas territories and some of its African ones; imposed a large indemnity, to be paid over 50 years; severely restricted the size of its armed forces; and prohibited Carthage from waging war without Rome's express permission. Carthage ceased to be a military threat.
Rome contrived a justification to declare war on Carthage again in 149 BC in the Third Punic War. This conflict was fought entirely on Carthage's territories in what is now Tunisia and largely centred around the Siege of Carthage. In 146 BC the Romans stormed the city of Carthage, sacked it, slaughtered most of its population and completely demolished it. The previously Carthaginian territories were taken over as the Roman province of Africa. The ruins of the city lie 16 kilometres (10 mi) east of modern Tunis on the North African coast.
The main source for almost every aspect of the Punic Wars is the historian Polybius (c. 200 – c. 118 BC), a Greek sent to Rome in 167 BC as a hostage. His works include a now-largely lost manual on military tactics, but he is best known for The Histories, written sometime after 146 BC. Polybius's work is considered broadly objective and largely neutral as between Carthaginian and Roman points of view. Polybius was an analytical historian and wherever possible personally interviewed participants, from both sides, in the events he wrote about. He accompanied the Roman general Scipio Aemilianus during his campaign in North Africa which resulted in the Roman victory in the Third Punic War.
The accuracy of Polybius's account has been much debated over the past 150 years, but the modern consensus is to accept it largely at face value, and the details of the war in modern sources are largely based on interpretations of Polybius's account. The modern historian Andrew Curry sees Polybius as being "fairly reliable"; while Craige Champion describes him as "a remarkably well-informed, industrious, and insightful historian".
Other, later, ancient histories of the war exist, although often in fragmentary or summary form. Modern historians usually take into account the writings of various Roman annalists, some contemporary; the Sicilian Greek Diodorus Siculus; the later Roman historians, Livy (who relied heavily on Polybius), Plutarch, Appian (whose account of the Third Punic War is especially valuable) and Dio Cassius. The classicist Adrian Goldsworthy states "Polybius' account is usually to be preferred when it differs with any of our other accounts". Other sources include coins, inscriptions, archaeological evidence and empirical evidence from reconstructions such as the trireme Olympias.
Background and origin
The Roman Republic had been aggressively expanding in the southern Italian mainland for a century before the First Punic War. It had conquered peninsular Italy south of the Arno River by 272 BC, when the Greek cities of southern Italy (Magna Graecia) submitted after the conclusion of the Pyrrhic War. During this period of Roman expansion Carthage, with its capital in what is now Tunisia, had come to dominate southern Spain, much of the coastal regions of North Africa, the Balearic Islands, Corsica, Sardinia, and the western half of Sicily.
Beginning in 480 BC, Carthage had fought a series of inconclusive wars against the Greek city states of Sicily, led by Syracuse. By 264 BC Carthage was the dominant external power on the island, and Carthage and Rome were the preeminent powers in the western Mediterranean. Relationships were good and the two states had several times declared their mutual friendship via formal alliances: in 509 BC, 348 BC and around 279 BC. There were strong commercial links. During the Pyrrhic War of 280–275 BC, against a king of Epirus who alternately fought Rome in Italy and Carthage on Sicily, Carthage provided materiel to the Romans and on at least one occasion used its navy to ferry a Roman force. According to the classicist Richard Miles, Rome's expansionary attitude after southern Italy came under its control combined with Carthage's proprietary approach to Sicily caused the two powers to stumble into war more by accident than design. The immediate cause of the war was the issue of control of the independent Sicilian city state of Messana (modern Messina). In 264 BC Carthage and Rome went to war, starting the First Punic War.
Most male Roman citizens were eligible for military service and would serve as infantry, with a better-off minority providing a cavalry component. Traditionally, when at war the Romans would raise two legions, each of 4,200 infantry and 300 cavalry. Approximately 1,200 of the infantry, poorer or younger men unable to afford the armour and equipment of a standard legionary, served as javelin-armed skirmishers, known as velites. They carried several javelins, which would be thrown from a distance, a short sword, and a 90-centimetre (3 ft) shield. The balance were equipped as heavy infantry, with body armour, a large shield and short thrusting swords. They were divided into three ranks, of which the front rank also carried two javelins, while the second and third ranks had a thrusting spear instead. Both legionary sub-units and individual legionaries fought in relatively open order. It was the long-standing Roman procedure to elect two men each year, known as consuls, as senior magistrates, who at time of war would each lead an army. An army was usually formed by combining a Roman legion with a similarly sized and equipped legion provided by their Latin allies; allied legions usually had a larger attached complement of cavalry than Roman ones.
Carthaginian citizens only served in their army if there was a direct threat to the city. When they did they fought as well-armoured heavy infantry armed with long thrusting spears, although they were notoriously ill-trained and ill-disciplined. In most circumstances Carthage recruited foreigners to make up its army. Many were from North Africa which provided several types of fighters including: close-order infantry equipped with large shields, helmets, short swords and long thrusting spears; javelin-armed light infantry skirmishers; close-order shock cavalry carrying spears; and light cavalry skirmishers who threw javelins from a distance and avoided close combat. Both Iberia and Gaul provided large numbers of experienced infantry – unarmoured troops who would charge ferociously, but had a reputation for breaking off if a combat was protracted – and unarmoured close-order cavalry referred to by Livy as "steady", meaning that they were accustomed to sustained hand-to-hand combat rather than hit and run tactics. The close-order Libyan infantry and the citizen-militia would fight in a tightly packed formation known as a phalanx. On occasion some of the infantry would wear captured Roman armour, especially among Hannibal's troops. Slingers were frequently recruited from the Balearic Islands. The Carthaginians also employed war elephants; North Africa had indigenous African forest elephants at the time.
Garrison duty and land blockades were the most common operations. When armies were campaigning, surprise attacks, ambushes and stratagems were common. More formal battles were usually preceded by the two armies camping one to seven miles (2–12 km) apart for days or weeks; sometimes forming up in battle order each day. If either commander felt at a disadvantage, they might march off without engaging. In such circumstances it was difficult to force a battle if the other commander was unwilling to fight. Forming up in battle order was a complicated and premeditated affair, which took several hours. Infantry were usually positioned in the centre of the battle line, with light infantry skirmishers to their front and cavalry on each flank. Many battles were decided when one side's infantry force was attacked in the flank or rear and they were partially or wholly enveloped.
Quinqueremes, meaning "five-oarsmen", provided the workhorses of the Roman and Carthaginian fleets throughout the Punic Wars. So ubiquitous was the type that Polybius uses it as a shorthand for "warship" in general. A quinquereme carried a crew of 300: 280 oarsmen and 20 deck crew and officers. It would also normally carry a complement of 40 marines; if battle was thought to be imminent this would be increased to as many as 120. In 260 BC the Romans set out to construct a fleet and used a shipwrecked Carthaginian quinquereme as a blueprint for their own.
As novice shipwrights, the Romans built copies that were heavier than the Carthaginian vessels, and so slower and less manoeuvrable. Getting the oarsmen to row as a unit, let alone to execute more complex battle manoeuvres, required long and arduous training. At least half of the oarsmen would need to have had some experience if the ship was to be handled effectively. As a result, the Romans were initially at a disadvantage against the more experienced Carthaginians. To counter this, the Romans introduced the corvus, a bridge 1.2 metres (4 feet) wide and 11 metres (36 feet) long, with a heavy spike on the underside, which was designed to pierce and anchor into an enemy ship's deck. This allowed Roman legionaries acting as marines to board enemy ships and capture them, rather than employing the previously traditional tactic of ramming.
All warships were equipped with rams, a triple set of 60-centimetre-wide (2 ft) bronze blades weighing up to 270 kilograms (600 lb) positioned at the waterline. In the century prior to the Punic Wars, boarding had become increasingly common and ramming had declined, as the larger and heavier vessels adopted in this period lacked the speed and manoeuvrability necessary to ram, while their sturdier construction reduced the ram's effect even in case of a successful attack. The Roman adaptation of the corvus was a continuation of this trend and compensated for their initial disadvantage in ship-manoeuvring skills. The added weight in the prow compromised both the ship's manoeuvrability and its seaworthiness, and in rough sea conditions the corvus became useless; part way through the First Punic War the Romans ceased using it.
First Punic War, 264–241 BC
Much of the First Punic War was fought on, or in the waters near, Sicily. Away from the coasts its hilly and rugged terrain made manoeuvring large forces difficult and favoured the defence over the offence. Land operations were largely confined to raids, sieges and interdiction; in 23 years of war on Sicily there were only two full-scale pitched battles.
Sicily, 264–257 BC
The war began with the Romans gaining a foothold on Sicily at Messana (modern Messina). The Romans then pressed Syracuse, the only significant independent power on the island, into allying with them and laid siege to Carthage's main base at Akragas on the south coast. A Carthaginian army of 50,000 infantry, 6,000 cavalry and 60 elephants attempted to lift the siege in 262 BC, but was heavily defeated at the Battle of Akragas. That night the Carthaginian garrison escaped and the Romans seized the city and its inhabitants, selling 25,000 of them into slavery.
After this the land war on Sicily reached a stalemate as the Carthaginians focused on defending their well-fortified towns and cities; these were mostly on the coast and so could be supplied and reinforced without the Romans being able to use their superior army to interfere. The focus of the war shifted to the sea, where the Romans had little experience; on the few occasions they had previously felt the need for a naval presence they had usually relied on small squadrons provided by their Latin or Greek allies. The Romans built a navy to challenge Carthage's, and using the corvus inflicted a major defeat at the Battle of Mylae in 260 BC. A Carthaginian base on Corsica was seized, but an attack on Sardinia was repulsed; the base on Corsica the Romans had seized was then lost. In 258 BC a Roman fleet heavily defeated a smaller Carthaginian fleet at the Battle of Sulci off the western coast of Sardinia.
Africa, 256–255 BC
Taking advantage of their naval victories the Romans launched an invasion of North Africa in 256 BC, which the Carthaginians intercepted at the Battle of Cape Ecnomus off the south coast of Sicily. The Carthaginians were again beaten; this was possibly the largest naval battle in history by the number of combatants involved. The invasion initially went well and in 255 BC the Carthaginians sued for peace; the proposed terms were so harsh they fought on. At the Battle of Tunis in spring 255 BC a combined force of infantry, cavalry and war elephants under the command of the Spartan mercenary Xanthippus crushed the Romans. The Romans sent a fleet to evacuate their survivors and the Carthaginians opposed it at the Battle of Cape Hermaeum (modern Cape Bon); the Carthaginians were again heavily defeated. The Roman fleet, in turn, was devastated by a storm while returning to Italy, losing most of its ships and more than 100,000 men.
Sicily, 255–241 BC
The war continued, with neither side able to gain a decisive advantage. The Carthaginians attacked and recaptured Akragas in 255 BC, but not believing they could hold the city, they razed and abandoned it. The Romans rapidly rebuilt their fleet, adding 220 new ships, and captured Panormus (modern Palermo) in 254 BC. The next year they lost another 150 ships to a storm. On Sicily the Romans avoided battle in 252 and 251 BC, according to Polybius because they feared the war elephants which the Carthaginians had shipped to the island. In 250 BC the Carthaginians advanced on Panormus, but in a battle outside the walls the Romans drove off the Carthaginian elephants with javelin fire. The elephants stampeded back through the Carthaginian infantry, who were then charged by the Roman infantry to complete their defeat.
Slowly the Romans had occupied most of Sicily; in 250 BC they besieged the last two Carthaginian strongholds – Lilybaeum and Drepana in the extreme west. Repeated attempts to storm Lilybaeum's strong walls failed, as did attempts to block access to its harbour, and the Romans settled down to a siege which was to last nine years. They launched a surprise attack on the Carthaginian fleet, but were defeated at the Battle of Drepana, Carthage's greatest naval victory of the war. Carthage turned to the maritime offensive, inflicted another heavy naval defeat at the Battle of Phintias, and all but swept the Romans from the sea. It was to be seven years before Rome again attempted to field a substantial fleet, while Carthage put most of its ships into reserve to save money and free up manpower.
Roman victory, 243–241 BC
After more than 20 years of war, both states were financially and demographically exhausted. Evidence of Carthage's financial situation includes their request for a 2,000 talent loan from Ptolemaic Egypt, which was refused. Rome was also close to bankruptcy and the number of adult male citizens, who provided the manpower for the navy and the legions, had declined by 17 per cent since the start of the war. Goldsworthy describes Roman manpower losses as "appalling".
The Romans rebuilt their fleet again in 243 BC after the Senate approached Rome's wealthiest citizens for loans to finance the construction of one ship each, repayable from the reparations to be imposed on Carthage once the war was won. This new fleet effectively blockaded the Carthaginian garrisons. Carthage assembled a fleet which attempted to relieve them, but it was destroyed at the Battle of the Aegates Islands in 241 BC, forcing the cut-off Carthaginian troops on Sicily to negotiate for peace.
The Treaty of Lutatius was agreed. By its terms Carthage paid 3,200 talents of silver in reparations and Sicily was annexed as a Roman province. Henceforth Rome considered itself the leading military power in the western Mediterranean, and increasingly the Mediterranean region as a whole. The immense effort of repeatedly building large fleets of galleys during the war laid the foundation for Rome's maritime dominance for 600 years.
Interbellum, 241–218 BC
The Mercenary, or Truceless, War began in 241 BC as a dispute over the payment of wages owed to 20,000 foreign soldiers who had fought for Carthage on Sicily during the First Punic War. This erupted into full-scale mutiny under the leadership of Spendius and Matho, and 70,000 Africans from Carthage's oppressed dependent territories flocked to join the mutineers, bringing supplies and finance. War-weary Carthage fared poorly in the initial engagements, especially under the generalship of Hanno. Hamilcar Barca, a veteran of the campaigns in Sicily, was given joint command of the army in 240 BC, and supreme command in 239 BC. He campaigned successfully, initially demonstrating leniency in an attempt to woo the rebels over. To prevent this, in 240 BC Spendius tortured 700 Carthaginian prisoners to death, and henceforth the war was pursued with great brutality.
By early 237 BC, after numerous setbacks, the rebels were defeated and their cities brought back under Carthaginian rule. An expedition was prepared to reoccupy Sardinia, where mutinous soldiers had slaughtered all Carthaginians. The Roman Senate stated they considered the preparation of this force an act of war, and demanded Carthage cede Sardinia and Corsica, and pay an additional 1,200-talent indemnity. Weakened by 30 years of war, Carthage agreed rather than again enter into conflict with Rome. Polybius considered this "contrary to all justice" and modern historians have variously described the Romans' behaviour as "unprovoked aggression and treaty-breaking", "shamelessly opportunistic" and an "unscrupulous act". These events fuelled resentment of Rome in Carthage, which was not reconciled to Rome's perception of its situation. This breach of the recently signed treaty is considered by modern historians to be the single greatest cause of war with Carthage breaking out again in 218 BC in the Second Punic War.
Carthaginian expansion in Iberia
With the suppression of the rebellion, Hamilcar understood that Carthage needed to strengthen its economic and military base if it were to confront Rome again. After the First Punic War, Carthaginian possessions in Iberia (modern Spain and Portugal) were limited to a handful of prosperous coastal cities in the south. Hamilcar took the army which he had led to victory in the Mercenary War to Iberia in 237 BC and carved out a quasi-monarchical, autonomous state in its south-east. This gave Carthage the silver mines, agricultural wealth, manpower, military facilities such as shipyards, and territorial depth to stand up to future Roman demands with confidence. Hamilcar ruled as a viceroy and was succeeded by his son-in-law, Hasdrubal, in the early 220s BC and then by his son, Hannibal, in 221 BC. In 226 BC the Ebro Treaty was agreed with Rome, specifying the Ebro River as the northern boundary of the Carthaginian sphere of influence. At some time during the next six years Rome made a separate treaty with the city of Saguntum, which was situated well south of the Ebro.
Second Punic War, 218–201 BC
In 219 BC a Carthaginian army under Hannibal besieged, captured and sacked Saguntum and in spring 218 BC Rome declared war on Carthage. There were three main military theatres in the war: Italy, where Hannibal defeated the Roman legions repeatedly, with occasional subsidiary campaigns in Sicily, Sardinia and Greece; Iberia, where Hasdrubal, a younger brother of Hannibal, defended the Carthaginian colonial cities with mixed success until moving into Italy; and Africa, where the war was decided.
Hannibal crosses the Alps, 218–217 BC
In 218 BC there was some naval skirmishing in the waters around Sicily. The Romans beat off a Carthaginian attack and captured the island of Malta. In Cisalpine Gaul (modern northern Italy), the major Gallic tribes attacked the Roman colonies there, causing the Romans to flee to their previously-established colony of Mutina (modern Modena), where they were besieged. A Roman relief army broke through the siege, but was then ambushed and besieged itself. An army had previously been created by the Romans to campaign in Iberia, but the Roman Senate detached one Roman and one allied legion from it to send to north Italy. Raising fresh troops to replace these delayed the army's departure for Iberia until September.
Meanwhile, Hannibal assembled a Carthaginian army in New Carthage (modern Cartagena) and led it northwards along the Iberian coast in May or June. It entered Gaul and took an inland route, to avoid the Roman allies to the south. At the Battle of Rhone Crossing, Hannibal defeated a force of local Allobroges which sought to bar his way. A Roman fleet carrying the Iberian-bound army landed at Rome's ally Massalia (modern Marseille) at the mouth of the Rhone, but Hannibal evaded the Romans and they continued to Iberia. The Carthaginians reached the foot of the Alps by late autumn and crossed them, surmounting the difficulties of climate, terrain and the guerrilla tactics of the native tribes. Hannibal arrived with 20,000 infantry, 6,000 cavalry, and an unknown number of elephants – the survivors of the 37 with which he left Iberia – in what is now Piedmont, northern Italy. The Romans were still in their winter quarters. His surprise entry into the Italian peninsula led to the cancellation of Rome's planned campaign for the year: an invasion of Africa.
Roman defeats, 218–217 BC
Hannibal captured the chief city of the hostile Taurini (in the area of modern Turin) and his army routed the cavalry and light infantry of the Romans at the Battle of Ticinus in late November. As a result, most of the Gallic tribes declared for the Carthaginian cause, and Hannibal's army grew to more than 40,000 men. A large Roman army was lured into combat by Hannibal at the Battle of the Trebia, encircled and destroyed. Only 10,000 Romans out of 42,000 were able to cut their way to safety. Gauls now joined Hannibal's army in large numbers, bringing it up to 60,000 men. The Romans stationed an army at Arretium and one on the Adriatic coast to block Hannibal's advance into central Italy.
In early spring 217 BC, the Carthaginians crossed the Apennines unopposed, taking a difficult but unguarded route. Hannibal attempted without success to draw the main Roman army under Gaius Flaminius into a pitched battle by devastating the area they had been sent to protect. Hannibal then cut off the Roman army from Rome, which provoked Flaminius into a hasty pursuit without proper reconnaissance. Hannibal set an ambush and in the Battle of Lake Trasimene completely defeated the Roman army, killing 15,000 Romans, including Flaminius, and taking 15,000 prisoner. A cavalry force of 4,000 from the other Roman army were also engaged and wiped out. The prisoners were badly treated if they were Romans, but released if they were from one of Rome's Latin allies. Hannibal hoped some of these allies could be persuaded to defect, and marched south in the hope of winning over Roman allies among the ethnic Greek and Italic city states.
The Romans, panicked by these heavy defeats, appointed Quintus Fabius Maximus as dictator. Fabius introduced the Fabian strategy of avoiding open battle with his opponent, but constantly skirmishing with small detachments of the enemy. This was not popular among the soldiers, the Roman public or the Roman elite, since he avoided battle while Italy was being devastated by the enemy. Hannibal marched through the richest and most fertile provinces of Italy, hoping the devastation would draw Fabius into battle, but Fabius refused.
Cannae, 216 BC
At the elections of 216 BC Gaius Terentius Varro and Lucius Aemilius Paullus were elected as consuls; both were more aggressive-minded than Fabius. The Roman Senate authorised the raising of a force of 86,000 men, the largest in Roman history to that point. Paullus and Varro marched southward to confront Hannibal, who accepted battle on the open plain near Cannae. In the Battle of Cannae the Roman legions forced their way through Hannibal's deliberately weak centre, but Libyan heavy infantry on the wings swung around their advance, menacing their flanks. Hasdrubal led Carthaginian cavalry on the left wing and routed the Roman cavalry opposite, then swept around the rear of the Romans to attack the cavalry on the other wing. He then charged into the legions from behind. As a result, the Roman infantry was surrounded with no means of escape. At least 67,500 Romans were killed or captured.
Roman allies defect, 216–205 BC
Little has survived of Polybius's account of Hannibal's army in Italy after Cannae. Livy gives a fuller record, but according to Goldsworthy "his reliability is often suspect", especially with regard to his descriptions of battles; nevertheless his is the best surviving source for this part of the war. Several of the city states in southern Italy allied themselves with Hannibal, or were captured when pro-Carthaginian factions betrayed their defences. These included the large city of Capua and the major port city of Tarentum (modern Taranto). Two of the major Samnite tribes also joined the Carthaginian cause. By 214 BC the bulk of southern Italy had turned against Rome.
However, the majority of Rome's allies remained loyal, including many in southern Italy. All except the smallest towns were too well fortified for Hannibal to take by assault, and blockade could be a long-drawn-out affair, or if the target was a port, impossible. Carthage's new allies felt little sense of community with Carthage, or even with each other. The new allies increased the number of fixed points which Hannibal's army was expected to defend from Roman retribution, but provided relatively few fresh troops to assist him in doing so. Such Italian forces as were raised resisted operating away from their home cities and performed badly when they did.
When the port city of Locri defected to Carthage in the summer of 215 BC it was immediately used to reinforce the Carthaginian forces in Italy with soldiers, supplies and war elephants. It was the only time during the war that Carthage reinforced Hannibal. A second force, under Hannibal's youngest brother Mago, was meant to land in Italy in 215 BC but was diverted to Iberia after the Carthaginian defeat in Iberia at the Battle of Dertosa.
Meanwhile, the Romans took drastic steps to raise new legions: enrolling slaves, criminals and those who did not meet the usual property qualification. By early 215 BC they were fielding at least 12 legions; by 214 BC, 18; and by 213 BC, 22. By 212 BC the full complement of the legions deployed would have been in excess of 100,000 men, plus, as always, a similar number of allied troops. The majority were deployed in southern Italy in field armies of approximately 20,000 men each. This was insufficient to challenge Hannibal's army in open battle, but sufficient to force him to concentrate his forces and to hamper his movements.
For 11 years after Cannae the war surged around southern Italy as cities went over to the Carthaginians or were taken by subterfuge, and the Romans recaptured them by siege or by suborning pro-Roman factions. Hannibal repeatedly defeated Roman armies, but wherever his main army was not active the Romans threatened Carthaginian-supporting towns or sought battle with Carthaginian or Carthaginian-allied detachments; frequently with success. By 207 BC Hannibal had been confined to the extreme south of Italy and many of the cities and territories which had joined the Carthaginian cause had returned to their Roman allegiance.
First Macedonian War, 214–205 BC
During 216 BC the Macedonian king, Philip V, pledged his support to Hannibal – thus initiating the First Macedonian War against Rome in 215 BC. In 211 BC, Rome contained the threat of Macedonia by allying with the Aetolian League, an anti-Macedonian coalition of Greek city states. In 205 BC this war ended with a negotiated peace.
Sicily, 213–210 BC
Sicily remained firmly in Roman hands, blocking the ready seaborne reinforcement and resupply of Hannibal from Carthage. Hiero II, the old tyrant of Syracuse of forty-five years' standing and a staunch Roman ally, died in 215 BC and his successor Hieronymus was discontented with his situation. Hannibal negotiated a treaty whereby Syracuse came over to Carthage, at the price of making the whole of Sicily a Syracusan possession. The Syracusan army proved no match for the Romans, and by spring 213 BC Syracuse was besieged. The siege was marked by the ingenuity of Archimedes in inventing war machines to counteract the traditional siege warfare methods of the Romans.
A large Carthaginian army led by Himilco was sent to relieve the city in 213 BC. It captured several Roman-garrisoned towns on Sicily; many Roman garrisons were either expelled or massacred by Carthaginian partisans. In the spring of 212 BC the Romans stormed Syracuse in a surprise night assault and captured several districts of the city. Meanwhile, the Carthaginian army was crippled by plague. After the Carthaginians failed to resupply the city, Syracuse fell in the autumn of 212 BC; Archimedes was killed by a Roman soldier.
Carthage sent more reinforcements to Sicily in 211 BC and went on the offensive. A fresh Roman army attacked the main Carthaginian stronghold on the island, Agrigentum, in 210 BC and the city was betrayed to the Romans by a discontented Carthaginian officer. The remaining Carthaginian-controlled towns then surrendered or were taken by force or treachery, and the Sicilian grain supply to Rome and its armies was resumed.
Hasdrubal invades Italy, 207 BC
In the spring of 207 BC, Hasdrubal Barca marched across the Alps and invaded Italy with an army of 30,000 men. His aim was to join his forces with those of Hannibal, but Hannibal was unaware of his presence. The Romans facing Hannibal in southern Italy tricked him into believing the whole Roman army was still in camp, while a large portion marched north and reinforced the Romans facing Hasdrubal. The combined Roman force attacked Hasdrubal at the Battle of the Metaurus and destroyed his army, killing Hasdrubal. This battle confirmed Roman dominance in Italy.
Mago invades Italy, 205–203 BC
In 205 BC, Mago landed in Genua (modern Genoa) in north-west Italy with the remnants of his Spanish army (see § Iberia below). It soon received Gallic and Ligurian reinforcements. Mago's arrival in the north of the Italian peninsula was followed by Hannibal's inconclusive Battle of Crotona in 204 BC in the far south of the peninsula. Mago marched his reinforced army towards the lands of Carthage's main Gallic allies in the Po Valley, but was checked by a large Roman army and defeated at the Battle of Insubria in 203 BC.
Hannibal is recalled, 203 BC
After Publius Cornelius Scipio invaded the Carthaginian homeland in 204 BC, defeating the Carthaginians in two major battles and winning the allegiance of the Numidian kingdoms of North Africa, Hannibal and the remnants of his army were recalled. They sailed from Croton and landed at Carthage with 15,000–20,000 experienced veterans. Mago was also recalled; he died of wounds on the voyage and some of his ships were intercepted by the Romans, but 12,000 of his troops reached Carthage.
Iberia, 218–215 BC
The Roman fleet continued on from Massalia in the autumn of 218 BC, landing the army it was transporting in north-east Iberia, where it won support among the local tribes. A rushed Carthaginian attack in late 218 BC was beaten off at the Battle of Cissa. In 217 BC 40 Carthaginian and Iberian warships were beaten by 55 Roman and Massalian vessels at the Battle of Ebro River, with 29 Carthaginian ships lost. The Romans' lodgement between the Ebro and the Pyrenees blocked the route from Iberia to Italy and prevented the despatch of reinforcements from Iberia to Hannibal. The Carthaginian commander in Iberia, Hannibal's brother Hasdrubal, marched into this area in 215 BC, offered battle and was defeated at Dertosa, although both sides suffered heavy casualties.
Iberia, 214–209 BC
The Carthaginians suffered a wave of defections of local Celtiberian tribes to Rome. The Roman commanders captured Saguntum in 212 BC and in 211 BC hired 20,000 Celtiberian mercenaries to reinforce their army. Observing that the three Carthaginian armies were deployed apart from each other, the Romans split their forces. This strategy resulted in the Battle of Castulo and the Battle of Ilorca, usually combined as the Battle of the Upper Baetis. Both battles ended in complete defeat for the Romans, as Hasdrubal had bribed the Romans' mercenaries to desert. The Romans retreated to their coastal stronghold north of the Ebro, from which the Carthaginians again failed to expel them. Claudius Nero brought over reinforcements in 210 BC and stabilised the situation.
In 210 BC Publius Cornelius Scipio arrived in Iberia with further Roman reinforcements. In a carefully planned assault in 209 BC, he captured the lightly defended centre of Carthaginian power in Iberia, Cartago Nova, seizing a vast booty of gold, silver and siege artillery. He released the captured population and liberated the Iberian hostages held there by the Carthaginians to ensure the loyalty of their tribes, although many of them were subsequently to fight against the Romans.
Iberia, 208–207 BC
In the spring of 208 BC, Hasdrubal moved to engage Scipio at the Battle of Baecula. The Carthaginians were defeated, but Hasdrubal was able to withdraw the majority of his army in good order. Most of his losses were among his Iberian allies. Scipio was not able to prevent Hasdrubal from leading his depleted army over the western passes of the Pyrenees into Gaul. In 207 BC, after recruiting heavily in Gaul, Hasdrubal crossed the Alps into Italy in an attempt to join his brother, Hannibal.
Roman victory in Iberia, 206–205 BC
In 206 BC, at the Battle of Ilipa, Scipio with 48,000 men, half Italian and half Iberian, defeated a Carthaginian army of 54,500 men and 32 elephants. This sealed the fate of the Carthaginians in Iberia. It was followed by the Roman capture of Gades after the city rebelled against Carthaginian rule.
Later the same year a mutiny broke out among Roman troops, which initially attracted support from Iberian leaders, disappointed that Roman forces had remained in the peninsula after the expulsion of the Carthaginians, but it was effectively put down by Scipio. In 205 BC a last attempt was made by Mago to recapture New Carthage when the Roman occupiers were shaken by another mutiny and an Iberian uprising, but he was repulsed. Mago left Iberia for northern Italy with his remaining forces. In 203 BC Carthage succeeded in recruiting at least 4,000 mercenaries from Iberia, despite Rome's nominal control.
In 213 BC Syphax, a powerful Numidian king in North Africa, declared for Rome. In response, Roman advisers were sent to train his soldiers and he waged war against the Carthaginian ally Gala. In 206 BC the Carthaginians ended this drain on their resources by dividing several Numidian kingdoms with him. One of those disinherited was the Numidian prince Masinissa, who was thus driven into the arms of Rome.
Scipio's invasion of Africa, 204–201 BC
In 205 BC Publius Scipio was given command of the legions in Sicily and allowed to enrol volunteers for his plan to end the war by an invasion of Africa. After landing in Africa in 204 BC, he was joined by Masinissa and a force of Numidian cavalry. Scipio gave battle to and destroyed two large Carthaginian armies. After the second of these Syphax was pursued and taken prisoner by Masinissa at the Battle of Cirta; Masinissa then seized most of Syphax's kingdom with Roman help.
Rome and Carthage entered into peace negotiations, and Carthage recalled Hannibal from Italy. The Roman Senate ratified a draft treaty, but due to mistrust and a surge in confidence when Hannibal arrived from Italy, Carthage repudiated it. Hannibal was placed in command of another army, formed from his veterans from Italy and newly raised troops from Africa, but with few cavalry. The decisive Battle of Zama followed in October 202 BC. Unlike most battles of the Second Punic War, the Romans had superiority in cavalry and the Carthaginians in infantry. Hannibal attempted to use 80 elephants to break into the Roman infantry formation, but the Romans countered them effectively and the elephants routed back through the Carthaginian ranks. The Roman and allied Numidian cavalry drove the Carthaginian cavalry from the field. The two sides' infantry fought inconclusively until the Roman cavalry returned and attacked the Carthaginian rear. The Carthaginian formation collapsed; Hannibal was one of the few to escape the field.
The peace treaty imposed on the Carthaginians stripped them of all of their overseas territories, and some of their African ones. An indemnity of 10,000 silver talents was to be paid over 50 years. Hostages were taken. Carthage was forbidden to possess war elephants and its fleet was restricted to 10 warships. It was prohibited from waging war outside Africa, and in Africa only with Rome's express permission. Many senior Carthaginians wanted to reject it, but Hannibal spoke strongly in its favour and it was accepted in spring 201 BC. Henceforth it was clear that Carthage was politically subordinate to Rome. Scipio was awarded a triumph and received the agnomen "Africanus".
Interbellum, 201–149 BC
At the end of the war, Masinissa emerged as by far the most powerful ruler among the Numidians. Over the following 48 years he repeatedly took advantage of Carthage's inability to protect its possessions. Whenever Carthage petitioned Rome for redress, or permission to take military action, Rome backed its ally, Masinissa, and refused. Masinissa's seizures of and raids into Carthaginian territory became increasingly flagrant. In 151 BC Carthage raised a large army, the treaty notwithstanding, and counterattacked the Numidians. The campaign ended in disaster for the Carthaginians and their army surrendered. Carthage had paid off its indemnity and was prospering economically, but was no military threat to Rome. Elements in the Roman Senate had long wished to destroy Carthage, and with the breach of the treaty as a casus belli, war was declared in 149 BC.
Third Punic War, 149–146 BC
In 149 BC a Roman army of approximately 50,000 men, jointly commanded by both consuls, landed near Utica, 35 kilometres (22 mi) north of Carthage. Rome demanded that if war were to be avoided, the Carthaginians must hand over all of their armaments. Vast amounts of materiel were delivered, including 200,000 sets of armour, 2,000 catapults and a large number of warships. This done, the Romans demanded the Carthaginians burn their city and relocate at least 16 kilometres (10 mi) from the sea; the Carthaginians broke off negotiations and set about recreating their armoury.
Siege of Carthage
As well as manning the walls of Carthage, the Carthaginians formed a field army under Hasdrubal, which was based 25 kilometres (16 mi) to the south. The Roman army moved to lay siege to Carthage, but its walls were so strong and its citizen-militia so determined it was unable to make any impact, while the Carthaginians struck back effectively. Their army raided the Roman lines of communication, and in 148 BC Carthaginian fire ships destroyed many Roman vessels. The main Roman camp was in a swamp, which caused an outbreak of disease during the summer. The Romans moved their camp, and their ships, further away – so they were now more blockading than closely besieging the city. The war dragged on into 147 BC.
In early 147 BC Scipio Aemilianus, an adopted grandson of Scipio Africanus who had distinguished himself during the previous two years' fighting, was elected consul and took control of the war. The Carthaginians continued to resist vigorously: they constructed warships and during the summer twice gave battle to the Roman fleet, losing both times. The Romans launched an assault on the walls; after confused fighting they broke into the city but, lost in the dark, withdrew. Hasdrubal and his army retreated into the city to reinforce the garrison. Hasdrubal had Roman prisoners tortured to death on the walls, in view of the Roman army, to stiffen the Carthaginian citizens' will to resist; from this point there could be no possibility of negotiation. Some members of the city council denounced his actions; Hasdrubal had them put to death as well and took control of the city. With no Carthaginian army in the field, those cities which had remained loyal went over to the Romans or were captured.
Scipio moved back to a close blockade of the city, and built a mole which cut off supply from the sea. In the spring of 146 BC the Roman army managed to secure a foothold on the fortifications near the harbour. When the main assault began it quickly captured the city's main square, where the legions camped overnight. The next morning the Romans systematically worked their way through the residential part of the city, killing everyone they encountered and firing the buildings behind them. At times the Romans progressed from rooftop to rooftop, to prevent missiles being hurled down on them. It took six days to clear the city of resistance, and on the last day Scipio agreed to accept prisoners. The last holdouts, including Roman deserters in Carthaginian service, fought on from the Temple of Eshmoun and burnt it down around themselves when all hope was gone. There were 50,000 Carthaginian prisoners, a small proportion of the pre-war population, who were sold into slavery. There is a tradition that Roman forces then sowed the city with salt, but this has been shown to have been a 19th-century invention.
The remaining Carthaginian territories were annexed by Rome and reconstituted to become the Roman province of Africa with Utica as its capital. The province became a major source of grain and other foodstuffs. Numerous large Punic cities, such as those in Mauretania, were taken over by the Romans, although they were permitted to retain their Punic system of government. A century later, the site of Carthage was rebuilt as a Roman city by Julius Caesar, and would become one of the main cities of Roman Africa by the time of the Empire. Rome still exists as the capital of Italy; the ruins of Carthage lie 24 kilometres (15 mi) east of Tunis on the North African coast.
Notes, citations and sources
- The term Punic comes from the Latin word Punicus (or Poenicus), meaning "Carthaginian", and is a reference to the Carthaginians' Phoenician ancestry.
- Sources other than Polybius are discussed by Bernard Mineo in "Principal Literary Sources for the Punic Wars (apart from Polybius)".
- This could be increased to 5,000 in some circumstances, or, rarely, even more.
- These elephants were typically about 2.5 metres (8 ft) high at the shoulder, and should not be confused with the larger African bush elephant.
- 2,000 talents was approximately 52,000 kilograms (51 long tons) of silver.
- Several different "talents" are known from antiquity. The ones referred to in this article are all Euboic (or Euboeic) talents, of approximately 26 kilograms (57 lb).
- 3,200 talents was approximately 82,000 kg (81 long tons).
- 1,200 talents was approximately 30,000 kg (30 long tons) of silver.
- The historian Philip Sabin refers to Livy's "military ignorance".
- Publius Scipio was the bereaved son of the previous Roman co-commander in Iberia, also named Publius Scipio, and the nephew of the other co-commander, Gnaeus Scipio.
- 10,000 talents was approximately 269,000 kg (265 long tons) of silver.
- Polybius. The Histories. p. 1.63.
- Sidwell & Jones 1998, p. 16.
- Goldsworthy 2006, pp. 20–21.
- Shutt 1938, p. 53.
- Goldsworthy 2006, p. 20.
- Walbank 1990, pp. 11–12.
- Lazenby 1996, pp. x–xi.
- Hau 2016, pp. 23–24.
- Shutt 1938, p. 55.
- Goldsworthy 2006, p. 21.
- Champion 2015, pp. 98, 101.
- Champion 2015, p. 96.
- Lazenby 1996, pp. x–xi, 82–84.
- Tipps 1985, p. 432.
- Curry 2012, p. 34.
- Champion 2015, p. 102.
- Goldsworthy 2006, pp. 21–23.
- Champion 2015, p. 95.
- Le Bohec 2015, p. 430.
- Mineo 2015, pp. 111–127.
- Goldsworthy 2006, pp. 23, 98.
- Miles 2011, pp. 157–158.
- Bagnall 1999, pp. 21–22.
- Goldsworthy 2006, pp. 29–30.
- Miles 2011, pp. 115, 132.
- Goldsworthy 2006, pp. 25–26.
- Miles 2011, pp. 94, 160, 163, 164–165.
- Goldsworthy 2006, pp. 69–70.
- Miles 2011, pp. 175–176.
- Goldsworthy 2006, pp. 74–75.
- Warmington 1993, p. 168.
- Bagnall 1999, p. 23.
- Goldsworthy 2006, p. 287.
- Goldsworthy 2006, p. 48.
- Bagnall 1999, pp. 22–25.
- Goldsworthy 2006, p. 50.
- Lazenby 1998, p. 9.
- Goldsworthy 2006, pp. 32–34.
- Koon 2015, p. 80.
- Goldsworthy 2006, pp. 32–33.
- Bagnall 1999, p. 9.
- Goldsworthy 2006, p. 32.
- Rawlings 2015, p. 305.
- Bagnall 1999, p. 8.
- Miles 2011, p. 240.
- Lazenby 1996, p. 27.
- Goldsworthy 2006, pp. 82, 311, 313–314.
- Bagnall 1999, p. 237.
- Goldsworthy 2006, p. 55.
- Goldsworthy 2006, p. 56.
- Sabin 1996, p. 64.
- Goldsworthy 2006, p. 57.
- Sabin 1996, p. 66.
- Goldsworthy 2006, p. 98.
- Lazenby 1996, pp. 27–28.
- Goldsworthy 2006, p. 104.
- Goldsworthy 2006, p. 100.
- Tipps 1985, p. 435.
- Casson 1995, p. 121.
- Goldsworthy 2006, pp. 102–103.
- Goldsworthy 2006, pp. 97, 99–100.
- Murray 2011, p. 69.
- Casson 1995, pp. 278–280.
- de Souza 2008, p. 358.
- Miles 2011, p. 178.
- Wallinga 1956, pp. 77–90.
- Goldsworthy 2006, pp. 100–101, 103.
- Goldsworthy 2006, p. 310.
- Goldsworthy 2006, p. 82.
- Bagnall 1999, pp. 52–53.
- Erdkamp 2015, p. 71.
- Miles 2011, p. 179.
- Miles 2011, pp. 179–180.
- Bagnall 1999, pp. 64–66.
- Goldsworthy 2006, p. 97.
- Bagnall 1999, p. 66.
- Goldsworthy 2006, pp. 91–92, 97.
- Miles 2011, pp. 180–181.
- Goldsworthy 2006, pp. 109–110.
- Bagnall 1999, p. 65.
- Lazenby 1996, pp. 73–74.
- Bagnall 1999, pp. 63–65.
- Rankov 2015, p. 155.
- Rankov 2015, pp. 155–156.
- Goldsworthy 2006, pp. 110–111.
- Lazenby 1996, p. 87.
- Tipps 1985, p. 436.
- Goldsworthy 2006, p. 87.
- Miles 2011, p. 188.
- Tipps 2003, p. 382.
- Tipps 1985, p. 438.
- Miles 2011, p. 189.
- Erdkamp 2015, p. 66.
- Scullard 2006, p. 559.
- Lazenby 1996, pp. 114–116, 169.
- Rankov 2015, p. 158.
- Bagnall 1999, p. 80.
- Miles 2011, pp. 189–190.
- Lazenby 1996, p. 118.
- Rankov 2015, p. 159.
- Lazenby 1996, p. 169.
- Miles 2011, p. 190.
- Lazenby 1996, p. 127.
- Bagnall 1999, pp. 84–86.
- Goldsworthy 2006, pp. 117–121.
- Bagnall 1999, pp. 88–91.
- Goldsworthy 2006, pp. 121–122.
- Rankov 2015, p. 163.
- Bringmann 2007, p. 127.
- Lazenby 1996, p. 158.
- Scullard 2006, p. 565.
- Bagnall 1999, p. 92.
- Bagnall 1999, p. 91.
- Goldsworthy 2006, p. 131.
- Lazenby 1996, p. 49.
- Miles 2011, p. 196.
- Bagnall 1999, p. 96.
- Lazenby 1996, p. 157.
- Goldsworthy 2006, pp. 128–129, 357, 359–360.
- Bagnall 1999, pp. 112–114.
- Goldsworthy 2006, pp. 133–134.
- Eckstein 2017, p. 6.
- Bagnall 1999, p. 115.
- Bagnall 1999, p. 118.
- Miles 2011, p. 208.
- Eckstein 2017, p. 7.
- Hoyos 2000, p. 377.
- Scullard 2006, p. 569.
- Miles 2011, pp. 209, 212–213.
- Lazenby 1996, p. 175.
- Goldsworthy 2006, p. 136.
- Bagnall 1999, p. 124.
- Collins 1998, p. 13.
- Hoyos 2015, p. 211.
- Miles 2011, p. 213.
- Miles 2011, pp. 226–227.
- Hoyos 2015, p. 77.
- Hoyos 2015, p. 80.
- Miles 2011, p. 220.
- Miles 2011, pp. 219–220, 225.
- Eckstein 2006, pp. 173–174.
- Miles 2011, pp. 222, 225.
- Goldsworthy 2006, pp. 143–144.
- Goldsworthy 2006, p. 144.
- Goldsworthy 2006, pp. 144–145.
- Goldsworthy 2006, p. 145.
- Goldsworthy 2006, pp. 310–311.
- Briscoe 2006, p. 61.
- Edwell 2015, p. 327.
- Castillo 2006, p. 25.
- Goldsworthy 2006, p. 151.
- Zimmermann 2011, p. 283.
- Mahaney 2008, p. 221.
- Lazenby 1998, p. 41.
- Fronda 2011, p. 252.
- Zimmermann 2011, p. 291.
- Edwell 2015, p. 321.
- Hoyos 2015b, p. 107.
- Zimmermann 2011, pp. 283–284.
- Fronda 2011, p. 243.
- Zimmermann 2011, p. 284.
- Fronda 2011, pp. 243–244.
- Zimmermann 2011, p. 285.
- Goldsworthy 2006, p. 184.
- Liddell Hart 1967, p. 45.
- Fronda 2011, p. 244.
- Goldsworthy 2006, p. 190.
- Miles 2011, p. 270.
- Lazenby 1998, p. 86.
- Bagnall 1999, p. 183.
- Bagnall 1999, pp. 184–188.
- Zimmermann 2011, p. 286.
- Fronda 2011, p. 245.
- Hoyos 2015, p. 127.
- Sabin 1996, p. 62.
- Goldsworthy 2006, p. 222.
- Lazenby 1998, p. 87.
- Goldsworthy 2006, pp. 222–226.
- Rawlings 2015, p. 313.
- Goldsworthy 2006, p. 223.
- Goldsworthy 2006, p. 225.
- Goldsworthy 2006, pp. 225–226.
- Goldsworthy 2006, p. 226.
- Lazenby 1998, p. 98.
- Erdkamp 2015, p. 75.
- Barceló 2015, p. 370.
- Goldsworthy 2006, p. 227.
- Goldsworthy 2006, pp. 222–235.
- Goldsworthy 2006, p. 236.
- Goldsworthy 2006, pp. 237–238.
- Bagnall 1999, pp. 199–200.
- Goldsworthy 2006, pp. 253–260.
- Miles 2011, p. 288.
- Edwell 2011, p. 327.
- Bagnall 1999, p. 200.
- Edwell 2011, p. 328.
- Edwell 2011, p. 329.
- Edwell 2011, p. 330.
- Goldsworthy 2006, pp. 266–267.
- Rawlings 2015, p. 311.
- Zimmermann 2011, p. 290.
- Bagnall 1999, pp. 286–287.
- Miles 2011, p. 310.
- Goldsworthy 2006, p. 244.
- Miles 2011, p. 312.
- Bagnall 1999, p. 289.
- Edwell 2011, p. 321.
- Edwell 2011, p. 322.
- Coarelli 2002, pp. 73–74.
- Etcheto 2012, pp. 274–278.
- Miles 2011, pp. 268, 298–299.
- Edwell 2011, p. 323.
- Zimmermann 2011, p. 292.
- Barceló 2015, p. 362.
- Hoyos 2015, p. 178.
- Zimmermann 2011, p. 293.
- Miles 2011, p. 303.
- Edwell 2011, p. 333.
- Barceló 2015, p. 372.
- Goldsworthy 2006, pp. 286–288.
- Goldsworthy 2006, pp. 291–292.
- Bagnall 1999, pp. 282–283.
- Goldsworthy 2006, pp. 298–300.
- Bagnall 1999, pp. 287–291.
- Goldsworthy 2006, p. 302.
- Miles 2011, p. 315.
- Bagnall 1999, pp. 291–293.
- Goldsworthy 2006, pp. 308–309.
- Eckstein 2006, p. 176.
- Miles 2011, p. 318.
- Kunze 2015, p. 398.
- Kunze 2015, pp. 398, 407.
- Kunze 2015, p. 407.
- Kunze 2015, p. 408.
- Le Bohec 2015, p. 434.
- Le Bohec 2015, pp. 436–437.
- Le Bohec 2015, p. 438.
- Bagnall 1999, pp. 309–310.
- Coarelli 1981, p. 187.
- Le Bohec 2015, p. 439.
- Miles 2011, p. 343.
- Bagnall 1999, p. 314.
- Bagnall 1999, p. 315.
- Le Bohec 2015, p. 440.
- Goldsworthy 2006, pp. 348–349.
- Goldsworthy 2006, p. 349.
- Bagnall 1999, p. 318.
- Miles 2011, p. 2.
- Le Bohec 2015, p. 441.
- Miles 2011, p. 346.
- Miles 2011, p. 3.
- Miles 2011, pp. 3–4.
- Scullard 2002, p. 316.
- Ridley 1986, pp. 144–145.
- Baker 2014, p. 50.
- Scullard 2002, pp. 310, 316.
- Whittaker 1996, p. 596.
- Pollard 2015, p. 249.
- Fantar 2015, pp. 455–456.
- Richardson 2015, pp. 480–481.
- Miles 2011, pp. 363–364.
- Mazzoni 2010, pp. 13–14.
- Goldsworthy 2006, p. 296.
- UNESCO 2020.
- Bagnall, Nigel (1999). The Punic Wars: Rome, Carthage and the Struggle for the Mediterranean. London: Pimlico. ISBN 978-0-7126-6608-4.
- Baker, Heather D. (2014). "'I burnt, razed (and) destroyed those cities': The Assyrian accounts of deliberate architectural destruction". In Mancini, JoAnne; Bresnahan, Keith (eds.). Architecture and Armed Conflict: The Politics of Destruction. New York: Routledge. pp. 45–57. ISBN 978-0-415-70249-2.
- Barceló, Pedro (2015). "Punic Politics, Economy, and Alliances, 218–201". In Hoyos, Dexter (ed.). A Companion to the Punic Wars. Chichester, West Sussex: John Wiley. pp. 357–375. ISBN 978-1-119-02550-4.
- Le Bohec, Yann (2015). "The "Third Punic War": The Siege of Carthage (148–146 BC)". In Hoyos, Dexter (ed.). A Companion to the Punic Wars. Chichester, West Sussex: John Wiley. pp. 430–446. ISBN 978-1-1190-2550-4.
- Bringmann, Klaus (2007). A History of the Roman Republic. Cambridge, United Kingdom: Polity Press. ISBN 978-0-7456-3370-1.
- Briscoe, John (2006). "The Second Punic War". In Astin, A. E.; Walbank, F. W.; Frederiksen, M. W.; Ogilvie, R. M. (eds.). The Cambridge Ancient History: Rome and the Mediterranean to 133 B.C. VIII. Cambridge: Cambridge University Press. pp. 44–80. ISBN 978-0-521-23448-1.
- Casson, Lionel (1995). Ships and Seamanship in the Ancient World. Baltimore: Johns Hopkins University Press. ISBN 978-0-8018-5130-8.
- Castillo, Dennis Angelo (2006). The Maltese Cross: A Strategic History of Malta. Westport, Connecticut: Greenwood Publishing Group. ISBN 978-0-313-32329-4.
- Champion, Craige B. (2015). "Polybius and the Punic Wars". In Hoyos, Dexter (ed.). A Companion to the Punic Wars. Chichester, West Sussex: John Wiley. pp. 95–110. ISBN 978-1-1190-2550-4.
- Coarelli, Filippo (1981). "La doppia tradizione sulla morte di Romolo e gli auguracula dell'Arx e del Quirinale". In Pallottino, Massimo (ed.). Gli Etruschi e Roma : atti dell'incontro di studio in onore di Massimo Pallottino : Roma, 11-13 dicembre 1979 (in Italian). Rome: G. Bretschneider. pp. 173–188. ISBN 978-88-85007-51-2.
- Coarelli, Filippo (2002). "I ritratti di 'Mario' e 'Silla' a Monaco e il sepolcro degli Scipioni". Eutopia Nuova Serie (in Italian). II (1): 47–75. ISSN 1121-1628.
- Collins, Roger (1998). Spain: An Oxford Archaeological Guide. Oxford University Press. ISBN 978-0-19-285300-4.
- Curry, Andrew (2012). "The Weapon That Changed History". Archaeology. 65 (1): 32–37. JSTOR 41780760.
- Eckstein, Arthur (2006). Mediterranean Anarchy, Interstate War, and the Rise of Rome. Berkeley: University of California Press. ISBN 978-0-520-24618-8.
- Eckstein, Arthur (2017). "The First Punic War and After, 264–237 BC". The Encyclopedia of Ancient Battles. Wiley Online Library. pp. 1–14. doi:10.1002/9781119099000.wbabat0270. ISBN 978-1-4051-8645-2.
- Edwell, Peter (2011). "War Abroad: Spain, Sicily, Macedon, Africa". In Hoyos, Dexter (ed.). A Companion to the Punic Wars. Chichester, West Sussex: John Wiley. pp. 320–338. ISBN 978-1-119-02550-4.
- Edwell, Peter (2015). "War Abroad: Spain, Sicily, Macedon, Africa". In Hoyos, Dexter (ed.). A Companion to the Punic Wars. Chichester, West Sussex: John Wiley. pp. 320–338. ISBN 978-1-119-02550-4.
- Erdkamp, Paul (2015). "Manpower and Food Supply in the First and Second Punic Wars". In Hoyos, Dexter (ed.). A Companion to the Punic Wars. Chichester, West Sussex: John Wiley. pp. 58–76. ISBN 978-1-1190-2550-4.
- Etcheto, Henri (2012). Les Scipions. Famille et pouvoir à Rome à l'époque républicaine (in French). Bordeaux: Ausonius Éditions. ISBN 978-2-35613-073-0.
- Fantar, M’hamed-Hassine (2015). "Death and Transfiguration: Punic Culture after 146". In Hoyos, Dexter (ed.). A Companion to the Punic Wars. Chichester, West Sussex: John Wiley. pp. 449–466. ISBN 978-1-1190-2550-4.
- Fronda, Michael P. (2011). "Hannibal: Tactics, Strategy, and Geostrategy". In Hoyos, Dexter (ed.). A Companion to the Punic Wars. Oxford: Wiley-Blackwell. pp. 242–259. ISBN 978-1-405-17600-2.
- Goldsworthy, Adrian (2006). The Fall of Carthage: The Punic Wars 265–146 BC. London: Phoenix. ISBN 978-0-304-36642-2.
- Hau, Lisa (2016). Moral History from Herodotus to Diodorus Siculus. Edinburgh: Edinburgh University Press. ISBN 978-1-4744-1107-3.
- Hoyos, Dexter (2000). "Towards a Chronology of the 'Truceless War', 241–237 B.C.". Rheinisches Museum für Philologie. 143 (3/4): 369–380. JSTOR 41234468.
- Hoyos, Dexter (2015). A Companion to the Punic Wars. Chichester, West Sussex: John Wiley. ISBN 978-1-1190-2550-4.
- Hoyos, Dexter (2015b). Mastering the West: Rome and Carthage at War. Oxford: Oxford University Press. ISBN 978-0-19-986010-4.
- Koon, Sam (2015). "Phalanx and Legion: the "Face" of Punic War Battle". In Hoyos, Dexter (ed.). A Companion to the Punic Wars. Chichester, West Sussex: John Wiley. pp. 77–94. ISBN 978-1-1190-2550-4.
- Kunze, Claudia (2015). "Carthage and Numidia, 201–149". In Hoyos, Dexter (ed.). A Companion to the Punic Wars. Chichester, West Sussex: John Wiley. pp. 395–411. ISBN 978-1-1190-2550-4.
- Lazenby, John (1996). The First Punic War: A Military History. Stanford, California: Stanford University Press. ISBN 978-0-8047-2673-3.
- Lazenby, John (1998). Hannibal's War: A Military History of the Second Punic War. Warminster: Aris & Phillips. ISBN 978-0-85668-080-9.
- Mahaney, W.C. (2008). Hannibal's Odyssey: Environmental Background to the Alpine Invasion of Italia. Piscataway, New Jersey: Gorgias Press. ISBN 978-1-59333-951-7.
- Mazzoni, Cristina (2010). "Capital City: Rome 1870–2010". Annali d'Italianistica. 28: 13–29. JSTOR 24016385.
- Miles, Richard (2011). Carthage Must be Destroyed. London: Penguin. ISBN 978-0-14-101809-6.
- Mineo, Bernard (2015). "Principal Literary Sources for the Punic Wars (apart from Polybius)". In Hoyos, Dexter (ed.). A Companion to the Punic Wars. Chichester, West Sussex: John Wiley. pp. 111–128. ISBN 978-1-1190-2550-4.
- Murray, William (2011). The Age of Titans: The Rise and Fall of the Great Hellenistic Navies. Oxford: Oxford University Press. ISBN 978-0-19-993240-5.
- Pollard, Elizabeth (2015). Worlds Together Worlds Apart. New York: W.W. Norton. ISBN 978-0-393-92207-3.
- Rankov, Boris (2015). "A War of Phases: Strategies and Stalemates". In Hoyos, Dexter (ed.). A Companion to the Punic Wars. Chichester, West Sussex: John Wiley. pp. 149–166. ISBN 978-1-4051-7600-2.
- Rawlings, Louis (2015). "The War in Italy, 218–203". In Hoyos, Dexter (ed.). A Companion to the Punic Wars. Chichester, West Sussex: John Wiley. pp. 58–76. ISBN 978-1-1190-2550-4.
- Richardson, John (2015). "Spain, Africa, and Rome after Carthage". In Hoyos, Dexter (ed.). A Companion to the Punic Wars. Chichester, West Sussex: John Wiley. pp. 467–482. ISBN 978-1-1190-2550-4.
- Ridley, Ronald (1986). "To Be Taken with a Pinch of Salt: The Destruction of Carthage". Classical Philology. 81 (2): 140–146. doi:10.1086/366973. JSTOR 269786.
- Sabin, Philip (1996). "The Mechanics of Battle in the Second Punic War". Bulletin of the Institute of Classical Studies. Supplement. 41 (67): 59–79. doi:10.1111/j.2041-5370.1996.tb01914.x. JSTOR 43767903.
- Scullard, Howard H. (2002). A History of the Roman World, 753 to 146 BC. London: Routledge. ISBN 978-0-415-30504-4.
- Scullard, Howard H. (2006). "Carthage and Rome". In Walbank, F. W.; Astin, A. E.; Frederiksen, M. W. & Ogilvie, R. M. (eds.). Cambridge Ancient History: Volume 7, Part 2, 2nd Edition. Cambridge: Cambridge University Press. pp. 486–569. ISBN 978-0-521-23446-7.
- Shutt, Rowland (1938). "Polybius: A Sketch". Greece & Rome. 8 (22): 50–57. doi:10.1017/S001738350000588X. JSTOR 642112.
- Sidwell, Keith C.; Jones, Peter V. (1998). The World of Rome: an Introduction to Roman Culture. Cambridge: Cambridge University Press. ISBN 978-0-521-38600-5.
- de Souza, Philip (2008). "Naval Forces". In Sabin, Philip; van Wees, Hans & Whitby, Michael (eds.). The Cambridge History of Greek and Roman Warfare, Volume 1: Greece, the Hellenistic World and the Rise of Rome. Cambridge: Cambridge University Press. pp. 357–367. ISBN 978-0-521-85779-6.
- Tipps, G.K. (1985). "The Battle of Ecnomus". Historia: Zeitschrift für Alte Geschichte. 34 (4): 432–465. JSTOR 4435938.
- Tipps, G. K. (2003). "The Defeat of Regulus". The Classical World. 96 (4): 375–385. doi:10.2307/4352788. JSTOR 4352788.
- "Archaeological Site of Carthage". UNESCO. UNESCO. 2020. Retrieved 26 July 2020.
- Walbank, F.W. (1990). Polybius. 1. Berkeley: University of California Press. ISBN 978-0-520-06981-7.
- Wallinga, Herman (1956). The Boarding-bridge of the Romans: Its Construction and its Function in the Naval Tactics of the First Punic War. Groningen: J.B. Wolters. OCLC 458845955.
- Warmington, Brian (1993). Carthage. New York: Barnes & Noble, Inc. ISBN 978-1-56619-210-1.
- Whittaker, C. R. (1996). "Roman Africa: Augustus to Vespasian". In Bowman, A.; Champlin, E.; Lintott, A. (eds.). The Cambridge Ancient History. X. Cambridge: Cambridge University Press. pp. 595–96. doi:10.1017/CHOL9780521264303.022. ISBN 978-1-139-05438-6.
- Zimmermann, Klaus (2011). "Roman Strategy and Aims in the Second Punic War". In Hoyos, Dexter (ed.). A Companion to the Punic Wars. Oxford: Wiley-Blackwell. pp. 280–298. ISBN 978-1-405-17600-2.
A state is a polity under a system of governance with a monopoly on force. There is no undisputed definition of a state. A widely used definition from the German sociologist Max Weber is that a "state" is a polity that maintains a monopoly on the legitimate use of violence, although other definitions are not uncommon. A state is not synonymous with a government, as stateless governments like the Iroquois Confederacy exist.
In a federal union, the term "state" is sometimes used to refer to the federated polities that make up the federation (other terms used in such federal systems include "province" and "region"). In international law, such entities are not considered states; there the term refers only to the national entity, commonly called the country or nation.
Most of the human population has existed within a state system for millennia; however, for most of prehistory people lived in stateless societies. The earliest forms of states arose about 5,500 years ago in conjunction with rapid growth of cities, invention of writing and codification of new forms of religion. Over time, a variety of different forms developed, employing a variety of justifications for their existence (such as divine right, the theory of the social contract, etc.). Today, the modern nation state is the predominant form of state to which people are subject.
The word state and its cognates in some other European languages (stato in Italian, estado in Spanish and Portuguese, état in French, Staat in German) ultimately derive from the Latin word status, meaning "condition, circumstances".
With the revival of the Roman law in 14th-century Europe, the term came to refer to the legal standing of persons (such as the various "estates of the realm" – noble, common, and clerical), and in particular the special status of the king. The highest estates, generally those with the most wealth and social rank, were those that held power. The word also had associations with Roman ideas (dating back to Cicero) about the "status rei publicae", the "condition of public matters". In time, the word lost its reference to particular social groups and became associated with the legal order of the entire society and the apparatus of its enforcement.
The early 16th-century works of Machiavelli (especially The Prince) played a central role in popularizing the use of the word "state" in something similar to its modern sense. The contrasting of church and state also dates to the 16th century. The North American colonies were called "states" as early as the 1630s. The expression l'Etat, c'est moi ("I am the State") attributed to Louis XIV is probably apocryphal, being first recorded in the late 18th century.
There is no academic consensus on the most appropriate definition of the state. The term "state" refers to a set of different, but interrelated and often overlapping, theories about a certain range of political phenomena. The act of defining the term can be seen as part of an ideological conflict, because different definitions lead to different theories of state function, and as a result validate different political strategies. According to Jeffrey and Painter, "if we define the 'essence' of the state in one place or era, we are liable to find that in another time or space something which is also understood to be a state has different 'essential' characteristics".
Different definitions of the state often place an emphasis either on the ‘means’ or the ‘ends’ of states. Means-related definitions include those by Max Weber and Charles Tilly, both of whom define the state according to its violent means. For Weber, the state "is a human community that (successfully) claims the monopoly of the legitimate use of physical force within a given territory" (Politics as a Vocation), while Tilly characterizes them as "coercion-wielding organisations" (Coercion, Capital, and European States).
Ends-related definitions emphasise instead the teleological aims and purposes of the state. Marxist thought regards the ends of the state as being the perpetuation of class domination in favour of the ruling class which, under the capitalist mode of production, is the bourgeoisie. The state exists to defend the ruling class's claims to private property and its capturing of surplus profits at the expense of the proletariat. Indeed, Marx claimed that "the executive of the modern state is nothing but a committee for managing the common affairs of the whole bourgeoisie" (Communist Manifesto).
Liberal thought provides another possible teleology of the state. According to John Locke, the goal of the state/commonwealth was "the preservation of property" (Second Treatise on Government), with 'property' in Locke's work referring not only to personal possessions but also to one's life and liberty. On this account, the state provides the basis for social cohesion and productivity, creating incentives for wealth creation by providing guarantees of protection for one's life, liberty and personal property. Provision of public goods is considered by some, such as Adam Smith, to be a central function of the state, since these goods would otherwise be underprovided.
The most commonly used definition is Max Weber's, which describes the state as a compulsory political organization with a centralized government that maintains a monopoly of the legitimate use of force within a certain territory. While economic and political philosophers have contested the monopolistic tendency of states, Robert Nozick argues that the use of force naturally tends towards monopoly.
Another commonly accepted definition of the state is the one given at the Montevideo Convention on Rights and Duties of States in 1933. It provides that "[t]he state as a person of international law should possess the following qualifications: (a) a permanent population; (b) a defined territory; (c) government; and (d) capacity to enter into relations with the other states." And that "[t]he federal state shall constitute a sole person in the eyes of international law."
According to the Oxford English Dictionary, a state is "a. an organized political community under one government; a commonwealth; a nation. b. such a community forming part of a federal republic, esp the United States of America".
Confounding the definition problem is that "state" and "government" are often used as synonyms in common conversation and even some academic discourse. Under this definitional schema, states are nonphysical persons of international law, while governments are organizations of people. The relationship between a government and its state is one of representation and authorized agency.
States may be classified by political philosophers as sovereign if they are not dependent on, or subject to any other power or state. Other states are subject to external sovereignty or hegemony where ultimate sovereignty lies in another state. Many states are federated states which participate in a federal union. A federated state is a territorial and constitutional community forming part of a federation. (Compare confederacies or confederations such as Switzerland.) Such states differ from sovereign states in that they have transferred a portion of their sovereign powers to a federal government.
One can commonly and sometimes readily (but not necessarily usefully) classify states according to their apparent make-up or focus. The concept of the nation-state, theoretically or ideally co-terminous with a "nation", became very popular by the 20th century in Europe, but occurred rarely elsewhere or at other times. In contrast, some states have sought to make a virtue of their multi-ethnic or multinational character (Habsburg Austria-Hungary, for example, or the Soviet Union), and have emphasised unifying characteristics such as autocracy, monarchical legitimacy, or ideology. Other states, often fascist or authoritarian ones, promoted state-sanctioned notions of racial superiority. Other states may bring ideas of commonality and inclusiveness to the fore: note the res publica of ancient Rome and the Rzeczpospolita of Poland-Lithuania, which find echoes in the modern-day republic. The concept of temple states centred on religious shrines occurs in some discussions of the ancient world. Relatively small city-states, once a relatively common and often successful form of polity, have become rarer and comparatively less prominent in modern times. Modern-day independent city-states include Vatican City, Monaco, and Singapore. Other city-states survive as federated states, like the present-day German city-states, or as otherwise autonomous entities with limited sovereignty, like Hong Kong, Gibraltar and Ceuta. To some extent, urban secession, the creation of a new city-state (sovereign or federated), continues to be discussed in the early 21st century in cities such as London.
A state can be distinguished from a government. The state is the organization while the government is the particular group of people, the administrative bureaucracy that controls the state apparatus at a given time. That is, governments are the means through which state power is employed. States are served by a continuous succession of different governments. States are immaterial and nonphysical social objects, whereas governments are groups of people with certain coercive powers.
Each successive government is composed of a specialized and privileged body of individuals, who monopolize political decision-making, and are separated by status and organization from the population as a whole.
States can also be distinguished from the concept of a "nation", where "nation" refers to a cultural-political community of people. A nation-state refers to a situation where a single ethnicity is associated with a specific state.
In classical thought, the state was identified with both political society and civil society as a form of political community, while modern thought distinguished the nation state as a political society from civil society as a form of economic society. Thus in modern thought the state is contrasted with civil society.
Antonio Gramsci believed that civil society is the primary locus of political activity because it is where all forms of "identity formation, ideological struggle, the activities of intellectuals, and the construction of hegemony take place", and that civil society was the nexus connecting the economic and political spheres. Arising out of the collective actions of civil society is what Gramsci calls "political society", which Gramsci differentiates from the notion of the state as a polity. He stated that politics was not a "one-way process of political management" but, rather, that the activities of civil organizations conditioned the activities of political parties and state institutions, and were conditioned by them in turn. Louis Althusser argued that civil organizations such as church, schools, and the family are part of an "ideological state apparatus" which complements the "repressive state apparatus" (such as police and military) in reproducing social relations.
Given the role that many social groups have in the development of public policy and the extensive connections between state bureaucracies and other institutions, it has become increasingly difficult to identify the boundaries of the state. Privatization, nationalization, and the creation of new regulatory bodies also change the boundaries of the state in relation to society. Often the nature of quasi-autonomous organizations is unclear, generating debate among political scientists on whether they are part of the state or civil society. Some political scientists thus prefer to speak of policy networks and decentralized governance in modern societies rather than of state bureaucracies and direct state control over policy.
The earliest forms of the state emerged whenever it became possible to centralize power in a durable way. Agriculture and a settled population have been identified as necessary conditions for the formation of states. Certain types of agriculture are more conducive to state formation, such as grain (wheat, barley, millet), because they are suited to concentrated production, taxation, and storage. Agriculture and writing are almost everywhere associated with this process: agriculture because it allowed for the emergence of a social class of people who did not have to spend most of their time providing for their own subsistence, and writing (or an equivalent of writing, like Inca quipus) because it made possible the centralization of vital information. Bureaucratization made expansion over large territories possible.
The first known states were created in Egypt, Mesopotamia, India, China, Mesoamerica, and the Andes. It is only in relatively modern times that states have almost completely displaced alternative "stateless" forms of political organization of societies all over the planet. Roving bands of hunter-gatherers and even fairly sizable and complex tribal societies based on herding or agriculture have existed without any full-time specialized state organization, and these "stateless" forms of political organization have in fact prevailed for all of the prehistory and much of the history of the human species and civilization.
Since the late 19th century, virtually the entirety of the world's inhabitable land has been parcelled up into areas with more or less definite borders claimed by various states. Earlier, quite large land areas had been either unclaimed or uninhabited, or inhabited by nomadic peoples who were not organised as states. However, even within present-day states there are vast areas of wilderness, like the Amazon rainforest, which are uninhabited or inhabited solely or mostly by indigenous people (and some of them remain uncontacted). Also, there are so-called "failed states" which do not hold de facto control over all of their claimed territory or where this control is challenged. Currently the international community comprises around 200 sovereign states, the vast majority of which are represented in the United Nations.
The anthropologist Tim Ingold writes:
It is not enough to observe, in a now rather dated anthropological idiom, that hunter-gatherers live in 'stateless societies', as though their social lives were somehow lacking or unfinished, waiting to be completed by the evolutionary development of a state apparatus. Rather, the principle of their sociality, as Pierre Clastres has put it, is fundamentally against the state.
During the Neolithic period, human societies underwent major cultural and economic changes, including the development of agriculture, the formation of sedentary societies and fixed settlements, increasing population densities, and the use of pottery and more complex tools.
Sedentary agriculture led to the development of property rights, domestication of plants and animals, and larger family sizes. It also provided the basis for the centralized state form by producing a large surplus of food, which created a more complex division of labor by enabling people to specialize in tasks other than food production. Early states were characterized by highly stratified societies, with a privileged and wealthy ruling class that was subordinate to a monarch. The ruling classes began to differentiate themselves through forms of architecture and other cultural practices that were different from those of the subordinate laboring classes.
In the past, it was suggested that the centralized state was developed to administer large public works systems (such as irrigation systems) and to regulate complex economies. However, modern archaeological and anthropological evidence does not support this thesis, pointing to the existence of several non-stratified and politically decentralized complex societies.
Mesopotamia is generally considered to be the location of the earliest civilization or complex society, meaning that it contained cities, full-time division of labor, social concentration of wealth into capital, unequal distribution of wealth, ruling classes, community ties based on residency rather than kinship, long distance trade, monumental architecture, standardized forms of art and culture, writing, and mathematics and science. It was the world's first literate civilization, and formed the first sets of written laws.
Although state-forms existed before the rise of ancient Greek civilization, the Greeks were the first people known to have explicitly formulated a political philosophy of the state, and to have rationally analyzed political institutions. Prior to this, states were described and justified in terms of religious myths.
Several important political innovations of classical antiquity came from the Greek city-states and the Roman Republic. The Greek city-states before the 4th century granted citizenship rights to their free population, and in Athens these rights were combined with a directly democratic form of government that was to have a long afterlife in political thought and history.
During Medieval times in Europe, the state was organized on the principle of feudalism, and the relationship between lord and vassal became central to social organization. Feudalism led to the development of greater social hierarchies.
The formalization of the struggles over taxation between the monarch and other elements of society (especially the nobility and the cities) gave rise to what is now called the Ständestaat, or the state of Estates, characterized by parliaments in which key social groups negotiated with the king about legal and economic matters. These estates of the realm sometimes evolved in the direction of fully fledged parliaments, but sometimes lost out in their struggles with the monarch, leading to greater centralization of lawmaking and military power in his hands. Beginning in the 15th century, this centralizing process gave rise to the absolutist state.
Cultural and national homogenization figured prominently in the rise of the modern state system. Since the absolutist period, states have largely been organized on a national basis. The concept of a national state, however, is not synonymous with nation state. Even in the most ethnically homogeneous societies there is not always a complete correspondence between state and nation, hence the active role often taken by the state to promote nationalism through emphasis on shared symbols and national identity.
Charles Tilly argues that the number of total states in Western Europe declined rapidly from the Late Middle Ages to Early Modern Era during a process of state formation. Other research has disputed whether such a decline took place.
According to Hendrik Spruyt, the modern state is different from its predecessor polities in two main respects: (1) modern states have greater capacity to intervene in their societies, and (2) modern states are buttressed by the principle of international legal sovereignty and the juridical equivalence of states. The two features began to emerge in the Late Middle Ages, but the modern state form took centuries to come firmly to fruition. Other aspects of modern states are that they tend to be organized as unified national polities and to have rational-legal bureaucracies.
Sovereign equality did not become fully global until after World War II amid decolonization. Adom Getachew writes that it was not until the 1960 Declaration on the Granting of Independence to Colonial Countries and Peoples that the international legal context for popular sovereignty was instituted.
Theories for the emergence of the earliest states emphasize grain agriculture and settled populations as necessary conditions. Some argue that climate change led to a greater concentration of human populations around dwindling waterways.
Hendrik Spruyt distinguishes between three prominent categories of explanations for the emergence of the modern state as a dominant polity: (1) security-based explanations that emphasize the role of warfare, (2) economy-based explanations that emphasize trade, property rights and capitalism as drivers behind state formation, and (3) institutionalist theories that see the state as an organizational form better able to resolve conflict and cooperation problems than competing political organizations.
According to Philip Gorski and Vivek Swaroop Sharma, the "neo-Darwinian" framework for the emergence of sovereign states is the dominant explanation in the scholarship. This framework emphasizes how the modern state emerged as the dominant organizational form through natural selection and competition.
Most political theories of the state can roughly be classified into two categories. The first are known as "liberal" or "conservative" theories, which treat capitalism as a given, and then concentrate on the function of states in capitalist society. These theories tend to see the state as a neutral entity separated from society and the economy. Marxist and anarchist theories on the other hand, see politics as intimately tied in with economic relations, and emphasize the relation between economic power and political power. They see the state as a partisan instrument that primarily serves the interests of the upper class.
Anarchism is a political philosophy which considers the state and hierarchies to be immoral, unnecessary and harmful and instead promotes a stateless society, or anarchy, a self-managed, self-governed society based on voluntary, cooperative institutions.
Anarchists believe that the state is inherently an instrument of domination and repression, no matter who is in control of it. Anarchists note that the state possesses the monopoly on the legal use of violence. Unlike Marxists, anarchists believe that revolutionary seizure of state power should not be a political goal. They believe instead that the state apparatus should be completely dismantled, and an alternative set of social relations created, which are not based on state power at all.
Anarcho-capitalists such as Murray Rothbard come to some of the same conclusions about the state apparatus as anarchists, but for different reasons. The two principles that anarcho-capitalists rely on most are consent and non-initiation. Consent in anarcho-capitalist theory requires that individuals explicitly assent to the jurisdiction of the state, excluding Lockean tacit consent. Consent may also create a right of secession, which destroys any concept of a government monopoly on force. Coercive monopolies are excluded by the non-initiation-of-force principle because they must use force in order to prevent others from offering the same service that they do. Anarcho-capitalists start from the belief that replacing monopolistic states with competitive providers is necessary from a normative, justice-based standpoint.
Anarcho-capitalists believe that the market values of competition and privatization can better provide the services provided by the state. Murray Rothbard argues in Power and Market that any and all government functions could better be fulfilled by private actors including: defense, infrastructure, and legal adjudication.
Marx and Engels were clear that the communist goal was a classless society in which the state would have "withered away", replaced only by the "administration of things". Their views are found throughout their Collected Works and address past or then-extant state forms from an analytical and tactical viewpoint, not future social forms; speculation about the latter is generally antithetical to groups considering themselves Marxist but who, not having conquered the existing state power(s), are not in the situation of supplying the institutional form of an actual society. To the extent that it makes sense to speak of one, there is no single "Marxist theory of state"; rather, several different purportedly "Marxist" theories have been developed by adherents of Marxism.
Marx's early writings portrayed the bourgeois state as parasitic, built upon the superstructure of the economy, and working against the public interest. He also wrote that the state mirrors class relations in society in general, acting as a regulator and repressor of class struggle, and as a tool of political power and domination for the ruling class. The Communist Manifesto claimed that the state is nothing more than "a committee for managing the common affairs of the bourgeoisie".
For Marxist theorists, the role of the modern bourgeois state is determined by its function in the global capitalist order. Ralph Miliband argued that the ruling class uses the state as its instrument to dominate society by virtue of the interpersonal ties between state officials and economic elites. For Miliband, the state is dominated by an elite that comes from the same background as the capitalist class. State officials therefore share the same interests as owners of capital and are linked to them through a wide array of social, economic, and political ties.
Gramsci's theories of state emphasized that the state is only one of the institutions in society that helps maintain the hegemony of the ruling class, and that state power is bolstered by the ideological domination of the institutions of civil society, such as churches, schools, and mass media.
Pluralists view society as a collection of individuals and groups competing for political power. They then view the state as a neutral body that simply enacts the will of whichever groups dominate the electoral process. Within the pluralist tradition, Robert Dahl developed the theory of the state as a neutral arena for contending interests, and of its agencies as simply another set of interest groups. With power competitively arranged in society, state policy is a product of recurrent bargaining. Although pluralism recognizes the existence of inequality, it asserts that all groups have an opportunity to pressure the state. The pluralist approach suggests that the modern democratic state's actions are the result of pressures applied by a variety of organized interests. Dahl called this kind of state a polyarchy.
Pluralism has been challenged on the ground that it is not supported by empirical evidence. Citing surveys showing that the large majority of people in high leadership positions are members of the wealthy upper class, critics of pluralism claim that the state serves the interests of the upper class rather than equitably serving the interests of all social groups.
Jürgen Habermas believed that the base-superstructure framework, used by many Marxist theorists to describe the relation between the state and the economy, was overly simplistic. He felt that the modern state plays a large role in structuring the economy, by regulating economic activity and being a large-scale economic consumer/producer, and through its redistributive welfare state activities. Because of the way these activities structure the economic framework, Habermas felt that the state cannot be looked at as passively responding to economic class interests.
Michel Foucault believed that modern political theory was too state-centric, saying "Maybe, after all, the state is no more than a composite reality and a mythologized abstraction, whose importance is a lot more limited than many of us think." He thought that political theory focused too much on abstract institutions and not enough on the actual practices of government. In Foucault's opinion, the state had no essence. He believed that instead of trying to understand the activities of governments by analyzing the properties of the state (a reified abstraction), political theorists should examine changes in the practice of government in order to understand changes in the nature of the state. Foucault argued that it is technology that created the state and made it so elusive and successful, and that, rather than treating the state as something to be toppled, as in the Marxist and anarchist understandings, we should view it as a technological manifestation, a system with many heads. Every scientific and technological advance has come to the service of the state, Foucault argued, and it is with the emergence of the mathematical sciences, and in particular the formation of mathematical statistics, that one gets an understanding of the complex technology by which the modern state was so successfully created. Foucault insisted that the nation state was not a historical accident but a deliberate production, in which the modern state now had to manage the population in coincidence with the emerging practice of the police (cameral science), 'allowing' the population to 'come in' to jus gentium and civitas (civil society) after having been deliberately excluded for several millennia. Democracy (the newly formed voting franchise), Foucault insisted, was not the cry for political freedom or the desire to be accepted by the 'ruling elite' that political revolutionaries and political philosophers have always painted it as, but was part of a skilled endeavour to switch older technologies readily available from the medieval period, such as translatio imperii, plenitudo potestatis and extra Ecclesiam nulla salus, into mass persuasion of the future industrial 'political' population (a deception practised on the population), in which that population was asked to insist upon itself that "the president must be elected". In this way the political symbol agents represented by the pope and the president are democratised. Foucault calls these new forms of technology biopower, and they form part of our political inheritance, which he calls biopolitics.
Heavily influenced by Gramsci, Nicos Poulantzas, a Greek neo-Marxist theorist, argued that capitalist states do not always act on behalf of the ruling class, and that when they do, it is not necessarily because state officials consciously strive to do so, but because the 'structural' position of the state is configured in such a way as to ensure that the long-term interests of capital are always dominant. Poulantzas' main contribution to the Marxist literature on the state was the concept of the 'relative autonomy' of the state. While Poulantzas' work on 'state autonomy' has served to sharpen and specify a great deal of Marxist literature on the state, his own framework came under criticism for its 'structural functionalism'.
The state can be considered a single structural universe: the historical reality that takes shape in societies characterized by codified or crystallized law, by a power organized hierarchically and justified by the law that gives it authority, by a well-defined social and economic stratification, by an economic and social organization that gives the society precise organic characteristics, and by one or more religious organizations that justify the power expressed by such a society and support the religious beliefs of individuals, accepted by society as a whole. This structural universe evolves in a cyclical manner, presenting two different historical phases (a mercantile phase, or "open society", and a feudal phase, or "closed society") with characteristics so divergent that they can be described as two different levels of civilization. Neither level is ever definitive; the two alternate cyclically, and each can be considered progressive (in a partisan way, wholly independent of the real degree of well-being, the freedoms granted, the equality realized, or any concrete possibility of further progress) even by the most cultured, educated, and intellectually equipped fractions of the societies of both historical phases.
State autonomy theorists believe that the state is an entity that is impervious to external social and economic influence, and has interests of its own.
"New institutionalist" writings on the state, such as the works of Theda Skocpol, suggest that state actors are to an important degree autonomous. In other words, state personnel have interests of their own, which they can and do pursue independently of (at times in conflict with) actors in society. Since the state controls the means of coercion, and given the dependence of many groups in civil society on the state for achieving any goals they may espouse, state personnel can to some extent impose their own preferences on civil society.
Various social contract theories have been proffered to establish state legitimacy and to explain state formation. A common element in these theories is a state of nature that incentivizes people to seek the establishment of a state. Thomas Hobbes described life in the state of nature as "solitary, poor, nasty, brutish, and short" (Leviathan, Chapters XIII–XIV). Locke takes a more benign view of the state of nature and is unwilling to take as hard a stance on its degeneracy, but he agrees that it is likewise incapable of providing a high quality of life. Locke argues for inalienable human rights. One of the most significant rights for Locke was the right to property, which he viewed as a keystone right that was inadequately protected in the state of nature. Social contract theorists frequently argue for some level of natural rights, and hold that in order to protect their ability to exercise these rights, people are willing to give up some other rights to the state to allow it to establish governance. Ayn Rand argues that the only right sacrificed is the right to vigilante justice, so that individuals preserve full autonomy over their property. Social contract theory then bases the legitimacy of government on the consent of the governed, but such legitimacy extends only as far as the governed have consented. This line of reasoning figures prominently in the United States Declaration of Independence.
The rise of the modern state system was closely related to changes in political thought, especially concerning the changing understanding of legitimate state power and control. Early modern defenders of absolutism (absolute monarchy), such as Thomas Hobbes and Jean Bodin, undermined the doctrine of the divine right of kings by arguing that the power of kings should be justified by reference to the people. Hobbes in particular went further, arguing that political power should be justified with reference to the individual (Hobbes wrote in the time of the English Civil War), not just to the people understood collectively. Both Hobbes and Bodin thought they were defending the power of kings, not advocating for democracy, but their arguments about the nature of sovereignty were fiercely resisted by more traditional defenders of the power of kings, such as Sir Robert Filmer in England, who thought that such defenses ultimately opened the way to more democratic claims.
Max Weber identified three main sources of political legitimacy in his works. The first, legitimacy based on traditional grounds, is derived from a belief that things should be as they have been in the past and that those who defend these traditions have a legitimate claim to power. The second, legitimacy based on charismatic leadership, is devotion to a leader or group that is viewed as exceptionally heroic or virtuous. The third is rational-legal authority, whereby legitimacy is derived from the belief that a certain group has been placed in power in a legal manner and that its actions are justifiable according to a specific code of written laws. Weber believed that the modern state is characterized primarily by appeals to rational-legal authority.
Some states are often labeled "weak" or "failed". In David Samuels's words, "...a failed state occurs when sovereignty over claimed territory has collapsed or was never effectively established at all". Authors like Samuels and Joel S. Migdal have explored the emergence of weak states, how they differ from Western "strong" states, and the consequences for the economic development of developing countries.
Early state formation
To understand the formation of weak states, Samuels compares the formation of European states in the 1600s with the conditions under which more recent states were formed in the twentieth century. In this line of argument, the state allows a population to resolve a collective action problem, in which citizens recognize the authority of the state and the state in turn exercises the power of coercion over them. This kind of social organization required a decline in the legitimacy of traditional forms of rule (like religious authorities) and their replacement by an increase in the legitimacy of depersonalized rule; an increase in the central government's sovereignty; and an increase in the organizational complexity of the central government (bureaucracy).
The transition to this modern state was possible in Europe around 1600 thanks to the confluence of factors such as technological developments in warfare, which generated strong incentives to tax and to consolidate central structures of governance in order to respond to external threats. This was complemented by increases in food production (as a result of productivity improvements), which made it possible to sustain a larger population and so increased the complexity and centralization of states. Finally, cultural changes challenged the authority of monarchies and paved the way for the emergence of modern states.
Late state formation
The conditions that enabled the emergence of modern states in Europe were different for the countries that started this process later. As a result, many of these states lack effective capabilities to tax and extract revenue from their citizens, which leads to problems like corruption, tax evasion and low economic growth. Unlike the European case, late state formation occurred in a context of limited international conflict, which diminished the incentives to tax and to increase military spending. Also, many of these states emerged from colonization in a state of poverty and with institutions designed to extract natural resources, both of which have made state formation more difficult. European colonization also defined many arbitrary borders that mixed different cultural groups under the same national identities, which has made it difficult to build states whose legitimacy is accepted by the whole population, since some states have to compete for it with other forms of political identity.
As a complement to this argument, Migdal gives a historical account of how sudden social changes in the Third World during the Industrial Revolution contributed to the formation of weak states. The expansion of international trade that started around 1850 brought profound changes to Africa, Asia and Latin America, introduced with the objective of assuring the availability of raw materials for the European market. These changes consisted of: i) reforms to landownership laws aimed at integrating more land into the international economy, ii) increased taxation of peasants and small landowners, with these taxes collected in cash rather than in kind as had been usual up to that moment, and iii) the introduction of new and less costly modes of transportation, mainly railroads. As a result, the traditional forms of social control became obsolete, deteriorating the existing institutions and opening the way to the creation of new ones that did not necessarily lead these countries to build strong states. This fragmentation of the social order induced a political logic in which these states were captured to some extent by "strongmen", who were able to take advantage of the above-mentioned changes and who challenged the sovereignty of the state. As a result, this decentralization of social control has impeded the consolidation of strong states.
- Cudworth et al., 2007: p. 1
- Barrow, 1993: pp. 9–10
- Cudworth et al., 2007: p. 95
- Salmon, 2008: p. 54 Archived 15 May 2016 at the Wayback Machine
- "Stateless Society | Encyclopedia.com". www.encyclopedia.com.
- Marek, Krystyna (1954). Identity and Continuity of States in Public International Law. Library Droz. p. 178.
It has been thought necessary to quote the Lytton Report at such length since it is probably the fullest and most exhaustive description of an allegedly independent, but 'actually' dependent, i.e. Puppet State
- Wimmer, Andreas; Feinstein, Yuval (2010). "The Rise of the Nation-State across the World, 1816 to 2001". American Sociological Review. 75 (5): 764–790.
This global outcome—the almost universal adoption of the nation-state form
- Skinner, 1989:[ page needed]
- Bobbio, 1989: pp.57–58 Archived 30 April 2016 at the Wayback Machine
- C. D. Erhard, Betrachtungen über Leopolds des Weisen Gesetzgebung in Toscana, Richter, 1791, p. 30 Archived 19 January 2018 at the Wayback Machine. Recognized as apocryphal in the early 19th century. Jean Etienne François Marignié, The king can do no wrong: Le roi ne peut jamais avoit tort, le roi ne peut mal faire, Le Normant, 1818 p. 12 Archived 19 January 2018 at the Wayback Machine.
- Barrow, 1993: pp. 10–11
- Painter, Joe; Jeffrey, Alex (2009). Political Geography (2nd ed.). London: Sage Publications Ltd. p. 21. ISBN 978-1-4129-0138-3.
- Smith, Adam (1776). An Inquiry into the Nature and Causes of the Wealth of Nations.
- Dubreuil, Benoít (2010). Human Evolution and the Origins of Hierarchies: The State of Nature. Cambridge University Press. p. 189. ISBN 978-0-521-76948-8. Archived from the original on 4 May 2016.
- Gordon, Scott (2002). Controlling the State: Constitutionalism from Ancient Athens to Today. Harvard University Press. p. 4. ISBN 978-0-674-00977-6. Archived from the original on 3 May 2016.
- Hay, Colin (2001). Routledge Encyclopedia of International Political Economy. New York: Routledge. pp. 1469–1474. ISBN 0-415-14532-5. Archived from the original on 3 May 2016.
- Donovan, John C. (1993). People, power, and politics: an introduction to political science. Rowman & Littlefield. p. 20. ISBN 978-0-8226-3025-8. Archived from the original on 8 May 2016.
- Shaw, Martin (2003). War and genocide: organized killing in modern society. Wiley-Blackwell. p. 59. ISBN 978-0-7456-1907-1. Archived from the original on 3 June 2016.
- Holcombe, Randall (2004). "Government: Unnecessary but Inevitable" (PDF). The Independent Review. VIII (3): 325–342.
- Nozick, Robert (1974). Anarchy, State, and Utopia. Oxford: Blackwell. ISBN 063119780X.
- Article 1 of the Montevideo Convention.
- Article 2 of the Montevideo Convention.
- Thompson, Della, ed. (1995). "state". Concise Oxford English Dictionary (9th ed.). Oxford University Press.
3 (also State) a an organized political community under one government; a commonwealth; a nation. b such a community forming part of a federal republic, esp the United States of America
- Robinson, E. H. 2013. The Distinction Between State and Government Archived 2 November 2013 at the Wayback Machine. The Geography Compass 7(8): pp. 556–566.
- Crawford, J. (2007) The Creation of States in International Law. Oxford University Press.
- The Australian National Dictionary: Fourth Edition, p. 1395. (2004) Canberra. ISBN 0-19-551771-7.
- Longerich, Peter (2010). Holocaust: The Nazi Persecution and Murder of the Jews. Oxford; New York: Oxford University Press. ISBN 978-0-19-280436-5.
- For example: Pastor, Jack (1997). "3: The Early Hellenistic Period". Land and Economy in Ancient Palestine. London: Routledge (published 2013). p. 32. Archived from the original on 19 December 2016. Retrieved 14 February 2017.
The idea of Jerusalem as a temple state is an analogy to the temple states of Asia Minor and the Seleucid Empire, but it is an inappropriate analogy. [...] Rostovtzeff referred to Judea as a sort of temple state, notwithstanding his own definition that stipulates ownership of territory and state organization. [...] Hengel also claims that Judea was a temple state, ignoring his own evidence that the Ptolemies hardly would have tolerated such a situation.
- Athens, Carthage, Rome, Novgorod, Pskov, Hamburg, Bremen, Frankfurt, Lübeck, Florence, Pisa, Genoa, Venice, Danzig, Fiume, Dubrovnik.
- Bealey, Frank, ed. (1999). "government". The Blackwell dictionary of political science: a user's guide to its terms. Wiley-Blackwell. p. 147. ISBN 978-0-631-20695-8. Archived from the original on 16 May 2016.
- Sartwell, 2008: p. 25 Archived 23 June 2016 at the Wayback Machine
- Flint & Taylor, 2007: p. 137
- Robinson, E.H. 2013. The Distinction Between State and Government. Archived 2 November 2013 at the Wayback Machine The Geography Compass 7(8): pp. 556–566.
- Zaleski, Pawel (2008). "Tocqueville on Civilian Society. A Romantic Vision of the Dichotomic Structure of Social Reality". Archiv für Begriffsgeschichte. Felix Meiner Verlag. 50.
- Ehrenberg, John (1999). "Civil Society and the State". Civil society: the critical history of an idea. NYU Press. p. 109. ISBN 978-0-8147-2207-7.
- Kaviraj, Sudipta (2001). "In search of civil society". In Kaviraj, Sudipta; Khilnani, Sunil (eds.). Civil society: history and possibilities. Cambridge University Press. pp. 291–293. ISBN 978-0-521-00290-5. Archived from the original on 1 May 2016.
- Reeve, Andrew (2001). "Civil society". In Jones, R.J. Barry (ed.). Routledge Encyclopedia of International Political Economy: Entries P–Z. Taylor & Francis. pp. 158–160. ISBN 978-0-415-24352-0. Archived from the original on 23 June 2016.
- Sassoon, Anne Showstack (2000). Gramsci and contemporary politics: beyond pessimism of the intellect. Psychology Press. p. 70. ISBN 978-0-415-16214-2. Archived from the original on 3 May 2016.
- Augelli, Enrico & Murphy, Craig N. (1993). "Gramsci and international relations: a general perspective with examples from recent US policy towards the Third World". In Gill, Stephen (ed.). Gramsci, historical materialism and international relations. Cambridge University Press. p. 129. ISBN 978-0-521-43523-9. Archived from the original on 2 May 2016.
- Ferretter, Luke (2006). Louis Althusser. Taylor & Francis. p. 85. ISBN 978-0-415-32731-2.
- Flecha, Ramon (2009). "The Educative City and Critical Education". In Apple, Michael W.; et al. (eds.). The Routledge international handbook of critical education. Taylor & Francis. p. 330. ISBN 978-0-415-95861-5.
- Malešević, 2002: p. 16 Archived 23 July 2016 at the Wayback Machine
- Morrow, Raymond Allen & Torres, Carlos Alberto (2002). Reading Freire and Habermas: critical pedagogy and transformative social change. Teacher's College Press. p. 77. ISBN 978-0-8077-4202-0.
- Kjaer, Anne Mette (2004). Governance. Wiley-Blackwell. ISBN 978-0-7456-2979-7. Archived from the original on 11 June 2016. --[ page needed]
- Scott, James C. (2017). Against the Grain: A Deep History of the Earliest States. Yale University Press. ISBN 978-0-300-18291-0. JSTOR j.ctv1bvnfk9.
- Carneiro, Robert L. (1970). "A Theory of the Origin of the State". Science. 169 (3947): 733–738. doi: 10.1126/science.169.3947.733. ISSN 0036-8075. JSTOR 1729765. PMID 17820299.
- Allen, Robert C. (1 April 1997). "Agriculture and the Origins of the State in Ancient Egypt". Explorations in Economic History. 34 (2): 135–154. doi: 10.1006/exeh.1997.0673. ISSN 0014-4983.
- "Transition to agriculture and first state presence: A global analysis". Explorations in Economic History. 2021. doi: 10.1016/j.eeh.2021.101404. hdl: 2077/57593. ISSN 0014-4983.
- Ahmed, Ali T.; Stasavage, David (May 2020). "Origins of Early Democracy". American Political Science Review. 114 (2): 502–518. doi: 10.1017/S0003055419000741. ISSN 0003-0554.
- Mayshar, Joram; Moav, Omer; Neeman, Zvika (2017). "Geography, Transparency, and Institutions". American Political Science Review. 111 (3): 622–636. doi: 10.1017/S0003055417000132. ISSN 0003-0554.
- Boix, Carles (2015). Political Order and Inequality. Cambridge University Press. ISBN 978-1-107-08943-3.
- Giddens, Anthony (1987). "The Traditional State: Domination and Military Power". Contemporary Critique of Historical Materialism. II: The Nation-State and Violence. Cambridge: Polity Press. ISBN 0-520-06039-3.
- Spencer, Charles S. (2010). "Territorial expansion and primary state formation". Proceedings of the National Academy of Sciences. 107 (16): 7119–7126. doi: 10.1073/pnas.1002470107. ISSN 0027-8424. PMID 20385804.
- Ingold, Tim (1999). "On the social relations of the hunter-gatherer band". In Lee, Richard B.; Daly, Richard Heywood (eds.). The Cambridge encyclopedia of hunters and gatherers. Cambridge University Press. p. 408. ISBN 978-0-521-57109-8. Archived from the original on 17 May 2016.
- Shaw, Ian & Jameson, Robert (2002). "Neolithic". A dictionary of archaeology (6th ed.). Wiley-Blackwell. p. 423. ISBN 978-0-631-23583-5. Archived from the original on 24 April 2016.
- Hassan, F.A. (2007). "The Lie of History: Nation-States and the Contradictions of Complex Societies". In Costanza, Robert; et al. (eds.). Sustainability or collapse?: an integrated history and future of people on earth. MIT Press. p. 186. ISBN 978-0-262-03366-4. Archived from the original on 2 May 2016.
- Scott, 2009: p. 29 Archived 5 May 2016 at the Wayback Machine
- Langer, Erick D. & Stearns, Peter N. (1994). "Agricultural systems". In Stearns, Peter N. (ed.). Encyclopedia of social history. Taylor & Francis. p. 28. ISBN 978-0-8153-0342-8. Archived from the original on 4 June 2016.
- Cohen, Ronald (1978). "State Origins: A Reappraisal". The Early State. Walter de Gruyter. p. 36. ISBN 978-90-279-7904-9. Archived from the original on 30 April 2016.
- Roosevelt, Anna C. (1999). "The Maritime, Highland, Forest Dynamic and the Origins of Complex Culture". In Salomon, Frank; Schwartz, Stuart B. (eds.). Cambridge history of the Native peoples of the Americas: South America, Volume 3. Cambridge University Press. pp. 266–267. ISBN 978-0-521-63075-7. Archived from the original on 24 June 2016.
- Mann, Michael (1986). "The emergence of stratification, states, and multi-power-actor civilization in Mesopotamia". The sources of social power: A history of power from the beginning to A. D. 1760, Volume 1. Cambridge University Press. ISBN 978-0-521-31349-0. Archived from the original on 25 April 2016.
- Wang, Yuhua (2021). "State-in-Society 2.0: Toward Fourth-Generation Theories of the State". Comparative Politics. doi: 10.5129/001041521x16184035797221.
- Yoffee, Norman (1988). "Context and Authority in Early Mesopotamian Law". In Cohen, Ronald; Toland, Judith D. (eds.). State formation and political legitimacy. Transaction Publishers. p. 95. ISBN 978-0-88738-161-4. Archived from the original on 1 May 2016.
- Yoffee, Norman (2005). Myths of the archaic state: evolution of the earliest cities, states and civilizations. Cambridge University Press. p. 102. ISBN 978-0-521-81837-7. Archived from the original on 11 May 2011.
- Nelson, 2006: p. 17 Archived 16 May 2016 at the Wayback Machine
- Jones, Rhys (2007). People/states/territories: the political geographies of British state transformation. Wiley-Blackwell. pp. 52–53. ISBN 978-1-4051-4033-1. Archived from the original on 2 May 2016. ... see also pp. 54- Archived 16 May 2016 at the Wayback Machine where Jones discusses problems with common conceptions of feudalism.
- Poggi, G. 1978. The Development of the Modern State: A Sociological Introduction. Stanford: Stanford University Press.
- Breuilly, John. 1993. Nationalism and the State Archived 1 May 2016 at the Wayback Machine. New York: St. Martin's Press. ISBN 0-7190-3800-6.
- Tilly, Charles (1990). Coercion, Capital, and European States, AD 990-1992. Blackwell. p. 44.
- Abramson, Scott F. (2017). "The Economic Origins of the Territorial State". International Organization. 71 (1): 97–130. doi: 10.1017/S0020818316000308. ISSN 0020-8183.
- Spruyt, Hendrik (2002). "The Origins, Development, and Possible Decline of the Modern State". Annual Review of Political Science. 5 (1): 127–149. doi: 10.1146/annurev.polisci.5.101501.145837. ISSN 1094-2939.
- Thomas, George M.; Meyer, John W. (1984). "The Expansion of the State". Annual Review of Sociology. 10 (1): 461–482. doi: 10.1146/annurev.so.10.080184.002333. ISSN 0360-0572.
- Getachew, Adom (2019). Worldmaking after Empire: The Rise and Fall of Self-Determination. Princeton University Press. pp. 73–74. ISBN 978-0-691-17915-5. JSTOR j.ctv3znwvg.
- Gorski, Philip; Sharma, Vivek Swaroop (2017), Strandsbjerg, Jeppe; Kaspersen, Lars Bo (eds.), "Beyond the Tilly Thesis: "Family Values" and State Formation in Latin Christendom", Does War Make States?: Investigations of Charles Tilly's Historical Sociology, Cambridge University Press, pp. 98–124, ISBN 978-1-107-14150-6
- Newman, Saul (2010). The Politics of Postanarchism. Edinburgh University Press. p. 109. ISBN 978-0-7486-3495-8. Archived from the original on 29 July 2016.
- Roussopoulos, Dimitrios I. (1973). The political economy of the state: Québec, Canada, U.S.A. Black Rose Books. p. 8. ISBN 978-0-919618-01-5. Archived from the original on 13 May 2016.
- Christoyannopoulos, Alexandre (2010). Christian Anarchism: A Political Commentary on the Gospel. Exeter: Imprint Academic. pp. 123–126.
- Ellul, Jacques (1988). Anarchy and Christianity. Michigan: Wm. B. Eerdmans. pp. 71–74. Archived from the original on 2 November 2015.
The first beast comes up from the sea...It is given 'all authority and power over every tribe, every people, every tongue, and every nation' (13:7). All who dwell on earth worship it. Political power could hardly, I think, be more expressly described, for it is this power which has authority, which controls military force, and which compels adoration (i.e., absolute obedience).
- Rothbard, Murray (1970). Power and Market. Institute for Humane Studies. ISBN 1-933550-05-8.
- Long, Roderick T. (2013). "Anarchism and the Problems of Rand and Paterson: Anarchism and the Problems of Rand and Paterson". The Journal of Ayn Rand Studies. 13 (2): 210–223. doi: 10.5325/jaynrandstud.13.2.0210. ISSN 1526-1018. JSTOR 10.5325/jaynrandstud.13.2.0210.
- Block, Walter (2005). "Ayn Rand and Austrian Economics: Two Peas in a Pod". The Journal of Ayn Rand Studies. 6 (2): 259–269. ISSN 1526-1018. JSTOR 41560283.
- Frederick Engels – Socialism: Utopian and Scientific. 1880 Archived 6 February 2007 at the Wayback Machine Full Text. From Historical Materialism: "State interference in social relations becomes, in one domain after another, superfluous, and then dies out of itself; the government of persons is replaced by the administration of things, and by the conduct of processes of production. The State is not "abolished". It dies out...Socialized production upon a predetermined plan becomes henceforth possible. The development of production makes the existence of different classes of society thenceforth an anachronism. In proportion as anarchy in social production vanishes, the political authority of the State dies out. Man, at last the master of his own form of social organization, becomes at the same time the lord over Nature, his own master – free."
- Flint & Taylor, 2007: p. 139
- Joseph, 2004: p. 15 Archived 6 May 2016 at the Wayback Machine
- Barrow, 1993: p. 4
- Smith, Mark J. (2000). Rethinking state theory. Psychology Press. p. 176. ISBN 978-0-415-20892-5. Archived from the original on 3 May 2016.
- Miliband, Ralph. 1983. Class power and state power. London: Verso.
- Joseph, 2004: p. 44 Archived 29 July 2016 at the Wayback Machine
- Vincent, 1992: pp. 47–48 Archived 30 April 2016 at the Wayback Machine
- Dahl, Robert (1973). Modern Political Analysis. Prentice Hall. p. [ page needed]. ISBN 0-13-596981-6.
- Cunningham, Frank (2002). Theories of democracy: a critical introduction. Psychology Press. pp. 86–87. ISBN 978-0-415-22879-4. Archived from the original on 12 May 2016.
- Zweigenhaft, Richard L. & Domhoff, G. William (2006). Diversity in the power elite: how it happened, why it matters (2nd ed.). Rowman & Littlefield. p. 4. ISBN 978-0-7425-3699-9. Archived from the original on 30 April 2016.
- Duncan, Graeme Campbell (1989). Democracy and the capitalist state. Cambridge University Press. p. 137. ISBN 978-0-521-28062-4. Archived from the original on 25 April 2016.
- Edgar, Andrew (2005). The philosophy of Habermas. McGill-Queen's Press. pp. 5–6, 44. ISBN 978-0-7735-2783-6.
- Cook, Deborah (2004). Adorno, Habermas, and the search for a rational society. Psychology Press. p. 20. ISBN 978-0-415-33479-2. Archived from the original on 25 April 2016.
- Melossi, Dario (2006). "Michel Foucault and the Obsolescent State". In Beaulieu, Alain; Gabbard, David (eds.). Michel Foucault and power today: international multidisciplinary studies in the history of the present. Lexington Books. p. 6. ISBN 978-0-7391-1324-0. Archived from the original on 16 May 2016.
- Gordon, Colin (1991). "Government rationality: an introduction". In Foucault, Michel; et al. (eds.). The Foucault effect: studies in governmentality. University of Chicago Press. p. 4. ISBN 978-0-226-08045-1. Archived from the original on 3 May 2016.
- Mitchell, Timothy (2006). "Society, Economy, and the State Effect". In Sharma, Aradhana; Gupta, Akhil (eds.). The anthropology of the state: a reader. Wiley-Blackwell. p. 179. ISBN 978-1-4051-1467-7. Archived from the original on 18 May 2016.
- Foucault, Michel (2007). Security, Territory, Population. pp. 311–332.
- Foucault, Michel (2007). Security, Territory, Population. pp. 1–27.
- Foucault, Michel (2007). Security, Territory, Population. pp. 87–115, 115–135.
- Giano Rocca “The Faces of Belial – The Scientific Method Applied to Human Condition – Book V” (2020) https://independent.academia.edu/GianoRocca
- Sklair, Leslie (2004). "Globalizing class theory". In Sinclair, Timothy (ed.). Global governance: critical concepts in political science. Taylor & Francis. pp. 139–140. ISBN 978-0-415-27665-8. Archived from the original on 19 May 2016.
- Rueschemeyer, Skocpol, and Evans, 1985:[ page needed]
- Vincent, 1992: p. 43 Archived 24 June 2016 at the Wayback Machine
- Malešević, 2002: p. 85 Archived 20 May 2016 at the Wayback Machine
- Dogan, 1992: pp. 119–120 Archived 17 June 2016 at the Wayback Machine
- "Leviathan, by Thomas Hobbes". www.gutenberg.org. Retrieved 19 November 2020.
- Locke, John (1690). Second Treatise of Government.
- Cox, Stephen (2013). "Rand, Paterson, and the Problem of Anarchism". The Journal of Ayn Rand Studies. 13 (1): 3–25. doi: 10.5325/jaynrandstud.13.1.0003. ISSN 1526-1018. JSTOR 10.5325/jaynrandstud.13.1.0003.
- Rand, Ayn (1 March 1964). "The Nature of Government by Ayn Rand | Ayn Rand". fee.org. Retrieved 19 November 2020.
- Wallerstein, Immanuel (1999). The end of the world as we know it: social science for the twenty-first century. University of Minnesota Press. p. 228. ISBN 978-0-8166-3398-2. Archived from the original on 28 May 2016.
- Collins, Randall (1986). Weberian Sociological Theory. Cambridge University Press. p. 158. ISBN 978-0-521-31426-8. Archived from the original on 3 June 2016.
- Swedberg, Richard & Agevall, Ola (2005). The Max Weber dictionary: key words and central concepts. Stanford University Press. p. 148. ISBN 978-0-8047-5095-0. Archived from the original on 28 April 2016.
- Samuels, David (2012). Comparative Politics. Pearson Higher Education. p. 29.
- Samuels, David. Comparative Politics. Pearson Higher Education.
- Migdal, Joel (1988). Strong societies and weak states: state-society relations and state capabilities in the Third World. pp. Chapter 2.
- Migdal, Joel (1988). Strong societies and weak states: state-society relations and state capabilities in the Third World. Princeton University Press. pp. Chapter 8.
- Barrow, Clyde W. (1993). Critical Theories of State: Marxist, Neo-Marxist, Post-Marxist. University of Wisconsin Press. ISBN 0-299-13714-7.
- Bobbio, Norberto (1989). Democracy and Dictatorship: The Nature and Limits of State Power. University of Minnesota Press. ISBN 0-8166-1813-5.
- Cudworth, Erika (2007). The Modern State: Theories and Ideologies. Edinburgh University Press. ISBN 978-0-7486-2176-7.
- Dogan, Mattei (1992). "Conceptions of Legitimacy". In Paynter, John; et al. (eds.). Encyclopedia of government and politics. Psychology Press. ISBN 978-0-415-07224-3.
- Flint, Colin & Taylor, Peter (2007). Political Geography: World Economy, Nation-State, and Locality (5th ed.). Pearson/Prentice Hall. ISBN 978-0-13-196012-1.
- Hay, Colin (2001). "State theory". In Jones, R.J. Barry (ed.). Routledge Encyclopedia of International Political Economy: Entries P-Z. Taylor & Francis. pp. 1469–1475. ISBN 978-0-415-24352-0.
- Joseph, Jonathan (2004). Social theory: an introduction. NYU Press. ISBN 978-0-8147-4277-8.
- Malešević, Siniša (2002). Ideology, legitimacy and the new state: Yugoslavia, Serbia and Croatia. Routledge. ISBN 978-0-7146-5215-3.
- Nelson, Brian T. (2006). The making of the modern state: a theoretical evolution. Palgrave Macmillan. ISBN 978-1-4039-7189-0.
- Rueschemeyer, Dietrich; Skocpol, Theda; Evans, Peter B. (1985). Bringing the State Back In. Cambridge University Press. ISBN 0-521-31313-9.
- Salmon, Trevor C. (2008). Issues in international relations. Taylor & Francis US. ISBN 978-0-415-43126-2.
- Sartwell, Crispin (2008). Against the state: an introduction to anarchist political theory. SUNY Press. ISBN 978-0-7914-7447-1.
- Scott, James C. (2009). The art of not being governed: an anarchist history of upland Southeast Asia. Yale University Press. ISBN 978-0-300-15228-9.
- Skinner, Quentin (1989). "The state". In Ball, T; Farr, J.; Hanson, R.L. (eds.). Political Innovation and Conceptual Change. Cambridge University Press. pp. 90–131. ISBN 0-521-35978-3.
- Vincent, Andrew (1992). "Conceptions of the State". In Paynter, John; et al. (eds.). Encyclopedia of government and politics. Psychology Press. ISBN 978-0-415-07224-3.
- Barrow, Clyde W. (2002). "The Miliband-Poulantzas Debate: An Intellectual History". In Aronowitz, Stanley; Bratsis, Peter (eds.). Paradigm lost: state theory reconsidered. University of Minnesota Press. ISBN 978-0-8166-3293-0.
- Bottomore, T.B., ed. (1991). "The State". A Dictionary of Marxist thought (2nd ed.). Wiley-Blackwell. ISBN 978-0-631-18082-1.
- Bratsis, Peter (2006). Everyday Life and the State. Paradigm. ISBN 978-1-59451-219-3.
- Faulks, Keith (2000). "Classical Theories of the State and Civil Society". Political sociology: a critical introduction. NYU Press. ISBN 978-0-8147-2709-6.
- Feldbrugge, Ferdinand J.M., ed. (2003). The law's beginning. Martinus Nijhoff Publishers. ISBN 978-90-04-13705-9.
- Fisk, Milton (1989). The state and justice: an essay in political theory. Cambridge University Press. ISBN 978-0-521-38966-2.
- Friedeburg, Robert von (2011). State Forms and State Systems in Modern Europe. Institute of European History.
- Green, Penny & Ward, Tony (2009). "Violence and the State". In Coleman, Roy; et al. (eds.). State, Power, Crime. Sage. p. 116. ISBN 978-1-4129-4805-0.
- Hall, John A., ed. (1994). The state: critical concepts (Vol. 1 & 2). Taylor & Francis. ISBN 978-0-415-08683-7.
- Hansen, Thomas Blom; Stepputat, Finn, eds. (2001). States of imagination: ethnographic explorations of the postcolonial state. Duke University Press. ISBN 978-0-8223-2798-1.
- Hoffman, John (1995). Beyond the state: an introductory critique. Polity Press. ISBN 978-0-7456-1181-5.
- Hoffman, John (2004). Citizenship beyond the state. Sage. ISBN 978-0-7619-4942-8.
- Jessop, Bob (1990). State theory: putting the Capitalist state in its place. Penn State Press. ISBN 978-0-271-00735-9.
- Jessop, Bob (2009). "Redesigning the State, Reorienting State Power, and Rethinking the State". In Leicht, Kevin T.; Jenkins, J. Craig (eds.). Handbook of Politics: State and Society in Global Perspective. Springer. ISBN 978-0-387-68929-6.
- Lefebvre, Henri (2009). Brenner, Neil; Elden, Stuart (eds.). State, space, world: selected essays. University of Minnesota Press. ISBN 978-0-8166-5317-1.
- Long, Roderick T. & Machan, Tibor R. (2008). Anarchism/minarchism: is a government part of a free country?. Ashgate Publishing. ISBN 978-0-7546-6066-8.
- Mann, Michael (1994). "The Autonomous Power of the State: Its Origins, Mechanisms, and Results". In Hall, John A. (ed.). The State: critical concepts, Volume 1. Taylor & Francis. ISBN 978-0-415-08680-6.
- Oppenheimer, Franz (1975). The state. Black Rose Books. ISBN 978-0-919618-59-6.
- Poulantzas, Nicos & Camiller, Patrick (2000). State, power, socialism. Verso. ISBN 978-1-85984-274-4.
- Sanders, John T. & Narveson, Jan (1996). For and against the state: new philosophical readings. Rowman & Littlefield. ISBN 978-0-8476-8165-5.
- Scott, James C. (1998). Seeing like a state: how certain schemes to improve the human condition have failed. Yale University Press. ISBN 978-0-300-07815-2.
- Taylor, Michael (1982). Community, anarchy, and liberty. Cambridge University Press. ISBN 978-0-521-27014-4.
- Zippelius, Reinhold (2010). Allgemeine Staatslehre, Politikwissenschaft (16th ed.). C.H. Beck, Munich. ISBN 978-3406603426.
- Uzgalis, William (5 May 2007). "John Locke". Stanford Encyclopedia of Philosophy.
| https://webot.org/basic/?search=State_(polity) | 21
15 | This material must not be used for commercial purposes, or in any hospital or medical facility. Failure to comply may result in legal action.
Cholesterol and your Child's Health
Cholesterol is a waxy, fat-like substance. Your child's body uses cholesterol to make hormones and new cells, and to protect nerves. Cholesterol is made by your child's body. It also comes from certain foods, such as meat and dairy products. Your child's healthcare provider can help you set goals for your child's cholesterol levels. He or she can help you and your child create a plan to meet those goals.
Cholesterol level goals:
Your child's cholesterol level goals depend on his or her age. They also depend on your child's risk for heart disease and on other health conditions he or she has. The following are general guidelines (an illustrative sketch that checks values against these goals follows the list):
- Total cholesterol includes low-density lipoprotein (LDL), high-density lipoprotein (HDL), and triglyceride levels. The total cholesterol level should be lower than 170 mg/dL.
- LDL cholesterol is called bad cholesterol because it forms plaque in your child's arteries. As plaque builds up, your child's arteries become narrow, and less blood flows through. This increases your child's risk for heart disease, a heart attack, or a stroke later in life. The level should be lower than 110 mg/dL.
- HDL cholesterol is called good cholesterol because it helps remove LDL cholesterol from your child's arteries. It does this by attaching to LDL cholesterol and carrying it to his or her liver. Your child's liver breaks down LDL cholesterol so his or her body can get rid of it. High levels of HDL cholesterol can help prevent a heart attack and stroke. Low levels of HDL cholesterol can increase your child's risk for heart disease, heart attack, and stroke later in life. The level should be higher than 45 mg/dL.
- Triglycerides are a type of fat that store energy from foods your child eats. High levels of triglycerides also cause plaque buildup. This can increase your child's risk for a heart attack or stroke later in life. If your child's triglyceride level is high, his or her LDL cholesterol level may also be high. The level for children 9 years or younger should be 75 mg/dL or lower. The level for children and adolescents 10 years or older should be 90 mg/dL or lower.
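The goal numbers above are simple thresholds, so, purely for illustration, they can be expressed as a short check routine. The sketch below is not medical advice and is not part of the original guidance; the function name, field names, and example values are hypothetical assumptions of this example, not from any clinical source.

```python
# Illustrative sketch only: compares a lipid panel (in mg/dL) against the
# general guideline numbers listed above. Not medical advice; names and
# example values are hypothetical.

def check_lipid_panel(age_years, total, ldl, hdl, triglycerides):
    """Return (measure, value, message) flags for values outside the
    general pediatric guideline ranges quoted in the list above."""
    # Triglyceride goal depends on age: 75 mg/dL or lower through age 9,
    # 90 mg/dL or lower for ages 10 and older.
    tg_limit = 75 if age_years <= 9 else 90

    flags = []
    if total >= 170:
        flags.append(("total cholesterol", total, "goal is lower than 170 mg/dL"))
    if ldl >= 110:
        flags.append(("LDL", ldl, "goal is lower than 110 mg/dL"))
    if hdl <= 45:
        flags.append(("HDL", hdl, "goal is higher than 45 mg/dL"))
    if triglycerides > tg_limit:
        flags.append(("triglycerides", triglycerides,
                      f"goal is {tg_limit} mg/dL or lower at this age"))
    return flags

# Example: a 12-year-old with total 180, LDL 120, HDL 40, triglycerides 95
for measure, value, message in check_lipid_panel(12, 180, 120, 40, 95):
    print(f"{measure}: {value} mg/dL ({message})")
```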
What increases your child's risk for high cholesterol:
- Being overweight or obese, or not getting enough exercise
- A medical condition such as hypertension (high blood pressure) or diabetes
- Certain genes passed from your child's parent to him or her
What you need to know about having your child's cholesterol checked:
Blood tests are used to check your child's cholesterol levels. Blood tests measure your child's levels of triglycerides, LDL cholesterol, and HDL cholesterol. Your child should have his or her cholesterol checked between 9 and 11 years of age. He or she should have it checked a second time at 17 years of age. Your child may need his or her cholesterol checked between 2 and 8 years of age if any of the following is true:
- Your child's parent or close relative has a total cholesterol level of 250 mg/dL or higher.
- Your child's parent was diagnosed with heart disease before 55 years of age.
- Your child has diabetes, high blood pressure, or smokes cigarettes.
- Your child is overweight or obese.
- Your child has kidney disease, juvenile arthritis, or Kawasaki disease.
How healthy fats affect your child's cholesterol levels:
Healthy fats, also called unsaturated fats, help lower LDL cholesterol and triglyceride levels. Healthy fats include the following:
- Monounsaturated fats are found in foods such as olive oil, canola oil, avocado, nuts, and olives.
- Polyunsaturated fats, such as omega 3 fats, are found in fish, such as salmon, trout, and tuna. They can also be found in plant foods such as flaxseed, walnuts, and soybeans.
How unhealthy fats affect your child's cholesterol levels:
Unhealthy fats increase LDL cholesterol and triglyceride levels in your child's blood. They are found in foods high in cholesterol, saturated fat, and trans fat:
- Cholesterol is found in eggs, dairy, and meat.
- Saturated fat is found in butter, cheese, ice cream, whole milk, and coconut oil. Saturated fat is also found in meat, such as hamburgers, sausage, hot dogs, and bologna.
- Trans fat is found in liquid oils and is used in fried and baked foods. Foods that contain trans fats include chips, crackers, muffins, sweet rolls, microwave popcorn, and cookies.
Treatment for high cholesterol will decrease your child's risk for health problems later in life. This includes heart disease, heart attack, and stroke.
- Lifestyle changes may include food, exercise, and weight loss. Your child's healthcare provider will want to start with lifestyle changes. Other treatment may be added if lifestyle changes are not enough.
- Medicines may be prescribed if lifestyle changes do not lower your child's cholesterol levels within 6 months. Medicine is usually only given to children 10 years or older who have high LDL cholesterol levels.
Food changes that can help lower your child's cholesterol levels:
A dietitian can help you create a heart-healthy eating plan for your child. He or she can show you how to read food labels to choose foods low in saturated fat, trans fat, and cholesterol. Depending on your child's age, he or she may also be shown how to read food labels.
- Give your child foods low in fat. Feed your child lean meats, fat-free or 1% fat milk, and low-fat dairy products, such as yogurt and cheese. Try to limit or avoid red meat. Limit or avoid fried foods and baked goods, such as cookies.
- Replace unhealthy fats with healthy fats. Cook foods in olive oil or canola oil. Choose soft margarines that are low in saturated fat and trans fat. Seeds, nuts, and avocados are other examples of healthy fats.
- Feed your child foods with omega-3 fats. Examples include salmon, tuna, mackerel, walnuts, and flaxseed. Do not give your child fish that have high levels of mercury, such as shark, swordfish, and king mackerel.
- Increase the amount of high-fiber foods your child eats. High-fiber foods can help lower LDL cholesterol. Help your child get between 20 and 30 grams of fiber each day. Fruits and vegetables are high in fiber. Offer your child at least 5 servings each day. Other high-fiber foods are whole-grain or whole-wheat breads, pastas, or cereals, and brown rice. Give your child 3 ounces of whole-grain foods each day. Increase fiber slowly. Your child may have abdominal discomfort, bloating, and gas if you add fiber too quickly.
- Give your child healthy protein foods. Examples include low-fat dairy products, skinless chicken and turkey, fish, and nuts.
- Limit foods and drinks that are high in sugar. Your dietitian or healthcare provider can help you create daily limits for high-sugar foods and drinks. The limit may be lower if your child has diabetes or another health condition. Limits can also help your child lose weight if needed.
Help your child make lifestyle changes to lower his or her cholesterol levels:
- Help your child maintain a healthy weight. Ask your child's healthcare provider what a healthy weight is for your child. He or she can help you and your child create a weight loss plan if needed. Weight loss can decrease your child's total cholesterol level. Weight loss may also help keep your child's blood pressure at a healthy level.
- Encourage your child to exercise regularly. Exercise can help lower your child's cholesterol level and maintain a healthy weight. Exercise can also help increase your child's HDL (good) cholesterol level. Work with your child's healthcare provider to create an exercise program that is right for him or her. Your child should get 30 to 60 minutes of moderate exercise most days of the week. Examples of exercise include brisk walking, swimming, or biking. Your child may enjoy exercise more if the whole family is active together. Team sports may also help your child get enough exercise.
- Do not let your older child smoke. Nicotine and other chemicals in cigarettes and cigars can raise his or her cholesterol levels. Ask your child's healthcare provider for information if he or she currently smokes and needs help to quit. E-cigarettes or smokeless tobacco still contain nicotine. Talk to your child's healthcare provider before he or she uses these products.
Always consult your healthcare provider to ensure the information displayed on this page applies to your personal circumstances. | https://www.drugs.com/cg/cholesterol-and-your-child-s-health-ambulatory-care.html | 21 |
33 | Price levels depend on the food production process, including food marketing and food distribution. Fluctuation in food prices is determined by a number of compounding factors. Geopolitical events, global demand, exchange rates, government policy, diseases and crop yield, energy costs, availability of natural resources for agriculture, food speculation, changes in the use of soil and weather events have a direct impact on the increase or decrease of food prices.
The consequences of food price fluctuation are multiple. Increases in food prices endanger food security, particularly for developing countries, and can cause social unrest. They are also related to disparities in diet quality and health, particularly among vulnerable populations such as women and children.
Food prices will, on average, continue to rise for a variety of reasons. A growing world population will put more pressure on supply and demand. Climate change will increase the frequency of extreme weather events, including droughts, storms and heavy rain, and overall increases in temperature will affect food production.
To a certain extent, adverse price trends can be counteracted by food politics.
An intervention to reduce food loss or waste, if sufficiently large, will affect prices upstream and downstream in the supply chain relative to where the intervention occurred.
Food production is a very energy-intensive process. Energy is used at every stage, from producing the raw materials for fertilizers to powering the facilities that process the food, so increases in the price of energy lead to increases in the price of food. Oil prices matter in particular: food distribution depends on transport fuel, so higher oil prices also push food prices up.
Adverse weather events such as droughts or heavy rain can cause harvest failure. There is evidence that extreme weather events and natural disasters have an impact on increased food prices.
The price of food has risen quite drastically since the 2007-08 world food price crisis; the rise has been most noticeable in developing countries and less so in OECD countries and North America.
Consumer prices in rich countries are heavily influenced by the power of discount stores and constitute only a small part of the entire cost of living. In particular, Western pattern diet constituents such as those processed by fast food chains are comparatively cheap in the Western hemisphere; profits rely primarily on quantity (see mass production) rather than on high-priced quality. For some product classes such as dairy or meat, overproduction has distorted price relations in a way unknown in less developed countries (the "butter mountain"). The situation for poor societies is worsened by certain free trade agreements that make it easier to export food in the "southern" direction than vice versa. A striking example is tomato exports from Italy to Ghana under the Economic Partnership Agreements, where the artificially cheap vegetables have played a significant role in the destruction of indigenous agriculture and in a further decline of already ailing local economies.
FAO has developed an "early warning tool" called the Food Price Monitoring and Analysis (FPMA). After the food crises of 2008 and 2011, FAO developed an "early warning indicator to detect abnormal growth in prices in consumer markets in the developing world". The FPMA draws on a variety of data sources to feed its database.
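As an illustration only (the actual FPMA methodology is more elaborate and is described in the FAO references below), a price warning based on compound growth can be sketched as follows; all prices, window sizes and thresholds here are hypothetical:

```python
# Illustrative sketch (not the actual FPMA methodology) of a simple price
# "early warning" check: flag a market when the compound monthly growth rate
# of a price series over a recent window is unusually high.

def compound_monthly_growth(prices):
    """Compound monthly growth rate over the whole series, as a fraction."""
    months = len(prices) - 1
    return (prices[-1] / prices[0]) ** (1 / months) - 1

def abnormal_growth(prices, window=3, threshold=0.05):
    """Flag if compound growth over the last `window` months exceeds the threshold."""
    recent = prices[-(window + 1):]
    return compound_monthly_growth(recent) > threshold

# Hypothetical monthly retail prices for a staple food (local currency per kg)
prices = [100, 101, 103, 104, 109, 118, 130]

rate = compound_monthly_growth(prices[-4:])
print(f"Compound growth over last 3 months: {rate:.1%}")
print("Abnormal growth flagged:", abnormal_growth(prices))
```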
Fluctuating food prices have led to some initiatives in the industrialized world as well. In Canada, Dalhousie University and the University of Guelph have published Canada's Food Price Report every year since 2010. Read by millions of people every year, the report monitors and forecasts food prices for the coming year. It was created by Canadian researchers Sylvain Charlebois and Francis Tapon.
FAO food price index
The FAO food price index is a measure of the monthly change in international prices of a market basket of food commodities. It consists of the average of five commodity group price indices, weighted by the average export shares of each group. A small numerical sketch below illustrates the weighting.
- FAO Cereal price index
- FAO Vegetable oil price index
- FAO Dairy price index
- FAO Meat price index
- FAO Sugar Price index
(Table: FAO food price index by year, with nominal and deflated index values; the yearly figures are not reproduced here.)
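The weighted-average construction of the index can be illustrated with a short sketch. The weights and sub-index values below are invented placeholders rather than actual FAO figures, and the deflator is likewise hypothetical:

```python
# Illustrative sketch of how a composite price index like the FAO food price
# index can be computed: a weighted average of commodity-group sub-indices.
# All numbers below are invented placeholders, NOT actual FAO weights or data.

# Hypothetical export-share weights for the five commodity groups (sum to 1.0)
weights = {"cereals": 0.27, "vegetable_oils": 0.12, "dairy": 0.17,
           "meat": 0.35, "sugar": 0.09}

# Hypothetical sub-index values for one month (base period = 100)
sub_indices = {"cereals": 125.0, "vegetable_oils": 160.0, "dairy": 110.0,
               "meat": 105.0, "sugar": 140.0}

# Nominal composite index: weighted average of the group indices
nominal_index = sum(weights[g] * sub_indices[g] for g in weights)

# "Deflated" (real) index: nominal index divided by a general price deflator
# (placeholder value here)
deflator = 1.08
real_index = nominal_index / deflator

print(f"Nominal index: {nominal_index:.1f}")
print(f"Deflated index: {real_index:.1f}")
```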
World bank food price watch
The World Bank releases the quarterly Food Price Watch report which highlights trends in domestic food prices in low- and middle-income countries, and outlines the (food) policy implications of food price fluctuations.
Grocery Foods Economics
It is rare for price spikes to hit all major foods in most countries at once, but food prices reached all-time peaks in 2008 and 2011, posting deflated year-over-year increases of 15% and 12% respectively, higher than in any data previously collected. One reason for the increase may be the rise in oil prices over the same period.
Food prices rose 4% in the United States in 2007, the highest increase since 1990, and were expected to climb as much again in 2008. As of December 2007, 37 countries faced food crises, and 20 had imposed some sort of food-price controls. In China, the price of pork jumped 58% in 2007. In the 1980s and 1990s, farm subsidies and support programs allowed major grain-exporting countries to hold large surpluses which could be tapped during food shortages to keep prices down. However, new trade policies have made agricultural production much more responsive to market demands, putting global food reserves at their lowest since 1983.
Food prices are rising, wealthier Asian consumers are westernizing their diets, and farmers and nations of the third world are struggling to keep pace. Asian nations have contributed to the global fluid and powdered milk manufacturing industry at a more rapid growth rate over the past five years; in 2008 they accounted for more than 30% of production, with China accounting for more than 10% of both production and consumption in the global fruit and vegetable processing and preserving industry. The trend is similarly evident in industries such as soft drink and bottled water manufacturing, as well as global cocoa, chocolate, and sugar confectionery manufacturing, forecast to grow by 5.7% and 10.0% respectively during 2008 in response to soaring demand in Chinese and Southeast Asian markets.
In 2013, Overseas Development Institute researchers showed that rice had more than doubled in price since 2000, rising by 120% in real terms. This was a result of shifts in trade policy and restocking by major producers. More fundamental drivers of increased prices are the higher costs of fertilizer, diesel and labour. Parts of Asia have seen rural wages rise, with potentially large benefits for Asia's 1.3 billion poor (2008 estimate) in reducing the poverty they face. However, higher prices negatively affect more vulnerable groups who do not share in the economic boom, especially in Asian and African coastal cities. The researchers said the threat means social-protection policies are needed to guard against price shocks. The research proposed that, in the longer run, the rises present export opportunities for West African farmers with high potential for rice production and a chance to replace imports with domestic production.
Most recently, global food prices have been more stable and relatively low: after a sizable increase in late 2017, they are back under 75% of the nominal value seen during the all-time high of the 2011 food crisis. In the long term, prices are expected to stabilize as farmers grow more grain for both fuel and food and eventually bring prices down. This has already occurred with wheat, with more crops planted in the United States, Canada, and Europe in 2009. However, the Food and Agriculture Organization projects that consumers will still have to deal with more expensive food until at least 2018.
Impact of climate change on rising food prices
- Global Hunger Index
- Recent food crises:
- Food riot
- Food waste, Food rescue, Food drive
- Meat industry, Dairy industry
- Food choice
- Agricultural marketing
- Fast food, Junk food
- Fast-moving consumer goods
- Nutrition transition
- Commodity risk
- Prices received index, Prices paid index
- Global issues
- Subsistence crisis
- Engel's Law
This article incorporates text from a free content work. Licensed under CC BY-SA 3.0 License statement/permission on Wikimedia Commons. Text taken from The State of Food and Agriculture 2019. Moving forward on food loss and waste reduction, In brief, 24, FAO, FAO.
- Roser, Max; Ritchie, Hannah (2013-10-08). "Food Prices". Our World in Data.
- Amadeo, Kimberly. "5 Causes of High Food Prices". The Balance. Retrieved 2020-09-19.
- Abbott, Philip C.; Hurt, Christopher; Tyner, Wallace E., eds. (2008). What's Driving Food Prices?. Issue Report.
- Savary, Serge; Ficke, Andrea; Aubertot, Jean-Noël; Hollier, Clayton (2012-12-01). "Crop losses due to diseases and their implications for global food production losses and food security". Food Security. 4 (4): 519–537. doi:10.1007/s12571-012-0200-5. ISSN 1876-4525. S2CID 3335739.
- "Hedge funds accused of gambling with lives of the poorest as food prices soar". The Guardian. 2010-07-18. Retrieved 2020-09-19.
- "Food speculation". Global Justice Now. 2014-12-09. Retrieved 2020-09-19.
- Spratt, S. (2013). "Food price volatility and financial speculation". FAC Working Paper 47. CiteSeerX 10.1.1.304.5228.
- "Food Price Explained". Futures Fundamentals. Retrieved 2020-09-19.
- Bellemare, Marc F. (2015). "Rising Food Prices, Food Price Volatility, and Social Unrest". American Journal of Agricultural Economics. 97 (1): 1–21. doi:10.1093/ajae/aau038. hdl:10.1093/ajae/aau038. ISSN 1467-8276. S2CID 34238445.
- Perez, Ines. "Climate Change and Rising Food Prices Heightened Arab Spring". Scientific American. Retrieved 2020-09-19.
- Koren, Ore; Winecoff, W. Kindred. "Food Price Spikes and Social Unrest: The Dark Side of the Fed's Crisis-Fighting". Foreign Policy. Retrieved 2020-09-19.
- Darmon, Nicole; Drewnowski, Adam (2015-10-01). "Contribution of food prices and diet cost to socioeconomic disparities in diet quality and health: a systematic review and analysis". Nutrition Reviews. 73 (10): 643–660. doi:10.1093/nutrit/nuv027. ISSN 0029-6643. PMC 4586446. PMID 26307238.
- Darnton-Hill, Ian; Cogill, Bruce (2010-01-01). "Maternal and Young Child Nutrition Adversely Affected by External Shocks Such As Increasing Global Food Prices". The Journal of Nutrition. 140 (1): 162S–169S. doi:10.3945/jn.109.111682. ISSN 0022-3166. PMID 19939995.
- "Climate Change: The Unseen Force Behind Rising Food Prices?". World Watch Institute. 2013. Archived from the original on 17 July 2018. Retrieved 7 June 2016.
- The State of Food and Agriculture 2019. Moving forward on food loss and waste reduction, In brief. Rome: FAO. 2019. p. 18.
- "USDA ERS - The Relationship Between Energy Prices and Food-Related Energy Use in the United States". www.ers.usda.gov. Retrieved 2020-09-19.
- "As the Cost of Energy Goes Up, Food Prices Follow". blogs.worldbank.org. Retrieved 2020-09-19.
- Canning, Patrick; Rehkamp, Sarah; Waters, Arnold; Etemadnia, Hamideh. "The Role of Fossil Fuels in the U.S. Food System and the American Diet". www.ers.usda.gov. Retrieved 2020-09-19.
- "How Oil Prices Affect the Price of Food". OilPrice.com. Retrieved 2020-09-19.
- "Consumer food-price inflation". The Economist. 19 July 2014.
- "FAO Global and regional consumer food inflation monitoring". FAO.
- Krupa, Matthias; Lobenstein, Caterina (30 December 2015). "Afrika: Ein Mann pflückt gegen Europa". Retrieved 16 June 2016 – via Die Zeit.
- "Home | Food Price Monitoring and Analysis (FPMA) | Food and Agriculture Organization of the United Nations". www.fao.org. Retrieved 2020-09-19.
- "Developing a price warning indicator as an early warning tool - a compound growth approach | Food Price Monitoring and Analysis (FPMA) | Food and Agriculture Organization of the United Nations". www.fao.org. Retrieved 2020-09-19.
- "Data Sources | Food Price Monitoring and Analysis (FPMA) | Food and Agriculture Organization of the United Nations". www.fao.org. Retrieved 2020-09-19.
- "Canada's Food Price Report 2019". Dalhousie University.
- "Numbeo is the world's largest database about food prices worldwide". Retrieved 6 June 2016.
- "Annual real food price indices". Archived from the original on 1 April 2014. Retrieved 19 March 2014.
- "FAO Food Price Index". FAO. Retrieved 6 June 2016.
- "Food Price Watch". Retrieved 16 June 2016.
- "FAO food prices index". FAO.org. Archived from the original on 24 February 2018. Retrieved 25 Feb 2018.
- "The global grain bubble". Christian Science Monitor. 18 January 2008.
- "The World Food Crisis". The New York Times. 10 April 2008.
- "Food prices rising across the world", CNN. 24 March 2008
- "The real hunger games: How banks gamble on food prices – and the poor lose out". The Independent. Retrieved 1 April 2012.
- "Let them eat baklava". The Economist. 17 March 2012. ISSN 0013-0613. Retrieved 21 December 2018.
- "The end of cheap rice: a cause for celebration?". Overseas Development Institute. Archived from the original on 19 June 2014. Retrieved 6 March 2015.
- Kimball, Jack (7 August 2009). "World food prices stabilize, no drop in sight: WFP". Reuters. Retrieved 6 March 2015.
- "Inflation slows in Feb. as food prices stabilize". GMA News. 5 March 2010. Retrieved 6 March 2015.
- "SA drought persists despite May rainfall". 30 May 2016. Retrieved 16 June 2016.
- Thandi Skade (29 March 2016). "SA's ticking food price bomb". Destiny Man. Archived from the original on 11 July 2017. Retrieved 9 June 2016.
- Eric Holt-Gimenez; Raj Patel, eds. (2009). Food Rebellions: Crisis and the Hunger for Justice. Food First Books. ISBN 978-0935028348.
- Schlosser, Eric (2002). Fast Food Nation: What The All-American Meal is Doing to the World. ISBN 978-0141006871.
- 1975 food price study, part 1. Prepared by the staff of the Select Committee on Nutrition and Human Needs, United States Senate. 1975.
- 1975 food price study, part 2. Prepared by the staff of the Select Committee on Nutrition and Human Needs, United States Senate. 1975.
- "Food price outlook 2016". Economic Research Service in the U.S. Department of Agriculture.
- "Food prices". Financial Times. | https://wiki-offline.jakearchibald.com/wiki/Food_prices | 21 |
15 | NCERT Solutions for Class 10 Science Chapter 4- Carbon and its compounds helps students to understand concepts provided in the textbook in detail. NCERT Solutions for Class 10 Science, provides solutions to all the questions asked at the end of every chapter as well as the questions printed within a chapter.
These NCERT Solutions for Class 10 Science are designed by our subject experts, who are highly experienced. The solutions are authentic and can be referred to by students as a guide while preparing for their examination. The content is straightforward, which makes it easy to understand. The concepts are explained with diagrams, points to remember, step-by-step procedures, shortcuts for numerical problems and tricks to remember formulae and chemical reactions.
Access Answers of Science NCERT Class 10 Chapter 4 – Carbon and Its Compounds
(All In-text and Exercise Questions Solved)
In-text questions set 1 Page number 61
1. What would be the electron dot structure of carbon dioxide which has the formula CO2?
2. What would be the electron dot structure of a molecule of Sulphur which is made up of eight atoms of Sulphur? (Hint – The eight atoms of Sulphur are joined together in the form of a ring).
In-text questions set 2 Page number 61
1. How many structural isomers can you draw for pentane?
Solution: Pentane has three structural isomers: n-pentane (a straight chain of five carbon atoms), 2-methylbutane (isopentane) and 2,2-dimethylpropane (neopentane).
2. What are the two properties of carbon which lead to the huge number of carbon compounds we see around us?
Solution: The two properties of carbon which lead to the huge number of carbon compounds we see around us are
- Tetravalency: carbon has four valence electrons, so each carbon atom can form covalent bonds with four other atoms of carbon or of other elements such as oxygen, chlorine, nitrogen, sulphur and hydrogen.
- Catenation: carbon atoms readily form covalent bonds with other carbon atoms, giving long straight chains, branched chains and rings.
3. What will be the formula and electron dot structure of cyclopentane?
4. Draw the structures for the following compounds.
(i) Ethanoic acid
5. How would you name the following compounds?
- Methanal or Formaldehyde
- 1 – Hexyne
In-text questions set 3 Page number 71
1. Why is the conversion of ethanol to ethanoic acid an oxidation reaction?
Conversion of ethanol to ethanoic acid involves the removal of hydrogen and the addition of oxygen, so it is an oxidation reaction. In the first step, hydrogen is removed from ethanol to form ethanal; loss of hydrogen is oxidation. In the second step, an oxygen atom is added to ethanal to form ethanoic acid; gain of oxygen is also oxidation.
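For reference, the two oxidation steps can be written as the following equations (a standard textbook representation; [O] denotes oxygen supplied by an oxidising agent such as alkaline KMnO4):
CH3CH2OH + [O] → CH3CHO + H2O (ethanol to ethanal, removal of hydrogen)
CH3CHO + [O] → CH3COOH (ethanal to ethanoic acid, addition of oxygen)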
2. A mixture of oxygen and ethyne is burnt for welding. Can you tell why a mixture of ethyne and air is not used?
Solution: A mixture of oxygen and ethyne is burnt for welding instead of a mixture of ethyne and air because the production of intense heat is essential for welding metals. When ethyne is burnt in oxygen, it burns completely and produces a higher temperature than when burnt in air. Oxygen and ethyne give a very hot blue flame, whereas a mixture of air and ethyne gives a sooty flame, which means there are unburnt particles and therefore less heat.
In text questions set 4 Page number 74
1. How would you distinguish experimentally between an alcohol and a carboxylic acid?
Solution: On reaction with sodium carbonate, carboxylic acids produce carbon dioxide gas, which turns lime water milky, whereas alcohols do not give this reaction. This experiment can therefore be used to distinguish between an alcohol and a carboxylic acid.
Reaction of Carboxylic acid with sodium carbonate:
2CH3COOH + Na2CO3 → 2CH3COONa + H2O + CO2
2. What are oxidising agents?
Solution: Oxidising agents are substances which either add oxygen to, or remove hydrogen from, another compound. Examples: halogens, potassium nitrate, and nitric acid.
In text questions set 5 Page number 76
1. Would you be able to check if water is hard by using a detergent?
Solution: It is not possible to check whether water is hard by using a detergent, because detergents are ammonium or sulphonate salts of long-chain carboxylic acids. Unlike soaps, they do not react with the calcium and magnesium ions in hard water to form an insoluble scum, so they lather equally well in both hard and soft water and cannot be used to distinguish the two.
2. People use a variety of methods to wash clothes. Usually after adding the soap, they ‘beat’ the clothes on a stone, or beat it with a paddle, scrub with a brush or the mixture is agitated in a washing machine. Why is agitation necessary to get clean clothes?
Solution: Agitation is necessary to get clean clothes because it helps the soap micelles trap the oil, grease or other impurities that have to be removed. When the clothes are beaten or agitated, these particles are pulled off the fabric surface and carried into the water, thus cleaning the clothes.
Exercise questions Page number 77-78
1. Ethane, with the molecular formula C2H6 has
(a) 6 covalent bonds.
(b) 7 covalent bonds.
(c) 8 covalent bonds.
(d) 9 covalent bonds
Solution: Ethane, with the molecular formula C2H6, has 7 covalent bonds: six carbon-hydrogen bonds and one carbon-carbon bond.
2. Butanone is a four-carbon compound with the functional group
(a) carboxylic acid
Solution: The answer is option (c), i.e. ketone.
3. While cooking, if the bottom of the vessel is getting blackened on the outside, it means that
(a) the food is not cooked completely.
(b) the fuel is not burning completely.
(c) the fuel is wet.
(d) the fuel is burning completely.
Solution: Answer is option b. While cooking, if the bottom of the vessel is getting blackened on the outside indicates that the fuel is not burning completely.
4. Explain the nature of the covalent bond using the bond formation in CH3Cl
Solution: Carbon can neither lose four electrons nor gain four electrons, as either process would require a large amount of energy and make the system unstable. Therefore, carbon completes its octet by sharing its four valence electrons with other carbon atoms or with atoms of other elements. Hence, the bonding that exists in CH3Cl is covalent bonding.
Here, carbon requires 4 electrons to complete its octet, while each hydrogen atom requires one electron to complete its duplet. Also, chlorine requires an electron to complete the octet. Therefore, all of these share the electrons and as a result, carbon forms 3 bonds with hydrogen and one with chlorine.
5. Draw the electron dot structures for
(a) ethanoic acid
(b) H2 S
6. What is a homologous series? Explain with an example.
A homologous series is a series of carbon compounds that have the same functional group and similar chemical properties, and that can be represented by the same general formula. Successive members differ by a -CH2 unit (14 u), so molecular size and mass increase along the series and the physical properties change gradually.
For example, methane, ethane, propane and butane are all part of the alkane homologous series, with the general formula CnH2n+2: methane (CH4), ethane (C2H6), propane (C3H8) and butane (C4H10). It can be noticed that there is a difference of one -CH2 unit between each successive compound.
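As a quick illustration (not part of the NCERT answer), the short sketch below generates the first few alkane formulas from the general formula CnH2n+2 and their approximate molecular masses, showing the constant difference of about 14 u (one -CH2 unit) between successive members:

```python
# Generate the first few members of the alkane homologous series (CnH2n+2)
# and their approximate molecular masses, to show the constant -CH2 (14 u) step.
# Atomic masses are rounded values (C = 12 u, H = 1 u).

names = ["methane", "ethane", "propane", "butane", "pentane", "hexane"]

previous_mass = None
for n, name in enumerate(names, start=1):
    hydrogens = 2 * n + 2                 # general formula CnH2n+2
    mass = 12 * n + 1 * hydrogens         # approximate molecular mass in u
    step = "" if previous_mass is None else f" (+{mass - previous_mass} u)"
    print(f"{name:8s} C{n}H{hydrogens}  mass = {mass} u{step}")
    previous_mass = mass
```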
7. How can ethanol and ethanoic acid be differentiated on the basis of their physical and chemical properties?
|Ethanol||Ethanoic acid|
|Does not react with sodium hydrogen carbonate||Bubbles and fizzes with sodium hydrogen carbonate|
|Has a good smell||Smells like vinegar|
|No action on litmus paper||Turns blue litmus paper red|
|Burning taste||Sour taste|
8. Why does micelle formation take place when soap is added to water? Will a micelle be formed in other solvents such as ethanol also?
Solution: Micelle formation takes place because a soap molecule has two parts with opposite affinities:
(i) a long organic (hydrocarbon) tail, which is hydrophobic and dissolves in oil and grease, and
(ii) an ionic head, which is hydrophilic and dissolves in water.
When soap is added to water, the hydrophobic tails cluster together away from the water and trap the oily dirt, while the ionic heads face outward into the water, forming closed spherical clusters called micelles. The mutual repulsion between the negatively charged heads keeps the micelles dispersed in the water, so when the material being cleaned is rinsed, the trapped dirt is carried away with the soap.
Micelles will not be formed in other solvents such as ethanol, in which the sodium salts of fatty acids do not dissolve in the same way, so such structures cannot form.
9. Why are carbon and its compounds used as fuels for most applications?
Solution: Carbon and its compounds are used as fuels for most applications because they have high calorific values and give out a large amount of energy. Most carbon compounds release a lot of heat and light when burnt in air.
10. Explain the formation of scum when hard water is treated with soap?
Solution: Scum is produced when hard water reacts with soap. The calcium and magnesium ions present in hard water react with soap to form an insoluble white precipitate that sticks to clothes; this precipitate is called scum.
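Using sodium stearate as a typical soap, the reaction is often represented as follows (an illustrative equation, not part of the original answer):
2C17H35COONa (soap) + Ca2+ → (C17H35COO)2Ca (calcium stearate, insoluble scum) + 2Na+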
11. What change will you observe if you test soap with litmus paper (red and blue)?
Solution: When soap is dissolved in water, the solution is alkaline because of the formation of NaOH or KOH. The solution therefore changes the colour of red litmus paper to blue, while blue litmus paper remains blue.
12. What is hydrogenation? What is its industrial application?
Solution: Hydrogenation is a chemical reaction in which hydrogen is added to another compound, usually in the presence of a catalyst such as nickel, palladium or platinum. It is used mainly to saturate organic compounds. Its main industrial application is the hydrogenation of vegetable oils, in which unsaturated liquid oils are converted into saturated solid or semi-solid fats (such as vanaspati or margarine) by passing hydrogen through them in the presence of a nickel catalyst.
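The general change can be represented as follows (a generic scheme rather than a specific industrial recipe):
R-CH=CH-R' (unsaturated oil) + H2 → R-CH2-CH2-R' (saturated fat), in the presence of a nickel catalyst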
13. Which of the following hydrocarbons undergo addition reactions: C2H6, C3H8, C3H6, C2H2 and CH4.
Solution: Unsaturated hydrocarbons undergo addition reactions. Of the given compounds, C3H6 and C2H2 are unsaturated, so they undergo addition reactions; a simple formula-based check is sketched below.
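For open-chain hydrocarbons, saturation can be checked directly from the molecular formula: an alkane CnH2n+2 has exactly 2n + 2 hydrogen atoms, and any open-chain hydrocarbon with fewer hydrogens contains a double or triple bond and can undergo addition. The sketch below (an illustration, not part of the NCERT answer; it assumes acyclic hydrocarbons) applies this rule to the formulas in the question:

```python
# Classify open-chain hydrocarbons as saturated or unsaturated from CxHy.
# A saturated (alkane) formula satisfies y == 2x + 2; fewer hydrogens means
# double/triple bonds are present, so the compound can undergo addition.
# Note: this simple rule assumes acyclic hydrocarbons (rings also reduce y).

hydrocarbons = {"C2H6": (2, 6), "C3H8": (3, 8), "C3H6": (3, 6),
                "C2H2": (2, 2), "CH4": (1, 4)}

for formula, (carbons, hydrogens) in hydrocarbons.items():
    if hydrogens == 2 * carbons + 2:
        verdict = "saturated - no addition reaction"
    else:
        verdict = "unsaturated - undergoes addition reactions"
    print(f"{formula}: {verdict}")
```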
14. Give a test that can be used to differentiate between saturated and unsaturated hydrocarbons.
Solution: The bromine water test can be used to differentiate between unsaturated compounds (such as alkenes and alkynes) and saturated compounds. A solution of bromine in water, called bromine water, has a red-brown colour due to the bromine present in it. When bromine water is added to an unsaturated compound, the bromine adds across the double or triple bond and the red-brown colour is discharged. So, if an organic compound decolourises bromine water, it is an unsaturated hydrocarbon; saturated hydrocarbons (alkanes) do not decolourise bromine water.
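Using ethene as an example, the addition reaction responsible for the decolourisation can be written as (an illustrative equation, not part of the original answer):
CH2=CH2 + Br2 (red-brown) → CH2Br-CH2Br (1,2-dibromoethane, colourless)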
15. Explain the mechanism of the cleaning action of soaps.
Solution: Most dirt is oily in nature and does not dissolve in water. Soap molecules are sodium or potassium salts of long-chain carboxylic acids. The hydrocarbon chain of a soap molecule dissolves in oil while the ionic end dissolves in water, so soap can trap the dirt. The soap molecules arrange themselves into structures called micelles, in which the hydrocarbon tails capture the oil droplets and the ionic heads face outward into the water. This forms an emulsion in water and helps dissolve and carry away the dirt and impurities when the clothes are washed.
Thus the two ends of a soap molecule have different properties: the hydrophilic end dissolves in water and is attracted towards it, while the hydrophobic end dissolves in hydrocarbons (oil and grease) and is repelled by water. At the surface of water, the hydrophobic tails align away from the water because they are not soluble in it.
NCERT Solutions for Class 10 Science Chapter 4 – Carbon and Its Compounds
Questions from this chapter are usually asked for two or three marks. In 2017, questions worth a total of 15 marks were asked from this chapter, whereas in 2018 the weightage was 2 + 3 = 5 marks.
List of Section questions with type
|Section number||Section name||Questions||Question type|
|Section 4.1||Bonding in Carbon – The covalent bond||2||2 short answers|
|Section 4.2||Versatile nature of carbon||5||2 very short answers, 3 long answers|
|Section 4.3||Chemical properties of Carbon compounds||2||2 long answers|
|Section 4.4||Some important carbon compounds – ethanol and ethanoic acid||2||1 short answer, 1 long answer|
|Section 4.5||Soaps and detergents||2||1 very short answer, 1 long answer|
List of Exercise questions with type
This chapter consists of 15 questions and is divided into the following types –
|Multiple choice questions (MCQ)||3|
|Short answer (SA)||2|
|Long answer (LA)||7|
|Very long answer (VLA)||3|
NCERT Solutions for Class 10 Science Chapter 4 – Carbon and Its Compounds
Carbon is a versatile element and the basis for all living organisms. It is tetravalent and shows the property of catenation. Carbon achieves a completely filled outermost shell by forming covalent bonds, in which electrons are shared between two atoms. It forms covalent bonds with oxygen, chlorine, hydrogen, nitrogen, sulphur and with itself, and it can form compounds containing double and triple bonds. Carbon chains may be straight, branched or in rings. Carbon is considered a major source of fuel. Ethanoic acid and ethanol are important carbon compounds used in our daily lives. The behaviour of detergents and soaps is due to the hydrophilic and hydrophobic groups in their molecules, which help emulsify oily dirt and remove it.
Key Features of NCERT Solutions for Class 10 Science Chapter 4 – Carbon and Its Compounds
- The information given in these NCERT Solutions for Class 10 Science Chapter 4 – Carbon and Its Compounds is authentic and simple.
- These solutions provide answers to all the exercise questions present at the end of Chapter 4 Carbon and Its Compounds from NCERT Class 10 Science textbook.
- The solutions to questions printed between the lesson have also been provided.
- NCERT Solutions for Class 10 Science Chapter 4 – Carbon and Its Compounds are provided by expert teachers after extensive research.
- These solutions will be useful to prepare for the board exam as well as various competitive exams.
- Students can refer to these solutions to prepare for their board exam, as they consist of step-by-step procedures, neat labelled diagrams, shortcuts and tips to tackle complex types of questions smartly.
You can check out NCERT Solutions for Class 10 to get NCERT Class 10 chapter wise solutions for other subjects.
Frequently Asked Questions on NCERT Solutions for Class 10 Science Chapter 4
What type of questions are present in the Chapter 4 of NCERT Solutions for Class 10 Science?
1. Multiple choice questions – 3 questions
2. Short answer – 2 questions
3. Long answer – 7 questions
4. Very long answer – 3 questions
List out the topics present in the Chapter 4 of NCERT Solutions for Class 10 Science.
1. Bonding in Carbon – The covalent bond
2. Versatile nature of carbon
3. Chemical properties of Carbon compounds
4. Some important carbon compounds – ethanol and ethanoic acid
5. Soaps and detergents | https://byjus.com/ncert-solutions-class-10-science-chapter-4-carbon-and-its-compounds/ | 21 |
36 | |Civil rights movement|
|Caused by||Racism, segregation, disenfranchisement, Jim Crow laws, socioeconomic inequality|
The civil rights movement in the United States was a decades-long campaign by African Americans and their like-minded allies to end institutionalized racial discrimination, disenfranchisement and racial segregation in the United States. The movement has its origins in the Reconstruction era during the late 19th century, although it made its largest legislative gains in the mid-1960s after years of direct actions and grassroots protests. The social movement's major nonviolent resistance and civil disobedience campaigns eventually secured new protections in federal law for the human rights of all Americans.
After the American Civil War and the subsequent abolition of slavery in the 1860s, the Reconstruction Amendments to the United States Constitution granted emancipation and constitutional rights of citizenship to all African Americans, most of whom had recently been enslaved. For a short period of time, African American men voted and held political office, but they were increasingly deprived of civil rights, often under the so-called Jim Crow laws, and African Americans were subjected to discrimination and sustained violence by white supremacists in the South. Over the following century, various efforts were made by African Americans to secure their legal and civil rights (see also Civil rights movement (1865-1896) and Civil rights movement (1896-1954)). In 1954, the separate but equal policy, which aided the enforcement of Jim Crow laws, was substantially weakened and eventually dismantled with the United States Supreme Court's Brown v. Board of Education ruling and other rulings which followed. Between 1955 and 1968, nonviolent mass protests and civil disobedience produced crisis situations and productive dialogues between activists and government authorities. Federal, state, and local governments, businesses, and communities often had to respond immediately to these situations, which highlighted the inequities faced by African Americans across the country. The lynching of Chicago teenager Emmett Till in Mississippi, and the outrage generated by seeing how he had been abused when his mother decided to have an open-casket funeral, galvanized the African-American community nationwide. Forms of protest and civil disobedience included boycotts, such as the successful Montgomery bus boycott (1955-56) in Alabama; "sit-ins" such as the Greensboro sit-ins (1960) in North Carolina and the successful Nashville sit-ins in Tennessee; mass marches, such as the Children's Crusade in Birmingham (1963) and the Selma to Montgomery marches (1965) in Alabama; and a wide range of other nonviolent activities and resistance.
At the culmination of a legal strategy pursued by African Americans, the U.S. Supreme Court in 1954 under the leadership of Earl Warren struck down many of the laws that had allowed racial segregation and discrimination to be legal in the United States as unconstitutional. The Warren Court made a series of landmark rulings against racist discrimination, such as Brown v. Board of Education (1954), Heart of Atlanta Motel, Inc. v. United States (1964), and Loving v. Virginia (1967) which banned segregation in public schools and public accommodations, and struck down all state laws banning interracial marriage. The rulings also played a crucial role in bringing an end to the segregationist Jim Crow laws prevalent in the Southern states. In the 1960s, moderates in the movement worked with the United States Congress to achieve the passage of several significant pieces of federal legislation that overturned discriminatory laws and practices and authorized oversight and enforcement by the federal government. The Civil Rights Act of 1964, which was upheld by the Supreme Court in Heart of Atlanta Motel, Inc. v. United States (1964), explicitly banned all discrimination based on race, color, religion, sex, or national origin in employment practices, ended unequal application of voter registration requirements, and prohibited racial segregation in schools, at the workplace, and in public accommodations. The Voting Rights Act of 1965 restored and protected voting rights for minorities by authorizing federal oversight of registration and elections in areas with historic under-representation of minorities as voters. The Fair Housing Act of 1968 banned discrimination in the sale or rental of housing.
African Americans re-entered politics in the South, and young people across the country were inspired to take action. From 1964 through 1970, a wave of inner-city riots and protests in black communities dampened support from the white middle class, but increased support from private foundations. The emergence of the Black Power movement, which lasted from 1965 to 1975, challenged the established black leadership for its cooperative attitude and its constant practice of legalism and non-violence. Instead, its leaders demanded that, in addition to the new laws gained through the nonviolent movement, political and economic self-sufficiency had to be developed in the black community. Support for the Black Power movement came from African Americans who had seen little material improvement since the Civil Rights Movement's peak in the mid-1960s, and who still faced discrimination in jobs, housing, education and politics. Many popular representations of the civil rights movement are centered on the charismatic leadership and philosophy of Martin Luther King Jr., who won the 1964 Nobel Peace Prize for combatting racial inequality through nonviolent resistance. However, some scholars note that the movement was too diverse to be credited to any particular person, organization, or strategy.
Before the American Civil War, eight serving presidents had owned slaves, almost four million black people remained enslaved in the South, only white men with property could vote, and the Naturalization Act of 1790 limited U.S. citizenship to whites. Following the Civil War, three constitutional amendments were passed, including the 13th Amendment (1865) that ended slavery; the 14th Amendment (1868) that gave black people citizenship, adding their total population of four million to the official population of southern states for Congressional apportionment; and the 15th Amendment (1870) that gave black males the right to vote (only males could vote in the U.S. at the time). From 1865 to 1877, the United States underwent a turbulent Reconstruction Era during which the federal government tried to establish free labor and the civil rights of freedmen in the South after the end of slavery. Many whites resisted the social changes, leading to the formation of insurgent movements such as the Ku Klux Klan, whose members attacked black and white Republicans in order to maintain white supremacy. In 1871, President Ulysses S. Grant, the U.S. Army, and U.S. Attorney General Amos T. Akerman initiated a campaign to repress the KKK under the Enforcement Acts. Some states were reluctant to enforce the federal measures of the act. In addition, by the early 1870s, other white supremacist and insurgent paramilitary groups arose that violently opposed African-American legal equality and suffrage, intimidating and suppressing black voters, and assassinating Republican officeholders. However, if the states failed to implement the acts, the laws allowed the Federal Government to get involved. Many Republican governors were afraid of sending black militia troops to fight the Klan for fear of war.
After the disputed election of 1876, which resulted in the end of Reconstruction and the withdrawal of federal troops, whites in the South regained political control of the region's state legislatures. They continued to intimidate and violently attack blacks before and during elections to suppress their voting, but the last African Americans were elected to Congress from the South before disenfranchisement of blacks by states throughout the region, as described below.
From 1890 to 1908, southern states passed new constitutions and laws to disenfranchise African Americans and many Poor Whites by creating barriers to voter registration; voting rolls were dramatically reduced as blacks and poor whites were forced out of electoral politics. After the landmark Supreme Court case of Smith v. Allwright (1944), which prohibited white primaries, progress was made in increasing black political participation in the Rim South and Acadiana - although almost entirely in urban areas and a few rural localities where most blacks worked outside plantations. The status quo ante of excluding African Americans from the political system lasted in the remainder of the South, especially North Louisiana, Mississippi and Alabama, until national civil rights legislation was passed in the mid-1960s to provide federal enforcement of constitutional voting rights. For more than sixty years, blacks in the South were essentially excluded from politics, unable to elect anyone to represent their interests in Congress or local government. Since they could not vote, they could not serve on local juries.
During this period, the white-dominated Democratic Party maintained political control of the South. With whites controlling all the seats representing the total population of the South, they had a powerful voting bloc in Congress. The Republican Party--the "party of Lincoln" and the party to which most blacks had belonged--shrank to insignificance except in remote Unionist areas of Appalachia and the Ozarks as black voter registration was suppressed. The Republican lily-white movement also gained strength by excluding blacks. Until 1965, the "Solid South" was a one-party system under the white Democrats. Excepting the previously noted historic Unionist strongholds the Democratic Party nomination was tantamount to election for state and local office. In 1901, President Theodore Roosevelt invited Booker T. Washington, president of the Tuskegee Institute, to dine at the White House, making him the first African American to attend an official dinner there. "The invitation was roundly criticized by southern politicians and newspapers." Washington persuaded the president to appoint more blacks to federal posts in the South and to try to boost African-American leadership in state Republican organizations. However, these actions were resisted by both white Democrats and white Republicans as an unwanted federal intrusion into state politics.
During the same time as African Americans were being disenfranchised, white southerners imposed racial segregation by law. Violence against blacks increased, with numerous lynchings through the turn of the century. The system of de jure state-sanctioned racial discrimination and oppression that emerged from the post-Reconstruction South became known as the "Jim Crow" system. The United States Supreme Court, made up almost entirely of Northerners, upheld the constitutionality of those state laws that required racial segregation in public facilities in its 1896 decision Plessy v. Ferguson, legitimizing them through the "separate but equal" doctrine. Segregation, which began with slavery, continued with Jim Crow laws, with signs used to show blacks where they could legally walk, talk, drink, rest, or eat. For those places that were racially mixed, non-whites had to wait until all white customers were served first. Elected in 1912, President Woodrow Wilson gave in to demands by Southern members of his cabinet and ordered segregation of workplaces throughout the federal government.
The early 20th century is a period often referred to as the "nadir of American race relations", when the number of lynchings was highest. While tensions and civil rights violations were most intense in the South, social discrimination affected African Americans in other regions as well. At the national level, the Southern bloc controlled important committees in Congress, defeated passage of federal laws against lynching, and exercised considerable power beyond the number of whites in the South.
Characteristics of the post-Reconstruction period:
African Americans and other ethnic minorities rejected this regime. They resisted it in numerous ways and sought better opportunities through lawsuits, new organizations, political redress, and labor organizing (see the Civil rights movement (1896-1954)). The National Association for the Advancement of Colored People (NAACP) was founded in 1909. It fought to end race discrimination through litigation, education, and lobbying efforts. Its crowning achievement was its legal victory in the Supreme Court decision Brown v. Board of Education (1954), when the Warren Court ruled that segregation of public schools in the US was unconstitutional and, by implication, overturned the "separate but equal" doctrine established in Plessy v. Ferguson of 1896. Following the unanimous Supreme Court ruling, many states began to gradually integrate their schools, but some areas of the South resisted by closing public schools altogether.
The integration of Southern public libraries followed demonstrations and protests that used techniques seen in other elements of the larger civil rights movement. This included sit-ins, beatings, and white resistance. For example, in 1963 in the city of Anniston, Alabama, two black ministers were brutally beaten for attempting to integrate the public library. Though there was resistance and violence, the integration of libraries was generally quicker than the integration of other public institutions.
The situation for blacks outside the South was somewhat better (in most states they could vote and have their children educated, though they still faced discrimination in housing and jobs). In 1900 Reverend Matthew Anderson, speaking at the annual Hampton Negro Conference in Virginia, said that "...the lines along most of the avenues of wage-earning are more rigidly drawn in the North than in the South. There seems to be an apparent effort throughout the North, especially in the cities to debar the colored worker from all the avenues of higher remunerative labor, which makes it more difficult to improve his economic condition even than in the South." From 1910 to 1970, blacks sought better lives by migrating north and west out of the South. A total of nearly seven million blacks left the South in what was known as the Great Migration, most during and after World War II. So many people migrated that the demographics of some previously black-majority states changed to a white majority (in combination with other developments). The rapid influx of blacks altered the demographics of Northern and Western cities; happening at a period of expanded European, Hispanic, and Asian immigration, it added to social competition and tensions, with the new migrants and immigrants battling for a place in jobs and housing.
Reflecting social tensions after World War I, as veterans struggled to return to the workforce and labor unions were organizing, the Red Summer of 1919 was marked by hundreds of deaths and higher casualties across the U.S. as a result of white race riots against blacks that took place in more than three dozen cities, such as the Chicago race riot of 1919 and the Omaha race riot of 1919. Urban problems such as crime and disease were blamed on the large influx of Southern blacks to cities in the north and west, based on stereotypes of rural southern African-Americans. Overall, blacks in Northern and Western cities experienced systemic discrimination in a plethora of aspects of life. Within employment, economic opportunities for blacks were routed to the lowest status and restrictive in potential mobility. Within the housing market, stronger discriminatory measures were used in correlation to the influx, resulting in a mix of "targeted violence, restrictive covenants, redlining and racial steering". The Great Migration resulted in many African Americans becoming urbanized, and they began to realign from the Republican to the Democratic Party, especially because of opportunities under the New Deal of the Franklin D. Roosevelt administration during the Great Depression in the 1930s. Substantially under pressure from African-American supporters who began the March on Washington Movement, President Roosevelt issued the first federal order banning discrimination and created the Fair Employment Practice Committee. After both World Wars, black veterans of the military pressed for full civil rights and often led activist movements. In 1948, President Harry Truman issued Executive Order 9981, which ended segregation in the military.
Housing segregation became a nationwide problem following the Great Migration of black people out of the South. Racial covenants were employed by many real estate developers to "protect" entire subdivisions, with the primary intent to keep "white" neighborhoods "white". Ninety percent of the housing projects built in the years following World War II were racially restricted by such covenants. Cities known for their widespread use of racial covenants include Chicago, Baltimore, Detroit, Milwaukee, Los Angeles, Seattle, and St. Louis.
Said premises shall not be rented, leased, or conveyed to, or occupied by, any person other than of the white or Caucasian race.-- Racial covenant for a home in Beverly Hills, California.
While many whites defended their space with violence, intimidation, or legal tactics toward black people, many other whites migrated to more racially homogeneous suburban or exurban regions, a process known as white flight. From the 1930s to the 1960s, the National Association of Real Estate Boards (NAREB) issued guidelines that specified that a realtor "should never be instrumental in introducing to a neighborhood a character or property or occupancy, members of any race or nationality, or any individual whose presence will be clearly detrimental to property values in a neighborhood." The result was the development of all-black ghettos in the North and West, where much housing was older, as well as South.
The first anti-miscegenation law was passed by the Maryland General Assembly in 1691, criminalizing interracial marriage. In a speech in Charleston, Illinois in 1858, Abraham Lincoln stated, "I am not, nor ever have been in favor of making voters or jurors of negroes, nor of qualifying them to hold office, nor to intermarry with white people". By the late 1800s, 38 US states had anti-miscegenation statutes. By 1924, the ban on interracial marriage was still in force in 29 states. While interracial marriage had been legal in California since 1948, in 1957 actor Sammy Davis Jr. faced a backlash for his involvement with white actress Kim Novak. Davis briefly married a black dancer in 1958 to protect himself from mob violence. In 1958, officers in Virginia entered the home of Richard and Mildred Loving and dragged them out of bed for living together as an interracial couple, on the basis that "any white person intermarry with a colored person"-- or vice versa--each party "shall be guilty of a felony" and face prison terms of five years.
Invigorated by the victory of Brown and frustrated by the lack of immediate practical effect, private citizens increasingly rejected gradualist, legalistic approaches as the primary tool to bring about desegregation. They were faced with "massive resistance" in the South by proponents of racial segregation and voter suppression. In defiance, African-American activists adopted a combined strategy of direct action, nonviolence, nonviolent resistance, and many events described as civil disobedience, giving rise to the civil rights movement of 1954 to 1968.
The strategy of public education, legislative lobbying, and litigation that had typified the civil rights movement during the first half of the 20th century broadened after Brown to a strategy that emphasized "direct action": boycotts, sit-ins, Freedom Rides, marches or walks, and similar tactics that relied on mass mobilization, nonviolent resistance, standing in line, and, at times, civil disobedience.
Churches, local grassroots organizations, fraternal societies, and black-owned businesses mobilized volunteers to participate in broad-based actions. This was a more direct and potentially more rapid means of creating change than the traditional approach of mounting court challenges used by the NAACP and others.
In 1952, the Regional Council of Negro Leadership (RCNL), led by T. R. M. Howard, a black surgeon, entrepreneur, and planter, organized a successful boycott of gas stations in Mississippi that refused to provide restrooms for blacks. Through the RCNL, Howard led campaigns to expose brutality by the Mississippi state highway patrol and to encourage blacks to make deposits in the black-owned Tri-State Bank of Memphis which, in turn, gave loans to civil rights activists who were victims of a "credit squeeze" by the White Citizens' Councils.
After Claudette Colvin was arrested for not giving up her seat on a Montgomery, Alabama bus in March 1955, a bus boycott was considered and rejected. But when Rosa Parks was arrested in December, Jo Ann Gibson Robinson of the Montgomery Women's Political Council put the bus boycott protest in motion. Late that night, she, John Cannon (chairman of the Business Department at Alabama State University) and others mimeographed and distributed thousands of leaflets calling for a boycott. The eventual success of the boycott made its spokesman, Martin Luther King Jr., a nationally known figure. It also inspired other bus boycotts, such as the successful Tallahassee, Florida boycott of 1956-57.
In 1957, King and Ralph Abernathy, the leaders of the Montgomery Improvement Association, joined with other church leaders who had led similar boycott efforts, such as C. K. Steele of Tallahassee and T. J. Jemison of Baton Rouge, and other activists such as Fred Shuttlesworth, Ella Baker, A. Philip Randolph, Bayard Rustin and Stanley Levison, to form the Southern Christian Leadership Conference (SCLC). The SCLC, with its headquarters in Atlanta, Georgia, did not attempt to create a network of chapters as the NAACP did. It offered training and leadership assistance for local efforts to fight segregation. The headquarters organization raised funds, mostly from Northern sources, to support such campaigns. It made nonviolence both its central tenet and its primary method of confronting racism.
In 1959, Septima Clark, Bernice Robinson, and Esau Jenkins, with the help of Myles Horton's Highlander Folk School in Tennessee, began the first Citizenship Schools in South Carolina's Sea Islands. They taught literacy to enable blacks to pass voting tests. The program was an enormous success and tripled the number of black voters on Johns Island. SCLC took over the program and duplicated its results elsewhere.
In the spring of 1951, black students in Virginia protested their unequal status in the state's segregated educational system. Students at Moton High School protested the overcrowded conditions and failing facility. Some local leaders of the NAACP had tried to persuade the students to back down from their protest against the Jim Crow laws of school segregation. When the students did not budge, the NAACP joined their battle against school segregation. The NAACP proceeded with five cases challenging the school systems; these were later combined under what is known today as Brown v. Board of Education. Under the leadership of Walter Reuther, the United Auto Workers donated $75,000 to help pay for the NAACP's efforts at the Supreme Court.
On May 17, 1954, the U.S. Supreme Court under Chief Justice Earl Warren ruled unanimously in Brown v. Board of Education of Topeka, Kansas, that mandating, or even permitting, public schools to be segregated by race was unconstitutional. Chief Justice Warren wrote in the court majority opinion that
Segregation of white and colored children in public schools has a detrimental effect upon the colored children. The impact is greater when it has the sanction of the law; for the policy of separating the races is usually interpreted as denoting the inferiority of the Negro group.
The lawyers from the NAACP had to gather plausible evidence in order to win the case of Brown vs. Board of Education. Their method of addressing the issue of school segregation was to enumerate several arguments. One pertained to having exposure to interracial contact in a school environment. It was argued that interracial contact would, in turn, help prepare children to live with the pressures that society exerts in regards to race and thereby afford them a better chance of living in a democracy. In addition, another argument emphasized how "'education' comprehends the entire process of developing and training the mental, physical and moral powers and capabilities of human beings".
Risa Goluboff wrote that the NAACP's intention was to show the Courts that African American children were the victims of school segregation and their futures were at risk. The Court ruled that both Plessy v. Ferguson (1896), which had established the "separate but equal" standard in general, and Cumming v. Richmond County Board of Education (1899), which had applied that standard to schools, were unconstitutional.
The federal government filed a friend of the court brief in the case urging the justices to consider the effect that segregation had on America's image in the Cold War. Secretary of State Dean Acheson was quoted in the brief stating that "The United States is under constant attack in the foreign press, over the foreign radio, and in such international bodies as the United Nations because of various practices of discrimination in this country."
The following year, in the case known as Brown II, the Court ordered segregation to be phased out over time, "with all deliberate speed". Brown v. Board of Education of Topeka, Kansas (1954) did not overturn Plessy v. Ferguson (1896) outright: Plessy concerned segregation in transportation, while Brown dealt with segregation in education. Brown did, however, set in motion the eventual overturning of "separate but equal".
On May 18, 1954, Greensboro, North Carolina, became the first city in the South to publicly announce that it would abide by the Supreme Court's Brown v. Board of Education ruling. "It is unthinkable," remarked School Board Superintendent Benjamin Smith, "that we will try to [override] the laws of the United States." This positive reception for Brown, together with the appointment of African American David Jones to the school board in 1953, convinced numerous white and black citizens that Greensboro was heading in a progressive direction. Integration in Greensboro occurred rather peacefully compared to the process in Southern states such as Alabama, Arkansas, and Virginia where "massive resistance" was practiced by top officials and throughout the states. In Virginia, some counties closed their public schools rather than integrate, and many white Christian private schools were founded to accommodate students who used to go to public schools. Even in Greensboro, much local resistance to desegregation continued, and in 1969, the federal government found the city was not in compliance with the 1964 Civil Rights Act. Transition to a fully integrated school system did not begin until 1971.
Many Northern cities also had de facto segregation policies, which resulted in a vast gulf in educational resources between black and white communities. In Harlem, New York, for example, not a single new school had been built since the turn of the century, nor did a single nursery school exist, even as the Second Great Migration was causing overcrowding. Existing schools tended to be dilapidated and staffed with inexperienced teachers. Brown helped stimulate activism among New York City parents like Mae Mallory who, with the support of the NAACP, initiated a successful lawsuit against the city and state on Brown principles. Mallory and thousands of other parents bolstered the pressure of the lawsuit with a school boycott in 1959. During the boycott, some of the first freedom schools of the period were established. The city responded to the campaign by permitting more open transfers to high-quality, historically white schools. (New York's African-American community, and Northern desegregation activists generally, now found themselves contending with the problem of white flight, however.)
Emmett Till, a 14-year-old African American from Chicago, visited his relatives in Money, Mississippi, for the summer. He allegedly had an interaction with a white woman, Carolyn Bryant, in a small grocery store that violated the norms of Mississippi culture, and Bryant's husband Roy and his half-brother J. W. Milam brutally murdered young Emmett Till. They beat and mutilated him before shooting him in the head and sinking his body in the Tallahatchie River. Three days later, Till's body was discovered and retrieved from the river. After Emmett's mother, Mamie Till, came to identify the remains of her son, she decided she wanted to "let the people see what I have seen". Till's mother then had his body taken back to Chicago, where she had it displayed in an open casket during the funeral services; many thousands of visitors came to pay their respects. The later publication of an image from the funeral in Jet is credited as a crucial moment in the civil rights era for displaying in vivid detail the violent racism that was being directed at black people in America. In a column for The Atlantic, Vann R. Newkirk wrote: "The trial of his killers became a pageant illuminating the tyranny of white supremacy". The state of Mississippi tried two defendants, but they were speedily acquitted by an all-white jury.
"Emmett's murder," historian Tim Tyson writes, "would never have become a watershed historical moment without Mamie finding the strength to make her private grief a public matter." The visceral response to his mother's decision to have an open-casket funeral mobilized the black community throughout the U.S. The murder and resulting trial ended up markedly impacting the views of several young black activists. Joyce Ladner referred to such activists as the "Emmett Till generation." One hundred days after Emmett Till's murder, Rosa Parks refused to give up her seat on the bus in Montgomery, Alabama. Parks later informed Till's mother that her decision to stay in her seat was guided by the image she still vividly recalled of Till's brutalized remains. The glass topped casket that was used for Till's Chicago funeral was found in a cemetery garage in 2009. Till had been reburied in a different casket after being exhumed in 2005. Till's family decided to donate the original casket to the Smithsonian's National Museum of African American Culture and History, where it is now on display. In 2007, Bryant said that she had fabricated the most sensational part of her story in 1955.
On December 1, 1955, nine months after a 15-year-old high school student, Claudette Colvin, refused to give up her seat to a white passenger on a public bus in Montgomery, Alabama, and was arrested, Rosa Parks did the same thing. Parks soon became the symbol of the resulting Montgomery bus boycott and received national publicity. She was later hailed as the "mother of the civil rights movement".
Parks was secretary of the Montgomery NAACP chapter and had recently returned from a meeting at the Highlander Folk School in Tennessee where nonviolence as a strategy was taught by Myles Horton and others. After Parks' arrest, African Americans gathered and organized the Montgomery bus boycott to demand a bus system in which passengers would be treated equally. The organization was led by Jo Ann Robinson, a member of the Women's Political Council who had been waiting for the opportunity to boycott the bus system. Following Rosa Parks' arrest, Jo Ann Robinson mimeographed 52,500 leaflets calling for a boycott. They were distributed around the city and helped gather the attention of civil rights leaders. After the city rejected many of its suggested reforms, the NAACP, led by E. D. Nixon, pushed for full desegregation of public buses. With the support of most of Montgomery's 50,000 African Americans, the boycott lasted for 381 days, until the local ordinance segregating African Americans and whites on public buses was repealed. Ninety percent of African Americans in Montgomery took part in the boycott, which reduced bus revenue significantly, as they comprised the majority of the riders. In November 1956, the United States Supreme Court upheld a district court ruling in the case of Browder v. Gayle and ordered Montgomery's buses desegregated, ending the boycott.
Local leaders established the Montgomery Improvement Association to focus their efforts. Martin Luther King Jr. was elected President of this organization. The lengthy protest attracted national attention for him and the city. His eloquent appeals to Christian brotherhood and American idealism created a positive impression on people both inside and outside the South.
A crisis erupted in Little Rock, Arkansas, when Governor of Arkansas Orval Faubus called out the National Guard on September 4 to prevent entry to the nine African-American students who had sued for the right to attend an integrated school, Little Rock Central High School. Under the guidance of Daisy Bates, the nine students had been chosen to attend Central High because of their excellent grades.
On the first day of school, 15-year-old Elizabeth Eckford was the only one of the nine students who showed up because she did not receive the phone call about the danger of going to school. A photo was taken of Eckford being harassed by white protesters outside the school, and the police had to take her away in a patrol car for her protection. Afterwards, the nine students had to carpool to school and be escorted by military personnel in jeeps.
Faubus was not a proclaimed segregationist. The Arkansas Democratic Party, which then controlled politics in the state, put significant pressure on Faubus after he had indicated he would investigate bringing Arkansas into compliance with the Brown decision. Faubus then took his stand against integration and against the federal court ruling. Faubus' resistance received the attention of President Dwight D. Eisenhower, who was determined to enforce the orders of the federal courts. Critics had charged that Eisenhower was lukewarm, at best, on the goal of desegregating public schools, but he federalized the National Guard in Arkansas and ordered its members to return to their barracks. Eisenhower then deployed elements of the 101st Airborne Division to Little Rock to protect the students.
The students attended high school under harsh conditions. They had to pass through a gauntlet of spitting, jeering whites to arrive at school on their first day, and to put up with harassment from other students for the rest of the year. Although federal troops escorted the students between classes, the students were teased and even attacked by white students when the soldiers were not around. One of the Little Rock Nine, Minnijean Brown, was suspended for spilling a bowl of chili on the head of a white student who was harassing her in the school lunch line. Later, she was expelled for verbally abusing a white female student.
Only Ernest Green of the Little Rock Nine graduated from Central High School. After the 1957-58 school year was over, Little Rock closed its public school system completely rather than continue to integrate. Other school systems across the South followed suit.
During the period considered to be the "African-American civil rights" era, the predominant form of protest was nonviolent, or peaceful. Often referred to as pacifism, the method of nonviolence is considered an attempt to impact society positively. Although acts of racial discrimination have occurred historically throughout the United States, perhaps the most violent region was the former Confederate states. During the 1950s and 1960s, the nonviolent protests of the civil rights movement caused definite tension, which gained national attention.
In order to prepare for protests physically and psychologically, demonstrators received training in nonviolence. According to former civil rights activist Bruce Hartford, there are two main branches of nonviolence training. There is the philosophical method, which involves understanding the method of nonviolence and why it is considered useful, and there is the tactical method, which ultimately teaches demonstrators "how to be a protestor--how to sit-in, how to picket, how to defend yourself against attack, giving training on how to remain cool when people are screaming racist insults into your face and pouring stuff on you and hitting you" (Civil Rights Movement Archive). The philosophical method of nonviolence, in the American civil rights movement, was largely inspired by Mahatma Gandhi's "non-cooperation" policies during his involvement in the Indian independence movement which were intended to gain attention so that the public would either "intervene in advance," or "provide public pressure in support of the action to be taken" (Erikson, 415). As Hartford explains it, philosophical nonviolence training aims to "shape the individual person's attitude and mental response to crises and violence" (Civil Rights Movement Archive). Hartford and activists like him, who trained in tactical nonviolence, considered it necessary in order to ensure physical safety, instill discipline, teach demonstrators how to demonstrate, and form mutual confidence among demonstrators (Civil Rights Movement Archive).
For many, the concept of nonviolent protest was a way of life, a culture. However, not everyone agreed with this notion. James Forman, a former SNCC (and later Black Panther) member and nonviolence trainer, was among those who did not. In his autobiography, The Making of Black Revolutionaries, Forman revealed his perspective on the method of nonviolence as "strictly a tactic, not a way of life without limitations." Similarly, Bob Moses, who was also an active member of SNCC, felt that the method of nonviolence was practical. When interviewed by author Robert Penn Warren, Moses said, "There's no question that he (Martin Luther King Jr.) had a great deal of influence with the masses. But I don't think it's in the direction of love. It's in a practical direction . . ." (Who Speaks for the Negro? Warren).
According to a 2020 study in the American Political Science Review, nonviolent civil rights protests boosted vote shares for the Democratic party in presidential elections in nearby counties, but violent protests substantially boosted white support for Republicans in counties near to the violent protests.
The Jim Crow system employed "terror as a means of social control," with the most organized manifestations being the Ku Klux Klan and their collaborators in local police departments. This violence played a key role in blocking the progress of the civil rights movement in the late 1950s. Some black organizations in the South began practicing armed self-defense. The first to do so openly was the Monroe, North Carolina, chapter of the NAACP led by Robert F. Williams. Williams had rebuilt the chapter after its membership was terrorized out of public life by the Klan. He did so by encouraging a new, more working-class membership to arm itself thoroughly and defend against attack. When Klan nightriders attacked the home of NAACP member Albert Perry in October 1957, Williams' militia exchanged gunfire with the stunned Klansmen, who quickly retreated. The following day, the city council held an emergency session and passed an ordinance banning KKK motorcades. One year later, Lumbee Indians in North Carolina would have a similarly successful armed stand-off with the Klan (known as the Battle of Hayes Pond) which resulted in KKK leader James W. "Catfish" Cole being convicted of incitement to riot.
After the acquittal of several white men charged with sexually assaulting black women in Monroe, Williams announced to United Press International reporters that he would "meet violence with violence" as a policy. Williams' declaration was quoted on the front page of The New York Times, and The Carolina Times considered it "the biggest civil rights story of 1959". NAACP National chairman Roy Wilkins immediately suspended Williams from his position, but the Monroe organizer won support from numerous NAACP chapters across the country. Ultimately, Wilkins resorted to bribing influential organizer Daisy Bates to campaign against Williams at the NAACP national convention and the suspension was upheld. The convention nonetheless passed a resolution which stated: "We do not deny, but reaffirm the right of individual and collective self-defense against unlawful assaults." Martin Luther King Jr. argued for Williams' removal, but Ella Baker and W. E. B. Du Bois both publicly praised the Monroe leader's position.
Williams--along with his wife, Mabel Williams--continued to play a leadership role in the Monroe movement, and to some degree, in the national movement. The Williamses published The Crusader, a nationally circulated newsletter, beginning in 1960, and the influential book Negroes With Guns in 1962. Williams did not call for full militarization in this period, but "flexibility in the freedom struggle." Williams was well-versed in legal tactics and publicity, which he had used successfully in the internationally known "Kissing Case" of 1958, as well as nonviolent methods, which he used at lunch counter sit-ins in Monroe--all with armed self-defense as a complementary tactic.
Williams led the Monroe movement in another armed stand-off with white supremacists during an August 1961 Freedom Ride; he had been invited to participate in the campaign by Ella Baker and James Forman of the Student Nonviolent Coordinating Committee (SNCC). The incident (along with his campaigns for peace with Cuba) resulted in him being targeted by the FBI and prosecuted for kidnapping; he was cleared of all charges in 1976. Meanwhile, armed self-defense continued discreetly in the Southern movement with such figures as SNCC's Amzie Moore, Hartman Turnbow, and Fannie Lou Hamer all willing to use arms to defend their lives from nightriders. Taking refuge from the FBI in Cuba, the Williamses broadcast the radio show Radio Free Dixie throughout the eastern United States via Radio Progreso beginning in 1962. In this period, Williams advocated guerrilla warfare against racist institutions and saw the large ghetto riots of the era as a manifestation of his strategy.
University of North Carolina historian Walter Rucker has written that "the emergence of Robert F Williams contributed to the marked decline in anti-black racial violence in the U.S....After centuries of anti-black violence, African Americans across the country began to defend their communities aggressively--employing overt force when necessary. This in turn evoked in whites real fear of black vengeance..." This opened up space for African Americans to use nonviolent demonstrations with less fear of deadly reprisal. Of the many civil rights activists who share this view, the most prominent was Rosa Parks. Parks gave the eulogy at Williams' funeral in 1996, praising him for "his courage and for his commitment to freedom," and concluding that "The sacrifices he made, and what he did, should go down in history and never be forgotten."
In July 1958, the NAACP Youth Council sponsored sit-ins at the lunch counter of a Dockum Drug Store in downtown Wichita, Kansas. After three weeks, the movement successfully got the store to change its policy of segregated seating, and soon afterward all Dockum stores in Kansas were desegregated. This movement was quickly followed in the same year by a student sit-in at a Katz Drug Store in Oklahoma City led by Clara Luper, which also was successful.
Mostly black students from area colleges led a sit-in at a Woolworth's store in Greensboro, North Carolina. On February 1, 1960, four students, Ezell A. Blair Jr., David Richmond, Joseph McNeil, and Franklin McCain from North Carolina Agricultural & Technical College, an all-black college, sat down at the segregated lunch counter to protest Woolworth's policy of excluding African Americans from being served food there. The four students purchased small items in other parts of the store and kept their receipts, then sat down at the lunch counter and asked to be served. After being denied service, they produced their receipts and asked why their money was good everywhere else at the store, but not at the lunch counter.
The protesters had been encouraged to dress professionally, to sit quietly, and to occupy every other stool so that potential white sympathizers could join in. The Greensboro sit-in was quickly followed by other sit-ins in Richmond, Virginia; Nashville, Tennessee; and Atlanta, Georgia. The most immediately effective of these was in Nashville, where hundreds of well organized and highly disciplined college students conducted sit-ins in coordination with a boycott campaign. As students across the south began to "sit-in" at the lunch counters of local stores, police and other officials sometimes used brutal force to physically escort the demonstrators from the lunch facilities.
The "sit-in" technique was not new--as far back as 1939, African-American attorney Samuel Wilbert Tucker organized a sit-in at the then-segregated Alexandria, Virginia, library. In 1960 the technique succeeded in bringing national attention to the movement. On March 9, 1960, an Atlanta University Center group of students released An Appeal for Human Rights as a full page advertisement in newspapers, including the Atlanta Constitution, Atlanta Journal, and Atlanta Daily World. Known as the Committee on Appeal for Human Rights (COAHR), the group initiated the Atlanta Student Movement and began to lead sit-ins starting on March 15, 1960. By the end of 1960, the process of sit-ins had spread to every southern and border state, and even to facilities in Nevada, Illinois, and Ohio that discriminated against blacks.
Demonstrators focused not only on lunch counters but also on parks, beaches, libraries, theaters, museums, and other public facilities. In April 1960 activists who had led these sit-ins were invited by SCLC activist Ella Baker to hold a conference at Shaw University, a historically black university in Raleigh, North Carolina. This conference led to the formation of the Student Nonviolent Coordinating Committee (SNCC). SNCC took these tactics of nonviolent confrontation further and organized the Freedom Rides. Because the Constitution placed interstate commerce under federal authority, they decided to challenge segregation on interstate buses and in public bus facilities by sending interracial teams to travel from the North through the segregated South.
Freedom Rides were journeys by civil rights activists on interstate buses into the segregated southern United States to test the United States Supreme Court decision Boynton v. Virginia (1960), which ruled that segregation was unconstitutional for passengers engaged in interstate travel. Organized by CORE, the first Freedom Ride of the 1960s left Washington D.C. on May 4, 1961, and was scheduled to arrive in New Orleans on May 17.
During the first and subsequent Freedom Rides, activists traveled through the Deep South to integrate seating patterns on buses and desegregate bus terminals, including restrooms and water fountains. That proved to be a dangerous mission. In Anniston, Alabama, one bus was firebombed, forcing its passengers to flee for their lives.
In Birmingham, Alabama, an FBI informant reported that Public Safety Commissioner Eugene "Bull" Connor gave Ku Klux Klan members fifteen minutes to attack an incoming group of freedom riders before having police "protect" them. The riders were severely beaten "until it looked like a bulldog had got a hold of them." James Peck, a white activist, was beaten so badly that he required fifty stitches to his head.
In a similar occurrence in Montgomery, Alabama, the Freedom Riders followed in the footsteps of Rosa Parks and rode an integrated Greyhound bus from Birmingham. Although they were peacefully protesting interstate bus segregation, they were met with violence in Montgomery as a large white mob attacked them for their activism. The attack set off an enormous, two-hour riot which left 22 people injured, five of whom were hospitalized.
Mob violence in Anniston and Birmingham temporarily halted the rides. SNCC activists from Nashville brought in new riders to continue the journey from Birmingham to New Orleans. In Montgomery, Alabama, at the Greyhound Bus Station, a mob charged another busload of riders, knocking John Lewis unconscious with a crate and smashing Life photographer Don Urbrock in the face with his own camera. A dozen men surrounded James Zwerg, a white student from Fisk University, and beat him in the face with a suitcase, knocking out his teeth.
On May 24, 1961, the freedom riders continued their rides into Jackson, Mississippi, where they were arrested for "breaching the peace" by using "white only" facilities. New Freedom Rides were organized by many different organizations and continued to flow into the South. As riders arrived in Jackson, they were arrested. By the end of summer, more than 300 had been jailed in Mississippi.
... When the weary Riders arrive in Jackson and attempt to use "white only" restrooms and lunch counters they are immediately arrested for Breach of Peace and Refusal to Obey an Officer. Says Mississippi Governor Ross Barnett in defense of segregation: "The Negro is different because God made him different to punish him." From lockup, the Riders announce "Jail No Bail"--they will not pay fines for unconstitutional arrests and illegal convictions--and by staying in jail they keep the issue alive. Each prisoner will remain in jail for 39 days, the maximum time they can serve without loosing [sic] their right to appeal the unconstitutionality of their arrests, trials, and convictions. After 39 days, they file an appeal and post bond...
The jailed freedom riders were treated harshly, crammed into tiny, filthy cells and sporadically beaten. In Jackson, some male prisoners were forced to do hard labor in 100 °F (38 °C) heat. Others were transferred to the Mississippi State Penitentiary at Parchman, where they were subjected to harsh conditions. Sometimes the men were suspended by "wrist breakers" from the walls. Typically, the windows of their cells were shut tight on hot days, making it hard for them to breathe.
Public sympathy and support for the freedom riders led John F. Kennedy's administration to order the Interstate Commerce Commission (ICC) to issue a new desegregation order. When the new ICC rule took effect on November 1, 1961, passengers were permitted to sit wherever they chose on the bus; "white" and "colored" signs came down in the terminals; separate drinking fountains, toilets, and waiting rooms were consolidated; and lunch counters began serving people regardless of skin color.
The student movement involved such celebrated figures as John Lewis, a single-minded activist; James Lawson, the revered "guru" of nonviolent theory and tactics; Diane Nash, an articulate and intrepid public champion of justice; Bob Moses, pioneer of voting registration in Mississippi; and James Bevel, a fiery preacher and charismatic organizer, strategist, and facilitator. Other prominent student activists included Dion Diamond, Charles McDew, Bernard Lafayette, Charles Jones, Lonnie King, Julian Bond, Hosea Williams, and Stokely Carmichael.
After the Freedom Rides, local black leaders in Mississippi such as Amzie Moore, Aaron Henry, Medgar Evers, and others asked SNCC to help register black voters and to build community organizations that could win a share of political power in the state. Since Mississippi had ratified a new constitution in 1890 with provisions such as poll taxes, residency requirements, and literacy tests, registration was more complicated and blacks had been stripped from the voter rolls. Violence at the time of elections had also suppressed black voting.
By the mid-20th century, preventing blacks from voting had become an essential part of the culture of white supremacy. In June and July 1959, members of the black community in Fayette County, TN formed the Fayette County Civic and Welfare League to spur voting. At the time, there were 16,927 blacks in the county, yet only 17 of them had voted in the previous seven years. Within a year, some 1,400 blacks had registered, and the white community responded with harsh economic reprisals. Using registration rolls, the White Citizens Council circulated a blacklist of all registered black voters, allowing banks, local stores, and gas stations to conspire to deny registered black voters essential services. In addition, sharecropping blacks who registered to vote were evicted from their homes. In all, 257 families were evicted, many of whom were forced to live in a makeshift Tent City for well over a year. Finally, in December 1960, the Justice Department invoked its powers authorized by the Civil Rights Act of 1957 to file a suit against seventy parties accused of violating the civil rights of black Fayette County citizens. The following year, SNCC launched its first voter registration project in McComb and the surrounding counties in the southwest corner of Mississippi. Their efforts were met with violent repression from state and local lawmen, the White Citizens' Council, and the Ku Klux Klan. Activists were beaten, there were hundreds of arrests of local citizens, and the voting activist Herbert Lee was murdered.
White opposition to black voter registration was so intense in Mississippi that Freedom Movement activists concluded that all of the state's civil rights organizations had to unite in a coordinated effort to have any chance of success. In February 1962, representatives of SNCC, CORE, and the NAACP formed the Council of Federated Organizations (COFO). At a subsequent meeting in August, SCLC became part of COFO.
In the Spring of 1962, with funds from the Voter Education Project, SNCC/COFO began voter registration organizing in the Mississippi Delta area around Greenwood, and the areas surrounding Hattiesburg, Laurel, and Holly Springs. As in McComb, their efforts were met with fierce opposition--arrests, beatings, shootings, arson, and murder. Registrars used the literacy test to keep blacks off the voter rolls by creating standards that even highly educated people could not meet. In addition, employers fired blacks who tried to register, and landlords evicted them from their rental homes. Despite these actions, over the following years, the black voter registration campaign spread across the state.
Similar voter registration campaigns--with similar responses--were begun by SNCC, CORE, and SCLC in Louisiana, Alabama, southwest Georgia, and South Carolina. By 1963, voter registration campaigns in the South were as integral to the Freedom Movement as desegregation efforts. After the passage of the Civil Rights Act of 1964, protecting and facilitating voter registration despite state barriers became the main effort of the movement. It resulted in the passage of the Voting Rights Act of 1965, which had provisions to enforce the constitutional right to vote for all citizens.
Beginning in 1956, Clyde Kennard, a black Korean War veteran, wanted to enroll at Mississippi Southern College (now the University of Southern Mississippi) at Hattiesburg under the G.I. Bill. William David McCain, the college president, used the Mississippi State Sovereignty Commission to prevent his enrollment by appealing to local black leaders and the segregationist state political establishment.
The state-funded organization tried to counter the civil rights movement by positively portraying segregationist policies. More significantly, it collected data on activists, harassed them legally, and used economic boycotts against them by threatening their jobs (or causing them to lose their jobs) to try to suppress their work.
Kennard was twice arrested on trumped-up charges, and eventually convicted and sentenced to seven years in the state prison. After three years at hard labor, Kennard was paroled by Mississippi Governor Ross Barnett, after journalists had investigated his case and publicized the state's mistreatment of him, including its failure to treat his colon cancer.
McCain's role in Kennard's arrests and convictions is unknown. While trying to prevent Kennard's enrollment, McCain made a speech in Chicago, with his travel sponsored by the Mississippi State Sovereignty Commission. He described blacks seeking to desegregate Southern schools as "imports" from the North. (Kennard was a native and resident of Hattiesburg.) McCain said:
We insist that educationally and socially, we maintain a segregated society...In all fairness, I admit that we are not encouraging Negro voting...The Negroes prefer that control of the government remain in the white man's hands.
Note: Mississippi had passed a new constitution in 1890 that effectively disfranchised most blacks by changing electoral and voter registration requirements; although it deprived them of constitutional rights authorized under post-Civil War amendments, it survived U.S. Supreme Court challenges at the time. It was not until after the passage of the 1965 Voting Rights Act that most blacks in Mississippi and other southern states gained federal protection to enforce the constitutional right of citizens to vote.
In September 1962, James Meredith won a lawsuit to secure admission to the previously segregated University of Mississippi. He attempted to enter campus on September 20, on September 25, and again on September 26. He was blocked by Mississippi Governor Ross Barnett, who said, "[N]o school will be integrated in Mississippi while I am your Governor." The Fifth U.S. Circuit Court of Appeals held Barnett and Lieutenant Governor Paul B. Johnson Jr. in contempt, ordering them arrested and fined more than $10,000 for each day they refused to allow Meredith to enroll.
Attorney General Robert F. Kennedy sent in a force of U.S. Marshals and deputized U.S. Border Patrol agents and Federal Bureau of Prisons officers. On September 30, 1962, Meredith entered the campus under their escort. Students and other whites began rioting that evening, throwing rocks and firing on the federal agents guarding Meredith at Lyceum Hall. Rioters ended up killing two civilians, including a French journalist; 28 federal agents suffered gunshot wounds, and 160 others were injured. President John F. Kennedy sent U.S. Army and federalized Mississippi National Guard forces to the campus to quell the riot. Meredith began classes the day after the troops arrived.
Kennard and other activists continued to work on public university desegregation. In 1965 Raylawni Branch and Gwendolyn Elaine Armstrong became the first African-American students to attend the University of Southern Mississippi. By that time, McCain helped ensure they had a peaceful entry. In 2006, Judge Robert Helfrich ruled that Kennard was factually innocent of all charges for which he had been convicted in the 1950s.
The SCLC, which had been criticized by some student activists for its failure to participate more fully in the freedom rides, committed much of its prestige and resources to a desegregation campaign in Albany, Georgia, in November 1961. King, who had been criticized personally by some SNCC activists for his distance from the dangers that local organizers faced--and given the derisive nickname "De Lawd" as a result--intervened personally to assist the campaign led by both SNCC organizers and local leaders.
The campaign was a failure because of the canny tactics of Laurie Pritchett, the local police chief, and divisions within the black community. The goals may not have been specific enough. Pritchett contained the marchers without violent attacks on demonstrators that inflamed national opinion. He also arranged for arrested demonstrators to be taken to jails in surrounding communities, allowing plenty of room to remain in his jail. Pritchett also foresaw King's presence as a danger and forced his release to avoid King's rallying the black community. King left in 1962 without having achieved any dramatic victories. The local movement, however, continued the struggle, and it obtained significant gains in the next few years.
The Albany movement proved to be an important education for the SCLC, however, when it undertook the Birmingham campaign in 1963. Executive Director Wyatt Tee Walker carefully planned the early strategy and tactics for the campaign. It focused on one goal--the desegregation of Birmingham's downtown merchants, rather than total desegregation, as in Albany.
The movement's efforts were helped by the brutal response of local authorities, in particular Eugene "Bull" Connor, the Commissioner of Public Safety. He had long held much political power but had lost a recent election for mayor to a less rabidly segregationist candidate. Refusing to accept the new mayor's authority, Connor intended to stay in office.
The campaign used a variety of nonviolent methods of confrontation, including sit-ins, kneel-ins at local churches, and a march to the county building to mark the beginning of a drive to register voters. The city, however, obtained an injunction barring all such protests. Convinced that the order was unconstitutional, the campaign defied it and prepared for mass arrests of its supporters. King elected to be among those arrested on April 12, 1963.
While in jail, King wrote his famous "Letter from Birmingham Jail" on the margins of a newspaper, since he had not been allowed any writing paper while held in solitary confinement. Supporters appealed to the Kennedy administration, which intervened to obtain King's release. Walter Reuther, president of the United Auto Workers, arranged for $160,000 to bail out King and his fellow protestors. King was allowed to call his wife, who was recuperating at home after the birth of their fourth child and was released early on April 19.
The campaign, however, faltered as it ran out of demonstrators willing to risk arrest. James Bevel, SCLC's Director of Direct Action and Director of Nonviolent Education, then came up with a bold and controversial alternative: to train high school students to take part in the demonstrations. As a result, in what would be called the Children's Crusade, more than one thousand students skipped school on May 2 to meet at the 16th Street Baptist Church to join the demonstrations. More than six hundred marched out of the church fifty at a time in an attempt to walk to City Hall to speak to Birmingham's mayor about segregation. They were arrested and put into jail. In this first encounter, the police acted with restraint. On the next day, however, another one thousand students gathered at the church. When Bevel started them marching fifty at a time, Bull Connor finally unleashed police dogs on them and then turned the city's fire hoses on the children. National television networks broadcast the scenes of the dogs attacking demonstrators and the water from the fire hoses knocking down the schoolchildren.
Widespread public outrage led the Kennedy administration to intervene more forcefully in negotiations between the white business community and the SCLC. On May 10, the parties announced an agreement to desegregate the lunch counters and other public accommodations downtown, to create a committee to eliminate discriminatory hiring practices, to arrange for the release of jailed protesters, and to establish regular means of communication between black and white leaders.
Not everyone in the black community approved of the agreement--Fred Shuttlesworth was particularly critical, since he was skeptical about the good faith of Birmingham's power structure from his experience in dealing with them. Parts of the white community reacted violently. They bombed the Gaston Motel, which housed the SCLC's unofficial headquarters, and the home of King's brother, the Reverend A. D. King. In response, thousands of blacks rioted, burning numerous buildings, and one rioter stabbed and wounded a police officer.
Kennedy prepared to federalize the Alabama National Guard if the need arose. Four months later, on September 15, a conspiracy of Ku Klux Klan members bombed the Sixteenth Street Baptist Church in Birmingham, killing four young girls.
Birmingham was only one of over a hundred cities rocked by chaotic protests that spring and summer, some of them in the North but mainly in the South. During the March on Washington, Martin Luther King Jr. would refer to such protests as "the whirlwinds of revolt." In Chicago, blacks rioted through the South Side in late May after a white police officer shot a fourteen-year-old black boy who was fleeing the scene of a robbery. Violent clashes between black activists and white workers took place in both Philadelphia and Harlem in successful efforts to integrate state construction projects. On June 6, over a thousand whites attacked a sit-in in Lexington, North Carolina; blacks fought back and one white man was killed. Edwin C. Berry of the National Urban League warned of a complete breakdown in race relations: "My message from the beer gardens and the barbershops all indicate the fact that the Negro is ready for war."
In Cambridge, Maryland, a working-class city on the Eastern Shore, Gloria Richardson of SNCC led a movement that pressed for desegregation but also demanded low-rent public housing, job-training, public and private jobs, and an end to police brutality. On June 11, struggles between blacks and whites escalated into violent rioting, leading Maryland Governor J. Millard Tawes to declare martial law. When negotiations between Richardson and Maryland officials faltered, Attorney General Robert F. Kennedy directly intervened to negotiate a desegregation agreement. Richardson felt that the increasing participation of poor and working-class blacks was expanding both the power and parameters of the movement, asserting that "the people as a whole really do have more intelligence than a few of their leaders."
In their deliberations during this wave of protests, the Kennedy administration privately felt that militant demonstrations were "bad for the country" and that "Negroes are going to push this thing too far." On May 24, Robert Kennedy had a meeting with prominent black intellectuals to discuss the racial situation. The blacks criticized Kennedy harshly for vacillating on civil rights and said that the African-American community's thoughts were increasingly turning to violence. The meeting ended with ill will on all sides. Nonetheless, the Kennedys ultimately decided that new legislation for equal public accommodations was essential to drive activists "into the courts and out of the streets."
On June 11, 1963, George Wallace, Governor of Alabama, tried to block the integration of the University of Alabama. President John F. Kennedy sent a military force to make Governor Wallace step aside, allowing the enrollment of Vivian Malone Jones and James Hood. That evening, President Kennedy addressed the nation on TV and radio with his historic civil rights speech, where he lamented "a rising tide of discontent that threatens the public safety." He called on Congress to pass new civil rights legislation, and urged the country to embrace civil rights as "a moral issue...in our daily lives." In the early hours of June 12, Medgar Evers, field secretary of the Mississippi NAACP, was assassinated by a member of the Klan. The next week, as promised, on June 19, 1963, President Kennedy submitted his Civil Rights bill to Congress.
A. Philip Randolph had planned a march on Washington, D.C., in 1941 to support demands for elimination of employment discrimination in defense industries; he called off the march when the Roosevelt administration met the demand by issuing Executive Order 8802 barring racial discrimination and creating an agency to oversee compliance with the order.
Randolph and Bayard Rustin were the chief planners of the second march, which they proposed in 1962. In 1963, the Kennedy administration initially opposed the march out of concern it would negatively impact the drive for passage of civil rights legislation. However, Randolph and King were firm that the march would proceed. With the march going forward, the Kennedys decided it was important to work to ensure its success. Concerned about the turnout, President Kennedy enlisted the aid of white church leaders and Walter Reuther, president of the UAW, to help mobilize white supporters for the march.
The march was held on August 28, 1963. Unlike the planned 1941 march, for which Randolph included only black-led organizations in the planning, the 1963 march was a collaborative effort of all of the major civil rights organizations, the more progressive wing of the labor movement, and other liberal organizations. The march had six official goals: meaningful civil rights laws, a massive federal works program, full and fair employment, decent housing, the right to vote, and adequate integrated education.
Of these, the march's major focus was on passage of the civil rights law that the Kennedy administration had proposed after the upheavals in Birmingham.
National media attention also greatly contributed to the march's national exposure and probable impact. In the essay "The March on Washington and Television News," historian William Thomas notes: "Over five hundred cameramen, technicians, and correspondents from the major networks were set to cover the event. More cameras would be set up than had filmed the last presidential inauguration. One camera was positioned high in the Washington Monument, to give dramatic vistas of the marchers". By carrying the organizers' speeches and offering their own commentary, television stations framed the way their local audiences saw and understood the event.
The march was a success, although not without controversy. An estimated 200,000 to 300,000 demonstrators gathered in front of the Lincoln Memorial, where King delivered his famous "I Have a Dream" speech. While many speakers applauded the Kennedy administration for the efforts it had made toward obtaining new, more effective civil rights legislation protecting the right to vote and outlawing segregation, John Lewis of SNCC took the administration to task for not doing more to protect southern blacks and civil rights workers under attack in the Deep South.
After the march, King and other civil rights leaders met with President Kennedy at the White House. While the Kennedy administration appeared sincerely committed to passing the bill, it was not clear that it had enough votes in Congress to do so. However, when President Kennedy was assassinated on November 22, 1963, the new President Lyndon Johnson decided to use his influence in Congress to bring about much of Kennedy's legislative agenda.
In March 1964, Malcolm X (el-Hajj Malik el-Shabazz), national representative of the Nation of Islam, formally broke with that organization, and made a public offer to collaborate with any civil rights organization that accepted the right to self-defense and the philosophy of Black nationalism (which Malcolm said no longer required Black separatism). Gloria Richardson, head of the Cambridge, Maryland, chapter of SNCC, and leader of the Cambridge rebellion, an honored guest at The March on Washington, immediately embraced Malcolm's offer. Mrs. Richardson, "the nation's most prominent woman [civil rights] leader," told The Baltimore Afro-American that "Malcolm is being very practical...The federal government has moved into conflict situations only when matters approach the level of insurrection. Self-defense may force Washington to intervene sooner." Earlier, in May 1963, writer and activist James Baldwin had stated publicly that "the Black Muslim movement is the only one in the country we can call grassroots, I hate to say it...Malcolm articulates for Negroes, their suffering...he corroborates their reality..." On the local level, Malcolm and the NOI had been allied with the Harlem chapter of the Congress of Racial Equality (CORE) since at least 1962.
On March 26, 1964, as the Civil Rights Act was facing stiff opposition in Congress, Malcolm had a public meeting with Martin Luther King Jr. at the Capitol. Malcolm had tried to begin a dialog with King as early as 1957, but King had rebuffed him. Malcolm had responded by calling King an "Uncle Tom", saying he had turned his back on black militancy in order to appease the white power structure. But the two men were on good terms at their face-to-face meeting. There is evidence that King was preparing to support Malcolm's plan to formally bring the U.S. government before the United Nations on charges of human rights violations against African Americans. Malcolm now encouraged Black nationalists to get involved in voter registration drives and other forms of community organizing to redefine and expand the movement.
Civil rights activists became increasingly combative in the 1963 to 1964 period, seeking to defy such events as the thwarting of the Albany campaign, police repression and Ku Klux Klan terrorism in Birmingham, and the assassination of Medgar Evers. The latter's brother Charles Evers, who took over as Mississippi NAACP Field Director, told a public NAACP conference on February 15, 1964, that "non-violence won't work in Mississippi...we made up our minds...that if a white man shoots at a Negro in Mississippi, we will shoot back." The repression of sit-ins in Jacksonville, Florida, provoked a riot in which black youth threw Molotov cocktails at police on March 24, 1964. Malcolm X gave numerous speeches in this period warning that such militant activity would escalate further if African Americans' rights were not fully recognized. In his landmark April 1964 speech "The Ballot or the Bullet", Malcolm presented an ultimatum to white America: "There's new strategy coming in. It'll be Molotov cocktails this month, hand grenades next month, and something else next month. It'll be ballots, or it'll be bullets."
As noted in the PBS documentary Eyes on the Prize, "Malcolm X had a far-reaching effect on the civil rights movement. In the South, there had been a long tradition of self-reliance. Malcolm X's ideas now touched that tradition". Self-reliance was becoming paramount in light of the 1964 Democratic National Convention's decision to refuse seating to the Mississippi Freedom Democratic Party (MFDP) and instead to seat the regular state delegation, which had been elected under Jim Crow law and in violation of the party's own rules. SNCC moved in an increasingly militant direction and worked with Malcolm X on two Harlem MFDP fundraisers in December 1964.
When Fannie Lou Hamer spoke to Harlemites about the Jim Crow violence that she had suffered in Mississippi, she linked it directly to the Northern police brutality against blacks that Malcolm protested against; when Malcolm asserted that African Americans should emulate the Mau Mau army of Kenya in efforts to gain their independence, many in SNCC applauded.
During the Selma campaign for voting rights in 1965, Malcolm made it known that he'd heard reports of increased threats of lynching around Selma. In late January he sent an open telegram to George Lincoln Rockwell, the head of the American Nazi Party, stating:
"if your present racist agitation against our people there in Alabama causes physical harm to Reverend King or any other black Americans...you and your KKK friends will be met with maximum physical retaliation from those of us who are not handcuffed by the disarming philosophy of nonviolence."
The following month, the Selma chapter of SNCC invited Malcolm to speak to a mass meeting there. On the day of Malcolm's appearance, President Johnson made his first public statement in support of the Selma campaign. Paul Ryan Haygood, a co-director of the NAACP Legal Defense Fund, credits Malcolm with a role in gaining support by the federal government. Haygood noted that "shortly after Malcolm's visit to Selma, a federal judge, responding to a suit brought by the Department of Justice, required Dallas County, Alabama, registrars to process at least 100 Black applications each day their offices were open."
St. Augustine was famous as the "Nation's Oldest City", founded by the Spanish in 1565. It became the stage for a great drama leading up to the passage of the landmark Civil Rights Act of 1964. A local movement, led by Robert B. Hayling, a black dentist and Air Force veteran affiliated with the NAACP, had been picketing segregated local institutions since 1963. In the fall of 1963, Hayling and three companions were brutally beaten at a Ku Klux Klan rally.
Nightriders shot into black homes, and teenagers Audrey Nell Edwards, JoeAnn Anderson, Samuel White, and Willie Carl Singleton (who came to be known as "The St. Augustine Four") sat in at a local Woolworth's lunch counter, seeking to get served. They were arrested and convicted of trespassing, and sentenced to six months in jail and reform school. It took a special act of the governor and cabinet of Florida to release them after national protests by the Pittsburgh Courier, Jackie Robinson, and others.
In response to the repression, the St. Augustine movement practiced armed self-defense in addition to nonviolent direct action. In June 1963, Hayling publicly stated that "I and the others have armed. We will shoot first and answer questions later. We are not going to die like Medgar Evers." The comment made national headlines. When Klan nightriders terrorized black neighborhoods in St. Augustine, Hayling's NAACP members often drove them off with gunfire. In October 1963, a Klansman was killed.
In 1964, Hayling and other activists urged the Southern Christian Leadership Conference to come to St. Augustine. Four prominent Massachusetts women - Mary Parkman Peabody, Esther Burgess, Hester Campbell (all of whose husbands were Episcopal bishops), and Florence Rowe (whose husband was vice president of the John Hancock Insurance Company) - also came to lend their support. The arrest of Peabody, the 72-year-old mother of the governor of Massachusetts, for attempting to eat at the segregated Ponce de Leon Motor Lodge in an integrated group, made front-page news across the country and brought the movement in St. Augustine to the attention of the world.
Widely publicized activities continued in the ensuing months. When King was arrested, he sent a "Letter from the St. Augustine Jail" to a northern supporter, Rabbi Israel Dresner. A week later, the largest mass arrest of rabbis in American history took place while they were conducting a pray-in at the segregated Monson Motel. A well-known photograph taken in St. Augustine shows the manager of the Monson Motel pouring muriatic acid into the swimming pool while blacks and whites were swimming in it. The horrifying photograph ran on the front page of a Washington newspaper the day the Senate was to vote on passing the Civil Rights Act of 1964.
From November 1963 through April 1964, the Chester school protests were a series of civil rights protests led by George Raymond of the National Association for the Advancement of Colored People (NAACP) and Stanley Branche of the Committee for Freedom Now (CFFN) that made Chester, Pennsylvania, one of the key battlegrounds of the civil rights movement. James Farmer, the national director of the Congress of Racial Equality, called Chester "the Birmingham of the North".
In 1962, Branche and the CFFN focused on improving conditions at the predominantly black Franklin Elementary school in Chester. Although the school was built to house 500 students, it had become overcrowded with 1,200 students. The school's average class size was 39, twice the number of nearby all-white schools. The school was built in 1910 and had never been updated. Only two bathrooms were available for the entire school. In November 1963, CFFN protesters blocked the entrance to Franklin Elementary school and the Chester Municipal Building resulting in the arrest of 240 protesters. Following public attention to the protests stoked by media coverage of the mass arrests, the mayor and school board negotiated with the CFFN and NAACP. The Chester Board of Education agreed to reduce class sizes at Franklin school, remove unsanitary toilet facilities, relocate classes held in the boiler room and coal bin and repair school grounds.
Emboldened by the success of the Franklin Elementary school demonstrations, the CFFN recruited new members, sponsored voter registration drives and planned a citywide boycott of Chester schools. Branche built close ties with students at nearby Swarthmore College, Pennsylvania Military College and Cheyney State College in order to ensure large turnouts at demonstrations and protests. Branche invited Dick Gregory and Malcolm X to Chester to participate in the "Freedom Now Conference" and other national civil rights leaders such as Gloria Richardson came to Chester in support of the demonstrations.
In 1964, a series of almost nightly protests brought chaos to Chester as protestors argued that the Chester School Board had de facto segregation of schools. The mayor of Chester, James Gorbey, issued "The Police Position to Preserve the Public Peace", a ten-point statement promising an immediate return to law and order. The city deputized firemen and trash collectors to help handle demonstrators. The State of Pennsylvania deployed 50 state troopers to assist the 77-member Chester police force. The demonstrations were marked by violence and charges of police brutality. Over six hundred people were arrested over a two-month period of civil rights rallies, marches, pickets, boycotts and sit-ins. Pennsylvania Governor William Scranton became involved in the negotiations and convinced Branche to obey a court-ordered moratorium on demonstrations. Scranton created the Pennsylvania Human Relations Commission to conduct hearings on the de facto segregation of public schools. All protests were discontinued while the commission held hearings during the summer of 1964.
In November 1964, the Pennsylvania Human Relations Commission concluded that the Chester School Board had violated the law and ordered the Chester School District to desegregate the city's six predominantly African-American schools. The city appealed the ruling, which delayed implementation.
In the summer of 1964, the Council of Federated Organizations (COFO) brought nearly 1,000 activists to Mississippi--most of them white college students from the North and West--to join with local black activists to register voters, teach in "Freedom Schools," and organize the Mississippi Freedom Democratic Party (MFDP).
Many of Mississippi's white residents deeply resented the outsiders and attempts to change their society. State and local governments, police, the White Citizens' Council and the Ku Klux Klan used arrests, beatings, arson, murder, spying, firing, evictions, and other forms of intimidation and harassment to oppose the project and prevent blacks from registering to vote or achieving social equality.
On June 21, 1964, three civil rights workers disappeared: James Chaney, a young black Mississippian and plasterer's apprentice, and two Jewish activists, Andrew Goodman, a Queens College anthropology student, and Michael Schwerner, a CORE organizer from Manhattan's Lower East Side. They were found weeks later, murdered by conspirators who turned out to be local members of the Klan, some of them members of the Neshoba County sheriff's department. The murders outraged the public, leading the U.S. Justice Department, along with the FBI (the latter of which had previously avoided dealing with the issue of segregation and persecution of blacks), to take action. The outrage over these murders helped lead to the passage of the Civil Rights Act of 1964 and the Voting Rights Act of 1965.
From June to August, Freedom Summer activists worked in 38 local projects scattered across the state, with the largest number concentrated in the Mississippi Delta region. At least 30 Freedom Schools, with close to 3,500 students, were established, and 28 community centers were set up.
Over the course of the Summer Project, some 17,000 Mississippi blacks attempted to become registered voters in defiance of the red tape and forces of white supremacy arrayed against them--only 1,600 (less than 10%) succeeded. But more than 80,000 joined the Mississippi Freedom Democratic Party (MFDP), founded as an alternative political organization, showing their desire to vote and participate in politics.
Though Freedom Summer failed to register many voters, it had a significant effect on the course of the civil rights movement. It helped break down decades of isolation and repression that had formed the foundation of the Jim Crow system. Before Freedom Summer, the national news media had paid little attention to the persecution of black voters in the Deep South and the dangers endured by black civil rights workers. The progression of events throughout the South increased media attention to Mississippi.
The deaths of affluent northern white students and threats to non-Southerners attracted the full attention of the media spotlight to the state. Many black activists became embittered, believing the media valued the lives of whites and blacks differently. Perhaps the most significant effect of Freedom Summer was on the volunteers, almost all of whom--black and white--still consider it to have been one of the defining periods of their lives.
Although President Kennedy had proposed civil rights legislation and it had support from Northern Congressmen and Senators of both parties, Southern Senators blocked the bill by threatening filibusters. After considerable parliamentary maneuvering and 54 days of filibuster on the floor of the United States Senate, President Johnson got a bill through the Congress.
On July 2, 1964, Johnson signed the Civil Rights Act of 1964, which banned discrimination based on "race, color, religion, sex or national origin" in employment practices and public accommodations. The bill authorized the Attorney General to file lawsuits to enforce the new law. The law also nullified state and local laws that required such discrimination.
When police shot an unarmed black teenager in Harlem in July 1964, tensions escalated out of control. Residents were frustrated with racial inequalities. Rioting broke out, and Bedford-Stuyvesant, a major black neighborhood in Brooklyn, erupted next. That summer, rioting also broke out in Philadelphia, for similar reasons. The riots were on a much smaller scale than what would occur in 1965 and later.
Washington responded with a pilot program called Project Uplift. Thousands of young people in Harlem were given jobs during the summer of 1965. The project was inspired by a report generated by HARYOU called Youth in the Ghetto. HARYOU was given a major role in organizing the project, together with the National Urban League and nearly 100 smaller community organizations. Permanent jobs at living wages were still out of reach of many young black men.
Blacks in Mississippi had been disfranchised by statutory and constitutional changes since the late 19th century. In 1963 COFO held a Freedom Ballot in Mississippi to demonstrate the desire of black Mississippians to vote. More than 80,000 people registered and voted in the mock election, which pitted an integrated slate of candidates from the "Freedom Party" against the official state Democratic Party candidates.
In 1964, organizers launched the Mississippi Freedom Democratic Party (MFDP) to challenge the all-white official party. When Mississippi voting registrars refused to recognize their candidates, they held their own primary. They selected Fannie Lou Hamer, Annie Devine, and Victoria Gray to run for Congress, and a slate of delegates to represent Mississippi at the 1964 Democratic National Convention.
The presence of the Mississippi Freedom Democratic Party in Atlantic City, New Jersey, was inconvenient, however, for the convention organizers. They had planned a triumphant celebration of the Johnson administration's achievements in civil rights, rather than a fight over racism within the Democratic Party. All-white delegations from other Southern states threatened to walk out if the official slate from Mississippi was not seated. Johnson was worried about the inroads that Republican Barry Goldwater's campaign was making in what previously had been the white Democratic stronghold of the "Solid South", as well as support that George Wallace had received in the North during the Democratic primaries.
Johnson could not, however, prevent the MFDP from taking its case to the Credentials Committee. There Fannie Lou Hamer testified eloquently about the beatings that she and others endured and the threats they faced for trying to register to vote. Turning to the television cameras, Hamer asked, "Is this America?"
Johnson offered the MFDP a "compromise" under which it would receive two non-voting, at-large seats, while the white delegation sent by the official Democratic Party would retain its seats. The MFDP angrily rejected the "compromise."
The MFDP kept up its agitation at the convention after it was denied official recognition. When all but three of the "regular" Mississippi delegates left because they refused to pledge allegiance to the party, the MFDP delegates borrowed passes from sympathetic delegates and took the seats vacated by the official Mississippi delegates. National party organizers removed them. When they returned the next day, they found convention organizers had removed the empty seats that had been there the day before. They stayed and sang "freedom songs".
The 1964 Democratic Party convention disillusioned many within the MFDP and the civil rights movement, but it did not destroy the MFDP. The MFDP became more radical after Atlantic City. It invited Malcolm X to speak at one of its conventions and opposed the war in Vietnam.
SNCC had undertaken an ambitious voter registration program in Selma, Alabama, in 1963, but by 1965 little headway had been made in the face of opposition from Selma's sheriff, Jim Clark. After local residents asked the SCLC for assistance, King came to Selma to lead several marches, at which he was arrested along with 250 other demonstrators. The marchers continued to meet violent resistance from the police. Jimmie Lee Jackson, a resident of nearby Marion, was killed by police at a later march on February 17, 1965. Jackson's death prompted James Bevel, director of the Selma Movement, to initiate and organize a plan to march from Selma to Montgomery, the state capital.
On March 7, 1965, acting on Bevel's plan, Hosea Williams of the SCLC and John Lewis of SNCC led a march of 600 people to walk the 54 miles (87 km) from Selma to the state capital in Montgomery. Six blocks into the march, at the Edmund Pettus Bridge where the marchers left the city and moved into the county, state troopers and local county law enforcement, some mounted on horseback, attacked the peaceful demonstrators with billy clubs, tear gas, rubber tubes wrapped in barbed wire, and bullwhips. They drove the marchers back into Selma. Lewis was knocked unconscious and dragged to safety. At least 16 other marchers were hospitalized. Among those gassed and beaten was Amelia Boynton Robinson, who was at the center of civil rights activity at the time.
The national broadcast of the news footage of lawmen attacking unresisting marchers seeking to exercise their constitutional right to vote provoked a national response and hundreds of people from all over the country came for a second march. These marchers were turned around by King at the last minute so as not to violate a federal injunction. This displeased many demonstrators, especially those who resented King's nonviolence (such as James Forman and Robert F. Williams).
That night, local whites attacked James Reeb, a voting rights supporter. He died of his injuries in a Birmingham hospital on March 11. Due to the national outcry at a white minister being murdered so brazenly (as well as the subsequent civil disobedience led by Forman and other SNCC leaders all over the country, especially in Montgomery and at the White House), the marchers were able to lift the injunction and obtain protection from federal troops, permitting them to make the march across Alabama without incident two weeks later; during the march, Forman, Williams, and other more militant protesters carried bricks and sticks of their own.
Eight days after the first march, but before the final march, President Johnson delivered a televised address to support the voting rights bill he had sent to Congress. In it he stated:
Their cause must be our cause too. Because it is not just Negroes, but really it is all of us, who must overcome the crippling legacy of bigotry and injustice. And we shall overcome.
On August 6, Johnson signed the Voting Rights Act of 1965, which suspended literacy tests and other subjective voter registration tests. It authorized Federal supervision of voter registration in states and individual voting districts where such tests were being used and where African Americans were historically under-represented in voting rolls compared to the eligible population. African Americans who had been barred from registering to vote finally had an alternative to taking suits to local or state courts, which had seldom prosecuted their cases to success. If discrimination in voter registration occurred, the 1965 act authorized the Attorney General of the United States to send Federal examiners to replace local registrars.
Within months of the bill's passage, 250,000 new black voters had been registered, one-third of them by federal examiners. Within four years, voter registration in the South had more than doubled. In 1965, Mississippi had the highest black voter turnout at 74% and led the nation in the number of black public officials elected. In 1969, Tennessee had a 92.1% turnout among black voters; Arkansas, 77.9%; and Texas, 73.1%.
Several whites who had opposed the Voting Rights Act paid a quick price. In 1966, Sheriff Jim Clark of Selma, Alabama, infamous for using cattle prods against civil rights marchers, was up for reelection. Although he removed the notorious "Never" pin from his uniform, he was defeated as black voters turned out to vote him out of office.
Blacks' regaining the power to vote changed the political landscape of the South. When Congress passed the Voting Rights Act, only about 100 African Americans held elective office, all in northern states. By 1989, there were more than 7,200 African Americans in office, including more than 4,800 in the South. Nearly every majority-black county in Alabama had a black sheriff. Southern blacks held top positions in city, county, and state governments.
Atlanta elected a black mayor, Andrew Young, as did Jackson, Mississippi, with Harvey Johnson Jr., and New Orleans, with Ernest Morial. Black politicians on the national level included Barbara Jordan, elected as a Representative from Texas in Congress, and President Jimmy Carter appointed Andrew Young as United States Ambassador to the United Nations. Julian Bond was elected to the Georgia State Legislature in 1965, although political reaction to his public opposition to the U.S. involvement in the Vietnam War prevented him from taking his seat until 1967. John Lewis was first elected in 1986 to represent Georgia's 5th congressional district in the United States House of Representatives, where he served from 1987 until his death in 2020.
The new Voting Rights Act of 1965 had no immediate effect on living conditions for poor blacks. A few days after the act became law, a riot broke out in the South Central Los Angeles neighborhood of Watts. Like Harlem, Watts was a majority-black neighborhood with very high unemployment and associated poverty. Its residents confronted a largely white police department that had a history of abuse against blacks.
While arresting a young man for drunk driving, police officers argued with the suspect's mother before onlookers. The incident sparked six days of rioting and massive destruction of property in Los Angeles. Thirty-four people were killed, and property valued at about $40 million was destroyed, making the Watts riots the city's worst unrest until the Rodney King riots of 1992.
With black militancy on the rise, ghetto residents directed acts of anger at the police. Black residents growing tired of police brutality continued to riot. Some young people joined groups such as the Black Panthers, whose popularity was based in part on their reputation for confronting police officers. Riots among blacks occurred in 1966 and 1967 in cities such as Atlanta, San Francisco, Oakland, Baltimore, Seattle, Tacoma, Cleveland, Cincinnati, Columbus, Newark, Chicago, New York City (specifically in Brooklyn, Harlem and the Bronx), and worst of all in Detroit.
The first major blow against housing segregation in the era, the Rumford Fair Housing Act, was passed in California in 1963. It was overturned by white California voters and real estate lobbyists the following year with Proposition 14, a move which helped precipitate the Watts riots. In 1966, the California Supreme Court invalidated Proposition 14 and reinstated the Rumford Fair Housing Act.
Working and organizing for fair housing laws became a major project of the movement over the next two years, with Martin Luther King Jr., James Bevel, and Al Raby leading the Chicago Freedom Movement around the issue in 1966. In the following year, Father James Groppi and the NAACP Youth Council also attracted national attention with a fair housing campaign in Milwaukee. Both movements faced violent resistance from white homeowners and legal opposition from conservative politicians.
The Fair Housing Bill was the most contentious civil rights legislation of the era. Senator Walter Mondale, who advocated for the bill, noted that over successive years, it was the most filibustered legislation in U.S. history. It was opposed by most Northern and Southern senators, as well as the National Association of Real Estate Boards. A proposed "Civil Rights Act of 1966" had collapsed completely because of its fair housing provision. Mondale commented that:
A lot of civil rights [legislation] was about making the South behave and taking the teeth from George Wallace, [but] this came right to the neighborhoods across the country. This was civil rights getting personal.
In 1967 riots broke out in black neighborhoods in more than 100 U.S. cities, including Detroit, Newark, Cincinnati, Cleveland, and Washington, D.C. The largest of these was the 1967 Detroit riot.
In Detroit, a large black middle class had begun to develop among those African Americans who worked at unionized jobs in the automotive industry. These workers complained of persisting racist practices, limiting the jobs they could have and opportunities for promotion. The United Auto Workers channeled these complaints into bureaucratic and ineffective grievance procedures. Violent white mobs enforced the segregation of housing up through the 1960s. Blacks who were not upwardly mobile were living in substandard conditions, subject to the same problems as poor African Americans in Watts and Harlem.
When white Detroit Police Department (DPD) officers shut down an illegal bar and arrested a large group of patrons during the hot summer, furious black residents rioted. Rioters looted and destroyed property while snipers engaged in firefights from rooftops and windows, undermining the DPD's ability to curtail the disorder. In response, the Michigan Army National Guard and U.S. Army paratroopers were deployed to reinforce the DPD and protect Detroit Fire Department (DFD) firefighters from attacks while putting out fires. Residents reported that police officers and National Guardsmen shot at black civilians and suspects indiscriminately. After five days, 43 people had been killed, hundreds injured, and thousands left homeless; $40 to $45 million worth of damage was caused.
State and local governments responded to the riot with a dramatic increase in minority hiring. In the aftermath of the turmoil, the Greater Detroit Board of Commerce also launched a campaign to find jobs for ten thousand "previously unemployable" persons, a preponderant number of whom were black. Governor George Romney immediately responded to the riot of 1967 with a special session of the Michigan legislature where he forwarded sweeping housing proposals that included not only fair housing, but "important relocation, tenants' rights and code enforcement legislation." Romney had supported such proposals in 1965 but abandoned them in the face of organized opposition. The laws passed both houses of the legislature. Historian Sidney Fine wrote that:
The Michigan Fair Housing Act, which took effect on November 15, 1968, was stronger than the federal fair housing law...It is probably more than a coincidence that the state that had experienced the most severe racial disorder of the 1960s also adopted one of the strongest state fair housing acts.
President Johnson created the National Advisory Commission on Civil Disorders in response to a nationwide wave of riots. The commission's final report called for major reforms in employment and public policy in black communities. It warned that the United States was moving toward separate white and black societies.
As 1968 began, the fair housing bill was being filibustered once again, but two developments revived it. The Kerner Commission report on the 1967 ghetto riots was delivered to Congress on March 1, and it strongly recommended "a comprehensive and enforceable federal open housing law" as a remedy to the civil disturbances. The Senate was moved to end their filibuster that week.
James Lawson invited King to Memphis, Tennessee, in March 1968 to support a sanitation workers' strike. These workers launched a campaign for union representation after two workers were accidentally killed on the job; they were seeking fair wages and improved working conditions. King considered their struggle to be a vital part of the Poor People's Campaign he was planning.
A day after delivering his stirring "I've Been to the Mountaintop" sermon, which has become famous for his vision of American society, King was assassinated on April 4, 1968. Riots broke out in black neighborhoods in more than 110 cities across the United States in the days that followed, notably in Chicago, Baltimore, and Washington, D.C.
The day before King's funeral, April 8, a completely silent march with Coretta Scott King, SCLC, and UAW president Walter Reuther attracted approximately 42,000 participants. Armed National Guardsmen lined the streets, sitting on M-48 tanks, to protect the marchers, and helicopters circled overhead. On April 9, Mrs. King led another 150,000 people in a funeral procession through the streets of Atlanta. Her dignity revived courage and hope in many of the Movement's members, confirming her place as the new leader in the struggle for racial equality.
Coretta Scott King said,
Martin Luther King Jr. gave his life for the poor of the world, the garbage workers of Memphis and the peasants of Vietnam. The day that Negro people and others in bondage are truly free, on the day want is abolished, on the day wars are no more, on that day I know my husband will rest in a long-deserved peace.
Ralph Abernathy succeeded King as the head of the SCLC and attempted to carry forth King's plan for a Poor People's March. It was to unite blacks and whites to campaign for fundamental changes in American society and economic structure. The march went forward under Abernathy's plainspoken leadership but did not achieve its goals.
The House of Representatives had been deliberating its Fair Housing Act in early April, before King's assassination and the aforementioned wave of unrest that followed, the largest since the Civil War. Senator Charles Mathias wrote:
[S]ome Senators and Representatives publicly stated they would not be intimidated or rushed into legislating because of the disturbances. Nevertheless, the news coverage of the riots and the underlying disparities in income, jobs, housing, and education, between White and Black Americans helped educate citizens and Congress about the stark reality of an enormous social problem. Members of Congress knew they had to act to redress these imbalances in American life to fulfill the dream that King had so eloquently preached.
The House passed the legislation on April 10, less than a week after King was murdered, and President Johnson signed it the next day. The Civil Rights Act of 1968 prohibited discrimination concerning the sale, rental, and financing of housing based on race, religion, and national origin. It also made it a federal crime to "by force or by the threat of force, injure, intimidate, or interfere with anyone...by reason of their race, color, religion, or national origin."
Conditions at the Mississippi State Penitentiary at Parchman, then known as Parchman Farm, became part of the public discussion of civil rights after activists were imprisoned there. In the spring of 1961, Freedom Riders came to the South to test the desegregation of public facilities. By the end of June 1961, Freedom Riders had been convicted in Jackson, Mississippi. Many were jailed in the Mississippi State Penitentiary at Parchman. Mississippi employed the trusty system, a hierarchical order of inmates that used some inmates to control and enforce punishment of other inmates.
In 1970 the civil rights lawyer Roy Haber began taking statements from inmates. He collected 50 pages of details of murders, rapes, beatings and other abuses suffered by the inmates from 1969 to 1971 at Mississippi State Penitentiary. In a landmark case known as Gates v. Collier (1972), four inmates represented by Haber sued the superintendent of Parchman Farm for violating their rights under the United States Constitution.
Federal Judge William C. Keady found in favor of the inmates, writing that Parchman Farm violated the civil rights of the inmates by inflicting cruel and unusual punishment. He ordered an immediate end to all unconstitutional conditions and practices. Racial segregation of inmates was abolished, as was the trusty system, which allowed certain inmates to have power and control over others.
The prison was renovated in 1972 after the scathing ruling by Keady, who wrote that the prison was an affront to "modern standards of decency." Among other reforms, the accommodations were made fit for human habitation. The system of trusties was abolished. (The prison had armed lifers with rifles and given them authority to oversee and guard other inmates, which led to many cases of abuse and murders.)
In integrated correctional facilities in northern and western states, blacks represented a disproportionate number of prisoners, in excess of their proportion of the general population. They were often treated as second-class citizens by white correctional officers. Blacks also represented a disproportionately high number of death row inmates. Eldridge Cleaver's book Soul on Ice was written from his experiences in the California correctional system; it contributed to black militancy.
Civil rights protest activity had an observable impact on white Americans' views on race and politics over time. White people who live in counties where civil rights protests of historical significance occurred have been found to have lower levels of racial resentment against blacks, to be more likely to identify with the Democratic Party, and to be more likely to support affirmative action.
One study found that non-violent activism of the era tended to produce favorable media coverage and changes in public opinion focusing on the issues organizers were raising, but violent protests tended to generate unfavorable media coverage that generated public desire to restore law and order.
The 1964 Act was passed to end discrimination based on race, color, religion, sex, or national origin in the areas of employment and public accommodation. It did not, however, prohibit sex discrimination against persons employed at educational institutions. A parallel provision, Title VI, had also been enacted in 1964 to prohibit discrimination in federally funded private and public entities; it covered race, color, and national origin but excluded sex. Feminists during the early 1970s lobbied Congress to add sex as a protected class. In 1972, Title IX was enacted to fill this gap and prohibit discrimination in all federally funded education programs. Title IX, or the Education Amendments of 1972, was later renamed the Patsy T. Mink Equal Opportunity in Education Act following Mink's death in 2002.
African-American women in the civil rights movement were pivotal to its success. They volunteered as activists, advocates, educators, clerics, writers, spiritual guides, caretakers and politicians for the civil rights movement; leading and participating in organizations that contributed to the cause of civil rights. Rosa Parks's refusal to sit at the back of a public bus resulted in the year-long Montgomery bus boycott, and the eventual desegregation of interstate travel in the United States. Women were members of the NAACP because they believed it could help them contribute to the cause of civil rights. Some of those involved with the Black Panthers were nationally recognized as leaders, and still others did editorial work on the Black Panther newspaper spurring internal discussions about gender issues. Ella Baker founded the SNCC and was a prominent figure in the civil rights movement. Female students involved with the SNCC helped to organize sit-ins and the Freedom Rides. At the same time many elderly black women in towns across the Southern US cared for the organization's volunteers at their homes, providing the students food, a bed, healing aid and motherly love. Other women involved also formed church groups, bridge clubs, and professional organizations, such as the National Council of Negro Women, to help achieve freedom for themselves and their race. Several who participated in these organizations lost their jobs because of their involvement.
Many women who participated in the movement experienced gender discrimination and sexual harassment. In the SCLC, Ella Baker's input was discouraged in spite of her being the oldest and most experienced person on the staff. There are many other accounts and examples.
On December 17, 1951, the Communist Party-affiliated Civil Rights Congress delivered the petition We Charge Genocide: The Crime of Government Against the Negro People to the United Nations, arguing that the U.S. federal government, by its failure to act against lynching in the United States, was guilty of genocide under Article II of the UN Genocide Convention (see Black genocide). The petition was presented to the United Nations at two separate venues: Paul Robeson, a concert singer and activist, presented it to a UN official in New York City, while William L. Patterson, executive director of the CRC, delivered copies of the drafted petition to a UN delegation in Paris.
Patterson, the editor of the petition, was a leader of the Communist Party USA and head of the International Labor Defense, a group that offered legal representation to communists, trade unionists, and African Americans involved in cases of political or racial persecution. The ILD was known for leading the defense of the Scottsboro Boys in Alabama in 1931; the Communist Party had a considerable amount of influence among African Americans in the 1930s. This influence had largely declined by the late 1950s, although it could still command international attention. As earlier civil rights figures such as Robeson, Du Bois and Patterson became more politically radical (and therefore targets of Cold War anti-Communism by the U.S. Government), they lost favor with mainstream Black America as well as with the NAACP.
In order to secure a place in the political mainstream and gain the broadest base of support, the new generation of civil rights activists believed that it had to openly distance itself from anything and anyone associated with the Communist party. According to Ella Baker, the Southern Christian Leadership Conference added the word "Christian" to its name in order to deter charges that it was associated with Communism. Under J. Edgar Hoover, the FBI had been concerned about communism since the early 20th century, and it kept civil rights activists under close surveillance and labeled some of them "Communist" or "subversive", a practice that continued during the Civil Rights Movement. In the early 1960s, the practice of distancing the civil rights movement from "Reds" was challenged by the Student Nonviolent Coordinating Committee which adopted a policy of accepting assistance and participation from anyone who supported the SNCC's political program and was willing to "put their body on the line, regardless of political affiliation." At times the SNCC's policy of political openness put it at odds with the NAACP.
While most popular representations of the movement are centered on the leadership and philosophy of Martin Luther King Jr., some scholars note that the movement was too diverse to be credited to one person, organization, or strategy. Sociologist Doug McAdam has stated that, "in King's case, it would be inaccurate to say that he was the leader of the modern civil rights movement...but more importantly, there was no singular civil rights movement. The movement was, in fact, a coalition of thousands of local efforts nationwide, spanning several decades, hundreds of discrete groups, and all manner of strategies and tactics--legal, illegal, institutional, non-institutional, violent, non-violent. Without discounting King's importance, it would be sheer fiction to call him the leader of what was fundamentally an amorphous, fluid, dispersed movement." Decentralized grassroots leadership has been a major focus of movement scholarship in recent decades through the work of historians John Dittmer, Charles Payne, Barbara Ransby, and others.
Many in the Jewish community supported the civil rights movement. In fact, statistically, Jews were one of the most actively involved non-black groups in the Movement. Many Jewish students worked in concert with African Americans for CORE, SCLC, and SNCC as full-time organizers and summer volunteers during the Civil Rights era. Jews made up roughly half of the white northern and western volunteers involved in the 1964 Mississippi Freedom Summer project and approximately half of the civil rights attorneys active in the South during the 1960s.
Jewish leaders were arrested while heeding a call from Martin Luther King Jr. in St. Augustine, Florida, in June 1964, where the largest mass arrest of rabbis in American history took place at the Monson Motor Lodge. Abraham Joshua Heschel, a writer, rabbi, and professor of theology at the Jewish Theological Seminary of America in New York, was outspoken on the subject of civil rights. He marched arm-in-arm with King in the 1965 Selma to Montgomery march. Of the three activists killed in the 1964 murders of Chaney, Goodman, and Schwerner, the two white activists, Andrew Goodman and Michael Schwerner, were both Jewish.
Brandeis University, the only nonsectarian Jewish-sponsored college or university in the world, created the Transitional Year Program (TYP) in 1968, in part as a response to the assassination of Martin Luther King Jr. The faculty created it to renew the university's commitment to social justice. Recognizing Brandeis as a university with a commitment to academic excellence, these faculty members created a chance for disadvantaged students to participate in an empowering educational experience.
The American Jewish Committee, American Jewish Congress, and Anti-Defamation League (ADL) actively promoted civil rights. While Jews were very active in the civil rights movement in the South, in the North many had experienced a more strained relationship with African Americans. It has been argued that with Black militancy and the Black Power movements on the rise, "Black Anti-Semitism" increased, leading to strained relations between Blacks and Jews in Northern communities. In New York City, most notably, there was a major socio-economic class difference in the perception of African Americans by Jews. Jews from better-educated, upper-middle-class backgrounds were often very supportive of African-American civil rights activities, while Jews in poorer urban communities that became increasingly minority were often less supportive, in large part due to more negative and violent interactions between the two groups.
According to political scientist Michael Rogin, Jewish-Black hostility was a two-way street extending to earlier decades. In the post-World War II era, Jews were granted white privilege and most moved into the middle-class while Blacks were left behind in the ghetto. Urban Jews engaged in the same sort of conflicts with Blacks--over integration busing, local control of schools, housing, crime, communal identity, and class divides--that other white ethnics did, leading to Jews participating in white flight. The culmination of this was the 1968 New York City teachers' strike, pitting largely Jewish schoolteachers against predominantly Black parents in Brownsville, New York.
Many Jewish individuals in the Southern states who supported civil rights for African Americans tended to keep a low profile on "the race issue" in order to avoid attracting the attention of the anti-Black and antisemitic Ku Klux Klan. However, Klan groups exploited the issue of African-American integration and Jewish involvement in the struggle in order to commit violently antisemitic hate crimes. As an example of this hatred, in one year alone, from November 1957 to October 1958, temples and other Jewish communal gathering places were bombed and desecrated in Atlanta, Nashville, Jacksonville, and Miami, and dynamite was found under synagogues in Birmingham, Charlotte, and Gastonia, North Carolina. Some rabbis received death threats, but there were no injuries following these outbursts of violence.
Despite the common notion that the ideas of Martin Luther King Jr., Malcolm X, and Black Power simply conflicted with one another and were the only ideologies of the civil rights movement, many blacks held other sentiments. Fearing that events during the movement were occurring too quickly, some blacks felt that leaders should take their activism at an incremental pace. Others had reservations about how focused blacks were on the movement and felt that such attention would be better spent on reforming issues within the black community.
While conservatives in general supported integration, some defended an incrementally phased-out segregation as a backstop against assimilation. Based on her interpretation of a 1966 study by Donald Matthews and James Prothro detailing the relative percentages of blacks who favored integration, opposed it, or felt something else, Lauren Winner asserts that:
Black defenders of segregation look, at first blush, very much like black nationalists, especially in their preference for all-black institutions; but black defenders of segregation differ from nationalists in two key ways. First, while both groups criticize NAACP-style integration, nationalists articulate a third alternative to integration and Jim Crow, while segregationists preferred to stick with the status quo. Second, absent from black defenders of segregation's political vocabulary was the demand for self-determination. They called for all-black institutions, but not autonomous all-black institutions; indeed, some defenders of segregation asserted that black people needed white paternalism and oversight in order to thrive.
Oftentimes, African-American community leaders would be staunch defenders of segregation. Church ministers, businessmen, and educators were among those who wished to keep segregation and segregationist ideals in order to retain the privileges they gained from white patronage, such as monetary gains. In addition, they relied on segregation to keep their jobs and the economies of their communities thriving. It was feared that if integration became widespread in the South, black-owned businesses and other establishments would lose a large chunk of their customer base to white-owned businesses, and many blacks would lose opportunities for jobs that were presently exclusive to their interests. On the other hand, there were everyday, average black people who criticized integration as well; they took issue with different parts of the civil rights movement and with the potential for blacks to exercise consumerism and economic liberty without hindrance from whites.
For Martin Luther King Jr., Malcolm X, and other leading activists and groups during the movement, these opposing viewpoints acted as an obstacle to their ideas. These different views made such leaders' work much harder to accomplish, but they were nonetheless important in the overall scope of the movement. For the most part, the black individuals who had reservations about various aspects of the movement and the ideologies of the activists were not able to make a game-changing dent in their efforts, but the existence of these alternate ideas gave some blacks an outlet to express their concerns about the changing social structure.
During the Freedom Summer campaign of 1964, numerous tensions within the civil rights movement came to the forefront. Many blacks in SNCC developed concerns that white activists from the North and West were taking over the movement. The participation by numerous white students was not reducing the amount of violence that SNCC suffered, but seemed to exacerbate it. Additionally, there was profound disillusionment at Lyndon Johnson's denial of voting status for the Mississippi Freedom Democratic Party at the Democratic National Convention. Meanwhile, during CORE's work in Louisiana that summer, that group found the federal government would not respond to requests to enforce the provisions of the Civil Rights Act of 1964, or to protect the lives of activists who challenged segregation. The Louisiana campaign survived by relying on a local African-American militia called the Deacons for Defense and Justice, who used arms to repel white supremacist violence and police repression. CORE's collaboration with the Deacons was effective in disrupting Jim Crow in numerous Louisiana areas.
In 1965, SNCC helped organize an independent political party, the Lowndes County Freedom Organization (LCFO), in the heart of the Alabama Black Belt, also Klan territory. It permitted its black leaders to openly promote the use of armed self-defense. Meanwhile, the Deacons for Defense and Justice expanded into Mississippi and assisted Charles Evers' NAACP chapter with a successful campaign in Natchez. Charles had taken the lead after his brother Medgar Evers was assassinated in 1963. Also in 1965, the Watts Rebellion took place in Los Angeles. Many black youths were committed to the use of violence to protest inequality and oppression.
During the March Against Fear in 1966, initiated by James Meredith, SNCC and CORE fully embraced the slogan of "black power" to describe these trends towards militancy and self-reliance. In Mississippi, Stokely Carmichael declared, "I'm not going to beg the white man for anything that I deserve, I'm going to take it. We need power."
Some people engaging in the Black Power movement claimed a growing sense of black pride and identity. In gaining more of a sense of a cultural identity, blacks demanded that whites no longer refer to them as "Negroes" but as "Afro-Americans," similar to other ethnic groups, such as Irish Americans and Italian Americans. Until the mid-1960s, blacks had dressed similarly to whites and often straightened their hair. As a part of affirming their identity, blacks started to wear African-based dashikis and grow their hair out as a natural afro. The afro, sometimes nicknamed the "'fro," remained a popular black hairstyle until the late 1970s. Other variations of traditional African styles have become popular, often featuring braids, extensions, and dreadlocks.
The Black Panther Party (BPP), which was founded by Huey Newton and Bobby Seale in Oakland, California, in 1966, gained the most attention for Black Power nationally. The group began following the revolutionary pan-Africanism of late-period Malcolm X, using a "by-any-means necessary" approach to stopping racial inequality. They sought to rid African-American neighborhoods of police brutality and to establish socialist community control in the ghettos. While they conducted armed confrontation with police, they also set up free breakfast and healthcare programs for children. Between 1968 and 1971, the BPP was one of the most important black organizations in the country and had support from the NAACP, SCLC, Peace and Freedom Party, and others.
Black Power was taken to another level inside prison walls. In 1966, George Jackson formed the Black Guerrilla Family in California's San Quentin State Prison. The goal of this group was to overthrow the white-run government in America and the prison system. In 1970, the group displayed its dedication after a white prison guard was found not guilty of shooting and killing three black prisoners from the prison tower; members retaliated by killing a white prison guard.
Numerous popular cultural expressions associated with black power appeared at this time. Released in August 1968, the number one Rhythm & Blues single for the Billboard Year-End list was James Brown's "Say It Loud - I'm Black and I'm Proud". In October 1968, Tommie Smith and John Carlos, while being awarded the gold and bronze medals, respectively, at the 1968 Summer Olympics, donned human rights badges and each raised a black-gloved Black Power salute during their podium ceremony.
King was not comfortable with the "Black Power" slogan, which sounded too much like black nationalism to him. When King was assassinated in 1968, Stokely Carmichael said that whites had murdered the one person who would prevent rampant rioting and that blacks would burn every major city to the ground. Riots broke out in more than 100 cities across the country. Some cities did not recover from the damage for more than a generation; other city neighborhoods never recovered.
King and the civil rights movement inspired the Native American rights movement of the 1960s and many of its leaders. Native Americans had been dehumanized as "merciless Indian savages" in the United States Declaration of Independence, and in King's 1964 book Why We Can't Wait he wrote: "Our nation was born in genocide when it embraced the doctrine that the original American, the Indian, was an inferior race." John Echohawk, a member of the Pawnee tribe and the executive director and one of the founders of the Native American Rights Fund, stated: "Inspired by Dr. King, who was advancing the civil rights agenda of equality under the laws of this country, we thought that we could also use the laws to advance our Indianship, to live as tribes in our territories governed by our own laws under the principles of tribal sovereignty that had been with us ever since 1831. We believed that we could fight for a policy of self-determination that was consistent with U.S. law and that we could govern our own affairs, define our own ways and continue to survive in this society". Native Americans were also active supporters of King's movement throughout the 1960s, which included a sizable Native American contingent at the 1963 March on Washington for Jobs and Freedom.
Due to policies of segregation and disenfranchisement present in Northern Ireland, many Irish activists took inspiration from American civil rights activists. People's Democracy had organized a "Long March" from Belfast to Derry which was inspired by the Selma to Montgomery marches. During the civil rights movement in Northern Ireland, protesters often sang the American protest song "We Shall Overcome" and sometimes referred to themselves as the "negroes of Northern Ireland".
There was an international context for the actions of the U.S. federal government during these years. The Soviet media frequently covered racial discrimination in the U.S. Deeming American criticism of its own human rights abuses hypocritical, the Soviet government would respond by stating "And you are lynching Negroes". In his 1934 book Russia Today: What Can We Learn from It?, Sherwood Eddy wrote: "In the most remote villages of Russia today Americans are frequently asked what they are going to do to the Scottsboro Negro boys and why they lynch Negroes."
In Cold War Civil Rights: Race and the Image of American Democracy, the historian Mary L. Dudziak wrote that Communists who were critical of the United States accused it of practicing hypocrisy when it portrayed itself as the "leader of the free world," while so many of its citizens were being subjected to severe racial discrimination and violence; she argued that this was a major factor in moving the government to support civil rights legislation.
A majority of white Southerners have been estimated to have neither supported nor actively resisted the civil rights movement. Many did not like the idea of expanding civil rights but were also uncomfortable with the language and often violent tactics used by those who resisted the movement as part of massive resistance. Many only reacted to the movement once forced to by their changing environment, and when they did, their response was usually whatever they felt would disturb their daily lives the least. Most of their personal reactions, whether eventually supportive or resistant, were not extreme.
King reached the height of popular acclaim during his life in 1964, when he was awarded the Nobel Peace Prize. After that point his career was filled with frustrating challenges. The liberal coalition that had gained passage of the Civil Rights Act of 1964 and the Voting Rights Act of 1965 began to fray.
King was becoming more estranged from the Johnson administration. In 1965 he broke with it by calling for peace negotiations and a halt to the bombing of Vietnam. He moved further left in the following years, speaking about the need for economic justice and thoroughgoing changes in American society. He believed that change was needed beyond the civil rights which had been gained by the movement.
However, King's attempts to broaden the scope of the civil rights movement were halting and largely unsuccessful. In 1965 King made several attempts to take the Movement north in order to address housing discrimination. The SCLC's campaign in Chicago publicly failed, because Chicago's Mayor Richard J. Daley marginalized the SCLC's campaign by promising to "study" the city's problems. In 1966, white demonstrators in notoriously racist Cicero, a suburb of Chicago, held "white power" signs and threw stones at marchers who were demonstrating against housing segregation.
Politicians and journalists quickly blamed this white backlash on the movement's shift towards Black Power in the mid-1960s; today most scholars believe the backlash was a phenomenon that was already developing in the mid-1950s, and it was embodied in the "massive resistance" movement in the South where even the few moderate white leaders (including George Wallace, who had once been endorsed by the NAACP) shifted to openly racist positions. Northern and Western racists opposed the southerners on a regional and cultural basis, but also held segregationist attitudes which became more pronounced as the civil rights movement headed north and west. For instance, prior to the Watts riot, California whites had already mobilized to repeal the state's 1963 fair housing law.
Even so, the backlash which occurred at the time was not able to roll back the major civil rights victories which had been achieved or swing the country into reaction. Social historians Matthew Lassiter and Barbara Ehrenreich note that the backlash's primary constituency was suburban and middle-class, not working-class whites: "among the white electorate, one half of blue-collar voters...cast their ballot for [the liberal presidential candidate] Hubert Humphrey in 1968...only in the South did George Wallace draw substantially more blue-collar than white-collar support."
While not a key focus of his administration, President Eisenhower made several conservative strides toward making America a racially integrated country. The year he was elected, Eisenhower desegregated Washington, D.C., after hearing a story about an African American man who was unable to rent a hotel room, buy a meal, access drinking water, or attend a movie. Shortly after this act, Eisenhower utilized Hollywood personalities to pressure movie theatres into desegregating as well.
Under the previous administration, President Truman signed executive order 9981 to desegregate the military. However, Truman's executive order had hardly been enforced. President Eisenhower made it a point to enforce the executive order. By October 30, 1954, there were no segregated combat units in the United States. Not only this, but Eisenhower also desegregated the Veterans Administration and military bases in the South, including federal schools for military dependents. Expanding his work beyond the military, Eisenhower formed two non-discrimination committees, one to broker nondiscrimination agreements with government contractors, and a second to end discrimination within government departments and agencies.
The first major piece of civil rights legislation since the Civil Rights Act of 1875 was also passed under the Eisenhower administration. President Eisenhower proposed, championed, and signed the Civil Rights Act of 1957. The legislation established the Civil Rights Commission and the Justice Department's Civil Rights Division and banned intimidating, coercing, and other means of interfering with a citizen's right to vote. Eisenhower's work in desegregating the judicial system is also notable: the judges he appointed were liberal on the subject of civil rights and desegregation, and he actively avoided placing segregationists in federal courts.
For the first two years of the Kennedy administration, civil rights activists had mixed opinions of both the president and Attorney General, Robert F. Kennedy. A well of historical skepticism toward liberal politics had left African Americans with a sense of uneasy disdain for any white politician who claimed to share their concerns for freedom, particularly ones connected to the historically pro-segregationist Democratic Party. Still, many were encouraged by the discreet support Kennedy gave to King, and the administration's willingness, after dramatic pressure from civil disobedience, to bring forth racially egalitarian initiatives.
Many of the initiatives resulted from Robert Kennedy's passion. The younger Kennedy gained a rapid education in the realities of racism through events such as the Baldwin-Kennedy meeting. The president came to share his brother's sense of urgency on the matter, resulting in the landmark Civil Rights Address of June 1963 and the introduction of the first major civil rights act of the decade.
Robert Kennedy first became concerned with civil rights in mid-May 1961 during the Freedom Rides, when photographs of the burning bus and savage beatings in Anniston and Birmingham were broadcast around the world. They came at an especially embarrassing time, as President Kennedy was about to have a summit with the Soviet premier in Vienna. The White House was concerned with its image among the populations of newly independent nations in Africa and Asia, and Robert Kennedy responded with an address for Voice of America stating that great progress had been made on the issue of race relations. Meanwhile, behind the scenes, the administration worked to resolve the crisis with a minimum of violence and prevent the Freedom Riders from generating a fresh crop of headlines that might divert attention from the President's international agenda. The Freedom Riders documentary notes that, "The back burner issue of civil rights had collided with the urgent demands of Cold War realpolitik."
On May 21, when a white mob attacked and burned the First Baptist Church in Montgomery, Alabama, where King was holding out with protesters, Robert Kennedy telephoned King to ask him to stay in the building until the U.S. Marshals and National Guard could secure the area. King proceeded to berate Kennedy for "allowing the situation to continue". King later publicly thanked Kennedy for deploying the force to break up an attack that might otherwise have ended King's life.
With a very small majority in Congress, the president's ability to press ahead with legislation relied considerably on a balancing game with the Senators and Congressmen of the South. Without the support of Vice-President Johnson, a former Senator who had years of experience in Congress and longstanding relations there, many of the Attorney-General's programs would not have progressed.
By late 1962, frustration at the slow pace of political change was balanced by the movement's strong support for legislative initiatives, including administrative representation across all U.S. Government departments and greater access to the ballot box. From squaring off against Governor George Wallace, to "tearing into" Vice-President Johnson (for failing to desegregate areas of the administration), to threatening corrupt white Southern judges with disbarment, to desegregating interstate transport, Robert Kennedy came to be consumed by the civil rights movement. He continued to work on these social justice issues in his bid for the presidency in 1968.
On the night of Governor Wallace's capitulation to African-American enrollment at the University of Alabama, President Kennedy gave an address to the nation, which marked the changing tide, an address that was to become a landmark for the ensuing change in political policy as to civil rights. In 1966, Robert Kennedy visited South Africa and voiced his objections to apartheid, the first time a major US politician had done so:
At the University of Natal in Durban, I was told the church to which most of the white population belongs teaches apartheid as a moral necessity. A questioner declared that few churches allow black Africans to pray with the white because the Bible says that is the way it should be, because God created Negroes to serve. "But suppose God is black", I replied. "What if we go to Heaven and we, all our lives, have treated the Negro as an inferior, and God is there, and we look up and He is not white? What then is our response?" There was no answer. Only silence.-- LOOK Magazine
Robert Kennedy's relationship with the movement was not always positive. As attorney general, he was called to account by activists--who booed him at a June 1963 speech--for the Justice Department's own poor record of hiring blacks. He also presided over FBI Director J. Edgar Hoover and his COINTELPRO program. This program ordered FBI agents to "expose, disrupt, misdirect, discredit, or otherwise neutralize" the activities of Communist front groups, a category in which the paranoid Hoover included most civil rights organizations. Kennedy personally authorized some of the programs. According to Tim Weiner, "RFK knew much more about this surveillance than he ever admitted." Although Kennedy gave approval only for limited wiretapping of King's phones "on a trial basis, for a month or so," Hoover extended the clearance so his men were "unshackled" to look for evidence in any areas of the black leader's life they deemed important; they then used this information to harass King. Kennedy directly ordered surveillance on James Baldwin after their antagonistic racial summit in 1963.
Lyndon Johnson made civil rights one of his highest priorities, coupling it with his war on poverty. However, increasing opposition to the War in Vietnam, coupled with the cost of the war, undercut support for his domestic programs.
Under Kennedy, major civil rights legislation had been stalled in Congress. His assassination changed everything. President Lyndon Johnson was a far more skillful negotiator than Kennedy, and he had behind him a powerful national momentum demanding immediate action on moral and emotional grounds. Demands for immediate action came from unexpected directions, especially white Protestant church groups. The Justice Department, led by Robert Kennedy, moved from a posture of shielding Kennedy from the political minefield of racial politics to acting to fulfill his legacy. The violent death and public reaction dramatically moved the conservative Republicans, led by Senator Everett McKinley Dirksen, whose support was the margin of victory for the Civil Rights Act of 1964. The act immediately ended de jure (legal) segregation and the era of Jim Crow.
With the civil rights movement at full blast, Lyndon Johnson coupled black entrepreneurship with his war on poverty, setting up special programs in the Small Business Administration, the Office of Economic Opportunity, and other agencies. This time there was money for loans designed to boost minority business ownership. Richard Nixon greatly expanded the program, setting up the Office of Minority Business Enterprise (OMBE) in the expectation that black entrepreneurs would help defuse racial tensions and possibly support his reelection.
The 1954 to 1968 civil rights movement contributed strong cultural threads to American and international theater, song, film, television, and folk art.
Soon Americans who criticized the Soviet Union for its human rights violations were answered with the famous tu quoque argument: 'A u vas negrov linchuyut' (and you are lynching Negroes).
This article discusses the concept of inflation and its impact on the economy.
What is inflation?
The term inflation refers to a decrease in the purchasing power of consumers caused by an increase in the prices of goods and services. Equivalently, it can be defined as a decline in the purchasing power of a given currency over time.
What is the impact of inflation on the economy?
Inflation can have both positive and negative effects on individuals and on the economy as a whole. Moderate, controlled inflation can aid economic recovery, but when inflation becomes too high the economy suffers.
The article discusses both the favorable and unfavorable impacts of inflation on the economy.
Favorable impacts of inflation
Impact on producers
Inflation may have a positive impact on producers because they earn more profit. The extra money earned through rising prices can also improve returns for investors and entrepreneurs, who may additionally receive incentives for investing in productive activities, e.g. tax exemptions and grants.
Increase in production
The extra money that industrialists earn through inflation can be invested in expanding production. Increased production opens doors for more jobs and better incomes.
Increase in the income of shareholders
Shareholders can earn a higher income because they share in the increased profits earned during inflation; as companies earn more, they distribute part of those profits to shareholders.
Borrower related benefits
Inflation benefits borrowers who took out loans before prices rose, because the money they repay has less real value than the money they originally borrowed.
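To make the borrower's advantage concrete, here is a minimal sketch; the loan size, inflation rate, and repayment horizon are hypothetical numbers chosen only to illustrate the mechanism, not figures from this article.

```python
# Illustrative sketch (hypothetical figures): how inflation erodes the real
# value of a fixed nominal repayment, benefiting the borrower.

def real_value(nominal_amount: float, annual_inflation: float, years: int) -> float:
    """Deflate a nominal amount by cumulative inflation to today's purchasing power."""
    return nominal_amount / ((1 + annual_inflation) ** years)

principal = 10_000.0   # amount borrowed today (assumed)
inflation = 0.06       # assumed 6% annual inflation
years = 5              # assumed repayment horizon

print(f"Nominal amount repaid: {principal:,.2f}")
print(f"Real value of that repayment after {years} years: {real_value(principal, inflation, years):,.2f}")
# With 6% inflation, 10,000 repaid after five years is worth roughly 7,473 in
# today's purchasing power, so the borrower returns less real value than received.
```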
Unfavorable impacts of inflation
Impact on fixed income groups
Inflation adversely affects fixed-income groups, e.g. salaried workers and pensioners. As prices rise, their incomes do not, so they are forced to compromise on their quality of life.
Rise of inequality in income distribution
High-income earners can offset rising inflation with correspondingly higher incomes. Lower-income earners, in contrast, rarely see a proportional rise in income, so lower-income households can be pushed into poverty.
Moreover, rising inflation pushes up nominal interest rates, and because tax is levied on nominal interest income, the real tax burden on that income increases. This erosion of after-tax real returns can discourage investment, and stock exchange participation falls as investors fear losing their money.
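The interaction between nominal rates, taxes, and inflation can be sketched with the standard Fisher relation; the nominal rate, tax rate, and inflation rate below are assumptions used only for illustration.

```python
# Illustrative sketch (hypothetical rates): after-tax real return when tax is
# levied on nominal interest income during inflation.

nominal_rate = 0.08   # 8% nominal interest (assumed)
tax_rate = 0.30       # 30% tax on nominal interest income (assumed)
inflation = 0.06      # 6% annual inflation (assumed)

after_tax_nominal = nominal_rate * (1 - tax_rate)             # 5.6% kept after tax
real_return = (1 + after_tax_nominal) / (1 + inflation) - 1   # Fisher relation

print(f"After-tax nominal return: {after_tax_nominal:.2%}")
print(f"After-tax real return:    {real_return:.2%}")
# Although the nominal rate exceeds inflation, the after-tax real return here is
# about -0.4%, illustrating the heavier real tax burden on interest income.
```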
Another impact on lower-income groups is reduced saving in the face of an uncertain future: they prefer to purchase consumer goods before prices rise further.
Disturbance in planning
Inflation creates uncertainty about how far prices will rise, which discourages investment. Lower-income groups start spending and stockpiling goods to cope with potential price rises, which further discourages saving. All of this disturbs economic planning at the individual, organizational, and state level.
Investment on basis of speculation
Rising inflation lures many people into speculative investment. They buy shares, residential buildings, and plots of land that they expect to sell at a high profit margin when prices spike.
The purpose of such investment is quick profit. From a broader economic perspective, speculative investment is undesirable because it does not create productive capital.
Reduced accumulation of Capital
With uncertainty about the prices of goods and services, the general desire and capacity to save capital also decline. For investors, the capital available for future investment shrinks. Since capital accumulation depends on investment growth, it is negatively affected during periods of inflation.
Impact on money lenders
Unlike borrowers, money lenders, e.g. individuals, credit societies, and banks, lose during periods of inflation, because the money repaid to them has lost value: its purchasing power is lower than when the loan was made.
Impact on foreign exchange earnings
During periods of inflation, the cost of industrial inputs, i.e. raw materials, power, labor, and technology, also increases. Consequently, the production cost of export goods rises. The resulting increase in export prices can reduce the competitiveness of exports in the international market, adversely affecting the overall economy.
Gender pay gap
The gender pay gap or gender wage gap is the average difference between the remuneration for men and women who are working. Women are generally considered to be paid less than men. There are two distinct numbers regarding the pay gap: non-adjusted versus adjusted pay gap. The latter typically takes into account differences in hours worked, occupations chosen, education and job experience. In the United States, for example, the non-adjusted average female's annual salary is 79% of the average male salary, compared to 95% for the adjusted average salary.
The gender pay gap can be a problem from a public policy perspective because it reduces economic output and means that women are more likely to be dependent upon welfare payments, especially in old age.
According to a 2021 study on historical gender wage ratios, women in Southern Europe earned approximately half that of unskilled men between 1300 and 1800. In Northern and Western Europe, the ratio was far higher but it declined over the period 1500–1800.
A 2005 meta-analysis by Doris Weichselbaumer and Rudolf Winter-Ebmer of more than 260 published pay gap studies for over 60 countries found that, from the 1960s to the 1990s, raw (i.e. non-adjusted) wage differentials worldwide have fallen substantially from around 65% to 30%. The bulk of this decline was due to better labor market endowments of women (i.e. better education, training, and work attachment).
A 2011 study by the British CMI concluded that, if pay growth continues for female executives at current rates, the gap between the earnings of female and male executives would not be closed until 2109.
The non-adjusted gender pay gap, or gender wage gap, is typically the median or mean average difference between the remuneration for all working men and women in the sample chosen. It is usually represented as either a percentage or a ratio of the "difference between average gross hourly [or annual] earnings of male and female employees as % of male gross earnings".
Some countries use only the full-time working population for the calculation of national gender gaps. Others are based on a sample from the entire working population of a country (including part-time workers), in which case the full-time equivalent (FTE) is used to obtain the remuneration for an equal amount of paid hours worked.
Non-governmental organizations apply the calculation to various samples. Some share how the calculation was performed and on which data set. The gender pay gap can, for example, be measured by ethnicity, by city, by job, or within a single organization.
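As a concrete illustration of the definition above, the following sketch computes a non-adjusted gap as a percentage of male earnings; the hourly figures are invented for illustration, whereas official statistics use large survey samples and usually median earnings.

```python
# Minimal sketch of the non-adjusted (raw) gender pay gap computation.
# The hourly earnings below are invented; they are not survey data.
from statistics import mean, median

male_hourly   = [18.0, 22.5, 30.0, 41.0, 27.5]   # hypothetical sample
female_hourly = [16.0, 21.0, 26.0, 35.0, 24.0]   # hypothetical sample

def pay_gap(male, female, stat=median):
    """Gap as a share of male earnings: (male - female) / male."""
    m, f = stat(male), stat(female)
    return (m - f) / m

print(f"Median-based non-adjusted gap: {pay_gap(male_hourly, female_hourly):.1%}")
print(f"Mean-based non-adjusted gap:   {pay_gap(male_hourly, female_hourly, mean):.1%}")
```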
Adjusting for different causes
Comparing salary "within, rather than across" data sets helps to focus on a specific factor, by controlling for other factors. For example, to eliminate the role of horizontal and vertical segregation in the gender pay gap, salary can be compared by gender within a specific job function. To eliminate transnational differences in the job market, measurements can focus on a single geographic area instead.
The non-adjusted gender pay gap is not itself a measure of discrimination. Rather, it combines differences in the average pay of women and men to serve as a barometer of comparison. Differences in pay are caused by occupational segregation (with more men in higher paid industries and women in lower paid industries), vertical segregation (fewer women in senior, and hence better paying positions), ineffective equal pay legislation, women’s overall paid working hours, and barriers to entry into the labor market (such as education level and single parenting rate).
Some variables that help explain the non-adjusted gender pay gap include economic activity, working time, and job tenure. Gender-specific factors, including gender differences in qualifications and discrimination, overall wage structure, and the differences in remuneration across industry sectors all influence the gender pay gap.
Eurostat estimated in 2016 that after allowing for average characteristics of men and women, women still earn 11.5% less than men. Since this estimate accounts for average differences between men and women, it is an estimation of the unexplained gender pay gap.
In Jacobs (1995), Boyd et al. refer to the horizontal division of labor as "high-tech" (predominantly men) versus "high-touch" (predominantly women) with high tech being more financially rewarding. Men are more likely to be in relatively high-paying, dangerous industries such as mining, construction, or manufacturing and to be represented by a union. Women, in contrast, are more likely to be in clerical jobs and to work in the service industry.
A study of the US labor force in the 1990s suggested that gender differences in occupation, industry and union status explain an estimated 53% of the wage gap. A 2017 study in the American Economic Journal: Macroeconomics found that the growing importance of the services sector has played a role in reducing the gender gap in pay and hours. In 1998, adjusting for both differences in human capital and in industry, occupation, and unionism increases the size of American women's average earnings from 80% of American men's to 91%.
A 2017 study by the US National Science Foundation's annual census revealed pay gaps in different areas of science: there is a much larger proportion of men in higher-paying fields such as mathematics and computer science, the two highest-paying scientific fields. Men accounted for about 75% of doctoral degrees in those fields (a proportion that has barely changed since 2007), and were expected to earn $113,000 compared with $99,000 for women. In the social sciences the difference between men and women with PhDs was significantly smaller, with men earning ~$66,000, compared with $62,000 for women. However, in some fields women earn more: women in chemistry earn ~$85,000, about $5,000 more than their male colleagues.
A Morningstar analysis of senior executive pay data revealed that senior executive women earned 84.6 cents for every dollar earned by male executives in 2019. Women also remained outnumbered in the C-Suite 7 to 1.
A 2015 meta-analysis of studies of experimental studies of gender in hiring found that "men were preferred for male-dominated jobs (i.e., gender-role congruity bias), whereas no strong preference for either gender was found for female-dominated or integrated jobs". A 2018 audit study found that high-achieving men are called back more frequently by employers than equally high-achieving women (at a rate of nearly 2-to-1).
The European Commission divides discrimination, as it impacts the EU wage gap, into several categories.
Direct discrimination is when a woman is paid less than a man for the same job. According to Harvard economist Claudia Goldin, by and large women receive equal pay for equal work in the US. A more persistent difference is sectoral or industry discrimination, with women being paid less for a job of equal value in careers dominated by women.
Studies have shown that an increasing share of the gender pay gap over time is due to children. The phenomenon of lower wages due to childbearing has been termed the motherhood penalty. A 2019 study conducted in Germany found that women with children are discriminated against in the job market, whereas men with children are not. In contrast, a 2020 study in the Netherlands found little evidence for discrimination against women in hiring based on their parental status.
Motherhood can affect job choices as well. In a traditional role, women are the ones who leave the workforce temporarily to take care of their children. As a result, women tend to take lower-paying jobs because these are more likely to offer flexible hours than higher-paying jobs. Because women are also more likely to work fewer hours than men, they accumulate less experience, which causes them to fall behind in the workforce.
Another explanation of the gender pay gap is the distribution of housework. Couples raising children tend to designate the mother to do the larger share of housework and to take on the main responsibility for child care; as a result, women tend to have less time available for wage-earning. This reinforces the pay gap between men and women in the labor market, trapping households in a self-reinforcing cycle.
Another social factor, related to the one above, is the socialization of individuals to adopt specific gender roles. Job choices influenced by socialization are often slotted into "demand-side" decisions in frameworks of wage discrimination, rather than treated as a result of extant labor market discrimination influencing job choice. Men who are in non-traditional roles, or in jobs primarily seen as women-focused, such as nursing, report job satisfaction high enough to motivate them to continue in these fields despite the criticism they may receive.
According to a 1998 study, in the eyes of some employees, women in middle management are perceived to lack the courage, leadership, and drive that male managers appear to have, despite female middle managers achieving results on par with their male counterparts in terms of successful projects and results delivered for their employing companies. These perceptions, along with the factors previously described in the article, contribute to the difficulty women face in ascending to the executive ranks compared to men in similar positions.
Societal ideas of gender roles stem somewhat from media influences. Media portrays ideals of gender-specific roles off of which gender stereotypes are built. These stereotypes then translate to what types of work men and women can or should do. In this way, gender plays a mediating role in work discrimination, and women find themselves in positions that do not allow for the same advancements as males.
Some research suggests that women are more likely to volunteer for tasks that are less likely to help earn promotions, and that they are more likely to be asked to volunteer and more likely to say yes to such requests.
For economic activity
A 2009 report for the Australian Department of Families, Housing, Community Services and Indigenous Affairs argued that in addition to fairness and equity there are also strong economic imperatives for addressing the gender wage gap. The researchers estimated that a decrease in the gender wage gap from 17% to 16% would increase GDP per capita by approximately $260, mostly from an increase in the hours women would work. Ignoring offsetting factors as women's working hours increase, eliminating the entire 17% gender wage gap could be worth around $93 billion, or 8.5% of GDP. The researchers attributed the wage gap as follows: lack of work experience, 7%; lack of formal training, 5%; occupational segregation, 25%; working at smaller firms, 3%; and simply being female, the remaining 60%.
An October 2012 study by the American Association of University Women found that over the course of 47 years, an American woman with a college degree will make about $1.2 million less than a man with the same education. Therefore, closing the pay gap by raising women's wages would have a stimulus effect that would grow the United States economy by at least 3% to 4%.
For women's pensions
The European Commission argues that the pay gap has significant effects on pensions. Since women's lifetime earnings are on average 17.5% (as of 2008) lower than men's, they have lower pensions. As a result, elderly women are more likely to face poverty: 22% of women aged 65 and over are at risk of poverty compared to 16% of men.
Analysis conducted by the World Bank and available in the 2019 World Development Report on The Changing Nature of Work connects earnings with skill accumulation, suggesting that women also accumulate less human capital (skills and knowledge) at work and through their careers. The report shows that the payoffs to work experience are lower for women across the world as compared to men. For example, in Venezuela, for each additional year of work, men's wages increase on average by 2.2 percent, compared to only 1.5 percent for women. In Denmark, by contrast, the payoffs to an additional year of work experience are the same for both men and women, at 5 percent on average. To address these differences, the report argues that governments could seek to remove limitations on the type or nature of work available to women and eliminate rules that limit women's property rights. Parental leave, nursing breaks, and the possibility for flexible or part-time schedules are also identified as potential factors limiting women's learning in the workplace.
In certain neoclassical models, discrimination by employers can be inefficient; excluding or limiting employment of a specific group will raise the wages of groups not facing discrimination. Other firms could then gain a competitive advantage by hiring more workers from the group facing discrimination. As a result, in the long run discrimination would not occur. However, this view depends on strong assumptions about the labor market and the production functions of the firms attempting to discriminate. Firms which discriminate on the basis of real or perceived customer or employee preferences would also not necessarily see discrimination disappear in the long run even under stylized models.
In monopsony theory, which describes situations where there is only one buyer (in this case, a "buyer" for labor), wage discrimination can be explained by variations in labor mobility constraints between workers. Ransom and Oaxaca (2005) show that women appear to be less pay sensitive than men, and therefore employers take advantage of this and discriminate in their pay for women workers.
According to the 2008 edition of the Employment Outlook report by the OECD, almost all OECD countries have established laws to combat discrimination on grounds of gender. Examples of this are the Equal Pay Act of 1963 and Title VII of the Civil Rights Act of 1964. Legal prohibition of discriminatory behavior, however, can only be effective if it is enforced. The OECD points out that:
herein lies a major problem: in all OECD countries, enforcement essentially relies on the victims' willingness to assert their claims. But many people are not even aware of their legal rights regarding discrimination in the workplace. And even if they are, proving a discrimination claim is intrinsically difficult for the claimant and legal action in courts is a costly process, whose benefits down the road are often small and uncertain. All this discourages victims from lodging complaints.
Moreover, although many OECD countries have put in place specialized anti-discrimination agencies, only in a few of them are these agencies effectively empowered, in the absence of individual complaints, to investigate companies, take actions against employers suspected of operating discriminatory practices, and sanction them when they find evidence of discrimination.
In 2003, the U.S. Government Accountability Office (GAO) found that women in the United States, on average, earned 80% of what men earned in 2000 and workplace discrimination may be one contributing factor. In light of these findings, GAO examined the enforcement of anti-discrimination laws in the private and public sectors. In a 2008 report, GAO focused on the enforcement and outreach efforts of the Equal Employment Opportunity Commission (EEOC) and the Department of Labor (Labor). GAO found that EEOC does not fully monitor gender pay enforcement efforts and that Labor does not monitor enforcement trends and performance outcomes regarding gender pay or other specific areas of discrimination. GAO came to the conclusion that "federal agencies should better monitor their performance in enforcing anti-discrimination laws."
In 2016, the EEOC proposed a rule to submit more information on employee wages by gender to better monitor and combat gender discrimination. In 2018, Iceland enacted legislation to reduce the country's pay gap.
Civil society groups organize awareness campaigns that include activities such as Equal Pay Day or the equal pay for equal work movement to increase the public attention received by the gender pay gap. For the same reason, various groups publish regular reports on the current state of gender pay differences. An example is the Global Gender Gap Report.
The growth of the "gig" economy generates worker flexibility that, some have speculated, will favor women. However, the analysis of earnings among more than one million Uber drivers in the United States surprisingly showed that the gender pay gap between drivers is about 7% in favor of men. Uber's algorithm does not distinguish the gender of its workers, but men get more income because they choose better when and in which areas to work, and cancel and accept trips in a more lucrative way. Finally, men drive 2.2% faster than women, which also allows them to increase their income per unit of time. The study concludes the "gig" economy can perpetuate the gender pay gap even in the absence of discrimination.
[Table omitted: non-adjusted pay gaps (median earnings of full-time employees) by country, according to the OECD (2008).]
Moreover, the World Economic Forum provides data from 2015 that evaluates the gender pay gap in 145 countries. Their evaluations take into account economic participation and opportunity, educational attainment, health and survival, and political empowerment scores.
In Australia, the Workplace Gender and Equality Agency (WGEA), an Australian Government statutory agency, publishes data from non-public sector Australian organizations. There is a pay gap across all industries. The gender pay gap is calculated on the average weekly ordinary time earnings for full-time employees published by the Australian Bureau of Statistics. The gender pay gap excludes part-time earnings, casual earnings, and increased hourly rates for overtime.
Ian Watson of Macquarie University examined the gender pay gap among full-time managers in Australia over the period 2001–2008, and found that between 65 and 90% of this earnings differential could not be explained by a large range of demographic and labor market variables. In fact, a "major part of the earnings gap is simply due to women managers being female". Watson also notes that despite the "characteristics of male and female managers being remarkably similar, their earnings are very different, suggesting that discrimination plays an important role in this outcome". A 2009 report to the Department of Families, Housing, Community Services and Indigenous Affairs also found that "simply being a woman is the major contributing factor to the gap in Australia, accounting for 60 per cent of the difference between women's and men's earnings, a finding which reflects other Australian research in this area". The second most important factor in explaining the pay gap was industrial segregation. A report by the World Bank also found that women in Australia who worked part-time jobs and were married came from households which had a gendered distribution of labor, possessed high job satisfaction, and hence were not motivated to increase their working hours.
The Global Gender Gap Report ranks Brazil 95th out of 144 countries on pay equality for like jobs. Brazil has a score of 0.684, a little below 2017's global index. In 2017, Brazil was one of the six countries that fully closed their gaps on both the Health and Survival and the Educational Attainment sub-indexes. However, Brazil saw a setback in the progress towards gender parity that year, with its overall gender gap standing at its widest point since 2011. This is due to a sharp widening of Brazil's Political Empowerment gender gap, which measures the share of women in parliament and at the ministerial level, a widening too large to be counterbalanced by a range of modest improvements on the country's Economic Participation and Opportunity sub-index.
According to the Brazilian Institute of Geography and Statistics (IBGE), women in Brazil study more, work more, and earn less than men. On average, combining paid work, household chores, and caring for others, women work three hours a week more than men: the average woman works 54.4 hours a week, while the average man works 51.4. Despite having a higher educational level, women earn, on average, less than men. Although the difference between men's and women's earnings has declined in recent years, in 2016 women still received the equivalent of 76.5% of men's earnings. One factor that may explain this difference is that only 37.8% of management positions in 2016 were held by women. According to IBGE, occupational segregation and wage discrimination against women in the labor market also play an important role in the wage difference. Data from the Continuous National Household Sample Survey, conducted by IBGE in the fourth quarter of 2017, show that 24.3% of the 40.2 million Brazilian workers had completed college, but this proportion was only 14.6% among employed men. According to the same survey, women who work earn 24.4% less, on average, than men. It also found that 6.0% of working men were employers, while the proportion among women was only 3.3%. The survey further pointed out that 92.3% of domestic workers, a job culturally regarded as "feminine" and that pays low wages, are women, whereas high-paying occupations like civil construction employed 13% of employed men and only 0.5% of employed women. Another reason that might explain the gender wage gap in Brazil is the country's very strict labor regulations, which increase informal hiring. Under Brazilian law, female workers may opt to take six months of maternity leave that must be fully paid by the employer. Many researchers are concerned that these regulations may push workers into informal jobs, where they have no rights at all. In fact, women in informal jobs earn only 50% of what the average woman in a formal job earns; among men the difference is less stark, with men in informal jobs earning 60% of the average man in a formal job.
A study of wages among Canadian supply chain managers found that men make an average of $14,296 a year more than women. The research suggests that as supply chain managers move up the corporate ladder, they are less likely to be female. Women in Canada are more likely to seek employment opportunities which contrast greatly with those of men. About 20 percent of women between the ages of 25 and 54 in Canada make just under $12 an hour. The proportion of women taking jobs that pay less than $12 an hour is also twice as large as the proportion of men doing the same type of low-wage work. There remains the question of why such a trend seems to resonate throughout the developed world. One societal factor that has been identified is the influx of women of color and immigrants into the work force, groups that both tend to be subject to lower-paying jobs from a statistical perspective. Each province and territory in Canada has a quasi-constitutional human rights code which prohibits discrimination based on sex. Several also have laws specifically prohibiting public sector and private sector employers from paying men and women differing amounts for substantially similar work. Verbatim, the Alberta Human Rights Act states in regard to equal pay, "Where employees of both sexes perform the same or substantially similar work for an employer in an establishment the employer shall pay the employees at the same rate of pay."
Using the gaps between men and women in economic participation and opportunity, educational attainment, health and survival, and political empowerment, The Global Gender Gap Report 2018 ranks China's gender gap at 110 out of 145 countries. As an upper middle income country, as classified by the World Bank, China is the "third-least improved country in the world" on the gender gap. The health and survival sub-index is the lowest among the countries listed; this sub-index takes into account gender differences in life expectancy and the sex ratio at birth (the ratio of male to female children, reflecting the preference for sons under China's One Child Policy).:4,26 In particular, Jayoung Yoon, a researcher, claims the women's employment rate is decreasing. However, several of the contributing factors might be expected to increase women's participation. Yoon's contributing factors include: the traditional gender roles; the lack of childcare services provided by the state; the obstacle of child rearing; and the highly educated, unmarried women termed "leftover women" by the state. The term "leftover women" creates anxiety that pushes women to rush into marriage, delaying employment. In alignment with the traditional gender roles, the "Women Return to the Home" movement by the government encouraged women to leave their jobs to alleviate the men's unemployment rate.
Dominican women, who are 52.2% of the labor force, earn an average of 20,479 Dominican pesos, 2.6% more than Dominican men's average income of 19,961 pesos. The country's Global Gender Gap ranking, compiled from economic participation and opportunity, educational attainment, health and survival, and political empowerment scores, was 67th out of 134 countries (representing 90% of the globe) in 2009 and dropped to 86th out of 145 countries in 2015. More women are in ministerial offices, improving the political empowerment score, but women are not receiving equal pay for similar jobs, preserving the low economic participation and opportunity scores.:15–17, 23
At EU level, the gender pay gap is defined as the relative difference in the average gross hourly earnings of women and men within the economy as a whole. Eurostat found a persisting gender pay gap of 17.5% on average in the 27 EU Member States in 2008. There were considerable differences between the Member States, with the non-adjusted pay gap ranging from less than 10% in Italy, Slovenia, Malta, Romania, Belgium, Portugal, and Poland to more than 20% in Slovakia, the Netherlands, Czech Republic, Cyprus, Germany, United Kingdom, and Greece and more than 25% in Estonia and Austria. However, taking into account the hours worked in Finland, men there only earned 0.4% more in net income than women.
A recent survey of international employment law firms showed that gender pay gap reporting is not a common policy internationally. Despite such laws on a national level being few and far between, there are calls for regulation on an EU level. A recent (as of December 2015) resolution of the European Parliament urged the Commission to table legislation closing the pay gap. A proposal that is substantively the same as the UK plan was passed by 344 votes to 156 in the European Parliament.
The European Commission has stated that the undervaluation of female work is one of the main contributors to the persisting gender pay gap. They add that explanations of the pay gap go beyond discrimination, and that other factors contribute to upholding the gap: work-life balance, the issue of women in leadership and the glass ceiling, and sectoral segregation, i.e. the overrepresentation of women in low-paying sectors.
On average, between 1995 and 2005, women in Finland earned 28.4% less in non-adjusted salaries than men. Taking into account the high progressive tax rate in Finland, the net income difference was 22.7%. Adjusted for the amount of hours worked (and not including unpaid national military service hours), these wage differences are reduced to approximately 5.7% (non taxed) and 0.4% (tax-adjusted).
The difference in the amount of hours worked is largely attributed to social factors; for example, women in Finland spend considerably more time on domestic work instead. Other considerable factors are increased pay rates for overtime and evening/night-time work, of which men in Finland, on average, work more. When comparing people with the same job title, women in public sector positions earn approximately 99% of their male counterparts, while those in the private sector only earn 95%. Public sector positions are generally more rigidly defined, allowing for less negotiation in individual wages and overtime/evening/night-time work.
Women earn 22–23% less than men, according to the Federal Statistical Office of Germany. The revised gender pay gap was 6–8% in the years 2006–2013. The Cologne Institute for Economic Research adjusted the wage gap to less than 2%. Taking into account working hours, education, and length of employment reduced the gap from 25% to 11%; the difference shrank further when women had not paused their careers for more than 18 months due to motherhood.
The most significant factors associated with the remaining gender pay gap are part-time work, education, and occupational segregation (fewer women in leading positions and in fields like STEM).
In Luxembourg, the total gender income gap is 32.5%. The gender pay gap in monthly gross wages for full-time workers has narrowed over the past few years: according to data from the OECD (Organisation for Economic Co-operation and Development), the gap dropped by more than 10% between 2002 and 2015. The gap also depends on the age group. Women aged 25–34 earn higher wages than men, partly because they have a higher level of education at that age. From age 35 onward, men earn higher salaries than women.
The current extent of the gender pay gap reflects several factors, such as differences in working hours and in labor market participation. More women (30.4%) than men (4.6%) work part-time, which lowers women's overall working hours. Labor force participation is 60.3% for women and 76% for men, partly because most women take maternity leave. Men also hold a larger share of higher-paid jobs, for instance 93.7% of executive positions, which further widens the gap.
In the Netherlands, recent figures from the CBS (Centraal Bureau voor de Statistiek; English: Central Bureau of Statistics) indicate that the pay gap is getting smaller. Adjusted for occupation level, education level, experience level, and 17 other variables, the difference in earnings has fallen in business from 9% (2008) to 7% (2014) and in government from 7% (2008) to 5% (2014). Without adjustments, the gap is 20% in business (2014) and 10% in government (2014). Young women earn more than men up until the age of 30, mostly due to a higher level of education: up to that age, women in the Netherlands have a higher educational level on average than men, while after it men hold, on average, the higher degree. The reversal can also be caused by women becoming pregnant and taking part-time jobs so they can care for their children.
For the year 2013, the gender pay gap in India was estimated to be 24.81%. Further, while analyzing the level of female participation in the economy, a report slots India as one of the bottom 10 countries on its list. Thus, in addition to unequal pay, there is also unequal representation, because while women constitute almost half the Indian population (about 48% of the total), their representation in the work force amounts to only about one-fourth of the total.
Jayoung Yoon analyzes Japan's culture of the traditional male breadwinner model, where the husband works outside of the house while the wife is the caretaker. Despite these traditional gender roles for women, Japan's government aims to enhance the economy by improving the labor policies for mothers with Abenomics, an economy revitalization strategy. Yoon believes Abenomics represents a desire to remedy the effects of an aging population rather than a desire to promote gender equality. Evidence for the conclusion is the finding that women are entering the workforce in contingent positions for a secondary income and a company need of part-time workers based on mechanizing, outsourcing and subcontracting. Therefore, Yoon states that women's participation rates do not seem to be influenced by government policies but by companies' necessities. The Global Gender Gap Report 2015 said that Japan's economic participation and opportunity ranking (106th), 145th being the broadest gender gap, dropped from 2014 "due to lower wage equality for similar work and fewer female legislators, senior officials and managers".:25–27
From a total of 145 states, the World Economic Forum calculates Jordan's gender gap ranking for 2015 as 140th through economic participation and opportunity, educational attainment, health and survival, and political empowerment evaluations. Jordan is the "world's second-least improved country" for the overall gender gap.:25–27 The ranking dropped from 93rd in 2006.:9 In contradiction to Jordan's provisions within its constitution and being signatory to multiple conventions for improving the gender pay gap, there is no legislation aimed at gender equality in the workforce. According to The Global Gender Gap Report 2015, Jordan had a score of 0.61; 1.00 being equality, on pay equality for like jobs.:25, 222
As stated by Jayoung Yoon, South Korea's female employment rate has increased since the 1997 Asian financial crisis as a result of women 25 to 34 years old leaving the workforce later to become pregnant and women 45 to 49 years old returning to the workforce. Mothers are more likely to continue working after child rearing on account of the availability of affordable childcare services provided for mothers previously in the workforce or the difficulty to be rehired after taking time off to raise their children. The World Economic Forum found that, in 2015, South Korea had a score of 0.55, 1.00 being equality, for pay equality for like jobs. From a total of 145 countries, South Korea had a gender gap ranking of 115th (the lower the ranking, the narrower the gender gap). On the other hand, political empowerment dropped to half of the percentage of women in the government in 2014.:26, 228
In 2018, the gender wage gap in South Korea was 34.6%, with women earning about 65.4% of what men did on average, according to OECD data. With regard to monthly earnings, including part-time jobs, the gap can be explained primarily by the fact that women work fewer hours than men, but occupational and industry segregation also play an important role. Korea is considered to have the worst wage gap among industrialized countries, and the gap is often overlooked. In addition, because many women leave the workplace once married or pregnant, the gender gap in pension entitlements is affected too, which in turn impacts the poverty level.
North Korea, on the other hand, is one of few countries where women earn more than men. The disparity is due to women's greater participation in the shadow economy of North Korea.
Although recent studies have shown that the gender wage gap in New Zealand has diminished in the last two decades, the gap continues to affect many women today. According to StatsNZ, the wage gap was measured to be 9.4 percent in September 2017. Back in 1998, it was measured to be approximately 16.3 percent. There are several different factors that affect New Zealand's wage gap. However, researchers claim that 80 percent of these factors cannot be elucidated, which often causes difficulty in understanding the gap.
In order to calculate the gap, New Zealand makes use of several different methods. The official gap is calculated by Statistics New Zealand, which uses the difference between men's and women's hourly earnings. The State Services Commission, on the other hand, examines the average income of men and women for its calculation. Over the years, the OECD has tracked and continues to track New Zealand's gender wage gap, along with those of 34 other countries. The overall goal of the OECD is to narrow the wage gap so that gender no longer plays a significant role in an individual's income. Although change has been gradual, New Zealand is one of the countries that has seen notable progress, and researchers predict it will continue to do so.
A wage gap exists in Russia (after 1991, but also before) and statistical analysis shows that most of it cannot be explained by lower qualifications of women compared to men. On the other hand, occupational segregation by gender and labor market discrimination seem to account for a large share of it.
The October Revolution (1917) and the dissolution of the Soviet Union in 1991 have shaped the developments in the gender wage gap. These two main turning points in Russian history frame the analysis of Russia's gender pay gap found in the economic literature. Consequently, the pay gap can be examined over two periods: the wage gap in Soviet Russia (1917–1991) and the wage gap in the transition and post-transition (after 1991).
According to Jayoung Yoon, Singapore's aging population and low fertility rates are resulting in more women joining the labor force in response to the government's desire to improve the economy. The government provides tax relief to mothers in the workforce to encourage them to continue working. Yoon states that "as female employment increases, the gender gap in employment rates…narrows down" in Singapore. The Global Gender Gap Report 2015 ranks Singapore's gender gap at 54th out of 145 states globally based on the economic participation and opportunity, the educational attainment, the health and survival, and the political empowerment sub-indexes (a lower rank means a smaller gender gap). The gender gap narrowed from 2014's ranking of 59. In the Asia and Pacific region, Singapore has evolved the most in the economic participation and opportunity sub-index, yet it is lower than the region's means in educational attainment and political empowerment.:25–27
In April 2018 the aggregate gender pay gap declined to 8.6%, and even reversed for certain categories, e.g. with men in their 30s paid less than women for part-time work. The gap varies considerably from −4.4% (women employed part-time without overtime out-earn men) to 26% (for UK women employed full-time aged 50–59). In 2012 the pay gap officially dropped below 10% for full-time workers. The median pay, the point at which half of people earn more and half earn less, is 17.9% less for employed women than for employed men.
The most significant factors associated with the gender pay gap are full-time/part-time work, education, the size of the firm a person is employed in, and occupational segregation (women are under-represented in managerial and high-paying professional occupations). In part-time roles women out-earned men by 4.4% in 2018 (6.5% in 2015, 5.5% in 2014). Women workers qualified to GCSE or A level standard experienced a smaller pay gap in 2018 (those qualified to degree level have seen little change). A 2015 study compiled by the Press Association based on data from the Office for National Statistics revealed that women in their 20s were out-earning men in their 20s by an average of £1,111, showing a reversal of trends. However, the same study showed that men in their 30s out-earned women in their 30s by an average of £8,775. The study did not attempt to explain the causes of the gender gap.
In October 2014, the UK Equality Act 2010 was augmented with regulations which require Employment Tribunals to order an employer (except an existing micro-business or a new business) to carry out an equal pay audit where the employer is found to have breached equal pay law. The then prime minister David Cameron announced plans to require large firms to disclose data on the gender pay gap among staff. Since April 2018, employers with over 250 employees are legally required to publish data relating to pay inequalities. Data published includes the pay and bonus figures between men and women, and includes data from April 2017.
A BBC analysis of the figures after the deadline expired showed that more than three-quarters of UK companies pay men more on average than women. Employment barrister Harini Iyengar advocates more flexible working and greater paternity leave to achieve economic and cultural change.
In the US, women's average annual salary has been estimated as 78% to 82% of that of men's average salary. Beyond overt discrimination, multiple studies explain the gender pay gap in terms of women's higher participation in part-time work and long-term absences from the labor market due to care responsibilities, among other factors.
The extent to which discrimination plays a role in explaining gender wage disparities is somewhat difficult to quantify. A 2010 research review by the majority staff of the United States Congress Joint Economic Committee reported that studies have consistently found unexplained pay differences even after controlling for measurable factors that are assumed to influence earnings – suggestive of unknown/non-measurable contributing factors of which gender discrimination may be one. Other studies have found direct evidence of discrimination – for example, more jobs went to women when the applicant's sex was unknown during the hiring process.
- Economic inequality
- Feminization of poverty
- Gender inequality
- Glass ceiling
- Global Gender Gap Report
- Income inequality metrics
- International inequality
- Lowell Mill Girls
- Material feminism
- For other wage gaps
- Racial wage gap in the United States
- Gay wage gap
- "Gender Pay Gap". www.genderequality.ie.
- "Progress on the Gender Pay Gap: 2019 - Glassdoor". Glassdoor Economic Research. 2019-03-27. Retrieved 2021-03-09.
- O'Brien, Sara Ashley (April 14, 2015). "78 cents on the dollar: The facts about the gender wage gap". CNN Money. New York. Retrieved May 28, 2015.
7% wage gap between male and female college grads a year after graduation even controlling for college major, occupation, age, geographical region and hours worked.
- The Simple Truth About The Gender Wage Gap (Report). Washington, DC. Spring 2018. Archived from the original on February 2017. Retrieved 19 March 2018.
- Gurchiek, Kathy (April 2, 2019). "Study: Global Gender Pay Gap Has Narrowed but Still Exists". SHRM. Retrieved May 25, 2020.
Average Cents/Pence Earned by Women Per Dollar/Pound/Euro of Male Earnings: .95
- Bandara, Amarakoon (3 April 2015). "The Economic Cost of Gender Gaps in Effective Labor: Africa's Missing Growth Reserve". Feminist Economics. 21 (2): 162–186. doi:10.1080/13545701.2014.986153. ISSN 1354-5701. S2CID 154698810.
- Klasen, Stephan (5 October 2018). "The Impact of Gender Inequality on Economic Performance in Developing Countries". Annual Review of Resource Economics. 10 (1): 279–298. doi:10.1146/annurev-resource-100517-023429. ISSN 1941-1340.
- Mandel, Hadas; Shalev, Michael (1 June 2009). "How Welfare States Shape the Gender Pay Gap: A Theoretical and Comparative Analysis". Social Forces. 87 (4): 1873–1911. doi:10.1353/sof.0.0187. ISSN 0037-7732. S2CID 20802566.
- "Women's earnings as a percentage of men's, 1979-2005". United States Department of Labor. Retrieved 16 February 2020.
- Pleijt, Alexandra de; Zanden, Jan Luiten van (2021). "Two worlds of female labour: gender wage inequality in western Europe, 1300–1800†". The Economic History Review. n/a (n/a). doi:10.1111/ehr.13045. ISSN 1468-0289.
- Weichselbaumer, Doris; Winter-Ebmer, Rudolf (2005). "A Meta-Analysis on the International Gender Wage Gap" (PDF). Journal of Economic Surveys. 19 (3): 479–511. CiteSeerX 10.1.1.318.9241. doi:10.1111/j.0950-0804.2005.00256.x. S2CID 88508221.
- Stanley, T. D.; Jarrell, Stephen B. (1998). "Gender Wage Discrimination Bias? A Meta-Regression Analysis". The Journal of Human Resources. 33 (4): 947–973. doi:10.2307/146404. JSTOR 146404.
- Goodley, Simon (2011-08-31). "Women executives could wait 98 years for equal pay, says report". The Guardian. London. Retrieved August 31, 2011.
While the salaries of female executives are increasing faster than those of their male counterparts, it will take until 2109 to close the gap if pay grows at current rates, the Chartered Management Institute reveals.
- "Gender pay gap statistics - Statistics Explained". ec.europa.eu.
- "How Pew Research measured the gender pay gap".
- "Australia's Gender Pay Gap Statistics". Australian Government Workplace Gender Equality Statistics. 20 February 2020.
- "What is the gender pay gap? - The Workplace Gender Equality Agency". web.archive.org. 31 October 2015. Archived from the original on 2015-10-31. Retrieved 7 February 2019.
- "The Gender Pay Gap Explained" (PDF). Retrieved 7 February 2019.
- "Breaking down the gender wage gap" (PDF). dol.gov. Archived from the original (PDF) on 28 July 2018. Retrieved 7 February 2019.
- Bureau, US Census. "Income and Poverty in the United States: 2016". www.census.gov.
- "Do South African women earn 27% less than men?". Africa Check.
- "The Gender Wage Gap: 2017 Earnings Differences by Race and Ethnicity". Institute for Women's Policy Research.
- Sheth, Sonam; Gal, Shayanne; Kiersz, Andy. "7 charts that show the glaring gap between men and women's salaries in the US". Business Insider.
- "Gender Pay Gap". Information is Beautiful.
- "New Numbers Show the Gender Pay Gap Is Real". Bloomberg.com. 2018-03-29. Retrieved 2020-09-26.
- Titcomb, James (26 March 2018). "Google reveals UK gender pay gap of 17pc". The Telegraph – via www.telegraph.co.uk.
- "Microsoft publishes gender pay data for UK". Microsoft News Centre UK. 29 March 2018.
- Blau, Francine D.; Kahn, Lawrence M. (November 2000). "Gender Differences in Pay". Journal of Economic Perspectives. 14 (4): 75–100. doi:10.1257/jep.14.4.75. S2CID 55685704.
- "Women's earnings and employment by industry, 2009". United States Department of Labor. Retrieved 16 February 2020.
- "Women in the Labor Force: A Databook" (PDF). United States Department of Labor. December 2010. Retrieved 16 February 2020.
- "Social Research Update 16: Occupational Gender Segregation". sru.soc.surrey.ac.uk. Retrieved 2019-01-16.
- "Segregation | Eurofound". www.eurofound.europa.eu. Retrieved 2019-01-16.
- Gender inequality at work. Jacobs, Jerry A., 1955-. Thousand Oaks, California: Sage Publications. 1995. ISBN 978-0803956964. OCLC 31013096.
- Blau, Francine D.; Kahn, Lawrence M. (2007). "The Gender Pay Gap: Have Women Gone as Far as They Can?" (PDF). Academy of Management Perspectives. 21 (1): 7–23. doi:10.5465/AMP.2007.24286161. S2CID 152531847.
- Rachel, Ngai, L.; Barbara, Petrongolo (2017). "Gender Gaps and the Rise of the Service Economy" (PDF). American Economic Journal: Macroeconomics. 9 (4): 1–44. doi:10.1257/mac.20150253. ISSN 1945-7707. S2CID 13478654.
- Blau, Francine (2015). "Gender, Economics of". International Encyclopedia of the Social & Behavioral Sciences. International Encyclopedia of the Social & Behavioral Sciences (Second Edition). Elsevier. pp. 757–763. doi:10.1016/B978-0-08-097086-8.71051-8. ISBN 9780080970875.
- Woolston, Chris (2019-01-22). "Scientists' salary data highlight US$18,000 gender pay gap". Nature. 565 (7740): 527. doi:10.1038/d41586-019-00220-y. PMID 30670866.
- Cook, Jackie (February 22, 2021). "What Will it Take to Close the Gender Pay Gap for Good?". Morningstar.com. Retrieved 2021-02-24.
- Thorbecke, Catherine (February 17, 2021). "Gender pay gap persists even at executive level, new study finds". ABC News. Retrieved 2021-02-23.
- Koch, Amanda J.; D'Mello, Susan D.; Sackett, Paul R. (2015). "A meta-analysis of gender stereotypes and bias in experimental simulations of employment decision making". Journal of Applied Psychology. 100 (1): 128–161. doi:10.1037/a0036734. ISSN 1939-1854. PMID 24865576.
- Quadlin, Natasha (2018). "The Mark of a Woman's Record: Gender and Academic Performance in Hiring". American Sociological Review. 83 (2): 331–360. doi:10.1177/0003122418762291. S2CID 148955615.
- "EUR-Lex - 52017DC0678 - EN - EUR-Lex". eur-lex.europa.eu.
- Rosalsky, Greg. "The True Story of the Gender Pay Gap (Ep. 232)". Freakonomics. Retrieved 2019-03-10.
- European Commission (2014). Tackling the gender pay gap in the European Union (PDF). Justice. ISBN 978-92-79-36068-8. Archived from the original (PDF) on 2014-11-17.
- "What are the causes? - European Commission". European Commission. 2016-03-05. Archived from the original on 2016-03-05. Retrieved 2019-01-16.
- Bütikofer, Aline; Jensen, Sissel; Salvanes, Kjell G. (2018). "The role of parenthood on the gender gap among top earners". European Economic Review. 109: 103–123. doi:10.1016/j.euroecorev.2018.05.008. hdl:11250/2497041. S2CID 54929106.
- Kleven, Henrik; Landais, Camille; Søgaard, Jakob Egholt (January 2018). "Children and Gender Inequality: Evidence from Denmark". NBER Working Paper No. 24219. doi:10.3386/w24219.
- Hipp, Lena (2019). "Do Hiring Practices Penalize Women and Benefit Men for Having Children? Experimental Evidence from Germany". European Sociological Review. 36 (2): 250–264. doi:10.1093/esr/jcz056. hdl:10419/205802.
- Mari, Gabriele; Luijkx, Ruud (2020-01-25). "Gender, parenthood, and hiring intentions in sex-typical jobs: Insights from a survey experiment". Research in Social Stratification and Mobility. 65: 100464. doi:10.1016/j.rssm.2019.100464. ISSN 0276-5624.
- "Analysis | Here are the facts behind that '79 cent' pay gap factoid". Washington Post.
- Goldin, Claudia; Mitchell, Joshua (February 2017). "The New Life Cycle of Women's Employment: Disappearing Humps, Sagging Middles, Expanding Tops" (PDF). Journal of Economic Perspectives. 31 (1): 161–182. doi:10.1257/jep.31.1.161. ISSN 0895-3309. S2CID 157933907.
- Mooney Marin, Margaret; Fan, Pi-Ling (1997). "The Gender Gap in Earnings at Career Entry". American Sociological Review. 62 (4): 589–591. doi:10.2307/2657428. JSTOR 2657428.
- Jacobs, Jerry (1989). Revolving Doors: Sex Segregation and Women's Careers. Stanford University Press. ISBN 9780804714891.
- Reskin, Barbara (1993). "Sex Segregation in the Workplace". Annual Review of Sociology. 19: 241–270. doi:10.1146/annurev.soc.19.1.241. JSTOR 2083388.
- Macpherson, David A.; Hirsch, Barry T. (1995). "Wages and Gender Composition: Why do Women's Jobs Pay Less?". Journal of Labor Economics. 13 (3): 426–471. doi:10.1086/298381. JSTOR 2535151. S2CID 18614879.
- Ruth Simpson (2006). "Men in non-traditional occupations: Career entry, career orientation and experience of role strain". Human Resource Management International Digest. 14 (3). CiteSeerX 10.1.1.426.5021. doi:10.1108/hrmid.2006.04414cad.007.
- Martell, Richard F.; et al. (1998). "Sex Stereotyping In The Executive Suite: 'Much Ado About Something'". Journal of Social Behavior & Personality. 13 (1): 127–138.
- Frankforter, Steven A. (1996). "The Progression Of Women Beyond The Glass Ceiling". Journal of Social Behavior & Personality. 11 (5): 121–132.
- Tharenou, Phyllis (2012-10-07). "The Work of Feminists is Not Yet Done: The Gender Pay Gap—a Stubborn Anachronism". Sex Roles. 68 (3–4): 198–206. doi:10.1007/s11199-012-0221-8. ISSN 0360-0025. S2CID 144449088.
- Bryant, Molly (2012). "Gender Pay Gap". Iowa State University Digital Repository.
- "Why Women Volunteer for Tasks That Don't Lead to Promotions". Harvard Business Review. 2018-07-16. Retrieved 2018-07-23.
- National Centre for Social and Economic Modelling (2009), "The impact of a sustained gender wage gap on the economy" (PDF), Report to the Office for Women, Department of Families, Community Services, Housing and Indigenous Affairs: v–vi, archived from the original (PDF) on December 1, 2010
- "Women in their 20s earn more than men of same age, study finds". American Association of University Women.
- Laura Bassett (October 24, 2012) "Closing The Gender Wage Gap Would Create 'Huge' Economic Stimulus, Economists Say" Huffington Post
- Christianne Corbett; Catherine Hill (October 2012). "Graduating to a Pay Gap: The Earnings of Women and Men One Year after College Graduation" (PDF). Washington, DC: American Association of University Women.
- European Commission (March 4, 2011). "Closing the gender pay gap". Archived from the original on March 6, 2011.
- "World Bank World Development Report 2019: The Changing Nature of Work" (PDF).
- Becker, Gary (1993). "Nobel Lecture: The Economic Way of Looking at Behavior". Journal of Political Economy. 101 (3): 387–389. doi:10.1086/261880. JSTOR 2138769. S2CID 15060650.
- Cain, Glen G. (1986). "The Economic Analysis of Labor Market Discrimination: A Survey". In Ashenfelter, Orley; Laynard, R. (eds.). Handbook of Labor Economics: Volume I. pp. 710–712. ISBN 9780444534521.
- Ransom, Michael; Oaxaca, Ronald L. (January 2005). "Intrafirm Mobility and Sex Differences in Pay" (PDF). Industrial and Labor Relations Review. 58 (2): 219–237. CiteSeerX 10.1.1.224.3070. doi:10.1177/001979390505800203. hdl:10419/20633. JSTOR 30038574. S2CID 153480792.
- Tharenou, Phyllis (7 October 2012). "The Work of Feminists is Not Yet Done: The Gender Pay Gap—a Stubborn Anachronism". Sex Roles. 68 (3–4): 198–206. doi:10.1007/s11199-012-0221-8. S2CID 144449088.
- OECD. OECD Employment Outlook – 2008 Edition Summary in English. OECD, Paris, 2008, p. 3–4.
- OECD. OECD Employment Outlook. Chapter 3: The Price of Prejudice: Labour Market Discrimination on the Grounds of Gender and Ethnicity. OECD, Paris, 2008.
- U.S. Government Accountability Office. Women's Earnings: Federal Agencies Should Better Monitor Their Performance in Enforcing Anti-Discrimination Laws. Retrieved on April 1, 2011.
- U.S. Government Accountability Office. Report Women's Earnings: Federal Agencies Should Better Monitor Their Performance in Enforcing Anti-Discrimination Laws.
- "Statement of Jocelyn Frye". www.eeoc.gov. Retrieved 2018-07-01.
- "New Law In Iceland Aims At Reducing Country's Gender Pay Gap". NPR.org. Retrieved 2018-07-01.
- "The Gender Earnings Gap in the Gig Economy: Evidence from over a Million Rideshare Drivers" (PDF). Stanford University. January 2018. Retrieved 8 March 2018.
- "Earning more is never a simple choice for women". Financial Times. 6 March 2018. Retrieved 16 March 2018.
- "Pay Disparity Between Men and Women Even Exists in the Gig Economy". Fortune. 6 February 2018. Retrieved 16 March 2018.
- "State of the World's Mothers 2007" (PDF). Save the Children. p. 57. Retrieved 16 February 2020.
- OECD. OECD Employment Outlook 2008 – Statistical Annex Archived December 6, 2008, at the Wayback Machine. OECD, Paris, 2008, p. 358.
- Schwab, Klaus; et al. (2015). "The Global Gender Gap Report 2015" (PDF). World Economic Forum. pp. 8–9. Retrieved 29 September 2016.
- "WGEA Data Explorer". WGEA Data Explorer.
- "Frequently asked questions about pay equity". Department of Commerce. Archived from the original on April 22, 2011. Retrieved May 6, 2011.
- "Australia's Gender Pay Gap Statistics 2021". 2021.
- "Australia's Gender Pay Gap Statistics 2021". 2021.
- Watson, Ian (2010). "Decomposing the Gender Pay Gap in the Australian Managerial Labour Market" (PDF). Australian Journal of Labour Economics. Macquarie University. 13 (1): 49–79. Archived from the original (PDF) on March 6, 2011.
- "Gender Differences in Employment and Why They Matter". World Development Report 2012. World Development Report. The World Bank. 2011-09-12. pp. 198–253. doi:10.1596/9780821388105_ch5. ISBN 9780821388105.
- Derocher, John. "Student". Global Gender Gap Report 2018. World Economic Forum. Retrieved December 5, 2019.
- "The Global Gender Gap Report 2017". World Economic Forum. Retrieved 2018-07-01.
- Manhaes, Gisele Flores Caldas. "IBGE - Agência de Notícias". IBGE - Agência de Notícias (in Portuguese). Retrieved 2018-04-12.
- Rodrigues, Joao Carlos de Melo Miranda. "IBGE - Agência de Notícias". IBGE - Agência de Notícias (in Portuguese). Retrieved 2018-04-12.
- Madalozzo, Regina (June 2010). "Occupational segregation and the gender wage gap in Brazil: an empirical analysis". Economia Aplicada. 14 (2): 147–168. doi:10.1590/S1413-80502010000200002. ISSN 1413-8050.
- Larson, Paul D.; Morris, Matthew (2014). "Sex and salary: Does size matter? (A survey of supply chain managers)". Supply Chain Management. 19 (4): 385–394. doi:10.1108/SCM-08-2013-0268.
- Congress, C. L. (n.d.). Women in the Workforce: Still a Long Way from Equality Archived 2016-03-10 at the Wayback Machine. Retrieved November 23, 2012, from Canadian Labour Congress.
- For example, Alberta Human Rights Act, RSA 2000, c A-25.5 at §6 in Alberta and Pay Equity Act, RSO 1990, c P.7 in Ontario
- Derocher, John. "Student". Global Gender Pay Gap Report. World Economic Forum. Retrieved December 5, 2019.
- Schwab, Klaus; Samans, Richard; Zahidi, Saadia; Bekhouche, Yasmina; Ugarte, Paulina Padilla; Ratcheva, Vesselina; Hausmann, Ricardo; Tyson, Laura D'Andrea (2015). The Global Gender Gap Report 2015 (PDF). World Economic Forum.
- Yoon, Jayoung (2015). "Labor Market Outcomes for Women in East Asia". Asian Journal of Women's Studies. 21 (4): 384–408. doi:10.1080/12259276.2015.1106861. S2CID 155639216.
- Jorge Mateo, Manauri (6 July 2016). "La banca aprovecha que en Dominicana hay más mujeres que hombres". Forbes México (in Spanish). Archived from the original on 9 July 2016. Retrieved 9 July 2016.
- Hausmann, Ricardo; Lauren D. Tyson; Saadia Zahidi (2009-01-01). The Global Gender Gap Report 2009. World Economic Forum. ISBN 9789295044289.
- European Commission. The situation in the EU. Archived 2017-12-19 at the Wayback Machine Retrieved on July 12, 2011.
- Nupponen, Sakari. "Miehet painavat töitä niska limassa – naiset eivät". taloussanomat.fi. Taloussanomat. Retrieved 28 December 2015.
- "Tehtyjen työtuntien määrä on laskenut tasaisesti vuosina 1995-2005". tilastokeskus.fi. Tilastokeskus. Retrieved 28 December 2015.
- "Pay gap between men and women: MEPs call for binding measures to close it | News | European Parliament". News | European Parliament. Retrieved 2017-02-05.
- "Wo liegen die Ursachen? - Europäische Kommission". ec.europa.eu. Retrieved 2012-02-18.
- "The gender pay gap situation in the EU". European Commission - European Commission.
- Järvenpää, Heidi (13 February 2017). "Naisen euron mysteeri – asiantuntijat selittävät palkkaeron syitä". Jyväskylän ylioppilaslehti (in Finnish). Retrieved 27 September 2019.
- John, Derocher. "Student". Global Gender Gap Report. World Economic Forum. Retrieved December 5, 2019.
- "Pressemitteilungen - Gender Pay Gap 2013 bei Vollzeitbeschäftigten besonders hoch - Statistisches Bundesamt (Destatis)". Federal Statistical Office of German. Archived from the original on 2014-11-15. Retrieved 2016-02-18.
- "Beschäftigungsperspektiven von Frauen - Nur 2 Prozent Gehaltsunterschied". German Institute for Economic Research (in German). Retrieved 2016-02-18.
- "Gender Pay Gap: Wie groß ist der Unterschied wirklich?". Die Zeit. ISSN 0044-2070. Retrieved 2016-02-18.
- "Studie: Geschlecht senkt Gehalt um sieben Prozent". Die Zeit. ISSN 0044-2070. Retrieved 2016-02-18.
- "The gender pay gap in Luxembourg" (PDF). ec.europa.eu. November 2018. Retrieved 2018-12-12.
- "LMF1.5: Gender pay gaps for full-time workers and earnings differentials by educational attainment" (PDF). www.oecd.org. September 18, 2018. Retrieved 2018-12-12.
- "BULLETIN DU STATEC 1 Salaires, emploi et conditions de travail Sommaire" (PDF). statistiques.public.lu. February 2017. Retrieved 2018-12-12.
- "A decomposition of the unadjusted gender pay gap using Structure of Earnings Survey data". May 2018.
- "Gender, Employment, and Parenthood: The Consequences of Work-Family Policies" (PDF). 2015.
- Estévez-Abe, Margarita (October 2006). "Gendering the Varieties of Capitalism. A Study of Occupational Segregation by Sex in Advanced Industrial Societies". World Politics. 59 (1): 142–175. doi:10.1353/wp.2007.0016. JSTOR 40060158. S2CID 27503441.
- CBS. "Krijgen mannen en vrouwen gelijk loon voor gelijk werk?". www.cbs.nl (in Dutch). Retrieved 2016-12-01.
- "Gender Pay Gap in the Formal Sector: 2006 - 2013(September, 2013)". Wage Indicator Data Report.
- "Eleventh Five Year Plan 2007-2012". planningcommission.nic.in.
- Alfarhan, Usamah F. (2015). "Gender Earnings Discrimination in Jordan: Good Intentions Are Not Enough". International Labour Review. 154 (4): 563–580. doi:10.1111/j.1564-913X.2015.00252.x. S2CID 155234203.
- Delfino, Devon. "12 countries where men earn significantly more than women". Business Insider.
- "The Pursuit of Gender Equality - An Uphill Battle". OECD ILibrary.
- Park, Katrin. "S. Korea reflects lag in gender equality: Column". USA TODAY.
- Moon, Grace. "The young Koreans pushing back on a culture of endurance". www.bbc.com.
- The Pursuit of Gender Equality: An Uphill Battle. www.oecd-ilibrary.org. 2017. doi:10.1787/9789264281318-en. ISBN 9789264281301.
- Fyodor Tertitskiy (23 December 2015). "Life in North Korea - the adult years". the Guardian. Retrieved 2017-04-13.
- "Gender Pay Gap". Women.govt. Retrieved 10 December 2017.
- "Gender Wage Gaps" (PDF). oecd. Retrieved 10 December 2017.
- Newell, A.; Reilly, B. (2001). "The Gender Pay Gap in the Transition from Communism: Some Empirical Evidence". Economic Systems. 25 (4): 287–304. CiteSeerX 10.1.1.202.9177. doi:10.1016/S0939-3625(01)00028-0. S2CID 17330181.
- Ogloblin, C. G. (1999). "The Gender Earnings Differential in the Russian Transition Economy". Industrial and Labor Relations. 52 (4): 602–634. doi:10.1177/001979399905200406. S2CID 154728371.
- Katz, K. (2001) Gender, Work and Wages in the Soviet Union. A Legacy of Discrimination. Palgrave. ISBN 978-0-333-73414-8.
- Gerry, C. J.; Kim, B.; Li, C. A. (2004). "The gender wage gap and wage arrears in Russia Evidence from RLMS" (PDF). Journal of Population Economics. 17 (2): 267–288 [p. 268]. doi:10.1007/s00148-003-0160-3. S2CID 7435706.
- Hansberry, R. (2004). "An Analysis of Gender Wage Differentials in Russia from 1996–2002". William Davidson Institute. Working Paper Number 720. SSRN 615801.
- Kazakova, E. (2007). "Wages in a Growing Russia. When is a 10 percent rise in the gender wage gap good news?". Economics of Transition. 15 (2): 365–392. doi:10.1111/j.1468-0351.2007.00282.x. S2CID 58942405.
- Pyper, Douglas; McGuinness, Feargal (7 November 2018). "The gender pay gap" – via researchbriefings.parliament.uk. Cite journal requires
- King, Mark (November 22, 2012). "Gender pay gap falls for full-time workers". The Guardian. Retrieved December 19, 2015.
- "Annual Survey of Hours and Earnings, 2012 Provisional Results". Office for National Statistics. 22 November 2012.
- Thomson, Victoria (October 2006). "How Much of the Remaining Gender Pay Gap is the Result of Discrimination, and How Much is Due to Individual Choices?" (PDF). International Journal of Urban Labour and Leisure. 7 (2). Retrieved September 26, 2012.
- "Annual Survey of Hours and Earnings: 2015 Provisional Results". Office of National Statistics. Cite journal requires
- "Women in their 20s earn more than men of same age, study finds". The Guardian. 29 August 2015.
- "The Equality Act 2010 (Equal Pay Audits) Regulations 2014". www.legislation.gov.uk.
- Staff writer (14 July 2015). "David Cameron sets out plans to tackle gender pay gap". BBC News. Retrieved 11 February 2018.
- Staff writer (12 February 2016). "Firms forced to reveal gender pay gap". BBC News. Retrieved 11 February 2018.
- "PAYnotes on Gender Pay Reporting – What employers need to know". Paydata Ltd. February 26, 2016.
- Staff writer (5 April 2018). "Final gender pay gap figures revealed". BBC News. Retrieved 5 April 2018.
- Iyengar, Harini (5 April 2018). "The gender pay gap data is in – what now?". i (newspaper). Retrieved 5 April 2018.
- O'Brien, Sara Ashley (April 14, 2015). "78 cents on the dollar: The facts about the gender wage gap". CNN Money. New York. Retrieved May 28, 2015.
- "Women in the labor force: a databook" (PDF). US Bureau of Labor Statistics. December 2018. Retrieved 24 August 2019.
- "An Analysis of Reasons for the Disparity in Wages Between Men and Women" (PDF). US Department of Labor; CONSAD Research Corp. Archived from the original (PDF) on March 27, 2016. Retrieved August 24, 2019.
- Jackson, Brooks (June 22, 2012). "Obama's 77-Cent Exaggeration". FactCheck.org.
- Graduating to a Pay Gap – The Earnings of Women and Men One Year after College Graduation (PDF)
- "Invest in Women, Invest in America: A Comprehensive Review of Women in the U.S. Economy". Washington, DC: United States Congress Joint Economic Committee. December 2010. p. 80.
- Altonji, Joseph G.; Blank, Rebecca (1999). "Chapter 48: Race and gender in the labor market". Handbook of Labor Economics. 3 (C): 3143–3259. doi:10.1016/S1573-4463(99)30039-0. ISSN 1573-4463.
- Brown, Anna; Patten, Eileen (3 April 2017). "The Narrowing, But Persistent, Gender Gap in Pay". Washington, D.C.: Pew Research Center.
|Look up pay gap in Wiktionary, the free dictionary.| | https://library.kiwix.org/wikipedia_en_top_maxi/A/Gender_pay_gap | 21 |
20 | "Social responsibility—that is, a personal investment in the well-being of others and of the planet—doesn't just happen. It takes intention, attention, and time."
—Sheldon Berman, "Educating for Social Responsibility," Educational Leadership, 11/1990
The greatest economic crisis since the depression of the 1930s grips America. This teachable moment suggests two challenging questions: 1) How can teachers help their students understand what is happening? 2) How might teachers work with their students to create a community of socially responsible citizens who act on the crisis to make a difference in their school or community?
You might begin your work with students by having the class read about and discuss the current economic crisis facing our country and the world. Consider using any of the materials on this topic recently posted on TeachableMoment.Org. These lessons provide background information and student activities on the economic crisis, including discussion questions, inquiry possibilities, writing assignments and citizenship suggestions. Beginning with the most recent posting, they include:
- FDR and Barack Obama: Leading the Nation Through Hard Times
- What will President Obama Do About America's Economic Nightmare?
- Presidential Election 2008, Second Debate: Financial Crisis
- Financial Crisis: Bailout or Rescue? Student Readings & a DBQ
- Presidential Election 2008: Financial Crisis
A. Make a start
Consider using a group activity called "Concert" to focus the class on group cooperation and problem-solving. (See "Concert" on TeachableMoment.Org.) The exercise demonstrates dramatically the roles students play, the problem-solving and decision-making strategies they employ, and the behaviors they exhibit that may help or hinder them when they are presented with a problem.
B. Identify school/community problems and their effects
Schools, towns and cities are cutting staffs and programs as tax revenues and government funding fall.
Divide students into groups of four to six to 1) identify the problems they are aware of in their school and in the wider community because of the severe economic downturn and 2) comment on how these problems have affected them.
Ask each group to appoint a note-taker. Then call on one student in each group to speak briefly in their group, without interruption, about a problem they are aware of in their school or community as a result of the economic crisis. Then give time for other members of each group to identify a problem. Next, ask each group to conduct a second go-around. This time, each student will briefly discuss how each problem has affected them. After everyone has had an opportunity to speak on each issue, ask the recorders to report to the class. List on the chalkboard, without comment, the problems identified and the effects felt for later reference.
Continue the discussion by posing additional questions such as: What major problems have their school and community officials identified? What are they doing about them? What don't students know? What might they know about a school issue that officials don't? What problem or issue seems to interest students most?
C. Discuss experiences with acting on a public issue
Talk to students about your experience in taking action on a public issue or making a difference in the lives of others. Describe how that felt and why it made a difference in your life. Take your time. Provide detail. Answer questions.
Have any students tried to make a difference on a school or community problem? What was the problem? What did the student do? What obstacles were there? What did the student do about them? How did working on the problem make the student feel?
D. Consider a class project
Use the words of Martin Luther King Jr. to begin a discussion of different attitudes toward social activism:
"It is interesting to notice that the extreme pessimist and the extreme optimist agree on at least one point. They both feel that we must sit down and do nothing in the area of race relations. The extreme optimist says do nothing because integration is inevitable. The extreme pessimist says do nothing because integration is impossible."
Can students see themselves as social activists, or do they already? Are they extreme pessimists? Extreme optimists? Why? What would an effective social activist be like? Can they name any? What qualities must such a person have or develop? How would such a person behave?
How might acting together on a problem of common interest help students not only to make a contribution on a public issue but also reduce the stress and fear they may be living with as a result of the economic crisis?
Ask students to explore the possibility of working as a group to learn about and do something about a school or community problem they have identified. Invite suggestions about what that problem might be.
E. Launch a discovery process
Discuss with students how they might select and carefully define a problem to work on as a class project. The session might begin with a period of brainstorming. The class might then carefully consider proposals that are limited, focused and doable. Ideally the project will be a productive response to a school problem created by the budget crisis that students are already familiar with.
If, as is likely, the problem students want to address results from a cut in the school or community budget, students need to consider some questions:
- Why did officials cut this but not that? What priorities did they have in mind?
- Might money for the program be found elsewhere in the budget? Specifically, where?
- Is there some other possibility of funding the desired program? Some new source of revenue?
- If not, might there be another way of keeping it going? Students need to be realistic, as well as creative.
The project idea must be clear. What does the group want to accomplish? Based on what students know about challenges facing their school and community, how likely are they to succeed? How much class time can be allotted to the project?
Before students launch their action project, they should research the problem further. This might include conducting interviews with school and public officials, PTA and civic group leaders.
What specific questions need answers? Devote time to brainstorming interview questions, analyzing them, and rewriting them if necessary. See "Thinking Is Questioning" on TeachableMoment for suggestions on formulating and analyzing questions. Role plays may be useful, with the teacher, perhaps, playing the official or community member to be interviewed. Assess the effectiveness of the student in the role play.
Students might also need to gather information from local newspaper, radio and TV reports. Establish deadlines and methods for students to report to the class on their research.
Once the research has been completed, have a class discussion. What have students learned? Does the action project still seem feasible? If so, the teacher needs to clear it with a department chairperson and the principal. Inform parents about what students are doing and why.
F. Plan the project
Key to any success for the project is careful planning.
Consider with students:
- What additional information, if any, needs to be gathered? By whom?
- What tasks and actions does the project require? Who will perform them?
- How will the project be coordinated? By an executive committee? A project leader?
- How much student control of the project is possible? Students should feel the project is theirs. Define clearly what teacher control there must be and why.
- How much class time can be allotted to the project?
Media attention to the project might be desirable. If so, include in the organizational plan a media group to seek meetings with a local newspaper, radio station, or TV channel to explain what students are doing and why. Would establishing a website be desirable?
Develop with students a written plan for the project that includes everyone. The plan should detail the project's goal and proposed actions (such as additional research, meeting with school or town officials, speaking at their public sessions, finding and working with allies in the PTA or a public citizens' group, generating publicity for the campaign), potential obstacles to success and how they might be overcome.
G. Assess the project experience
Consider having students keep a journal during the project period that might include a daily report on learning experiences, obstacles encountered and how they were dealt with, successes, and connections between the project and what students may have studied in class.
Devote a class session to assessing the project experience. Did students make a good project choice? Why or why not? How effective was the organizational plan? What do they think they learned? How? What was successful? Unsuccessful? Why? What would they do differently next time? Why? How did they experience the project personally? Was it fun, scary, boring, exciting? Did working on the project affect their thinking or their feelings about the economic crisis? Did the project affect their thinking about being a citizen activist?
This lesson was written for TeachableMoment, a project of Morningside Center for Teaching Social Responsibility. We welcome your comments. Please email them to: email@example.com
Previous Year Question & Answers | UPSC Mains 2020 (GS-III)
Q1. Define potential GDP and explain its determinants. What are the factors that have been inhibiting India from realizing its potential GDP?
Most economists and governments use Gross Domestic Product, also known as GDP, or real GDP. GDP represents the total market value of all the goods and services produced by a state over a given period of time.
Like GDP, potential GDP represents the market value of goods and services, but rather than capturing the current objective state of a nation’s economic activity, potential GDP attempts to estimate the highest level of output an economy can sustain over a period of time.
- It assumes that an economy has achieved full employment and that aggregate demand does not exceed aggregate supply.
- Sustainability is the key concept: every economy has certain natural limits, determined by its available labour force, technology, natural resources, and other constraints.
- When GDP falls short of that natural limit, it means the country is failing to live up to its economic potential.
- When GDP exceeds that natural limit, inflation is likely to follow. This is why potential GDP is sometimes referred to as potential output or natural GDP.
The difference between potential and real GDP is called the output gap.
- If real GDP falls short of potential GDP (i.e., if the output gap is negative), it means demand for goods and services is weak. It’s a sign that the economy may not be at full employment.
- If the real GDP exceeds potential GDP (i.e., if the output gap is positive), it means the economy is producing above its sustainable limits, and that aggregate demand is outstripping aggregate supply. In this case, inflation and price increases are likely to follow. (A simple formula for the gap is sketched below.)
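A compact way to express this relationship is to write the output gap as a percentage of potential output. The notation below is illustrative and not drawn from the original answer:

```latex
% Output gap as a percentage of potential GDP (illustrative notation)
\[
  \text{Output gap (\%)} = \frac{Y_{\text{actual}} - Y_{\text{potential}}}{Y_{\text{potential}}} \times 100
\]
% Example: Y_actual = 190, Y_potential = 200  =>  (190 - 200)/200 * 100 = -5 (economy below capacity)
```

A negative gap indicates slack (weak demand, unemployment), while a positive gap indicates overheating and likely inflation.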
Determinants: Factors affecting potential GDP and real GDP
- Aggregate demand
- Short-term aggregate supply
- Quantity and quality of factors of production
For aggregate demand, examples of factors are household consumption, business investment, exports, and government spending. In this case, the factors also include monetary policy and fiscal policy.
Meanwhile, the factors affecting short-run aggregate supply (and real GDP) are the cost of raw materials, energy prices, wages, taxes, and subsidies. They all affect the cost of production in the economy.
Factors affecting the quantity and quality of production factors include: growth in labour supply, improvement of workforce quality, capital stock growth, technological advances, and increased availability of natural resources.
Factors inhibiting India from achieving its Potential GDP:
- The twin balance sheet challenge of stressed banks and over-leveraged corporates, and the consequent reduction in savings and investment
- Low-skilled labour, high unemployment and inequality, which keep demand low
- Agriculture, which spurs rural demand, is dependent on an erratic monsoon
- Inadequate capital expenditure: the capital stock, in a broader sense, also includes infrastructure such as roads, bridges, and ports, where investment has lagged
- Lagging technological advance: technological progress is essential for increasing the productivity of other production factors, such as machinery and labour; by using more sophisticated machines, we can produce more output using the same inputs
- Inefficient use of natural resources such as iron ore, owing to corruption and malpractice
Way forward:
- National Infrastructure Pipeline
- Skilling Labour
- Transparent allocation of natural resources
- Incentivizing industries for capital expenditure
- Bring FDI, Promoting PPP
- Reviving banking sector
In chemistry, isomers are molecules or polyatomic ions with identical molecular formulas -- that is, same number of atoms of each element -- but distinct arrangements of atoms in space. Isomerism is the existence or possibility of isomers.
Isomers do not necessarily share similar chemical or physical properties. Two main forms of isomerism are structural or constitutional isomerism, in which bonds between the atoms differ; and stereoisomerism or spatial isomerism, in which the bonds are the same but the relative positions of the atoms differ.
Isomeric relationships form a hierarchy. Two chemicals might be the same constitutional isomer, but upon deeper analysis be stereoisomers of each other. Two molecules that are the same stereoisomer as each other might be in different conformational forms or be different isotopologues. The depth of analysis depends on the field of study or the chemical and physical properties of interest.
The English word "isomer" is a back-formation from "isomeric", which was borrowed through German isomerisch from Swedish isomerisk; which in turn was coined from Greek o? isómeros, with roots isos = "equal", méros = "part".
For example, there are three distinct compounds with the molecular formula C3H8O:
The first two isomers of C3H8O shown are propanols, that is, alcohols derived from propane. Both have a chain of three carbon atoms connected by single bonds, with the remaining carbon valences being filled by seven hydrogen atoms and by a hydroxyl group comprising the oxygen atom bound to a hydrogen atom. These two isomers differ in which carbon the hydroxyl is bound to: either to an extremity of the carbon chain, giving propan-1-ol (1-propanol, n-propyl alcohol, n-propanol; I), or to the middle carbon, giving propan-2-ol (2-propanol, isopropyl alcohol, isopropanol; II). These can be described by the condensed structural formulas CH3CH2CH2OH and CH3CH(OH)CH3.
The third isomer of C3H8O is the ether methoxyethane (ethyl methyl ether). Unlike the other two, it has the oxygen atom connected to two carbons, and all eight hydrogens bonded directly to carbons. It can be described by the condensed formula CH3OCH2CH3.
The alcohol "3-propanol" is not another isomer, since the difference between it and 1-propanol is not real; it is only the result of an arbitrary choice in the ordering of the carbons along the chain. For the same reason, "ethoxymethane" is not another isomer.
1-Propanol and 2-propanol are examples of positional isomers, which differ by the position at which certain features, such as double bonds or functional groups, occur on a "parent" molecule (propane, in that case).
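The fact that these three compounds share one molecular formula while differing in connectivity can be checked with a short script. The sketch below is illustrative only and assumes the open-source RDKit toolkit (not something used by this article); the SMILES strings are standard encodings of the three isomers.

```python
# Minimal sketch: the three structural isomers of C3H8O have the same
# molecular formula but different canonical connectivity (requires RDKit).
from rdkit import Chem
from rdkit.Chem import rdMolDescriptors

isomers = {
    "propan-1-ol":   "CCCO",    # 1-propanol
    "propan-2-ol":   "CC(C)O",  # 2-propanol (isopropyl alcohol)
    "methoxyethane": "CCOC",    # ethyl methyl ether
}

for name, smiles in isomers.items():
    mol = Chem.MolFromSmiles(smiles)
    formula = rdMolDescriptors.CalcMolFormula(mol)  # "C3H8O" for all three
    canonical = Chem.MolToSmiles(mol)               # differs for each isomer
    print(f"{name:14s} formula={formula}  canonical SMILES={canonical}")
```

All three molecules report the same formula, while their canonical SMILES strings differ, which is exactly the structural-isomer relationship described above.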
There are also three structural isomers of the hydrocarbon C3H4:
In two of the isomers, the three carbon atoms are connected in an open chain, but in one of them (propadiene or allene; I) the carbons are connected by two double bonds, while in the other (propyne or methylacetylene, II) they are connected by a single bond and a triple bond. In the third isomer (cyclopropene; III) the three carbons are connected into a ring by two single bonds and a double bond. In all three, the remaining valences of the carbon atoms are satisfied by the four hydrogens.
Again, note that there is only one structural isomer with a triple bond, because the other possible placement of that bond is just drawing the three carbons in a different order. For the same reason, there is only one cyclopropene, not three.
Tautomers are structural isomers which readily interconvert, so that two or more species co-exist in equilibrium, such as the keto and enol forms of a carbonyl compound (keto-enol tautomerism).
The structure of some molecules is sometimes described as a resonance between several apparently different structural isomers. The classical example is 1,2-dimethylbenzene (o-xylene), which is often described as a mix of two apparently distinct structural isomers (the two Kekulé structures, which differ in the placement of the double bonds within the ring).
However, neither of these two structures describes a real compound; they are fictions devised as a way to describe (by their "averaging" or "resonance") the actual delocalized bonding of o-xylene, which is the single isomer of C8H10 with a benzene core and two methyl groups in adjacent positions.
In theory, one can imagine any spatial arrangement of the atoms of a molecule or ion to be gradually changed to any other arrangement in infinitely many ways, by moving each atom along an appropriate path. However, changes in the positions of atoms will generally change the internal energy of a molecule, which is determined by the angles between bonds in each atom and by the distances between atoms (whether they are bonded or not).
A conformational isomer is an arrangement of the atoms of the molecule or ion for which the internal energy is a local minimum; that is, an arrangement such that any small changes in the positions of the atoms will increase the internal energy, and hence result in forces that tend to push the atoms back to their original positions. Changing the shape of the molecule from one such energy minimum A to another energy minimum B will therefore require going through configurations that have higher energy than A and B. That is, a conformational isomer is separated from any other isomer by an energy barrier: the amount of energy that must be temporarily added to the internal energy of the molecule in order to go through all the intermediate conformations along the "easiest" path (the one that minimizes that amount).
A classic example of conformational isomerism is cyclohexane. Alkanes generally have minimum energy when the C-C-C angles are close to 110 degrees. Conformations of the cyclohexane molecule with all six carbon atoms on the same plane have a higher energy, because some or all of those angles must be far from that value (120 degrees for a regular hexagon). Thus the conformations which are local energy minima have the ring twisted in space, according to one of two patterns known as chair (with the carbons alternately above and below their mean plane) and boat (with two opposite carbons above the plane, and the other four below it).
If the energy barrier between two conformational isomers is low enough, it may be overcome by the random inputs of thermal energy that the molecule gets from interactions with the environment or from its own vibrations. In that case, the two isomers may as well be considered a single isomer, depending on the temperature and the context. For example, the two conformations of cyclohexane convert to each other quite rapidly at room temperature (in the liquid state), so that they are usually treated as a single isomer in chemistry.
In some cases, the barrier can be crossed by quantum tunneling of the atoms themselves. This last phenomenon prevents the separation of stereoisomers of fluorochloroamine (NHFCl) or hydrogen peroxide (H2O2), because the two conformations with minimum energy interconvert in a few picoseconds even at very low temperatures.
Conversely, the energy barrier may be so high that the easiest way to overcome it would require temporarily breaking and then re-forming one or more bonds of the molecule. In that case, the two isomers usually are stable enough to be isolated and treated as distinct substances. These isomers are then said to be different configurational isomers or "configurations" of the molecule, not just two different conformations. (However, one should be aware that the terms "conformation" and "configuration" are largely synonymous outside of chemistry, and their distinction may be controversial even among chemists.)
Interactions with other molecules of the same or different compounds (for example, through hydrogen bonds) can significantly change the energy of conformations of a molecule. Therefore, the possible isomers of a compound in solution or in its liquid and solid phases may be very different from those of an isolated molecule in vacuum. Even in the gas phase, some compounds like acetic acid will exist mostly in the form of dimers or larger groups of molecules, whose configurations may be different from those of the isolated molecule.
Two compounds are said to be enantiomers if their molecules are mirror images of each other, that cannot be made to coincide only by rotations or translations -- like a left hand and a right hand. The two shapes are said to be chiral.
A classical example is bromochlorofluoromethane (CHFClBr). The two enantiomers can be distinguished, for example, by whether the path from fluorine to chlorine to bromine turns clockwise or counterclockwise as seen from the hydrogen atom. In order to change one conformation to the other, at some point those four atoms would have to lie on the same plane -- which would require severely straining or breaking their bonds to the carbon atom. The corresponding energy barrier between the two conformations is so high that there is practically no conversion between them at room temperature, and they can be regarded as different configurations.
The compound chlorofluoromethane (CH2ClF), in contrast, is not chiral: the mirror image of its molecule is also obtained by a half-turn about a suitable axis.
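A small, purely illustrative sketch (again assuming the RDKit toolkit, which the article itself does not use) shows the same contrast: the central carbon of bromochlorofluoromethane is flagged as a potential stereocentre, while chlorofluoromethane has none.

```python
# Minimal sketch: CHFClBr has a stereocentre (four different substituents on C),
# CH2ClF does not (two identical hydrogens). Requires RDKit.
from rdkit import Chem

chiral  = Chem.MolFromSmiles("FC(Cl)Br")  # bromochlorofluoromethane, CHFClBr
achiral = Chem.MolFromSmiles("FCCl")      # chlorofluoromethane, CH2ClF

print(Chem.FindMolChiralCenters(chiral,  includeUnassigned=True))  # e.g. [(1, '?')] - one unassigned centre
print(Chem.FindMolChiralCenters(achiral, includeUnassigned=True))  # [] - no stereocentre
```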
Another example of a chiral compound is 2,3-pentadiene (CH3-CH=C=CH-CH3), a hydrocarbon that contains two overlapping double bonds. The double bonds are such that the three middle carbons are in a straight line, while the first three and last three lie on perpendicular planes. The molecule and its mirror image are not superimposable, even though the molecule has an axis of symmetry. The two enantiomers can be distinguished, for example, by the right-hand rule. This type of isomerism is called axial isomerism.
Enantiomers behave identically in chemical reactions, except when reacted with chiral compounds or in the presence of chiral catalysts, such as most enzymes. For this latter reason, the two enantiomers of most chiral compounds usually have markedly different effects and roles in living organisms. In biochemistry and food science, the two enantiomers of a chiral molecule -- such as glucose -- are usually identified, and treated as very different substances.
Each enantiomer of a chiral compound typically rotates the plane of polarized light that passes through it. The rotation has the same magnitude but opposite senses for the two isomers, and can be a useful way of distinguishing and measuring their concentration in a solution. For this reason, enantiomers were formerly called "optical isomers". However, this term is ambiguous and is discouraged by the IUPAC.
Some enantiomer pairs (such as those of trans-cyclooctene) can be interconverted by internal motions that change bond lengths and angles only slightly. Other pairs (such as CHFClBr) cannot be interconverted without breaking bonds, and therefore are different configurations.
A double bond between two carbon atoms forces the remaining four bonds (if they are single) to lie on the same plane, perpendicular to the plane of the bond as defined by its π orbital. If the two bonds on each carbon connect to different atoms, two distinct conformations are possible, that differ from each other by a twist of 180 degrees of one of the carbons about the double bond.
The classical example is dichloroethene (C2H2Cl2), specifically the structural isomer that has one chlorine bonded to each carbon. It has two conformational isomers, with the two chlorines on the same side or on opposite sides of the double bond's plane. They are traditionally called cis (from Latin meaning "on this side of") and trans ("on the other side of"), respectively; or Z and E in the IUPAC recommended nomenclature. Conversion between these two forms usually requires temporarily breaking bonds (or turning the double bond into a single bond), so the two are considered different configurations of the molecule.
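As with the sketches above, this can be made concrete with a short, illustrative RDKit snippet (an assumption of this example, not part of the article): the two forms share a constitution, but their isomeric SMILES carry different stereo markers.

```python
# Minimal sketch: cis- and trans-1,2-dichloroethene have identical molecular
# formulas but different double-bond geometry (requires RDKit).
from rdkit import Chem
from rdkit.Chem import rdMolDescriptors

cis   = Chem.MolFromSmiles(r"Cl/C=C\Cl")  # Z isomer: chlorines on the same side
trans = Chem.MolFromSmiles(r"Cl/C=C/Cl")  # E isomer: chlorines on opposite sides

for label, mol in [("cis (Z)", cis), ("trans (E)", trans)]:
    print(label,
          rdMolDescriptors.CalcMolFormula(mol),  # "C2H2Cl2" for both
          Chem.MolToSmiles(mol))                 # stereo-aware SMILES differ
```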
More generally, cis-trans isomerism (formerly called "geometric isomerism") occurs in molecules where the relative orientation of two distinguishable functional groups is restricted by a somewhat rigid framework of other atoms.
For example, in the cyclic alcohol inositol (a six-fold alcohol of cyclohexane), the six-carbon cyclic backbone largely prevents the hydroxyl and the hydrogen on each carbon from switching places. Therefore, one has different configurational isomers depending on whether each hydroxyl is on "this side" or "the other side" of the ring's mean plane. Discounting isomers that are equivalent under rotations, there are nine isomers that differ by this criterion, and behave as different stable substances (two of them being enantiomers of each other). The most common one in nature (myo-inositol) has the hydroxyls on carbons 1, 2, 3 and 5 on the same side of that plane, and can therefore be called cis-1,2,3,5-trans-4,6-cyclohexanehexol. And each of these cis-trans isomers can possibly have stable "chair" or "boat" conformations (although the barriers between these are significantly lower than those between different cis-trans isomers).
More generally, atoms or atom groups that can form three or more non-equivalent single bonds (such as the transition metals in coordination compounds) may give rise to multiple stereoisomers when different atoms or groups are attached at those positions. The same is true if a center with six or more equivalent bonds has two or more substituents.
For instance, in the compound PF4Cl, the bonds from the phosphorus atom to the five halogens have approximately trigonal bipyramidal geometry. Thus two stereoisomers with that formula are possible, depending on whether the chlorine atom occupies one of the two "axial" positions, or one of the three "equatorial" positions.
For the compound PF3Cl2, three isomers are possible, with zero, one, or two chlorines in the axial positions.
As another example, a complex with a formula like MX3Y3, where the central atom M forms six bonds with octahedral geometry, has at least two facial-meridional isomers, depending on whether the three M-X bonds (and thus also the three M-Y bonds) are directed at the three corners of one face of the octahedron (fac isomer), or lie on the same equatorial or "meridian" plane of it (mer isomer).
Two parts of a molecule that are connected by just one single bond can rotate about that bond. While the bond itself is indifferent to that rotation, attractions and repulsions between the atoms in the two parts normally cause the energy of the whole molecule to vary (and possibly also the two parts to deform) depending on the relative angle of rotation φ between the two parts. Then there will be one or more special values of φ for which the energy is at a local minimum. The corresponding conformations of the molecule are called rotational isomers or rotamers.
Thus, for example, in an ethane molecule (CH3-CH3), all the bond angles and lengths are narrowly constrained, except that the two methyl groups can independently rotate about the C-C axis. Thus, even if those angles and distances are assumed fixed, there are infinitely many conformations for the ethane molecule, that differ by the relative angle φ of rotation between the two groups. The feeble repulsion between the hydrogen atoms in the two methyl groups causes the energy to be minimized for three specific values of φ, 120° apart. In those configurations, the six H-C-C planes are 60° apart. Discounting rotations of the whole molecule, that configuration is a single isomer -- the so-called staggered conformation.
Rotation between the two halves of the molecule 1,2-dichloroethane (ClCH2-CH2Cl) also has three local energy minima, but they have different energies due to differences between the Cl-Cl, Cl-H, and H-H interactions. There are therefore three rotamers: a trans isomer where the two chlorines are on the same plane as the two carbons, but with oppositely directed bonds; and two gauche isomers, mirror images of each other, where the two groups are rotated about 109° from that position. The computed energy difference between trans and gauche is ~1.5 kcal/mol, the barrier for the ~109° rotation from trans to gauche is ~5 kcal/mol, and that of the ~142° rotation from one gauche to its enantiomer is ~8 kcal/mol. The situation for butane is similar, but with slightly lower gauche energies and barriers.
If the two parts of the molecule connected by a single bond are bulky or charged, the energy barriers may be much higher. For example, in the compound biphenyl -- two phenyl groups connected by a single bond -- the repulsion between hydrogen atoms closest to the central single bond gives the fully planar conformation, with the two rings on the same plane, a higher energy than conformations where the two rings are skewed. In the gas phase, the molecule has therefore at least two rotamers, with the ring planes twisted by ±47°, which are mirror images of each other. The barrier between them is rather low (~8 kJ/mol). This steric hindrance effect is more pronounced when those four hydrogens are replaced by larger atoms or groups, like chlorines or carboxyls. If the barrier is high enough for the two rotamers to be separated as stable compounds at room temperature, they are called atropisomers.
Large molecules may have isomers that differ by the topology of their overall arrangement in space, even if there is no specific geometric constraint that separates them. For example, long chains may be twisted to form topologically distinct knots, with interconversion prevented by bulky substituents or cycle closing (as in circular DNA and RNA plasmids). Some knots may come in mirror-image enantiomer pairs. Such forms are called topological isomers or topoisomers.
Also, two or more such molecules may be bound together in a catenane by such topological linkages, even if there is no chemical bond between them. If the molecules are large enough, the linking may occur in multiple topologically distinct ways, constituting different isomers. Cage compounds, such as helium enclosed in dodecahedrane (He@C20H20) and carbon peapods, are a similar type of topological isomerism involving molecules with large internal voids with restricted or no openings.
Different isotopes of the same element can be considered as different kinds of atoms when enumerating isomers of a molecule or ion. The replacement of one or more atoms by their isotopes can create multiple structural isomers and/or stereoisomers from a single isomer.
For example, replacing two atoms of common hydrogen (1H) by deuterium (2H, or D) on an ethane molecule yields two distinct structural isomers, depending on whether the substitutions are both on the same carbon (1,1-dideuteroethane, CH3-CHD2) or one on each carbon (1,2-dideuteroethane, CH2D-CH2D), just as if the substituent were chlorine instead of deuterium. The two compounds do not interconvert easily and have different properties, such as their microwave spectrum.
Another example would be substituting one atom of deuterium for one of the hydrogens in chlorofluoromethane (CH2ClF). While the original compound is not chiral and has a single isomer, the substitution creates a pair of chiral enantiomers of CHDClF, which could be distinguished (at least in theory) by their optical activity.
When two isomers would be identical if all isotopes of each element were replaced by a single isotope, they are described as isotopomers or isotopic isomers. In the above two examples, if all deuterium (2H) were replaced by ordinary hydrogen (1H), the two dideuteroethanes would both become ethane and the two deuterochlorofluoromethanes would both become CH2ClF.
The concept of isotopomers is different from isotopologs or isotopic homologs, which differ in their isotopic composition. For example, C2H5D and C2H4D2 are isotopologues and not isotopomers, and are therefore not isomers of each other.
Another type of isomerism based on nuclear properties is spin isomerism, where molecules differ only in the relative spins of the constituent atomic nuclei. This phenomenon is significant for molecular hydrogen, which can be partially separated into two spin isomers: parahydrogen, with the spins of the two nuclei pointing in opposite ways, and orthohydrogen, where the spins point the same way.
The same isomer can also be in different excited states, that differ by the quantum state of their electrons. For example, the oxygen molecule can be in the triplet state or one of two singlet states. These are not considered different isomers, since such molecules usually decay spontaneously to their lowest-energy excitation state in a relatively short time scale.
Likewise, polyatomic ions and molecules that differ only by the addition or removal of electrons, like oxygen and the peroxide ion, are not considered isomers.
Isomerization is the process by which one molecule is transformed into another molecule that has exactly the same atoms, but the atoms are rearranged. In some molecules and under some conditions, isomerization occurs spontaneously. Many isomers are equal or roughly equal in bond energy, and so exist in roughly equal amounts, provided that they can interconvert relatively freely, that is, the energy barrier between the two isomers is not too high. When the isomerization occurs intramolecularly, it is considered a rearrangement reaction.
Topoisomerases are enzymes that can cut and reform circular DNA and thus change its topology.
Isomers having distinct biological properties are common; for example, the placement of methyl groups. In substituted xanthines, theobromine, found in chocolate, is a vasodilator with some effects in common with caffeine; but, if one of the two methyl groups is moved to a different position on the two-ring core, the isomer is theophylline, which has a variety of effects, including bronchodilation and anti-inflammatory action. Another example of this occurs in the phenethylamine-based stimulant drugs. Phentermine is a non-chiral compound with a weaker effect than that of amphetamine. It is used as an appetite-reducing medication and has mild or no stimulant properties. However, an alternate atomic arrangement gives dextromethamphetamine, which is a stronger stimulant than amphetamine.
In medicinal chemistry and biochemistry, enantiomers are a special concern because they may possess distinct biological activity. Many preparative procedures afford a mixture of equal amounts of both enantiomeric forms. In some cases, the enantiomers are separated by chromatography using chiral stationary phases. They may also be separated through the formation of diastereomeric salts. In other cases, enantioselective syntheses have been developed.
As an inorganic example, cisplatin (see structure above) is an important drug used in cancer chemotherapy, whereas the trans isomer (transplatin) has no useful pharmacological activity.
Isomerism was first observed in 1827, when Friedrich Wöhler prepared silver cyanate and discovered that, although its elemental composition, AgCNO, was identical to that of silver fulminate (prepared by Justus von Liebig the previous year), its properties were distinct. This finding challenged the prevailing chemical understanding of the time, which held that chemical compounds could be distinct only when their elemental compositions differ. (We now know that the bonding structures of fulminate and cyanate can be approximately described as [C≡N-O]− and [N≡C-O]−, respectively.)
Additional examples were found in succeeding years, such as Wöhler's 1828 discovery that urea has the same atomic composition (CH4N2O) as the chemically distinct ammonium cyanate. (Their structures are now known to be CO(NH2)2 and NH4OCN, respectively.) In 1830 Jöns Jacob Berzelius introduced the term isomerism to describe the phenomenon.
In 1848, Louis Pasteur observed that tartaric acid crystals came in two kinds of shapes that were mirror images of each other. Separating the crystals by hand, he obtained two versions of tartaric acid, each of which would crystallize in only one of the two shapes, and rotated the plane of polarized light to the same degree but in opposite directions.
Human rights are moral principles or norms for certain standards of human behaviour and are regularly protected in municipal and international law. They are commonly understood as inalienable, fundamental rights "to which a person is inherently entitled simply because she or he is a human being" and which are "inherent in all human beings", regardless of their age, ethnic origin, location, language, religion, ethnicity, or any other status. They are applicable everywhere and at every time in the sense of being universal, and they are egalitarian in the sense of being the same for everyone. They are regarded as requiring empathy and the rule of law and imposing an obligation on persons to respect the human rights of others, and it is generally considered that they should not be taken away except as a result of due process based on specific circumstances.
The doctrine of human rights has been highly influential within international law and global and regional institutions. Actions by states and non-governmental organisations form a basis of public policy worldwide. The idea of human rights suggests that "if the public discourse of peacetime global society can be said to have a common moral language, it is that of human rights". The strong claims made by the doctrine of human rights continue to provoke considerable scepticism and debates about the content, nature and justifications of human rights to this day. The precise meaning of the term right is controversial and is the subject of continued philosophical debate; while there is consensus that human rights encompasses a wide variety of rights such as the right to a fair trial, protection against enslavement, prohibition of genocide, free speech or a right to education, there is disagreement about which of these particular rights should be included within the general framework of human rights; some thinkers suggest that human rights should be a minimum requirement to avoid the worst-case abuses, while others see it as a higher standard.
Many of the basic ideas that animated the human rights movement developed in the aftermath of the Second World War and the events of the Holocaust, culminating in the adoption of the Universal Declaration of Human Rights in Paris by the United Nations General Assembly in 1948. Ancient peoples did not have the same modern-day conception of universal human rights. The true forerunner of human rights discourse was the concept of natural rights which appeared as part of the medieval natural law tradition that became prominent during the European Enlightenment with such philosophers as John Locke, Francis Hutcheson and Jean-Jacques Burlamaqui and which featured prominently in the political discourse of the American Revolution and the French Revolution. From this foundation, the modern human rights arguments emerged over the latter half of the 20th century, possibly as a reaction to slavery, torture, genocide and war crimes, as a realisation of inherent human vulnerability and as being a precondition for the possibility of a just society.
17th-century English philosopher John Locke discussed natural rights in his work, identifying them as being "life, liberty, and estate (property)", and argued that such fundamental rights could not be surrendered in the social contract. In Britain in 1689, the English Bill of Rights and the Scottish Claim of Right each made illegal a range of oppressive governmental actions. Two major revolutions occurred during the 18th century, in the United States (1776) and in France (1789), leading to the United States Declaration of Independence and the French Declaration of the Rights of Man and of the Citizen respectively, both of which articulated certain human rights. Additionally, the Virginia Declaration of Rights of 1776 encoded into law a number of fundamental civil rights and civil freedoms.
We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.— United States Declaration of Independence, 1776
1800 to World War I
Philosophers such as Thomas Paine, John Stuart Mill and Hegel expanded on the theme of universality during the 18th and 19th centuries. In 1831 William Lloyd Garrison wrote in a newspaper called The Liberator that he was trying to enlist his readers in "the great cause of human rights", so the term human rights probably came into use sometime between Paine's The Rights of Man and Garrison's publication. In 1849 a contemporary, Henry David Thoreau, wrote about human rights in his treatise On the Duty of Civil Disobedience, which was later influential on human rights and civil rights thinkers. United States Supreme Court Justice David Davis, in his 1866 opinion for Ex parte Milligan, wrote "By the protection of the law, human rights are secured; withdraw that protection and they are at the mercy of wicked rulers or the clamor of an excited people."
Many groups and movements have managed to achieve profound social changes over the course of the 20th century in the name of human rights. In Western Europe and North America, labour unions brought about laws granting workers the right to strike, establishing minimum work conditions and forbidding or regulating child labour. The women's rights movement succeeded in gaining for many women the right to vote. National liberation movements in many countries succeeded in driving out colonial powers. One of the most influential was Mahatma Gandhi's movement to free his native India from British rule. Movements by long-oppressed racial and religious minorities succeeded in many parts of the world, among them the civil rights movement, and more recent diverse identity politics movements, on behalf of women and minorities in the United States.
The foundation of the International Committee of the Red Cross, the 1863 Lieber Code and the first of the Geneva Conventions in 1864 laid the foundations of international humanitarian law, to be further developed following the two World Wars.
Between World War I and World War II
The League of Nations was established in 1919 at the negotiations over the Treaty of Versailles following the end of World War I. The League's goals included disarmament, preventing war through collective security, settling disputes between countries through negotiation and diplomacy, and improving global welfare. Enshrined in its Covenant was a mandate to promote many of the rights which were later included in the Universal Declaration of Human Rights.
The League of Nations had mandates to support many of the former colonies of the Western European colonial powers during their transition from colony to independent state.
Established as an agency of the League of Nations, and now part of the United Nations, the International Labour Organization also had a mandate to promote and safeguard certain of the rights later included in the Universal Declaration of Human Rights (UDHR):
the primary goal of the ILO today is to promote opportunities for women and men to obtain decent and productive work, in conditions of freedom, equity, security and human dignity.— Report by the Director General for the International Labour Conference 87th Session
After World War II
- All major governments at the time of drafting the U.N. charter and the Universal declaration did their best to ensure, by all means known to domestic and international law, that these principles had only international application and carried no legal obligation on those governments to be implemented domestically. All tacitly realized that for their own discriminated-against minorities to acquire leverage on the basis of legally being able to claim enforcement of these wide-reaching rights would create pressures that would be political dynamite.
Universal Declaration of Human Rights
The Universal Declaration of Human Rights (UDHR) is a non-binding declaration adopted by the United Nations General Assembly in 1948, partly in response to the barbarism of World War II. The UDHR urges member states to promote a number of human, civil, economic and social rights, asserting these rights are part of the "foundation of freedom, justice and peace in the world". The declaration was the first international legal effort to limit the behavior of states and press upon them duties to their citizens following the model of the rights-duty duality.
...recognition of the inherent dignity and of the equal and inalienable rights of all members of the human family is the foundation of freedom, justice and peace in the world— Preamble to the Universal Declaration of Human Rights, 1948
The UDHR was framed by members of the Human Rights Commission, with Eleanor Roosevelt as Chair, who began to discuss an International Bill of Rights in 1947. The members of the Commission did not immediately agree on the form of such a bill of rights, and whether, or how, it should be enforced. The Commission proceeded to frame the UDHR and accompanying treaties, but the UDHR quickly became the priority. Canadian law professor John Humphrey and French lawyer René Cassin were responsible for much of the cross-national research and the structure of the document respectively, with the articles of the declaration interpreting the general principles of the preamble. The document was structured by Cassin to include the basic principles of dignity, liberty, equality and brotherhood in the first two articles, followed successively by rights pertaining to individuals; rights of individuals in relation to each other and to groups; spiritual, public and political rights; and economic, social and cultural rights. The final three articles place, according to Cassin, rights in the context of limits, duties and the social and political order in which they are to be realized. Humphrey and Cassin intended the rights in the UDHR to be legally enforceable through some means, as is reflected in the third clause of the preamble:
Whereas it is essential, if man is not to be compelled to have recourse, as a last resort, to rebellion against tyranny and oppression, that human rights should be protected by the rule of law.— Preamble to the Universal Declaration of Human Rights, 1948
Parts of the UDHR were researched and written by a committee of international experts on human rights, including representatives from all continents and all major religions, and drawing on consultation with leaders such as Mahatma Gandhi. The inclusion of both civil and political rights and economic, social and cultural rights was predicated on the assumption that basic human rights are indivisible and that the different types of rights listed are inextricably linked. Though this principle was not opposed by any member states at the time of adoption (the declaration was adopted without dissent, although the Soviet bloc, apartheid South Africa and Saudi Arabia abstained), it was later subject to significant challenges.
The onset of the Cold War soon after the UDHR was conceived brought to the fore divisions over the inclusion of both economic and social rights and civil and political rights in the declaration. Capitalist states tended to place strong emphasis on civil and political rights (such as freedom of association and expression), and were reluctant to include economic and social rights (such as the right to work and the right to join a union). Socialist states placed much greater importance on economic and social rights and argued strongly for their inclusion.
Because of the divisions over which rights to include, and because some states declined to ratify any treaties including certain specific interpretations of human rights, the rights enshrined in the UDHR were split into two separate covenants, allowing states to adopt some rights and derogate from others. This was despite the Soviet bloc and a number of developing countries arguing strongly for the inclusion of all rights in a so-called Unity Resolution. Though the split allowed the covenants to be created, it denied the proposed principle that all rights are linked, which was central to some interpretations of the UDHR.
Although the UDHR is a non-binding resolution, it is now considered to be a central component of international customary law which may be invoked under appropriate circumstances by state judiciaries and other judiciaries.
Human Rights Treaties
In 1966, the International Covenant on Civil and Political Rights (ICCPR) and the International Covenant on Economic, Social and Cultural Rights (ICESCR) were adopted by the United Nations, between them making the rights contained in the UDHR binding on the states that ratify them. However, they came into force only in 1976, when they had been ratified by a sufficient number of countries (the United States, despite the ICCPR containing no economic or social rights, did not ratify it until 1992). The ICESCR commits 155 state parties to work toward the granting of economic, social, and cultural rights (ESCR) to individuals.
Numerous other treaties have been adopted at the international level. They are generally known as human rights instruments. Some of the most significant are:
- Convention on the Prevention and Punishment of the Crime of Genocide (adopted 1948, entry into force: 1951)
- Convention on the Elimination of All Forms of Racial Discrimination (CERD) (adopted 1965, entry into force: 1969)
- Convention on the Elimination of All Forms of Discrimination Against Women (CEDAW) (adopted 1979, entry into force: 1981)
- United Nations Convention Against Torture (CAT) (adopted 1984, entry into force: 1987)
- Convention on the Rights of the Child (CRC) (adopted 1989, entry into force: 1990)
- International Convention on the Protection of the Rights of All Migrant Workers and Members of their Families (ICRMW) (adopted 1990, entry into force: 2003)
- Rome Statute of the International Criminal Court (ICC) (adopted 1998, entry into force: 2002)
The United Nations
The United Nations (UN) is the only multilateral governmental agency with universally accepted international jurisdiction for universal human rights legislation. All UN organs have advisory roles to the United Nations Security Council and the United Nations Human Rights Council, and there are numerous committees within the UN with responsibilities for safeguarding different human rights treaties. The most senior body of the UN with regard to human rights is the Office of the High Commissioner for Human Rights. The United Nations has an international mandate to:
...achieve international co-operation in solving international problems of an economic, social, cultural, or humanitarian character, and in promoting and encouraging respect for human rights and for fundamental freedoms for all without distinction as to race, sex, language, or religion.— Article 1(3) of the United Nations Charter
Protection at the international level
Human Rights Council
The UN Human Rights Council, created in 2005, has a mandate to investigate alleged human rights violations. 47 of the 193 UN member states sit on the council, elected by simple majority in a secret ballot of the United Nations General Assembly. Members serve a maximum of six years and may have their membership suspended for gross human rights abuses. The council is based in Geneva and meets three times a year, with additional meetings to respond to urgent situations.
Independent experts (rapporteurs) are retained by the council to investigate alleged human rights abuses and to report to the council.
The Human Rights Council may request that the Security Council refer cases to the International Criminal Court (ICC) even if the issue being referred is outside the normal jurisdiction of the ICC.
UN treaty bodies
In addition to the political bodies whose mandate flows from the UN Charter, the UN has set up a number of treaty-based bodies, comprising committees of independent experts who monitor compliance with human rights standards and norms flowing from the core international human rights treaties. With the exception of the CESCR, which was established under a resolution of the Economic and Social Council to carry out the monitoring functions originally assigned to that body under the Covenant, they are technically autonomous bodies, established by the treaties that they monitor and accountable to the state parties of those treaties rather than subsidiary to the United Nations. In practice, though, they are closely intertwined with the United Nations system and are supported by the UN High Commissioner for Human Rights (UNHCHR) and the UN Centre for Human Rights.
- The Human Rights Committee promotes compliance with the standards of the ICCPR. The members of the committee express opinions on member countries and make judgments on individual complaints against countries which have ratified an Optional Protocol to the treaty. The judgments, termed "views", are not legally binding. The committee meets around three times a year to hold sessions.
- The Committee on Economic, Social and Cultural Rights monitors the ICESCR and makes general comments on ratifying countries' performance. It will have the power to receive complaints against the countries that opted into the Optional Protocol once it has come into force. Unlike the other treaty bodies, the economic committee is not an autonomous body responsible to the treaty parties, but directly responsible to the Economic and Social Council and ultimately to the General Assembly. This means that the Economic Committee has at its disposal only relatively "weak" means of implementation in comparison to other treaty bodies. Particular difficulties noted by commentators include: perceived vagueness of the principles of the treaty, relative lack of legal texts and decisions, ambivalence of many states in addressing economic, social and cultural rights, comparatively few non-governmental organisations focused on the area and problems with obtaining relevant and precise information.
- The Committee on the Elimination of Racial Discrimination monitors the CERD and conducts regular reviews of countries' performance. It can make judgments on complaints against member states allowing it, but these are not legally binding. It issues warnings to attempt to prevent serious contraventions of the convention.
- The Committee on the Elimination of Discrimination against Women monitors the CEDAW. It receives states' reports on their performance and comments on them, and can make judgments on complaints against countries which have opted into the 1999 Optional Protocol.
- The Committee Against Torture monitors the CAT and receives states' reports on their performance every four years and comments on them. Its subcommittee may visit and inspect countries which have opted into the Optional Protocol.
- The Committee on the Rights of the Child monitors the CRC and makes comments on reports submitted by states every five years. It does not have the power to receive complaints.
- The Committee on Migrant Workers was established in 2004 and monitors the ICRMW and makes comments on reports submitted by states every five years. It will have the power to receive complaints of specific violations only once ten member states allow it.
- The Committee on the Rights of Persons with Disabilities was established in 2008 to monitor the Convention on the Rights of Persons with Disabilities. It has the power to receive complaints against the countries which have opted into the Optional Protocol to the Convention on the Rights of Persons with Disabilities.
- The Committee on Enforced Disappearances monitors the ICPPED. All States parties are obliged to submit reports to the committee on how the rights are being implemented. The Committee examines each report and addresses its concerns and recommendations to the State party in the form of "concluding observations".
Each treaty body receives secretariat support from the Human Rights Council and Treaties Division of the Office of the High Commissioner for Human Rights (OHCHR) in Geneva, except CEDAW, which is supported by the Division for the Advancement of Women (DAW). CEDAW formerly held all its sessions at United Nations headquarters in New York but now frequently meets at the United Nations Office in Geneva; the other treaty bodies meet in Geneva. The Human Rights Committee usually holds its March session in New York City.
Regional human rights regimes
There are many regional agreements and organizations promoting and governing human rights.
The African Union (AU) is a supranational union consisting of fifty-five African states. Established in 2001, the AU aims to help secure Africa's democracy, human rights, and a sustainable economy, especially by bringing an end to intra-African conflict and creating an effective common market.
The African Commission on Human and Peoples' Rights (ACHPR) is a quasi-judicial organ of the African Union tasked with promoting and protecting human rights and collective (peoples') rights throughout the African continent as well as interpreting the African Charter on Human and Peoples' Rights and considering individual complaints of violations of the Charter. The commission has three broad areas of responsibility:
- Promoting human and peoples' rights
- Protecting human and peoples' rights
- Interpreting the African Charter on Human and Peoples' Rights
In pursuit of these goals, the commission is mandated to "collect documents, undertake studies and researches on African problems in the field of human and peoples' rights, organise seminars, symposia and conferences, disseminate information, encourage national and local institutions concerned with human and peoples' rights and, should the case arise, give its views or make recommendations to governments" (Charter, Art. 45).
With the creation of the African Court on Human and Peoples' Rights (under a protocol to the Charter which was adopted in 1998 and entered into force in January 2004), the commission will have the additional task of preparing cases for submission to the Court's jurisdiction. In a July 2004 decision, the AU Assembly resolved that the future Court on Human and Peoples' Rights would be integrated with the African Court of Justice.
The Court of Justice of the African Union is intended to be the "principal judicial organ of the Union" (Protocol of the Court of Justice of the African Union, Article 2.2). Although it has not yet been established, it is intended to take over the duties of the African Commission on Human and Peoples' Rights, as well as act as the supreme court of the African Union, interpreting all necessary laws and treaties. The Protocol establishing the African Court on Human and Peoples' Rights entered into force in January 2004 but its merging with the Court of Justice has delayed its establishment. The Protocol establishing the Court of Justice will come into force when ratified by 15 countries.
The Organization of American States (OAS) is an international organization, headquartered in Washington, D.C., United States. Its members are the thirty-five independent states of the Americas. Over the course of the 1990s, with the end of the Cold War, the return to democracy in Latin America, and the thrust toward globalization, the OAS made major efforts to reinvent itself to fit the new context. Its stated priorities now include the following:
- Strengthening democracy
- Working for peace
- Protecting human rights
- Combating corruption
- The rights of Indigenous Peoples
- Promoting sustainable development
The Inter-American Commission on Human Rights (the IACHR) is an autonomous organ of the Organization of American States, also based in Washington, D.C. Along with the Inter-American Court of Human Rights, based in San José, Costa Rica, it is one of the bodies that comprise the inter-American system for the promotion and protection of human rights. The IACHR is a permanent body which meets in regular and special sessions several times a year to examine allegations of human rights violations in the hemisphere. Its human rights duties stem from three documents:
- the OAS Charter
- the American Declaration of the Rights and Duties of Man
- the American Convention on Human Rights
The Inter-American Court of Human Rights was established in 1979 with the purpose of enforcing and interpreting the provisions of the American Convention on Human Rights. Its two main functions are thus adjudicatory and advisory. Under the former, it hears and rules on the specific cases of human rights violations referred to it. Under the latter, it issues opinions on matters of legal interpretation brought to its attention by other OAS bodies or member states.
The Association of Southeast Asian Nations (ASEAN) is a geo-political and economic organization of 10 countries located in Southeast Asia, which was formed in 1967 by Indonesia, Malaysia, the Philippines, Singapore and Thailand. The organisation now also includes Brunei Darussalam, Vietnam, Laos, Myanmar and Cambodia. In October 2009, the ASEAN Intergovernmental Commission on Human Rights was inaugurated, and subsequently, the ASEAN Human Rights Declaration was adopted unanimously by ASEAN members on 18 November 2012.
The Council of Europe, founded in 1949, is the oldest organisation working for European integration. It is an international organisation with legal personality recognised under public international law and has observer status with the United Nations. The seat of the Council of Europe is in Strasbourg in France. The Council of Europe is responsible for both the European Convention on Human Rights and the European Court of Human Rights. These institutions bind the council's members to a code of human rights which, though strict, is more lenient than that of the United Nations charter on human rights. The council also promotes the European Charter for Regional or Minority Languages and the European Social Charter. Membership is open to all European states which seek European integration, accept the principle of the rule of law and are able and willing to guarantee democracy, fundamental human rights and freedoms.
The Council of Europe is an organisation that is not part of the European Union, but the latter is expected to accede to the European Convention and potentially the Council itself. The EU has its own human rights document: the Charter of Fundamental Rights of the European Union.
The European Convention on Human Rights has defined and guaranteed human rights and fundamental freedoms in Europe since 1950. All 47 member states of the Council of Europe have signed this convention and are therefore under the jurisdiction of the European Court of Human Rights in Strasbourg. In order to prevent torture and inhuman or degrading treatment (Article 3 of the convention), the European Committee for the Prevention of Torture was established.
Philosophies of human rights
Several theoretical approaches have been advanced to explain how and why human rights become part of social expectations.
One of the oldest Western philosophies on human rights is that they are a product of a natural law, stemming from different philosophical or religious grounds.
Other theories hold that human rights codify moral behavior which is a human social product developed by a process of biological and social evolution (associated with Hume). Human rights are also described as a sociological pattern of rule setting (as in the sociological theory of law and the work of Weber). These approaches include the notion that individuals in a society accept rules from legitimate authority in exchange for security and economic advantage (as in Rawls) – a social contract.
Natural law theories base human rights on a "natural" moral, religious or even biological order which is independent of transitory human laws or traditions.
Socrates and his philosophic heirs, Plato and Aristotle, posited the existence of natural justice or natural right (dikaion physikon, δικαιον φυσικον, Latin ius naturale). Of these, Aristotle is often said to be the father of natural law, although evidence for this is due largely to the interpretations of his work by Thomas Aquinas.
Some of the early Church fathers sought to incorporate what had until then been a pagan concept of natural law into Christianity. Natural law theories have featured greatly in the philosophies of Thomas Aquinas, Francisco Suárez, Richard Hooker, Thomas Hobbes, Hugo Grotius, Samuel von Pufendorf, and John Locke.
In the Seventeenth Century Thomas Hobbes founded a contractualist theory of legal positivism on what all men could agree upon: what they sought (happiness) was subject to contention, but a broad consensus could form around what they feared (violent death at the hands of another). The natural law was how a rational human being, seeking to survive and prosper, would act. It was discovered by considering humankind's natural rights, whereas previously it could be said that natural rights were discovered by considering the natural law. In Hobbes' opinion, the only way natural law could prevail was for men to submit to the commands of the sovereign. In this lay the foundations of the theory of a social contract between the governed and the governor.
Hugo Grotius based his philosophy of international law on natural law. He wrote that "even the will of an omnipotent being cannot change or abrogate" natural law, which "would maintain its objective validity even if we should assume the impossible, that there is no God or that he does not care for human affairs." (De iure belli ac pacis, Prolegomeni XI). This is the famous argument etiamsi daremus (non-esse Deum), that made natural law no longer dependent on theology.
John Locke incorporated natural law into many of his theories and philosophy, especially in Two Treatises of Government. Locke turned Hobbes' prescription around, saying that if the ruler went against natural law and failed to protect "life, liberty, and property," people could justifiably overthrow the existing state and create a new one.
The Belgian philosopher of law Frank van Dun is one among those who are elaborating a secular conception of natural law in the liberal tradition. There are also emerging and secular forms of natural law theory that define human rights as derivative of the notion of universal human dignity.
The term "human rights" has replaced the term "natural rights" in popularity, because the rights are less and less frequently seen as requiring natural law for their existence.
Other theories of human rights
The philosopher John Finnis argues that human rights are justifiable on the grounds of their instrumental value in creating the necessary conditions for human well-being. Interest theories highlight the duty to respect the rights of other individuals on grounds of self-interest:
Human rights law, applied to a State's own citizens, serves the interest of states by, for example, minimizing the risk of violent resistance and protest and by keeping the level of dissatisfaction with the government manageable.
Concepts in human rights
Indivisibility and categorization of rights
The most common categorization of human rights is to split them into civil and political rights, and economic, social and cultural rights.
Civil and political rights are enshrined in articles 3 to 21 of the Universal Declaration of Human Rights and in the ICCPR. Economic, social and cultural rights are enshrined in articles 22 to 28 of the Universal Declaration of Human Rights and in the ICESCR. The UDHR included both economic, social and cultural rights and civil and political rights because it was based on the principle that the different rights could only successfully exist in combination:
The ideal of free human beings enjoying civil and political freedom and freedom from fear and want can only be achieved if conditions are created whereby everyone may enjoy his civil and political rights, as well as his social, economic and cultural rights— International Covenant on Civil and Political Rights and the International Covenant on Economic Social and Cultural Rights, 1966
This is held to be true because without civil and political rights the public cannot assert their economic, social and cultural rights. Similarly, without livelihoods and a working society, the public cannot assert or make use of civil or political rights (known as the full belly thesis).
Although the signatories to the UDHR accepted this principle, most of them do not in practice give equal weight to the different types of rights. Western cultures have often given priority to civil and political rights, sometimes at the expense of economic and social rights such as the right to work, to education, health and housing. For example, in the United States there is no universal access to healthcare free at the point of use. That is not to say that Western cultures have overlooked these rights entirely (the welfare states that exist in Western Europe are evidence of this). Similarly, the ex-Soviet bloc countries and Asian countries have tended to give priority to economic, social and cultural rights, but have often failed to provide civil and political rights.
Another categorization, offered by Karel Vasak, is that there are three generations of human rights: first-generation civil and political rights (right to life and political participation), second-generation economic, social and cultural rights (right to subsistence) and third-generation solidarity rights (right to peace, right to clean environment). Out of these generations, the third generation is the most debated and lacks both legal and political recognition. This categorisation is at odds with the indivisibility of rights, as it implicitly states that some rights can exist without others. Prioritisation of rights for pragmatic reasons is however a widely accepted necessity. Human rights expert Philip Alston argues:
If every possible human rights element is deemed to be essential or necessary, then nothing will be treated as though it is truly important.
He, and others, urge caution with prioritisation of rights:
...the call for prioritizing is not to suggest that any obvious violations of rights can be ignored.
Priorities, where necessary, should adhere to core concepts (such as reasonable attempts at progressive realization) and principles (such as non-discrimination, equality and participation).
Some human rights are said to be "inalienable rights". The term inalienable rights (or unalienable rights) refers to "a set of human rights that are fundamental, are not awarded by human power, and cannot be surrendered".
The adherence to the principle of indivisibility by the international community was reaffirmed in 1993:
All human rights are universal, indivisible and interdependent and interrelated. The international community must treat human rights globally in a fair and equal manner, on the same footing, and with the same emphasis.— Vienna Declaration and Programme of Action, World Conference on Human Rights, 1993
This statement was again endorsed at the 2005 World Summit in New York (paragraph 121).
Universalism vs cultural relativism
The UDHR enshrines, by definition, rights that apply to all humans equally, whatever geographical location, state, race or culture they belong to.
Proponents of cultural relativism suggest that human rights are not all universal, and indeed conflict with some cultures and threaten their survival.
Rights which are most often contested with relativistic arguments are the rights of women. For example, female genital mutilation occurs in different cultures in Africa, Asia and South America. It is not mandated by any religion, but has become a tradition in many cultures. It is considered a violation of women's and girls' rights by much of the international community, and is outlawed in some countries.
Universalism has been described by some as cultural, economic or political imperialism. In particular, the concept of human rights is often claimed to be fundamentally rooted in a politically liberal outlook which, although generally accepted in Europe, Japan or North America, is not necessarily taken as standard elsewhere.
For example, in 1981, the Iranian representative to the United Nations, Said Rajaie-Khorassani, articulated the position of his country regarding the Universal Declaration of Human Rights by saying that the UDHR was "a secular understanding of the Judeo-Christian tradition", which could not be implemented by Muslims without trespassing the Islamic law. The former Prime Ministers of Singapore, Lee Kuan Yew, and of Malaysia, Mahathir bin Mohamad, both claimed in the 1990s that Asian values were significantly different from western values, including a sense of loyalty and the foregoing of personal freedoms for the sake of social stability and prosperity, and that authoritarian government is therefore more appropriate in Asia than democracy. This view is countered by Mahathir's former deputy:
To say that freedom is Western or unAsian is to offend our traditions as well as our forefathers, who gave their lives in the struggle against tyranny and injustices.— Anwar Ibrahim in his keynote speech to the Asian Press Forum titled Media and Society in Asia, 2 December 1994
An appeal is often made to the fact that influential human rights thinkers, such as John Locke and John Stuart Mill, have all been Western and indeed that some were involved in the running of Empires themselves.
Relativistic arguments tend to neglect the fact that modern human rights are new to all cultures, dating back no further than the UDHR in 1948. They also do not account for the fact that the UDHR was drafted by people from many different cultures and traditions, including a US Roman Catholic, a Chinese Confucian philosopher, a French Zionist and a representative from the Arab League, amongst others, and drew upon advice from thinkers such as Mahatma Gandhi.
Michael Ignatieff has argued that cultural relativism is almost exclusively an argument used by those who wield power in cultures which commit human rights abuses, and that those whose human rights are compromised are the powerless. This reflects the fact that the difficulty in judging universalism versus relativism lies in who is claiming to represent a particular culture.
Although the argument between universalism and relativism is far from complete, it is an academic discussion in that all international human rights instruments adhere to the principle that human rights are universally applicable. The 2005 World Summit reaffirmed the international community's adherence to this principle:
The universal nature of human rights and freedoms is beyond question.— 2005 World Summit, paragraph 120
State and non-state actors
Companies, NGOs, political parties, informal groups, and individuals are known as non-State actors. Non-State actors can also commit human rights abuses, but are not subject to human rights law other than International Humanitarian Law, which applies to individuals.
Multi-national companies play an increasingly large role in the world, and are responsible for a large number of human rights abuses. Although the legal and moral environment surrounding the actions of governments is reasonably well developed, that surrounding multi-national companies is both controversial and ill-defined. Multi-national companies' primary responsibility is to their shareholders, not to those affected by their actions. Such companies are often larger than the economies of the states in which they operate, and can wield significant economic and political power. No international treaties exist to specifically cover the behavior of companies with regard to human rights, and national legislation is very variable. Jean Ziegler, Special Rapporteur of the UN Commission on Human Rights on the right to food, stated in a report in 2003:
the growing power of transnational corporations and their extension of power through privatization, deregulation and the rolling back of the State also mean that it is now time to develop binding legal norms that hold corporations to human rights standards and circumscribe potential abuses of their position of power.
In August 2003 the Human Rights Commission's Sub-Commission on the Promotion and Protection of Human Rights produced draft Norms on the responsibilities of transnational corporations and other business enterprises with regard to human rights. These were considered by the Human Rights Commission in 2004, but have no binding status on corporations and are not monitored. Additionally, the United Nations Sustainable Development Goal 10 aims to substantially reduce inequality by 2030 through the promotion of appropriate legislation.
Human rights law
Human rights vs national security
With the exception of non-derogable human rights (international conventions class the right to life, the right to be free from slavery, the right to be free from torture and the right to be free from retroactive application of penal laws as non-derogable), the UN recognises that human rights can be limited or even pushed aside during times of national emergency – although
the emergency must be actual, affect the whole population and the threat must be to the very existence of the nation. The declaration of emergency must also be a last resort and a temporary measure
Rights that cannot be derogated for reasons of national security in any circumstances are known as peremptory norms or jus cogens. Such International law obligations are binding on all states and cannot be modified by treaty.
Legal instruments and jurisdiction
The human rights enshrined in the UDHR, the Geneva Conventions and the various United Nations treaties in force are enforceable in law. In practice, many rights are very difficult to enforce legally, due to the absence of consensus on the application of certain rights, the lack of relevant national legislation, or the lack of bodies empowered to take legal action to enforce them.
There exist a number of internationally recognized organisations with worldwide mandate or jurisdiction over certain aspects of human rights:
- The International Court of Justice is the United Nations' primary judicial body. It has worldwide jurisdiction. The ICJ settles disputes between nations, and a party may have recourse to the Security Council to enforce its judgments. The ICJ does not have jurisdiction over individuals.
- The International Criminal Court is the body responsible for investigating and punishing war crimes, and crimes against humanity when such occur within its jurisdiction, with a mandate to bring to justice perpetrators of such crimes that occurred after its creation in 2002. A number of UN members have not joined the court and the ICC does not have jurisdiction over their citizens, and others have signed but not yet ratified the Rome Statute, which established the court.
The ICC and other international courts (see Regional human rights regimes above) exist to take action where the national legal system of a state is unable to try the case itself. If national law is able to safeguard human rights and punish those who breach human rights legislation, it has primary jurisdiction by complementarity. Only when all local remedies have been exhausted does international law take effect.
In over 110 countries national human rights institutions (NHRIs) have been set up to protect, promote or monitor human rights with jurisdiction in a given country. Although not all NHRIs are compliant with the Paris Principles, the number and effect of these institutions is increasing. The Paris Principles were defined at the first International Workshop on National Institutions for the Promotion and Protection of Human Rights in Paris on 7–9 October 1991, and adopted by United Nations Human Rights Commission Resolution 1992/54 of 1992 and the General Assembly Resolution 48/134 of 1993. The Paris Principles list a number of responsibilities for national institutions.
Universal jurisdiction is a controversial principle in international law whereby states claim criminal jurisdiction over persons whose alleged crimes were committed outside the boundaries of the prosecuting state, regardless of nationality, country of residence, or any other relation with the prosecuting country. The state backs its claim on the grounds that the crime committed is considered a crime against all, which any state is authorized to punish. The concept of universal jurisdiction is therefore closely linked to the idea that certain international norms are erga omnes, or owed to the entire world community, as well as the concept of jus cogens. In 1993 Belgium passed a law of universal jurisdiction to give its courts jurisdiction over crimes against humanity in other countries, and in 1998 Augusto Pinochet was arrested in London following an indictment by Spanish judge Baltasar Garzón under the universal jurisdiction principle. The principle is supported by Amnesty International and other human rights organisations as they believe certain crimes pose a threat to the international community as a whole and the community has a moral duty to act, but others, including Henry Kissinger, argue that state sovereignty is paramount, because breaches of rights committed in other countries are outside states' sovereign interest and because states could use the principle for political reasons.
Human rights violations
Human rights violations occur when any state or non-state actor breaches any of the terms of the UDHR or other international human rights or humanitarian law. In regard to human rights violations of United Nations laws, Article 39 of the United Nations Charter designates the UN Security Council (or an appointed authority) as the only tribunal that may determine UN human rights violations.
Human rights abuses are monitored by United Nations committees, national institutions and governments and by many independent non-governmental organizations, such as Amnesty International, Human Rights Watch, World Organisation Against Torture, Freedom House, International Freedom of Expression Exchange and Anti-Slavery International. These organisations collect evidence and documentation of human rights abuses and apply pressure to promote human rights.
References and further reading
- Amnesty International (2004). Amnesty International Report. Amnesty International. ISBN 0-86210-354-1 ISBN 1-887204-40-7
- Alston, Philip (2005). "Ships Passing in the Night: The Current State of the Human Rights and Development Debate seen through the Lens of the Millennium Development Goals". Human Rights Quarterly. Vol. 27 (No. 3) p. 807
- Arnhart, Larry (1998). Darwinian Natural Right: The Biological Ethics of Human Nature SUNY Press. ISBN 0-7914-3693-4
- Ball, Olivia; Gready, Paul (2007). The No-Nonsense Guide to Human Rights. New Internationalist. ISBN 1-904456-45-6
- Chauhan, O.P. (2004). Human Rights: Promotion and Protection. Anmol Publications PVT. LTD. ISBN 81-261-2119-X.
- Clayton, Philip; Schloss, Jeffrey (2004). Evolution and Ethics: Human Morality in Biological and Religious Perspective Wm. B. Eerdmans Publishing. ISBN 0-8028-2695-4
- Cope, K., Crabtree, C., & Fariss, C. (2020). "Patterns of disagreement in indicators of state repression". Political Science Research and Methods, 8(1), 178–187. doi:10.1017/psrm.2018.62
- Cross, Frank B. "The relevance of law in human rights protection." International Review of Law and Economics 19.1 (1999): 87–98.
- Davenport, Christian (2007). State Repression and Political Order. Annual Review of Political Science.
- Donnelly, Jack. (2003). Universal Human Rights in Theory & Practice. 2nd ed. Ithaca & London: Cornell University Press. ISBN 0-8014-8776-5
- Finnis, John (1980). Natural Law and Natural Rights Oxford: Clarendon Press. ISBN 0-19-876110-4
- Fomerand, Jacques, ed. Historical Dictionary of Human Rights (2021)
- Forsythe, David P. (2000). Human Rights in International Relations. Cambridge: Cambridge University Press. International Progress Organization. ISBN 3-900704-08-2
- Freedman, Lynn P.; Isaacs, Stephen L. (Jan–Feb 1993). "Human Rights and Reproductive Choice". Studies in Family Planning Vol.24 (No.1): p. 18–30 JSTOR 2939211
- Glendon, Mary Ann (2001). A World Made New: Eleanor Roosevelt and the Universal Declaration of Human Rights. Random House of Canada Ltd. ISBN 0-375-50692-6
- Gorman, Robert F. and Edward S. Mihalkanin, eds. Historical Dictionary of Human Rights and Humanitarian Organizations (2007) excerpt
- Houghton Mifflin Company (2006). The American Heritage Dictionary of the English Language. Houghton Mifflin. ISBN 0-618-70173-7
- Ignatieff, Michael (2001). Human Rights as Politics and Idolatry. Princeton & Oxford: Princeton University Press. ISBN 0-691-08893-4
- Ishay, Micheline. The History of Human Rights: From Ancient Times to the Era of Globalization (U of California Press, 2008) excerpt
- Istrefi, Remzije. "International Security Presence in Kosovo and its Human Rights Implications." Croatian International Relations Review 23.80 (2017): 131-154. online
- Jaffa, Harry V. (1979). Thomism and Aristotelianism; A Study of the Commentary by Thomas Aquinas on the Nicomachean Ethics Greenwood Press. ISBN 0-313-21149-3 (reprint of 1952 edition published by University of Chicago Press)
- Jahn, Beate (2005). "Barbarian thoughts: imperialism in the philosophy of John Stuart Mill". Review of International Studies 13 June 2005 31: 599–618 Cambridge University Press
- Köchler, Hans (1981). The Principles of International Law and Human Rights. hanskoechler.com
- Köchler, Hans. (1990). "Democracy and Human Rights". Studies in International Relations, XV. Vienna: International Progress Organization.
- Kohen, Ari (2007). In Defense of Human Rights: A Non-Religious Grounding in a Pluralistic World. Routledge. ISBN 0-415-42015-6, ISBN 978-0-415-42015-0
- Landman, Todd (2006). Studying Human Rights. Oxford and London: Routledge ISBN 0-415-32605-2
- Light, Donald W. (2002). "A Conservative Call For Universal Access To Health Care" Penn Bioethics Vol.9 (No.4) p. 4–6
- Littman, David (1999). "Universal Human Rights and 'Human Rights in Islam'". Midstream Magazine Vol. 2 (no.2) pp. 2–7
- Maan, Bashir; McIntosh, Alastair (1999). "Interview with William Montgomery Watt" The Coracle Vol. 3 (No. 51) pp. 8–11.
- Maret, Susan 2005. “‘Formats Are a Tool for the Quest for Truth’: HURIDOCS Human Rights Materials for Library and Human Rights Workers.” Progressive Librarian, no. 26 (Winter): 33–39.
- Mayer, Henry (2000). All on Fire: William Lloyd Garrison and the Abolition of Slavery. St Martin's Press. ISBN 0-312-25367-2
- McAuliffe, Jane Dammen (ed) (2005). Encyclopaedia of the Qur'an: vol 1–5 Brill Publishing. ISBN 90-04-14743-8. ISBN 978-90-04-14743-0
- McLagan, Meg (2003) "Principles, Publicity, and Politics: Notes on Human Rights Media". American Anthropologist. Vol. 105 (No. 3). pp. 605–612
- Maddex, Robert L., ed. International encyclopedia of human rights: freedoms, abuses, and remedies (CQ Press, 2000).
- Möller, Hans-Georg. "How to distinguish friends from enemies: Human rights rhetoric and western mass media." in Technology and Cultural Values (U of Hawaii Press, 2003) pp. 209-221.
- Nathwani, Niraj (2003). Rethinking Refugee Law. Martinus Nijhoff Publishers. ISBN 90-411-2002-5
- Neier, Aryeh. The international human rights movement : a history (Princeton UP, 2012)
- Paul, Ellen Frankel; Miller, Fred Dycus; Paul, Jeffrey (eds) (2001). Natural Law and Modern Moral Philosophy Cambridge University Press. ISBN 0-521-79460-9
- Power, Samantha. "A Problem from Hell": America and the Age of Genocide (Basic Books, 2013).
- Robertson, Arthur Henry; Merrills, John Graham (1996). Human Rights in the World: An Introduction to the Study of the International Protection of Human Rights. Manchester University Press. ISBN 0-7190-4923-7.
- Reyntjens, Filip. "Rwanda: progress or powder keg?." Journal of Democracy 26.3 (2015): 19-33. online
- Salevao, Lutisone (2005). Rule of Law, Legitimate Governance and Development in the Pacific. ANU E Press. ISBN 978-0731537211
- Scott, C. (1989). "The Interdependence and Permeability of Human Rights Norms: Towards a Partial Fusion of the International Covenants on Human Rights". Osgoode Hall Law Journal Vol. 27
- Shelton, Dinah. "Self-Determination in Regional Human Rights Law: From Kosovo to Cameroon." American Journal of International Law 105.1 (2011): 60-81 online.
- Sills, David L. (1968, 1972) International Encyclopedia of the Social Sciences. MacMillan.
- Shellens, Max Salomon. 1959. "Aristotle on Natural Law." Natural Law Forum 4, no. 1. Pp. 72–100.
- Sen, Amartya (1997). Human Rights and Asian Values. ISBN 0-87641-151-0.
- Shute, Stephen & Hurley, Susan (eds.). (1993). On Human Rights: The Oxford Amnesty Lectures. New York: BasicBooks. ISBN 0-465-05224-X
- Sobel, Meghan, and Karen McIntyre. "Journalists’ Perceptions of Human Rights Reporting in Rwanda." African Journalism Studies 39.3 (2018): 85-104. online
- Steiner, J. & Alston, Philip. (1996). International Human Rights in Context: Law, Politics, Morals. Oxford: Clarendon Press. ISBN 0-19-825427-X
- Straus, Scott, and Lars Waldorf, eds. Remaking Rwanda: State building and human rights after mass violence (Univ of Wisconsin Press, 2011).
- Sunga, Lyal S. (1992) Individual Responsibility in International Law for Serious Human Rights Violations, Martinus Nijhoff Publishers. ISBN 0-7923-1453-0
- Tierney, Brian (1997). The Idea of Natural Rights: Studies on Natural Rights, Natural Law, and Church Law. Wm. B. Eerdmans Publishing. ISBN 0-8028-4854-0
- Tunick, Mark (2006). "Tolerant Imperialism: John Stuart Mill's Defense of British Rule in India". The Review of Politics 27 October 2006 68: 586–611 Cambridge University Press
- Ishay, Micheline, ed. The Human Rights Reader: Major Political Essays, Speeches, and Documents from Ancient Times to the Present (2nd ed. 2007) excerpt
- Wikiquote has quotations related to: Human rights
- Universal Declaration of Human Rights by the United Nations
- Office of the High Commissioner for Human Rights
- The Universal Human Rights Index of United Nations documents | https://library.kiwix.org/wikipedia_en_top_maxi/A/Human_rights | 21 |
41 | Money makes the economy function. Money evolved thousands of years ago because barter—the direct trading of goods or services for other goods or services—simply didn’t work. A modern economy could not function without money, and economies tend to break down when the quantity or value of money changes suddenly or dramatically. Print too much money, and its value declines—that is, prices rise (inflation). Shrink the money supply, on the other hand, and the value of money rises—that is, prices fall (deflation).
In modern economies, bank deposits—not coins or currency—comprise the lion’s share of the money supply. The amount of loans that a bank can make reflects the amount of money (deposits from customers) in the bank and is determined partly by regulations on the amount of money in reserve that a bank must hold against its deposits and partly by the business judgment of bankers.
In the United States, bank reserves consist of the cash that banks hold in their vaults and the deposits they keep at Federal Reserve banks. Reserves earn little or no interest, so banks don’t like to hold too much of them. On the other hand, if banks hold too few reserves, they risk getting caught short in the event of unexpected deposit withdrawals.
In the 1930s, the United States was on the gold standard, meaning that the U.S. government would exchange dollars for gold at a fixed price. Commercial banks, as well as Federal Reserve banks, held a portion of their reserves in the form of gold coin and bullion, as required by law.
An increase in gold reserves, which might come from domestic mining or inflows of gold from abroad, would enable banks to increase their lending and, as a result, would tend to inflate the money supply. A decrease in reserves, on the other hand, would tend to contract the money supply. For example, large withdrawals of cash or gold from banks could reduce bank reserves to the point that banks had to make fewer loans, which shrinks the money supply available for businesses and consumers. The money supply fell during the Great Depression primarily because of banking panics. Banking systems rely on the confidence of depositors that they will be able to access their funds in banks whenever they need them. If that confidence is shaken—perhaps by the failure of an important bank or large commercial firm—people will rush to withdraw their deposits to avoid losing their funds if their own bank fails.
Because banks hold only a fraction of the value of their customers’ deposits in the form of reserves, a sudden, unexpected attempt to convert deposits into cash can leave banks short of reserves. Ordinarily, banks can borrow extra reserves from other banks or from the Federal Reserve. However, borrowing from other banks becomes extremely expensive or even impossible when depositors make demands on all banks. During the Great Depression, many banks could not or would not borrow from the Federal Reserve because they either lacked acceptable collateral or did not belong to the Federal Reserve System. Starting in 1930, a series of banking panics rocked the U.S. financial system. As depositors pulled funds out of banks, banks lost reserves, which reduced the nation’s money supply. The monetary contraction, as well as the financial chaos associated with the failure of large numbers of banks, caused the economy to collapse.
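The arithmetic behind this contraction is the standard fractional-reserve (money-multiplier) relationship. The short Python sketch below is an illustration added here, not part of the original essay: it assumes a single fixed reserve ratio, no currency held by the public, and no excess reserves, so the figures are purely hypothetical.

```python
# Minimal sketch of textbook fractional-reserve arithmetic (illustration only).
# Assumptions: one fixed reserve ratio, no currency drain, no excess reserves.

def max_deposits(reserves: float, reserve_ratio: float) -> float:
    """Largest deposit base the banking system can support with given reserves."""
    return reserves / reserve_ratio

def deposit_contraction(reserve_loss: float, reserve_ratio: float) -> float:
    """Fall in total deposits when withdrawals drain reserves from the system."""
    return reserve_loss / reserve_ratio

if __name__ == "__main__":
    ratio = 0.10            # assumed 10% required reserve ratio
    reserves = 5_000.0      # hypothetical system-wide reserves ($ millions)

    print(max_deposits(reserves, ratio))        # 50000.0 -> deposits supported
    # A panic that pulls $500 million of cash out of the banks:
    print(deposit_contraction(500.0, ratio))    # 5000.0 -> tenfold fall in deposits
```

Under these assumptions, every dollar of reserves lost to cash withdrawals forces roughly ten dollars of deposits out of existence, which is the multiple contraction of the money supply described above.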
Less money reduced spending on goods and services, which caused firms to cut back on production, cut prices and lay off workers. Falling prices and incomes, in turn, led to even more economic distress. Deflation reduced the income of businesses and households and made it harder for them to repay their debt. Bankruptcies and defaults increased, which caused thousands of banks to fail. In each year from 1930 to 1933, more than 1,000 U.S. banks closed.
Banking panics are pretty much a thing of the past, thanks to federal deposit insurance. Widespread failures of banks and savings institutions during the 1980s did not cause depositors to panic, which limited withdrawals from the banking system and prevented serious reverberations throughout the economy.
Adapted from: David C. Wheelock is an assistant vice president and economist at the Federal Reserve Bank of St. Louis. | https://essaydocs.org/the-cause-of-the-great-depression-v2.html?page=3 | 21 |
16 | I. Prelude to Revolution: The Eighteenth-Century Crisis
A. Colonial Wars and Fiscal Crises
1. Rivalry among the European powers intensified in the early 1600s when the Dutch attacked Spanish and Portuguese possessions in the Americas and in Asia. In the 1600s and 1700s, the British then checked Dutch commercial and colonial ambitions and went on to defeat France in the Seven Years War (1756–1763) and take over French colonial possessions in the Americas and in India.
2. The unprecedented costs of the wars of the seventeenth and eighteenth centuries drove European governments to seek new sources of revenue at a time when the intellectual environment of the Enlightenment inspired people to question and to protest the state’s attempts to introduce new ways of collecting revenue.
B. The Enlightenment and the Old Order
1. The Enlightenment thinkers sought to apply the methods and questions of the Scientific Revolution to the study of human society. One way of doing so was to classify and systematize knowledge; another way was to search for natural laws that were thought to underlie human affairs and to devise scientific techniques of government and social regulation.
2. John Locke argued that governments were created to protect the people; he emphasized the importance of individual rights. Jean Jacques Rousseau asserted that the will of the people was sacred; he believed that people would act collectively on the basis of their shared historical experience.
3. Not all Enlightenment thinkers were radicals or atheists. Many, like Voltaire, believed that monarchs could be agents of change.
4. Some members of the European nobility (e.g., Catherine the Great of Russia, Frederick the Great of Prussia) patronized Enlightenment thinkers and used Enlightenment ideas as they reformed their bureaucracies, legal systems, tax systems, and economies. At the same time, these monarchs suppressed or banned radical ideas that promoted republicanism or attacked religion.
5. Many of the major intellectuals of the Enlightenment communicated with each other and with political leaders. Women were instrumental in the dissemination of their ideas; purchasing and discussing the writings of the Enlightenment thinkers; and, in the case of wealthy Parisian women, making their homes available for salons at which Enlightenment thinkers gathered.
6. The new ideas of the Enlightenment were particularly attractive to the expanding middle class in Europe and in the Western Hemisphere. Many European intellectuals saw the Americas as a new, uncorrupted place in which material and social progress would come more quickly than in Europe.
7. Benjamin Franklin came to symbolize the natural genius and the vast potential of America. Franklin’s success in business, his intellectual and scientific accomplishments, and his political career offered proof that in America, where society was free of the chains of inherited privilege, genius could thrive.
C. Folk Cultures and Popular Protest
1. Most people in Western society did not share in the ideas of the Enlightenment; common people remained loyal to cultural values grounded in the preindustrial past. These cultural values prescribed a set of traditionally accepted mutual rights and obligations that connected the people to their rulers.
2. When eighteenth-century monarchs tried to increase their authority and to centralize power by introducing more efficient systems of tax collection and public administration, the people regarded these changes as violations of sacred customs and sometimes expressed their outrage in violent protests. Such protests aimed to restore custom and precedent, not to achieve revolutionary change. Rationalist Enlightenment reformers also sparked popular opposition when they sought to replace popular festivals with rational civic rituals.
3. Spontaneous popular uprisings had revolutionary potential only when they coincided with conflicts within the elite.
II. The American Revolution, 1775–1800
A. Frontiers and Taxes
1. After 1763, the British government faced two problems in its North American colonies: the danger of war with the Amerindians as colonists pushed west across the Appalachians, and the need to raise more taxes from the colonists to pay the increasing costs of colonial administration and defense. British attempts to impose new taxes or to prevent further westward settlement provoked protests in the colonies.
2. In the Great Lakes region, British policies undermined the Amerindian economy and provoked a series of Amerindian raids on the settled areas of Pennsylvania and Virginia. The Amerindian alliance that carried out these raids was defeated within a year. Fear of more violence led the British to establish a western limit for settlement in the Proclamation of 1763 and to slow down settlement of the regions north of the Ohio and east of the Mississippi in the Quebec Act of 1774.
3. The British government tried to raise new revenue from the American colonies through a series of fiscal reforms and new taxes, including new commercial regulations, the Stamp Act of 1765, and other taxes and duties. In response to these actions, the colonists organized boycotts of British goods, staged violent protests, and attacked British officials.
4. Relations between the American colonists and the British authorities were further exacerbated by the killing of five civilians in the Boston Massacre (1770) and by the action of the British government in granting the East India Company a monopoly on the import of tea to the colonies. When colonists in Boston responded to the monopoly by dumping tea into Boston harbor, the British closed the port of Boston and put administration of Boston in the hands of a general.
B. The Course of Revolution, 1775–1783
1. Colonial governing bodies deposed British governors and established a Continental Congress that printed currency and organized an army. Ideological support for independence was given by the rhetoric of thousands of street-corner speakers, by Thomas Paine’s pamphlet Common Sense, and in the Declaration of Independence.
2. The British sent a military force to pacify the colonies. The British force won most of its battles, but it was unable to control the countryside. The British were also unable to achieve a compromise political solution to the problems of the colonies.
3. Amerindians served as allies to both sides. The Mohawk leader Joseph Brant led one of the most effective Amerindian forces in support of the British; when the war was over, he and his followers fled to Canada.
4. France entered the war as an ally of the United States in 1778 and gave crucial assistance to the American forces, including naval support that enabled Washington to defeat Cornwallis at Yorktown, Virginia. Following this defeat, the British negotiators signed the Treaty of Paris (1783), giving unconditional independence to the former colonies.
1. After independence, each of the former colonies drafted written constitutions that were submitted to the voters for approval. The Articles of Confederation served as a constitution for the United States during and after the Revolutionary War.
2. In May 1787, a Constitutional Convention began to write a new constitution that established a system of government that was democratic but gave the vote only to a minority of the adult male population and protected slavery.
III. The French Revolution, 1789–1815
A. French Society and Fiscal Crisis
1. French society was divided into three estates: the First Estate (clergy), the Second Estate (hereditary nobility), and the Third Estate (everyone else). The clergy and the nobility controlled vast amounts of wealth, and the clergy was exempt from nearly all taxes.
2. The Third Estate included the rapidly growing, wealthy middle class (bourgeoisie). While the bourgeoisie prospered, France’s peasants (80 percent of the population), its artisans, workers, and small shopkeepers, were suffering in the 1780s from economic depression caused by poor harvests. Urban poverty and rural suffering often led to violent protests, but these protests were not revolutionary.
3. During the 1700s, the expense of wars drove France into debt and inspired the French kings to try to introduce new taxes and fiscal reforms to increase revenue. These attempts met with resistance in the Parliaments and on the part of the high nobility.
B. Protest Turns to Revolution, 1789–1792
1. The king called a meeting of the Estates General to get approval of new taxes. The representatives of the Third Estate and some members of the First Estate declared themselves to be a National Assembly and pledged to write a constitution that would incorporate the idea of popular sovereignty.
2. As the king prepared to send troops to arrest the members of the National Assembly, the common people of Paris rose up in arms against the government, and peasant uprisings broke out in the countryside. The National Assembly was emboldened to set forth its position in the Declaration of the Rights of Man.
3. As the economic crisis grew worse, Parisian market women marched on Versailles and captured the king and his family. The National Assembly passed a new constitution that limited the power of the monarchy and restructured French politics and society. When Austria and Prussia threatened to intervene, the National Assembly declared war in 1791.
C. The Terror, 1793–1794
1. The king’s attempt to flee in 1792 led to his execution and to the formation of a new government, the National Convention, which was dominated by the radical Mountain faction of the Jacobins and by their leader, Robespierre.
2. Under Robespierre, executive power was placed in the hands of the Committee of Public Safety, militant feminist forces were repressed, new actions against the clergy were approved, and suspected enemies of the revolution were imprisoned and guillotined in the Reign of Terror (1793–1794). In July 1794, conservatives in the National Convention voted for the arrest and execution of Robespierre.
D. Reaction and the Rise of Napoleon, 1795–1815
1. After Robespierre’s execution, the Convention worked to undo the radical reforms of the Robespierre years, ratified a more conservative constitution, and created a new executive authority, the Directory. The Directory’s suspension of the election results of 1797 signaled the end of the republican phase of the revolution, while Napoleon’s seizure of power in 1799 marked the beginning of another form of government: popular authoritarianism.
2. Napoleon provided greater internal stability and protection of personal and property rights by negotiating an agreement with the Catholic Church (the Concordat of 1801), promulgating the Civil Code of 1804, and declaring himself emperor (also in 1804). At the same time, the Napoleonic system denied basic political and property rights to women and restricted speech and expression.
3. The stability of the Napoleonic system depended upon the success of the military and upon French diplomacy. No single European state could defeat Napoleon, but his occupation of the Iberian Peninsula turned into a costly war of attrition with Spanish and Portuguese resistance forces, while his 1812 attack on Russia ended in disaster. An alliance of Russia, Austria, Prussia, and England defeated Napoleon in 1814.
1. The French colony of Saint Domingue was one of the richest European colonies in the Americas, but its economic success was based on one of the most brutal slave regimes in the Caribbean.
2. The political turmoil in France weakened the ability of colonial administrators to maintain order and led to conflict between slaves and gens de couleur on the one hand and whites on the other. A slave rebellion under the leadership of François Dominique Toussaint L’Ouverture took over the colony in 1794.
3. Napoleon’s 1802 attempt to reestablish French authority led to the capture of L’Ouverture but failed to retake the colony, which became the independent republic of Haiti in 1804.
B. The Congress of Vienna and Conservative Retrenchment, 1815–1820
1. From 1814 to 1815, representatives of Britain, Russia, Prussia, and Austria met in Vienna to create a comprehensive peace settlement that would reestablish and safeguard the conservative order in Europe.
2. The Congress of Vienna restored the French monarchy; redrew the borders of France and other European states; and established a Holy Alliance of Austria, Russia, and Prussia. The Holy Alliance defeated liberal revolutions in Spain and Italy in 1820 and tried, without success, to repress liberal and nationalist ideas.
C. Nationalism, Reform, and Revolution, 1821–1850
1. Popular support for national self-determination and democratic reform grew throughout Europe. Greece gained its independence from the Ottoman Empire in 1830, while in France, the people of Paris forced the monarchy to accept constitutional rule and to extend voting privileges.
2. Democratic reform movements emerged in both Britain and in the United States. In the United States, the franchise was extended after the War of 1812, while in Britain, response to the unpopular Corn Laws resulted in a nearly 50 percent increase in the number of voters.
3. In Europe, the desire for national self-determination and democratic reform led to a series of revolutions in 1848. In France, the monarchy was overthrown and replaced by an elected president (Louis Napoleon); elsewhere in Europe, the revolutions of 1848 failed to gain either their nationalist or republican objectives.
1. The expense of colonial wars led to the imposition of new taxes on colonials.
2. Resentment over taxation led the British American colonies to fight and win their independence.
3. The new American government reflected for contemporaries the democratic ideals of the Enlightenment.
B. The French Revolution
1. Revolutionaries in France created a more radical form of representative democracy than the one found in America and suffered more violence as well.
2. Events in France led to the Haitian Revolution and Haiti’s independence.
3. Entrenched elite forces within, and foreign intervention from without, made the French and Haitian Revolutions more violent and destructive than the American Revolution. In France, chaos led to the rise of Napoleon.
C. Aftermath of Revolution
1. Conservative retrenchment after Napoleon prevailed in the short term in Europe, but nationalism and liberalism could not be held in check for long.
2. The new social classes arising with industrial capitalism ultimately demanded a new social and political order. The new political freedoms were limited to a minority. Women could not participate until the twentieth century, and slavery endured until the second half of the nineteenth century in America. | https://essaydocs.org/outline-chapter-22-revolutionary-changes-in-the-atlantic-world.html | 21 |
29 | How We Hear
Physiology of Healthy Auditory (Hearing) System
A sound wave travels to the outer ear (the part you see) and is delivered to the eardrum through a narrow tunnel, the ear canal. We hear when vibrations (sound waves) reach our auditory system (the hearing structures).
The ear consists of three parts: the outer ear, the middle ear, and the inner ear. The auditory nervous system, the most complex of all sensory pathways, then processes sound and carries signals between the inner ear and the central nervous system (the brain).
Hearing Loss (HL)
Classified by several different categories
Site - conductive hearing loss or sensorineural hearing loss; peripheral hearing loss or central hearing loss
Degree - slight, mild, moderate, moderately severe, severe, or profound/deafness (one common dB HL scale is sketched in the code example after this list)
Number - unilateral (one) or bilateral (both) ear(s)
Cause - noise-induced hearing loss (NIHL), occupational or recreational; age-related hearing loss (ARHL or presbycusis); hereditary HL (genetic disposition or genetic susceptibility); infections; toxicity; otosclerosis; tumor; labyrinthitis; trauma; head injury; cerumen impaction; etc.
Time of onset - congenital (present at birth), acquired, gradual, sudden, etc.
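As an illustration of how the "degree" category above is usually quantified, the short Python sketch below maps a pure-tone-average threshold (in dB HL) to these labels. The cutoff values follow one commonly cited audiological scale and are assumptions added here for illustration; clinics and organizations draw the boundaries slightly differently.

```python
# Minimal sketch, assuming one commonly used set of dB HL cutoffs.
# Boundary values are illustrative assumptions, not a clinical standard.

def degree_of_hearing_loss(threshold_db_hl: float) -> str:
    """Label a pure-tone-average threshold with a degree-of-loss category."""
    if threshold_db_hl <= 15:
        return "normal"
    if threshold_db_hl <= 25:
        return "slight"
    if threshold_db_hl <= 40:
        return "mild"
    if threshold_db_hl <= 55:
        return "moderate"
    if threshold_db_hl <= 70:
        return "moderately severe"
    if threshold_db_hl <= 90:
        return "severe"
    return "profound"

if __name__ == "__main__":
    for t in (10, 30, 60, 95):
        print(t, "dB HL ->", degree_of_hearing_loss(t))
```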
Hearing Loss in Early Childhood
It is imperative to test hearing in neonates, and ANY hearing loss detected needs to be compensated for.
As we learn more about the physiology of hearing and the pathophysiology of hearing loss, the evidence has shown that the function of the auditory nervous system can change, through neural plasticity, without any detectable changes in morphology. This means that the function of specific parts of the nervous system changes more or less permanently depending on how they are activated.
Sound deprivation from either conductive or sensorineural hearing loss can severely affect the normal development of the auditory nervous system during childhood, and even the function of the mature nervous system.
Development of the auditory nervous system depends on appropriate stimulation early in life. Deficits from insufficient stimulation during development are not reversible.
Hearing Loss In Adults
More adults than we think are Underdiagnosed & Undertreated
Plasticity of the auditory nervous system manifests in many forms and can be either compensatory or harmful. Studies of mice indicate that appropriate sound stimulation can slow the progression of age-related hearing loss (ARHL, presbycusis). Like untreated hearing loss in early childhood, prolonged hearing loss in adults is not reversible.
Despite rising concern about the negative effects of hearing loss, over 63 percent of U.S. adults aged 70 years and older have a hearing problem under the World Health Organization (WHO) definition of hearing loss (>20 dB), according to the National Institute on Deafness and Other Communication Disorders (NIDCD). Furthermore, fewer than 30 percent of those who could benefit from hearing aids have ever used them. That means 70 percent (two of three) are neglecting something that is crucial for their health.
The auditory system is very delicate, complex, and susceptible to damage.
Some definite risk factors are loud noise exposure (shooting, mechanical or factory work, motors, or loud music), head or neck trauma and injury, age, otosclerosis, ear infections/inflammation, ototoxic drugs, and Meniere's disease.
Possible risk factors are familial inheritance, geographic region, fair or poor health status, low socioeconomic status, and smoking.
Health Risk Related to Hearing Loss
Impact of Untreated Hearing Loss
Many researchers have reported that hearing loss is linked to other medical conditions, such as cognitive decline/dementia, Alzheimer's disease, social isolation, loneliness, depression, increased falls, cardiovascular disease, diabetes, and mortality.
The First Step For A Hearing Concern
Why might seeing an audiologist first benefit you?
Because audiologists are medically trained to diagnose hearing loss, including its degree and cause, and to recommend the right treatment solutions for each patient. They are also trained to treat functional disorders of hearing when the loss is not medically treatable.
Audiologists may refer you to a general ENT, otologist, or neurotologist for medical or surgical options, or to your primary care physician (PCP) for further differential diagnosis, depending on the findings. In this way, you might save time and resources during diagnosis and treatment planning, because all of these other professionals need the results of hearing tests to assist their diagnosis.
In most retail stores (including Costco, Sam's Club, etc.) there is no audiologist, but you may see a hearing instrument specialist (HIS). These providers go by many different names, all referred to as hearing care professionals. They are certified and licensed to dispense hearing aids legally. However, the licensing requirements in most states are very loose in terms of education and clinical training, and that training is often focused more on sales.
It is your choice to select your hearing care professional wisely, and the medical field would recommend that your initial diagnosis be done by a medical professional, not sales personnel. Anything short of "the best fit" may worsen your auditory system.
Hearing Care is Caring for Your Loved Ones.
Hearing loss may impose difficulty on your friends and family.
In conversation with a person with hearing loss, people might have to speak louder than their usual comfortable level, which strains the vocal cords and causes frustration or tiredness.
Repetition, sometimes several times, is usually required.
A loud TV volume can bother others, especially those whose hearing is sensitive to loud sound.
Learn communication strategies together with your friends and family.
It is the best strategy!
One of the most hazardous, but also most preventable, causes of hearing loss is noise exposure. The extent of damage from noise exposure varies depending mainly on the intensity of the noise and the duration of the exposure. The Occupational Safety and Health Administration (OSHA) sets a noise standard with which every workplace is required to comply.
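As a rough quantitative illustration (added here, not from the original page), OSHA's occupational criterion is commonly summarized as 90 dBA permitted for an 8-hour shift with a 5-dB exchange rate, meaning every 5-dB increase in level halves the allowed exposure time. The Python sketch below implements that rule of thumb; treat the parameter values as assumptions and consult the actual OSHA standard (29 CFR 1910.95) and a hearing professional for anything beyond illustration.

```python
# Minimal sketch of the permissible-exposure rule of thumb described above.
# Assumed criterion: 90 dBA for 8 hours, 5-dB exchange rate (each +5 dB halves time).

def permissible_hours(level_dba: float,
                      criterion_dba: float = 90.0,
                      criterion_hours: float = 8.0,
                      exchange_rate_db: float = 5.0) -> float:
    """Allowed daily exposure (hours) at a steady noise level."""
    return criterion_hours / (2 ** ((level_dba - criterion_dba) / exchange_rate_db))

if __name__ == "__main__":
    for level in (90, 95, 100, 110):
        print(f"{level} dBA -> {permissible_hours(level):.2f} hours allowed")
    # 90 -> 8.00, 95 -> 4.00, 100 -> 2.00, 110 -> 0.50
```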
NIHL also occurs through recreational loud noise such as shooting, power saws, snowmobiles, loud music, etc.
Continuous exposure is more damaging than intermittent exposure. Breaks between exposures give the auditory system a rest and allow hearing thresholds shifted by noise to recover.
To reduce the intensity, wear personal hearing protectors. Investing in protection while your ears are healthy is far more beneficial than trying to compensate after a loss occurs; the outcomes are not comparable.
Get a baseline audiogram and monitor it annually if you are exposed to a very noisy environment, whether occupational or recreational.
Get professional advice and custom hearing protection, and get educated for better motivation. | https://www.hearbetterathome.com/your-hearing | 21
108 | By VT Editors
Submitted by Philip Power
In the British West Indies much of the early capital to finance White slavery came from Sephardic Jews from Holland. They provided credit, machinery and shipping facilities. In the 1630s Dutch Jews had been deeply involved in the enslavement of the Irish, financing their transport to slave plantations in the tropics. By the 1660s, this combination of Jewish finance and White slave labor made the British island colony of Barbados the richest in the empire. The island’s value, in terms of trade and capital exceeded that of all other British colonies combined. (John Oldmixon, The British Empire in America, vol. 2, p. 186.)
Of the fact that the wealth of Barbados was founded on the backs of White slave labor there can be no doubt. White slave laborers from Britain and Ireland were the mainstay of the sugar colony. Until the mid-1640s there were few Blacks in Barbados. George Downing wrote to John Winthrop, the colonial governor of Massachusetts in 1645, that planters who wanted to make a fortune in the British West Indies must procure White slave labor “out of England” if they wanted to succeed. (Elizabeth Donnan, Documents Illustrative of the History of the Slave Trade to America, pp. 125-126. In Hoffman, They Were White and They Were Slaves p. 12.)
From their experience with rebellious Irish slaves, Dutch Jews would eventually be instrumental in the switch from White to Black slavery in the British West Indies. Blacks were more docile – and more profitable for the Jews.
The English traffic in slaves in the first half of the seventeenth century was solely in White slaves. The English had no slave base in West Africa, as did the Dutch Jews. Moreover, Dutch Jews were not only bankers and shipping magnates but slave masters and plantation owners themselves. Jews were forbidden by English law to own White Protestant slaves, although in practice this was not uniformly enforced. Irish slaves were allowed to Jewish owners but were regarded by them as intractable. Hence the Jews became prime movers behind the African slave trade and the importation of Negro slaves into the New World. (Dalby Thomas, An Historical Account of the Rise and Growth of the British West Indies, pp. 36-37, and G. Merrill, ‘The Role of the Sephardic Jews in the British Caribbean Area in the Seventeenth Century,’ Caribbean Studies, vol. 4, no. 3, 1964-65, pp. 32-49.)
Hundreds of thousands of White slaves were kidnapped off the streets and roads of Great Britain in the course of more than one hundred and fifty years and sold to captains of slave ships in London known as “White Guineamen.” Ten thousand Whites were kidnapped from England in the year 1670 alone.’
They Were White and They Were Slaves
The Untold History of the Enslavement of Whites in Early America
Michael A. Hoffman
- Mainly Britain
White Slavery in Ancient and Medieval Europe
Among the ancient Greeks, despite their tradition of democracy, the enslavement of fellow Whites – even fellow Greeks – was the order of the day. Aristotle considered White slaves as things. The Romans also had no compunctions against enslaving Whites who they too termed “a thing” (res). In his agricultural writings, the first century B.C. Roman philosopher Varro labeled White slaves as nothing more than “tools that happened to have voices” (instrumenti vocale). Cato the Elder, discoursing on plantation management, proposed that White slaves when old or ill should be discarded along with worn-out farm implements.
Julius Caesar enslaved as many as one million Whites from Gaul, some of whom were sold to the slave dealers who followed his victorious legions (William D. Phillips, Jr., Slavery from Roman Times to the Early Transatlantic Trade, p.18).
In A.D. 319 the “Christian” emperor of Rome, Constantine, ruled that if an owner whipped his White slave to death “he should not stand in any criminal accusation if the slave dies; and all statutes of limitations and legal interpretations are hereby set aside.”
The Romans enslaved thousands of the early White inhabitants of Great Britain who were known as “Angles,” from which we derive the term “Anglo-Saxon” as a description of the English race. In the sixth century Pope Gregory the First witnessed blond-haired, blue-eyed English boys awaiting sale in a slave market in Rome. Inquiring of their origin, the Pope was told they were Angles. Gregory replied, “Non Angli, sed Angeli” (“Not Angles, but Angels”).
Arabs and the Traffic in White Slaves
The fate of the hundreds of thousands of White slaves sold to the Arabs was described in one Spanish text as “atrocissima et ferocissima“ (most atrocious and harsh). The men were worked to death as galley slaves. The women, girls and boys were used as prostitutes.
White males had their genitals mutilated in castration attempts – bloody procedures of incredible brutality which most of the White men who were forced to submit did not survive, judging from the high prices White eunuchs commanded throughout the Middle Eastern slave markets.
Escape from North Africa and the Middle East was almost impossible and those White slaves who were caught trying to flee were punished by having their noses and ears cut off, or worse.
“The Norwegian slave trader was an important enough figure to appear in the 12th century tale of Tristan… Icelandic literature also provides numerous references to raiding in Ireland as a source for slaves…
“Norwegian Vikings made slave raids not only against the Irish and Scots (who are often called Irish in Norse sources) but also against Norse settlers in Ireland or the Scottish Isles or even in Norway itself… Slave trading was a major commercial activity of the Viking Age.” (Ruth Mazo Karras, Slavery and Society in Medieval Scandinavia, p. 49.) The children of White slaves in Iceland were routinely murdered en masse (Karras, p. 52).
White Slavery in Early America
In the Calendar of State Papers of 1701, we read of a protest over the “encouragement to the spiriting away of Englishmen without their consent and selling them for slaves, which hath been a practice very frequent and known by the name of kidnapping.”
In the British West Indies, plantation slavery was instituted as early as 1627. In Barbados by the 1640s there were an estimated 25,000 slaves, of whom 21,700 were White. (“Some Observations on the Island of Barbados,” Calendar of State Papers, Colonial Series, America and West Indies, p. 528.)
It is worth noting that while White slaves were worked to death in Barbados, there were Caribbean Indians brought from Guiana to help propagate native foodstuffs who were well-treated and received as free persons by the wealthy planters.
* * *
“White indentured servants were employed and treated, incidentally, exactly like slaves.” (Morley Ayearst, The British West Indies, p. 19.)
“The many gradations of unfreedom among Whites made it difficult to draw fast lines between any idealized free White worker and a pitied or scorned servile Black worker… in labor-short seventeenth and eighteenth-century America the work of slaves and that of White servants were virtually interchangeable in most areas.” (David R. Roediger, The Wages of Whiteness: Race and the Making of the American Working Class, p. 25.)
A Holocaust Against the White Poor
In 1723 the Waltham Act was passed, which made more than 200 minor offenses, such as stealing a rabbit from an aristocrat or breaking up his fishpond, crimes punishable by hanging. Starving youths, fourteen years old, were strung up on Tyburn gallows for stealing as little as one sheep. When their bodies were cut down their parents had to fight over them with agents of the Royal College of Physicians who had been empowered by the courts to use their remains for laboratory dissection.
The English historian William Cobbett stated in 1836,
“The starving agricultural laborers of southern England are worse off than American negroes.”
When in 1834 English farm workers in Dorset tried to form a union in order to “preserve ourselves, our wives and our children from starvation” they were shipped into slavery in Australia for this “crime.” The situation of White factory workers was no better. Robert Owen declared in 1840, “The working classes of Great Britain are in a worse condition than any slaves in any country, in any period of the world’s history.”
In 19th century England tens of thousands of White children were employed as slave laborers in British coal mines. Little White boys, seven years old, were harnessed like donkeys to coal carts and ordered to drag them through mine shafts. In 1843, White children aged four were working in the coal pits. In old English cemeteries can be seen epitaphs on grave stones like one which reads,
“William Smith, aged eight years, Miner, died Jan. 3, 1841.”
The root of the holocaust against the White yeomanry of Britain lies in the history of the land swindles perpetrated against them in the late 12th and early 13th centuries.
The Factory System
[Rev. Richard] Oastler was publicly thanked by a delegation of English laborers at a meeting in York “for his manly letters to expose the conduct of those pretended philanthropists and canting hypocrites who travel to the West Indies in search of slavery, forgetting there is a more abominable and degrading system of slavery at home.” (Cecil Driver, Tory Radical: The Life of Richard Oastler, pp. 36-55; lnglis, p. 260.)
* * *
White children worked up to sixteen hours a day and “During that period the doors were locked; children – and most of the mill workers were still children – were allowed out only ‘to go to the necessary’… In some factories it was forbidden to open any of the windows; cotton fluff was everywhere, including on the children’s food, but often, as they had to stand all day, they were too fatigued to have any appetite… The (child) apprentices who were on night shift might stay on it for as long as four or five years… although they were provided with dinner at midnight, the machinery did not stop.”
(lnglis, pp. 80, 163, 164, 262).
* * *
“In Bleak House Dickens was to satirize evangelical ‘telescopic philanthropy’ in the person of Mrs. Jellyby, a do-gooder so absorbed in the welfare of the African natives of Borrioboola-Gha that she fails to notice her own family sinking into ruin. This was precisely Carlyle’s point: with Irish… dying in ditches… it was the worst sort of rose-pink sentimentalism to worry oneself about West Indian Negroes.” (Eugene R. August, introduction to Thomas Carlyle’s The Nigger Question, Crofts Classics edition, p. xvii.)
In the late 1830s William Dodd began his exhaustive research into the condition of the English poor. He estimated that in the year 1846 alone, 10,000 English workers, many of them children, had been mangled and mutilated by machinery or otherwise disabled for life. They were abandoned and received no compensation of any kind. Many died of their injuries.
* * *
“Young children are allowed to clean the machinery, actually while it is in motion; and consequently the fingers, hands and arms are frequently destroyed in a moment. I have seen the whole of the arm, from the tip of the fingers to above the elbow, chopped into mince-meat, the cog-wheels cutting through the skin, muscles and in some places, through the bone… in one instance every limb but one was broken.” (William Dodd, The Factory System Illustrated, pp. 21-22.)
“Accidents were often due to… children being set to clean machinery while it was still in motion. The loss of two or three fingers was not exceptional. There were more serious accidents… such as that reported by a Stockport doctor in 1840 of a girl caught by the hair and scalped from the nose to the back of the head. The manufacturer gave her five shillings. She died in the workhouse.” (Cruickshank, p. 51.)
Nineteenth century factory worker William Dodd stated,
“Petition after petition has been sent to the two houses of Parliament, to the prime minister, and to the Queen, concerning this unfortunate class of British subjects, but without effect. Had they only been black instead of white, their case would have been taken into consideration long ago.”
* * *
Ruth Holland, commenting on the participation of New England factory owners in the cause of abolitionism and rights for negroes in the south, observed, “It’s a little difficult to believe that northern mill owners, who were mercilessly abusing [White] children for profit, felt such pure moral indignation at [negro] slavery.” (Mill Child, p. 28.)
In May, 1839, a boy, William Wilson, was ordered to clean a flue that was still hot from a fire that had recently been extinguished. “He was grievously burned” and hospitalized “for weeks” during which he was described by a care-giver as a “meek, gentle, little creature… the tears started in his eyes when he was spoken kindly to.” (Society for Superseding the Necessity of Climbing Boys, The Trade of Chimney Sweeping, pp. 2-20.)
* * *
“When Parliament abolished Negro slavery in 1808, the flues of its august chambers were being climbed by boys four, five and six years of age, sold… to chimney sweepers for prices ranging from a few shillings to two guineas – the smaller the child, the better the price.” (George L. Phillips, p. 3.)
* * *
Samuel Roberts, in an 1834 essay on the boys used as chimney-sweeps, addressed his indignation toward the upper-class British females who met in their sumptuously-appointed parlors to weep with tender-hearted solicitude over the latest accounts from America of oppression to Negroes, while in the next room, scarred and burned five year-old English boys enslaved as human brooms, were being forced up the lady’s chimney without a thought for their welfare:
There is a race of human beings in this country, the Chimney Sweepers’ Climbing Boys… which… is more oppressed than the negroes in either the West Indian Islands, or in North America… These objects are all young and helpless. Their employment is ten-fold more horrible than that of any attaching to the (negro) slaves… A far greater number of them are crippled, and rendered deformed for life. A far greater proportion of them die in consequence of hard usage, while the horrible deaths from suffocation, burning, and other accidents, are in this case, beyond measure more numerous. And all this at home, within our knowledge, before our eyes… in our very houses… How many of these poor infants… arrive at years of maturity?… of those who die young, who knows (or cares?) anything about them? The death of any of your favourite dogs would be more lamented.
(An Address to British Females, pp. 11-17.)
- Mainly America
Breaking the Chains of Illusion
White slaves were actually owned by Negroes and Indians in the South to such an extent that the Virginia Assembly passed the following law in 1670: “It is enacted that noe Negro or Indian though baptized and enjoyned their owne ffreedome shall be capable of any such purchase of christians.” (Statutes of the Virginia Assembly, Vol. 2, pp. 280-81.)
Negroes also owned other Negroes in America (Charleston County Probate Court Records, 1754-1758, p. 406).
While Whites languished in chains Blacks were free men in Virginia throughout the 17th century (Willie Lee Rose, A Documentary History of Slavery in North America, p. 15; John Henderson Russell, Free Negro in Virginia, 1619-1865, p.23; Bruce Levine, et al., Who Built America?, vol. I, p. 52).
In 1717, it was proposed that a qualification for election to the South Carolina Assembly was to be “the ownership of one white man.”
(Journals of the Commons House of Assembly of the Province of South Carolina: 1692-1775, volume 5, pp. 294-295.)
Negroes voted in the Carolina counties of Berkeley and Craven in 1706 “and their votes were taken.” (Levine, p. 63.)
Poor Whites and the Southern Confederacy
Try to envision the 19th century scene: yeoman southern Whites, sick and destitute, watching their children dying while enduring the spectacle of negroes from the jungles of Africa healthy and well-fed thanks to the ministrations of their fabulously wealthy White owners who cared little or nothing for the local “White trash.”
In the course of an 1855 journey up the Alabama River on the steamboat Fashion, Frederick Law Olmsted, the landscape architect who designed New York’s Central Park, observed bales of cotton being thrown from a considerable height into a cargo ship’s hold. The men tossing the bales somewhat recklessly into the hold were Negroes; the men in the hold were Irish. Olmsted inquired about this to a mate on the ship. “‘Oh,’ said the mate, ‘the Niggers are worth too much to be risked here; if the Paddies are knocked overboard or get their backs broke, nobody loses anything.’” (Frederick Law Olmsted, A Journey to the Seaboard Slave States, pp. 100-101; G.E.M. de Ste. Croix, Slavery and Other Forms of Unfree Labor, p. 27.)
* * *
From 1609 until the early 1800s, between one-half and two thirds of all the White colonists who came to the New World came as slaves. Of the passengers on the Mayflower, twelve were White slaves (John Van der Zee, Bound Over, p. 93). White slaves cleared the forests, drained the swamps, built the roads. They worked and died in greater numbers than anyone else.
Whites Were the First Slaves in America
“The practice developed and tolerated in the kidnapping of Whites laid the foundation for the kidnapping of Negroes.”
(Eric Williams, From Columbus to Castro, p. 103.)
The official papers of the White slave trade refer to adult White slaves as “freight” and White child slaves were termed “half-freight.” Like any other commodity on the shipping inventories, White human beings were seen strictly in terms of market economics by merchants.
* * *
In the 17th century White slaves were cheaper to acquire than Negroes and therefore were often mistreated to a greater extent.
Having paid a bigger price for the Negro, “the planters treated the black better than they did their ‘Christian’ white servant. Even the Negroes recognized this and did not hesitate to show their contempt for those white men who, they could see, were worse off than themselves.”
(Bridenbaugh, p. 118.)
It was White slaves who built America from its very beginnings and made up the overwhelming majority of slave laborers in the colonies in the 17th century. Negro slaves seldom had to do the kind of virtually lethal work the White slaves of America did in the formative years of settlement. “The frontier demands for heavy manual labor, such as felling trees, soil clearance, and general infrastructural development, had been satisfied primarily by white indentured servants between 1627 and 1643.”
(Beckles, Natural Rebels, p. 8.)
* * *
Hundreds of thousands of Whites in colonial America were owned outright by their masters and died in slavery. They had no control over their own lives and were auctioned on the block and examined like livestock exactly like Black slaves, with the exception that these Whites were enslaved by their own race. White slaves “found themselves powerless as individuals, without honor or respect and driven into commodity production not by any inner sense of moral duty but by the outer stimulus of the whip.”
(Beckles, White Servitude, p. 5.)
Upon arrival in America, White slaves were “put up for sale by the ship captains or merchants… Families were often separated under these circumstances when wives and offspring were auctioned off to the highest bidder.” (Foster R. Dulles, Labor in America: A History, p. 7.)
“Eleanor Bradbury, sold with her three sons to a Maryland owner, was separated from her husband, who was bought by a man in Pennsylvania.” (Van der Zee, p. 165.)
White people who were passed over for purchase at the point of entry were taken into the back country by “soul drivers” who herded them along “like cattle to a Smithfield market” and then put them up for auction at public fairs. “Prospective buyers felt their muscles, checked their teeth… like cattle.” (Sharon Salinger, To Serve Well and Faithfully, Labor and Indentured Servants in Pennsylvania, 1682-1800, p. 97.)
“Indentured servants were sold at auction, sometimes after being stripped naked.” (Roediger, p. 30.) “We were… exposed to sale in public fairs as so many brute beasts.” (Ekirch, p. 129.)
* * *
The Virginia Company arranged with the City of London to have 100 poor White children “out of the swarms that swarme in the place” sent to Virginia in 1619 for sale to the wealthy planters of the colony to be used as slave labor. The Privy Council of London authorized the Virginia Company to “imprison, punish and dispose of any of those children upon any disorder by them committed, as cause shall require.”
The trade in White slaves was a natural one for English merchants who imported sugar and tobacco from the colonies. Whites kidnapped in Britain could be exchanged directly for this produce. The trade in White slaves was basically a return haul operation.
* * *
At the bare minimum, hundreds of thousands of White slaves were kidnapped off the streets and roads of Great Britain in the course of more than one hundred and fifty years and sold to captains of slave ships in London known as “White Guineamen.”
Ten thousand Whites were kidnapped from England in the year 1670 alone (Edward Channing, History of the United States, vol. 2, p. 369). The very word “kidnapper” was first coined in Britain in the 1600s to describe those who captured and sold White children into slavery (“kid-nabbers”).
Torture and Murder of White Slaves
White slaves were punished with merciless whippings and beatings. The records of Middlesex County, Virginia relate how a slave master confessed “that he hath most uncivilly and inhumanly beaten a (White) female with great knotted whipcord – so that the poor servant is a lamentable spectacle to behold.”
“Whippings were commonplace… as were iron collars and chains.”
(Ekirch, p. 150.)
A case in the county from 1655 relates how a White slave was “fastened by a lock with a chain to it” by his master and tied to a shop door and “whipped till he was very bloody.” The beating and whipping of White slaves resulted in so many being beaten to death that in 1662 the Virginia Assembly passed a law prohibiting the private burial of White slaves because such burial helped to conceal their murders and encouraged further atrocities against other White slaves.
A grievously ill White slave was forced by his master to dig his own grave, since there was little likelihood that the master would obtain any more labor from him. The White slave’s owner “made him sick and languishing as he was, dig his own grave, in which he was laid a few days afterwards, the others being too busy to dig it, having their hands full in attending to the tobacco.”
(Jaspar Danckaerts and Peter Sluyter, Journal of a Voyage to New York and a Tour of Several American Colonies, 1679-1680.)
In New England, Nicholas Weekes and his wife deliberately cut off the toes of their White slave who subsequently died. Marmaduke Pierce in Massachusetts severely beat a White slave boy with a rod and finally beat him to death. Pierce was not punished for the murder. In 1655 in the Plymouth Colony a master named Mr. Latham, starved his 14-year-old White slave boy, beat him and left him to die outdoors in sub-zero temperatures. The dead boy’s body showed the markings of repeated beatings and his hands and feet were frozen solid.
Colonial records are full of the deaths by beating, starvation and exposure of White slaves in addition to tragic accounts such as the one of the New Jersey White slave boy who drowned himself rather than continue to face the unmerciful beatings of his master (American Weekly Mercury, Sept. 2-9, 1731).
Henry Smith beat to death an elderly White slave and raped two of his female White slaves in Virginia. John Dandy beat to death his White slave boy whose black and blue body was found floating down a creek in Maryland. Pope Alvey beat his White slave girl Alice Sanford to death in 1663. She was reported to have been “beaten to a Jelly.” Joseph Fincher beat his White slave Jeffery Haggman to death in 1664.
John Grammer ordered his plantation overseer to beat his White slave 100 times with a cat-o’-ninetails. The White slave died of his wounds. The overseer, rather than expressing regret at the death he inflicted stated, “I could have given him ten times more.” There are thousands of cases in the colonial archives of inhuman mistreatment, cruelty, beatings and the entire litany of Uncle Tom’s Cabin horrors administered to hapless White slaves.
In Australia, White slave Joseph Mansbury had been whipped repeatedly to such an extent that his back appeared, “quite bare of flesh, and his collar bones were exposed looking very much like two ivory, polished horns. It was with difficulty that we could find another place to flog him. Tony [Chandler, the overseer] suggested to me that we had better do it on the soles of his feet next time.” (Robert Hughes, The Fatal Shore, p. 115.) Hughes describes the fate of White slaves as one of “prolonged and hideous torture.”
One overseer in Australia whose specialty was whipping White slaves would say while applying his whip on their backs, “Another half pound mate, off the beggar’s ribs.” The overseer’s face and clothes were described as having the appearance of “a mincemeat chopper, being covered in flesh from the victim’s body.” (Hughes, p. 115.)
* * *
Nor should it be concluded that because some trials were held for those masters who murdered their White slaves that this reflected a higher justice than that given to Black slaves. In thousands of cases of homicide against poor Whites there were no trials whatsoever – murdered White slaves were hurriedly buried by their masters so that the resulting decomposition would prohibit any enquiry into the cause of their deaths. Others just “disappeared” or died from “accidents” or committed “suicide.” Many of the high number of so-called “suicides” of White slaves took place under suspicious circumstances, but in every single case the slave master was found innocent of any crime. (For acquittals of masters in Virginia or instances of failure to prosecute them for the murder of White slaves, see Virginia General Court Minutes, VMH, XIX, 388.)
* * *
A White orphan boy was kidnapped in Virginia and enslaved under the guise of “teaching him a trade.” The boy was able to have the Rappahannock County Court take notice of his slavery:
“An orphan complained on July 2, 1685 that he was held in a severe and hard servitude illegally and that he was taken by one Major Hawkins ‘under pretense of giving him learning.’ The case came before the court on August 2, but the justices decided that he must continue in the service of his present master.” (Jernegan, pp. 159-160.)
“They possessed one right – to complain to the planter-magistrates concerning excessively violent abuse. But this right, which by custom was also available to black slaves in some societies, had little or no mitigating effect on the overall nature of their treatment on the estates.”
(Beckles, White Servitude, p. 5.
For information on Blacks allowed to accuse White slave masters in court and who were freed from slavery as a result of hearings before White judges, see the Minutes of Council of March 10, 1654 in the Lucas manuscripts, reel 1, f. 92, Bridgetown Public Library, Barbados.)
Crackers, Redlegs, Rednecks and Hillbillies
O Dear Father… I am sure you’ll pity your distressed daughter. What we unfortunate English people suffer here is beyond the probability of you in England to conceive.
Let it suffice that I am one of the unhappy number toiling day and night, and very often in the horse’s druggery, with only the comfort of hearing me called, ‘You, bitch, you did not do half enough.’
Then I am tied up and whipped to that degree that you’d not serve an animal. I have scarce anything but Indian corn and salt to eat and that even begrudged. Nay, many Negroes are better used…
After slaving after Master’s pleasure, what rest we can get is to wrap ourselves up in a blanket and lay upon the ground. This is the deplorable condition your poor Betty endures.
From a letter by White slave Elizabeth Sprigs in Maryland to her father John Sprigs in London, September 22, 1756. (High Court of the Admiralty, London, Public Record Office.)
Michael A. Hoffman, They Were White and They Were Slaves, Independent History & Research Co., Coeur d’Alene, Idaho, 1992, 137pp.
‘The cruelties of the Algerine pirates, shewing the present dreadful state of the English slaves, and other Europeans, at Algiers and Tunis; with the horrid barbarities inflicted on Christian mariners shipwrecked on the north western coast of Africa and carried into perpetual slavery.’
An Untold Story of White Slavery
Prof. Robert C. Davis, Christian Slaves, Muslim Masters reviewed by Thomas Jackson
Whites have forgotten what blacks take pains to remember
As Robert C. Davis notes in this eye-opening account of Barbary Coast slavery, American historians have studied every aspect of enslavement of Africans by whites but have largely ignored enslavement of whites by North Africans. Christian Slaves, Muslim Masters is a carefully researched, clearly written account of what Prof. Davis calls “the other slavery,” which flourished during approximately the same period as the trans-Atlantic trade, and which devastated hundreds of European coastal communities. Slavery plays nothing like the central role in the thinking of today’s whites that it does for blacks, but not because it was a fleeting or trivial matter. The record of Mediterranean slavery is indeed as black as the most tendentious portrayals of American slavery. Prof. Davis, who teaches Italian social history at Ohio State University, casts a piercing light into this fascinating but neglected corner of history.
A wholesale business
The Barbary Coast, which extends from Morocco through modern Libya, was home to a thriving man-catching industry from about 1500 to 1800. The great slaving capitals were Salé in Morocco, Tunis, Algiers, and Tripoli, and for most of this period European navies were too weak to put up more than token resistance.
The trans-Atlantic trade in blacks was strictly commercial, but for Arabs, memories of the Crusades and fury over expulsion from Spain in 1492 seem to have fuelled an almost jihad-like Christian-stealing campaign. “It may have been this spur of vengeance, as opposed to the bland workings of the marketplace, that made the Islamic slavers so much more aggressive and initially (one might say) successful in their work than their Christian counterparts,” writes Prof. Davis. During the 16th and 17th centuries more slaves were taken south across the Mediterranean than west across the Atlantic. Some were ransomed back to their families, some were put to hard labour in North Africa, and the unluckiest worked themselves to death as galley slaves.
What is most striking about Barbary slaving raids is their scale and reach. Pirates took most of their slaves from ships, but they also organised huge, amphibious assaults that practically depopulated parts of the Italian coast. Italy was the most popular target, partly because Sicily is only 125 miles from Tunis, but also because it did not have strong central rulers who could resist invasion.
Large raiding parties might be essentially unopposed. When pirates sacked Vieste in southern Italy in 1554, for example, they took an astonishing 6,000 captives. Algerians took 7,000 slaves in the Bay of Naples in 1554, in a raid that drove the price of slaves so low it was said you could “swap a Christian for an onion.” Spain too, suffered large-scale attacks. After a raid on Grenada in 1556 netted 4,000 men, women and children, it was said to be “raining Christians in Algiers.” For every large-scale raid of this kind there would have been dozens of smaller ones.
The appearance of a large fleet could send the entire population inland, emptying coastal areas. In 1566, a party of 6,000 Turks and Corsairs sailed up the Adriatic and landed at Fracaville. The authorities could do nothing and urged complete evacuation, leaving the Turks in control of over 500 square miles of abandoned villages all the way to Serracapriola.
When pirates appeared, people often fled the coast to the nearest town, but Prof. Davis explains why this was not always good strategy:
“More than one middle-sized town, swollen with refugees, was unable to withstand a frontal assault by several hundred corsairs, and the re’is (a corsair captain), who might otherwise have had to seek slaves a few dozen at a time along the beaches and up into the hills, could find a thousand or more captives all conveniently gathered in one place for the taking.”
Pirates returned time and again to pillage the same territory. In addition to a far larger number of smaller raids, the Calabrian coast suffered the following increasingly large-scale depredations in less than a 10-year period: 700 captured in a single raid in 1636, 1,000 in 1639 and 4,000 in 1644. During the 16th and 17th centuries, pirates set up semi-permanent bases on the islands of Ischia and Procida, practically within the mouth of the Bay of Naples, from which they took their pick of commercial traffic.
When they came ashore, Muslim corsairs made a point of desecrating churches. They often stole church bells, not just because the metal was valuable but also to silence the distinctive voice of Christianity.
In the more frequent smaller raiding parties, just a few ships would operate by stealth, falling upon coastal settlements in the middle of the night so as to catch people “peaceful and still naked in their beds.” This practice gave rise to the modern day Sicilian expression, pigliato dai turchi, or “taken by the Turks,” which means to be caught by surprise while asleep or distracted.
Constant predating took a terrible toll. Women were easier to catch than men, and coastal areas could quickly lose their entire child-bearing population. Fishermen were afraid to go out, or would sail only in convoys. Eventually, Italians gave up much of their coast. As Prof. Davis explains, by the end of the 17th century, “the Italian peninsula had by then been prey to the Barbary corsairs for two centuries or more, and its coastal populations had largely withdrawn into walled, hilltop villages or the larger towns like Rimini, abandoning miles of once populous shoreline to vagabonds and freebooters.”
Only by 1700 or so, were Italians able to prevent spectacular land raids, though piracy on the seas continued unchecked. Prof. Davis believes piracy caused Spain and especially Italy to turn away from the sea and lose their traditions of trade and navigation – with devastating effect. “At least for Iberia and Italy, the seventeenth century represented a dark period out of which Spanish and Italian societies emerged as mere shadows of what they had been in the earlier, golden ages.”
Some Arab pirates were skilled blue-water sailors, and terrorized Christians 1,000 miles away. One spectacular raid all the way to Iceland in 1627 took nearly 400 captives. We think of Britain as a redoubtable sea power since the time of Drake, but throughout the 17th century, Arab pirates operated freely in British waters, even sailing up the Thames estuary to pick off prizes and raid coastal towns. In just three years, from 1606 to 1609, the British navy admitted losing no fewer than 466 British and Scottish merchant ships to Algerian corsairs. By the mid-1600s the British were running a brisk trans-Atlantic trade in blacks, but many British crewmen themselves became the property of Arab raiders.
Life under the lash
Land attacks could be hugely successful, but they were riskier than taking prizes at sea. Ships were therefore the primary source of white slaves. Unlike their victims, corsair vessels had two means of propulsion: galley slaves as well as sails. This meant they could row up to any becalmed sailing ship and attack at will. They carried many different flags, so when they were under sail they could run up whatever ensign was most likely to gull a target.
A good-sized merchantman might yield 20 or so sailors healthy enough to last a few years in the galleys, and passengers were usually good for a ransom. Noblemen and rich merchants were attractive prizes, as were Jews, who could usually scrape up a substantial ransom from co-religionists. High clerics were also valuable because the Vatican would usually pay any price to keep them out of the hands of infidels.
At the approach of pirates, passengers often tore off their fine clothes and tried to dress as poorly as possible in the hope their captors would send to their families for more modest ransoms. This effort would be wasted if the pirates tortured the captain for information about passengers. It was also common to strip men naked, both to examine their clothes for sewn-in valuables and to see if any circumcised Jews were masquerading as gentiles.
If the pirates were short on galley slaves, they might put some of their captives to work immediately, but prisoners usually went below hatches for the journey home. They were packed in, barely able to move in the filth, stench, and vermin, and many died before they reached port.
Once in North Africa, it was tradition to parade newly-captured Christians through the streets, so people could jeer at them, and children would pelt them with refuse. At the slave market, men were made to jump about to prove they were not lame, and buyers often wanted them stripped naked again to see if they were healthy. This was also to evaluate the sexual value of both men and women; white concubines had a high value, and all the slave capitals had a flourishing homosexual underground. Buyers who hoped to make a quick profit on a fat ransom examined earlobes for signs of piercing, which was an indication of wealth. It was also common to check a captive’s teeth to see if he was likely to survive on a tough slave diet.
The pasha or ruler of the area got a certain percentage of the slave take as a form of income tax. These were almost always men, and became government rather than private property. Unlike private slaves, who usually boarded with their masters, they lived in the bagnos or “baths,” as the pasha’s slave warehouses came to be called. It was common to shave the heads and beards of public slaves as an added humiliation, in a period when head and facial hair were an important part of a man’s identity.
Most of these public slaves spent the rest of their lives as galley slaves, and it is hard to imagine a more miserable existence. Men were chained three, four, or five to an oar, with their ankles chained together as well. Rowers never left their oars, and to the extent that they slept at all, they slept on their benches. Slaves could push past each other to relieve themselves at an opening in the hull, but they were often too exhausted or dispirited to move, and fouled themselves where they sat. They had no protection against the burning Mediterranean sun, and their masters flayed their already-raw backs with the slave driver’s favourite tool of encouragement, a stretched bull’s penis or “bull’s pizzle.” There was practically no hope of escape or rescue; a galley slave’s job was to work himself to death mainly in raids to capture more wretches like himself – and his master pitched him overboard at the first sign of serious illness.
When the pirate fleet was in port, galley slaves lived in the bagno and did whatever filthy, dangerous, or exhausting work the pasha set them to. This was usually stonecutting and hauling, harbour-dredging, or heavy construction. The slaves in the Turkish sultan’s fleet did not even have this variety. They were often at sea for months on end, and stayed chained to their oars even in port. Their ships were life-long prisons.
Other slaves on the Barbary Coast had more varied jobs. Often they did household or agricultural work of the kind we associate with American slavery, but those who had skills were often rented out by their owners. Some masters simply horned slaves loose during the day with orders to return with a certain amount of money by evening or be severely beaten. Masters seem to have expected about a 20 percent return on the purchase price. Whatever they did, in Tunis and Tripoli, slaves usually wore an iron ring around an ankle, and were hobbled with a chain that weighed 25 or 30 pounds.
Some masters put their white slaves to work on farms deep in the interior, where they faced yet another peril: capture and re-enslavement by raiding Berbers. These unfortunates would probably never see another European for the rest of their short lives.
Prof. Davis points out that there was no check of any kind on cruelty: “There was no countervailing force to protect the slave from his master’s violence: no local anti-cruelty laws, no benign public opinion, and rarely any effective pressure from foreign states.”
Slaves were not just property, they were infidels, and deserved whatever suffering a suffering a master meted out. Prof. Davis notes that “all slaves who lived in the bagnos and survived to write of their experiences stressed the endemic cruelty and violence practiced there.”
The favourite punishment was the bastinado, in which a man was put on his back, and his ankles clamped together and held waist high for a sustained beating on the soles of the feet. A slave might get as many as 150 or 200 blows, which would leave him crippled. Systematic violence turned many men into automatons. Slaves were often so plentiful and so inexpensive, there was no joint in caring for them; many owners worked them to death and bought replacements.
The slavery system was not, however, entirely without humanity. Slaves usually got Fridays off. Likewise, when bagno men were in port, they had an hour or two of free time every day between the end of work and before the bagnos’ doors were locked at night. During this time, slaves could work for pay, but they could not keep all the money they made. Even bagno slaves were assessed a fee for their filthy lodgings and rancid food.
Public slaves also contributed to a fund to support bagno priests. This was a strongly religious era, and even under the most horrible conditions, men wanted a chance to say confession and – most important – receive extreme unction. There was almost always a captive priest or two in the bagno, but in order to keep him available for religious duties, other slaves had to chip in and buy his time from the pasha. Some galley slaves thus had nothing left over to spend on food or clothing, though in some periods, free Europeans living in the cities of Barbary contributed to the upkeep of bagno priests.
For a few, slavery became more than bearable. Some trades – particularly that of shipwright – were so valuable that an owner might reward his slave with a private villa and mistresses. Even a few bagno residents managed to exploit the hypocrisy of Islamic society and improve their condition. The law strictly forbade Muslims to trade in alcohol, but was more lenient with Muslims who only consumed it. Enterprising slaves established taverns in the bagnos and some made a good living catering to Muslim drinkers.
One way to lessen the burdens of slavery was to “take the turban” and convert to Islam. This exempted a man from service in the galleys, heavy construction, and a few other indignities unworthy of a son of the Prophet, but did not release him from slavery itself. One of the jobs of bagno priests was to keep desperate men from converting, but most slaves appear not to have needed religious counsel. Christians believed that conversion imperilled their souls, and it also meant the unpleasant ritual of adult circumcision. Many slaves appear to have endured the horrors of slavery by seeing it as punishment for their sins and as a test of their faith. Masters discouraged conversion because it limited the scope of mistreatment and lowered a slave’s resale value.
Ransom and redemption
For slaves, escape was impossible. They were too far from home, were often shackled, and could be immediately identified by their European features. The only hope was ransom.
Sometimes, the opportunity came quickly. If a slaving party had already snatched so many men it had no more room below deck, it might raid a town and then reappear a few days later to sell captives back to their families. This was usually at a considerable discount from the cost of ransoming someone from North Africa, but it was still far more than peasants could afford. Farmers usually had no ready money, and no property other than house and land. A merchant was usually willing to take these off their hands at distress prices, but it meant that a captured man or woman came back to a family that was completely impoverished.
Most slaves bought their way home only after they had gone through the ordeal of passage to Barbary and sale to a speculator. Wealthy captives could usually arrange a sufficient ransom, but most slaves could not. Illiterate peasants could not write home and even if they did, there was no cash for ransom.
The majority of slaves therefore depended on the charitable work of the Trinitarians (founded in Italy in 1193) and the Mercedarians (founded in Spain in 1203). These were religious orders established to free Crusaders held by Muslims, but they soon shifted their work to redemption of Barbary slaves, raising money specifically for this purpose. Often they maintained lockboxes outside churches marked “For the Recovery of the Poor Slaves,” and clerics urged wealthy Christians to leave money in their will for redemption. The two orders became skilled negotiators, and usually managed to buy back slaves at better prices than did less experienced liberators. Still, there was never enough money to flee many captives, and Prof. Davis estimates that no more than three or four percent of slaves were ever ransomed in a single year. This meant that most left their bones in the unmarked Christian graveyards outside the city walls.
The religious orders kept careful records of their successes; Spanish Trinitarians, for example, went on 72 redemption expeditions in the 1600s, averaging 220 releases on each. It was common to bring the freed slaves home and march them through the streets in big celebrations. These parades became one of the most characteristic urban spectacles of the period, and had a strong religious orientation. Sometimes the slaves marched in their old slave rags to emphasize the torments they had suffered; sometimes they wore special white costumes to symbolize rebirth. According to contemporary records, many freed slaves were never quite right after their ordeals, especially if they had spent many years in captivity.
How many slaves?
Prof. Davis points out that enormous research has gone into tracking down as accurately as possible the number of blacks taken across the Atlantic, but there has been nothing like the same effort to learn the extent of Mediterranean slavery. It is not easy to get a reliable count – the Arabs themselves kept essentially no records – but in the course of ten years of research Prof. Davis developed a method of estimation.
For example, records suggest that from 1580 to 1680 there was an average of some 35,000 slaves in Barbary. There was a steady loss through death and redemption, so if the population stayed level, the rate at which raiders captured new slaves must have equalled the rate of attrition. There are good bases for estimating death rates. For example, it is known that of the nearly 400 Icelanders caught in 1627, there were only 70 survivors eight years later. In addition to malnutrition, overcrowding, overwork, and brutal punishment, slaves faced epidemics of plague, which usually wiped out 20 to 30 percent of the white slaves.
From a number of sources, therefore, Prof. Davis estimates that the death rate was about 20 percent per year. Slaves had no access to women, so replacement was exclusively through capture. His conclusion: “Between 1530 and 1780 there were almost certainly a million and quite possibly as many as a million and a quarter white, European Christians enslaved by the Muslims of the Barbary Coast.”
This considerably exceeds the figure of 800,000 Africans generally accepted as having been transported to the North American colonies and, later, to the United States.
The European powers were unable to stop this traffic. Prof. Davis reports that in the late 1700s they had a better record of controlling the trade, but there was an upturn of white slavery during the chaos of the Napoleonic wars.
American shipping was not exempt from predation either. Only in 1815, after two wars against them, were American sailors free of the Barbary pirates. These wars were significant operations for the young republic; one campaign is remembered in the words “to the shores of Tripoli” in the U.S. Marine hymn. When the French took over Algiers in 1830, there were still 120 white slaves in the bagno.
Why is there so little interest in Mediterranean slavery while scholarship and reflection on black slavery never ends?
As Prof. Davis explains, white slaves with non-white masters simply do not fit “the master narrative of European imperialism.” The victimization schemes so dear to academics require white wickedness, not white suffering.
Prof. Davis also points out that the widespread European experience of slavery gives the lie to another favourite leftist hobby horse: that the enslavement of blacks was a crucial step in establishing European notions of race and racial hierarchy. Not so; for centuries, Europeans lived in fear of the lash themselves, and a great many watched redemption parades of freed slaves, all of whom were white. Slavery was a fate more easily imagined for themselves than for distant Africans.
With enough effort, it is possible to imagine Europeans as preoccupied with slavery as blacks. If Europeans nursed grievances about galley slaves the way blacks do about field hands, European politics would certainly be different. There would be no grovelling apologies for the Crusades, little Muslim immigration to Europe, minarets would not be going up all over Europe, and Turkey would not be dreaming of joining the European Union. The past cannot be undone, and brooding can be taken to excess, but those who forget also pay a high price.
Robert C. Davis: Christian Slaves, Muslim Masters: White Slavery in the Mediterranean, the Barbary Coast, and Italy, 1500-1800, Palgrave Macmillan, 2003, 264pp. From Candour, vol. 73, no. 6, Dec. 2006
Rob Smyth relates some little-known history of an evil trade
European Slaves, African Slave-Masters
A recent British television programme about a sunken galleon in British waters made some brief reference to the Barbary slave trade, something that is very rarely mentioned in these politically correct times. Accordingly, it must be a subject worth exploring further, particularly as many people seem to perceive – understandably in view of continual one-sided propaganda on such issues – that it is only black people who have suffered from slavery at the hands of another race. This perception is one which the global money power is keen to encourage, no doubt, as it inculcates a guilt complex among Whites; which, in turn, tends to stifle objections to the mass importation of cheap labour into the West and the transfer of investment and jobs to the Third World.
A NATIONAL INDUSTRY
Historically, the Mediterranean was always plagued by piracy, but from the later Middle Ages the scale of these activities intensified substantially, particularly through the state-supported corsairs operating from the Barbary Coast of North Africa. For several hundred years, they spread terror throughout the waters and coastlines of the Mediterranean and Atlantic. As John Gunther says of the Moors of this period in Inside Africa:–
‘Their national industry was piracy. They lived on loot, human and otherwise, and brought forcibly into the Maghreb great numbers of captured European (Christian) slaves.’ (p. 39)
There were, however, differences between the corsairs’ slave business and the unjustifiably more notorious trans-Atlantic slave trade. The black slaves in the latter trade were sold to white traders by their own people in exchange for rum, brandy, clothes, glass beads and other European goodies; and they were only traded for economic purposes. On the other hand, the white slaves of the North African corsairs were captured in raids on European shipping and coastlines, with no question of purchase or exchange; and they were carried away not just as slave labour, but also for purposes of ransom and harem service.
Detail from a 1684 engraving showing the Algiers slave market. In the right foreground there appear to be two European women, stripped and publicly exhibited for potential buyers.
THE MOORS IN SPAIN
The Moors were the western-most adherents of Islam, which had spread rapidly in the Middle East and North Africa during the 7th and 8th centuries. They had invaded and occupied Spain in 711. Indeed, they advanced well into France before being defeated at Pointers by Charles Martel in 732. Then they retreated over the Pyrenees, and it was not until 1492, the year Columbus ‘discovered’ America, that they were finally expelled from Spain.
They were heavily engaged in human trafficking, and Andalusia, in Southern Spain, became an important centre for the slave trade. As Stephen Clissold wrote in ‘The Ransom Business,’ (History Today magazine, December 1976):–
‘Cordoba and other cities had their slave-markets where the human merchandise was examined and valued on sophisticated lines. Women were generally more highly prized than men and were carefully assessed by female inspectors, who recorded their physical attractions or defects in the purchase contract. Handbooks listing these good and bad qualities were especially composed to facilitate the delicate task.’ (p. 780)
In passing, it is interesting to reflect that the Moors occupied Spain for nearly 800 years. During much of that time they must have thought:
“We are here to stay.”
But they weren’t.
RISE OF THE BARBARY STATES
The expulsion of the Moors from Spain gave great impetus to the Barbary corsairs. Not only did the expulsion imbue the Moors with a hatred of Christianity, but also their numbers were too great for the economy of their new home, the Maghreb region of North Africa, to absorb. They joined motherland Moors, Algerians, Turks of the Ottoman Empire (to which the Barbary states were often linked) and others in the corsair business of plundering and human trafficking. Clissold tells us that:–
‘The Barbary States were organised primarily for this purpose, and for three centuries they continued to derive the major part of their revenues from the prizes and captives taken, and from the tribute or “presents” with which individual Christian countries sought to purchase immunity.’ (p. 779)
Matters got progressively worse. By the middle of the seventeenth century, the corsairs were marauding in the Atlantic and lurking in the English Channel. Attacks were not confined to merchant vessels. Even fishermen were attacked. Raids were made on coastal settlements, including some on Cornwall, where houses would be looted and inhabitants kidnapped. It is said that at one time 10,000 of the inhabitants of Algiers were slaves, a fifth of them British. Algiers became the chief centre of corsair activity; and the ocean-going vessels it developed were able to operate further away from base than the traditional galleys, bringing even Iceland within reach of the raiders.
RANSOM AND REDEMPTION
Much of the corsairs’ wealth came from the ransoming of captives. Among these, for example, was Cervantes, the famed author of Don Quixote, who was enslaved for five years before his family and friends managed to raise the required ransom money. Ransoming became big business, not only for the corsairs but also for the numerous intermediaries and brokers who negotiated or arranged payment. These included two great Redemptionist religious organisations, the Trinitarians and the Mercedarians, which specifically raised money for the purpose. Governments also sometimes raised ransom for their citizens. Tens of thousands of slaves changed hands in this period.
Ransom money and redemption expenses, often deliberately inflated by unscrupulous intermediaries, were not the totality of the cost. As Clissold relates:–
‘In addition to the ransom money, “presents” had to be made to the ruler and his hierarchy of officials. The number and cost of these had to be worked out and included in the Redemptionists’ budget; or they might otherwise have to buy them on the spot and incur crippling debts. Failure to make presents that their recipients regarded as suited to their rank and expectations caused great offense and might jeopardise the prospects of the mission.’ (p. 783)
Thus, a lot of people besides the corsairs were on the make from the business. As always, corruption breeds corruption.
COLLABORATION AND RENEGADES
The long reign of the corsairs was in no small measure due to the help they received from European governments and individuals. At a time when European solidarity was necessary, we see ruling elites making agreements with the corsairs in order to buy off attacks on their own people and property or to encourage the corsairs to attack their competitors.
Even more disgraceful was the large-scale defection of renegades to the corsair cause. In those days, as now, there was no shortage of race-traitors willing to make money from the persecution of their own people. Some renegades were captives who had ‘turned Turk’ but many, from all over Europe, made their own way to corsair dominions and voluntarily embraced Islam. They were attracted by the wealth and opportunities in Algiers and other corsair strongholds. Not all of them engaged in piracy; some took positions as personal guards and officials, many reaching the highest positions of authority. Others aided the corsairs in technical matters. Clissold remarks in another article, ‘Christian Renegades and Barbary Corsairs’ (History Today, August 1976):–
‘Renegades and Christian slaves… were the chief agents for the transmission of technology, particularly in the military and naval fields, to the less advanced Islamic lands. It was the renegades who piloted the corsair ships in European coastal waters and divulged the weak spots in the Christian defences.’ (p. 512)
So we see that, as now, the renegades actively assisted in the pillaging of their own lands.
The corsairs did not necessarily encourage conversion to Islam. As Clissold writes:–
‘By no means all these renegades had been forced to apostasise against their will. Pressure was indeed often applied against them precisely in a contrary direction. Apostasy did not automatically set a slave free, though he could generally expect to be better treated. He would no longer be fettered and sent to row in a galley. The corsairs, however, needed galley-slaves and were accordingly loath to let their Christians turn Moslem.’ (p. 509)
Other Christians were forced or bribed into changing their religion. Clissold mentions Father Gracian, a Carmelite friar, himself a captive, who related that a fellow priest was offered a substantial dowry if he would turn Moslem and marry the pretty fifteen-year-old daughter of a Moor. Clissold goes on:–
‘Christian girls were in special danger of being forced into apostasy unless they seemed likely to command rich ransoms. Gracian describes coming across a batch of Corsican and Calabrian girls who had just been taken captive. On making enquiries soon afterwards, he learned that all had already apostasised. To resist absorption into the harem was a forlorn hope. We hear of an English girl taken by Sallee Rovers in 1685 who was sent to the Moroccan Emperor Mouley Ismael for the honour of being deflowered. She at first resisted persuasions to turn Moslem and capitulated only after being handed over to the Emperor’s negresses who whipped her and tormented her with needles.’ (p. 510)
Mouley Ismael was truly a very unpleasant and sinister character, but his aims were, perhaps, not so far removed from those of some nasty social engineers of our own time, as we shall see later.
Eventually, several Western countries resorted to military measures against the corsairs. Portugal, Spain, France, Britain and, in 1804, even the United States, all became involved in the bombardment of North African ports in the hope of suppressing piracy. The latter remained remarkably resilient but the number of captives did gradually dwindle, especially after the British Navy’s devastating bombardment of Algiers in 1816. This episode alone led to the release of 3,000 slaves, some of whom had been held captive for 35 years. Yet, within two months, the rubble was cleared and the fortifications were rebuilt. It was not until 1830, when the French conquered Algeria, that the piracy was ended.
THE MULTI-RACIALIST SULTAN
One can learn some useful lessons from history. For although technology and science might transform our physical environment, it seems unlikely that man himself has changed very much, if at all. The never-ending catalogue of horrific modern atrocities, frauds and other crimes bears testament to that. So it may be well to ponder with concern, as well as with casual interest, the lifestyle and motives of Sultan Mouley Ismael, as recorded in John Gunther’s Inside Africa:–
‘One extraordinary and fiendish character was Sultan Moulay Ismael who ruled for fifty-five years between 1672 and 1727… He set out to build at Meknes, near Fez, a capital that would out-do Versailles. The royal stables alone were three miles long, and contained 12,000 horses; slaves who worked on these and other grandiose projects were, if they did not give satisfaction, cemented into the walls… Moulay is supposed to have had 549 wives, as well as concubines by the thousand, since he seldom used a woman twice. Of his children, 867 survived him. Hundreds of his daughters were strangled at birth. He imported vast numbers of Saharan Negroes to Meknes, systematically mated them to Moorish women, and set about to establish by this means a kind of Praetorian Guard of specially bred half-castes… He had several hundred thousand slaves, of whom 25,000 were captured Christians.’ (pp. 43-44)
Could it possibly be that our own beloved rulers are deliberately promoting miscegenation to create a Praetorian Guard for themselves or the New World Order?
Spearhead magazine, no. 409, March 2003
JEWS & SLAVERY
I remind you that this is the supposedly righteous “Creator of the Universe” speaking directly to Moses:
7 “If a man sells his daughter as a servant, she is not to go free as male servants do. 8 If she does not please the master who has selected her for himself, he must let her be redeemed. He has no right to sell her to foreigners, because he has broken faith with her.9 If he selects her for his son, he must grant her the rights of a daughter.10 If he marries another woman, he must not deprive the first one of her food, clothing and marital rights.11 If he does not provide her with these three things, she is to go free, without any payment of money.”
In this passage, Yahweh states that it’s okay for fathers to sell their daughters into permanent slavery, and he gives rules for what can and can’t be done with her afterwards. In the context of this passage, does being a good father entail getting a good price for your daughter on the slave market?
2 “If you buy a Hebrew servant, he is to serve you for six years. But in the seventh year, he shall go free, without paying anything. 3 If he comes alone, he is to go free alone; but if he has a wife when he comes, she is to go with him. 4 If his master gives him a wife and she bears him sons or daughters, the woman and her children shall belong to her master, and only the man shall go free.”
In this passage, Yahweh establishes a condition under which a married male slave who is freed must leave behind his wife and children because they are his master’s property. Is this what religious folk mean when they talk of “family values”?
26 “An owner who hits a male or female slave in the eye and destroys it must let the slave go free to compensate for the eye. 27 And an owner who knocks out the tooth of a male or female slave must let the slave go free to compensate for the tooth.”
This section is where Yahweh gets progressive and establishes the world’s first workers comp law. If you poke out the eye of your slave, you have to set him free?!? How revolutionary and enlightened!
And here is the most inspiring passage of all…
20 “Anyone who beats their male or female slave with a rod must be punished if the slave dies as a direct result, 21 but they are not to be punished if the slave recovers after a day or two, since the slave is their property.”
So if a slave-master beats his slave, he’s to be punished only if the slave dies from the beating (and only if a rod was used)? Hmmm, it’s pretty sweet to be a slave-master in Yahweh’s kingdom, isn’t it?
Except for murder, slavery has got to be one of the most immoral things a person can do. Yet slavery is rampant throughout the JEWISH Bible in both the Old and New Testaments. The JEWISH Bible clearly approves of slavery in many passages, and it goes so far as to tell how to obtain slaves, how hard you can beat them, and when you can have sex with the female slaves.
Many Jews and Christians will try to ignore the moral problems of slavery by saying that these slaves were actually servants or indentured servants. Many translations of the JEWISH Bible use the word “servant”, “bondservant”, or “manservant” instead of “slave” to make the JEWISH Bible seem less immoral than it really is. While many slaves may have worked as household servants, that doesn’t mean that they were not slaves who were bought, sold, and treated worse than livestock.
The following passage shows that slaves are clearly property to be bought and sold like livestock.
“However, you may purchase male or female slaves from among the foreigners who live among you. You may also purchase the children of such resident foreigners, including those who have been born in your land. You may treat them as your property, passing them on to your children as a permanent inheritance. You may treat your slaves like this, but the people of Israel, your relatives, must never be treated this way”. (Leviticus 25:44-46)
The following passage describes how the Hebrew slaves are to be treated.
“If you buy a Hebrew slave, he is to serve for only six years. Set him free in the seventh year, and he will owe you nothing for his freedom. If he was single when he became your slave and then married afterward, only he will go free in the seventh year. But if he was married before he became a slave, then his wife will be freed with him. If his master gave him a wife while he was a slave, and they had sons or daughters, then the man will be free in the seventh year, but his wife and children will still belong to his master. But the slave may plainly declare, ‘I love my master, my wife, and my children. I would rather not go free.’ If he does this, his master must present him before God. Then his master must take him to the door and publicly pierce his ear with an awl. After that, the slave will belong to his master forever”. (Exodus 21:2-6)
Notice how they can get a male Hebrew slave to become a permanent slave by keeping his wife and children hostage until he says he wants to become a permanent slave. What kind of family values are these?
The following passage describes the sickening practice of sex slavery. How can anyone think it is moral to sell your own daughter as a sex slave?
“When a man sells his daughter as a slave, she will not be freed at the end of six years as the men are. If she does not please the man who bought her, he may allow her to be bought back again. But he is not allowed to sell her to foreigners, since he is the one who broke the contract with her. And if the slave girl’s owner arranges for her to marry his son, he may no longer treat her as a slave girl, but he must treat her as his daughter. If he himself marries her and then takes another wife, he may not reduce her food or clothing or fail to sleep with her as his wife. If he fails in any of these three ways, she may leave as a free woman without making any payment”. (Exodus 21:7-11)
So these are the JEWISH Bible family values! A man can buy as many sex slaves as he wants as long as he feeds them, clothes them, and has sex with them!
What does the JEWISH Bible say about beating slaves?
It says you can beat both male and female slaves with a rod so hard that as long as they don’t die right away you are cleared of any wrong doing
“When a man strikes his male or female slave with a rod so hard that the slave dies under his hand, he shall be punished. If, however, the slave survives for a day or two, he is not to be punished, since the slave is his own property”. (Exodus 21:20-21)
You would think that Jesus and the New Testament would have a different view of slavery, but slavery is still approved of in the New Testament, as the following passages show.
“Slaves, obey your earthly masters with deep respect and fear. Serve them sincerely as you would serve Christ”. (Ephesians 6:5 )
“Christians who are slaves should give their masters full respect so that the name of God and his teaching will not be shamed. If your master is a Christian, that is no excuse for being disrespectful. You should work all the harder because you are helping another believer by your efforts. Teach these truths, Timothy, and encourage everyone to obey them”. (1 Timothy 6:1-2)
In the following parable, Jesus clearly approves of beating slaves even if they didn’t know they were doing anything wrong.
The servant will be severely punished, for though he knew his duty, he refused to do it.
“But people who are not aware that they are doing wrong will be punished only lightly. Much is required from those to whom much is given, and much more is required from those to whom much more is given.” (Luke 12:47-48)
The JEWISH Bible may, indeed does, contain a warrant for trafficking in humans, for ethnic cleansing, for slavery, for bride-price, and for indiscriminate massacre
In contemporary times, slavery is almost universally reviled; while human trafficking and similar practices are still far too common, people generally no longer argue that human beings should be owned like property. However, through most of human history, well into the nineteenth century, slavery was (notwithstanding the opinions of the enslaved) broadly accepted as an economic and social necessity.
Slavery was an important facet of life in biblical times. Both the Old and the New Testaments have instructions regarding slaves which contemporary Jews and Christians generally disregard, and which Christian apologists frequently attempt to play down or deny.
Some fringe Christian Biblical literalists, notably those who believe in Dominionism, argue that biblical instructions regarding slavery and its institutions are still relevant.
SLAVERY IN THE JEWISH BIBLE
The JEWISH Bible identifies different categories of slaves including female Hebrew slaves, male Hebrew slaves, non-Hebrew and hereditary slaves. These were subject to different regulations.
Female Hebrews could be sold by their fathers and enslaved for life (Exodus 21:7-11), but under some conditions.
Male Hebrews could sell themselves into slavery for a six year period to eliminate their debts, after this period they might go free. However, if the male slave had been given a wife and had children with her, they would remain his master’s property. They could only stay with their family by becoming permanent slaves. (Exodus 21:2-5). Evangelical Christians, especially those who subscribe to Biblical inerrancy, will commonly emphasize this debt bondage and try to minimise the other forms of race-based chattel slavery when attempting to excuse the Bible for endorsing slavery.
Non-Hebrews, on the other hand, could (according to Leviticus 25:44) be subjected to slavery in exactly the way that it is usually understood. The slaves could be bought, sold and inherited when their owner died. This, by any standard, is race- or ethnicity-based, and Leviticus 25:44-46 explicitly allows slaves to be bought from foreign nations or foreigners living in Israel. It does say that simply kidnapping Hebrews to enslave them is a crime punishable by death (Deuteronomy 24:7), but no such prohibition exists regarding foreigners. War captives could be made slaves, assuming they had refused to make peace (this applied to women and children-men were simply killed), along with the seizure of all their property.(Deuteronomy 20:10-15)
Hereditary slaves were born into slavery and there is no apparent way by which they could obtain their freedom.
So the Bible endorses various types of slavery, see below – though Biblical literalists only want to talk about one version and claim that it wasn’t really so bad.
TYPES OF SLAVERY
As previously stated the JEWISH Bible endorsed different types or grades of slavery.
FEMALE HEBREW SLAVES
Female Hebrew slaves were to be treated differently from males. Parents could sell their daughters into slavery. (Exodus 21:7-11)
7If a man sells his daughter as a female slave, she is not to go free as the male slaves do. 8If she is displeasing in the eyes of her master who designated her for himself, then he shall let her be redeemed. He does not have authority to sell her to a foreign people because of his unfairness to her. 9If he designates her for his son, he shall deal with her according to the custom of daughters. 10If he takes to himself another woman, he may not reduce her food, her clothing, or her conjugal rights. 11If he will not do these three things for her, then she shall go out for nothing, without payment of money.
MALE HEBREW SLAVES
2If you buy a Hebrew slave, he shall serve for six years; but on the seventh he shall go out as a free man without payment. 3If he comes alone, he shall go out alone; if he is the husband of a wife, then his wife shall go out with him. 4If his master gives him a wife, and she bears him sons or daughters, the wife and her children shall belong to her master, and he shall go out alone. 5But if the slave plainly says, ‘I love my master, my wife and my children; I will not go out as a free man,’ 6then his master shall bring him to God, then he shall bring him to the door or the doorpost. And his master shall pierce his ear with an awl; and he shall serve him permanently.
It is interesting to note that if a slave wishes to remain with his wife and family he must submit to his master for life.
On the other hand Hebrew slaves – and only those Hebrew slaves who entered slavery “voluntarily” – got some severance package as described in Deuteronomy 15:12-15:
12If your kinsman, a Hebrew man or woman, is sold to you, then he shall serve you six years, but in the seventh year you shall set him free. 13When you set him free, you shall not send him away empty-handed. 14You shall furnish him liberally from your flock and from your threshing floor and from your wine vat; you shall give to him as the LORD your God has blessed you. 15You shall remember that you were a slave in the land of Egypt, and the LORD your God redeemed you; therefore I command you this today.
If the Israelites wanted full slaves they were instructed in Leviticus 25:44-46:
44As for your male and female slaves whom you may have—you may acquire male and female slaves from the pagan nations that are around you. 45Then, too, it is out of the sons of the sojourners who live as aliens among you that you may gain acquisition, and out of their families who are with you, whom they will have produced in your land; they also may become your possession. 46You may even bequeath them to your sons after you, to receive as a possession; you can use them as permanent slaves. But in respect to your countrymen, the sons of Israel, you shall not rule with severity over one another.
The children of slaves were born into slavery. Exodus 21:4:
If his master gives him a wife, and she bears him sons or daughters, the wife and her children shall belong to her master, and he shall go out alone.
Beating slaves was perfectly allowable under the following rules:
20If a man strikes his male or female slave with a rod and he dies at his hand, he shall be punished. 21If, however, he survives a day or two, no vengeance shall be taken; for he is his property.
26If a man strikes the eye of his male or female slave, and destroys it, he shall let him go free on account of his eye. 27And if he knocks out a tooth of his male or female slave, he shall let him go free on account of his tooth.
ABDUCTION AND SLAVE TRADE
Hebrews were not allowed to abduct fellow Hebrews and sell them.
Exodus 21:16: 16
He who kidnaps a man, whether he sells him or he is found in his possession, shall surely be put to death.
Given that the Hebrews were instructed in Leviticus 25 v 44 to obtain their slaves from the people around them, it is evident that this injunction to not abduct people referred to Hebrews and not non-Hebrews. Obtaining and selling non-Hebrews was evidently not a problem. Deuteronomy 24:7 specifies that only the abduction of Hebrews to enslave them is a crime.
An escaped slave could not be handed over to his master, and would gain full citizenship among Israelites:
15You shall not hand over to his master a slave who has escaped from his master to you. 16He shall live with you in your midst, in the place which he shall choose in one of your towns where it pleases him; you shall not mistreat him.
However, as BibleTrack complementaries put it regarding Deut 23:15
“Most students of the Old Testament agree that this regulation concerns a slave who has escaped from his master in some foreign land and sought refuge in Israel. We do know that, in addition to slaves captured in battle, debt slavery and voluntary slavery existed in Israel and was protected by law, so it seems unlikely that this law applies to those two categories of slaves. We simply aren’t given any detail beyond these two verses.
SLAVERY IN THE NEW TESTAMENT
The New Testament makes no condemnation of slavery and does no more than admonish slaves to be obedient and their masters not to be unfair. Paul, or whoever wrote the epistles, at no time suggested there was anything wrong with slavery. One could speculate that this might have been because he wanted to avoid upsetting the many slave owners in the early Christian congregations or to keep on good political terms with the Roman government. Or, more probably, he simply thought slavery was an acceptable fact of life as did practically everyone else at the time.
5Slaves, be obedient to those who are your masters according to the flesh, with fear and trembling, in the sincerity of your heart, as to Christ; 6not by way of eyeservice, as men-pleasers, but as slaves of Christ, doing the will of God from the heart. 7With good will render service, as to the Lord, and not to men, 8knowing that whatever good thing each one does, this he will receive back from the Lord, whether slave or free.
Christian slaves were told to obey their masters “for the sake of the cause” and be especially obedient to Christian masters:
1 Timothy 6:1-2:
1All who are under the yoke as slaves are to regard their own masters as worthy of all honor so that the name of God and our doctrine will not be spoken against. 2Those who have believers as their masters must not be disrespectful to them because they are brethren, but must serve them all the more, because those who partake of the benefit are believers and beloved. Teach and preach these principles.
There are instructions for Christian slave owners to treat their slaves well.
9And masters, do the same things to them, and give up threatening, knowing that both their Master and yours is in heaven, and there is no partiality with Him.
1Masters, grant to your slaves justice and fairness, knowing that you too have a Master in heaven.
One passage often cited by apologists as supposed evidence for New Testament condemnation of slavery is 1 Timothy 1:10. However, as the King James version accurately translates, this condemnation is of “men stealers” (Greek: andrapodistais), i.e. slave raiders who kidnapped and sold people as slaves, not slave traders or slave holders in general. So Paul only singled out slave raiders to be considered “lawless and rebellious,” and to be categorized with murderers, homosexuals, liars and oath breakers.
The rather bland admonishment to slave masters by Paul is more than balanced by the demands for absolute obedience made of slaves. It is also rather telling that the masters are likened to God and Jesus, while the masters are simply told that they have a higher lord. So much for Jesus as the embodiment of the underdog – Paul could have pointed to Jesus’ imprisonment and death as a cautionary tale to slave masters that even humble(d) characters can be important.
Before the apologist plays the “but Jesus didn’t condone slavery”-card, following all these Pauline examples, try reading Matthew 18:25, where Jesus uses slaves in a parable and has no qualms about recommending that not only a slave but also his wife and family be sold, while in other parables Jesus recommends that disobedient slaves should be beaten (Luke 12:47) or even killed (Matthew 24:51).
This is probably one of the clearest examples of religious moral relativism.
Most modern Christians prefer to avoid, or are unaware of, these sections of the JEWISH Bible. If forced to explain JEWISH Biblical justification for slavery, they may come up with something, but fortunately Christians as a group think it would be wrong to reintroduce slavery. Christian attempts to justify what is in the JEWISH Bible can lead to them sanctioning things that most moral humanists, and even most Christians, would say are wrong, as can be seen from the quote below.
CHRISTIANS ATTEMPT TO JUSTIFY SLAVERY:
“They ‘shall be of the heathen’ is the key phrase here. God approved of slavery in this instance only because it was His hope that those who became slaves of the Israelites from foreign nations might “be saved.” Even though they would lose their earthly freedom, God hoped that they would gain eternal freedom by coming to know Him, which is far more important.”
ATTEMPTS TO JUSTIFY THE JEWISH BIBLE’S SLAVERY PASSAGES
Argument 1: “Slavery in the Bible was more enlightened than that of 17th-19th Century America and other Ancient Near East cultures.”
Even granting this point for the sake of argument, this fails to answer the simple question: is owning another human ever moral, or not? The relative kindness of a slave owner does not enter into the basic moral question of owning other humans as property.
Argument 2: “They could be let go after 6 years” or “It was a mechanism for protecting those who could not pay their debts.” (A.k.a. “Debt bondage”)
Only some Hebrew male slaves were to be freed in the 7th year (Exodus 21:2). Slaves from surrounding countries could be kept as property forever (Leviticus 25:44-46). A further exception pertains to women whose fathers sold them into slavery, and for whom there was no release after six years (Exodus 21:7).
Argument 3: The Bible restricted slave owners’ actions (Exodus 21:20).
Exodus 21:20 does mandate punishment for a master who kills a slave with a rod, but the very next verse says “But if the slave survives a day or two, there is no punishment; for the slave is the owner’s property” (NRSV). The NIV, by contrast, translates this verse as “if the slave recovers after a day or two”, which changes its meaning. Either way, the emphasis is that the slave is first and foremost property, and therefore the greatest loss is to the owner, whose slave was “as good as money”.
Argument 4: “Slavery was allowed by God because of the time period, but was not the ideal will of God.”
There are many ways a creative, all-knowing, and all-powerful deity could make it clear that slavery is immoral while, for instance, giving the Israelite economy a grace period to let slavery “wind down”, should that be necessary. The passages concerning slavery from the Pentateuch (e.g. Exodus 21:2-7, Leviticus 25:44-46), by contrast, provide guidelines that allow for slavery to continue indefinitely. New Testament writers, too, who had an opportunity to overturn or clarify the Pentateuch’s instructions, did not do so.
Also it seems improbable that a God who was capable of assassinating israelites by the thousand if they did not follow his instructions to the letter would baulk at telling them to give up slaves.
Argument 5: “The term ‘slave’ is a poor translation. It should be ‘servant’.”
This may be plausible in some contexts, but not for Leviticus 25:46, which specifically allows that slaves are property who may be inherited by the owner’s children and kept for life. This passage makes no sense unless they are discussing slavery—permanent ownership of one human by another—as we know it today.
Jesus’ parable of the unforgiving servant (Matthew 18:23) makes no sense if said “servant” is not a slave, since the master has the power to sell both the “servant”, his wife and his children (Matthew 18:25).
It also makes little sense in the case of Matthew 24:51 in which these “servants” may be not only beaten by their master (as in Luke 12:47), but that the master “shall cut him asunder” in the words of the King James translation.
STEVEN SPIELBERG’S pseudo-historical film about a 19th-century mutiny and massacre aboard a Spanish slave ship, Amistad, and the subsequent trial of the Black mutineers is being praised by the reviewers. Spielberg, one of the wealthiest and most successful of Hollywood’s Jewish film makers, also is being praised by his kinsmen in various so-called ‘human rights’ organizations for using his propaganda skills to sensitize White, Gentile audiences to the horrors of slavery and make them feel just a little more guilty for treating non-Whites so badly in the past.
What the film doesn’t mention, of course, is that Spielberg’s Jewish kinsmen owned many, though not all, of the ships involved in the 18th- and 19th-century Atlantic trade in Black slaves and, in fact, played a very prominent role in bringing Black slaves to America.
The film rather tends to steer one away from blaming anyone for slavery except White Gentiles. This bit of misdirection is interesting in light of the fact that Jews have been dominant in the slave trade since at least Roman times – especially the trade in White slaves. Jewish slave-dealers followed Caesar’s armies everywhere – into Gaul, into Germany, and into other northern lands – eager to buy as many of the captives of the Romans – especially the female captives. Jews have remained dominant in the White slave trade until the present day – although during the Middle Ages the Christian Church tried unsuccessfully a number of times to stop them, beginning in the fifth century with an edict by the emperor Theodosius II against Jews owning Christian slaves. After being banned from owning or dealing in slaves by one emperor, the Jews would wait until the next emperor came along, then they would buy a charter giving them a monopoly in the slave trade. Then public outrage against the Jews would grow until another emperor would ban their slave-dealing again. Most of the time, however, the Jews were the undisputed masters of the White slave trade, and that is still the case today.
Interestingly enough, this fact was revealed in a recent news report in the Jewish newspaper the New York Times, of all places. The January 11, 1998 issue had a major article titled ‘Contraband Women’ written by a Jewish reporter in Israel, Michael Specter. The article deals specifically with the Jewish trade in Ukrainian and Russian women – although it doesn’t label the trade as ‘Jewish.’ What the report does say is this, and I quote:
‘Centered in Moscow and the Ukrainian capital Kiev, the networks trafficking women run east to Japan and Thailand, where thousands of young Slavic women now work against their will as prostitutes, and west to the Adriatic coast and beyond. The routes are controlled by Russian crime gangs based in Moscow.’
What the reader must understand is that these crime gangs don’t have a real Russian in them. They are entirely Jewish, but the agreed-upon subterfuge used by the newspapers in this country is to refer to them as Russian rather than as Jewish.
Pimps, law enforcement officials and relief groups all agree that Ukrainian and Russian women are now the most valuable in the trade. Because their immigration is often illegal – and because some percentage of the women choose to work as prostitutes – statistics are difficult to assess. But the United Nations estimates that four million people throughout the world are trafficked each year – forced through lies and coercion to work against their will in many types of servitude. The International Organization for Migration has said that as many as 500,000 women are annually trafficked into Western Europe alone. New York Times
The story of the exploitation of Eastern Europe by the Jews is a fascinating and infuriating one. Throughout the Middle Ages and into the modern era they focused on profiting from the weaknesses and vices of the Gentile populations of Poles, Russians, Ukrainians and others among whom they lived as a barely tolerated minority. In addition to being the moneylenders, they controlled the liquor business and owned the drinking establishments, the gambling dens, and the brothels. A number of 19th-century Russian writers, among them Dostoievski and Gogol, have described their destructive effects on Slavic peasant society and the perpetual condition of mutual hostility which existed between the Jews and the Slavs.
During the 19th and early 20th centuries the Jewish trade in White slaves from these lands expanded enormously. It has been described by the Jewish historian Edward Bristow in his 1982 book Prostitution and Prejudice, published by Oxford University Press and Schocken Books in New York. Although Bristow’s book is subtitled The Jewish Fight against White Slavery 1870-1939, it is nevertheless enormously revealing. The Jews recruited peasant girls in Polish and Russian villages, usually under false pretences, and transported them to brothels in Turkey, Egypt, and other parts of the Middle East; to Vienna, Budapest, and other major cities in the Austro-Hungarian Empire; and as far away as New York, New Orleans, and Buenos Aires. This Jewish trade in Slavic women naturally caused a great deal of hatred against the Jews by the Slavs, and this hatred broke out in pogroms and other popular actions against the Jews over and over again.
One would believe from the works of Mr. Spielberg and other Jewish propagandists that the hatred the Slavs bore against the Jews was based only on religious bigotry and that the Jews were completely innocent and inoffensive. One fascinating fact which Bristow’s book reveals is that the center of the Jewish trade in Polish girls was in a little town called Oswiecim. The German name for this town was Auschwitz.
I don’t mean to imply that the Jews were the only ones at fault in the White slave trade. Gentile politicians and police officials gladly accepted bribes from the Jews and in return allowed them to carry on their dirty business. And in the United States non-Jewish criminal elements such as the Mafia collaborated with the Jews or even ran their own White slave operations. But the trade in White slaves from Eastern Europe has been an exclusively Jewish activity for the last two hundred years.
The women are smuggled by car, bus, boat and plane. Handed off in the dead of night, many are told they will pick oranges, work as dancers or as waitresses. Others have decided to try their luck at prostitution, usually for what they assume will be a few lucrative months. They have no idea of the violence that awaits them.
The efficient, economically brutal routine – whether here in Israel, or in one of a dozen other countries – rarely varies. Women are held in apartments, bars and makeshift brothels; there they service, by their own count, as many as 15 clients a day. Often they sleep in shifts, four to a bed. The best that most hope for is to be deported after the police finally catch up with their captors. New York Times
Tens of thousands of pretty but naive young Russian and Ukrainian women are being swept up by the Jewish gangs – called ‘Russian’ gangs by the New York Times – and shipped off to a life of misery and degradation in Turkey, Pakistan, Thailand, and Israel, as well as to countries in Western Europe, where Jews also control organized crime. The young women, unable to find work in Russia or Ukraine or Poland and facing a bleak future in countries ravaged by decades of communism, are eager for any chance at a better life. They respond to advertisements that offer them work abroad as receptionists or secretaries and also promise free training and transportation. When the girls arrive at their destinations, however, they find something quite different – but by then it is too late.
One of these girls, Irina, a 21-year-old, green-eyed Ukrainian blond, was interviewed in Israel. She told how her Israeli employer took her to a brothel soon after her arrival in Israel. He took her passport away from her, burned it before her eyes, and told her that she now was his property and must work in the brothel. When Irina refused, she was beaten and raped. Luckier than most of the Slavic women lured to Israel, Irina eventually was swept up in a police raid and sent to prison as an illegal alien. She was awaiting deportation, along with hundreds of other Russian and Ukrainian women, when she was interviewed. She lamented the fact that the Israeli who had raped her and forced her to work in the brothel was not even arrested. Indeed, according to Jewish law, the rape of a Gentile woman is not illegal. Nor is it illegal in Israel to buy and sell slaves, so long as the slaves are not Jewish. Amazingly, the New York Times article reveals this fact.
The White slave trade is big business in Israel. Ukrainian authorities estimate that as many as 40,000 Ukrainian women under the age of 30 are taken from Ukraine each year. Some of these women respond to advertisements promising employment abroad, like Irina did, and some are simply kidnapped and smuggled out of the country. Those who try to escape from their Jewish captors are treated brutally. Often they are butchered in front of other captive women to keep the others terrified into doing whatever they are told. At slave markets operated by the Jewish gangs in Italy young Slavic women are stripped, put on blocks, and auctioned off to brothel owners.
The most astounding thing about this whole, filthy business is that most people are forced to learn about it from a Jewish newspaper like the New York Times. And really, you should read for yourself the article to which I referred. It was in the January 11 issue, and the news is not likely to be repeated. But ask yourself, why doesn’t Interpol, the international police agency, do something to put a stop to this White slave trade?
Why don’t the governments of the countries from which the women are being abducted do something?
Why don’t the mass media raise a hue and cry?
Why don’t powerful feminist organizations demand the eradication of White slavery?
And the answer to all of these questions is easy: they dare not do or say anything because it is a Jewish business.
‘This is a sophisticated, global operation,’ Mr. Tyler said. ‘It’s evil, and it’s successful because the money is so good. These men pay $500 to $1,000 for a Ukrainian or Russian woman. Do you understand what I am telling you? They will buy these women and make a fortune out of them.’
To illustrate his point, Mr. Tyler grabbed a black calculator and started calling out the sums as he punched them in.
‘Take a small place,’ he said, ‘with 10 girls. Each has 15 to 20 clients a day. Multiply that by say 200 shekels. So say 30,000 shekels a day comes in to each place. Each girl works 25 days a month. Minimum.’
Mr. Tyler was busy doing math as he spoke.
‘So we are talking about 750,000 shekels a month, or about $215,000. A man often owns five of these places. That’s a million dollars. No taxes, no real overhead. It’s a factory with slave labor. And we’ve got them all over Israel.’ New York Times
CONTRABAND WOMEN – A SPECIAL REPORT
January 11th, 1998 from New York Times > Articles
Traffickers’ New Cargo: Naive Slavic Women
by Michael Specter
RAMLE, Israel–Irina always assumed that her beauty would somehow rescue her from the poverty and hopelessness of village life. A few months ago, after answering a vague ad in a small Ukrainian newspaper, she slipped off a tour boat when it put in at Haifa, hoping to make a bundle dancing naked on the tops of tables.
She was 21, self-assured and glad to be out of Ukraine. Israel offered a new world, and for a week or two everything seemed possible. Then, one morning, she was driven to a brothel, where her boss burned her passport before her eyes.
“I own you,” she recalled his saying. “You are my property and you will work until you earn your way out. Don’t try to leave. You have no papers and you don’t speak Hebrew. You will be arrested and deported. Then we will get you and bring you back.”
It happens every single day. Not just in Israel, which has deported nearly 1,500 Russian and Ukrainian women like Irina in the past three years. But throughout the world, where selling naive and desperate young women into sexual bondage has become one of the fastest- growing criminal enterprises in the robust global economy.
The international bazaar for women is hardly new, of course. Asians have been its basic commodity for decades. But economic hopelessness in the Slavic world has opened what experts call the most lucrative market of all to criminal gangs that have flourished since the fall of Communism: white women with little to sustain them but their dreams. Pimps, law enforcement officials and relief groups all agree that Ukrainian and Russian women are now the most valuable in the trade.
Because their immigration is often illegal–and because some percentage of the women choose to work as prostitutes–statistics are difficult to assess. But the United Nations estimates that four million people throughout the world are trafficked each year–forced through lies and coercion to work against their will in many types of servitude. The International Organization for Migration has said that as many as 500,000 women are annually trafficked into Western Europe alone.
Many end up like Irina. Stunned and outraged by the sudden order to prostitute herself, she simply refused. She was beaten and raped before she succumbed. Finally she got a break. The brothel was raided and she was brought here to Neve Tirtsa in Ramle, the only women’s prison in Israel. Now, like hundreds of Ukrainian and Russian women with no documents or obvious forgeries, she is waiting to be sent home.
“I don’t think the man who ruined my life will even be fined.” she said softly, slow tears filling her enormous green eyes. “You can call me a fool for coming here. That’s my crime. I am stupid. A stupid girl from a little village. But can people really buy and sell women and get away with it? Sometimes I sit here and ask myself if that really happened to me, if it can really happen at all.”
Then, waving her arm toward the muddy prison yard, where Russian is spoken more commonly than Hebrew, she whispered one last thought:
“I’m not the only one, you know. They have ruined us all.”
Traffic Patterns: Russia and Ukraine Supply the Flesh
Centered in Moscow and the Ukrainian capital, Kiev, the networks trafficking women run east to Japan and Thailand, where thousands of young Slavic women now work against their will as prostitutes, and west to the Adriatic Coast and beyond. The routes are controlled by Russian (Jewish) crime gangs based in Moscow. Even when they do not specifically move the women overseas, they provide security, logistical support, liaison with brothel owners in many countries and, usually, false documents.
Women often start their hellish journey by choice. Seeking a better life, they are lured by local advertisements for good jobs in foreign countries at wages they could never imagine at home.
In Ukraine alone, the number of women who leave is staggering. As many as 400,000 women under 30 have gone in the past decade, according to their country’s Interior Ministry. The Thai Embassy in Moscow, which processes visa applications from Russia and Ukraine, says it receives nearly 1,000 visa applications a day, most of these from women.
Israel is a fairly typical destination. Prostitution is not illegal here, although brothels are, and with 250,000 foreign male workers–most of whom are single or here without their wives–the demand is great. Police officials estimate that there are 25,000 paid sexual transactions every day. Brothels are ubiquitous.
None of the women seem to realize the risks they run until it is too late. Once they cross the border their passports will be confiscated, their freedoms curtailed and what little money they have taken from them at once.
“You want to tell these kids that if something seems too good to be true it usually is.” said Lyudmilla Biryuk, a Ukrainian psychologist who has counseled women who have escaped or been released from bondage. “But you can’t imagine what fear and real ignorance can do to a person.”
The women are smuggled by car, bus, boat and plane. Handed off in the dead of night, many are told they will pick oranges, work as dancers or as waitresses.
Others have decided to try their luck at prostitution, usually for what they assume will be a few lucrative months. They have no idea of the violence that awaits them.
The efficient, economically brutal routine–whether here in Israel, or in one of a dozen other countries–rarely varies. Women are held in apartments, bars and makeshift brothels; there they service, by their own count, as many as 15 clients a day. Often they sleep in shifts, four to a bed. The best that most hope for is to be deported after the police finally catch up with their captors.
Few ever testify. Those who do risk death. Last year in Istanbul, Turkey, according to Ukrainian police investigators, two women were thrown to their deaths from a balcony while six of their Russian friends watched.
In Serbia, also last year, said a young Ukrainian woman who escaped in October, a woman who refused to work as a prostitute was beheaded in public.
In Milan a week before Christmas, the police broke up a ring that was holding auctions in which women abducted from the countries of the former Soviet Union were put on blocks, partially naked, and sold at an average price of just under $1,000.
“This is happening wherever you look now.” said Michael Platzer, the Vienna-based head of operations for the United Nations’ Center for International Crime Prevention. “The mafia is not stupid. There is less law enforcement since the Soviet Union fell apart and more freedom of movement. The earnings are incredible. The overhead is low–you don’t have to buy cars and guns. Drugs you sell once and they are gone. Women can earn money for a long time.”
“Also,” he added, “the laws help the gangsters. Prostitution is semilegal in many places and that makes enforcement tricky. In most cases punishment is very light.”
In some countries, Israel among them, there is not even a specific law against the sale of human beings.
Mr. Platzer said that although certainly “tens of thousands” of women were sold into prostitution each year, he was uncomfortable with statistics since nobody involved has any reason to tell the truth.
“But if you want to use numbers.” he said, “think about this. Two hundred million people are victims of contemporary forms of slavery. Most aren’t prostitutes, of course, but children in sweatshops, domestic workers, migrants. During four centuries, 12 million people were believed to be involved in the slave trade between Africa and the New World. The 200 million–and many of course are women who are trafficked for sex–is a current figure. It’s happening now. Today.”
Distress Calls: Far-Flung Victims Provide Few Clues
The distress call came from Donetsk, the bleak center of coal production in southern Ukraine. A woman was screaming on the telephone line. Her sister and a friend were prisoners in a bar somewhere near Rome. They spoke no Italian and had no way out, but had managed, briefly, to get hold of a man’s cell phone.
“Do you have any idea where they are, exactly?” asked Olga Shved, who runs La Strada in Kiev, Ukraine’s new center dedicated to fighting the trafficking of women in Eastern Europe and the countries of the former Soviet Union.
The woman’s answer was no. Ms. Shved began searching for files and telephone numbers of the local consul, the police, anybody who could help.
“Do they know how far from Rome they are,” she asked, her voice tightening with each word. “What about the name of the street or the bar? Anything will help,” she said, jotting notes furiously as she spoke. “We can get the police on this, but we need something. If they call back, tell them to give us a clue. The street number. The number of a bus that runs past. One thing is all we need.”
Ms. Shved hung up and called officials at Ukraine’s Interior Ministry and the Foreign Ministry. Her conversations were short, direct and obviously a routine part of her job.
That is because Ukraine–and to a lesser degree its Slavic neighbors Russia and Belarus–has replaced Thailand and the Philippines as the epicenter of the global business in trafficking women. The Ukrainian problem has been worsened by a ravaged economy, an atrophied system of law enforcement, and criminal gangs that grow more brazen each year. Young European women are in demand, and Ukraine, a country of 51 million people, has a seemingly endless supply. It is not that hard to see why.
Neither Russia nor Ukraine reports accurate unemployment statistics. But even partial numbers present a clear story of chaos and economic dislocation. Federal employment statistics in Ukraine indicate that more than two-thirds of the unemployed are women. The Government also keeps another statistic: employed but not working. Those are people who technically have jobs, and can use company amenities like day-care centers and hospitals. But they do not work or get paid. Three-quarters are women. And of those who have lost their jobs since the Soviet Union dissolved in 1991, more than 80 percent are women.
The average salary in Ukraine today is slightly less than $30 a month, but it is half that in the small towns that criminal gangs favor for recruiting women to work abroad. On average, there are 30 applicants for every job in most Ukrainian cities. There is no real hope; but there is freedom.
In that climate, looking for work in foreign countries has increasingly become a matter of survival.
“It’s no secret that the highest prices now go for the white women,” said Marco Buffo, executive director of On the Road, an antitrafficking organization in northern Italy. “They are the novelty item now. It used to be Nigerians and Asians at the top of the market. Now it’s the Ukrainians.” Economics is not the only factor causing women to flee their homelands. There is also social reality. For the first time, young women in Ukraine and Russia have the right, the ability and the willpower to walk away from their parents and their hometowns. Village life is disintegrating throughout much of the former Soviet world, and youngsters are grabbing any chance they can find to save themselves. “After the wall fell down, the Ukrainian people tried to live in the new circumstances,” said Ms. Shved. “It was very hard, and it gets no easier. Girls now have few opportunities yet great freedom. They see ‘Pretty Woman,’ or a thousand movies and ads with the same point, that somebody who is rich can save them. The glory and ease of wealth is almost the basic point of the Western advertising that we see. Here the towns are dying. What jobs there are go to men. So they leave.”
First, however, they answer ads from employment agencies promising to find them work in a foreign country. Here again, Russian crime gangs play a central role. They often recruit people through seemingly innocuous “mail order bride” meetings. Even when they do not, few such organizations can operate without paying off one gang or another. Sometimes want ads are almost honest, suggesting that the women can earn up to $1,000 a month as “escorts” abroad. Often they are vague or blatantly untrue.
Recruiting Methods: Ads Make Offers Too Good to Be True
One typical ad used by traffickers in Kiev last year read: “Girls: Must be single and very pretty. Young and tall. We invite you for work as models, secretaries, dancers, choreographers, gymnasts. Housing is supplied. Foreign posts available. Must apply in person.”
One young woman who did, and made it back alive, described a harrowing journey. “I met with these guys and they asked if I would work at a strip bar.” she said. “Why not, I thought. They said we would have to leave at once. We went by car to the Slovak Republic where they grabbed my passport. I think they got me new papers there, but threatened me if I spoke out. We made it to Vienna, then to Turkey. I was kept in a bar and I was told I owed $5,000 for my travel. I worked for three days, and on the fourth I was arrested.”
Lately, the ads have started to disappear from the main cities– where the realities of such offers are known now. These days the appeals are made in the provinces, where their success is undiminished.
Most of the thousands of Ukrainian women who go abroad each year are illegal immigrants who do not work in the sex business. Often they apply for a legal visa–to dance, or work in a bar–and then stay after it expires.
Many go to Turkey and Germany, where Russian crime groups are particularly powerful. Israeli leaders say that Russian women–they tend to refer to all women from the former Soviet Union as Russian– disappear off tour boats every day. Officials in Italy estimate that at least 30,000 Ukrainian women are employed illegally there now.
Most are domestic workers, but a growing number are prostitutes, some of them having been promised work as domestics only to find out their jobs were a lie. Part of the problem became clear in a two-year study recently concluded by the Washington-based nonprofit group Global Survival Network: police officials in many countries just don’t care.
The network, after undercover interviews with gangsters, pimps and corrupt officials, found that local police forces–often those best able to prevent trafficking–are least interested in helping. Gillian Caldwell of Global Survival Network has been deeply involved in the study. “In Tokyo,” she said, “a sympathetic senator arranged a meeting for us with senior police officials to discuss the growing prevalence of trafficking from Russia into Japan. The police insisted it wasn’t a problem, and they didn’t even want the concrete information we could have provided. That didn’t surprise local relief agencies, who cited instances in which police had actually sold trafficked women back to the criminal networks which had enslaved them.”
Official Reactions: Best-Placed to Help, But Least Inclined
Complacency among police agencies is not uncommon.
“Women’s groups want to blow this all out of proportion,”said Gennadi V. Lepenko, chief of Kiev’s branch of Interpol, the international police agency. “Perhaps this was a problem a few years ago. But it’s under control now.”
That is not the view at Ukraine’s Parliament–which is trying to pass new laws to protect young women–or at the Interior Ministry.
“We have a very serious problem here and we are simply not equipped to solve it by ourselves.” said Mikhail Lebed, chief of criminal investigations for the Ukrainian Interior Ministry. “It is a human tragedy, but also, frankly, a national crisis. Gangsters make more from these women in a week than we have in our law enforcement budget for the whole year. To be honest, unless we get some help we are not going to stop it.”
But solutions will not be simple. Criminal gangs risk little by ferrying women out of the country; indeed, many of the women go voluntarily. Laws are vague, cooperation between countries rare and punishment of traffickers almost nonexistent. Without work or much hope of a future at home, an eager teen-ager will find it hard to believe that the promise of a job in Italy, Turkey or Israel is almost certain to be worthless.
“I answered an ad to be a waitress.” said Tamara, 19, a Ukrainian prostitute in a massage parlor near Tel Aviv’s old Central Bus Station, a Russian-language ghetto for the cheapest brothels. “I’m not sure I would go back now if I could. What would I do there, stand on a bread line or work in a factory for no wages?”
Tamara, like all other such women interviewed for this article, asked that her full name not be published. She has classic Slavic features, with long blond hair and deep green eyes. She turned several potential customers away so she could speak at length with a reporter. She was willing to talk as long as her boss was out. She said she was not watched closely while she remained within the garish confines of the “health club.”
“I didn’t plan to do this.” she said, looking sourly at the rich red walls and leopard prints around her. “They took my passport, so I don’t have much choice. But they do give me money. And believe me, it’s better than anything I could ever get at home.”
Yitzhak Tyler, the chief of undercover activities for the Haifa police, is a big, open-faced man who doesn’t mince words.
“We got a hell of a problem on our hands,” he said. The port city of 200,000 has become the easiest entryway for women brought to Israel to work as prostitutes–though by no means the only one. Sometimes they walk off tour boats, but increasingly they come with forged documents that enable them to live and work in Israel. These have often been bought or stolen from elderly Jewish women in Russia or Ukraine.
“This is a sophisticated, global operation,” Mr. Tyler said. “It’s evil, and it’s successful because the money is so good. These men pay $500 to $1,000 for a Ukrainian or Russian woman. Do you understand what I am telling you? They will buy these women and make a fortune out of them.”
To illustrate his point, Mr. Tyler grabbed a black calculator and started calling out the sums as he punched them in.
“Take a small place,” he said, “with 10 girls. Each has 15 to 20 clients a day. Multiply that by say 200 shekels. So say 30,000 shekels a day comes in to each place. Each girl works 25 days a month. Minimum.”
Mr. Tyler was busy doing math as he spoke. “So we are talking about 750,000 shekels a month, or about $215,000. A man often owns five of these places. That’s a million dollars. No taxes, no real overhead. It’s a factory with slave labor. And we’ve got them all over Israel.”
The Tropicana, in Tel Aviv’s bustling business district, is one of the busiest bordellos. The women who work there, like nearly all prostitutes in Israel today, are Russian. Their boss, however, is not.
“Israelis love Russian girls.” said Jacob Golan, who owns this and two other clubs, and spoke willingly about the business he finds so “successful.” “They are blonde and good-looking and different from us.” he said, chuckling as he drew his hand over his black hair. “And they are desperate. They are ready to do anything for money.”
Always filled with half-naked Russian women, the club is open around the clock. There is a schedule on the wall next to the receptionist–with each woman’s hours listed in a different color, and the days and shifts rotating, as at a restaurant or a bar. Next to the schedule a sign reads, “We don’t accept checks.” Next to that there is a poster for a missing Israeli woman.
There are 12 cubicles at the Tropicana where 20 women work in shifts, 8 during the daytime, 12 at night. Business is always booming, and not just with foreign workers. Israeli soldiers, with rifles on their shoulders, frequent the place, as do business executives and tourists.
Mr. Golan was asked if most women who work at the club do so voluntarily. He laughed heartily.
“I don’t get into that,” he said, staring vacantly across his club at four Russian women sitting on a low couch. “They are brought here and told to work. I don’t force them. I pay them. What goes on between them and the men they are with, how could that be my problem?”
Deterrent Strategies: A System That Fails Those Who Testify
Every once in a while, usually with great fanfare and plenty of advance notice, Mr. Golan gets raided. He pays a fine, and the women without good false documents are taken to prison. If they are deported, the charges against them are dropped. But if a woman wants to file a complaint, then she must remain in prison until a trial is held. “In the past four years,” Betty Lahan, prison director of Neve Tirtsa here, said, “I don’t know of a single case where a woman chose to testify.”
Such punitive treatment of victims is the rule rather than the exception. In Italy, where the police say killings of women forced into prostitution average one a month, Parliament tried to create a sort of witness protection program. But it only allowed women to stay in the country for one year and did nothing to hide their identities.
“The deck is just so completely stacked against the women in all this,” said Daniella Pompei, an immigration specialist with the community of Sant’Egidio, the Catholic relief agency in Rome. “The police is the last place these women want to go.” She said that only 20 women had ever used the protection program.
It is not clear who will stop the mob. On a trip to Ukraine late last year, Hillary Rodham Clinton spoke out about the new white slave trade that has developed so rapidly there. The United States and the European Union have plans to work together to educate young women about the dangers of working abroad. Other initiatives, like stays of deportation for prisoners, victims’ shelters and counseling, have also been discussed.
“I don’t care about any of that,” said Lena, a young Latvian, one of the inmates waiting to be deported here. “I just want to know one thing. How will I ever walk down the street like a human being again?”
THE STORY OF “ZVI MIGDAL”
THE INFAMOUS JEWISH PROSTITUTION CARTEL
By Ushi Derman
January 4, 2018
Towards the end of the 19th century, the very sound of the name America had a magical effect on the millions of Jews in Eastern Europe, and they could not care less whether it was northern or southern America. For them, “America” was not a spot on the map, but a dream, a desirable destination, a place where Jews sleep on a bed made of money, and can even “eat an orange every day!”, as Isaac Bashevis Singer reported in his autobiography.
For a huge lot of Jewish women, though, the American experience was quite different. Rather than a safe haven or a heaven of oranges, they found horror and humiliation. One of the most shocking disgraceful criminal affairs in modern Jewish history took place in Buenos Aires, Argentina.
Like in most affairs, it was all about the financial principal of supply and demand. Back then Argentina was becoming one of the most leading economies in the world and was even called “the world’s granary”, thanks to its successful striving agricultural industries. As a results, by the end of the 19th century, Argentina was full of immigrants, Jews included. In 1895 there were appr. 6,000 Jews living in Argentina, and less than two decades later, in 1914, there were already 117,000. The city of Buenos Aires grew in frantic rates, not only Jews were rushing in, but also many immigrants from all over the world. One interesting characteristic of the Jewish immigration to Bueno Aires, that affected our story a great deal, was the odd ration between the genders: almost ten times more men than women.
A group of corrupted Jews were more than willing to demonstrate patriotism, and started to deliver the goods – literally. The growing demand for women led to the establishment of “Zvi Migdal” – a large powerful prostitution cartel established and owned by Jewish immigrants from Poland. Named after one of the founders, it was one of the largest networks in the South America, with branches and connections in Shanghai, Johannesburg, Rio de Janeiro and many other places.
At the peak of their activity they had tens of thousands of Jewishwomen working for them, some 2,000 brothels and a state of the art organizational mechanism working efficiently according to cruel mafia codes.
The main wing was established in Buenos Aires, concealed as a charity society. They were called Ruffians (Spanish for pimps), they traded and abused and enslaved unsuspecting Jewish women, just for money. The cartel had an original marketing tactic: every once in a while a polite Jewish gentleman used to come to the Jewish communities in Eastern Europe and spread ads calling “Jewish girls from good homes” offering either jobs at homes of rich Jews in Buenos Aires, or marriage. Feminism was still at its beginning back then, and the career opportunities of young women were scarce, they usually could expect to be no more mothers and housewives – therefore they were easily tempted to respond to the ads.
While still on the ship to Argentina they were locked, beaten, raped and starved, which the Jewish pimps referred to as their “re-education”. The Ruffians called these trips “remonte”, a term from the cattle trade.
Landing in Buenos Aires they were herded to a house where they had to undress and be inspected by pimps, who sold them in auctions as sex slaves. The “owners” – as well as customers – included officials, judges, and journalists.
The relations between the local Jewish community and the Ruffians were complicated. The local Jews called them “impure”, would not associate with them nor rent or sell them houses.
However, the ruffians wished to participate in prayers and ceremonies, and assumed generous donations would easily pave their way into the community as distinguished members.
It worked for them – but only for a while. One night, Nahum Sorkin, a known Zionist, was standing outside the Jewish theater trying to stop them from entering. Since then, the entire community started to condemn and denunciate the ruffians. They were no longer allowed in the synagogue nor in the cemetery. In response, the rich ruffians erected a fancy synagogue in the center of the Jewish quarter. The entrance floor was a glorious praying hall, while the upper floor was a brothel. During the synagogue’s inauguration the ruffians went outside with the Torah in their arms and circled the building, in front of the disgusted appalled neighbors who dared not interfere.
“Zvi Migdal” was active uninterruptedly for over four decades. The bosses were fearless, paying off every man who might jeopardize them: immigration officials, cops and judges that attempted to shut the cartel down; politicians; and city hall seniors who authorized the construction of the synagogue/brothel.
“Zvi Migdal” eventually fell, thanks to the efforts of three people: one Jewish prostitute, a police officer and a judge. In October 1922, Rachel Lea Liberman boarded the ship “Polania”, docking in Hamburg, with her two little boys. They were traveling to Buenos Aires, to join her husband Jacob Farber, a Jewish tailor who sailed to Argentina the previous year to look for a job. Their correspondence shows that Lea had high expectations from the new world across the sea. Nothing could prepare her for her bleak unfortunate fate. About a year after her arrival to Buenos Aires, her husband died of tuberculosis. Now she was a widow in a foreign country, mother of two, without a language and with no means of livelihood. Soon she had to sell her body.
Working for “Zvi Migdal” for several years, Lea managed to buy a house and to establish a small business. When she wished to resign and receive her share, the Zvi Migdal bosses tricked her into marring a charismatic stranger, who was in fact a mean violent sadist pimp, who forced her to work as a prostitute again.
Desperately – but also courageously – she turned to Julio Alsogaray, a police superintendent, known for his integrity and spotless measures. She presented him with evidences against the cartel’s bosses, which were carefully examined by the judge Dr. Rodriguez Ocampo, who did not succumb to the money and presents offered to him by the “Migdal” men.
The trial ended in September 1930, with 108 convictions, and long periods of imprisonment for all heads of the organization. After intensive public pressure, hundreds of Jewish pimps were imprisoned and deported to Uruguay. Towards the end of the 1930’s “Zvi Migdal” eventually ceased to exist altogether. The scars in the masses of hearts and bodies of young women were however irrecoverable forever.
By Ushi Derman
Jews are allowed by their “God” to keep slaves. The modern day Israel takes advantage of this divine permission.
“Your male and female slaves are to come from the nations around you; from them you may buy slaves. You may also buy some of the temporary residents living among you and members of their clans born in your country, and they will become your property. You can will them to your children as inherited property and can make them slaves for life, but you must not rule over your fellow Israelites ruthlessly.” (Leviticus 25:44-46)
- Prostitution and Prejudice: The Jewish Fight against White Slavery, 1870-1939 by Edward Bristow
Oxford, 340 pp, £15.00, November 1982, ISBN 0 19 822588 1
- Peasants, Rebels and Outcastesby Mikiso Hane
Scolar, 297 pp, £12.50, October 1982, ISBN 0 85967 670 6
Richard Titmuss has cast light on civilisation by comparing what happens when blood is sold and when it is donated. Edward Bristow’s subject, likewise, is a service which may be either donated or traded – or obtained under duress. His exploration of it takes him into unfamiliar recesses of public and private depravity, and shines a torch into the laundry room of Judaism.
This is to take white slavery as seriously as it was taken in the years before far worse befell in Europe. But it was never easy to keep a solemn countenance in the Runyonesque world that greeted the transatlantic voyager – stuffed full of promises and stowed away in a coal hole – who happened to find herself in Buenos Aires or the Lower East Side ghetto. Take the café at 92 Second Avenue, New York. Its owner was Abe (‘the Rabbi’) Rabbelle, president of the Independent Benevolent Association, ‘the scaffolding for Jewish commercial vice’. (Its initials must have escaped everyone’s memory when our own IBA was formed.) The listed habitués included ‘Kid Rags, mack and stuss house owner, Crazy Itch, gambler, Charlie Argument, strike breaker,’ and other leftovers from the cast of Guys and Dolls; while among the whores who might drop in for a word or two in passing was ‘the very successful Jenny Morris, nearly six feet tall, who was known as “Jenny the Factory” because of her rather unusual capacity for work’.
Bristow’s four-part counterpoint of criminality, market forces, sexual appetite and religious organisation is fully worked out in Buenos Aires. Today, this city’s historic role as the sin capital of the New World has faded far below Fleet Street’s horizon: even the Sun missed its opportunity last summer to speculate at feature length about the occupation of the Junta’s grandmothers. But during the colonisation of Argentina a century ago, ‘everything was arranged for high profits: the imbalance of numbers between males and females, the Latin cultural toleration of prostitutes, thorough police and political corruption, weak laws and new shipping routes.’
East European Jews never monopolised transnational prostitution systems – far from it. But enough of them were in the right place at the right time for an explanation to be worth seeking.
The first man through the revolving door was a Hungarian, ‘Bohemian Dovidl’, who had been brought up in his elder brother’s licensed brothel in Budapest. In 1867 he read in a newspaper that scarcity of women was the only drawback to the Eldorado on the La Plata coast of South America. He promptly took ship, and on arrival in Buenos Aires had himself baptised by the Jesuits. They thought highly of his plan to bring women into the country, and even advanced him money for the fares of the first batch, who were sold on arrival at windfall prices as the importer’s ‘daughters’ and ‘nieces’ to respectable men desperate for wives. Or so runs the story – one of the few which Bristow cannot verify from other sources and has to call ‘emblematic of the truth’.
Free-range arrangements of this kind later gave place to battery methods. But Buenos Aires is unintelligible without Galicia. One should begin with the supply side, not the demand side. As the eloquent and – in the social context of pre-1914 Vienna – courageous rabbi Joseph Bloch put it,
‘one must have seen the misery of the Polish Jewish cities for oneself, in order to understand that a trip to Buenos Aires is not frightening.’
Life expectancy was low – it was always a choice between VD abroad or malnutrition at home – and there were frequent calls from Latin America for new recruits or ‘remounts’, as they were called in a phrase that would have appealed to Sir Harry Flashman. Hersch Gottlieb, Sam Lubelski, ‘Napoleon’ Dickenfaden and other notorieties negotiated for them with agents and principals (that is, parents) at the equivalent of trade fairs in Poland and Austrian Galicia. One well-known exchange point, where Israel Meyrowitz was arrested for extradition to Germany, was the Hotel Silliger in Oswiecim (Auschwitz). The bargaining procedure is described vividly enough in Sholem Asch’s novel Mottke the Thief, though Bristow says that the facts of the traffic as a whole are closer to darker Yiddish fiction such as Hirschbein’s Miriam and Mendele’s Valley of Tears. The persistent tales of auction blocks – black slave style – in restaurants at ports of arrival may or may not be well-founded, but ‘it seems likely that something like nine thousand(9000) Jewish women came to practise prostitution in Argentina, Brazil and Uruguay during the quarter-century 1889-1914.’ Jewish procurers shared with the French the two hundred or so licensed houses in Buenos Aires at this period. Doubtless many of the girls were willing enough, at least initially, and the survivors, as they aged, became madams themselves. But there was nothing exceptional in Clara Adam’s experience with the kaftane Sigmund Reicher.
‘Lured from Germany by his offer of employment as a seamstress, she was raped during the journey by one of his accomplices, and virtually locked in a brothel from which she finally escaped in her slip at four o’clock one morning.’
The Paris journalist Albert Londres, author of an engaging and widely-read documentary novel The Road to Buenos Ayres (1928), understandably concentrated upon the ‘Franchucas’ who formed the aristocracy of the trade: they commanded five pesos a time to the Polaks’ two, and were also quicker to realise the productivity gains obtainable from oral variations. But he traced the East Europeans, too, from the sordid settlements outside Warsaw to the queues for their services in the casitas, where a really good worker could turn 400 tricks in a week. ‘The veritable White Slave Traffic, such as is conceived by the popular imagination, is carried on by the Polaks … on the German model, that is to say, very methodically.’
Not until 1930 did respectable Jews in Buenos Aires, by then the large majority, feel strong enough to mount prosecutions and make them stick on the Zvi Migdal, the fraternity that had been founded to protect the interests of ‘the unclean ones’, as they were called. Zvi Migdal paid for the separate burial grounds which the rabbis demanded:
‘In keeping with the South American custom, each gravestone had an enamelled photograph attached. One could study the features of the deceased pimp – fat, homberg, handlebar moustache – and read the incongruous inscription, “In the garden of Eden his soul rests.” ’
The founder of Zvi Migdal judged that religion would keep the women happy, if nothing else would, and he built them their own synagogue.
The term ‘white slavery’ was first used in the context of prostitution by a British vice campaigner, Dr Michael Ryan, as early as 1839, and it was used with special reference to Jewish procurers in the East End of London. When the Jewish Chronicle put aside the sense of shame in its community and decided to notice the problem, seventy years later, its comments went much further than might have been expected, identifying in the procurers’ activities on the international market ‘what is really an inversion of certain special characteristics with which Jews are endowed’.
By then London, though it became the home of the formidable Silver gang until they moved lock, stock and brothel to South Africa in 1898, was less remarkable for actual white slave traffic than for public hysteria about it. The Criminal Law Amendment Act 1912 even provided for the flogging of convicted pimps: according to Dr Bristow, this measure sent a lot of them on their travels again to countries with more indulgent penal systems.
(The reader of this absorbing book must expect his liberal assumptions or sensibilities to be put in check here and there.)
Throughout these pages, and not least because the author scrupulously refrains from labouring the point, the reader finds himself relating Jewish white slavery to the Final Solution and the climate of public opinion in Europe that eased it into German policy. In the Vienna of Hitler’s youth, the brothels of Mesdames Schick and Sachs, and the corruption of a certain police inspector Piss, were as famous as Sachertorte. This helped Hitler to write later in Mein Kampf:
‘In no other city of Western Europe could the relationship between Jewry and prostitution and even now the white slave traffic be studied better.’
It naturally made no difference to anti-Semites that the victims of the traffickers were themselves almost exclusively Jewish.
As Rabbi Bloch perceived, white slavery had become ‘the sexualisation of the blood libel’, which as late as 1900 in Central Europe put Jews on trial for the ritual murder of Christian children. Bloch also firmly maintained, and Bristow concurs, that anti-Semitism was a cause rather than a consequence of white slavery. That is, if either the Tsars or the commissars of Russia had been less concerned to discriminate against their Jewish population, the pool of squalor and starvation that made the traffic possible might never have formed.
But once it became clear internationally that in Latin America a substantial proportion (elsewhere a small minority) of the early Jewish settlement were engaged in mädchenhandels, official Jewry found itself in a lethal double bind. If it kept quiet about the unclean ones, it would be accused of winking at wickedness for commercial gain. But if it expelled offenders from synagogues, set up protection committees and pursued prosecutions, the whole problem would become notorious. Either way, Judaism was saddled with a public relations disaster beside which the brothels of Paddington were but a fleabite on the rump of the Church Commissioners.
This choice was itself defined in religious terms by sensitive Jews. For Claude Montefiore, whose collection of printed papers in the London Library is used here to good effect, it was a choice between chillul hashem (‘defiling the Name’) or kiddush(‘sanctification’, hence ‘integrity’). To their credit, many brave and tireless Jews on both sides of the Atlantic chose to expose the truth let it fall where it might. They ranged from Bertha Pappenheim (the ‘Anna O’ of the Freud-Breuer casebook, whose multiple life surely deserves a comprehensive biography) to less celebrated inquirers, social workers and publicists. But they were too close to the traffic to pull its roots up and disentangle them.
Even Bristow is better at research than analysis. The pious blamed deficient upbringing, and the progressive blamed primal poverty in the Old World or sweatshop drudgery in the New – Mamie Pinzer or someone like her must have been the author of the New York Yiddish comment on street-walking quoted by Lincoln Steffens: Es ist besser wie packin pants.
As the Victorian Irish proved to the satisfaction of John Burns, however, not all victims of malnutrition and unemployment succumb to prostitution. The medical educationalist Abraham Flexner – evidently the anonymous ‘student’ whom Theodore Dreiser mentions in his introduction to Londres’s book – spent two years at John D. Rockefeller’s request compiling his report on Prostitution in Europe (1914). Returning to the topic in his autobiography a quarter-century later, he concluded: ‘Prostitution occurs not solely as the result of economic pressure, but within the area of economic pressure under the influence of other conditions’ – among which he cited, relevantly enough for our own London, not just alcohol and low mentality, but:
‘the break-up of home or the destruction of home life, resulting from the constant influx of detached young men and women from the country into the large towns. Loneliness, sheer loneliness, even among employed persons, is not infrequently a decisive factor.’
Given the scale of rural depopulation and immigrant male labour that Europe has seen since 1945, what an epidemic of prostitution the pill and permissiveness must have saved us from.
Before returning to the contribution that the family psychology and religious organisation of the Ostjuden made to the system Bristow describes, it is worth glancing across to the aetiology of Japanese white slavery, which has recently been explored with equal skill in Mikiso Hane’s Peasants, Rebels and Outcastes. Most of the same techniques of enticement, stowing away, shipboard rape and compulsory indebtedness were borrowed by Japanese procurers. Or perhaps they invented them, for in Nagasaki, a city more influenced than others in pre-Meiji Japan by Christianity and foreign trade together, the traditional Japanese population controls of infanticide and abortion were less practised, and surplus female children were commonly sold abroad. The Japanese variant is that the slaves were younger (twelve upwards) and fared worse.
‘The life story of practically every one of the karayuki is an unmitigated horror story,’ writes Hane. Another difference is that in Japan the practice was ended, not by endogenic agitation as in Europe and America, but by the reforms of the American Occupation after 1945, preceded only by conscientious but inevitably ineffective campaigning by the Japanese Salvation Army under Yamamuro Gunpei in the 1900s.
There is or was something in common between the Jewish and Japanese family structures. Both tended to reduce girls to chattels and expected unquestioning obedience of them. The Galician girls evidently followed the decisions of their relatives and received the blandishments of the alphonses with a sleepwalking trust in the wisdom of the community, a trust very similar to the temper which undid so many millions when the wagon called for them in 1935-45.
(With hindsight, of course, even the girls drafted to the stews of Constantinople may have fared better than the siblings they left behind in Poland, but it would be a bold white-slaver who claimed this to his credit.)
The role of specifically religious obedience is summed up in the ingenious use the Jewish procurers made of stillah chuppah, a form of ritual marriage which any adult Jew could witness. This enabled a ruthless man to court a girl, marry her in the eyes of the synagogue (but not of the law), and later coerce her into prostitution. Even without prostitution, an abandoned wife – an agunah – was helpless without a get or certificate of divorce from her ‘husband’, and had no hope of remarrying. The coupling of this system to the street wisdom and commercial instincts of young male Jews – the ‘inversion’ mentioned by the Jewish Chronicle – did not in itself cause the white slave traffic, but it certainly made it a more practicable proposition.
One of the many sad ironies in the story Bristow has to tell is that the authorities – whether Catholic or Tsarist – had a simple counter to this stratagem available to them if they had condescended to apply it, and if Orthodox Jews could have brought themselves to co-operate. Rabbis could have been obliged to officiate at all stillah chuppah ceremonies, and licensed to give such unions civil validity too. But in Eastern Europe at the time, that would have been a solution too Western – and perhaps too Christian – to contemplate.
All these threads are drawn together by Bristow with considerable skill. There are rather more literals, inconsistencies of spelling and index omissions than one still expects of this publisher. But it can fairly be said of Bristow, as Dreiser said of Albert Londres, that ‘not Fabre himself, travelling here and there after his spiders, caterpillars and flies, has laboured more diligently or inspected more closely.’
- SLAVES AND CITIZENSJon Elster [3 June 1982]
- Ancient Slavery and Modern Ideology by M.I. Finley
- Economy and Society in Ancient Greece by M.I. Finley
- The Legacy of Greece: A New Appraisal edited by M.I. Finley
“Some fifteen years ago, in the course of reading up the history of technology, I came across an article by M.I. Finley, of whom I then knew nothing, on ‘Technical Innovat . . .”
- White Slaves
Christopher Driver [3 March 1983]
- Prostitution and Prejudice: The Jewish Fight against White Slavery, 1870-1939 by Edward Bristow
JEWS AND “WHITE SLAVERY”
Breaking silence, Patriots owner Kraft ‘truly sorry’ over prostitution scandal
Kraft, one of some 300 men charged with solicitation in Florida massage parlor bust, says he has ‘extraordinary respect for women’
By AP24 March 2019,
MIAMI — New England Patriots owner Robert Kraft is apologizing after being charged in a Florida massage parlor prostitution investigation.
“I am truly sorry,” Kraft said in a statement Saturday. “I know I have hurt and disappointed my family, my close friends, my co-workers, our fans and many others who rightfully hold me to a higher standard.”
The statement was first reported by ESPN.
Kraft said he has “extraordinary respect for women,” adding that his morals were shaped by his late wife. Myra Kraft died in 2011.
“Throughout my life, I have always tried to do the right thing. The last thing I would ever want to do is disrespect another human being,” he said.
“I expect to be judged not by my words, but by my actions. And through those actions, I hope to regain your confidence and respect,” Kraft said.
Kraft pleaded not guilty last month to two counts of misdemeanor solicitation of prostitution.
This week, attorneys for 77-year-old Kraft and other men charged in multiple Florida counties asked a judge to block the release of video recordings that police say show them engaging in sexual acts. The Associated Press is part of a media coalition trying to get the evidence in the case released.
Prosecutors have offered to drop the charges if Kraft and the men enter a diversion program for first-time offenders. That would include an admission they would be found guilty if their case went to trial, a $5,000 fine, 100 hours of community service and attendance in a class on the dangers of prostitution and its connection to human trafficking. They would also have to make a court appearance and be tested for sexually transmitted diseases.
Attorneys representing other defendants told the AP their clients won’t accept the offer because it is much tougher than what is offered in other Palm Beach County solicitation cases.
Kraft has an arraignment court hearing scheduled Thursday, though he is not expected to appear in person.
Kraft is one of about 300 men charged between Palm Beach and Orlando as part of a crackdown on illicit massage parlors and human trafficking. Ten parlors have closed and employees have also been charged. Many of the women are originally from China and were forced to live in the spas and not allowed to leave without an escort, according to investigators.
According to police records, Kraft was chauffeured on the evening of January 19 to a Jupiter massage parlor, where officers secretly recorded him engaging in a sex act and then handing over an undetermined amount of cash.
Investigators said Kraft returned 17 hours later. Kraft, who is worth $6 billion, was again videotaped engaging in sex acts before paying with a $100 bill and another bill, police said.
Hours later, he was in Kansas City for the AFC Championship game, where his Patriots defeated the Chiefs. His team then won the Super Bowl in Atlanta, the Patriots’ sixth NFL championship under his ownership.
William Burck, one of Kraft’s attorneys, told ESPN on Friday that police improperly obtained a search warrant to secretly install cameras inside the Jupiter massage parlor.
Kraft, who grew up in a Conservative Jewish family in Brookline, Massachusetts, was recently named the winner of the $1 million Genesis Prize, the so-called Jewish Nobel, for having “spoken out publicly and donated generously to organizations combating prejudices, including anti-Semitism and the delegitimization of the State of Israel.”
“[Israeli Prime Minister David Ben-Gurion] could hard imagine that Jewish women would stoop to crime or prostitution. When an associate, Meyer Weisgal, who resembled David Ben-Gurion, once told him humorously that a girl had walked up to him on a London street and offered her services, overwhelmed by the idea of sleeping with the ‘Israeli prime minister.’ Ben-Gurion, clearly troubled, was interested in only one thing: ‘Was she Jewish?’”
— Dan Kurzman, 1983, p. 39
‘But where do Jews enter the picture?’ I asked him.
‘Ah!’ said Simon [Wiesenthal], slapping his knee. ‘I haven’t told you something else. A few years ago, I have a talk with a man who went to school with Hitler. I ask him what Hitler was like in school and he says, ‘Normal. But maybe this hatred began after got this infection from a Jewish whore.’
‘Are you saying that Hitler caught syphillis from a Jewish prostitute?’ I asked increduously.
Wiesenthal laughed and said: ‘What’s the matter? You think only Jews can catch diseases from prostitutes?’
‘No, but were Jewish prostitutes common in Austria?’
‘Why not? Is there a Gentile monopoly on prostitution?’
‘I just don’t see it as a vocation for a nice Jewish girl.’
‘A nice Jewish girl,’ Simon mimicked. ‘You have those ‘nice Jewish girls’ in Israel, too, these days and they had them in Vienna years ago — when there were more Jews.’
— Alan Levy,
The Wiesenthal File. William B. Eerdmans Publishing Company,
Grand Rapids, MI, 1993 p. 18
At the end of the 19th and beginning of the twentieth century, Jews were deeply involved in what was popularly called “white slavery”: international prostitution rings. “White slavery,” notes scholar Albert Lindemann, “was a concern of Jewish leaders throughout the world, who recognized it as a special problem.” [LINDEMANN, p. 33]
[Jews have also dominated the pornography and commercial sexploitation business, a trend which continues to this day — see Mass Media section]
“Between 1880 and 1939,” notes scholar Edward Bristow, “the Jews played a conspicuous role in ‘white slavery,’ as the commercial prostitution of that era was dramatically called. Not only was this Jewish participation conspicuous, it was historically unprecedented, geographically widespread, and fraught with collective political dangers.” [BRISTOW, p. 1]
“Jewish trafficking,” says Bristow, “was anchored in brothel keeping, women freelanced or kept houses while their husbands procured … Jewish traffickers also supplied Gentile-run houses.” [BRISTOW, p. 56-57]
Rooted largely in Eastern and Central Europe where they “dominated the international traffic out of the area,” [BRISTOW, p. 2]
Jews were involved in prostitution rings that networked, wrote Arthur Mora (of London’s Jewish Association for the Protection of Girls and Women) in 1903, to “almost all parts of North and South Africa, to India, China, Japan, Philippine Island, North and South America, and also to many of the countries of Europe.” [BRISTOW, p. 1]
Jewish criminals trafficked women under their control virtually anywhere, also including the major cities of Bulgaria, Bosnia, Greece, Turkey, Lebanon, Egypt, Ceylon, Manchuria, South Africa, Rhodesia, and Mozambique. [BRISTOW, p. 181]
“By 1900,” says Bristow, “Jewish commercial vice was largely incorporated in underworld elements and many of it participants were predators of the poor.” [BRISTOW, p. 89]
Jewish pimps, procurers, and traffickers preyed mostly on non-Jewish women, but even large numbers of Jewish women were part of their stables.
In 1872, for example, Jewish prostitutes in Warsaw numbered 17% of the known prostitution population, in Krakow 27%, and in Vilna 47%. [BRISTOW, p. 23]
Within the Jewish community itself, it was not uncommon for recruiters to marry innocent Jewish young women and “deposit them in foreign brothels.” [BRISTOW, p. 25]
Many of the Jewish criminal underworld figures apparently saw no gap between their day-to-day activities and their religious lives, often maintaining their religious obligations. A Warsaw thug, Shilem Letzski, organized a small synagogue for Jewish “prostitutes, madams, pimps, and thieves.” This criminal community even had a rabbinical court “to settle disputes between pimps.” [BRISTOW, p. 60]
In Constantinople, prostitutes contributed money to “have their pimps called to Torah on holidays.” [SCHNEIDER, p. 225]
In New York City, “a public school and a large synagogue were situated right next door to the house of prositution.” [RUBINOW, I., 1959, p. 114-115]
In Buenos Aires, Argentina, notes Donna Guy, the Jewish pimp organization called the Varsovia Society:
“ostensibly functioned as a mutual aid society … In fact, the Varsovia
consisted of pimps who wanted to maintain their business and still lead
a religious life … Varsovia associates established their own synagogue
on Guemes Street in the midst of the traditional bordello district.”
[GUY, p. 22]
Israeli scholar Robert Rockaway notes also, for example, that prominent Jewish American mobster Longy Zwillman:
“always remained sensitive to his Jewish upbringings.”
When a close friend died, and the funeral was conducted in a church, Zwillman refused to attend. As he explained it, he was an ancestral member of the Jewish priest caste (the Cohens) and it was religiously forbidden to him to be with a dead body in a room. [ROCKAWAY, R., 1993, p. 30]
Dr. Louis Maretsky, the head of the B’nai B’rith in Germany, forlornly noted in 1912 that at least 271 of 402 prostitution traffickers on a Hamburg police list were Jewish; in reviewing similar lists for Eastern Europe and South Africa at least 374 of 644 were from the Jewish community. [BRISTOW, p. 56]
(No mention here is made of even higher possible percentages: as explored later, it has long been a tradition for many Jews in their diaspora to formally change their identifiable Jewish names). Concerning Galicia, Maretsky wrote that:
“the prominence of Jewish traffickers and brothel operators there is no doubt. From the files of the Austrian and German police there were 111 Jewish traffickers active in Galicia and the neighboring province of Bukovina for 1904-08 alone.” [BRISTOW, p. 56]
By 1889 Jewish women ran 203 of 289 (70%) of the licensed brothels in the “Pale of Settlement” (encompassing over 20 provinces in eastern Poland and western Russia — an area where Jews were about 12% of the population). 1122 of 5127 (22%) licensed prostitutes in this area were Jewish. [BRISTOW, p. 63]
The grievous political dangers for local Jewry in the context of enduring interethnic hostilities, when 78% of the rest of the women were Gentile, many indentured in Jewish houses, is obvious.
Further in the West, 16 of 19 licensed brothels in Warsaw were run by Jewish women; prostitutes in the low-class establishments were expected to service 40-50 customers a day, up to 60-70 on Holy Days. (In 1905 the respectable part of the Jewish Warsaw community rioted against the brothels; 40 whorehouses — legal and illegal — were reported destroyed, 8 persons killed, and 100 injured). [BRISTOW, p. 61]
In Minsk, Jews ran all four legitimate houses of ill repute. In the Russian province of Kherson (which includes the city of Odessa) 30 of 36 licensed brothels were Jewish-owned. The American Consul in Odessa wrote in 1908 that the “whole ‘business’ of prostitution is almost exclusively in the hands of the Jews.” [BRISTOW, p. 56]
Martin Fido notes another genre of Jewish Eastern European profiteer in the prostitution world, in England:
“Latvian ponces accompanied [prostitutes] to help them cross borders and find accomodation and working premises. These men were despised by police and by some of the criminal fraternity for ‘living off immoral earnings.’ But they were not pimps … They were effectively travel agents, couriers and managers in strange and unfriendly places. Their arrival in London ensured that a major strand of prostitution would be controlled by organized crime. One of these Latvians, Max Kassell, was still running a small stable of hookers in the 1930s, when he was murdered in Soho … Jewish dominance of the East End [of London] and its crime was reflected in their Yiddish name, ‘spielers’ (places for games). In the Brick Lane neighborhood, Isaac Bogard, a Jewish villain whose swarthy complexion and tightly curled black hair earned him the nickname ‘Darky the Coon,’ extended his interests. He began in the early years of the 20th century by supplying muscle for street traders who wanted to prevent newcomers from moving in, but he moved on to managing prostitutes and drinking clubs.” [FIDO, M., 2000, p. 19-20]
Then in London there was Harry ‘Little Hubby’ Distleman, “a Jewish club manager, gambler and possibly part-sharer (with his brother) in a chain of brothels.” [FIDO, M., 2000, p. 31]
Jewish author Chaim Bermant noted in the Jewish Chronicle in 1993 that:
“In the same period (1903-1909), 151 aliens [in Great Britain], most of them Jewish, were convicted for keeping brothels, and 521 for soliciting … Rabbi Avigdor Schonfeld … protested that to draw attention to the existence of Jewish prostitutes harmed the good name of the Jewish people.” [JEWISH CHRONICLE, 1-15-93]
More recently, Jewish singer Eddie Fisher recalls that:
“while performing in England in the late 1950s I had become friendly with a Jewish song plugger, a man who eventually left the music business to open a very exclusive whorehouse.” [FISHER, E., 1999, p. 293] A little later, there was the infamous Colin Levy:
“In 1973, one of the better-known and more appreciated solo practioners
of that tony [London prostitution] underworld was Norma Levy (nee Mary
Russell), an Irish-born prostitute in her mid-twenties whose career ‘on the
game’ was being managed by her husband Colin Levy, a petty crook …
In 1973, Colin Levy found himself short of money. Aware that one of
Norma’s patrons was the celebrated Lord Lambton, he decided to solve
his problem with a bit of blackmail. Camera in hand, he lay in wait outside
Norma’s bedroom during Lambton’s next visit to her flat. At the appropriate time, at a signal from Norma, he burst into the room. With flashlights popping in his face, the stunned Lambton was frozen on film, in
flagrante delicto, for posterity.” [Levy’s blackmail failed, but there was
a resultant scandal, including the ethics of newspaper (where Levy tried
to sell his photos] that published accounts of the story] [KIERNAN, T.,
1986, p. 162]
In Vienna, authorities knew of about 50 Jewish prostitution traffickers based in Czernowitz, “and they were a very inbred lot extending over two generations.” [BRISTOW, p. 74]
The most publicized ‘white slavery’ trial occurred in 1892, in Lemberg (once also called Lvov, then a Polish provincial capital, today called Lviv in Ukraine), where 27 traffickers — all Jews — were prosecuted for ensnaring women to go to Constantinople, Egypt, and India. Some of the women recruits understood their tasks, but others “were maids, others fieldworkers, one a butcher’s helper, all apparently promised honest jobs.” [BRISTOW, p. 74]
(Lemberg, “a cradle of Zionism from the 1880s onward,” also had anti-Jewish riots in 1918. [KRAJEWSKI, S., p. 340] )
There was a tradition of Yiddish folk songs about Jewish criminal behavior, like this:
“I am Salve, the thief,
Four brothers are we;
One is hungry, the other well fed,
But thieves all four are we.
One is a pickpocket,
The second a pimp, a handsome fellow;
One is a hijacker on the lookout for
And I am a house thief.
A pimp is common,
As all agree:
From his own wife,
He gets the disease
To be a hijacker is bitter:
You can rupture your lung,
It’s hard to earn something with some of
The best thing is to be a house thief.”
[RUBIN, R., 1979]
“In an age of pandemic anti-Semitism,” says Bristow, “a Jewish pimp was a political as well as a social force,” [BRISTOW, p. 4] very emotionally reinforcing anti-Jewish sentiments of the day. Jews were already blamed in central Europe for a financial crash in 1873 and economic competition between Jews and non-Jews was heightening.
A young and enraged Adolf Hitler paid particular attention to the highly visible phenomenon of Jewish street hustling and prostitution rings in Vienna, and was incensed that many non-Jewish women were coerced into the largely Jewish-run trade.
“In no other city of Western Europe,” he wrote in Mein Kampf, “could the relationship between Jewry and prostitution, and even now the white slave traffic, be studied better than in Vienna … an icy shudder ran down my spine when seeing for the first time the Jew as an evil, shameless, and calculating manager of this shocking vice, the outcome of the scum of the big city.” [BRISTOW, p. 84]
The Jewish prostitution business extended from Europe across the world, where it sometimes overlapped with French, Italian, Chinese, and other rings. In the Punjabi (Indian) capital of Lahore, “Jewish pimps were in the habit of leaving their women penniless only to reappear after workers had accumulated some money.” [BRISTOW, p. 195]
In Rio de Janeiro Jewish immigrants from Russia, Poland, Hungary, and Romania were so much identified with prostitution in the late 1800’s that “the kaftan, a Jew’s traditional long gown, became synonymous with pimp.” [BRISTOW, p. 113]
Thirty-nine Jews were expelled from Brazil in 1879 for soliciting women for prostitution and running illegal whorehouses. [BRISTOW, p. 114]
Of 199 licensed whorehouses in Buenos Aires in 1909, 102 were run by Jews and more than half the prostitutes were Jewish. [FRIED, p. 71]
4,248 Jewish women were registered for licensed brothels in Buenos Aires between 1880-1913, and those represented only the licensed ones. Edward Bristow estimates that 9,000 Jewish women immigrants came to Brazil in a 25-year span in that era as prostitutes (many were no doubt highly transient), when the total Jewish population of Argentina, Brazil, and Uruguay combined amounted to less than 60,000 people in 1910. [BRISTOW, p. 119]
In 1889, the Buenos Aires Bulletin Continental reported that 200 German/Austrian women were held against their will by Jewish pimps from Poland. [GUY, p. 5]
“Jewish procurers,” says Donna Guy,
“… became an organized ring in major cities all over the world.
They were particularly powerful in the Argentine port cities of Buenos Aires and Rosario … [GUY, p. 10]
Turn-of-the-century reports by the Hamburg B’nai B’rith [a Jewish fraternal organization] concluded that most prostitutes in Buenos Aires were Jewish and that traffickers ‘dress with ostentatious elegance, wear large diamonds, go to the theatre or opera daily; they have their own clubs and organizations where wares are sorted, auctioned, and sold … They have their own secret wireless code, are well organized, and– heavens! — in South America everything is possible.” [GUY, p. 19]
“Pooling their financial resources in a kind of guild,” notes another Jewish scholar, Howard Sachar,
“the [Polish Jewish] newcomers [to Argentina] in 1909 controlled slightly more than half the nearly two hundred licensed brothels in Buenos Aires. Jewish women served as their madams, and Jewish immigrant girls often were recruited and lured into their hands as prostitutes.” [SACHAR, H., 1985, p. 283]
In Cuba, Jews “became engaged in the ‘White Slave Trade,’” says Robert Levine, “importing prostitutes — some Jewish — from Poland … Many women recruited to the business had been trapped in the Russian and Polish Pale and throughout the Hapsburg Empire by force or fraud, and the human dilemma was great.” [LEVINE, p. 66]
Incredibly, even in Germany, where Jews have such a horrible history, such Jewish-related problems still bubble beneath the surface. In 1994 a US News and World Report reporter noted the observations of a Frankfort policeman patrolling Precinct 4:
“‘It’s all owned by Jews,’ [Bernd] Gayk says of the train station’s red light
district. ‘Practically everything in this area is owned by German Jews.
There is a single cabaret here owned by a German, but the rest belongs
to the Jews.’” [MARKS, J., p. 42, 44]
Shockingly, even shortly after the Holocaust when there were only a few thousand Jews left in Germany, they remained prominent in the prostitution business there. In 1961 Rabbi Richard L. Rubenstein interviewed Dean Heinrich Gruber of the Evangelical Church of East and West Berlin. Rubenstein notes that Gruber nearly himself perished in a Nazi concentration camp, and he “had a long and heroic record of opposition to the Nazis on Christian grounds as well as friendship and succor for Nazism’s chief victims [Jews].” [RUBENSTEIN, p. 5]
“The problem in Germany is that the Jews haven’t learned anything from what happened to them,” the Dean told a startled Rubenstein, “I always tell my Jewish friends that they shouldn’t put a hindrance in the way our fight against anti-Semitism.” [RUBENSTEIN, p. 7]
Gruber then complained that:
“many of the brothels and risqué night clubs, for example, were in Jewish hands, especially those in close proximity to army camps.” [RUBENSTEIN, p. 7]
And Rubenstein’s response to the clergyman? “Look,” the rabbi said,
“I don’t understand why you are so troubled about a pitifully small number of Jews in shady positions or interested in making money rather than following more edifying pursuits. It seems to me that every person pays a price for the kind of life he or she leads. Why should Germany be upset about a few such Jews unless they are overly involved in other peoples’ lives? Must every Jew make himself so pale, so inconspicuous, even invisible, that he will give no offense to Germans? … After what happened [the Holocaust], why should
any Jew remain and worry about German approval?” [RUBENSTEIN,
Marvin Wolf, a Jewish captain in the U.S. army serving in Germany, recalls that in 1971
“Rabbi David, the Jewish chaplain in Frankfort am Main — and the husband of my mother’s second cousin — told me that he knew several Jewish millionaires at whose homes I would be welcome — but, ‘I’m not crazy about any of them,’ he said. ‘What do you mean?’ I asked. ‘After the war, ’45, ’46, Germany was in ruins,’ he explained. ‘Terrible times. Nobody had money except the Occupation forces and a handful of Jews who had survived the camps and got a monthly pension — government reparations. In Frankfort, a few of these Jews recruited starving, desperate German girls and opened brothels. Got their revenge, and got rich, too. They’re in other businesses now, but do you really want to spend Pesach [Passover] with such people?” [WOLF, M. J., 1998]
By the turn of the century, “hundreds and hundreds” of Jewish women walked the Lower East Side of New York City as prostitutes. [FRIED, p. 8]
Benjamin Altman described the whores he saw on Allen Street:
“A hundred women on every … corner. Tall women, short women. Fair women. Ugly women.” [FRIED, p. 12]
Between November 15, 1908 and March 15, 1909, almost three-quarters of 2,093 prostitute cases before the New York City courts were “native-born” women, “a preponderance,” noted Albert Fried, “who were presumably Jewish.” (Ethnic categories included “Russian” and “Polish,” but not Jewish). [FRIED, p. 8] Of “foreign-born” prostitutes in court, 225 were Jewish, 154 French, 64 German, 31 Italian, 29 Irish, and 10 Polish. [FRIED, p. 8]
“The Jewish pimp,” says Albert Fried, “freely used marriage brokers and unemployment agencies to snare his victims — the young, the lonely, the innocent, the weak, the alienated, and the oppressed.” [FRIED, p. 14]
Starting out with one whore in 1890, for example, by 1912 Motche Greenberg had a “controlling interest in eight whorehouses and 114 women and was earning $4,000 a month, an incalculable amount by today’s standards.” [FRIED, p. 18]
In Chicago, by 1907 Rabbi Emil Hirsch declared that 75% of the “white slavery” in his city was controlled by Jews. [BRISTOW, p. 177]
The Jewish periodical the Forward forlornly reported that “the facts that were uncovered at the trial [for corruption] of [police] inspector McCann are horrifying. 75% of the white slave trade in Chicago is in Jewish hands. The owners of most of the immoral resorts on the West Side are Jews. Even in Gentile neighborhoods Jews stand out prominently in the nefarious business.” [FRIED, p. 70]
(Even in 2001, as a result of an undercover police investigation, Joel Gordon (a cantor, i.e., the man who sings liturgical songs and leads prayer in a synagogue) and his wife Alison Greenberg were tried in Chicago for running a brothel. Ginsberg was also charged with acts of prostitution. “We now realize,” said Howard Peritz, a member of Gordon’s synagogue, “that in starting a congregation around a man [Gordon], we were canonizing him.” [JEWISH TELEGRAPHIC AGENCY, 1-5-01] The same year, a synagogue room (Finchley Synagogue’s Kinloss Suite) in Great Britain made the news when it hosted a “stag party with three strippers performing ‘sexually explicit acts.’” Some of the money raised was supposed to go to a Jewish charity.) [ZERDIN, J., 29-01]
In 1987, a Jewish ultra-Orthodox group bought a slaughterhouse in Postville, Iowa, and began hiring illegal non-Jewish aliens from Eastern Europe to do the menial jobs at their company. Despite the fact that only Jews dominated the upper eschelons of the firm, and Jewish author Stephen Bloom’s underscores Jewish exploitation and condemnation of the entire non-Jewish community in his book called Postville, he frames the following in cautiously distancing, apologetic form:
“[A woman in her mid-twenties said:] ‘The managers are incredibly rude. One manager fired me because I wouldn’t go to bed with him.’ The translator used the word ‘manager,’ but the woman was most likely speaking of one of her supervisors, who would have been a Christian. ‘If the manager wants to sleep with you and you do, you get a raise. If you don’t, he makes your life miserable. Girls have no choice.’ No one [of a group of fellow workers] disputed what the woman said.” [BLOOM, S., 2001, p. 138]
In 1932, a few Polish-American officials of the city of Hamtramck (within Detroit) were charged by a grand jury with the “familiar charge of collusion with vice interests for gratuities.” The central player among those convicted was Jewish, Jacob Kaplan, “head of a vice syndicate” who collected $2,000 a month from disorderly houses in the Syndicate.” [WOOD, 1955, p. 53-54]
In 1941, the Detroit Free Press listed the names of those involved in another exposed vice ring in the area of Hamtramck, a ring that drew city officials and administrators into its web with bribes and payoffs. The racketeers included “Sam (the Jap) Gross, Hamtramck area brothel operator;” Charles Berman, “charged with operating a vice resort;” Irene Kaplan, “defendant in accusations as brothel keeper;” Ike (Forty Grand) Levy, “vice resort operator;” Kitty (Big Nose) Silverman, “reputed vice resort keeper;” and Jack (alias Jack Jesus) Silverman, “husband of Kitty.” [WOOD, A., 1955, p. 84, 86]
Israeli scholar Robert Rockaway notes the dimensions of Detroit’s all-Jewish Prohibition-era Purple Gang:
“Detroit’s Canadian border and existence of Jewish-owned Canadian distilleries, such as those of Sam and Harry Bronfman [Jewish founders of Seagram], offered opportunities to Detroit’s Jewish gangsters that rivaled bootlegging operations in Chicago and New York. Instead of transporting the liquor themselves, the Purples arranged for the Jewish-dominated ‘Little Jewish Navy’ to bring it across the river for them … The Gang’s dealings also extended to the sale of stolen diamonds, narcotics and prostitution in Canada.” [ROCKAWAY, R., 2001, p. 113-]
Green Bay, Wisconsin? George Tane, also Jewish, “was a bootlegger who controlled Green Bay, Wisconsin. After Prohibition, he owned all the houses of prostitution in the city.” [ROCKAWAY, R., 1993, p. 214]
Atlanta, Georgia? In 2001, Steven Kaplan, owner of the nude “Gold Club,” faced a Federal indictment on counts of “loan sharking, money laundering and bribing police officers.” He was also accused “of building a $50 million fortune in part by providing prostitutes for celebrities … Atlanta’s Gold Club is one of the most profitable nude clubs in the country.” [COURT TV, 4-30-2001]
With the American public beginning to note the high Jewish representation in the prostitution trade; some journalists implied wider corruption. In the June 1909 issue of McClure’s magazine, for instance, George Kibbe Turner wrote:
“Out of the Bowery and Red Light districts have come the new
development in New York politics — the great voting power of the
organized criminals. It was a notable development not only for New
York, but for the country at large. And no part of it was more
noteworthy than the appearance of the Jewish dealer in women, a
product of New York politics, who has vitiated more than any other
single agency the moral life of the great cities of America in the past ten
years.” [BELL, p. 187]
“It is an absolute fact,” wrote Ernest Bell in his 1911 book about white slavery, “that corrupt Jews are now the backbone of the loathsome traffic in New York and Chicago. The good Jews know this and feel keenly the unspeakable shame of it.” [BELL, p. 188]
“The criminal instincts that are so often found naturally in the Russian and Polish Jew,” wrote Frank Moss in a popular volume called American Metropolis (1897), “come to the surface in such ways as to warrant the opinion that these people are the worst element in the entire make-up of New York City … A large proportion of the people of New Israel are addicted to vice.” [FRIED, p. 55-56]
“Vice and crime did pervade the Lower East Side,” remarks Albert Fried, “and no one knew it more keenly than its residents. The better part of wisdom, so far as they were concerned, was to keep the disgrace quiet, to avoid publicizing it.” [FRIED, p. 59]
Meanwhile, in the early 1900’s the National Council of Jewish Women even had Yiddish-speaking volunteers working to keep new female immigrants at Ellis Island “out of the clutches of men (often Jewish) who would try to entice them into prostitution.” [SCHNEIDER, p. 224]
By the early years of the twentieth century, large urban department stores had reputations “as breeding grounds for prostitution.” In New York City, for example, Macy’s fell under suspicion to some, in part for its proximity to a former red light district. In 1913, Percy Strauss, the Vice President of Macy’s, hosted a “vice vigilante” group to investigate his store. “Strauss,” notes William Leach, “no dour Puritan, had a personal interest in leading a campaign against vice. For one thing, as a German Jew and spokesman for the Jewish community, he had to disprove the charge — widely made — that immigrant Jewish women (and many of his own employees, therefore) were more likely than other women to be prostitutes.” [LEACH, p. 117]
By 1915 the Committee Against Vice (of which Strauss had conveniently become chairman) published a report that affirmed that Macy’s was “normal.” “On the other hand,” says William Leach, “testimony in the ‘secret reports’ told a different tale. Saleswomen, it was revealed, passed around pornographic cards and poems about themselves, talked openly about ‘sex’ and ‘sex desire,’ and ‘gossiped about fairies,’ as one investigator put it. Private accounts by other investigative reformers echoed this view, that things at Macy’s and in other department stores were hardly ‘normal’ or ‘decent.’ ‘The strongest temptation of girls in department stores,’ warned one reformer, ‘is not poverty but luxury and money.’” [LEACH, p. 118]
Although Jewish poverty was — and is — often argued as a major reason for their high international representation in such a vice, a 1914 League of Nations survey of 25 Jewish prostitutes in Buenos Aires showed that only 4 of them claimed to be poor before their new trade. Nine, however, stated that their family lives had been “immoral or abusive in some way.” [BRISTOW, p. 95]
(As Robert Rockaway notes about the dozens of members in Detroit’s all-Jewish criminal Purple Gang, which was involved in everything from murder to prostitution: “[Purple Gang members] were not products of crushing poverty, broken homes, or widespread economic despair. Most of them had been raised in lower middle class households where the father had a steady, if not well-paying, job.”) [ROCKAWAY, R., 2001, p. 113-]
And what, in complete dismissal of the facts of history, is the common Jewish perspective about the unabashed prominence of Jews in the “white slave trade?” This, in 1998, from Jewish scholar Gary Tobin in a popular Jewish newspaper:
“For those with a knowledge of history of 19th century anti-Semitic
propaganda, the idea that Jews are running “the white slave trade” is
nothing new. Cartoon like stereotypes of loathsome Jewish villains
trading on the lost virtue of non-Jewish maidens was standard material
for the Nazis and their precursors … It took a sick mind to imagine that
Jews were running the world’s oldest profession.” [TOBIN,
Distinguished, p. 51]
Tobin was responding to a very disturbing article in the New York Times (January 11, 1998) which described the horrible situation that Slavic Gentile prostitutes face today, trapped in Israel. As the Times notes, with the collapse of the Soviet Union and a resulting economic chaos, literally hundreds of thousands of Russian and Ukrainian women have been dispersed throughout the world, most entrapped in an international prostitution trade run by the “Russian mafia.” (Although it is certainly inferred, what the Times article does not overtly mention is that a significant part of the Russian mafia is Jewish. . Glenn Frankel, however, a Washington Post correspondent in Jerusalem, took the perspective in 1994 that:
“there was much talk about the Russian mafia muscling in [to Israel], although the police and most crime experts agreed that the brothels were almost entirely under the control of the Israeli mafia and that the Russians worked mostly as low-level managers or hookers.” [FRANKEL, p. 175]
“Israel has become a routine destination for the global trafficking of women,” noted Leonard Fein in a 1998 Jewish Bulletin,
“women coerced into prostitution. The thousand such women brought into Israel annually derive principally from the countries of the former Soviet Union, and the way they get to Israel is that they are ‘purchased,’ each one costing between $10,000 and $20,000.
And they are, of course, expected to repay the cost to their masters through what amounts to indentured servitude — or, if you prefer the simpler and more straightforward, slavery … Some [are] as young as 15, and even 12 … Each woman earns between $50,000 and $100,000 for her pimp. The turnover of the prostitution trade in Israel comes to some $450 million a year.” [FEIN, 1998, p. 21]
In a country of six million people, this averages about $75 a year paid to a pimp for every man, woman, and child in Israel. There are today 150 brothels and sex shops in Tel Aviv alone. [SILVER, E., 8-25-2000, p. 32]
In an interview with Marina, a Russian prostitute, the (Jewish) Forward noted in 1995 that there were nine or ten “Russian” prostitution rings in Israel. “Girls are regularly beaten to keep them obedient,” Marina told the Forward, “… [The Israeli police are] regularly paid off with free visits to our girls. A reporter like you thinks you’re picking up a stone from the road, but you might find you’re digging into a mountain.” [SHILLING, p. 5] As a report by Israel’s Women’s Network noted in 1997:
“Every year, hundreds of women from the former Soviet Union
are lured to Israel, gaining entry by posing as immigrants, on
the promise of finding lucrative jobs, and then are lured into
prostitution by abusive pimps.” [GROSS, N., 1997, p. 16]
In 1998, Hungary’s Consul in Tel Aviv, Andrea Horvath complained that four Hungarian women “had allegedly met their Israeli employer in a Budapest discotheque. They were hired as dancers but were later forced to provide sexual services as well.” [MTI, 3-20-98] In 2000, Robert Friedman, in talking about his book about the “Russian Mafia,” noted Seimon Mogilevich, head of a major Jewish mobster network, noting him as “one of the world’s biggest traffickers in women, Eurasian women.” [PENKLAVA, M., 5-3-2000]
“Women are sold into the sex business in Israel for between $5,000 and $15,000,” reported the Jerusalem Post in 1998, “while the pimps who buy them can earn between $10,000 and $50,000 a year per woman … 2,000 women are brought to Israel from the CIS and forced by pimps to work as prostitutes. Many are brought here on false pretenses and held against their will.” As Ira Omait, head of the Haifa Emergency Shelter for Women told the Post, “We are fast heading in the direction of trade in minors for prostitution and slavery.” [COLLINS, L., 12-15-98, p. 5]
Incredibly, as noted in a Jerusalem Post editorial in 1998, “According to the Women’s Lobby [a women’s group in Israel], part of the [prostitution] problem is that there is no law against slavery in Israel.” [JERUSALEM POST, 1-13-98, p. 10]
“Poor Women of Ex-Soviet Union Lured Into Sex Slavery” headlined a 1998 Associated Press story. Women forced into prostitution in Israel, noted the article, were locked in rooms and provided only food and condoms. And Israeli law on the subject? In 1996 150 men were arrested for pimping or running brothels. Merely 21 cases went to trial, and no one was ever convicted of a crime. [LINZER, D., 6-13-98]
In 1998 an Israeli judge even ordered an insurance company to pay for a client’s prostitution addiction:
“An Israeli insurance company has been ordered to pay 300,000
shekels ($80,000) to fund the prostitution habit of a man injured
in a car accident.” [DEUTSCHE PRESSE-AGENTUR, 4-22-98]
The man claimed that since a 1993 car crash he couldn’t form relationships with women and relied on the prostitution world.
The 1998 New York Times article noted that more than 1,500 Slavic prostitutes — mostly from the Ukraine — have been deported from Israel for residence infractions in the past three years. (Israeli oppression knows no end: “Unlike many countries, Israel does not pay airfare for deportees.” [LINZER, D., 6-13-98]) Prostitution is not illegal in Israel and clients include foreign workers, “Israeli soldiers with rifles on their shoulders,” business executives, and tourists.
The Times noted that:
“The networks trafficking women run east to Japan and Thailand, where
thousands of young Slavic women now work against their will as
prostitutes, and west to the Adriatic Coast and beyond … The routes
are controlled by Russian gangs based in Moscow … In Ukraine alone …
as many as 400,000 women under 30 have gone in the past decade …
Israel is a fairly typical destination … Police officials [in Israel] estimate
that there are 25,000 paid sexual transactions every day. [This in a
country with a population of 6 million]. Brothels are ubiquitous … Once
they cross the border [into Israel] their passports will be confiscated
[by pimps], their freedoms curtailed and what little money they have
taken from them at once … The Tropicana, in Tel Aviv’s bustling
business district, is one of the busiest bordellos. The women who work
there, like nearly all prostitutes in Israel today, are Russian. Their bosses,
however, are not. ‘Israelis love Russian girls,’ said Jacob Golan, who
owns this and two other clubs, ‘…. They are blonde and good looking
and different than us … And they are desperate. They are ready to do
anything for money.” [SPECTER, p. 1]
“The situation,” wrote Jewish author David Weinberg in an 1998 article about prostitution in Israel entitled Not So Holy Land, “is enough to make you cry in despair, or vomit from shame.” [WEINBERG, D., 1-18-98, p. 8]
THE RUSSIAN JEWISH MAFIA
Is America being blackmailed into new Middle East wars on behalf of Israel?
The Russian mafia, known also as the Red Mafiya or the “Red Octopus”, is really the Jewish mafia in disguise.
It has secret links to Mossad, the Rothschild family, the Federal Reserve Bank, and to powerful Jewish organizations such as AIPAC and the ADL.
The activities of the Russian mafia range from the back streets of Moscow to the sex dens of Budapest and Tel Aviv, from the diamond mines of Sierra Leone to the jewelry workshops of Antwerp, from the plush casinos of Vegas and Atlantic City to the multimillion dollar gated mansions of Fisher Island, Miami, from the coke and heroin dives of Odessa and the Black Sea ports to the child porn parlors and underground brothels of Oregon. In the United States, the Russian mafia’s ground zero is Brighton Beach, Brooklyn, right slap-bang in the center of America’s most Jewish community.
“As of 2009,” Wikipedia reports, “Russian mafia groups have been said to reach over 50 countries and, as of 2010, have up to 300,000 members.”
Laura Radanko, herself Jewish, carried out some important research on the Russian mafia and lifted the lid on some of their most grisly secrets. She writes:
During the detente days of the early 1970s, when Soviet leader Leonid Brezhnev had agreed to allow the limited emigration of Soviet Jews, thousands of hard-core criminals, many of them released from Soviet Gulags by the KGB, took advantage of their nominal Jewish status to swarm into the United States….
In the 1970s, more than forty thousand Russian Jews settled in Brighton Beach. It was under the shadow of the elevated subway tracks on Brighton Beach Avenue, bustling with Russian meat markets, vegetable pushcarts, and bakeries, that the Russian gangsters resumed their careers as professional killers, thieves, and scoundrels….
— Laura Radanko, “The Superpower of Crime.
Brighton Beach, Brooklyn, also known as “Little Odessa” or “Little Russia by the Sea”, has now become the American headquarters of the Red Mafiya, ranked as the world’s most powerful criminal gang.
Though the “Russian” gangs of the former Soviet Union include Georgians, Armenians, Chechens and various other ethnic groups, the kingpins of the Russian mafia are all Jewish—in the same way as the leaders of the Bolshevik Revolution were Jewish, though their underlings were not. All the big guns of the Russian mafia—Semion Mogilevich, Monya Elson, Marat Balagula, Vyacheslav Ivankov, Vladimir Ginsberg, Ludwig Fainberg—are Jewish.
In Russia’s post-perestroika years (1990s), thousands of Russian thugs slipped into America with the greatest of ease. “The understaffed and ill-equipped Immigration Service,” Robert Friedman reveals, “seemed helpless to stop them.”
Hundreds of former Soviets athletes and Special Forces veterans of the Afghanistan wars, including many retired KGB agents, swarmed into America. (There are now between 500,000 to 750,000 Russian Jews here). Many Russian criminals ended up at Brighton Beach, Brooklyn, where they were to join the combat brigades of the Red Mafiya. They were at once put on retainers of $20,000 a month. No jobs as janitors or road sweepers for them! Like ducks to water, they took to sex trafficking, prostitution, pornography, drug running, loan sharking, stock market scams, arson, burglary, bank and jewelry frauds, counterfeiting, vote rigging, arms sales, extortion and murder.
The first four rules of the Jewish Russian mafia are these: never have emotions, reject your parents and closest relatives, screw as many women as possible, never work in a legitimate job.
The merciless cruelty of these psychopathic killers is so extreme that it is often said “they will shoot people just to see if their guns work.”
The investigative reporter Robert I. Friedman (1951-2002) revealed in his book Red Mafiya: How the Russian Mob Has Invaded America that the “Russian” mafia was in fact more Jewish than Russian.
Dr M. Raphael Johnson is even more specific in his details about the Jewishness of the Russian mafia. Having established themselves in Tel Aviv after the mass emigration of Russian Jews to Israel in the 1990s, the Russian mafia began to call itself the Israeli or Jewish mafia:
The roots of Jewish organized crime go far back into tsarist times. Organized crime syndicates assisted Lenin’s gangs in bank robberies and the creation of general mayhem. During the so-called revolution, it was difficult, sometimes impossible, to distinguish between Bolshevik ideologues and Jewish organized crime syndicates. They acted in nearly an identical manner….
The state of Israel is a major factor in the rise and power of the Jewish mafia. Jewish drug dealers, child porn pushers and slave traders are free from prosecution in Israel. Israel does not consider these to be crimes, so long as the victims are non-Jews. The Israeli state will not extradite its citizens to non-Jewish countries, and, therefore, Jewish murderers can quite easily escape punishment in Israel.
— Dr M. Raphael Johnson, The Judeo-Russian Mafia: From the Gulag to Brooklyn to World Dominion
Described by the CIA as a “grave threat” to global security and by Robert I. Friedman as the “world’s most dangerous man”, step forward Semion Mogilevich, arch-criminal extraordinaire and head honcho of the Red Mafiya— first in line of the rottweiler pack of Russian criminals unleashed on the world by the dissolution of the Soviet Union. With his headquarters in Budapest, the 67-year-old Mogilevich is behind most of the white slave trafficking of Russian and Eastern European girls to Israel.
The power of this Russian Jew is well-nigh legendary. He is the ideal villain for a James Bond movie. Not only does he own the armaments industry in Hungary, but he is said to control the entire vodka trade in Russia and Central Europe. He possesses his own army, artillery, mechanized infantry, anti-aircraft guns and missiles of all types. Furthermore, he is reported to have his own nuclear weapons, relics left over from the former Warsaw Pact countries. NATO has said he is a “threat to the stability of Europe,” though his name remains little known to the general public in view of the fact that his co-ethnics control the media.
With extensive holdings in Israel, Semion Mogilevich, “single-handedly controls the brothels in Israel, where Ukrainian and Russian girls are forced into sexual slavery.”
Every year, “hundreds of women from Eastern Europe and the former Soviet Union are smuggled into Israel. The traffic is linked to the Russian mafia taking root in Israel as a byproduct of mass immigration from the former Soviet Union.” (Sex Trade Flourishing in Holy Land, CBS, February 24, 2003).
With activities in countries ranging from Malaysia to Great Britain, Russian mobsters now operate in more than fifty nations. They smuggle heroin from Southeast Asia with the help and cooperation of the CIA. They traffic in weapons. And they have a special knack for large-scale extortion.
When the Judeo-Russian mafia and the Rothschild-controlled banksters, who are ultimately behind this vast criminal organization, want the American government to do something which the government is hesitant to do—for example, to bomb Iran—it will give the government a sign of its displeasure by staging a terrorist event.
Such as the Boston Marathon massacre.
The government will then get the message: Unless we bomb Iran, these Bolshevik Jews will continue to terrorize America. As they once terrorized Tsarist Russia. They will make us look impotent fools, because obviously we can’t control them. So we’d better do as they say—we’d better bomb Iran.
In the case of the Boston Marathon bombing, you will point out correctly, it was the Chechen mafia that was involved. Indirectly. Not the Judeo-Russian mafia. Save your breath. It’s well-known that the Chechen mafia is simply one of the many tributaries of the Judeo-Russian mafia. We know for a fact that Russian oligarch Boris Berezovsky gave his financial backing to the Chechen mafia in return for favors. He was the Big Jew behind the Chechen mafia, secretly pulling its strings and dictating its policies.
Berezovsky’s secret links to the Chechen mafia were first revealed by a Russian-American investigative reporter for Forbes magazine, Paul Klebnikov, in his chilling exposé Godfather of the Kremlin: Boris Berezovsky and the looting of Russia (2000). “No man profited more from Russia’s slide into the abyss,” Klebnikov noted. As a result of his explosive revelations, a contract was taken out on Klebnikov’s life. In 2004, the courageous reporter was gunned down in the streets of Moscow. (See here)
In a recent edition of Forbes magazine (24 March, 2013), we read this:
“Paul [Klebnikov] was gathering string for future articles that linked Nukhayev and other Chechen warlords with Berezovsky….Here’s where it gets even more interesting. Last summer, Berezovsky’s Chechen links came to the surface in a $6.5 billion London lawsuit that he had brought (and lost) against Roman Abramovich, a rival Russian oligarch….
Klebnikov was the first American reporter to be murdered in Russia, and with his silencing the world lost one of its foremost experts on the vast and murky crossroads of Russian organized crime, Kremlin politics, Chechen terrorists, billionaire oligarchs, and the spread of Russian mafia conglomerates around the world.” (See here)
It is through the Chechen mafia that international Jewry continues its secret war against Putin’s Russia.
Boris Berezovsky, Jewish Russian oligarch (1946-2013), died in mysterious circumstances on 23 March this year at his home in Ascot, England, age 67. According to the official version of events, he was suffering from depression, so he locked himself in his bathroom and hanged himself. One of his oldest friends, Nikolai Glushkov, said: “Boris was strangled.” Berezovsky’s close links with the Chechen mafia clearly indicate that Big Jewry probably had a hand in the Boston Marathon bombing. See Chechnya and the Boston bombing: link, if established, would be unprecedented.
Right now, the Chechen mafia has Moscow within its iron tentacles. But the Chechen mafia is itself controlled, as we now know, by the Jewish mafia—and by organized Jewry which helps it to money launder its vast illegal profits through the Jewish-controlled central banks of the world. This is not rocket science. The Jews control everyone by controlling the world’s money supply. Wasn’t it a Rothschild who said that? Sure, it was a Rothschild who said that. “Give me control of a nation’s money,” Mayer Amschel Rothschild famously said, “and I care not who makes its laws.”
So the Jews help to money launder the loot of the Chechen mafia: and in this way they control the Chechen mafia, just like they control the American government.
The Judeo-Russian mafia and the Chechen mafia are not competitors in America. They are cronies. They have carved out their separate kingdoms. Like friendly lions, they share the spoils, eating from different sides of the same carcase. The carcase is America.
How much does a contract murder cost? How much would it cost Mossad to outsource the Boston Marathon bombing to the Chechen mafia, assuming it decided to carry out such a terrorist act in the interests of Israel? Is $10 million too little? How about $100 million? That’s peanuts. Israel gets $8.2 million a day from the American taxpayer. It could buy a Chechen mafia massacre by sacrificing its pocket money for two days—or two weeks at the most—depending on the price.
Google the word “Israelification”. You’ll get “Israelification of American airports”. “Israelification of American domestic security”. “Israelification of US police”. “Israelification of America”.
Makes you think, doesn’t it?
Do American cops really need to hop on a plane to Tel Aviv to learn how to taser their fellow citizens?
How come all the airports that failed so suspiciously to stop alleged Al Qaeda terrorists from carrying out the 9/11 attack—how come all these airports were policed by ONE private security company, ICTS, owned by Israeli Jew Ezra Harel? “One company had automatic inside access to all of the airports from which hijacked planes departed on 9-11,” one reads with eyepopping incredulity. “An Israeli company. One that Mossad agents could easily find employment with.”
It looks like the Jewish mafia in America is not only running a protection racket called “Israeli Security”, it’s running a protection racket that doesn’t even work.
It is naïve to assume that all these terrorist attacks upon America are being carried out by amateurish Muslim organizations armed with little more than box cutters and pressure cooker bombs filled with nails and shrapnel. It is equally naïve to believe that the government is itself invariably complicit in these attacks and is orchestrating them for no other reason than that it needs an excuse to confiscate guns and usher in a spooky Orwellian police state.
No, it is far more plausible to regard these attacks upon America as Jewish mafia strong-arm tactics to coerce America into doing the will of Israel and in fighting its wars for it. If the administration refuses to do the dirty work of bombing Iran for Israel, then it will have to be taught a lesson. If American politicians cannot be persuaded by all the carrots provided by AIPAC, then maybe it is time to give them a taste of Mossad’s big stick.
The carrot and the stick. That’s how it works. This is how Jewish influence is exerted. AIPAC offers the carrots, Mossad wields the big stick.
The Judeo-Russian mafia or Red Mafiya—call it the “Jewish mafia” if you prefer— is now Mossad’s secret army in America. This is not a conspiracy theory, it’s an intellectually defensible thesis for which there is now more than enough evidence. He that has eyes to see, let him see.
This is what I read today in an authoritative source, and it chills my blood when I read it:
US intelligence officials worry that Russian gangsters will acquire weapons of mass destruction such as fissionable material or deadly, easily concealed pathogens such as the smallpox virus—all too readily available from poorly guarded military bases or scientific labs—and sell these deadly wares to any number of terrorist groups or renegade states.
In North America alone, there are now thirty Russian crime syndicates operating in at least seventeen US cities, most notably New York, Miami, San Francisco, Los Angeles, and Denver.
Mafiya groups that have flourished in post-perestroika Russia, they have something La Cosa Nostra can only dream about: their own country. The Russian mob virtually controls their nuclear-tipped former superpower, which provides them with vast financial assets and a truly global reach. Russian President Boris Yeltsin wasn’t exaggerating when he described Russia as “the biggest Mafia state in the world”.
In 1993, a high-ranking Russian immigration official in Moscow told US investigators that there were five million dangerous criminals in the former USSR who would be allowed to emigrate to the West.
Laura Radanko, The Russian Mafia in America
Guess what? They’re here! Right now.
Five million dangerous Russian criminals, former inhabitants of the gulags. Mossad’s American army. Five million Bolshevik terrorists, come to batten on America like bloodsucking vampires.
These guys aren’t listed in official statistics. Many of them are Israelis, part of the Israeli mafia, free to fly in and out of America without visas. Like they did around 9/11, casing the joint, clicking their cameras, doing high fives, and dancing as the Twin Towers came toppling down.
Never forget the Bolshevik Revolution. Sixty-six million dead Russian Christians put to death by cheka Jews. The descendants of these Bolshevik revolutionaries are now here! Like a pack of ravenous wolves moving in for the kill…
Adolf Hitler once described America as the Jews’ “new hunting grounds”. With remarkable prescience, he saw that the state of Israel would one day develop into organized Jewry’s international crime base from which it would plunder other countries, including America, with impunity:
It doesn’t even enter their heads to build up a Jewish state in Palestine for the purpose of living there; all they want is a central organization for their international world swindle, endowed with its own sovereign rights and removed from the intervention of other states: a haven for convicted scoundrels and a university for budding crooks.” — Adolf Hitler, Mein Kampf, Chapter 11
Never forget what Israeli Prime Minister Benjamin Netanyahu told Jonathan Pollard on exiting the spy’s prison cell after a friendly visit in 2002: “Once we squeeze all we can out of the United States it can dry up and blow away.”
America lies bleeding.
The carrion are feeding on its corpse right now.
THE TROPICANA BROTHEL
Following the closure of Tropicana, a bordello, its owner relates the tale of the rise and fall of an establishment that, in the past decade, came to symbolize the flourishing sex industry in Israel:
the exploitation of the prostitutes, the ties with the mafia, the relations with the police, the caprices of the clients, the loss of the Arab clientele. Tropicana as a mirror of Israeli society
Oct. 31, 2001
Tropicana, a bordello, operated throughout the 1990s in the diamond exchange district of Ramat Gan. Last month it shut down. The coalition that kept the coffers of the establishment full – affluent Israelis, Palestinians from the territories, and foreign workers – was no longer able to afford the cost, whether for financial or security reasons. The women who were compelled to work at Tropicana moved on to shabby places in south Tel Aviv. It’s the same work but for less money. Hundreds of women work as prostitutes in Metropolitan Tel Aviv. A few dozen of them do so under duress, having been brought to Israel under false pretences. A law that was passed last year by the Knesset and which prohibits white slavery has not succeeded in reducing the scale of the phenomenon.
Yaakov (“Jackie”) Yazdi, one of the owners of Tropicana, says that the disappearance of shame is the only thing that changed over the years in which the bordello operated. When Tropicana first opened, clients virtually snuck into the place. Over time, the embarrassment vanished and the people who arrived at Tropicana behaved as though they were dropping in at their bank branch or local post office to do some business.
Yazdi himself was never ashamed of the place. He was happy to give interviews and never hid his pride at what he did for a living. The soft-spoken man was born in Tehran 51 years ago. In his view, managing a bordello is a legitimate profession and nothing short of an avocation.
All kinds of urban legends sprang up about Yazdi during the period when Tropicana flourished. For years he was a target of the police and the income tax authorities, and was even arrested. But persistent rumors had it that he was cooperating with the authorities in return for their turning a blind eye on his business.
Yazdi has three children by his first wife and one child from his second wife, Svetlana, a new immigrant from Belarus. His eldest son, Golan, 27, says he hopes to follow in his father’s footsteps.
Yazdi chose to be interviewed in his office, which is adjacent to Tropicana. In the foyer are a pool table, a bar and plush sofas. Yazdi sat in a side room, wearing a black Hugo Boss T-shirt, at a desk on which was a container for charitable donations and opposite, a television broadcasting closed-circuit pictures of the street outside. With him were his lawyer, Yisrael Klein, and his son, Golan.
What Yazdi had to say flies in the face of all social norms and, in some cases, is tainted with racism and male chauvinism. His words are quoted here not only because of their anthropological value, but because they are more than mere words. Yazdi’s world, in which humiliation is redemption and exploitation is salvation, is the world in which the women who worked for him or for his business associates were forced to work.
`Not a final closure’
“We closed down because of the economic situation. A fairly large percentage of the clients, about a third, were Arabs. Of them, two-thirds were Israeli Arabs and a third were Arabs from the territories. In the past year, they stopped showing up altogether. Because of the fear. The police make a lot of visits to this area, there are roadblocks all the time, and the Arabs were harassed because of the situation; their cars were searched. Yesterday, I was walking around outside and there were five police vans in the area. Every driver had his papers checked.
“But the Arabs weren’t the only ones who stopped visiting. Because of the situation, about a third of the Jewish clients also don’t come anymore. They don’t have the money for Tropicana, so they look for other alternatives, other places that are cheaper. Do you know what’s going on out there? The number of guys using street prostitutes has gone up 500%. Today, if you wander around the stock exchange district, you can find 50 girls. There are four, five, six on every corner.
“I know from the street that a lot of people moved to apartments in south Tel Aviv. Apartments [of] girls who take NIS 50. Here it cost NIS 160, NIS 170, NIS 180, and the girl would get NIS 100 of that. The places in south Tel Aviv are less luxurious. They are not as clean, the girls aren’t as pretty, the towels aren’t always clean, there’s no hot water. Here, when you came in, you would see eight to 10 girls; you would take a seat, have some coffee, look over the girls and choose. There was a lobby, there was a cafeteria. In south Tel Aviv, you go in and a girl opens the door – take it or leave it, that’s what there is.
“One of the reasons, and I mean this sincerely, that I left the place open until now was that I didn’t want to hurt the livelihood of dozens of souls. When the situation changes, we’ll open again. This isn’t a final closure. A lot of people ask me when we’re going to open again. Sometimes I don’t feel right about answering, so I say we’re renovating.
“Everyone thinks that Tropicana made millions, but the amounts that were bandied about in the press are fantasies. There was more noise than money in Tropicana. We were five partners. Our net profit was around $10,000 to $15,000 a month. When you divvy that up among five people, it comes out to about the salary of a high-school teacher. In every healthy business, say a store or a restaurant, the expenses are between 20 and 30%. At Tropicana, because of undeclared expenses, such as protection and bribery, which we didn’t report, the expenses were higher. We reached a situation in which 80% of our revenues went for expenses.
“What do I mean by protection? Look, any place that isn’t legal always pays to the mafia. Whether it’s the Russians, the Arabs or the Israelis. What do you think, that there’s a shortage of mafias here? There was one week when we tried not to pay, so they threw a grenade at us. The grenade happened to have been thrown by the Arabs, but the Russians shot at us a few times. The amounts they demanded changed from one period to the next. The people who got protection money from us and my partners – who don’t want their names to be made known, so I can’t tell you who they are – have been threatening me all the time since we closed Tropicana.
“They want me to open up again. They’re threatening to burn my house, to burn the office. I had one threat that they would chop off Golan’s head. That’s why we have closed-circuit television. Sitting here, I can see the passageways, the building behind, from above and below. That’s how it is.”
`Every perversion and its price’
“The girls who worked here were mostly new immigrants from the former Soviet Union. They didn’t acclimatize well in Israel, they couldn’t find work. Half of them were married, some of them had children, and half were single. In the good days of Tropicana, say two or three years ago, they would make NIS 400-NIS 500 a day, based on five clients.
“I had a lot of conversations with the girls. It’s `transparent’ that every girl comes to this work because of the economic situation. No one comes to work here as a luxury, except for nymphomaniacs, which is rare. One of the reasons that I personally don’t see this work as something bad is that, according to my opinion, all I am doing is helping girls in distress who want to do this work.
“I saw it as helping them. I met a lot of girls whose families came to Israel thinking they would be able to live like human beings here, but who reached a condition in which they were living at the same level as in Russia. I saw that, in general, every family finds the girl who is suited for this work and sends them to do it, and she supports her brothers and sisters and grandparents and uncles and aunts. If there is a family that doesn’t have a girl, one of the boys becomes a thief, or becomes involved in protection, or a burglar, or a drug dealer. They can’t be here without money to buy food.
“I didn’t initiate a call to girls, except when we opened and I ran an ad saying `Girls needed for institute.’ Other than that, they always came to me. We didn’t have to run another ad in the paper or anything. They would come here, because this was supposed to be the only place where the girls could work quietly, without being told how to work, without being told what to do.
“Each girl who worked in Tropicana decided which client she would receive – if he seemed to be drunk, or on drugs, or didn’t want to shower, or had sores on his body, she didn’t have to take him. There are some girls for whom a large sexual organ isn’t suitable, such as those who are new to the work. There were some girls who wouldn’t take disabled people, people with amputated legs or arms. Those who wouldn’t take someone would leave the room and the client remained with a girl who didn’t mind. None of them would take clients who they thought might have a disease of some kind, or clients who refused to use condoms. The girls themselves took care of that. Each of them liked a different `trademark’: with oil or without oil, with a scent or without, banana-flavored, strawberry-flavored.
“We never told a girl she wasn’t suitable. We didn’t want to insult anyone. There were older women, some who weren’t pretty, fat ones – they came here and sat, and when it didn’t work out, they got up and left. The ones who were successful were the girls who gave good service, which means the right attitude toward the clients. Girls who could speak a little Hebrew, who were a bit sympathetic to the clients.
“What attracts Israeli men externally is usually a young girl with a large chest. That’s the demand of the Israeli male: young, blonde, big chest. But the Arab clients like full-bodied girls. They like to feel meat, to feel they are holding onto something. The older guys also liked girls who were a little full-bodied.
“The clients were men. All of them. We had married men who wanted some variety. Soldiers without a girlfriend. Young guys who didn’t have a steady girl. Some just wanted to come and go; those who didn’t have a steady partner. The married ones wanted perversions they couldn’t do with the wife. All the things they were ashamed to ask for. Especially blow-jobs. Israeli women don’t like giving blow-jobs, that’s well-known. The truth is that Israeli women don’t like to pamper men. I’m talking about the married women. They’re tired after cooking, taking care of the children, cleaning the house. They’re not even up to giving the man a massage.
“Here the men could get sado, bizarre, anal, orgies, all kinds of things like that. But that would cost more than NIS 190. Every perversion had its price, according to the girl’s demand. That added between NIS 100 and NIS 300.
“I do not only accept this business wholeheartedly, I accept it too much. If it were important only for a few sexual perverts, I would be ashamed. But there are a million clients a month in Israel. I look at this business as though it were a restaurant. When a person is hungry, he goes to a restaurant. When a person feels that he needs release, he goes to a parlor. I don’t find anything immoral about it.
“If there is anyone who can tell me that a man can restrain himself and not be with a woman until he gets married, if there is anyone who can prove that, who can suggest a different alternative – let’s say, doing hand jobs until the wedding – then it’s possible that my opinion is wrong. But as long as there are a million clients a month who want escorts, where is the moral problem here?
“Look what’s going on out there. There was a garage here that was turned into a parking lot. Yesterday, I went by there, I saw 15 cars, all of them with girls. The clients who came here, it was also a social outing for them. A lot of people would spend the evening here in the cafeteria. Is there something sad about that? I don’t think you know what’s happening. I don’t believe that if a guy feels lonely he comes to a place like this. If he feels lonely, he goes to a bar to look for a girl. Men come here to get release. It’s exactly the same as a hungry guy who wants a sandwich. I don’t understand why it’s hard for people to understand that.
“Is what I am saying getting through to you? I am saying that God gave men a stomach, his stomach becomes empty, so he feels the need to eat, otherwise he can’t concentrate at work, he gets an ulcer. It’s the same thing with sexual relations. Every few days, you have to release it.
“A man can’t go on without a woman. If he has no woman, or he quarrels with his wife, or she is pregnant, or she is having her period and he is religious – he can’t take it. There is no man who doesn’t go anywhere. There just isn’t, I don’t believe it. There is no man who can go for more than a week without having sexual relations, unless there is something wrong with him. A normal man needs a woman once a week. The ones who are ashamed to come here go to a peep show or to observation rooms and do it by hand.”
A man’s world
“Men always have more understanding and more sense and more intelligence than women. Let’s make it simple. If we take the theory that men and women have the same intelligence and that women can do whatever men can do – then why are all the key positions and the most sensitive things not run by women? Because there are fewer women? No. Because women aren’t so smart. I’m not a chauvinist, but it’s hard to find a woman who can reach a man’s level when it comes to intelligence. They have certain limitations. In general, a woman always has an intelligence limitation. They can understand things up to a certain level and beyond that, they can’t understand a thing.
“I am very liberal in my views when it comes to men and women. But it’s clear that as one who comes from Persia, I can’t be too different. The education you got at home, the way you grew up, always has its influence. As much as I want to be liberal and be different, it’s possible only up to a certain point. At home, I want to be in control. By control, I mean I will have the last word. You know what they say: When women can piss standing up like men, they will be able to make the same demands as men.
“I have no problem, as a married man, flirting with women. Svetlana has no problem with the fact that there were a few girls here that I had sexual relations with. Here is my personal opinion on the subject: As long as a man doesn’t have an extra-marital affair, and he goes only for release, I don’t call it cheating. It’s a little diversity. In my opinion, women should be happy that men go to a place like this once a week and don’t have an affair with some girl. Here he pays NIS 200 and that’s it. Smart women will understand me for sure.
“If there was a girl at Tropicana that I was attracted to, and she was attracted to me, too, we had a good time together. The truth is that I tried to do it as little as possible with the workers, because that’s not so healthy for the professional life. During the nine years that Tropicana was open, it only happened with a few girls. The ones I was attracted to were those with a head on their shoulders, because most of the Russian girls aren’t so smart. But some of them were doctors. There was one who was a chief surgeon in a hospital in Russia, there was one who was a lawyer, there was an engineer, very intelligent types.
“If I couldn’t satisfy Svetlana, I would also agree to a reverse arrangement. But women can be satisfied with a vibrator. Svetlana is a new immigrant from Belarus. I met her eight years ago in a Russian restaurant. It was at the birthday party for one of the girls, and she invited me. I saw Svetlana, I danced with her, I fell in love with her. She is twenty-something years younger than me, but for Russian women, age isn’t what counts. On the contrary, they have a tendency to go out with older men. Because Svetlana’s father died, maybe she also found a father image in me.
“I fell in love with her because she was so shy. When you work at a job like this, you are attracted to shy girls. She sat quietly and didn’t stand out, and because I am dealing all the time with girls in this profession, I am attracting to the shy, quiet ones. What Svetlana finds in me is that I am very respectful to women. I am a true gentleman. It’s the same with the working girls: I always speak nicely, respect them. There were a lot of times when I quarrelled with people who called them whores. I never insulted them because of the fact that they worked here and I never gave them the feeling that they were that kind of girl. I spoke to them the way you would speak with co-eds in university.”
`No moral problem’
“You ask what would happen if one of my daughters would tell me one day that she wants to work as a prostitute. If she were in financial distress and didn’t have what to eat, I would accept it. If there were no other way to solve the problem, then there is nothing to be done. The moral problem only exists because of prejudice. We treat prostitution differently from the way they do in the Commonwealth of Independent States. There, to be a prostitute is like any other job. There, the parents know where the girls work, and they’re not angry with them in the least. From my point of view, there is no moral problem about prostitution.
“But if a girl chooses this work not because of economic reasons – then I don’t accept it. It is work that isn’t pleasant. The woman has to be with people that she doesn’t like so much. It’s something like a nurse in a hospital who has to wipe the patients’ behinds. But a little less disgusting. There is no moral problem here. But it’s not pleasant for a girl to sleep with a man whom she doesn’t really like physically. It’s not a moral thing, it’s just not pleasant to be bed with someone she doesn’t like.
“In my conversations with girls, they told me they don’t look at the guy to see what he looks like, what he has, what he doesn’t have, only at the NIS 100. The reason I wouldn’t want my daughter to get into that is because it wouldn’t be pleasant for me if my daughter slept with someone she doesn’t want to sleep with, if she doesn’t have financial problems. Because, why do all the girls do it? They do it to give their children food, or to give them education, or buy them clothes.
“We have a video of a girl who worked here, that Golan and I filmed in Moldova. The girl sent all the money she made to Moldova, it was stolen, and then she was deported from Israel and she went back to Moldova with nothing. We filmed her living with her daughter, a beautiful girl and very smart, nine years old. Golan was in shock for a week. They live in a small room, they don’t even have a glass for drinking water from the tap, they don’t have a bed to sleep in, the girl has nothing to wear to school, she has no food, she asks the other children for a bit of their sandwiches.
“People don’t know what’s going on. That woman came here from Moldova to work, to send money so her girl would have clothes to wear. People who don’t know the situation first-hand can say whatever they like. The reality is that we traveled around and we saw how people live in Moldova. If there is someone who gets up in the morning and has no idea how to provide for his children, and he goes to trash cans to find food for them, and he has a girl of 18, what is easier than sending her to work to bring money home?
“That’s what we saw in Moldova. We were shocked. If they don’t send a girl from Moldova to Israel or Turkey or Greece in order to do that work – it means dying. People there are actually dying.”
“I am not religious, I am a secular person. But I believe in God. I believe in religion, I respect religion, I respect religious people. I myself built a synagogue. Where? In Bnei Brak. They had no problem with the fact that I was the one donating the money. Where exactly in Bnei Brak? The synagogue is now being built. It isn’t finished yet. When it’s ready, I’ll invite you to the opening.
“I am in touch with a lot of people; they know exactly what I do. My only ties with them are through the help I give them – either by buying a Torah scroll, or building a synagogue, or paying the expenses of a synagogue. What does it say – it’s written that you have to pay 10% to charity? Well, I give even more than 10%. A large part of the little that I make goes to charity.
“I came to Israel at the age of 16. I am from a very rich family, automobile dealers. I learned Hebrew fast. In the army I was a monitor in a secret unit. I listened in Arabic and Farsi. I like it, it was very interesting work. You hear about all kinds of raids, sabotage, operations of the enemy. You always know what’s going on before anyone else. You hear what the prime minister can hear only a quarter of an hour later. For example, if I had been on duty, I would have heard the pilots who smashed into the Twin Towers before Bush heard about it. In listening, you are the first one who hears.
“After my discharge I worked as a security officer at Ben-Gurion Airport. After that I moved to a company that sold advertising products – calendars, pens, things like that. In 1991, it was a bad time for business. Clients’ checks bounced, to the tune of NIS 300,000. I had a big overdraft in the bank. Unfortunately, there is a problem here in the country: When a check bounces, there is nothing to be done, no one does anything to people who sign checks and then can’t honor them.
“I looked around for something that would bring cash money, and fast, without any more checks and all kinds of tricks and sting operations. I didn’t take money from my parents – I always tried not to ask them for money. That was a principle with me. I always had the character of a guy who doesn’t ask anyone for favors. Even though my family said they wanted to help out, to replace my car, to replace my apartment, I always insisted on not asking for anything. Even when I went to visit them, I would stay in a hotel, not with them.
“Then a friend told me about this sphere. I took a loan from the bank and I gave a friend money to open Tropicana. Then the friend had to leave the country and I had no choice but to carry on with the business. I had to put the problem of the overdraft behind me very fast.
“The thing that made it possible for Tropicana to flourish in the past few years was the wave of immigration from Russia. Without that immigration, the business wouldn’t have existed. There aren’t enough girls in the country to do the work. Before the immigration from the CIS began, it was better to work at washing cars than in prostitution. Who in the world could have worked in this field? The peak years of Tropicana were around 1995 and 1996. We had between 70 and 80 clients a day then. It was important to me for us to be the most successful place. My character is that it’s important for me to be the most successful, and it was that way in other things I did, too.
“We became a symbol because we were the best. I am an advertising person and I knew how to advertise the place. Advertising means treating the clients well, order, discipline, public relations. We gave the clients the feeling that they were in an organized place. Besides that, at the time we opened there weren’t many handsome, luxury places.
“That’s also why I kept two monkeys here. I wanted them for the beauty of it, so the place would be more interesting. That was one of the advertising stunts. At first, there was one monkey, which a Russian circus left us as a gift. The monkey got bored, so we brought him a female monkey.
“I don’t have to work now. My mother and father died not long ago and I received an inheritance. In Persia, we have houses, property, warehouses of cars; I can get millions of dollars from there. I don’t bring it to Israel but to the United States, through my brothers.
“Right now I am taking care of myself. My health hasn’t been so good lately. They found all kinds of bad things in me. Not long ago I had a catheterization, now they found something in my blood, it might be cancer. All kinds of other troubles, too, a possible ulcer, kidney stones. I am doing a series of tests, taking care of myself a little.
“Why did I agree to be interviewed? First of all, because you asked, and I am a guy who all his life has tried not to say no to people when they ask for help. If you had told me your car had broken down somewhere, I would have come to help you. I am the type of guy who likes to help. But there is another reason, too. I know that the things I said will make a lot of people mad. But what I told you is how things really are. Maybe through the interview, people will understand our reality in Israel a little better. If you want to know what the reality is, you have to ask people like me, not the subscribers of Ha’aretz.” | https://shoah.org.uk/dutch-jews-and-the-white-slave-trade/ | 21 |
15 | Vitamin A is a group of unsaturated nutritional organic compounds that includes retinol, retinal, and several provitamin A carotenoids (most notably beta-carotene). Vitamin A has multiple functions: it is important for growth and development, for the maintenance of the immune system, and for good vision. Vitamin A is needed by the retina of the eye in the form of retinal, which combines with protein opsin to form rhodopsin, the light-absorbing molecule necessary for both low-light (scotopic vision) and color vision. Vitamin A also functions in a very different role as retinoic acid (an irreversibly oxidized form of retinol), which is an important hormone-like growth factor for epithelial and other cells.
In foods of animal origin, the major form of vitamin A is an ester, primarily retinyl palmitate, which is converted to retinol (chemically an alcohol) in the small intestine. The retinol form functions as a storage form of the vitamin, and can be converted to and from its visually active aldehyde form, retinal.
All forms of vitamin A have a beta-ionone ring to which an isoprenoid chain is attached, called a retinyl group. Both structural features are essential for vitamin activity. The orange pigment of carrots (beta-carotene) can be represented as two connected retinyl groups, which are used in the body to contribute to vitamin A levels. Alpha-carotene and gamma-carotene also have a single retinyl group, which give them some vitamin activity. None of the other carotenes have vitamin activity. The carotenoid beta-cryptoxanthin possesses an ionone group and has vitamin activity in humans.
Vitamin A can be found in two principal forms in foods: preformed vitamin A (retinol and retinyl esters) in foods of animal origin, and provitamin A carotenoids such as beta-carotene in plant foods.
Vitamin A deficiency is estimated to affect approximately one third of children under the age of five around the world. It is estimated to claim the lives of 670,000 children under five annually. Between 250,000 and 500,000 children in developing countries become blind each year owing to vitamin A deficiency, with the highest prevalence in Africa and southeast Asia. Vitamin A deficiency is "the leading cause of preventable childhood blindness", according to UNICEF. It also increases the risk of death from common childhood conditions such as diarrhea. UNICEF regards addressing vitamin A deficiency as critical to reducing child mortality, the fourth of the United Nations' Millennium Development Goals.
Vitamin A deficiency can occur as either a primary or a secondary deficiency. A primary vitamin A deficiency occurs among children and adults who do not consume an adequate intake of provitamin A carotenoids from fruits and vegetables or preformed vitamin A from animal and dairy products. Early weaning from breastmilk can also increase the risk of vitamin A deficiency.
Secondary vitamin A deficiency is associated with chronic malabsorption of lipids, impaired bile production and release, and chronic exposure to oxidants, such as cigarette smoke, and chronic alcoholism. Vitamin A is a fat-soluble vitamin and depends on micellar solubilization for dispersion into the small intestine, which results in poor use of vitamin A from low-fat diets. Zinc deficiency can also impair absorption, transport, and metabolism of vitamin A because it is essential for the synthesis of the vitamin A transport proteins and as the cofactor in conversion of retinol to retinal. In malnourished populations, common low intakes of vitamin A and zinc increase the severity of vitamin A deficiency and lead to physiological signs and symptoms of deficiency. A study in Burkina Faso showed a major reduction of malaria morbidity with use of combined vitamin A and zinc supplementation in young children.
Due to the unique function of retinal as a visual chromophore, one of the earliest and most specific manifestations of vitamin A deficiency is impaired vision, particularly in reduced light (night blindness). Persistent deficiency gives rise to a series of changes, the most devastating of which occur in the eyes. Some other ocular changes are referred to as xerophthalmia. First there is dryness of the conjunctiva (xerosis) as the normal lacrimal and mucus-secreting epithelium is replaced by a keratinized epithelium. This is followed by the build-up of keratin debris in small opaque plaques (Bitot's spots) and, eventually, erosion of the roughened corneal surface with softening and destruction of the cornea (keratomalacia), leading to total blindness. Other changes include impaired immunity (increased risk of ear infections, urinary tract infections, meningococcal disease), hyperkeratosis (white lumps at hair follicles), keratosis pilaris, and squamous metaplasia of the epithelium lining the upper respiratory passages and urinary bladder to a keratinized epithelium. In relation to dentistry, a deficiency in vitamin A may lead to enamel hypoplasia.
An adequate supply of vitamin A, but not an excess, is especially important for pregnant and breastfeeding women, both for normal fetal development and for breastmilk. Deficiencies cannot be compensated for by postnatal supplementation. Excess vitamin A, which is most common with high-dose vitamin supplements, can cause birth defects, so intake should not exceed recommended daily values.
Vitamin A metabolic inhibition as a result of alcohol consumption during pregnancy is one proposed mechanism for fetal alcohol syndrome, and is characterized by teratogenicity resembling maternal vitamin A deficiency or reduced retinoic acid synthesis during embryogenesis.
A 2012 review found no evidence that beta-carotene or vitamin A supplements increase longevity in healthy people or in people with various diseases. A 2011 review found that vitamin A supplementation of children under five at risk of deficiency reduced mortality by up to 24%. However, 2016 and 2017 Cochrane reviews concluded there was no evidence to recommend blanket vitamin A supplementation for all infants less than a year of age, as it did not reduce infant mortality or morbidity in low- and middle-income countries. The World Health Organization estimated that vitamin A supplementation averted 1.25 million deaths due to vitamin A deficiency in 40 countries since 1998.
While strategies include intake of vitamin A through a combination of breastfeeding and dietary intake, delivery of oral high-dose supplements remains the principal strategy for minimizing deficiency. About 75% of the vitamin A required for supplementation activity in developing countries is supplied by the Micronutrient Initiative with support from the Canadian International Development Agency. Food fortification approaches are feasible, but cannot ensure adequate intake levels. Observational studies of pregnant women in sub-Saharan Africa have shown that low serum vitamin A levels are associated with an increased risk of mother-to-child transmission of HIV. Low blood vitamin A levels have also been associated with rapid HIV infection and deaths. Reviews of the possible mechanisms of HIV transmission found no relationship between blood vitamin A levels in mother and infant, and conventional intervention is now established by treatment with anti-HIV drugs.
Given that vitamin A is fat-soluble, disposing of any excess taken in through diet takes much longer than with water-soluble B vitamins and vitamin C. This allows for toxic levels of vitamin A to accumulate. These toxicities only occur with preformed vitamin A (retinoid). The carotenoid forms (for example, beta-carotene as found in carrots) give no such symptoms, but excessive dietary intake of beta-carotene can lead to carotenodermia, a harmless but cosmetically displeasing orange-yellow discoloration of the skin.
In general, acute toxicity occurs at doses of 25,000 IU/kg of body weight, with chronic toxicity occurring at 4,000 IU/kg of body weight daily for 6-15 months. However, liver toxicities can occur at levels as low as 15,000 IU (4,500 micrograms) per day to 1.4 million IU per day, with an average daily toxic dose of 120,000 IU, particularly with excessive consumption of alcohol. In people with kidney failure, 4,000 IU can cause substantial damage. Signs of toxicity may occur with long-term consumption of vitamin A at doses of 25,000-33,000 IU per day.
Excessive vitamin A consumption can lead to nausea, irritability, anorexia (reduced appetite), vomiting, blurry vision, headaches, hair loss, muscle and abdominal pain and weakness, drowsiness, and altered mental status. In chronic cases, hair loss, dry skin, drying of the mucous membranes, fever, insomnia, fatigue, weight loss, bone fractures, anemia, and diarrhea can all be evident on top of the symptoms associated with less serious toxicity. Some of these symptoms are also common to acne treatment with Isotretinoin. Chronically high doses of vitamin A, and also pharmaceutical retinoids such as 13-cis retinoic acid, can produce the syndrome of pseudotumor cerebri. This syndrome includes headache, blurring of vision and confusion, associated with increased intracerebral pressure. Symptoms begin to resolve when intake of the offending substance is stopped.
Chronic intake of 1,500 µg RAE of preformed vitamin A may be associated with osteoporosis and hip fractures because it suppresses bone building while simultaneously stimulating bone breakdown, although other reviews have disputed this effect, indicating further evidence is needed.
A 2012 systematic review found that beta-carotene and higher doses of supplemental vitamin A increased mortality in healthy people and people with various diseases. The findings of the review extend evidence that antioxidants may not have long-term benefits.
As some carotenoids can be converted into vitamin A, attempts have been made to determine how much of them in the diet is equivalent to a particular amount of retinol, so that comparisons can be made of the benefit of different foods. The situation can be confusing because the accepted equivalences have changed.
For many years, a system of equivalencies in which an international unit (IU) was equal to 0.3 µg of retinol (~1 nmol), 0.6 µg of β-carotene, or 1.2 µg of other provitamin-A carotenoids was used. This relationship is alternatively expressed by the retinol equivalent (RE): one RE corresponded to 1 µg retinol, 2 µg β-carotene dissolved in oil (it is only partly dissolved in most supplement pills, due to very poor solubility in any medium), 6 µg β-carotene in normal food (because it is not absorbed as well as when in oils), and 12 µg of either α-carotene, γ-carotene, or β-cryptoxanthin in food.
Newer research has shown that the absorption of provitamin-A carotenoids is only half as much as previously thought. As a result, in 2001 the US Institute of Medicine recommended a new unit, the retinol activity equivalent (RAE). Each µg RAE corresponds to 1 µg retinol, 2 µg of β-carotene in oil, 12 µg of "dietary" beta-carotene, or 24 µg of the three other dietary provitamin-A carotenoids.
| Substance and its chemical environment (per 1 µg) | IU (1989) | µg RE (1989) | µg RAE (2001) |
| beta-Carotene, dissolved in oil | 1.67 | 1/2 | 1/2 |
| beta-Carotene, common dietary | 1.67 | 1/6 | 1/12 |
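To make the 2001 conversion factors above concrete, here is a minimal sketch in Python; the function name, the dictionary, and the example amounts are illustrative assumptions rather than part of any nutrition standard or library.

```python
# Minimal sketch of the 2001 IOM retinol activity equivalent (RAE) conversions
# described above. All names and example values here are illustrative only.

# Micrograms of each source that supply 1 microgram RAE (2001 IOM factors).
MICROGRAMS_PER_RAE = {
    "retinol": 1,                      # preformed vitamin A
    "beta_carotene_oil": 2,            # beta-carotene dissolved in oil (supplements)
    "beta_carotene_dietary": 12,       # beta-carotene in ordinary food
    "other_provitamin_a_dietary": 24,  # alpha-carotene, gamma-carotene, beta-cryptoxanthin in food
}

def to_rae(amount_micrograms: float, source: str) -> float:
    """Convert micrograms of a vitamin A source into micrograms RAE."""
    return amount_micrograms / MICROGRAMS_PER_RAE[source]

# Example: 6000 micrograms of beta-carotene eaten in food contribute 500 micrograms RAE,
# while the same amount taken as an oil-based supplement would contribute 3000 micrograms RAE.
print(to_rae(6000, "beta_carotene_dietary"))  # 500.0
print(to_rae(6000, "beta_carotene_oil"))      # 3000.0
```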
Because the conversion of retinol from provitamin carotenoids by the human body is actively regulated by the amount of retinol available to the body, the conversions apply strictly only for vitamin A-deficient humans. The absorption of provitamins depends greatly on the amount of lipids ingested with the provitamin; lipids increase the uptake of the provitamin.
A sample vegan diet for one day that provides sufficient vitamin A has been published by the Food and Nutrition Board (page 120). Reference values for retinol or its equivalents, provided by the National Academy of Sciences, have decreased. The RDA (for men) established in 1968 was 5000 IU (1500 µg retinol). In 1974, the RDA was revised to 1000 RE (1000 µg retinol). As of 2001, the RDA for adult males is 900 RAE (900 µg or 3000 IU retinol). By RAE definitions, this is equivalent to 1800 µg of β-carotene supplement dissolved in oil (3000 IU) or 10800 µg of β-carotene in food (18000 IU).
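As a quick arithmetic check of the equivalences quoted in the previous paragraph, the same factors can be inverted to ask how much of each source would supply the 900 µg RAE adult male RDA; this is only an illustrative sketch, and the constants simply restate the figures given above.

```python
# Illustrative check of the 900 microgram RAE adult male RDA equivalences quoted above.
RDA_RAE = 900  # micrograms RAE

print(RDA_RAE * 2)    # 1800 micrograms of beta-carotene dissolved in oil
print(RDA_RAE * 12)   # 10800 micrograms of beta-carotene in food
print(RDA_RAE / 0.3)  # 3000.0 IU of retinol, using the 1989 definition of 0.3 micrograms per IU
```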
The U.S. Institute of Medicine (IOM) updated Estimated Average Requirements (EARs) and Recommended Dietary Allowances (RDAs) for vitamin A in 2001. For infants up to 12 months there was not sufficient information to establish an RDA, so an Adequate Intake (AI) is shown instead. As for safety, the IOM sets tolerable upper intake levels (ULs) for vitamins and minerals when evidence is sufficient. Collectively, the EARs, RDAs, AIs and ULs are referred to as Dietary Reference Intakes (DRIs). For the calculation of retinol activity equivalents (RAE), each µg RAE corresponds to 1 µg retinol, 2 µg of β-carotene in oil, 12 µg of "dietary" beta-carotene, or 24 µg of the three other dietary provitamin-A carotenoids.
|Life stage group||US RDAs or AIs (µg RAE/day)||Upper limits (UL, µg/day)[IOM 1]|
|Infants||0-6 months||400 (AI)||500 (AI)|
For U.S. food and dietary supplement labeling purposes, the amount in a serving is expressed as a percent of Daily Value (%DV). For vitamin A labeling purposes, 100% of the Daily Value was set at 5,000 IU, but it was revised to 900 µg RAE on 27 May 2016. Compliance with the updated labeling regulations was required by 1 January 2020 for manufacturers with US$10 million or more in annual food sales, and by 1 January 2021 for manufacturers with lower volume food sales. A table of the old and new adult daily values is provided at Reference Daily Intake.
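Label arithmetic under the updated Daily Value is a simple ratio; the sketch below assumes the post-2016 value of 900 µg RAE and a hypothetical serving amount, purely for illustration.

```python
# Hypothetical percent Daily Value (%DV) calculation under the updated US Daily Value
# of 900 micrograms RAE for vitamin A (effective 27 May 2016).
DAILY_VALUE_RAE = 900  # micrograms RAE

def percent_daily_value(serving_rae: float) -> float:
    """Return the %DV for a serving containing the given micrograms RAE."""
    return 100 * serving_rae / DAILY_VALUE_RAE

print(percent_daily_value(450))  # 50.0, i.e. a serving with 450 micrograms RAE is 50% DV
```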
The European Food Safety Authority (EFSA) refers to the collective set of information as Dietary Reference Values, with Population Reference Intake (PRI) instead of RDA, and Average Requirement instead of EAR. AI and UL are defined the same as in the United States. For women and men of ages 15 and older, the PRIs are set respectively at 650 and 750 µg RE/day. The PRI for pregnancy is 700 µg RE/day, and for lactation 1,300 µg RE/day. For children of ages 1-14 years, the PRIs increase with age from 250 to 600 µg RE/day. These PRIs are similar to the U.S. RDAs. The EFSA reviewed the same safety question as the United States, and set a UL at 3,000 µg/day for preformed vitamin A.
| Source | Retinol activity equivalences (µg RAE per 100 g) |
| cod liver oil | 30000 |
| liver (beef, pork, fish) | 6500 |
| sweet potato | 961 |
| collard greens, frozen then boiled | 575 |
| bell pepper/capsicum, red | 157 |
| bell pepper/capsicum, green | 18 |
The role of vitamin A in the visual cycle is specifically related to the retinal form. Within the eye, 11-cis-retinal is bound to the protein opsin at conserved lysine residues to form rhodopsin in rods and iodopsin in cones. As light enters the eye, the 11-cis-retinal is isomerized to the all-trans form. The all-trans retinal dissociates from the opsin in a series of steps called photo-bleaching. This isomerization induces a nerve signal along the optic nerve to the visual center of the brain. After separating from opsin, the all-trans-retinal is recycled and converted back to the 11-cis-retinal form by a series of enzymatic reactions. In addition, some of the all-trans retinal may be converted to the all-trans retinol form and then transported with an interphotoreceptor retinol-binding protein (IRBP) to the pigment epithelial cells. Further esterification into all-trans retinyl esters allows for storage of all-trans-retinol within the pigment epithelial cells, to be reused when needed. In the final stage, 11-cis-retinal rebinds to opsin to re-form rhodopsin (visual purple) in the retina. Rhodopsin is needed to see in low light (contrast) as well as for night vision. Kühne showed that rhodopsin in the retina is only regenerated when the retina is attached to the retinal pigmented epithelium, which provides retinal. It is for this reason that a deficiency in vitamin A inhibits the re-formation of rhodopsin and leads to one of the first symptoms, night blindness.
Vitamin A, in the retinoic acid form, plays an important role in gene transcription. Once retinol has been taken up by a cell, it can be oxidized to retinal (retinaldehyde) by retinol dehydrogenases; retinaldehyde can then be oxidized to retinoic acid by retinaldehyde dehydrogenases. The conversion of retinaldehyde to retinoic acid is an irreversible step; this means that the production of retinoic acid is tightly regulated, due to its activity as a ligand for nuclear receptors. The physiological form of retinoic acid (all-trans-retinoic acid) regulates gene transcription by binding to nuclear receptors known as retinoic acid receptors (RARs) which are bound to DNA as heterodimers with retinoid X receptors (RXRs). RAR and RXR must dimerize before they can bind to the DNA. RAR will form a heterodimer with RXR (RAR-RXR), but it does not readily form a homodimer (RAR-RAR). RXR, on the other hand, may form a homodimer (RXR-RXR) and will form heterodimers with many other nuclear receptors as well, including the thyroid hormone receptor (RXR-TR), the Vitamin D3 receptor (RXR-VDR), the peroxisome proliferator-activated receptor (RXR-PPAR) and the liver X receptor (RXR-LXR).
The RAR-RXR heterodimer recognizes retinoic acid response elements (RAREs) on the DNA whereas the RXR-RXR homodimer recognizes retinoid X response elements (RXREs) on the DNA; although several RAREs near target genes have been shown to control physiological processes, this has not been demonstrated for RXREs. The heterodimers of RXR with nuclear receptors other than RAR (i.e. TR, VDR, PPAR, LXR) bind to various distinct response elements on the DNA to control processes not regulated by vitamin A. Upon binding of retinoic acid to the RAR component of the RAR-RXR heterodimer, the receptors undergo a conformational change that causes co-repressors to dissociate from the receptors. Coactivators can then bind to the receptor complex, which may help to loosen the chromatin structure from the histones or may interact with the transcriptional machinery. This response can upregulate (or downregulate) the expression of target genes, including Hox genes as well as the genes that encode for the receptors themselves (i.e. RAR-beta in mammals).
Vitamin A promotes the proliferation of T cells through an indirect mechanism involving an increase in IL-2. In addition to promoting proliferation, vitamin A (specifically retinoic acid) influences the differentiation of T cells. In the presence of retinoic acid, dendritic cells located in the gut are able to mediate the differentiation of T cells into regulatory T cells. Regulatory T cells are important for prevention of an immune response against "self" and regulating the strength of the immune response in order to prevent host damage. Together with TGF-β, vitamin A promotes the conversion of T cells to regulatory T cells. Without vitamin A, TGF-β stimulates differentiation into T cells that could create an autoimmune response.
Hematopoietic stem cells are important for the production of all blood cells, including immune cells, and are able to replenish these cells throughout the life of an individual. Dormant hematopoietic stem cells are able to self-renew, and are available to differentiate and produce new blood cells when they are needed. In addition to T cells, vitamin A is important for the correct regulation of hematopoietic stem cell dormancy. When cells are treated with all-trans retinoic acid, they are unable to leave the dormant state and become active; however, when vitamin A is removed from the diet, hematopoietic stem cells are no longer able to become dormant and the population of hematopoietic stem cells decreases. This shows the importance of a balanced amount of vitamin A within the environment, allowing these stem cells to transition between dormant and activated states in order to maintain a healthy immune system.
Vitamin A has also been shown to be important for T cell homing to the intestine, to affect dendritic cells, and to play a role in increased IgA secretion, which is important for the immune response in mucosal tissues.
Vitamin A, and more specifically retinoic acid, appears to maintain normal skin health by switching on genes and differentiating keratinocytes (immature skin cells) into mature epidermal cells. The exact mechanisms behind pharmacological retinoid therapy agents in the treatment of dermatological diseases are being researched. For the treatment of acne, the most prescribed retinoid drug is 13-cis retinoic acid (isotretinoin). It reduces the size and secretion of the sebaceous glands. Although it is known that 40 mg of isotretinoin breaks down to the equivalent of 10 mg of ATRA, the mechanism of action of the drug (original brand name Accutane) remains unknown and is a matter of some controversy. Isotretinoin reduces bacterial numbers in both the ducts and the skin surface. This is thought to be a result of the reduction in sebum, a nutrient source for the bacteria. Isotretinoin reduces inflammation via inhibition of chemotactic responses of monocytes and neutrophils. Isotretinoin has also been shown to initiate remodeling of the sebaceous glands, triggering changes in gene expression that selectively induce apoptosis. Isotretinoin is a teratogen with a number of potential side-effects. Consequently, its use requires medical supervision.
Vitamin A-deprived rats can be kept in good general health with supplementation of retinoic acid. This reverses the growth-stunting effects of vitamin A deficiency, as well as early stages of xerophthalmia. However, such rats show infertility (in both males and females) and continued degeneration of the retina, showing that these functions require retinal or retinol, which are interconvertible but which cannot be recovered from the oxidized retinoic acid. The requirement of retinol to rescue reproduction in vitamin A-deficient rats is now known to be due to a requirement for local synthesis of retinoic acid from retinol in the testis and embryos.
Retinyl palmitate has been used in skin creams, where it is broken down to retinol and ostensibly metabolised to retinoic acid, which has potent biological activity, as described above. The retinoids (for example, 13-cis-retinoic acid) constitute a class of chemical compounds chemically related to retinoic acid, and are used in medicine to modulate gene functions in place of this compound. Like retinoic acid, the related compounds do not have full vitamin A activity, but do have powerful effects on gene expression and epithelial cell differentiation. Pharmaceuticals utilizing megadoses of naturally occurring retinoic acid derivatives are currently in use for cancer, HIV, and dermatological purposes. At high doses, side-effects are similar to vitamin A toxicity.
The discovery of vitamin A may have stemmed from research dating back to 1816, when physiologist François Magendie observed that dogs deprived of nutrition developed corneal ulcers and had a high mortality rate. In 1912, Frederick Gowland Hopkins demonstrated that unknown accessory factors found in milk, other than carbohydrates, proteins, and fats were necessary for growth in rats. Hopkins received a Nobel Prize for this discovery in 1929. By 1913, one of these substances was independently discovered by Elmer McCollum and Marguerite Davis at the University of Wisconsin-Madison, and Lafayette Mendel and Thomas Burr Osborne at Yale University, who studied the role of fats in the diet. McCollum and Davis ultimately received credit because they submitted their paper three weeks before Mendel and Osborne. Both papers appeared in the same issue of the Journal of Biological Chemistry in 1913. The "accessory factors" were termed "fat soluble" in 1918 and later "vitamin A" in 1920. In 1919, Harry Steenbock (University of Wisconsin-Madison) proposed a relationship between yellow plant pigments (beta-carotene) and vitamin A. In 1931, Swiss chemist Paul Karrer described the chemical structure of vitamin A. Vitamin A was first synthesized in 1947 by two Dutch chemists, David Adriaan van Dorp and Jozef Ferdinand Arens.
During World War II, German bombers would attack at night to evade British defenses. In order to keep the 1939 invention of a new on-board Airborne Intercept Radar system secret from German bombers, the British Ministry of Information told newspapers that the nighttime defensive success of Royal Air Force pilots was due to a high dietary intake of carrots rich in vitamin A, propagating the myth that carrots enable people to see better in the dark. | https://popflock.com/learn?s=Vitamin_A | 21 |
29 | Covalent Bonding Worksheet
Covalent bonding worksheet: covalent bonding occurs when two or more nonmetals share electrons, attempting to attain a stable octet in their outer shell for at least part of the time. Draw a dot diagram for each element listed. Circle the unpaired electrons that will be shared between the elements.
Hydrogen is a diatomic element. Some of the worksheets for this concept are: chapters and practice work on covalent bonds, covalent bonding, university of at, chemical bonding (ionic and covalent), covalent bonds, science grade term work booklet (complete), chapter, and bonding basics.
List of Covalent Bonding Worksheet
If you found the worksheet you are looking for, click on the icon or the print icon to print or download it. Worksheet: Bonding basics, covalent bonds, answer notes. Complete the chart for each element. Follow your teacher's directions to complete each covalent bond.
Hydrogen + hydrogen (a diatomic element): write the symbols for each element. Use Fruity Pebbles or other materials for the bonding activity. Covalent bonding occurs when two or more nonmetals share electrons, attempting to attain a stable octet of electrons at least part of the time.
1. 2 Page Worksheet Reviews Skill Divided Parts 1 Concise Fill Covalent Bonding Ionic Bonds Chemical Bond
Chemical bonding practice (modified): determine whether each bond is a covalent bond or an ionic bond. Help your students understand chemical bonding with this informational text and worksheet. The reading passage tells the story of how the different types of chemical bonding were discovered, and the worksheets allow students to practice modeling chemical bonds.
2. Page Worksheet Covering Identification Metals Covalent Bonding Chemical Bond
3. Ionic Covalent Bonding Worksheet Luxury Teaching Resources Worksheets
Give students the covalent bonding worksheet and key (bonding and key.doc) and guide them through it. The covalent bond is the part where the circles touch or overlap. Each of the fluorine atoms has put one of its electrons into the shared part of the shell, so this is called a single covalent bond, which can be written as F-F.
4. Ionic Covalent Bonds Color Number Bonding Activity
You can do the same thing with compounds: carbon dioxide is made of carbon and oxygen, and they are both nonmetals. Ionic bond and covalent bond worksheet (chemical bonds, ionic bonding and covalent bonding): this download now includes a student workbook, which is a wonderful, beautiful thing.
5. Ionic Covalent Bonds Worksheet Bonding
6. Ionic Covalent Bonds Worksheet Chemical Formula Sorting Bonding
Grade and grade learners are expected to match the structural formula to the molecular formula. Empirical and molecular formula worksheet answers.
7. Ionic Covalent Metallic Bonding Properties Laboratory
Differentiate between ionic and covalent bonds with this printable diagram on molecular physics. this chemistry resource can be used as a class handout or as a transparency. worksheets. what is a covalent bond in this physical science printable, students evaluate statements about ionic compounds, covalent compounds, and.
8. Ionic Covalent Metallic Bonds Bonding Chemistry
9. Ionic Covalent Naming Worksheet Bonding Bonds
10. Naming Covalent Compounds Worksheet Binary Bonding Grief Worksheets
11. Naming Ionic Compounds Nomenclature Chemistry Ion Worksheet
12. Naming Understanding Covalent Bonding Molecules Bond Length
Ionic and covalent bonds worksheet; chapter worksheet key (chemistry). Pure covalent bonding only occurs when two nonmetal atoms of the same kind bind to each other. When two different nonmetal atoms are bonded, or a nonmetal and a metal are bonded, then the bond is a mixture of covalent and ionic bonding called polar covalent bonding.
13. Practice Naming Writing Chemical Compounds Distance Learning Covalent Bonding Bond
14. Ionic Bonding Worksheet Key Covalent Answer Bonds
A. Ca and Cl; B. C and S; C. Mg and F; D. N and O; E. H and O; F. S and O; G. and Cl; H. F and O; I. P and S; J. H and Cl; K. C and H; L. H and H. We begin our discussion of the relationship between structure and bonding in covalent compounds by describing the interaction between two identical neutral atoms, for example the H2 molecule, which contains a purely covalent bond.
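The simple worksheet rule used for pairs like these (two nonmetals share electrons covalently, while a metal bonded to a nonmetal is treated as ionic) can be illustrated with a tiny classifier; this is only a sketch for the practice pairs listed, the small metal list is an assumption, and real bond character actually varies continuously with electronegativity difference.

```python
# Tiny sketch of the simple worksheet rule for the practice pairs above:
# two nonmetals -> covalent bond; a metal with a nonmetal -> ionic bond.
# The metal set covers only elements likely to appear in such exercises.
METALS = {"Ca", "Mg", "Na", "K", "Al", "Fe"}

def bond_type(element_a: str, element_b: str) -> str:
    """Classify a two-element pair as 'ionic' or 'covalent' by the metal/nonmetal rule."""
    if (element_a in METALS) != (element_b in METALS):
        return "ionic"      # exactly one of the pair is a metal
    return "covalent"       # two nonmetals (metal-metal pairs are not handled here)

for pair in [("Ca", "Cl"), ("C", "S"), ("Mg", "F"), ("N", "O"), ("H", "Cl"), ("H", "H")]:
    print(pair, bond_type(*pair))
# ('Ca', 'Cl') ionic, ('C', 'S') covalent, ('Mg', 'F') ionic,
# ('N', 'O') covalent, ('H', 'Cl') covalent, ('H', 'H') covalent
```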
The other three pairs of electrons on each chlorine atom are called lone pairs: pairs of electrons in a structure that are not involved in covalent bonding.
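A quick way to check that count is to start from each atom's valence electrons. The sketch below is a minimal illustration, assuming ordinary main-group valence-electron counts and a single shared pair between two identical atoms.

```python
# Count lone pairs per atom in a single-bonded homonuclear diatomic
# molecule (H2, F2, Cl2, ...): each atom puts one electron into the
# shared pair, and its remaining valence electrons pair up as lone pairs.

VALENCE_ELECTRONS = {"H": 1, "F": 7, "Cl": 7, "Br": 7, "I": 7}

def lone_pairs(element: str) -> int:
    """Lone pairs on each atom when two identical atoms share one pair of electrons."""
    remaining = VALENCE_ELECTRONS[element] - 1  # one electron goes into the bond
    return remaining // 2

for el in ("H", "F", "Cl"):
    print(f"{el}2: 1 shared pair, {lone_pairs(el)} lone pairs on each atom")
```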
A coordinate covalent bond is a covalent bond in which one atom contributes both bonding electrons. Key vocabulary: structural formula, single covalent bond, ion, bond dissociation energy, coordinate covalent bond.
Covalent bonds are bonds formed between nonmetals. The nonmetals all have high ionization energies, so octets of valence electrons are obtained by sharing electrons instead of gaining or losing them. An example of covalent bonding is F₂, fluorine gas.
Chapter practice worksheet, covalent bonds and molecular structure: How are ionic bonds and covalent bonds different? Ionic bonds result from the transfer of electrons from one atom to another; covalent bonds result from two atoms sharing electrons. Describe the relationship between the length of a bond and the strength of that bond.
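As a rough illustration of that relationship, shorter bonds between the same pair of atoms are generally stronger and harder to break. The small sketch below uses approximate textbook values for the three kinds of carbon-carbon bond; exact figures vary slightly between sources.

```python
# Shorter carbon-carbon bonds are stronger: compare the single, double
# and triple bonds. Figures are approximate textbook values.

cc_bonds = [
    # (bond, approximate length in pm, approximate bond energy in kJ/mol)
    ("C-C single bond", 154, 347),
    ("C=C double bond", 134, 614),
    ("C≡C triple bond", 120, 839),
]

for name, length_pm, energy in sorted(cc_bonds, key=lambda bond: bond[1]):
    print(f"{name}: about {length_pm} pm long, about {energy} kJ/mol to break")
```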
Glue the periodic table onto the poster board along with the completed puzzle pieces, then turn in the worksheet and poster board. Use your puzzle pieces to combine the following ions and show how they form compounds. The different types of chemical bonding are determined by how the valence electrons are shared among the bonded atoms.
Naming ionic compounds practice worksheet: name the following ionic compounds, such as hydrogen iodide, HI. Lab: modeling ionic and covalent bonds; hand out the worksheet on ionic versus covalent bonding and its key.
For example, note that hydrogen is content with two, not eight, electrons. Show how covalent bonding occurs in each of the following pairs of atoms. This worksheet and answer key is a great way to assess students' prior knowledge of ionic and covalent bonding.
It is great for high school chemistry classes, and a wonderful review activity for middle and high school classes that have already learned about bonding. For each of the following covalent bonds, write the symbols for each element and draw a dot structure for the valence shell of each element. | https://cafedoing.com/covalent-bonding-worksheet/ | 21 |
36 |
Canada is a federation with eleven components: the national Government of Canada and ten provincial governments. All eleven governments derive their authority from the Constitution of Canada. There are also three territorial governments in the far north, which exercise powers delegated by the federal parliament, and municipal governments which exercise powers delegated by the province or territory. Each jurisdiction is generally independent from the others in its realm of legislative authority. The division of powers between the federal government and the provincial governments is based on the principle of exhaustive distribution: all legal issues are assigned to either the federal Parliament or the provincial Legislatures.
The division of powers is set out in the Constitution Act, 1867 (originally called the British North America Act, 1867), a key document in the Constitution of Canada. Some amendments to the division of powers have been made in the past century and a half, but the 1867 Act still sets out the basic framework of the federal and provincial legislative jurisdictions.
The federal nature of the Canadian constitution was a response to the colonial-era diversity of the Maritimes and the Province of Canada, particularly the sharp distinction between the French-speaking inhabitants of Lower Canada and the English-speaking inhabitants of Upper Canada and the Maritimes. John A. Macdonald, Canada's first prime minister, originally favoured a unitary system; later, after witnessing the carnage of the American Civil War, he supported a federal system to avoid similar violent conflicts.
The foundations of Canadian federalism were laid at the Quebec Conference of 1864. The Quebec Resolutions were a compromise between those who wanted sovereignty vested in the federal government and those who wanted it vested in the provinces. The compromise based the federation on the constitution of the British Empire, under which the legal sovereignty of imperial power was modified by the conventions of colonial responsible government, making colonies of settlement (such as those of British North America) self-governing in domestic affairs. A lengthy political process ensued before the Quebec Resolutions became the British North America Act of 1867. This process was dominated by John A. Macdonald, who joined British officials in attempting to make the federation more centralized than that envisaged by the Resolutions.
The resulting constitution was couched in more centralist terms than intended. As prime minister, Macdonald tried to exploit this discrepancy to impose his centralist ideal against chief opponent Oliver Mowat. In a series of political battles and court cases from 1872 to 1896,[a] Mowat reversed Macdonald's early victories and entrenched the co-ordinated sovereignty which he saw in the Quebec Resolutions. In 1888, Edward Blake summarized that view: "[It is] a federal as distinguished from a legislative union, but a union composed of several existing and continuing entities ... [The provinces are] not fractions of a unit but units of a multiple. The Dominion is the multiple and each province is a unit of that multiple ..." The accession of Wilfrid Laurier as prime minister inaugurated a new phase of constitutional consensus, marked by a more-egalitarian relationship between the jurisdictions. The federal government's quasi-imperial powers of disallowance and reservation, which Macdonald abused in his efforts to impose a centralised government, fell into disuse.
During World War I the federal Crown's power was extended with the introduction of income taxes and passage of the War Measures Act, the scope of which was determined by several court cases.[b] The constitution's restrictions of parliamentary power were affirmed in 1919 when, in the Initiatives and Referendums Reference, a Manitoba act providing for direct legislation by way of initiatives and referendums was ruled unconstitutional by the Privy Council on the grounds that a provincial viceroy (even one advised by responsible ministers) could not permit "the abrogation of any power which the Crown possesses through a person directly representing it".[nb 11] Social and technological changes also worked their way into constitutional authority; the Radio Reference found that federal jurisdiction extended to broadcasting,[nb 12] and the Aeronautics Reference found the same for aeronautics.[nb 13]
In 1926, the King–Byng Affair resulted in a constitutional crisis which was the impetus for changes in the relationship between the governor general and the prime minister. Although its key aspects were political in nature, its constitutional aspects continue to be debated. One result was the Balfour Declaration issued later that year, whose principles were eventually codified in the Statute of Westminster 1931. It, and the repeal of the Colonial Laws Validity Act 1865, gave the federal parliament the ability to make extraterritorial laws and abolish appeals to the Judicial Committee of the Privy Council. Criminal appeals were abolished in 1933,[na 1] but civil appeals continued until 1949.[na 2] The last Privy Council ruling of constitutional significance occurred in 1954, in Winner v. S.M.T. (Eastern) Limited.[nb 14] After that, the Supreme Court of Canada became the final court of appeal.
In 1937, Lieutenant Governor of Alberta John C. Bowen refused to give Royal Assent to three Legislative Assembly of Alberta bills. Two would have put the province's banks under the control of the provincial government; the third, the Accurate News and Information Act, would have forced newspapers to print government rebuttals to stories the provincial cabinet considered "inaccurate". All three bills were later declared unconstitutional by the Supreme Court of Canada in Reference re Alberta Statutes, which was upheld by the Judicial Committee of the Privy Council.[nb 15]
World War II's broader scope required passage of the National Resources Mobilization Act to supplement the powers in the War Measures Act to pursue the national war effort. The extent to which wartime federal power could expand was further clarified in the Chemicals Reference (which held that Orders in Council under the War Measures Act were equivalent to acts of parliament)[nb 16] and the Wartime Leasehold Regulations Reference, which held that wartime regulations could displace provincial jurisdiction for the duration of an emergency.[nb 17] Additional measures were required in order to secure control of the economy during that time. Jurisdiction over unemployment insurance was transferred permanently to the federal sphere;[na 3] the provinces surrendered their power to levy succession duties and personal and corporate income taxes for the duration of the war (and for one year afterwards) under the Wartime Tax Rental Agreement; and labour relations were centralized under federal control with the Wartime Labour Relations Regulations (lasting until 1948), in which the provinces ceded their jurisdiction over all labour issues.
Canada emerged from the war with better cooperation between the federal and provincial governments. This led to a welfare state, a government-funded health care system and the adoption of Keynesian economics. In 1951 section 94A was added to the British North America Act, 1867 to allow the Canadian parliament to provide for pensions.[na 4] This was extended in 1964 to allow supplementary benefits, including disability and survivors' benefits.[na 5] The era saw an increase in First Ministers' Conferences to resolve federal-provincial issues. The Supreme Court of Canada became the court of final appeal after the 1949 abolition of appeals to the Judicial Committee of the Privy Council and parliament received the power to amend the constitution, limited to non-provincial matters and subject to other constraints.[na 6]
1961 saw the last instance of a lieutenant governor reserving a bill passed by a provincial legislature. Frank Lindsay Bastedo, Lieutenant Governor of Saskatchewan, withheld Royal Assent and reserved Bill 5, An Act to Provide for the Alteration of Certain Mineral Contracts, to the Governor-in-Council for review. According to Bastedo, "[T]his is a very important bill affecting hundreds of mineral contracts. It raises implications which throw grave doubts of the legislation being in the public interest. There is grave doubt as to its validity". The act was upheld in an Order in Council by the federal government.[na 7]
In 1960, Parliament passed the Canadian Bill of Rights, the first codification of rights by the federal government. Prime Minister Lester Pearson obtained passage of major social programs, including universal health care (a federal-provincial cost-sharing program), the Canada Pension Plan and Canada Student Loans. Quebec's Quiet Revolution encouraged increased administrative decentralization in Canada, with Quebec often opting out of federal initiatives and instituting its own (such as the Quebec Pension Plan). The Quebec sovereignty movement led to the victory of the Parti Québécois in the 1976 Quebec election, prompting consideration of further loosening ties with the rest of Canada; this was rejected in a 1980 referendum.
During the premiership of Pierre Trudeau, the federal government became more centralist. Canada experienced "conflictual federalism" from 1970 to 1984, generating tensions with Quebec and other provinces. The National Energy Program and other petroleum disputes sparked bitterness in Alberta, Saskatchewan and Newfoundland toward the federal government.
Although Canada achieved full status as a sovereign nation in the Statute of Westminster 1931, there was no consensus about how to amend the constitution; attempts such as the 1965 Fulton–Favreau formula and the 1971 Victoria Charter failed to receive unanimous approval from both levels of government. When negotiations with the provinces again stalled in 1982, Trudeau threatened to take the case for patriation to the British parliament "[without] bothering to ask one premier". According to the federal cabinet and Crown counsel, if the British Crown (in council, parliament and on the bench) exercised sovereignty over Canada, it would do so only at the request of the federal ministers.
Manitoba, Newfoundland and Quebec posed reference questions to their respective courts of appeal, in which five other provinces intervened in support. In his ruling, Justice Joseph O'Sullivan of the Manitoba Court of Appeal held that the federal government's position was incorrect; the constitutionally-entrenched principle of responsible government meant that "Canada had not one responsible government but eleven." Officials in the United Kingdom indicated that the British parliament was under no obligation to fulfill a request for legal changes desired by Trudeau, particularly if Canadian convention was not followed. All rulings were appealed to the Supreme Court of Canada. In a decision later known as the Patriation Reference, the court ruled that such a convention existed but did not prevent the federal parliament from attempting to amend the constitution without provincial consent and it was not the role of the courts to enforce constitutional conventions.
The Canadian parliament asked the British parliament to approve the Constitution Act, 1982, which it did in passage of the Canada Act 1982. This resulted in the introduction of the Canadian Charter of Rights and Freedoms, the transfer of constitutional amendment to a Canadian framework and the addition of section 92A to the Constitution Act, 1867, giving the provinces more jurisdiction over their natural resources.
The Progressive Conservative Party under Joe Clark and Brian Mulroney favoured the devolution of power to the provinces, culminating in the failed Meech Lake and Charlottetown accords. After merging in 2003 with the heavily devolutionist Canadian Alliance, the Conservative Party under Stephen Harper has maintained the same stance. When Harper was appointed prime minister in 2006, the frequency of First Ministers' conferences declined significantly; inter-provincial cooperation increased with meetings of the Council of the Federation, established by the provincial premiers, in 2003.
After the 1995 Quebec referendum on sovereignty, Prime Minister Jean Chrétien limited the ability of the federal government to spend money in areas under provincial jurisdiction. In 1999 the federal government and all provincial governments except Quebec's agreed to the Social Union Framework Agreement, which promoted common standards for social programmes across Canada. Former Prime Minister Paul Martin used the phrase "asymmetrical federalism" to describe this arrangement. The Supreme Court upholds the concepts of flexible federalism (where jurisdictions overlap) and cooperative federalism (where they can favourably interact), as noted in Reference re Securities Act.
As a federal monarchy, the Canadian Crown is present in all jurisdictions in the country,[nb 18] with the headship of state a part of all equally. Sovereignty is conveyed not by the governor general or federal parliament, but through the Crown itself as a part of the executive, legislative and judicial branches of Canada's 11 (one federal and 10 provincial) legal jurisdictions; linking the governments into a federal state, the Crown is "divided" into 11 "crowns". The fathers of the Canadian Confederation viewed the constitutional monarchy as a bulwark against potential fracturing of the Canadian federation, and the Crown remains central to Canadian federalism.
Distribution of legislative powers
Division of powers
The federal-provincial distribution of legislative powers (also known as the division of powers) defines the scope of the federal and provincial legislatures. These powers have been identified as exclusive to the federal or provincial jurisdictions or shared by both. Section 91 of the Constitution Act, 1867, lists the major powers of the federal parliament, based on the concepts of peace, order, and good government, while Section 92 of the Constitution Act, 1867 enumerates those of the provincial governments.
The act places remedial legislation on education rights, uniform laws relating to property and civil rights (in all provinces other than Quebec), the creation of a general court of appeal and other courts "for the better Administration of the Laws of Canada," and the implementation of obligations arising from foreign treaties all under the purview of the federal legislature in Section 91. Some aspects of the Supreme Court of Canada were elevated to constitutional status in 1982.[nb 19]
The act lists the powers of the provincial parliaments (subject to the federal parliament's authority to regulate inter-provincial movement) in Section 92. These powers include the exploration, development and export to other provinces of non-renewable natural resources, forestry resources and electrical energy. Education is under provincial jurisdiction, subject to the rights of separate schools.
Old-age pensions, agriculture and immigration are shared between federal and provincial jurisdictions, but one level prevails over the other in each case: under section 94A, federal pension legislation will not displace provincial laws, while under section 95 federal legislation on agriculture and immigration prevails over provincial laws.
The Constitution Act, 1871 allowed parliament to govern any territories not forming part of any province, and the Statute of Westminster 1931, gave parliament the ability to pass extraterritorial laws.
To rationalize how each jurisdiction may use its authority, certain doctrines have been devised by the courts: pith and substance (characterizing a law by its dominant purpose and effect), including the nature of any ancillary powers and the colourability of legislation (a law that appears in form to fall within the enacting body's jurisdiction but in substance deals with a matter outside it); double aspect; paramountcy; inter-jurisdictional immunity; the living tree; the purposive approach; and Charter compliance (most notably through the Oakes test). Additionally, there is the implied Bill of Rights.
Jurisdiction over public property
Jurisdiction over Crown property is divided between the provincial legislatures and the federal parliament, with the key provisions Sections 108, 109, and 117 of the Constitution Act, 1867. Public works are the property of the federal Crown, and natural resources are within the purview of the provinces.[nb 20] Title to such property is not vested in one jurisdiction or another, however, since the Canadian Crown is indivisible. Section 109 has been given a particularly-broad meaning; provincial legislation regulating labour used to harvest and the disposal of natural resources does not interfere with federal trade and commerce power,[nb 21][nb 22] and royalties have been held to cover the law relating to escheats.[nb 23] Canada cannot unilaterally create Indian reserves, since the transfer of such lands requires federal and provincial approval by Order in Council (although discussion exists about whether this is sound jurisprudence).[nb 24]
The provincial power to manage Crown land did not initially extend to Manitoba, Alberta and Saskatchewan when they were created from part of the Northwest Territories, since the land was vested in the federal Crown. It was vacated on some land (the Railway Belt and the Peace River Block) by British Columbia when it entered the confederation. Title to this land was not vested in those provinces until the passage of the Natural Resources Acts in 1930. The power is not absolute, however; provincial Crown land may be regulated or expropriated for federal purposes.[nb 25][nb 26] The administration of crown land is also subject to the rights of First Nations[nb 27] (since they are a relevant interest),[nb 28] and provincial power "is burdened by the Crown obligations toward the Aboriginal people in question".[nb 29] Debate exists about whether such burdens apply in the same manner in the Western provinces under the Natural Resources Acts.
Management of offshore resources is complex; although management of the beds of internal waters is vested in the provincial Crowns, management of beds of territorial seas is vested in the federal Crown (with management of the continental shelf and the exclusive economic zone).[nb 30][nb 31] The beds and islands of the waters between Vancouver Island and mainland British Columbia have been declared the property of the Crown in right of British Columbia.[nb 32] Federal-provincial management agreements have been implemented concerning offshore petroleum resources in the areas around Newfoundland and Labrador and Nova Scotia.[na 8][na 9]
Taxation and spending
Taxation is a power of both the federal and provincial legislatures; provincial taxation is more restricted, in accordance with sections 92(2) and 92(9) of the Constitution Act, 1867. In Allard Contractors Ltd. v. Coquitlam (District), it was held that provincial legislatures may levy an indirect fee as part of a valid regulatory scheme.[nb 33] Gérard La Forest observed in obiter dicta that section 92(9) (together with the provincial powers over property and civil rights and matters of a local or private nature) allows for the levying of license fees even if they constitute indirect taxation.
Parliament has the power to spend money on public debt and property. Although the Supreme Court of Canada has not ruled directly about constitutional limits on federal spending power,[nb 34] parliament can transfer payments to the provinces.[c] This arises from the 1937 decision of the Judicial Committee of the Privy Council on the Unemployment Insurance Reference, where Lord Atkin observed: "Assuming the Dominion has collected by means of taxation a fund, it by no means follows that any legislation which disposes of it is necessarily within Dominion competence ... If on the true view of the legislation it is found that in reality in pith and substance the legislation invades civil rights within the Province, or in respect of other classes of subjects otherwise encroaches upon the provincial field, the legislation will be invalid".[nb 36] In Re Canada Assistance Plan, Justice Sopinka held that the withholding of federal money previously granted to fund a matter within provincial jurisdiction does not amount to the regulation of that matter.[nb 37]
Federal legislative power
Much of the distribution of powers has been ambiguous, leading to disputes which have been decided by the Judicial Committee of the Privy Council and (after 1949) the Supreme Court of Canada. The nature of the Canadian constitution was described by the Privy Council in 1913 as not truly federal (unlike the United States and Australia); although the British North America Act, 1867, states in its preamble that the colonies had expressed "their desire to be federally united into one Dominion", "the natural and literal interpretation of the word [federal] confines its application to cases in which these States, while agreeing on a measure of delegation, yet in the main continue to preserve their original Constitutions". The Privy Council determined that the Fathers of Confederation desired a "general Government charged with matters of common interest, and new and merely local Governments for the Provinces". Matters other than those listed in the British North America Act, 1867, as the responsibility of the federal or provincial parliaments fell to the federal legislature (the reverse of the arrangement between the federal and state congresses in the United States).[nb 38]
National and provincial concerns
The preamble of Section 91 of the Constitution Act, 1867 states: "It shall be lawful for the Queen ... to make laws for the Peace, Order, and good Government of Canada, in relation to all Matters not coming within the Classes of Subjects by this Act assigned exclusively to the Legislatures of the Provinces". In addition to assigning powers not stated elsewhere (which has been narrowly interpreted), this has led to the creation of the national-emergency and national-concern doctrines.
The national-emergency doctrine was described by Mr Justice Beetz in Reference re Anti-Inflation Act.[nb 39][d][e] The national-concern doctrine is governed by the principles stated by Mr Justice Le Dain in R. v. Crown Zellerbach Canada Ltd..[nb 41][f]
The federal government is partially limited by powers assigned to the provincial legislatures; for example, the Canadian constitution created broad provincial jurisdiction over direct taxation and property and civil rights. Many disputes between the two levels of government revolve around conflicting interpretations of the meaning of these powers.
By 1896, the Judicial Committee of the Privy Council arrived at a method of interpretation, known as the "four-departments doctrine", in which jurisdiction over a matter is determined in the following order:
- Does it fall under Section 92, ss. 1–15?
- Can it be characterized as falling under Section 91, ss. 1–29?
- Is it of a general nature, bringing it within Section 91's residuary clause?
- If not, it falls under Section 92, ss. 16.
Although the Statute of Westminster 1931 declared that the Parliament of Canada had extraterritorial jurisdiction, the provincial legislatures did not achieve similar status. According to s. 92, "In each Province the Legislature may exclusively make Laws ...".
A provincial law that affects the rights of individuals outside the province raises questions of extraterritorial reach. In The Queen (Man.) v. Air Canada, it was held that the s. 92(2) power providing for "direct taxation within the province" does not extend to taxing sales on flights passing over (or through) a province, but the question of how far provincial jurisdiction can extend into a province's airspace was left undecided.[nb 44] However, the property and civil rights power does allow for determining rules with respect to conflict of laws in civil matters.[na 10]
Federal jurisdiction arises in several circumstances:
- Under the national-emergency doctrine for temporary legislation (the War Measures Act)
- Under the national-concern doctrine for:
- Matters not existing at confederation (radio and television)
- Matters of a local or private nature in a province which have become matters of national concern, such as what can accrue to the regulation of trade and commerce
- Matters where the grant is exclusive under Section 91 (criminal law)
- Matters where authority may be assumed (as with works for the general advantage of Canada)
The gap approach, employed sparingly, identifies areas of jurisdiction arising from oversights by the drafters of the constitution; for example, federal jurisdiction to incorporate companies is inferred from the power provinces have under Section 92 for "The Incorporation of Companies with Provincial Objects".
Uniformity of federal law
Section 129 of the Constitution Act, 1867 provided for laws in effect at the time of Confederation to continue until repealed or altered by the appropriate legislative authority. Similar provisions were included in the terms of union of other territories that were subsequently incorporated into Canada.
The uniformity of laws in some areas of federal jurisdiction was significantly delayed. Offences under the Criminal Code were not made uniform until 1892, when common-law criminal offences were abolished. Divorce law was not made uniform until 1968, Canadian maritime law not until 1971 and marriage law not until 2005. Provisions of the Civil Code of Lower Canada, adopted in 1865 by the former Province of Canada, affecting federal jurisdiction continued to be in force in Quebec (if they had not been displaced by other federal Acts) until their repeal on 15 December 2004.[na 11]
Interplay of jurisdictions
According to the Supreme Court of Canada, "our Constitution is based on an allocation of exclusive powers to both levels of government, not concurrent powers, although these powers are bound to interact in the realities of the life of our Constitution."[nb 45] Chief Justice Dickson observed the complexity of that interaction:
The history of Canadian constitutional law has been to allow for a fair amount of interplay and indeed overlap between federal and provincial powers. It is true that doctrines like interjurisdictional and Crown immunity and concepts like "watertight compartments" qualify the extent of that interplay. But it must be recognized that these doctrines and concepts have not been the dominant tide of constitutional doctrines; rather they have been an undertow against the strong pull of pith and substance, the aspect doctrine and, in recent years, a very restrained approach to concurrency and paramountcy issues.[nb 46]
Notable examples include:
- Although the provinces have the power to create criminal courts, only the federal government has the power to determine criminal procedure. Criminal procedure includes prosecution, and federal law can determine the extent of federal and provincial involvement.[nb 47] The provinces' power under s. 92(14) over the administration of justice includes the organization of courts and police forces, which determines the level of law enforcement. The Royal Canadian Mounted Police, as the federal police, contracts for the provision of many provincial and municipal police forces.
- Although the federal power to regulate fisheries does not override provincial authority to require a permit for catching fish in waters under provincial control,[nb 48] the regulation of recreational fisheries has been partially delegated under the Fisheries Act[na 12] to the provinces for specified species in specific provinces.[na 13]
- Works affecting navigation are subject to federal approval under the Navigable Waters Protection Act and provincial approval, since the beds of navigable waters are generally reserved to the Crown in right of the province.
- Although federal jurisdiction over broadcasting and most telecommunications is exclusive, the provinces may regulate advertising[nb 49] and cable installation (above or underground).[nb 50]
- Although the concept of marriage is under federal jurisdiction, the solemnization of marriages is controlled by the provinces.
- The provincial power to regulate security interests under the property and civil-rights power will be displaced by security interests created under a federal head of power – most notably under the banking power – but only to the extent that federal law has covered the field.[nb 51]
- Laws arising from the property and civil-rights power will be used to complement the interpretation of federal legislation where the federal Act has not provided otherwise, but federal power cannot be used to create rules of private law in areas outside its jurisdiction.[nb 52][na 14]
- In insolvency law, provincial statutes operate by federal incorporation into the Bankruptcy and Insolvency Act and the Companies' Creditors Arrangement Act. However, where a stay under federal law has been lifted in order to allow proceedings to take place, a province can impose a moratorium on proceedings falling under provincial law.[nb 53]
Delegation and cooperation
In 1899, Lord Watson asserted during the argument in CPR v Bonsecours[nb 54] that neither the federal parliament nor the provincial legislatures could give legislative authority to the other level. Subsequent attempts to dovetail federal and provincial legislation to achieve certain ends met with difficulty, such as an attempt by Saskatchewan to ensure enforcement of a federal statute[na 15] by enacting a complementary Act[na 16] declaring that the federal Act would continue in force under provincial authority if it was ruled ultra vires. The Saskatchewan Court of Appeal ruled a federal and provincial Act ultra vires, voiding both as an attempt by the province to vest powers in parliament unauthorized by the BNA Act.[nb 55]
The matter was addressed in 1950 by the Supreme Court, which held ultra vires a proposed Nova Scotia Act which would have authorized the inter-delegation of legislative and taxation authority between Parliament and the Nova Scotia legislature.[nb 56] In that decision, Justice Rand explained the distinction between delegation to a subordinate body and that to a legislative body.[h]
Later attempts to achieve federal-provincial coordination have succeeded with other types of legislative schemes involving:
- Conditional legislation (such as a federal Act, providing that it will not apply where a provincial Act has been enacted in a given matter). As Justice Rand declared in 1959, "That Parliament can so limit the operation of its own legislation and that it may do so upon any such event or condition is not open to serious debate".[nb 57]
- Incorporation by reference or adoption (for example, a federal regulation prohibiting vehicles from operating on a federal highway except "in accordance with the laws of the province and the municipality in which the highway is situated")[na 17]
- Joint schemes with administrative cooperation, such as the administrative authority granted by federal law to provincial transport boards to license extraprovincial transport[na 18]
Power to implement treaties
To understand how treaties can enter Canadian law, three significant cases must be considered: the Aeronautics Reference, the Radio Reference and the Labour Conventions Reference.[nb 58] Although the reasoning behind the judgments is complex, it is considered to break down as follows:
- Aeronautics were held by the Aeronautics Reference to be within the authority of the Parliament of Canada under s. 132 governing treaties entered into by the British Empire. After that treaty was replaced, it was held in Johannesson v. West St. Paul that in accordance with Ontario v. Canada Temperance Federation the field continued to be within federal jurisdiction under the power relating to peace, order and good government.
- Although an international agreement governing broadcasting was not a treaty of the British Empire, the Radio Reference held that it fell within federal jurisdiction; Canada's obligations under its agreements in this field required it to pass legislation applying to all Canadian residents, and the matter could be seen as analogous to telegraphs (already in the federal sphere).
- The Labour Conventions Reference dealt with labour relations (clearly within provincial jurisdiction); since the conventions were not treaties of the British Empire and no plausible argument could be made for the field attaining a national dimension or becoming of national concern, the Canadian Parliament was unable to exercise new legislative authority.
Although the Statute of Westminster 1931 had made Canada fully independent in governing its foreign affairs, the Judicial Committee of the Privy Council held that s. 132 did not evolve to take that into account. As noted by Lord Atkin at the end of the judgment,
It must not be thought that the result of this decision is that Canada is incompetent to legislate in performance of treaty obligations. In totality of legislative powers, Dominion and Provincial together, she is fully equipped. But the legislative powers remain distributed and if in the exercise of her new functions derived from her new international status she incurs obligations they must, so far as legislation be concerned when they deal with provincial classes of subjects, be dealt with by the totality of powers, in other words by co-operation between the Dominion and the Provinces. While the ship of state now sails on larger ventures and into foreign waters she still retains the watertight compartments which are an essential part of her original structure.
This case left undecided the extent of federal power to negotiate, sign and ratify treaties dealing with areas under provincial jurisdiction, and has generated extensive debate about complications introduced in implementing Canada's subsequent international obligations; the Supreme Court of Canada has indicated in several dicta that it might revisit the issue in an appropriate case.
Limits on legislative power
Outside the questions of ultra vires and compliance with the Canadian Charter of Rights and Freedoms, there are absolute limits on what the Parliament of Canada and the provincial legislatures can legislate. According to the Constitution Act, 1867:
- S. 96 has been construed to hold that neither the provincial legislatures nor Parliament can enact legislation removing part of the inherent jurisdiction of the superior courts.[nb 59]
- S. 121 states, "All Articles of the Growth, Produce, or Manufacture of any one of the Provinces shall, from and after the Union, be admitted free into each of the other Provinces". This amounts to a prohibition of inter-provincial tariffs.
- S. 125 states, "No Lands or Property belonging to Canada or any Province shall be liable to Taxation".
- Under s. 129, limits have been placed on the ability of the legislatures of Ontario and Quebec to amend or repeal Acts of the former Province of Canada. Where such an act created a body corporate operating in the former Province, the Judicial Committee of the Privy Council held that such bodies cannot have "provincial objects" and only the Parliament of Canada had power to deal with such acts.[nb 60] It has been held that this restriction exists for any Act applying equally to Upper and Lower Canada,[i] which became problematic when the Civil Code of Lower Canada was replaced by the Civil Code of Quebec.
While the Parliament of Canada has the ability to bind the Crown in right of Canada or of any province, the converse is not true for the provincial legislatures, as "[p]rovincial legislation cannot proprio vigore [ie, of its own force] take away or abridge any privilege of the Crown in right of the Dominion."[nb 62]
- The federal regulation of trade and commerce was circumscribed by the provincial property and civil rights power as a result of Citizen's Insurance Co. v. Parsons,[nb 1] disallowance and reservation of provincial statutes was curtailed as a political consequence of McLaren v. Caldwell,[nb 2] and the double aspect doctrine was introduced into Canadian jurisprudence via Hodge v. The Queen.[nb 3] Not all rulings, however, went in the provinces' favour. Russell v. The Queen established the right of the federal parliament to make laws applicable in the provinces if those laws relate to a concern that exists in all jurisdictions of the country[nb 4] and in Royal Bank of Canada v. The King the provinces were held not to possess the power to affect extraprovincial contract rights.[nb 5] Pith and substance, used to determine under which crown a given piece of legislation falls, was introduced in Cushing v. Dupuy.[nb 6]
- The Board of Commerce case affirmed that only a national emergency warranted the curtailment of citizens' rights by the federal parliament,[nb 7] subsequently reaffirmed by Fort Frances Pulp and Paper v. Manitoba Free Press,[nb 8] and was held to even include amending Acts of Parliament through regulations.[nb 9] However, Toronto Electric Commissioners v. Snider,[nb 10] held that such emergencies could not be used to unreasonably intrude on the provinces' property and civil rights power.
- The Alberta Court of Appeal in Winterhaven Stables Limited v. Canada (Attorney General) characterized that as possessing the following nature: "[The federal parliament] is entitled to spend the money that it raises through proper exercise of its taxing power in the manner that it chooses to authorize. It can impose conditions on such disposition so long as the conditions do not amount in fact to a regulation or control of a matter outside federal authority. The federal contributions are now made in such a way that they do not control or regulate provincial use of them. As well there are opting out arrangements that are available to those provinces who choose not to participate in certain shared-cost programs."[nb 35]
- "But if one looks at the practical effects of the exercise of the emergency power, one must conclude that it operates so as to give to Parliament for all purposes necessary to deal with the emergency, concurrent and paramount jurisdiction over matters which would normally fall within exclusive provincial jurisdiction. To that extent, the exercise of that power amounts to a temporary pro tanto amendment of a federal Constitution by the unilateral action of Parliament. The legitimacy of that power is derived from the Constitution: when the security and the continuation of the Constitution and of the nation are at stake, the kind of power commensurate with the situation 'is only to be found in that part of the Constitution which establishes power in the State as a whole'."[nb 40]
- "The extraordinary nature and the constitutional features of the emergency power of Parliament dictate the manner and form in which it should be invoked and exercised. It should not be an ordinary manner and form. At the very least, it cannot be a manner and form which admits of the slightest degree of ambiguity to be resolved by interpretation. In cases where the existence of an emergency may be a matter of controversy, it is imperative that Parliament should not have recourse to its emergency power except in the most explicit terms indicating that it is acting on the basis of that power. Parliament cannot enter the normally forbidden area of provincial jurisdiction unless it gives an unmistakable signal that it is acting pursuant to its extraordinary power. Such a signal is not conclusive to support the legitimacy of the action of Parliament but its absence is fatal."
- The national concern doctrine is separate and distinct from the national emergency doctrine of the peace, order and good government power, which is chiefly distinguishable by the fact that it provides a constitutional basis for what is necessarily legislation of a temporary nature;
- The national concern doctrine applies to both new matters which did not exist at Confederation and to matters which, although originally matters of a local or private nature in a province, have since, in the absence of national emergency, become matters of national concern;
- For a matter to qualify as a matter of national concern in either sense it must have a singleness, distinctiveness and indivisibility that clearly distinguishes it from matters of provincial concern and a scale of impact on provincial jurisdiction that is reconcilable with the fundamental distribution of legislative power under the Constitution;
- In determining whether a matter has attained the required degree of singleness, distinctiveness and indivisibility that clearly distinguishes it from matters of provincial concern it is relevant to consider what would be the effect on extra‑provincial interests of a provincial failure to deal effectively with the control or regulation of the intra‑provincial aspects of the matter.
- Aeronautics Reference at p. 8:
  1. The legislation of the Parliament of the Dominion, so long as it strictly relates to subjects of legislation expressly enumerated in section 91, is of paramount authority, even if it trenches upon matters assigned to the Provincial Legislature by section 92.
  2. The general power of legislation conferred upon the Parliament of the Dominion by section 91 of the Act in supplement of the power to legislate upon the subjects expressly enumerated must be strictly confined to such matters as are unquestionably of national interest and importance, and must not trench on any of the subjects enumerated in section 92, as within the scope of Provincial legislation, unless these matters have attained such dimensions as to affect the body politic of the Dominion.
  3. It is within the competence of the Dominion Parliament to provide for matters which, though otherwise within the legislative competence of the Provincial Legislature, are necessarily incidental to effective legislation by the Parliament of the Dominion upon a subject of legislation expressly enumerated in section 91.
  4. There can be a domain in which Provincial and Dominion legislation may overlap, in which case neither legislation will be ultra vires if the field is clear, but if the field is not clear and the two legislations meet, the Dominion legislation must prevail.
- "In the generality of actual delegation to its own agencies, Parliament, recognizing the need of the legislation, lays down the broad scheme and indicates the principles, purposes and scope of the subsidiary details to be supplied by the delegate: under the mode of enactment now being considered, the real and substantial analysis and weighing of the political considerations which would decide the actual provisions adopted, would be given by persons chosen to represent local interests. Since neither is a creature nor a subordinate body of the other, the question is not only or chiefly whether one can delegate, but whether the other can accept. Delegation implies subordination and in Hodge v. The Queen, the following observations ... appear: Within these limits of subjects and area the local legislature is supreme, and has the same authority as the Imperial Parliament, or the parliament of the Dominion, would have had under like circumstances to confide to a municipal institution or body of its own creation authority to make by-laws or resolutions as to subjects specified in the enactment, and with the object of carrying the enactment into operation and effect.... It was argued at the bar that a legislature committing important regulations to agents or delegates effaces itself. That is not so. It retains its powers intact, and can, whenever it pleases, destroy the agency it has created and set up another, or take the matter directly into his own hands. How far it shall seek the aid of subordinate agencies, and how long it shall continue them, are matters for each legislature, and not for Courts of Law, to decide."
- Ex parte O'Neill, RJQ 24 SC 304, where it was held that the Legislative Assembly of Quebec was unable to repeal the Temperance Act, 1864,[na 19] but it could pass a concurrent statute for regulating liquor traffic within the Province. However, it has also been held that the Parliament of Canada could not repeal that Act with respect only to Ontario.[nb 61]
- Banting, Keith G.; Simeon, Richard (1983). And no one cheered: federalism, democracy, and the Constitution Act. Toronto: Methuen. pp. 14, 16. ISBN 0-458-95950-2.
- "Biography – MACDONALD, Sir JOHN ALEXANDER – Volume XII (1891-1900) – Dictionary of Canadian Biography". www.biographi.ca. Retrieved 1 February 2019.
- "John A. Macdonald on the Federal System". The Historica-Dominion Institute. Archived from the original on 27 June 2013. Retrieved 24 December 2012., quoting Parliamentary Debates on the Subject of the Confederation of the British North American Provinces—3rd Session, 8th Provincial Parliament of Canada. Quebec: Hunter, Rose & Co. 1865. pp. 29–45.
- Romney 1999, pp. 100–102.
- Lamot 1998, p. 125.
- Romney, Paul (1986). Mr Attorney: The Attorney-General for Ontario in court, cabinet and legislature, 1791-1899. Toronto: University of Toronto Press. pp. 240–281.
- Edward Blake (1888). The St. Catharine's Milling and Lumber Company v. the Queen: Argument of Mr. Blake, of counsel for Ontario. Toronto: Press of the Budget. p. 6.
- Forsey, Eugene (1 October 2010), Forsey, Helen (ed.), "As David Johnson Enters Rideau Hall...", The Monitor, Ottawa: Canadian Centre for Policy Alternatives, retrieved 8 August 2012
- Bélanger, Claude. "Canadian federalism, the Tax Rental Agreements of the period of 1941–1962 and fiscal federalism from 1962 to 1977". Retrieved 20 January 2012.
- "Ontario Labour Relations Board: History". Retrieved 20 January 2012.
- Mallory, J. R. (1961). "The Lieutenant-Governor's Discretionary Powers: The Reservation of Bill 56". Canadian Journal of Economics and Political Science. 27 (4): 518–522. doi:10.2307/139438. JSTOR 139438.
- Dyck 2012, pp. 416–420
- Romney 1999, pp. 273–274
- Heard, Andrew (1990). "Canadian Independence". Vancouver: Simon Fraser University. Retrieved 25 August 2010.
- Noël, Alain (November 1998). "The Three Social Unions" (PDF). Policy Options (in French). Institute for Research on Public Policy. 19 (9): 26–29. Archived from the original (PDF) on 21 October 2007. Retrieved 22 August 2012.
- "Flexible federalism". The Free Library. Retrieved 19 January 2012.
- Douglas Brown (July 2005). "Who's afraid of Asymmetrical Federalism?". Optimum Online. 35 (2): 2 et seq. Retrieved 19 January 2012.
- Hunter, Christopher. "Cooperative Federalism & The Securities Act Reference: A Rocky Road". The Court. Archived from the original on 25 March 2012. Retrieved 19 January 2012.
- Roberts, Edward (2009). "Ensuring Constitutional Wisdom During Unconventional Times" (PDF). Canadian Parliamentary Review. Ottawa: Commonwealth Parliamentary Association. 23 (1): 13. Archived from the original (PDF) on 26 April 2012. Retrieved 21 May 2009.
- MacLeod, Kevin S. (2012), A Crown of Maples (PDF) (2012 ed.), Ottawa: Department of Canadian Heritage, p. 17, ISBN 978-1-100-20079-8, archived from the original (PDF) on 4 February 2016, retrieved 23 August 2012
- Jackson, Michael D. (2003). "Golden Jubilee and Provincial Crown" (PDF). Canadian Monarchist News. Toronto: Monarchist League of Canada. 7 (3): 6. Archived from the original (PDF) on 11 June 2015. Retrieved 21 May 2009.
- Smith, David E. (1995). The Invisible Crown. Toronto: University of Toronto Press. p. 8. ISBN 0-8020-7793-5.
- Smith, David E. (10 June 2010), The Crown and the Constitution: Sustaining Democracy? (PDF), Kingston: Queen's University, p. 6, archived from the original (PDF) on 17 October 2013, retrieved 18 May 2010
- Romney 1999, p. 274.
- Cabinet Secretary and Clerk of the Executive Council (April 2004), Executive Government Processes and Procedures in Saskatchewan: A Procedures Manual (PDF), Regina: Queen's Printer for Saskatchewan, p. 10, retrieved 30 July 2009
- Bowman, Laura. "Constitutional "Property" and Reserve Creation: Seybold Revisited" (PDF). Manitoba Law Journal. University of Manitoba, Robson Hall Faculty of Law. 32 (1): 1–25. Archived from the original (PDF) on 4 March 2016. Retrieved 17 September 2013.
- Hogg 2007, par. 29.2.
- Lambrecht, Kirk (30 July 2014). "The Importance of Location and Context to the Future Application of the Grassy Narrows Decision of the Supreme Court of Canada" (PDF). ABlawg.ca.
- Fisheries and Oceans Canada. "Canada's Ocean Estate – A Description of Canada's Maritime Zones". Queen's Printer for Canada. Archived from the original on 14 June 2008. Retrieved 4 September 2012.
- La Forest, G.V. (1981). The Allocation of Taxing Power Under the Canadian Constitution (2nd ed.). Toronto: Canadian Tax Foundation. p. 159. ISBN 0-88808006-9.
- Richer, Karine. "RB 07-36E: The Federal Spending Power". Queen's Printer for Canada. Retrieved 16 June 2015.
- for example, Claude Bélanger. "Theories and Interpretation of the Constitution Act, 1867". Marianopolis College. Retrieved 9 October 2012.
- Criminal Code, 1892, SC 1892, c 29
- "Backgrounder: A Third Bill to Harmonize Federal Law with the Civil Law of Quebec". Department of Justice (Canada). Archived from the original on 23 March 2012. Retrieved 8 August 2012.
- "NWPA Regulatory Framework". Transport Canada. Retrieved 22 August 2012.
- "Policy PL 2.02.02 – Ownership determinations – Beds of navigable waters" (PDF). Ministry of Natural Resources of Ontario. 26 February 2007. Retrieved 22 August 2012.
- "Procedure PL 2.02.02 – Ownership determinations – Beds of navigable waters" (PDF). Ministry of Natural Resources of Ontario. 26 February 2007. Retrieved 22 August 2012.
- "Dams, Water Crossings and Channelizations – The Lakes and Rivers Improvement Act". Ministry of Natural Resources of Ontario. Retrieved 22 August 2012.
- "Canadian Municipalities and the Regulation of Radio Antennae and their Support Structures — III. An Analysis of Constitutional Jurisdiction in Relation to Radiocommunication". Industry Canada. 6 December 2004. Retrieved 9 October 2012.
- La Forest 1975, p. 134.
- La Forest 1975, p. 135.
- La Forest 1975, pp. 135–137.
- La Forest 1975, pp. 137–143.
- Cyr, Hugo (2009). "I – The Labour Conventions Case". Canadian Federalism and Treaty Powers: Organic Constitutionalism at Work. Brussels: P.I.E. Peter Lang SA. ISBN 978-90-5201-453-1. Retrieved 29 August 2012.
- Zagros Madjd-Sadjadi, Winston-Salem State University. "Subnational Sabotage or National Paramountcy? Examining the Dynamics of Subnational Acceptance of International Agreements" (PDF). Southern Journal of Canadian Studies, vol. 2, 1. Retrieved 12 January 2012.
- H. Scott Fairley (1999). "External Affairs and the Canadian Constitution". In Yves Le Bouthillier; Donald M. McRae; Donat Pharand (eds.). Selected Papers in International Law: Contribution of the Canadian Council on International Law. London: Kluwer International. pp. 79–91. ISBN 90-411-9764-8.
- "Canadian Interpretation and Construction of Maritime Conventions". Archived from the original on 9 September 2014. Retrieved 23 September 2014.
- Lefroy, Augustus Henry Frazer (1918). A short treatise on Canadian constitutional law. Toronto: The Carswell Company. p. 189.
- Lefroy, Augustus Henry Frazer (1913). Canada's Federal System. Toronto: The Carswell Company. pp. 162–163.
- Leclair, Jean (1999). "Thoughts on the Constitutional Problems Raised by the Repeal of the Civil Code of Lower Canada". The Harmonization of Federal Legislation with the Civil Law of the Province of Quebec and Canadian Bijuralism. Ottawa: Department of Justice. pp. 347–394.
Acts and other instruments
- Criminal Code Amendment Act, S.C. 1932–33, c. 53, s. 17
- Supreme Court Amendment Act, S.C. 1949 (2nd. session), c. 37, s. 3
- British North America Act, 1940, 3–4 Geo. VI, c. 36 (U.K.)
- British North America Act, 1951, 14–15 Geo. VI, c. 32 (U.K.)
- British North America Act, 1964, 12–13 Eliz. II, c. 73 (U.K.)
- British North America (No. 2) Act, 1949, 13 Geo. VI, c. 81 (U.K.)
- "Order in Council P.C. 1961-675", Canada Gazette, 13 May 1961, retrieved 19 August 2012
- "Canada-Newfoundland Atlantic Accord Implementation Act (S.C. 1987, c. 3)". Retrieved 4 September 2012.
- "Canada-Nova Scotia Offshore Petroleum Resources Accord Implementation Act (S.C. 1988, c. 28)". Retrieved 4 September 2012.
- for example, "Court Jurisdiction and Proceedings Transfer Act, SBC 2003, c. 28". Queen's Printer of British Columbia. Retrieved 5 September 2012.
- "Federal Law-Civil Law Harmonization Act, No. 1, S.C. 2001, c. 4, s. 3". Retrieved 8 August 2012.
- "Fisheries Act (R.S.C., 1985, c. F-14)". Retrieved 4 September 2012.
- "Recreational Fishing Regulations". Fisheries and Oceans Canada. 16 November 2007. Retrieved 4 September 2012.
- "Interpretation Act (R.S.C., 1985, c. I-21)". 26 February 2015. codifies the general rule at s. 8.1.
- Live Stock and Live Stock Products Act, R.S.C. 1927, c.120
- Live Stock and Live Stock Products Act, R.S.S. 1930, c. 151
- Government Property Traffic Regulations, C.R.C. 1977, c. 887, s. 6(1)
- Motor Vehicle Transport Act, R.S.C. 1985, c. 29 (3rd Supp.), s. 7
- An Act to amend the laws in force respecting the Sale of Intoxicating Liquors and the issue of Licenses therefor, and otherwise for repression of abuses resulting from such sale, S.C. 1864, c. 18
- The Citizens Insurance Company of Canada and The Queen Insurance Company v Parsons UKPC 49, (1881) 7 A.C. 96 (26 November 1881), Judicial Committee of the Privy Council (on appeal from Canada)
- Caldwell and another v McLaren UKPC 21, (1884) 9 A.C. 392 (7 April 1884), Judicial Committee of the Privy Council (on appeal from Canada)
- Hodge v The Queen (Canada) UKPC 59 at pp. 9–10, 9 App Cas 117 (15 December 1883), Judicial Committee of the Privy Council (on appeal from Ontario)
- Charles Russell v The Queen (New Brunswick) UKPC 33 at pp. 17–18, 7 App Cas 829, 8 CRAC 502 (23 June 1882), Judicial Committee of the Privy Council (on appeal from Canada)
- The Royal Bank of Canada and others v The King and another UKPC 1a, A.C. 212 (31 January 1913), Judicial Committee of the Privy Council (on appeal from Alberta)
- Cushing v Dupuy UKPC 22 at pp. 3–4, (1880) 5 AC 409 (15 April 1880), Judicial Committee of the Privy Council (on appeal from Quebec)
- The Attorney General of Canada v The Attorney General of Alberta and others ("Board of Commerce case") UKPC 107 at p. 4, 1 A.C. 191 (8 November 1921), Judicial Committee of the Privy Council (on appeal from Canada)
- The Fort Frances Pulp and Paper Company Limited v The Manitoba Free Press Company Limited and others UKPC 64 at p. 6, A.C. 695 (25 July 1923), Judicial Committee of the Privy Council (on appeal from Ontario)
- In Re George Edwin Gray, 1918 CanLII 86 at pp. 167–173, 180–183, 57 SCR 150 (19 July 1918), drawing on R v Halliday UKHL 1, AC 260 (1 May 1917)
- The Toronto Electric Commissioners v Colin G. Snider and others UKPC 2, AC 396 (20 January 1925), Judicial Committee of the Privy Council (on appeal from Ontario)
- In the matter of The Initiative and Referendum Act being Chapter 59 of the Acts of Legislative Assembly of Manitoba 6 George V. UKPC 60, AC 935 (3 July 1919), Judicial Committee of the Privy Council (on appeal from Manitoba)
- The Attorney General of Quebec v The Attorney General of Canada and others ("Radio Reference") UKPC 7, A.C. 304 (9 February 1932), Judicial Committee of the Privy Council (on appeal from Canada)
- The Attorney-General Canada v The Attorney-General of Ontario and others ("Aeronautics Reference") UKPC 93, A.C. 54 (22 October 1931), Judicial Committee of the Privy Council (on appeal from Canada)
- Israel Winner (doing business under the name and style of Mackenzie Coach Lines) and others v. S.M.T. (Eastern) Limited and others UKPC 8 (22 February 1954), Judicial Committee of the Privy Council (on appeal from Canada)
- Attorney General of Alberta v Attorney General of Canada UKPC 46 (14 July 1938), Judicial Committee of the Privy Council (on appeal from Canada)
- Reference as to the Validity of the Regulations in Relation to Chemicals Enacted by Order in Council and of an Order of the Controller of Chemicals Made Pursuant Thereto (The "Chemicals Reference"), 1943 CanLII 1, SCR 1 (1 May 1943), Supreme Court (Canada)
- Reference re Wartime Leasehold Regulations, 1950 CanLII 27, SCR 124 (1 March 1950), Supreme Court (Canada)
- Attorney-General of Canada v. Higbie, 1944 CanLII 29, SCR 385 (23 March 1944), Supreme Court (Canada)
- Reference re Supreme Court Act, ss. 5 and 6, 2014 SCC 21 (21 March 2014)
- The Attorney General for the Dominion of Canada v The Attorneys General for the Provinces of Ontario, Quebec and Nova Scotia ("Fisheries Case") UKPC 29, AC 700 (26 May 1898), Judicial Committee of the Privy Council (on appeal from Canada)
- Smylie v. The Queen (1900), 27 O.A.R. 172 (C.A.)
- Attorney-General for British Columbia and the Minister of Lands v. Brooks-Bidlake and Whitall, Ltd., 1922 CanLII 22, 63 SCR 466 (2 July 1922)
- The Attorney General of Ontario v Mercer UKPC 42, 8 AC 767 (8 July 1883), Judicial Committee of the Privy Council (on appeal from Canada)
- The Ontario Mining Company Limited and The Attorney General for the Dominion of Canada v The Attorney General for the Province of Ontario ("Ontario Mining Co. v. Seybold") UKPC 46, AC 73 (12 November 1902) (on appeal from Canada)
- Reference re Waters and Water-Powers, 1929 CanLII 72, SCR 200 (2 May 1929), Supreme Court (Canada)
- The Attorney General of Quebec v The Nipissing Central Railway Company and another ("Railway Act Reference") UKPC 39, AC 715 (17 May 1926), Judicial Committee of the Privy Council (on appeal from Canada)
- R. v. Sparrow, 1990 CanLII 104, 1 SCR 1075 (31 May 1990), Supreme Court (Canada)
- St. Catherines Milling and Lumber Company v The Queen UKPC 70, 14 AC 46 (12 December 1888), Judicial Committee of the Privy Council (on appeal from Canada)
- Grassy Narrows First Nation v. Ontario (Natural Resources), 2014 SCC 48 at par. 50 (11 July 2014)
- Reference Re: Offshore Mineral Rights, 1967 CanLII 71, SCR 792 (7 November 1967), Supreme Court (Canada)
- Reference re Newfoundland Continental Shelf, 1984 CanLII 132, 1 SCR 86 (8 March 1984), Supreme Court (Canada)
- Reference re: Ownership of the Bed of the Strait of Georgia and Related Areas, 1984 CanLII 138, 1 SCR 388 (17 May 1984), Supreme Court (Canada)
- Allard Contractors Ltd. v. Coquitlam (District), CanLII 45, 4 SCR 371 (18 November 1993)
- Finlay v. Canada (Minister of Finance), 1993 CanLII 129 at par. 29, 1 SCR 1080 (25 March 1993)
- Winterhaven Stables Limited v. Canada (Attorney General), 1988 ABCA 334 at par. 23, 53 DLR (4th) 413 (17 October 1988)
- The Attorney General of Canada v The Attorney General of Ontario and others JCPC 7, AC 355 (28 January 1937) (Canada)
- Reference Re Canada Assistance Plan (B.C.), 1991 CanLII 74 at par. 93, 2 SCR 525 (15 August 1991)
- The Attorney-General for Commonwealth of Australia and others v The Colonial Sugar Refining Company Limited and others UKPC 76, AC 237 (17 December 1913), P.C. (on appeal from Australia), and stated again in The Bonanza Creek Gold Mining Company Limited v The King and another UKPC 11, 1 AC 566 (24 February 1916), Judicial Committee of the Privy Council (on appeal from Canada)
- Reference re Anti-Inflation Act, 1976 CanLII 16, 2 SCR 373 (12 July 1976), Supreme Court (Canada), 463–464
- Viscount Haldane in Fort Frances, p. 704
- R. v. Crown Zellerbach Canada Ltd., 1988 CanLII 63 at par. 33, 49 DLR (4th) 161; 3 WWR 385 (24 March 1988), Supreme Court (Canada)
- Edgar F. Ladore and others v George Bennett and others UKPC 33, 3 D.L.R. 1, AC. 468 (8 May 1939), P.C. (on appeal from Ontario)
- Re Upper Churchill Water Rights Reversion Act, 1984 CanLII 17, 1 SCR 297 (3 May 1984), Supreme Court (Canada)
- The Queen (Man.) v. Air Canada, 1980 CanLII 16, 2 SCR 303 (18 July 1980), Supreme Court (Canada)
- Canadian Western Bank v. Alberta, 2007 SCC 22, 2 SCR 3 (31 May 2007), par. 32
- Ontario (Attorney General) v. OPSEU, 1987 CanLII 71, 2 SCR 2 (29 July 1987) at par. 27
- Attorney General of Canada v. Canadian National Transportation, Ltd., 1983 CanLII 36, 2 SCR 206, Supreme Court (Canada)
- The Attorney General for the Dominion of Canada v The Attorneys General for the Provinces of Ontario, Quebec and Nova Scotia ("Fisheries Reference") UKPC 29, A.C. 700 (26 May 1898), P.C. (on appeal from Canada)
- Attorney General of Quebec v. Kellogg's Co. of Canada, 1978 CanLII 185, 2 SCR 211 (19 January 1978), Supreme Court (Canada)
- The Corporation of the City of Toronto v The Bell Telephone Company of Canada UKPC 71 (11 November 1904), P.C. (on appeal from Ontario)
- Bank of Montreal v. Innovation Credit Union, 2010 SCC 47, 3 SCR 3 (5 November 2010)
- Clark v. Canadian National Railway Co., 1988 CanLII 18, 2 SCR 680 (15 December 1988)
- Abitibi Power and Paper Company Limited v Montreal Trust Company and others UKPC 37, AC 536 (8 July 1943) (on appeal from Ontario), upholding The Abitibi Power and Paper Company Limited Moratorium Act, 1941, S.O. 1941, c. 1
- Canadian Pacific Railway Company v The Corporation of the Parish of Notre Dame De Bonsecour UKPC 22, AC 367 (24 March 1899), P.C. (on appeal from Quebec)
- R. v. Zaslavsky, 1935 CanLII 142, 3 DLR 788 (15 April 1935), Court of Appeal (Saskatchewan, Canada)
- Attorney General of Nova Scotia v. Attorney General of Canada (the "Nova Scotia Inter-delegation case"), 1950 CanLII 26, SCR 31 (3 October 1950)
- Lord's Day Alliance v. Attorney-General of British Columbia, 1959 CanLII 42, SCR 497 (28 April 1959)
- The Attorney General of Canada v The Attorney General of Ontario and others ("Labour Conventions Reference") UKPC 6, A.C. 326 (28 January 1937), P.C. (on appeal from Canada)
- MacMillan Bloedel Ltd. v. Simpson, 1995 CanLII 57, 4 SCR 725 (14 December 1995); Re Residential Tenancies Act, 1981 SCC 24, 1 SCR 714 (28 May 1981); Crevier v. A.G. (Québec) et al., 1981 CanLII 30, 2 SCR 220 (20 October 1981); Trial Lawyers Association of British Columbia v. British Columbia (Attorney General), 2014 SCC 59 (2 October 2014)
- Rev. Robert Dobie v The Board for Management of the Presbyterian Church of Canada UKPC 4, 7 App Cas 136 (21 January 1882), P.C. (on appeal from Quebec)
- The Attorney General for Ontario v The Attorney General for the Dominion of Canada, and the Distillers and Brewers’ Association of Ontario (The "Local Prohibition Case") UKPC 20, AC 348 (9 May 1896), P.C. (on appeal from Canada)
- per Fitzpatrick CJ, in Gauthier v The King, 1918 CanLII 85 at p. 194, 56 SCR 176 (5 March 1918), Supreme Court (Canada)
- Dyck, Rand (2012). Canadian Politics: Critical Approaches (Concise) (5th ed.). Toronto, Ontario: Nelson Education. ISBN 978-0-17-650343-7. OCLC 669242306.
- Morris J. Fish (2011). "The Effect of Alcohol on the Canadian Constitution ... Seriously" (PDF). McGill Law Journal. 57 (1): 189–209. doi:10.7202/1006421ar. ISSN 1920-6356. Archived from the original (PDF) on 5 March 2016. Retrieved 6 August 2012.
- Hogg, Peter W. (2007). Constitutional Law of Canada (loose-leaf) (5th ed.). Toronto: Carswell. ISBN 978-0-7798-1337-7. ISSN 1914-1262. OCLC 398011547.
- Gérard V. La Forest (1975). "Delegation of Legislative Power in Canada" (PDF). McGill Law Journal. 21 (1): 131–147. ISSN 1920-6356. Archived from the original (PDF) on 5 March 2016. Retrieved 7 September 2014.
- Lamot, Robert Gregory (1998). The Politics of the Judiciary: The S.C.C. and the J.C.P.C. in late 19th Century Ontario (PDF) (M.A.). Carleton University. ISBN 0-612-36831-9.
- J. Noel Lyon (1976). "The Central Fallacy of Canadian Constitutional Law" (PDF). McGill Law Journal. 22 (1): 40–70. ISSN 1920-6356. Archived from the original (PDF) on 13 March 2013. Retrieved 24 December 2012.
- Oliver, Peter C. (2011). "The Busy Harbours of Canadian Federalism: The Division of Powers and Its Doctrines in the McLachlin Court". In Dodek, Adam; Wright, David A. (eds.). Public Law at the McLachlin Court: the First Decade. Toronto, Ontario: Irwin Law. pp. 167–200. ISBN 978-1-55221-214-1. OCLC 774694912.
- Rocher, François; Smith, Miriam (2003). New Trends in Canadian Federalism (2nd ed.). Peterborough, Ontario: Broadview Press. ISBN 1551114143. OCLC 803829702.
- Romney, Paul (1999). Getting it wrong: how Canadians forgot their past and imperilled Confederation. Toronto: University of Toronto Press. ISBN 0-8020-8105-3.
- Stevenson, Garth (2003). Unfulfilled union: Canadian federalism and national unity (4th ed.). McGill-Queen's University Press. ISBN 0-7735-2744-3. OCLC 492159067.
- Federalism - The canadian encyclopedia
- Federalism in Canada: Basic Framework and Operation
- Federalism-e – published by Queen's University Institute of Intergovernmental Relations
- Canadian Federalism
- Studies on the Canadian Constitution and Canadian Federalism
- Constitutional Law professor Hester Lessard on the Downtown Eastside and Jurisdictional Justice
- Canadian Governments Compared – ENAP | https://wiki-offline.jakearchibald.com/wiki/Canadian_federalism | 21 |
19 | Culture of Canada
Canadian culture is a term that embodies the artistic, culinary, literary, humorous, musical, political and social elements that are representative of Canada and Canadians. Throughout Canada's history, its culture has been influenced by European culture and traditions, especially British and French, and by its own indigenous cultures. Over time, elements of the cultures of Canada's immigrant populations have become incorporated into mainstream Canadian culture. The population has also been influenced by American culture because of a shared language, proximity and migration between the two countries.
Canada is often characterized as being "very progressive, diverse, and multicultural". Canada's federal government has often been described as the instigator of multicultural ideology because of its public emphasis on the social importance of immigration. Canada's culture draws from its broad range of constituent nationalities, and policies that promote a just society are constitutionally protected. Canadian Government policies—such as publicly funded health care; higher and more progressive taxation; outlawing capital punishment; strong efforts to eliminate poverty; an emphasis on cultural diversity; strict gun control; and most recently, legalizing same-sex marriage—are social indicators of Canada's political and cultural values. Canadians identify with the country's institutions of health care, military peacekeeping, the National park system and the Canadian Charter of Rights and Freedoms.
The Canadian government has influenced culture with programs, laws and institutions. It has created crown corporations to promote Canadian culture through media, such as the Canadian Broadcasting Corporation (CBC) and the National Film Board of Canada (NFB), and supports many events which it considers to promote Canadian traditions. It has also tried to protect Canadian culture by setting legal minimums on Canadian content in many media, using bodies like the Canadian Radio-television and Telecommunications Commission (CRTC).
Development of Canadian culture
For tens of thousands of years, Canada was inhabited by Aboriginal peoples from a variety of different cultures and of several major linguistic groupings. Although not without conflict and bloodshed, early European interactions with First Nations and Inuit populations in what is now Canada were relatively peaceful. First Nations and Métis peoples played a critical part in the development of European colonies in Canada, particularly for their role in assisting European coureurs des bois and voyageurs in the exploration of the continent during the North American fur trade. Combined with late economic development in many regions, this comparatively nonbelligerent early history allowed Aboriginal Canadians to have a lasting influence on the national culture (see: The Canadian Crown and Aboriginal peoples). Over the course of three centuries, countless North American Indigenous words, inventions, concepts, and games have become an everyday part of Canadian language and use. Many places in Canada, both natural features and human habitations, use indigenous names. The name "Canada" itself derives from the St. Lawrence Iroquoian word meaning "village" or "settlement". The name of Canada's capital city Ottawa comes from the Algonquin language term "adawe" meaning "to trade".
The French originally settled New France along the shores of the Atlantic Ocean and Saint Lawrence River during the early part of the 17th century. Themes and symbols of pioneers, trappers, and traders played an important part in the early development of French Canadian culture. The British conquest of New France during the mid-18th century brought 70,000 Francophones under British rule, creating a need for compromise and accommodation. The migration of 40,000 to 50,000 United Empire Loyalists from the Thirteen Colonies during the American Revolution (1775–1783) brought American colonial influences. Following the War of 1812 a large wave of Irish, Scottish and English settlers arrived in Upper Canada and Lower Canada.
The Canadian Forces and overall civilian participation in the First World War and Second World War helped to foster Canadian nationalism; however, in 1917 and 1944, conscription crises highlighted the considerable rift along ethnic lines between Anglophones and Francophones. As a result of the First and Second World Wars, the Government of Canada became more assertive and less deferential to British authority. Until the 1940s, Canada saw itself in terms of English and French cultural, linguistic and political identities, and, to some extent, Aboriginal ones.
Legislative restrictions on immigration (such as the Continuous journey regulation and Chinese Immigration Act) that had favoured British, American and other European immigrants (such as Dutch, German, Italian, Polish, Swedish and Ukrainian) were amended during the 1960s, resulting in an influx of diverse people from Asia, Africa, and the Caribbean. By the end of the 20th century, immigrants were increasingly Chinese, Indian, Vietnamese, Jamaican, Filipino, Lebanese and Haitian. As of 2006, Canada has grown to have thirty-four ethnic groups with at least one hundred thousand members each, of which eleven have over 1,000,000 people and numerous others are represented in smaller numbers. 16.2% of the population self-identify as a visible minority. The Canadian public, as well as the major political parties, supports immigration.
Canada has also evolved to be religiously and linguistically diverse, encompassing a wide range of dialects, beliefs and customs. The 2011 Canadian census reported a population count of 33,121,175 individuals, of whom 67.3% identify as being Christians; of these, Catholics make up the largest group, accounting for 38.7 percent of the population. The largest Protestant denomination is the United Church of Canada (accounting for 6.1% of Canadians), followed by Anglicans (5.0%), and Baptists (1.9%). About 23.9% of Canadians declare no religious affiliation, including agnostics, atheists, humanists, and other groups. The remainder are affiliated with non-Christian religions, the largest of which is Islam (3.2%), followed by Hinduism (1.5%), Sikhism (1.4%), Buddhism (1.1%) and Judaism (1.0%). English and French are the first languages of approximately 60% and 20% of the population; however, in 2011, nearly 6.8 million Canadians listed a non-official language as their mother tongue. Some of the most common non-official first languages include Chinese (mainly Cantonese with 1,072,555 first-language speakers); Punjabi (430,705); Spanish (410,670); German (409,200); and Italian (407,490).
Evolution of legislation
French Canada's early development was relatively cohesive during the 17th and 18th centuries, and this was preserved by the Quebec Act of 1774, which allowed Roman Catholics to hold offices and practice their faith. In 1867, the Constitution Act was thought to meet the growing calls for Canadian autonomy while avoiding the overly strong decentralization that contributed to the Civil War in the United States. The compromises reached during this time between the English- and French-speaking Fathers of Confederation set Canada on a path to bilingualism which in turn contributed to an acceptance of diversity. The English and French languages have had limited constitutional protection since 1867 and full official status since 1969. Section 133 of the Constitution Act of 1867 (BNA Act) guarantees that both languages may be used in the Parliament of Canada. Canada adopted its first Official Languages Act in 1969, giving English and French equal status in the government of Canada. Doing so makes them "official" languages, having preferred status in law over all other languages used in Canada.
Prior to the advent of the Canadian Bill of Rights in 1960 and its successor the Canadian Charter of Rights and Freedoms in 1982, the laws of Canada did not provide much in the way of civil rights and this issue was typically of limited concern to the courts. Canada since the 1960s has placed emphasis on equality and inclusiveness for all people. For example, in 1995, the Supreme Court of Canada ruled in Egan v. Canada that sexual orientation should be "read in" to Section Fifteen of the Canadian Charter of Rights and Freedoms, a part of the Constitution of Canada guaranteeing equal rights to all Canadians. Following a series of decisions by provincial courts and the Supreme Court of Canada, on July 20, 2005, the Civil Marriage Act (Bill C-38) received Royal Assent, legalizing same-sex marriage in Canada. Canada thus became the fourth country to officially sanction same-sex marriage worldwide, after The Netherlands, Belgium, and Spain. Furthermore, sexual orientation was included as a protected status in the human-rights laws of the federal government and of all provinces and territories.
Today, Canada has a diverse makeup of ethnicities and nationalities and constitutional protection for policies that promote multiculturalism rather than cultural assimilation or a single national myth. In Quebec, cultural identity is strong, and many French-speaking commentators speak of a Quebec culture as distinguished from English Canadian culture and other French Canadian cultures. However, as a whole, Canada is, in theory, a cultural mosaic: a collection of several regional, aboriginal, and ethnic subcultures. Celtic influences have allowed the survival of non-English dialects in Nova Scotia and Newfoundland; Canada's Pacific trade has also brought a large Chinese influence into British Columbia and other areas. Multiculturalism in Canada was adopted as the official policy of the Canadian government during the prime ministership of Pierre Trudeau, and is enshrined in Section 27 of the Canadian Charter of Rights and Freedoms. In parts of Canada, especially the major cities of Montreal, Vancouver, Ottawa and Toronto, multiculturalism itself is the cultural norm in many urban communities.
Canada's large geographic size, the presence of a significant number of indigenous peoples, the conquest of one European linguistic population by another and relatively open immigration policy have led to an extremely diverse society. As a result, the issue of Canadian identity remains under scrutiny. Journalist and author Richard Gwyn has suggested that "tolerance" has replaced "loyalty" as the touchstone of Canadian identity. Journalist and professor Andrew Cohen wrote in 2007:
The Canadian Identity, as it has come to be known, is as elusive as the Sasquatch and Ogopogo. It has animated—and frustrated—generations of statesmen, historians, writers, artists, philosophers, and the National Film Board...Canada resists easy definition.
The question of Canadian identity was traditionally dominated by three fundamental themes: first, the often conflicted relations between English Canadians and French Canadians stemming from the French Canadian imperative for cultural and linguistic survival; secondly, the generally close ties between English Canadians and the British Empire, resulting in a gradual political process towards complete independence from the imperial power; and finally, the close proximity of English-speaking Canadians to the United States. In the 20th century, immigrants from African, Caribbean and Asian nationalities have shaped the Canadian identity, a process that continues today with the ongoing arrival of large numbers of immigrants from non-British or non-French backgrounds, adding the theme of multiculturalism to the debate. Much of the debate over contemporary Canadian identity is argued in political terms, and defines Canada as a country defined by its government policies, which are thought to reflect deeper cultural values.
Nationalism and protectionism
In general, Canadian nationalists are highly concerned about the protection of Canadian sovereignty and loyalty to the Canadian State, placing them in the civic nationalist category. It has likewise often been suggested that anti-Americanism plays a prominent role in Canadian nationalist ideologies. A unified, bi-cultural, tolerant and sovereign Canada remains an ideological inspiration to many Canadian nationalists. Alternatively French Canadian nationalism and support for maintaining French Canadian culture would inspire Quebec nationalists, many of whom were supporters of the Quebec sovereignty movement during the late-20th century.
Cultural protectionism in Canada has, since the mid-20th century, taken the form of conscious, interventionist attempts on the part of various Canadian governments to promote Canadian cultural production. Sharing a large border and (for the majority) a common language with the United States, Canada faces a difficult position in regard to American culture, be it direct attempts at the Canadian market or the general diffusion of American culture in the globalized media arena. While Canada tries to maintain its cultural differences, it also must balance this with responsibility in trade arrangements such as the General Agreement on Tariffs and Trade (GATT) and the North American Free Trade Agreement (NAFTA).
Official symbols of Canada include the maple leaf, beaver, and the Canadian Horse. Many official symbols of the country, such as the Flag of Canada, have been changed or modified over the past few decades to 'Canadianize' them and de-emphasise or remove references to the United Kingdom. Other prominent symbols include the Canada goose, common loon and, more recently, the totem pole and inuksuk. Symbols of the monarchy in Canada continue to be featured in, for example, the Arms of Canada, the armed forces and the prefix Her Majesty's Canadian Ship. The designation 'Royal' remains for institutions as varied as the Royal Canadian Mounted Police and the Royal Winnipeg Ballet. During unification of the forces in the 1960s, a renaming of the branches took place, resulting in the abandonment of the "royal" designations of the navy and air force. On August 16, 2011, the Government of Canada announced that "Air Command" was re-assuming the air force's original historic name, the Royal Canadian Air Force; "Land Command" the name Canadian Army; and "Maritime Command" the name Royal Canadian Navy. These name changes were made to better reflect Canada's military heritage and align Canada with other key Commonwealth nations whose militaries use the royal designation.
Canadian humour is an integral part of the Canadian identity. There are several traditions in Canadian humour in both English and French. While these traditions are distinct and at times very different, there are common themes that relate to Canadians' shared history and geopolitical situation in the Western Hemisphere and the world. Various trends can be noted in Canadian comedy. One trend is the portrayal of a "typical" Canadian family in an ongoing radio or television series. Other trends include outright absurdity, and political and cultural satire. Irony, parody, satire, and self-deprecation are arguably the primary characteristics of Canadian humour.
The beginnings of Canadian radio comedy date to the late 1930s with the debut of The Happy Gang, a long-running weekly variety show that was regularly sprinkled with corny jokes in between tunes. Canadian television comedy begins with Wayne and Shuster, a sketch comedy duo who performed as a comedy team during the Second World War, and moved their act to radio in 1946 before moving on to television. Second City Television, otherwise known as SCTV, Royal Canadian Air Farce, This Hour Has 22 Minutes, The Kids in the Hall and more recently Trailer Park Boys are regarded as television shows which were very influential on the development of Canadian humour. Canadian comedians have had great success in the film industry and are amongst the most recognized in the world.
Humber College in Toronto and the École nationale de l'humour in Montreal offer post-secondary programmes in comedy writing and performance. Montreal is also home to the bilingual (English and French) Just for Laughs festival and to the Just for Laughs Museum, a bilingual, international museum of comedy. Canada has a national television channel, The Comedy Network, devoted to comedy. Many Canadian cities feature comedy clubs and showcases, most notably The Second City branch in Toronto (originally housed at The Old Fire Hall) and the Yuk Yuk's national chain. The Canadian Comedy Awards were founded in 1999 by the Canadian Comedy Foundation for Excellence, a not-for-profit organization.
Aboriginal artists were producing art in the territory that is now called Canada for thousands of years prior to the arrival of European settler colonists and the eventual establishment of Canada as a nation state. Like the peoples that produced them, indigenous art traditions spanned territories that extended across the current national boundaries between Canada and the United States. The majority of indigenous artworks preserved in museum collections date from the period after European contact and show evidence of the creative adoption and adaptation of European trade goods such as metal and glass beads. Canadian sculpture has been enriched by walrus ivory, muskox horn, caribou antler and soapstone carvings by Inuit artists. These carvings show objects and activities from the daily life, myths and legends of the Inuit. Inuit art since the 1950s has been the traditional gift given to foreign dignitaries by the Canadian government.
The works of most early Canadian painters followed European trends. During the mid-19th century, Cornelius Krieghoff, a Dutch-born artist in Quebec, painted scenes of the life of the habitants (French-Canadian farmers). At about the same time, the Canadian artist Paul Kane painted pictures of aboriginal life in western Canada. A group of landscape painters called the Group of Seven developed the first distinctly Canadian style of painting. All these artists painted large, brilliantly coloured scenes of the Canadian wilderness.
Since the 1930s, Canadian painters have developed a wide range of highly individual styles. Emily Carr became famous for her paintings of totem poles in British Columbia. Other noted painters have included the landscape artist David Milne, the painters Jean-Paul Riopelle, Harold Town and Charles Carson, and multi-media artist Michael Snow. The abstract art group Painters Eleven, particularly the artists William Ronald and Jack Bush, also had an important impact on modern art in Canada. Government support has played a vital role in their development, enabling visual exposure through publications and periodicals featuring Canadian art, as has the establishment of numerous art schools and colleges across the country.
Canadian literature is often divided into French- and English-language literatures, which are rooted in the literary traditions of France and Britain, respectively. Canada’s early literature, whether written in English or French, often reflects the Canadian perspective on nature, frontier life, and Canada’s position in the world, for example the poetry of Bliss Carman or the memoirs of Susanna Moodie and Catherine Parr Traill. These themes, and Canada's literary history, inform the writing of successive generations of Canadian authors, from Leonard Cohen to Margaret Atwood.
By the mid-20th century, Canadian writers were exploring national themes for Canadian readers. Authors were trying to find a distinctly Canadian voice, rather than merely emulating British or American writers. Canadian identity is closely tied to its literature. The question of national identity recurs as a theme in much of Canada's literature, from Hugh MacLennan's Two Solitudes (1945) to Alistair MacLeod's No Great Mischief (1999). Canadian literature is often categorized by region or province; by the socio-cultural origins of the author (for example, Acadians, Aboriginal peoples, LGBT, and Irish Canadians); and by literary period, such as "Canadian postmoderns" or "Canadian Poets Between the Wars."
Canadian authors have accumulated numerous international awards. In 1992, Michael Ondaatje became the first Canadian to win the Man Booker Prize for The English Patient. Margaret Atwood won the Booker in 2000 for The Blind Assassin and Yann Martel won it in 2002 for Life of Pi. Carol Shields's The Stone Diaries won the Governor General's Award in Canada in 1993, the 1995 Pulitzer Prize for Fiction, and the 1994 National Book Critics Circle Award. In 2013, Alice Munro was the first Canadian to be awarded the Nobel Prize in Literature for her work as "master of the modern short story". Munro is also a recipient of the Man Booker International Prize for her lifetime body of work, and a three-time winner of Canada's Governor General's Award for fiction.
Canada has had a thriving stage theatre scene since the late 1800s. Theatre festivals draw many tourists in the summer months, especially the Stratford Shakespeare Festival in Stratford, Ontario, and the Shaw Festival in Niagara-on-the-Lake, Ontario. The Famous People Players are only one of many touring companies that have also developed an international reputation. Canada also hosts one of the largest fringe festivals, the Edmonton International Fringe Festival.
Canada's largest cities host a variety of modern and historical venues. The Toronto Theatre District is Canada's largest, as well as being the third largest English-speaking theatre district in the world. In addition to original Canadian works, shows from the West End and Broadway frequently tour in Toronto. Toronto's Theatre District includes the venerable Roy Thomson Hall; the Princess of Wales Theatre; the Tim Sims Playhouse; The Second City; the Canon Theatre; the Panasonic Theatre; the Royal Alexandra Theatre; historic Massey Hall; and the city's new opera house, the Sony Centre for the Performing Arts. Toronto's Theatre District also includes the Theatre Museum Canada.
Montreal's theatre district ("Quartier des Spectacles") is the scene of performances that are mainly French-language, although the city also boasts a lively anglophone theatre scene, such as the Centaur Theatre. Large French theatres in the city include Theatre Saint-Denis, Theatre du Nouveau Monde, and EXcentris.
Vancouver is host to, among others, the Vancouver Fringe Festival, the Arts Club Theatre Company, Carousel Theatre, Bard on the Beach, Theatre Under the Stars and Studio 58. It is also home to the Vancouver Theatresports League, an improvisational theatre company well known for providing an impetus for the present worldwide interest in theatresports at Expo in 1986.
Calgary is home to Theatre Calgary, a mainstream regional theatre; Alberta Theatre Projects, a major centre for new play development in Canada; the Calgary Animated Objects Society; and One Yellow Rabbit, a touring company.
There are three major theatre venues in Ottawa. The Ottawa Little Theatre, originally called the Ottawa Drama League at its inception in 1913, is the longest-running community theatre company in Ottawa. Since 1969, Ottawa has been the home of the National Arts Centre, a major performing-arts venue that houses four stages and is home to the National Arts Centre Orchestra, the Ottawa Symphony Orchestra and Opera Lyra Ottawa. Established in 1975, the Great Canadian Theatre Company specializes in the production of Canadian plays at a local level.
Canadian television, especially supported by the Canadian Broadcasting Corporation, is the home of a variety of locally produced shows. French-language television, like French Canadian film, is buffered from excessive American influence by the fact of language, and likewise supports a host of home-grown productions. The success of French-language domestic television in Canada often exceeds that of its English-language counterpart. In recent years, nationalism has been used to promote products on television. The I Am Canadian campaign by Molson beer, most notably the commercial featuring Joe Canadian, fused domestically brewed beer with nationalism.
Canada's television industry is in full expansion as a site for Hollywood productions. Since the 1980s, Canada, and Vancouver in particular, has become known as Hollywood North. The American TV series Queer as Folk was filmed in Toronto. Canadian producers have been very successful in the field of science fiction since the mid-1990s, with such shows as The X-Files, Stargate SG-1, Highlander: The Series, the new Battlestar Galactica, My Babysitter's A Vampire, Smallville, and The Outer Limits, all filmed in Vancouver.
The CRTC's Canadian content regulations dictate that a certain percentage of a domestic broadcaster's transmission time must include content that is produced by Canadians, or covers Canadian subjects. These regulations also apply to US cable television channels such as MTV and the Discovery Channel, which have local versions of their channels available on Canadian cable networks. Similarly, BBC Canada, while showing primarily BBC shows from the United Kingdom, also carries Canadian output.
A number of Canadian pioneers in early Hollywood significantly contributed to the creation of the motion picture industry in the early days of the 20th century. Over the years, many Canadians have made enormous contributions to the American entertainment industry, although they are frequently not recognized as Canadians.
Canada has developed a vigorous film industry that has produced a variety of well-known films, actors, and auteurs, even though its output is often eclipsed by American imports. In fact, this eclipsing may sometimes be credited for the bizarre and innovative directions of some works, such as those of auteurs Atom Egoyan (The Sweet Hereafter, 1997) and David Cronenberg (The Fly, Naked Lunch, A History of Violence). Also, the distinct French-Canadian society permits the work of directors such as Denys Arcand and Denis Villeneuve. At the 76th Academy Awards, Arcand's The Barbarian Invasions became Canada's first film to win the Academy Award for Best Foreign Language Film. James Cameron is a very successful Canadian filmmaker, having been nominated for and received many Academy Awards.
The National Film Board of Canada is 'a public agency that produces and distributes films and other audiovisual works which reflect Canada to Canadians and the rest of the world'. Canada has produced many popular documentaries such as The Corporation, Nanook of the North, Final Offer, and Canada: A People's History. The Toronto International Film Festival (TIFF) is considered by many to be one of the most prominent film festivals for Western cinema. It is the première film festival in North America from which the Oscars race begins.
The music of Canada has reflected the multi-cultural influences that have shaped the country. Aboriginals, the French, and the British have all made contributions to the musical heritage of Canada. The country has produced its own composers, musicians and ensembles since the mid-1600s. From the 17th century onward, Canada has developed a music infrastructure that includes church halls, chamber halls, conservatories, academies, performing arts centres, record companies, radio stations, and television music-video channels. The music has subsequently been heavily influenced by American culture because of the proximity of, and migration between, the two countries. Canadian rock has had a considerable impact on the development of modern popular music and its most popular subgenres.
Patriotic music in Canada dates back over 200 years as a category distinct from British patriotism, preceding the first legal steps to independence by over 50 years. The earliest known song, "The Bold Canadian", was written in 1812. The national anthem of Canada, "O Canada", adopted in 1980, was originally commissioned by the Lieutenant Governor of Quebec, the Honourable Théodore Robitaille, for the 1880 St. Jean-Baptiste Day ceremony. Calixa Lavallée wrote the music, which was a setting of a patriotic poem composed by the poet and judge Sir Adolphe-Basile Routhier. The text was originally only in French, before it was translated into English in 1906.
Music broadcasting in the country is regulated by the Canadian Radio-television and Telecommunications Commission (CRTC). The Canadian Academy of Recording Arts and Sciences presents Canada's music industry awards, the Juno Awards, which were first awarded in a ceremony during the summer of 1970.
Canada has one of the largest video-game industries in terms of employment numbers, right behind the USA and Japan, with 16,000 employees, 348 companies, and a direct annual economic impact of nearly $2 billion. Canada has grown from a minor player in the video-games industry to a major industry player. In part, this prominence is made possible by a large pool of university-educated talent and a high quality of life, but favourable government policies towards digital media companies also play a role in making Canada an attractive location for game development studios.
Canada has a well-developed media sector, but its cultural output—particularly in English films, television shows, and magazines—is often overshadowed by imports from the United States. Television, magazines, and newspapers are primarily for-profit corporations based on advertising, subscription, and other sales-related revenues. Nevertheless, both the television broadcasting and publications sectors require a number of government interventions to remain profitable, ranging from regulation that bars foreign companies in the broadcasting industry to tax laws that limit foreign competition in magazine advertising.
The promotion of multicultural media in Canada began in the late 1980s as the multicultural policy was legislated in 1988. In the Multiculturalism Act, the federal government proclaimed the recognition of the diversity of Canadian culture. Thus, multicultural media became an integral part of Canadian media overall. Following numerous government reports showing a lack of minority representation, or minority misrepresentation, the Canadian government stressed that separate provision be made to allow Canada's minorities and ethnic groups to have their own voice in the media.
Sport in Canada consists of a variety of games. Although there are many contests that Canadians value, the most common are ice hockey, lacrosse, Canadian football, basketball, soccer, curling, baseball and ringette. All but curling and soccer are considered domestic sports as they were either invented by Canadians or trace their roots to Canada.
Ice hockey, referred to as simply "hockey", is Canada's most prevalent winter sport, its most popular spectator sport, and its most successful sport in international competition. It is Canada's official national winter sport. Lacrosse, a sport with indigenous origins, is Canada's oldest and official summer sport. Canadian football is Canada's second most popular spectator sport, and the Canadian Football League's annual championship, the Grey Cup, is the country's largest annual sports event.
While other sports have a larger spectator base, association football, known in Canada as soccer in both English and French, has the most registered players of any team sport in Canada, and is the most played sport across all demographics, including ethnic origin, age and gender. Professional teams exist in many cities in Canada, and international soccer competitions such as the FIFA World Cup, UEFA Euro and the UEFA Champions League attract some of the biggest audiences in Canada. Other popular team sports include curling, street hockey, cricket, rugby, softball and ultimate frisbee. Popular individual sports include auto racing, boxing, karate, kickboxing, hunting, fishing, cycling, golf, hiking, horse racing, ice skating, skiing, snowboarding, swimming, triathlon, disc golf, water sports, and several forms of wrestling.
As a country with a generally cool climate, Canada has enjoyed greater success at the Winter Olympics than at the Summer Olympics, although significant regional variations in climate allow for a wide variety of both team and individual sports. Great achievements in Canadian sports are recognized by Canada's Sports Hall of Fame, while the Lou Marsh Trophy is awarded annually to Canada's top athlete by a panel of journalists. There are numerous other Sports Halls of Fame in Canada.
Canadian cuisine varies widely depending on the region. The former Canadian prime minister Joe Clark has been paraphrased to have noted: "Canada has a cuisine of cuisines. Not a stew pot, but a smorgasbord." While there are considerable overlaps between Canadian food and the rest of North American cuisine, many unique dishes (or versions of certain dishes) are found only in Canada. Common contenders for the Canadian national food include poutine and butter tarts. Other popular Canadian-made foods include the Aboriginal fried bread bannock, French tourtière, Kraft Dinner, ketchup chips, date squares, and the cocktail bloody caesar. Canada is also the world's largest producer of maple syrup.
The three earliest cuisines of Canada have First Nations, English, and French roots, with the traditional cuisine of English Canada closely related to British and American cuisine, while the traditional cuisine of French Canada has evolved from French cuisine and the winter provisions of fur traders. With subsequent waves of immigration in the 18th and 19th centuries from Central, Southern, and Eastern Europe, and then from Asia, Africa and the Caribbean, the regional cuisines were augmented. Jewish immigrants to Canada during the late 1800s also played a significant role in shaping food in Canada. The Montreal-style bagel and Montreal-style smoked meat are both food items originally developed by Jewish communities living in Montreal.
In 1984, Baron Moran, the British High Commissioner to Canada, described Canadians as "moderate, comfortable...decent and reasonable", but added that, due to the absence of competition, "Anyone who is even moderately good at what they do—in literature, the theater, skiing or whatever—tends to become a national figure. And anyone who stands out at all from the crowd tends to be praised to the skies and given the Order of Canada at once."
In a 2002 interview with the Globe and Mail, Aga Khan, the 49th Imam of the Ismaili Muslims, described Canada as "the most successful pluralist society on the face of our globe", citing it as "a model for the world". A 2007 poll ranked Canada as the country with the most positive influence in the world. 28,000 people in 27 countries were asked to rate 12 countries as either having a positive or negative worldwide influence. Canada’s overall influence rating topped the list with 54 per cent of respondents rating it mostly positive and only 14 per cent mostly negative.
The United States is home to a number of perceptions about Canadian culture, due to the countries' partially shared heritage and the relatively large number of cultural features common to both the US and Canada. For example, the average Canadian may be perceived as more reserved than his or her American counterpart. Canada and the United States are often compared as sibling countries, and the perceptions that arise from this oft-held contrast have gone on to shape the advertised worldwide identities of both nations: the United States is seen as the rebellious child of the British Crown, forged in the fires of violent revolution; Canada is the calmer offspring of the United Kingdom, known for a more relaxed national demeanour.
- Canadian folklore
- Culture of Alberta
- Culture of Manitoba
- Culture of Saskatchewan
- Culture of Quebec
- History of free speech in Canada
- Public holidays in Canada
- List of Canadians
- Sana Loue; Martha Sajatovic (2011). Encyclopedia of Immigrant Health. Springer. p. 337. ISBN 978-1-4419-5655-2.
- Paul R. Magocsi; Multicultural History Society of Ontario (1999). Encyclopedia of Canada's Peoples. University of Toronto Press. pp. 1186–1187. ISBN 978-0-8020-2938-6.
- Organisation for Economic Co-operation and Development (1999). Economic and cultural transition towards a learning city: the case of Jena. OECD Publishing. p. 48. ISBN 978-92-64-17015-5.
- Christopher Ricks; Leonard Michaels (1990). The State of the language. University of California Press. p. 19. ISBN 978-0-520-05906-1.
- Anne-Marie Mooney Cotter (2011). Culture clash: an international legal perspective on ethnic discrimination. Ashgate Publishing, Ltd. p. 176. ISBN 978-1-4094-1936-5.
- Wayland, Shara (1997). "Immigration, Multiculturalism and National Identity in Canada" (PDF). University of Toronto Department of Political Science. Retrieved 2010-09-12.
- Sheila Petty; Garry Sherbert; Annie Gérin (2006). Canadian Cultural Poesis: Essays on Canadian Culture. Wilfrid Laurier Univ. Press. p. 348. ISBN 978-0-88920-486-7.
- Darrell Bricker; John Wright; Ipsos-Reid (Firm) (2005). What Canadians think- about almost- everything. Doubleday Canada. pp. 8–20. ISBN 978-0-385-65985-7.
- The Environics Institute (2010). "Focus Canada (Final Report)" (pdf). Queen's University. p. 4 (PDF page 8). Retrieved December 12, 2015.
- National Film Board of Canada (2005). "Mandate of the National Film Board". Archived from the original on April 21, 2006. Retrieved March 15, 2006.
- Trevor W. Harrison; John W. Friesen (2010). Canadian Society in the Twenty-first Century: An Historical Sociological Approach. Canadian Scholars' Press. p. 186. ISBN 978-1-55130-371-0.
- David L. Preston (2009). The Texture of Contact: European and Indian Settler Communities on the Frontiers of Iroquoia, 1667–1783. U of Nebraska Press. pp. 43–44. ISBN 978-0-8032-2549-7.
- J.R. Miller (2009). Compact, Contract, Covenant: Aboriginal Treaty-Making in Canada. University of Toronto Press. p. 34. ISBN 978-1-4426-9227-5.
- Patrick Macklem (2001). Indigenous difference and the Constitution of Canada. University of Toronto Press. p. 136. ISBN 978-0-8020-8049-3.
- Newhouse, David. "Hidden in Plain Sight Aboriginal Contributions to Canada and Canadian Identity Creating a new Indian Problem" (PDF). Centre of Canadian Studies, University of Edinburgh. Archived from the original (PDF) on July 23, 2011. Retrieved October 17, 2009.
- "Aboriginal place names contribute to a rich tapestry". Indian and Northern Affairs Canada. Retrieved October 17, 2009.
- John C. Hudson (2002). Across this land: a regional geography of the United States and Canada. JHU Press. p. 15. ISBN 978-0-8018-6567-1.
- "Canada in the Making: Pioneers and Immigrants". The History Channel. August 25, 2005. Retrieved November 30, 2006.
- John Powell (2009). Encyclopedia of North American Immigration. Infobase Publishing. p. 45. ISBN 978-1-4381-1012-7.
- R. Douglas Francis, ed. (2011). Canada and the British World: Culture, Migration, and Identity. UBC Press. p. 2. ISBN 978-0-7748-4031-6.
- Christopher Edward Taucar (2002). Canadian Federalism and Quebec Sovereignty. American University Studies: Political Science, vol. 47. pp. 47–48. ISBN 978-0-8204-6242-4.
- Wayne C. Thompson (2013). Canada World Today. Rowman & Littlefield. p. 61. ISBN 978-1-4758-0474-4.
- Guy M. Robinson (1991). A Social geography of Canada. Dundurn Press Ltd. p. 86. ISBN 978-1-55002-092-2.
- Peter Kivisto (2008). Multiculturalism in a Global Society. John Wiley & Sons. p. 90. ISBN 978-0-470-69480-0.
- Patricia E. Bromley (2011). Human Rights, Diversity, and National Identity: Changes in Civic Education Textbooks Cross-nationally (1970–2008) and in British Columbia (1871–2008). Stanford University. pp. 107–108. STANFORD:XT006FZ3167.
- Edward Ksenych; David Liu (2001). Conflict, order and action: readings in sociology. Canadian Scholars' Press. p. 407. ISBN 978-1-55130-192-1.
- "Immigration Policy in the 1970s". Canadian Heritage (Multicultural Canada). 2004. Retrieved April 12, 2010.
- "2006 Census release topics". Statistics Canada. Retrieved January 16, 2011.
- James Hollifield; Philip Martin; Pia Orrenius (2014). Controlling Immigration: A Global Perspective, Third Edition. Stanford University Press. p. 11. ISBN 978-0-8047-8627-0.
- Dianne R. Hales; Lara Lauzon (April 2009). An Invitation to Health. Cengage Learning. p. 440. ISBN 978-0-17-650009-2.
- William Downes (1998). Language and society. Cambridge University Press. p. 47. ISBN 978-0-521-45663-0.
- "Religions in Canada—Census 2011". Statistics Canada/Statistique Canada.
- "What Languages Do Canadians Speak? Language Statistics From the 2011 Census of Canada". About.com: Canada Online. October 31, 2012. Retrieved November 26, 2012.
- "Population by mother tongue, by province and territory". Statistics Canada. January 2013. Retrieved July 4, 2013.
- "Quebec". The Columbia Electronic Encyclopedia, Sixth Edition. Columbia University Press. 2003. Retrieved November 30, 2006.
- "American Civil war". The Canadian Encyclopedia. Historica Founcation. 2003. Retrieved November 30, 2006.
- François Vaillancourt, Olivier Coche. Official Language Policies at the Federal Level in Canada:costs and Benefits in 2006. The Fraser Institute. p. 11. GGKEY:B3Y7U7SKGUD.
- Paul André Linteau; René Durocher; Jean-Claude Robert (1983). Quebec, a history, 1867–1929. James Lorimer & Company. p. 219. ISBN 978-0-88862-604-2.
- Jochen Kosel (2010). The Language Situation in Canada with Special Regard to Quebec. GRIN Verlag. p. 15. ISBN 978-3-640-65926-5.
- Joan Church; Christian Schulze; Hennie Strydom (2007). Human rights from a comparative and international law perspective. Unisa Press. p. 82. ISBN 978-1-86888-361-5.
- Christopher MacLennan (2004). Toward the Charter: Canadians and the Demand for a National Bill of Rights, 1929–1960. McGill-Queen's University Press. p. 119. ISBN 978-0-7735-2536-8.
- Linda Silver Dranoff (2011). Every Canadian's Guide to the Law: Fourth Edition. HarperCollins Canada. pp. 373–. ISBN 978-1-4434-0559-1. Retrieved February 3, 2012.
- Paul Ubaldo Angelini (2011). Our Society: Human Diversity in Canada. Cengage Learning. p. 315. ISBN 978-0-17-650354-3.
- Anne-Marie Mooney Cotter (2010). Ask no questions: an international legal analysis on sexual orientation discrimination. Ashgate Publishing, Ltd. p. 140. ISBN 978-0-7546-7791-8.
- Shirley R. Steinberg (2009). Diversity and Multiculturalism. Peter Lang. p. 184. ISBN 978-1-4331-0345-2.
- David DeRocco; John F. Chabot (2008). From Sea to Sea to Sea: A Newcomer's Guide to Canada. Full Blast Productions. p. 13. ISBN 978-0-9784738-4-6.
- Daniel Franklin; Michael J. Baun (1995). Political culture and constitutionalism: a comparative approach. M.E. Sharpe. p. 61. ISBN 978-1-56324-416-2.
- Allan D. English (2004). Understanding Military Culture: A Canadian Perspective. McGill-Queen's University Press. p. 111. ISBN 978-0-7735-7171-6.
- Burgess, Ann Carroll; Burgess, Tom (2005). Guide to Western Canada. Globe Pequot Press. p. 31. ISBN 978-0-7627-2987-6.
- John Thomas Koch (2006). Celtic culture: a historical encyclopedia. ABC-CLIO. p. 376. ISBN 978-1-85109-440-0.
- Fen Osler Hampson (1997). Canada Among Nations, 1997: Asia Pacific Face-Off. McGill-Queen's University Press. p. 234. ISBN 978-0-88629-327-7.
- Jonathan L. Black-Branch; Canadian Education Association (1995). Making Sense of the Canadian Charter of Rights and Freedoms. Canadian Education Association. p. 38. ISBN 978-0-920315-78-1.
- James S. Duncan; David Ley (1993). Place/culture/representation. Routledge. pp. 205–. ISBN 978-0-415-09451-1.
- Marcia Wallace (1999). "Planning Amidst Diversity: The Challenges of Multiculturalism in Urban and Suburban Greater Toronto". University of Waterloo. Retrieved November 30, 2006.
- MacGregor, p.39
- Richard J. Gwyn (2008). John A: The Man Who Made Us. Random House Digital, Inc. p. 265. ISBN 978-0-679-31476-9.
- Andrew Cohen (2008). The Unfinished Canadian: The People We Are. McClelland & Stewart. pp. 3–. ISBN 978-0-7710-2286-9.
- Martin N. Marger (2011). Race and Ethnic Relations: American and Global Perspectives. Cengage Learning. p. 433. ISBN 978-1-111-18638-8.
- Philip Resnick (2005). The European Roots Of Canadian Identity. University of Toronto Press. p. 63. ISBN 978-1-55111-705-8.
- Roy MacGregor (2008). Canadians. Penguin Group (Canada). ISBN 978-0-14-318162-0.
- Steven Alexander Kennett (1998). Securing the Social Union: A Commentary on the Decentralized Approach. IIGR, Queen's University. p. 6. ISBN 978-0-88911-767-9.
- Brendon O'Connor (2007). Anti-Americanism: Comparative perspectives. Greenwood Publishing Group. p. 59. ISBN 978-1-84645-026-6.
- Peter H. Russell (2004). Constitutional odyssey: can Canadians become a sovereign people?. University of Toronto Press. p. 156. ISBN 978-0-8020-3777-0.
- Dominique Clift (1982). Quebec nationalism in crisis. McGill-Queen's University Press. pp. 106–108. ISBN 978-0-7735-0383-0.
- John Carlos Rowe (2010). A Concise Companion to American Studies. John Wiley and Sons. p. 393. ISBN 978-1-4051-0924-6.
- Marc Raboy (1990). Missed opportunities: the story of Canada's broadcasting policy. McGill-Queen's University Press. p. 301. ISBN 978-0-7735-0775-3.
- "National Horse of Canada Act". Canlii.org. Retrieved February 25, 2011.
- "The beaver". Pch.gc.ca. December 17, 2008. Retrieved February 25, 2011.
- "The Maple Leaf". Pch.gc.ca. November 17, 2008. Retrieved February 25, 2011.
- Phillip Alfred Buckner (2005). Canada and the end of empire. UBC Press. p. 86. ISBN 978-0-7748-0916-0.
- Ruhl, Jeffrey (January 2008). "Inukshuk Rising". Canadian Journal of Globalization. 1 (1): 25–30.
- Douglas J. Murray; Paul R. Viotti (1994). The defense policies of nations: a comparative study. JHU Press. p. 84. ISBN 978-0-8018-4794-3.
- "Restoration of traditional military service names welcomed". Government of New Brunswick (Intergovernmental Affairs Office of the Premier). 2011. Retrieved January 1, 2012.
- Scobie, Stephen "Humorous Writing in English". The Canadian Encyclopedia. Retrieved on: April 23, 2010.
- Lacombe, Michelle "Humorous Writing in French". The Canadian Encyclopedia. Retrieved on: April 23, 2010.
- Doug Owram (1997). Born at the right time: a history of the baby-boom generation. University of Toronto Press. p. 91. ISBN 978-0-8020-8086-8.
- Charles Boberg (2010). The English Language in Canada: Status, History and Comparative Analysis. Cambridge University Press. p. 45. ISBN 978-0-521-87432-8.
- William H. New (2002). Encyclopedia of literature in Canada. University of Toronto Press. p. 516. ISBN 978-0-8020-0761-2.
- Tim Nieguth (2015). The Politics of Popular Culture: Negotiating Power, Identity, and Place. MQUP. p. 188. ISBN 978-0-7735-9685-6.
- Serra Ayse Tinic (2005). On location: Canada's television industry in a global market. University of Toronto Press. p. 131. ISBN 978-0-8020-8548-1.
- Stephen Brooks (2002). The challenge of cultural pluralism. Greenwood Publishing Group. p. 45. ISBN 978-0-275-97001-7.
- Gil Murray (2003). Nothing on but the radio: a look back at radio in Canada and how it changed the world. Dundurn Press Ltd. p. 39. ISBN 978-1-55002-479-1.
- Doug Hill, Jeff Weingrad (2011-12-15). Saturday Night: A Backstage History of Saturday Night Life. Untreed Reads. p. 27. ISBN 978-1-61187-218-7.
- Maurice Charney (2005). Comedy: a geographic and historical guide. Greenwood Publishing Group. pp. 210–213. ISBN 978-0-313-32714-8.
- "Organisation Members". Cultural Human Resources Council. 2010. Retrieved February 2, 2012.
- John Robert Colombo (2001). 1000 questions about Canada: places, people, things, and ideas : a question-and-answer book on Canadian facts and culture. Dundurn Press Ltd. p. 213. ISBN 978-0-88882-232-1.
- Robert A. Stebbins (1990). The laugh-makers: stand-up comedy as art, business, and life-style. McGill-Queen's University Press. p. 25. ISBN 978-0-7735-0735-7.
- "History". The Canadian Comedy Awards & Festival. 2012. Retrieved February 3, 2012.
- Ruth B. Phillips (2011). Museum Pieces: Toward the Indigenization of Canadian Museums. McGill-Queen's University Press. p. 267. ISBN 978-0-7735-3905-1.
- John W. Friesen; Virginia Agnes Lyons Friesen (2006). Canadian Aboriginal Art and Spirituality: A Vital Link. Calgary, AB: Detselig Enterprises. pp. xxi – Intro. ISBN 9781550593044. OCLC 62129850.
- J. Russell Harper (1977). Painting in Canada: a history. University of Toronto Press. p. 57. ISBN 978-0-8020-6307-6.
- Nicola Förg (1999). Canada: Pacific coast, the Rockies, Prairie Provinces, and the Territories. Hunter Publishing, Inc. p. 233. ISBN 978-3-88618-368-5.
- Patricia Randolph Leigh (2010). International Exploration of Technology Equity and the Digital Divide: Critical, Historical and Social Perspectives. Idea Group Inc (IGI). p. 93. ISBN 978-1-61520-793-0.
- Pamela R. Stern (2010). Daily life of the Inuit. ABC-CLIO. p. 151. ISBN 978-0-313-36311-5.
- Roshen Dalal (2011). The Illustrated Timeline of the History of the World. The Rosen Publishing Group. p. 147. ISBN 978-1-4488-4797-6.
- Lynda Jessup (2001). Antimodernism and artistic experience: policing the boundaries of modernity. University of Toronto Press. p. 146. ISBN 978-0-8020-8354-8.
- Cheryl MacDonald (2009). Celebrated Pets: Endearing Tales of Companionship and Loyalty. Heritage House Publishing Co. pp. 57–. ISBN 978-1-894974-81-3.
- Iris Nowell (2011). Painters Eleven: The Wild Ones of Canadian Art. Douglas & McIntyre. p. 33. ISBN 978-1-55365-590-9.
- Sarah M. Corse (1997). Nationalism and literature: the politics of culture in Canada and the United States. Cambridge University Press. p. 60. ISBN 978-0-521-57912-4.
- W. J. Keith (2006). Canadian literature in English. The Porcupine's Quill. p. 19. ISBN 978-0-88984-283-0.
- "Robert Fulford's column about the international success of Canadian literature". Robertfulford.com. June 6, 2001. Retrieved February 25, 2011.
- Mary Virginia Brackett; Victoria Gaydosik (2006). The Facts on File Companion to the British Novel: Beginnings through the 19th century. Infobase Publishing. p. 323. ISBN 978-0-8160-5133-5.
- Shannon Eileen Hengen; Ashley Thomson (2007). Margaret Atwood: a reference guide, 1988–2005. Scarecrow Press. p. 272. ISBN 978-0-8108-5904-3.
- Yann Martel (2010). Beatrice and Virgil. Random House Digital, Inc. p. 212. ISBN 978-0-8129-8197-1.
- Abby H. P. Werlock (2001). Carol Shields's The stone diaries: a reader's guide. Continuum International Publishing Group. p. 69. ISBN 978-0-8264-5249-8.
- "The Nobel Prize in Literature 2013" (PDF) (Press release). 2013. Retrieved October 10, 2013.
- Julia Gaunce; Suzette Mayr; Don LePan; Marjorie Mather; Bryanne Miller (July 25, 2012). The Broadview Anthology of Short Fiction, second edition. Broadview Press. p. 236. ISBN 978-1-55481-076-5.
- Beth Osnes (2001). Acting: an International encyclopedia. ABC-CLIO. p. 57. ISBN 978-0-87436-795-9.
- Guek Cheng Pang (2004). Canada. Marshall Cavendish. pp. 102–. ISBN 978-0-7614-1788-0.
- Guek Cheng Pang (2004). Canada. Marshall Cavendish. pp. 121–. ISBN 978-0-7614-1788-0.
- Angelini, Paul Ubaldo (2011). Our Society: Human Diversity in Canada. Cengage Learning. p. 34. ISBN 9780176503543.
- "Toronto Theatre District". Showmetoronto.com. Retrieved 2013-02-12.
- "Discover". Quartier des spectacles. Retrieved 2013-02-12.
- "Montreal Festival | Just For Laughs". Hahaha.com. Retrieved 2013-02-12.
- Theatre BC. "Theatre Links". Theatre BC. Retrieved 2013-02-13.
- "2011 Operating Grant Recipients" (PDF). Calgary Arts Development (CADA). 2011. Retrieved 2013-02-13.
- Hale, James (2011). Frommer's Ottawa. John Wiley and Sons. p. 60. ISBN 978-0-470-68158-9.
- "NAC History | National Arts Centre". Nac-cna.ca. 1970-03-17. Retrieved 2011-06-07.
- "Great Canadian Theatre Company". Canadian Theatre Encyclopedia. 2011-01-13. Retrieved 2011-09-01.
- Julie K. Petersen (2002). The telecommunications illustrated dictionary. CRC Press. p. 152. ISBN 978-0-8493-1173-4.
- Patrick James; Mark J. Kasoff (2008). Canadian Studies in the New Millennium. University of Toronto Press. p. 157. ISBN 978-0-8020-9468-1.
- José Eduardo Igartua (2006). The other quiet revolution: national identities in English Canada, 1945–71. UBC Press. p. 229. ISBN 978-0-7748-1088-3.
- Vincent Mosco; Dan Schiller (2001). Continental order?: integrating North America for cybercapitalism. Rowman & Littlefield. p. 208. ISBN 978-0-7425-0954-2.
- John Punter (2003). The Vancouver achievement: urban planning and design. UBC Press. p. 4. ISBN 978-0-7748-0971-9.
- Mike Resnick (2007). Nebula Awards Showcase 2007. Penguin. p. 111. ISBN 978-1-4406-2261-8.
- Jay G. Blumler; T. J. Nossiter (1991). Broadcasting Finance in Transition: A Comparative Handbook. Oxford University Press. p. 32. ISBN 978-0-19-505089-9.
- Dana Rasmussen (2011). Canada's Influence on the Film Industry: Canada's Pioneers in Early Hollywood. BiblioBazaar. pp. iix (intro).
- Charles Foster (2000). Stardust and shadows: Canadians in early Hollywood. Dundurn Press Ltd. pp. 27–34. ISBN 978-1-55002-348-0.
- Gorham Anders Kindem (2000). The international movie industry. SIU Press (reprint). pp. 304–307. ISBN 978-0-8093-2299-2.
- Graham Fraser (2007). Sorry, I Don't Speak French: Confronting the Canadian Crisis That Won't Go Away. Random House Digital, Inc. p. 227. ISBN 978-0-7710-4767-1.
- Mark Kearney; Randy Ray (1998). The Great Canadian Trivia Book 2. Dundurn Press Ltd. pp. 80–. ISBN 978-0-88882-197-3.
- Ewan Ferlie; Laurence E. Lynn; Christopher Pollitt (2007). The Oxford handbook of public management. Oxford Handbooks Online. p. 457. ISBN 978-0-19-922644-3.
- "Toronto International Film Festival". Tiff.net. Retrieved February 25, 2011.
- Music in Canada 1600–1800. by Amtmann, Willy. Cambridge, Ont. : Habitex Books, 1975. 320 p.(ISBN 0-88912-020-X)
- La Musique au Québec 1600–1875. by Michelle Pharand. Montreal: Les Éditions de l'Homme (1976) (ISBN 0-7759-0517-8)
- Carl Morey (1997). Music in Canada: a research and information guide. Garland Pub. pp. ??. ISBN 978-0-8153-1603-9.
- "The history of broadcasting in Canada". The Canadian Communications Foundation.
- Profiles of Canada. edited by Kenneth G. Pryke, Walter C. Soderlund. Boulder, Colo. : NetLibrary, 2000.(ISBN 0-585-27925-X)
- "History of Canada in music". Historica Foundation of Canada.
- Canadian Music: Issues of Hegemony & Identity, eds Beveley Diamond & Robert Witmer. Canadian Scholars Press, 1994.
- Joan Nicks; Jeannette Sloniowski (2002). Slippery pastimes: reading the popular in Canadian culture. Wilfrid Laurier Univ. Press. p. 219. ISBN 978-0-88920-388-4.
- Adam Jortner (2011). The Gods of Prophetstown: The Battle of Tippecanoe and the Holy War for the American Frontier. Oxford University Press. p. 217. ISBN 978-0-19-976529-4.
- Government of Canada (2008-06-23). "Hymne national du Canada". Canadian Heritage. Government of Canada. Retrieved 2008-06-26.
- "'O Canada'". Historica-Dominion. Retrieved October 28, 2009.
- "Hymne national du Canada". Canadian Heritage. June 23, 2008. Retrieved June 26, 2008.
- Edwardson, Ryan (2008). Canadian content, culture and the quest for nationhood. University of Toronto Press. p. 127. ISBN 978-0-8020-9759-0.
- "Canada boasts the third-largest video game industry". Networkworld.com. April 6, 2010. Retrieved May 12, 2011.
- "Canada's Video Game Industry in 2011". Tech Vibes. Retrieved November 10, 2011.
- "The evolution of video games in Canada". CBC News. September 13, 2010. Retrieved May 11, 2011.
- "Canadian-made games and the question of outsourcing". CBC News. February 20, 2011. Retrieved May 23, 2011.
- "Canada's gaming industry is kicking butt | FP Tech Desk | Financial Post". Business.financialpost.com. Retrieved February 3, 2012.
- Mike Brake (1990). Comparative Youth Culture: The Sociology of Youth Cultures and Youth Subcultures in America, Britain, and Canada. Routledge. p. 160. ISBN 978-0-415-05108-8.
- Steven Globerman; Institute for Research on Public Policy (1983). Cultural Regulation in Canada. IRPP. p. 18. ISBN 978-0-920380-81-9.
- Robin Mansell; Marc Raboy (2011). The Handbook of Global Media and Communication Policy. John Wiley & Sons. ISBN 978-1-4443-9542-6.
- Paul Attallah; Leslie Regan Shade (2006). Mediascapes: new patterns in Canadian communication. Thomson Nelson. p. 227. ISBN 978-0-17-640652-3.
- Don Morrow; Kevin B. Wamsley (2013). Sport in Canada: A History. Oxford University Press. pp. 1–4. ISBN 978-0-19-544672-2.
- Bruce Kidd (1996). The struggle for Canadian sport. University of Toronto Press. pp. 189–. ISBN 978-0-8020-7664-9.
- Canadian Press (June 8, 2006). "Survey: Canadian interest in pro football is on the rise". Globe and Mail. Canada. Retrieved June 8, 2006.
- Glenn M. Wong (2009). The comprehensive guide to careers in sports. Jones & Bartlett Learning. p. 105. ISBN 978-0-7637-2884-7.
- "Canada's hockey obsession leading to burnout among young players". Canada.com. September 16, 2008.
- "World Cup TV ratings and soccer championships show huge rise". CBC News.
- Victor J. Danilov (1997). Hall of fame museums: a reference guide. Greenwood Publishing Group. p. 24. ISBN 978-0-313-30000-4.
- Edward Zawadzki (2001). The Ultimate Canadian Sports Trivia Book. Dundurn Press Ltd. p. 190. ISBN 978-0-88882-237-6.
- Pandi, George (April 5, 2008), "Let's eat Canadian, but is there really a national dish?", The Gazette (Montreal) Also published as "Canadian cuisine a smorgasbord of regional flavours"
- Trillin, Calvin (November 23, 2009), "Canadian Journal, 'Funny Food'", The New Yorker: 68–70
- Wong, Grace (October 2, 2010), Canada's national dish: 740 calories—and worth every bite?, CNN
- Sufrin, Jon (April 22, 2010), "Is poutine Canada's national food? Two arguments for, two against", Toronto Life
- Baird, Elizabeth (June 30, 2009), "Does Canada Have a National Dish?", Canadian Living
- DeMONTIS, RITA (June 21, 2010), "Canadians butter up to this tart", Toronto Sun
- Jennifer Andrews (2015). "34 Uniquely Canadian Foods (Other Than Poutine)". Ricotta & Radishes. Retrieved December 1, 2015.
- "Maple Syrup." Ontario Ministry of Agriculture, Food and Rural Affairs. Accessed July 2011.
- Linda Civitello (2011). Cuisine and Culture: A History of Food and People. John Wiley & Sons. pp. 401–402. ISBN 978-0-470-40371-6.
- Gail Simmons (2012). Talking with My Mouth Full: My Life as a Professional Eater. Hyperion. p. 45. ISBN 978-1-4013-0415-7.
- (Associated Press), "Letters reveal candid views of UK diplomats", WTOP, October 18, 2009.
- Linda A. White; Richard Simeon (2009). The Comparative Turn in Canadian Political Science. UBC Press. p. 102. ISBN 978-0-7748-1428-7.
- Stackhouse, John; Martin, Patrick (February 2, 2002), "Canada: 'A model for the world'", The Globe and Mail, Toronto, p. F3, retrieved June 29, 2009,
Canada is today the most successful pluralist society on the face of our globe, without any doubt in my mind. . . . That is something unique to Canada. It is an amazing global human asset
- "Canada – A Good Influence on the World". Canadavisa.com. March 7, 2007. Retrieved June 30, 2010.
- "Americans and Canadians -The North American Not-so-odd Couple". Pew Research Center. 2004.
- Mercer Human Res Consulting, Inc. (2009). The Global Manager's Guide to Living and Working Abroad: Western Europe and the Americas: Western Europe and the Americas. ABC-CLIO. p. 67. ISBN 978-0-313-35884-5.
- "The Myths that Made Canada". SeacoastNH.com. As I Please (column). March 14, 2004. Retrieved June 10, 2011.
- Mel Atkey (2006). Broadway North: The Dream of a Canadian Musical Theatre. Dundurn. p. 17. ISBN 978-1-4597-2120-3.
- Bart Beaty; Derek Briton; Gloria Filax (2010). How Canadians Communicate III: Contexts of Canadian Popular Culture. Athabasca University Press. ISBN 978-1-897425-59-6.
- David Carment; David Bercuson (2008). The World in Canada: Diaspora, Demography, and Domestic Politics. McGill-Queen's University Press. ISBN 978-0-7735-7455-7.
- Dominique Clément (2009). Canada's Rights Revolution: Social Movements and Social Change, 1937–82. UBC Press. ISBN 978-0-7748-5843-4.
- David H. Flaherty; Frank E. Manning (1993). Beaver Bites Back?. McGill-Queen's University Press. ISBN 978-0-7735-6429-9.
- Anne Howells (2004). Where are the voices coming from?: Canadian culture and the legacies of history. Rodopi. ISBN 978-90-420-1623-1.
- Mark Kearney; Randy Ray (2009). The Big Book of Canadian Trivia. Dundurn. ISBN 978-1-77070-614-9.
- Kearney, Mark; Ray, Randy (1999). Great Canadian Book of Lists. Dundurn. ISBN 978-0-88882-213-0.
- Andrew Podnieks (2006). A Canadian Saturday Night: Hockey and the Culture of a Country. Greystone Books Ltd. ISBN 978-1-926812-05-2.
- David Morton Rayside; Clyde Wilcox (2011). Faith, Politics, and Sexual Diversity in Canada and the United States. UBC Press. ISBN 978-0-7748-2009-7.
- Nelson Wiseman (2011). In Search of Canadian Political Culture. UBC Press. ISBN 978-0-7748-4061-3.
74 | What is retained earnings in simple words?
Retained earnings (RE) is the amount of net income left over for the business after it has paid out dividends to its shareholders. A business generates earnings that can be positive (profits) or negative (losses). The money not paid to shareholders counts as retained earnings.
What is an example of retained earnings?
For example, if a company sells $1 million in goods and is required to pay $200,000 out to shareholders, $1 million would be the company’s revenue while $800,000 ($1 million minus $200,000) would be the company’s retained earnings.
How do you calculate retained earnings on a balance sheet?
To calculate retained earnings from a balance sheet, subtract a company’s liabilities from its assets to get stockholder equity, then find the common stock line item and subtract that figure from total stockholder equity; if the only two items in stockholder equity are common stock and retained earnings, what remains is retained earnings.
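A minimal numeric sketch of that balance-sheet approach (all figures are hypothetical, and the equity section is assumed to contain only common stock and retained earnings):

```python
# Hypothetical balance-sheet figures, for illustration only.
total_assets = 500_000
total_liabilities = 300_000
common_stock = 120_000

stockholder_equity = total_assets - total_liabilities  # 200,000
retained_earnings = stockholder_equity - common_stock  # 80,000
print(retained_earnings)  # 80000
```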
How do you use retained earnings?
Uses of retained earnings include:
- Expansion: the company may use the retained earnings to fund an expansion of its operations.
- New product launch: a company may also use the retained earnings to finance a new product launch and so add to the company’s list of product offerings.
- Dividend payments.
- Merger or acquisition.
Is Retained earnings considered an asset?
Are retained earnings an asset? Retained earnings are actually reported in the equity section of the balance sheet. Although you can invest retained earnings into assets, they themselves are not assets.
Is Retained Earnings actual cash?
Retained earnings is the collective net income since a company began, minus all of the dividends that the company has declared since it began. The amount is usually invested in assets or used to reduce liabilities. Retained earnings are rarely entirely cash.
What is the difference between retained earnings and net income?
Net income is the profit earned for a period. Any net income that is not paid out to shareholders at the end of a reporting period becomes retained earnings. Retained earnings are then carried over to the balance sheet, where they are reported under shareholders’ equity.
What happens to retained earnings at year end?
At the end of the fiscal year, closing entries are used to shift the entire balance in every temporary account into retained earnings, which is a permanent account. The net amount of the balances shifted constitutes the gain or loss that the company earned during the period.
Is Retained earnings debit or credit?
The normal balance in the retained earnings account is a credit. This means that if you want to increase the retained earnings account, you will make a credit journal entry. A debit journal entry will decrease this account.
What is the journal entry for retained earnings?
Dividends. When dividends are declared by a corporation’s board of directors, a journal entry is made on the declaration date to debit Retained Earnings and credit the current liability Dividends Payable. It is the declaration of cash dividends that reduces Retained Earnings.
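A minimal sketch of that declaration-date entry, using an invented $50,000 dividend; the dictionary layout is only an illustration, not the format of any particular accounting package:

```python
# Hypothetical journal entry for a $50,000 dividend declaration.
entry = [
    {"account": "Retained Earnings", "debit": 50_000, "credit": 0},
    {"account": "Dividends Payable", "debit": 0, "credit": 50_000},
]
# A valid double-entry posting always balances.
assert sum(line["debit"] for line in entry) == sum(line["credit"] for line in entry)
```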
Is Retained earnings a equity?
Retained earnings are a company’s net income from operations and other business activities retained by the company as additional equity capital. Retained earnings are thus a part of stockholders’ equity. They represent returns on total stockholders’ equity reinvested back into the company.
How do you reconcile retained earnings?
The retained earnings calculation or formula is quite simple. Beginning retained earnings corrected for adjustments, plus net income, minus dividends, equals ending retained earnings. Just like the statement of shareholders’ equity, the statement of retained earnings is a basic reconciliation.
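A worked sketch of that reconciliation, with hypothetical figures for a single period:

```python
# Hypothetical figures for one reporting period.
beginning_retained_earnings = 100_000
prior_period_adjustments = 0      # assume no corrections this period
net_income = 40_000
dividends_declared = 15_000

ending_retained_earnings = (beginning_retained_earnings
                            + prior_period_adjustments
                            + net_income
                            - dividends_declared)
print(ending_retained_earnings)  # 125000
```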
Can you spend retained earnings?
Retained earnings can be used to pay additional dividends, finance business growth, invest in a new product line, or even pay back a loan. Most companies with a healthy retained earnings balance will try to strike the right combination of making shareholders happy while also financing business growth.
Are Retained earnings taxed?
Retained earnings can be kept in a separate account and are tax-exempt until they are distributed as salary, dividends, or bonuses. Salary and bonuses can be deducted from corporate income tax, but are taxed at the individual level. Dividends are not tax-deductible.
Can I use retained earnings for investing?
To fund fixed asset purchases: retained earnings can also be used to fund the CAPEX plans of the company. | https://www.virginialeenlaw.com/trends/question-what-is-retained-earnings.html | 21
18 | The term landslide or, less frequently, landslip, refers to several forms of mass wasting that may include a wide range of ground movements, such as rockfalls, deep-seated slope failures, mudflows, and debris flows. Landslides occur in a variety of environments, characterized by either steep or gentle slope gradients, from mountain ranges to coastal cliffs or even underwater, in which case they are called submarine landslides. Gravity is the primary driving force for a landslide to occur, but there are other factors affecting slope stability that produce specific conditions that make a slope prone to failure. In many cases, the landslide is triggered by a specific event (such as a heavy rainfall, an earthquake, a slope cut to build a road, and many others), although this is not always identifiable.
Landslides occur when the slope (or a portion of it) undergoes some processes that change its condition from stable to unstable. This is essentially due to a decrease in the shear strength of the slope material, an increase in the shear stress borne by the material, or a combination of the two. A change in the stability of a slope can be caused by a number of factors, acting together or alone. Natural causes of landslides include:
- saturation by rain water infiltration, snow melting, or glaciers melting;
- rising of groundwater or increase of pore water pressure (e.g. due to aquifer recharge in rainy seasons, or by rain water infiltration);
- increase of hydrostatic pressure in cracks and fractures;
- loss or absence of vertical vegetative structure, soil nutrients, and soil structure (e.g. after a wildfire);
- erosion of the toe of a slope by rivers or sea waves;
- physical and chemical weathering (e.g. by repeated freezing and thawing, heating and cooling, salt leaking in the groundwater or mineral dissolution);
- ground shaking caused by earthquakes, which can destabilize the slope directly (e.g., by inducing soil liquefaction) or weaken the material and cause cracks that will eventually produce a landslide;
- volcanic eruptions;
Landslides are aggravated by human activities, such as:
- deforestation, cultivation and construction;
- vibrations from machinery or traffic;
- blasting and mining;
- earthwork (e.g. by altering the shape of a slope, or imposing new loads);
- in shallow soils, the removal of deep-rooted vegetation that binds colluvium to bedrock;
- agricultural or forestry activities (logging), and urbanization, which change the amount of water infiltrating the soil.
- temporal variation in land use and land cover (LULC): it includes the human abandonment of farming areas, e.g. due to the economic and social transformations which occurred in Europe after the Second World War. Land degradation and extreme rainfall can increase the frequency of erosion and landslide phenomena.
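The balance between shear strength and shear stress described above is commonly summarised as a factor of safety (resisting strength divided by driving stress), with failure expected once it drops below 1. The following is a minimal sketch for the idealised case of an infinite planar slope; the formula is the standard infinite-slope expression, and every parameter value is an invented, illustrative number rather than data from any real slope.

```python
import math

# Infinite-slope factor of safety: FS = shear strength / shear stress.
# All parameter values are hypothetical, for illustration only.
cohesion = 5.0e3         # effective cohesion c' (Pa)
friction_angle = 30.0    # effective friction angle phi' (degrees)
unit_weight = 18.0e3     # soil unit weight gamma (N/m^3)
depth = 2.0              # depth of the potential slip surface z (m)
slope_angle = 35.0       # slope inclination beta (degrees)
pore_pressure = 4.0e3    # pore water pressure u on the slip surface (Pa)

beta = math.radians(slope_angle)
phi = math.radians(friction_angle)

normal_stress = unit_weight * depth * math.cos(beta) ** 2 - pore_pressure
shear_strength = cohesion + normal_stress * math.tan(phi)              # resisting
shear_stress = unit_weight * depth * math.sin(beta) * math.cos(beta)   # driving

factor_of_safety = shear_strength / shear_stress
print(round(factor_of_safety, 2))  # values below 1 indicate expected failure
```

Rainfall infiltration, for example, raises the pore pressure term and lowers the factor of safety, which is one way of reading the list of triggers above.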
In traditional usage, the term landslide has at one time or another been used to cover almost all forms of mass movement of rocks and regolith at the Earth's surface. In 1978, geologist David Varnes noted this imprecise usage and proposed a new, much tighter scheme for the classification of mass movements and subsidence processes. This scheme was later modified by Cruden and Varnes in 1996, and refined by Hutchinson (1988), Hungr et al. (2001), and finally by Hungr, Leroueil and Picarelli (2014). The classification resulting from the latest update is provided below.
|Type of movement|Rock|Soil|
|---|---|---|
|Fall|Rock/ice fall|Boulder/debris/silt fall|
|Topple|Rock block topple; Rock flexural topple|Gravel/sand/silt topple|
|Slide|Rock rotational slide; Rock planar slide; Rock wedge slide; Rock compound slide; Rock irregular slide|Clay/silt rotational slide; Clay/silt planar slide; Gravel/sand/debris slide; Clay/silt compound slide|
|Spread|Rock slope spread|Sand/silt liquefaction spread; Sensitive clay spread|
|Flow|Rock/ice avalanche|Sand/silt/debris dry flow; Sensitive clay flowslide|
|Slope deformation|Mountain slope deformation; Rock slope deformation|Soil slope deformation; Soil creep|
Note: the material words (rock, ice, boulder, debris, silt, clay, and so on) are placeholders; only one applies to a given event.
Under this classification, six types of movement are recognized. Each type can be seen both in rock and in soil. A fall is a movement of isolated blocks or chunks of soil in free-fall. The term topple refers to blocks coming away by rotation from a vertical face. A slide is the movement of a body of material that generally remains intact while moving over one or several inclined surfaces or thin layers of material (also called shear zones) in which large deformations are concentrated. Slides are also sub-classified by the form of the surface(s) or shear zone(s) on which movement happens. The planes may be broadly parallel to the surface ("planar slides") or spoon-shaped ("rotational slides"). Slides can occur catastrophically, but movement on the surface can also be gradual and progressive. Spreads are a form of subsidence, in which a layer of material cracks, opens up, and expands laterally. Flows are the movement of fluidised material, which can be both dry or rich in water (such as in mud flows). Flows can move imperceptibly for years, or accelerate rapidly and cause disasters. Slope deformations are slow, distributed movements that can affect entire mountain slopes or portions of it. Some landslides are complex in the sense that they feature different movement types in different portions of the moving body, or they evolve from one movement type to another over time. For example, a landslide can initiate as a rock fall or topple and then, as the blocks disintegrate upon the impact, transform into a debris slide or flow. An avalanching effect can also be present, in which the moving mass entrains additional material along its path.
Slope material that becomes saturated with water may produce a debris flow or mud flow. However, dry debris can also exhibit flow-like movement. Flowing debris or mud may pick up trees, houses and cars, and block bridges and rivers, causing flooding along its path. This phenomenon is particularly hazardous in alpine areas, where narrow gorges and steep valleys are conducive to faster flows. Debris and mud flows may initiate on the slopes or result from the fluidization of landslide material as it gains speed or incorporates further debris and water along its path. River blockages as the flow reaches a main stream can generate temporary dams. As the impoundments fail, a domino effect may be created, with a remarkable growth in the volume of the flowing mass, and in its destructive power.
An earthflow is the downslope movement of mostly fine-grained material. Earthflows can move at speeds within a very wide range, from as low as 1 mm/yr to many km/h. Though they are much like mudflows, overall they are slower-moving and are covered with solid material carried along by the flow from within. Clay, fine sand and silt, and fine-grained, pyroclastic material are all susceptible to earthflows. These flows are usually controlled by the pore water pressures within the mass, which must be high enough to produce a low shearing resistance. On the slopes, some earthflows may be recognized by their elongated shape, with one or more lobes at their toes. As these lobes spread out, drainage of the mass increases and the margins dry out, lowering the overall velocity of the flow. This process also causes the flow to thicken. Earthflows occur more often during periods of high precipitation, which saturates the ground and builds up water pressures. However, earthflows that keep advancing during dry seasons are not uncommon. Fissures may develop during the movement of clayey materials, which facilitate the intrusion of water into the moving mass and produce faster responses to precipitation.
A rock avalanche, sometimes referred to as sturzstrom, is a large and fast-moving landslide of the flow type. It is rarer than other types of landslides but it is often very destructive. It typically exhibits a long runout, flowing very far over a low-angle, flat, or even slightly uphill terrain. The mechanisms favoring the long runout can be different, but they typically result in the weakening of the sliding mass as the speed increases. The causes of this weakening are not completely understood. Especially for the largest landslides, it may involve the very quick heating of the shear zone due to friction, which may even cause the water that is present to vaporize and build up a large pressure, producing a sort of hovercraft effect. In some cases, the very high temperature may even cause some of the minerals to melt. During the movement, the rock in the shear zone may also be finely ground, producing a nanometer-size mineral powder that may act as a lubricant, reducing the resistance to motion and promoting larger speeds and longer runouts. The weakening mechanisms in large rock avalanches are similar to those occurring in seismic faults.
Slides can occur in any rock or soil material and are characterized by the movement of a mass over a planar or curvilinear surface or shear zone.
A debris slide is a type of slide characterized by the chaotic movement of material mixed with water and/or ice. It is usually triggered by the saturation of thickly vegetated slopes which results in an incoherent mixture of broken timber, smaller vegetation and other debris. Debris flows and avalanches differ from debris slides because their movement is fluid-like and generally much more rapid. This is usually a result of lower shear resistances and steeper slopes. Debris slides generally begin with the detachment of rock chunks high on the slopes, which break apart as they slide towards the bottom.
Clay and silt slides are usually slow but can experience episodic acceleration in response to heavy rainfall or rapid snowmelt. They are often seen on gentle slopes and move over planar surfaces, such as over the underlying bedrock. Failure surfaces can also form within the clay or silt layer itself, and they usually have concave shapes, resulting in rotational slides.
Shallow and deep-seated landslides
A landslide in which the sliding surface is located within the soil mantle or weathered bedrock (typically to a depth from a few decimeters to some meters) is called a shallow landslide. Debris slides and debris flows are usually shallow. Shallow landslides can often happen in areas that have slopes with highly permeable soils on top of low-permeability soils. The low-permeability soil traps the water in the shallower soil, generating high water pressures. As the top soil is filled with water, it can become unstable and slide downslope.
Deep-seated landslides are those in which the sliding surface is mostly deeply located, for instance well below the maximum rooting depth of trees. They usually involve deep regolith, weathered rock, and/or bedrock and include large slope failures associated with translational, rotational, or complex movements. They tend to form along a plane of weakness such as a fault or bedding plane. They can be visually identified by concave scarps at the top and steep areas at the toe.
Landslides that occur undersea, or that impact water (e.g. a significant rockfall or a volcanic collapse into the sea), can generate tsunamis. Massive landslides can also generate megatsunamis, which are usually hundreds of meters high. In 1958, one such tsunami occurred in Lituya Bay in Alaska.
- An avalanche, similar in mechanism to a landslide, involves a large amount of ice, snow and rock falling quickly down the side of a mountain.
- A pyroclastic flow is caused by a collapsing cloud of hot ash, gas and rocks from a volcanic explosion that moves rapidly down an erupting volcano.
Landslide prediction mapping
Landslide hazard analysis and mapping can provide useful information for catastrophic loss reduction, and assist in the development of guidelines for sustainable land-use planning. The analysis is used to identify the factors that are related to landslides, estimate the relative contribution of factors causing slope failures, establish a relation between the factors and landslides, and predict the landslide hazard in the future based on such a relationship. The factors that have been used for landslide hazard analysis can usually be grouped into geomorphology, geology, land use/land cover, and hydrogeology. Since many factors are considered for landslide hazard mapping, GIS is an appropriate tool because it has functions of collection, storage, manipulation, display, and analysis of large amounts of spatially referenced data which can be handled fast and effectively. Cardenas reported evidence on the exhaustive use of GIS in conjunction with uncertainty modelling tools for landslide mapping. Remote sensing techniques are also highly employed for landslide hazard assessment and analysis. Before-and-after aerial photographs and satellite imagery are used to gather landslide characteristics, like distribution and classification, and factors like slope, lithology, and land use/land cover to be used to help predict future events. Before-and-after imagery also helps to reveal how the landscape changed after an event, what may have triggered the landslide, and shows the process of regeneration and recovery.
Using satellite imagery in combination with GIS and on-the-ground studies, it is possible to generate maps of likely occurrences of future landslides. Such maps should show the locations of previous events as well as clearly indicate the probable locations of future events. In general, to predict landslides, one must assume that their occurrence is determined by certain geologic factors, and that future landslides will occur under the same conditions as past events. Therefore, it is necessary to establish a relationship between the geomorphologic conditions in which the past events took place and the expected future conditions.
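The references below include logistic-regression approaches to this kind of susceptibility mapping (e.g. Chen and Wang 2007; Ohlmacher 2003). The following is a minimal, hypothetical sketch of that idea using synthetic data; the factor names, coefficients, and values are invented for illustration and are not taken from any real inventory or study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "inventory": each row is a terrain cell with a few conditioning
# factors (slope in degrees, distance to drainage in m, a land-cover code)
# and a label saying whether a past landslide was mapped in that cell.
n = 500
slope = rng.uniform(0, 45, n)
dist_to_drainage = rng.uniform(0, 500, n)
land_cover = rng.integers(0, 3, n)            # 0 = forest, 1 = grass, 2 = bare
X = np.column_stack([slope, dist_to_drainage, land_cover])

# Fabricated labels: steeper, barer cells near drainage fail more often.
logit = 0.15 * slope - 0.01 * dist_to_drainage + 1.0 * (land_cover == 2) - 3.0
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

model = LogisticRegression(max_iter=1000).fit(X, y)

# "Susceptibility" of new cells = predicted probability of the landslide class.
new_cells = np.array([[35.0, 50.0, 2], [10.0, 400.0, 0]])
print(model.predict_proba(new_cells)[:, 1])   # higher value = more susceptible
```

In practice the features would come from GIS layers and a mapped landslide inventory rather than random numbers, and the fitted probabilities would be written back to a raster to form the susceptibility map.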
Natural disasters are a dramatic example of people living in conflict with the environment. Early predictions and warnings are essential for the reduction of property damage and loss of life. Because landslides occur frequently and can represent some of the most destructive forces on earth, it is imperative to have a good understanding as to what causes them and how people can either help prevent them from occurring or simply avoid them when they do occur. Sustainable land management and development is also an essential key to reducing the negative impacts felt by landslides.
GIS offers a superior method for landslide analysis because it allows one to capture, store, manipulate, analyze, and display large amounts of data quickly and effectively. Because so many variables are involved, it is important to be able to overlay the many layers of data to develop a full and accurate portrayal of what is taking place on the Earth's surface. Researchers need to know which variables are the most important factors that trigger landslides in any given location. Using GIS, extremely detailed maps can be generated to show past events and likely future events which have the potential to save lives, property, and money.
Since the 1990s, GIS has also been used successfully in conjunction with decision support systems, to show real-time risk evaluations on a map based on monitoring data gathered in the area of the Val Pola disaster (Italy).
- Storegga Slide, some 8,000 years ago off the western coast of Norway. Caused massive tsunamis in Doggerland and other areas connected to the North Sea. A total volume of 3,500 km3 (840 cu mi) of debris was involved; comparable to a 34 m (112 ft) thick area the size of Iceland. The landslide is thought to be among the largest in history.
- Landslide which moved Heart Mountain to its current location, the largest continental landslide discovered so far. In the 48 million years since the slide occurred, erosion has removed most of the slide material.
- Flims Rockslide, ca. 12 km3 (2.9 cu mi), Switzerland, some 10,000 years ago in post-glacial Pleistocene/Holocene, the largest so far described in the Alps and on dry land that can be easily identified in a modestly eroded state.
- The landslide around 200 BC which formed Lake Waikaremoana on the North Island of New Zealand, where a large block of the Ngamoko Range slid and dammed a gorge of Waikaretaheke River, forming a natural reservoir up to 256 metres (840 ft) deep.
- Cheekye Fan, British Columbia, Canada, ca. 25 km2 (9.7 sq mi), Late Pleistocene in age.
- The Manang-Braga rock avalanche/debris flow may have formed Marsyangdi Valley in the Annapurna Region, Nepal, during an interstadial period belonging to the last glacial period. Over 15 km3 of material are estimated to have been moved in the single event, making it one of the largest continental landslides.
- A massive slope failure 60 km north of Kathmandu Nepal, involving an estimated 10–15 km3. Prior to this landslide the mountain may have been the world's 15th mountain above 8000m.
- The 1806 Goldau landslide on September 2, 1806
- The Cap Diamant Québec rockslide on September 19, 1889
- Frank Slide, Turtle Mountain, Alberta, Canada, on 29 April 1903
- Khait landslide, Khait, Tajikistan, Soviet Union, on July 10, 1949
- A Magnitude 7.5 earthquake in Yellowstone Park (August 17, 1959) caused a landslide that blocked the Madison River, and created Quake Lake.
- Monte Toc landslide (260 million cubic metres, 9.2 billion cubic feet) falling into the Vajont Dam basin in Italy, causing a megatsunami and about 2000 deaths, on October 9, 1963
- Hope Slide landslide (46 million cubic metres, 1.6 billion cubic feet) near Hope, British Columbia on January 9, 1965.
- The 1966 Aberfan disaster
- Tuve landslide in Gothenburg, Sweden on November 30, 1977.
- The 1979 Abbotsford landslip, Dunedin, New Zealand on August 8, 1979.
- The eruption of Mount St. Helens (May 18, 1980) caused an enormous landslide when the top 1300 feet of the volcano suddenly gave way.
- Val Pola landslide during Valtellina disaster (1987) Italy
- Thredbo landslide, Australia, on 30 July 1997, which destroyed a hostel.
- Vargas mudslides, due to heavy rains in Vargas State, Venezuela, in December, 1999, causing tens of thousands of deaths.
- 2005 La Conchita landslide in Ventura, California causing 10 deaths.
- 2007 Chittagong mudslide, in Chittagong, Bangladesh, on June 11, 2007.
- 2008 Cairo landslide on September 6, 2008.
- The 2009 Peloritani Mountains disaster caused 37 deaths, on October 1.
- The 2010 Uganda landslide caused over 100 deaths following heavy rain in Bududa region.
- Zhouqu county mudslide in Gansu, China on August 8, 2010.
- Devil's Slide, an ongoing landslide in San Mateo County, California
- 2011 Rio de Janeiro landslide in Rio de Janeiro, Brazil on January 11, 2011, causing 610 deaths.
- 2014 Pune landslide, in Pune, India.
- 2014 Oso mudslide, in Oso, Washington
- 2017 Mocoa landslide, in Mocoa, Colombia
Evidence of past landslides has been detected on many bodies in the solar system, but since most observations are made by probes that only observe for a limited time and most bodies in the solar system appear to be geologically inactive, not many landslides are known to have happened in recent times. Both Venus and Mars have been subject to long-term mapping by orbiting satellites, and examples of landslides have been observed on both planets.
- "Landslide synonyms". www.thesaurus.com. Roget's 21st Century Thesaurus. 2013. Retrieved 16 March 2018.
- McGraw-Hill Encyclopedia of Science & Technology, 11th Edition, ISBN 9780071778343, 2012
- USGS factsheet, Landslide Types and Processes, 2004.
- Hungr, Oldrich; Leroueil, Serge; Picarelli, Luciano (2014-04-01). "The Varnes classification of landslide types, an update". Landslides. 11 (2): 167–194. doi:10.1007/s10346-013-0436-y. ISSN 1612-5118. S2CID 38328696.
- Haflidason, Haflidi; Sejrup, Hans Petter; Nygård, Atle; Mienert, Jurgen; Bryn, Petter; Lien, Reidar; Forsberg, Carl Fredrik; Berg, Kjell; Masson, Doug (2004-12-15). "The Storegga Slide: architecture, geometry and slide development". Marine Geology. COSTA - Continental Slope Stability. 213 (1): 201–234. Bibcode:2004MGeol.213..201H. doi:10.1016/j.margeo.2004.10.007. ISSN 0025-3227.
- Subramanian, S. Siva; Fan, X.; Yunus, A. P.; Asch, T. van; Scaringi, G.; Xu, Q.; Dai, L.; Ishikawa, T.; Huang, R. (2020). "A Sequentially Coupled Catchment-Scale Numerical Model for Snowmelt-Induced Soil Slope Instabilities". Journal of Geophysical Research: Earth Surface. 125 (5): e2019JF005468. Bibcode:2020JGRF..12505468S. doi:10.1029/2019JF005468. ISSN 2169-9011.
- Hu, Wei; Scaringi, Gianvito; Xu, Qiang; Van Asch, Theo W. J. (2018-04-10). "Suction and rate-dependent behaviour of a shear-zone soil from a landslide in a gently-inclined mudstone-sandstone sequence in the Sichuan basin, China". Engineering Geology. 237: 1–11. doi:10.1016/j.enggeo.2018.02.005. ISSN 0013-7952.
- Fan, Xuanmei; Xu, Qiang; Scaringi, Gianvito (2017-12-01). "Failure mechanism and kinematics of the deadly June 24th 2017 Xinmo landslide, Maoxian, Sichuan, China". Landslides. 14 (6): 2129–2146. doi:10.1007/s10346-017-0907-7. ISSN 1612-5118. S2CID 133681894.
- Rengers, Francis K.; McGuire, Luke A.; Oakley, Nina S.; Kean, Jason W.; Staley, Dennis M.; Tang, Hui (2020-11-01). "Landslides after wildfire: initiation, magnitude, and mobility". Landslides. 17 (11): 2631–2641. doi:10.1007/s10346-020-01506-3. ISSN 1612-5118. S2CID 221110680.
- Edil, T. B.; Vallejo, L. E. (1980-07-01). "Mechanics of coastal landslides and the influence of slope parameters". Engineering Geology. Special Issue Mechanics of Landslides and Slope Stability. 16 (1): 83–96. doi:10.1016/0013-7952(80)90009-5. ISSN 0013-7952.
- Di Maio, Caterina; Vassallo, Roberto; Scaringi, Gianvito; De Rosa, Jacopo; Pontolillo, Dario Michele; Maria Grimaldi, Giuseppe (2017-11-01). "Monitoring and analysis of an earthflow in tectonized clay shales and study of a remedial intervention by KCl wells". Rivista Italiana di Geotecnica. 51 (3): 48–63. doi:10.19199/2017.3.0557-1405.048.
- Di Maio, Caterina; Scaringi, Gianvito; Vassallo, R (2014-01-01). "Residual strength and creep behaviour on the slip surface of specimens of a landslide in marine origin clay shales: influence of pore fluid composition". Landslides. 12 (4): 657–667. doi:10.1007/s10346-014-0511-z. S2CID 127489377.
- Fan, Xuanmei; Xu, Qiang; Scaringi, Gianvito; Li, Shu; Peng, Dalei (2017-10-13). "A chemo-mechanical insight into the failure mechanism of frequently occurred landslides in the Loess Plateau, Gansu Province, China". Engineering Geology. 228: 337–345. doi:10.1016/j.enggeo.2017.09.003. ISSN 0013-7952.
- Fan, Xuanmei; Scaringi, Gianvito; Domènech, Guillem; Yang, Fan; Guo, Xiaojun; Dai, Lanxin; He, Chaoyang; Xu, Qiang; Huang, Runqiu (2019-01-09). "Two multi-temporal datasets that track the enhanced landsliding after the 2008 Wenchuan earthquake". Earth System Science Data. 11 (1): 35–55. Bibcode:2019ESSD...11...35F. doi:10.5194/essd-11-35-2019. ISSN 1866-3508.
- Fan, Xuanmei; Xu, Qiang; Scaringi, Gianvito (2018-01-26). "Brief communication: Post-seismic landslides, the tough lesson of a catastrophe". Natural Hazards and Earth System Sciences. 18 (1): 397–403. Bibcode:2018NHESS..18..397F. doi:10.5194/nhess-18-397-2018. ISSN 1561-8633.
- WATT, SEBASTIAN F.L.; TALLING, PETER J.; HUNT, JAMES E. (2014). "New Insights into the Emplacement Dynamics of Volcanic Island Landslides". Oceanography. 27 (2): 46–57. doi:10.5670/oceanog.2014.39. ISSN 1042-8275. JSTOR 24862154.
- Laimer, Hans Jörg (2017-05-18). "Anthropogenically induced landslides – A challenge for railway infrastructure in mountainous regions". Engineering Geology. 222: 92–101. doi:10.1016/j.enggeo.2017.03.015. ISSN 0013-7952.
- Fan, Xuanmei; Xu, Qiang; Scaringi, Gianvito (2018-10-24). "The "long" runout rock avalanche in Pusa, China, on August 28, 2017: a preliminary report". Landslides. 16: 139–154. doi:10.1007/s10346-018-1084-z. ISSN 1612-5118. S2CID 133852769.
- Giacomo Pepe; Andrea Mandarino; Emanuele Raso; Patrizio Scarpellini; Pierluigi Brandolini; Andrea Cevasco (2019). "Investigation on Farmland Abandonment of Terraced Slopes Using Multitemporal Data Sources Comparison and Its Implication on Hydro-Geomorphological Processes". Water. MDPI. 8 (11): 1552. doi:10.3390/w11081552. ISSN 2073-4441. OCLC 8206777258., at the introductory section.
- Varnes D. J., Slope movement types and processes. In: Schuster R. L. & Krizek R. J. Ed., Landslides, analysis and control. Transportation Research Board Sp. Rep. No. 176, Nat. Acad. oi Sciences, pp. 11–33, 1978.
- Cruden, David M., and David J. Varnes. "Landslides: investigation and mitigation. Chapter 3-Landslide types and processes." Transportation research board special report 247 (1996).
- Hutchinson, J. N. "General report: morphological and geotechnical parameters of landslides in relation to geology and hydrogeology." International symposium on landslides. 5. 1988.
- Hungr O, Evans SG, Bovis M, and Hutchinson JN (2001) Review of the classification of landslides of the flow type. Environmental and Engineering Geoscience VII, 221-238.
- Iverson, Richard M. (1997). "The physics of debris flows". Reviews of Geophysics. 35 (3): 245–296. Bibcode:1997RvGeo..35..245I. doi:10.1029/97RG00426. ISSN 1944-9208.
- Easterbrook, Don J. (1999). Surface Processes and Landforms. Upper Saddle River: Prentice-Hall. ISBN 978-0-13-860958-0.
- Hu, Wei; Scaringi, Gianvito; Xu, Qiang; Huang, Runqiu (2018-06-05). "Internal erosion controls failure and runout of loose granular deposits: Evidence from flume tests and implications for post-seismic slope healing". Geophysical Research Letters. 45 (11): 5518. Bibcode:2018GeoRL..45.5518H. doi:10.1029/2018GL078030.
- Hu, Wei; Xu, Qiang; Wang, Gonghui; Scaringi, Gianvito; McSaveney, Mauri; Hicher, Pierre-Yves (2017-10-31). "Shear Resistance Variations in Experimentally Sheared Mudstone Granules: A Possible Shear-Thinning and Thixotropic Mechanism". Geophysical Research Letters. 44 (21): 11, 040. Bibcode:2017GeoRL..4411040H. doi:10.1002/2017GL075261.
- Scaringi, Gianvito; Hu, Wei; Xu, Qiang; Huang, Runqiu (2017-12-20). "Shear-Rate-Dependent Behavior of Clayey Bimaterial Interfaces at Landslide Stress Levels". Geophysical Research Letters. 45 (2): 766. Bibcode:2018GeoRL..45..766S. doi:10.1002/2017GL076214.
- Deng, Yu; Yan, Shuaixing; Scaringi, Gianvito; Liu, Wei; He, Siming (2020). "An Empirical Power Density-Based Friction Law and Its Implications for Coherent Landslide Mobility". Geophysical Research Letters. 47 (11): e2020GL087581. Bibcode:2020GeoRL..4787581D. doi:10.1029/2020GL087581. ISSN 1944-8007.
- Deng, Yu; He, Siming; Scaringi, Gianvito; Lei, Xiaoqin (2020). "Mineralogical Analysis of Selective Melting in Partially Coherent Rockslides: Bridging Solid and Molten Friction". Journal of Geophysical Research: Solid Earth. 125 (8): e2020JB019453. Bibcode:2020JGRB..12519453D. doi:10.1029/2020JB019453. ISSN 2169-9356.
- Rowe, Christie D.; Lamothe, Kelsey; Rempe, Marieke; Andrews, Mark; Mitchell, Thomas M.; Di Toro, Giulio; White, Joseph Clancy; Aretusini, Stefano (2019-01-18). "Earthquake lubrication and healing explained by amorphous nanosilica". Nature Communications. 10 (1): 320. Bibcode:2019NatCo..10..320R. doi:10.1038/s41467-018-08238-y. ISSN 2041-1723. PMC 6338773. PMID 30659201.
- Johnson, B.F. (June 2010). "Slippery slopes". Earth magazine. pp. 48–55.
- "Ancient Volcano Collapse Caused A Tsunami With An 800-Foot Wave". Popular Science. Retrieved 2017-10-20.
- Le Bas, T.P. (2007), "Slope Failures on the Flanks of Southern Cape Verde Islands", in Lykousis, Vasilios (ed.), Submarine mass movements and their consequences: 3rd international symposium, Springer, ISBN 978-1-4020-6511-8
- Mitchell, N (2003). "Susceptibility of mid-ocean ridge volcanic islands and seamounts to large scale landsliding". Journal of Geophysical Research. 108 (B8): 1–23. Bibcode:2003JGRB..108.2397M. doi:10.1029/2002jb001997.
- Chen, Zhaohua; Wang, Jinfei (2007). "Landslide hazard mapping using logistic regression model in Mackenzie Valley, Canada". Natural Hazards. 42: 75–89. doi:10.1007/s11069-006-9061-6. S2CID 128608263.
- Clerici, A; Perego, S; Tellini, C; Vescovi, P (2002). "A procedure for landslide susceptibility zonation by the conditional analysis method1". Geomorphology. 48 (4): 349–364. Bibcode:2002Geomo..48..349C. doi:10.1016/S0169-555X(02)00079-X.
- Cardenas, IC (2008). "Landslide susceptibility assessment using Fuzzy Sets, Possibility Theory and Theory of Evidence. Estimación de la susceptibilidad ante deslizamientos: aplicación de conjuntos difusos y las teorías de la posibilidad y de la evidencia". Ingenieria e Investigación. 28 (1).
- Cardenas, IC (2008). "Non-parametric modeling of rainfall in Manizales City (Colombia) using multinomial probability and imprecise probabilities. Modelación no paramétrica de lluvias para la ciudad de Manizales, Colombia: una aplicación de modelos multinomiales de probabilidad y de probabilidades imprecisas". Ingenieria e Investigación. 28 (2).
- Metternicht, G; Hurni, L; Gogu, R (2005). "Remote sensing of landslides: An analysis of the potential contribution to geo-spatial systems for hazard assessment in mountainous environments". Remote Sensing of Environment. 98 (2–3): 284–303. Bibcode:2005RSEnv..98..284M. doi:10.1016/j.rse.2005.08.004.
- De La Ville, Noemi; Chumaceiro Diaz, Alejandro; Ramirez, Denisse (2002). "Remote Sensing and GIS Technologies as Tools to Support Sustainable Management of Areas Devastated by Landslides" (PDF). Environment, Development and Sustainability. 4 (2): 221–229. doi:10.1023/A:1020835932757. S2CID 152358230.
- Fabbri, Andrea G.; Chung, Chang-Jo F.; Cendrero, Antonio; Remondo, Juan (2003). "Is Prediction of Future Landslides Possible with a GIS?". Natural Hazards. 30 (3): 487–503. doi:10.1023/B:NHAZ.0000007282.62071.75. S2CID 129661820.
- Lee, S; Talib, Jasmi Abdul (2005). "Probabilistic landslide susceptibility and factor effect analysis". Environmental Geology. 47 (7): 982–990. doi:10.1007/s00254-005-1228-z. S2CID 128534998.
- Ohlmacher, G (2003). "Using multiple logistic regression and GIS technology to predict landslide hazard in northeast Kansas, USA". Engineering Geology. 69 (3–4): 331–343. doi:10.1016/S0013-7952(03)00069-3.
- Rose & Hungr, "Forecasting potential slope failure in open pit mines" Archived 2017-07-13 at the Wayback Machine, Journal of Rock Mechanics & Mining Sciences, February 17, 2006. August 20, 2015.
- Lazzari, M.; Salvaneschi, P. (1999). "Embedding a Geographic Information System in a Decision Support System for Landslide Hazard Monitoring" (PDF). Natural Hazards. 20 (2–3): 185–195. doi:10.1023/A:1008187024768. S2CID 1746570.
- Weitere Erkenntnisse und weitere Fragen zum Flimser Bergsturz Archived 2011-07-06 at the Wayback Machine A.v. Poschinger, Angewandte Geologie, Vol. 11/2, 2006
- Fort, Monique (2011). "Two large late quaternary rock slope failures and their geomorphic significance, Annapurna, Himalayas (Nepal)". Geografia Fisica e Dinamica Quaternaria. 34: 5–16.
- Weidinger, Johannes T.; Schramm, Josef-Michael; Nuschej, Friedrich (2002-12-30). "Ore mineralization causing slope failure in a high-altitude mountain crest—on the collapse of an 8000 m peak in Nepal". Journal of Asian Earth Sciences. 21 (3): 295–306. Bibcode:2002JAESc..21..295W. doi:10.1016/S1367-9120(02)00080-9.
- "Hope Slide". BC Geographical Names.
- Peres, D. J.; Cancelliere, A. (2016-10-01). "Estimating return period of landslide triggering by Monte Carlo simulation". Journal of Hydrology. Flash floods, hydro-geomorphic response and risk management. 541: 256–271. Bibcode:2016JHyd..541..256P. doi:10.1016/j.jhydrol.2016.03.036.
- "Large landslide in Gansu Zhouqu August 7". Easyseosolution.com. August 19, 2010. Archived from the original on August 24, 2010.
- "Brazil mudslide death toll passes 450". Cbc.ca. January 13, 2011. Retrieved January 13, 2011.
|Wikimedia Commons has media related to Landslides.| | http://wiki-offline.jakearchibald.com/wiki/Landslide | 21 |
This chapter is concerned with methods for analysing the results of designed experiments. The range of experiments covered includes:
single factor designs with equal sized blocks such as randomized complete block and balanced incomplete block designs,
row and column designs such as Latin squares, and
complete factorial designs.
Further designs may be analysed by combining the analyses provided by multiple calls to functions or by using general linear model functions provided in Chapter G02.
2 Background to the Problems
An experimental design consists of a plan for allocating a set of controlled conditions, the treatments, to subsets of the experimental material, the plots or units. Two examples are:
(i) In an experiment to examine the effects of different diets on the growth of chickens, the chickens were kept in pens and a different diet was fed to the birds in each pen. In this example the pens are the units and the different diets are the treatments.
(ii) In an experiment to compare four materials for wear-loss, a sample from each of the materials is tested in a machine that simulates wear. The machine can take four samples at a time and a number of runs are made. In this experiment the treatments are the materials and the units are the samples from the materials.
In designing an experiment the following principles are important.
(a) Randomization: given the overall plan of the experiment, the final allocation of treatments to units is performed using a suitable random allocation. This avoids the possibility of a systematic bias in the allocation and gives a basis for the statistical analysis of the experiment.
(b) Replication: each treatment should be ‘observed’ more than once. So in example (ii) more than one sample from each material should be tested. Replication allows an estimate of the variability of the treatment effect to be made.
(c) Blocking: in many situations the experimental material will not be homogeneous and there may be some form of systematic variation in the experimental material. In order to reduce the effect of systematic variation the material can be grouped into blocks so that units within a block are similar but there is variation between blocks. For example, in an animal experiment litters may be considered as blocks; in an industrial experiment it may be material from one production batch.
(d) Factorial designs: if more than one type of treatment is under consideration, for example the effect of changes in temperature and changes in pressure, a factorial design consists of looking at all combinations of temperature and pressure. The different types of treatment are known as factors and the different values of the factors that are considered in the experiment are known as levels. So if three temperatures and four different pressures were being considered, then the temperature factor would have three levels and the pressure factor would have four levels, and the design would be a 3 × 4 factorial giving a total of 12 treatment combinations. This design has the advantage of being able to detect the interaction between factors, that is, the effect of the combination of factors.
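To make the counting concrete, the short sketch below (plain Python, independent of the NAG Library; the temperature and pressure values are invented placeholders) enumerates the treatment combinations of such a two-factor design.

```python
from itertools import product

# Hypothetical factor levels: three temperatures and four pressures.
temperatures = [150, 175, 200]        # temperature factor: 3 levels
pressures = [1.0, 1.5, 2.0, 2.5]      # pressure factor: 4 levels

# A complete factorial design uses every combination of factor levels.
combinations = list(product(temperatures, pressures))
print(len(combinations))              # 3 x 4 = 12 treatment combinations
for temperature, pressure in combinations:
    print(f"temperature={temperature}, pressure={pressure}")
```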
The following are examples of standard experimental designs; in the descriptions, it is assumed that there are t treatments.
(a) Completely Randomised Design: there are no blocks and the treatments are allocated to units at random.
(b) Randomised Complete Block Design: the experimental units are grouped into b blocks of t units and each treatment occurs once in each block. The treatments are allocated to units within blocks at random.
(c) Latin Square Designs: the units can be represented as cells of a square classified by rows and columns. The rows and columns represent sources of variation in the experimental material. The design allocates the treatments to the units so that each treatment occurs once in each row and each column.
(d) Balanced Incomplete Block Designs: the experimental units are grouped into b blocks of k units, with k less than t. The treatments are allocated so that each treatment is replicated the same number of times and each treatment occurs in the same block with any other treatment the same number of times. The treatments are allocated to units within blocks at random.
(e) Complete Factorial Experiments: if there are v treatment combinations derived from the levels of all factors then either there are no blocks or the blocks are of size v units.
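As an illustration of the randomization principle applied to design (b), the sketch below (plain Python, with arbitrary treatment labels and block count) allocates each of t treatments once to the t units of every block, in a random order within each block.

```python
import random

treatments = ["A", "B", "C", "D"]   # t = 4 treatments (arbitrary labels)
n_blocks = 3                        # b = 3 blocks, each of t units

random.seed(1)                      # fixed seed so the layout is reproducible
for block in range(1, n_blocks + 1):
    order = treatments[:]           # each treatment occurs once per block...
    random.shuffle(order)           # ...in a random order within the block
    print(f"block {block}: {order}")
```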
The analysis of a designed experiment usually consists of two stages. The first is the computation of the estimate of variance of the underlying random variation in the experiment along with tests for the overall effect of treatments. This results in an analysis of variance (ANOVA) table. The second stage is a more detailed examination of the effect of different treatments either by comparing the difference in treatment means with an appropriate standard error or by the use of orthogonal contrasts.
The analysis assumes a linear model such as
y_{ij} = μ + β_i + τ_{l(ij)} + ε_{ij},
where y_{ij} is the observed value for unit j of block i, μ is the overall mean, β_i is the effect of the ith block, τ_{l(ij)} is the effect of the lth treatment which has been applied to the unit, and ε_{ij} is the random error term associated with this unit. The expected value of ε_{ij} is zero and its variance is σ².
In the analysis of variance, the total variation, measured by the sum of squares of observations about the overall mean, is partitioned into the sum of squares due to blocks, the sum of squares due to treatments, and a residual or error sum of squares. This partition corresponds to the parameters β, τ and ε. In parallel to the partition of the sum of squares there is a partition of the degrees of freedom associated with the sums of squares. The total degrees of freedom is n − 1, where n is the number of observations. This is partitioned into b − 1 degrees of freedom for blocks, t − 1 degrees of freedom for treatments, and n − b − t + 1 degrees of freedom for the residual sum of squares. From these the mean squares can be computed as the sums of squares divided by their degrees of freedom. The residual mean square is an estimate of the error variance σ². An F-test for an overall effect of the treatments can be calculated as the ratio of the treatment mean square to the residual mean square.
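The sketch below illustrates this partition for a small randomised complete block design. It is a plain Python/NumPy calculation with invented data, shown only to make the sums of squares, degrees of freedom and F-test tangible; it is not a substitute for, or a reproduction of, the NAG function g04bbc.

```python
import numpy as np
from scipy import stats

# Invented data: rows are the b = 3 blocks, columns the t = 4 treatments,
# one observation per block/treatment combination.
y = np.array([[14.3, 14.5, 11.5, 13.6],
              [12.6, 11.2, 11.0, 12.1],
              [11.4, 10.9,  8.8, 11.1]])
b, t = y.shape
n = b * t
grand = y.mean()

ss_total = ((y - grand) ** 2).sum()                    # df = n - 1
ss_block = t * ((y.mean(axis=1) - grand) ** 2).sum()   # df = b - 1
ss_treat = b * ((y.mean(axis=0) - grand) ** 2).sum()   # df = t - 1
ss_resid = ss_total - ss_block - ss_treat              # df = n - b - t + 1
df_resid = n - b - t + 1

ms_treat = ss_treat / (t - 1)
ms_resid = ss_resid / df_resid          # estimate of the error variance
f_stat = ms_treat / ms_resid            # F-test for an overall treatment effect
p_value = stats.f.sf(f_stat, t - 1, df_resid)
print(f"SS blocks {ss_block:.2f}, SS treatments {ss_treat:.2f}, SS residual {ss_resid:.2f}")
print(f"F = {f_stat:.3f} on ({t - 1}, {df_resid}) df, p = {p_value:.4f}")
```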
For row and column designs the model is
y_{ij} = μ + ρ_i + γ_j + τ_{l(ij)} + ε_{ij},
where ρ_i is the effect of the ith row and γ_j is the effect of the jth column. Usually the rows and columns are orthogonal. In the analysis of variance the total variation is partitioned into rows, columns, treatments and residual.
In the case of factorial experiments, the treatment sum of squares and degrees of freedom may be partitioned into main effects for the factors and interactions between factors. The main effect of a factor is the effect of the factor averaged over all other factors. The interaction between two factors is the additional effect of the combination of the two factors, over and above the additive effects of the two factors, averaged over all other factors. For a factorial experiment in blocks with two factors, A and B, in which the jth unit of the ith block received level k of factor A and level l of factor B, the model is
y_{ijkl} = μ + β_i + α_k + γ_l + (αγ)_{kl} + ε_{ijkl},
where α_k is the main effect of level k of factor A, γ_l is the main effect of level l of factor B, and (αγ)_{kl} is the interaction between level k of A and level l of B. Higher-order interactions can be defined in a similar way.
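To show how the treatment term of a two-factor experiment separates into main effects and an interaction, the following sketch estimates those quantities directly from balanced, equally replicated data. The numbers are invented and the calculation is plain NumPy; it is only meant to illustrate the definitions above, not the internal method of g04cac.

```python
import numpy as np

# Invented data: factor A with 2 levels, factor B with 3 levels, r = 2
# replicates per cell; yobs[j, k, :] holds the replicates for cell (j, k).
yobs = np.array([[[8.0, 9.0], [11.0, 12.0], [14.0, 13.0]],
                 [[9.5, 10.5], [13.0, 14.0], [18.0, 17.0]]])
grand = yobs.mean()

cell = yobs.mean(axis=2)            # cell means over the replicates
mean_a = cell.mean(axis=1)          # means for each level of A (averaged over B)
mean_b = cell.mean(axis=0)          # means for each level of B (averaged over A)

main_a = mean_a - grand             # main effects of factor A
main_b = mean_b - grand             # main effects of factor B
# Interaction: what is left in each cell after removing the additive effects.
interaction = cell - mean_a[:, None] - mean_b[None, :] + grand

print("A main effects:", np.round(main_a, 3))
print("B main effects:", np.round(main_b, 3))
print("A x B interaction:\n", np.round(interaction, 3))
```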
Once the significant treatment effects have been uncovered they can be further investigated by comparing the differences between the means with the appropriate standard error. Some of the assumptions of the analysis can be checked by examining the residuals.
Many experiments and investigations involve the assignment of a value (score) to a number of experimental units or objects of interest (subjects). The method used to score the subject will often be affected by measurement error which can, in turn, affect the analysis and interpretation of the data. Measurement error can be especially high when the score is based on the subjective opinion of one or more individuals (raters) and, therefore, it is important to be able to assess its magnitude. One way of doing this is to run a reliability study and calculate the intraclass correlation (ICC). The term intraclass correlation is a general one and can mean either a measure of interrater reliability, i.e., a measure of how similar the raters are, or intrarater reliability, i.e., a measure of how consistent each rater is.
There are numerous different versions of the ICC, six of which are available in this chapter. The different versions of the ICC can lead to different conclusions when applied to the same data; it is, therefore, essential to choose the most appropriate based on the design of the reliability study and whether inter- or intrarater reliability is of interest. The six measures of the ICC are split into three different types of studies, denoted 1, 2 and 3. Each class of study results in two forms of the ICC, depending on whether inter- or intrarater reliability is of interest. A full description of the different designs and corresponding ICCs is given in Section 3 in g04gac.
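As a deliberately simplified illustration, the sketch below computes one of the six forms — the single-rater ICC from a one-way (study type 1) model, often written ICC(1,1) — from a subjects-by-raters score matrix using the usual Shrout–Fleiss mean-square formula. The data are invented, and the code is plain NumPy rather than a reproduction of g04gac.

```python
import numpy as np

# Invented reliability-study scores: rows = subjects, columns = k ratings.
scores = np.array([[9.0, 10.0, 8.0],
                   [6.0,  5.0, 7.0],
                   [8.0,  8.0, 9.0],
                   [2.0,  3.0, 2.0],
                   [7.0,  6.0, 6.0]])
n, k = scores.shape
grand = scores.mean()
subject_means = scores.mean(axis=1)

# One-way random-effects mean squares.
bms = k * ((subject_means - grand) ** 2).sum() / (n - 1)              # between subjects
wms = ((scores - subject_means[:, None]) ** 2).sum() / (n * (k - 1))  # within subjects

icc_1_1 = (bms - wms) / (bms + (k - 1) * wms)
print(f"ICC(1,1) = {icc_1_1:.3f}")
```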
3 Recommendations on Choice and Use of Available Functions
This chapter contains functions that can handle a wide range of experimental designs plus functions for further analysis and a function to compute dummy variables for use in a general linear model.
g04bbc computes the analysis of variance and treatment means with standard errors for any block design with equal sized blocks. The function will handle both complete block designs and balanced and partially balanced incomplete block designs.
g04bcc computes the analysis of variance and treatment means with standard errors for row and column designs such as a Latin square.
g04cac computes the analysis of variance and treatment means with standard errors for a complete factorial experiment.
Other designs can be analysed by combinations of calls to g04bbc, g04bcc and g04cac. The functions compute the residuals from the model specified by the design, so these can then be input as the response variable in a second call to one of the functions. For example a factorial experiment in a Latin square design can be analysed by first calling g04bcc to remove the row and column effects and then calling g04cac with the residuals from g04bcc as the response variable to compute the ANOVA for the treatments. Another example would be to use both g02dac and g04bbc to compute an analysis of covariance.
For experiments with missing values, these values can be estimated by using the Healy and Westmacott procedure; see John and Quenouille (1977). This procedure involves starting with initial estimates for the missing values and then making adjustments based on the residuals from the analysis. The improved estimates are then used in further iterations of the process.
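The sketch below conveys the flavour of that procedure for a randomised complete block design: the missing cell is seeded with the grand mean, the additive block-plus-treatment fit is computed, the missing cell is replaced by its fitted value, and the cycle repeats until the estimate stabilises. It is a minimal NumPy illustration with invented data and a single missing value, not the exact algorithm of any NAG function.

```python
import numpy as np

# Invented block x treatment table with one missing observation (np.nan).
y = np.array([[14.3, 14.5, 11.5, 13.6],
              [12.6, np.nan, 11.0, 12.1],
              [11.4, 10.9,  8.8, 11.1]])
missing = np.isnan(y)

work = y.copy()
work[missing] = np.nanmean(y)           # initial estimate: the grand mean

for _ in range(100):
    grand = work.mean()
    # Additive fit: block mean + treatment mean - grand mean.
    fitted = (work.mean(axis=1, keepdims=True)
              + work.mean(axis=0, keepdims=True)
              - grand)
    change = np.abs(fitted[missing] - work[missing]).max()
    work[missing] = fitted[missing]     # update the missing cells only
    if change < 1e-8:                   # stop once the estimates stabilise
        break

print("imputed value(s):", np.round(work[missing], 4))
```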
For designs that cannot be analysed by the above approach the function g04eac can be used to compute dummy variables from the classification variables or factors that define the design. These dummy variables can then be used with the general linear model function g02dac.
In addition to the functions for computing the means and the basic analysis of variance, the following functions are available for further analysis.
g04dbc computes simultaneous confidence intervals for the differences between means with the choice of different methods such as the Tukey–Kramer, Bonferroni and Dunn–Sidak.
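For illustration only, the sketch below builds Bonferroni-adjusted simultaneous confidence intervals for all pairwise differences between treatment means, starting from an assumed residual mean square and residual degrees of freedom taken from an earlier analysis of variance; a Tukey–Kramer interval would use a studentized-range quantile in place of the t quantile. The summary numbers are invented and the code is plain SciPy, not a reproduction of g04dbc.

```python
import numpy as np
from itertools import combinations
from scipy import stats

# Invented summary statistics from a previous analysis of variance.
means = {"A": 13.1, "B": 12.2, "C": 10.4, "D": 12.3}   # treatment means
reps = {"A": 3, "B": 3, "C": 3, "D": 3}                # replications
ms_resid, df_resid = 0.58, 6                           # residual mean square and df

pairs = list(combinations(means, 2))
alpha = 0.05
# Bonferroni: share the error rate across all pairwise comparisons.
t_crit = stats.t.ppf(1 - alpha / (2 * len(pairs)), df_resid)

for a, b in pairs:
    diff = means[a] - means[b]
    se = np.sqrt(ms_resid * (1 / reps[a] + 1 / reps[b]))
    lower, upper = diff - t_crit * se, diff + t_crit * se
    print(f"{a} - {b}: {diff:+.2f}  [{lower:+.2f}, {upper:+.2f}]")
```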
g04gac calculates the intraclass correlation (ICC) from a reliability study and can return a measure of either the inter- or intrarater reliability.
Kingdom of Jaffna
Jaffna Kingdom at its greatest extent (circa 1350)
Jaffna Kingdom in 1619
The Jaffna Kingdom (Tamil: யாழ்ப்பாண அரசு, Sinhala: යාපනය රාජධානිය) (1215–1624 CE), also known as Kingdom of Aryachakravarti, of modern northern Sri Lanka was a historic monarchy that came into existence around the town of Jaffna on the Jaffna peninsula. It was traditionally thought to be established after the invasion of Magha, who is credited with the founding of the Jaffna kingdom and is said to have been from Kalinga, in India. Established as a powerful force in the north, north east and west of the island, it eventually became a tribute-paying feudatory of the Pandyan Empire in modern South India in 1258, gaining independence in 1323, when the last Pandyan ruler of Madurai was defeated and expelled in 1323 by Malik Kafur, the army general of the Muslim Delhi Sultanate. For a brief period, in the early to mid-14th century, it was an ascendant power in the island of Sri Lanka when all regional kingdoms accepted subordination. However, the kingdom was eventually overpowered by the rival Kotte Kingdom, around 1450 when it was invaded by Prince Sapumal under the orders of Parakramabahu VI.
It gained independence from Kingdom of Kotte control in 1467 and its subsequent rulers directed their energies towards consolidating its economic potential by maximising revenue from pearls and elephant exports and land revenue. It was less feudal than most of the other regional kingdoms on the island of Sri Lanka of the same period. During this period, important local Tamil literature was produced and Hindu temples were built, including an academy for language advancement.
The Sinhalese Nampota dated in its present form to the 14th or 15th century CE suggests that the whole of the Jaffna Kingdom, including parts of the modern Trincomalee District, was recognised as a Tamil region by the name Demala-pattana (Tamil city). In this work, a number of villages that are now situated in the Jaffna, Mullaitivu and Trincomalee districts are mentioned as places in Demala-pattana.
The arrival of the Portuguese on the island of Sri Lanka in 1505, and its strategic location in the Palk Strait connecting all interior Sinhalese kingdoms to South India, created political problems. Many of its kings confronted and ultimately made peace with the Portuguese. In 1617, Cankili II, a usurper to the throne, confronted the Portuguese but was defeated, thus bringing the kingdom's independent existence to an end in 1619. Although rebels like Migapulle Arachchi—with the help of the Thanjavur Nayak kingdom—tried to recover the kingdom, they were eventually defeated. Nallur, a suburb of modern Jaffna town, was its capital.
The origin of the Jaffna kingdom is obscure and still the subject of controversy among historians. Among mainstream historians, such as K. M. de Silva, S. Pathmanathan and Karthigesu Indrapala, the widely accepted view is that the Kingdom of the Aryacakravarti dynasty in Jaffna began in 1215 with the invasion of a previously unknown chieftain called Magha, who claimed to be from Kalinga in modern India. He deposed the ruling Parakrama Pandyan II, a foreigner from the Pandyan Dynasty who was ruling the Kingdom of Polonnaruwa at the time with the help of his soldiers and mercenaries from the Kalinga, modern Kerala and Damila (Tamil Nadu) regions in India.
After the conquest of Rajarata, he moved the capital to the Jaffna peninsula, which was better protected by the heavy Vanni forest, and ruled as a tribute-paying subordinate of the Chola empire of Tanjavur, in modern Tamil Nadu, India. During this period (1247), a Malay chieftain from Tambralinga in modern Thailand named Chandrabhanu invaded the politically fragmented island. Although King Parakramabahu II (1236–1270) of Dambadeniya was able to repulse the attack, Chandrabhanu moved north and secured the throne for himself from Magha around 1255. Sadayavarman Sundara Pandyan I invaded Sri Lanka in the 13th century and defeated Chandrabhanu, the usurper of the Jaffna Kingdom in northern Sri Lanka, forcing him to submit to Pandyan rule and to pay tribute to the Pandyan Dynasty. Later, when Chandrabhanu had grown powerful enough, he again invaded the Sinhalese kingdom, but he was defeated by Veera Pandyan I, the brother of Sadayavarman Sundara Pandyan I, and lost his life. Sri Lanka was invaded a third time by the Pandyan Dynasty, under the leadership of Arya Cakravarti, who established the Jaffna kingdom.
Other scholars push the date of the founding of the kingdom back to the 7th century CE, with the ancient capital being Kathiramalai, which finds mention in Tamil literature. King Ukkirasinghan, thought to be one of the early Jaffna kings, had his capital at Kathiramalai and is said to have married the Chola princess Maruta Piravika Valli. According to these scholars, the capital was moved to Singainagar after an invasion by Parantaka Chola in the 10th century CE.
When Chandrabhanu embarked on a second invasion of the south, the Pandyas came to the support of the Sinhalese king and killed Chandrabhanu in 1262 and installed Aryacakravarti, a minister in charge of the invasion, as the king. When the Pandyan Empire became weak due to Muslim invasions, successive Aryacakravarti rulers made the Jaffna kingdom independent and a regional power to reckon with in Sri Lanka. All subsequent kings of the Jaffna Kingdom claimed descent from one Kulingai Cakravarti who is identified with Kalinga Magha by Swami Gnanaprakasar and Mudaliar Rasanayagam while maintaining their Pandyan progenitor's family name.
Politically, the dynasty was an expanding power in the 13th and 14th century with all regional kingdoms paying tribute to it. However, it met with simultaneous confrontations with the Vijayanagar empire that ruled from Vijayanagara, southern India, and a rebounding Kingdom of Kotte from the south of Sri Lanka. This led to the kingdom becoming a vassal of the Vijayanagar Empire as well as briefly losing its independence under the Kotte kingdom from 1450 to 1467. The kingdom was re-established with the disintegration of Kotte kingdom and the fragmentation of Vijayanagar Empire. It maintained very close commercial and political relationships with the Thanjavur Nayakar kingdom in southern India as well as the Kandyan and segments of the Kotte kingdom. This period saw the building of Hindu temples and a flourishing of literature, both in Tamil and Sanskrit.
Kotte conquest and restoration
The Kotte conquest of the Jaffna Kingdom was led by King Parakramabahu VI's adopted son, Prince Sapumal. The campaign took place in many stages. First, the tributaries to the Jaffna Kingdom in the Vanni area, namely the Vanniar chieftains of the Vannimai, were neutralised. This was followed by two successive invasions. The first war of conquest did not succeed in capturing the kingdom; it was the second conquest, dated to 1450, that eventually was successful. Apparently connected with this war of conquest was an expedition to Adriampet in modern South India, occasioned, according to Valentyn, by the seizure of a Lankan ship laden with cinnamon. The Tenkasi inscription of Arikesari Parakrama Pandya of Tinnevelly, who "saw the backs of kings at Singai, Anurai, and elsewhere", may refer to these wars; it is dated between 1449–50 and 1453–54. Kanakasooriya Cinkaiariyan, the Aryacakravarti king, fled to South India with his family. After the departure of Sapumal Kumaraya to Kotte, Kanakasooriya Cinkaiariyan retook the kingdom in 1467.
Decline & dissolution
Portuguese traders reached Sri Lanka by 1505, and their initial forays were directed against the south-western coastal Kotte kingdom because of the lucrative monopoly on the spice trade that Kotte enjoyed, a trade that was also of interest to the Portuguese. The Jaffna kingdom came to the attention of Portuguese officials in Colombo for multiple reasons, which included its interference in Roman Catholic missionary activities (which were assumed to be promoting Portuguese interests) and its support for anti-Portuguese factions of the Kotte kingdom, such as the chieftains from Sittawaka. The Jaffna Kingdom also functioned as a logistical base for the Kandyan kingdom, located in the central highlands without access to any seaports, serving as an entrepôt for military aid arriving from South India. Further, owing to its strategic location, it was feared that the Jaffna kingdom might become a beachhead for Dutch landings. It was king Cankili I who resisted contacts with the Portuguese and even massacred 600–700 Parava Catholics on the island of Mannar. These Catholics were brought from India to Mannar to take over the lucrative pearl fisheries from the Jaffna kings.
The first expedition led by Viceroy Dom Constantino de Bragança in 1560 failed to subdue the kingdom but wrested the Mannar Island from it. Although the circumstances are unclear, by 1582 the Jaffna king was paying a tribute of ten elephants or an equivalent in cash. In 1591, during the second expedition led by André Furtado de Mendonça, king Puvirasa Pandaram was killed and his son Ethirimanna Cinkam was installed as the monarch. This arrangement gave the Catholic missionaries freedom and monopoly in elephant export to the Portuguese, which the incumbent king however resisted. He helped the Kandyan kingdom under kings Vimaladharmasuriya I and Senarat during the period 1593–1635 with the intent of securing help from South India to resist the Portuguese. He however maintained autonomy of the kingdom without overly provoking the Portuguese.
Cankili II the usurper
With the death of Ethirimana Cinkam in 1617, his 3-year-old son was proclaimed king, with the late king's brother Arasakesari as regent. Cankili II, a usurper and nephew of the late king, killed all the princes of royal blood, including Arasakesari and the powerful chief Periya Pillai Arachchi. His cruel actions made him unpopular, leading to a revolt by the nominally Christian Mudaliyars Dom Pedro and Dom Luis (also known as Migapulle Arachchi, the son of Periya Pillai Arachchi), which drove Cankili to hide in Kayts in August–September 1618. Unable to secure Portuguese acceptance of his kingship or to suppress the revolt, Cankili II invited military aid from the Thanjavur Nayaks, who sent a troop of 5,000 men under the military commander Varunakulattan.
Cankili II was supported by the Kandy rulers. After the fall of the Jaffna kingdom, two unnamed princesses of Jaffna were married to Senarat's stepsons, Kumarasingha and Vijayapala. As might be expected, Cankili II received military aid from the Thanjavur Nayak Kingdom. For his part, Raghunatha Nayak of Thanjavur made attempts to recover the Jaffna Kingdom for his protégé, the Prince of Rameshwaram. However, all attempts to recover the Jaffna Kingdom from the Portuguese met with failure.
By June 1619, there were two Portuguese expeditions: a naval expedition that was repulsed by the Karaiyars and another expedition by Filipe de Oliveira and his 5,000 strong land army which was able to inflict defeat on Cankili II. Cankili, along with every surviving member of the royal family were captured and taken to Goa, where he was hanged. The remaining captives were encouraged to become monks or nuns in the holy orders, and as most obliged, it avoided further claimants to the Jaffna throne. In 1620 Migapulle Arachchi, with a troop of Thanjavur soldiers, revolted against the Portuguese and was defeated. A second rebellion was led by a chieftain called Varunakulattan with the support of Raghunatha Nayak.
According to Ibn Batuta, a traveling Moroccan historian of note, by 1344 the kingdom had two capitals: one at Nallur in the north and, during the pearling season, another at Puttalam in the west. The kingdom proper, that is, the Jaffna peninsula, was divided into various provinces, with subdivisions of parrus (property or larger territorial units) and ur (villages, the smallest unit), and was administered on a hierarchical and regional basis. At the summit was the king, whose kingship was hereditary; he was usually succeeded by his eldest son. Next in the hierarchy stood the adikaris, who were the provincial administrators. Then came the mudaliyars, who functioned as judges and interpreters of the laws and customs of the land. It was also their duty to gather information on whatever was happening in the provinces and report to higher authorities. The title was bestowed on the Karaiyar generals who commanded the navy and also on Vellalar chiefs. Administrators of revenues called kankanis (superintendents) and kanakkappillais (accountants), also known as pandarapillai, came next in line; they had to keep records and maintain accounts. The royal heralds, whose duty was to convey messages or proclamations, came from the Paraiyar community.
Maniyam was the chief of the parrus. He was assisted by mudaliyars who were in turn assisted by udaiyars, persons of authority over a village or a group of villages. They were the custodians of law and order and gave assistance to survey land and collect revenues in the area under their control. The village headman was called talaiyari, pattankaddi or adappanar and he assisted in the collection of taxes and was responsible for the maintenance of order in his territorial unit. The Adappanar were the headmen of the ports. The Pattankaddi and Adappanar were from the maritime Karaiyar and Paravar communities. In addition, each caste had a chief who supervised the performance of caste obligations and duties.
- Relationship with feudatories
Vannimais were regions south of the Jaffna peninsula in the present-day North Central and Eastern provinces and were sparsely settled by people. They were ruled by petty chiefs calling themselves Vanniar. Vannimais just south of the Jaffna peninsula and in the eastern Trincomalee district usually paid an annual tribute to the Jaffna kingdom instead of taxes. The tribute was in cash, grains, honey, elephants, and ivory. The annual tribute system was enforced due to the greater distance from Jaffna. During the early and middle part of the 14th century, the Sinhalese kingdoms in western, southern and central part of the island also became feudatories until the kingdom itself was briefly occupied by the forces of Parakramabahu VI around 1450 for about 17 years. Around the early 17th century, the kingdom also administered an exclave in Southern India called Madalacotta.
The economy of the Kingdom was almost exclusively based on subsistence agriculture until the 15th century. After the 15th century, however, the economy became diversified and commercialized as it became incorporated into the expanding Indian Ocean.
Ibn Batuta, during his visit in 1344, observed that the kingdom of Jaffna was a major trading kingdom with extensive overseas contacts, describing it as having "considerable forces by the sea", testimony to its reputedly strong navy. The kingdom's trade was oriented towards maritime South India, with which it developed a commercial interdependence. The non-agricultural tradition of the kingdom became strong as a result of a large coastal fishing and boating population and growing opportunities for seaborne commerce. Influential commercial groups, drawn mainly from south Indian mercantile communities as well as others, resided in the royal capital, port, and market centers. Artisan settlements were also established, and groups of skilled tradesmen—carpenters, stonemasons, weavers, dyers, gold and silver smiths—resided in urban centers. Thus, a pluralistic socio-economic tradition of agriculture, marine activities, commerce, and handicraft production was well established.
The Jaffna kingdom was less feudalized than other kingdoms in Sri Lanka, such as Kotte and Kandy. Its economy was based more on money transactions than on transactions in land or its produce. The Jaffna defense forces were not feudal levies; soldiers in the king's service were paid in cash. The king's officials, namely the Mudaliyars, were also paid in cash, and the numerous Hindu temples seem not to have owned extensive properties, unlike the Buddhist establishments in the South; temples and their administrators depended on the king and the worshippers for their upkeep. Royal and army officials were thus a salaried class, and these three institutions consumed over 60% of the revenues of the kingdom and 85% of government expenditure. Much of the kingdom's revenue also came in cash, apart from the elephants supplied by the Vanni feudatories. At the time of the conquest by the Portuguese in 1620, the kingdom, by then truncated in size and restricted to the Jaffna peninsula, had revenues of 11,700 pardaos, of which 97% came from land or sources connected to the land; these included a land rent and a paddy tax called arretane.
Apart from the land-related taxes, there were other taxes, such as a garden tax on compounds where, among others, plantain, coconut and arecanut palms were grown and irrigated with water from the well; a tree tax on trees such as palmyrah, margosa and iluppai; and a poll tax, equivalent to a personal tax on each person. A professional tax was collected from members of each caste or guild, and there were commercial taxes consisting of, among others, a stamp duty on clothes (clothes could not be sold privately and had to carry an official stamp), taraku, a levy on items of food, and port and customs duties. Columbuthurai, which connected the peninsula with the mainland at Poonakari through its boat services, was one of the chief ports, and there were customs check posts at the sand passes of Pachilaippalai. Elephants from the southern Sinhalese kingdoms and the Vanni region were brought to Jaffna to be sold to foreign buyers. They were shipped abroad from a bay called Urukathurai, which is now called Kayts—a shortened form of the Portuguese Caes dos elephantes (Bay of Elephants). Perhaps a peculiarity of Jaffna was the levy of a license fee for the cremation of the dead.
Not all payments in kind were converted to cash, offerings of rice, bananas, milk, dried fish, game meat and curd persisted. Some inhabitants also had to render unpaid personal services called uliyam.
The kings also issued many types of coins for circulation. Several types of coins categorized as Sethu Bull coins issued from 1284 to 1410 are found in large quantities in the northern part of Sri Lanka. The obverse of these coins have a human figure flanked by lamps and the reverse has the Nandi (bull) symbol, the legend Setu in Tamil with a crescent moon above.
Saivism (a sect of Hinduism) in Sri Lanka has had continuous history from the early period of settlers from India. Hindu worship was widely accepted even as part of the Buddhist religious practices. During the Chola period in Sri Lanka, around the 9th and 10th century, Hinduism gained status as an official religion in the island kingdom. Kalinga Magha, whose rule followed that of the Cholas is remembered as a Hindu revivalist by the native literature of that period.
As the state religion, Saivism enjoyed all the prerogatives of the establishment during the period of the Jaffna kingdom. The Aryacakravarti dynasty was very conscious of its duties as a patron of Saivism because of the patronage given by its ancestors to the Rameswaram temple, a well-known pilgrimage center of Indian Hinduism. As noted, one of the titles assumed by the kings was Setukavalan, or protector of Setu, another name for Rameswaram. Setu was used on their coins as well as in inscriptions as a marker of the dynasty.
Sapumal Kumaraya (also known as Chempaha Perumal in Tamil), who ruled the Jaffna kingdom on behalf of the Kotte kingdom, is credited with either building or renovating the Nallur Kandaswamy temple. Singai Pararasasegaram is credited with building the Sattanathar temple, the Vaikuntha Pillaiyar temple and the Veerakaliamman temple. He built a pond called Yamuneri and filled it with water from the Yamuna river of North India, which is considered holy by Hindus. He was a frequent visitor to the Koneswaram temple, as was his son and successor King Cankili I. King Jeyaveera Cinkaiariyan had the traditional history of the temple compiled as a chronicle in verse, entitled Dakshina Kailasa Puranam, known today as the Sthala Puranam of Koneshwaram Temple. Major temples were normally maintained by the kings, and a salary was paid from the royal treasury to those who worked in the temple, unlike in India and the rest of Sri Lanka, where religious establishments were autonomous entities with large endowments of land and related revenue.
Most accepted Lord Shiva as the primary deity and the lingam, the universal symbol of Shiva, was consecrated in shrines dedicated to him. The other Hindu gods of the pantheon such as Murugan, Pillaiyar, Kali were also worshipped. At the village level, village deities were popular along with the worship of Kannaki whose veneration was common amongst the Sinhalese in the south as well. Belief in charm and evil spirits existed, just as in the rest of South Asia.
There were many Hindu temples within the Kingdom. Some were of great historic importance, such as the Koneswaram temple in Trincomalee, Ketheeswaram temple in Mannar, Naguleswaram temple in Keerimalai along with hundreds of other temples that were scattered over the region. The ceremonies and festivals were similar to those in modern South India, with some slight changes in emphasis. The Tamil devotional literature of Saiva saints was used in worship. The Hindu New Year falling on the middle of April was more elaborately celebrated and festivals, such as Navarattiri, Deepavali, Sivarattiri, and Thaiponkal, along with marriages, deaths and coming of age ceremonies were part of the daily life.
Until ca. 1550, when Cankili I expelled the Buddhists of Jaffna, who were all Sinhalese, and destroyed their many places of worship, Buddhism prevailed in the Jaffna kingdom, among the Sinhalese who had remained in the territory. Some important places of Buddhist worship in the Jaffna kingdom, which are mentioned in the Nampota are: Naga-divayina (Nagadipa, modern Nainativu), Telipola, Mallagama, Minuvangomu-viharaya and Kadurugoda (modern Kantharodai), of these only the Buddhist temple at Nagadipa survive today.
- Caste structure
The social organization of the people of the Jaffna kingdom was based on a caste system and a matrilineal kudi (clan) system similar to the caste structure of South India. The Aryacakravarti kings and their immediate family claimed Brahma-Kshatriya status, meaning Brahmins who took to martial life. The Madapalli were the palace stewards and cooks, the Akampadayar formed the palace servants, the Paraiyar were the royal heralds and the Siviyar were the royal palanquin bearers. The army and navy generals were from the Karaiyar caste, who also controlled the pearl trade and whose chiefs were known as Mudaliyar, Paddankatti and Adapannar. The Mukkuvar and Thimilar were also engaged in the pearl fishery. The Udayars, or village headmen and landlords of agricultural societies, were mostly drawn from the Vellalar caste and were responsible for controlling illegal activities such as theft and robbery. The service-providing communities were known as Kudimakkal and consisted of various groups such as the Ambattar, Vannar, Kadaiyar, Pallar, Nalavar, Paraiyar, Koviyar and Brahmin. The Kudimakkal had ritual importance in the temples and at funerals and weddings. The Chettys were well known as traders and owners of Hindu temples, and the Pallar and Nalavar castes were composed of the agricultural labourers who tilled the land. The weavers were the Paraiyars and Sengunthar, who gave importance to the textile trade. The artisans, also known as Kammalar, comprised the Kollar, Thattar, Tatchar, Kaltatchar and the Kannar.
- Foreign mercenaries & traders
Mercenaries of various ethnic and caste backgrounds from India, such as the Telugus (known locally as Vadugas) and Malayalees from the Kerala region, were also employed by the king as soldiers. Muslim traders and sea pirates of Mapilla and Moor ethnicities, as well as Sinhalese, were present in the kingdom. The kingdom also functioned as a refuge for rebels from the south seeking shelter after failed political coups. The earliest historiographical literature of the Kingdom of Jaffna, the Vaiyaapaadal, datable to the 14th–15th century, in verse 77 lists the community of Papparavar (Berbers specifically and Africans in general) along with Kuchchiliyar (Gujaratis) and Choanar (Arabs) and places them under the caste category of Pa'l'luvili, who are believed to be cavalrymen of the Muslim faith. The caste of Pa'l'luvili or Pa'l'livili is peculiar to Jaffna. A Dutch census taken in 1790 in Jaffna records 196 male adults belonging to the Pa'l'livili caste as taxpayers, which means the identity and profession existed until Dutch times. Choanakar, with 492 male adults, and probably by this time generally meaning the Muslims, is mentioned as a separate community in this census.
Under the Aryacakravarti rulers, the laws governing society were based on a compromise between a matriarchal system, which seemed to have had deeper roots, overlaid with a patriarchal system of governance. These laws seem to have existed side by side as customary laws to be interpreted by the local Mudaliars. In some aspects, such as inheritance, the similarity to the Marumakattayam law of present-day Kerala and the Aliyasantana of modern Tulunadu was noted by later scholars. Islamic jurisprudence and the Hindu laws of neighboring India also seem to have affected the customary laws. These customary laws were later codified and put into print during Dutch colonial rule as the Thesavalamai in 1707. The rule under earlier customs seems to have been that females succeeded females; but when the structure of society came to be based on a patriarchal system, a corresponding rule was recognized, that males succeeded males. Thus the devolution of muthusam (paternal inheritance) was on the sons, and the devolution of the chidenam (dowry or maternal inheritance) was on the females. Just as one dowried sister succeeded another, there was the corresponding rule that if a brother died intestate, his property devolved upon his brothers to the exclusion of his sisters, the reason being that in a patriarchal family each brother formed a family unit, and all the brothers being agnates, when one of them died his property devolved upon his agnates.
The kings of the dynasty provided patronage to literature and education. Temple schools and traditional gurukulam classes in verandahs (known as Thinnai Pallikoodam in Tamil) spread basic education in languages such as Tamil and Sanskrit, as well as religion, to the upper classes. During the reign of Jeyaveera Cinkaiariyan, works on medical science (Segarajasekaram), astrology (Segarajasekaramalai) and mathematics (Kanakathikaram) were authored by Karivaiya. During the rule of Gunaveera Cinkaiariyan, a work on medical science, known as Pararajasekaram, was completed. During Singai Pararasasegaram's rule, an academy for Tamil language propagation, on the model of the ancient Tamil Sangams, was established in Nallur. This academy performed a useful service in collecting and preserving ancient Tamil works in manuscript form in a library called Saraswathy Mahal. Singai Pararasasekaran's cousin Arasakesari was credited with translating the Sanskrit classic Raghuvamsa into Tamil. Pararasasekaran's brother Segarajasekaran and Arasakesari collected manuscripts from Madurai and other regions for the Saraswathy Mahal library. Among other literary works of historic importance compiled before the arrival of European colonizers, Vaiyapatal, written by Vaiyapuri Aiyar, is well known.
There were periodic waves of South Indian influence over Sri Lankan art and architecture, though the prolific age of monumental art and architecture seemed to have declined by the 13th century. Temples built by the Tamils of Indian origin from the 10th century belonged to the Madurai variant of Vijayanagar period. A prominent feature of the Madurai style was the ornate and heavily sculptured tower or gopuram over the entrance of temple. None of the important religious constructions of this style within the territory that formed the Jaffna kingdom survived the destructive hostility of the Portuguese.
Nallur, the capital was built with four entrances with gates. There were two main roadways and four temples at the four gateways. The rebuilt temples that exist now do not match their original locations which instead are occupied by churches erected by the Portuguese. The center of the city was Muthirai Santhai (market place) and was surrounded by a square fortification around it. There were courtly buildings for the Kings, Brahmin priests, soldiers and other service providers. The old Nallur Kandaswamy temple functioned as a defensive fort with high walls. In general, the city was laid out like the traditional temple town according to Hindu traditions.
- de Silva, A History of Sri Lanka, pp. 91–92
- Nadarajan, V. History of Ceylon Tamils, p. 72
- Indrapala, K. Early Tamil Settlements in Ceylon, p. 16
- Coddrington, K. Ceylon coins and currency, pp. 74–76
- Peebles, History of Sri Lanka, pp. 31–32
- The History of Sri Lanka by Patrick Peebles, p. 31
- Peebles, History of Sri Lanka, p. 34
- Pfaffenberger, B .The Sri Lankan Tamils, pp. 30–31
- Abeysinghe, T. Jaffna Under the Portuguese, pp. 29–30
- Gunasingam, M. Sri Lankan Tamil Nationalism, p. 63
- Kunarasa, K. The Jaffna Dynasty, pp. 73–74
- Gunasingam, M. Sri Lankan Tamil Nationalism, pp. 64–65
- Indrapala, K - The Evolution of an Ethnic Identity: The Tamils in Sri Lanka C. 300 BCE to C. 1200 CE. Colombo: Vijitha Yapa.
- Abeysinghe, T. Jaffna Under the Portuguese, pp. 58–63
- Gnanaprakasar, S. A critical history of Jaffna, pp. 153–172
- An historical relation of the island Ceylon, Volume 1, by Robert Knox and JHO Paulusz, pp. 19–47.
- An historical relation of the island Ceylon, Volume 1, by Robert Knox and JHO Paulusz, p. 43.
- Gunasingam, M. Sri Lankan Tamil Nationalism, p. 53
- Manogaran, C. The untold story of Ancient Tamils of Sri Lanka, pp. 22–65
- Kunarasa, K. The Jaffna Dynasty, pp. 1–53
- Rasanayagam, M. Ancient Jaffna, pp. 272–321
- "The so-called Tamil Kingdom of Jaffna". S. Ranwella. Retrieved 30 November 2007.
- Sri Lanka and South-East Asia: Political, Religious and Cultural Relations by W.M. Sirisena, p. 57
- Pillay, Kolappa Pillay Kanakasabhapathi (1963). South India and Ceylon. University of Madras. pp. 116, 117.
- Kunarasa, K. The Jaffna Dynasty, pp. 65–66
- Coddrington, Short history of Ceylon, pp. 91–92
- de Silva, A History of Sri Lanka, pp. 132–133
- Kunarasa, K. The Jaffna Dynasty, pp. 73–75
- Codrington, Humphry William. "Short history of Sri Lanka: Dambadeniya and Gampola Kings (1215–1411)". Lakdiva.org. Retrieved 25 November 2007.
- Humphrey William Codrington, A Short History of Ceylon Ayer Publishing, 1970; ISBN 0-8369-5596-X
- Abeysinghe, T. Jaffna Under the Portuguese, p. 2
- Kunarasa, K. The Jaffna Dynasty, pp. 82–84
- Gnanaprakasar, S. A critical history of Jaffna, pp. 113–117
- Abeysinghe, T. Jaffna Under the Portuguese, p. 3
- de Silva, A History of Sri Lanka, p. 166
- Vriddhagirisan, V. (1942). The Nayaks of Tanjore. Annamalai University: Annamalai University Historical Series. p. 80. ISBN 9788120609969.
- DeSilva, Chandra Richard (1972). The Portuguese in Ceylon, 1617-1638. University of London: School of Oriental and African Studies. p. 96.
- De Queyroz, The Temporal and Spiritual Conquest of Ceylon, pp. 51, 468
- Journal of Tamil Studies. International Institute of Tamil Studies. 1981. pp. 44–45.
- Abeyasinghe, Tikiri (1986). Jaffna under the Portuguese. Lake House Investments.
- Vriddhagirisan, V. (1995). Nayaks of Tanjore. Asian Educational Services. p. 91. ISBN 9788120609969.
- Kunarasa, K. The Jaffna Dynasty, p. 2
- Gunasingam, M. Sri Lankan Tamil Nationalism, p. 54
- "Yarl-Paanam". Eelavar Network. Archived from the original on 22 December 2007. Retrieved 24 November 2007.
- Rasanayagam, C. (1933). History of Jaffna யாழ்ப்பாணச் சரித்திரம் (in Tamil). ஏசியன் எடுகேஷனல் சர்வீசஸ். p. 17.
தமிழரசர்காலத்திற் போலவே "அதிகாரம்' என்னுந் தலைமைக்காரர் பறங்கியர் காலத்திலும் நியமிக் கப்பட்டிருந்தாலும், வரியறவிடும் முக்கிய தலைமைக் காரர், இறைசுவர் (Recebedor) என்றும், அவர்களுக்குக் கீழுள்ளவர்கள் ‘தலையாரிகள்' அல்லது மேயோருல், (Mayora) என்றும் அழைக்கப்பட்டார்கள். உத்தியோகங்களெல்லாம் உயர்ந்த சாதித் தலைவர்களுக்கே கொடுக்கப்பட்டன. தமிழரசர் காலத்தில் கப்பற்படைக்கு அதிபதிகளாயிருந்த கரையார்த் தலைவருக்கும், வேளாளருக்கு உதவியதுபோல் முதலியார்ப் பட்டமுங் கண்ணியமான உத்தியோகங்களுக்கு கொடுக்கப்பட்டன.
- Gunasingam, M. Sri Lankan Tamil Nationalism, p. 58
- Ragupathy, Ponnampalam (1987). Early Settlements in Jaffna: An Archaeological Survey. University of Jaffna: Thillimalar Ragupathy. pp. 167, 210.
- K, Arunthavarajah (March 2014). "The Administration of Jaffna Kingdom – A Historical View" (PDF). International Journal of Business and Administration Research Review. University of Jaffna. 2 (3): 32.
- Bastiampillai, Bertram (1 January 2006). Northern Ceylon (Sri Lanka) in the 19th century. Godage International Publishers. p. 94. ISBN 9789552088643.
- K., Arunthavarajah (2014). "THE ADMINISTRATION OF JAFFNA KINGDOM – A HISTORICAL VIEW". International Journal of Business and Administration Research. University of Jaffna: Department of History. 2: 28–34 – via IJBARR.
- de Silva, A History of Sri Lanka, p. 117
- Abeysinghe, T. Jaffna Under the Portuguese, p. 28
- Raghavan, M. D. (1971). Tamil culture in Ceylon: a general introduction. Kalai Nilayam. p. 140.
- V. Sundaram. "Rama Sethu: Historic facts vs political fiction". News Today. Archived from the original on 23 November 2008. Retrieved 29 November 2007.
- Parker, H. Ancient Ceylon: An Account of the Aborigines and of Part of the Early Civilisation, pp. 65, 115, 148
- Gunasingam, M. Sri Lankan Tamil Nationalism, p. 62
- Codrington, Humphry William. "The Polonaruwa Kings, (1070–1215)". Lakdiva.org. Retrieved 6 December 2007.
- Gnanaprakasar, S. A critical history of Jaffna, p. 103
- Pieris, Paulus Edward (1983). Ceylon, the Portuguese era: being a history of the island for the period, 1505–1658. 1. Sri Lanka: Tisara Prakasakayo. p. 262. OCLC 12552979.
- Navaratnam, C.S. (1964). A Short History of Hinduism in Ceylon. Jaffna. pp. 43–47. OCLC 6832704.
- Gunasingam, Sri Lankan Tamil Nationalism, p. 65
- Gunasingam, Sri Lankan Tamil Nationalism, p. 66
- Wilhelm Geiger Culture of Ceylon in mediaeval times, Edited by Heinz Bechert, p. 8
- C. Rasanayagam, Ancient Jaffna: being a research into the history of Jaffna, pp. 382–383
- De Silva, Chandra Richard Sri Lanka and the Maldive Islands, p. 128
- Seneviratna, Anuradha Anusmrti: thoughts on Sinhala culture and civilization, Volume 2
- Indrapala, Karthigesu Evolution of an Ethnic Identity, (2005), p. 210
- Rōhaṇa. University of Ruhuna. 1991. p. 35.
- Wickramasinghe, Nira (2014). Sri Lanka in the Modern Age: A History. Oxford University Press. ISBN 978-0-19-025755-2.
- Gnanaprakasar, S. A critical history of Jaffna, p. 96
- Pulavar, Mātakal Mayilvākan̲ap (1995). Mātakal Mayilvākan̲ap Pulavar el̲utiya Yāl̲ppāṇa vaipavamālai (in Tamil). Intu Camaya Kalācāra Aluvalkaḷ Tiṇaikkaḷam.
- Arasaratnam, Sinnappah (1996). Ceylon and the Dutch, 1600–1800: External Influences and Internal Change in Early Modern Sri Lanka. Variorum. p. 381. ISBN 978-0-86078-579-8.
- Raghavan, M.D. (1964). India in Ceylonese History: Society, and Culture. Asia Publishing House. p. 143.
- Hussein, Asiff (2007). Sarandib: an ethnological study of the Muslims of Sri Lanka. Asiff Hussein. p. 479. ISBN 978-9559726227.
- David, Kenneth (1977). The New Wind: Changing Identities in South Asia. Walter de Gruyter. p. 195. ISBN 978-3-11-080775-2.
- Nayagam, Xavier S. Thani (1959). Tamil Culture. Academy of Tamil Culture. p. 109.
- Ramaswamy, Vijaya (2017). Historical Dictionary of the Tamils. Rowman & Littlefield. p. 184. ISBN 978-1-5381-0686-0.
- Perinbanayagam, R.S. (1982). The karmic theater: self, society, and astrology in Jaffna. University of Massachusetts Press. p. 25. ISBN 9780870233746.
- Abeysinghe, T. Jaffna Under the Portuguese, p. 4
- "Place Name of the Day: Papparappiddi". Tamilnet. Retrieved 26 February 2008.
- Tambiah, Laws and customs of Tamils of Jaffna, pp. 18–20.
- Coddrington, H. Ceylon Coins and Currency, p. 74
- Arunachalam, M. (1981). Aintām Ulakat Tamil̲ Mānāṭu-Karuttaraṅku Āyvuk Kaṭṭuraikaḷ. International Association of Tamil Research. pp. 7–158.
- Nadarajan, V. History of Ceylon Tamils, pp. 80–84
- Kunarasa, K. The Jaffna Dynasty, p. 4
- Gunasingam, M. Sri Lankan Tamil Nationalism, p. 64
- V.N. Giritharan. "Nallur Rajadhani: City Layout". Archived from the original on 25 December 2007. Retrieved 2 December 2007.
- de Silva, K. M. (2005). A History of Sri Lanka. Colombo: Vijitha Yapa. p. 782. ISBN 955-8095-92-3.
- Abeysinghe, Tikiri (2005). Jaffna under the Portuguese. Colombo: Stamford Lake. p. 66. OCLC 75481767.
- Kunarasa, K (2003). The Jaffna Dynasty. Johor Bahru: Dynasty of Jaffna King's Historical Society. p. 122. ISBN 955-8455-00-8.
- Gnanaprakasar, Swamy (2003). A Critical History of Jaffna. New Delhi: Asian Educational Services. p. 122. ISBN 81-206-1686-3.
- Pathmanathan, S (1974). The Kingdom of Jaffna:Origins and early affiliations. Colombo: Ceylon Institute of Tamil Studies. p. 27.
- Gunasingam, Murugar (1999). Sri Lankan Tamil nationalism. Sydney: MV. p. 238. ISBN 0-646-38106-7.
- Nadarajan, Vasantha (1999). History of Ceylon Tamils. Toronto: Vasantham. p. 146.
- Coddrington, H. W. (1994). Short History of Ceylon. New Delhi: AES. p. 290. ISBN 81-206-0946-8.
- Parker, H. (1909). Ancient Ceylon: An Account of the Aborigines and of Part of the Early Civilisation. London: Luzac & Co. p. 695. ISBN 9788120602083. LCCN 81-909073.
- Tambiah, H. W (2001). Laws and customs of Tamils of Jaffna (revised ed.). Colombo: Women's Education & Research Centre. p. 259. ISBN 955-9261-16-9.
- Pfaffenberg, Brian (1994). The Sri Lankan Tamils. U.S.: Westview Press. p. 247. ISBN 0-8133-8845-7.
- Mayilvakanap Pulavar, Matakal (1884). The Yalpana Vaipava Malai, or The History of the Kingdom of Jaffna (First ed.). New Delhi: Asian Educational Services. p. 146. ISBN 978-81-206-1362-1.
- Manogaran, Chelvadurai (2000). The untold story of the ancient Tamils of Sri Lanka. Chennai: Kumaran. p. 81.
- "Yarl-Paanam". Eelavar Network. Archived from the original on 22 December 2007. Retrieved 24 November 2007.
- Rasanayagam, Mudaliyar (1926). Ancient Jaffna, being a research into the History of Jaffna from very early times to the Portuguese Period. Everymans Publishers Ltd, Madras (Reprint by New Delhi, AES in 2003). p. 390. ISBN 81-206-0210-2.
- Codrington, Humphry William. "Short history of Sri Lanka:Dambadeniya and Gampola Kings (1215–1411)". Lakdiva.org. Retrieved 25 November 2007.
- Coddrington, H. W. (1996). Ceylon Coins and Currency. New Delhi: Vijitha Yapa. p. 290. ISBN 81-206-1202-7.
- Peebles, Patrick (2006). The History of Sri Lanka. United States: Greenwood Press. p. 248. ISBN 0-313-33205-3.
COVALENT BONDING NOTES
- In covalent bonds, the sharing of electrons usually occurs so that atoms can attain the electron configuration of a noble gas.
- A coordinate covalent bond is different from other covalent bonds as one atom in the bond contributes 2 electrons for the bond and the other atom does not contribute any.
- One way in which the octet rule can sometimes fail to be obeyed is within molecules in which an atom has less than a total octet of valence electrons.
- Another way is within molecules in which an atom has more than a complete octet of valence electrons. The third way is in molecules whose total number of valence electrons is an odd number.
- The strength of a covalent bond is related to the bond dissociation energy as a large bond dissociation energy corresponds to a strong covalent bond.
- Resonance structures are used to represent a covalent bond because it is another way of representing the structure of the atoms within a molecule.
- An electron dot structure is used to represent a covalent bond as it shows which atoms are involved and the number of electrons that are shared in the bond.
- Atoms form double or triple covalent bonds if they can achieve a noble gas structure by sharing two pairs or three pairs of electrons.
- A structural formula reveals that the compound it represents has covalent bonds because the dashes stand for covalent bonds and show the order of the covalently bonded atoms.
- A hydrogen bond is stronger than other intermolecular attractions because it has a larger dissociation energy than other dipole–dipole attractions.
- Scientists are conservative.
- The temperature of the Earth has increased by about 1 °C, and more than half of the Arctic ice has melted. The huge amounts of fresh water released into the northern oceans slow down the Gulf Stream current, which distributes heat.
- In arid areas there is more evaporation, which leads to drought.
- When people report that they cannot breathe, it is often due to the increase in atmospheric temperature caused by climate change.
- The advantages of powder coating are that it is more resistant to fading, chipping, scratching, and wearing than other forms of finish. The atoms and molecules bond together so that they stay close to each other.
- Electronegativity values determine the charge distribution in a polar covalent bond as the more electronegative atom attracts electrons more strongly and gains a slightly negative charge in polar covalent bonds.
- The less electronegative atom has a slightly positive charge in polar covalent bonds.
- The strengths of intermolecular attractions compare to the strengths of ionic bonds and covalent bonds as Intermolecular attractions are weaker than ionic and covalent bonds.
- The properties of covalent compounds are so diverse because of widely varying intermolecular attractions.
- When examining the structure of a carbon tetrachloride molecule, the molecule is noticed to be symmetrical so the polar bonds cancel, making it a nonpolar molecule.
- A network solid is different from most other covalent compounds because it consists of atoms covalently bonded to one another in a continuous network rather than as discrete molecules; the entire solid can be regarded as one very large molecule.
- For example, NaCl forms an ionic crystal lattice of alternating ions, while a covalent network solid such as diamond forms a comparable continuous lattice held together by covalent bonds.
- When polar molecules are placed between oppositely charged metal plates, they tend to become oriented with regard to the positive and negative plates.
- Atomic orbitals and molecular orbitals are related as atomic orbitals are properties of individual atoms, while molecular orbitals are properties of the entire molecule.
- When two atoms merge, atomic orbitals combine to produce molecular orbitals.
- Scientists use the VSEPR theory because electrons have a negative charge and repel each other; each molecule therefore assumes the shape that places its valence-electron pairs as far apart as possible.
- Orbital hybridization provides useful information for describing molecules, including the types of covalent bonds, the bonding within the molecule, and the molecular shape.
- What shape would you expect a simple carbon-containing compound to have if the carbon atom has the following hybridization?
a. sp² - Trigonal planar shape
b. sp³ - Tetrahedral shape
c. sp - Linear shape
- What is a sigma bond? Describe, with the aid of a diagram, how the overlap of two half-filled 1s orbitals produces a sigma bond.
- A sigma bond is a bond formed when two atomic orbitals merge to create a molecular orbital that is symmetrical around the axis attaching two atomic nuclei.
- Explain why compounds containing C - N and C - O single bonds can form coordinate covalent bonds with H+ but compounds containing only C - H and C - C single bonds cannot.
- Unshared electron pairs are needed to form coordinate covalent bonds. In compounds with only C-H and C-C bonds, there are no unshared pairs of electrons.
- How must the electronegativities of two atoms compare if a covalent bond between them is to be polar?
- The electronegativities of the two atoms must differ for the covalent bond to be polar, because one atom must pull the shared electrons more strongly than the other; see the sketch below.
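As a rough numerical illustration of the last two points, here is a minimal sketch; the Pauling electronegativity values and the 0.4-2.0 cutoff are common textbook assumptions, not figures taken from these notes.

```latex
% Hedged sketch: classifying the H-Cl bond by electronegativity difference
% Assumed Pauling values: EN(Cl) ~ 3.0 and EN(H) ~ 2.1
\Delta EN = EN(\mathrm{Cl}) - EN(\mathrm{H}) \approx 3.0 - 2.1 = 0.9
% A difference of roughly 0.4 to 2.0 is commonly treated as polar covalent,
% so the shared electrons shift toward Cl (slightly negative), leaving H slightly positive.
```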
Article abstract: Proudhon’s greatest activity was as a journalist and pamphleteer. Hailed by his followers as the uncompromising champion of human liberty, Proudhon voiced the discontentment of the revolutionary period of nineteenth century France.
Pierre-Joseph Proudhon was born on January 15, 1809, in the rural town of Besançon. Although the political and social climates were important influences on Proudhon’s life, the experiences he had as a child growing up in a working-class family shaped his philosophical views in even more important ways. Proudhon’s father, who was a brewer and, later, a cooper, went bankrupt because, unlike most brewers, he sold his measure of drink for a just price. Penniless after the loss of his business, Proudhon’s father was forced to move his family to a small farm near Burgille. Between the ages of eight and twelve, Proudhon worked as a cowherd, an experience which forged in him a lifelong identity with the peasant class.
Proudhon’s formal education began in 1820, when his mother arranged with the parish priest for him to attend the local college, which was the nineteenth century equivalent of high school. The stigma of poverty suddenly became very real to him when he contrasted his clothes with those of his wealthier comrades. Smarting from the insults of the other children, Proudhon protected himself from further pain by adopting a surly, sullen personality. During his fourth year at school, Proudhon read François Fénelon’s Démonstration de l’existence de Dieu (1713; A Demonstration of the Existence of God, 1713), which introduced him to the tenets of atheism. Proudhon then ceased to practice religion at the age of sixteen and began his lifelong war against the Church.
Proudhon’s life changed drastically on the eve of his graduation. Sensing that something was wrong when neither of his parents was present, Proudhon rushed home to find that his father, who had become a landless laborer, had lost everything in a last desperate lawsuit. Years later, Proudhon used his father’s inability to own farmland as the basis for his belief that society excluded the poor from the ownership of property.
At the age of eighteen, Proudhon was forced to abandon his formal education and take up a trade. He was apprenticed to the Besançon firm of the Gauthier brothers, which specialized in general theological publications. Proudhon became proud of his trade as a proofreader because it made him independent. At home among the printers, who were men of his own class, he found that he had traded the isolation of the middle-class school for the comradely atmosphere of the workshop.
The printshop also enabled Proudhon to continue his studies, in an informal way, for it was there that he developed his first intellectual passions. His budding interest in language was cultivated by a young editor named Fallot, who was the first great personal influence on Proudhon’s life. It was there too that Proudhon was introduced to the works of the utopian thinker Charles Fourier. Fourier’s position that a more efficient economy can revolutionize society from within is reflected in the anarchical doctrines of Proudhon’s greatest works.
Another lesson Proudhon learned at the printshop was that mastering a trade does not guarantee a living, as it would in a just society. His apprenticeship came to an end as a result of the Revolution of 1830, which overthrew the restored Bourbons. Although Proudhon hated to be out of work, he was infected with the spirit of revolution, which stayed with him throughout his life.
His friend Fallot persuaded Proudhon to move to Paris and apply for the Suard scholarship. During their visit to Paris, Fallot provided Proudhon with moral and financial support, because he was convinced that Proudhon had a great future ahead of him as a philosopher and a writer. When Fallot was stricken with cholera, however, Proudhon declined to accept his friend’s generosity any longer and began seeking employment in the printing houses of Paris, but to no avail. Discouraged, Proudhon left Fallot to convalesce by himself in Paris.
A turning point for Proudhon came with the publication of his book Qu’est-ce que la propriété? (1840; What Is Property?, 1876). The book was actually a showcase for his answer to this question—“Property is theft”—and it gained for Proudhon an immediate audience among those working-class citizens who had become disillusioned with Louis-Philippe, a king who clearly favored the privileged classes. Ironically, though, Proudhon was a defender of public property; he objected to the practice of drawing unearned income from rental property. This book represented a dramatic departure from the popular utopian theories embraced by most socialists of the day in that it employed economic, political, and social science as a means of viewing social problems.
Among the people who were attracted to Proudhon’s theories was Karl Marx. In 1842, Marx praised What Is Property? and met Proudhon in Paris. Since Proudhon had studied economic science in more depth than Marx had, Marx probably learned more from their meeting than did Proudhon. Two years later, though, Marx became disenchanted with Proudhon after the publication in 1846 of Proudhon’s first major work, Système des contradictions économiques: Ou, Philosophie de la misère (1846; System of Economic Contradictions, 1888).
Proudhon hoped that the Revolution of 1848 would bring his theories to fruition by deposing Louis-Philippe. He became the editor of a radical journal, Le Représentant du peuple (the representative of the people), in which he recorded one of the best eyewitness accounts of the Revolution. That same year, he was elected to the office of radical deputy. Surprisingly, Proudhon did not ally himself with the socialist Left. During his brief term in office, he voted against the resolution proclaiming the “right to work” and against the adoption of the constitution establishing the democratic Second Republic. His chief activity during his term in office was the founding of a “People’s Bank,” which would be a center of various workingmen’s associations and would overcome the scarcity of money and credit by universalizing the rate of exchange.
The feasibility of such a bank will never be known, because it was closed after only two months of operation when Proudhon’s career as a deputy came to an abrupt end. In 1849, Proudhon was arrested for writing violent articles attacking Napoleon III and was sentenced to three years in the Saint-Pelagie prison. Proudhon fled to Belgium but was promptly arrested when he returned to Paris under an assumed name to liquidate his bank, which had foundered in his absence.
Proudhon’s imprisonment was actually a fortunate experience. It afforded him ample time to study and write; he also founded a newspaper, Le Voix du peuple (the voice of the people). In Les Confessions d’un révolutionnaire (1849; the confessions of a revolutionary), written while he was in prison, Proudhon traced the history of the revolutionary movement in France from 1789 to 1849. In Idée générale de la révolution au XIXe siècle (1851; The General Idea of the Revolution in the Nineteenth Century, 1923), he appealed to the bourgeois to make their peace with the workers. La Révolution sociale démontrée par le coup d’état du 2 décembre (1852; the revolution demonstrated by the coup d’état), which was published a month after the release of Proudhon from prison, hailed the overthrow of the Second Republic as a giant step toward progress. Proudhon also proposed that anarchy was the true end of the social evolution of the nineteenth century. Because Proudhon suggested that Napoleon III should avoid making the same mistakes as Napoleon I, the book was banned by the minister of police. Still, the book created a sensation in France.
The most important event that occurred while Proudhon was in prison was his marriage to Euphrasie Piegard, an uneducated seamstress, whose management skills and resilience made her a suitable mate for a revolutionary. By marrying outside the Church, he indicated his contempt for the clergy. Marriage was good for Proudhon, and his happiness convinced him that marriage was an essential part of a just society.
The three years following Proudhon's release from prison were marked by uncertainty and fear. By the end of 1852, Napoleon III's reign was in crisis, and any writer who opposed him or the Crimean War was immediately ostracized. Proudhon's attempts to start a journal through which he could persuade the regime of Napoleon III to move to the Left against the Church were thwarted by the Jesuits. With his journalistic career at an end, Proudhon began a series of literary projects.
The year 1855 saw a significant shift in Proudhon’s philosophical outlook. He arrived at the conclusion that what was needed was not a political system under which everyone benefited but a transformation of man’s consciousness. Proudhon’s new concern with ethics resulted in his De la justice dans la révolution et dans l’église (about the justice of the revolution and the church) in 1858. This three-volume work, which ranks as one of the greatest socialist studies of the nineteenth century, attacks the defenders of the status quo, including the Catholic church.
Although the book enjoyed great success, the anger that Proudhon had exhibited in this manifesto of defiance outraged the government and the Church. Once again, Proudhon was given a fine and a prison sentence. Proudhon submitted a petition to the senate, but to no avail; he was sentenced to three years’ imprisonment and ordered to pay a fine of four thousand francs; his publisher received a fine as well. Proudhon again fled to Belgium, where he settled as a mathematics professor under the assumed name of Durfort.
Though Proudhon’s publisher refused to accept any more of his political works, Proudhon continued to write. The last of Proudhon’s great treatises, La Guerre et la paix (war and peace), appeared in 1861. This two-volume work explored Proudhon’s view that only through war could man obtain justice and settle conflicts between nations. Proudhon also held that women must serve the state only as housewives and mothers in order to ensure a strong, virile nation. In response, Proudhon was branded a reactionary, a renegade, and a warmonger by both citizens and journalists.
Proudhon was forced to flee Belgium when his opposition to the nationalist movement, which he had expressed in various newspaper articles, created a furor. A large segment of readers objected to a statement in one of these articles that seemed to favor the annexation of Belgium by France.
After returning to France, Proudhon threw himself into his work, producing four books in only two years. This final burst of creativity was his last attempt to persuade the workers to abstain from political activity, while the imperial administration continued to distort the workings of universal suffrage. La Fédération de l’unité en Italie (1863; the federal principle and the unity of Italy) contains what is considered by many to be the best explanation of the federal principle that has ever been written. De la capacité politique des classes ouvrières (1863; of the political capacity of the working classes), inspired by the workers’ refusal to support the candidates of the Second Empire in the legislative election of 1863, reflects Proudhon’s new confidence in the proletariat. He now believed that the workers could be a viable force for achieving mutualism.
Although Proudhon’s mental faculties remained sharp, his health deteriorated rapidly in the last two years of his life. He died of an undetermined illness on January 19, 1865.
Pierre-Joseph Proudhon was a radical thinker who was incapable of identifying completely with any single political ideology. Early in his career, Proudhon was a revolutionary who denounced the established political and economic institutions. As he grew older, he began to absorb some of those bourgeois values that he had scorned in his youth, such as the importance of the family and the inheritance of property. Thus, he is best described as a man of contradictions, a radical, a realist, and a moralist. In fact, he was viewed as a dissenter by other dissenters of the day: liberals, democrats, and republicans, as well as his fellow socialists.
Proudhon’s influence on French politics extended well into the twentieth century. In the Paris Commune of 1871, Proudhon’s political views carried more weight than did those of Marx. By the end of the nineteenth century, however, Proudhon’s teachings seem to have been overshadowed by the Marxists. Through anarchism, Proudhon’s influence was transferred to revolutionary syndicalism, which dominated French trade unionism into the twentieth century. The syndicalists favored a violent approach to the class struggle and employed the general strike as a weapon. Just before World War II, though, French trade unionism turned away from Proudhon as it began to cater to various political factions.
Brogan, D. W. Proudhon. London: H. Hamilton, 1934. A short but complete biography which includes summaries and critiques of Proudhon’s work. The first half of the book does an excellent job of outlining those influences which shaped him as a writer and a thinker.
Dillard, Dudley. “Keynes and Proudhon.” Journal of Economic History, May, 1942: 63-76. A fine introduction to Proudhon’s economic and political philosophy. In his comparison between Proudhon and J. M. Keynes, who seems to have formulated his theories after Proudhon’s, Dillard highlights the most important points in Proudhon’s work, thereby clarifying some of Proudhon’s more difficult concepts for the average reader.
Hall, Constance Margaret. The Sociology of Pierre Joseph Proudhon, 1809-1865. New York: Philosophical Library, 1971. A penetrating analysis of Proudhon’s political philosophy and the effects it had on nineteenth century France. The brief biographical sketch in the beginning of the volume is an excellent introduction to Proudhon’s life and times.
Ritter, Alan. The Political Thought of Pierre-Joseph Proudhon. Princeton, N.J.: Princeton University Press, 1969. An in-depth study of Proudhon’s political views which explains the historical events that spawned his ideas and describes how Proudhon’s theories have been interpreted in various times. Also demonstrates how Proudhon attempted to integrate revolutionary, realistic, and moral concepts into a cohesive political theory.
Schapiro, J. Salwyn. “Pierre Joseph Proudhon, Harbinger of Fascism.” American Historical Review, July, 1945: 714-737. Theorizes that Proudhon was an intellectual forerunner of Fascism. Concentrates primarily on those radical elements of Proudhon’s works which seem to have influenced National Socialism. Also contains a brief but useful sketch of Proudhon’s life.
Woodcock, George. Pierre-Joseph Proudhon. New York: Macmillan, 1956. A standard biography of Proudhon’s life, combining voluminous details of his personal life with a discussion and critique of his writings and philosophical views. Provides invaluable insights into the turbulent historical period of which Proudhon was a product and shows the role that he played as a catalyst in these events. Emphasizes Proudhon’s willingness to suffer as a result of his devotion to his principles.
A polarimeter is an instrument that measures the rotation of plane-polarized light.
A racemic mixture, containing equal quantities of the two enantiomers, is not optically active.
Even though the absolute configuration of either molecule may not be known, the relationship between the two configurations is determined by experimentation.
The pure enantiomers are separated from a racemic mixture.
A resolving agent is required for resolution.
A chromatographic column is used to separate enantiomers.
A stereocenter is an atom at which the interchange of two groups gives rise to stereoisomers.
The most common stereocenters are asymmetric carbon atoms and the double-bonded carbon atoms of cis-trans isomers, both of which give rise to stereoisomers.
Stereoisomers are isomers whose atoms are bonded in the same order but differ in how they are oriented in space.
Two molecules are superimposable if, when they are placed on top of each other, the three-dimensional positions of all their atoms coincide.
Each skill is followed by problem numbers.
Draw all the stereoisomers of the structure.
Draw their mirror images and classify them as achiral or chiral.
Any mirror planes of symmetry can be identified and drawn.
The stereochemistry of compounds with one or more asymmetric carbon atoms can be represented by the use of Fischer projections.
Explain the physical and chemical properties of pairs of enantiomers, diastereomers, and meso compounds.
There are four structures that are active.
The following projections can be converted to perspective formulas.
There are stereochemical relationships between structures.
Same compound, structural isomers, enantiomers, diastereomers are examples.
For each structure, draw the enantiomer.
The following samples were taken at 25 °C.
The sample is dissolved in 20.0 liters of alcohol.
The solution is placed in a tube.
The rotation is counterclockwise.
0.050 g of sample is dissolved in 2.0 mL of alcohol and placed in a tube.
The rotation is clockwise.
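These problems use the standard specific-rotation formula. The sketch below is hedged: it borrows the 0.050 g in 2.0 mL concentration from the last problem, but the tube length and the observed rotation used here are assumed values that the notes do not state.

```latex
% Specific rotation (standard definition):
%   [alpha]_D = alpha_observed / (c * l)
% where c = concentration in g/mL and l = path length of the tube in decimeters.
%
% Assumed illustration: c = 0.050 g / 2.0 mL = 0.025 g/mL,
% l = 1.0 dm (assumed), observed rotation = +1.0 degrees clockwise (assumed).
[\alpha]_D = \frac{+1.0^{\circ}}{(0.025\ \mathrm{g/mL})(1.0\ \mathrm{dm})} = +40^{\circ}
```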
Bromine is introduced at the benzylic position next to the aromatic ring by free-radical bromination of the following compound.
Two stereoisomers result if the reaction stops at the monobromination stage.
There is a mechanism to show why free-radical halogenation occurs at the benzylic position.
There are two stereoisomers that result from monobromination at the benzylic position.
Will the two stereoisomers have the same physical properties?
This is a difficult problem if you think you know your definitions.
There are two structures and one pair of enantiomers.
C3 is not asymmetric nor is it a chiral center, yet it is a stereocenter.
Show how C3 is nevertheless a stereocenter.
A graduate student was studying the chemistry of cyclohexanones.
She was surprised to find that the product she obtained was optically active.
She repurified the product to make sure there were no contaminants.
The product was still optically active.
The name implies that it can be degraded to d-(+)-glyceraldehyde.
The name implies that d-(-)-erythrose is active.
The absolute configuration of d-(+)-glyceraldehyde is determined by knowing the absolute configuration of d-(-)-erythrose.
We don't have to imagine all possible diastereomers of the compound in order to apply the working definition.
The working definition is not as complete as the original one.
Name alkyl halides, explain their physical properties, and describe their common uses.
Use the properties and reactions of alkyl halides to predict the products of their reactions.
Substitution and elimination are two of the most important types of reactions that alkyl halides undergo.
Stereochemistry will play a major role in the study of these reactions.
Many other reactions show similarities to substitution and elimination, and the techniques introduced in this chapter will be used throughout our study of organic reactions. Halogen-containing organic compounds are not very common in nature.
There are three major classes of halogen-containing organic compounds: alkyl halides, vinyl halides, and aryl halides; some are found in plants and venoms.
In a vinyl halide, the halogen (such as chlorine or bromine) is bonded to an sp² hybrid carbon atom of an alkene; in an aryl halide it is bonded to an sp² hybrid carbon atom of an aromatic ring.
The chemistry of vinyl halides and aryl halides is different from the chemistry of alkyl halides.
In later chapters, we consider the reactions of vinyl halides and aryl halides.
The structures of some alkyl halides, vinyl halides, and aryl halides are shown here.
The carbon-halogen bond in an alkyl halide is polar because the halogen atom is more electronegative than the carbon atom.
A lot of reactions of alkyl halides result from breaking this bond.
The electrostatic potential map (EPM) of chloromethane shows higher electron density around the chlorine atom and lower electron density around the carbon and hydrogen atoms.
This electron-poor carbon atom of chloromethane can be attacked by a nucleophile.
The bonding pair of electrons with the halogen atom can leave as a halide ion.
Serving as a leaving group, the halogen can be eliminated from the alkyl halide, or it can be replaced by a wide variety of functional groups; the polar bond is seen in the EPM as an electron-rich region around chlorine.
Because of this versatility, alkyl halides can be used as intermediates in the synthesis of many other functional groups.
There are two ways of naming alkyl halides.
Appendix 5 contains more information.
Common names for halomethanes are not related to their structures.
The structures of the compounds should be given.
Alkyl halides, such as butyl bromide, are classified according to the nature of the carbon atom bonded to the halogen.
Alkyl halides are used as household and industrial solvents.
Because chloroform is toxic and causes cancer, dry cleaners have replaced it with 1,1,1-trichloroethane and other solvents.
Methylene chloride was used to dissolve the caffeine out of coffee beans at room temperature to produce decaffeinated coffee.
Concerns about the safety of coffee with residual traces of methylene chloride prompted coffee producers to use supercritical carbon dioxide instead.
Phosgene is extremely toxic because it reacts with and deactivates many biological molecules.
Chloroform is more dangerous than methylene chloride.
A small amount of alcohol has been replaced by methylene chloride in some industrial degreasers and paint removers.
Methylene chloride is the safest halogenated solvent.
Alkyl halides are used as starting materials for making more complex molecules.
An important tool for organic synthesis is the conversion of alkyl halides to organometallic reagents.
Section 10-8 discusses the formation of organometallic compounds.
Chloroform opened up the field of general anesthesia, in which the patient is unconscious and relaxed, in the 1840s.
Chloroform is toxic and can cause cancer, so it was abandoned in favor of safer anesthetics.
The trade name for a less toxic halogenated anesthetic is Halothane.
Ethyl chloride can be used as a topical anesthetic to help with minor procedures.
The numbing effect is enhanced when it is sprayed on the skin.
There are other alkyl halides and haloethers that are less toxic than chloroform, and they are still among the most common anesthetics.
There are some recent ones.
Their structures are compared with one another.
People who were working or sleeping near leaking refrigerators were often killed by ammonia.
At one time, freon-12(r) was the most widely used refrigerant.
Earth's protective ozone layer was destroyed by freons released into the atmosphere.
The chlorine atoms released from freons catalyze the decomposition of ozone into oxygen.
The "hole" in the ozone layer that has been detected over the South Pole is blamed on the freon-catalyzed depletion of ozone.
Chapter 4 shows the extent of the ozone hole over the South Pole.
The future production and use of these halogenated compounds has been limited because of their effect on the ozone layer.
Low-boiling compounds, many of them carbon dioxide or hydrocarbons, have replaced freon-12 in aerosol cans.
In refrigeration, the insoluble and unreactive freon-12 has been replaced by CHClF-type compounds.
There are some strains of soil bacteria that can break these compounds down before they reach the stratosphere.
HCFC-123 is used as a substitute for freon-11 in making plastic foams.
Some of these bacteria use freon-134a as their source of food.
The use of alkyl halides has contributed to human health.
People have died from diseases carried by mosquitoes, fleas, and other vermin.
The bubonic plague wiped out a third of the population of Europe in the Middle Ages.
People could not survive mosquito-borne diseases such as malaria, yellow fever, and sleeping sickness in the entire regions of Africa and tropical America.
These compounds are just as bad for birds, animals, and people as they are for insects.
Their use is hazardous, but still preferable to death by disease or starvation.
The war against insects changed in 1939 after the discovery of DDT.
An ounce of DDT is required to kill a person, but the same amount of pesticides protects an acre of land.
The mosquitoes and tsetse flies are carrying diseases.
Many inventions show side effects.
One such side effect is the long-term persistence of the pesticide.
There are pesticides in the environment.
Decreases in several species have been caused by pesticide concentrations that build up in wildlife.
In 1972, the use of DDT, the first chlorinated pesticide, was banned by the U.S. Environmental Protection Agency.
Because chlorinated pesticides accumulate in the environment, they are now rarely used.
Lindane is used to control body lice.
Many other pesticides have been developed.
Toxic effects are produced in wildlife when they accumulate in the environment.
If they are applied correctly, others can be used.
Chlorinated pesticides are rarely used in agriculture.
They are usually used to protect life or property.
Lindane and chlordane were once used to protect wooden buildings from termites.
Typhus, trench fever, and bubonic plague are some of the diseases that body lice transmit.
During World War II, X bond is divided between Russia and the United States.
The bond dipole moments add as vectors and can sum to zero in a symmetric molecule.
The larger halogens have longer bonds but lower electronegativities.
The bond angles and other factors that vary with a specific molecule are what make it difficult to predict the dipole moments.
The table lists the measured dipole moments of the halomethanes.
The carbon tetrahalides have four symmetrically oriented polar bonds, so their bond dipole moments cancel to give a molecular dipole moment of zero.
Predict which compound has the higher dipole moment and explain your reasoning.
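For the dipole-moment questions above, a hedged sketch of the usual estimate follows; the 4.8 factor is the standard conversion to debye, while the charge separation and bond length used here are assumed illustrative values, not data taken from these notes.

```latex
% Approximate bond dipole moment in debye (D):
%   mu ~ 4.8 * delta * d
% where delta = charge separation in units of the electronic charge
% and d = bond length in angstroms.
%
% Assumed illustration for a C-Cl bond: delta ~ 0.2, d ~ 1.8 angstroms
\mu \approx 4.8 \times 0.2 \times 1.8 \approx 1.7\ \mathrm{D}
```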
The boiling points of alkyl halides are influenced by two types of intermolecular forces.
The strongest attraction in alkyl halides is the London force.
Higher boiling points are caused by larger London attractions.
The boiling points are also affected by the dipole-dipole attractions of the polar C-X bond.
Molecules with higher weights have higher boiling points because they are slower moving and have more surface area.
The surface areas of the alkyl halides are different from the surface areas of the halogens.
We can figure out the relative surface areas of halogen atoms by looking at their van der Waals radii.
The boiling point of butyl fluoride is 33 °C.
The influence of chlorine's larger surface area is shown by butyl chloride.
The heavier halogens have larger surface areas.
The boiling points of the ethyl halides increase in order.
As a result of their smaller surface areas, compounds with branched, more spherical shapes have lower boiling points.
The boiling point of the branched isomer, tert-butyl bromide, is 73 °C.
This effect is similar to what we saw with alkanes.
Predict which compound has the higher boiling point for each pair of compounds.
If your prediction was correct, you need to explain why the compound has a higher boiling point.
The densities of alkyl halides are listed in Table 6-2.
Their densities are similar to their boiling points.
Alkyl fluorides and alkyl chlorides are less dense than water.
Alkyl chlorides with two or more chlorine atoms are denser than water.
The best methods of preparing alkyl halides exploit the chemistry of other functional groups.
We review free-radical halogenation and summarize other, more useful, methods of making alkyl halides.
Subsequent chapters discuss these other syntheses.
Although we discussed its mechanism at length in Section 4-3, free-radical halogenation is not an effective method for the synthesis of alkyl halides.
There are different kinds of hydrogen atoms that can be abstracted.
More than one halogen atom can substitute, giving multiple substitution products.
A messy mixture of products can be caused by the chlorination of propane.
In industry, free-radical halogenation can be useful because the reagents are cheap, the mixture of products can be separated, and each of the individual products is sold separately.
A good yield of one product is required in a laboratory.
In the laboratory, free-radical halogenation is rarely used.
The following examples show the types of compounds that can be used in a laboratory.
All the hydrogen atoms in cyclohexane are the same, and free-radical chlorination gives a usable yield.
The formation of dichlorides and trichlorides can be avoided, but only by using a small amount of chlorine and an excess of cyclohexane.
Free-radical bromination gives good yields of products with one type of hydrogen atom that is more reactive than the others.
A tertiary hydrogen atom is preferentially given a tertiary free radical by isobutane.
We rarely use free-radical halogenation in the laboratory because it tends to be plagued by mixtures of products.
In most cases, free-radical bromination of alkenes can be done in a highly selective manner.
The charge or radical can be delocalized by resonance with the double bond.
A resonance-stabilized primary allylic radical can be formed with less energy than a typical secondary radical.
The most stable radical is the only one that can be formed by bromination.
The allylic radical is the most stable of the radicals that can be formed.
Consider the free-radical bromination of cyclohexene.
Under the right conditions, free-radical bromination of cyclohexene can give a good yield of 3-bromocyclohexene, where bromine has substituted for an allylic hydrogen on the carbon atom next to the double bond.
The mechanism is similar to others.
A bromine radical abstracts an allylic hydrogen atom to give a resonance-stabilized allylic radical.
This radical reacts with Br2 to regenerate a bromine radical, which continues the chain reaction.
The general mechanism for allylic bromination shows that either end of the radical can give products.
One of the products has the bromine atom in the same position as the hydrogen atom.
The carbon atom that bears the radical in the second resonance form of the allylic radical results in the other product.
A large concentration of bromine is not good for allylic bromination because it can add to the double bond.
There is no need for another bromine because most samples contain traces of Br2 to start the reaction.
Chapter 15 contains more detail on Allylic and benzylic halogenations.
The two products arise as a result of the resonance-stabilized intermediate.
A summary of the most important methods of making alkyl halides.
Many of them are more useful than free-radical halogenation.
The methods are not discussed until later in the text.
They are listed here so that you can use this summary for reference throughout the course.
Many other functional groups are easily converted to alkyl halides.
An alkene is formed when H and X are lost from the alkyl halide.
In the elimination, the reagent B:- reacts as a base, abstracting a protons from the alkyl halide.
Depending on the alkyl halide and the reaction conditions, most nucleophiles can engage in either substitution or elimination.
substitution and elimination reactions compete with each other.
alkyl halides are our first examples of substitution and eliminations in Chapter 7.
Eliminating and substituting other types of compounds are encountered in later chapters.
The leaving group and the nucleophile are identified.
There are eliminations in Chapter 7.
The leaving halide ion is called X.
The reaction of iodomethane with hydroxide ion is an example.
Methanol is the product.
The carbon atom of iodomethane has a polar bond to the electronegative iodine atom.
The carbon atom has a partial positive charge when the electron density is drawn away from it.
The partial positive charge attracts the negative charge.
The back side of the carbon atom is attacked by Hydroxide ion, donating a pair of electrons to form a new bond.
Curved arrows show the movement of electron pairs from the electron-rich nucleophile to the electron-poor carbon atom.
As the carbon-oxygen bond begins to form, the carbon-iodine bond must break because carbon can only accommodate eight electrons.
Iodide ion leaves with a pair of electrons that once bond it to the carbon atom.
The one-step mechanism is supported by kinetic information.
We can change the concentrations of the reactants and observe the effects on the reaction rate.
The reaction is first in each of the reactants and second overall.
The rate equation is consistent with a mechanism that requires a collision between a molecule and an ion.
Both species are present in the transition state, and the collision frequencies are proportional to concentrations.
Bimolecular reactions have rate equations that are second order.
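A sketch of the second-order rate law this describes, written for the iodomethane and hydroxide example used earlier; the rate constant k is symbolic, and no numerical value is implied by these notes.

```latex
% Bimolecular (SN2) rate law: first order in each reactant, second order overall
\text{rate} = k\,[\mathrm{CH_3I}]\,[\mathrm{OH^-}]
% Doubling either concentration doubles the rate; doubling both quadruples it.
```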
In the transition state, the bond to the nucleophile is partially formed and the bond to the leaving group is partially broken; this makes the transition state unstable.
A transition state is not a molecule that can be isolated; it exists only for an instant.
There is only one length of time shown in the diagram.
The transition state is only one energy maximum.
As the leaving group departs, it carries the negative charge with it.
The transition state involves a five-coordinate carbon atom with two partial bonds.
The Key Mechanism shows a reaction.
A transition state is created when a bond to the nucleophile is forming at the same time as the bond to the leaving group is breaking.
The reaction takes place in a single step.
The leaving group was forced to leave by a strong nucleophile.
The order of reactivity is CH3X > primary > secondary, with tertiary alkyl halides essentially unreactive.
The reaction of 1-bromobutane with sodium methoxide gives 1-methoxybutane.
1-methoxybutane can be formed at a rate of 0.05 mol/L per second under certain conditions.
Consider the reaction of 1-bromobutane with ammonia.
Draw the reactants and transition state.
The initial product is the salt of an amine, which is deprotonated by the excess ammo nia to give the amine.
A different combination of an alkoxide and an alkyl bromide is used to make 1-methoxybutane.
The SN2 mechanism has many useful reactions.
The alcohol comes from the reaction of an alkyl halide with a hydroxide ion.
A wide variety of functional groups can be converted by other nucleophiles.
The table summarizes some of the types of compounds that can be formed.
Alkyl iodides and fluorides are more difficult to synthesize than alkyl chlorides and bromides.
Iodide is a good nucleophile, and many alkyl chlorides react with it to give alkyl iodides.
Alkyl fluorides are difficult to synthesize directly; they are often made by treating alkyl chlorides or bromides with a fluoride salt and a crown ether, which enhances the normally weak nucleophilic nature of the fluoride ion.
The factors that affect the rates and products of organic reactions will be studied using the SN2 reaction as an example.
The type of solvent used, as well as the nucleophile and the alkyl halide, are important.
The first thing we do is consider what makes a good nucleophile.
A "stronger" nucleophile is an ion or molecule that reacts faster in the SN2 reac tion than a "weaker" nucleophile under the same conditions.
A strong nucleophile is more effective than a weak one at attacking the carbon atom.
Methanol and methoxide ion have very similar pairs of nonbonding electrons, but methoxide ion reacts more than one million times faster than methanol.
A species with a negative charge is a stronger nucleophile than a neutral one.
Nonbonding electrons are readily available for bonding.
The negative charge is shared by the oxygen of methoxide ion and the halide leaving group.
The transition state has a partial negative charge on the halide but a partial positive charge on the methanol oxygen atom.
It is possible to say that a base is always a stronger nucleophile than its conjugate acid.
Canadian Charter of Rights and Freedoms
The Canadian Charter of Rights and Freedoms (French: La Charte canadienne des droits et libertés), in Canada often simply the Charter, is a bill of rights entrenched in the Constitution of Canada. It forms the first part of the Constitution Act, 1982. The Charter guarantees certain political rights to Canadian citizens and civil rights of everyone in Canada from the policies and actions of all areas and levels of the government. It is designed to unify Canadians around a set of principles that embody those rights. The Charter was signed into law by Queen Elizabeth II of Canada on April 17, 1982 along with the rest of the Act.
The Charter was preceded by the Canadian Bill of Rights, which was enacted in 1960. However, the Bill of Rights is only a federal statute, rather than a constitutional document. As a federal statute, it can be amended through the ordinary legislative process and has no application to provincial laws. The Supreme Court of Canada also narrowly interpreted the Bill of Rights and the Court was reluctant to declare laws inoperative. The relative ineffectiveness of the Canadian Bill of Rights motivated many to improve rights protections in Canada. The movement for human rights and freedoms that emerged after World War II also wanted to entrench the principles enunciated in the Universal Declaration of Human Rights. The British Parliament formally enacted the Charter as a part of the Canada Act 1982 at the request of the Parliament of Canada in 1982, the result of the efforts of the government of Prime Minister Pierre Trudeau.
One of the most notable effects of the adoption of the Charter was to greatly expand the scope of judicial review, because the Charter is more explicit with respect to the guarantee of rights and the role of judges in enforcing them than was the Bill of Rights. The courts, when confronted with violations of Charter rights, have struck down unconstitutional federal and provincial statutes and regulations or parts of statutes and regulations, as they did when Canadian case law was primarily concerned with resolving issues of federalism. The Charter, however, granted new powers to the courts to enforce remedies that are more creative and to exclude more evidence in trials. These powers are greater than what was typical under the common law and under a system of government that, influenced by Canada's parent country the United Kingdom, was based upon Parliamentary supremacy. As a result, the Charter has attracted both broad support from a majority of the Canadian electorate and criticisms by opponents of increased judicial power. The Charter only applies to government laws and actions (including the laws and actions of federal, provincial, and municipal governments and public school boards), and sometimes to the common law, not to private activity.
Under the Charter, people physically present in Canada have numerous civil and political rights. Most of the rights can be exercised by any legal person (the Charter does not define the corporation as a "legal person"), but a few of the rights belong exclusively to natural persons, or (as in sections 3 and 6) only to citizens of Canada. The rights are enforceable by the courts through section 24 of the Charter, which allows courts discretion to award remedies to those whose rights have been denied. This section also allows courts to exclude evidence in trials if the evidence was acquired in a way that conflicts with the Charter and might damage the reputation of the justice system. Section 32 confirms that the Charter is binding on the federal government, the territories under its authority, and the provincial governments. The rights and freedoms enshrined in the Charter include:
Preceding all of the freedoms and forming the basis of the Charter, the very first section, known as the limitations clause, allows governments to justify certain infringements of Charter rights. Every case in which a court discovers a violation of the Charter would therefore require a section 1 analysis to determine if the law can still be upheld. Infringements are upheld if the purpose for the government action is to achieve what would be recognized as an urgent or important objective in a free society, and if the infringement can be "demonstrably justified." Section 1 has thus been used to uphold laws against objectionable conduct such as hate speech (e.g., in R. v. Keegstra) and obscenity (e.g., in R. v. Butler). Section 1 also confirms that the rights listed in the Charter are guaranteed.
In addition, some of these rights are also subjected to the notwithstanding clause (section 33). The notwithstanding clause authorizes governments to temporarily override the rights and freedoms in sections 2 and 7–15 for up to five years, subject to renewal. The Canadian federal government has never invoked it, and some have speculated that its use would be politically costly. In the past, the notwithstanding clause was invoked routinely by the province of Quebec (which did not support the enactment of the Charter but is subject to it nonetheless). The provinces of Saskatchewan and Alberta have also invoked the notwithstanding clause, to end a strike and to protect an exclusively heterosexual definition of marriage, respectively. (Note that Alberta's use of the notwithstanding clause is of no force or effect, since the definition of marriage is federal not provincial jurisdiction.) The territory of Yukon also passed legislation once that invoked the notwithstanding clause, but the legislation was never proclaimed in force.
Generally, the right to participate in political activities and the right to a democratic form of government are protected:
- Section 3: the right to vote and to be eligible to serve as a member of a legislature.
- Section 4: the maximum duration of legislatures is set at five years.
- Section 5: an annual sitting of legislatures is required as a minimum.
- Section 6: protects the mobility rights of Canadian citizens which include the right to enter, remain in, and leave Canada. Citizens and Permanent Residents have the ability to move to and take up residence in any province to pursue gaining a livelihood.
Rights of people in dealing with the justice system and law enforcement are protected, namely:
- Section 7: right to life, liberty, and security of the person.
- Section 8: freedom from unreasonable search and seizure.
- Section 9: freedom from arbitrary detention or imprisonment.
- Section 10: right to legal counsel and the guarantee of habeas corpus.
- Section 11: rights in criminal and penal matters such as the right to be presumed innocent until proven guilty.
- Section 12: right not to be subject to cruel and unusual punishment.
- Section 13: rights against self-incrimination
- Section 14: rights to an interpreter in a court proceeding.
- Section 15: equal treatment before and under the law, and equal protection and benefit of the law without discrimination.
Generally, people have the right to use either the English or French language in communications with Canada's federal government and certain provincial governments. Specifically, the language laws in the Charter include:
- Section 16: English and French are the official languages of Canada and New Brunswick.
- Section 16.1: the English and French-speaking communities of New Brunswick have equal rights to educational and cultural institutions.
- Section 17: the right to use either official language in Parliament or the New Brunswick legislature.
- Section 18: the statutes and proceedings of Parliament and the New Brunswick legislature are to be printed in both official languages.
- Section 19: both official languages may be used in federal and New Brunswick courts.
- Section 20: the right to communicate with and be served by the federal and New Brunswick governments in either official language.
- Section 21: other constitutional language rights outside the Charter regarding English and French are sustained.
- Section 22: existing rights to use languages besides English and French are not affected by the fact that only English and French have language rights in the Charter. (Hence, if there are any rights to use Aboriginal languages anywhere they would continue to exist, though they would have no direct protection under the Charter.)
Minority language education rights
- Section 23: rights for certain citizens belonging to French and English speaking minority communities to be educated in their own language.
Various provisions help to clarify how the Charter works in practice. These include:
- Section 25: states that the Charter does not derogate existing Aboriginal rights and freedoms. Aboriginal rights, including treaty rights, receive more direct constitutional protection under section 35 of the Constitution Act, 1982.
- Section 26: clarifies that other rights and freedoms in Canada are not invalidated by the Charter.
- Section 27: requires the Charter to be interpreted in a multicultural context.
- Section 28: states all Charter rights are guaranteed equally to men and women.
- Section 29: confirms the rights of religious schools are preserved.
- Section 30: clarifies the applicability of the Charter in the territories.
- Section 31: confirms that the Charter does not extend the powers of legislatures.
Finally, Section 34: states that Part I of the Constitution Act, 1982, containing the first 34 sections of the Act, may be collectively referred to as the "Canadian Charter of Rights and Freedoms".
Many of the rights and freedoms that are protected under the Charter, including the rights to freedom of speech, habeas corpus and the presumption of innocence, have their roots in a set of Canadian laws and legal precedents sometimes known as the Implied Bill of Rights. Many of these rights were also included in the Canadian Bill of Rights, which the Canadian Parliament enacted in 1960. However, the Canadian Bill of Rights had a number of shortcomings. Unlike the Charter, it was an ordinary Act of Parliament, which could be amended by a simple majority of Parliament, and it was applicable only to the federal government. The courts also chose to interpret the Bill of Rights conservatively, only on rare occasions applying it to find a contrary law inoperative. The Bill of Rights did not contain all of the rights that are now included in the Charter, omitting, for instance, the right to vote and freedom of movement within Canada.
The centennial of Canadian Confederation in 1967 aroused greater interest within the government in constitutional reform. Said reforms would include improving safeguards of rights, as well as patriation of the Constitution, meaning the British Parliament would no longer have to approve constitutional amendments. Subsequently, Attorney General Pierre Trudeau appointed law professor Barry Strayer to research a potential bill of rights. While writing his report, Strayer consulted with a number of notable legal scholars, including Walter Tarnopolsky. Strayer's report advocated a number of ideas that were later incorporated into the Charter, including protection for language rights. Strayer also advocated excluding economic rights. Finally, he recommended allowing for limits on rights. Such limits are included in the Charter's limitation and notwithstanding clauses. In 1968, Strayer was made the Director of the Constitutional Law Division of the Privy Council Office and in 1974 he became Assistant Deputy Minister of Justice. During those years, Strayer played a role in writing the bill that was ultimately adopted.
Meanwhile, Trudeau, who had become Liberal leader and prime minister in 1968, still very much wanted a constitutional bill of rights. The federal government and the provinces discussed creating one during negotiations for patriation, which resulted in the Victoria Charter in 1971. This never came to be implemented. However, Trudeau continued with his efforts to patriate the Constitution, and promised constitutional change during the 1980 Quebec referendum. He would succeed in 1982 with the passage of the Canada Act 1982. This enacted the Constitution Act, 1982.
The inclusion of a charter of rights in the Constitution Act was a much-debated issue. Trudeau spoke on television in October 1980, and announced his intention to constitutionalize a bill of rights that would include fundamental freedoms, democratic guarantees, freedom of movement, legal rights, equality and language rights. He did not want a notwithstanding clause. While his proposal gained popular support, provincial leaders opposed the potential limits on their powers. The federal Progressive Conservative opposition feared liberal bias among judges, should courts be called upon to enforce rights. Additionally, the British Parliament cited their right to uphold Canada's old form of government. At a suggestion of the Conservatives, Trudeau's government thus agreed to a committee of Senators and MPs to further examine the bill of rights as well as the patriation plan. During this time, 90 hours were spent on the bill of rights alone, all filmed for television, while civil rights experts and advocacy groups put forward their perceptions on the Charter's flaws and omissions and how to remedy them. As Canada had a parliamentary system of government, and as judges were perceived not to have enforced rights well in the past, it was questioned whether the courts should be named as the enforcers of the Charter, as Trudeau wanted. Conservatives argued that elected politicians should be trusted instead. It was eventually decided that the responsibility should go to the courts. At the urging of civil libertarians, judges could even now exclude evidence in trials if acquired in breach of Charter rights in certain circumstances, something the Charter was not originally going to provide for. As the process continued, more features were added to the Charter, including equality rights for people with disabilities, more sex equality guarantees and recognition of Canada's multiculturalism. The limitations clause was also reworded to focus less on the importance of parliamentary government and more on justifiability of limits in free societies; the latter logic was more in line with rights developments around the world after World War II.
In its decision in the Patriation Reference (1981), the Supreme Court of Canada had ruled there was a tradition that some provincial approval should be sought for constitutional reform. As the provinces still had doubts about the Charter's merits, Trudeau was forced to accept the notwithstanding clause to allow governments to opt out of certain obligations. The notwithstanding clause was accepted as part of a deal called the Kitchen Accord, negotiated by the federal Attorney General Jean Chrétien, Ontario's justice minister Roy McMurtry and Saskatchewan's justice minister Roy Romanow. Pressure from provincial governments (which in Canada have jurisdiction over property) and from the country's left wing, especially the New Democratic Party, also prevented Trudeau from including any rights protecting private property.
Nevertheless, Quebec did not support the Charter (or the Canada Act 1982), with "conflicting interpretations" as to why. The opposition could have owed to the Parti Québécois leadership being allegedly uncooperative, because it was more committed to gaining sovereignty for Quebec. It could have owed to Quebec leaders being excluded from the negotiation of the Kitchen Accord, which they saw as being too centralist. It could have owed to provincial leaders' objections to the Accord's provisions relating to the process of future constitutional amendment. They also opposed the inclusion of mobility rights and minority language education rights. The Charter is still applicable in Quebec because all provinces are bound by the Constitution. However, Quebec's opposition to the 1982 patriation package has led to two failed attempts to amend the Constitution (the Meech Lake Accord and Charlottetown Accord) which were designed primarily to obtain Quebec's political approval of the Canadian constitutional order.
While the Canadian Charter of Rights and Freedoms was adopted in 1982, it was not until 1985 that the main provisions regarding equality rights (section 15) came into effect. The delay was meant to give the federal and provincial governments an opportunity to review pre-existing statutes and strike potentially unconstitutional inequalities.
The typography of the physical document, which is still distributed today, was typeset by Ottawa's David Berman intentionally in Carl Dair's Cartier typeface: at the time the most prominent Canadian typeface, having been commissioned by the Governor-General as a celebration of Canada's centenary in 1967.
The Charter has been amended since its enactment. Section 25 was amended in 1983 to explicitly recognize more rights regarding Aboriginal land claims, and section 16.1 was added in 1993. A proposed Rights of the Unborn Amendment in 1986–1987, which would have enshrined fetal rights, failed in the federal Parliament. Other proposed amendments to the Constitution, included in the Charlottetown Accord of 1992, were never passed. These amendments would have specifically required the Charter to be interpreted in a manner respectful of Quebec's distinct society, and would have added further statements to the Constitution Act, 1867 regarding racial and sexual equality and collective rights, and about minority language communities. Though the Accord was negotiated among many interest groups, the resulting provisions were so vague that Trudeau, then out of office, feared they would actually conflict with and undermine the Charter's individual rights. He felt judicial review of the rights might be undermined if courts had to favour the policies of provincial governments, as governments would be given responsibility over linguistic minorities. Trudeau thus played a prominent role in leading the popular opposition to the Accord.
Interpretation and enforcement
The task of interpreting and enforcing the Charter falls to the courts, with the Supreme Court of Canada being the ultimate authority on the matter.
With the Charter's supremacy confirmed by section 52 of the Constitution Act, 1982, the courts continued their practice of striking down unconstitutional statutes or parts of statutes as they had with earlier case law regarding federalism. However, under section 24 of the Charter, courts also gained new powers to enforce creative remedies and exclude more evidence in trials. Courts have since made many important decisions, including R. v. Morgentaler (1988), which struck down Canada's abortion law, and Vriend v. Alberta (1998), in which the Supreme Court found the province's exclusion of homosexuals from protection against discrimination violated section 15. In the latter case, the Court then read the protection into the law.
Courts may receive Charter questions in a number of ways. Rights claimants could be prosecuted under a criminal law that they argue is unconstitutional. Others may feel government services and policies are not being dispensed in accordance with the Charter, and apply to lower-level courts for injunctions against the government (as was the case in Doucet-Boudreau v. Nova Scotia (Minister of Education)). A government may also raise questions of rights by submitting reference questions to higher-level courts; for example, Prime Minister Paul Martin's government approached the Supreme Court with Charter questions as well as federalism concerns in the case Re Same-Sex Marriage (2004). Provinces may also do this with their superior courts. The government of Prince Edward Island initiated the Provincial Judges Reference by asking its provincial Supreme Court a question on judicial independence under section 11.
In several important cases, judges developed various tests and precedents for interpreting specific provisions of the Charter. These include the Oakes test for section 1, set out in the case R. v. Oakes (1986), and the (now defunct) Law test for section 15, developed in Law v. Canada (1999). Since Re B.C. Motor Vehicle Act (1985), various approaches to defining and expanding the scope of fundamental justice (the Canadian name for natural justice or due process) under section 7 have been adopted. (For more information, see the articles on each Charter section).
In general, courts have embraced a purposive interpretation of Charter rights. This means that since early cases like Hunter v. Southam (1984) and R. v. Big M Drug Mart (1985), they have concentrated not on the traditional, limited understanding of what each right meant when the Charter was adopted in 1982, but rather on changing the scope of rights as appropriate to fit their broader purpose. This is tied to the generous interpretation of rights, as the purpose of the Charter provisions is assumed to be to increase rights and freedoms of people in a variety of circumstances, at the expense of the government powers. Constitutional scholar Peter Hogg has approved of the generous approach in some cases, although for others he argues the purpose of the provisions was not to achieve a set of rights as broad as courts have imagined. Indeed, this approach has not been without its critics. Alberta politician Ted Morton and political scientist Rainer Knopff have been very critical of this phenomenon. Although they feel the basis for the approach, the living tree doctrine (the classical name for generous interpretations of the Canadian Constitution), is sound, they argue Charter case law has been more radical. When the living tree doctrine is applied right, the authors claim, "The elm remained an elm; it grew new branches but did not transform itself into an oak or a willow." The doctrine can be used, for example, so a right is upheld even when a government threatens to violate it with new technology, as long as the essential right remains the same; but the authors claim that the courts have used the doctrine to "create new rights." As an example, the authors note that the Charter right against self-incrimination has been extended to cover scenarios in the justice system that had previously been unregulated by self-incrimination rights in other Canadian laws.
Another general approach to interpreting Charter rights is to consider international legal precedents with countries that have specific rights protections, such as the United States Bill of Rights (an influence on aspects of the Charter) and the Constitution of South Africa. However, international precedent is only of guiding value, and is not binding. For example, the Supreme Court has referred to the Charter and the American Bill of Rights as being "born to different countries in different ages and in different circumstances."
Public interest groups frequently intervene in cases to make arguments on how to interpret the Charter. Some examples are the British Columbia Civil Liberties Association, Canadian Civil Liberties Association, the Canadian Mental Health Association, the Canadian Labour Congress, the Women's Legal Education and Action Fund (LEAF), and REAL Women of Canada. The purpose of such interventions is to assist the court and to attempt to influence the court to render a decision favourable to the legal interests of the group.
A further approach to the Charter, taken by the courts, is the dialogue principle, which involves greater participation by elected governments. This approach involves governments drafting legislation in response to court rulings and courts acknowledging the effort if the new legislation is challenged.
Comparisons with other human rights documents
Some Canadian Members of Parliament saw the movement to entrench a charter as contrary to the British model of Parliamentary supremacy. Others would say that the European Convention on Human Rights (ECHR) has now limited British parliamentary power to a greater degree than the Canadian Charter limited the power of the Canadian Parliament and provincial legislatures. Hogg has speculated that the British adopted the Human Rights Act 1998, which allows the ECHR to be enforced directly in domestic courts, partly because they were inspired by the similar Canadian Charter.
The Canadian Charter bears a number of similarities to the European Convention, specifically in relation to the limitations clauses contained in the European document. Because of this similarity with European human rights law, the Supreme Court of Canada turns not only to case law under the Constitution of the United States in interpreting the Charter, but also to European Court of Human Rights cases.
The core distinction between the United States Bill of Rights and the Canadian Charter is the existence of the limitations and notwithstanding clauses. Canadian courts have consequently interpreted each right more expansively. However, due to the limitations clause, where a violation of a right exists, the law will not necessarily grant protection of that right. In contrast, rights under the US Bill of Rights are absolute and so a violation will not be found until there has been sufficient encroachment on those rights. The sum effect is that both constitutions provide comparable protection of many rights. Fundamental justice (in section 7 of the Canadian Charter) is therefore interpreted to include more legal protections than due process, which is its US equivalent. Freedom of expression in section 2 also has a more wide-ranging scope than the First Amendment to the United States Constitution's freedom of speech. In RWDSU v. Dolphin Delivery Ltd. (1986), the Canadian Supreme Court considered picketing of a kind the US First Amendment would not have protected, as it involved disruptive conduct (though there was some speech involved that the First Amendment might otherwise protect). The Supreme Court, however, ruled that the picketing, including the disruptive conduct, was fully protected under section 2 of the Charter. The Court then relied on section 1 to find the injunction against the picketing was just. The limitations clause has also allowed governments to enact laws that would be considered unconstitutional in the US. The Supreme Court of Canada has upheld some of Quebec's limits on the use of English on signs and has upheld publication bans that prohibit media from mentioning the names of juvenile criminals.
Section 28 of the Charter performs a function similar to that of the unratified Equal Rights Amendment in the US. While that proposed amendment had many critics, there was no comparable opposition to the Charter's section 28. Still, Canadian feminists had to stage large protests to demonstrate support for the inclusion of the section, which had not been part of the original draft of the Charter.
The International Covenant on Civil and Political Rights has several parallels with the Canadian Charter, but in some cases the Covenant goes further with regard to rights in its text. For example, a right to legal aid has been read into section 10 of the Charter (the right to counsel), but the Covenant explicitly guarantees the accused need not pay "if he does not have sufficient means."
The Canadian Charter has little to say, explicitly at least, about economic and social rights. On this point, it stands in marked contrast with the Quebec Charter of Human Rights and Freedoms and with the International Covenant on Economic, Social and Cultural Rights. There are some who feel economic rights ought to be read into section 7 rights to security of the person and section 15 equality rights to make the Charter similar to the Covenant. The rationale is that economic rights can relate to a decent standard of living and can help civil rights flourish in a livable environment. Canadian courts, however, have been hesitant in this area, stating that economic rights are political questions and adding that as positive rights, economic rights are of questionable legitimacy.
The Charter itself influenced the Bill of Rights in the Constitution of South Africa. The limitations clause under section 36 of the South African law has been compared to section 1 of the Canadian Charter.
Jamaica's Charter of Fundamental Rights and Freedoms was also influenced, in part, by the Canadian charter.
The Charter and national values
The Charter was intended to be a source for national values and national unity. As Professor Alan Cairns noted, "The initial federal government premise was on developing a pan-Canadian identity." Trudeau himself later wrote in his Memoirs that "Canada itself" could now be defined as a "society where all people are equal and where they share some fundamental values based upon freedom", and that all Canadians could identify with the values of liberty and equality.
The Charter's unifying purpose was particularly important to the mobility and language rights. According to author Rand Dyck, some scholars believe section 23, with its minority language education rights, "was the only part of the Charter with which Pierre Trudeau was truly concerned". Through the mobility and language rights, French Canadians, who have been at the centre of unity debates, are able to travel throughout all of Canada and receive government and educational services in their own language. Hence, they are not confined to Quebec (the only province where they form the majority and where most of their population is based), a confinement that would otherwise polarize the country along regional lines. The Charter was also supposed to standardize previously diverse laws throughout the country and gear them towards a single principle of liberty.
Former premier of Ontario Bob Rae has stated that the Charter "functions as a symbol for all Canadians" in practice because it represents the core value of freedom. Academic Peter Russell has been more skeptical of the Charter's value in this field. Cairns, who feels the Charter is the most important constitutional document to many Canadians, and that the Charter was meant to shape the Canadian identity, has also expressed concern that groups within society see certain provisions as belonging to them alone rather than to all Canadians. It has also been noted that issues like abortion and pornography, raised by the Charter, tend to be controversial. Still, opinion polls in 2002 showed Canadians felt the Charter significantly represented Canada, although many were unaware of the document's actual contents.
The only values mentioned by the Charter's preamble are recognition for the supremacy of God and the rule of law, but these have been controversial and of minor legal consequence. In 1999, MP Svend Robinson brought forward a failed proposal before the Canadian House of Commons that would have amended the Charter by removing the mention of God, as he felt it did not reflect Canada's diversity.
While the Charter has enjoyed a great deal of popularity, with 82% of Canadians describing it as a "good thing" in opinion polls in 1987 and 1999, the document has also been subject to published criticisms from both sides of the political spectrum. One left-wing critic is Professor Michael Mandel, who wrote that in comparison to politicians, judges do not have to be as sensitive to the will of the electorate, nor do they have to make sure their decisions are easily understandable to the average Canadian citizen. This, in Mandel's view, limits democracy. Mandel has also asserted that the Charter makes Canada more like the United States, especially by serving corporate rights and individual rights rather than group rights and social rights. He has argued that there are several rights that should be included in the Charter, such as a right to health care and a basic right to free education. Hence, the perceived Americanization of Canadian politics is seen as coming at the expense of values more important for Canadians. The union movement has been disappointed in the reluctance of the courts to use the Charter to support various forms of union activity, such as the "right to strike".
Right-wing critics Morton and Knopff have raised several concerns about the Charter, notably by alleging that the federal government has used it to limit provincial powers by allying with various rights claimants and interest groups. In their book The Charter Revolution & the Court Party, Morton and Knopff express their suspicions of this alliance in detail, accusing the Trudeau and Chrétien governments of funding litigious groups. For example, these governments used the Court Challenges Program to support minority language educational rights claims. Morton and Knopff also claim that crown counsel have intentionally lost cases in which the government was taken to court for allegedly violating rights, particularly gay rights and women's rights.
Political scientist Rand Dyck, in observing these criticisms, notes that while judges have had their scope of review widened, they have still upheld most laws challenged on Charter grounds. With regard to litigious interest groups, Dyck points out that "the record is not as clear as Morton and Knopff imply. All such groups have experienced wins and losses."
The political philosopher Charles Blattberg has criticized the Charter for contributing to the fragmentation of the country, at both the individual and group levels. In encouraging discourse based upon rights, the Charter is said to inject an adversarial spirit into Canadian politics, making it difficult to realize the common good. Blattberg also claims that the Charter undercuts the Canadian political community since it is ultimately a cosmopolitan document. Finally, he argues that people would be more motivated to uphold individual liberties if they were expressed with terms that are much "thicker" (less abstract) than rights.
- Canadian Bill of Rights
- Canadian Human Rights Act
- History of Canada
- Human rights in Canada
- Quebec Charter of Human Rights and Freedoms
- Supreme Court of Canada
- Veterans' Bill of Rights
- Only one federal law was declared inoperative by the Supreme Court of Canada: R. v. Drybones (1969), S.C.R. 282. For an example of the narrow interpretation of the Supreme Court of Canada see Attorney General of Canada v. Lavell, S.C.R. 1349.
- Hogg, Peter W. Constitutional Law of Canada. 2003 Student Ed. Scarborough, Ontario: Thomson Canada Limited, 2003, page 689.
- Hogg, Constitutional Law of Canada. 2003 Student Ed., pages 741–742
- Heather Scoffield, "Ottawa rules out invoking notwithstanding clause to stop migrant ships," Canadian Press, September 13, 2010
- Marriage Act, R.S.A. 2000, c. M-5. Retrieved March 10, 2006.
- McKnight, Peter. "Notwithstanding what?" The Vancouver Sun, January 21, 2006, pg. C.4.
- Library of Parliament, Parliamentary Information and Research Service, The Notwithstanding Clause of the Charter, prepared by David Johansen, 1989, as revised May 2005. Retrieved August 7, 2006.
- "Sources of Canadian Law", Department of Justice Canada. Retrieved March 20, 2006.
- The Constitutional Law Group, Canadian Constitutional Law, Third Edition, Toronto: Edmond Montgomery Publications Limited, p. 635.
- Hogg, Peter W. Canada Act 1982 Annotated. Toronto: The Carswell Company Limited, 1982.
- United States of America v. Cotroni; United States of America v. El Zein 1 S.C.R. 1469.
- Strayer, Barry L. "My Constitutional Summer of 1967", Reflections on the Charter, Department of Justice Canada. Retrieved March 18, 2006.
- "Charting the Future: Canada's New Constitution | CBC Archives". Archives.cbc.ca. Retrieved 2010-06-30.
- Weinrib, Lorraine Eisenstat. "Trudeau and the Canadian Charter of Rights and Freedoms: A Question of Constitutional Maturation." In Trudeau's Shadow: The Life and Legacy of Pierre Elliott Trudeau. Edited by Andrew Cohen and JL Granatstein. Vintage Canada, 1998, page 269.
- Weinrib, 270.
- Weinrib, 271.
- Weinrib, 272.
- Weinrib, pages 271–272.
- David Johansen, "PROPERTY RIGHTS AND THE CONSTITUTION," Library of Parliament (Canada), Law and Government Division, October 1991.
- "The Night of Long Knives", Canada: A People's History. CBC. Retrieved April 8, 2006.
- CBC evening news broadcast, November 5, 1981. Online at CBC Archives, beginning at timepoint 4:04 of the clip. Retrieved August 8, 2006.
- Behiels, Michael D. "Who Speaks for Canada? Trudeau and the Constitutional Crisis." In Trudeau's Shadow: The Life and Legacy of Pierre Elliott Trudeau, page 346.
- R. v. Kapp, 2008 SCC 41, 2 SCR 483
- Hogg, Constitutional Law of Canada, 2003 Student Ed., pages 722 and 724–725.
- Morton, F.L. and Rainer Knopff. The Charter Revolution & the Court Party. Toronto: Broadview Press, 2000, pages 46–47.
- Hogg, Constitutional Law of Canada. 2003 Student Ed., pages 732; the case quoted was R. v. Rahey (1987) by Gérard La Forest.
- Saunders, Philip (April 2002). "The Charter at 20". CBC News Online. Archived from the original on 7 March 2006. Retrieved March 17, 2006.
- Brice Dickson, "Human Rights in the 21st Century," Amnesty International Lecture, Queen's University, Belfast, 11 November 1999.
- Hogg, Constitutional Law of Canada. 2003 Student Ed., pages 732–733.
- Manfredi, Christopher P. "The Canadian Supreme Court and American Judicial Review: United States Constitutional Jurisprudence and the Canadian Charter of Rights and Freedoms." The American Journal of Comparative Law, vol. 40, no. 1. (Winter, 1992), pages 12–13.
- Women's International Network News, "Women on the Move in Canada." Summer 1993, Vol. 19 Issue 3, page 71.
- Lugtig, Sarah and Debra Parkes, "Where do we go from here?" Herizons, Spring 2002, Vol. 15 Issue 4, page 14.
- Doris Anderson, “Canadian Women and the Charter of Rights” (2005) 19 Nat'l J Const L 369.
- Hogg, Constitutional Law of Canada. 2003 Student Ed., pages 733–734.
- Trudeau, Pierre Elliott. Memoirs, Toronto: McClelland & Stewart, 1993, pages 322–323.
- Dyck, Rand. Canadian Politics: Critical Approaches. Third ed. Scarborough, Ontario: Nelson Thomson Learning, 2000, page 442.
- Hogg, Constitutional Law of Canada. 2003 Student Ed., pages 704–705.
- Byfield, Joanne. "The right to be ignorant." Report/Newsmagazine (National Edition); December 16, 2002, Vol. 29, Issue 24, page 56.
- Tracey Tyler, "Support for Charter runs strong: Survey; Approval highest in Quebec on 20-year-old rights law", Toronto Star, Apr 12, 2002, p. A07.
- Dyck, page 446, summarizing Mandel, Michael, The Charter of Rights and the Legalization of Politics in Canada (Toronto: Wall and Thompson, 1989; revised edition, 1994)
- Morton and Knopff, 95. They complain about crown counsels on page 117.
- Dyck, page 448.
- Blattberg, Charles. Shall We Dance? A Patriotic Politics for Canada. Montreal and Kingston: McGill-Queen's University Press, 2003, especially pages 83–94
- G.-A Beaudoin and E. Ratushny, The Canadian Charter of Rights and Freedoms 2nd ed., Carswell, Toronto, 1989.
- P.W. Hogg, Constitutional law of Canada, 4th ed., Carswell: Scarborough with Supplement to Constitutional Law of Canada (2002-)
- J.P. Humphrey, Human Rights and the United Nations: A Great Adventure, New York: Transnational Publishers, 1984.
- J.E. Magnet, Constitutional Law, 8th ed. (2001).
- Black-Branch, Jonathan L (1995), Making sense of the Canadian Charter of Rights and Freedoms, Canadian Education Association ISBN 0-920315-78-X
- Silver, Cindy (1995?). Family Autonomy and the Charter of Rights: Protecting Parental Liberty in a Child-Centred Legal System, in series, Discussion Paper [of] the Centre for Renewal in Public Policy, 3. Gloucester, Ont.: Centre for Renewal in Public Policy. 27 p.
- Canadian Charter of Rights and Freedoms - Canadian Department of Justice website
- Building a Just Society: A Retrospective of Canadian Rights and Freedoms at Library and Archives Canada
- Charter of Rights Decisions Digest by the Canadian Legal Information Institute
- Constitutional Law of Canada by Professor Joseph E. Magnet, University of Ottawa
- Fundamental Freedoms: The Charter of Rights and Freedoms - Charter of Rights and Freedoms website with video, audio and the Charter in more than 10 languages | https://bafybeiemxf5abjwjbikoz4mc3a3dla6ual3jsgpdr4cjr3oz3evfyavhwq.ipfs.dweb.link/wiki/Canadian_Charter_of_Rights_and_Freedoms.html | 21 |
Date: 14th century - 17th century
Outcome: Transition from the Middle Ages to the Modern era
The Italian Renaissance (Italian: Rinascimento [rinaʃʃiˈmento]) was a period in Italian history that covered the 14th through the 17th centuries. The period is known for the development of a culture that spread across Europe and marked the transition from the Middle Ages to modernity. Proponents of a "long Renaissance" argue that it started around the year 1300 and lasted until about 1600. In some fields, a Proto-Renaissance, beginning around 1250, is typically accepted. The French word renaissance (rinascimento in Italian) means "rebirth", and defines the period as one of cultural revival and renewed interest in classical antiquity after the centuries that Renaissance humanists labeled the "Dark Ages". The Renaissance author Giorgio Vasari used the term "Rebirth" in his Lives of the Most Excellent Painters, Sculptors, and Architects in 1550, but the concept became widespread only in the 19th century, after the work of scholars such as Jules Michelet and Jacob Burckhardt.
The Renaissance began in Tuscany in Central Italy and centred in the city of Florence. The Florentine Republic, one of the several city-states of the peninsula, rose to economic and political prominence by providing credit for European monarchs and by laying the groundwork for developments in capitalism and banking. Renaissance culture later spread to Venice, heart of a Mediterranean empire and in control of the trade routes with the East since its participation in the Crusades and the journeys of Marco Polo between 1271 and 1295. Thus Italy renewed contact with antiquity, which provided humanist scholars with new texts. Finally, the Renaissance had a significant effect on the Papal States and on Rome, largely rebuilt by humanist and Renaissance popes such as Alexander VI (r. 1492-1503) and Julius II (r. 1503-1513), who frequently became involved in Italian politics, in arbitrating disputes between competing colonial powers and in opposing the Protestant Reformation, which started c. 1517.
The Italian Renaissance has a reputation for its achievements in painting, architecture, sculpture, literature, music, philosophy, science, technology, and exploration. Italy became the recognized European leader in all these areas by the late 15th century, during the era of the Peace of Lodi (1454-1494) agreed between Italian states. The Italian Renaissance peaked in the mid-16th century as domestic disputes and foreign invasions plunged the region into the turmoil of the Italian Wars (1494-1559). However, the ideas and ideals of the Italian Renaissance spread into the rest of Europe, setting off the Northern Renaissance from the late 15th century. Italian explorers from the maritime republics served under the auspices of European monarchs, ushering in the Age of Discovery. The most famous among them include Christopher Columbus (who sailed for Spain), Giovanni da Verrazzano (for France), Amerigo Vespucci (for Portugal), and John Cabot (for England). Italian scientists such as Falloppio, Tartaglia, Galileo and Torricelli played key roles in the scientific revolution, and foreigners such as Copernicus and Vesalius worked in Italian universities. Historiographers have proposed various events and dates of the 17th century, such as the conclusion of the European Wars of Religion in 1648, as marking the end of the Renaissance.
Accounts of Renaissance literature usually begin with the three great Italian writers of the 14th century: Dante Alighieri (Divine Comedy), Petrarch (Canzoniere), and Boccaccio (Decameron). Famous vernacular poets of the Renaissance include the epic authors Luigi Pulci (author of Morgante), Matteo Maria Boiardo (Orlando Innamorato), Ludovico Ariosto (Orlando Furioso), and Torquato Tasso (Jerusalem Delivered, 1581). 15th-century writers such as the poet Poliziano (1454-1494) and the Platonist philosopher Marsilio Ficino (1433-1499) made extensive translations from both Latin and Greek. In the early 16th century, Baldassare Castiglione laid out his vision of the ideal gentleman and lady in The Book of the Courtier (1528), while Niccolò Machiavelli (1469-1527) cast a jaundiced eye on "la verità effettuale della cosa"--the effectual truth of things--in The Prince, composed, in humanistic style, chiefly of parallel ancient and modern examples of virtù. Historians of the period include Machiavelli himself, his friend and critic Francesco Guicciardini (1483-1540) and Giovanni Botero (The Reason of State, 1589). The Aldine Press, founded in 1494 by the printer Aldo Manuzio, active in Venice, developed italic type and compact pocket editions of books; it became the first press to publish printed editions of books in Ancient Greek. Venice also became the birthplace of the Commedia dell'Arte.
Italian Renaissance art exercised a dominant influence on subsequent European painting and sculpture for centuries afterwards, with artists such as Leonardo da Vinci (1452-1519), Michelangelo (1475-1564), Raphael (1483-1520), Donatello (c. 1386-1466), Giotto di Bondone (c. 1267-1337), Masaccio (1401-1428), Fra Angelico (c. 1395-1455), Piero della Francesca (c. 1415-1492), Domenico Ghirlandaio (1448-1494), Perugino (c. 1446-1523), Botticelli (c. 1445-1510), and Titian (c. 1488-1576). Italian Renaissance architecture had a similar Europe-wide impact, as practised by Brunelleschi (1377-1446), Leon Battista Alberti (1404-1472), Andrea Palladio (1508-1580), and Bramante (1444-1514). Their works include the Florence Cathedral (built from 1296 to 1436), St. Peter's Basilica (built 1506-1626) in Rome, and the Tempio Malatestiano (reconstructed from c. 1450) in Rimini, as well as several private residences. The musical era of the Italian Renaissance featured composers such as Giovanni Pierluigi da Palestrina (c. 1525-1594), the Roman School and later the Venetian School, and the birth of opera in Florence, an art form later developed by figures such as Claudio Monteverdi (1567-1643). In philosophy, thinkers such as Galileo, Machiavelli, Giordano Bruno (1548-1600) and Pico della Mirandola (1463-1494) emphasized naturalism and humanism, thus rejecting dogma and scholasticism.
By the Late Middle Ages (circa 1300 onward), Latium, the former heartland of the Roman Empire, and southern Italy were generally poorer than the North. Rome, the seat of the Catholic Church, lay within the Papal States, which were loosely administered and vulnerable to external interference, particularly by France, and later Spain. The Papacy was affronted when the Avignon Papacy was created in southern France as a consequence of pressure from King Philip the Fair of France. In the south, Sicily had for some time been under foreign domination, by the Arabs and then the Normans: it was ruled first by the Emirate of Sicily and later, for two centuries, by the Norman Kingdom and the Hohenstaufen Kingdom, but had declined by the late Middle Ages.
In contrast, Northern and Central Italy had become far more prosperous, and it has been calculated that the region was among the richest in Europe. The Crusades had built lasting trade links to the Levant, and the Fourth Crusade had done much to destroy the Byzantine Roman Empire as a commercial rival to the Venetians and Genoese. With navigation, sea-going vessels and seaports, the main trade routes from the East now bypassed the Byzantine Empire and the Arab lands and led onward to the ports of Genoa, Pisa, and Venice. Luxury goods bought in the Levant, such as spices, dyes, and silks, were imported to Italy and then resold throughout Europe. Moreover, the inland city-states profited from the rich agricultural land of the Po valley. From France, Germany, and the Low Countries, through the medium of the Champagne fairs, land and river trade routes brought goods such as wool, wheat, and precious metals into the region. The extensive trade that stretched from the Far East and Egypt to the Baltic generated substantial surpluses that allowed significant investment in mining and agriculture. By the 14th century the city of Venice had become an emporium for lands as far as Cyprus; it boasted a naval fleet of over 5000 ships thanks to its arsenal, a vast complex of shipyards that was the first European facility to mass-produce commercial and military vessels. Genoa had likewise become a maritime power whose development, stimulated by trade, allowed it to prosper. In particular, Florence became one of the wealthiest of the cities of Northern Italy, mainly due to its woolen textile production, developed under the supervision of its dominant trade guild, the Arte della Lana. Wool was imported from Northern Europe (and in the 16th century from Spain) and, together with dyes from the East, was used to make high quality textiles.
The Italian trade routes that covered the Mediterranean and beyond were also major conduits of culture and knowledge. The ancient classics brought to Italy by those who migrated during and following the Ottoman conquest of the Byzantine Empire in the 15th century were important in sparking the new linguistic studies of the Renaissance, in newly created academies in Florence and Venice. Humanist scholars searched monastic libraries for ancient manuscripts and recovered Tacitus and other Latin authors. The rediscovery of Vitruvius meant that the architectural principles of Antiquity could be observed once more, and Renaissance artists were encouraged, in the atmosphere of humanist optimism, to excel the achievements of the Ancients, like Apelles, of whom they read.
After the fall of the Roman Empire in the fifth century AD, the Roman Catholic Church rose to power in Europe. The Church acted as a gatekeeper whose authority extended from the king down to the common people. In the Middle Ages, the Church was considered to convey the will of God, and it regulated the standard of behavior in life. A lack of literacy required most people to rely on the priest's explanation of the Bible and its laws.
In the eleventh century, the Church persecuted many groups, including pagans, Jews, and lepers, in order to eliminate irregularities in society and strengthen its power. Bishops played an important role in responding to the laity's challenge to Church authority: as they gradually lost control of secular affairs, they adopted extreme methods of control, such as the persecution of infidels, in order to regain their power over public discourse.
In the Middle Ages, the Roman Church collected wealth from believers through means such as the sale of indulgences. The Church accumulated this wealth but did not pay taxes, which made it wealthier than some kings.
In the 13th century, much of Europe experienced strong economic growth. The trade routes of the Italian states linked with those of established Mediterranean ports and eventually the Hanseatic League of the Baltic and northern regions of Europe to create a network economy in Europe for the first time since the 4th century. The city-states of Italy expanded greatly during this period and grew in power to become de facto fully independent of the Holy Roman Empire; apart from the Kingdom of Naples, outside powers kept their armies out of Italy. During this period, the modern commercial infrastructure developed, with double-entry book-keeping, joint stock companies, an international banking system, a systematized foreign exchange market, insurance, and government debt. Florence became the centre of this financial industry and the gold florin became the main currency of international trade.
The new mercantile governing class, who gained their position through financial skill, adapted to their purposes the feudal aristocratic model that had dominated Europe in the Middle Ages. A feature of the High Middle Ages in Northern Italy was the rise of the urban communes, which had broken free of control by bishops and local counts. In much of the region, the landed nobility was poorer than the urban patricians in the High Medieval money economy, whose inflationary rise left land-holding aristocrats impoverished. The increase in trade during the early Renaissance enhanced these characteristics. The decline of feudalism and the rise of cities influenced each other; for example, the demand for luxury goods led to an increase in trade, which led to greater numbers of tradesmen becoming wealthy, who, in turn, demanded more luxury goods. This atmosphere of luxury created a demand for visual symbols of wealth, an important way to show a family's affluence and taste.
This change also gave the merchants almost complete control of the governments of the Italian city-states, again enhancing trade. One of the most important effects of this political control was security. Those that grew extremely wealthy in a feudal state ran constant risk of running afoul of the monarchy and having their lands confiscated, as famously occurred to Jacques Coeur in France. The northern states also kept many medieval laws that severely hampered commerce, such as those against usury, and prohibitions on trading with non-Christians. In the city-states of Italy, these laws were repealed or rewritten.
The 14th century saw a series of catastrophes that caused the European economy to go into recession. The Medieval Warm Period was ending as the transition to the Little Ice Age began. This change in climate saw agricultural output decline significantly, leading to repeated famines, exacerbated by the rapid population growth of the earlier era. The Hundred Years' War between England and France disrupted trade throughout northwest Europe, most notably when, in 1345, King Edward III of England repudiated his debts, contributing to the collapse of the two largest Florentine banks, those of the Bardi and Peruzzi. In the east, war was also disrupting trade routes, as the Ottoman Empire began to expand throughout the region. Most devastating, though, was the Black Death that decimated the populations of the densely populated cities of Northern Italy and returned at intervals thereafter. Florence, for instance, which had a pre-plague population of 45,000, saw its population decrease over the next 47 years by 25-50%. Widespread disorder followed, including a revolt of Florentine textile workers, the ciompi, in 1378.
It was during this period of instability that authors such as Dante and Petrarch lived, and the first stirrings of Renaissance art were to be seen, notably in the realism of Giotto. Paradoxically, some of these disasters would help establish the Renaissance. The Black Death wiped out a third of Europe's population. The resulting labour shortage increased wages and the reduced population was therefore much wealthier, better fed, and, significantly, had more surplus money to spend on luxury goods. As incidences of the plague began to decline in the early 15th century, Europe's devastated population once again began to grow. The new demand for products and services also helped create a growing class of bankers, merchants, and skilled artisans. The horrors of the Black Death and the seeming inability of the Church to provide relief would contribute to a decline of church influence. Additionally, the collapse of the Bardi and Peruzzi banks would open the way for the Medici to rise to prominence in Florence. Roberto Sabatino Lopez argues that the economic collapse was a crucial cause of the Renaissance. According to this view, in a more prosperous era, businessmen would have quickly reinvested their earnings in order to make more money in a climate favourable to investment. However, in the leaner years of the 14th century, the wealthy found few promising investment opportunities for their earnings and instead chose to spend more on culture and art.
Unlike with Roman texts, which had been preserved and studied in Western Europe since late antiquity, the study of ancient Greek texts was very limited in medieval Italy. Ancient Greek works on science, maths and philosophy had been studied since the High Middle Ages in Western Europe and in the Islamic Golden Age (normally in translation), but Greek literary, oratorical and historical works (such as Homer, the Greek dramatists, Demosthenes and Thucydides) were not studied in either the Latin or medieval Muslim worlds; in the Middle Ages these sorts of texts were only studied by Byzantine scholars. Some argue that the Timurid Renaissance in Samarkand was linked with the Ottoman Empire, whose conquests led to the migration of Greek scholars to Italy. One of the greatest achievements of Italian Renaissance scholars was to bring this entire class of Greek cultural works back into Western Europe for the first time since late antiquity.
Another popular explanation for the Italian Renaissance is the thesis, first advanced by historian Hans Baron, that states that the primary impetus of the early Renaissance was the long-running series of wars between Florence and Milan. By the late 14th century, Milan had become a centralized monarchy under the control of the Visconti family. Giangaleazzo Visconti, who ruled the city from 1378 to 1402, was renowned both for his cruelty and for his abilities, and set about building an empire in Northern Italy. He launched a long series of wars, with Milan steadily conquering neighbouring states and defeating the various coalitions led by Florence that sought in vain to halt the advance. This culminated in the 1402 siege of Florence, when it looked as though the city was doomed to fall, before Giangaleazzo suddenly died and his empire collapsed.
Baron's thesis suggests that during these long wars, the leading figures of Florence rallied the people by presenting the war as one between the free republic and a despotic monarchy, between the ideals of the Greek and Roman Republics and those of the Roman Empire and Medieval kingdoms. For Baron, the most important figure in crafting this ideology was Leonardo Bruni. This time of crisis in Florence was the period when the most influential figures of the early Renaissance were coming of age, such as Ghiberti, Donatello, Masolino, and Brunelleschi. Inculcated with this republican ideology they later went on to advocate republican ideas that were to have an enormous impact on the Renaissance.
Northern Italy and upper Central Italy were divided into a number of warring city-states, the most powerful being Milan, Florence, Pisa, Siena, Genoa, Ferrara, Mantua, Verona and Venice. High Medieval Northern Italy was further divided by the long-running battle for supremacy between the forces of the Papacy and of the Holy Roman Empire: each city aligned itself with one faction or the other, yet was divided internally between the two warring parties, Guelfs and Ghibellines. Warfare between the states was common, invasion from outside Italy confined to intermittent sorties of Holy Roman Emperors. Renaissance politics developed from this background. Since the 13th century, as armies became primarily composed of mercenaries, prosperous city-states could field considerable forces, despite their low populations. In the course of the 15th century, the most powerful city-states annexed their smaller neighbors. Florence took Pisa in 1406, Venice captured Padua and Verona, while the Duchy of Milan annexed a number of nearby areas including Pavia and Parma.
The first part of the Renaissance saw almost constant warfare on land and sea as the city-states vied for preeminence. On land, these wars were primarily fought by armies of mercenaries known as condottieri, bands of soldiers drawn from around Europe, but especially Germany and Switzerland, led largely by Italian captains. The mercenaries were not willing to risk their lives unduly, and war became one largely of sieges and maneuvering, occasioning few pitched battles. It was also in the interest of mercenaries on both sides to prolong any conflict, to continue their employment. Mercenaries were also a constant threat to their employers; if not paid, they often turned on their patron. If it became obvious that a state was entirely dependent on mercenaries, the temptation was great for the mercenaries to take over the running of it themselves--this occurred on a number of occasions. Neutrality was maintained with France, which found itself surrounded by enemies when Spain disputed Charles VIII's claim to the Kingdom of Naples. Peace with France ended when Charles VIII invaded Italy to take Naples.
At sea, Italian city-states sent many fleets out to do battle. The main contenders were Pisa, Genoa, and Venice, but after a long conflict the Genoese succeeded in reducing Pisa. Venice proved to be a more powerful adversary, and with the decline of Genoese power during the 15th century Venice became pre-eminent on the seas. In response to threats from the landward side, from the early 15th century Venice developed an increased interest in controlling the terrafirma as the Venetian Renaissance opened.
On land, decades of fighting saw Florence, Milan, and Venice emerge as the dominant players, and these three powers finally set aside their differences and agreed to the Peace of Lodi in 1454, which saw relative calm brought to the region for the first time in centuries. This peace would hold for the next forty years, and Venice's unquestioned hegemony over the sea also led to unprecedented peace for much of the rest of the 15th century. At the beginning of the 15th century, adventurers and traders such as Niccolò Da Conti (1395-1469) traveled as far as Southeast Asia and back, bringing fresh knowledge of the state of the world and presaging further European voyages of exploration in the years to come.
Until the late 14th century, prior to the Medici, Florence's leading family was the House of Albizzi. In 1293 the Ordinances of Justice were enacted; these effectively became the constitution of the Republic of Florence throughout the Italian Renaissance. The city's numerous luxurious palazzi were becoming surrounded by townhouses built by the ever-prospering merchant class. In 1298, one of the leading banking families of Europe, the Bonsignori, went bankrupt, and so the city of Siena lost its status as the banking center of Europe to Florence.
The main challengers of the Albizzi family were the Medicis, first under Giovanni de' Medici, later under his son Cosimo de' Medici. The Medici controlled the Medici bank--then Europe's largest bank--and an array of other enterprises in Florence and elsewhere. In 1433, the Albizzi managed to have Cosimo exiled. The next year, however, saw a pro-Medici Signoria elected and Cosimo returned. The Medici became the town's leading family, a position they would hold for the next three centuries. Florence organized the trade routes for commodities between England and the Netherlands, France, and Italy. By the middle of the century, the city had become the banking capital of Europe, and thereby obtained vast riches. In 1439, Byzantine Emperor John VIII Palaiologos attended a council in Florence in an attempt to unify the Eastern and Western Churches. This brought books and, especially after the fall of the Byzantine Empire in 1453, an influx of scholars to the city. Ancient Greece began to be studied with renewed interest, especially the Neoplatonic school of thought, which was the subject of an academy established by the Medici.
Florence remained a republic until 1532 (see Duchy of Florence), traditionally marking the end of the High Renaissance in Florence, but the instruments of republican government were firmly under the control of the Medici and their allies, save during the intervals after 1494 and 1527. Cosimo and Lorenzo de' Medici rarely held official posts, but were the unquestioned leaders. Cosimo was highly popular among the citizenry, mainly for bringing an era of stability and prosperity to the town. One of his most important accomplishments was negotiating the Peace of Lodi with Francesco Sforza ending the decades of war with Milan and bringing stability to much of Northern Italy. Cosimo was also an important patron of the arts, directly and indirectly, by the influential example he set.
Cosimo was succeeded by his sickly son Piero de' Medici, who died after five years in charge of the city. In 1469 the reins of power passed to Cosimo's 21-year-old grandson Lorenzo, who would become known as "Lorenzo the Magnificent." Lorenzo was the first of the family to be educated from an early age in the humanist tradition and is best known as one of the Renaissance's most important patrons of the arts. Under Lorenzo, the Medici rule was formalized with the creation of a new Council of Seventy, which Lorenzo headed. The republican institutions continued, but they lost all power. Lorenzo was less successful than his illustrious forebears in business, and the Medici commercial empire was slowly eroded. Lorenzo continued the alliance with Milan, but relations with the papacy soured, and in 1478, Papal agents allied with the Pazzi family in an attempt to assassinate Lorenzo. Although the Pazzi conspiracy failed, Lorenzo's young brother, Giuliano, was killed, and the failed assassination led to a war with the Papacy and was used as justification to further centralize power in Lorenzo's hands.
Renaissance ideals first spread from Florence to the neighbouring states of Tuscany such as Siena and Lucca. The Tuscan culture soon became the model for all the states of Northern Italy, and the Tuscan dialect came to predominate throughout the region, especially in literature. In 1447 Francesco Sforza came to power in Milan and rapidly transformed that still medieval city into a major centre of art and learning that drew Leone Battista Alberti. Venice, one of the wealthiest cities due to its control of the Adriatic Sea, also became a centre for Renaissance culture, especially Venetian Renaissance architecture. Smaller courts brought Renaissance patronage to lesser cities, which developed their characteristic arts: Ferrara, Mantua under the Gonzaga, and Urbino under Federico da Montefeltro. In Naples, the Renaissance was ushered in under the patronage of Alfonso I, who conquered Naples in 1443 and encouraged artists like Francesco Laurana and Antonello da Messina and writers like the poet Jacopo Sannazaro and the humanist scholar Angelo Poliziano.
In 1417 the Papacy returned to Rome, but that once-imperial city remained poor and largely in ruins through the first years of the Renaissance. The great transformation began under Pope Nicholas V, who became pontiff in 1447. He launched a dramatic rebuilding effort that would eventually see much of the city renewed. The humanist scholar Aeneas Silvius Piccolomini became Pope Pius II in 1458. As the papacy fell under the control of wealthy families such as the Medici and the Borgias, the spirit of Renaissance art and philosophy came to dominate the Vatican. Pope Sixtus IV continued Nicholas' work, most famously ordering the construction of the Sistine Chapel. The popes also became increasingly secular rulers as the Papal States were forged into a centralized power by a series of "warrior popes".
The nature of the Renaissance also changed in the late 15th century. The Renaissance ideal was fully adopted by the ruling classes and the aristocracy. In the early Renaissance artists were seen as craftsmen with little prestige or recognition. By the later Renaissance the top figures wielded great influence and could charge great fees. A flourishing trade in Renaissance art developed. While in the early Renaissance many of the leading artists were of lower- or middle-class origins, increasingly they became aristocrats.
As a cultural movement, the Italian Renaissance affected only a small part of the population. Italy was the most urbanized region of Europe, but three quarters of the people were still rural peasants. For this section of the population, life remained essentially unchanged from the Middle Ages. Classic feudalism had never been prominent in Northern Italy, and most peasants worked on private farms or as sharecroppers. Some scholars see a trend towards refeudalization in the later Renaissance as the urban elites turned themselves into landed aristocrats.
The situation differed in the cities. These were dominated by a commercial elite as exclusive as the aristocracy of any medieval kingdom. This group became the main patrons of and audience for Renaissance culture. Below them there was a large class of artisans and guild members who lived comfortable lives and had significant power in the republican governments. This was in sharp contrast to the rest of Europe, where artisans were firmly in the lower class. Literate and educated, this group did participate in the Renaissance culture. The largest section of the urban population was the urban poor of semi-skilled workers and the unemployed. As with the peasants, the Renaissance had little effect on them. Historians debate how easy it was to move between these groups during the Italian Renaissance. Examples of individuals who rose from humble beginnings can be instanced, but Burke notes two major studies in this area that have found that the data do not clearly demonstrate an increase in social mobility. Most historians feel that early in the Renaissance social mobility was quite high, but that it faded over the course of the 15th century. Inequality in society was very high. An upper-class figure would control hundreds of times more income than a servant or labourer. Some historians see this unequal distribution of wealth as important to the Renaissance, as art patronage relies on the very wealthy.
The Renaissance was not a period of great social or economic change, only of cultural and ideological development. It only touched a small fraction of the population, and in modern times this has led many historians, such as those who follow historical materialism, to reduce the importance of the Renaissance in human history. These historians tend to think in terms of "Early Modern Europe" instead. Roger Osborne argues that "The Renaissance is a difficult concept for historians because the history of Europe quite suddenly turns into a history of Italian painting, sculpture and architecture."
The end of the Italian Renaissance is as imprecisely marked as its starting point. For many, the rise to power in Florence of the austere monk Girolamo Savonarola in 1494-1498 marks the end of the city's flourishing; for others, the triumphant return of the Medici family to power in 1512 marks the beginning of the late phase in the Renaissance arts called Mannerism. Other accounts trace the end of the Italian Renaissance to the French invasions of the early 16th century and the subsequent conflict between France and Spanish rulers for control of Italian territory. Savonarola rode to power on a widespread backlash over the secularism and indulgence of the Renaissance. His brief rule saw many works of art destroyed in the "Bonfire of the Vanities" in the centre of Florence. With the Medici returned to power, now as Grand Dukes of Tuscany, the counter movement in the church continued. In 1542 the Sacred Congregation of the Inquisition was formed, and a few years later the Index Librorum Prohibitorum banned a wide array of Renaissance works of literature. This period also marks the end of the illuminated manuscript: Giulio Clovio, considered the greatest illuminator of the Italian High Renaissance, was arguably the last very notable artist in that long tradition before some modern revivals.
Under the suppression of the Catholic Church and the ravages of war, humanism became "akin to heresy".
Equally important was the end of stability with a series of foreign invasions of Italy known as the Italian Wars that would continue for several decades. These began with the 1494 invasion by France that wreaked widespread devastation on Northern Italy and ended the independence of many of the city-states. Most damaging was the sack of Rome by Spanish and German troops on 6 May 1527, which for two decades all but ended the role of the Papacy as the largest patron of Renaissance art and architecture.
While the Italian Renaissance was fading, the Northern Renaissance adopted many of its ideals and transformed its styles. A number of Italy's greatest artists chose to emigrate. The most notable example was Leonardo da Vinci, who left for France in 1516, but teams of lesser artists invited to transform the Château de Fontainebleau created the School of Fontainebleau that infused the style of the Italian Renaissance in France. From Fontainebleau, the new styles, transformed by Mannerism, brought the Renaissance to the Low Countries and thence throughout Northern Europe.
This spread north was also representative of a larger trend. No longer was the Mediterranean Europe's most important trade route. In 1498, Vasco da Gama reached India, and from that date the primary route of goods from the Orient was through the Atlantic ports of Lisbon, Seville, Nantes, Bristol, and London.
The thirteenth-century Italian literary revolution helped set the stage for the Renaissance. Prior to the Renaissance, the Italian language was not the literary language in Italy. It was only in the 13th century that Italian authors began writing in their native language rather than Latin, French, or Provençal. The 1250s saw a major change in Italian poetry as the Dolce Stil Novo (Sweet New Style, which emphasized Platonic rather than courtly love) came into its own, pioneered by poets like Guittone d'Arezzo and Guido Guinizelli. Especially in poetry, major changes in Italian literature had been taking place decades before the Renaissance truly began.
With the printing of books initiated in Venice by Aldus Manutius, an increasing number of works began to be published in the Italian language in addition to the flood of Latin and Greek texts that constituted the mainstream of the Italian Renaissance. The source for these works expanded beyond works of theology and towards the pre-Christian eras of Imperial Rome and Ancient Greece. This is not to say that no religious works were published in this period: Dante Alighieri's The Divine Comedy reflects a distinctly medieval world view. Christianity remained a major influence for artists and authors, with the classics coming into their own as a second primary influence.
In the early Italian Renaissance, much of the focus was on translating and studying classic works from Latin and Greek. Renaissance authors were not content to rest on the laurels of ancient authors, however. Many authors attempted to integrate the methods and styles of the ancient Greeks into their own works. Among the most emulated Romans are Cicero, Horace, Sallust, and Virgil. Among the Greeks, Aristotle, Homer, and Plato were now being read in the original for the first time since the 4th century, though Greek compositions were few.
The literature and poetry of the Renaissance was largely influenced by the developing science and philosophy. The humanist Francesco Petrarch, a key figure in the renewed sense of scholarship, was also an accomplished poet, publishing several important works of poetry. He wrote poetry in Latin, notably the Punic War epic Africa, but is today remembered for his works in the Italian vernacular, especially the Canzoniere, a collection of love sonnets dedicated to his unrequited love Laura. He was the foremost writer of Petrarchan sonnets, and translations of his work into English by Thomas Wyatt established the sonnet form in that country, where it was employed by William Shakespeare and countless other poets.
Petrarch's disciple, Giovanni Boccaccio, became a major author in his own right. His major work was the Decameron, a collection of 100 stories told over ten nights by ten storytellers who had fled to the outskirts of Florence to escape the Black Death. The Decameron in particular and Boccaccio's work in general were a major source of inspiration and plots for many English authors in the Renaissance, including Geoffrey Chaucer and William Shakespeare.
Aside from Christianity, classical antiquity, and scholarship, a fourth influence on Renaissance literature was politics. The political philosopher Niccolò Machiavelli's most famous works are Discourses on Livy, Florentine Histories and finally The Prince, which has become so well known in modern societies that the word Machiavellian has come to refer to the cunning and ruthless actions advocated by the book. Along with many other Renaissance works, The Prince remains a relevant and influential work of literature today.
There were many Italian Renaissance humanists who also praised and affirmed the beauty of the body in poetry and literature. In Baldassare Rasinus's panegyric for Francesco Sforza, Rasinus considered that beautiful people usually have virtue. In northern Italy, humanists had discussions about the connection between physical beauty and inner virtues. In Renaissance Italy, virtue and beauty were often linked together to praise men.
Petrarch encouraged the study of the Latin classics and carried his copy of Homer about, at a loss to find someone to teach him to read Greek. An essential step in the classic humanist education being propounded by scholars like Pico della Mirandola was the hunting down of lost or forgotten manuscripts that were known only by reputation. These endeavors were greatly aided by the wealth of Italian patricians, merchant-princes and despots, who would spend substantial sums building libraries. Discovering the past had become fashionable and it was a passionate affair pervading the upper reaches of society. I go, said Cyriac of Ancona, I go to awake the dead. As the Greek works were acquired, manuscripts found, libraries and museums formed, the age of the printing press was dawning. The works of Antiquity were translated from Greek and Latin into the contemporary modern languages throughout Europe, finding a receptive middle-class audience, which might be, like Shakespeare, "with little Latin and less Greek".
While concern for philosophy, art, and literature all increased greatly in the Renaissance, the period is usually seen as one of scientific backwardness. The reverence for classical sources further enshrined the Aristotelian and Ptolemaic views of the universe. Under the influence of humanism, nature came to be viewed as an animate spiritual creation that was not governed by laws or mathematics. At the same time, philosophy lost much of its rigour as the rules of logic and deduction were seen as secondary to intuition and emotion.
During the Renaissance, great advances occurred in geography, astronomy, chemistry, physics, mathematics, manufacturing, anatomy and engineering. The collection of ancient scientific texts began in earnest at the start of the 15th century and continued up to the Fall of Constantinople in 1453, and the invention of printing democratized learning and allowed a faster propagation of new ideas. Although humanists often favored human-centered subjects like politics and history over study of natural philosophy or applied mathematics, many others went beyond these interests and had a positive influence on mathematics and science by rediscovering lost or obscure texts and by emphasizing the study of original languages and the correct reading of texts.
Italian universities such as Padua, Bologna and Pisa were scientific centres of renown, and with many northern European students attending them, the science of the Renaissance spread to Northern Europe and flourished there as well. Figures such as Copernicus, Francis Bacon, Descartes, and Galileo made contributions to scientific thought and experimentation, paving the way for the scientific revolution that later flourished in Northern Europe. Bodies were also stolen from gallows and examined by anatomists such as Andreas Vesalius, a professor of anatomy. This allowed him to create more accurate skeleton models by making more than 200 corrections to the works of Galen, who had dissected animals.
Major developments in mathematics include the spread of algebra throughout Europe, especially Italy. Luca Pacioli published a book on mathematics at the end of the fifteenth century, in which he first published positive and negative signs. Basic mathematical symbols were introduced by Simon Stevin in the 16th and early 17th centuries. Symbolic algebra was established by the French mathematician François Viète in the 16th century. He published "Introduction to Analytical Methods" in 1591, systematically organizing algebra and, for the first time, consciously using letters to represent unknown and known quantities. In his other book, "On the Recognition and Correction of Equations," Viète improved the solution of cubic and quartic equations, and also established the relationships between the roots and coefficients of quadratic and cubic equations, now known as Viète's formulas. Trigonometry also developed considerably during the Renaissance. The German mathematician Regiomontanus's "On Triangles of All Kinds" was Europe's first trigonometric work independent of astronomy. The book systematically treated plane and spherical triangles and included a very precise table of trigonometric functions.
In painting, the Late Medieval painter Giotto di Bondone, or Giotto, helped shape the artistic concepts that later defined much of the Renaissance art. The key ideas that he explored - classicism, the illusion of three-dimensional space and a realistic emotional context - inspired other artists such as Masaccio, Michelangelo and Leonardo da Vinci. He was not the only Medieval artist to develop these ideas, however; the artists Pietro Cavallini and Cimabue both influenced Giotto's use of statuesque figures and expressive storylines.
The frescos of Florentine artist Masaccio are generally considered to be among the earliest examples of Italian Renaissance art. Masaccio incorporated the ideas of Giotto, Donatello and Brunelleschi into his paintings, creating mathematically precise scenes that give the impression of three-dimensional space. The Holy Trinity fresco in the Florentine church of Santa Maria Novella, for example, looks as if it is receding at a dramatic angle into the dark background, while single-source lighting and foreshortening appear to push the figure of Christ into the viewer's space.
While mathematical precision and classical idealism fascinated painters in Rome and Florence, many Northern artists in the regions of Venice, Milan and Parma preferred highly illusionistic scenes of the natural world. The period also saw the first secular (non-religious) themes. There has been much debate as to the degree of secularism in the Renaissance, which had been emphasized by early 20th-century writers like Jacob Burckhardt based on, among other things, the presence of a relatively small number of mythological paintings. Those of Botticelli, notably The Birth of Venus and Primavera, are now among the best known, although he was deeply religious (becoming a follower of Savonarola) and the great majority of his output was of traditional religious paintings or portraits.
In sculpture, the Florentine artist Donato di Niccolò di Betto Bardi, or Donatello, was among the earliest sculptors to translate classical references into marble and bronze. His second sculpture of David was the first free-standing bronze nude created in Europe since the Roman Empire.
The period known as the High Renaissance of painting was the culmination of the varied means of expression and various advances in painting technique, such as linear perspective, the realistic depiction of both physical and psychological features, and the manipulation of light and darkness, including tone contrast, sfumato (softening the transition between colours) and chiaroscuro (contrast between light and dark), in a single unifying style which expressed total compositional order, balance and harmony. In particular, the individual parts of the painting had a complex but balanced and well-knit relationship to the whole. The most famous painters from this phase are Leonardo da Vinci, Raphael, and Michelangelo and their images, including Leonardo's The Last Supper and Mona Lisa, Raphael's The School of Athens and Michelangelo's Sistine Chapel Ceiling are the masterpieces of the period and among the most widely known works of art in the world.
High Renaissance painting evolved into Mannerism, especially in Florence. Mannerist artists, who consciously rebelled against the principles of High Renaissance, tend to represent elongated figures in illogical spaces. Modern scholarship has recognized the capacity of Mannerist art to convey strong (often religious) emotion where the High Renaissance failed to do so. Some of the main artists of this period are Pontormo, Bronzino, Rosso Fiorentino, Parmigianino and Raphael's pupil Giulio Romano.
In Florence, the Renaissance style was introduced with a revolutionary but incomplete monument by Leone Battista Alberti. Some of the earliest buildings showing Renaissance characteristics are Filippo Brunelleschi's church of San Lorenzo and the Pazzi Chapel. The interior of Santo Spirito expresses a new sense of light, clarity and spaciousness, which is typical of the early Italian Renaissance. Its architecture reflects the philosophy of Renaissance humanism, the enlightenment and clarity of mind as opposed to the darkness and spirituality of the Middle Ages. The revival of classical antiquity can best be illustrated by the Palazzo Rucellai. Here the pilasters follow the superposition of classical orders, with Doric capitals on the ground floor, Ionic capitals on the piano nobile and Corinthian capitals on the uppermost floor. Soon, Renaissance architects favored grand, large domes over tall and imposing spires, doing away with the Gothic style of the predating ages.
In Mantua, Alberti ushered in the new antique style, though his culminating work, Sant'Andrea, was not begun until 1472, after the architect's death.
The High Renaissance, as we call the style today, was introduced to Rome with Donato Bramante's Tempietto at San Pietro in Montorio (1502) and his original centrally planned St. Peter's Basilica (1506), which was the most notable architectural commission of the era, influenced by almost all notable Renaissance artists, including Michelangelo and Giacomo della Porta. The beginning of the late Renaissance in 1550 was marked by the development of a new column order by Andrea Palladio. Giant order columns that were two or more stories tall decorated the facades.
During the Italian Renaissance, mathematics developed and spread widely. As a result, some Renaissance architects, such as Baldassarre Peruzzi, applied mathematical calculation in their drawings.
In Italy during the 14th century there was an explosion of musical activity that corresponded in scope and level of innovation to the activity in the other arts. Although musicologists typically group the music of the Trecento (music of the 14th century) with the late medieval period, it included features which align with the early Renaissance in important ways: an increasing emphasis on secular sources, styles and forms; a spreading of culture away from ecclesiastical institutions to the nobility, and even to the common people; and a quick development of entirely new techniques. The principal forms were the Trecento madrigal, the caccia, and the ballata. Overall, the musical style of the period is sometimes labelled as the "Italian ars nova." From the early 15th century to the middle of the 16th century, the center of innovation in religious music was in the Low Countries, and a flood of talented composers came to Italy from this region. Many of them sang in either the papal choir in Rome or the choirs at the numerous chapels of the aristocracy, in Rome, Venice, Florence, Milan, Ferrara and elsewhere; and they brought their polyphonic style with them, influencing many native Italian composers during their stay.
The predominant forms of sacred music during the period were the mass and the motet. By far the most famous composer of church music in 16th-century Italy was Palestrina, the most prominent member of the Roman School, whose style of smooth, emotionally cool polyphony was to become the defining sound of the late 16th century, at least for generations of 19th- and 20th-century musicologists. Other Italian composers of the late 16th century focused on composing the main secular form of the era, the madrigal; for almost a hundred years these secular songs for multiple singers were distributed all over Europe. Composers of madrigals included Jacques Arcadelt, at the beginning of the age, Cipriano de Rore, in the middle of the century, and Luca Marenzio, Philippe de Monte, Carlo Gesualdo, and Claudio Monteverdi at the end of the era. Italy was also a centre of innovation in instrumental music. By the early 16th century keyboard improvisation came to be greatly valued, and numerous composers of virtuoso keyboard music appeared. Many familiar instruments were invented and perfected in late Renaissance Italy, such as the violin, the earliest forms of which came into use in the 1550s.
By the late 16th century Italy was the musical centre of Europe. Almost all of the innovations which were to define the transition to the Baroque period originated in northern Italy in the last few decades of the century. In Venice, the polychoral productions of the Venetian School, and associated instrumental music, moved north into Germany; in Florence, the Florentine Camerata developed monody, the important precursor to opera, which itself first appeared around 1600; and the avant-garde, manneristic style of the Ferrara school, which migrated to Naples and elsewhere through the music of Carlo Gesualdo, was to be the final statement of the polyphonic vocal music of the Renaissance.
Any unified theory of a renaissance, or cultural overhaul, during the European early modern period is overwhelmed by a massive volume of differing historiographical approaches. Historians like Jacob Burckhardt (1818-1897) have often romanticized the vision that Italian Renaissance writers promulgated of their own era, a narrative denouncing the fruitlessness of the Middle Ages. Promoted as the definitive end to the "stagnant" Middle Ages, the Renaissance has acquired the powerful and enduring association with progress and prosperity for which Burckhardt's The Civilization of the Renaissance in Italy is most responsible. Modern scholars have objected to this prevailing narrative, citing the medieval period's own vibrancy and key continuities that link, rather than divide, the Middle Ages and the Renaissance.
Elizabeth Lehfeldt (2005) points to the Black Death as a turning point in Europe that accelerated several movements already gaining traction in the years before it, and that accounts for many subsequent events and trends in Western civilization, such as the Reformation. Rather than seeing this as a distinct cutoff between eras of history, the rejuvenated approach to studying the Renaissance treats it as a catalyst that accelerated trends in art and science that were already well developed. For example, the Danse Macabre, the artistic movement using death as the focal point, is often credited as a Renaissance trend, yet Lehfeldt argues that Gothic art, which emerged during medieval times, morphed into the Danse Macabre after the Black Death swept over Europe.
Later historians who take a more revisionist perspective, such as Charles Homer Haskins (1870-1937), identify the hubris and nationalism of Italian politicians, thinkers, and writers as the cause of the distorted attitude towards the early modern period. In The Renaissance of the Twelfth Century (1927), Haskins asserts that it is human nature to draw stark divides in history in order to better understand the past. However, it is essential to understand history as continuous and constantly building off of the past. Haskins was one of the leading scholars in this school of thought, and it was his belief (shared by several others) that the building blocks for the Italian Renaissance were all laid during the Middle Ages, citing the rise of towns and bureaucratic states in the late 11th century as proof of the significance of this "pre-renaissance." The flow of history that he describes paints the Renaissance as a continuation of the Middle Ages that may not have been as positive a change as popularly imagined. Many historians after Burckhardt have argued that the regression of the Latin language, economic recession, and social inequality during the Renaissance were intentionally glossed over by previous historians in order to promote the mystique of the era.
Burckhardt famously described the Middle Ages as a period "seen clad in strange hues", promoting the idea that this era was inherently dark, confusing, and unprogressive. The term "middle ages" was first used by humanists such as Petrarch and Biondo, during the late 15th century, to describe a period connecting an important beginning and an important end, a placeholder for the history that exists between both sides of the period. This period was eventually labelled the "dark" ages by English historians in the 19th century, which has further tainted the narrative of medieval times in favor of promoting a positive feeling of the individualism and humanism that sprang from the Renaissance.
The origin and development of capitalism in Italy are illustrated by the economic life of the great city of Florence. | https://popflock.com/learn?s=Italian_Renaissance | 21 |
22 | World Hearing Day is the largest global campaign that focuses on hearing health. Launched by the World Health Organization (WHO), this call to action emphasizes addressing hearing loss and raising awareness about preventative measures. Hearing loss is one of the most common medical conditions that people experience. According to WHO, 466 million people globally have disabling hearing loss. This public health epidemic is expected to rapidly increase so focusing on prevention is critical.
World Hearing Day: Screen, Rehabilitate, Communicate
In addition to advocating for public hearing health policy that expands accessibility, World Hearing Day highlights three important strategies:
- Screen: integrate regular screenings into annual health check-ins. Having hearing assessed consistently allows you to monitor your hearing health and intervene early if you experience any changes. Early intervention can significantly improve treatment and health outcomes.
- Rehabilitate: the most common treatment for hearing loss is hearing aids. These highly advanced devices are designed to absorb, amplify, and process sound, providing significant support and increasing a person’s hearing ability.
- Communicate: treating hearing loss has countless benefits including enhancing communication. Effective communication is what sustains healthy relationships, strengthens job performance, enhances social life and general wellness.
Treating hearing loss also reduces the risk of developing other associated health conditions including cognitive decline and accidental injuries.
Hearing Loss FAQs
Though hearing loss is one of the most pervasive health conditions that people experience today, it is often underdiagnosed for a variety of reasons. There are various misconceptions about impaired hearing that contribute to overlooking its occurrence and the impact it has on all facets of life.
- Doesn’t hearing loss only impact older adults?
Hearing loss impacts people of all ages. It is true that age-related hearing loss (referred to as presbycusis) significantly impacts older adults:
- 25% of adults ages 65-74 have some degree of hearing loss
- 50% of adults 75 and older have disabling hearing loss
Additionally, according to WHO:
- Over 43 million people globally, ages 12-35, live with disabling hearing loss
- 1.1 billion teens and young adults are at risk for developing hearing loss caused by exposure to loud noise
- Hearing loss isn’t fatal, so how can it be a serious condition?
Hearing loss is the third most common chronic health condition that adults experience. It is also a permanent condition that cannot be cured and, if untreated, it can lead to significant health issues. Additionally, the symptoms it produces can take a toll on daily living by straining communication, which is essential to managing personal and professional responsibilities. Strained communication often leads to social withdrawal, and this combination can take a toll on mental health, contributing to depression, loneliness, and anxiety. Affecting both physical and mental health, hearing loss can deepen if it is not addressed.
- Speaking loudly solves the problem, right?
People often assume that they can self-manage declining hearing with strategies like speaking loudly, reading lips, pretending to hear etc. But these strategies are not effective and can actually exacerbate hearing loss by delaying treatment. Hearing loss is best assessed and treated by a hearing healthcare professional. The most common treatment is hearing aids which are electronic devices that are designed to provide support with amplifying and processing incoming sound.
Tips for Effective Hearing Care
There are various ways you can tend to your hearing health. Simple tips that prioritize your hearing health and are easy to integrate into your daily life include:
- Wear protective gear: earmuffs, earplugs, headphones etc. serve as a protective barrier for the ears, reducing the amount and impact of sound absorbed.
- Reduce exposure to loud noise: by avoiding noisy environments, opting for quieter settings, maintaining low volume setting on electronic devices etc.
- Take listening breaks: your ears and brain are constantly working to process incoming sound information throughout the day. Taking breaks from these stimuli by powering off sources of sound allows the auditory system to rest and recover.
- Have hearing assessed: hearing tests measure hearing capacity in both ears, establishing any impairment and the degree. Having hearing tested regularly allows you to be informed about your hearing needs and intervene if you experience changes. | https://exceptionalhearingcare.com/march-3rd-is-world-hearing-day-hearing-care-for-all/ | 21 |
25 | Moon dust and the age of the solar system
Using a figure published in 1960 of 14,300,000 tons per year as the meteoritic dust influx rate to the earth, creationists have argued that the thin dust layer on the moon’s surface indicates that the moon, and therefore the earth and solar system, are young. Furthermore, it is also often claimed that before the moon landings there was considerable fear that astronauts would sink into a very thick dust layer, but subsequently scientists have remained silent as to why the anticipated dust wasn’t there. An attempt is made here to thoroughly examine these arguments, and the counter arguments made by detractors, in the light of a sizable cross-section of the available literature on the subject.
Of the techniques that have been used to measure the meteoritic dust influx rate, chemical analyses (of deep sea sediments and dust in polar ice), and satellite-borne detector measurements appear to be the most reliable. However, upon close examination the dust particles range in size from fractions of a micron in diameter and fractions of a microgram in mass up to millimetres and grams, whence they become part of the size and mass range of meteorites. Thus the different measurement techniques cover different size and mass ranges of particles, so that to obtain the most reliable estimate requires an integration of results from different techniques over the full range of particle masses and sizes. When this is done, most current estimates of the meteoritic dust influx rate to the earth fall in the range of 10,000-20,000 tons per year, although some suggest this rate could still be as much as 100,000 tons per year.
Apart from the same satellite measurements, with a focusing factor of two applied so as to take into account differences in size and gravity between the earth and moon, two main techniques for estimating the lunar meteoritic dust influx have been trace element analyses of lunar soils, and the measuring and counting of microcraters produced by impacting micrometeorites on rock surfaces exposed on the lunar surface. Both these techniques rely on uniformitarian assumptions and dating techniques. Furthermore, there are serious discrepancies between the microcrater data and the satellite data that remain unexplained, and that require the meteoritic dust influx rate to be higher today than in the past. But the crater-saturated lunar highlands are evidence of a higher meteorite and meteoritic dust influx in the past. Nevertheless the estimates of the current meteoritic dust influx rate to the moon’s surface group around a figure of about 10,000 tons per year.
Prior to direct investigations, there was much debate amongst scientists about the thickness of dust on the moon. Some speculated that there would be very thick dust into which astronauts and their spacecraft might ‘disappear’, while the majority of scientists believed that there was minimal dust cover. Then NASA sent up rockets and satellites and used earth-bound radar to make measurements of the meteoritic dust influx, results suggesting there was only sufficient dust for a thin layer on the moon. In mid-1966 the Americans successively soft-landed five Surveyor spacecraft on the lunar surface, and so three years before the Apollo astronauts set foot on the moon NASA knew that they would only find a thin dust layer on the lunar surface into which neither the astronauts nor their spacecraft would ‘disappear’. This was confirmed by the Apollo astronauts, who only found up to a few inches of loose dust.
The Apollo investigations revealed a regolith at least several metres thick beneath the loose dust on the lunar surface. This regolith consists of lunar rock debris produced by impacting meteorites mixed with dust, some of which is of meteoritic origin. Apart from impacting meteorites and micrometeorites it is likely that there are no other lunar surface processes capable of both producing more dust and transporting it. It thus appears that the amount of meteoritic dust and meteorite debris in the lunar regolith and surface dust layer, even taking into account the postulated early intense meteorite and meteoritic dust bombardment, does not contradict the evolutionists’ multi-billion year timescale (while not proving it). Unfortunately, attempted counter-responses by creationists have so far failed because of spurious arguments or faulty calculations. Thus, until new evidence is forthcoming, creationists should not continue to use the dust on the moon as evidence against an old age for the moon and the solar system.
One of the evidences for a young earth that creationists have been using now for more than two decades is the argument about the influx of meteoritic material from space and the so-called ‘dust on the moon’ problem. The argument goes as follows:
‘It is known that there is essentially a constant rate of cosmic dust particles entering the earth’s atmosphere from space and then gradually settling to the earth’s surface. The best measurements of this influx have been made by Hans Pettersson, who obtained the figure of 14 million tons per year.1 This amounts to 14 x 10^19 pounds in 5 billion years. If we assume the density of compacted dust is, say, 140 pounds per cubic foot, this corresponds to a volume of 10^18 cubic feet. Since the earth has a surface area of approximately 5.5 x 10^15 square feet, this seems to mean that there should have accumulated during the 5-billion-year age of the earth, a layer of meteoritic dust approximately 182 feet thick all over the world!
There is not the slightest sign of such a dust layer anywhere of course. On the moon’s surface it should be at least as thick, but the astronauts found no sign of it (before the moon landings, there was considerable fear that the men would sink into the dust when they arrived on the moon, but no comment has apparently ever been made by the authorities as to why it wasn’t there as anticipated).
Even if the earth is only 5,000,000 years old, a dust layer of over 2 inches should have accumulated.
Lest anyone say that erosional and mixing processes account for the absence of the 182-foot meteoritic dust layer, it should be noted that the composition of such material is quite distinctive, especially in its content of nickel and iron. Nickel, for example, is a very rare element in the earth’s crust and especially in the ocean. Pettersson estimated the average nickel content of meteoritic dust to be 2.5 per cent, approximately 300 times as great as in the earth’s crust. Thus, if all the meteoritic dust layer had been dispersed by uniform mixing through the earth’s crust, the thickness of crust involved (assuming no original nickel in the crust at all) would be 182 x 300 feet, or about 10 miles!
Since the earth’s crust (down to the mantle) averages only about 12 miles thick, this tells us that practically all the nickel in the crust of the earth would have been derived from meteoritic dust influx in the supposed (5 x 10^9 year) age of the earth!’2
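The arithmetic of the quoted argument can be checked directly. The following sketch simply restates the figures given above in Python; the only added assumption is that the 'tons' are short tons of 2,000 pounds.

```python
# Reproducing the dust-layer arithmetic quoted above.
influx_tons_per_yr = 14_300_000        # Pettersson's 1960 figure (tons/yr)
years = 5e9                            # assumed age of the earth (yr)
lb_per_ton = 2000                      # assumes short tons
dust_density_lb_per_ft3 = 140          # compacted dust density used in the quote
earth_area_ft2 = 5.5e15                # approximate surface area of the earth (ft^2)

total_mass_lb = influx_tons_per_yr * lb_per_ton * years        # ~1.4 x 10^20 lb
total_volume_ft3 = total_mass_lb / dust_density_lb_per_ft3     # ~10^18 ft^3
layer_thickness_ft = total_volume_ft3 / earth_area_ft2         # ~180 ft

# Nickel version of the argument: if meteoritic dust is ~300 times richer in
# nickel than crustal rock, diluting the layer's nickel to crustal levels
# would require mixing through ~300 times the layer thickness.
mixing_depth_miles = layer_thickness_ft * 300 / 5280           # ~10 miles

print(f"dust layer: {layer_thickness_ft:.0f} ft; "
      f"crustal mixing depth: {mixing_depth_miles:.0f} miles")
```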
This is indeed a powerful argument, so powerful that it has upset the evolutionist camp. Consequently, a number of concerted efforts have been recently made to refute this evidence.3-9 After all, in order to be a credible theory, evolution needs plenty of time (that is, billions of years) to occur because the postulated process of transforming one species into another certainly can’t be observed in the lifetime of a single observer. So no evolutionist could ever be happy with evidence that the earth and the solar system are less than 10,000 years old.
But do evolutionists have any valid criticisms of this argument? And if so, can they be answered?
Criticisms of this argument made by evolutionists fall into three categories:-
- The question of the rate of meteoritic dust influx to the earth and moon,
- The question as to whether NASA really expected to find a thick dust layer on the moon when their astronauts, landed, and
- The question as to what period of time is represented by the actual layer of dust found on the moon.
Dust Influx to the Earth
The man whose work is at the centre of this controversy is Hans Pettersson of the Swedish Oceanographic Institute. In 1957, Pettersson (who then held the Chair of Geophysics at the University of Hawaii) set up dust-collecting units at 11,000 feet near the summit of Mauna Loa on the island of Hawaii and at 10,000 feet on Mt Haleakala on the island of Maui. He chose these mountains because
‘occasionally winds stir up lava dust from the slopes of these extinct volcanoes, but normally the air is of an almost ideal transparency, remarkably free of contamination by terrestrial dust.’10
With his dust-collecting units, Pettersson filtered measured quantities of air and analysed the particles he found. Despite his description of the lack of contamination in the air at his chosen sampling sites, Pettersson was very aware and concerned that terrestrial (atmospheric) dust would still swamp the meteoritic (space) dust he collected, for he says: ‘It was nonetheless apparent that the dust collected in the filters would come preponderantly from terrestrial sources.’11 Consequently he adopted the procedure of having his dust samples analysed for nickel and cobalt, since he reasoned that both nickel and cobalt were rare elements in terrestrial dust compared with the high nickel and cobalt contents of meteorites and therefore by implication of , meteoritic dust also.
Based on the nickel analysis of his collected dust, Pettersson finally estimated that about 14 million tons of dust land on the earth annually. To quote Petterson again:
‘Most of the samples contained small but measurable quantities of nickel along with the large amount of iron. The average for 30 filters was 14.3 micrograms of nickel from each 1,000 cubic metres of air. This would mean that each 1,000 cubic metres of air contains .6 milligram of meteoritic dust. If meteoritic dust descends at the same rate as the dust created by the explosion of the Indonesian volcano Krakatoa in 1883, then my data indicate that the amount of meteoritic dust landing on the earth every year is 14 million tons. From the observed frequency of meteors and from other data Watson (F.G. Watson of Harvard University) calculates the total weight of meteoritic matter reaching the earth to be between 365,000 and 3,650,000 tons a year. His higher estimate is thus about a fourth of my estimate, based upon the Hawaiian studies. To be on the safe side, especially in view of the uncertainty as to how long it takes meteoritic dust to descend, I am inclined to find five million tons per year plausible.’12
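The step from the measured nickel to the quoted dust concentration is a simple proportion, assuming (as Pettersson did) that meteoritic dust is about 2.5 per cent nickel. A minimal sketch:

```python
# Nickel-to-dust conversion behind the figures quoted above.
ni_micrograms_per_1000m3 = 14.3     # average nickel found per 1,000 m^3 of air
ni_fraction_in_dust = 0.025         # assumed 2.5% nickel content of meteoritic dust

dust_mg_per_1000m3 = ni_micrograms_per_1000m3 / ni_fraction_in_dust / 1000
print(f"{dust_mg_per_1000m3:.2f} mg of meteoritic dust per 1,000 m^3 of air")  # ~0.6 mg
```

Scaling that concentration up to a global tonnage per year then depends entirely on the assumed settling rate (the Krakatoa analogy), which is the uncertainty Pettersson himself flags at the end of the quotation.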
Now several evolutionists have latched onto Pettersson’s conservatism with his suggestion that a figure of 5 million tons per year is more plausible and have thus promulgated the idea that Pettersson’s estimate was ‘high’,13 ‘very speculative’,14 and ‘tentative’.15 One of these critics has even gone so far as to suggest that ‘Pettersson’s dust- collections were so swamped with atmospheric dust that his estimates were completely wrong’16 (emphasis mine). Others have said that ‘Pettersson’s samples were apparently contaminated with far more terrestrial dust than he had accounted for.’17 So what does Pettersson say about his 5 million tons per year figure?:
‘The five-million-ton estimate also squares nicely with the nickel content of deep-ocean sediments. In 1950 Henri Rotschi of Paris and I analysed 77 samples of cores raised from the Pacific during the Swedish expedition. They held an average of .044 per cent nickel. The highest nickel content in any sample was .07 per cent. This, compared to the average .008-per-cent nickel content of continental igneous rocks, clearly indicates a substantial contribution of nickel from meteoritic dust and spherules.
If five million tons of meteoritic dust fall to the earth each year, of which 2.5 per cent is nickel, the amount of nickel added to each square centimetre of ocean bottom would be .000000025 gram per year, or .017 per cent of the total red-clay sediment deposited in a year. This is well within the .044-per-cent nickel content of the deep-sea sediments and makes the five-million-ton figure seem conservative.’18
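The figures in this second quotation can likewise be reproduced; the sketch below assumes metric tons and spreads the influx over the whole surface of the earth (about 5.1 x 10^18 cm^2), since those assumptions recover the quoted nickel flux.

```python
# Checking the nickel flux quoted for the ocean floor.
dust_tonnes_per_yr = 5e6            # Pettersson's conservative global influx
ni_fraction = 0.025                 # assumed nickel content of meteoritic dust
earth_area_cm2 = 5.1e18             # whole-earth surface area (assumed here)

ni_g_per_cm2_yr = dust_tonnes_per_yr * 1e6 * ni_fraction / earth_area_cm2
print(f"{ni_g_per_cm2_yr:.1e} g of nickel per cm^2 per year")      # ~2.5e-8, as quoted

# If that is 0.017% of the red clay laid down each year, the implied total
# sediment accumulation is:
clay_g_per_cm2_yr = ni_g_per_cm2_yr / 0.00017
print(f"{clay_g_per_cm2_yr:.1e} g of red clay per cm^2 per year")  # ~1.4e-4
```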
In other words, as a reputable scientist who presented his assumptions and warned of the unknowns, Pettersson was happy with his results.
But what about other scientists who were aware of Pettersson and his work at the time he did it? Dr Isaac Asimov’s comments,19 for instance, confirm that other scientists of the time were also happy with Pettersson’s results. Of Pettersson’s experiment Asimov wrote:-
‘At a 2-mile height in the middle of the Pacific Ocean one can expect the air to be pretty free of terrestrial dust. Furthermore, Pettersson paid particular attention to the cobalt content of the dust, since meteor dust is high in cobalt whereas earthly dust is low in it.’20
Indeed, Asimov was so confident in Pettersson’s work that he used Pettersson’s figure of 14,300,000 tons of meteoritic dust falling to the earth’s surface each year to do his own calculations. Thus Asimov suggested:
‘Of course, this goes on year after year, and the earth has been in existence as a solid body for a good long time: for perhaps as long as 5 billion years. If, through all that time, meteor dust has settled to the earth at the same rate as it does, today, then by now, if it were undisturbed, it would form a layer 54 feet thick over all of the earth.’21
This sounds like very convincing confirmation of the creationist case, but of course, the year that Asimov wrote those words was 1959, and a lot of other meteoritic dust influx measurements have since been made. The critics are also quick to point this out -
‘… we now have access to dust collection techniques using aircraft, high-altitude balloons and spacecraft. These enable researchers to avoid the problems of atmospheric dust which plagued Pettersson.’22
However, the problem is to decide which technique for estimating the meteoritic dust influx gives the ‘true’ figure. Even Phillips admits this when he says:
‘(Techniques vary from the use of high altitude rockets with collecting grids to deep-sea core samples. Accretion rates obtained by different methods vary from 10^2 to 10^9 tons/year. Results from identical methods also differ because of the range of sizes of the measured particles.’23
One is tempted to ask why it is that Pettersson’s 5-14 million tons per year figure is slammed as being ‘tentative’, ‘very speculative’ and ‘completely wrong’, when one of the same critics openly admits the results from the different, more modern methods vary from 100 to 1 billion tons per year, and that even results from identical methods differ? Furthermore, it should be noted that Phillips wrote this in 1978, some two decades and many moon landings after Pettersson’s work!
| Technique | Estimated influx |
|---|---|
| (a) Small Size In Space (<0.1 cm) | |
| Penetration Satellites | 36,500-182,500 tons/yr |
| Al26 (sea sediment) | 73,000-3,650,000 tons/yr |
| Rare Gases | <3,650,000 tons/yr |
| (b) Cometary Meteors (10^-4 - 10^2 g) In Space | |
| Cometary Meteors | 73,000 tons/yr |
| (c) ‘Any’ Size in Space | |
| (i) Spherules | <110 tons/yr |
| (ii) Total Winter | <730 tons/yr |
| (iii) Total Annual | <200,000 tons/yr |
| Balloon Meshes | <220,000 tons/yr |
| Airplane Filters | <91,500 tons/yr |
| (i) Dust Counter | 3,650,000 tons/yr |
| (ii) Coronograph | 365,000 tons/yr |
| Ni (Antarctic ice) | 3,650,000-11,000,000 tons/yr |
| Ni (sea sediment) | <3,650,000 tons/yr |
| Os (sea sediment) | 110,000 tons/yr |
| Cl36 (sea sediment) | 1,825,000 tons/yr |
| (d) Large Size in Space | |

Table 1. Measurements and estimates of the meteoritic dust influx to the earth. (The data are adapted from Parkin and Tilles,24 who have fully referenced all their data sources.) (All figures have been rounded off.)
Other Estimates, Particularly by Chemical Methods
In 1968, Parkin and Tilles summarised all the measurement data then available on the question of influx of meteoritic (interplanetary) material (dust) and tabulated it.24 Their table is reproduced here as Table 1, but whereas they quoted influx rates in tons per day, their figures have been converted to tons per year for ease of comparison with Pettersson’s figures.
Even a quick glance at Table 1 confirms that most of these experimentally-derived measurements are well below Pettersson’s 5-14 million tons per year figure, but Phillips’ statement (quoted above) that results vary widely, even from identical methods, is amply verified by noting the range of results listed under some of the techniques. Indeed, it also depends on the experimenter doing the measurements (or estimates, in some cases). For instance, one of the astronomical methods used to estimate the influx rate depends on calculation of the density of the very fine dust in space that causes the zodiacal light. In Table 1, two estimates by different investigators are listed because they differ by 2-3 orders of magnitude.
On the other hand, Parkin and Tilles’ review of influx measurements, while comprehensive, was not exhaustive, there being other estimates that they did not report. For example, Pettersson25 also mentions an influx estimate based on meteorite data of 365,000-3,650,000 tons/year made by F. G. Watson of Harvard University (quoted earlier), an estimate which is also 2-3 orders of magnitude different from the estimate listed by Parkin and Tilles and reproduced in Table 1. So with such a large array of competing data that give such conflicting orders-of-magnitude different estimates, how do we decide which is the best estimate that somehow might approach the ‘true’ value?
Another significant research paper was also published in 1968. Scientists Barker and Anders were reporting on their measurements of iridium and osmium concentration in dated deep-sea sediments (red clays) of the central Pacific Ocean Basin, which they believed set limits to the influx rate of cosmic matter, including dust.26 Like Pettersson before them, Barker and Anders relied upon the observation that whereas iridium and osmium are very rare elements in the earth’s crustal rocks, those same two elements are present in significant amounts in meteorites.
| Element | Sampling Site | Accretion Rate* |
|---|---|---|

* Normalized to the composition of C1 carbonaceous chondrites (one class of meteorites).

Table 2. Estimates of the accretion rate of cosmic matter by chemical methods (after Barker and Anders,26 who have fully referenced all their data sources).
Their results are included in Table 2 (last four estimates), along with earlier reported estimates from other investigators using similar and other chemical methods. They concluded that their analyses, when compared with iridium concentrations in meteorites (C1 carbonaceous chondrites), corresponded to a meteoritic influx rate for the entire earth of between 30,000 and 90,000 tons per year. Furthermore, they maintained that a firm upper limit on the influx rate could be obtained by assuming that all the iridium and osmium in deep-sea sediments is of cosmic origin. The value thus obtained is between 50,000 and 150,000 tons per year. Notice, however, that these scientists were careful to allow for error margins by using a range of influx values rather than a definitive figure. Some recent authors though have quoted Barker and Anders’ result as 100,000 tons, instead of 100,000 ± 50,000 tons. This may not seem a critical distinction, unless we realise that we are talking about a 50% error margin either way, and that’s quite a large error margin in anyone’s language regardless of the magnitude of the result being quoted.
Even though Barker and Anders’ results were published in 1968, most authors, even fifteen years later, still quote their influx figure of 100,000 ± 50,000 tons per year as the most reliable estimate that we have via chemical methods. However, Ganapathy’s research on the iridium content of the ice layers at the South Pole27 suggests that Barker and Anders’ figure underestimates the annual global meteoritic influx.
Ganapathy took ice samples from ice cores recovered by drilling through the ice layers at the US Amundsen-Scott base at the South Pole in 1974, and analysed them for iridium. The rate of ice accumulation at the South Pole over the last century or so is now particularly well established, because two very reliable precision time markers exist in the ice layers for the years 1884 (when debris from the August 26, 1883 Krakatoa volcanic eruption was deposited in the ice) and 1953 (when nuclear explosions began depositing fission products in the ice). With such an accurately known time reference framework to put his iridium results into, Ganapathy came up with a global meteoritic influx figure of 400,000 tons per year, four times higher than Barker and Anders’ estimate from mid-Pacific Ocean sediments.
In support of his estimate, Ganapathy also pointed out that Barker and Anders had suggested that their estimate could be stretched up to three times its value (that is, to 300,000 tons per year) by compounding several unfavorable assumptions. Furthermore, more recent measurements by Kyte and Wasson of iridium in deep-sea sediment samples obtained by drilling have yielded estimates of 330,000-340,000 tons per year.28 So Ganapathy’s influx estimate of 400,000 tons of meteoritic material per year seems to represent a fairly reliable figure, particularly because it is based on an accurately known time reference framework.
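The logic of these chemical estimates can be sketched schematically: the iridium deposited per unit area per year, attributed entirely to meteoritic material, is scaled up by the iridium content of chondritic material and by the earth's surface area. The parameter values below are placeholders chosen only to reproduce the order of magnitude of Ganapathy's result; they are not his measurements, which are not given in the text.

```python
# Schematic of an ice-core (or sediment) iridium influx estimate.
def influx_from_iridium(ir_ng_per_cm2, years_spanned,
                        ir_fraction_chondrite=480e-9,   # ~480 ppb Ir assumed for C1 chondrites
                        earth_area_cm2=5.1e18):
    """Return an implied global meteoritic influx in tonnes per year."""
    ir_flux_g = ir_ng_per_cm2 * 1e-9 / years_spanned     # g Ir per cm^2 per yr
    dust_flux_g = ir_flux_g / ir_fraction_chondrite      # g dust per cm^2 per yr
    return dust_flux_g * earth_area_cm2 / 1e6             # tonnes per yr

# Hypothetical iridium inventory accumulated between the 1884 and 1953 markers:
print(f"{influx_from_iridium(ir_ng_per_cm2=0.0027, years_spanned=69):,.0f} tonnes per year")
# -> roughly 4 x 10^5 tonnes/yr, the order of Ganapathy's published figure
```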
Estimates via Aircraft and Spacecraft Methods
So much for chemical methods of determining the rate of annual meteoritic influx to the earth’s surface. But what about the data collected by high-flying aircraft and spacecraft, which some critics29,30 are adamant give the most reliable influx estimates because of the elimination of a likelihood of terrestrial dust contamination? Indeed, on the basis of the dust collected by the high-flying U-2 aircraft, Bridgstock dogmatically asserts that the influx figure is only 10,000 tonnes per year.31,32 To justify his claim Bridgstock refers to the reports by Bradley, Brownlee and Veblen,33 and Dixon, McDonnell and Carey34 who state a figure of 10,000 tons for the annual influx of interplanetary dust particles. To be sure, as Bridgstock says,35 Dixon, McDonnell and Carey do report that ‘…researchers estimate that some 10,000 tonnes of them fall to Earth every year.’36 However, such was Bridgstock’s haste to prove his point, even if it meant quoting out of context, that he either did not carefully read and fully comprehend Dixon, McDonnell and Carey’s report, or else deliberately ignored parts of it; otherwise he would have noticed that the figure ‘some 10,000 tonnes of them fall to Earth every year’ refers only to a special type of particle called Brownlee particles, not to all cosmic dust particles. To clarify this, let’s quote Dixon, McDonnell and Carey:
‘Over the past 10 years, this technique has landed a haul of small fluffy cosmic dust grains known as “Brownlee particles” after Don Brownlee, an American researcher who pioneered the routine collection of particles by aircraft, and has led in their classification. Their structure and composition indicate that the Brownlee particles are indeed extra-terrestrial in origin (see Box 2), and researchers estimate that some 10,000 tonnes of them fall to Earth every year. But Brownlee particles represent only part of the total range of cosmic dust particles’37 (emphasis mine).
And further, speaking of these ‘fluffy’ Brownlee particles:
‘The lightest and fluffiest dust grains, however, may enter the atmosphere on a trajectory which subjects them to little or no destructive effects, and they eventually drift to the ground. There these particles are mixed up with greater quantities of debris from the larger bodies that burn up as meteors, and it is very difficult to distinguish the two’38 (emphasis ours).
What Bridgstock has done, of course, is to say that the total quantity of cosmic dust that hits the earth each year according to Dixon, McDonnell and Carey is 10,000 tonnes, when these scientists quite clearly stated they were only referring to a part of the total cosmic dust influx, and a lesser part at that. A number of writers on this topic have unwittingly made similar mistakes.
But this brings us to a very crucial aspect of this whole issue, namely, that there is in fact a complete range of sizes of meteoritic material that reaches the earth, and moon for that matter, all the way from large meteorites metres in diameter that produce large craters upon impact, right down to the microscopic-sized ‘fluffy’ dust known as Brownlee particles, as they are referred to above by Dixon, McDonnell, and Carey. And furthermore, each of the various techniques used to detect this meteoritic material does not necessarily give the complete picture of all the sizes of particles that come to earth, so researchers need to be careful not to equate their influx measurements using a technique to a particular particle size range with the total influx of meteoritic particles. This is of course why the more experienced researchers in this field are always careful in their records to stipulate the particle size range that their measurements were made on.
Figure 1. The mass ranges of interplanetary (meteoritic) dust particles as detected by various techniques (adapted from Millman39). The particle penetration, impact and collection techniques make use of satellites and rockets. The techniques shown in italics are based on lunar surface measurements.
Millman39 discusses this question of the particle size ranges over which the various measurement techniques are operative. Figure 1 is an adaptation of Millman’s diagram. Notice that the chemical techniques, such as analyses for iridium in South Pole ice or Pacific Ocean deep-sea sediments, span nearly the full range of meteoritic particle sizes, leading to the conclusion that these chemical techniques are the most likely to give us an estimate closest to the ‘true’ influx figure. However, Millman40 and Dohnanyi41 adopt a different approach to obtain an influx estimate. Recognising that most of the measurement techniques only measure the influx of particles of particular size ranges, they combine the results of all the techniques so as to get a total influx estimate that represents all the particle size ranges. Because of overlap between techniques, as is obvious from Figure 1, they plot the relation between the cumulative number of particles measured (or cumulative flux) and the mass of the particles being measured, as derived from the various measurement techniques. Such a plot can be seen in Figure 2. The curve in Figure 2 is the weighted mean flux curve obtained by comparing, adding together and taking the mean at any one mass range of all the results obtained by the various measurement techniques. A total influx estimate is then obtained by integrating mathematically the total mass under the weighted mean flux curve over a given mass range.
Figure 2. The relation between the cumulative number of particles and the lower limit of mass to which they are counted, as derived from various types of recording - rockets, satellites, lunar rocks, lunar seismographs (adapted from Millman39). The crosses represent the Pegasus and Explorer penetration data.
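The integration step works on a cumulative curve like that of Figure 2: the number of particles in each mass interval is the difference in cumulative flux across that interval, and each interval contributes that number times a representative particle mass. A toy version, with an invented curve rather than Millman's data, is sketched below.

```python
import numpy as np

# Toy integration of a cumulative flux curve N(>m): total mass influx is the
# sum over mass intervals of (particles per interval) x (representative mass).
mass_g   = np.array([1e-12, 1e-9, 1e-6, 1e-3, 1e0, 1e3])        # interval boundaries (g)
cum_flux = np.array([3e-6, 1e-7, 3e-9, 1e-11, 3e-14, 1e-16])    # N(>m) per m^2 per s (invented)

particles_per_interval = -np.diff(cum_flux)      # N decreases with m, so differences are negated
rep_mass = np.sqrt(mass_g[:-1] * mass_g[1:])     # geometric-mean mass of each interval

mass_flux = np.sum(particles_per_interval * rep_mass)           # g per m^2 per s
earth_area_m2, seconds_per_year = 5.1e14, 3.156e7
print(f"toy global influx: {mass_flux * earth_area_m2 * seconds_per_year / 1e6:,.0f} tonnes/yr")
```

The point of the exercise is simply that the result depends heavily on how far the mass range is extended at both ends, which is precisely the issue raised below for Millman's and Dohnanyi's figures.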
By this means Millman42 estimated that in the mass range 10^-12 to 10^3 g a mere 30 tons of meteoritic material reach the earth each day, equivalent to an influx of 10,950 tons per year. Not surprisingly, the same critic (Bridgstock) that erroneously latched onto the 10,000 tonnes per year figure of Dixon, McDonnell and Carey to defend his (Bridgstock’s) belief that the moon and the earth are billions of years old, also latched onto Millman’s 10,950 tons per year figure.43 But what Bridgstock has failed to grasp is that Dixon, McDonnell and Carey’s figure refers only to the so-called Brownlee particles in the mass range of 10^-12 to 10^-6 g, whereas Millman’s figure, as he stipulates himself, covers the mass range of 10^-12 to 10^3 g. The two figures cannot be compared as equals that somehow support each other, because they refer to different particle mass ranges.
Furthermore, the close correspondence between these two figures when they refer to different mass ranges, the 10,000 tonnes per year figure of Dixon, McDonnell and Carey representing only 40% of the mass range of Millman’s 10,950 tons per year figure, suggests something has to be wrong with the techniques used to derive these figures. Even from a glance at the curve in Figure 2, it is obvious that the total mass represented by the area under the curve in the mass range 10^-6 to 10^3 g can hardly be 950 or so tons per year (that is, the difference between Millman’s and Dixon, McDonnell and Carey’s figures and mass ranges), particularly if the total mass represented by the area under the curve in the mass range 10^-12 to 10^-6 g is supposed to be 10,000 tonnes per year (Dixon, McDonnell and Carey’s figure and mass range). And Millman even maintains that the evidence indicates that two-thirds of the total mass of the dust complex encountered by the earth is in the form of particles with masses between 10^-6.5 and 10^-3.5 g, or in the three orders of magnitude 10^-6, 10^-5 and 10^-4 g, respectively,44 outside the mass range for the so-called Brownlee particles. So if Dixon, McDonnell and Carey are closer to the truth with their 1985 figure of 10,000 tonnes per year of Brownlee particles (mass range 10^-12 to 10^-6 g), and if two-thirds of the total particle influx mass lies outside the Brownlee particle size range, then Millman’s 1975 figure of 10,950 tons per year must be drastically short of the ‘real’ influx figure, which thus has to be at least 30,000 tons per year.
Millman admits that if some of the finer dust particles do not register by either penetrating or cratering satellite or aircraft collection panels, it could well be that we should allow for this by raising the flux estimate. Furthermore, he states that it should also be noted that the Prairie Network fireballs (McCrosky45), which are outside his (Millman’s) mathematical integration calculations because they are outside the mass range of his mean weighted influx curve, could add appreciably to his flux estimate.46 In other words, Millman is admitting that his influx estimate would be greatly increased if the mass range used in his calculations took into account both particles finer than 10^-12 g and particularly particles greater than 10^3 g.
Figure 3. Cumulative flux of meteoroids and related objects into the earth’s atmosphere having a mass of M(kg) (adapted from Dohnanyi41). His data sources used to derive this plot are listed in his bibliography.
Unlike Millman, Dohnanyi47 did take into account a much wider mass range and smaller cumulative fluxes, as can be seen in his cumulative flux plot in Figure 3, and so he did obtain a much higher total influx estimate of some 20,900 tons of dust per year coming to the earth. Once again, if McCrosky’s data on the Prairie Network fireballs were included by Dohnanyi, then his influx estimate would have been greater. Furthermore, Dohnanyi’s estimate is primarily based on supposedly more reliable direct measurements obtained using collection plates and panels on satellites, but Millman maintains that such satellite penetration methods may not be registering the finer dust particles because they neither penetrate nor crater the collection panels, and so any influx estimate based on such data could be underestimating the ‘true’ figure. This is particularly significant since Millman also highlights the evidence that there is another concentration peak in the mass range 10^-13 to 10^-14 g at the lower end of the theoretical effectiveness of satellite penetration data collection (see Figure 1 again). Thus even Dohnanyi’s influx estimate is probably well below the ‘true’ figure.
Representativeness and Assumptions
This leads us to a consideration of the representativeness, both physically and statistically, of each of the influx measurement dust collection techniques and the influx estimates derived from them. For instance, how representative is a sample of dust collected on the small plates mounted on a small satellite or U-2 aircraft compared with the enormous volume of space that the sample is meant to represent? We have already seen how Millman admits that some dust particles probably do not penetrate or crater the plates as they are expected to and so the final particle count is thereby reduced by an unknown amount. And how representative is a drill core or grab sample from the ocean floor? After all, aren’t we analysing a split from a 1-2 kilogram sample and suggesting this represents the tonnes of sediments draped over thousands of square kilometres of ocean floor to arrive at an influx estimate for the whole earth?! To be sure, careful repeat samplings and analyses over several areas of the ocean floor may have been done, but how representative both physically and statistically are the results and the derived influx estimate?
Of course, Pettersson’s estimate from dust collected atop Mauna Loa also suffers from the same question of representativeness. In many of their reports, the researchers involved have failed to discuss such questions. Admittedly there are so many potential unknowns that any statistical quantification is well-nigh impossible, but some discussion of sample representativeness should be attempted and should translate into some ‘guesstimate’ of error margins in their final reported dust influx estimate. Some like Barker and Anders with their deep-sea sediments48 have indicated error margins as high as ±50%, but even then such error margins only refer to the within and between sample variations of element concentrations that they calculated from their data set, and not to any statistical ‘guesstimate’ of the physical representativeness of the samples collected and analysed. Yet the latter is vital if we are trying to determine what the ‘true’ figure might be.
But there is another consideration that can be even more important, namely, any assumptions that were used to derive the dust influx estimate from the raw measurements or analytical data. The most glaring example of this is with respect to the interpretation of deep-sea sediment analyses to derive an influx estimate. In common with all the chemical methods, it is assumed that all the nickel, iridium and osmium in the samples, over and above the average respective contents of appropriate crustal rocks, is present in the cosmic dust in the deep-sea sediment samples. Although this seems to be a reasonable assumption, there is no guarantee that it is completely correct or reliable. Furthermore, in order to calculate how much cosmic dust is represented by the extra nickel, iridium and osmium concentrations in the deep-sea sediment samples, it is assumed that the cosmic dust has nickel, iridium and osmium concentrations equivalent to the average respective concentrations in Type I carbonaceous chondrites (one of the major types of meteorites). But is that type of meteorite representative of all the cosmic matter arriving at the earth’s surface? Researchers like Barker and Anders assume so because everyone else does! To be sure there are good reasons for making that assumption, but it is by no means certain that Type I carbonaceous chondrites are representative of all the cosmic material arriving at the earth’s surface, since it has been almost impossible so far to exclusively collect such material for analysis. (Some has been collected by spacecraft and U-2 aircraft, but these samples still do not represent the total composition of cosmic material arriving at the earth’s surface since they only represent a specific particle mass range in a particular path in space or the upper atmosphere.)
However, the most significant assumption is yet to come. In order to calculate an influx estimate from the assumed cosmic component of the nickel, iridium and osmium concentrations in the deep-sea sediments it is necessary to determine what time span is represented by the deep-sea sediments analysed. In other words, what is the sedimentation rate in that part of the ocean floor sampled and how old therefore are our sediment samples? Based on the uniformitarian and evolutionary assumptions, isotopic dating and fossil contents are used to assign long time spans and old ages to the sediments. This is seen not only in Barker and Anders’ research, but in the work of Kyte and Wasson who calculated influx estimates from iridium measurements in so-called Pliocene and Eocene-Oligocene deep-sea sediments.49 Unfortunately for these researchers, their influx estimates depend absolutely on the validity of their dating and age assumptions. And this is extremely crucial, for if they obtained influx estimates of 100,000 tons per year and 330,000-340,000 tons per year respectively on the basis of uniformitarian and evolutionary assumptions (slow sedimentation and old ages), then what would these influx estimates become if rapid sedimentation has taken place over a radically shorter time span? On that basis, Pettersson’s figure of 5-14 million tons per year is not far-fetched!
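Because the measured element inventory is fixed, any sediment-based estimate scales inversely with the time span assumed for the sediment, as the following sketch of that dependence illustrates (taking the quoted ~100,000 tons per year, derived under conventional sediment ages, purely as a reference point):

```python
# Influx inferred from a fixed iridium inventory scales as 1 / (assumed time span).
reference_influx_tons_per_yr = 100_000    # Barker and Anders' central figure (conventional ages)

for compression in (1, 10, 100, 1000):    # hypothetical factors by which the time span is shortened
    implied = reference_influx_tons_per_yr * compression
    print(f"time span shortened {compression:>4}x -> implied influx {implied:>12,} tons/yr")
```

A hundred-fold compression of the assumed time span would, for example, bring the implied influx to about 10 million tons per year, within the 5-14 million tons per year range quoted for Pettersson.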
On the other hand, however, Ganapathy’s work on ice cores from the South Pole doesn’t suffer from any assumptions as to the age of the analysed ice samples, because he was able to correlate his analytical results with two time-marker events of recent recorded history. Consequently his influx estimate of 400,000 tons per year has to be taken seriously. Furthermore, one of the advantages of the chemical methods of influx estimating, such as Ganapathy’s analyses of iridium in ice cores, is that the technique in theory, and probably in practice, spans the complete mass range of cosmic material (unlike the other techniques - see Figure 1 again) and so should give a better estimate. Of course, in practice this is difficult to verify, since statistically the likelihood of sampling a macroscopic cosmic particle in, for example, an ice core is virtually nonexistent. In other words, there is the question of representativeness again, since the ice core is taken to represent a much larger area of ice sheet, and it may well be that the cross-sectional area intersected by the ice core has an anomalously high or low concentration of cosmic dust particles, or in fact an average concentration - who knows which?
Finally, an added problem not appreciated by many working in the field is that there is an apparent variation in the dust influx rate according to latitude. Schmidt and Cohen reported50 that this apparent variation is most closely related to geomagnetic latitude, so that at the poles the resultant influx is higher than in equatorial regions. They suggested that electromagnetic interactions could cause only certain charged particles to impinge preferentially at high latitudes. This may well explain the difference between Ganapathy’s influx estimate of 400,000 tons per year from the study of the dust in Antarctic ice and, for example, Kyte and Wasson’s estimate of 330,000-340,000 tons per year based on iridium measurements in deep-sea sediment samples from the mid-Pacific Ocean.
A number of other workers have made estimates of the meteoritic dust influx to the earth that are often quoted with some finality. Estimates have continued to be made up to the present time, so it is important to compare them in order to see whether any general consensus has emerged.
In reviewing the various estimates by the different methods up until that time, Singer and Bandermann51 argued in 1967 that the most accurate method for determining the meteoritic dust influx to the earth was by radiochemical measurements of radioactive Al26 in deep-sea sediments. Their confidence in this method rested on the argument that the only source of this radioactive nuclide is interplanetary dust, and that its presence in deep-sea sediments is therefore a more certain indicator of dust than any other chemical evidence. From measurements made by others they concluded that the influx rate is 1,250 tons per day, the error margins being such that they indicated the influx rate could be as low as 250 tons per day or as high as 2,500 tons per day. These figures equate to an influx rate of over 450,000 tons per year, ranging from 91,300 tons per year to 913,000 tons per year.
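The conversion from these daily figures to annual rates is straightforward arithmetic:

1,250 tons/day x 365 days/yr ≈ 456,000 tons/yr,

with the lower and upper limits of 250 and 2,500 tons per day similarly converting to approximately 91,300 and 913,000 tons per year.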
They also defended this estimate, and the method behind it, against the other methods. For example, satellite experiments, they said, never measured a concentration, nor even a simple flux of particles, but rather a flux of particles having a particular momentum or energy greater than some minimum threshold which depended on the detector being used. Furthermore, they argued that the impact rate near the earth should increase by a factor of about 1,000 compared with the value far away from the earth. And whereas dust influx can also be measured in the upper atmosphere, by then the particles have already begun slowing down, so that any vertical mass motions of the atmosphere may result in an increase in the concentration of the dust particles, thus producing a spurious result. For these and other reasons, therefore, Singer and Bandermann were adamant that their estimate based on radioactive Al26 in ocean sediments was a reliable determination of the mass influx rate to the earth, and thus of the mass concentration of dust in interplanetary space.
Other investigators continued to rely upon a combination of satellite, radio and visual measurements of the different particle masses to arrive at a cumulative flux rate. Thus in 1974 Hughes reported52 that
‘from the latest cumulative influx rate data the influx of interplanetary dust to the earth’s surface in the mass range 10⁻¹³ - 10⁶ g is found to be 5.7 x 10⁹ g yr⁻¹’,
or 5,700 tons per year, drastically lower than the Singer and Bandermann estimate from Al26 in ocean sediments. Yet within a year Hughes had revised his estimate upwards to 1.62 x 10¹⁰ g yr⁻¹, with error calculations indicating upper and lower limits of about 3.0 and 0.8 x 10¹⁰ g yr⁻¹ respectively.53 Again this was for the particle mass range between 10⁻¹³ g and 10⁶ g, and this estimate translates to 16,200 tons per year, with lower and upper limits of 8,000 - 30,000 tons per year. So confident now was Hughes in the data he had used for his calculations that he submitted an easier-to-read account of his work to the widely-read, popular science magazine, New Scientist.54 Here he again argued that
‘as the earth orbits the sun it picks up about 16,000 tonnes of interplanetary material each year. The particles vary in size from huge meteorites weighing tonnes to small microparticles less than 0.2 micron in diameter. The majority originate from decaying comets.’
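The conversion from the published figures in grams per year to tonnes per year is simply a division by 10⁶ (the number of grams in a tonne); thus, for example,

1.62 x 10¹⁰ g yr⁻¹ ÷ 10⁶ g/tonne = 16,200 tonnes per year,

with the upper and lower limits of 3.0 and 0.8 x 10¹⁰ g yr⁻¹ likewise corresponding to 30,000 and 8,000 tonnes per year.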
Figure 4. Plot of the cumulative flux of interplanetary matter (meteorites, meteors, and meteoritic dust, etc.) into the earth’s atmosphere (adapted from Hughes54). Note that he has subdivided the debris into two modes of origin - cometary and asteroidal - based on mass, with the former category being further subdivided according to detection techniques. From this plot Hughes calculated a flux of 16,000 tonnes per year.
Figure 4 shows the cumulative flux curve built from the various sources of data that he used to derive his calculated influx of about 16,000 tons per year. However, it should be noted here that, using the same methodology with similar data, Millman55 in 1975 and Dohnanyi56 in 1972 had produced influx estimates of 10,950 tons per year and 20,900 tons per year respectively (Figures 2 and 3 can be compared with Figure 4). Nevertheless, it could be argued that these two estimates still fall within the range of 8,000 - 30,000 tons per year suggested by Hughes. In any case, Hughes’ confidence in his estimate is further illustrated by his again quoting the same 16,000 tons per year influx figure in a paper published in an authoritative book on the subject of cosmic dust.58
Meanwhile, in a somewhat novel approach to the problem, Wetherill in 1976 derived a meteoritic dust influx estimate by looking at the possible dust production rate at its source.59 He argued that whereas the present sources of meteorites are probably multiple, it being plausible that both comets and asteroidal bodies of several kinds contribute to the flux of meteorites on the earth, the immediate source of meteorites is those asteroids, known as Apollo objects, that in their orbits around the sun cross the earth’s orbit. He then went on to calculate the mass yield of meteoritic dust (meteoroids) and meteorites from the fragmentation and cratering of these Apollo asteroids. He found the combined yield from both cratering and complete fragmentation to be 7.6 x 10¹⁰ g yr⁻¹, which translates into a figure of 76,000 tonnes per year. Of this figure he calculated that 190 tons per year would represent meteorites in the mass range of 10² - 10⁶ g, a figure which compared well with terrestrial meteorite mass impact rates obtained by various other calculation methods, and also with other direct measurement data, including observation of the actual meteorite flux. This figure of 76,000 tons per year is of course much higher than those estimates based on cumulative flux calculations such as those of Hughes,60 but still below the range of results gained from various chemical analyses of deep-sea sediments, such as those of Barker and Anders,61 Kyte and Wasson,62 and Singer and Bandermann,63 and of the Antarctic ice by Ganapathy.64 No wonder a textbook in astronomy compiled by a worker in the field and published in 1983 gave a figure for the total meteoroid flux of about 10,000 - 1,000,000 tons per year.65
In an oft-quoted paper published in 1985, Grün and his colleagues66 reported on yet another cumulative flux calculation, but this time based primarily on satellite measurement data. Because these satellite measurements had been made in interplanetary space, the figure derived from them would be regarded as a measure of the interplanetary dust flux. Consequently, to calculate the total meteoritic mass influx to the earth from that figure, both the gravitational enhancement at the earth and the surface area of the earth had to be taken into account. The result was an influx figure of about 40 tons per day, which translates to approximately 14,600 tons per year. This of course still equates fairly closely to the influx estimate made by Hughes.67
As well as satellite measurements, one of the other major sources of data for cumulative flux calculations has been measurements made using ground-based radars. In 1988 Olsson-Steel68 reported that previous radar meteor observations made in the VHF band had rendered a flux of particles in the 10⁻⁶ - 10⁻² g mass range that was anomalously low when compared to the fluxes derived from optical meteor observations or satellite measurements. He therefore found that HF radars were necessary in order to detect the total flux into the earth’s atmosphere. Consequently he used radar units near Adelaide and Alice Springs in Australia to make measurements at a number of different frequencies in the HF band. Indeed, Olsson-Steel believed that the radar near Alice Springs was at that time the most powerful device ever used for meteor detection, and because of its sensitivity the meteor count rates were extremely high. From this data he calculated a total influx of particles in the range 10⁻⁶ - 10⁻² g of 12,000 tons per year, which as he points out is almost identical to the flux in the same mass range calculated by Hughes.69,70 He concluded that this implies that, neglecting the occasional asteroid or comet impact, meteoroids in this mass range dominate the total flux to the atmosphere, which he says amounts to about 16,000 tons per year as calculated by Thomas et al.71
In a different approach to the use of ice as a meteoritic dust collector, in 1987 Maurette and his colleagues72 reported on their analyses of meteoritic dust grains extracted from samples of black dust collected from the melt zone of the Greenland ice cap. The reasoning behind this technique was that the ice now melting at the edge of the ice cap had, during the time since it formed inland and flowed outwards to the melt zone, been collecting cosmic dust of all sizes and masses. The quantity thus found by analysis represents the total flux over that time period, which can then be converted into an annual influx rate. While their analyses of the collected dust particles were based on size fractions, they relied on the mass-to-size relationship established by Grün et al.73 to convert their results to flux estimates. They calculated that each kilogram of black dust they collected for extraction and analysis of its contained meteoritic dust corresponded to a collector surface of approximately 0.5 square metres which had been exposed for approximately 3,000 years to meteoritic dust infall. Adding together their tabulated flux estimates for each size fraction below 300 microns yields a total meteoritic dust influx estimate of approximately 4,500 tons per year, well below that calculated from satellite and radar measurements, and drastically lower than that calculated by chemical analyses of ice.
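The logic of this conversion can be sketched in general terms (the symbols here are illustrative, not Maurette et al.’s own notation). If a kilogram of the collected black dust contains a mass m of identifiable meteoritic grains, and corresponds to a collector surface of area A ≈ 0.5 m² exposed for t ≈ 3,000 years, then

flux per unit area = m / (A x t),  and  global influx ≈ [m / (A x t)] x A(earth),  where A(earth) ≈ 5.1 x 10¹⁴ m².

Summing such estimates over the size fractions below 300 microns is what yields the figure of approximately 4,500 tons per year.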
However, in their defence it can at least be said that, in comparison with the chemical method, this technique is based on actual identification of the meteoritic dust grains, rather than expecting the chemical analyses to represent the meteoritic dust component in the total samples of dust analysed. Nevertheless, an independent study in another polar region at about the same time came up with a higher influx rate more in keeping with that calculated from satellite and radar measurements. In that study, Tuncel and Zoller74 measured the iridium content in atmospheric samples collected at the South Pole. During each 10-day sampling period, approximately 20,000-30,000 cubic metres of air was passed through a 25-centimetre-diameter cellulose filter, which was then submitted for a wide range of analyses. Thirty such atmospheric particulate samples were collected over an 11-month period, which ensured that seasonal variations were accounted for. Based on their analyses they discounted any contribution of iridium to their samples from volcanic emissions, and concluded that iridium concentrations in their samples could be used to estimate both the meteoritic dust component in their atmospheric particulate samples and thus the global meteoritic dust influx rate. Thus they calculated a global flux of 6,000-11,000 tons per year.
In evaluating their result they tabulated other estimates from the literature obtained via a wide range of methods, including the chemical analyses of ice and sediments. In defending their estimate against the higher estimates produced by those chemical methods, they suggested that samples (particularly sediment samples) that integrate large time intervals include, in addition to background dust particles, the fragmentation products from large bodies. They reasoned that this meant the chemical methods do not discriminate between background dust particles and fragmentation products from large bodies, and so a significant fraction of the flux estimated from sediment samples may be due to such large body impacts. On the other hand, they argued that their estimate of 6,000-11,000 tons per year for particles smaller than 10⁶ g is in reasonable agreement with estimates from satellite and radar studies.
Finally, in a follow-up study, Maurette with another group of colleagues75 investigated a large sample of micrometeorites collected by the melting and filtering of approximately 100 tons of ice from the Antarctic ice sheet. The grains in the sample were first characterised by visual techniques to sort them into their basic meteoritic types, and then selected particles were submitted for a wide range of chemical and isotopic analyses. Neon isotopic analyses, for example, were used to confirm which particles were of extraterrestrial origin. Drawing also on their previous work, they concluded that the meteoritic dust flux for particles in the size range 50-300 microns, as recovered from either the Greenland or the Antarctic ice sheets, represents about a third of the total mass influx to the earth, a rough estimate of which is approximately 20,000 tons per year.
|Researcher(s)|Technique|Influx Estimate (tons per year)|
|---|---|---|
|Pettersson|Ni in atmospheric dust|14,300,000|
|Barker and Anders|Ir and Os in deep-sea sediments|100,000 (50,000 - 150,000)|
|Ganapathy|Ir in Antarctic ice|400,000|
|Kyte and Wasson|Ir in deep-sea sediments|330,000 - 340,000|
|Millman|Satellite, radar, visual|10,950|
|Dohnanyi|Satellite, radar, visual|20,900|
|Singer and Bandermann|Al26 in deep-sea sediments|456,000 (91,300 - 913,000)|
|Hughes (1975 - 1978)|Satellite, radar, visual|16,200 (8,000 - 30,000)|
|Wetherill|Fragmentation of Apollo asteroids|76,000|
|Grün et al.|Satellite data particularly|14,600|
|Olsson-Steel|Radar data primarily|16,000|
|Maurette et al.|Dust from melting Greenland ice|4,500|
|Tuncel and Zoller|Ir in Antarctic atmospheric particulates|6,000 - 11,000|
|Maurette et al.|Dust from melting Antarctic ice|20,000|

Table 3. Summary of the earth’s meteoritic dust influx estimates via the different measurement techniques.
Over the last three decades numerous attempts have been made, using a variety of methods, to estimate the meteoritic dust influx to the earth. Table 3 summarises the estimates discussed here, most of which are repeatedly referred to in the literature.
Clearly, there is no consensus in the literature as to what the annual influx rate is. Admittedly, no authority today would agree with Pettersson’s 1960 figure of 14,000,000 tons per year. However, there appear to be two major groupings - the chemical methods, which give results in the 100,000-400,000 tons per year range or thereabouts, and the other methods, particularly cumulative flux calculations based on satellite and radar data, which give results in the range 10,000-20,000 tons per year or thereabouts. Some would claim that the satellite measurements give results that are too low because of the sensitivities of the techniques involved, while others would claim that the chemical methods give results that are too high because they include fragmentation products from large bodies along with the background dust particles.
Perhaps the ‘safest’ option is to quote the meteoritic dust influx rate as lying within a range. This is exactly what several authorities on this subject have done when producing textbooks. For example, Dodd76 has suggested a daily rate of between 100 and 1,000 tons, which translates into 36,500-365,000 tons per year, while Hartmann,77 who refers to Dodd, quotes an influx figure of 10,000-1 million tons per year. Hartmann’s quoted range certainly covers the estimates in Table 3, but is perhaps a little generous with the upper limit. Probably to avoid this problem and yet still cover the wide range of estimates, Henbest, writing in New Scientist in 1991,78 declares:
‘Even though the grains are individually small, they are so numerous in interplanetary space that the Earth sweeps up some 100,000 tons of cosmic dust every year.’79
Perhaps this is a ‘safe’ compromise!
However, on balance we would have to say that the chemical methods when reapplied to polar ice, as they were by Maurette and his colleagues, gave a flux estimate similar to that derived from satellite and radar data, but much lower than Ganapathy’s earlier chemical analysis of polar ice. Thus it would seem more realistic to conclude that the majority of the data points to an influx rate within the range 10,000-20,000 tons per year, with the outside possibility that the figure may reach 100,000 tons per year.
Dust Influx to the Moon
Van Till et al. suggest:
‘To compute a reasonable estimate for the accumulation of meteoritic dust on the moon we divide the earth’s accumulation rate of 16,000 tons per year by 16 for the moon’s smaller surface area, divide again by 2 for the moon’s smaller gravitational force, yielding an accumulation rate of about 500 tons per year on the moon.’80
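The arithmetic behind this estimate is straightforward (a rough check using standard values for the radii): the surface-area factor comes from the ratio of the squared radii, (R(earth)/R(moon))² = (6,371 km / 1,738 km)² ≈ 13.5, taken as 16 in the quotation, and then

16,000 tons/yr ÷ (16 x 2) = 500 tons/yr.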
However, Hartmann81 suggests a figure of 4,000 tons per year from his own published work,82 although this estimate is again calculated from the terrestrial influx rate taking into account the smaller surface area of the moon.
These estimates are of course based on the assumption that the density of meteoritic dust in the area of space around the earth-moon system is fairly uniform, an assumption verified by satellite measurements. However, with the US Apollo lunar exploration missions of 1969-1972 came the opportunities to sample the lunar rocks and soils, and to make more direct measurements of the lunar meteoritic dust influx.
Lunar Rocks and Soils
One of the earliest estimates based on actual moon samples was that made by Keays and his colleagues,83 who analysed for trace elements twelve lunar rock and soil samples brought back by the Apollo 11 mission. From their results they concluded that there was a meteoritic or cometary component to the samples, and that component equated to an influx rate of 2.9 x 10⁻⁹ g cm⁻² yr⁻¹ of carbonaceous-chondrite-like material, which translates to over 15,200 tons per year. However, it should be kept in mind that this estimate is based on the assumption that the meteoritic component represents an accumulation over a period of more than 1 billion years, the figure given being the anomalous quantity averaged over that time span. These workers also cautioned about making too much of this estimate because the samples were only derived from one lunar location.
Within a matter of weeks, four of the six investigators published a complete review of their earlier work along with some new data.84 To obtain their new meteoritic dust influx estimate they compared the trace element contents of their lunar soil and breccia samples with the trace element contents of their lunar rock samples. The assumption was that the soil and breccia are made up of the broken-down rocks, so that any trace element differences between the rocks and the soils/breccias would represent material added to the soils/breccias as the rocks were mechanically broken down. Having determined the trace element content of this ‘extraneous component’ in their soil samples, they sought to identify its source. They then assumed that the exposure time of the region (the Apollo 11 landing site or Tranquillity Base) was 3.65 billion years, so that in that time the proton flux from the solar wind would account for some 2% of this extraneous trace element component in the soils, leaving the remaining 98% or so to be of meteoritic (to be exact, ‘particulate’) origin. Upon further calculation, this approximately 98% portion of the extraneous component seemed to be due to an admixture of about 1.9% of carbonaceous-chondrite-like material (in other words, meteoritic dust of a particular type), and the quantity involved thus represented, over a 3.65 billion year history of soil formation, an average influx rate of 3.8 x 10⁻⁹ g cm⁻² yr⁻¹, which translates to over 19,900 tons per year. However, they again added a note of caution because this estimate was only based on a few samples from one location.
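The reasoning can be sketched in general terms (the symbols and regolith figures here are illustrative assumptions, not values given by these investigators). If a regolith column of thickness h and bulk density ρ contains a mass fraction f of meteoritic material accumulated over an assumed exposure time T, then the implied influx per unit area is

influx ≈ f x ρ x h / T.

A meteoritic admixture of about 2% in a regolith column a few metres deep, spread over the assumed 3.65 billion years, gives a rate of the order of 10⁻⁹ g cm⁻² yr⁻¹, the order of magnitude quoted.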
Nevertheless, within six months the principal investigators of this group were again in print publishing further results and an updated meteoritic dust influx estimate.85 By now they had obtained seven samples from the Apollo 12 landing site, which included two crystalline rock samples, four samples from a core ‘drilled’ from the lunar regolith, and a soil sample. Again, all the samples were submitted for analyses of a suite of trace elements, and by again following the procedure outlined above they estimated that for this site the extraneous component represented an admixture of about 1.7% meteoritic dust material, very similar to the soils at the Apollo 11 site. Since the trace element content of the rocks at the Apollo 12 site was similar to that at the Apollo 11 site, even though the two sites are separated by 1,400 kilometres, they concluded, other considerations aside, that this
‘spatial constancy of the meteoritic component suggests that the influx rate derived from our Apollo 11 data, 3.8 x 10⁻⁹ g cm⁻² yr⁻¹, is a meaningful average for the entire moon.’86
So in the abstract to their paper they reported that
‘an average meteoritic influx rate of about 4 x 10⁻⁹ g per square centimetre per year thus seems to be valid for the entire moon.’87
This latter figure translates into an influx rate of approximately 20,900 tons per year.
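The tonnage figures quoted here appear to follow from scaling the per-unit-area rate over a surface area comparable to that of the earth (about 5.1 x 10¹⁸ cm²), presumably so that these lunar-derived rates can be compared directly with the terrestrial influx estimates discussed earlier - an inference on our part rather than a statement by the original investigators. As a rough check on that basis,

4 x 10⁻⁹ g cm⁻² yr⁻¹ x 5.1 x 10¹⁸ cm² ≈ 2 x 10¹⁰ g yr⁻¹ ≈ 20,000 tonnes per year,

which is of the same order as the approximately 20,900 tons per year quoted.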
Ironically, this is the same dust influx rate estimate as for the earth made by Dohnanyi using satellite and radar measurement data via a cumulative flux calculation.88 As for the moon’s meteoritic dust influx, Dohnanyi estimated that using ‘an appropriate focusing factor of 2,’ it is thus half of the earth’s influx, that is, 10,450 tons per year.89 Dohnanyi defended his estimate, even though in his words it ‘is slightly lower than the independent estimates’ of Keays, Ganapathy and their colleagues. He suggested that in view of the uncertainties involved, his estimate and theirs were ‘surprisingly close’.
While to Dohnanyi these meteoritic dust influx estimates based on chemical studies of the lunar rocks seem very close to his estimate based primarily on satellite measurements, in reality the former are between 50% and 100% greater than the latter. This difference is significant, reasons already having been given for the higher influx estimates for the earth based on chemical analyses of deep-sea sediments compared with the cumulative flux estimates based on satellite and radar measurements. Many of the satellite measurements were in fact made from satellites in earth orbit, and it has consequently been assumed that these measurements are automatically applicable to the moon. Fortunately, this assumption has been verified by measurements made by the Russians from their moon-orbiting satellite Luna 19, as reported by Nazarova and his colleagues.90 Those measurements plot within the field of near-earth satellite data as depicted by, for example, Hughes.91 Thus there seems no reason to doubt that the satellite measurements in general are applicable to the meteoritic dust influx to the moon. And since Nazarova et al.’s Luna 19 measurements are compatible with Hughes’ cumulative flux plot of near-earth satellite data, then Hughes’ meteoritic dust influx estimate for the earth is likewise applicable to the moon, except that when the relevant focusing factor, as outlined and used by Dohnanyi,92 is taken into account we obtain a meteoritic dust influx estimate for the moon from this satellite data (via the standard cumulative flux calculation method) of half the earth’s figure, that is, about 8,000-9,000 tons per year.
Apart from satellite measurements using various techniques and detectors to actually measure the meteoritic dust influx to the earth-moon system, the other major direct detection technique used to estimate the meteoritic dust influx to the moon has been the study of the microcraters that are found in the rocks exposed at the lunar surface. It is readily apparent that the moon’s surface has been impacted by large meteorites, given the sizes of the craters that have resulted, but craters of all sizes are found on the lunar surface right down to the micro-scale. The key factors are the impact velocities of the particles, whatever their size, and the lack of an atmosphere on the moon to slow down (or burn up) the meteorites. Consequently, provided their mass is sufficient, even the tiniest dust particles will produce microcraters on exposed rock surfaces upon impact, just as they do when impacting the windows on spacecraft (the study of microcraters on satellite windows being one of the satellite measurement techniques). Additionally, the absence of an atmosphere on the moon, combined with the absence of water on the lunar surface, has meant that chemical weathering as we experience it on the earth just does not happen on the moon. There is of course still physical erosion, again due to impacting meteorites of all sizes and masses, and due to the particles of the solar wind, but these processes have also been studied as a result of the Apollo moon landings. However, it is the microcraters in the lunar rocks that have been used to estimate the dust influx to the moon.
Perhaps one of the first attempts to use microcraters on the moon’s surface as a means of determining the meteoritic dust influx to the moon was that of Jaffe,93 who compared pictures of the lunar surface taken by Surveyor 3 and then 31 months later by the Apollo 12 crew. The Surveyor 3 spacecraft sent thousands of television pictures of the lunar surface back to the earth between April 20 and May 3, 1967, and subsequently on November 20, 1969 the Apollo 12 astronauts visited the same site and took pictures with a hand camera. Apart from the obvious signs of disturbance of the surface dust by the astronauts, Jaffe found only one definite change in the surface. On the bottom of an imprint made by one of the Surveyor footpads when it bounced on landing, all of the pertinent Apollo pictures showed a particle about 2 mm in diameter that did not appear in any of the Surveyor pictures. After careful analysis he concluded that the particle was in place subsequent to the Surveyor picture-taking. Furthermore, because of the resolution of the pictures any crater as large as 1.5 mm in diameter should have been visible in the Apollo pictures. Two pits were noted along with other particles, but as they appeared on both photographs they must have been produced at the time of the Surveyor landing. Thus Jaffe concluded that no meteorite craters as large as 1.5 mm in diameter appeared on the bottom of the imprint, 20 cm in diameter, during those 31 months, so therefore the rate of meteorite impact was less than 1 particle per square metre per month. This corresponds to a flux of 4 x 10⁻⁷ particles m⁻² sec⁻¹ of particles with a mass of 3 x 10⁻⁸ g, a rate near the lower limit of meteoritic dust influx derived from spacecraft measurements, and many orders of magnitude lower than some previous estimates. He concluded that the absence of detectable craters in the imprint of the Surveyor 3 footpad implied a very low meteoritic dust influx onto the lunar surface.
With the sampling of the lunar surface carried out by the Apollo astronauts and the return of rock samples to the earth, much attention focused on the presence of numerous microcraters on exposed rock surfaces as another means of calculating the meteoritic dust influx. These microcraters range in diameter from less than 1 micron to more than 1 cm, and their ubiquitous presence on exposed lunar rock surfaces suggests that microcratering has affected literally every square centimetre of the lunar surface. However, in order to translate quantified descriptive data on microcraters into data on interplanetary dust particles and their influx rate, a calibration has to be made between the lunar microcrater diameters and the masses of the particles that must have impacted to form the craters. Hartung et al.94 suggested that several approaches using the results of laboratory cratering experiments were possible, but narrowed their choice to two approaches based on microparticle accelerator experiments. Because the crater diameter for any given particle diameter increases proportionally with increasing impact velocity, the calibration procedure employs a constant impact velocity, which is chosen as 20km/sec. Furthermore, that figure is chosen because the velocity distribution of interplanetary dust or meteoroids based on visual and radar meteors is bounded by the earth and the solar system escape velocities, and has a maximum at about 20km/sec, which thus conventionally is considered to be the mean velocity for meteoroids. Particles impacting the moon may have a minimum velocity of 2.4km/sec, the lunar escape velocity, but the mean is expected to remain near 20km/sec because of the relatively low effective cross-section of the moon for slower particles. In-flight velocity measurements of micron-sized meteoroids are generally consistent with this distribution. So using a constant impact velocity of 20km/sec gives a calibration relationship between the diameters of the impacting particles and the diameters of the microcraters. Assuming a density of 3 g/cm³ allows this calibration relationship to be between the diameters of the microcraters and the masses of the impacting particles.
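In outline, the calibration amounts to the following (a simplified sketch, using the crater-to-particle diameter ratio of about two that is mentioned later in connection with Hughes’ discussion, rather than the full experimental calibration curves). For an assumed spherical particle of density ρ = 3 g/cm³ and diameter d ≈ D/2, where D is the crater diameter,

m = (π/6) ρ d³ ≈ (π/48) ρ D³,

so that, for example, a 20 micron crater corresponds roughly to a 10 micron particle with a mass of the order of 10⁻⁹ g.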
After determining the masses of the micrometeoroids in this way, their flux on the lunar surface may then be obtained by correlating the areal density of microcraters on rock surfaces with surface exposure times for those sample rocks. In other words, in order to convert crater populations on a given sample into the interplanetary dust flux, the sample’s residence time at the lunar surface must be known.95 These residence times at the lunar surface, or surface exposure times, have been determined either by cosmogenic Al26 radioactivity measurements or by cosmic ray track density measurements,96 or more often by solar-flare particle track density measurements.97
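Expressed simply (an illustrative formulation rather than any particular investigator’s notation), the flux of particles above a given mass then follows from

F(≥m) = N(≥m) / t,  with  t ≈ (measured solar-flare track density) / (assumed track production rate),

where N(≥m) is the areal density of microcraters attributable to particles of mass m or greater and t is the surface exposure time. The dependence of the final flux on the assumed track production rate, and hence on assumed past solar activity, is the point taken up repeatedly in the discussion that follows.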
On this basis Hartung et al.98 concluded that an average minimum flux of particles 25 micrograms and larger is 2.5 x 10⁻⁶ particles per cm² per year on the lunar surface supposedly over the last 1 million years, and that a minimum cumulative flux curve over the range of masses 10⁻¹² - 10⁻⁴ g based on lunar data alone is about an order of magnitude less than independently derived present-day flux data from satellite-borne detector experiments. Furthermore, they found that particles of masses 10⁻⁷ - 10⁻⁴ g are the dominant contributors to the cross-sectional area of interplanetary dust particles, and that these particles are largely responsible for the exposure of fresh lunar rock surfaces by superposition of microcraters. Also, they suggested that the overwhelming majority of all energy deposited at the surface of the moon by impact is delivered by particles 10⁻⁶ - 10⁻² g in mass.
A large number of other studies have been done on microcraters on lunar surface rock samples, and from them calculations have been made to estimate the meteoritic dust (micrometeoroid) influx to the moon. For example, Fechtig et al. investigated in detail a 2 cm² portion of a particular sample using optical and scanning electron microscope (SEM) techniques. Microcraters were measured and counted optically, the results being plotted to show the relationship between microcrater diameters and the cumulative crater frequency. Like other investigators, they found that in all large microcraters 100-200 microns in diameter there were on average one or two ‘small’ microcraters about 1 micron in diameter within them, while in all ‘larger’ microcraters (200-1,000 microns in diameter), of which there are many on almost all lunar rocks, there are large numbers of these ‘smaller’ microcraters. The counting of these ‘small’ microcraters within the ‘larger’ microcraters was found to be statistically significant in estimating the overall microcratering rate and the distribution of particle sizes and masses that have produced the microcraters, because, assuming an unchanging impacting particle size or energy distribution with time, they argued that an equal probability exists for the case when a large crater superimposes itself upon a small crater, thus making its observation impossible, and the case when a small crater superimposes itself upon a larger crater, thus enabling the observation of the small crater. In other words, during the random cratering process, on average, for each small crater observable within a larger microcrater there must have existed one small microcrater rendered unobservable by the subsequent formation of the larger microcrater. Thus they reasoned it is necessary to correct the number of observed small craters upwards to account for this effect. Using a correction factor of two they found that their resultant microcrater size distribution plot agreed satisfactorily with that found in another sample by Schneider et al.100 Their measuring and counting of microcraters on other samples also yielded size distributions similar to those reported by other investigators on other samples.
Fechtig et al. also conducted their own laboratory simulation experiments to calibrate microcrater size with impacting particle size, mass and energy. Once the cumulative microcrater number for a given area was calculated from this information, the cumulative meteoroid flux for that area was easily calculated by dividing the cumulative microcrater number by the exposure ages of the samples, previously determined by means of solar-flare track density measurements. Thus they calculated a cumulative meteoroid flux on the moon of 4 (±3) x 10⁻⁵ particles m⁻² sec⁻¹, which they suggested is fairly consistent with in situ satellite measurements. Their plot comparing micrometeoroid fluxes derived from lunar microcrater measurements with those obtained from various satellite experiments (that is, the cumulative number of particles per square metre per second across the range of particle masses) is reproduced in Figure 5.
Mandeville101 followed a similar procedure in studying the microcraters in a breccia sample collected at the Apollo 15 landing site. Crater numbers were counted and diameters measured. Calibration curves were experimentally derived to relate impact velocity and microcrater diameter, plus impacting particle mass and microcrater diameter. The low solar-flare track density suggested a short and recent exposure time, as did the low density of microcraters. Consequently, in calculating the cumulative micrometeoroid flux a 3,000-year exposure time was assumed, based on this measured solar-flare track density and the assumed solar-flare track production rate. The resultant cumulative particle flux was 1.4 x 10⁻⁵ particles per square metre per second for particles greater than 2.5 x 10⁻¹⁰ g at an impact velocity of 20km/sec, a value which again appears to be in close agreement with flux values obtained by satellite measurements, but at the lower end of the cumulative flux curve calculated from microcraters by Fechtig et al.
Figure 5. Comparison of micrometeoroid fluxes derived from lunar microcrater measurements (cross-hatched and labelled ‘MOON’) with those obtained in various satellite in situ experiments (adapted from Fechtig et al.99) The range of masses/sizes has been subdivided into dust and meteors.
Schneider et al.102 also followed the same procedure in looking at microcraters on Apollo 15 and 16, and Luna 16 samples. After counting and measuring microcraters and conducting calibration experiments, they used both optical and scanning electron microscopy to determine solar-flare track densities and derive solar-flare exposure ages. They plotted their resultant cumulative meteoritic dust flux on a flux versus mass diagram, such as Figure 5, rather than quantifying it. However, their cumulative flux curve is close to the results of other investigators, such as Hartung et al.103 Nevertheless, they did raise some serious questions about the microcrater data and its derivation, because they found that flux values based on lunar microcrater studies are generally less than those based on direct measurements made by satellite-borne detectors, which is evident in Figure 5 also. They found that this discrepancy is not readily resolved but may be due to one or more factors. First on their list of factors was a possible systematic error in the solar-flare track method, perhaps related to our present-day knowledge of the solar-flare particle flux. Indeed, because of uncertainties in applying the solar-flare flux derived from solar-flare track records in time-controlled situations such as the Surveyor 3 spacecraft, they concluded that their solar-flare exposure ages were systematically too low by a factor of between two and three. Ironically, this would imply that the cumulative dust flux calculated from the microcraters is systematically too high by the same factor, which would mean an even greater discrepancy between flux values from lunar microcrater studies and the direct measurements made by satellite-borne detectors. However, they suggested that part of this systematic difference may be because the satellite-borne detectors record an enhanced flux due to particles ejected from the lunar surface by impacting meteorites of all sizes. In any case, they argued that some of this systematic difference may be related to the calibration of the lunar microcraters and the satellite-borne detectors. Furthermore, because we can only measure the present flux, for example by satellite detectors, it may in fact be higher than the long-term average, which they suggest is what is being derived from the lunar microcrater data.
Morrison and Zinner104 also raised questions regarding solar-flare track density measurements and the exposure ages derived from them. They were studying samples from the Apollo 17 landing area and counted and measured microcraters on rock sample surfaces whose original orientation on the lunar surface was known, so that their exposure histories could be determined to test for any directional variations in both the micrometeoroid flux and solar-flare particles. Once measured, they compared their solar-flare track density versus depth profiles against those determined by other investigators on other samples and found differences in the steepnesses of the curves, as well as in their relative positions with respect to the track density and depth values. They found that differences in the steepnesses of the curves did not correlate with differences in supposed exposure ages, and thus although they couldn’t exclude the possibility that these real differences in slopes reflect variations in the activity of the sun, it was more probable that the differences arose from variations in observational techniques, uncertainties in depth measurements, erosion, dust cover on the samples, and/or the precise lunar surface exposure geometry of the different samples measured. They then suggested that the weight of the evidence appeared to favour those curves (track density versus depth profiles) with the flatter slopes, although such a conclusion could be seriously questioned since, even by their own admission, the profiles with the flatter slopes do not match the Surveyor 3 profile data.
Rather than calculating a single cumulative flux figure, Morrison and Zinner treated the smaller microcraters separately from the larger microcraters, quoting flux rates of approximately 900 craters of 0.1 micron diameter per square centimetre per year and approximately 10-15 x 10⁻⁶ craters of 500 micron diameter or greater per square centimetre per year. They found that these rates were independent of the pointing direction of the exposed rock surface relative to the lunar sky, and thus that there was no directional variation in the micrometeorite flux. These rates also appeared to be independent of the supposed exposure times of the samples. They also suggested that the ratio of microcrater numbers to solar-flare particle track densities would make a convenient measure for comparing flux results of different laboratories/investigators and varying sampling situations. Comparing such ratios from their data with those of other investigations showed that some other investigators had ratios lower than theirs by a factor of as much as 50, which can only raise serious questions about whether the microcrater data are really an accurate measure of meteoritic dust influx to the moon. However, it can’t be the microcraters themselves that are the problem, but rather the underlying assumptions involved in the determination/estimation of the supposed ages of the rocks and their exposure times.
Another relevant study is that made by Cour-Palais,105 who examined the heat-shield windows of the command modules of the Apollo 7 - 17 (excluding Apollo 11) spacecraft for meteoroid impacts as a means of estimating the interplanetary dust flux. As part of the study he also compared his results with data obtained from the Surveyor 3 lunar-lander’s TV shroud. In each case the length of exposure time was known, which removed the uncertainty and assumptions that are inherent in the estimation of exposure times in the study of microcraters on lunar rock samples. Furthermore, results from the Apollo spacecraft represented interplanetary space measurements very similar to the satellite-borne detector techniques, whereas the Surveyor 3 TV shroud represented a lunar surface detector. In all, Cour-Palais found a total of 10 micrometeoroid craters of various diameters on the windows of the Apollo spacecraft. Calibration tests were conducted by impacting these windows with microparticles of various diameters and masses, and the results were used to plot a calibration curve between the diameters of the micrometeoroid craters and the estimated masses of the impacting micrometeoroids. Because the Apollo spacecraft had variously spent time in earth orbit, and some in lunar orbit also, as well as transit time in interplanetary space between the earth and the moon, correction factors had to be applied so that the Apollo window data could be taken as a whole to represent measurements in interplanetary space. He likewise applied a modification factor to the Surveyor 3 TV shroud results so that, together with the Apollo data, the resultant cumulative mass flux distribution could be compared with results obtained from satellite-borne detector systems, with which they proved to be in good agreement.
He concluded that the results represent an average micrometeoroid flux as it exists at the present time away from the earth’s gravitational sphere of influence for masses < 10⁻⁷ g. However, he noted that the satellite-borne detector measurements, which represent the current flux of dust, are an order of magnitude higher than the flux supposedly recorded by the lunar microcraters, a record which is interpreted as the ‘prehistoric’ flux. On the other hand, he corrected the Surveyor 3 results to discount the moon’s gravitational effect and bring them into line with the interplanetary dust flux measurements made by satellite-borne detectors. But if the Surveyor 3 results are taken to represent the flux at the lunar surface, then that flux is currently an order of magnitude lower than the flux recorded by the Apollo spacecraft in interplanetary space. In any case, the number of impact craters measured on these respective spacecraft is so small that one wonders how statistically representative these results are. Indeed, given the size of the satellite-borne detector systems, one could argue likewise as to how representative of the vastness of interplanetary space those detector results are.
Figure 6. Cumulative fluxes (numbers of micrometeoroids with mass greater than the given mass which will impact every second on a square metre of exposed surface one astronomical unit from the sun) derived from satellite and lunar microcrater data (adapted from Hughes106).
Others had been noticing this disparity between the lunar microcrater data and the satellite data. For example, Hughes reported that this disparity had been known ‘for many years’.106 His diagram to illustrate this disparity is shown here as Figure 6. He highlighted a number of areas where he saw there were problems in these techniques for measuring micrometeoroid influx. For example, he reported that new evidence suggested that the meteoroid impact velocity was about 5km/sec rather than the 20km/sec that had hitherto been assumed. He suggested that taking this into account would only move the curves in Figure 6 to the right by factors varying with the velocity dependence of microphone response and penetration hole size (for the satellite-borne detectors) and crater diameter (the lunar microcraters), but because these effects are only functions of meteoroid momentum or kinetic energy their use in adjusting the data is still not sufficient to bring the curves in Figure 6 together (that is, to overcome this disparity between the two sets of data). Furthermore, with respect to the lunar microcrater data, Hughes pointed out that two other assumptions, namely, the ratio of the diameter of the microcrater to the diameter of the impacting particle being fairly constant at two, and the density of the particle being 3 g per cm³, needed to be reconsidered in the light of laboratory experiments which had shown that the ratio decreases with particle density and that particle density varies with mass. He suggested that both these factors make the interpretation of microcraters more difficult, but that ‘the main problem’ lies in estimating the time the rocks under consideration have remained exposed on the lunar surface. Indeed, he pointed to the assumption that solar activity has remained constant in the past, the key assumption required for calculation of an exposure age, as ‘the real stumbling block’ - the particle flux could have been lower in the past or the solar-flare flux could have been higher. He suggested that, because laboratory simulation indicates solar-wind sputter erosion is the dominant factor determining microcrater lifetimes, knowing this enables the micrometeoroid influx to be derived by considering only rock surfaces with an equilibrium distribution of microcraters. He concluded that this line of research indicated that the micrometeoroid influx had supposedly increased by a factor of four in the last 100,000 years and that this would account for the disparity between the lunar microcrater data and the satellite data as shown by the separation of the two curves in Figure 6. However, this ‘solution’, according to Hughes, ‘creates the question of why this flux has increased’, a problem which appears to remain unsolved.
In a paper reviewing the lunar microcrater data and the lunar micrometeoroid flux estimates, Hörz et al.107 discuss some key issues that arise from their detailed summary of micrometeoroid fluxes derived by various investigators from lunar sample analyses. First, the directional distribution of micrometeoroids is extremely non-uniform, the meteoroid flux differing by about three orders of magnitude between the direction of the earth’s apex and anti-apex. Since the moon may collect particles greater than 10⁻¹² g predominantly from the apex direction only, fluxes derived from lunar microcrater statistics, they suggest, may have to be increased by as much as a factor of π for comparison with satellite data that were taken in the apex direction. On the other hand, apex-pointing satellite data generally have been corrected upward because of an assumed isotropic flux, so the actual anisotropy has led to an overestimation of the flux, thus making the satellite results seem to represent an upper limit for the flux. Second, the micrometeoroids coming in at the apex direction appear to have an average impact velocity of only 8km/sec, whereas the fluxes calculated from lunar microcraters assume a standard impact velocity of 20km/sec. If corrections are made accordingly, then the projectile mass necessary to produce any given microcrater will increase, and thus the moon-based flux for masses greater than 10⁻¹⁰ g will effectively be enhanced by a factor of approximately 5. Third, particles of mass less than 10⁻¹² g generally appear to have relative velocities of at least 50km/sec, whereas lunar flux curves for these masses are again based on a 20km/sec impact velocity. So again, if appropriate corrections are made the lunar cumulative micrometeoroid flux curve would shift towards smaller masses by a factor of possibly as much as 10. Nevertheless, Hörz et al. conclude that
‘as a consequence the fluxes derived from lunar crater statistics agree within the order of magnitude with direct satellite results if the above uncertainties in velocity and directional distribution are considered.’
Although these comments appeared in a review paper published in 1975, the footnote on the first page signifies that the paper was presented at a scientific meeting in 1973, the same meeting at which three of those investigators also presented another paper in which they made some further pertinent comments. Both there and in a previous paper, Gault, Hörz and Hartung108,109 had presented what they considered was a ‘best’ estimate of the cumulative meteoritic dust flux based on their own interpretation of the most reliable satellite measurements. This ‘best’ estimate they expressed mathematically in the form
N = 9.14 x 10⁻⁶ m^(-1.213)    (m ≥ 10⁻⁷ g)
Figure 7. The micrometeoroid flux measurements from spacecraft experiments which were selected to define the mass-flux distribution (adapted from Gault et al.109) Also shown is the incremental mass flux contained within each decade of m, which sum to approximately 10,000 tonnes per year. Their data sources used are listed in their bibliography.
They commented that the use of two such exponential expressions with the resultant discontinuity is an artificial representation of the flux and is not intended to represent a real discontinuity, being used for mathematical simplicity and for convenience in computational procedures. They also plotted the cumulative flux represented by these two expressions, together with the incremental mass flux in each decade of particle mass, and that plot is reproduced here as Figure 7. Note that their flux curve is based on what they regarded as the most reliable satellite measurements. Note also, as they did, that the fluxes derived from lunar rocks (the microcrater data) ‘are not necessarily directly comparable with the current satellite or photographic meteor data.’110 However, using their cumulative flux curve as depicted in Figure 7, and their histogram plot of incremental mass flux, it is possible to estimate (for example, by adding up each incremental mass flux) the cumulative mass flux, which comes to approximately 2 x 10⁻⁹ g cm⁻² yr⁻¹ or about 10,000 tons per year. This is the same estimate that they noted in their concluding remarks:-
‘We note that the mass of material contributing to any enhancement, which the earth-moon system is currently sweeping up, is of the order of 10¹⁰ g per year.’111
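The equivalence of these figures can be checked directly, assuming the per-unit-area rate is integrated over the earth’s surface area of about 5.1 x 10¹⁸ cm²:

2 x 10⁻⁹ g cm⁻² yr⁻¹ x 5.1 x 10¹⁸ cm² ≈ 1 x 10¹⁰ g yr⁻¹, or about 10,000 tonnes per year.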
Having derived this ‘best’ estimate flux from their mathematical modelling of the ‘most reliable satellite measurements’ their later comments in the same paper seem rather contradictory:-
‘If we follow this line of reasoning, the basic problem then reduces to consideration of the validity of the “best” estimate flux, a question not unfamiliar to the subject of micrometeoroids and a question not without considerable historical controversy. We will note here only that whereas it is plausible to believe that a given set of data from a given satellite may be in error for any number of reasons, we find the degree of correlation between the various spacecraft experiments used to define the “best” flux very convincing, especially when consideration is given to the different techniques employed to detect and measure the flux. Moreover, it must be remembered that the abrasion rates, affected primarily by microgram masses, depend almost exclusively on the satellite data while the rupture times, affected only by milligram masses, depend exclusively on the photographic meteor determinations of masses. It is extremely awkward to explain how these fluxes from two totally different and independent techniques could be so similarly in error. But if, in fact, they are in error then they err by being too high, and the fluxes derived from lunar rocks are a more accurate description of the current near-earth micrometeoroid flux.’ (emphasis theirs)112
One is left wondering how they can on the one hand emphasise the lunar microcrater data as being a more accurate description of the current micrometeoroid flux, when they based their ‘best’ estimate of that flux on the ‘most reliable satellite measurements’. However, their concluding remarks are rather telling. The reason, of course, why the lunar microcrater data is given such emphasis is because it is believed to represent a record of the integrated cumulative flux over the moon’s billions-of-years history, which would at face value appear to be a more statistically reliable estimate than brief point-in-space satellite-borne detector measurements. Nevertheless, they are left with this unresolved discrepancy between the microcrater data and the satellite measurements, as has already been noted. So they explain the microcrater data as representing the ‘prehistoric’ flux, the fluxes derived from the lunar rocks being based on exposure ages derived from solar-flare track density measurements and assumptions regarding solar-flare activity in the past. As for the lunar microcrater data used by Gault et al., they state that the derived fluxes are based on exposure ages in the range 2,500 - 700,000 years, which leaves them with a revealing enigma. If the current flux as indicated by the satellite measurements is an order of magnitude higher than the microcrater data representing a ‘prehistoric’ flux, then the flux of meteoritic dust has had to have increased or been enhanced in the recent past. But they have to admit that
‘if these ages are accepted at face value, a factor of 10 enhancement integrated into the long term average limits the onset and duration of enhancement to the past few tens of years’
They note that of course there are uncertainties in both the exposure ages and the magnitude of an enhancement, but the real question is the source of this enhanced flux of particles, a question they leave unanswered and a problem they pose as the subject for future investigation. On the other hand, if the exposure ages were not accepted, being too long, then the microcrater data could easily be reconciled with the ‘more reliable satellite measurements’.
Only two other micrometeoroid and meteor influx measuring techniques appear to have been tried. One of these was the Apollo 17 Lunar Ejecta and Micrometeorite Experiment, a device deployed by the Apollo 17 crew which was specifically designed to detect micrometeorites.113 It consisted of a box containing monitoring equipment with its outside cover being sensitive to impacting dust particles. Evidently, it was capable not only of counting dust particles, but also of measuring their masses and velocities, the objective being to establish some firm limits on the numbers of microparticles in a given size range which strike the lunar surface every year. However, the results do not seem to have added to the large database already established by microcrater investigations.
The other direct measurement technique used was the Passive Seismic Experiment in which a seismograph was deployed by the Apollo astronauts and left to register subsequent impact events.114 In this case, however, the particle sizes and masses were in the gram to kilogram range of meteorites that impacted the moon’s surface with sufficient force to cause the vibrations to be recorded by the seismograph. Between 70 and 150 meteorite impacts per year were recorded, with masses in the range 100g to 1,000 kg, implying a flux rate of
log N = -1.62 - 1.16 log m,
where N is the number of bodies that impact the lunar surface per square kilometre per year, with masses greater than m grams.115 This flux works out to be about one order of magnitude less than the average integrated flux from microcrater data. However, the data collected by this experiment have been used to cover that particle mass range in the development of cumulative flux curves (for example, see Figure 2 again) and the resultant cumulative mass flux estimates.
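For readers who wish to check particular values, this relation is easy to evaluate directly. The short sketch below (Python; illustrative only, not part of the published experiment report) simply computes N for a few masses spanning the 100 g to 1,000 kg range quoted above:

```python
import math

def cumulative_flux(mass_grams):
    """Bodies per square kilometre per year with mass greater than mass_grams,
    from the Passive Seismic Experiment relation log N = -1.62 - 1.16 log m."""
    return 10 ** (-1.62 - 1.16 * math.log10(mass_grams))

# Evaluate over the reported mass range of 100 g to 1,000 kg (10^6 g).
for m in (1.0e2, 1.0e3, 1.0e6):
    print(f"m > {m:.0e} g : N = {cumulative_flux(m):.1e} per km^2 per yr")
```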
Figure 8. Constraints on the flux of micrometeoroids and larger objects according to a variety of independent lunar studies (adapted from Hörz et al.107)
Hörz et al. summarised some of the basic constraints derived from a variety of independent lunar studies on the lunar flux of micrometeoroids and larger objects.116 They also plotted the broad range of cumulative flux curves that were bounded by these constraints (see Figure 8). Included are the results of the Passive Seismic Experiment and the direct measurements of micrometeoroids encountered by spacecraft windows. They suggested that an upper limit on the flux can be derived from the mare cratering rate and from erosion rates on lunar rocks and other cratering data. Likewise, the negative findings on the Surveyor 3 camera lens and the perfect preservation of the footpad print of the Surveyor 3 landing gear (both referred to above) also define an upper limit. On the other hand, the lower limit results from the study of solar and galactic radiation tracks in lunar soils, where it is believed the regolith has been reworked only by micrometeorites, so because of presumed old undisturbed residence times the flux could not have been significantly lower than that indicated. The ‘geochemical’ evidence is also based on studies of the lunar soils, where the abundances of trace elements are indicative of the type and amount of meteoritic contamination. Hörz et al. suggest that, strictly, only the passive seismometer, the Apollo windows and the mare craters yield a cumulative mass distribution. All other parameters are either a bulk measure of a meteoroid mass or energy, the corresponding ‘flux’ being calculated via the differential mass-distribution obtained from lunar microcrater investigations (‘lunar rocks’ on Figure 8). Thus the corresponding arrows on Figure 8 may be shifted anywhere along the lines defining the ‘upper’ and ‘lower’ limits. On the other hand, they point out that the Surveyor 3 camera lens and footpad analyses define points only.
|Estimator|Technique|Influx Estimate (tons per year)|
| |Calculated from estimates of influx to the earth|4,000|
|Keays et al.|Geochemistry of lunar soil and rocks|15,200|
|Ganapathy et al.|Geochemistry of lunar soil and rocks|19,900|
| |Calculated from satellite, radar data|10,450|
|Nazarova et al.|Lunar orbit satellite data|8,000 - 9,000|
|by comparison with Hughes|Calculated from satellite, radar data|(4,000 - 15,000)|
|Gault et al.|Combination of lunar microcrater and satellite data|10,000|
|Table 4. Summary of the lunar meteoritic dust influx estimates.|
Table 4 summarises the different lunar meteoritic dust estimates. It is difficult to estimate a cumulative mass flux from Hörz et al.’s diagram showing the basic constraints for the flux of micrometeoroids and larger objects derived from independent lunar studies (see Figure 8), because the units on the cumulative flux axis are markedly different from those on the same axis of Gault et al.’s cumulative flux and cumulative mass diagram, from which they estimated a lunar meteoritic dust influx of about 10,000 tons per year. The Hörz et al. basic constraints diagram seems to have been constructed in part from the previous figure in their paper, which does, however, include some of the microcrater data used by Gault et al. in their diagram (Figure 7 here), the diagram from which the cumulative mass flux calculation gave a flux estimate of 10,000 tons per year. Assuming then that the differences in the units used on the two cumulative flux diagrams (Figures 7 and 8 here) are merely a matter of the relative numbers on the two log scales, the Gault et al. cumulative flux curve should fall within the band between the upper and lower limits, that is, within the basic constraints, of Hörz et al.’s lunar cumulative flux summary plot (Figure 8 here). Thus a flux estimate from Hörz et al.’s broad lunar cumulative flux curve would still probably centre around the 10,000 tons per year estimate of Gault et al.
In conclusion, therefore, on balance the evidence points to a lunar meteoritic dust influx figure of around 10,000 tons per year. This seems to be a reasonable, approximate estimate that can be derived from the work of Hörz et al., who place constraints on the lunar cumulative flux by carefully drawing on a wide range of data from various techniques. Even so, as we have seen, Gault et al. question some of the underlying assumptions of the major measurement techniques from which they drew their data - in particular, the lunar microcrater data and the satellite measurement data. Like the ‘geochemical’ estimates, the microcrater data depends on uniformitarian age assumptions, including the solar-flare rate, and in common with the satellite data, uniformitarian assumptions regarding the continuing level of dust in interplanetary space and as influx to the moon. Claims are made about variations in the cumulative dust influx in the past, but these also depend upon uniformitarian age assumptions and thus the argument could be deemed circular. Nevertheless, questions of sampling statistics and representativeness aside, the figure of approximately 10,000 tons per year has been stoutly defended in the literature based primarily on present-day satellite-borne detector measurements.
Finally, one is left rather perplexed by the estimate of the moon’s accumulation rate of about 500 tons per year made by Van Till et al.117 In their treatment of the ‘moon dust controversy’, they are rather scathing in their comments about creationists and their handling of the available data in the literature. For example, they state:
‘The failure to take into account the published data pertinent to the topic being discussed is a clear failure to live up to the codes of thoroughness and integrity that ought to characterize professional science’118
‘The continuing publication of those claims by young- earth advocates constitutes an intolerable violation of the standards of professional integrity that should characterize the work of natural scientists’119
Having been prepared to make such scathing comments, one would have expected that Van Till and his colleagues would have been more careful with their own handling of the scientific literature that they purport to have carefully scanned. Not so, because they failed to check their own calculation of 500 tons per year for lunar dust influx with those estimates that we have seen in the same literature which were based on some of the same satellite measurements that Van Till et al. did consult, plus the microcrater data which they didn’t. But that is not all - they failed to check the factors they used for calculating their lunar accumulation rate from the terrestrial figure they had established from the literature. If they had consulted, for example, Dohnanyi, as we have already seen, they would have realised that they only needed to use a focusing factor of two, the moon’s smaller surface area apparently being largely irrelevant. So much for lack of thoroughness! Had they surveyed the literature thoroughly, then they would have to agree with the conclusion here that the dust influx to the moon is approximately 10,000 tons per year.
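To make the arithmetic behind this criticism explicit, the sketch below contrasts the two conversion procedures. The terrestrial figure of 16,000 tons per year is only a placeholder taken from the middle of the 10,000 - 20,000 tons per year range discussed earlier, the surface areas are rounded standard values, and the focusing factor of two follows the Dohnanyi-style correction referred to above; the sketch illustrates the kind of calculation involved, not the exact figures Van Till et al. used.

```python
# Illustrative comparison of two ways of converting a terrestrial dust influx
# figure to a lunar one. The 16,000 tons/yr input is a placeholder value
# (mid-range of the 10,000-20,000 tons/yr terrestrial estimates), not a figure
# taken from Van Till et al.
EARTH_INFLUX_TONS_PER_YR = 16_000.0
AREA_EARTH_KM2 = 5.1e8    # rounded surface area of the earth
AREA_MOON_KM2 = 3.8e7     # rounded surface area of the moon
FOCUSING_FACTOR = 2.0     # earth/moon gravitational focusing ratio

# Scaling by relative surface area as well as the focusing factor yields a
# figure of the same order as Van Till et al.'s 500 tons/yr:
area_scaled = EARTH_INFLUX_TONS_PER_YR * (AREA_MOON_KM2 / AREA_EARTH_KM2) / FOCUSING_FACTOR

# Applying only the focusing factor, as argued above, yields a figure
# comparable to the satellite-based lunar estimates in Table 4:
focusing_only = EARTH_INFLUX_TONS_PER_YR / FOCUSING_FACTOR

print(f"area-scaled estimate  : {area_scaled:,.0f} tons/yr")   # about 600
print(f"focusing-only estimate: {focusing_only:,.0f} tons/yr") # about 8,000
```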
Pre-Apollo Lunar Dust Thickness Estimates
The second major question to be addressed is whether NASA really expected to find a thick dust layer on the moon when their astronauts landed on July 20, 1969. Many have asserted that, because of meteoritic dust influx estimates made by Pettersson and others prior to the Apollo moon landings, NASA was cautious in case there really was a thick dust layer into which their lunar lander and astronauts might sink.
Asimov is certainly one authority from that time who is often quoted. Using Pettersson’s estimate of 14,300,000 tons of dust per year, Asimov made his own dust-on-the-moon calculation and commented:
‘But what about the moon? It travels through space with us and although it is smaller and has a weaker gravity, it, too, should sweep up a respectable quantity of micrometeors.
To be sure, the moon has no atmosphere to friction the micrometeors to dust, but the act of striking the moon’s surface should develop a large enough amount of heat to do the job.
Now it is already known, from a variety of evidence, that the moon (or at least the level lowlands) is covered with a layer of dust. No one, however, knows for sure how thick this dust may be.
It strikes me that if this dust is the dust of falling micrometeors, the thickness may be great. On the moon there are no oceans to swallow the dust, or winds to disturb it, or life forms to mess it up generally one way or another. The dust that forms must just lie there, and if the moon gets anything like the earth’s supply, it could be dozens of feet thick.
In fact, the dust that strikes craters quite probably rolls down hill and collects at the bottom, forming “drifts” that could be fifty feet deep, or more. Why not?
I get a picture, therefore, of the first spaceship, picking out a nice level place for landing purposes coming slowly downward tail-first … and sinking majestically out of sight.’120
Asimov certainly wasn’t the first to speculate about the thickness of dust on the moon. As early as 1897 Peal121 was speculating on how thick the dust might be on the moon given that ‘it is well known that on our earth there is a considerable fall of meteoric dust.’ Nevertheless, he clearly expected only ‘an exceedingly thin coating’ of dust. Several estimates of the rate at which meteorites fall to earth were published between 1930 and 1950, all based on visual observations of meteors and meteorite falls. Those estimates ranged from 26 metric tons per year to 45,000 tons per year.122 In 1956 Öpik123 estimated 25,000 tons per year of dust falling to the earth, the same year Watson124 estimated a total accumulation rate of between 300,000 and 3 million tons per year, and in 1959 Whipple125 estimated 700,000 tons per year.
However, it wasn’t just the matter of meteoritic dust falling to the lunar surface that concerned astronomers in their efforts to estimate the thickness of dust on the lunar surface, since the second source of pulverised material on the moon is the erosion of exposed rocks by various processes. The lunar craters are of course one of the most striking features of the moon and initially astronomers thought that volcanic activity was responsible for them, but by about 1950 most investigators were convinced that meteorite impact was the major mechanism involved.126 Such impacts pulverise large amounts of rock and scatter fragments over the lunar surface. Astronomers in the 1950s agreed that the moon’s surface was probably covered with a layer of pulverised material via this process, because radar studies were consistent with the conclusion that the lunar surface was made of fine particles, but there were no good ways to estimate its actual thickness.
Yet another contributing source to the dust layer on the moon was suggested by Lyttleton in 1956.127 He suggested that since there is no atmosphere on the moon, the moon’s surface is exposed to direct radiation, so that ultraviolet light and x-rays from the sun could slowly erode the surface of exposed lunar rocks and reduce them to dust. Once formed, he envisaged that the dust particles might be kept in motion and so slowly ‘flow’ to lower elevations on the lunar surface where they would accumulate to form a layer of dust which he suggested might be ‘several miles deep’. Lyttleton wasn’t alone, since the main proponent of the thick dust view in British scientific circles was the Royal Greenwich Observatory astronomer Thomas Gold, who also suggested that this loose dust covering the lunar surface could present a serious hazard to any spacecraft landing on the moon.128 Whipple, on the other hand, argued that the dust layer would be firm and compact so that humans and vehicles would have no trouble landing on and moving across the moon’s surface.129 Another British astronomer, Moore, took note of Gold’s theory that the lunar seas ‘were covered with layers of dust many kilometres deep’ but flatly rejected it. He commented:
‘The disagreements are certainly very marked. At one end of the scale we have Gold and his supporters, who believe in a dusty Moon covered in places to a great depth; at the other, people such as myself, who incline to the view that the dust can be no more than a few centimetres deep at most. The only way to clear the matter up once and for all is to send a rocket to find out’130
So it is true that some astronomers expected to find a thick dust layer, but this was by no means universally supported in the astronomical community. The Russians too were naturally interested in this question at this time because of their involvement in the ‘space race’, but they also had not reached a consensus on this question of the lunar dust. Sharonov,131 for example, discussed Gold’s theory and the arguments for and against a thick dust layer, admitting that ‘this theory has become the object of animated discussion.’ Nevertheless, he noted that the ‘majority of selenologists’ favoured the plains of the lunar ‘seas’ (maria) being layers of solidified lavas with minimal dust cover.
Research in the Early 1960s
The lunar dust question was also on the agenda of the December 1960 Symposium number 14 of the International Astronomical Union held at the Pulkovo Observatory near Leningrad. Green summed up the arguments as follows:
‘Polarization studies by Wright verified that the surface of the lunar maria is covered with dust. However, various estimates of the depth of this dust layer have been proposed. In a model based on the radioastronomy techniques of Dicke and Beringer and others, a thin dust layer is assumed. Whipple assumes the covering to be less than a few meters thick.
On the other hand, Gold, Gilvarry, and Wesselink favor a very thick dust layer. … Because no polar homogenization of lunar surface details can be demonstrated, however, the concept of a thin dust layer appears more reasonable. … Thin dust layers, thickening in topographic basins near post-mare craters, are predicted for mare areas’132
In a 1961 monograph on the lunar surface, Fielder discussed the dust question in some detail, citing many of those who had been involved in the controversy. Having discussed the lunar mountains where he said ‘there may be frequent pockets of dust trapped in declivities’ he concluded that the mean dust cover over the mountains would only be a millimetre or so.133 But then he went on to say,
‘No measurements made so far refer purely to marebase materials. Thus, no estimates of the composition of maria have direct experimental backing. This is unfortunate, because the interesting question “How deep is the dust in the lunar seas?” remains unanswered.’
In 1964 a collection of research papers was published in a monograph entitled The Lunar Surface Layer, and the consensus amongst the contributing authors was that there was not a thick dust layer on the moon’s surface. For example, in the introduction, Kopal stated that
‘this layer of loose dust must extend down to a depth of at least several centimeters, and probably a foot or so; but how much deeper it may be in certain places remains largely conjectural’134
In a paper on ‘Dust Bombardment on the Lunar Surface’, McCracken and Dubin undertook a comprehensive review of the subject, including the work of Öpik and Whipple, plus many others who had since been investigating the meteoritic dust influx to the earth and moon, but concluded that
‘The available data on the fluxes of interplanetary dust particles with masses less than 10^-4 gm show that the material accreted by the moon during the past 4.5 billion years amounts to approximately 1 gm/cm^2 if the flux has remained fairly constant’135
(Note that this statement is based on the uniformitarian age constraints for the moon.) Thus they went on to say that
‘The lunar surface layer thus formed would, therefore, consist of a mixture of lunar material and interplanetary material (primarily of cometary origin) from 10cm to 1m thick. The low value for the accretion rate for the small particles is not adequate to produce large-scale dust erosion or to form deep layers of dust on the moon. …’.136
In another paper, Salisbury and Smalley state in their abstract:
‘It is concluded that the lunar surface is covered with a layer of rubble of highly variable thickness and block size. The rubble in turn is mantled with a layer of highly porous dust which is thin over topographic highs, but thick in depressions. The dust has a complex surface and significant, but not strong, coherence’137
In their conclusions they made a number of predictions.
‘Thus, the relief of the coarse rubble layer expected in the highlands should be largely obliterated by a mantle of fine dust, no more than a few centimeters thick over near-level areas, but meters thick in steep- walled depressions. …The lunar dust layer should provide no significant difficulty for the design of vehicles and space suits. …’138
Expressing the opposing view was Hapke, who stated that
‘recent analyses of the thermal component of the lunar radiation indicate that large areas of the moon may be covered to depths of many meters by a substance which is ten times less dense than rock. …Such deep layers of dust would be in accord with the suggestion of Gold’139
He went on:
‘Thus, if the radio-thermal analyses are correct, the possibility of large areas of the lunar surface being covered with thick deposits of dust must be given serious consideration.’140
However, the following year Hapke reported on research that had been sponsored by NASA, at a symposium on the nature of the lunar surface, and appeared to be more cautious on the dust question. In the proceedings he wrote:
‘I believe that the optical evidence gives very strong indications that the lunar surface is covered with a layer of fine dust of unknown thickness’141
There is no question that NASA was concerned about the presence of dust on the moon’s surface and its thickness. That is why they sponsored intensive research efforts in the 1960s on the questions of the lunar surface and the rate of meteoritic dust influx to the earth and the moon. In order to answer the latter question, NASA had begun sending up rockets and satellites to collect dust particles and to measure their flux in near-earth space. Results were reported at symposia, such as that which was held in August 1965 at Cambridge, Massachusetts, jointly sponsored by NASA and the Smithsonian Institution, the proceedings of which were published in 1967.142
A number of creationist authors have referred to this proceedings volume in support of the standard creationist argument that NASA scientists had found a lot of dust in space which confirmed the earlier suggestions of a high dust influx rate to the moon and thus a thick lunar surface layer of dust that would be a danger to any landing spacecraft. Slusher, for example, reported that he had been involved in an intensive review of NASA data on the matter and found ‘that radar, rocket, and satellite data published in 1976 by NASA and the Smithsonian Institution show that a tremendous amount of cosmic dust is present in the space around the earth and moon.’143
(Note that the date of publication was incorrectly reported as 1976, when it in fact is the 1967 volume just referred to above.) Similarly, Calais references this same 1967 proceedings volume and says of it,
‘NASA has published data collected by orbiting satellites which confirm a vast amount of cosmic dust reaching the vicinity of the earth-moon system’144,145
Both these assertions, however, are far from correct, since the reports published in that proceedings volume contain results of measurements taken by detectors on board spacecraft such as Explorer XVI, Explorer XXIII, Pegasus I and Pegasus II, as well as references to the work on radio meteors by Elford and cumulative flux curves incorporating the work of people like Hawkins, Upton and Elsässer. These same satellite results and same investigators’ contributions to cumulative flux curves appear in the 1970s papers of investigators whose cumulative flux curves have been reproduced here as Figures 3, 5 and 7, all of which support the 10,000 - 20,000 tons per year and approximately 10,000 tons per year estimates for the meteoritic dust influx to the earth and moon respectively - not the ‘tremendous’ and ‘vast’ amounts of dust incorrectly inferred from this proceedings volume by Slusher and Calais.
Pre-Apollo Moon Landings
The next stage in the NASA effort was to begin to directly investigate the lunar surface as a prelude to an actual manned landing. So seven Ranger spacecraft were sent up to transmit television pictures back to earth as they plummeted toward crash landings on selected flat regions near the lunar equator.146 The last three succeeded spectacularly, in 1964 and 1965, sending back thousands of detailed lunar scenes, thus increasing a thousand-fold our ability to see detail. After the first high-resolution pictures of the lunar surface were transmitted by television from the Ranger VII spacecraft in 1964, Shoemaker147 concluded that the entire lunar surface was blanketed by a layer of pulverised ejecta caused by repeated impacts and that this ejecta would range from boulder-sized rocks to finely-ground dust. After the remaining Ranger crash-landings, the Ranger investigators were agreed that a debris layer existed, although interpretations varied from virtually bare rock with only a few centimetres of debris (Kuiper, Strom and Le Poole) through to estimates of a layer from a few to tens of metres deep (Shoemaker).148 However, it can’t be implied, as some have done,149 that Shoemaker was referring to a dust layer so thick and unstable that it could swallow up a landing spacecraft. After all, the consolidation of dust and boulders sufficient to support a load has nothing to do with a layer’s thickness. In any case, Shoemaker was describing a surface layer composed of debris from meteorite impacts, the dust produced being from lunar rocks and not from falling meteoritic dust.
But still the NASA planners wanted to dispel any lingering doubts before committing astronauts to a manned spacecraft landing on the lunar surface, so the soft-landing Surveyor series of spacecraft were designed and built. However, the Russians just beat the Americans when they achieved the first lunar soft-landing with their Luna 9 spacecraft. Nevertheless, the first American Surveyor spacecraft successfully achieved a soft-landing in mid-1966 and returned over 11,000 splendid photographs, which showed the moon’s surface in much greater detail than ever before.150 Between then and January 1968 four other Surveyor spacecraft were successfully landed on the lunar surface and the pictures obtained were quite remarkable in their detail and high resolution, the last in the series (Surveyor 7) returning 21,000 photographs as well as a vast amount of scientific data. But more importantly,
‘as each spindly, spraddle-legged craft dropped gingerly to the surface, its speed largely negated by retrorockets, its three footpads sank no more than an inch or two into the soft lunar soil. The bearing strength of the surface measured as much as five to ten pounds per square inch, ample for either astronaut or landing spacecraft’151
Two of the Surveyors carried a soil mechanics surface sampler which was used to test the soil and any rock fragments within reach. All these tests and observations gave a consistent picture of the lunar soil. As Pasachoff noted:
‘It was only the soft landing of the Soviet Luna and American Surveyor spacecraft on the lunar surface in 1966 and the photographs they sent back that settled the argument over the strength of the lunar surface; the Surveyor perched on the surface without sinking in more than a few centimeters’152
Moore concurred, with the statement that
‘up to 1966 the theory of deep dust-drifts was still taken seriously in the United States and there was considerable relief when the soft-landing of Luna 9 showed it to be wrong’153
Referring to Gold’s deep-dust theory of 1955, Moore went on to say that although this theory had gained a considerable degree of respectability, with the successful soft-landing of Luna 9 in 1966 ‘it was finally discarded.’154 So it was in mid-1966, when Surveyor 1 soft-landed on the moon three years before Apollo 11, that the long debate over the lunar surface dust layer was finally settled, and NASA officials then knew exactly how much dust there was on the surface and that it was capable of supporting spacecraft and men.
Since this is the case, creationists cannot say or imply, as some have,155-160 that most astronomers and scientists expected a deep dust layer. Some of course did, but it is unfair if creationists only selectively refer to those few scientists who predicted a deep dust layer and ignore the majority of scientists who on equally scientific grounds had predicted only a thin dust layer. The fact that astronomy textbooks and monographs acknowledge that there was a theory about deep dust on the moon,161,162 as they should if they intend to reflect the history of the development of thought in lunar science, cannot be used to bolster a lop-sided presentation of the debate amongst scientists at the time over the dust question, particularly as these same textbooks and monographs also indicate, as has already been quoted, that the dust question was settled by the Luna and Surveyor soft-landings in 1966. Nor should creationists refer to papers like that of Whipple,163 who wrote of a ‘dust cloud’ around the earth, as if that were representative of the views at the time of all astronomers. Whipple’s views were easily dismissed by his colleagues because of subsequent evidence. Indeed, Whipple did not continue promoting his claim in subsequent papers, a clear indication that he had either withdrawn it or been silenced by the overwhelming response of the scientific community with evidence against it, or both.
The Apollo Lunar Landing
Two further matters also need to be dealt with. First, there is the assertion that NASA built the Apollo lunar lander with large footpads because they were unsure about the dust and the safety of their spacecraft. Such a claim is inappropriate given the success of the Surveyor soft-landings, the Apollo lunar lander having footpads which, relative to the size of the spacecraft, were proportionally similar to those of the Surveyors. After all, it stands to reason that since the design of the Surveyor spacecraft worked so well and survived landing on the lunar surface, the same basic design should be followed in the Apollo lunar lander.
As for what Armstrong and Aldrin found on the lunar surface, all are agreed that they found a thin dust layer. The transcript of Armstrong’s words as he stepped onto the moon is instructive:
‘I am at the foot of the ladder. The LM [lunar module ] footpads are only depressed in the surface about one or two inches, although the surface appears to be very, very fine grained, as you get close to it. It is almost like a powder. Now and then it is very fine. I am going to step off the LM now. That is one small step for man, one giant leap for mankind’164
Moments later while taking his first steps on the lunar surface, he noted:
‘The surface is fine and powdery. I can - I can pick it up loosely with my toe. It does adhere in fine layers like powdered charcoal to the sole and sides of my boots. I only go in a small fraction of an inch, maybe an eighth of an inch, but I can see the footprints of my boots and the treads in the fine sandy particles.‘
And a little later, while picking up samples of rocks and fine material, he said:
‘This is very interesting. It is a very soft surface, but here and there where I plug with the contingency sample collector, I run into a very hard surface, but it appears to be very cohesive material of the same sort. I will try to get a rock in here. Here’s a couple’165
So firm was the ground that Armstrong and Aldrin had great difficulty planting the American flag into the rocky and virtually dust-free lunar surface.
The fact that no further comments were made about the lunar dust by NASA or other scientists has been taken by some166-168 to represent a conspiracy of silence, hoping that a supposed unexplained problem would go away. There is a perfectly good reason why there was silence - three years earlier the dust issue had been settled, and Armstrong and Aldrin only confirmed what scientists already knew about the thin dust layer on the moon. So because it wasn’t a problem just before the Apollo 11 landing, there was no need for any talk about it to continue after the successful exploration of the lunar surface. Armstrong himself may have been a little concerned about the consistency and strength of the lunar surface as he was about to step onto it, as he appears to have admitted in subsequent interviews,169 but then he was the one on the spot and about to do it, so it is hardly surprising that he would have been concerned about the dust, along with lots of other related issues.
Finally, there is the testimony of Dr William Overn.170,171 Because he was working at the time for the Univac Division of Sperry Rand on the television sub-system for the Mariner IV spacecraft, he sometimes had exchanges with the men at the Jet Propulsion Laboratory (JPL) who were working on the Apollo program. Evidently those he spoke to were assigned to the Ranger spacecraft missions which, as we have seen, were designed to find out what the lunar surface really was like; in other words, to investigate among other things whether there was a thin or thick dust layer on the lunar surface. In Overn’s own words:
‘I simply told them that they should expect to find less than 10,000 years’ worth of dust when they got there. This was based on my creationist belief that the moon is young. The situation got so tense it was suggested I bet them a large amount of money about the dust. … However, when the Surveyor spacecraft later landed on the moon and discovered there was virtually no dust, that wasn’t good enough for these people to pay off their bet. They said the first landing might have been a fluke in a low dust area! So we waited until ... astronauts actually landed on the moon. …’172
Neither the validity of this story nor Overn’s integrity is in question. However, it should be noted that the bet Overn made with the JPL scientists was entered into at a time when there was still much speculation about the lunar surface, the Ranger spacecraft just having been crash-landed on the moon and the Surveyor soft-landings yet to settle the dust issue. Furthermore, since these scientists involved with Overn were still apparently hesitant after the Surveyor missions, it suggests that they may not have been well acquainted with NASA’s other efforts, particularly via satellite measurements, to resolve the dust question, and that they were not ‘rubbing shoulders with’ those scientists who were at the forefront of these investigations which culminated in the Surveyor soft-landings settling the speculations over the dust. Had they been more informed, they would not have entered into the wager with Overn, nor for that matter would they have seemingly felt embarrassed by the small amount of dust found by Armstrong and Aldrin, and thus conceded defeat in the wager. The fact remains that the perceived problem of what astronauts might face on the lunar surface was settled by NASA in 1966 by the Surveyor soft-landings.
Moon Dust and the Moon’s Age
The final question to be resolved is: now that we know how much meteoritic dust falls to the moon’s surface each year, what does our current knowledge of the lunar surface layer tell us about the moon’s age? For example, what period of time is represented by the actual layer of dust found on the moon? On the one hand creationists have been using the earlier large dust influx figures to support a young age of the moon, while on the other hand evolutionists are satisfied that the small amount of dust on the moon supports their billions-of-years moon age.
The Lunar Regolith
To begin with, what makes up the lunar surface and how thick is it? The surface layer of pulverised material on the moon is now, after on-site investigations by the Apollo astronauts, not called moon dust, but lunar regolith, and the fine materials in it are sometimes referred to as the lunar soil. The regolith is usually several metres thick and extends as a continuous layer of debris draped over the entire lunar bedrock surface. The average thickness of the regolith on the maria is 4-5m, while the highlands regolith is about twice as thick, averaging about 10m.173 The seismic properties of the regolith appear to be uniform on the highlands and maria alike, but the seismic signals indicate that the regolith consists of discrete layers, rather than being simply ‘compacted dust’. The top surface is very loose due to stirring by micrometeorites, but the lower depths below about 20cm are strongly compacted, probably due to shaking during impacts.
The complex layered nature of the regolith has been studied in drill-core samples brought back by the Apollo missions. These have clearly revealed that the regolith is not a homogeneous pile of rubble. Rather, it is a layered succession of ejecta blankets.174 An apparent paradox is that the regolith is both well mixed on a small scale and also displays a layered structure. The Apollo 15 deep core tube, for example, was 2.42 metres long, but contained 42 major textural units from a few millimetres to 13cm in thickness. It has been found that there is usually no correlation between layers in adjacent core tubes, but the individual layers are well mixed. This paradox has been resolved by recognising that the regolith is continuously ‘gardened’ by large and small meteorites and micrometeorites. Each impact inverts much of the microstratigraphy and produces layers of ejecta, some new and some remnants of older layers. The new surface layers are stirred by micrometeorites, but deeper stirring is rarer. The result is that a complex layered regolith is built up, but is in a continual state of flux, particles now at the surface potentially being buried deeply by future impacts. In this way, the regolith is turned over, like a heavily bombarded battlefield. However, it appears to only be the upper 0.5 - 1 mm of the lunar surface that is subjected to intense churning and mixing by the meteoritic influx at the present time. Nevertheless, as a whole, the regolith is a primary mixing layer of lunar materials from all points on the moon with the incoming meteoritic influx, both meteorites proper and dust.
Figure 9. Processes of erosion on the lunar surface today appear to be extremely slow compared with the processes on the earth. Bombardment by micrometeorites is believed to be the main cause. A large meteorite strikes the surface very rarely, excavating bedrock and ejecting it over thousands of square kilometres, sometimes as long rays of material radiating from the resulting crater. Much of the meteorite itself is vaporized on impact, and larger fragments of the debris produce secondary craters. Such an event at a mare site pulverizes and churns the rubble and dust that form the regolith. Accompanying base surges of hot clouds of dust, gas and shock waves might compact the dust into breccias. Cosmic rays continually bombard the surface. During the lunar day ions from the solar wind and unshielded solar radiation impinge on the surface. (Adapted from Eglinton et al.176)
Lunar Surface Processes
So apart from the influx of the meteoritic dust, what other processes are active on the moon’s surface, particularly as there is no atmosphere or water on the moon to weather and erode rocks in the same way as they do on earth? According to Ashworth and McDonnell,
‘Three major processes continuously affecting the surface of the moon are meteor impact, solar wind sputtering, and thermal erosion’175
The relative contributions of these processes towards the erosion of the lunar surface depend upon various factors, such as the dimensions and composition of impacting bodies and the rate of meteoritic impacts and dust influx. These processes of erosion on the lunar surface are of course extremely slow compared with erosion processes on the earth. Figure 9, after Eglinton et al.,176 attempts to illustrate these lunar surface erosion processes.
Of these erosion processes the most important is obviously impact erosion. Since there is no atmosphere on the moon, the incoming meteoritic dust does not just gently drift down to the lunar surface, but instead strikes at an average velocity that has been estimated to be between 13 and 18 km/sec,177 or more recently as 20 km/sec,178 with a maximum reported velocity of 100 km/sec.179 Depending not only on the velocity but also on the mass of the impacting dust particles, more dust is produced as debris.
A number of attempts have been made to quantify the amount of dust-caused erosion of bare lunar rock on the lunar surface. Hörz et al.180 suggested a rate of 0.2-0.4 mm/10^6 yr (or 20-40 x 10^-9 cm/yr) after examination of micrometeorite craters on the surfaces of lunar rock samples brought back by the Apollo astronauts. McDonnell and Ashworth181 discussed the range of erosion rates over the range of particle diameters and the surface area exposed. They thus suggested a rate of 1-3 x 10^-7 cm/yr (or 100-300 x 10^-9 cm/yr), basing this estimate on Apollo moon rocks also, plus studies of the Surveyor 3 camera. They later revised this estimate, concluding that on the scale of tens of metres impact erosion accounts for the removal of some 10^-7 cm/yr (or 100 x 10^-9 cm/yr) of lunar material.182 However, in another paper, Gault et al.183 tabulated calculated abrasion rates for rocks exposed on the lunar surface compared with observed erosion rates as determined from solar-flare particle tracks. Discounting the early satellite data and just averaging the values calculated from the best, more recent satellite data and from lunar rocks gave an erosion rate estimate of 0.28 cm/10^6 yr (or 280 x 10^-9 cm/yr), while the average of the observed erosion rates they found in the literature was 0.03 cm/10^6 yr (or 30 x 10^-9 cm/yr). However, they naturally favoured their own ‘best’ estimate from the satellite data of both the flux and the consequent abrasion rate, the latter being 0.1 cm/10^6 yr (or 100 x 10^-9 cm/yr), a figure identical with that of McDonnell and Ashworth. Gault et al. noted that this was higher, by a factor approaching an order of magnitude, than the ‘consensus’ of the observed values, a discrepancy which mirrors the difference between the meteoritic dust influx estimates derived from the lunar rocks compared with the satellite data.
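Because these rates are quoted in a mixture of units (mm per million years, cm per million years, cm per year), a simple conversion to a common unit may help in comparing them. The sketch below merely restates the figures quoted in the preceding paragraph in units of 10^-9 cm/yr; it adds no new data.

```python
# Restate the quoted impact-erosion rates in a common unit (1e-9 cm per year).
def as_1e9_cm_per_yr(rate_cm, per_years):
    """Convert an erosion rate given as rate_cm of rock per per_years years."""
    return rate_cm / per_years / 1e-9

quoted_rates = [
    ("Horz et al., 0.02-0.04 cm per 1e6 yr",     0.02, 0.04, 1e6),
    ("McDonnell & Ashworth, 1e-7 to 3e-7 cm/yr", 1e-7, 3e-7, 1.0),
    ("Gault et al., calculated, 0.28 cm/1e6 yr", 0.28, 0.28, 1e6),
    ("Gault et al., observed, 0.03 cm/1e6 yr",   0.03, 0.03, 1e6),
    ("Gault et al., 'best', 0.1 cm/1e6 yr",      0.10, 0.10, 1e6),
]
for label, low, high, per in quoted_rates:
    lo, hi = as_1e9_cm_per_yr(low, per), as_1e9_cm_per_yr(high, per)
    print(f"{label}: {lo:.0f} - {hi:.0f} x 1e-9 cm/yr")
```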
These estimates obviously vary from one to another, but 30-100 x 10^-9 cm/yr would seem to represent a ‘middle of the range’ figure. However, this impact erosion rate only applies to bare, exposed rock. As McCracken and Dubin have stated, once a surface dust layer is built up initially from the dust influx and impact erosion, this initial surface dust layer would protect the underlying bedrock surface against continued erosion by dust particle bombardment.184 If continued impact erosion is going to add to the dust and rock fragments in the surface layer and regolith, then what is needed is some mechanism to continually transport dust away from the rock surfaces as it is produced, so as to keep exposing bare rock again for continued impact erosion. Without some active transporting process, exposed rock surfaces on peaks and ridges would be worn away to give a somewhat rounded moonscape (which is what the Apollo astronauts found), and the dust would thus collect in thicker accumulations at the bottoms of slopes. This is illustrated in Figure 9.
So bombardment of the lunar surface by micrometeorites is believed to be the main cause of surface erosion. At the current rate of removal, however, it would take a million years to remove an approximately 1 mm thick skin of rock from the whole lunar surface and convert it to dust. Occasionally a large meteorite strikes the surface (see Figure 9 again), excavating through the dust down into the bedrock and ejecting debris over thousands of square kilometres, sometimes as long rays of material radiating from the resulting crater. Much of the meteorite itself is vaporised on impact, and larger fragments of the debris create secondary craters. Such an event at a mare site pulverises and churns the rubble and dust that forms the regolith.
The solar wind is the next major contributor to lunar surface erosion. The solar wind consists primarily of protons, electrons, and some alpha particles, that are continuously being ejected by the sun. Once again, since the moon has virtually no atmosphere or magnetic field, these particles of the solar wind strike the lunar surface unimpeded at velocities averaging 600 km/sec, knocking individual atoms from rock and dust mineral lattices. Since the major components of the solar wind are H+ (hydrogen) ions, and some He (helium) and other elements, the damage upon impact to the crystalline structure of the rock silicates creates defects and voids that accommodate the gases and other elements which are simultaneously implanted in the rock surface. But individual atoms are also knocked out of the rock surface, and this is called sputtering or sputter erosion. Since the particles in the solar wind strike the lunar surface with such high velocities,
‘one can safely conclude that most of the sputtered atoms have ejection velocities higher than the escape velocity of the moon’185
There would thus appear to be a net erosional mass loss from the moon to space via this sputter erosion.
As for the rate of this erosional loss, Wehner186 suggested a value for the sputter rate of the order of 0.4 angstrom (Å)/yr. However, with the actual measurement of the density of the solar wind particles on the surface of the moon, and lunar rock samples available for analysis, the intensity of the solar wind used in sputter rate calculations was downgraded, and consequently the estimates of the sputter rate itself were revised downwards by an order of magnitude. McDonnell and Ashworth187 estimated an average sputter rate for lunar rocks of about 0.02 Å/yr, which they later revised to 0.02-0.04 Å/yr.188 Further experimental work refined their estimate to 0.043 Å/yr,189 which was reported in Nature by Hughes.190 This figure of 0.043 Å/yr continued to be used and confirmed in subsequent experimental work,191 although Zook192 suggested that the rate may be higher, even as high as 0.08 Å/yr.193 Even so, if this sputter erosion rate continued at this pace in the past, it equates to less than one centimetre of lunar surface lowering in one billion years. This applies not only to solid rock, but to the dust layer itself, which would in fact decrease in thickness in that time, in opposition to the increase in thickness caused by meteoritic dust influx. Thus sputter erosion doesn’t help by adding dust to the lunar surface, and in any case it is such a slow process that the overall effect is minimal.
Yet another potential form of erosion on the lunar surface is thermal erosion, that is, the breakdown of the lunar surface around impact/crater areas due to the marked temperature changes that result from the lunar diurnal cycle. Ashworth and McDonnell194 carried out tests on lunar rocks, submitting them to cycles of changing temperature, but found it ‘impossible to detect any surface changes’. They therefore suggested that thermal erosion is probably ‘not a major force.’ Similarly, McDonnell and Flavill195 conducted further experiments and found that their samples showed no sign of ‘degradation or enhancement’ due to the temperature cycle that they had been subjected to. They reported that
‘the conditions were thermally equivalent to the lunar day-night cycle and we must conclude that on this scale thermal cycling is a very weak erosion mechanism.‘
The only other possible erosion process that has ever been mentioned in the literature was that proposed by Lyttleton196 and Gold.197 They suggested that high-energy ultraviolet and x-rays from the sun would slowly pulverize lunar rock to dust, and over millions of years this would create an enormous thickness of dust on the lunar surface. This was proposed in the 1950s and debated at the time, but since the direct investigations of the moon from the mid-1960s onwards, no further mention of this potential process has appeared in the technical literature, either for the idea or against it. One can only assume that either the idea has been ignored or forgotten, or is simply ineffective in producing any significant erosion, contrary to the suggestions of the original proposers. The latter is probably true, since just as with impact erosion the effect of this radiation erosion would be subject to the critical necessity of a mechanism to clean rock surfaces of the dust produced by the radiation erosion. In any case, even a thin dust layer will more than likely simply absorb the incoming rays, while the fact that there are still exposed rock surfaces on the moon clearly suggests that Lyttleton and Gold’s radiation erosion process has not been effective over the presumed millions of years, else all rock surfaces should long since have been pulverized to dust. Alternately, of course, the fact that there are still exposed rock surfaces on the moon could instead mean that if this radiation erosion process does occur then the moon is quite young.
So how much dust is there on the lunar surface? Because of their apparent negligible or non-existent contribution, it may be safe to ignore thermal, sputter and radiation erosion. This leaves the meteoritic dust influx itself and the dust it generates when it hits bare rock on the lunar surface (impact erosion). However, our primary objective is to determine whether the amount of meteoritic dust in the lunar regolith and surface dust layer, when compared to the current meteoritic dust influx rate, is an accurate indication of the age of the moon itself, and by implication the earth and the solar system also.
Now we concluded earlier that the consensus from all the available evidence, and the estimate techniques employed by different scientists, is that the meteoritic dust influx to the lunar surface is about 10,000 tons per year, or 2 x 10^-9 g cm^-2 yr^-1. Estimates of the density of micrometeorites vary widely, but an average value of 1 g/cm^3 is commonly used. Thus at this apparent rate of dust influx it would take about a billion years for a dust layer a mere 2 cm thick to accumulate over the lunar surface. Now the Apollo astronauts apparently reported a surface dust layer of between less than 1/8 inch (3 mm) and 3 inches (7.6 cm). Thus, if this surface dust layer were composed only of meteoritic dust, then at the current rate of dust influx this surface dust layer would have accumulated over a period of between 150 million years (3 mm) and 3.8 billion years (7.6 cm). Obviously, this line of reasoning cannot be used as an argument for a young age for the moon and therefore the solar system.
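As a cross-check of these figures, the arithmetic can be laid out explicitly. The following is only a sketch of the calculation described above, assuming the 2 x 10^-9 g cm^-2 yr^-1 influx and a dust density of 1 g/cm^3:

```python
# Accumulation times for a pure meteoritic dust layer, assuming an influx of
# 2e-9 g/cm^2/yr and a dust density of 1 g/cm^3 (the values quoted above).
INFLUX_G_PER_CM2_PER_YR = 2e-9
DUST_DENSITY_G_PER_CM3 = 1.0

def years_to_accumulate(thickness_cm):
    """Years needed to build a layer of the given thickness from influx alone."""
    return thickness_cm * DUST_DENSITY_G_PER_CM3 / INFLUX_G_PER_CM2_PER_YR

print(f"2 cm layer      : {years_to_accumulate(2.0):.1e} yr")  # ~1.0e9, one billion years
print(f"0.3 cm (1/8 in) : {years_to_accumulate(0.3):.1e} yr")  # ~1.5e8, 150 million years
print(f"7.6 cm (3 in)   : {years_to_accumulate(7.6):.1e} yr")  # ~3.8e9, 3.8 billion years
```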
However, as we have already seen, below the thin surface dust layer is the lunar regolith, which is up to 5 metres thick across the lunar maria and averages 10 metres thick in the lunar highlands. Evidently, the thin surface dust layer is very loose due to stirring by impacting meteoritic dust (micrometeorites), but the regolith beneath which consists of rock rubble of all sizes down to fines (that are referred to as lunar soil) is strongly compacted. Nevertheless, the regolith appears to be continuously ‘gardened’ by large and small meteorites and micrometeorites, particles now at the surface potentially being buried deeply by future impacts. This of course means then that as the regolith is turned over meteoritic dust particles in the thin surface layer will after some time end up being mixed into the lunar soil in the regolith below. Therefore, also, it cannot be assumed that the thin loose surface layer is entirely composed of meteoritic dust, since lunar soil is also brought up into this loose surface layer by impacts.
However, attempts have been made to estimate the proportion of meteoritic material mixed into the regolith. Taylor198 reported that the meteoritic compositions recognised in the maria soils turn out to be surprisingly uniform at about 1.5% and that the abundance patterns are close to those for primitive unfractionated Type I carbonaceous chondrites. As described earlier, this meteoritic component was identified by analysing for trace elements in the broken-down rocks and soils in the regolith and then assuming that any trace element differences represented the meteoritic material added to the soils. Taylor also adds that the compositions of other meteorites, the ordinary chondrites, the iron meteorites and the stony-irons, do not appear to be present in the lunar regolith, which may have some significance as to the origin of this meteoritic material, most of which is attributed to the influx of micrometeorites. It is unknown what the large crater-forming meteorites contribute to the regolith, but Taylor suggests possibly as much as 10% of the total regolith. Additionally, a further source of exotic elements is the solar wind, which is estimated to contribute between 3% and 4% to the soil. This means that the total contribution to the regolith from extra-lunar sources is around 15%. Thus in a five metre thick regolith over the maria, the thickness of the meteoritic component would be close to 60cm, which at the current estimated meteoritic influx rate would have taken almost 30 billion years to accumulate, a timespan six times the claimed evolutionary age of the moon.
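The 30 billion year figure follows from the same kind of arithmetic. Below is a sketch of the calculation, assuming a 5 m maria regolith of which roughly 12% is meteoritic proper (micrometeorite plus large-crater debris, solar wind excluded), a bulk density of about 1 g/cm^3 and the 2 x 10^-9 g cm^-2 yr^-1 influx rate, as discussed in the text:

```python
# Time to accumulate the meteoritic component of the maria regolith at the
# present influx rate. The 12% fraction, 5 m thickness, ~1 g/cm^3 density and
# 2e-9 g/cm^2/yr influx are the working figures discussed in the text.
REGOLITH_THICKNESS_CM = 500.0
METEORITIC_FRACTION = 0.12
DENSITY_G_PER_CM3 = 1.0
INFLUX_G_PER_CM2_PER_YR = 2e-9

meteoritic_thickness_cm = REGOLITH_THICKNESS_CM * METEORITIC_FRACTION
accumulation_time_yr = (meteoritic_thickness_cm * DENSITY_G_PER_CM3
                        / INFLUX_G_PER_CM2_PER_YR)

print(f"meteoritic component thickness : {meteoritic_thickness_cm:.0f} cm")  # ~60 cm
print(f"time at the current influx rate: {accumulation_time_yr:.1e} yr")     # ~3e10, 30 billion years
```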
The lunar surface is heavily cratered, the largest crater having a diameter of 295 km. The highland areas are much more heavily cratered than the maria, which suggested to early investigators that the lunar highland areas might represent the oldest exposed rocks on the lunar surface. This has been confirmed by radiometric dating of rock samples brought back by the Apollo astronauts, so that a detailed lunar stratigraphy and evolutionary geochronological framework has been constructed. This has led to the conclusion that early in its history the moon suffered intense bombardment from scores of meteorites, so that all highland areas presumed to be older than 3.9 billion years have been found to be saturated with craters 50-100 km in diameter, and beneath the 10 metre-thick regolith is a zone of breccia and fractured bedrock estimated in places to be more than 1 km thick.199
Figure 10. Cratering history of the moon (adapted from Taylor200). An aeon represents a billion years on the evolutionists’ time scale, while the vertical bar represents the error margin in the estimation of the cratering rate at each data point on the curve.
Following suitable calibration, a relative crater chronology has been established, which then allows for the cratering rate through lunar history to be estimated and then plotted, as it is in Figure 10.200 There thus appears to be a general correlation between crater densities across the lunar surface and radioactive ‘age’ dates. However, the crater densities at the various sites cannot be fitted to a straightforward exponential decay curve of meteorites or asteroid populations.201 Instead, at least two separate groups of objects seem to be required. The first is believed to be approximated by the present-day meteoritic flux, while the second is believed to be that responsible for the intense early bombardment claimed to be about four billion years ago. This intense early bombardment recorded by the crater-saturated surface of the lunar highland areas could thus explain the presence of the thicker regolith (up to 10 metres) in those areas.
It follows that this period of intense early bombardment resulted from a very high influx of meteorites and thus meteoritic dust, which should now be recognisable in the regolith. Indeed, Taylor202 lists three types of meteoritic debris in the highlands regolith - the micrometeoritic component, the debris from the large-crater-producing bodies, and the material added during the intense early bombardment. However, the latter has proven difficult to quantify. Again, the use of trace element ratios has enabled six classes of ancient meteoritic components to be identified, but these do not correspond to any of the currently known meteorite classes, whether iron or chondritic. It would appear that this material represents the debris from the large projectiles responsible for the saturation cratering in the lunar highlands during the intense bombardment early in the moon’s history. It is this early intense bombardment, with its associated higher influx rate of meteoritic material, that would account not only for the thicker regolith in the lunar highlands, but also for the approximately 12% meteoritic component in the thinner regolith of the maria, which we calculated (above) would take up to 30 billion years to accumulate at the current meteoritic influx rate. Even though the maria are believed to be younger than the lunar highlands and haven’t suffered the same saturation cratering, the cratering rate curve of Figure 10 suggests that the meteoritic influx rate soon after formation of the maria was still almost 10 times the current influx rate, so that much of the meteoritic component in the regolith could thus have accumulated more rapidly in the early years after the maria’s formation. This then removes the apparent accumulation timespan anomaly for the evolutionists’ timescale, and suggests that the meteoritic component in the maria regolith is still consistent with its presumed 3 billion year age if uniformitarian assumptions are used. This of course is still far from satisfactory for those young earth creationists who believed that uniformitarian assumptions applied to moon dust could be used to deny the evolutionists’ vast age for the moon.
Given that as much as 10% of the maria regolith may have been contributed by the large crater-forming meteorites,203 impact erosion by these large crater-producing meteorites may well have had a significant part in the development of the regolith, including the generation of dust, particularly if the meteorites strike bare lunar rock. Furthermore, any incoming meteorite, or micrometeorite for that matter, creates a crater much bigger than itself,204 and since most impacts are at an oblique angle the resulting secondary cratering may in fact be more important205 in generating even more dust. However, to do so the impacting meteorite or micrometeorite must strike bare exposed rock on the lunar surface. Therefore, if bare rock is to continue to be available at the lunar surface, then there must be some mechanism to move the dust off the rock as quickly as it is generated, coupled with some transport mechanism to carry it and accumulate it in lower areas, such as the maria.
Various suggestions have been made apart from the obvious effect of steep gradients, which in any case would only produce local accumulation. Gold, for example, listed five possibilities,206 but all were highly speculative and remain unverified. More recently, McDonnell207 has proposed that electrostatic charging on dust particle surfaces may cause those particles to levitate across the lunar surface up to 10 or more metres. As they lose their charge they float back to the surface, where they are more likely to settle in a lower area. McDonnell gives no estimate as to how much dust might be moved by this process, and it remains somewhat tentative. In any case, if such transport mechanisms were in operation on the lunar surface, then we would expect the regolith to be thicker over the maria because of their lower elevation. However, the fact is that the regolith is thicker in the highland areas where the presumed early intense bombardment occurred, the impact-generated dust just accumulating locally and not being transported any significant distance.
Having considered the available data, it is inescapably clear that the amount of meteoritic dust on the lunar surface and in the regolith is not at all inconsistent with the present meteoritic dust influx rate to the lunar surface operating over the multi-billion year time framework proposed by evolutionists, but including a higher influx rate in the early history of the moon when intense bombardment occurred, producing many of the craters on the lunar surface. Thus, for the purpose of ‘proving’ a young moon, the meteoritic dust influx as it appears to be currently known is at least two orders of magnitude too low. On the other hand, the dust influx rate has, appropriately enough, not been used by evolutionists to somehow ‘prove’ their multi-billion year timespan for lunar history. (They have recognised some of the problems and uncertainties and so have relied more on their radiometric dating of lunar rocks, coupled with wide-ranging geochemical analyses of rock and soil samples, all within the broad picture of the lunar stratigraphic succession.) The present rate of dust influx does not, of course, disprove a young moon.
Attempted Creationist Responses
Some creationists have tentatively recognised that the moon dust argument has lost its original apparent force. For example, Taylor(Paul)208 follows the usual line of argument employed by other creationists, stating that based on published estimates of the dust influx rate and the evolutionary timescale, many evolutionists expected the astronauts to find a very thick layer of loose dust on the moon, so when they only found a thin layer this implied a young moon. However, Taylor then admits that the case appears not to be as clear cut as some originally thought, particularly because evolutionists can now point to what appear to be more accurate measurements of a smaller dust influx rate compatible with their timescale. Indeed, he says that the evidence for disproving an old age using this particular process is weakened, but that furthermore, the case has been blunted by the discovery of what is said to be meteoritic dust within the regolith. However, like Calais,209,210 Taylor points to the NASA report211 that supposedly indicated a very large amount of cosmic dust in the vicinity of the earth and moon (a claim which cannot be substantiated by a careful reading of the papers published in that report, as we have already seen). He also takes up DeYoung’s comment212 that because all evolutionary theories about the origin of the moon and the solar system predict a much larger amount of incoming dust in the moon’s early years, then a very thick layer of dust would be expected, so it is still missing. Such an argument cannot be sustained by creationists because, as we have seen above, the amount of meteoritic dust that appears to be in the regolith seems to be compatible with the evolutionists’ view that there was a much higher influx rate of meteoritic dust early in the moon’s history at the same time as the so-called ‘early intense bombardment’.
Indeed, from Figure 10 it could be argued that since the cratering rate very early in the moon’s history was more than 300 times today’s cratering rate, then the meteoritic dust influx early in the moon’s history was likewise more than 300 times today’s influx rate. That would then amount to more than 3 million tons of dust per year, but even at that rate it would take a billion years to accumulate more than six metres thickness of meteoritic dust across the lunar surface, no doubt mixed in with a lesser amount of dust and rock debris generated by the large-crater-producing meteorite impacts. However, in that one billion years, Figure 10 shows that the rate of meteoritic dust influx is postulated to have rapidly declined, so that in fact a considerably lesser amount of meteoritic dust and impact debris would have accumulated in that supposed billion years. In other words, the dust in the regolith and the surface layer is still compatible with the evolutionists’ view that there was a higher influx rate early in the moon’s history, so creationists cannot use that to shore up this considerably blunted argument.
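The scaling involved here is linear in both rate and time, so it can be checked with a few lines of arithmetic. The sketch below is not from the original paper; it simply assumes the base figure used later in this section (roughly 2 cm of meteoritic dust per billion years at the current ~10,000 tons per year influx) and scales it by the postulated 300-fold early enhancement.

```python
# Rough illustrative sketch: accumulated dust thickness scaled linearly from an
# assumed base figure of ~2 cm per billion years at ~10,000 tons/yr (the current
# influx rate adopted in this article).

BASE_RATE_TONS_PER_YR = 1.0e4   # assumed current lunar influx rate
BASE_CM_PER_GYR = 2.0           # assumed accumulation at that rate, per billion years

def dust_thickness_cm(influx_tons_per_yr, timespan_gyr):
    """Meteoritic dust thickness (cm) for a given mean influx rate and timespan."""
    return BASE_CM_PER_GYR * (influx_tons_per_yr / BASE_RATE_TONS_PER_YR) * timespan_gyr

# 300x today's rate (~3 million tons/yr) sustained for a full billion years
# gives about 6 m -- the upper bound quoted above. Since Figure 10 has the rate
# declining rapidly, the actual total would be considerably less.
print(dust_thickness_cm(300 * BASE_RATE_TONS_PER_YR, 1.0), "cm")
```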
Coupled with this, it is irrelevant for both Taylor and DeYoung to imply that because evolutionists say that the sun and the planets were formed from an immense cloud of dust which was thus obviously much thicker in the past, that their theory would thus predict a very thick layer of dust. On the contrary, all that is relevant is the postulated dust influx after the moon’s formation, since it is only then that there is a lunar surface available to collect the dust, which we can now investigate along with that lunar surface. So unless there was a substantially greater dust influx after the moon formed than that postulated by the evolutionists (see Figure 10 and our calculations above), then this objection also cannot be used by creationists.
De Young also adds a second objection in order to counter the evolutionists’ case. He maintains that the revised value of a much smaller dust accumulation from space is open to question, and that scientists continue to make major adjustments in estimates of meteors and space dust that fall upon the earth and moon.213 If this is meant to imply that the current dust influx estimate is open to question amongst evolutionists, then it is simply not the case, because there is general agreement that the earlier estimates were gross overestimates. As we have seen, there is much support for the current figure, which is two orders of magnitude lower than many of the earlier estimates. There may be minor adjustments to the current estimate, but certainly not anything major.
While De Young hints at it, Taylor (Ian)214 is quite open in suggesting that a drastic revision of the estimated meteoritic dust influx rate to the moon occurred straight after the Apollo moon landings, when the astronauts’ observations supposedly debunked the earlier gross over-estimates, and that this was done quietly but methodically in some sort of deliberate way. This is simply not so. Taylor insinuates that the Committee on Space Research (COSPAR) was formed to work on drastically downgrading the meteoritic dust influx estimate, and that they did this only based on measurements from indirect techniques such as satellite-borne detectors, visual meteor counts and observations of zodiacal light, rather than dealing directly with the dust itself. That claim does not take into account that these different measurement techniques are all necessary to cover the full range of particle sizes involved, and that much of the data they employed in their work was collected in the 1960s before the Apollo moon landings. Furthermore, that same data had been used in the 1960s to produce dust influx estimates, which were then found to be in agreement with the minor dust layer found by the astronauts subsequently. In other words, the data had already convinced most scientists before the Apollo moon landings that very little dust would be found on the moon, so there is nothing ‘fishy’ about COSPAR’s dust influx estimates just happening to yield the exact amount of dust actually found on the moon’s surface. Furthermore, the COSPAR scientists did not ignore the dust on the moon’s surface, but used lunar rock and soil samples in their work, for example, with the study of lunar microcraters that they regarded as representing a record of the historic meteoritic dust influx. Attempts were also made using trace element geochemistry to identify the quantity of meteoritic dust in the lunar surface layer and the regolith below.
A final suggestion from De Young is that perhaps there actually is a thick lunar dust layer present, but it has been welded into rock by meteorite impacts.215 This is similar and related to an earlier comment about efforts being made to re-evaluate dust accumulation rates and to find a mechanism for lunar dust compaction in order to explain the supposed absence of dust on the lunar surface that would be needed by the evolutionists’ timescale.216 For support, Mutch217 is referred to, but in the cited pages Mutch only talks about the thickness of the regolith and the debris from cratering, the details of which are similar to what has previously been discussed here. As for the view that the thick lunar dust is actually present but has been welded into rock by meteorite impacts, no reference is cited, nor can one be found. Taylor describes a ‘mega-regolith’ in the highland areas218 which is a zone of brecciation, fracturing and rubble more than a kilometre thick that is presumed to have resulted from the intense early bombardment, quite the opposite to the suggestion of meteorite impacts welding dust into rock. Indeed, Mutch,219 Ashworth and McDonnell220 and Taylor221 all refer to turning over of the soil and rubble in the lunar regolith by meteorite and micrometeorite impacts, making the regolith a primary mixing layer of lunar materials that have not been welded into rock. Strong compaction has occurred in the regolith, but this is virtually irrelevant to the issue of the quantity of meteoritic dust on the lunar surface, since that has been estimated using trace element analyses.
Parks222 has likewise argued that the disintegration of meteorites impacting the lunar surface over the evolutionists’ timescale should have produced copious amounts of dust as they fragmented, which should, when added to calculations of the meteoritic dust influx over time, account for dust in the regolith in only a short period of time. However, it has already been pointed out that this debris component in the maria regolith only amounts to 10%, which quantity is also consistent with the evolutionists’ postulated cratering rate over their timescale. He then repeats the argument that there should have been a greater rate of dust influx in the past, given the evolutionary theories for the formation of the bodies in the solar system from dust accretion, but that argument is likewise negated by the evolutionists having postulated an intense early bombardment of the lunar surface with a cratering rate, and thus a dust influx rate, over two orders of magnitude higher than the present (as already discussed above). Finally, he infers that even if the dust influx rate is far less than investigators had originally supposed, it should have contributed much more than the 1.5% component of the 1–2 inch thick layer of loose dust on the lunar surface. The reference cited for this percentage of meteoritic dust in the thin loose dust layer on the lunar surface is Ganapathy et al.223 However, when that paper is checked carefully to see where they obtained their samples from for their analytical work, we find that the four soil samples that were enriched in a number of trace elements of meteoritic origin came from depths of 13–38 cm below the surface, from where they were extracted by a core tube. In other words, they came from the regolith below the 1–2 inch thick layer of loose dust on the surface, and so Parks’ application of this analytical work is not even relevant to his claim. In any case, if one uses the current estimated meteoritic dust influx rate to calculate how much meteoritic dust should be within the lunar surface over the evolutionists’ timescale, one finds the results to be consistent, as has already been shown above.
Parks may have been influenced by Brown, whose personal correspondence he cites. Brown, in his own publication,224 has stated that
‘if the influx of meteoritic dust on the moon has been at just its present rate for the last 4.6 billion years, then the layer of dust should be over 2,000 feet thick’
Furthermore, he indicates that he made these computations based on the data contained in Hughes225 and Taylor.226 This is rather baffling, since Taylor does not commit himself to a meteoritic dust influx rate, but merely refers to the work of others, while Hughes concentrates on lunar microcraters and only indirectly refers to the meteoritic dust influx rate. In any case, as we have already seen, at the currently estimated influx rate of approximately 10,000 tons per year a mere 2 cm thickness of meteoritic dust would accumulate on the lunar surface every billion years, so that in 4.6 billion years there would be a grand total of 9.2 cm thickness. One is left wondering where Brown’s figure of 2,000 feet (approximately 610 metres) actually came from? If he is taking into account Taylor’s reference to the intense early bombardment, then we have already seen that, even with a meteoritic dust influx rate of 300 times the present figure, we can still comfortably account for the quantity of meteoritic dust found in the lunar regolith and the loose surface layer over the evolutionists’ timescale. While defence of the creationist position is totally in order, baffling calculations are not. Creation science should always be good science; it is better served by thorough use of the technical literature and by facing up to the real data with sincerity, as our detractors have often been quick to point out.
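A back-of-envelope check makes the discrepancy plain. The figures below are the article’s own (the ~2 cm per billion years accumulation at the current influx rate, the 4.6-billion-year timescale, and Brown’s 2,000-foot claim); the influx rate that 2,000 feet would require is simply the ratio of the two thicknesses.

```python
# Illustrative check of the two thickness figures discussed above (not from the paper).

CM_PER_GYR_AT_CURRENT_RATE = 2.0          # article's accumulation at ~10,000 tons/yr
AGE_GYR = 4.6                             # evolutionists' timescale
CLAIMED_THICKNESS_CM = 2000 * 30.48       # Brown's 2,000 feet, in centimetres (~610 m)

expected_cm = CM_PER_GYR_AT_CURRENT_RATE * AGE_GYR            # ~9.2 cm
rate_multiple_needed = CLAIMED_THICKNESS_CM / expected_cm     # mean influx needed for 2,000 ft

print(f"Expected at the current rate over {AGE_GYR} Gyr: {expected_cm:.1f} cm")
print(f"Brown's figure would require a mean influx roughly {rate_multiple_needed:,.0f} times "
      f"the current rate, i.e. of the order of {rate_multiple_needed * 1.0e4:.1e} tons per year, "
      f"sustained for the whole 4.6 billion years.")
```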
So are there any loopholes in the evolutionists’ case that the current apparent meteoritic dust influx to the lunar surface and the quantity of dust found in the thin lunar surface dust layer and the regolith below do not contradict their multi-billion year timescale for the moon’s history? Based on the evidence we currently have, the answer has to be that it doesn’t look like it. The uncertainties involved in the possible erosion process postulated by Lyttleton and Gold (that is, radiation erosion) still potentially leave that process as just one possible explanation for the amount of dust in a young moon model, but the dust should no longer be used as if it were a major problem for evolutionists. Both the lunar surface and the lunar meteoritic influx rate seem to be fairly well characterised, even though it could be argued that direct geological investigations of the lunar surface have only been undertaken briefly at 13 sites (six by astronauts and seven by unmanned spacecraft) scattered across a portion of only one side of the moon.
Furthermore, there are some unresolved questions regarding the techniques and measurements of the meteoritic dust influx rate. For example, the surface exposure times for the rocks on whose surfaces microcraters were measured and counted are dependent on uniformitarian age assumptions. If the exposure times were in fact much shorter, then the dust influx estimates based on the lunar microcraters would need to be drastically revised, perhaps upwards by several orders of magnitude. As it is, we have seen that there is a recognised discrepancy between the lunar microcrater data and the satellite-borne detector data, the former being an order of magnitude lower than the latter. Hughes227 explains this in terms of the meteoritic dust influx having supposedly increased by a factor of four in the last 100,000 years, whereas Gault et al.228 admit that if the ages are accepted at face value then there had to be an increase in the meteoritic dust influx rate by a factor of 10 in the past few tens of years! How this could happen we are not told, yet according to estimates of the past cratering rate there was in fact a higher influx of meteorites, and by inference meteoritic dust, in the past. This is of course contradictory to the claims based on lunar microcrater data. This seems to leave the satellite-borne detector measurements as apparently the more reliable set of data, but it could still be argued that the dust collection areas on the satellites are tiny, and the dust collection timespans far too short, to be representative of the quantity of dust in the space around the earth-moon system.
Should creationists then continue to use the moon dust as apparent evidence for a young moon, earth and solar system? Clearly, the answer is no. The weight of the evidence as it currently exists shows no inconsistency within the evolutionists’ case, so the burden of proof is squarely on creationists if they want to argue that based on the meteoritic dust the moon is young. Thus it is inexcusable for one creationist writer to recently repeat verbatim an article of his published five years earlier,229,230 maintaining that the meteoritic dust is proof that the moon is young in the face of the overwhelming evidence against his arguments. Perhaps any hope of resolving this issue in the creationists’ favour may have to wait for further direct geological investigations and direct measurements to be made by those manning a future lunar surface laboratory, from where scientists could actually collect and measure the dust influx, and investigate the characteristics of the dust in place and its interaction with the regolith and any lunar surface processes.
Over the last three decades numerous attempts have been made using a variety of methods to estimate the meteoritic dust influx to both the earth and the moon. On the earth, chemical methods give results in the range of 100,000-400,000 tons per year, whereas cumulative flux calculations based on satellite and radar data give results in the range 10,000-20,000 tons per year. Most authorities on the subject now favour the satellite data, although there is an outside possibility that the influx rate may reach 100,000 tons per year. On the moon, after assessment of the various techniques employed, on balance the evidence points to a meteoritic dust influx figure of around 10,000 tons per year.
Although some scientists had speculated prior to spacecraft landing on the moon that there would be a thick dust layer there, there were many scientists who disagreed and who predicted that the dust would be thin and firm enough for a manned landing. Then in 1966 the Russians with their Luna 9 spacecraft and the Americans with their five successful Surveyor spacecraft accomplished soft-landings on the lunar surface, the footpads of the latter sinking no more than an inch or two into the soft lunar soil and the photographs sent back settling the argument over the thickness of the dust and its strength. Consequently, before the Apollo astronauts landed on the moon in 1969 the moon dust issue had been settled, and their lunar exploration only confirmed the prediction of the majority, plus the meteoritic dust influx measurements that had been made by satellite-borne detector systems which had indicated only a minor amount.
Calculations show that the amount of meteoritic dust in the surface dust layer, and that which trace element analyses have shown to be in the regolith, is consistent with the current meteoritic dust influx rate operating over the evolutionists’ timescale. While there are some unresolved problems with the evolutionists’ case, the moon dust argument, using uniformitarian assumptions to argue against an old age for the moon and the solar system, should for the present not be used by creationists.
Research on this topic was undertaken spasmodically over a period of more than seven years by Dr Andrew Snelling. A number of people helped with the literature search and obtaining copies of papers, in particular, Tony Purcell and Paul Nethercott. Their help is acknowledged. Dave Rush undertook research independently on this topic while studying and working at the Institute for Creation Research, before we met and combined our efforts. We, of course, take responsibility for the conclusions, which unfortunately are not as encouraging or complimentary for us young earth creationists as we would have liked.
Editorial update, 3 July 2014
New NASA data has turned up that is said to have been on ‘long-lost’ tapes,231 and shows a dust influx rate some ten times that of previous measurements. At face value it seems to raise the possibility of at least a partial revival of the moon dust argument. Given the very careful and detailed creationist analyses which led to its abandonment in the first place, and the other factors that could potentially affect these results (see this summary by a friend and ally), any reassessment would need to be similarly thorough and careful.
- Pettersson, H., 1960. Cosmic spherules and meteoric dust, Scientific American 202(2):123–132.
- Morris, H. M. (ed.), 1974. Scientific Creationism, Creation-Life Publishers, San Diego, pp. 151–152.
- Phillips, P. G., 1978. Meteoritic influx and the age of the earth. In: Origins and Chance: Selected Readings from the Journal of the American Scientific Affiliation, D. L. Willis (ed.), American Scientific Affiliation, Elgin, Illinois, pp. 74–76.
- Awbrey, F. T., 1983. Spacedust, the moon’s surface, and the age of the cosmos, Creation/Evolution XIII:21–29.
- Shore, S. N., 1984. Footprints in the dust: the lunar surface and creationism, Creation/Evolution XIV:32–35.
- Miller, K. R., 1984. Scientific creationism versus evolution: the mislabelled debate, In: Science and Creationism, A. Montagu(ed.), Oxford University Press, England, pp. 18–63.
- Bridgstock, M. W., 1985. Creation science: you’ve got to believe it to see it! Ideas In Education 4(3):12–17.
- Bridgstock, M. W., 1986. Dusty arguments. In: Creationism: An Australian Perspective, M. W. Bridgstock and K. Smith (eds), 2nd edition, Australian Skeptics, Melbourne, pp. 18–19.
- Van Till, H. J., Young, D. A. and Menninga, C., 1988. Footprints on the dusty moon, In: Science Held Hostage, Inter Varsity Press, Downers Grove, Illinois, chapter 4, pp. 67–82.
- Pettersson, Ref. 1, p. 132.
- Pettersson, Ref. 1, p. 132.
- Pettersson, Ref. 1, p. 132.
- Phillips, Ref. 3, p. 75.
- Awbrey, Ref. 4, p. 22.
- Bridgstock, Ref. 8, p. 18.
- Bridgstock, Ref. 7, p. 16.
- VanTill et al., Ref. 9, p.71.
- Pettersson, Ref. 1, p. 132.
- Asimov, I., 1959. 14 million tons of dust per year, Science Digest 45(1):33–36.
- Asimov, Ref. 19, p. 34.
- Asimov, Ref. 19, p. 35.
- Bridgstock, Ref. 8, p. 18.
- Phillips, Ref. 3, p. 74.
- Parkin, D. W. and Tilles, D., 1968. Influx measurements of extraterrestrial material, Science 159:936–946.
- Pettersson, Ref. 1, p. 132.
- Barker, J. L. and Anders, E., 1968. Accretion rate of cosmic matter from iridium and osmium contents of deep-sea sediments, Geochimica et Cosmochimica Acta 32:627–645.
- Ganapathy, R., 1983. The Tunguska explosion of 1908: discovery of meteoritic debris near the explosion site and at the South Pole. Science 220:1158–1161.
- Kyte, F. and Wasson, J. T., 1982. Lunar and Planetary Science, 13:411ff.
- Miller, Ref. 6, p. 44.
- Bridgstock, Ref. 8, p. 18.
- Bridgstock, Ref. 7, p. 16.
- Bridgstock, Ref. 8, p. 18.
- Bradley, J. P., Brownlee, D. E. and Veblen, D. R., 1983. Pyroxene whiskers and platelets in interplanetary dust: evidence of vapour phase growth. Nature 301:473–477.
- Dixon, D., McDonnell, T. and Carey, B., 1985. The dust that lights up the Zodiac, New Scientist, January 10, 1985:26–29.
- Bridgstock, Ref. 7, p. 16.
- Dixon et al., Ref. 34, p. 27.
- Dixon et al., Ref. 34, p. 27.
- Dixon et al., Ref. 34, pp. 26–27.
- Millman, P. M., 1975. Dust in the solar system. In: The Dusty Universe, G. B. Field and A. G. W. Cameron (eds), Smithsonian Astrophysical Observatory and Neale Watson Academic Publications, New York, pp. 185–209.
- Millman, Ref. 39.
- Dohnanyi, J. S., 1972. Interplanetary objects in review: statistics of their masses and dynamics, Icarus 17:1–48.
- Millman, Ref. 39, p. 191.
- Bridgstock, Ref. 8, p. 18.
- Millman, Ref. 39, p. 191.
- McCrosky, R. E., 1968. Distributions of large meteoric bodies. Smithsonian Astrophysical Observatory, Special Report 280.
- Millman, Ref. 39, p. 191.
- Dohnanyi, Ref. 41.
- Barker and Anders, Ref. 26.
- Kyte and Wasson, Ref. 28.
- Schmidt, R. A. and Cohen, T. J., 1964. Particle accretion rates: variation with latitude, Science 145:924–926.
- Singer, S. F. and Bandermann, L. W., 1967. Nature and origin of zodiacal dust. In: The Zodiacal Light and the Interplanetary Medium, National Aeronautics and Space Administration, USA, pp. 379–397.
- Hughes, D. W., 1974. Interplanetary dust and its influx to the earth’s surface, Space Research XIV (COSPAR), Akademie-Verlag, Berlin, pp.789–791.
- Hughes, D. W., 1975. Cosmic dust influx to the earth. Space Research XV (COSPAR), Akademie-Verlag, Berlin, pp. 531–539.
- Hughes, D. W., 1976. Earth -an interplanetary dustbin. New Scientist, 8 July 1976; 64–66.
- Millman, Ref. 39.
- Dohnanyi, Ref. 41.
- Hughes, Ref. 53, p. 537.
- Hughes, D. W., 1978. Meteors. In: Cosmic Dust, J. A. M. McDonnell (ed.), John Wiley and Sons, Chichester, England, pp. 123–185.
- Wetherill, G. W., 1976. Where do the meteorites come from? A re-evaluation of the Earth-crossing Apollo objects as sources of chondritic meteorites. Geochimica et Cosmochimica Acta 40:1297–1317.
- Hughes, Ref. 53.
- Barker and Anders, Ref. 26.
- Kyte and Wasson, Ref. 28.
- Singer and Bandermann, Ref. 51.
- Ganapathy, Ref. 27.
- Hartmann, W. K.,1983. Moons and Planets, 2nd Edition, Wadsworth Publishing Company, Belmont, California, pp. 161–164.
- Grün, E., Zook, H. A., Fechtig, H. and Giese, R. H., 1985. Collisional balance of the meteoritic complex. Icarus 62:244–272.
- Hughes, Ref. 53, p. 537.
- Olsson-Steel, D. I., 1988. The near-earth flux of microgram dust. In: Dust in the Universe, M. E. Bailey and D. A. Williams (eds), Cambridge University Press, England, pp. 187–192.
- Hughes, Ref. 53, p. 537.
- Hughes, Ref. 58.
- Thomas, R. M., Whitham, P. S. and Elford, W. G., 1986. Frequency dependence of radar meteor echo rates. Proceedings of the Astronomical Society of Australia 6:303–306.
- Maurette, M., Jehanno, C., Robin, E. and Hammer, C.,1987. Characteristics and mass distribution of extraterrestrial dust from the Greenland ice cap. Nature 328:699–702.
- Grün et al., Ref. 66.
- Tuncel, G. and Zoller, W. H., 1987. Atmospheric iridium at the South Pole as a measure of the meteoritic component. Nature 329:703–705.
- Maurette, M., Olinger, C., Michel-Levy, M. C., Kurat, G., Pourchet, M., Brandstatter, F. and Bourot-Denise, M., 1991. A collection of diverse micrometeorites recovered from 100 tonnes of Antarctic blue ice. Nature 351:44–47.
- Dodd, R. T., 1981. Meteorites: A Petrologic-Chemical Synthesis, Cambridge University Press, Cambridge, England, pp. 1–3.
- Hartmann, Ref. 65, p.161.
- Henbest, N., 1991. Dust in space. New Scientist, 18 May 1991, Inside Science supplement no.45, pp. 1–4.
- Henbest, Ref. 78, p. 4.
- Van Till et al., Ref. 9, p. 71.
- Hartmann, Ref. 65, p. 161.
- Hartmann, W. K.,1980. Dropping stones in magma oceans. Proceedings of the Conference on the Lunar Highlands Crust, Pergamon Press, New York, pp. 162–163.
- Keays, R. R., Ganapathy, R., Laul, J. C., Anders, E., Herzog, G. F. and Jeffrey, P. M., 1970. Trace elements and radioactivity in lunar rocks: implications for meteorite infall, solar-wind flux, and formation conditions of moon. Science 167:490–493.
- Ganapathy, R., Keays, R. R., Laul, J. C. and Anders, E., 1970. Trace elements in Apollo 11 lunar rocks; implications for meteorite influx and origin of moon. Proceedings of the Apollo 11 Lunar Science Conference 2:1117–1142.
- Ganapathy, R., Keays, R. R. and Anders, E., 1970. Apollo 12 lunar samples: trace element analysis of a core and the uniformity of the regolith. Science 170:533–535.
- Ganapathy et al., Ref. 85, p. 535.
- Ganapathy et al., Ref. 85, p. 533.
- Dohnanyi, Ref. 41, p. 8.
- Dohnanyi, J. S., 1971. Flux of micrometeoroids: lunar sample analyses compared with flux model. Science 173:558.
- Nazarova, T. N., Rybakov, A. K., Bazazyants, S. I. and Kuzmich, A. I., 1973. Investigations of meteoritic matter in the vicinity of the earth and the moon from the orbiting station Salyut and the moon satellite Luna 19. Space Research XIII (COSPAR), Akademie-Verlag, Berlin, pp. 1033–1036.
- Hughes, Ref. 53, p. 532.
- Dohnanyi, Ref. 41, pp. 7–8.
- Jaffe, L. D., 1970. Lunar surface: changes in 31 months and micrometeoroid flux, Science 170:1092–1094.
- Hartung, J. B., Hörz, F. and Gault, D. E., 1972. Lunar microcraters and interplanetary dust. Proceedings of the Third Lunar Science Conference 3:2735–2753.
- Schneider, E., Storzer, D., Hartung, J. B., Fechtig, H. and Gentner, W., 1973. Microcraters on Apollo 15 and 16 samples and corresponding cosmic dust fluxes. Proceedings of the Fourth Lunar Science Conference 3:3277–3290.
- Hartung et al., Ref. 94, p. 2738.
- Morrison, D. A. and Zinner, E., 1975. Studies of solar flares and impact craters in partially protected crystals. Proceedings of the Sixth Lunar Science Conference 3:3373–3390.
- Hartung et al., Ref. 94, p. 2751.
- Fechtig, H., Hartung, J. B., Nagel K. and Neukum, G., 1974. Lunar microcrater studies, derived meteoroid fluxes, and comparison with satellite-borne experiments. Proceedings of the Fifth Lunar Science Conference 3:2463–2474.
- Schneider et al., Ref. 95, pp. 3277–3281.
- Mandeville, J. C., 1975. Microcraters observed on 15015 breccia and micrometeoroid flux. Proceedings of the Sixth Lunar Science Conference 3:3403–3408.
- Schneider et al., Ref. 95, pp. 3284–3285.
- Hartung et al., Ref. 94.
- Morrison and Zinner, Ref. 97.
- Cour-Palais, B. G., 1974. The current micrometeoroid flux at the moon for masses ≤10^-7 g from the Apollo window and Surveyor 3 TV camera results. Proceedings of the Fifth Lunar Science Conference 3:2451–2462.
- Hughes, D. W., 1974. The changing micrometeoroid influx. Nature 251:379–380.
- Hörz, F., Brownlee, D. E., Fechtig, H., Hartung, J. B., Morrison, D. A., Neukum, G.,Schneider, E.,Vedder, J. F. and Gault, D. E., 1975. Lunar microcraters: implications for the micrometeoroid complex. Planetary and Space Science 23:151–172.
- Gault, D. E., Hörz, F. and Hartung, J. B., 1973. Abrasion and catastrophic rupture of lunar rocks: some implications to the micrometeoroid flux at 1 AU, Space Research XIII (COSPAR), Akademie-Verlag, Berlin, pp. 1085–1093.
- Gault, D. E., Hörz, F. and Hartung, J. B., 1972. Effects of microcratering on the lunar surface. Proceedings of the Third Lunar Science Conference 3:2713–2734.
- Gault et al., Ref. 108, p. 1087.
- Gault et al., Ref. 108, p. 1092.
- Gault et al., Ref. 108, p. 1092.
- Cadogan, P., 1981. The Moon: Our Sister Planet, Cambridge University Press, Cambridge, England, p. 237.
- Hörz et al., Ref. 107, p. 168.
- Taylor, S. R., 1975. Lunar Science: A Post-Apollo View, Pergamon Press Inc., New York, p. 92.
- Hörz et al., Ref. 107, pp. 168–169.
- Van Till et al., Ref. 9, p. 71.
- Van Till et al., Ref. 9, p. 80.
- Van Till et al., Ref. 9, p. 82.
- Asimov, Ref. 19, pp. 35–36.
- Peal, S. E., 1897. The dark tints of the lunar maria. Journal of the British Astronomical Association, VII(7):400–401.
- Buddhue, J. D., 1950. Meteoritic Dust, University of New Mexico Press, Albuquerque.
- Öpik, E. J.,1956. Interplanetary dust and terrestrial accretion of meteoritic matter. Irish Astronomical Journal 4:84–135.
- Watson, F. G., 1956. Between the Planets, Harvard University Press, Cambridge, Massachusetts.
- Whipple, F. L., 1959. On the lunar dust layer. In: Vistas In Astronautics, Pergamon Press, New York, Vol. 2.
- Baldwin, R. R., 1949. The Face of the Moon, University of Chicago Press, Chicago.
- Lyttleton, R. A., 1956. The Modern Universe, Harper and Brothers, New York, p. 72.
- Gold, T., 1955. The lunar surface. Monthly Notices of the Royal Astronomical Society 115:585.
- Whipple, Ref. 125.
- Moore, P., 1963. Survey of the Moon, Eyre and Spottiswoode, London, p. 120.
- Sharonov, V. V., 1960. In: The Moon: a Russian View, A. V. Markov (ed.), The University of Chicago Press, pp. 354–357.
- Green, J., 1962. Geosciences applied to lunar exploration. In: The Moon, Z. Kopal and Z. K. Michailov (eds), International Astronomical Union Symposium No.14, Academic Press, London, p. 199.
- Fielder, G., 1961. Structure of the Moon’s Surface, Pergamon Press, New York, p. 125.
- Kopal, Z., 1964. Introduction. In: The Lunar Surface Layer: Materials and Characteristics, J. W. Salisbury and P. E. Glaser (eds), Academic Press, New York, p. xviii.
- McCracken, C. W. and Dubin, M., 1964. Dust bombardment on the lunar surface. In: The Lunar Surface Layer: Materials and Characteristics, J. W. Salisbury and P. E. Glaser (eds), Academic Press, New York, p. 203.
- McCracken and Dubin, Ref. 135, p. 204.
- Salisbury, J. W. and Smalley, V. G., 1964. The lunar surface layer. In: The Lunar Surface Layer: Materials and Characteristics, J. W. Salisbury and P. E. Glaser (eds), Academic Press, New York, p. 411.
- Salisbury and Smalley, Ref. 137, p. 431.
- Hapke, B. W., 1964. Photometric and other laboratory studies relating to the lunar surface. In: The Lunar Surface Layer: Materials and Characteristics, J. W. Salisbury and P. E. Glaser (eds), Academic Press, New York, p. 332.
- Hapke, Ref. 139, p. 333.
- Hapke, B. W., 1965. Optical properties of the moon’s surface. In: The Nature of the Lunar Surface, W. N. Hess (ed.), Proceedings of the 1965 International Astronomical Union-National Aeronautics and Space Administration Symposium, Johns Hopkins Press, Baltimore, p.141.
- Hawkins, G. S. (ed.), 1967. Meteor Orbits and Dust: The Proceedings of a Symposium, National Aeronautics and Space Administration, Washington, D.C., Publication SP–135 and Smithsonian Institution, Cambridge, Massachusetts, Smithsonian Contributions to Astrophysics Vol. 11.
- Ackerman, P. D., 1986. Moon dust and the question of time. In: It’s a Young World After All: Exciting Evidences for Recent Creation, Baker Book House, Grand Rapids, Michigan, chapter 1, p. 23.
- Calais, R., 1987. Cleaning up the dust on the moon. Bible-Science Newsletter 25(10):2.
- Calais, R., 1992. Proof that the moon is young! Bible-Science News, 30(8):2.
- Weaver, K. F., 1969. The moon-man’s first goal in space. National Geographic 135(2):218.
- Shoemaker, E. M., 1965. Preliminary analysis of the fine structure of the lunar surface. Ranger VII, Part II Experimenters Analyses and Interpretations, Jet Propulsion Laboratory Technical Report 32–700, p.75.
- Hartmann, W. K., 1972. Moons and Planets, 1st Edition, Bogden and Quigley, Inc., Publishers, Belmont, California, p. 280.
- Whitcomb, J. C. and DeYoung, D. B., 1978. The Moon: Its Creation, Form and Significance, BMH Books, Winona Lake, Indiana, p. 95.
- Moore, P., 1981. The Moon, Mitchell Beazley Publishers, London, p. 15.
- Weaver, Ref. 146, p. 219.
- Pasachoff, J. M., 1977. Contemporary Astronomy, W. B. Saunders Company, Philadelphia, p. 295.
- Moore, Ref. 150, p. 15.
- Moore, Ref. 150, p. 18.
- Slusher, H. S., 1980. Age of the Cosmos, Institute for Creation Research, San Diego, ICR Technical Monograph No.9, pp. 41–42.
- Taylor, I. T., 1984. In the Minds of Men -Darwin and the New World Order, TFE Publishing, Toronto, Canada, pp. 328–329,460.
- Taylor, I. T., 1988. An interview. Bible-Science Newsletter, 26(8):9.
- Calais, Ref. 144, pp. 1–2.
- Calais, Ref. 145, pp. 1–2.
- Morris, Ref. 2, p. 152.
- Dixon, R. T., 1971. Dynamic Astronomy, Prentice-Hall, Inc., Englewood Cliffs, New Jersey, p. 149.
- Rand McNally, 1978. New Concise Atlas of the Universe, Mitchell Beazley Publishers Ltd, London, p. 41.
- Whipple, F. L., 1961. The dust cloud about the earth. Nature, 189(4759):127–128.
- Armstrong, N. A., Aldrin, E. E. and Collins, M., 1969. Man walks on another world. In: First explorers on the moon -the incredible story of Apollo 11, National Geographic, 136(6):739.
- Armstrong et al., Ref. 164, p. 746.
- Taylor, Ref. 156, p. 329.
- Taylor, Ref. 157, p. 9.
- Morris, Ref. 2, p. 152.
- Ackerman, Ref. 143, pp. 19, 22.
- Bible-Science Newsletter, 20(1) (January, 1982).
- Interview with Bill Overn, Ex Nihilo, 6(1):13–15 (July, 1983).
- Ref. 171, p. 14.
- Taylor, Ref. 115, pp. 57–58.
- Taylor, Ref. 115, pp. 60–61.
- Ashworth, D. G. and McDonnell, J. A. M., 1973. Lunar surface micro-erosion related to interplanetary dust particle distributions. Space Research XIII (COSPAR), Akademie-Verlag, Berlin, p. 1071.
- Eglinton, G., Maxwell, J. R. and Pillinger, C. T., 1972. The carbon chemistry of the moon. Scientific American, 227(4):81–90.
- Zook, H. A., 1975. The state of meteoritic material on the moon. Proceedings of the Sixth Lunar Science Conference, pp. 1653–1672.
- Grün et al., Ref. 66, pp. 247–248.
- McDonnell, J. A. M., 1978. Microparticle studies by space instrumentation. In: Cosmic Dust, J. A. M. McDonnell (ed.), John Wiley and Sons, Chichester, England, pp. 370–372.
- Hörz, F., Hartung, J. B. and Gault, D. E., 1971. Micrometeorite craters on lunar rock surfaces. Journal of Geophysical Research. 76(23): 5770–5798.
- McDonnell, J. A. M. and Ashworth, D. G., 1972. Erosion phenomena on the lunar surface and meteorites. Space Research XII (COSPAR), Akademie-Verlag, Berlin, pp. 333–347.
- Ashworth and McDonnell, Ref. 175, p. 1071.
- Gault et al., Ref. 108, pp. 1089–1090.
- McCracken and Dubin, Ref. 135, p. 203.
- Wehner, G. K., 1964. Sputtering effects on the lunar surface. In: The Lunar Surface Layer -Materials and Characteristics, J. W. Salisbury and P. E. Glaser (eds), Academic Press, New York, p. 318.
- Wehner, Ref. 185, p. 318.
- McDonnell and Ashworth, Ref. 181, p. 338.
- Ashworth and McDonnell, Ref. 175, p. 1072.
- McDonnell, J. A. M. and Flavill, R. P., 1974. Solar wind sputtering on the lunar surface: equilibrium crater densities related to past and present microparticle influx rates. Proceedings of the Fifth Lunar Science Conference, Vol. 3, pp. 2441–2449.
- Hughes, Ref. 106, p. 380.
- McDonnell, J. A. M. and Carey, W. C., 1975. Solar-wind sputter erosion of microcrater populations on the lunar surface. Proceedings of the Sixth Lunar Science Conference, pp. 3391–3402.
- Zook, Ref. 177, p. 1669.
- McDonnell and Carey, Ref. 191, p. 3393.
- Ashworth and McDonnell, Ref. 175, pp. 1071–1072.
- McDonnell and Flavill, Ref. 189, p. 2441.
- Lyttleton, Ref. 127.
- Gold, Ref. 128.
- Taylor, Ref. 115, p. 171.
- Taylor, Ref. 115, p. 83.
- Taylor, Ref. 115, p. 86.
- Taylor, Ref. 115, p. 85.
- Taylor, Ref. 115, p. 259.
- Taylor, Ref. 115, p. 171.
- Gault et al., Ref. 108, p. 1086.
- Grün et al., Ref. 66, pp. 249–250.
- Sharonov, Ref. 131, p. 356.
- McDonnell, J. A. M., 1979. Lunar surface grain motion: electrostatic charging, supercharging (electret effects) and mechanical bonding. Space Research XIX (COSPAR), Pergamon Press, Oxford, pp. 455–457.
- Taylor, P. S., 1989. The Illustrated Origins Answer Book, Films for Christ Association, Inc., Mesa, Arizona, p. 17.
- Calais, Ref. 144, p. 2.
- Calais, Ref. 145, p. 2.
- Hawkins, Ref. 142.
- DeYoung, D. B., 1989. Questions and Answers on Astronomy and the Bible, Baker Book House, Grand Rapids, Michigan, p. 33.
- DeYoung, Ref. 212, p. 33.
- Taylor, Ref. 157, p. 9.
- DeYoung, Ref. 212, p. 34.
- Whitcomb and DeYoung, Ref. 149, p. 95.
- Mutch, T., 1972. Geology of the Moon, Princeton University Press, New Jersey, pp. 256–257.
- Taylor, Ref. 115, p. 83.
- Mutch, Ref. 217, pp. 256–257.
- Ashworth and McDonnell, Ref. 175, p. 1082.
- Taylor, Ref. 115, p. 61.
- Parks, W. S., 1991. Moondust: response to Kuban. Creation Research Society Quarterly, 28(2):75–76.
- Ganapathy et al., Ref. 85.
- Brown, W. T., 1989. In the Beginning, Fifth Edition, Center for Scientific Creation, Phoenix, Arizona, pp. 17, 54.
- Hughes, Ref. 106.
- Taylor, Ref. 115, pp. 84–92.
- Hughes, Ref. 106.
- Gault et al., Ref. 108, p. 1092.
- Calais, Ref. 144.
- Calais, Ref. 145.
- Rediscovered Apollo data gives first measure of how fast moon dust piles up, Phys.org, 20 November 2013. | https://creation.com/moon-dust-and-the-age-of-the-solar-system | 21 |
17 | The word “fossil” comes from a Latin word meaning “anything dug out of the ground”. Fossils are remains of prehistoric plants or animals that have been buried in the ground for many years and finally hardened into rock.
Fossils are utilized for different purposes. Some show the different extinct species, such as dinosaurs, that lived millions of years ago, while other fossilized animals and plants are useful because they are the source of oil and coal.
Paleontologists, scientists who study fossils, can extract information about the lifestyles, behaviours and habits of dinosaurs. They also learn about the history of Earth and the process of evolution through fossil research. Restoring dinosaur fossils is no easy task. Perfecting the reconstruction of a dinosaur demands a great deal of time, effort, technical expertise, and scientific knowledge. A complete dinosaur skeleton displayed anywhere today is usually a composite of parts extracted from two or more separate finds.
Steps to Create Dinosaurs Models:
1. Fossils are dug out of ground.
2. Rocks containing fossils are removed in one piece to prevent damage to the fossils.
3. These rocks are taken back to the laboratory, where special tools are used to carefully remove excess rock and soil.
4. A special hardening compound is used to protect the fossils, fix broken pieces, and rebuild missing bones with artificial material.
5. Experts also study fossils of dinosaur skin. Skin fossils are rarer than bone fossils.
6. By studying present-day animals that resemble dinosaurs in certain ways, scientists get their ideas about how dinosaurs lived.
7. Nobody knows the true color of dinosaurs. Scientists believe that their skin had colors for warning and for differentiating sex.
8. The preserved fossils are pieced together based on existing information about dinosaurs’ shapes.
9. Based on what is known of the anatomy of animals today, clay muscles and skins are fitted onto a model of the newly assembled skeleton.
Formation of Coal through Fossils:
Like natural gas and oil, coal is a fossil fuel formed by the transformation of organic matter. Plants that lived in vast swamps 300 million years ago are the source of most of the coal that is mined today. As those plants died, they sank into the swamps and were covered with sediments. Gradually, they decomposed into peat. As the peat sank, ground heat and pressure removed hydrogen, oxygen and nitrogen from it to leave mostly carbon. This is what we know as coal. Coal is a hard, black mineral and is classified by its carbon content – the higher the carbon content, the more heat it gives off when burnt. The lowest grade of coal is lignite, followed by sub-bituminous coal, bituminous coal and anthracite. Coal is also known as “Black Gold”.
There are some places where coal seams are near the Earth’s surface. Once the soil is removed, it is easy to dig the coal out. This is known as opencast mining.
Steps of Formation of Coal:
1. Hundreds of millions of years ago, there were many large ferns and other plants growing on the Earth.
2. Plants that died in swamps and on river banks were covered with soil and mud. The dead plants slowly sank into the ground.
3. As the years passed, the weight of the ground and the heat of the Earth changed the dead plants into coal.
4. Coal is usually found in layers, or seams, under the ground. To get it to the surface, we have to dig it out.
Formation of Oil through Fossils:
Geologists believe that the remains of such single-celled planktonic plants and animals as blue-green algae and foraminifera became oil in a process that spanned millions of years. As these organisms died, they fell to the seabed and were buried by sediments. Chemical rearrangement and bacterial activity then converted the organisms into kerogen. Thermal action and pressure from layers of sedimentary rocks are believed to have converted the kerogen into crude oil. There is only a finite amount of oil in the world and it is being used up at an ever increasing rate. Other energy sources need to be developed quickly if the world is to avoid an energy shortage.
Oilfields – areas that contain oil – are scattered about the world. More than half the world’s oil is located in the Middle East. Countries that do not have their own supply of oil must buy it from other countries.
Steps of Formation of Oil:
1. Millions of years ago, water covered much more of the Earth’s surface than it does today. Living in the water were billions of tiny plants and animals.
2. When they died, the plants and animals fell to the ocean floor and piled up. Particles of eroded rocks in the form of sand and mud covered them up.
3. The layers of sand and mud were very heavy. This weight turned the sand and mud into sedimentary rocks. Heat from the Earth and the weight of the rocks caused the dead plants and animals to change into oil.
4. The oil slowly moved up through pores, or small holes, in the layers of rocks. In some cases, it came to rocks with no pores and so was trapped. It is from those traps that oil is drilled out.
5. To get the oil, an oil company must first drill down to it. First, a rig is set up to support the drill. Then, a well is drilled into the ground until the oil is reached. If the oilfield is in the sea, then offshore rigs are used. | https://smartenotes.com/fossils-and-utilization-of-fossils/ | 21 |
15 | Pathogens are microorganisms – for example, bacteria and viruses that cause disease. However, infection with a pathogen does not necessarily lead to disease. Infection arises when viruses, bacteria, or other microbes enter the body and accumulate. The body’s way of responding to infection is our immune system.
White blood cells and antibodies get to work to get rid of the infection. Pathogens are everywhere; they are in the environment every day. They are spread in many different ways. Droplet infection: when we cough, talk or sneeze, small droplets shoot out of our mouths.
If an individual has an infection, their droplets contain microorganisms. The people around them will then breathe in the microorganisms and the bacteria/virus that those droplets could contain. This is how common colds are spread.
If you drink water that has been contaminated by sewage, or you eat raw or undercooked food, you are taking a lot of microorganisms directly into your gut. An example of this is salmonella. Vectors: an animal that spreads disease-causing organisms from one host to another, but does not suffer at all itself, is called a vector. E.g., mosquitoes spread malaria.
A parasite is an organism on/in a host and gets its food from the host. Parasites cause disease in the human body; some are easy to treat and some are not.
Fungal infections are very common and can affect the skin, hair and nails. They are mainly caused by two groups of fungi: dermatophytes and yeasts. Fungal skin infections can cause a variety of different rashes. They are usually treated with creams or tablets, e.g.
athlete’s foot.

Task 2: Standard Infection Control

Precautions are made in order to prevent cross-transmission from recognized/unrecognized sources of infection. These sources of (potential) infection include blood and other body fluid secretions or excretions, and any equipment or items in the care environment which are likely to become contaminated. These precautions should be applied at all times within a healthcare setting or wherever healthcare is being provided, e.g. in the NHS or a residential home.
Hands are the most common way that micro- organisms (e. G bacteria) might be transported and then cause infections, especially in those who are most susceptible to infection; for example, a patient on a ward who had been ill for a while, their immune system is low.Good hand hygiene is one of the most important things we can do in reducing transmission of infectious agents, including Healthcare Associated Infections (HCI) during delivery of care. The NASH has three levels of hand hygiene.The first being, Social Hand Hygiene; ‘To render the hands physically clean and to remove microorganisms picked up during activities considered ‘social’ activities (transient Micro-organisms)’ 17/11/14, www. Misapprehensions. Nash.
UK. (This should last for at least 15 seconds).Secondly, Hygienic (aseptic) Hand Hygiene; ‘To remove or destroy transient Micro-organisms. Also, to provide residual effect during times when hygiene is particularly important in protecting yourself and others (reduces those resident micro-organisms which normally live on the skin)’ 17/1 1/14, Finally, Surgical scrub; ‘To remove or destroy transient microorganisms and to substantially reduce resident micro-organisms during times when surgical procedures are being carried out’ 17/11/14, www. Misapprehensions. Nash.
UK. This should last for at least 2-3 minutes, making sure all areas Of hands and forearms are washed thoroughly).When managing an out brake at work you eave to follow procedures that have been put in place to protect service users and healthcare providers.
For example if a nurse came into work and disclosed that they have been had vomiting and diarrhea all night you should tell the ward manager or maternal as they would need to go home. They would not be allowed back into work for 48 hours from their last episode, due to the RIDDED guidelines.Your workplace should have a written policy on waste segregation and disposal which provides guidance on all aspects, including special waste, like pharmaceuticals and psychotic waste, segregation of waste, and audits. This should include the color coding of bags used for waste, for example: ; Municipal/domestic waste (black bags) ; Offensive waste (tiger striped) ; Infectious waste (orange).
MI/DID. Whilst on placement in Harrington General Hospital I saw what happens when you have to manage an outbreak.A service user had been referred to ward AH and they had a viral infection. The service user was isolated; they were segregated into a side room.
When any of the nurses went in they had to follow certain precaution, for example, warring gloves, masks and an apron. (http://www. PC’s.
Org. UK/en/resources/health_and_safety/ lath_and_safety_legal_summaries/ health_and_safety_at_work_act_1 974. CFML, Assessed 23/1 1/14). There are policies and procedure that you have to follow which dictate which type Of apron and mask you should ware.
These have been put in place to safeguard staff and service users and to abide by the law. , The Health and Safety at Work Act’ 1 974, states that you have to provide members of staff with PEE (personal protective equipment). By using policies and procedure such as ‘wear an apron’ (policy) and ‘you put an apron on correctly by.
.. ‘ (Procedure) you are following the law and keeping everyone involved safe. (http:// www. Legislation. Gob. UK/gaps/1974/37 Assessed 23/1 1/14).
My Seen.Ice providers following the policies and procedures put in place that are involved with handling and infectious outbreak, the said infection should not be passed on or spread. The policies and procedures have been proven to work and therefore the infection should stop. Isolation ( one of the Ann.’s policies/ procedures) helps Staff understand how to manage an outbreak, making it easier to control. Task 3 pa/MM There are many key aspects of legislations and guidelines relevant to the prevention and control of infection prevention and control, for example, Health and Safety Act’ (1974).This Act places responsibilities on the Trust and individual employees to do what is reasonable to adequately control the risks of infection to staff and others. Kinder this legislation all employees have the responsibilities to cooperate with the Trust on matters of health and safety and in the context of this policy particularly regarding the reduction of risks from healthcare associated infections.
Infection control policies, procedures and protocols are designed to outline the principles and responsibilities associated with the prevention and control of infection in a health care setting. Http://move. Hose. Gob. UK/electricity/information/testing.HTML Assessed, 24/11/14) You can prevent infection by following policies and procedures of The Health and Safety Act 1974. An example Of this within a healthcare setting is Washing your hands with hands sanitized after patient contact, you are not just looking out for yourself but you are following the policies and procedures and upholding the safety of your colleagues and service users as you are preventing the spread of bacterial microorganisms from your hands. RIDDED’ (1995, updated in 2013) is the Reporting of Injuries, Diseases and Dangerous Occurrences Regulations.
This legislation has been put in place to make sure that if anyone has a slip, trip, fall or minor injury it is documented; this helps with insurance claims and also help accidents from happening again. If any accidents happen, a report had to be written and this report has to be kept in file for up to three years. It needs to be available to inspectors to see if they so wish.In following RIDDED and flowing the policies and procedures individuals are preventing infection and no one is being put at risk.
An example Of this is doctors cut themselves at the end of surgery with one of the surgical knife, hey would need to report it to their senior who would then send them for treatment. By this happening the manager and doctor they are following the policies and procedures and if it is a minor injury the doctor will most likely be k to keep on working and not be putting the patient at risk. COACH’ (2002); Control of Substances Hazardous to Health Regulations.
This legislation has been in place since 2002, it protects employees who might work around/ come into contact with hazardous substances. These could be anything from acids to blood, urine or other bodily fluids. COACH watches these substances, their use and their disposal. They regulate where/how these substances are kept. The way in which they are labeled and their effects, just incase of emergencies. The Food Safety Act’ (1990), provides the framework for all food legislation in Britain.The main responsibilities for all food businesses under the Act are: to make sure you do not include anything in food, remove anything from food or treat food in any way which means it would be damaging to the health of people eating it to make sure that the food you serve or sell is of the nature, substance or quality which consumers would expect to make sure that the DOD is labeled, advertised and presented in a way that is not false or misleading; for example when the hospital shop workers go round the ward with the trolley full of goods and magazines.
This Act makes sure that the highest possible standards for organizations that are preparing and seen. ‘inning food, meaning that food served must be in a “fit stay’ before serving it otherwise it IS illegal. This legislation links into The Food Hygiene Regulations and The Regulations on The Hygiene of Foodstuffs 2006.This secondary Act gives us guidelines Of general food hygiene and how you must document DOD safety management systems.
(http://www. Food. Gob. K/business- industry/guidance’s/hugged , Assessed: 25/11/14) A policy and procedure of The Food Safety Act 1990 is the keeping of meats and other foods separate. To do different chopping boards should be used, for example two different colored boards and knives which can help prevent cross contamination of foods.
In following these legislations, you are ensuring that illness caused by food isn’t spread as these legislations say that food ‘must be traceable from farm to fork’.Therefore, if these policies and procedures are allowed, there will not be any outbreaks in food poisoning, for example, Salmonella. (http://WV. Detonates. Gob.
UK/sections/environment/ environments althea/foodstuffs/food_hygiene_legislation. Asps http:// www. Nash.
Conditions/Food-poisoning/Pages/Causes. Asps, Assessed: 25/11/14) PA/MM/DO. Individuals working within the Health and Social Care have a responsibility to minimize the risk of infection, not only to protect themselves but to also safeguard patients and their colleagues.To ensure this happens, there is a code of conduct that must be followed, this is the same for any work place. Everyone has their own responsibilities within their workplace with the exception Of Mangers everyone is usually has the same responsibilities or different but on the same level. When someone is a Manager or a in a Senior position their responsibilities are larger and therefore if something happens or someone makes a mistake then as the senior member or staff it would fall on you to sort it out.
Everyone who has a job in Health and Social care has to acknowledge that they have a role in the prevention of the spreading infection and illness and infection control. Due to this it is very important that hose professionals, along side the public, remember to wash their hands then use hand gel. This ensure any bacteria or infectious microorganisms on your hands are killed, so you don’t give it the change to spread from person to person.
It is not just the job of Health and Social Care Professionals responsibility to make sure infections don’t spread it is EVERYONE!This is the reason why gloves, aprons and other PEE are used in health care settings as things are so easily spread and there are very sick service users with low immune system so are at higher risk of contracting infections. This is why risk assessments are used, they help us understand which individuals are most at risk and then you can put measures in to prevent anything form happening. Risk Assessments are charts, which show you the value of the possibility of something happening, weather that be a high risk ROR low risk.Whilst on placement in Harrington General Hospital I observed many risk assessment being carried out. One was for bed sores and another was for an individual who had just had a knee replacement and the Occupational therapist was looking at the risk assessment of their likelihood to have falls, in order for hem to see what the individual might need at home, or if he needed a walking stick. This gave me the opportunity to see the work Of an occupational therapist and the Multidisciplinary Team at work.The Risk Assessment we have been given states the risk – infection control – and how to reduce it; using protective equipment.
This is helpful as it is informative and you then know what you need to be doing in order to keep infection under control. This Risk assessment doesn’t go into much detail, however, it portrays the message it needs to, which is coming into contact with infected material could cause infection to spread. It provides instructions, shush as, ” Bags to be changed when % full” and ” No sitting on beds”. | https://finnolux.com/food-3/ | 21 |
16 | Types of Demand. Demand is a basic economic force that drives a firm's revenue. It refers to the quantity of commodities the consumer is willing to buy at a given price and time. There are a large number of goods and services available in every economy, and the classification of demand is important in order to carry out a demand analysis for managerial decisions. Types of demand include price demand, cross demand, income demand, direct demand, derived demand, joint demand and composite demand; demand may also be described as effective, latent, derived, composite or joint, and eight demand states are said to be possible.
- Price demand: demand primarily dependent upon price. This demand is sensitive, or responsive, to a change in price. There are two types of price demand, the first being individual demand.
- Direct and indirect demand (or consumers' goods and producers' goods): demand for goods that are directly used for consumption by the ultimate consumer is known as direct demand (example: demand for T-shirts), while demand for goods that are used by producers for producing further goods and services is indirect demand.
- Derived demand: demand which occurs as a result of the demand for other commodities, i.e. the demand for the product is not for its own sake but for the manufacturing of another product which is in demand.
- Joint and composite demand: the demand for one commodity will necessitate the demand for another commodity.
- Independent demand: the demand for finished products (any item sold directly to a consumer); it does not depend on the demand for other products, so the demand for the item is unrelated to the demand for other items.
- Negative demand: consumers dislike the product and may even pay a price to avoid it. If the market response to a product is negative, it shows that people are not aware of the features of the service and the benefits offered.
- Perfectly elastic demand: when a small change (rise or fall) in the price results in a large change (fall or rise) in the quantity demanded, demand is said to be perfectly elastic. Under this type of elasticity of demand, a small rise in price results in a fall in demand to zero, while a small fall in price causes an increase in demand to infinity.
Businesses want to increase demand so they can improve profits. Governments and central banks boost demand to end recessions and slow it during the expansion phase of the business cycle to combat inflation.
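To make the elasticity idea concrete, here is a small worked sketch (not from the original notes; the prices and quantities are invented) that computes arc price elasticity of demand in Python and classifies the result. Demand counts as elastic when the absolute elasticity exceeds 1, and 'perfectly elastic' is the limiting case as it grows without bound.

```python
# Illustrative sketch (hypothetical figures): arc price elasticity of demand
# using the midpoint formula, with a rough classification of the result.

def price_elasticity(p1, p2, q1, q2):
    """Arc elasticity: % change in quantity / % change in price, using midpoints."""
    pct_q = (q2 - q1) / ((q1 + q2) / 2)
    pct_p = (p2 - p1) / ((p1 + p2) / 2)
    return pct_q / pct_p

def classify(e):
    e = abs(e)
    if e > 1:
        return "elastic (approaches 'perfectly elastic' as e grows without bound)"
    if e < 1:
        return "inelastic"
    return "unit elastic"

# Example: price rises from 10 to 11, quantity demanded falls from 100 to 80.
e = price_elasticity(10, 11, 100, 80)
print(round(e, 2), "->", classify(e))  # roughly -2.33 -> elastic
```

With these made-up figures the elasticity comes out near -2.3, so demand here would be classed as elastic.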
| https://thegoodbean.com/article/types-of-demand-e4c13b | 21
18 | The overconfidence effect is a well-established bias in which a person's subjective confidence in his or her judgments is reliably greater than the objective accuracy of those judgments, especially when confidence is relatively high. Overconfidence is one example of a miscalibration of subjective probabilities. Throughout the research literature, overconfidence has been defined in three distinct ways: (1) overestimation of one's actual performance; (2) overplacement of one's performance relative to others; and (3) overprecision in expressing unwarranted certainty in the accuracy of one's beliefs.
The most common way in which overconfidence has been studied is by asking people how confident they are of specific beliefs they hold or answers they provide. The data show that confidence systematically exceeds accuracy, implying people are more sure that they are correct than they deserve to be. If human confidence had perfect calibration, judgments with 100% confidence would be correct 100% of the time, 90% confidence correct 90% of the time, and so on for the other levels of confidence. By contrast, the key finding is that confidence exceeds accuracy so long as the subject is answering hard questions about an unfamiliar topic. For example, in a spelling task, subjects were correct about 80% of the time, whereas they claimed to be 100% certain. Put another way, the error rate was 20% when subjects expected it to be 0%. In a series where subjects made true-or-false responses to general knowledge statements, they were overconfident at all levels. When they were 100% certain of their answer to a question, they were wrong 20% of the time.
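The calibration comparison described above can be illustrated with a short Python sketch; the judgment data here are invented for demonstration and are not taken from any of the cited studies. Answers are grouped by stated confidence, and each group's accuracy is printed next to the confidence level: under perfect calibration the two would match, while overconfidence shows up as accuracy falling short of confidence.

```python
# Minimal sketch (hypothetical data): comparing stated confidence with accuracy.

from collections import defaultdict

# (stated confidence, answer was correct?) -- illustrative values only
judgments = [(1.0, True), (1.0, False), (0.9, True), (0.9, False),
             (0.9, True), (0.7, True), (0.7, False), (0.5, False)]

bins = defaultdict(list)
for conf, correct in judgments:
    bins[conf].append(correct)

for conf in sorted(bins, reverse=True):
    outcomes = bins[conf]
    accuracy = sum(outcomes) / len(outcomes)
    # Overconfidence: accuracy below the stated confidence level in a bin.
    print(f"confidence {conf:.0%}: accuracy {accuracy:.0%} over {len(outcomes)} answers")
```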
One manifestation of the overconfidence effect is the tendency to overestimate one's standing on a dimension of judgment or performance. This subsection of overconfidence focuses on the certainty one feels in one's own ability, performance, level of control, or chance of success. This phenomenon is most likely to occur on hard tasks, hard items, when failure is likely, or when the individual making the estimate is not especially skilled. Overestimation has also been seen to occur in domains other than those pertaining to one's own performance; these include the illusion of control and the planning fallacy.
Illusion of control
Illusion of control describes the tendency for people to behave as if they might have some control when in fact they have none. However, evidence does not support the notion that people systematically overestimate how much control they have; when they have a great deal of control, people tend to underestimate how much control they have.
The planning fallacy describes the tendency for people to overestimate their rate of work or to underestimate how long it will take them to get things done. It is strongest for long and complicated tasks, and disappears or reverses for simple tasks that are quick to complete.
Wishful-thinking effects, in which people overestimate the likelihood of an event because of its desirability, are relatively rare. This may be in part because people engage in more defensive pessimism in advance of important outcomes, in an attempt to reduce the disappointment that follows overly optimistic predictions.
Overprecision is the excessive confidence that one knows the truth. For reviews, see Harvey (1997) or Hoffrage (2004). Much of the evidence for overprecision comes from studies in which participants are asked about their confidence that individual items are correct. This paradigm, while useful, cannot distinguish overestimation from overprecision; they are one and the same in these item-confidence judgments. After making a series of item-confidence judgments, if people try to estimate the number of items they got right, they do not tend to systematically overestimate their scores. The average of their item-confidence judgments exceeds the count of items they claim to have gotten right. One possible explanation for this is that item-confidence judgments were inflated by overprecision, and that their judgments do not demonstrate systematic overestimation.
The strongest evidence of overprecision comes from studies in which participants are asked to indicate how precise their knowledge is by specifying a 90% confidence interval around estimates of specific quantities. If people were perfectly calibrated, their 90% confidence intervals would include the correct answer 90% of the time. In fact, hit rates are often as low as 50%, suggesting people have drawn their confidence intervals too narrowly, implying that they think their knowledge is more accurate than it actually is.
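A minimal sketch of that interval-estimation paradigm, assuming made-up intervals rather than data from the cited studies, is shown below: it simply counts how often stated 90% confidence intervals contain the true value. A hit rate far below 90%, as in this toy example, is what the literature reads as overprecision.

```python
# Sketch (made-up numbers): the "hit rate" of 90% confidence intervals.
# Well-calibrated intervals would contain the true value about 90% of the time;
# hit rates near 50% indicate intervals drawn too narrowly (overprecision).

# Each entry: (lower bound, upper bound, true value) for some quantity estimate.
intervals = [
    (300, 400, 431),    # miss
    (10, 30, 17),       # hit
    (1900, 1950, 1969), # miss
    (50, 150, 96),      # hit
]

hits = sum(lo <= truth <= hi for lo, hi, truth in intervals)
print(f"hit rate: {hits / len(intervals):.0%}")  # 50% here, well below the stated 90%
```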
Overplacement is the most prominent manifestation of the overconfidence effect: a belief that erroneously rates oneself as better than others. This subsection of overconfidence occurs when people believe themselves to be better than others, or "better-than-average". It is the act of placing or rating yourself above others (superior to others). Overplacement more often occurs on simple tasks, ones we believe are easy to accomplish successfully.
Perhaps the most celebrated better-than-average finding is Svenson's (1981) finding that 93% of American drivers rate themselves as better than the median. The frequency with which school systems claim their students outperform national averages has been dubbed the "Lake Wobegon" effect, after Garrison Keillor's apocryphal town in which "all the children are above average." Overplacement has likewise been documented in a wide variety of other circumstances. Kruger (1999), however, showed that this effect is limited to "easy" tasks in which success is common or in which people feel competent. For difficult tasks, the effect reverses itself and people believe they are worse than others.
Some researchers have claimed that people think good things are more likely to happen to them than to others, whereas bad events were less likely to happen to them than to others. But others have pointed out that prior work tended to examine good outcomes that happened to be common (such as owning one's own home) and bad outcomes that happened to be rare (such as being struck by lightning). Event frequency accounts for a proportion of prior findings of comparative optimism. People think common events (such as living past 70) are more likely to happen to them than to others, and rare events (such as living past 100) are less likely to happen to them than to others.
Taylor and Brown (1988) have argued that people cling to overly positive beliefs about themselves, illusions of control, and beliefs in false superiority, because it helps them cope and thrive. Although there is some evidence that optimistic beliefs are correlated with better life outcomes, most of the research documenting such links is vulnerable to the alternative explanation that their forecasts are accurate.
Overconfidence has been called the most "pervasive and potentially catastrophic" of all the cognitive biases to which human beings fall victim. It has been blamed for lawsuits, strikes, wars, and stock market bubbles and crashes.
Strikes, lawsuits, and wars could arise from overplacement. If plaintiffs and defendants were prone to believe that they were more deserving, fair, and righteous than their legal opponents, that could help account for the persistence of inefficient enduring legal disputes. If corporations and unions were prone to believe that they were stronger and more justified than the other side, that could contribute to their willingness to endure labor strikes. If nations were prone to believe that their militaries were stronger than were those of other nations, that could explain their willingness to go to war.
Overprecision could have important implications for investing behavior and stock market trading. Because Bayesians cannot agree to disagree, classical finance theory has trouble explaining why, if stock market traders are fully rational Bayesians, there is so much trading in the stock market. Overprecision might be one answer. If market actors are too sure their estimates of an asset's value are correct, they will be too willing to trade with others who have different information than they do.
Oskamp (1965) tested groups of clinical psychologists and psychology students on a multiple-choice task in which they drew conclusions from a case study. Along with their answers, subjects gave a confidence rating in the form of a percentage likelihood of being correct. This allowed confidence to be compared against accuracy. As the subjects were given more information about the case study, their confidence increased from 33% to 53%. However their accuracy did not significantly improve, staying under 30%. Hence this experiment demonstrated overconfidence which increased as the subjects had more information to base their judgment on.
Even if there is no general tendency toward overconfidence, social dynamics and adverse selection could conceivably promote it. For instance, those most likely to have the courage to start a new business are those who most overplace their abilities relative to those of other potential entrants. And if voters find confident leaders more credible, then contenders for leadership learn that they should express more confidence than their opponents in order to win election. However, overconfidence can be a liability or an asset during a political election. Candidates tend to lose their advantage when verbally expressed overconfidence does not match their current performance, and tend to gain an advantage when they express overconfidence non-verbally.
Overconfidence can be beneficial to individual self-esteem as well as giving an individual the will to succeed in their desired goal. Just believing in oneself may give one the will to take one's endeavours further than those who do not.
Very high levels of core self-evaluations, a stable personality trait composed of locus of control, neuroticism, self-efficacy, and self-esteem, may lead to the overconfidence effect. People who have high core self-evaluations will think positively of themselves and be confident in their own abilities, although extremely high levels of core self-evaluations may cause an individual to be more confident than is warranted.
- Calibrated probability assessment – Subjective probabilities assigned in a way that historically represents their uncertainty
- Confidence – State of trusting that a belief or course of action is correct
- Dunning–Kruger effect – Cognitive bias where people with low ability overestimate their skill
- False consensus effect – Attributional type of cognitive bias
- Hard–easy effect – A cognitive bias relating to mis-estimating success based on perceived difficulty
- Hindsight bias – Tendency to perceive past events as more predictable than they actually were at the time
- Heuristics in judgment and decision-making – Simple strategies or mental processes involved in making quick decisions
- Impostor syndrome – Psychological pattern of doubting one's accomplishments and fearing being exposed as a "fraud"
- List of cognitive biases – Systematic patterns of deviation from norm or rationality in judgment
- Misplaced loyalty – Loyalty placed where it is not respected or to an unworthy cause
- Optimism bias – Cognitive bias that causes someone to believe that they themselves are less likely to experience a negative event
- Political midlife crisis – Turning point or watershed moment in the fortunes of a governing entity
- Depressive realism – Hypothesis that depressed individuals make more realistic inferences than do non-depressed individuals
- Pallier, Gerry; Wilkinson, Rebecca; Danthiir, Vanessa; Kleitman, Sabina; Knezevic, Goran; Stankov, Lazar; Roberts, Richard D. (2002). "The Role of Individual Differences in the Accuracy of Confidence Judgments". The Journal of General Psychology. 129 (3): 257–299. doi:10.1080/00221300209602099. PMID 12224810. S2CID 6652634.
- Moore, Don A.; Healy, Paul J. (April 2008). "The trouble with overconfidence". Psychological Review. 115 (2): 502–517. doi:10.1037/0033-295X.115.2.502. ISSN 1939-1471. PMID 18426301.
- Moore, Don A.; Healy, Paul J. (2008). "The trouble with overconfidence". Psychological Review. 115 (2): 502–517. CiteSeerX 10.1.1.335.2777. doi:10.1037/0033-295X.115.2.502. PMID 18426301. Archived from the original on 2014-11-06.
- Moore, Don A.; Schatz, Derek (August 2017). "The three faces of overconfidence". Social and Personality Psychology Compass. 11 (8): e12331. doi:10.1111/spc3.12331. ISSN 1751-9004.
- Adams, P. A.; Adams, J. K. (1960). "Confidence in the recognition and reproduction of words difficult to spell". The American Journal of Psychology. 73 (4): 544–552. doi:10.2307/1419942. JSTOR 1419942. PMID 13681411.
- Lichtenstein, Sarah; Fischhoff, Baruch; Phillips, Lawrence D. (1982). "Calibration of probabilities: The state of the art to 1980". In Kahneman, Daniel; Slovic, Paul; Tversky, Amos (eds.). Judgment Under Uncertainty: Heuristics and Biases. Cambridge University Press. pp. 306–334. ISBN 978-0-521-28414-1.
- Langer, Ellen J. (1975). "The illusion of control". Journal of Personality and Social Psychology. 32 (2): 311–328. doi:10.1037/0022-35220.127.116.111. S2CID 30043741.
- Buehler, Roger; Griffin, Dale; Ross, Michael (1994). "Exploring the "planning fallacy": Why people underestimate their task completion times". Journal of Personality and Social Psychology. 67 (3): 366–381. doi:10.1037/0022-3518.104.22.1686. S2CID 4222578.
- Krizan, Zlatan; Windschitl, Paul D. (2007). "The influence of outcome desirability on optimism" (PDF). Psychological Bulletin. 133 (1): 95–121. doi:10.1037/0033-2909.133.1.95. PMID 17201572. Archived from the original (PDF) on 2014-12-17. Retrieved 2014-11-07.
- Norem, Julie K.; Cantor, Nancy (1986). "Defensive pessimism: Harnessing anxiety as motivation". Journal of Personality and Social Psychology. 51 (6): 1208–1217. doi:10.1037/0022-3522.214.171.1248. PMID 3806357.
- McGraw, A. Peter; Mellers, Barbara A.; Ritov, Ilana (2004). "The affective costs of overconfidence" (PDF). Journal of Behavioral Decision Making. 17 (4): 281–295. CiteSeerX 10.1.1.334.8499. doi:10.1002/bdm.472. Archived (PDF) from the original on 2016-03-04.
- Harvey, Nigel (1997). "Confidence in judgment". Trends in Cognitive Sciences. 1 (2): 78–82. doi:10.1016/S1364-6613(97)01014-0. PMID 21223868. S2CID 8645740.
- Hoffrage, Ulrich (2004). "Overconfidence". In Pohl, Rüdiger (ed.). Cognitive Illusions: a handbook on fallacies and biases in thinking, judgement and memory. Psychology Press. ISBN 978-1-84169-351-4.
- Gigerenzer, Gerd (1993). "The bounded rationality of probabilistic mental models". In Manktelow, K. I.; Over, D. E. (eds.). Rationality: Psychological and philosophical perspectives. London: Routledge. pp. 127–171. ISBN 9780415069557.
- Alpert, Marc; Raiffa, Howard (1982). "A progress report on the training of probability assessors". In Kahneman, Daniel; Slovic, Paul; Tversky, Amos (eds.). Judgment Under Uncertainty: Heuristics and Biases. Cambridge University Press. pp. 294–305. ISBN 978-0-521-28414-1.
- Vörös, Zsófia (2020). "Effect of the different forms of overconfidence on venture creation: Overestimation, overplacement and overprecision". Journal of Management & Organization. 19 (1): 1–14. doi:10.1017/jmo.2019.93.
- Svenson, Ola (1981). "Are we all less risky and more skillful than our fellow drivers?". Acta Psychologica. 47 (2): 143–148. doi:10.1016/0001-6918(81)90005-6.
- Cannell, John Jacob (1989). "How public educators cheat on standardized achievement tests: The "Lake Wobegon" report". Friends for Education. Archived from the original on 2014-11-07.
- Dunning, David (2005). Self-Insight: Roadblocks and Detours on the Path to Knowing Thyself. Psychology Press. ISBN 978-1841690742.
- Kruger, Justin (1999). "Lake Wobegon be gone! The "below-average effect" and the egocentric nature of comparative ability judgments". Journal of Personality and Social Psychology. 77 (2): 221–232. doi:10.1037/0022-35126.96.36.199. PMID 10474208.
- Weinstein, Neil D. (1980). "Unrealistic optimism about future life events". Journal of Personality and Social Psychology. 39 (5): 806–820. CiteSeerX 10.1.1.535.9244. doi:10.1037/0022-35188.8.131.526.
- Chambers, John R.; Windschitl, Paul D. (2004). "Biases in Social Comparative Judgments: The Role of Nonmotivated Factors in Above-Average and Comparative-Optimism Effects". Psychological Bulletin. 130 (5): 813–838. doi:10.1037/0033-2909.130.5.813. PMID 15367082. S2CID 15974667.
- Chambers, John R.; Windschitl, Paul D.; Suls, Jerry (2003). "Egocentrism, Event Frequency, and Comparative Optimism: When what Happens Frequently is "More Likely to Happen to Me"". Personality and Social Psychology Bulletin. 29 (11): 1343–1356. doi:10.1177/0146167203256870. PMID 15189574. S2CID 8593467.
- Kruger, Justin; Burrus, Jeremy (2004). "Egocentrism and focalism in unrealistic optimism (and pessimism)". Journal of Experimental Social Psychology. 40 (3): 332–340. doi:10.1016/j.jesp.2003.06.002.
- Taylor, Shelley E.; Brown, Jonathon D. (1988). "Illusion and well-being: A social psychological perspective on mental health". Psychological Bulletin. 103 (2): 193–210. CiteSeerX 10.1.1.385.9509. doi:10.1037/0033-2909.103.2.193. PMID 3283814.
- Kahneman, Daniel (19 October 2011). "Don't Blink! The Hazards of Confidence". New York Times. Adapted from: Kahneman, Daniel (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux. ISBN 978-1-4299-6935-2.
- Plous, Scott (1993). The Psychology of Judgment and Decision Making. McGraw-Hill Education. ISBN 978-0-07-050477-6.
- Thompson, Leigh; Loewenstein, George (1992). "Egocentric interpretations of fairness and interpersonal conflict" (PDF). Organizational Behavior and Human Decision Processes. 51 (2): 176–197. doi:10.1016/0749-5978(92)90010-5. Archived (PDF) from the original on 2014-11-07.
- Babcock, Linda C.; Olson, Craig A. (1992). "The Causes of Impasses in Labor Disputes". Industrial Relations. 31 (2): 348–360. doi:10.1111/j.1468-232X.1992.tb00313.x. S2CID 154389983.
- Johnson, Dominic D. P. (2004). Overconfidence and War: The Havoc and Glory of Positive Illusions. Harvard University Press. ISBN 978-0-674-01576-0.
- Aumann, Robert J. (1976). "Agreeing to Disagree". The Annals of Statistics. 4 (6): 1236–1239. doi:10.1214/aos/1176343654.
- Daniel, Kent; Hirshleifer, David; Subrahmanyam, Avanidhar (1998). "Investor Psychology and Security Market Under- and Overreactions" (PDF). The Journal of Finance. 53 (6): 1839–1885. doi:10.1111/0022-1082.00077. hdl:2027.42/73431. S2CID 32589687.
- Oskamp, Stuart (1965). "Overconfidence in case-study judgments" (PDF). Journal of Consulting Psychology. 29 (3): 261–265. doi:10.1037/h0022125. PMID 14303514. Archived (PDF) from the original on 2014-11-07. Reprinted in Kahneman, Daniel; Slovic, Paul; Tversky, Amos, eds. (1982). Judgment Under Uncertainty: Heuristics and Biases. Cambridge University Press. pp. 287–293. ISBN 978-0-521-28414-1.
- Radzevick, J. R.; Moore, D. A. (2009). "Competing To Be Certain (But Wrong): Social Pressure and Overprecision in Judgment" (PDF). Academy of Management Proceedings. 2009 (1): 1–6. doi:10.5465/AMBPP.2009.44246308. Archived from the original (PDF) on 2014-11-07.
- Tenney, Elizabeth R.; Hunsaker, David; Meikle, Nathan (2018). "Research: When Overconfidence Is an Asset, and When It's a Liability". Harvard Business Review.
- Fowler, James H.; Johnson, Dominic D. P. (2011-01-07). "On Overconfidence". Seed Magazine. ISSN 1499-0679. Archived from the original on 2011-08-12. Retrieved 2011-08-14.
- Judge, Timothy A.; Locke, Edwin A.; Durham, Cathy C. (1997). "The dispositional causes of job satisfaction: A core evaluations approach". Research in Organizational Behavior. 19. pp. 151–188. ISBN 978-0762301799.
- "Overconfidence". Psychology Today. Retrieved 2021-03-08.
- Rimondi, Christopher (2019-08-06). "Chernobyl, Anatoly Dyatlov and Engineering Arrogance". Medium. Retrieved 2021-03-08.
- Larrick, Richard P.; Burson, Katherine A.; Soll, Jack B. (2007). "Social comparison and confidence: When thinking you're better than average predicts overconfidence (and when it does not)". Organizational Behavior and Human Decision Processes. 102 (1): 76–94. doi:10.1016/j.obhdp.2006.10.002.
- Baron, Johnathan (1994). Thinking and Deciding. Cambridge University Press. pp. 219–224. ISBN 978-0-521-43732-5.
- Gilovich, Thomas; Griffin, Dale; Kahneman, Daniel (2002). Heuristics and Biases: The Psychology of Intuitive Judgment. Cambridge University Press. ISBN 978-0-521-79679-8.
- Sutherland, Stuart (2007). Irrationality. Pinter & Martin. pp. 172–178. ISBN 978-1-905177-07-3. | https://en.m.wikipedia.org/wiki/Overconfidence_effect | 21 |
109 | Dear Readers, here we have given a Practice English Reading Comprehension quiz with questions for the IBPS/Indian Bank PO Prelims 2018 Exams, with detailed explanations. Candidates who are preparing for upcoming Bank Exams can make use of it.
Direction (1-10): Read the passage carefully and answer the following questions.
Capital has been defined as that part of a person’s wealth, other than land, which yields an income or which aids in the production of further wealth.
Capital serves as an instrument of production. Anything which is used in production is capital.
In the ordinary language, capital is used in the sense of money. But when we talk of capital as a factor of production, it is quite wrong to confuse capital with money. There is no doubt that money is a form of wealth and it yields income, when it is lent out. But it cannot be called capital. Capital is a factor of production, but money as such does not serve as a factor of production. It is another thing that with money we can buy machinery and raw materials which then serve as factors of production.
Capital has been produced by man working with nature. Hence, capital may also be defined as man-made instrument of production. Capital, thus, consists of those goods which are produced for use in future production. Machines, tools and instruments, factories, canals, dams, transport equipment, stocks of raw materials, etc., are some of the examples of capital. All of them are produced by man to help in the production of further goods, services and wealth.
While money is used to purchase goods and services for consumption, capital is more durable and is used to generate wealth through investment. Examples of capital include automobiles, patents, software and brand names. All of these items are inputs that can be used to create wealth. Besides being used in production, capital can be rented out for a monthly or annual fee to create wealth.
Capital has its own peculiarities and characteristics which distinguish it from other factors of production.
Capital cannot produce without the help of the active services of labour. Labour is an active factor of production, whereas capital is a passive one. Capital on its own cannot produce anything until labour works on it.
The composition or supply of capital is not automatic, but it is produced with the joint efforts of labour and land. Therefore, capital is a produced means of production.
The total supply of land cannot be changed, whereas the supply of capital can be increased or decreased. If the residents of a country produce more or save more from their income, and these savings are invested in factories or capital goods, it increases the supply of capital.
Of all the factors of production, capital is the most mobile. Land is perfectly immobile. Labour and entrepreneur also lack mobility. Capital can be easily transported from one place to another.
The term “capital” can refer to a number of different concepts in the business world. While most people think of financial capital, or the money a company uses to fund operations, human capital and social capital are both important contributors to a company’s overall financial health.
The most common forms of financial capital are debt and equity.
Equity capital is funds paid into a business by investors in exchange for common or preferred stock. This represents the core funding of a business, to which debt capital funding may be added. The equity shareholders are the owners of the company who have significant control over its management. They enjoy the rewards and bear the risk of ownership.
In other words, it can be said that the equity capital refers to that portion of the organization’s capital, which is raised in exchange for the share of ownership in the company. These shares are called the equity shares.
There are various advantages of equity capital. Equity investors do not require a pledge of collateral, so existing business assets remain unencumbered and available to serve as security for loans. The firm has no obligation to redeem the equity shares since these have no maturity date. The equity capital acts as a cushion for the lenders: with a larger equity base, the company can easily raise additional funds on favorable terms, which increases the creditworthiness of the company. A further conducive feature of equity share capital is that the firm is not bound to pay dividends in case there is a cash deficit. The firm can skip the equity dividends without any legal consequences.
There are also some disadvantages of raising finance through the issue of equity shares. With the issue of more equity shares, ownership gets diluted, along with control over the management of the company. The cost of equity capital is high, since equity shareholders expect a higher rate of return than other investors.
Debt financing occurs when a firm sells fixed income products, such as bonds, bills, or notes, to investors to obtain the capital needed to grow and expand its operations. When a company issues a bond, the investors that purchase the bond are lenders who are either retail or institutional investors that provide the company with debt financing.
Human capital is a much less tangible concept, but its contribution to a company's success is no less important. Human capital refers to the skills and abilities a company's employees bring to the operation.
Social capital is an even more intangible asset, referring to the relationships people have to each other, and the desire they have to do things for and with others within their social networks.
Working capital is a measure of both a company's operational efficiency and its short-term financial health. Defined as the difference between a company's current assets and current liabilities, working capital is a measure of a company's short-term liquidity – more specifically, its ability to cover its debts, accounts payable and other obligations that are due within a year.
If a company’s current assets do not exceed its current liabilities, then it may have trouble paying back creditors or go bankrupt. A declining working capital ratio is a red flag for the firm and financial analysts.
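As an illustrative aside (not part of the passage, and the figures are hypothetical), the arithmetic behind working capital and the working capital ratio can be sketched as follows; a ratio below 1 corresponds to the red-flag situation the passage describes.

```python
# Minimal sketch with hypothetical figures: working capital and the
# working capital (current) ratio described in the passage.

current_assets = 500_000       # cash, receivables, inventory, etc.
current_liabilities = 400_000  # payables and other obligations due within a year

working_capital = current_assets - current_liabilities
working_capital_ratio = current_assets / current_liabilities

print(f"working capital: {working_capital}")                  # 100000
print(f"working capital ratio: {working_capital_ratio:.2f}")  # 1.25

# A ratio below 1 (current assets < current liabilities) signals the firm
# may struggle to cover its short-term debts and obligations.
```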
1) According to the passage, what is the most advantageous feature of equity capital?
a) equity shareholders are the actual owners of the firm
b) firm do not require a pledge of collateral.
c) firm has no obligation to redeem the equity shares
d) it increases the creditworthiness of the firm
e) firm is not bound to pay dividends, in case of cash deficit
2) According to the passage, which type of capital tells about the short term liquidity of the company?
a) equity capital
b) trading capital
c) working capital
d) debt capital
e) human capital
3) According to the passage, which of the following will not be counted as a characteristic of capital?
a) capital is a passive factor of production
b) capital is variable
c) capital is more mobile than other factors of production
d) capital depreciates
e) capital is a produced means of production
4) According to the passage, which of the following will not be considered as capital?
a) tools and instruments
b) brand name
c) bank balance
e) stock of raw materials
5) Which of the following statements is true in the context of the passage?
a) Money is a form of capital.
b) Capital can be defined as man-made instrument of production.
c) Social capital is a tangible asset of the firm.
d) None of the above
e) All are true
6) Find the incorrect statement on the basis of the given passage.
a) Human capital refers to the skills and abilities of a company's employees.
b) With the more issue of equity shares, the ownership gets diluted.
c) A declining working capital ratio is a sign of relief for a firm's financial health.
d) Debt financing occurs when a firm sells fixed income
e) All are correct
7) Choose the word which has the same meaning as the word 'intangible'.
8) Choose the word which has the same meaning as the word 'conducive'.
9) Choose the word which has the opposite meaning to the word 'durable'.
10) Choose the word which has the opposite meaning to the word 'diluted'.
1) Answer: e)
According to the passage the most advantageous feature of equity capital is that the firm is not bound to pay dividends, in case there is a cash deficit.
2) Answer: c)
It is given in the passage that working capital is a measure of a company’s short-term liquidity – more specifically, its ability to cover its debts, accounts payable and other obligations that are due within a year.
3) Answer: d)
According to the passage, ‘capital depreciates’ will not be counted as the characteristic of capital as it is not given in the passage.
4) Answer: c)
According to the passage, 'bank balance' will not be considered as capital, because it is clearly mentioned in the passage that capital consists of those goods which are produced for use in future production; so 'bank balance' will be considered as wealth, not capital.
5) Answer: b)
According to the passage, true statement is “capital can be defined as man-made instrument of production”.
6) Answer: c)
According to the passage, a declining working capital ratio is a red flag for financial analysts and the firm, as working capital indicates the firm's ability to cover its debts, accounts payable and other obligations that are due within a year.
7) Answer: c)
The meaning of “intangible” is “insubstantial / elusive / vague”.
8) Answer: b)
The meaning of “conducive” is “favorable / helpful / accommodating”.
9) Answer: a)
The meaning of “durable” is “strong / stable” and its opposite is “weak / tenuous”.
10) Answer: d)
The meaning of "diluted" is "weak / thinned" and its opposite is "strenuous / intense".
| https://www.ibpsguide.com/english-reading-comprehension-test-day-224/ | 21
36 | Partition of India
The Partition of India was the division of British India into two independent dominion states, India and Pakistan. The Dominion of India is today the Republic of India; the Dominion of Pakistan is today the Islamic Republic of Pakistan and the People's Republic of Bangladesh. The partition involved the division of two provinces, Bengal and Punjab, based on district-wide non-Muslim or Muslim majorities. The partition also saw the division of the British Indian Army, the Royal Indian Navy, the Indian Civil Service, the railways, and the central treasury. The partition was outlined in the Indian Independence Act 1947 and resulted in the dissolution of the British Raj, or Crown rule in India. The two self-governing countries of India and Pakistan legally came into existence at midnight on 15 August 1947.
|Date||15 August 1947|
|Outcome||Partition of British Indian Empire into independent dominions, India and Pakistan; sectarian violence, religious cleansing and refugee crises|
|Deaths||200,000 to 2 million deaths; 10 to 20 million displaced|
The term partition of India does not cover the secession of Bangladesh from Pakistan in 1971, nor the earlier separations of Burma (now Myanmar) and Ceylon (now Sri Lanka) from the administration of British India. The term also does not cover the political integration of princely states into the two new dominions, nor the disputes of annexation or division arising in the princely states of Hyderabad, Junagadh, and Jammu and Kashmir, though violence along religious lines did break out in some princely states at the time of the partition. It does not cover the incorporation of the enclaves of French India into India during the period 1947–1954, nor the annexation of Goa and other districts of Portuguese India by India in 1961. Other contemporaneous political entities in the region in 1947, the Kingdom of Sikkim, Kingdom of Bhutan, Kingdom of Nepal, and the Maldives were unaffected by the partition.
Among princely states, the violence was often highly organised with the involvement or complicity of the rulers. It is believed that in the Sikh states (except for Jind and Kapurthala) the Maharajas were complicit in the ethnic cleansing of Muslims, while other Maharajas such as those of Patiala, Faridkot, and Bharatpur were heavily involved in ordering them. The ruler of Bharatpur is said to have witnessed the ethnic cleansing of his population, especially at places such as Deeg.
Background, pre-World War II (1905–1938)
Partition of Bengal: 1905
- Maps (1909): percentage of Hindus; percentage of Muslims; percentage of Sikhs, Buddhists, and Jains.
In 1905, during his second term as Viceroy of India, Lord Curzon divided the Bengal Presidency—the largest administrative subdivision in British India—into the Muslim-majority province of Eastern Bengal and Assam and the Hindu-majority province of Bengal (present-day Indian states of West Bengal, Bihar, Jharkhand, and Odisha). Curzon's act, the partition of Bengal—which had been contemplated by various colonial administrations since the time of Lord William Bentinck, though never acted upon—was to transform nationalist politics as nothing else before it.
The Hindu elite of Bengal, many of whom owned land that was leased out to Muslim peasants in East Bengal, protested strongly. The large Bengali-Hindu middle-class (the Bhadralok), upset at the prospect of Bengalis being outnumbered in the new Bengal province by Biharis and Oriyas, felt that Curzon's act was punishment for their political assertiveness. The pervasive protests against Curzon's decision predominantly took the form of the Swadeshi ('buy Indian') campaign, involving a boycott of British goods. Sporadically, but flagrantly, the protesters also took to political violence, which involved attacks on civilians. The violence, however, would be ineffective, as most planned attacks were either pre-empted by the British or failed. The rallying cry for both types of protest was the slogan Bande Mataram (Bengali, lit: 'Hail to the Mother'), the title of a song by Bankim Chandra Chatterjee, which invoked a mother goddess, who stood variously for Bengal, India, and the Hindu goddess Kali. The unrest spread from Calcutta to the surrounding regions of Bengal when Calcutta's English-educated students returned home to their villages and towns. The religious stirrings of the slogan and the political outrage over the partition were combined as young men, in such groups as Jugantar, took to bombing public buildings, staging armed robberies, and assassinating British officials. Since Calcutta was the imperial capital, both the outrage and the slogan soon became known nationally.
The overwhelming, predominantly-Hindu protest against the partition of Bengal, along with the fear of reforms favouring the Hindu majority, led the Muslim elite of India in 1906 to the new viceroy Lord Minto, asking for separate electorates for Muslims. In conjunction, they demanded proportional legislative representation reflecting both their status as former rulers and their record of cooperating with the British. This would result in the founding of the All-India Muslim League in Dacca in December 1906. Although Curzon by now had returned to England following his resignation over a dispute with his military chief, Lord Kitchener, the League was in favor of his partition plan. The Muslim elite's position, which was reflected in the League's position, had crystallized gradually over the previous three decades, beginning with the 1871 Census of British India, which had first estimated the populations in regions of Muslim majority. For his part, Curzon's desire to court the Muslims of East Bengal had arisen from British anxieties ever since the 1871 census, and in light of the history of Muslims fighting them in the 1857 Mutiny and the Second Anglo-Afghan War.
In the three decades since the 1871 census, Muslim leaders across northern India had intermittently experienced public animosity from some of the new Hindu political and social groups. The Arya Samaj, for example, had not only supported Cow Protection Societies in their agitation, but also—distraught at the Census' Muslim numbers—organized "reconversion" events for the purpose of welcoming Muslims back to the Hindu fold. In the United Provinces, Muslims became anxious in the late-19th century as Hindu political representation increased, and Hindus were politically mobilized in the Hindi-Urdu controversy and the anti-cow-killing riots of 1893. In 1905 Muslim fears grew when Tilak and Lajpat Rai attempted to rise to leadership positions in the Congress, and the Congress itself rallied around the symbolism of Kali. It was not lost on many Muslims, for example, that the bande mataram rallying cry had first appeared in the novel Anandmath in which Hindus had battled their Muslim oppressors. Lastly, the Muslim elite, including Nawab of Dacca, Khwaja Salimullah, who hosted the League's first meeting in his mansion in Shahbag, was aware that a new province with a Muslim majority would directly benefit Muslims aspiring to political power.
World War I, Lucknow Pact: 1914–1918
World War I would prove to be a watershed in the imperial relationship between Britain and India. 1.4 million Indian and British soldiers of the British Indian Army would take part in the war, and their participation would have a wider cultural fallout: news of Indian soldiers fighting and dying with British soldiers, as well as soldiers from dominions like Canada and Australia, would travel to distant corners of the world both in newsprint and by the new medium of the radio. India's international profile would thereby rise and would continue to rise during the 1920s. It was to lead, among other things, to India, under its name, becoming a founding member of the League of Nations in 1920 and participating, under the name, "Les Indes Anglaises" (British India), in the 1920 Summer Olympics in Antwerp. Back in India, especially among the leaders of the Indian National Congress, it would lead to calls for greater self-government for Indians.
The 1916 Lucknow Session of the Congress was also the venue of an unanticipated mutual effort by the Congress and the Muslim League, the occasion for which was provided by the wartime partnership between Germany and Turkey. Since the Turkish Sultan, or Khalifah, also had sporadically claimed guardianship of the Islamic holy sites of Mecca, Medina, and Jerusalem, and since the British and their allies were now in conflict with Turkey, doubts began to increase among some Indian Muslims about the "religious neutrality" of the British, doubts that had already surfaced as a result of the reunification of Bengal in 1911, a decision that was seen as ill-disposed to Muslims. In the Lucknow Pact, the League joined the Congress in the proposal for greater self-government that was campaigned for by Tilak and his supporters; in return, the Congress accepted separate electorates for Muslims in the provincial legislatures as well as the Imperial Legislative Council. In 1916, the Muslim League had anywhere between 500 and 800 members and did not yet have its wider following among Indian Muslims of later years; in the League itself, the pact did not have unanimous backing, having largely been negotiated by a group of "Young Party" Muslims from the United Provinces (UP), most prominently, the brothers Mohammad and Shaukat Ali, who had embraced the Pan-Islamic cause. However, it did have the support of a young lawyer from Bombay, Muhammad Ali Jinnah, who was later to rise to leadership roles in both the League and the Indian independence movement. In later years, as the full ramifications of the pact unfolded, it was seen as benefiting the Muslim minority elites of provinces like UP and Bihar more than the Muslim majorities of Punjab and Bengal. At the time, the "Lucknow Pact" was an important milestone in nationalistic agitation and was seen so by the British.
Montagu–Chelmsford Reforms: 1919
Secretary of State for India, Montagu and Viceroy Lord Chelmsford presented a report in July 1918 after a long fact-finding trip through India the previous winter. After more discussion by the government and parliament in Britain, and another tour by the Franchise and Functions Committee to identify who among the Indian population could vote in future elections, the Government of India Act of 1919 (also known as the Montagu–Chelmsford Reforms) was passed in December 1919. The new Act enlarged both the provincial and Imperial legislative councils and repealed the Government of India's recourse to the "official majority" in unfavourable votes. Although departments like defence, foreign affairs, criminal law, communications, and income-tax were retained by the Viceroy and the central government in New Delhi, other departments like public health, education, land-revenue, local self-government were transferred to the provinces. The provinces themselves were now to be administered under a new dyarchical system, whereby some areas like education, agriculture, infrastructure development, and local self-government became the preserve of Indian ministers and legislatures, and ultimately the Indian electorates, while others like irrigation, land-revenue, police, prisons, and control of media remained within the purview of the British governor and his executive council. The new Act also made it easier for Indians to be admitted into the civil service and the army officer corps.
A greater number of Indians were now enfranchised, although, for voting at the national level, they constituted only 10% of the total adult male population, many of whom were still illiterate. In the provincial legislatures, the British continued to exercise some control by setting aside seats for special interests they considered cooperative or useful. In particular, rural candidates, generally sympathetic to British rule and less confrontational, were assigned more seats than their urban counterparts. Seats were also reserved for non-Brahmins, landowners, businessmen, and college graduates. The principle of "communal representation," an integral part of the Minto-Morley Reforms, and more recently of the Congress-Muslim League Lucknow Pact, was reaffirmed, with seats being reserved for Muslims, Sikhs, Indian Christians, Anglo-Indians, and domiciled Europeans, in both provincial and Imperial legislative councils. The Montagu-Chelmsford reforms offered Indians the most significant opportunity yet for exercising legislative power, especially at the provincial level; however, that opportunity was also restricted by the still limited number of eligible voters, by the small budgets available to provincial legislatures, and by the presence of rural and special interest seats that were seen as instruments of British control.
Introduction of the two-nation theory: 1924
The two-nation theory is the ideology that the primary identity and unifying denominator of Muslims in the Indian subcontinent is their religion, rather than their language or ethnicity, and therefore Indian Hindus and Muslims are two distinct nations regardless of commonalities. The two-nation theory was a founding principle of the Pakistan Movement (i.e., the ideology of Pakistan as a Muslim nation-state in South Asia), and the partition of India in 1947.
The ideology that religion is the determining factor in defining the nationality of Indian Muslims was taken up by Muhammad Ali Jinnah, who termed it the awakening of Muslims for the creation of Pakistan. It is also a source of inspiration to several Hindu nationalist organizations, with causes as varied as the redefinition of Indian Muslims as non-Indian foreigners and second-class citizens in India, the expulsion of all Muslims from India, the establishment of a legally Hindu state in India, prohibition of conversions to Islam, and the promotion of conversions or reconversions of Indian Muslims to Hinduism.
Under my scheme the Muslims will have four Muslim States: (1) The Pathan Province or the North-West Frontier; (2) Western Punjab (3) Sindh and (4) Eastern Bengal. If there are small Muslim communities in any other part of India, sufficiently large to form a province, they should be similarly constituted. But it should be distinctly understood that this is not a united India. It means a clear partition of India into a Muslim India and a non-Muslim India.
There are varying interpretations of the two-nation theory, based on whether the two postulated nationalities can coexist in one territory or not, with radically different implications. One interpretation argued for sovereign autonomy, including the right to secede, for Muslim-majority areas of the Indian subcontinent, but without any transfer of populations (i.e., Hindus and Muslims would continue to live together). A different interpretation contends that Hindus and Muslims constitute "two distinct and frequently antagonistic ways of life and that therefore they cannot coexist in one nation." In this version, a transfer of populations (i.e., the total removal of Hindus from Muslim-majority areas and the total removal of Muslims from Hindu-majority areas) was a desirable step towards a complete separation of two incompatible nations that "cannot coexist in a harmonious relationship."
Opposition to the theory has come from two sources. The first is the concept of a single Indian nation, of which Hindus and Muslims are two intertwined communities. This is a founding principle of the modern, officially-secular Republic of India. Even after the formation of Pakistan, debates on whether Muslims and Hindus are distinct nationalities or not continued in that country as well. The second source of opposition is the concept that while Indians are not one nation, neither are the Muslims or Hindus of the subcontinent, and it is instead the relatively homogeneous provincial units of the subcontinent which are true nations and deserving of sovereignty; this view has been presented by the Baloch, Sindhi, and Pashtun sub-nationalities of Pakistan and the Assamese and Punjabi sub-nationalities of India.
Muslim homeland, provincial elections: 1930–1938
In 1933, Choudhry Rahmat Ali had produced a pamphlet, entitled Now or never, in which the term Pakistan, 'land of the pure,' comprising the Punjab, North West Frontier Province (Afghania), Kashmir, Sindh, and Balochistan, was coined for the first time. However, the pamphlet did not attract political attention and, a little later, a Muslim delegation to the Parliamentary Committee on Indian Constitutional Reforms gave short shrift to the idea of Pakistan, calling it "chimerical and impracticable." In 1932, British Prime Minister Ramsay MacDonald accepted Dr. Ambedkar's demand for the "Depressed Classes" to have separate representation in the central and provincial legislatures. The Muslim League favoured the award as it had the potential to weaken the Hindu caste leadership. However, Mahatma Gandhi, who was seen as a leading advocate for Dalit rights, went on a fast to persuade the British to repeal the award. Ambedkar had to back down when it seemed Gandhi's life was threatened.
Two years later, the Government of India Act 1935 introduced provincial autonomy, increasing the number of voters in India to 35 million. More significantly, law and order issues were for the first time devolved from British authority to provincial governments headed by Indians. This increased Muslim anxieties about eventual Hindu domination. In the 1937 Indian provincial elections, the Muslim League turned out its best performance in Muslim-minority provinces such as the United Provinces, where it won 29 of the 64 reserved Muslim seats. However, in the Muslim-majority regions of the Punjab and Bengal regional parties outperformed the League. In Punjab, the Unionist Party of Sikandar Hayat Khan, won the elections and formed a government, with the support of the Indian National Congress and the Shiromani Akali Dal, which lasted five years. In Bengal, the League had to share power in a coalition headed by A. K. Fazlul Huq, the leader of the Krishak Praja Party.
The Congress, on the other hand, with 716 wins in the total of 1585 provincial assemblies seats, was able to form governments in 7 out of the 11 provinces of British India. In its manifesto, Congress maintained that religious issues were of lesser importance to the masses than economic and social issues. However, the election revealed that Congress had contested just 58 out of the total 482 Muslim seats, and of these, it won in only 26. In UP, where the Congress won, it offered to share power with the League on condition that the League stops functioning as a representative only of Muslims, which the League refused. This proved to be a mistake as it alienated Congress further from the Muslim masses. Besides, the new UP provincial administration promulgated cow protection and the use of Hindi. The Muslim elite in UP was further alienated, when they saw chaotic scenes of the new Congress Raj, in which rural people who sometimes turned up in large numbers in Government buildings, were indistinguishable from the administrators and the law enforcement personnel.
The Muslim League conducted its investigation into the conditions of Muslims under Congress-governed provinces. The findings of such investigations increased fear among the Muslim masses of future Hindu domination. The view that Muslims would be unfairly treated in an independent India dominated by the Congress was now a part of the public discourse of Muslims.
Background, during and post-World War II (1939–1947)
With the outbreak of World War II in 1939, Lord Linlithgow, Viceroy of India, declared war on India's behalf without consulting Indian leaders, leading the Congress provincial ministries to resign in protest. By contrast the Muslim League, which functioned under state patronage, organized "Deliverance Day" celebrations (from Congress dominance) and supported Britain in the war effort. When Linlithgow met with nationalist leaders, he gave the same status to Jinnah as he did to Gandhi, and a month later described the Congress as a "Hindu organization."
In March 1940, in the League's annual three-day session in Lahore, Jinnah gave a two-hour speech in English, in which were laid out the arguments of the Two-nation theory, stating, in the words of historians Talbot and Singh, that "Muslims and Hindus…were irreconcilably opposed monolithic religious communities and as such, no settlement could be imposed that did not satisfy the aspirations of the former." On the last day of its session, the League passed, what came to be known as the Lahore Resolution, sometimes also "Pakistan Resolution," demanding that "the areas in which the Muslims are numerically in the majority as in the North-Western and Eastern zones of India should be grouped to constitute independent states in which the constituent units shall be autonomous and sovereign." Though it had been founded more than three decades earlier, the League would gather support among South Asian Muslims only during the Second World War.
August Offer, Churchill proposal: 1940–1942
In August 1940, Lord Linlithgow proposed that India be granted a Dominion status after the war. Having not taken the Pakistan idea seriously, Linlithgow supposed that what Jinnah wanted was a non-federal arrangement without Hindu domination. To allay Muslim fears of Hindu domination, the "August Offer" was accompanied by the promise that a future constitution would consider the views of minorities. Neither the Congress nor the Muslim League were satisfied with the offer, and both rejected it in September. The Congress once again started a program of civil disobedience.
In March 1942, with the Japanese fast moving up the Malayan Peninsula after the Fall of Singapore, and with the Americans supporting independence for India, Winston Churchill, the wartime Prime Minister of Britain, sent Sir Stafford Cripps, leader of the House of Commons, with an offer of dominion status to India at the end of the war in return for the Congress's support for the war effort. Because the British did not wish to lose the support of the allies they had already secured (the Muslim League, the Unionists of Punjab, and the Princes), Cripps's offer included a clause stating that no part of the British Indian Empire would be forced to join the post-war Dominion. The League rejected the offer, seeing the clause as insufficient recognition of the principle of Pakistan. Because of that same proviso, the proposals were also rejected by the Congress, which, since its founding as a polite group of lawyers in 1885, had seen itself as the representative of all Indians of all faiths. After the arrival in 1920 of Gandhi, the pre-eminent strategist of Indian nationalism, the Congress had been transformed into a mass nationalist movement of millions.
Quit India Resolution
In August 1942, Congress launched the Quit India Resolution, asking for drastic constitutional changes which the British saw as the most serious threat to their rule since the Indian rebellion of 1857. With their resources and attention already spread thin by a global war, the nervous British immediately jailed the Congress leaders and kept them in jail until August 1945, whereas the Muslim League was now free for the next three years to spread its message. Consequently, the Muslim League's ranks surged during the war, with Jinnah himself admitting, "The war which nobody welcomed proved to be a blessing in disguise." Although there were other important national Muslim politicians such as Congress leader Abul Kalam Azad, and influential regional Muslim politicians such as A. K. Fazlul Huq of the leftist Krishak Praja Party in Bengal, Sikander Hyat Khan of the landlord-dominated Punjab Unionist Party, and Abd al-Ghaffar Khan of the pro-Congress Khudai Khidmatgar (popularly, "red shirts") in the North West Frontier Province, the British were to increasingly see the League as the main representative of Muslim India. The Muslim League's demand for Pakistan pitted it against the British and Congress.
In January 1946, mutinies broke out in the armed services, starting with RAF servicemen frustrated with their slow repatriation to Britain. The insurgencies came to a head in February 1946 with the mutiny of the Royal Indian Navy in Bombay, followed by others in Calcutta, Madras, and Karachi. Although the mutinies were rapidly suppressed, they had the effect of spurring the Attlee government to action. Labour Prime Minister Clement Attlee had been deeply interested in Indian independence since the 1920s and had supported it for years. He now took charge of the government position and gave the issue the highest priority. A Cabinet Mission was sent to India, led by the Secretary of State for India, Lord Pethick-Lawrence, and also including Sir Stafford Cripps, who had visited India four years before. The objective of the mission was to arrange for an orderly transfer to independence.
In early 1946, new elections were held in India. With the announcement of the polls, the line had been drawn for Muslim voters to choose between a united Indian state or partition. At the end of the war in 1945, the colonial government had announced the public trial of three senior officers of Subhas Chandra Bose's defeated Indian National Army (INA) who stood accused of treason. Now, as the trials began, the Congress leadership, although it had never supported the INA, chose to defend the accused officers. The subsequent convictions of the officers, the public outcry against the convictions, and the eventual remission of the sentences created positive propaganda for Congress, which helped the party win its subsequent electoral victories in eight of the eleven provinces. The negotiations between the Congress and the Muslim League, however, stumbled over the issue of partition.
British rule had lost its legitimacy for most Hindus, and conclusive proof of this came in the form of the 1946 elections, in which the Congress won 91 percent of the vote among non-Muslim constituencies, thereby gaining a majority in the Central Legislature, forming governments in eight provinces, and becoming the legitimate successor to the British government for most Hindus. Had the British intended to stay in India, the acquiescence of politically active Indians to British rule would have been in doubt after these election results, although the views of many rural Indians were uncertain even at that point. The Muslim League won the majority of the Muslim vote as well as most reserved Muslim seats in the provincial assemblies, and it also secured all the Muslim seats in the Central Assembly.
- An aged and abandoned Muslim couple and their grandchildren are sitting by the roadside on this arduous journey. "The old man is dying of exhaustion. The caravan has gone on," wrote Bourke-White.
- An old Sikh man is carrying his wife. Over 10 million people were uprooted from their homeland and traveled on foot, bullock carts and trains to their promised new home.
- Gandhi in Bela, Bihar, after attacks on Muslims, 28 March 1947.
Cabinet Mission: July 1946
Recovering from its poor performance in the 1937 elections, the Muslim League was finally able to make good on the claim that it and Jinnah alone represented India's Muslims, and Jinnah quickly interpreted this vote as a popular demand for a separate homeland. However, tensions heightened when the Muslim League was unable to form ministries outside the two provinces of Sind and Bengal, with the Congress forming a ministry in the NWFP and the key Punjab province coming under a coalition ministry of the Congress, Sikhs, and Unionists.
The British, while not approving of a separate Muslim homeland, appreciated the simplicity of a single voice to speak on behalf of India's Muslims. Britain had wanted India and its army to remain united to keep India in its system of 'imperial defence'. With India's two political parties unable to agree, Britain devised the Cabinet Mission Plan. Through this mission, Britain hoped to preserve the united India which it and the Congress desired, while concurrently securing the essence of Jinnah's demand for a Pakistan through 'groupings'. The Cabinet Mission scheme envisaged a federal arrangement consisting of three groups of provinces. Two of these groupings would consist of predominantly Muslim provinces, while the third grouping would be made up of the predominantly Hindu regions. The provinces would be autonomous, but the centre would retain control over defence, foreign affairs, and communications. Though the proposals did not offer an independent Pakistan, the Muslim League accepted them. Even though the unity of India would have been preserved, the Congress leaders, especially Nehru, believed it would leave the centre weak. On 10 July 1946, Nehru gave a "provocative speech," rejected the idea of grouping the provinces, and "effectively torpedoed" both the Cabinet Mission plan and the prospect of a united India.
Direct Action Day: August 1946
After the Cabinet Mission broke down, Jinnah proclaimed 16 August 1946 Direct Action Day, with the stated goal of peacefully highlighting the demand for a Muslim homeland in British India. However, on the morning of the 16th, armed Muslim gangs gathered at the Ochterlony Monument in Calcutta to hear Huseyn Shaheed Suhrawardy, the League's Chief Minister of Bengal, who, in the words of historian Yasmin Khan, "if he did not explicitly incite violence certainly gave the crowd the impression that they could act with impunity, that neither the police nor the military would be called out and that the ministry would turn a blind eye to any action they unleashed in the city." That very evening, in Calcutta, Hindus were attacked by returning Muslim celebrants, who carried pamphlets, distributed earlier, that showed a clear connection between violence and the demand for Pakistan and directly implicated the celebration of Direct Action Day in the outbreak of the cycle of violence that would later be called the "Great Calcutta Killing of August 1946". The next day, Hindus struck back, and the violence continued for three days in which approximately 4,000 people died (according to official accounts), both Hindus and Muslims. Although India had had outbreaks of religious violence between Hindus and Muslims before, the Calcutta killings were the first to display elements of "ethnic cleansing". Violence was not confined to the public sphere; homes were entered and destroyed, and women and children were attacked. Although the Government of India and the Congress were both shaken by the course of events, in September a Congress-led interim government was installed, with Jawaharlal Nehru as united India's prime minister.
The communal violence spread to Bihar (where Hindus attacked Muslims), to Noakhali in Bengal (where Muslims targeted Hindus), to Garhmukteshwar in the United Provinces (where Hindus attacked Muslims), and on to Rawalpindi in March 1947, where Hindus were attacked or driven out by Muslims.
Plan for partition: 1946–1947
The British Prime Minister Attlee appointed Lord Louis Mountbatten as India's last viceroy, giving him the task to oversee British India's independence by June 1948, with the instruction to avoid partition and preserve a United India, but with adaptable authority to ensure a British withdrawal with minimal setbacks. Mountbatten hoped to revive the Cabinet Mission scheme for a federal arrangement for India. But despite his initial keenness for preserving the centre, the tense communal situation caused him to conclude that partition had become necessary for a quicker transfer of power.
Vallabhbhai Patel was one of the first Congress leaders to accept the partition of India as a solution to the rising Muslim separatist movement led by Muhammad Ali Jinnah. He had been outraged by Jinnah's Direct Action campaign, which had provoked communal violence across India, and by the viceroy's vetoes of his home department's plans to stop the violence on the grounds of constitutionality. Patel severely criticized the viceroy's induction of League ministers into the government and the revalidation of the grouping scheme by the British without Congress approval. Although further outraged at the League's boycott of the assembly and non-acceptance of the plan of 16 May despite entering government, he was also aware that Jinnah enjoyed popular support amongst Muslims and that an open conflict between him and the nationalists could degenerate into a Hindu-Muslim civil war. The continuation of a divided and weak central government would, in Patel's mind, result in the wider fragmentation of India by encouraging more than 600 princely states towards independence.
Between the months of December 1946 and January 1947, Patel worked with civil servant V. P. Menon on the latter's suggestion for a separate dominion of Pakistan created out of Muslim-majority provinces. Communal violence in Bengal and Punjab in January and March 1947 further convinced Patel of the soundness of partition. Patel, a fierce critic of Jinnah's demand that the Hindu-majority areas of Punjab and Bengal be included in a Muslim state, obtained the partition of those provinces, thus blocking any possibility of their inclusion in Pakistan. Patel's decisiveness on the partition of Punjab and Bengal had won him many supporters and admirers amongst the Indian public, which had been tired of the League's tactics. Still, he was criticized by Gandhi, Nehru, secular Muslims, and socialists for a perceived eagerness for the partition.
Proposal of the Indian Independence Act
When Lord Mountbatten formally proposed the plan on 3 June 1947, Patel gave his approval and lobbied Nehru and other Congress leaders to accept the proposal. Knowing Gandhi's deep anguish regarding proposals of partition, Patel engaged him in frank discussions in private meetings over the perceived practical unworkability of any Congress-League coalition, the rising violence, and the threat of civil war. At the All India Congress Committee meeting called to vote on the proposal, Patel said:
I fully appreciate the fears of our brothers from [the Muslim-majority areas]. Nobody likes the division of India, and my heart is heavy. But the choice is between one division and many divisions. We must face facts. We cannot give way to emotionalism and sentimentality. The Working Committee has not acted out of fear. But I am afraid of one thing, that all our toil and hard work of these many years might go waste or prove unfruitful. My nine months in office have completely disillusioned me regarding the supposed merits of the Cabinet Mission Plan. Except for a few honourable exceptions, Muslim officials from the top down to the chaprasis (peons or servants) are working for the League. The communal veto given to the League in the Mission Plan would have blocked India's progress at every stage. Whether we like it or not, de facto Pakistan already exists in the Punjab and Bengal. Under the circumstances, I would prefer a de jure Pakistan, which may make the League more responsible. Freedom is coming. We have 75 to 80 percent of India, which we can make strong with our genius. The League can develop the rest of the country.
Following Gandhi's rejection of the plan and the Congress's approval of it, Patel represented India on the Partition Council, where he oversaw the division of public assets and selected the Indian council of ministers with Nehru. However, neither he nor any other Indian leader had foreseen the intense violence and population transfer that would take place with partition. Late in 1946, the Labour government in Britain, its exchequer exhausted by the recently concluded World War II, decided to end British rule of India, and in early 1947 Britain announced its intention of transferring power no later than June 1948. However, with the British army unprepared for the potential for increased violence, the new viceroy, Louis Mountbatten, advanced the date for the transfer of power, allowing less than six months for a mutually agreed plan for independence.
In June 1947, the nationalist leaders, including Nehru and Abul Kalam Azad on behalf of the Congress, Jinnah representing the Muslim League, B. R. Ambedkar representing the Untouchable community, and Master Tara Singh representing the Sikhs, agreed to a partition of the country along religious lines, in stark opposition to Gandhi's views. The predominantly Hindu and Sikh areas were assigned to the new India and the predominantly Muslim areas to the new nation of Pakistan; the plan included a partition of the Muslim-majority provinces of Punjab and Bengal. The communal violence that accompanied the announcement of the Radcliffe Line, the line of partition, was even more horrific than the violence that had preceded it.
Describing the violence that accompanied the partition of India, historians Ian Talbot and Gurharpal Singh write:
There are numerous eyewitness accounts of the maiming and mutilation of victims. The catalogue of horrors includes the disemboweling of pregnant women, the slamming of babies' heads against brick walls, the cutting off of the victim's limbs and genitalia, and the displaying of heads and corpses. While previous communal riots had been deadly, the scale and level of brutality during the Partition massacres were unprecedented. Although some scholars question the use of the term 'genocide' concerning the partition massacres, much of the violence was manifested with genocidal tendencies. It was designed to cleanse an existing generation and prevent its future reproduction.
On 14 August 1947, the new Dominion of Pakistan came into being, with Muhammad Ali Jinnah sworn in as its first Governor-General in Karachi. The following day, 15 August 1947, India, now the Dominion of India, became an independent country, with official ceremonies taking place in New Delhi, Jawaharlal Nehru assuming the office of prime minister, and Viceroy Mountbatten staying on as the country's first Governor-General. Gandhi remained in Bengal to work with the new refugees from the partitioned subcontinent.
Geographic partition, 1947
The actual division of British India between the two new dominions was accomplished according to what has come to be known as the "3 June Plan" or "Mountbatten Plan". Mountbatten announced it at a press conference on 3 June 1947, at which the date of independence, 15 August 1947, was also given. The plan's main points were:
- Sikhs, Hindus and Muslims in the Punjab and Bengal legislative assemblies would meet and vote on partition. If a simple majority of either group wanted partition, then these provinces would be divided.
- Sind and Baluchistan were to make their own decision.
- The fate of North-West Frontier Province and Sylhet district of Assam was to be decided by a referendum.
- India would be independent by 15 August 1947.
- The separate independence of Bengal was ruled out.
- A boundary commission to be set up in case of partition.
The Indian political leaders accepted the Plan on 2 June. It could not deal with the question of the princely states, which were not British possessions, but on 3 June Mountbatten advised them against remaining independent and urged them to join one of the two new dominions.
The Muslim League's demands for a separate country were thus conceded. The Congress's position on unity was also taken into account while making Pakistan as small as possible. Mountbatten's formula was to divide India and, at the same time, retain maximum possible unity. Abul Kalam Azad expressed concern over the likelihood of violent riots, to which Mountbatten replied:
At least on this question I shall give you complete assurance. I shall see to it that there is no bloodshed and riot. I am a soldier and not a civilian. Once the partition is accepted in principle, I shall issue orders to see that there are no communal disturbances anywhere in the country. If there should be the slightest agitation, I shall adopt the sternest measures to nip the trouble in the bud.
On 3 June 1947, the partition plan was accepted by the Congress Working Committee. Boloji states that in Punjab, there were no riots, but there was communal tension, while Gandhi was reportedly isolated by Nehru and Patel and observed maun vrat (day of silence). Mountbatten visited Gandhi and said he hoped that he would not oppose the partition, to which Gandhi wrote the reply: "Have I ever opposed you?"
Within British India, the border between India and Pakistan (the Radcliffe Line) was determined by a British Government-commissioned report prepared under the chairmanship of a London barrister, Sir Cyril Radcliffe. Pakistan came into being with two non-contiguous enclaves, East Pakistan (today Bangladesh) and West Pakistan, separated geographically by India. India was formed out of the majority Hindu regions of British India, and Pakistan from the majority Muslim areas.
On 18 July 1947, the British Parliament passed the Indian Independence Act that finalized the arrangements for partition and abandoned British suzerainty over the princely states, of which there were several hundred, leaving them free to choose whether to accede to one of the new dominions or to remain independent outside both. The Government of India Act 1935 was adapted to provide a legal framework for the new dominions.
Following its creation as a new country in August 1947, Pakistan applied for membership of the United Nations and was accepted by the General Assembly on 30 September 1947. The Dominion of India continued to have the existing seat as India had been a founding member of the United Nations since 1945.
The Punjab—the region of the five rivers east of the Indus: Jhelum, Chenab, Ravi, Beas, and Sutlej—consists of interfluvial doabs ('two rivers'), or tracts of land lying between two confluent rivers:
- the Sindh-Sagar doab (between Indus and Jhelum);
- the Jech doab (Jhelum/Chenab);
- the Rechna doab (Chenab/Ravi);
- the Bari doab (Ravi/Beas); and
- the Bist doab (Beas/Sutlej).
In early 1947, in the months leading up to the deliberations of the Punjab Boundary Commission, the main disputed areas appeared to be in the Bari and Bist doabs. However, some areas in the Rechna doab were claimed by the Congress and Sikhs. In the Bari doab, the districts of Gurdaspur, Amritsar, Lahore, and Montgomery were all disputed. All districts (other than Amritsar, which was 46.5% Muslim) had Muslim majorities, although in Gurdaspur the Muslim majority, at 51.1%, was slender. At a smaller area-scale, only three tehsils (sub-units of a district) in the Bari doab had non-Muslim majorities: Pathankot, in the extreme north of Gurdaspur, which was not in dispute; and Amritsar and Tarn Taran in Amritsar district. Nonetheless, there were four Muslim-majority tehsils east of the Beas-Sutlej, in two of which Muslims outnumbered Hindus and Sikhs together.
Before the Boundary Commission began formal hearings, governments were set up for the East and the West Punjab regions. Their territories were provisionally divided by "notional division" based on simple district majorities. In both the Punjab and Bengal, the Boundary Commission consisted of two Muslim and two non-Muslim judges with Sir Cyril Radcliffe as a common chairman. The mission of the Punjab commission was worded generally as: "To demarcate the boundaries of the two parts of Punjab, based on ascertaining the contiguous majority areas of Muslims and non-Muslims. In doing so, it will take into account other factors." Each side (the Muslims and the Congress/Sikhs) presented its claim through counsel with no liberty to bargain. The judges, too, had no mandate to compromise, and on all major issues they "divided two and two, leaving Sir Cyril Radcliffe the invidious task of making the actual decisions."
Independence, population transfer and violence
- Train to Pakistan being given an honor-guard send-off. New Delhi railway station, 1947
- Rural Sikhs in a long oxcart train headed towards India. 1947.
- Two Muslim men (in a rural refugee train headed towards Pakistan) carrying an old woman in a makeshift doli or palanquin, 1947.
- A refugee train on its way to Punjab, Pakistan
Massive population exchanges occurred between the two newly formed states in the months immediately following the partition. There was no conception that population transfers would be necessary because of the partitioning. Religious minorities were expected to stay put in the states they found themselves residing in. However, an exception was made for Punjab, where the transfer of populations was organized because of the communal violence affecting the province; this did not apply to other provinces.
"The population of undivided India in 1947 was approx 390 million. After partition, there were 330 million people in India, 30 million in West Pakistan, and 30 million people in East Pakistan (now Bangladesh)." Once the boundaries were established, about 14.5 million people crossed the borders to what they hoped was the relative safety of religious majority. The 1951 Census of Pakistan identified the number of displaced persons in Pakistan at 7,226,600, presumably all Muslims who had entered Pakistan from India; the 1951 Census of India counted 7,295,870 displaced persons, apparently all Hindus and Sikhs who had moved to India from Pakistan immediately after the partition. The overall total is therefore around 14.5 million, although since both censuses were held about 4 years after the partition, these numbers include net population increase following the mass migration.
About 11.2 million (77.4% of the displaced persons) were in the west, most of them from the Punjab: 6.5 million Muslims moved from India to West Pakistan, and 4.7 million Hindus and Sikhs moved from West Pakistan to India; thus the net migration in the west from India to West Pakistan (now Pakistan) was 1.8 million. The other 3.3 million (22.6% of the displaced persons) were in the east: 2.6 million moved from East Pakistan to India, and 0.7 million moved from India to East Pakistan (now Bangladesh); thus, net migration in the east was 1.9 million into India.
Regions affected by Partition
The partition of British India split the former British province of Punjab between the Dominion of India and the Dominion of Pakistan. The mostly Muslim western part of the province became Pakistan's Punjab province; the mostly Hindu and Sikh eastern part became India's East Punjab state (later divided into the new states of Punjab, Haryana and Himachal Pradesh). Many Hindus and Sikhs lived in the west, and many Muslims lived in the east, and the fears of all such minorities were so great that the Partition saw many people displaced and much inter-communal violence. Some have described the violence in Punjab as a retributive genocide. Total migration across Punjab during the partition is estimated at around 12 million people; around 6.5 million Muslims moved from East Punjab to West Punjab, and 4.7 million Hindus and Sikhs moved from West Punjab to East Punjab.
The newly formed governments had not anticipated, and were completely unequipped for, a two-way migration of such staggering magnitude, and massive violence and slaughter occurred on both sides of the new India-Pakistan border. Estimates of the number of deaths vary, with low estimates at 200,000 and high estimates at 2,000,000. The worst of the violence is believed to have taken place in Punjab. Virtually no Muslim survived in East Punjab (except in Malerkotla), and virtually no Hindu or Sikh survived in West Punjab.
Lawrence James observed that "Sir Francis Mudie, the governor of West Punjab, estimated that 500,000 Muslims died trying to enter his province, while the British high commissioner in Karachi put the full total at 800,000. This makes nonsense of the claim by Mountbatten and his partisans that only 200,000 were killed" (James 1998: 636).
During this period, many alleged that Tara Singh was endorsing the killing of Muslims. On 3 March 1947, at Lahore, Singh, along with about 500 Sikhs, declared from a dais "Death to Pakistan." According to political scientist Ishtiaq Ahmed:
On March 3, radical Sikh leader Master Tara Singh famously flashed his kirpan (sword) outside the Punjab Assembly, calling for the destruction of the Pakistan idea prompting violent response by the Muslims mainly against Sikhs but also Hindus, in the Muslim-majority districts of northern Punjab. Yet, at the end of that year, more Muslims had been killed in East Punjab than Hindus and Sikhs together in West Punjab.
The province of Bengal was divided into the two separate entities of West Bengal, awarded to the Dominion of India, and East Bengal, awarded to the Dominion of Pakistan. East Bengal was renamed East Pakistan in 1955, and later became the independent nation of Bangladesh after the Bangladesh Liberation War of 1971.
While the Muslim majority districts of Murshidabad and Malda were given to India, the Hindu majority district of Khulna and the Buddhist majority, but sparsely populated, Chittagong Hill Tracts were given to Pakistan by the Radcliffe award.
Thousands of Hindus, located in the districts of East Bengal, which were awarded to Pakistan, found themselves being attacked, and this religious persecution forced hundreds of thousands of Hindus from East Bengal to seek refuge in India. The massive influx of Hindu refugees into Calcutta affected the demographics of the city. Many Muslims left the city for East Pakistan, and the refugee families occupied some of their homes and properties.
Chittagong Hill Tracts
The Buddhist-majority Chittagong Hill Tracts was given to Pakistan even though neither the British Parliament nor the Indian Independence Act 1947 gave the Boundary Commission a mandate to separate the Chittagong Hill Tracts from India. In 1947, the Chittagong Hill Tracts had a 98.5% Buddhist and Hindu majority. According to the Indian Independence Act 1947, the Indian province of Bengal was divided into West Bengal and East Bengal on religious grounds. The Chittagong Hill Tracts, however, had been an excluded area since 1900 and was not part of Bengal; it therefore had no representative in the Bengal Legislative Assembly in Calcutta.
On 15 August 1947, Chakma and other indigenous Buddhists celebrated independence day by hoisting the Indian flag in Rangamati, the capital of the Chittagong Hill Tracts. When the boundaries of Pakistan and India were announced by radio on 17 August 1947, they were shocked to learn that the Chittagong Hill Tracts had been awarded to Pakistan. The indigenous people sent a delegation led by Sneha Kumar Chakma to Delhi to seek help from the Indian leadership. Sneha Kumar Chakma contacted Deputy Prime Minister Vallabhbhai Patel by phone. Patel was willing to help but insisted that Sneha Kumar Chakma seek agreement from Prime Minister Jawaharlal Nehru. Nehru, however, refused to help, fearing that a military conflict over the Chittagong Hill Tracts might draw the British back to India.
The Baluch Regiment of the Pakistani Army entered the Chittagong Hill Tracts a week after independence and lowered the Indian flag at gunpoint on 21 August. East Pakistan viewed the indigenous Buddhist people as pro-India and systematically discriminated against them in jobs, education, trade, and economic opportunities. The situation of the indigenous people became worse after the emergence of Bangladesh in 1971. The Bangladesh government sponsored hundreds of thousands of Muslim settlers to migrate to the Chittagong Hill Tracts with the purpose of changing the demographic profile of the region, and sent tens of thousands of armed forces personnel to protect the settlers and suppress the indigenous Buddhist resistance. Bangladeshi armed forces and Muslim settlers committed more than 20 massacres in the Chittagong Hill Tracts, as well as numerous rapes, extrajudicial killings, acts of torture, forcible conversions, and land grabs.
At the time of partition, the majority of Sindh's prosperous upper and middle class was Hindu. The Hindus were mostly concentrated in cities and formed the majority of the population in cities including Hyderabad, Karachi, Shikarpur, and Sukkur. During the initial months after partition, only some Hindus migrated. However, by late 1947 and early 1948, the situation began to change. Large numbers of Muslim refugees from India started arriving in Sindh and began to live in crowded refugee camps.
On 6 December 1947, communal violence broke out in Ajmer in India, precipitated by an argument between some Sindhi Hindu refugees and local Muslims in the Dargah Bazaar. Violence in Ajmer broke out again in the middle of December, with stabbings, looting, and arson resulting in mostly Muslim casualties. Many Muslims fled across the Thar Desert to Sindh in Pakistan. This sparked further anti-Hindu riots in Hyderabad, Sindh. On 6 January anti-Hindu riots broke out in Karachi, leading to an estimated 1,100 casualties. The arrival of Sindhi Hindu refugees in the North Gujarat town of Godhra in March 1948 again sparked riots there, which led to more emigration of Muslims from Godhra to Pakistan. These events triggered a large-scale exodus of Hindus. An estimated 1.2–1.4 million Hindus migrated to India, primarily by ship or train.
Despite the migration, a significant Sindhi Hindu population still resides in Pakistan's Sindh province, where they numbered around 2.3 million according to Pakistan's 1998 census. Some districts in Sindh, such as Tharparkar, Umerkot, Mirpurkhas, Sanghar, and Badin, had Hindu majorities, but these have decreased drastically due to persecution. Owing to the religious persecution of Hindus in Pakistan, Hindus from Sindh are still migrating to India.
There was no mass violence in Gujarat as there was in Punjab and Bengal. However, Gujarat experienced large refugee migrations. An estimated 340,000 Muslims migrated to Pakistan, of whom 75% went to Karachi, largely because of business interests. The number of incoming refugees was also quite large, with over a million people migrating to Gujarat. These Hindu refugees were largely Sindhi and Gujarati.
For centuries Delhi had been the capital of the Mughal Empire from Babur to the successors of Aurangzeb and previous Turkic Muslim rulers of North India. The series of Islamic rulers keeping Delhi as a stronghold of their empires left a vast array of Islamic architecture in Delhi, and a strong Islamic culture permeated the city. In 1911, when the British Raj shifted their colonial capital from Calcutta to Delhi, the nature of the city began changing. The core of the city was called ‘Lutyens’ Delhi,’ named after the British architect Edwin Lutyens, and was designed to service the needs of the small but growing population of the British elite. Nevertheless, the 1941 census listed Delhi's population as being 33.2% Muslim.
As refugees began pouring into Delhi in 1947, the city was ill-equipped to deal with the influx of refugees. Refugees "spread themselves out wherever they could. They thronged into camps … colleges, temples, gurudwaras, dharmshalas, military barracks, and gardens." By 1950, the government began allowing squatters to construct houses in certain portions of the city. As a result, neighbourhoods such as Lajpat Nagar and Patel Nagar sprang into existence, which carry a distinct Punjabi character to this day. However, as thousands of Hindu and Sikh refugees from Punjab fled to the city, upheavals ensued as communal pogroms rocked the historical stronghold of Indo-Islamic culture and politics. A Pakistani diplomat in Delhi, Hussain, alleged that the Indian government was intent on eliminating Delhi's Muslim population or was indifferent to their fate. He reported that army troops openly gunned down innocent Muslims. Prime Minister Jawaharlal Nehru estimated 1,000 casualties in the city. However, other sources claim that the casualty rate was 20 times higher. Gyanendra Pandey's more recent account of the violence in Delhi puts the figure of Muslim casualties in Delhi at between 20,000 and 25,000.
Tens of thousands of Muslims were driven to refugee camps regardless of their political affiliations, and numerous historical sites in Delhi such as the Purana Qila, Idgah, and Nizamuddin were transformed into refugee camps. In fact, many Hindu and Sikh refugees eventually occupied the abandoned houses of Delhi's Muslim inhabitants. At the culmination of the tensions in Delhi, 330,000 Muslims had migrated to Pakistan. The 1951 Census registered a drop of the Muslim population in the city from 33.2% in 1941 to 5.3% in 1951.
In several cases, rulers of princely states were involved in communal violence or did not do enough to stop it in time. Some rulers were away from their states for the summer, such as those of the Sikh states. Some believe that the rulers were whisked away by communal ministers in large part to avoid responsibility for the soon-to-come ethnic cleansing. However, in Bahawalpur and Patiala, upon the return of their rulers, there was a marked decrease in violence, and the rulers consequently stood against the cleansing. The Nawab of Bahawalpur was away in Europe and returned on 1 October, shortening his trip. A bitter Hassan Suhrawardy would write to Mahatma Gandhi:
With the exceptions of Jind and Kapurthala, the violence was well organised in the Sikh states, with logistics provided by the durbar. In Patiala and Faridkot, the Maharajas responded to the call of Master Tara Singh to cleanse India of Muslims. The Maharaja of Patiala was offered the headship of a future united Sikh state that would rise from the "ashes of a Punjab civil war." The Maharaja of Faridkot, Harinder Singh, is reported to have listened to stories of the massacres with great interest going so far as to ask for "juicy details" of the carnage. The Maharaja of Bharatpur State personally witnessed the cleansing of Muslim Meos at Khumbar and Deeg. When reproached by Muslims for his actions, Brijendra Singh retorted by saying: "Why come to me? Go to Jinnah."
In Alwar and Bahawalpur communal sentiments extended to higher echelons of government, and the prime ministers of these States were said to have been involved in planning and directly overseeing the cleansing. In Bikaner, by contrast, the organisation occurred at much lower levels.
Alwar and Bharatpur
In Alwar and Bharatpur, princely states of Rajputana (modern-day Rajasthan), there were bloody confrontations between the dominant, Hindu land-holding community and the Muslim cultivating community. Well-organised bands of Hindu Jats, Ahirs, and Gurjars started attacking Muslim Meos in April 1947. By June, more than fifty Muslim villages had been destroyed. The Muslim League was outraged and demanded that the Viceroy provide Muslim troops. Accusations emerged in June of the involvement of Indian State Forces from Alwar and Bharatpur in the destruction of Muslim villages both inside their states and in British India.
In the wake of unprecedented violent attacks unleashed against them in 1947, 100,000 Muslim Meos from Alwar and Bharatpur were forced to flee their homes, and an estimated 30,000 are said to have been massacred. On 17 November, a column of 80,000 Meo refugees went to Pakistan. However, 10,000 stopped travelling due to the risks.
Jammu and Kashmir
In September–November 1947, in the Jammu region of the princely state of Jammu and Kashmir, a large number of Muslims were massacred and others driven away to West Punjab. The impetus for this violence was partly the "harrowing stories of Muslim atrocities" brought by Hindu and Sikh refugees arriving in Jammu from West Punjab since March 1947. The killings were carried out by extremist Hindus and Sikhs, aided and abetted by the forces of the Dogra State, headed by the Maharaja of Jammu and Kashmir, Hari Singh. Observers state that Hari Singh aimed to alter the demographics of the region by eliminating the Muslim population in order to ensure a Hindu majority.
Resettlement of refugees: 1947–1951
Resettlement in India
According to the 1951 Census of India, 2% of India's population were refugees (1.3% from West Pakistan and 0.7% from East Pakistan). Delhi received the largest number of refugees for a single city: its population grew rapidly in 1947, from under 1 million (917,939) to a little less than 2 million (1,744,072) over the period 1941–1951. The refugees were housed in various historical and military locations such as the Purana Qila, Red Fort, and military barracks in Kingsway Camp (around the present Delhi University). The latter became the site of one of the largest refugee camps in northern India, with more than 35,000 refugees at any given time, alongside the Kurukshetra camp near Panipat. The campsites were later converted into permanent housing through extensive building projects undertaken by the Government of India from 1948 onwards. Many housing colonies in Delhi came up around this period, like Lajpat Nagar, Rajinder Nagar, Nizamuddin East, Punjabi Bagh, Rehgar Pura, Jangpura, and Kingsway Camp. Several schemes such as the provision of education, employment opportunities, and easy loans to start businesses were provided for the refugees at the all-India level.
Many Sikhs and Hindu Punjabis came from West Punjab and settled in East Punjab (which then also included Haryana and Himachal Pradesh) and Delhi. Hindus fleeing from East Pakistan (now Bangladesh) settled across Eastern India and Northeastern India, many ending up in neighbouring Indian states such as West Bengal, Assam, and Tripura. Some migrants were sent to the Andaman islands, where Bengalis today form the largest linguistic group.
Sindhi Hindus settled predominantly in Gujarat, Maharashtra, and Rajasthan. Some, however, settled further afield in Madhya Pradesh. A new township was established for Sindhi Hindu refugees in Maharashtra. The Governor-General of India, C. Rajagopalachari, laid the foundation for this township and named it Ulhasnagar ('city of joy').
Resettlement in Pakistan
The 1951 Census of Pakistan recorded that the largest number of Muslim refugees came from East Punjab and the nearby Rajputana states (Alwar and Bharatpur). They numbered 5,783,100 and constituted 80.1% of Pakistan's total refugee population. This was the effect of the retributive ethnic cleansing on both sides of the Punjab, where the Muslim population of East Punjab was forcibly expelled just as the Hindu and Sikh population was in West Punjab.
Migration from other regions of India was as follows: Bihar, West Bengal and Orissa, 700,300 or 9.8%; UP and Delhi, 464,200 or 6.4%; Gujarat and Bombay, 160,400 or 2.2%; Bhopal and Hyderabad, 95,200 or 1.2%; and Madras and Mysore, 18,000 or 0.2%.
So far as their settlement in Pakistan is concerned, 97.4% of the refugees from East Punjab and its contiguous areas went to West Punjab; 95.9% from Bihar, West Bengal and Orissa to the erstwhile East Pakistan; 95.5% from UP and Delhi to West Pakistan, mainly to the Karachi Division of Sindh; 97.2% from Bhopal and Hyderabad to West Pakistan, mainly Karachi; 98.9% from Bombay and Gujarat to West Pakistan, largely to Karachi; and 98.9% from Madras and Mysore to West Pakistan, mainly Karachi.
West Punjab received the largest number of refugees (73.1%), mainly from East Punjab and its contiguous areas. Sindh received the second largest number of refugees, 16.1% of the total migrants, while the Karachi division of Sindh received 8.5% of the total migrant population. East Bengal received the third-largest number of refugees, 699,100, who constituted 9.7% of the total Muslim refugee population in Pakistan. 66.7% of the refugees in East Bengal originated from West Bengal, 14.5% from Bihar and 11.8% from Assam.
NWFP and Baluchistan received the lowest number of migrants. NWFP received 51,100 migrants (0.7% of the migrant population) while Baluchistan received 28,000 (0.4% of the migrant population). The Government undertook a census of refugees in West Punjab in 1948, which displayed their place of origin in India.
A study of the total population inflows and outflows in the districts of Punjab, using the data provided by the 1931 and 1951 censuses, has led to an estimate of 1.3 million missing Muslims who left western India but did not reach Pakistan. The corresponding number of missing Hindus and Sikhs along the western border is estimated to be approximately 0.8 million. This puts the total of missing people, due to partition-related migration along the Punjab border, at around 2.2 million. Another study of the demographic consequences of partition in the Punjab region, using the 1931, 1941 and 1951 censuses, concluded that between 2.3 and 3.2 million people went missing in the Punjab.
Rehabilitation of women
Both sides promised each other that they would try to restore women abducted and raped during the riots. The Indian government claimed that 33,000 Hindu and Sikh women were abducted, and the Pakistani government claimed that 50,000 Muslim women were abducted during riots. By 1949, there were legal claims that 12,000 women had been recovered in India and 6,000 in Pakistan. By 1954, there were 20,728 Muslim women recovered from India, and 9,032 Hindu and Sikh women recovered from Pakistan. Most of the Hindu and Sikh women refused to go back to India, fearing that their family would never accept them, a fear mirrored by Muslim women.
Even after the 1951 Census, many Muslim families from India continued migrating to Pakistan throughout the 1950s and the early 1960s. According to historian Omar Khalidi, the Indian Muslim migration to West Pakistan between December 1947 and December 1971 was from U.P., Delhi, Gujarat, Rajasthan, Maharashtra, Madhya Pradesh, Karnataka, Andhra Pradesh, Tamil Nadu, and Kerala. The next stage of migration was between 1973 and the 1990s, and the primary destination for these migrants was Karachi and other urban centres in Sindh.
In 1959, the International Labour Organisation (ILO) published a report stating that from 1951 to 1956, a total of 650,000 Muslims from India relocated to West Pakistan. However, Visaria (1969) raised doubts about the authenticity of the claims about Indian Muslim migration to Pakistan, since the 1961 Census of Pakistan did not corroborate these figures. The 1961 Census of Pakistan did, nevertheless, incorporate a statement suggesting that there had been a migration of 800,000 people from India to Pakistan throughout the previous decade. Of those who left for Pakistan, most never came back.
Indian Muslim migration to Pakistan declined drastically in the 1970s, a trend noticed by the Pakistani authorities. In June 1995, Pakistan's interior minister, Naseerullah Babar, informed the National Assembly that between the period of 1973–1994, as many as 800,000 visitors came from India on valid travel documents. Of these only 3,393 stayed. In a related trend, intermarriages between Indian and Pakistani Muslims have declined sharply. According to a November 1995 statement of Riaz Khokhar, the Pakistani High Commissioner in New Delhi, the number of cross-border marriages has dropped from 40,000 a year in the 1950s and 1960s to barely 300 annually.
In the aftermath of the Indo-Pakistani War of 1965, 3,500 Muslim families migrated from the Indian part of the Thar Desert to the Pakistani section of the Thar Desert. 400 families were settled in Nagar after the 1965 war and an additional 3000 settled in the Chachro taluka in Sind province of West Pakistan. The government of Pakistan provided each family with 12 acres of land. According to government records, this land totalled 42,000 acres.
The 1951 census in Pakistan recorded 671,000 refugees in East Pakistan, the majority of which came from West Bengal. The rest were from Bihar. According to the ILO in the period 1951–1956, half a million Indian Muslims migrated to East Pakistan. By 1961 the numbers reached 850,000. In the aftermath of the riots in Ranchi and Jamshedpur, Biharis continued to migrate to East Pakistan well into the late sixties and added up to around a million. Crude estimates suggest that about 1.5 million Muslims migrated from West Bengal and Bihar to East Bengal in the two decades after partition.
Due to religious persecution in Pakistan, Hindus continue to flee to India. Most of them tend to settle in the state of Rajasthan in India. According to the Human Rights Commission of Pakistan data, just around 1,000 Hindu families fled to India in 2013. In May 2014, a member of the ruling Pakistan Muslim League-Nawaz (PML-N), Dr Ramesh Kumar Vankwani, revealed in the National Assembly of Pakistan that around 5,000 Hindus are migrating from Pakistan to India every year. Since India is not a signatory to the 1951 United Nations Refugee Convention it refuses to recognise Pakistani Hindu migrants as refugees.
The population in the Tharparkar district in the Sind province of West Pakistan was 80% Hindu and 20% Muslim at the time of independence in 1947. During the Indo-Pakistani wars of 1965 and 1971, the Hindu upper castes and their retainers fled to India, leading to a massive demographic shift in the district. In 1978, India gave citizenship to 55,000 Pakistanis. By the time of the 1998 census of Pakistan, Muslims made up 64.4% and Hindus 35.6% of the population in Tharparkar.
The migration of Hindus from East Pakistan to India continued unabated after partition. The 1951 census in India recorded that 2.5 million refugees arrived from East Pakistan, of which 2.1 million migrated to West Bengal while the rest migrated to Assam, Tripura and other states. These refugees arrived in waves and did not come solely at partition. By 1973 their number reached over 6 million. The following data displays the major waves of refugees from East Pakistan and the incidents which precipitated the migrations:
The partition was a highly controversial arrangement and remains a cause of much tension on the Indian subcontinent today. According to American scholar Allen McGrath, many British leaders, including the British Viceroy, Mountbatten, were unhappy over the partition of India. Lord Mountbatten of Burma was not only accused of rushing the process through but is also alleged to have influenced the Radcliffe Line in India's favor. The commission took longer to decide on a final boundary than on the partition itself. Thus the two nations were granted their independence even before there was a defined boundary between them.
Some critics allege that British haste led to increased cruelties during the partition. Because independence was declared prior to the actual partition, it was up to the new governments of India and Pakistan to keep public order. No large population movements were contemplated; the plan called for safeguards for minorities on both sides of the new border. It was a task at which both states failed. There was a complete breakdown of law and order; many died in riots, massacres, or simply from the hardships of their flight to safety. What ensued was one of the largest population movements in recorded history. According to Richard Symonds, at the lowest estimate, half a million people perished and twelve million became homeless.
However, many argue that the British were forced to expedite the partition by events on the ground. Once in office, Mountbatten quickly became aware that if Britain were to avoid involvement in a civil war, which seemed increasingly likely, there was no alternative to partition and a hasty exit from India. Law and order had broken down many times before partition, with much bloodshed on both sides. A massive civil war was looming by the time Mountbatten became Viceroy. After the Second World War, Britain had limited resources, perhaps insufficient to the task of keeping order. Another viewpoint is that while Mountbatten may have been too hasty, he had no real options left and achieved the best he could under difficult circumstances. The historian Lawrence James concurs that in 1947 Mountbatten was left with no option but to cut and run. The alternative seemed to be involvement in a potentially bloody civil war from which it would have been difficult to get out.
Conservative elements in England consider the partition of India to be the moment that the British Empire ceased to be a world power, following Curzon's dictum: "the loss of India would mean that Britain dropped straight away to a third-rate power."
Venkat Dhulipala rejects the idea that the British divide-and-rule policy was responsible for partition and elaborates on the perspective that Pakistan was popularly imagined as a sovereign Islamic state or a 'New Medina', as a potential successor to the defunct Turkish caliphate and as a leader and protector of the entire Islamic world. Islamic scholars debated over creating Pakistan and its potential to become a true Islamic state. The majority of Barelvis supported the creation of Pakistan and believed that any co-operation with Hindus would be counterproductive. Most Deobandis, who were led by Maulana Husain Ahmad Madani, were opposed to the creation of Pakistan and the two-nation theory. According to them, Muslims and Hindus could be part of a single nation.
In their authoritative study of the partition, Ian Talbot and Gurharpal Singh have shown that the partition was not the inevitable end of the so-called British 'divide and rule policy' nor was it the inevitable end of Hindu-Muslim differences.
A cross-border student initiative, The History Project, was launched in 2014 to explore the differences in perception of the events during the British era which led to the partition. The project resulted in a book that explains both interpretations of the shared history in Pakistan and India.
A Berkeley, California-based non-profit organization, The 1947 Partition Archive, collects oral histories from those who lived through the partition and consolidates the interviews into an archive. A 2019 book by Kavita Puri, Partition Voices: Untold British Stories, based on the BBC Radio 4 documentary series of the same name, includes interviews with about two dozen people who witnessed partition and subsequently migrated to Britain.
In October 2016, The Arts and Cultural Heritage Trust (TAACHT) of India set up what they describe as "the world’s first Partition Museum" at Town Hall in Amritsar, Punjab. The Museum, which is open from Tuesday to Sunday, offers multimedia exhibits and documents that describe both the political process that led to partition and carried it forward, and video and written narratives offered by survivors of the events.
Artistic depictions of the partition
The partition of India and the associated bloody riots inspired many in India and Pakistan to create literary and cinematic depictions of this event. While some creations depicted the massacres during the refugee migration, others concentrated on the aftermath of the partition and the difficulties faced by the refugees on both sides of the border. Even now, more than 70 years after the partition, works of fiction and films are made that relate to the events of partition.
The early members of the Bombay Progressive Artists' Group cited the partition of India and Pakistan as a key reason for its founding in December 1947. Those members included F. N. Souza, M. F. Husain, S. H. Raza, S. K. Bakre, H. A. Gade, and K. H. Ara, who went on to become some of the most important and influential Indian artists of the 20th century.
- "Subh-e-Azadi" ('Freedom's Dawn'; 1947), Urdu poem by Faiz Ahmad Faiz
- "Toba Tek Singh" (1955), short story by Saadat Hassan Manto
- Train to Pakistan (1956) by Khushwant Singh
- A Bend in the Ganges (1965) by Manohar Malgonkar
- Tamas (1974) by Bhisham Sahni
- Azadi (1975) by Chaman Nahal, originally written in English and winner of the 1977 Sahitya Akademi Award in India
- Ice-Candy Man (1988) by Bapsi Sidhwa
- Forgotten Atrocities (2012), memoir by Bal K. Gupta
Salman Rushdie's novel Midnight's Children (1980), which won the Booker Prize and The Best of the Booker, wove its narrative based on the children born with magical abilities on midnight of 14 August 1947. Freedom at Midnight (1975) is a non-fiction work by Larry Collins and Dominique Lapierre that chronicled the events surrounding the first Independence Day celebrations in 1947.
The novel Lost Generations (2013) by Manjit Sachdeva describes the March 1947 massacre in rural areas of Rawalpindi by the Muslim League, followed by massacres on both sides of the new border in August 1947 seen through the eyes of an escaping Sikh family, their settlement and partial rehabilitation in Delhi, and ending in ruin (including death), for the second time in 1984, at the hands of mobs after a Sikh assassinated the prime minister.
- Lahore (1948)
- Chinnamul (1950, directed by Nemai Ghosh; Bengali)
- Nastik (1954)
- Chhalia (1960)
- Bhowani Junction (1956, directed by George Cukor)
- Dharmputra (1961)
- Ritwik Ghatak's Bengali trilogy: Meghe Dhaka Tara (1960), Komal Gandhar (1961), and Subarnarekha (1962)
- Garm Hava (1973)
- Tamas (1987).
From the late 1990s onwards, more films on the theme of partition were made, including several mainstream ones, such as:
- Earth (1998)
- Train to Pakistan (1998; based on the aforementioned book)
- Hey Ram (2000)
- Gadar: Ek Prem Katha (2001)
- Khamosh Pani (2003)
- Pinjar (2003)
- Partition (2007)
- Madrasapattinam (2010)
- Viceroy's House (2017)
The biographical films Gandhi (1982), Jinnah (1998) and Sardar (1993) also feature independence and partition as significant events in their screenplay. The Pakistani drama Daastan, based on the novel Bano, highlights the plight of Muslim girls who were abducted and raped during partition. The partition is also depicted in the historical sports drama film Gold (2018), based on events which impacted the Indian national field hockey team at the time.
The 2013 Google India "Reunion" advertisement, about the partition of India, has had a strong impact in India and Pakistan, leading to hope for the easing of travel restrictions between the two countries. The advertisement went viral and was viewed more than 1.6 million times before officially debuting on television on 15 November 2013.
- "The death toll remains disputed with figures ranging from 200,000 to 2 million."
- "Some 12 million people were displaced in the divided province of Punjab alone, and up to 20 million in the subcontinent as a whole."
- British India consisted of those regions of the British Raj, or the British Indian Empire, which were directly administered by Britain; other regions, of nominal sovereignty, that were indirectly ruled by Britain, were called princely states.
- Coastal Ceylon, part of the Madras Presidency of British India from 1796, became the separate crown colony of British Ceylon in 1802. Burma, gradually annexed by the British during 1826–86 and governed as a part of the British Indian administration until 1937, was directly administered after that. Burma was granted independence on 4 January 1948 and Ceylon on 4 February 1948. (See History of Sri Lanka and History of Burma.)
- The Himalayan kingdom of Sikkim was established as a princely state after the Anglo-Sikkimese Treaty of 1861. However, the issue of sovereignty was left undefined. In 1947, Sikkim became an independent kingdom under the suzerainty of India and remained so until 1975 when it was absorbed into India as the 22nd state. Other Himalayan kingdoms, Nepal and Bhutan, having signed treaties with the British designating them as independent states, were not a part of British India. The Indian Ocean island of The Maldives, became a protectorate of the British crown in 1887 and gained its independence in 1965.
- "Some 12 million people were displaced in the divided province of Punjab alone, and up to 20 million in the subcontinent as a whole."
- "In 1947, when Kishan Lal walked next to Dhyan Chand in East Africa in the Indian colours, the legendary field hockey team from 1936 had all but emptied. With 1947 came the Partition and most of the talented players were partitioned too, with many moving to Pakistan."
- Talbot & Singh 2009, p. 2.
- Population Redistribution and Development in South Asia. Springer Science & Business Media. 2012. p. 6. ISBN 978-9400953093.
- "Rupture in South Asia" (PDF). United Nations High Commission for Refugees. Retrieved 16 January 2021.
- Dr Crispin Bates (3 March 2011). "The Hidden Story of Partition and its Legacies". BBC. Retrieved 16 January 2021.
- Vazira Fazila‐Yacoobali Zamindar (4 February 2013). India–Pakistan Partition 1947 and forced migration. doi:10.1002/9781444351071.wbeghm285. ISBN 9781444334890. Retrieved 16 January 2021.
- Partition (n), 7. b (3rd ed.). Oxford English Dictionary. 2005.
The division of British India into India and Pakistan, achieved in 1947.
- Sword For Pen, Time, 12 April 1937
- "Sikkim". Encyclopædia Britannica. 2008.
- Encyclopædia Britannica. 2008. "Nepal.", Encyclopædia Britannica. 2008. "Bhutan."
- Copland, Ian (2005). State, Community and Neighbourhood in Princely North India, c. 1900-1950. p. 140.
- Spear 1990, p. 176
- Spear 1990, p. 176, Stein & Arnold 2010, p. 291, Ludden 2002, p. 193, Metcalf & Metcalf 2006, p. 156
- Bandyopādhyāẏa 2004, p. 260
- Ludden 2002, p. 193
- Ludden 2002, p. 199
- Ludden 2002, p. 200
- Stein & Arnold 2010, p. 286
- Talbot & Singh 2009, p. 20.
- Ludden 2002, p. 201
- Brown 1994, pp. 197–198
- Olympic Games Antwerp 1920: Official Report Archived 5 May 2011 at the Wayback Machine, Nombre de nations representees, p. 168. Quote: "31 Nations avaient accepté l'invitation du Comité Olympique Belge: ... la Grèce – la Hollande Les Indes Anglaises – l'Italie – le Japon ..."
- Brown 1994, pp. 200–201
- Brown 1994, pp. 205–207
- Talbot, Ian. 1999. "Pakistan's Emergence." Pp. 253–63 in The Oxford History of the British Empire: Historiography, edited by R. W. Winks. Oxford: Oxford University Press. ISBN 978-0-19-820566-1. OCLC 1036799442.
- Liaquat Ali Khan (1940), Pakistan: The Heart of Asia, Thacker & Co. Ltd., ISBN 978-1443726672,
... There is much in the Musalmans which, if they wish, can roll them into a nation. But isn't there enough that is common to both Hindus and Muslims, which if developed, is capable of molding them into one people? Nobody can deny that there are many modes, manners, rites, and customs that are common to both. Nobody can deny that there are rites, customs, and usages based on religion that do divide Hindus and Muslims. The question is, which of these should be emphasized ...
- "Two-Nation Theory Exists". Pakistan Times. Archived from the original on 11 November 2007.
- Cruise O'Brien, Conor. August 1988. "Holy War Against India". The Atlantic Monthly 262(2):54–64. Retrieved 8 June 2020.
- Shakir, Moin. 1979. "Review: Always in the Mainstream." Economic and Political Weekly 14(33):1424. JSTOR 4367847 "[T]he Muslims are not Indians but foreigners or temporary guests—without any loyalty to the country or its cultural heritage—and should be driven out of the country ..."
- Sankhdher, M. M., and K. K. Wadhwa. 1991. National unity and religious minorities. Gitanjali Publishing House. ISBN 978-81-85060-36-1. "... In their heart of hearts, the Indian Muslims are not Indian citizens, are not Indians: they are citizens of the universal Islamic ummah, of Islamdom ..."
- Savarkar, Vinayak Damodar, and Sudhakar Raje. 1989. Savarkar: commemoration volume. Savarkar Darshan Pratishthan. "His historic warning against conversion and call for Shuddhi was condensed in the dictum 'Dharmantar is Rashtrantar' (to change one's religion is to change one's nationality) ..."
- Chakravarty, Nikhil, ed. 1990. Mainstream, 28:32–52. ISSN 0542-1462. "'Dharmantar is Rashtrantar' is one of the old slogans of the VHP..."
- "The Partition of India". Frontline. 22 December 2002.
- Carlo Caldarola (1982), Religions and societies, Asia and the Middle East, Walter de Gruyter, ISBN 978-90-279-3259-4,
... Hindu and Muslim cultures constitute two distinct and frequently antagonistic ways of life, and that therefore they cannot coexist in one nation ...
- S. Harman (1977), Plight of Muslims in India, DL Publications, ISBN 978-0-9502818-2-7,
... strongly and repeatedly pressed for the transfer of the population between India and Pakistan. At the time of partition, some of the two-nation theory protagonists proposed that the entire Hindu population should migrate to India, and all Muslims should move over to Pakistan, leaving no Hindus in Pakistan and no Muslims in India ...
- M. M. Sankhdher (1992), Secularism in India, dilemmas and challenges, Deep & Deep Publication, ISBN 9788171004096,
... The partition of the country did not take the two-nation theory to its logical conclusion, i.e., complete transfer of populations ...
- Rafiq Zakaria (2004), Indian Muslims: where have they gone wrong?, Popular Prakashan, ISBN 978-81-7991-201-0,
... As a Muslim, Hindus, and Muslims are one nation and not two ... two nations have no basis in history... they shall continue to live together for another thousand years in united India ...
- Pakistan Constituent Assembly. 1953. "Debates: Official report, Volume 1; Volume 16." Government of Pakistan Press. "[S]ay that Hindus and Muslims are one, single nation. It is a very peculiar attitude on the part of the leader of the opposition. If his point of view were accepted, then the very justification for the existence of Pakistan would disappear ..."
- Janmahmad (1989), Essays on Baloch national struggle in Pakistan: emergence, dimensions, repercussions, Gosha-e-Adab,
... would be completely extinct as a people without any identity. This proposition is the crux of the matter, shaping the Baloch attitude towards Pakistani politics. For Baloch to accept the British-conceived two-nation theory for the Indian Muslims would mean losing their Baloch identity in the process ...
- Stephen P. Cohen (2004), The idea of Pakistan, Brookings Institution Press, p. 212, ISBN 978-0-8157-1502-3,
[In the view of G. M. Sayed,] the two-nation theory became a trap for Sindhis—instead of liberating Sindh, it fell under Punjabi-Mohajir domination, and until his death in 1995 he called for a separate Sindhi 'nation', implying a separate Sindhi country.
- Ahmad Salim (1991), Pashtun and Baloch history: Punjabi view, Fiction House,
... Attacking the 'two-nation theory' in Lower House on December 14, 1947, Ghaus Bux Bizenjo said: "We have a distinct culture like Afghanistan and Iran, and if the mere fact that we are Muslim requires us to amalgamate with Pakistan, then Afghanistan and Iran should also be amalgamated with Pakistan ..."
- Pritam Singh (2008). Federalism, Nationalism and Development: India and the Punjab Economy. Routledge. pp. 137–. ISBN 978-1-134-04946-2.
- Pritam Singh (2008). Federalism, Nationalism and Development: India and the Punjab Economy. Routledge. pp. 173–. ISBN 978-1-134-04945-5.
- Talbot & Singh 2009, p. 31.
- "The turning point in 1932: on Dalit representation". The Hindu. 3 May 2018. Retrieved 28 May 2018.
- Talbot & Singh 2009, p. 32.
- Talbot & Singh 2009, pp. 32–33.
- Talbot & Singh 2009, p. 33.
- Talbot & Singh 2009, p. 34.
- Yasmin Khan (2017). The Great Partition: The Making of India and Pakistan, New Edition. Yale University Press. pp. 18–. ISBN 978-0-300-23364-3.
Although it was founded in 1909 the League had only caught on among South Asian Muslims during the Second World War. The party had expanded astonishingly rapidly and was claiming over two million members by the early 1940s, an unimaginable result for what had been previously thought of as just one of the numerous pressure groups and small but insignificant parties.
- Wm. Roger Louis (2006). Ends of British Imperialism: The Scramble for Empire, Suez, and Decolonization. I.B. Tauris. pp. 397–. ISBN 978-1-84511-347-6.
He made a serious misjudgment in underestimating Muslim sentiment before the outbreak of the war. He did not take the idea of 'Pakistan' seriously. After the adoption of the March 1940 Lahore resolution, calling for the creation of a separate state or states of Pakistan, he wrote: 'My first reaction is, I confess, that silly as the Muslim scheme for partition is, it would be a pity to throw too much cold water on it at the moment.' Linlithgow surmised that what Jinnah feared was a federal India dominated by Hindus. Part of the purpose of the famous British 'August offer' of 1940 was to assure the Muslims that they would be protected against a 'Hindu Raj' as well as to hold over the discussion of the 1935 Act and a 'new constitution' until after the war.
- L. J. Butler (2002). Britain and Empire: Adjusting to a Post-Imperial World. I.B. Tauris. pp. 41–. ISBN 978-1-86064-448-1.
Viceroy Linlithgow's 'August Offer,' made in 1940, proposed Dominion status for India after the war, and the inclusion of Indians in a larger Executive Council and a new War Advisory Council, and promised that minority views would be taken into account in future constitutional revision. This was not enough to satisfy either the Congress or the Muslim League, who both rejected the offer in September, and shortly afterward Congress launched a fresh campaign of civil disobedience.
- Talbot & Singh 2009, pp. 34–35.
- Talbot & Singh 2009, p. 35.
- Ayesha Jalal (1994). The Sole Spokesman: Jinnah, the Muslim League and the Demand for Pakistan. Cambridge University Press. p. 81. ISBN 978-1-139-93570-8.
Provincial option, he argued, was insufficient security. Explicit acceptance of the principle of Pakistan offered the only safeguard for Muslim interests throughout India and had to be the precondition for any advance at the center. So he exhorted all Indian Muslims to unite under his leadership to force the British and the Congress to concede 'Pakistan.' If the real reasons for Jinnah's rejection of the offer were rather different, it was not Jinnah but his rivals who had failed to make the point publicly.
- Khan 2007, p. 18.
- Stein & Arnold 2010, p. 289: Quote: "Gandhi was the leading genius of the later, and ultimately successful, campaign for India's independence"
- Metcalf & Metcalf 2006, p. 209.
- Khan 2007, p. 43.
- Robb 2002, p. 190
- Gilmartin, David (2009). "Muslim League Appeals to the Voters of Punjab for Support of Pakistan". In D. Metcalf, Barbara (ed.). Islam in South Asia in Practice. Princeton University Press. pp. 410–. ISBN 978-1-4008-3138-8.
At the all-India level, the demand for Pakistan pitted the League against the Congress and the British.
- Judd 2004, pp. 172–173
- Barbara Metcalf (2012). Husain Ahmad Madani: The Jihad for Islam and India's Freedom. Oneworld Publications. pp. 107–. ISBN 978-1-78074-210-6.
- Judd 2004, pp. 170–171
- Judd 2004, p. 172
- Brown 1994, pp. 328–329: "Yet these final years of the raj showed conclusively that British rule had lost legitimacy and that among the vast majority of Hindus Congress had become the raj's legitimate successor. Tangible proof came in the 1945–6 elections to the central and provincial legislatures. In the former, Congress won 91 percent of the votes cast in non-Muslim constituencies, and in the latter, gained an absolute majority and became the provincial raj in eight provinces. The acquiescence of the politically aware (though possibly not of many villagers even at this point) would have been seriously in doubt if the British had displayed any intention of staying in India."
- Barbara D. Metcalf; Thomas R. Metcalf (2012). A Concise History of Modern India. Cambridge University Press. pp. 212–. ISBN 978-1-139-53705-6.
- Burton Stein (2010). A History of India. John Wiley & Sons. pp. 347–. ISBN 978-1-4443-2351-1.
- Sugata Bose; Ayesha Jalal (2004). Modern South Asia: History, Culture, Political Economy (2nd ed.). Psychology Press. pp. 148–149. ISBN 978-0-415-30787-1.
- Burton Stein (2010). A History of India. John Wiley & Sons. p. 347. ISBN 978-1-4443-2351-1.
His standing with the British remained high, however, for even though they no more agreed with the idea of a separate Muslim state than the Congress did, government officials appreciated the simplicity of a single negotiating voice for all of India's Muslims.
- Jeffery J. Roberts (2003). The Origins of Conflict in Afghanistan. Greenwood Publishing Group. pp. 85–. ISBN 978-0-275-97878-5.
Virtually every Briton wanted to keep India united. Many expressed moral or sentimental obligations to leave India intact, either for the inhabitants' sake or simply as a lasting testament to the Empire. The Cabinet Defense Committee and the Chiefs of Staff, however, stressed the maintenance of a united India as vital to the defense (and economy) of the region. A unified India, an orderly transfer of power, and a bilateral alliance would, they argued, leave Britain's strategic position undamaged. India's military assets, including its seemingly limitless manpower, naval and air bases, and expanding production capabilities, would remain accessible to London. India would thus remain of crucial importance as a base, training ground, and staging area for operations from Egypt to the Far East.
- Darwin, John (3 March 2011). "Britain, the Commonwealth and the End of Empire". BBC. Retrieved 10 April 2017.
But the British still hoped that a self-governing India would remain part of their system of 'imperial defense'. For this reason, Britain was desperate to keep India (and its army) united.
- Barbara D. Metcalf; Thomas R. Metcalf (2002). A Concise History of India. Cambridge University Press. pp. 212–. ISBN 978-0-521-63974-3.
By this scheme, the British hoped they could at once preserve united India desired by the Congress, and by themselves, and at the same time, through the groups, secure the essence of Jinnah's demand for a 'Pakistan'.
- Barbara D. Metcalf; Thomas R. Metcalf (2002). A Concise History of India. Cambridge University Press. pp. 211–213. ISBN 978-0-521-63974-3.
Its proposal for an independent India involved a complex, three-tiered federation, whose central feature was the creation of groups of provinces. Two of these groups would comprise the Muslim majority provinces of east and west; a third would include the Hindu majority regions of the center and south. These groups, given responsibility for most of the functions of government, would be subordinated to a Union government controlling defense, foreign affairs, and communications. Nevertheless, the Muslim League accepted the Cabinet mission's proposals. The ball was now in Congress's court. Although the grouping scheme preserved a united India, the Congress leadership, above all Jawaharlal Nehru, now slated to be Gandhi's successor, increasingly concluded that under the Cabinet mission proposals the Center would be too weak to achieve the goals of the Congress, which envisioned itself as the successor to the Raj. Looking ahead to the future, the Congress, especially its socialist wing headed by Nehru, wanted a central government that could direct and plan for an India, free of colonialism, that might eradicate its people's poverty and grow into an industrial power. India's business community also supported the idea of a strong central government. In a provocative speech on 10 July 1946, Nehru repudiated the notion of compulsory grouping of provinces, the key to Jinnah's Pakistan. Provinces, he said, must be free to join any group. With this speech, Nehru effectively torpedoed the Cabinet mission scheme, and with it, any hope for a united India.
- Khan 2007, pp. 64–65.
- Talbot & Singh 2009, p. 69: Quote: "Despite the Muslim League's denials, the outbreak was linked with the celebration of Direction Action Day. Muslim procession that had gone to the staging ground of the 150-foot Ochterlony Monument on the maidan to hear the Muslim League Prime Minister Suhrawardy attacked Hindus on their way back. They were heard shouting slogans as 'Larke Lenge Pakistan' (We shall win Pakistan by force). Violence spread to North Calcutta when Muslim crowds tried to force Hindu shopkeepers to observe the day's strike (hartal) call. The circulation of pamphlets in advance of Direct Action Day demonstrated a clear connection between the use of violence and the demand for Pakistan."
- Talbot & Singh 2009, p. 67 Quote: "The signs of 'ethnic cleansing' are first evident in the Great Calcutta Killing of 16–19 August 1946."
- Talbot & Singh 2009, p. 68.
- Talbot & Singh 2009, p. 67 Quote: "(Signs of 'ethnic cleansing') were also present in the wave of violence that rippled out from Calcutta to Bihar, where there were high Muslim casualty figures, and to Noakhali deep in the Ganges-Brahmaputra delta of Bengal. Concerning the Noakhali riots, one British officer spoke of a 'determined and organized' Muslim effort to drive out all the Hindus, who accounted for around a fifth of the total population. Similarly, the Punjab counterparts to this transition of violence were the Rawalpindi massacres of March 1947. The level of death and destruction in such West Punjab villages as Thoa Khalsa was such that communities couldn't live together in its wake."
- Ziegler, Philip (1985). Mountbatten: The Official Biography. London: HarperCollins. p. 359. ISBN 978-0002165433..
- Ayesha Jalal (1994). The Sole Spokesman: Jinnah, the Muslim League and the Demand for Pakistan. Cambridge University Press. p. 250. ISBN 978-0-521-45850-4.
These instructions were to avoid partition and obtain a unitary government for British India and the Indian States and at the same time observe the pledges to the princes and the Muslims; to secure agreement to the Cabinet Mission plan without coercing any of the parties; somehow to keep the Indian army undivided, and to retain India within the Commonwealth. (Attlee to Mountbatten, 18 March 1947, ibid, 972–974)
- Ayesha Jalal (1994). The Sole Spokesman: Jinnah, the Muslim League and the Demand for Pakistan. Cambridge University Press. p. 251. ISBN 978-0-521-45850-4.
When Mountbatten arrived, it was not wholly inconceivable that a settlement on the Cabinet Mission's terms might still be secured. Limited bloodshed called for a united Indian army under effective control. But keeping the army intact was now inextricably linked with keeping India united; this is why Mountbatten started by being vehemently opposed to 'abolishing the center'.
- Talbot, Ian (2009). "Partition of India: The Human Dimension". Cultural and Social History. 6 (4): 403–410. doi:10.2752/147800409X466254. S2CID 147110854.
Mountbatten had intended to resurrect the Cabinet Mission proposals for a federal India. British officials were unanimously pessimistic about a Pakistan state's future economic prospects. The agreement to an Indian Union contained in the Cabinet Mission proposals had been initially accepted by the Muslim League as the grouping proposals gave considerable autonomy in the Muslim majority areas. Moreover, there was the possibility of withdrawal and thus acquiring Pakistan by the back-door after a ten-year interval. The worsening communal situation and extensive soundings with Indian political figures convinced Mountbatten within a month of his arrival that partition was, however, the only way to secure a speedy and smooth transfer of power.
- Gandhi, Rajmohan. Patel: A Life. pp. 395–397.
- Menon, V. P. Transfer of Power in India. p. 385.
- Jain, Jagdish Chandra (1 January 1987). Gandhi, the Forgotten Mahatma. Mittal Publications. ISBN 9788170990376. Retrieved 22 May 2020 – via Google Books.
- Talbot & Singh 2009, pp. 67–68.
- Menon, V.P (1957). Transfer of Power in India. Orient Blackswan. p. 512. ISBN 978-8125008842.
- Sankar Ghose, Jawaharlal Nehru, a biography (1993), p. 181
- Jagmohan (2005). Soul and Structure of Governance in India. Allied Publishers. p. 49. ISBN 978-8177648317.
- Gopal, Ram (1991). Hindu Culture During and After Muslim Rule: Survival and Subsequent Challenges. M.D. Publications Pvt. Ltd. p. 133. ISBN 978-8170232056.
- Ray, Jayanta Kumar (2013). India's Foreign Relations, 1947–2007. Routledge. p. 58. ISBN 978-1136197154.
- Ishtiaq Ahmed, State, Nation and Ethnicity in Contemporary South Asia (London & New York, 1998), p. 99
- Raju, Thomas G. C. (Fall 1994). "Nations, States, and Secession: Lessons from the Former Yugoslavia". Mediterranean Quarterly. 5 (4): 40–65.
- Spate 1947, pp. 126–137
- Vazira Fazila-Yacoobali Zamindar (2010). The Long Partition and the Making of Modern South Asia: Refugees, Boundaries, Histories. Columbia University Press. pp. 40–. ISBN 978-0-231-13847-5.
Second, it was feared that if an exchange of populations was agreed to in principle in Punjab, 'there was likelihood of trouble breaking out in other parts of the subcontinent to force Muslims in the Indian Dominion to move to Pakistan. If that happened, we would find ourselves with inadequate land and other resources to support the influx.' Punjab could set a very dangerous precedent for the rest of the subcontinent. Given that Muslims in the rest of India, some 42 million, formed a population larger than the entire population of West Pakistan at the time, economic rationality eschewed such a forced migration. However, in divided Punjab, millions of people were already on the move, and the two governments had to respond to this mass movement. Thus, despite these important reservations, the establishment of the MEO led to an acceptance of a 'transfer of populations' in divided Punjab, too, 'to give a sense of security' to ravaged communities on both sides. A statement of the Indian government's position on such a transfer across divided Punjab was made in the legislature by Neogy on November 18, 1947. He stated that although the Indian government's policy was 'to discourage mass migration from one province to another,' Punjab was to be an exception. In the rest of the subcontinent migrations were not to be on a planned basis, but a matter of individual choice. This exceptional character of movements across divided Punjab needs to be emphasized, for the agreed and 'planned evacuations' by the two governments formed the context of those displacements.
- Peter Gatrell (2013). The Making of the Modern Refugee. OUP Oxford. pp. 149–. ISBN 978-0-19-967416-9.
Notwithstanding the accumulated evidence of inter-communal tension, the signatories to the agreement that divided the Raj did not expect the transfer of power and the partition of India to be accompanied by a mass movement of population. Partition was conceived as a means of preventing migration on a large scale because the borders would be adjusted instead. Minorities need not be troubled by the new configuration. As Pakistan's first Prime Minister, Liaquat Ali Khan, affirmed, 'the division of India into Pakistan and India Dominions was based on the principle that minorities will stay where they were and that the two states will afford all protection to them as citizens of the respective states'.
- "When Muslims left Pakistan for India". The New Indian Express (Opinion).
- "The partition of India and retributive genocide in the Punjab, 1946–47: means, methods, and purposes" (PDF). Retrieved 19 December 2006.
- Talbot, Ian (2009). "Partition of India: The Human Dimension". Cultural and Social History. 6 (4): 403–410. doi:10.2752/147800409X466254. S2CID 147110854.
The number of casualties remains a matter of dispute, with figures being claimed that range from 200,000 to 2 million victims.
- D'Costa, Bina (2011). Nationbuilding, Gender and War Crimes in South Asia. Routledge. p. 53. ISBN 978-0415565660.
- Butalia, Urvashi (2000). The Other Side of Silence: Voices From the Partition of India. Duke University Press.
- Sikand, Yoginder (2004). Muslims in India Since 1947: Islamic Perspectives on Inter-Faith Relations. Routledge. p. 5. ISBN 978-1134378258.
- "A heritage all but erased". The Friday Times. 25 December 2015. Retrieved 26 June 2017.
- Bharadwaj, Prasant; Khwaja, Asim; Mian, Atif (30 August 2008). "The Big March: Migratory Flows after the Partition of India" (PDF). Economic & Political Weekly: 43. Retrieved 16 January 2016.
- "Sikh Social Warriors". Archived from the original on 23 July 2018. Retrieved 25 July 2018.
- "The 'bloody' Punjab partition – VIII". 27 September 2018.
- Ahmed, Ishtiaq (31 January 2013). "The Punjab Bloodied, Partitioned and Cleansed".
- Butt, Shafiq (24 April 2016). "A page from history: Dr Ishtiaq underscores need to build bridges".
- Talbot, Ian (1993). "The role of the crowd in the Muslim League struggle for Pakistan". The Journal of Imperial and Commonwealth History. 21 (2): 307–333. doi:10.1080/03086539308582893.
Four thousand Muslim shops and homes were destroyed in the walled area of Amritsar during a single week in March 1947. Were these exceptions which prove the rule? It appears that casualty figures were frequently higher when Hindus rather than Muslims were the aggressors.
- Nisid Hajari (2015). Midnight's Furies: The Deadly Legacy of India's Partition. Houghton Mifflin Harcourt. pp. 139–. ISBN 978-0-547-66921-2.
- Chatterji, Joya (2007). The Spoils of Partition: Bengal and India, 1947–1967. p. 45. ISBN 978-1139468305.
- Khisha, Mukur K. (1998). All That Glisters. Minerva Press. p. 49. ISBN 978-1861060525.
- Chakma, Deepak K. (2013). The Partition and The Chakmas. p. 239.
- Chakma, Dipak Kumar (2013). The Partition and the Chakmas. India: D. K. Chakma. p. 42. ISBN 978-935-104-9272.
- Talukdar, S. P. (1994). Chakmas: An Embattled Tribe. India: Uppal Publishing House. p. 64. ISBN 978-818-556-5507.
- "Sindhi Voices from the Partition". The HeritageLab.in. 16 August 2020.
- Bhavnani, Nandita (2014). The Making of Exile: Sindhi Hindus and the Partition of India. Westland. ISBN 978-93-84030-33-9.
- Markovits, Claude (2000). The Global World of Indian Merchants, 1750–1947. Cambridge University Press. p. 278. ISBN 978-0-521-62285-1.
- "Population of Hindus in the World". Pakistan Hindu Council. Archived from the original on 18 May 2013.
- Abi-Habib, Maria (5 October 2019). "Hard Times Have Pakistani Hindus Looking to India, Where Some Find Only Disappointment". The New York Times.
- Acyuta Yājñika; Suchitra Sheth (2005). The Shaping of Modern Gujarat: Plurality, Hindutva, and Beyond. Penguin Books India. pp. 225–. ISBN 978-0-14-400038-8.
- Balasubrahmanyan, Suchitra (2011). "Partition and Gujarat: The Tangled Web of Religious, Caste, Community and Gender Identities". South Asia: Journal of South Asian Studies. tandfonline. 34 (3): 460–484. doi:10.1080/00856401.2011.620556. S2CID 145404336.
- Guha, Ramachandra. Gandhi before India. ISBN 978-0-307-47478-0. OCLC 903907799.
- Nisid Hajari (2015). Midnight's Furies: The Deadly Legacy of India's Partition. Houghton Mifflin Harcourt. pp. 160–. ISBN 978-0-547-66921-2.
- Zamindar, Vazira Fazila-Yacoobali (2010). The Long Partition and the Making of Modern South Asia: Refugees, Boundaries, Histories. Columbia University Press. p. 247. ISBN 978-0-231-13847-5.
- Kumari, Amita (2013). "Delhi as Refuge: Resettlement and Assimilation of Partition Refugees". Economic and Political Weekly: 60–67.
- Sharma, Bulbul (2013). Muslims In Indian Cities. HarperCollins Publishers India. ISBN 978-93-5029-555-7.
- Copland, Ian (2005). State, Community and Neighbourhood in Princely North India, c. 1900-1950. p. 159.
- Copland, I (2005). State, Community and Neighbourhood in Princely North India, c. 1900-1950. p. 158.
- Copland, Ian (2005). State, Community and Neighbourhood in Princely North India, c. 1900-1950. p. 148.
- Copland, I. (26 April 2005). State, Community and Neighbourhood in Princely North India, c. 1900-1950. ISBN 9780230005983.
- Copland, Ian (2005). State, Community and Neighbourhood in Princely North India, c. 1900-1950. p. 157.
- Pandey, Gyanendra (2001). Remembering Partition: Violence, Nationalism and History in India. Cambridge University Press. p. 39. ISBN 978-0-521-00250-9.
- Marston, Daniel (2014). The Indian Army and the End of the Raj. Cambridge University Press. p. 306. ISBN 978-1139915762.
- Khan 2007, p. 135
- Chattha, Ilyas Ahmad (September 2009), Partition and Its Aftermath: Violence, Migration and the Role of Refugees in the Socio-Economic Development of Gujranwala and Sialkot Cities, 1947–1961. University of Southampton, retrieved 16 February 2016. pp. 179, 183.
- A.G. Noorani (25 February 2012). "Horrors of Partition". Frontline.
- Census of India, 1941 and 1951.
- Kaur, Ravinder (2007). Since 1947: Partition Narratives among Punjabi Migrants of Delhi. Oxford University Press. ISBN 978-0-19-568377-6.
- Johari, Aarefa. "Facing eviction, residents of a Mumbai Partition-era colony fear they will become homeless again". Scroll.in. Retrieved 20 October 2018.
- Chitkara, G.M. (1998). Converts Do Not Make A Nation. APH Publishing. p. 216. ISBN 978-81-7024-982-5.
- Ghosh, Papiya (2001). "The Changing Discourse Of The Muhajirs". India International Centre Quarterly. 28 (3): 58. JSTOR 23005560.
- Bharadwaj, Prasant; Khwaja, Asim; Mian, Atif (30 August 2008). "The Big March: Migratory Flows after the Partition of India" (PDF). Economic & Political Weekly: 43. Retrieved 16 January 2016
- Hill, K., Selzer, W., Leaning, J., Malik, S., & Russell, S. (2008). The Demographic Impact of Partition in Punjab in 1947. Population Studies, 62(2), 155–170.
- Perspectives on Modern South Asia: A Reader in Culture, History, and ... – Kamala Visweswaran. Google Books (16 May 2011).
- Borders & boundaries: women in India's partition – Ritu Menon, Kamla Bhasi. nGoogle Books.in (24 April 1993).
- Jayawardena, Kumari; de Alwi, Malathi (1996). Embodied violence: Communalising women's sexuality in South Asia. Zed Books. ISBN 978-1-85649-448-9.
- Khalidi, Omar (Autumn 1998). "From Torrent to Trickle: Indian Muslim Migration to Pakistan, 1947–97". Islamic Studies. 37 (3): 339–352. JSTOR 20837002.
- Hasan, Arif; Mansoor, Raza (2009). Migration and Small Towns in Pakistan; Volume 15 of the Rural-urban interactions and livelihood strategies working paper series. IIED. p. 16. ISBN 978-1-84369-734-3.
- Hasan, Arif (30 December 1987). "Comprehensive assessment of drought and famine in Sind arid zones leading to a realistic short and long-term emergency intervention plan" (PDF). p. 25. Retrieved 12 January 2016.
- Hill, K.; Seltzer, W; Leaning, J.; Malik, S. J.; Russell, S. S. (1 September 2006). "The Demographic Impact of Partition: Bengal in 1947". Archived from the original (PDF) on 1 September 2006. Retrieved 22 May 2020.
- Ben Whitaker, The Biharis in Bangladesh, Minority Rights Group, London, 1971, p. 7.
- Chatterji – Spoils of partition. p. 166
- Rizvi, Uzair Hasan (10 September 2015). "Hindu refugees from Pakistan encounter suspicion and indifference in India". Dawn.
- Haider, Irfan (13 May 2014). "5,000 Hindus migrating to India every year, NA told". Dawn. Retrieved 15 January 2016.
- P. N. Luthra – Rehabilitation, pp. 18–19
- Aditi Kapoor, A home ... far from home?, The Hindu, 30 July 2000. During the Bangladesh liberation war, 11 million people from both communities took shelter in India. After the war, 1.5 million decided to stay.
- Chatterji, Joya (September 2007), "'Dispersal' and the Failure of Rehabilitation: Refugee Camp-dwellers and Squatters in WestBengal", Modern Asian Studies, 41 (5): 998, doi:10.1017/S0026749X07002831, JSTOR 4499809
- Stephen P. Cohen (2004). The Idea of Pakistan. Brookings Institution Press. p. 59. ISBN 978-0-8157-9761-6.
American scholar Allen McGrath
- Allen McGrath (1996). The Destruction of Pakistan's Democracy. Oxford University Press. p. 38. ISBN 978-0-19-577583-9.
Undivided India, their magnificent imperial trophy, was besmirched by the creation of Pakistan, and the division of India was never emotionally accepted by many British leaders, Mountbatten among them.
- Niall Ferguson (2003). Empire: how Britain made the modern world. Allen Lane. p. 349. ISBN 9780713996159.
In particular, Mountbatten put pressure on the supposedly neutral Boundary Commissioner, Sir Cyril Radcliffe—cruelly mocked at the time by W.H.Auden—to make critical adjustments in India's favor when drawing the frontier through the Punjab.
- "K. Z. Islam, 2002, The Punjab Boundary Award, In retrospect". Archived from the original on 17 January 2006. Retrieved 22 May 2020.
- Partitioning India over lunch, Memoirs of a British civil servant Christopher Beaumont. BBC News (10 August 2007).
- Stanley Wolpert, 2006, Shameful Flight: The Last Years of the British Empire in India, Oxford University Press, ISBN 0-19-515198-4
- Symonds, Richard (1950). The Making of Pakistan. London: Faber and Faber. p. 74. OCLC 1462689.
At the lowest estimate, half a million people perished and twelve millions became homeless.
- Lawrence J. Butler, 2002, Britain and Empire: Adjusting to a Post-Imperial World, p. 72
- Ronald Hyam, Britain's Declining Empire: The Road to Decolonisation, 1918–1968, p. 113; Cambridge University Press, ISBN 0-521-86649-9, 2007
- Lawrence James, Rise and Fall of the British Empire
- Judd, Dennis, The Lion and the Tiger: The Rise and Fall of the British Raj, 1600–1947. Oxford University Press: New York. (2010) p. 138.
- "Was Pakistan sufficiently imagined before independence?". The Express Tribune. 23 August 2015. Retrieved 8 March 2017.
- Ashraf, Ajaz. "The Venkat Dhulipala interview: 'On the Partition issue, Jinnah and Ambedkar were on the same page'". Scroll.in. Retrieved 8 March 2017.
- Long, Roger D.; Singh, Gurharpal; Samad, Yunas; Talbot, Ian (2015). State and Nation-Building in Pakistan: Beyond Islam and Security. Routledge. p. 167. ISBN 978-1317448204.
In the 1940s a solid majority of the Barelvis were supporters of the Pakistan Movement and played a supporting role in its final phase (1940–7), mostly under the banner of the All-India Sunni Conference which had been founded in 1925.
- John, Wilson (2009). Pakistan: The Struggle Within. Pearson Education India. p. 87. ISBN 978-8131725047.
During the 1946 election, Barelvi Ulama issued fatwas in favour of the Muslim League.
- Cesari, Jocelyne (2014). The Awakening of Muslim Democracy: Religion, Modernity, and the State. Cambridge University Press. p. 135. ISBN 978-1107513297.
For example, the Barelvi ulama supported the formation of the state of Pakistan and thought that any alliance with Hindus (such as that between the Indian National Congress and the Jamiat ulama-I-Hind [JUH]) was counterproductive.
- Jaffrelot, Christophe (2004). A History of Pakistan and Its Origins. Anthem Press. p. 224. ISBN 978-1843311492.
Believing that Islam was a universal religion, the Deobandi advocated a notion of a composite nationalism according to which Hindus and Muslims constituted one nation.
- Abdelhalim, Julten (2015). Indian Muslims and Citizenship: Spaces for Jihād in Everyday Life. Routledge. p. 26. ISBN 978-1317508755.
Madani...stressed the difference between qaum, meaning a nation, hence a territorial concept, and millat, meaning an Ummah and thus a religious concept.
- Sikka, Sonia (2015). Living with Religious Diversity. Routledge. p. 52. ISBN 978-1317370994.
Madani makes a crucial distinction between qaum and millat. According to him, qaum connotes a territorial multi-religious entity, while millat refers to the cultural, social and religious unity of Muslims exclusively.
- Jayeeta Sharma (2010) A Review of "The Partition of India," History: Reviews of New Books, 39:1, 26–27, doi:10.1080/03612759.2011.520189
- "The News International: Latest News Breaking, Pakistan News". The News International. Retrieved 22 May 2020.
- "The History Project". The History Project. Retrieved 18 November 2017.
- Sengupta, Somini (13 August 2013). "Potent Memories From a Divided India". The New York Times. ISSN 0362-4331. Retrieved 22 February 2020.
- Ghosh, Bishwanath (24 August 2019). "'Partition Voices – Untold British Stories' review: The long shadow of Partition". The Hindu. ISSN 0971-751X. Retrieved 22 February 2020.
- Mishra, Anodya (15 September 2019). "This collection of Partition interviews gives us new ways to look at migration and refugees". Scroll.in. Retrieved 22 February 2020.
- "About the Partition Museum". Retrieved 17 March 2018.
- Cleary, Joseph N. (2002). Literature, Partition and the Nation-State: Culture and Conflict in Ireland, Israel, and Palestine. Cambridge University Press. p. 104. ISBN 978-0-521-65732-7. Retrieved 27 July 2012.
The partition of India figures in a good deal of imaginative writing...
- "Progressive Artists Group of Bombay: An Overview". Artnewsnviews.com. 12 May 2012. Retrieved 18 November 2017.
- Bhatia, Nandi (1996). "Twentieth Century Hindi Literature". In Natarajan, Nalini (ed.). Handbook of Twentieth-Century Literatures of India. Greenwood Publishing Group. pp. 146–147. ISBN 978-0-313-28778-7. Retrieved 27 July 2012.
- Roy, Rituparna (2011). South Asian Partition Fiction in English: From Khushwant Singh to Amitav Ghosh. Amsterdam University Press. pp. 24–29. ISBN 978-90-8964-245-5. Retrieved 27 July 2012.
- Mandal, Somdatta (2008). "Constructing Post-partition Bengali Cultural Identity through Films". In Bhatia, Nandi; Roy, Anjali Gera (eds.). Partitioned Lives: Narratives of Home, Displacement, and Resettlement. Pearson Education India. pp. 66–69. ISBN 978-81-317-1416-4. Retrieved 27 July 2012.
- Dwyer, R. (2010). "Bollywood's India: Hindi Cinema as a Guide to Modern India". Asian Affairs. 41 (3): 381–398. doi:10.1080/03068374.2010.508231. S2CID 70892666. (subscription required)
- Sarkar, Bhaskar (2009). Mourning the Nation: Indian Cinema in the Wake of Partition. Duke University Press. p. 121. ISBN 978-0-8223-4411-7. Retrieved 27 July 2012.
- Vishwanath, Gita; Malik, Salma (2009). "Revisiting 1947 through Popular Cinema: a Comparative Study of India and Pakistan" (PDF). Economic and Political Weekly. XLIV (36): 61–69. Archived from the original (PDF) on 21 September 2013. Retrieved 27 July 2012.
- Raychaudhuri, Anindya. 2009. "Resisting the Resistible: Re-writing Myths of Partition in the Works of Ritwik Ghatak." Social Semiotics 19(4):469–481. doi:10.1080/10350330903361158.
- Bhattacharya, Ananya (24 August 2018). "Gold fact check: Truth vs fiction in Akshay Kumar film". India Today.
- Naqvi, Sibtain (19 November 2013). "Google can envision Pakistan-India harmony in less than 4 minutes…can we?". The Express Tribune.
- "Google reunion ad reignites hope for easier Indo-Pak visas". Deccan Chronicle. PTI. 15 November 2013.
- Chatterjee, Rhitu (20 November 2013). "This ad from Google India brought me to tears". The World. Public Radio International.
- Peter, Sunny (15 November 2013). "Google Search: Reunion Video Touches Emotions in India, Pakistan; Goes Viral [Video]". International Business Times.
- "Google's India-Pak reunion ad strikes emotional chord". The Times of India. 14 November 2013. Archived from the original on 17 November 2013.
- Johnson, Kay (15 November 2013). "Google ad an unlikely hit in both India, Pakistan by referring to traumatic 1947 partition". ABC News/Associated Press.
- Textbook histories
- Bandyopādhyāẏa, Śekhara (2004), From Plassey to partition: a history of modern India, Delhi: Orient Blackswan, ISBN 978-81-250-2596-2
- Bose, Sugata; Jalal, Ayesha (2004), Modern South Asia: History, Culture, Political economy: second edition, Routledge, ISBN 978-1-134-39715-0
- Brown, Judith Margaret (1994), Modern India: the origins of an Asian democracy, Oxford University Press, ISBN 978-0-19-873112-2
- Dyson, Tim (2018), A Population History of India: From the First Modern People to the Present Day, Oxford University Press, ISBN 978-0-19-882905-8
- Kulke, Hermann; Rothermund, Dietmar (2004), A history of India, Routledge, ISBN 978-0-415-32920-0
- Ludden, David (2002), India and South Asia: a short history, Oneworld, ISBN 978-1-85168-237-9
- Markovits, Claude (2004), A history of modern India, 1480–1950, Anthem Press, ISBN 978-1-84331-152-2
- Metcalf, Barbara Daly; Metcalf, Thomas R. (2006), A concise history of modern India, Cambridge University Press, ISBN 978-0-521-86362-9
- Peers, Douglas M. (2006), India under colonial rule: 1700–1885, Pearson Education, ISBN 978-0-582-31738-3
- Robb, Peter (2002), A History of India, Palgrave Macmillan (published 2011), ISBN 978-0-230-34549-2
- Spear, Percival (1990) [First published 1965], A History of India, 2, Penguin Books, ISBN 978-0-14-013836-8
- Stein, Burton; Arnold, David (2010), A History of India, John Wiley and Sons, ISBN 978-1-4051-9509-6
- Talbot, Ian (2016), A History of Modern South Asia: Politics, States, Diasporas, Yale University Press, ISBN 978-0-300-19694-8
- Talbot, Ian (2015), Pakistan: A New History, Hurst, ISBN 978-1-84904-370-0
- Talbot, Ian; Singh, Gurharpal (2009), The Partition of India, Cambridge University Press, ISBN 978-0-521-85661-4
- Wolpert, Stanley (2008), A new history of India, Oxford University Press, ISBN 978-0-19-533756-3
- Ansari, Sarah. 2005. Life after Partition: Migration, Community and Strife in Sindh: 1947–1962. Oxford, UK: Oxford University Press. 256 pages. ISBN 0-19-597834-X
- Ayub, Muhammad (2005). An army, Its Role and Rule: A History of the Pakistan Army from Independence to Kargil, 1947–1999. RoseDog Books. ISBN 978-0-8059-9594-7..
- Butalia, Urvashi. 1998. The Other Side of Silence: Voices from the Partition of India. Durham, NC: Duke University Press. 308 pages. ISBN 0-8223-2494-6
- Bhavnani, Nandita. The Making of Exile: Sindhi Hindus and the Partition of India. Westland, 2014.
- Butler, Lawrence J. 2002. Britain and Empire: Adjusting to a Post-Imperial World. London: I.B.Tauris. 256 pages. ISBN 1-86064-449-X
- Chakrabarty, Bidyut. 2004. The Partition of Bengal and Assam: Contour of Freedom (RoutledgeCurzon, 2004) online edition
- Chattha, Ilyas Ahmad (2009), Partition and Its Aftermath: Violence, Migration and the Role of Refugees in the Socio-Economic Development of Gujranwala and Sialkot Cities, 1947–1961, University of Southampton, School of Humanities, Centre for Imperial and Post-Colonial Studies
- Chatterji, Joya. 2002. Bengal Divided: Hindu Communalism and Partition, 1932—1947. Cambridge and New York: Cambridge University Press. 323 pages. ISBN 0-521-52328-1.
- Chester, Lucy P. 2009. Borders and Conflict in South Asia: The Radcliffe Boundary Commission and the Partition of Punjab. Manchester University Press. ISBN 978-0-7190-7899-6.
- Daiya, Kavita. 2008. Violent Belongings: Partition, Gender, and National Culture in Postcolonial India. Philadelphia: Temple University Press. 274 pages. ISBN 978-1-59213-744-2.
- Dhulipala, Venkat. 2015. Creating a New Medina: State Power, Islam, and the Quest for Pakistan in Late Colonial North India. Cambridge University Press. ISBN 1-10-705212-2
- Gilmartin, David. 1988. Empire and Islam: Punjab and the Making of Pakistan. Berkeley: University of California Press. 258 pages. ISBN 0-520-06249-3.
- Gossman, Partricia. 1999. Riots and Victims: Violence and the Construction of Communal Identity Among Bengali Muslims, 1905–1947. Westview Press. 224 pages. ISBN 0-8133-3625-2
- Hansen, Anders Bjørn. 2004. "Partition and Genocide: Manifestation of Violence in Punjab 1937–1947", India Research Press. ISBN 978-81-87943-25-9.
- Harris, Kenneth. Attlee (1982) pp 355–87
- Hasan, Mushirul (2001), India's Partition: Process, Strategy and Mobilization, New Delhi: Oxford University Press, ISBN 978-0-19-563504-1.
- Herman, Arthur. Gandhi & Churchill: The Epic Rivalry that Destroyed an Empire and Forged Our Age (2009)
- Ikram, S. M. 1995. Indian Muslims and Partition of India. Delhi: Atlantic. ISBN 81-7156-374-0
- Jain, Jasbir (2007), Reading Partition, Living Partition, Rawat, ISBN 978-81-316-0045-0
- Jalal, Ayesha (1993), The Sole Spokesman: Jinnah, the Muslim League and the Demand for Pakistan, Cambridge: Cambridge University Press, ISBN 978-0-521-45850-4
- Judd, Denis (2004), The lion and the tiger: the rise and fall of the British Raj, 1600–1947, Oxford University Press, ISBN 978-0-19-280579-9
- Kaur, Ravinder. 2007. "Since 1947: Partition Narratives among Punjabi Migrants of Delhi". Oxford University Press. ISBN 978-0-19-568377-6.
- Khan, Yasmin (2007), The Great Partition: The Making of India and Pakistan, Yale University Press, ISBN 978-0-300-12078-3
- Khosla, G. D. Stern Reckoning: A Survey of the Events Leading up to and Following the Partition of India. New Delhi: Oxford University Press. 358 pages. Published February 1990. ISBN 0-19-562417-3
- Lamb, Alastair (1991), Kashmir: A Disputed Legacy, 1846–1990, Roxford Books, ISBN 978-0-907129-06-6
- Mookerjea-Leonard, Debali. (2017). Literature, Gender, and the Trauma of Partition: The Paradox of Independence London and New York: Routledge. ISBN 978-1138183100.
- Moon, Penderel. (1999). The British Conquest and Dominion of India (2 vol. 1256 pp)
- Moore, R.J. (1983). Escape from Empire: The Attlee Government and the Indian Problem, the standard history of the British position
- Nair, Neeti. (2010) Changing Homelands: Hindu Politics and the Partition of India
- Page, David, Anita Inder Singh, Penderel Moon, G. D. Khosla, and Mushirul Hasan. 2001. The Partition Omnibus: Prelude to Partition/the Origins of the Partition of India 1936–1947/Divide and Quit/Stern Reckoning. Oxford University Press. ISBN 0-19-565850-7
- Pal, Anadish Kumar. 2010. World Guide to the Partition of INDIA. Kindle Edition: Amazon Digital Services. 282 KB. ASIN B0036OSCAC
- Pandey, Gyanendra. 2002. Remembering Partition: Violence, Nationalism and History in India. Cambridge University Press. 232 pages. ISBN 0-521-00250-8 online edition
- Panigrahi, D. N. 2004. India's Partition: The Story of Imperialism in Retreat. London: Routledge. online edition
- Raja, Masood Ashraf. Constructing Pakistan: Foundational Texts and the Rise of Muslim National Identity, 1857–1947, Oxford 2010, ISBN 978-0-19-547811-2
- Raza, Hashim S. 1989. Mountbatten and the partition of India. New Delhi: Atlantic. ISBN 81-7156-059-8
- Shaikh, Farzana. 1989. Community and Consensus in Islam: Muslim Representation in Colonial India, 1860–1947. Cambridge, UK: Cambridge University Press. 272 pages. ISBN 0-521-36328-4.
- Singh, Jaswant. (2011) Jinnah: India, Partition, Independence
- Talib, Gurbachan Singh, & Shromaṇī Guraduārā Prabandhaka Kameṭī. (1950). Muslim League attack on Sikhs and Hindus in the Punjab, 1947. Amritsar: Shiromani Gurdwara Parbankhak Committee.
- Talbot, Ian. 1996. Freedom's Cry: The Popular Dimension in the Pakistan Movement and Partition Experience in North-West India. Oxford University Press. ISBN 978-0-19-577657-7.
- Talbot, Ian and Gurharpal Singh (eds). 1999. Region and Partition: Bengal, Punjab and the Partition of the Subcontinent. Oxford and New York: Oxford University Press. 420 pages. ISBN 0-19-579051-0.
- Talbot, Ian. 2002. Khizr Tiwana: The Punjab Unionist Party and the Partition of India. Oxford and New York: Oxford University Press. 216 pages. ISBN 0-19-579551-2.
- Talbot, Ian. 2006. Divided Cities: Partition and Its Aftermath in Lahore and Amritsar. Oxford and Karachi: Oxford University Press. 350 pages. ISBN 0-19-547226-8.
- Wolpert, Stanley. 2006. Shameful Flight: The Last Years of the British Empire in India. Oxford and New York: Oxford University Press. 272 pages. ISBN 0-19-515198-4.
- Wolpert, Stanley. 1984. Jinnah of Pakistan
- Brass, Paul. 2003. The partition of India and retributive genocide in the Punjab, 1946–47: means, methods, and purposes. Journal of Genocide Research (2003), 5#1, 71–101
- Gilmartin, David (1998), "Partition, Pakistan, and South Asian History: In Search of a Narrative", The Journal of Asian Studies, 57 (4): 1068–1095, doi:10.2307/2659304, JSTOR 2659304
- Gilmartin, David (1998), "A Magnificent Gift: Muslim Nationalism and the Election Process in Colonial Punjab", Comparative Studies in Society and History, 40 (3): 415–436, doi:10.1017/S0010417598001352, JSTOR 179270
- Gupta, Bal K. "Death of Mahatma Gandhi and Alibeg Prisoners" www.dailyexcelsior.com
- Gupta, Bal K. "Train from Pakistan" www.nripulse.com
- Gupta, Bal K. "November 25, 1947, Pakistani Invasion of Mirpur". www.dailyexcelsior.com
- Jeffrey, Robin (1974), "The Punjab Boundary Force and the Problem of Order, August 1947", Modern Asian Studies, 8 (4): 491–520, doi:10.1017/s0026749x0000562x, JSTOR 311867
- Ravinder Kaur (2014), "Bodies of Partition: Of Widows, Residue and Other Historical Waste", Histories of Victimhood, Ed., Henrik Rønsbo and Steffen Jensen, Pennsylvania University Press
- Kaur, Ravinder. 2009. 'Distinctive Citizenship: Refugees, Subjects and Postcolonial State in India's Partition', Cultural and Social History.
- Kaur, Ravinder. 2008. 'Narrative Absence: An 'untouchable' account of India's Partition Migration, Contributions to Indian Sociology.
- Kaur Ravinder. 2007. "India and Pakistan: Partition Lessons". Open Democracy.
- Kaur, Ravinder. 2006. "The Last Journey: Social Class in the Partition of India". Economic and Political Weekly, June 2006. epw.org.in
- Khalidi, Omar (1998-01-01). "From Torrent to Trickle: Indian Muslim Migration to Pakistan, 1947–97". Islamic Studies. 37 (3): 339–352.
- Khan, Lal (2003), Partition – Can it be undone?, Wellred Publications, p. 228, ISBN 978-1-900007-15-3
- Mookerjea-Leonard, Debali (2005), "Divided Homelands, Hostile Homes: Partition, Women and Homelessness", Journal of Commonwealth Literature, 40 (2): 141–154, doi:10.1177/0021989405054314, S2CID 162056117
- Mookerjea-Leonard, Debali (2004), "Quarantined: Women and the Partition", Comparative Studies of South Asia, Africa and the Middle East, 24 (1): 35–50, doi:10.1215/1089201x-24-1-35
- Morris-Jones (1983), "Thirty-Six Years Later: The Mixed Legacies of Mountbatten's Transfer of Power", International Affairs, 59 (4): 621–628, doi:10.2307/2619473, JSTOR 2619473
- Noorani, A. G. (22 December 2001), "The Partition of India", Frontline, 18 (26), archived from the original on 2 April 2008, retrieved 12 October 2011
- Spate, O. H. K. (1947), "The Partition of the Punjab and of Bengal", The Geographical Journal, 110 (4/6): 201–218, doi:10.2307/1789950, JSTOR 1789950
- Spear, Percival (1958), "Britain's Transfer of Power in India", Pacific Affairs, 31 (2): 173–180, doi:10.2307/3035211, JSTOR 3035211
- Talbot, Ian (1994), "Planning for Pakistan: The Planning Committee of the All-India Muslim League, 1943–46", Modern Asian Studies, 28 (4): 875–889, doi:10.1017/s0026749x00012567
- Visaria, Pravin M (1969), "Migration Between India and Pakistan, 1951–61", Demography, 6 (3): 323–334, doi:10.2307/2060400, JSTOR 2060400, PMID 21331852, S2CID 23272586
- Chopra, R. M., "The Punjab And Bengal", Calcutta, 1999.
- Primary sources
- Mansergh, Nicholas, and Penderel Moon, eds. The Transfer of Power 1942–47 (12 vol., London: HMSO . 1970–83) comprehensive collection of British official and private documents
- Moon, Penderel. (1998) Divide & Quit
- Narendra Singh Sarila, "The Shadow of the Great Game: The Untold Story of India's Partition," Publisher: Carroll & Graf
- Collins, Larry and Dominique Lapierre: Freedom at Midnight. London: Collins, 1975. ISBN 0-00-638851-5
- Seshadri, H. V. (2013). The tragic story of partition. Bangalore: Sahitya Sindhu Prakashana.
- Zubrzycki, John. (2006) The Last Nizam: An Indian Prince in the Australian Outback. Pan Macmillan, Australia. ISBN 978-0-330-42321-2.
- Memoirs and oral history
- Azad, Maulana Abul Kalam (2003) [First published 1959], India Wins Freedom: An Autobiographical Narrative, New Delhi: Orient Longman, ISBN 978-81-250-0514-8
- Bonney, Richard; Hyde, Colin; Martin, John. "Legacy of Partition, 1947–2009: Creating New Archives from the Memories of Leicestershire People," Midland History, (Sept 2011), Vol. 36 Issue 2, pp 215–224
- Mountbatten, Pamela. (2009) India Remembered: A Personal Account of the Mountbattens During the Transfer of Power
- Mohammed, Javed: Walk to Freedom, Rumi Bookstore, 2006. ISBN 978-0-9701261-2-2
- 1947 Partition Archive
- Partition of Bengal – Encyclopædia Britannica
- India Memory Project – 1947 India Pakistan Partition
- The Road to Partition 1939–1947 – The National Archives
- Indian Independence Bill, 1947
- India's Partition: The Forgotten Story – British film-maker Gurinder Chadha, director of Bend It Like Beckham and Viceroy's House, travels from Southall to Delhi and Shimla to find out about the Partition of India – one of the most seismic events of the 20th century. Partition saw India divided into two new nations – independent India and Pakistan. The split led to violence, disruption, and death.
- Sir Ian Scott, Mountbatten's deputy private secretary in 1947, talking about the run up to Partition
- India: A People Partitioned oral history interviews by Andrew Whitehead, 1992-2007
- Select Research Bibliography on the Partition of India, Compiled by Vinay Lal, Department of History, UCLA; University of California at Los Angeles
- South Asian History: Colonial India – University of California, Berkeley Collection of documents on colonial India, Independence, and Partition
- Indian Nationalism – Fordham University archive of relevant public-domain documents | https://library.kiwix.org/wikipedia_en_top_maxi/A/Partition_of_India | 21 |
The status of women in the Victorian era was often seen as an illustration of the striking discrepancy between the United Kingdom's national power and wealth and what many, then and now, consider its appalling social conditions. During the era symbolized by the reign of a female monarch, Queen Victoria, women did not have the right to vote, sue, or - if they were married - own property. At the same time, women participated in the paid workforce in increasing numbers following the Industrial Revolution. Feminist ideas spread among the educated middle classes, discriminatory laws were repealed, and the women's suffrage movement gained momentum in the last years of the Victorian era.
In the Victorian era, women were seen, by the middle classes at least, as belonging to the domestic sphere, and this stereotype required them to provide their husbands with a clean home, to put food on the table and to raise their children. Women's rights were extremely limited in this era: once married, a woman lost ownership of her wages, all of her physical property (excluding land property), and all other cash she generated. When a Victorian man and woman married, the rights of the woman were legally given over to her spouse. Under the law the married couple became one entity represented by the husband, placing him in control of all property, earnings, and money. In addition to losing money and material goods to their husbands, Victorian wives became the property of their husbands, who gained rights to what their bodies produced: children, sex and domestic labor. Marriage abrogated a woman's right to consent to sexual intercourse with her husband, giving him "ownership" over her body. According to a modern feminist view, her matrimonial consent therefore became a contract to give herself to her husband as he desired.
The rights and privileges of Victorian women were limited, and both single and married women had to live with hardships and disadvantages. Victorian women were disadvantaged both financially and sexually, enduring inequalities within their marriages and society. There were sharp distinctions between men's and women's rights during this era; men were allotted more stability, financial status, and power over their homes and women. For Victorian women, marriage became a contract that was extremely difficult, if not impossible, to get out of. Women's rights groups fought for equality and over time made strides in attaining rights and privileges; however, many Victorian women endured their husbands' control and even cruelty, including sexual violence, verbal abuse, and economic deprivation, with no way out. While husbands participated in affairs with other women, wives endured infidelity, as they had no right to divorce on these grounds and divorce was considered a social taboo.
"The Angel in the House"
By the Victorian era, the concept of "pater familias", meaning the husband as head of the household and moral leader of his family, was firmly entrenched in British culture. A wife's proper role was to love, honour and obey her husband, as her marriage vows stated. A wife's place in the family hierarchy was secondary to her husband, but far from being considered unimportant, a wife's duties to tend to her husband and properly raise her children were considered crucial cornerstones of social stability by the Victorians.
Representations of ideal wives were abundant in Victorian culture, providing women with their role models. The Victorian ideal of the tirelessly patient, sacrificing wife is depicted in The Angel in the House, a popular poem by Coventry Patmore, published in 1854:
Man must be pleased; but him to please
Is woman's pleasure; down the gulf
Of his condoled necessities
She casts her best, she flings herself ...
She loves with love that cannot tire;
And when, ah woe, she loves alone,
Through passionate duty love springs higher,
As grass grows taller round a stone.
Virginia Woolf described the angel as:
immensely sympathetic, immensely charming, utterly unselfish. She excelled in the difficult arts of family life. She sacrificed herself daily ... in short, she was so constituted that she never had a mind but preferred to sympathize always with the minds and wishes of others. Above all ... she was pure. Her purity was supposed to be her chief beauty.
There are many publications from the Victorian era that give explicit direction for the man's role in the home and his marriage. Advice such as "The burden, or, rather the privilege, of making home happy is not the wife's alone. There is something demanded of the lord and master and if he fails in his part, domestic misery must follow" (published in 1883 in Our Manners and Social Customs by Daphne Dale) was common in many publications of the time.
Literary critics of the time suggested that superior feminine qualities of delicacy, sensitivity, sympathy, and sharp observation gave women novelists a superior insight into stories about home, family, and love. This made their work highly attractive to the middle-class women who bought the novels and the serialized versions that appeared in many magazines. However, a few early feminists called for aspirations beyond the home. By the end of the century, the "New Woman" was riding a bicycle, wearing bloomers, signing petitions, supporting worldwide mission activities, and talking about the vote. Feminists of the 20th century reacted with hostility to the "Angel in the House" theme, since they felt the norm was still holding back their aspirations. Virginia Woolf was adamant. In a lecture to the Women's Service League in 1931, she said that "killing the Angel in the House was part of the occupation of a woman writer."
"The Household General"
'The Household General' is a term coined in 1861 by Isabella Beeton in her influential manual Mrs Beeton's Book of Household Management. Here she explained that the mistress of a household is comparable to the commander of an army or the leader of an enterprise. To run a respectable household and secure the happiness, comfort and well-being of her family she must perform her duties intelligently and thoroughly. For example, she had to organize, delegate and instruct her servants, which was not an easy task as many of them were not reliable. Isabella Beeton's upper-middle-class readers may also have had a large complement of "domestics", a staff requiring supervision by the mistress of the house. Beeton advises her readers to maintain a "housekeeping account book" to track spending. She recommends daily entries and checking the balance monthly. In addition to tracking servants' wages, the mistress of the house was responsible for tracking payments to tradesmen such as butchers and bakers. If a household had the means to hire a housekeeper, whose duties included keeping the household accounts, Beeton goes so far as to advise readers to check the accounts of housekeepers regularly to ensure nothing was amiss.
Beeton provided a table of domestic servant roles and their appropriate annual pay scale ("found in livery" meant that the employer provided meals and a work uniform). The sheer number of Victorian servants and their duties makes it clear why expertise in logistical matters would benefit the mistress of the house. Beeton indicates that the full list of servants in this table would be expected in the household of a "wealthy nobleman"; her readers are instructed to adjust staff size and pay according to the household's available budget, and other factors such as a servant's level of experience:
|Servant||When not found in livery||When found in livery|
|Page or Footboy||£8–£18||£6–£14|
|Servant||When no extra allowance is made for tea, sugar and beer||When an extra allowance is made for tea, sugar and beer|
|Under Housemaid||£8–£12||£6 10s.–£10|
"The Household General" was expected to organise parties and dinners to bring prestige to her husband, also making it possible for them to network. Beeton gives extensively detailed instructions on how to supervise servants in preparation for hosting dinners and balls. The etiquette to be observed in sending and receiving formal invitations is given, as well as the etiquette to be observed at the events themselves. The mistress of the house also had an important role in supervising the education of the youngest children. Beeton makes it clear that a woman's place is in the home, and her domestic duties come first. Social activities as an individual were less important than household management and socialising as her husband's companion. They were to be strictly limited:
After luncheon, morning calls and visits may be made and received.... Visits of ceremony, or courtesy ... are uniformly required after dining at a friend's house, or after a ball, picnic, or any other party. These visits should be short, a stay of from fifteen to twenty minutes being quite sufficient. A lady paying a visit may remove her boa or neckerchief; but neither shawl nor bonnet....
Advice books on housekeeping and the duties of an ideal wife were plentiful during the Victorian era, and sold well among the middle class. In addition to Mrs. Beeton's Book of Household Management, there were Infant Nursing and the Management of Young Children (1866) and Practical Housekeeping; or, the duties of a home-wife (1867) by Mrs. Frederick Pedley, and From Kitchen to Garret by Jane Ellen Panton, which went through 11 editions in a decade. Shirley Forster Murphy, a doctor and medical writer, wrote the influential Our Homes, and How to Make them Healthy (1883), before he served as London's chief medical officer in the 1890s.
Working-class domestic life
Domestic life for a working-class family was far less comfortable. Legal standards for minimum housing conditions were a new concept during the Victorian era, and a working-class wife was responsible for keeping her family as clean, warm, and dry as possible in housing stock that was often literally rotting around them. In London, overcrowding was endemic in the slums inhabited by the working classes. (See Life and Labour of the People in London.) Families living in single rooms were not unusual. The worst areas had examples such as 90 people crammed into a 10-room house, or 12 people living in a single room (7 feet 3 inches by 14 feet). Rents were exorbitant; 85 percent of working-class households in London spent at least one-fifth of their income on rent, with 50 percent paying one-quarter to one-half of their income on rent. The poorer the neighbourhood, the higher the rents. Rents in the Old Nichol area near Hackney, per cubic foot, were five to eleven times higher than rents in the fine streets and squares of the West End of London. The owners of the slum housing included peers, churchmen, and investment trusts for estates of long-deceased members of the upper classes.
Domestic chores for women without servants meant a great deal of washing and cleaning. Coal-dust from stoves (and factories) was the bane of the Victorian woman's housekeeping existence. Carried by wind and fog, it coated windows, clothing, furniture and rugs. Washing clothing and linens would usually be done one day a week, scrubbed by hand in a large zinc or copper tub. Some water would be heated and added to the wash tub, and perhaps a handful of soda to soften the water. Curtains were taken down and washed every fortnight; they were often so blackened by coal smoke that they had to be soaked in salted water before being washed. Scrubbing the front wooden doorstep of the home every morning was also an important chore to maintain respectability.
Divorce and legal discrimination
Domestic violence and abuse
The law regarded men as persons, and legal recognition of women's rights as autonomous persons would be a slow process, and would not be fully accomplished until well into the 20th century (in Canada, women achieved legal recognition through the "Persons Case", Edwards v. Canada (Attorney General), in 1929). Women lost the rights to the property they brought into the marriage, even following divorce; a husband had complete legal control over any income earned by his wife; women were not allowed to open bank accounts; and married women were not able to conclude a contract without their husband's legal approval. These property restrictions made it difficult or impossible for a woman to leave a failed marriage, or to exert any control over her finances if her husband was incapable or unwilling to do so on her behalf.
Domestic violence towards wives was given increasing attention by social and legal reformers as the 19th century continued. The first animal-cruelty legislation in Britain was passed in 1824; legal protection from domestic violence, however, was not granted to women until 1853, with the Act for the Better Prevention and Punishment of Aggravated Assaults upon Women and Children. Even this law did not outright ban violence by a man against his wife and children; it merely imposed legal limits on the amount of force that was permitted.
Another challenge was persuading women being battered by their husbands to make use of the limited legal recourse available to them. In 1843, an organisation founded by animal-rights and pro-temperance activists was established to help this social cause. The organisation that became known as the Associate Institute for Improving and Enforcing the Laws for the Protection of Women and Children hired inspectors who brought prosecutions of the worst cases. It focused its efforts on working-class women, since Victorian practice was to deny that middle-class or aristocratic families were in need of such intervention. There were sometimes cracks in the facade of propriety. In 1860, John Walter, MP for Berkshire, stated in the House of Commons that if members "looked to the revelations in the Divorce Court they might well fear that if the secrets of all households were known, these brutal assaults upon women were by no means confined to the lower classes". A strong deterrent to middle-class or aristocratic wives seeking legal recourse, or divorce, was the social stigma and shunning that would follow such revelations in a public trial.
Divorce and separation
Great change in the situation of women took place in the 19th century, especially concerning marriage laws and the legal rights of women to divorce or gain custody of children. The presumption that fathers always received custody of their children, leaving the mother without any rights, slowly started to change. The Custody of Infants Act in 1839 gave mothers of unblemished character access to their children in the event of separation or divorce, and the Matrimonial Causes Act in 1857 gave women limited access to divorce. But while the husband only had to prove his wife's adultery, a woman had to prove her husband had not only committed adultery but also incest, bigamy, cruelty or desertion. In 1873 the Custody of Infants Act extended access to children to all women in the event of separation or divorce. In 1878, after an amendment to the Matrimonial Causes Act, women could secure a separation on the grounds of cruelty and claim custody of their children. Magistrates even authorised protection orders to wives whose husbands had been convicted of aggravated assault. An important change was caused by an amendment to the Married Women's Property Act 1884. This legislation recognised that wives were not chattel, or property belonging to the husband, but independent and separate persons. Through the Guardianship of Infants Act in 1886, women could be made the sole guardian of their children if their husband died. Women slowly had their rights changed so that they could eventually leave their husbands for good. Some notable dates include:
- 1857: violence recognized as grounds for divorce
- 1870: women could keep money they earned
- 1878: entitlement to spousal and child support recognized
Cultural taboos surrounding the female body
The ideal Victorian woman was pure, chaste, refined, and modest. This ideal was supported by etiquette and manners. The etiquette extended to the pretension of never acknowledging the use of undergarments (in fact, they were sometimes generically referred to as "unmentionables"). The discussion of such a topic, it was feared, would gravitate towards unhealthy attention on anatomical details. As one Victorian lady expressed it: "[those] are not things, my dear, that we speak of; indeed, we try not even to think of them". The pretence of avoiding acknowledgement of anatomical realities met with embarrassing failure on occasion. In 1859, the Hon. Eleanor Stanley wrote about an incident where the Duchess of Manchester moved too quickly while manoeuvring over a stile, tripping over her large hoop skirt:
[the Duchess] caught a hoop of her cage in it and went regularly head over heels lighting on her feet with her cage and whole petticoats above, above her head. They say there was never such a thing seen – and the other ladies hardly knew whether to be thankful or not that a part of her undergarments consisted in a pair of scarlet tartan knickerbockers (the things Charlie shoots in) which were revealed to the view of all the world in general and the Duc de Malakoff in particular".
However, despite the fact that Victorians considered the mention of women's undergarments in mixed company unacceptable, men's entertainment made great comedic material out of the topic of ladies' bloomers, including men's magazines and music hall skits.
Horse riding was a physically demanding pastime that became popular as a leisure activity among the growing middle classes. Many etiquette manuals for riding were published for this new market. For women, preserving modesty while riding was crucial. Breeches and riding trousers for women were introduced, for the practical reason of preventing chafing, yet these were worn under the dress. Riding clothes for women were made by the same tailors that made men's riding apparel, rather than by a dressmaker, so female assistants were hired to help with fittings.
The advent of colonialism and world travel presented new obstacles for women. Travel on horseback (or on donkeys, or even camels) was often impossible to do sidesaddle because the animal had not been "broken" (trained) for sidesaddle riding. Riding costumes for women were introduced that used breeches or zouave trousers beneath long coats in some countries, while jodhpurs breeches used by men in India were adopted by women. These concessions were made so that women could ride astride a horse when necessary, but they were still exceptions to the rule of riding sidesaddle until after World War I. Travel writer Isabella Bird (1831–1904) was instrumental in challenging this taboo. At age 42, she travelled abroad on a doctor's recommendation. In Hawaii, she determined that seeing the islands riding sidesaddle was impractical, and switched to riding astride. She was an ambitious traveller, going to the American West, the Rocky Mountains, Japan, China, Baghdad, Tehran, and the Black Sea. Her written accounts sold briskly.
Women's physical activity was a cause of concern at the highest levels of academic research during the Victorian era. In Canada, physicians debated the appropriateness of women using bicycles:
A series of letters published in the Dominion Medical Monthly and Ontario Medical Journal in 1896, expressed concern that women seated on bicycle seats could have orgasms. Fearful of unleashing and creating a nation of 'over-sexed' females, some physicians urged colleagues to encourage women to eschew 'modern dangers' and continue to pursue traditional leisure pursuits. However, not all medical colleagues were convinced of the link between cycling and orgasm, and this debate on women's leisure activities continued well into the 20th century.
Victorian morality and sexuality
A woman was expected to have sex with only one man: her husband. However, it was acceptable for men to have multiple partners in their lives; some husbands had lengthy affairs with other women while their wives stayed with them because divorce was not an option. If a woman had sexual contact with another man, she was seen as "ruined" or "fallen". Victorian literature and art were full of examples of women paying dearly for straying from moral expectations. Adulteresses met tragic ends in novels, including those by great writers such as Tolstoy, Flaubert or Thomas Hardy. While some writers and artists showed sympathy towards women's subjugation to this double standard, other works were didactic and reinforced the cultural norm.
In the Victorian era, sex was not discussed openly and honestly; public discussion of sexual encounters and matters was met with ignorance, embarrassment and fear. One public opinion of women's sexual desires was that women were not very troubled by sexual urges. Even where women's desires existed, acting on them carried consequences for women and their families. Limiting family size meant resisting sexual desire, except when a husband had desires that a wife was "contracted" to fulfill. Many people in the Victorian era were "factually uninformed and emotionally frigid about sexual matters". To discourage premarital sexual relations, the New Poor Law provided that "women bear financial responsibilities for out-of-wedlock pregnancies". In 1834 women were made legally and financially responsible for their illegitimate children. Sexual relations for women could not simply be about desire and feelings: that was a luxury reserved for men; for women, the consequences of sexual interactions overshadowed whatever physical desires they might possess.
Portsmouth Dockyard by James Tissot, 1877. This work is Tissot's revision of his earlier painting, The Thames, which, according to the Tate gallery, "shocked audiences when it was shown at the Royal Academy in 1876 because of the questionable sexual morals of its characters. This painting was exhibited as a corrective".
Contagious Diseases Prevention Acts
The situation of women perceived as unclean was worsened through the Contagious Diseases Acts, the first of which became law in 1864. Women suspected of being unclean were subjected to an involuntary genital examination. Refusal was punishable by imprisonment; diagnosis with an illness was punishable by involuntary confinement to hospital until perceived as cured.
The disease prevention law applied only to women, which became the primary rallying point for activists who argued that the law was both ineffective and inherently unfair to women. Women could be picked up off the streets, suspected of prostitution on little or no evidence, and subjected to an examination. These were inexpertly performed by male police officers, making the exams painful as well as humiliating. After two extensions of the law in 1866 and 1869, the acts were finally repealed in 1886. Josephine Butler was a women's rights crusader who fought to repeal the Acts.
Women were generally expected to marry and perform household and motherly duties rather than seek formal education. Even women who were not successful in finding husbands were generally expected to remain uneducated and to take a position in childcare (as a governess or as a support to other members of their family). The outlook for education-seeking women improved when Queen's College in Harley Street, London was founded in 1848 – the goal of this college was to provide governesses with a marketable education. Later, the Cheltenham Ladies' College and other girls' public schools were founded, increasing educational opportunities for women and leading eventually to the development of the National Union of Women's Suffrage Societies in 1897.
Women in the workforce
Working-class women often had occupations to make ends meet, and to ensure family income in the event that a husband became sick, injured, or died. There was no workers' compensation until late in the Victorian era, and a husband too ill or injured to work often meant an inability to pay the rent and a stay at the dreaded Victorian workhouse.
Throughout the Victorian era, some women were employed in heavy industry such as coal mines and the steel industry. Although they were employed in fewer numbers as the Victorian era continued and employment laws changed, they could still be found in certain roles. Before the Mines and Collieries Act 1842, women (and children) worked underground as "hurriers" who carted tubs of coal up through the narrow mine shafts. In Wolverhampton, the law did not have much of an impact on women's mining employment, because they mainly worked above-ground at the coal mines, sorting coal, loading canal boats, and other surface tasks. Women also traditionally did "all the chief tasks in agriculture" in all counties of England, as a government inquiry found in 1843. By the late 1860s, agricultural work was not paying well, and women turned to industrial employment.
In areas with industrial factories, women could find employment on assembly lines for items ranging from locks to canned food. Industrial laundry services employed many women (including inmates of Magdalene asylums who did not receive wages for their work). Women were also commonly employed in the textile mills that sprang up during the industrial revolution in such cities as Manchester, Leeds, and Birmingham. Working for a wage was often done from the home in London, although many women worked as "hawkers" or street vendors, who sold such things as watercress, lavender, flowers or herbs that they would collect at the Spitalfields fruit and vegetable market. Many working-class women worked as washerwomen, taking in laundry for a fee. Breeding animals such as dogs, geese, rabbits and birds in slum flats, to be sold at animal and bird markets, was also common. Housing inspectors often found livestock in slum cellars, including cows and donkeys. Spinning and winding wool, silk, and other types of piecework were a common way of earning income by working from home, but wages were very low, and hours were long; often 14 hours per day were needed to earn enough to survive. Furniture-assembling and -finishing were common piecework jobs in working-class households in London that paid relatively well. Women in particular were known as skillful "French polishers" who completed the finish on furniture. The lowest-paying jobs available to working-class London women were matchbox-making, and sorting rags in a rag factory, where flea- and lice-ridden rags were sorted to be pulped for manufacturing paper. Needlework was the single largest paid occupation for women working from home, but the work paid little, and women often had to rent sewing machines that they could not afford to buy. These home manufacturing industries became known as "sweated industries". The Select Committee of the House of Commons defined sweated industries in 1890 as "work carried on for inadequate wages and for excessive hours in unsanitary conditions". By 1906, such workers earned about a penny an hour.
Women could not expect to be paid the same wage as a man for the same work, despite the fact that women were as likely as men to be married and supporting children. In 1906, the government found that the average weekly factory wage for a woman ranged from 11s 3d to 18s 8d, whereas a man's average weekly wage was around 25s 9d. Women were also preferred by many factory owners because they could be "more easily induced to undergo severe bodily fatigue than men". Childminding was another necessary expense for many women working in factories. Pregnant women worked up until the day they gave birth and returned to work as soon as they were physically able. In 1891, a law was passed requiring women to take four weeks away from factory work after giving birth, but many women could not afford this unpaid leave, and the law was unenforceable.
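These figures are in pre-decimal currency (12 pence to a shilling, 20 shillings to a pound), which can make the gap hard to see at a glance. The short Python sketch below is only an illustrative calculation: the wage figures come from the 1906 averages quoted above, while the helper function and variable names are assumptions introduced for the example.

```python
# Illustrative sketch: convert pre-decimal wages to pence and compare them.
# Wage figures are the 1906 averages quoted above; everything else is assumed for the example.

def to_pence(shillings: int, pence: int) -> int:
    """Convert a shillings-and-pence amount to pence (12 pence per shilling)."""
    return shillings * 12 + pence

woman_low  = to_pence(11, 3)   # 11s 3d  -> 135 pence
woman_high = to_pence(18, 8)   # 18s 8d  -> 224 pence
man_avg    = to_pence(25, 9)   # 25s 9d  -> 309 pence

print(f"Women's average weekly factory wage was roughly {woman_low / man_avg:.0%} "
      f"to {woman_high / man_avg:.0%} of the average man's.")
# roughly 44% to 72%
```

On these figures, a woman's average factory earnings fell somewhere between two-fifths and three-quarters of a man's.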
As education for girls spread literacy to the working-classes during the mid- and late-Victorian era, some ambitious young women were able to find salaried jobs in new fields, such as salesgirls, cashiers, typists and secretaries. Work as a domestic, such as a maid or cook, was common, but there was great competition for employment in the more respectable, and higher-paying, households. Private registries were established to control the employment of the better-qualified domestic servants.
Throughout the Victorian era, respectable employment for women from solidly middle-class families was largely restricted to work as a school teacher or governess. Once telephone use became widespread, work as a telephone operator became a respectable job for middle-class women needing employment.
Three medical professions were opened to women in the 19th century: nursing, midwifery, and doctoring. However, it was only in nursing, the one most subject to the supervision and authority of male doctors, that women were widely accepted. Victorians thought that the doctor's profession belonged characteristically to men, and that a woman should not intrude upon this area but keep to the conventions they believed the will of God had assigned to her. In short, Englishmen would not have women surgeons or physicians; they confined them to their role as nurses. Florence Nightingale (1820–1910) was an important figure in renewing the traditional image of the nurse as the self-sacrificing, ministering angel—the 'Lady with the lamp', spreading comfort as she passed among the wounded. She succeeded in modernising the nursing profession, promoting training for women and teaching them courage, confidence and self-assertion.
Middle-class women's leisure activities included in large part traditional pastimes such as reading, embroidery, music, and traditional handicrafts. Upper-class women donated handicrafts to charity bazaars, which allowed these women to publicly display and sell their handicrafts.
More modern pursuits were introduced to women's lives during the 19th century. Opportunities for leisure activities increased dramatically as real wages continued to grow and hours of work continued to decline. In urban areas, the nine-hour workday became increasingly the norm; the 1874 Factory Act limited the workweek to 56.5 hours, encouraging the movement toward an eventual eight-hour workday. Helped by the Bank Holiday Act of 1871, which created a number of fixed holidays, a system of routine annual vacations came into play, starting with white-collar workers and moving into the working class. Some 200 seaside resorts emerged thanks to cheap hotels and inexpensive railway fares, widespread bank holidays and the fading of many religious prohibitions against secular activities on Sundays. Middle-class Victorians used the train services to visit the seaside. Large numbers travelling to quiet fishing villages such as Worthing, Morecambe and Scarborough began turning them into major tourist centres, and entrepreneurs led by Thomas Cook saw tourism and overseas travel as viable business models.
By the late Victorian era, the leisure industry had emerged in all cities with many women in attendance. It provided scheduled entertainment of suitable length at convenient locales at inexpensive prices. These included sporting events, music halls, and popular theater. Women were now allowed in some sports, such as archery, tennis, badminton and gymnastics.
In the early part of the nineteenth century, it was believed that physical activity was dangerous and inappropriate for women. Girls were taught to reserve their delicate health for the express purpose of birthing healthy children. Furthermore, the physiological difference between the sexes helped to reinforce the societal inequality. An anonymous female writer was able to contend that women were not intended to fill male roles, because "women are, as a rule, physically smaller and weaker than men; their brain is much lighter; and they are in every way unfitted for the same amount of bodily or mental labour that men are able to undertake." Yet by the end of the century, medical understanding of the benefits of exercise created a significant expansion in physical culture for girls. Thus by 1902, The Girl's Empire magazine was able to run a series of articles on "How to Be Strong", proclaiming, "The old-fashioned fallacies regarding health, diet, exercise, dress, &c., have nearly all been exploded, and to-day women are discarding the old ideas and methods, and entering into the new régime with a zest and vigour which bodes well for the future."
Girls' magazines, such as The Girl's Own Paper and The Girl's Empire, frequently featured articles encouraging girls to take up daily exercises or learn how to play a sport. Popular sports for girls included hockey, golf, cycling, tennis, fencing, and swimming. Of course, many of these sports were limited to the middle and upper classes who could afford the necessary materials and free time needed to play. Nonetheless, the inclusion of girls in physical culture created a new space for girls to be visible outside of the home and to partake in activities previously only open to boys. Sports became central to the lives of many middle-class girls, to the point where social commentators worried it would overshadow other cultural concerns. For example, a 1902 Girl's Own Paper article on "Athletics for Girls" bewailed, "To hear some modern schoolgirls, and even modern mothers, talk, one would suppose that hockey was the chief end of all education! The tone of the school—the intellectual training—these come in the second place. Tennis, cricket, but above all, hockey!"
Nevertheless, older cultural conventions connecting girls to maternity and domesticity continued to influence girls' development. Thus, while girls had more freedom than ever before, much of the physical culture for girls was simultaneously justified in terms of motherhood: athletic, healthy girls would have healthier children, better able to improve the British race. For instance, an early article advising girls to exercise stresses the future role of girls as mothers to vindicate her argument: "If, then, the importance of duly training the body in conjunction with the mind is thus recognised in the cause of our boys, surely the future wives and mothers of England—for such is our girls' destiny—may lay claim to a no less share of attention in this respect."
The Fair Toxophilites by William Powell Frith (1872). Archery, or toxophily, became a popular leisure activity for upper-class women in Britain.
An illustration from the book Horsemanship for Women by Theodore Hoe Mead (1887). Women equestrians rode "side saddle", succeeding at challenging manoeuvres despite this sport handicap.
A Rally by Sir John Lavery. Badminton and tennis were popular occasions for parties, with women playing "mixed doubles" alongside male players.
Victorian women's fashion
Victorian women's clothing followed trends that emphasised elaborate dresses, skirts with wide volume created by the use of layered material such as crinolines, hoop skirt frames, and heavy fabrics. Because of the impracticality and health impact of the era's fashions, a reform movement began among women.
The ideal silhouette of the time demanded a narrow waist, which was accomplished by constricting the abdomen with a laced corset. While the silhouette was striking, and the dresses themselves were often exquisitely detailed creations, the fashions were cumbersome. At best, they restricted women's movements and at worst, they had a harmful effect on women's health. Physicians turned their attention to the use of corsets and determined that they caused several medical problems: compression of the thorax, restricted breathing, organ displacement, poor circulation, and prolapsed uterus.
Articles advocating the reform of women's clothing by the British National Health Society, the Ladies' Dress Association, and the Rational Dress Society were reprinted in The Canada Lancet, Canada's medical journal. In 1884, Dr. J. Algernon Temple of Toronto even voiced concern that the fashions were having a negative impact on the health of young women from the working classes. He pointed out that a young working-class woman was likely to spend a large part of her earnings on fine hats and shawls, while "her feet are improperly protected, and she wears no flannel petticoat or woollen stockings".
Florence Pomeroy, Viscountess Harberton, was president of the Rational Dress movement in Britain. At a National Health Society exhibition held in 1882, Lady Harberton presented her invention of a "divided skirt", a long skirt that cleared the ground, with separate halves formed from material attached to the bottom of the skirt. She hoped that her invention would become popular by supporting women's freedom of physical movement, but the British public was not impressed by the invention, perhaps because of the negative "unwomanly" association of the style with the American Bloomers movement. Amelia Jenks Bloomer had encouraged the wearing of visible bloomers by feminists to assert their right to wear comfortable and practical clothing, but it was no more than a passing fashion itself among radical feminists. The movement to reform women's dress would persist and have long-term success, however; by the 1920s, Coco Chanel was enormously successful at selling a progressive, far less restrictive silhouette that abandoned the corset and raised hemlines. The new silhouette symbolised modernism for trendy young women and became the 20th-century standard. Other Paris designers continued reintroducing pants for women and the trend was gradually adopted over the next century.
Fashion trends, in one sense, travelled "full circle" over the course of the Victorian era. The popular women's styles during the Georgian era, and at the very beginning of Victoria's reign, emphasised a simple style influenced by flowing gowns worn by women in Ancient Greece and Rome. The Empire waist silhouette was replaced by a trend towards ornate styles and an artificial silhouette, with the restrictiveness of women's clothing reaching its low point during the mid-century passion for narrow corseted waists and hoop skirts. The iconic wide-brimmed women's hats of the later Victorian era also followed the trend towards ostentatious display. Hats began the Victorian era as simple bonnets. By the 1880s, milliners were tested by the competition among women to top their outfits with the most creative (and extravagant) hats, designed with expensive materials such as silk flowers and exotic plumes such as ostrich and peacock. As the Victorian era drew to a close, however, fashions were showing indications of a popular backlash against excessive styles. Model, actress and socialite Lillie Langtry took London by storm in the 1870s, attracting notice for wearing simple black dresses to social events. Combined with her natural beauty, the style appeared dramatic. Fashions followed her example (as well as Queen Victoria's wearing of mourning black later in her reign). According to Harold Koda, the former Curator-in-chief of the Metropolitan Museum of Art's Costume Institute, "The predominantly black palette of mourning dramatizes the evolution of period silhouettes and the increasing absorption of fashion ideals into this most codified of etiquettes," said Koda, "The veiled widow could elicit sympathy as well as predatory male advances. As a woman of sexual experience without marital constraints, she was often imagined as a potential threat to the social order."
Evolution of Victorian women's fashion
Mrs. Lillie Langtry by George Frederic Watts (1880).
Fashionable women in Queensland, Australia around 1900.
Women subjects of the British Empire
Queen Victoria reigned as the monarch of Britain's colonies and as Empress of India. The influence of British imperialism and British culture was powerful throughout the Victorian era. Women's roles in the colonial countries were determined by the expectations associated with loyalty to the Crown and the cultural standards that it symbolised.
The upper classes of Canada were almost without exception of British origin during the Victorian era. At the beginning of the Victorian era, British North America included several separate colonies that joined together as a Confederation in 1867 to create Canada. Military and government officials and their families came to British North America from England or Scotland, and less often were of Protestant Irish origin. Most business interests were controlled by Canadians who were of British stock. Minority groups who immigrated to Canada, including large numbers of Roman Catholic Irish and, later, Ukrainians, Poles, and other European immigrants, struggled for economic and government influence. French-Canadians remained largely culturally isolated from English-speaking Canadians (a situation later described in Two Solitudes by Hugh MacLennan). Visible minority groups, such as indigenous First Nations and Chinese labourers, were marginalised and suffered profound discrimination. Women's status was thus heavily dependent upon their ethnic identity as well as their place within the dominant British class structure.
English-speaking Canadian writers became popular, especially Catharine Parr Traill and her sister Susanna Moodie, middle-class English settlers who published memoirs of their demanding lives as pioneers. Traill published The Backwoods of Canada (1836) and Canadian Crusoes (1852), and Moodie published Roughing it in the Bush (1852) and Life in the Clearings (1853). Their memoirs recount the harshness of life as women settlers, but were nonetheless popular.
Upper-class Canadian women emulated British culture and imported as much of it as possible across the Atlantic. Books, magazines, popular music, and theatre productions were all imported to meet women's consumer demand.
Upper-class women supported philanthropic causes similar to the educational and nursing charities championed by upper-class women in England. The Victorian Order of Nurses, still in existence, was founded in 1897 as a gift to Queen Victoria to commemorate her Diamond Jubilee. The Imperial Order of the Daughters of the Empire, founded in 1900, supports educational bursaries and book awards to promote Canadian patriotism, but also to support knowledge of the British Empire. Both organisations had Queen Victoria as their official patron. One of the patrons of Halifax's Victoria School of Art and Design (founded in 1887 and later named the Nova Scotia College of Art and Design) was Anna Leonowens. Women began making headway in their struggle to gain access to higher education: in 1875, the first woman university graduate in Canada was Grace Annie Lockhart (Mount Allison University). In 1880, Emily Stowe became the first woman licensed to practice medicine in Canada.
Women's legal rights made slow progress throughout the 19th century. In 1859, Upper Canada passed a law allowing married women to own property. In 1885, Alberta passed a law allowing unmarried women who owned property to vote and hold office in school matters.
Women's suffrage would not be achieved until the World War I period. Suffrage activism began during the later decades of the Victorian era. In 1883, the Toronto Women's Literary and Social Progress Club met and established the Canadian Women's Suffrage Association.
- Goodwin, Harvey (1885). . London: Society for Promoting Christian Knowledge.
- Buckner, Phillip Alfred (2005). Rediscovering the British World. Calgary: University of Calgary Press.
- Buckner, Phillip Alfred (2005). Rediscovering the British World. Calgary: University of Calgary Press. p. 137.
- Baines, Barbara J. (1998). "Effacing Rape in Early Modern Representation". ELH. 65 (1): 69–98. doi:10.1353/elh.1998.0009. JSTOR 30030170. S2CID 143715436.
- Kreps, Barbara Irene (Spring 2002). "The Paradox of Women: The Legal Position of Early Modern Wives and Thomas Dekker's The Honest Whore". ELH. 69 (1): 83–102. doi:10.1353/elh.2002.0007. S2CID 144628070.
- Bailey, Joanne (Winter 2007). "English Marital Violence in Litigation, Literature and the Press". Women's History. 19 (4): 144–153. doi:10.1353/jowh.2007.0065. S2CID 144830901.
- Forman, Cody Lisa (2000). "The Politics of Illegitimacy in an Age of Reform: Women, Reproduction, and Political Economy in England's New Poor Law of 1834". Women's History. 11 (4): 131–156. doi:10.1353/jowh.2000.0005. S2CID 153778368.
- K. Theodore Hoppen, The mid-Victorian generation, 1846–1886 (2000), pp 316ff.
- Patmore, Coventry Kelsey Dighton. "The Angel in the House". Project Gutenberg. Retrieved 6 November 2011.
- Woolf, Virginia (1996). "The Professions of Women". In Gilbert, Sandra; Susan Gubar (eds.). Norton Anthology of Literature by Women (2 ed.). W. W. Norton & Company. p. 1346. ISBN 978-0-393-96825-5.
- Erbsen, Wayne. Manners and Morals of Victorian America. Native Ground Books & Music, Ashville; 2009. ISBN 978-1-883206-54-3. LCCN: 2008943591.
- F. Elizabeth Gray, "Angel of the House" in Adams, ed., Encyclopedia of the Victorian Era (2004) 1: 40–41
- Marcus, Jane (1981). New Feminist Essays on Virginia Woolf. U of Nebraska Press. pp. 45–46. ISBN 0803230702.
- Beeton, Isabella (1861). "The Book of Household Management". Retrieved 11 November 2011.
- Flanders, Judith (2003). The Victorian House. London: Harper Perennial. pp. 392–3. ISBN 0-00-713189-5.
- Wise, Sarah (2009). The Blackest Streets: The Life and Death of a Victorian Slum. London: Vintage Books. p. 6. ISBN 978-1-84413-331-4.
- Wise, Sarah (2009). The Blackest Streets: The Life and Death of a Victorian Slum. London: Vintage Books. pp. 9–10. ISBN 978-1-84413-331-4.
- Murray, Janet Horowitz (1982). Strong-Minded Women and Other Lost Voices from Nineteenth-Century England. New York: Pantheon Books. p. 177. ISBN 0-394-71044-4.
- Murray, Janet Horowitz (1982). Strong-Minded Women and Other Lost Voices from Nineteenth-Century England. New York: Pantheon Books. p. 179. ISBN 0-394-71044-4.
- Wise, Sarah (2009). The Blackest Streets: The Life and Death of a Victorian Slum. London: Vintage Books. p. 112. ISBN 978-1-84413-331-4.
- Wise, Sarah (2009). The Blackest Streets: The Life and Death of a Victorian Slum. London: Vintage Books. p. 113. ISBN 978-1-84413-331-4.
- Hurvitz, Rachael. "Women and Divorce in the Victorian Era". Retrieved 6 October 2015.
- Cunnington, C. Willett (1990). English Women's Clothing in the Nineteenth Century: A Comprehensive Guide with 1,117 Illustrations. Dover Publications. p. 20. ISBN 978-0-486-26323-6.
- Cunnington, C. Willett (1990). English Women's Clothing in the Nineteenth Century: A Comprehensive Guide with 1,117 Illustrations. Dover Publications. pp. 20–1. ISBN 978-0-486-26323-6.
- Cunnington, C. Willett (1990). English Women's Clothing in the Nineteenth Century: A Comprehensive Guide with 1,117 Illustrations. Dover Publications. p. 22. ISBN 978-0-486-26323-6.
- Berg, Valerie (2010). The Berg Companion to Fashion. Berg Publishers. pp. 249–50. ISBN 978-1-84788-592-0.
- Berg, Valerie (2010). The Berg Companion to Fashion. Berg Publishers. p. 250. ISBN 978-1-84788-592-0.
- O'Connor, Eileen. "Medicine and Women's Clothing and Leisure Activities in Victorian Canada". Yale Journal for Humanities in Medicine. Archived from the original on 1 October 2011. Retrieved 26 October 2011.
- Marsh, Jan. "Sex & Sexuality in the 19th Century". Victoria and Albert Museum. Retrieved 4 March 2013.
- "Portsmouth Dockyards by James Tissot". Tate Collection. Retrieved 7 November 2011.
- McElroy, Wendy. "The Contagious Disease Acts". Archived from the original on 9 October 2011. Retrieved 30 October 2011.
- "Education in Victorian Britain". The British Library. Retrieved 24 January 2017.
- "Women at Work in the 19th Century". Wolverhampton City Council. Retrieved 6 November 2011.
- Perkin, Joan (1993). Victorian Women. London: John Murray (Publishers) Ltd. p. 188. ISBN 0-7195-4955-8.
- "Women of the "Lower" Working Class". www.victorianweb.org. Retrieved 26 July 2016.
- Perkin, Joan (1993). Victorian Women. London: John Murray (Publishers) Ltd. p. 189. ISBN 0-7195-4955-8.
- Wise, Sarah (2009). The Blackest Streets: The Life and Death of a Victorian Slum. London: Vintage Books. pp. 60, 64. ISBN 978-1-84413-331-4.
- Perkin, Joan (1993). Victorian Women. London: John Murray (Publishers) Ltd. pp. 189–90. ISBN 0-7195-4955-8.
- "The working classes and the poor". The British Library. Retrieved 26 July 2016.
- Perkin, Joan (1993). Victorian Women. London: John Murray (Publishers) Ltd. p. 191. ISBN 0-7195-4955-8.
- Perkin, Joan (1993). Victorian Women. London: John Murray (Publishers) Ltd. pp. 192–3. ISBN 0-7195-4955-8.
- Wise, Sarah (2009). The Blackest Streets: The Life and Death of a Victorian Slum. London: Vintage Books. p. 61. ISBN 978-1-84413-331-4.
- Whitlock, Tammy C. (2016). Crime, Gender and Consumer Culture in Nineteenth-Century England. Routledge. ISBN 9781351947572.
- Waddington, Keir (2000). Charity and the London hospitals, 1850-1898. London: Royal Historical Society. pp. 47–49. ISBN 9780861932467.
- G. R. Searle, A New England?: Peace and War, 1886–1918 (Oxford University Press, 2004), 529–70.
- Hugh Cunningham, Time, work and leisure: Life changes in England since 1700 (2014)
- John K. Walton, The English seaside resort. A social history 1750–1914 (1983)
- Searle, A New England? pp 547–53
- Anonymous (1876). Women's Work: A Woman's Thoughts on Women's Rights. London: William Blackwood and Sons. p. 7.
- Anonymous (1902). How to Be Strong: Special Exercises with the Sandow Grip Dumb-bell. London: Andrew Melrose Publishing Company. p. 4.
- Watson, Lily (25 October 1902). "Athleticism for Girls". The Girl's Own Paper. 24: 62.
- Arnold, Mrs. Wallace (17 May 1884). "The Physical Education of Girls". The Girl's Own Paper. 5: 516.
- Murray, Janet Horowitz (1982). Strong-Minded Women and Other Lost Voices from 19th Century England. New York: Pantheon Books. pp. 68–70. ISBN 0-394-71044-4.
- Marian Fowler, The Embroidered Tent: Five Gentlewomen in Early Canada: Elizabeth Simcoe, Catharine Parr Traill, Susanna Moodie, Anna Jameson, Lady Dufferin (House Of Anansi, 1982).
- Adams, James Eli, ed. Encyclopedia of the Victorian Era (4 Vol. 2004), short essays on a wide range of topics by experts
- Altick, Richard Daniel. Victorian People and Ideas: A Companion for the Modern Reader of Victorian Literature. (1974) online free
- Anderson, Patricia. When Passion Reigned: Sex and the Victorians (Basic Books, 1995).
- Attwood, Nina. The Prostitute's Body: Rewriting Prostitution in Victorian Britain (Routledge, 2015).
- Barrett-Ducrocq, Francoise. Love in the Time of Victoria: Sexuality, Class and Gender in Nineteenth-Century London (Verso, 1991).
- Branca, Patricia. Silent Sisterhood: Middle Class Women in the Victorian Household. (Carnegie Mellon UP, 1975).
- DeLamont, Sara, and Lorna Duffin, eds. The Nineteenth-Century Woman: Her Cultural and Physical Worlds (1978).
- Doughan, David, and Peter Gordon. Dictionary of British Women's Organisations, 1825-196 (Routledge, 2014).
- Flanders, Judith. Inside the Victorian Home: A Portrait of Domestic Life in Victorian England (2004).
- Gorham, Deborah. The Victorian girl and the feminine ideal (Routledge, 2012).
- Hawkins, Sue. Nursing and women's labour in the nineteenth century: the quest for independence (Routledge, 2010).
- Kent, Christopher. "Victorian social history: post-Thompson, post-Foucault, postmodern." Victorian Studies (1996): 97–133. JSTOR 3828799
- Kent, Susan Kingsley. Sex and suffrage in Britain 1860–1914 (1987)
- Kramer, David. "George Gissing and Women's Work: Contextualizing the Female Professional." English Literature in Transition, 1880–1920 43.3 (2000): 316–330.
- Malone, Cynthia Northcutt. "Near Confinement: Pregnant Women in the Nineteenth-Century British Novel." Dickens Studies Annual (2000): 367–385. online. JSTOR 44371995
- Martin, Jane. Women and the politics of schooling in Victorian and Edwardian England (1999).
- Matus, Jill L. Unstable Bodies: Victorian Representations of Sexuality and Maternity (Manchester UP, 1995).
- Mitchell, Sally. Daily Life in Victorian England. (1996).
- Mitchell, Sally. The New Girl: Girls' Culture in England, 1880–1915 (1995).
- Murdoch, Lydia. Daily life of Victorian women (ABC-CLIO, 2013).
- Murray, Janet Horowitz. Strong-minded women: and other lost voices from nineteenth-century England (1982).
- O'Gorman, Francis, ed. The Cambridge companion to Victorian culture (2010)
- Perkin, Harold. The Origins of Modern English Society: 1780–1880 (1969) online at Questia
- Perkin, Joan. Victorian women (NYU Press, 1995).
- Poovey, Mary. Uneven Developments: The Ideological Work of Gender in Mid-Victorian England (U of Chicago Press, 1988).
- Roberts, Adam Charles, ed. Victorian culture and society: the essential glossary (2003).
- Roderick, Gordon. Victorian education and the ideal of womanhood (Routledge, 2016).
- Ross, Ellen. Love and Toil: Motherhood in Outcast London, 1870–1918 (1993).
- Thompson, F. M. L. Rise of Respectable Society: A Social History of Victorian Britain, 1830–1900 (1988) deals with family, marriage, & childhood.
- Walkowitz, Judith R. "History and the Politics of Prostitution: Prostitution and the Politics of History." in Marlene Spanger and May-Len Skilbrei, eds., Prostitution Research in Context (2017) pp. 18–32.
| https://wiki-offline.jakearchibald.com/wiki/Women_in_the_Victorian_era | 21
Hyperinflation describes a situation in which the prices of goods and services rise uncontrollably over a defined period of time. In other words, hyperinflation is extremely rapid inflation.
Hyperinflation is commonly defined as prices rising by more than 50% per month. At that rate, a loaf of bread could cost one amount in the morning and a higher one in the afternoon.
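To see why a 50% monthly rate is so destructive, it helps to compound it over a year. The short Python sketch below is purely illustrative: the 50% threshold comes from the definition above, while the starting price of 1.00 is an assumption chosen for the example.

```python
# Illustrative sketch: compound a 50% monthly price rise over twelve months.
# The 50% threshold is the definition quoted above; the starting price is assumed.

monthly_rate = 0.50   # 50% price increase per month (the hyperinflation threshold)
price = 1.00          # assumed starting price of a loaf of bread, in local currency units

for month in range(1, 13):
    price *= 1 + monthly_rate
    print(f"Month {month:2d}: price = {price:,.2f}")

annual_rate = (1 + monthly_rate) ** 12 - 1
print(f"Equivalent annual inflation: {annual_rate:.0%}")   # roughly 12,875%
```

At that compounding rate, the same loaf of bread costs about 130 times its original price after one year.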
Causes Of Hyperinflation:
Hyperinflation has two main causes: an increase in the money supply and demand-pull inflation, both of which can significantly affect a nation's gross domestic product.
The increase in the money supply is often caused by a government printing money and injecting it into the domestic economy. With more money in circulation, prices rise.
Demand-pull inflation occurs when a surge in demand exceeds supply, sending prices higher. This can happen due to increased consumer spending due to a growing economy, a sudden rise in exports, or more government spending.
Hyperinflation tends to occur during a period of economic turmoil or depression.
Countries that have suffered horrendous inflation rates include Germany, Venezuela, Zimbabwe, and the United States during the Civil War. Venezuela is still trying to cope with hyperinflation in the present day.
Effects Of Hyperinflation:
– To keep from paying more tomorrow, people begin hoarding. This creates shortages. It starts with durable goods. If hyperinflation continues, people hoard perishable goods, like bread and milk. These daily supplies become scarce, and the economy falls apart.
– People lose their life savings as cash becomes worthless. For that reason, the elderly are the most vulnerable to hyperinflation.
– Hyperinflation sends the value of the currency plummeting in foreign exchange markets. The nation's importers go out of business as the cost of foreign goods skyrockets. However, there are two winners in hyperinflation. The first are those who took out loans and find that rising prices make their debt worthless by comparison, until it is virtually wiped out. Exporters are also winners, because the falling value of the local currency makes exports cheaper than those of foreign competitors. Additionally, exporters receive hard foreign currency, which increases in value as the local currency falls. | https://explified.com/what-is-hyperinflation/ | 21
The pancreas is an organ of the digestive system located deep in the upper part of the abdomen, behind the stomach and in front of the spine.
The pancreas is only about 2 inches wide and 6 to 8 inches long, and sits horizontally across the abdomen. It is composed of 3 contiguous parts:
- The large, rounded portion of the gland is called the head. It is located on the right side of the abdomen, abutting the beginning of the small intestine, which is called the duodenum.
- The middle section, called the body, is tucked behind the stomach.
- The thin end of the pancreas, called the tail, is located on the left side of the abdomen, next to the spleen.
The pancreas is a glandular organ composed of two types of tissue: exocrine tissue, which produces and secretes substances into a duct that drains into the duodenum, and endocrine tissue, which produces and secretes substances into the blood. Exocrine tissue makes up 95 percent of the pancreas, and endocrine tissue makes up the remaining 5 percent.
Exocrine glandular tissue produces pancreatic enzymes. These enzymes travel down the pancreatic duct and into the duodenum where they aid in the digestion of food. The endocrine glandular tissue of the pancreas produces hormones and releases them into the bloodstream. Two of these hormones, insulin and glucagon, help control blood sugar levels.
What is Cancer of the Pancreas?
Pancreatic cancer develops when abnormal cells in the pancreas grow and divide out of control to form a tumor.
Because the pancreas lies deep in the abdomen, a doctor performing an examination on a patient would not be able to feel a pancreatic tumor. Pancreatic cancer has no early warning signs, and there are currently no effective screening tests. As a result, pancreatic cancer is usually discovered late. Often, the diagnosis is not made until the cancer has spread to other areas of the body (stage IV). However, research focused on better diagnostic tests and newer treatments provides a more optimistic future for patients diagnosed with pancreatic cancer. In fact, a blood test and better scans are in development.
By 2030, pancreatic cancer is expected to be the second-leading cause of cancer-related deaths in the United States, second only to lung cancer. In 2020, the American Cancer Society estimated there would be 57,600 new pancreatic cancer cases and that 47,050 people would die from the disease.
Types of Pancreatic Cancer
The most common type of pancreatic cancer arises from the exocrine cells and is called a pancreatic ductal adenocarcinoma (PDAC). These tumors are designated “ductal” because they microscopically form structures that resemble the pancreatic ducts. About two-thirds of all pancreatic cancers arise in the head of the pancreas. The remainder arise in the body and tail. These tumors are malignant, meaning they can invade nearby tissues and organs. Cancerous cells can also spread through the blood and lymphatic systems to other parts of the body. When this occurs, they are called metastatic cancer.
Tumors can also resemble the endocrine cells of the pancreas. These rare tumors, called islet cell tumors, pancreatic endocrine neoplasms, or pancreatic neuroendocrine tumors, are generally less aggressive and may be curable if detected early. It is important to distinguish between exocrine and endocrine tumors because each has different signs and symptoms, is diagnosed using different tests, has different treatments, and has a different prognosis (likely course of the disease). Our research efforts are focused on PDAC.
Precursors to Pancreatic Cancer
An understanding of the lesions that give rise to pancreatic cancer is important because many of these precursor lesions can be identified and removed before they cause pancreatic cancer. Some of these precursors form cysts, which are collections of fluid within the substance of the pancreas.
Almost 3 percent of American adults have a pancreatic cyst. Improvements in imaging tests over the past decade have led to a significant increase in the number of patients found to have a cyst in their pancreas. Most of these cysts are harmless and can be safely watched and followed.
Intraductal papillary mucinous neoplasms (IPMNs) and mucinous cystic neoplasms (MCNs) have been recognized as special types of cysts in the pancreas because they are precursor lesions that can later progress to invasive cancers if left untreated. Both IPMNs and MCNs are called “mucinous” because they produce large amounts of mucus, which, in the case of IPMNs, can clog and enlarge the pancreatic duct. IPMNs and MCNs are very different from most pancreatic tumors because they may be present for a long time without spreading. Surgical removal is the treatment of choice for IPMN cysts that are at high risk for progressing to invasive pancreatic cancer. However, doctors have to balance the risk of over-treating patients with harmless cysts against the benefit of removing a precancerous lesion. Many small IPMN and MCN cysts can safely be followed with annual surveillance imaging, most commonly using magnetic resonance imaging (MRI) scans.
Because it can be hard to tell which IPMNs and MCNs are precancerous and which are harmless, researchers have been studying them and their genetic makeup for new ways to determine which are more likely to progress to pancreatic cancer. Our researchers are actively developing new molecular tests to better classify pancreatic cysts. Recently, researchers at Johns Hopkins identified a panel of molecular markers and clinical features that show promise for classifying pancreatic cysts and determining which cysts require surgery. This panel has the potential to lower the number of unnecessary surgeries by an overwhelming 91 percent. This more specific panel of markers is likely to provide physicians with additional information to help them determine whether surgery or surveillance of the cyst(s) is the most appropriate course of action for their patients, based on the type of mutation they see in a particular cyst. Avoiding unnecessary pancreatic surgery is important, and this research on cysts is one step forward.
What causes pancreatic cancer?
Genes and Pancreatic Cancer
All the cells in the body contain DNA. DNA is the molecule in the cell nucleus that carries the instructions (genes) for making living organisms. When cells grow and divide, they also copy their DNA.
Research conducted by Dr. Vogelstein at Johns Hopkins found that random, unpredictable ‘mistakes’ that occur when DNA is copied account for nearly two-thirds of cancer mutations, and that environmental factors account for another 29 percent. Mutations in DNA occur frequently, especially when cells divide. Cells have an exceptional ability to repair these changes in DNA. However, the DNA repair mechanisms can also fail. When they do, these mistakes in DNA can be passed along to future copies of the altered cell. More abnormal cells can then be produced and when these abnormal cells continue to grow unchecked, cancer may develop.
The DNA mutations that cause pancreatic cancer may be either inherited from a parent or acquired as we age. Inherited mutations are carried in the DNA of a person’s reproductive cells and can be passed on to that person’s children.
Not everyone who has an inherited mutation will develop pancreatic cancer.
Acquired mutations are ones that develop during a person’s lifetime, either as random mutations in DNA or in response to injuries from harmful environmental factors such as exposure to the carcinogens in tobacco smoke or cosmic rays. Scientists believe that most cancers result from complex DNA changes that involve many different genes. Some of these outside factors are called risk factors. Certain risk factors increase the chances of a person developing cancer. Not everyone who has an acquired mutation will develop pancreatic cancer. It is important to note that pancreatic cancer is relatively rare, striking only 12 to 13 people per 100,000 each year, so even doubling a rare risk still means that the risk is very low.
Family History and Pancreatic Cancer
Pancreatic cancer can run in families, which means that blood relatives of patients with pancreatic cancer may have an increased risk of developing the disease. The risk depends on the gene inherited. If the inherited gene isn't known, inherited risk can still be estimated based on the number of first-degree relatives (i.e., a sibling, parent, or child) an individual has who have been diagnosed with pancreatic cancer. One first-degree relative with pancreatic cancer means a two- to four-fold risk, two relatives increase the risk six- to seven-fold, and three first-degree relatives, which is highly unusual, result in a 32-fold risk. Having a family member who was diagnosed with pancreatic cancer before 50 years of age adds further to the risk. However, not everyone with a family history of pancreatic cancer will develop the disease.
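To put these multipliers in perspective, the short sketch below (an illustrative calculation, not part of the original text) combines the fold increases above with the baseline incidence of roughly 12 to 13 cases per 100,000 people per year cited earlier. It is a deliberate simplification that ignores age and lifetime accumulation, but it shows why even a 32-fold increase in a rare disease still translates into a small absolute annual risk.

```python
# Illustrative only: rough conversion of relative (fold) risk to absolute annual risk.
# The baseline incidence and fold increases come from the text above; the arithmetic
# is a simplification that ignores age structure and lifetime risk.

BASELINE_ANNUAL_INCIDENCE = 12.5 / 100_000  # about 12 to 13 new cases per 100,000 people per year

FOLD_INCREASE = {
    "no affected first-degree relatives": 1,
    "one affected first-degree relative": 4,    # upper end of the two- to four-fold range
    "two affected first-degree relatives": 7,   # upper end of the six- to seven-fold range
    "three affected first-degree relatives": 32,
}

for family_history, fold in FOLD_INCREASE.items():
    annual_risk = fold * BASELINE_ANNUAL_INCIDENCE
    print(f"{family_history}: about {annual_risk * 100_000:.0f} per 100,000 per year "
          f"({annual_risk:.3%})")
```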
[Chart: sources of pancreatic cancer mutations, including random genetic mistakes and smoking or environmental causes. Data for hereditary and smoking causes are from the American Cancer Society.]
Inherited mutations in known cancer-causing genes such as BRCA2, BRCA1, PALB2, p16/CDKN2A, ATM, STK11, PRSS1, SPINK1, and in one of the DNA mismatch repair genes (genes that normally correct errors made when DNA is copied) have been shown to increase the risk of developing pancreatic cancer. These genes are therefore called familial pancreatic cancer genes. However, not everyone who has one of these mutations will develop pancreatic cancer.
It is estimated that 10 percent of pancreatic cancer is familial. Researchers around the world have set up pancreatic cancer registries to study the hereditary factors that influence pancreatic cancer. The qualifications for joining a registry may vary from one registry to another and may include providing answers to a questionnaire and a blood or saliva sample. Some registries enroll patients and family members who have at least one relative who has pancreatic cancer. Other registries require that enrollees have at least two relatives who have pancreatic cancer. Registry participants must be 18 years of age or older.
Screening programs are currently being explored for patients with a known genetic abnormality that predisposes them to pancreatic cancer or who have a strong family history of pancreatic cancer. These screening programs often include screening with endoscopic ultrasound (EUS) and MRI. This is because these imaging technologies may be useful for detecting small lesions and may identify early pancreatic tumors.
Hereditary syndromes are inherited genetic mutations in one or more genes that may predispose the affected individuals to the development of certain cancers and may also cause the early age of onset of these cancers. The hereditary syndromes listed below have been associated with the development of pancreatic cancer.
Familial Breast Cancer Syndrome. People who have the breast cancer 2 gene (BRCA2) mutation have an increased risk of several cancers, among them pancreatic. Inherited mutations in the BRCA2 gene are particularly common in the Ashkenazi Jewish population. It has recently been suggested that cancers that arise in patients with a BRCA2 mutation may be particularly sensitive to treatment with drugs called PARP inhibitors. Although the association is not as strong as it is with BRCA2, inherited mutations in the first breast cancer gene, BRCA1, may also increase the risk of pancreatic cancer.
Familial Atypical Multiple Mole Melanoma (FAMMM) Syndrome. People with FAMMM syndrome, also called p16-Leiden, have many different-sized skin moles that are asymmetrical and raised. Most cases of FAMMM syndrome are caused by inherited mutations in the p16/CDKN2A gene.
Peutz-Jeghers Syndrome (PJS). People with this rare syndrome have mutations in the STK11/LKB1 gene. Polyps in the small intestine and dark spots on the mouth and fingers characterize the syndrome. In people with PJS, the risks of gastrointestinal tumors such as esophageal, small bowel, colorectal, and pancreatic cancer are increased.
Hereditary Pancreatitis. Hereditary pancreatitis is a rare disease in which patients develop recurrent episodes of severe pancreatitis at an early age. The main genes related to this disorder are PRSS1, SPINK1, and the cystic fibrosis gene, CFTR. About 30 to 40 percent of people with hereditary pancreatitis will develop pancreatic cancer by age 70, and the risk is especially high among patients with hereditary pancreatitis who also smoke cigarettes.
Hereditary Nonpolyposis Colon Cancer (HNPCC; Lynch Syndrome). People with HNPCC have a higher-than-normal chance of developing colon, pancreatic, uterine, stomach, or ovarian cancer. People with this disorder have inherited mutations in DNA mismatch repair genes. Recently it has been shown that the drug Keytruda (pembrolizumab) may be very effective in the treatment of pancreatic cancers that arise in patients with HNPCC.
Partner and Localizer of BRCA2 (PALB2). Mutations in this gene, which is related to BRCA2, also increase the risk of breast and pancreatic cancer.
ATM. Mutations in this gene may increase the risk of pancreatic cancer.
What are the Risk Factors of Pancreatic Cancer?
Risk factors are characteristics, habits, or environmental exposures that have been shown to increase the odds of developing a disease. Some can be controlled, while others cannot.
Risk Factors You Can Influence
Smoking. Smoking or being exposed to secondhand smoke is the leading preventable cause of pancreatic cancer. People who smoke have twice the chance of getting pancreatic cancer compared with people who do not smoke. Importantly, the risk of cancer falls after smoking cessation. Over time, smokers who quit will decrease their risk of developing pancreatic cancer, and after 10 years the risk in ex-smokers is the same as that of nonsmokers.
Obesity. People who are significantly overweight are more likely to develop pancreatic cancer compared with those who are not overweight, with those patients who are obese during their teens and twenties having the highest risk.
Other Risk Factors of Pancreatic Cancer
Age. As people get older, their risk of pancreatic cancer increases. Pancreatic cancer mostly affects people 55 years of age or older.
Race. In the United States, pancreatic cancer is more common in African Americans than in Caucasians, although the reasons are not clear. Differences in dietary habits, the rates of obesity and diabetes, and the frequency of cigarette smoking exist between these groups. Genetic or other unknown factors may also explain the higher incidence in African Americans.
Medical Factors. The incidence of pancreatic cancer is higher in people who have any of the following medical conditions.
• Chronic pancreatitis (inflammation that causes irreversible damage to the pancreas)
• Long-term diabetes mellitus (high blood sugar)
• Helicobacter pylori infection or ulcers
Adult Onset Diabetes. Long-term diabetes is a risk factor for pancreatic cancer. New-onset diabetes in an older person can be the first sign of pancreatic cancer. In fact, up to 80 percent of patients with pancreatic cancer are either prediabetic or in a presymptomatic phase of diabetes.
Presence of Risk Factors
When a person has one, or even more than one, of these risk factors, it does not mean that he/she will develop pancreatic cancer. Conversely, some people who do not have any risk factors will still get pancreatic cancer. Researchers are working to understand how lifestyle and environmental risk factors interact with an individual’s genetic makeup to influence pancreatic cancer development. Most importantly, the best way to reduce your risk of developing pancreatic cancer is to not smoke, and to maintain a healthy body weight.
Pancreatic Cancer Diagnosis
Several steps are involved in making a diagnosis of pancreatic cancer. The first thing your doctor will do is ask questions about your medical history, family history, possible risk factors, and symptoms. Answering these questions honestly and completely will help both you and your doctor during the diagnostic process.
MEDICAL HISTORY QUESTIONS
- Do you have pain?
- Where is the pain located?
- How long have you had the pain?
- How intense is the pain (for example, on a scale from 0 to 10)?
- Is there something you can do that causes the pain to come back?
- Is there something you can do that makes the pain go away?
- Have you lost weight without trying?
- What other symptoms do you have?
- If you have jaundice: When did you notice the jaundice?
- If you have dark urine or light stools: How long have you had this?
- Has anyone in your family ever had cancer?
- Has anyone in your family ever had pancreatic cancer?
A doctor will perform a physical examination and check your abdomen for tenderness, fluid buildup, enlargement of your gallbladder or liver (that may result from blockage of the bile duct), and masses. Your lymph nodes will be checked for tenderness and swelling, and any sign of jaundice will be noted. Your doctor also may order blood or urine tests, testing of stool samples, or imaging tests.
Blood tests are frequently performed for diagnostic purposes. No single blood test can be used to make a diagnosis of pancreatic cancer. When a person has pancreatic cancer, however, elevated levels of bilirubin or liver enzymes may be present.
Different tumor markers in the blood are used to detect and monitor many types of cancer. Tumor markers are substances, usually complex proteins, produced by tumor cells. Proteins form the basis of body structures such as cells, tissues, and organs. Enzymes and some hormones are composed of protein. Some tumor markers can indicate specific types of cancer; others are found in several types of cancer.
Two commercially available tumor marker tests are of use in patients with pancreatic cancer: cancer antigen 19-9 (CA 19-9) and carcinoembryonic antigen (CEA). These markers are not accurate enough to be used to screen healthy people or to make a diagnosis of pancreatic cancer. However, CA 19-9 and CEA are frequently used to track the progress of treatment in patients with pancreatic cancer. CA 19-9 is a substance found on the surface of certain types of cells and is shed by tumor cells, making it useful in following the course of cancer. The presence of the protein CEA may indicate cancer because elevations in CEA levels are not usually found in people who are healthy. CEA is not as useful as is CA 19-9 in pancreatic cancer testing.
We have funded researchers at Johns Hopkins who have designed a blood test called CancerSEEK that can detect the presence of pancreatic cancer as part of a panel of eight common cancers: pancreas, ovary, liver, stomach, esophagus, colorectum, lung, and breast. It can identify the presence of relatively early cancer, and can detect the organ of origin of the cancers. This test is an important breakthrough because these eight cancers account for more than 60 percent of cancer deaths. While further testing is needed, the goal is for CancerSEEK to be offered as part of routine medical checks.
If you have blood and urine testing, your doctor will receive written reports from the laboratory. If the results show high levels of bilirubin, it may be an indication of pancreatic cancer. However, many other medical situations can cause an elevation in bilirubin. Additional testing will almost always be needed to confirm a diagnosis of pancreatic cancer. Liver function tests will also be performed on blood samples to determine if a tumor is affecting the liver.
Imaging tests are important tools used to detect pancreatic cancer. These tests use a variety of methods to see inside the body. CT scans (or some variation of a CT scan) of the chest, abdomen, and pelvis are most commonly used in the diagnosis of pancreatic cancer.
A CT scan, formerly called computed axial tomography (CAT) scan, uses a large machine shaped like a donut to take detailed, cross-sectional, X-ray images from many different angles while you lie on a table that moves into the machine. The computer combines these images into a series of views of the area in question for diagnostic purposes. A CT scan may be done at a special center or in a hospital, but it does not require an overnight stay. This test is not painful, and no sedation is needed.
A dye, called a contrast agent, can be injected into a vein to produce better CT images of body structures. Typically, a contrast agent is also given by mouth to provide better images of the stomach and small intestines.
In many centers, modifications of basic CT scanners are used to image the pancreas more accurately. A multiphase CT scan is a sensitive imaging test used to evaluate patients suspected of having pancreatic cancer. Multiphase CT scanning may produce detailed, 3-dimensional images of the pancreas.
A helical CT scanner with multiple detector rows, called a multidetector row helical CT (MDCT) scan, is one of the latest technological advances in CT scanners. MDCT has advantages over other CT methods, including improved image resolution and the ability to rapidly scan large volumes, thus allowing for imaging of the entire pancreas in a single breath-hold by the patient.
Ultrasound is another imaging test that is commonly used. During this test, sound waves are bounced off internal organs to produce echoes. The computer creates patterns from these echoes, as normal and abnormal tissues produce different patterns.
EUS and LUS
EUS (endoscopic ultrasound) and LUS (laparoscopic ultrasound) are minimally invasive procedures. EUS is performed using an endoscope, which is a long, thin instrument with a light at the end used to look deep inside the body. During EUS, an endoscope is passed down the esophagus, through the stomach, and into the duodenum. The machine that makes the sound waves is then turned on, and images are created by visualizing the pancreas through the stomach or the duodenum.
Advantages of EUS are that the ultrasound probe can be placed immediately adjacent to the pancreas, producing detailed images. It also allows for biopsies of the pancreas to be obtained to confirm the presence of cancer.
Magnetic Resonance Imaging
MRI is a noninvasive, painless imaging method that is commonly used today. MRI uses powerful magnets and radio waves, rather than the X-rays used in a CT scan, to view internal structures and organs. Since it does not involve radiation, MRI may be safer in patients who require repeated imaging over many years, such as patients with pancreatic cysts. The energy from the radio waves is absorbed by the body and then released. A computer translates the patterns formed by this energy release into detailed images of areas inside the body. MRI produces cross-sectional slices like a CT scanner, but also produces slices that are parallel to the length of the body.
MRIs are performed at a special imaging center or at a hospital. If you have any metal in your body, you should check with your doctor prior to undergoing an MRI scan. Some types of metal implants (such as prosthetic hips, prosthetic knees, pacemakers, and heart valves) may cause problems when exposed to high magnetic forces such as those used in MRI.
Positron Emission Tomography (PET) Scan
PET scan is an imaging test that shows not only anatomy, but also biological function. During a PET scan, a small amount of radioactive glucose (sugar) is injected into a vein. Cancer cells take up sugar at higher rates than normal cells. A special camera detects the radioactivity that is taken up by malignant tissue, and a computer creates detailed images. The images created by a PET scan can be used to find cancer cells in the pancreas and in other areas of the body. Recently developed machines combine CT imaging with PET scanning to more accurately identify where cancer is located within the body.
Endoscopic Retrograde Cholangiopancreatography (ERCP)
ERCP is an invasive procedure that is used in conjunction with a dye to view the bile and pancreatic ducts for obstructions. During an ERCP, you will receive an anesthetic to numb the throat and medication for sedation. A thin tube is passed down the throat, through the stomach, and into the small intestine. From there, the gastroenterologist who is performing the procedure will identify the bile duct and pancreatic duct so that the dye can be injected into them. Then, X-rays are taken. This is an outpatient procedure but also may be performed in the hospital.
ERCP is especially helpful in patients with jaundice because a stent can be inserted into the bile duct and left in place to keep the bile duct open, often relieving the jaundice and its associated symptoms. Tissue samples also can be taken during the procedure. ERCP can cause complications, and is usually used to help manage symptoms and not for diagnostic purposes.
The only definitive way to diagnose cancer is to directly visualize cancer cells under a microscope. For this reason, after the necessary blood tests and scans have been done, a biopsy may be performed when pancreatic cancer is suspected. A biopsy is the process of removing tissue samples, which are then examined under a microscope to check for cancer cells.
A biopsy can be performed in an outpatient setting or in the hospital. Biopsy specimens can be obtained in different ways as listed below.
Fine-Needle Aspiration (FNA) Biopsy
In an FNA biopsy, imaging by CT scan or EUS is used together with a long, thin needle to obtain tissue specimens. The CT scan or EUS imaging allows the doctor to view the position of the needle to ensure that it is in the tumor. EUS also can be used to place the needle directly through the wall of the duodenum or stomach and into the tumor for collection of tissue specimens.
A brush biopsy procedure is used with ERCP. A small brush is inserted through an endoscope into the bile and pancreatic ducts. Cells are scraped off the insides of the ducts with the brush.
Laparoscopy is a minimally invasive surgical procedure. You will receive general anesthesia during the procedure. A laparoscope is inserted through a small incision in the abdomen. The doctor can then view the tumor and remove tissue samples for examination.
QUESTIONS TO ASK AFTER DIAGNOSIS
Asking good questions will help you get the best care possible for pancreatic cancer. You have a right to have all questions answered to your satisfaction.
- What type of pancreatic cancer do I have, and what is the stage (resectable, borderline resectable, locally advanced or metastatic)?
- Should I have any additional tests to more accurately stage my cancer?
- What is the treatment plan that you recommend?
- What are the potential benefits, risks, and side effects of that treatment?
- Where will the treatment be given, and how often?
- How will I know if the treatment is working?
- Who will be part of my care team?
- Are clinical trials available for my type and stage of pancreatic cancer?
- If surgery is recommended, is the center that will perform my surgery a high-volume one?
- If I have borderline resectable or locally advanced pancreatic cancer, what will your institution do to try to make my cancer resectable?
- Should I have my tumor or my blood (germline) genetically sequenced?
- Can you estimate the amount of time I may need to recover from surgery?
Signs and Symptoms of Pancreatic Cancer
A Silent Disease
Pancreatic cancer is often called a silent disease because many times there are no signs or symptoms until the cancer is in an advanced stage. Even when there are early signs and symptoms, they may be vague and easily attributed to another disease. The signs and symptoms also may be confusing to patients and healthcare providers because they vary depending on where the tumor is located in the pancreas (the head, body, or tail). It is important to see your doctor if you have any of the signs or symptoms of pancreatic cancer.
Jaundice is a yellowing of the skin and the whites of the eyes. Symptoms that may occur with jaundice are itching (which may be severe), dark urine, and light or clay-colored stool. Jaundice occurs when bilirubin stains the skin. Bilirubin is a dark yellow-brown substance made in the liver that travels down the bile duct and into the small intestine. When the bile duct is blocked by a tumor or when a tumor is located in the head of the pancreas near the bile duct, the bile is prevented from reaching the intestines. The bile then accumulates in tissues, blood, and the skin, leading to jaundice.
There are other, more common causes of jaundice, such as hepatitis (inflammation of the liver) or obstruction of the bile duct by a gallstone.
Skin can start to itch or turn yellow when bilirubin builds up in the skin.
Back pain is a common sign of advanced pancreatic cancer and occurs when the tumor presses on organs and nerves around the pancreas. The pain may be constant or intermittent and can be worse after eating or when lying down. Many conditions other than pancreatic cancer can also cause back pain, which makes this a challenging symptom.
Digestive Problems or Pain
People with pancreatic cancer may lose weight, may have little or no appetite, or may suffer from malnutrition. When pancreatic enzymes cannot be released into the intestine, digesting food, especially high-fat foods, may be difficult. Over time, significant weight loss and malnutrition may result, at which time a doctor should be consulted.
Nausea or Vomiting
If the tumor blocks the upper part of the small intestine (the duodenum), nausea and vomiting may result.
Similar to back pain, abdominal pain is a common sign of advanced pancreatic cancer that occurs when the tumor presses on organs and nerves around the pancreas. Many conditions other than pancreatic cancer can also cause abdominal pain, which makes this a challenging symptom.
Pancreatic cancer can cause blood to clot more easily, and blood clots can be the first sign of the tumor. These clots occur in the veins and can block blood flow. They can occur in the legs (deep vein thrombosis), lungs (pulmonary embolism), or organs such as the pancreas itself or the liver (portal vein thrombosis).
An inflammation of the pancreas called pancreatitis can be a sign of pancreatic cancer when it is chronic or when it appears for the first time and is not related to either drinking alcohol or gallstones.
Developing diabetes mellitus (sugar diabetes), especially after the age of 50, can be a sign of pancreatic cancer. The majority of patients with diabetes, however, will not develop pancreatic cancer. As noted earlier, long-term diabetes is also a risk factor for pancreatic cancer.
When to See a Doctor
Many other illnesses can cause these signs and symptoms, but it is important to take them seriously and see your doctor as soon as possible. If you have a first-degree relative (parent, sibling or child) with pancreatic cancer, tell your doctor and consider joining a pancreatic cancer registry.
Cancer registries are used to collect accurate and complete data about people with cancer that can be used for cancer control and epidemiological research, public health program planning, and to improve patient care. Collecting this information also increases the chances of finding a cure, because the data helps physicians and researchers learn more about the causes of cancer and how to detect cancer earlier.
Data from registries may point out environmental risk factors or high risk behaviors, so that measures to prevent people from getting cancer can be identified. Additionally, local, state, and national cancer agencies and cancer control programs may use registry data from defined areas to make important decisions about public health.
The Axis powers
Historical era: World War II. Key dates: 25 November 1936 (Anti-Comintern Pact), 22 May 1939 (Pact of Steel), 27 September 1940 (Tripartite Pact), and 2 September 1945 (dissolution at the end of World War II).
The Axis powers, originally called the Rome–Berlin Axis, was a military coalition that fought in World War II against the Allies. The Axis powers agreed on their opposition to the Allies, but did not completely coordinate their activity.
The Axis grew out of the diplomatic efforts of Nazi Germany, the Kingdom of Italy, and the Empire of Japan to secure their own specific expansionist interests in the mid-1930s. The first step was the protocol signed by Germany and Italy in October 1936. Benito Mussolini declared on 1 November 1936 that all other European countries would from then on rotate on the Rome–Berlin axis, thus creating the term "Axis". The almost simultaneous second step was the signing in November 1936 of the Anti-Comintern Pact, an anti-communist treaty between Germany and Japan. Italy joined the Pact in 1937 and Hungary and Spain joined in 1939. The "Rome–Berlin Axis" became a military alliance in 1939 under the so-called "Pact of Steel", with the Tripartite Pact of 1940 leading to the integration of the military aims of Germany, Italy and Japan. As such the Anti-Comintern Pact, the Tripartite Pact, and the Pact of Steel were the agreements that formed the main bases of the Axis.
Particularly within Europe, the term "the Axis" is still often used primarily to refer to the alliance between Italy and Germany, though outside Europe it is normally understood as including Japan.
At its zenith in 1942 during World War II, the Axis presided over territories that occupied large parts of Europe, North Africa, and East Asia. In contrast to the Allies, there were no three-way summit meetings, and cooperation and coordination were minimal; on occasion the interests of the major Axis powers were at variance with each other. The war ended in 1945 with the defeat of the Axis powers and the dissolution of their alliance. As in the case of the Allies, membership of the Axis was fluid, with some nations switching sides or changing their degree of military involvement over the course of the war.
Origins and creation
The term "axis" was first applied to the Italo-German relationship by the Italian prime minister Benito Mussolini in September 1923, when he wrote in the preface to Roberto Suster's Germania Repubblica that "there is no doubt that in this moment the axis of European history passes through Berlin" (non v'ha dubbio che in questo momento l'asse della storia europea passa per Berlino). At the time, he was seeking an alliance with the Weimar Republic against Yugoslavia and France in the dispute over the Free State of Fiume.
The term was used by Hungary's prime minister Gyula Gömbös when advocating an alliance of Hungary with Germany and Italy in the early 1930s. Gömbös' efforts did affect the Italo-Hungarian Rome Protocols, but his sudden death in 1936 while negotiating with Germany in Munich and the arrival of Kálmán Darányi, his successor, ended Hungary's involvement in pursuing a trilateral axis. Contentious negotiations between the Italian foreign minister, Galeazzo Ciano, and the German ambassador, Ulrich von Hassell, resulted in a Nineteen-Point Protocol, signed by Ciano and his German counterpart, Konstantin von Neurath, in 1936. When Mussolini publicly announced the signing on 1 November, he proclaimed the creation of a Rome–Berlin axis.
Initial proposals of a German–Italian alliance
Italy under Duce Benito Mussolini had pursued a strategic alliance of Italy with Germany against France since the early 1920s. Prior to becoming head of government in Italy as leader of the Italian Fascist movement, Mussolini had advocated alliance with defeated Germany after the Paris Peace Conference (1919–1920) settled World War I. He believed that Italy could expand its influence in Europe by allying with Germany against France. In early 1923, as a goodwill gesture to Germany, Italy secretly delivered weapons for the German Army, which had faced major disarmament under the provisions of the Treaty of Versailles.
Since the 1920s Italy had identified the year 1935 as a crucial date for preparing for a war against France, as 1935 was the year when Germany's obligations under the Treaty of Versailles were scheduled to expire. Meetings took place in Berlin in 1924 between Italian General Luigi Capello and prominent figures in the German military, such as von Seeckt and Erich Ludendorff, over military collaboration between Germany and Italy. The discussions concluded that Germans still wanted a war of revenge against France but were short on weapons and hoped that Italy could assist Germany.
However, at this time Mussolini stressed one important condition that Italy must pursue in an alliance with Germany: that Italy "must ... tow them, not be towed by them". In the early 1930s, Italian foreign minister Dino Grandi stressed the importance of "decisive weight" in Italy's relations with France and Germany: he recognized that Italy was not yet a major power, but perceived that it had strong enough influence to alter the political situation in Europe by placing the weight of its support onto one side or the other, and he sought to balance relations between the three.
Danube alliance, dispute over Austria
In 1933, Adolf Hitler and the Nazi Party came to power in Germany. Hitler had advocated an alliance between Germany and Italy since the 1920s. Shortly after being appointed Chancellor, Hitler sent a personal message to Mussolini, declaring "admiration and homage" and declaring his anticipation of the prospects of German-Italian friendship and even alliance. Hitler was aware that Italy held concerns over potential German land claims on South Tyrol, and assured Mussolini that Germany was not interested in South Tyrol. Hitler in Mein Kampf had declared that South Tyrol was a non-issue considering the advantages that would be gained from a German–Italian alliance. After Hitler's rise to power, the Four Power Directorate proposal by Italy had been looked at with interest by Britain, but Hitler was not committed to it, resulting in Mussolini urging Hitler to consider the diplomatic advantages Germany would gain by breaking out of isolation by entering the Directorate and avoiding an immediate armed conflict. The Four Power Directorate proposal stipulated that Germany would no longer be required to have limited arms and would be granted the right to re-armament under foreign supervision in stages. Hitler completely rejected the idea of controlled rearmament under foreign supervision.
Mussolini did not trust Hitler's intentions regarding Anschluss nor Hitler's promise of no territorial claims on South Tyrol. Mussolini informed Hitler that he was satisfied with the presence of the anti-Marxist government of Dollfuss in Austria, and warned Hitler that he was adamantly opposed to Anschluss. Hitler responded in contempt to Mussolini that he intended "to throw Dollfuss into the sea". With this disagreement over Austria, relations between Hitler and Mussolini steadily became more distant.
Hitler attempted to break the impasse with Italy over Austria by sending Hermann Göring to negotiate with Mussolini in 1933, to convince Mussolini to press the Austrian government to appoint members of Austria's Nazis to the government. Göring claimed that Nazi domination of Austria was inevitable and that Italy should accept this, and he repeated Hitler's promise to Mussolini to "regard the question of the South Tyrol frontier as finally liquidated by the peace treaties". In response to Göring's visit with Mussolini, Dollfuss immediately went to Italy to counter any German diplomatic headway. Dollfuss claimed that his government was actively challenging Marxists in Austria and that once the Marxists were defeated in Austria, support for Austria's Nazis would decline.
In June 1934, Hitler and Mussolini met for the first time, in Venice. The meeting did not proceed amicably. Hitler demanded that Mussolini compromise on Austria by pressuring Dollfuss to appoint Austrian Nazis to his cabinet, a demand which Mussolini flatly refused. In response, Hitler promised that he would accept Austria's independence for the time being, saying that due to the internal tensions in Germany (referring to sections of the Nazi SA that Hitler would soon kill in the Night of the Long Knives) Germany could not afford to provoke Italy. Galeazzo Ciano told the press that the two leaders had made a "gentleman's agreement" to avoid interfering in Austria.
Several weeks after the Venice meeting, on 25 July 1934, Austrian Nazis assassinated Dollfuss. Mussolini was outraged as he held Hitler directly responsible for the assassination that violated Hitler's promise made only weeks ago to respect Austrian independence. Mussolini rapidly deployed several army divisions and air squadrons to the Brenner Pass, and warned that a German move against Austria would result in war between Germany and Italy. Hitler responded by both denying Nazi responsibility for the assassination and issuing orders to dissolve all ties between the German Nazi Party and its Austrian branch, which Germany claimed was responsible for the political crisis.
Italy effectively abandoned diplomatic relations with Germany while turning to France in order to challenge Germany's intransigence by signing a Franco-Italian accord to protect Austrian independence. French and Italian military staff discussed possible military cooperation involving a war with Germany should Hitler dare to attack Austria.
Relations between Germany and Italy recovered due to Hitler's support of Italy's invasion of Ethiopia in 1935, while other countries condemned the invasion and advocated sanctions against Italy.
Development of German–Italian–Japanese alliance
Interest in forming an alliance between Germany and Japan began when Japanese diplomat Oshima Hiroshi visited Joachim von Ribbentrop in Berlin in 1935. Oshima informed von Ribbentrop of Japan's interest in forming a German–Japanese alliance against the Soviet Union. Von Ribbentrop expanded on Oshima's proposal by advocating that the alliance be based in a political context of a pact to oppose the Comintern. The proposed pact was met with mixed reviews in Japan, with a faction of ultra-nationalists within the government supporting the pact while the Japanese Navy and the Japanese Foreign Ministry were staunchly opposed to it. There was great concern in the Japanese government that such a pact with Germany could disrupt Japan's relations with Britain, endangering years of a beneficial Anglo-Japanese accord that had allowed Japan to ascend in the international community in the first place. The proposal was met with similar division in Germany; while it was popular amongst the upper echelons of the Nazi Party, it was opposed by many in the Foreign Ministry, the Army, and the business community, who held financial interests in China, to which Japan was hostile.
On learning of German–Japanese negotiations, Italy also began to take an interest in forming an alliance with Japan. Italy hoped that, due to Japan's long-term close relations with Britain, an Italo-Japanese alliance could pressure Britain into adopting a more accommodating stance towards Italy in the Mediterranean. In the summer of 1936, Italian Foreign Minister Ciano informed the Japanese Ambassador to Italy, Sugimura Yotaro, "I have heard that a Japanese–German agreement concerning the Soviet Union has been reached, and I think it would be natural for a similar agreement to be made between Italy and Japan." Initially Japan's attitude towards Italy's proposal was generally dismissive: it viewed a German–Japanese alliance against the Soviet Union as imperative and an Italo-Japanese alliance as secondary, anticipating that an Italo-Japanese alliance would antagonize Britain, which had condemned Italy's invasion of Ethiopia. This attitude towards Italy changed in 1937 after the League of Nations condemned Japan for aggression in China and Japan faced international isolation, while Italy remained favourable to Japan. As a result of Italy's support for Japan against international condemnation, Japan took a more positive attitude towards Italy and offered proposals for a non-aggression or neutrality pact with Italy.
The Tripartite Pact was signed by Germany, Italy, and Japan on 27 September 1940, in Berlin. The pact was subsequently joined by Hungary (20 November 1940), Romania (23 November 1940), Slovakia (24 November 1940), and Bulgaria (1 March 1941).
The Axis powers' primary goal was territorial expansion at the expense of their neighbors. In ideological terms, the Axis described their goals as breaking the hegemony of the plutocratic Western powers and defending civilization from communism. The Axis championed a number of variants on fascism, militarism, and autarky. Creation of territorially contiguous autarkic empires was a common goal of all three major Axis powers.
The Axis population in 1938 was 258.9 million, while the Allied population (excluding the Soviet Union and the United States, which later joined the Allies) was 689.7 million. Thus the Allied powers outnumbered the Axis powers by 2.7 to 1. The leading Axis states had the following domestic populations: Germany 75.5 million (including 6.8 million from recently annexed Austria), Japan 71.9 million (excluding its colonies), and Italy 43.4 million (excluding its colonies). The United Kingdom (excluding its colonies) had a population of 47.5 million and France (excluding its colonies) 42 million.
The wartime gross domestic product (GDP) of the Axis was $911 billion at its highest in 1941 in international dollars by 1990 prices. The GDP of the Allied powers was $1,798 billion. The United States stood at $1,094 billion, more than the Axis combined.
The burden of the war upon participating countries has been measured through the percentage of gross national product (GNP) devoted to military expenditures. Nearly one-quarter of Germany's GNP was committed to the war effort in 1939, and this rose to three-quarters of GNP in 1944, prior to the collapse of the economy. In 1939, Japan committed 22 percent of its GNP to its war effort in China; this rose to three-quarters of GNP in 1944. Italy did not mobilize its economy; its GNP committed to the war effort remained at prewar levels.
Italy and Japan lacked industrial capacity; their economies were small and dependent on international trade, external sources of fuel, and other industrial resources. As a result, Italian and Japanese mobilization remained low, even by 1943.
Among the three major Axis powers, Japan had the lowest per capita income, while Germany and Italy had an income level comparable to the United Kingdom.
Major Axis powers
Hitler in 1941 described the outbreak of World War II as the fault of the intervention of Western powers against Germany during its war with Poland, describing it as the result of "the European and American warmongers". Hitler had designs for Germany to become the dominant and leading state in the world, such as his intention for Germany's capital of Berlin to become the Welthauptstadt ("World Capital"), renamed Germania. The German government also justified its actions by claiming that Germany inevitably needed to territorially expand because it was facing an overpopulation crisis that Hitler described: "We are overpopulated and cannot feed ourselves from our own resources". Thus expansion was justified as an inevitable necessity to provide lebensraum ("living space") for the German nation and end the country's overpopulation within existing confined territory, and provide resources necessary to its people's well-being. Since the 1920s, the Nazi Party publicly promoted the expansion of Germany into territories held by the Soviet Union.
Germany justified its war against Poland on the issues of the German minority within Poland and Polish opposition to the incorporation of the ethnically German-majority Free City of Danzig into Germany. Before taking power, Hitler and the Nazi Party openly talked about destroying Poland and were hostile to Poles; after gaining power and until February 1939, however, Hitler tried to conceal his true intentions towards Poland, signing a 10-year Non-Aggression Pact in 1934 and revealing his plans only to his closest associates. Relations between Germany and Poland altered from the early to the late 1930s, as Germany sought rapprochement with Poland to avoid the risk of Poland entering the Soviet sphere of influence, and appealed to anti-Soviet sentiment in Poland. The Soviet Union in turn at this time competed with Germany for influence in Poland. At the same time, Germany was preparing for a war with Poland and was secretly preparing the German minority in Poland for such a war.
A diplomatic crisis erupted after Hitler demanded that the Free City of Danzig, which was led by a local Nazi government seeking incorporation into Germany, be annexed to Germany. Germany used legal precedents to justify its intervention against Poland and the annexation of the Free City of Danzig in 1939. Poland rejected Germany's demands, and Germany in response prepared a general mobilization on the morning of 30 August 1939.
Germany justified its invasion of the Low Countries of Belgium, Luxembourg, and the Netherlands in May 1940 by claiming that it suspected that Britain and France were preparing to use the Low Countries to launch an invasion of the industrial Ruhr region of Germany. When war between Germany versus Britain and France appeared likely in May 1939, Hitler declared that the Netherlands and Belgium would need to be occupied, saying: "Dutch and Belgian air bases must be occupied ... Declarations of neutrality must be ignored". In a conference with Germany's military leaders on 23 November 1939, Hitler declared to the military leaders that "We have an Achilles heel, the Ruhr", and said that "If England and France push through Belgium and Holland into the Ruhr, we shall be in the greatest danger", and thus claimed that Belgium and the Netherlands had to be occupied by Germany to protect Germany from a British-French offensive against the Ruhr, irrespective of their claims to neutrality.
Germany's invasion of the Soviet Union in 1941 involved issues of lebensraum, anti-communism, and Soviet foreign policy. After Germany invaded the Soviet Union in 1941, the Nazi regime's stance towards an independent, territorially-reduced Russia was affected by pressure beginning in 1942 from the German Army on Hitler to endorse a Russian army led by Andrey Vlasov. Initially the proposal to support an anti-communist Russian army was met with outright rejection by Hitler, however by 1944 as Germany faced mounting losses on the Eastern Front, Vlasov's forces were recognized by Germany as an ally, particularly by Reichsführer-SS Heinrich Himmler.
After the Japanese attack on Pearl Harbor and the outbreak of war between Japan and the United States, Germany supported Japan by declaring war on the US. During the war Germany denounced the Atlantic Charter and the Lend-Lease Act, which the US adopted to support the Allied powers prior to its entry into the alliance, as imperialism directed at dominating and exploiting countries outside of the continental Americas. Hitler denounced American President Roosevelt's invoking of the term "freedom" to describe US actions in the war, and claimed that the American meaning of "freedom" was the freedom for democracy to exploit the world and the freedom for plutocrats within such a democracy to exploit the masses.
At the end of World War I, German citizens felt that their country had been humiliated as a result of the Treaty of Versailles, which included a war guilt clause and forced Germany to pay enormous reparations payments and forfeit territories formerly controlled by the German Empire and all its colonies. The pressure of the reparations on the German economy led to hyperinflation during the early 1920s. In 1923 the French occupied the Ruhr region when Germany defaulted on its reparations payments. Although Germany began to improve economically in the mid-1920s, the Great Depression created more economic hardship and a rise in political forces that advocated radical solutions to Germany's woes. The Nazis, under Hitler, promoted the nationalist stab-in-the-back legend stating that Germany had been betrayed by Jews and Communists. The party promised to rebuild Germany as a major power and create a Greater Germany that would include Alsace-Lorraine, Austria, Sudetenland, and other German-populated territories in Europe. The Nazis also aimed to occupy and colonize non-German territories in Poland, the Baltic states, and the Soviet Union, as part of the Nazi policy of seeking Lebensraum ("living space") in eastern Europe.
Germany renounced the Versailles treaty and remilitarized the Rhineland in March 1936. Germany had already resumed conscription and announced the existence of a German air force, the Luftwaffe, and naval force, the Kriegsmarine in 1935. Germany annexed Austria in 1938, the Sudetenland from Czechoslovakia, and the Memel territory from Lithuania in 1939. Germany then invaded the rest of Czechoslovakia in 1939, creating the Protectorate of Bohemia and Moravia and the country of Slovakia.
On 23 August 1939, Germany and the Soviet Union signed the Molotov–Ribbentrop Pact, which contained a secret protocol dividing eastern Europe into spheres of influence. Germany's invasion of its part of Poland under the Pact eight days later triggered the beginning of World War II. By the end of 1941, Germany occupied a large part of Europe and its military forces were fighting the Soviet Union, nearly capturing Moscow. However, crushing defeats at the Battle of Stalingrad and the Battle of Kursk devastated the German armed forces. This, combined with Western Allied landings in France and Italy, led to a three-front war that depleted Germany's armed forces and resulted in Germany's defeat in 1945.
The Protectorate of Bohemia and Moravia was created from the dismemberment of Czechoslovakia. Shortly after Germany annexed the Sudetenland region of Czechoslovakia, Slovakia declared its independence. The new Slovak State allied itself with Germany. The remainder of the country was occupied by German military forces and organized into the Protectorate. Czech civil institutions were preserved but the Protectorate was considered within the sovereign territory of Germany.
The General Government was the name given to the territories of occupied Poland that were not directly annexed into German provinces, but like Bohemia and Moravia was considered within the sovereign territory of Germany by the Nazi authorities.
Reichskommissariats were established in the Netherlands, Belgium, and Norway, designated as colonies whose "Germanic" populations were to be incorporated into the planned Greater Germanic Reich. By contrast, the Reichskommissariats established in the east (Reichskommissariat Ostland in the Baltics, Reichskommissariat Ukraine in Ukraine) were set up as colonies for settlement by Germans.
In Norway, under Reichskommissariat Norwegen, the Quisling regime, headed by Vidkun Quisling, was installed by the Germans as a client regime during the occupation, while king Haakon VII and the legal government were in exile. Quisling encouraged Norwegians to serve as volunteers in the Waffen-SS, collaborated in the deportation of Jews, and was responsible for the executions of members of the Norwegian resistance movement. About 45,000 Norwegian collaborators joined the pro-Nazi party Nasjonal Samling (National Union), and some police units helped arrest many Jews. However, Norway was one of the first countries where resistance during World War II was widespread before the turning point of the war in 1943. After the war, Quisling and other collaborators were executed. Quisling's name has become an international eponym for traitor.
Duce Benito Mussolini described Italy's declaration of war against the Western Allies of Britain and France in June 1940 as the following: "We are going to war against the plutocratic and reactionary democracies of the West who have invariably hindered the progress and often threatened the very existence of the Italian people". Italy condemned the Western powers for enacting sanctions on Italy in 1935 for its actions in the Second Italo-Ethiopian War that Italy claimed was a response to an act of Ethiopian aggression against tribesmen in Italian Eritrea in the Walwal incident of 1934. Italy, like Germany, also justified its actions by claiming that Italy needed to territorially expand to provide spazio vitale ("vital space") for the Italian nation.
In October 1938 in the aftermath of the Munich Agreement, Italy demanded concessions from France to yield to Italy in Africa. Relations between Italy and France deteriorated with France's refusal to accept Italy's demands. France responded to Italy's demands with threatening naval manoeuvres as a warning to Italy. As tensions between Italy and France grew, Hitler made a major speech on 30 January 1939 in which he promised German military support in the case of an unprovoked war against Italy.
Italy entered World War II on 10 June 1940. Italy justified its intervention against Greece in October 1940 on the allegation that Greece was being used by Britain against Italy; Mussolini informed Hitler of this, saying: "Greece is one of the main points of English maritime strategy in the Mediterranean".
Italy justified its intervention against Yugoslavia in April 1941 by appealing both to Italian irredentist claims and to the fact that Albanian, Croatian, and Macedonian separatists did not wish to be part of Yugoslavia. Croatian separatism soared after the assassination of Croatian political leaders in the Yugoslav parliament in 1928, including the death of Stjepan Radić, and Italy endorsed the Croatian separatist Ante Pavelić and his fascist Ustaše movement, which was based and trained in Italy with the Fascist regime's support prior to the intervention against Yugoslavia.
The intention of the Fascist regime was to create a "New Roman Empire" in which Italy would dominate the Mediterranean. In 1935–1936 Italy invaded and annexed Ethiopia, and the Fascist government proclaimed the creation of the "Italian Empire". Protests by the League of Nations, especially by the British, who had interests in that area, led to no serious action; the League did try to enforce economic sanctions upon Italy, but to no avail. The incident highlighted French and British weakness, exemplified by their reluctance to alienate Italy and lose her as their ally. The limited actions taken by the Western powers pushed Mussolini's Italy towards alliance with Hitler's Germany anyway. In 1937 Italy left the League of Nations and joined the Anti-Comintern Pact, which had been signed by Germany and Japan the preceding year. In April 1939 Italian troops invaded and annexed Albania. Germany and Italy signed the Pact of Steel on 22 May 1939.
Italy was ill-prepared for war, in spite of the fact that it had continuously been involved in conflict since 1935, first with Ethiopia in 1935–1936 and then in the Spanish Civil War on the side of Francisco Franco's Nationalists. Mussolini refused to heed warnings from his minister of exchange and currency, Felice Guarneri, who said that Italy's actions in Ethiopia and Spain meant that Italy was on the verge of bankruptcy. By 1939 military expenditures by Britain and France far exceeded what Italy could afford. As a result of Italy's economic difficulties its soldiers were poorly paid, often being poorly equipped and poorly supplied, and animosity arose between soldiers and class-conscious officers; these contributed to low morale amongst Italian soldiers.
By early 1940, Italy was still a non-belligerent, and Mussolini communicated to Hitler that Italy was not prepared to intervene soon. By March 1940, Mussolini decided that Italy would intervene, but the date was not yet chosen. His senior military leadership unanimously opposed the action because Italy was unprepared. No raw materials had been stockpiled and the reserves it did have would soon be exhausted, Italy's industrial base was only one-tenth of Germany's, and even with supplies the Italian military was not organized to provide the equipment needed to fight a modern war of a long duration. An ambitious rearmament program was impossible because of Italy's limited reserves in gold and foreign currencies and lack of raw materials. Mussolini ignored the negative advice.
By 1941, Italy's attempt to run a campaign autonomous from Germany's had collapsed as a result of military setbacks in Greece, North Africa, and Eastern Africa, and the country became dependent on and effectively subordinate to Germany. After the German-led invasion and occupation of Yugoslavia and Greece, both of which had been targets of Italy's war aims, Italy was forced to accept German dominance in the two occupied countries. Furthermore, by 1941, German forces in North Africa under Erwin Rommel had effectively taken charge of the military effort to oust Allied forces from the Italian colony of Libya, and German forces were stationed in Sicily in that year. Germany's insolence towards Italy as an ally was demonstrated that year when Italy was pressured to send 350,000 "guest workers" to Germany, where they were used as forced labour. While Hitler was disappointed with the Italian military's performance, he maintained overall favorable relations with Italy because of his personal friendship with Mussolini.
On 25 July 1943, following the Allied invasion of Sicily, King Victor Emmanuel III dismissed Mussolini, placed him under arrest, and began secret negotiations with the Western Allies. An armistice was signed on 8 September 1943, and four days later Mussolini was rescued by the Germans in Operation Oak and placed in charge of a puppet state called the Italian Social Republic (Repubblica Sociale Italiana/RSI, or Repubblica di Salò) in northern Italy. In order to liberate the country from the Germans and Fascists, Italy became a co-belligerent of the Allies; as a result, the country descended into civil war, with the Italian Co-Belligerent Army and the partisans, supported by the Allies, contending with the Social Republic's forces and their German allies. Some areas of Northern Italy were not liberated from the Germans until May 1945. Mussolini was killed by Communist partisans on 28 April 1945 while trying to escape to Switzerland.
Colonies and dependencies
The Dodecanese Islands were an Italian dependency from 1912 to 1943.
Montenegro was an Italian dependency from 1941 to 1943 known as the Governorate of Montenegro, which was under the control of an Italian military governor. Initially, the Italians intended that Montenegro would become an "independent" state closely allied with Italy, reinforced through the strong dynastic links between Italy and Montenegro, as Queen Elena of Italy was a daughter of the last Montenegrin king, Nicholas I. The Italian-backed Montenegrin nationalist Sekula Drljević and his followers attempted to create a Montenegrin state. On 12 July 1941, they proclaimed the "Kingdom of Montenegro" under the protection of Italy. In less than 24 hours, that triggered a general uprising against the Italians. Within three weeks, the insurgents managed to capture almost all the territory of Montenegro. Over 70,000 Italian troops and 20,000 Albanian and Muslim irregulars were deployed to suppress the rebellion. Drljević was expelled from Montenegro in October 1941. Montenegro then came under full direct Italian control. With the Italian capitulation of 1943, Montenegro came directly under the control of Germany.
Politically and economically dominated by Italy from its creation in 1913, Albania was occupied by Italian military forces in 1939 as the Albanian king Zog I fled the country with his family. The Albanian parliament voted to offer the Albanian throne to the King of Italy, resulting in a personal union between the two countries.
Italian East Africa was an Italian colony existing from 1936 to 1943. Prior to the invasion and annexation of Ethiopia into this united colony in 1936, Italy had held two colonies in the region, Eritrea and Somalia, since the 1880s.
Libya was an Italian colony existing from 1912 to 1943. The northern portion of Libya was incorporated directly into Italy in 1939; however, the region remained united as a colony under a colonial governor.
The Japanese government justified its actions by claiming that it was seeking to unite East Asia under Japanese leadership in a Greater East Asia Co-Prosperity Sphere that would free East Asians from domination and rule by clients of Western powers. Japan invoked themes of Pan-Asianism and said that the Asian people needed to be free from Western influence.
The United States opposed the Japanese war in China and recognized Chiang Kai-shek's Nationalist Government as the legitimate government of China. As a result, the United States sought to bring the Japanese war effort to a halt by imposing an embargo on all trade between the United States and Japan. Japan was dependent on the United States for 80 percent of its petroleum, and as a consequence the embargo resulted in an economic and military crisis for Japan, as Japan could not continue its war effort against China without access to petroleum.
In order to maintain its military campaign in China after the loss of petroleum trade with the United States, Japan saw petroleum-rich and resource-rich Southeast Asia as the best means of securing an alternative source of petroleum. The American government, including Secretary of State Cordell Hull, who was negotiating with the Japanese to avoid a war, was aware of this threat of Japanese retaliation against the total trade embargo, fearing that the embargo would prompt a Japanese attack on the Dutch East Indies.
Japan identified the American Pacific fleet based in Pearl Harbor as the principal threat to its designs to invade and capture Southeast Asia. Thus Japan initiated the attack on Pearl Harbor on 7 December 1941 as a means to inhibit an American response to the invasion of Southeast Asia, buy time to allow Japan to consolidate those resources for a total war against the United States, and force the United States to accept Japan's acquisitions. On 7 December 1941 Japan declared war on the United States and the British Empire.
The Empire of Japan, a constitutional monarchy with Hirohito as its Emperor, was the principal Axis power in Asia and the Pacific. Under the emperor were a political cabinet and the Imperial General Headquarters, with two chiefs of staff. By 1945 the Emperor of Japan was more than a symbolic leader; he played a major role in devising a strategy to keep himself on the throne.
At its peak, Japan's Greater East Asia Co-Prosperity Sphere included Manchuria, Inner Mongolia, large parts of China, Malaysia, French Indochina, the Dutch East Indies, the Philippines, Burma, a small part of India, and various Pacific Islands in the central Pacific.
As a result of the internal discord and economic downturn of the 1920s, militaristic elements set Japan on a path of expansionism. As the Japanese home islands lacked natural resources needed for growth, Japan planned to establish hegemony in Asia and become self-sufficient by acquiring territories with abundant natural resources. Japan's expansionist policies alienated it from other countries in the League of Nations and by the mid-1930s brought it closer to Germany and Italy, who had both pursued similar expansionist policies. Cooperation between Japan and Germany began with the Anti-Comintern Pact, in which the two countries agreed to ally to challenge any attack by the Soviet Union.
Japan entered into conflict against the Chinese in 1937. The Japanese invasion and occupation of parts of China resulted in numerous atrocities against civilians, such as the Nanking massacre and the Three Alls Policy. The Japanese also fought skirmishes with Soviet–Mongolian forces in Manchukuo in 1938 and 1939. Japan sought to avoid war with the Soviet Union by signing a non-aggression pact with it in 1941.
Japan's military leaders were divided on diplomatic relationships with Germany and Italy and the attitude towards the United States. The Imperial Japanese Army was in favour of war with the United States, but the Imperial Japanese Navy was generally strongly opposed. When Prime Minister of Japan General Hideki Tojo refused American demands that Japan withdraw its military forces from China, a confrontation became more likely. War with the United States was being discussed within the Japanese government by 1940. Commander of the Combined Fleet Admiral Isoroku Yamamoto was outspoken in his opposition, especially after the signing of the Tripartite Pact, saying on 14 October 1940: "To fight the United States is like fighting the whole world. But it has been decided. So I will fight the best I can. Doubtless I shall die on board Nagato [his flagship]. Meanwhile, Tokyo will be burnt to the ground three times. Konoe and others will be torn to pieces by the revengeful people, I [shouldn't] wonder." In October and November 1940, Yamamoto communicated with Navy Minister Oikawa, and stated, "Unlike the pre-Tripartite days, great determination is required to make certain that we avoid the danger of going to war."
With the European powers focused on the war in Europe, Japan sought to acquire their colonies. In 1940 Japan responded to the German invasion of France by occupying northern French Indochina. The Vichy France regime, a de facto ally of Germany, accepted the takeover. The Allied forces did not respond with war. However, the United States instituted an embargo against Japan in 1941 because of the continuing war in China. This cut off Japan's supply of scrap metal and oil needed for industry, trade, and the war effort.
To isolate the US forces stationed in the Philippines and to reduce US naval power, the Imperial General Headquarters ordered an attack on the US naval base at Pearl Harbor, Hawaii, on 7 December 1941. They also invaded Malaya and Hong Kong. Initially achieving a series of victories, by 1943 the Japanese forces were driven back towards the home islands. The Pacific War lasted until the atomic bombings of Hiroshima and Nagasaki in 1945. The Soviets formally declared war in August 1945 and engaged Japanese forces in Manchuria and northeast China.
Colonies and dependencies
Taiwan was a Japanese dependency established in 1895. Korea was a Japanese protectorate and dependency formally established by the Japan–Korea Treaty of 1910.
The South Seas Mandate comprised territories granted to Japan in 1919 in the peace agreements following World War I, which assigned to Japan the former German islands in the South Pacific. Japan received these as a reward from the Allies of World War I, having been allied against Germany.
Japan occupied the Dutch East Indies during the war. Japan planned to transform these territories into a client state, Indonesia, and sought alliance with Indonesian nationalists, including future Indonesian President Sukarno, but these efforts did not result in the creation of an Indonesian state until after Japan's surrender.
Other Tripartite Pact signatories
In addition to the three major Axis powers, six other countries signed the Tripartite Pact as its member states. Of the additional countries, Romania, Hungary, Bulgaria, the Independent State of Croatia, and Slovakia participated in various Axis military operations with their national armed forces, while the sixth, Yugoslavia, saw its pro-Nazi government overthrown in a coup just days after it signed the Pact, and its membership was reversed.
The Kingdom of Bulgaria was ruled by Tsar Boris III when it signed the Tripartite Pact on 1 March 1941. Bulgaria had been on the losing side in the First World War and sought a return of lost ethnically and historically Bulgarian territories, specifically in Macedonia and Thrace (all within the Kingdom of Yugoslavia, the Kingdom of Greece, and Turkey). During the 1930s, because of traditional right-wing elements, Bulgaria drew closer to Nazi Germany. In 1940 Germany pressured Romania to sign the Treaty of Craiova, returning to Bulgaria the region of Southern Dobrudja, which it had lost in 1913. The Germans also promised Bulgaria, if it joined the Axis, an enlargement of its territory to the borders specified in the Treaty of San Stefano.
Bulgaria participated in the Axis invasion of Yugoslavia and Greece by letting German troops attack from its territory and sent troops to Greece on April 20. As a reward, the Axis powers allowed Bulgaria to occupy parts of both countries—southern and south-eastern Yugoslavia (Vardar Banovina) and north-eastern Greece (parts of Greek Macedonia and Greek Thrace). The Bulgarian forces in these areas spent the following years fighting various nationalist groups and resistance movements. Despite German pressure, Bulgaria did not take part in the Axis invasion of the Soviet Union and actually never declared war on the Soviet Union. The Bulgarian Navy was nonetheless involved in a number of skirmishes with the Soviet Black Sea Fleet, which attacked Bulgarian shipping.
Following the Japanese attack on Pearl Harbor in December 1941, the Bulgarian government declared war on the Western Allies. This action remained largely symbolic (at least from the Bulgarian perspective), until August 1943, when Bulgarian air defense and air force attacked Allied bombers, returning (heavily damaged) from a mission over the Romanian oil refineries. This turned into a disaster for the citizens of Sofia and other major Bulgarian cities, which were heavily bombed by the Allies in the winter of 1943–1944.
On 2 September 1944, as the Red Army approached the Bulgarian border, a new Bulgarian government came to power and sought peace with the Allies, expelled the few remaining German troops, and declared neutrality. These measures however did not prevent the Soviet Union from declaring war on Bulgaria on 5 September, and on 8 September the Red Army marched into the country, meeting no resistance. This was followed by the coup d'état of 9 September 1944, which brought a government of the pro-Soviet Fatherland Front to power. After this, the Bulgarian army (as part of the Red Army's 3rd Ukrainian Front) fought the Germans in Yugoslavia and Hungary, sustaining numerous casualties. Despite this, the Paris Peace Treaty treated Bulgaria as one of the defeated countries. Bulgaria was allowed to keep Southern Dobruja, but had to give up all claims to Greek and Yugoslav territory.
Political instability plagued the country until Miklós Horthy, a Hungarian nobleman and Austro-Hungarian naval officer, became regent in 1920. The vast majority of Hungarians desired to recover territories lost through the Treaty of Trianon. During the government of Gyula Gömbös, Hungary drew closer to Germany and Italy largely because of a shared desire to revise the peace settlements made after World War I. Many people sympathized with the anti-Semitic policy of the Nazi regime. Due to its supportive stance towards Germany and its new efforts in international policy, Hungary gained favourable territorial settlements by the First Vienna Award; after the breakup of Czechoslovakia it occupied and annexed the remainder of Carpathian Ruthenia, and in 1940 it received Northern Transylvania from Romania via the Second Vienna Award. Hungarians permitted German troops to transit through their territory during the invasion of Yugoslavia, and Hungarian forces joined the military operations after the proclamation of the Independent State of Croatia. Parts of the former Yugoslavia were annexed to Hungary; the United Kingdom immediately broke off diplomatic relations in response.
Although Hungary did not initially participate in the German invasion of the Soviet Union, Hungary and the Soviet Union became belligerents on 27 June 1941. Over 500,000 soldiers served on the Eastern Front. All five of Hungary's field armies ultimately participated in the war against the Soviet Union; a significant contribution was made by the Hungarian Second Army.
On 25 November 1941, Hungary was one of thirteen signatories to the renewed Anti-Comintern Pact. Hungarian troops, like their Axis counterparts, were involved in numerous actions against the Soviets. By the end of 1943, the Soviets had gained the upper hand and the Germans were retreating. The Hungarian Second Army was destroyed in fighting on the Voronezh Front, on the banks of the Don River.
Prior to the German occupation of Hungary, around 63,000 Jews perished within the country. Afterwards, in late 1944, 437,000 Jews were deported to Auschwitz-Birkenau, most of them to their deaths. Overall, Hungarian Jews suffered close to 560,000 casualties.
Relations between Germany and the regency of Miklós Horthy collapsed in 1944 when Horthy attempted to negotiate a peace agreement with the Soviets and jump out of the war without German approval. Horthy was forced to abdicate after German commandos, led by Colonel Otto Skorzeny, held his son hostage as part of Operation Panzerfaust. Following Horthy's abdication in October 1944, Hungary was reorganized into a totalitarian regime called the Government of National Unity, led by Ferenc Szálasi, who became Prime Minister of Hungary that month and was leader of the Hungarist Arrow Cross Party. Its jurisdiction was effectively limited to an ever-narrowing band of territory in central Hungary around Budapest, since by the time it took power the Red Army was already far inside the country. Nonetheless, the Arrow Cross rule, short-lived as it was, was brutal. In fewer than three months, Arrow Cross death squads killed as many as 38,000 Hungarian Jews. Arrow Cross officers helped Adolf Eichmann re-activate the deportation proceedings from which the Jews of Budapest had thus far been spared, sending some 80,000 Jews out of the city on slave labour details and many more straight to death camps. Most of them died, including many who were murdered outright after the end of the fighting as they were returning home. Days after the Szálasi government took power, the capital of Budapest was surrounded by the Soviet Red Army. German and Hungarian forces tried to hold off the Soviet advance but failed. After fierce fighting, Budapest was taken by the Soviets. A number of pro-German Hungarians retreated to Italy and Germany, where they fought until the end of the war.
In March 1945, Szálasi fled to Germany as the leader of a government in exile, until the surrender of Germany in May 1945.
Independent State of Croatia
On 10 April 1941, the so-called Independent State of Croatia (Nezavisna Država Hrvatska, or NDH), a German-Italian-installed puppet state, co-signed the Tripartite Pact. The NDH remained a member of the Axis until the end of the Second World War, its forces fighting for Germany even after its territory had been overrun by Yugoslav Partisans. On 16 April 1941, Ante Pavelić, a Croatian nationalist and one of the founders of the Ustaše ("Croatian Liberation Movement"), was proclaimed Poglavnik (leader) of the new regime.
Initially the Ustaše had been heavily influenced by Italy. They were actively supported by Mussolini's Fascist regime in Italy, which gave the movement training grounds to prepare for war against Yugoslavia, as well as accepting Pavelić as an exile and allowing him to reside in Rome. In 1941, during the Italian invasion of Greece, Mussolini requested that Germany invade Yugoslavia to save the Italian forces in Greece. Hitler reluctantly agreed; Yugoslavia was invaded and the NDH was created. Pavelić led a delegation to Rome and offered the crown of the NDH to an Italian prince of the House of Savoy, who was crowned Tomislav II. The next day, Pavelić signed the Contracts of Rome with Mussolini, ceding Dalmatia to Italy and fixing the permanent borders between the NDH and Italy. Italian armed forces were allowed to control all of the coastline of the NDH, effectively giving Italy total control of the Adriatic coastline. When the King of Italy ousted Mussolini from power and Italy capitulated, the NDH came completely under German influence.
The platform of the Ustaše movement proclaimed that Croatians had been oppressed by the Serb-dominated Kingdom of Yugoslavia, and that Croatians deserved to have an independent nation after years of domination by foreign empires. The Ustaše perceived Serbs to be racially inferior to Croats and saw them as infiltrators who were occupying Croatian lands. They saw the extermination, expulsion, or deportation of Serbs as necessary to racially purify Croatia. While part of Yugoslavia, many Croatian nationalists violently opposed the Serb-dominated Yugoslav monarchy and, together with the Internal Macedonian Revolutionary Organization, assassinated Alexander I of Yugoslavia. The regime enjoyed support amongst radical Croatian nationalists. Ustaše forces fought against the communist Yugoslav Partisan guerrillas throughout the war.
Upon coming to power, Pavelić formed the Croatian Home Guard (Hrvatsko domobranstvo) as the official military force of the NDH. Originally authorized at 16,000 men, it grew to a peak fighting force of 130,000. The Croatian Home Guard included an air force and navy, although its navy was restricted in size by the Contracts of Rome. In addition to the Croatian Home Guard, Pavelić was also the supreme commander of the Ustaše militia, although all NDH military units were generally under the command of the German or Italian formations in their area of operations.
The Ustaše government declared war on the Soviet Union, signed the Anti-Comintern Pact of 1941, and sent troops to Germany's Eastern Front. Ustaše militia were garrisoned in the Balkans, battling the communist partisans.
The Ustaše government applied racial laws on Serbs, Jews, Romani people, as well as targeting those opposed to the fascist regime, and after June 1941 deported them to the Jasenovac concentration camp or to German camps in Poland. The racial laws were enforced by the Ustaše militia. The exact number of victims of the Ustaše regime is uncertain due to the destruction of documents and varying numbers given by historians. According to the United States Holocaust Memorial Museum in Washington, D.C., between 320,000 and 340,000 Serbs were killed in the NDH.
When war erupted in Europe in 1939, the Kingdom of Romania was pro-British and allied to the Poles. Following the invasion of Poland by Germany and the Soviet Union, and the German conquest of France and the Low Countries, Romania found itself increasingly isolated; meanwhile, pro-German and pro-Fascist elements began to grow.
The August 1939 Molotov–Ribbentrop Pact between Germany and the Soviet Union contained a secret protocol ceding Bessarabia and Northern Bukovina to the Soviet Union. On June 28, 1940, the Soviet Union occupied and annexed Bessarabia, as well as Northern Bukovina and the Hertza region. On 30 August 1940, as a result of the German-Italian arbitrated Second Vienna Award, Romania had to cede Northern Transylvania to Hungary. Southern Dobruja was ceded to Bulgaria in September 1940. In an effort to appease the Fascist elements within the country and obtain German protection, King Carol II appointed General Ion Antonescu as Prime Minister on September 6, 1940.
Two days later, Antonescu forced the king to abdicate and installed the king's young son Michael (Mihai) on the throne, then declared himself Conducător ("Leader") with dictatorial powers. The National Legionary State was proclaimed on 14 September, with the Iron Guard ruling together with Antonescu as the sole legal political movement in Romania. Under King Michael I and the military government of Antonescu, Romania signed the Tripartite Pact on November 23, 1940. German troops had entered the country on 10 October 1940, officially to train the Romanian Army. Hitler's directive to the troops on 10 October had stated that "it is necessary to avoid even the slightest semblance of military occupation of Romania". The entrance of German troops into Romania prompted Italian dictator Benito Mussolini to launch an invasion of Greece, starting the Greco-Italian War. Having secured Hitler's approval in January 1941, Antonescu ousted the Iron Guard from power.
Romania was subsequently used as a platform for invasions of Yugoslavia and the Soviet Union. Despite not being involved militarily in the Invasion of Yugoslavia, Romania requested that Hungarian troops not operate in the Banat. Paulus thus modified the Hungarian plan and kept their troops west of the Tisza.
Romania joined the German-led invasion of the Soviet Union on June 22, 1941. Antonescu was the only foreign leader Hitler consulted on military matters and the two would meet no less than ten times throughout the war. Romania re-captured Bessarabia and Northern Bukovina during Operation München before conquering further Soviet territory and establishing the Transnistria Governorate. After the Siege of Odessa, the city became the capital of the Governorate. Romanian troops fought their way into the Crimea alongside German troops and contributed significantly to the Siege of Sevastopol. Later, Romanian mountain troops joined the German campaign in the Caucasus, reaching as far as Nalchik. After suffering devastating losses at Stalingrad, Romanian officials began secretly negotiating peace conditions with the Allies.
Romania's military industry was small but versatile, able to copy and produce thousands of French, Soviet, German, British, and Czechoslovak weapons systems, as well as producing capable original products. Romania also built sizable warships, such as the minelayer NMS Amiral Murgescu and the submarines NMS Rechinul and NMS Marsuinul. Hundreds of originally-designed aircraft were also produced, such as the fighter IAR-80 and the light bomber IAR-37. Romania had also been a major power in the oil industry since the 1800s. It was one of the largest producers in Europe and the Ploiești oil refineries provided about 30% of all Axis oil production. The British historian Dennis Deletant has asserted that Romania's crucial contributions to the Axis war effort, including having the third largest Axis army in Europe and sustaining the German war effort through oil and other materiel, meant that it was "on a par with Italy as a principal ally of Germany and not in the category of a minor Axis satellite".
Under Antonescu, Romania was a fascist dictatorship and a totalitarian state. Between 45,000 and 60,000 Jews were killed in Bukovina and Bessarabia by Romanian and German troops in 1941. According to Wilhelm Filderman, at least 150,000 Jews of Bessarabia and Bukovina died under the Antonescu regime (both those deported and those who remained). Overall, approximately 250,000 Jews under Romanian jurisdiction died.
By 1943, the tide began to turn. The Soviets pushed further west, retaking Ukraine and eventually launching an unsuccessful invasion of eastern Romania in the spring of 1944. Romanian troops in the Crimea helped repulse initial Soviet landings, but eventually all of the peninsula was re-conquered by Soviet forces and the Romanian Navy evacuated over 100,000 German and Romanian troops, an achievement which earned Romanian Admiral Horia Macellariu the Knight's Cross of the Iron Cross. During the Jassy-Kishinev Offensive of August 1944, Romania switched sides on August 23, 1944. Romanian troops then fought alongside the Soviet Army until the end of the war, reaching as far as Czechoslovakia and Austria.
Slovakia had been closely aligned with Germany almost immediately from its declaration of independence from Czechoslovakia on 14 March 1939. Slovakia entered into a treaty of protection with Germany on 23 March 1939.
Slovak troops joined the German invasion of Poland, having an interest in Spiš and Orava. Those two regions, along with Cieszyn Silesia, had been disputed between Poland and Czechoslovakia since 1918. The Poles fully annexed them following the Munich Agreement. After the invasion of Poland, Slovakia reclaimed control of those territories. Slovakia invaded Poland alongside German forces, contributing 50,000 men at this stage of the war.
Slovakia declared war on the Soviet Union in 1941 and signed the revived Anti-Comintern Pact in 1941. Slovak troops fought on Germany's Eastern Front, furnishing Germany with two divisions totaling 80,000 men. Slovakia declared war on the United Kingdom and the United States in 1942.
Slovakia was spared German military occupation until the Slovak National Uprising, which began on 29 August 1944, and was almost immediately crushed by the Waffen SS and Slovak troops loyal to Josef Tiso.
After the war, Tiso was executed and Slovakia once again became part of Czechoslovakia. The border with Poland was shifted back to the pre-war state. Slovakia and the Czech Republic finally separated into independent states in 1993.
Yugoslavia (two-day membership)
Yugoslavia was largely surrounded by members of the pact and now bordered the German Reich. From late 1940 Hitler sought a non-aggression pact with Yugoslavia. In February 1941, Hitler called for Yugoslavia's accession to the Tripartite Pact, but the Yugoslav government delayed. In March, divisions of the German army arrived at the Bulgarian-Yugoslav border and permission was sought for them to pass through to attack Greece. On 25 March 1941, fearing that Yugoslavia would otherwise be invaded, the Yugoslav government signed the Tripartite Pact with significant reservations. Unlike the other Axis powers, Yugoslavia was not obliged to provide military assistance, nor to provide its territory for the Axis to move military forces through during the war. Less than two days later, after demonstrations in the streets of Belgrade, Prince Paul and the government were removed from office by a coup d'état. Seventeen-year-old King Peter was declared to be of age. The new Yugoslav government under General Dušan Simović refused to ratify Yugoslavia's signing of the Tripartite Pact and started negotiations with Great Britain and the Soviet Union. Winston Churchill commented that "Yugoslavia has found its soul"; however, Hitler invaded and quickly took control.
Anti-Comintern Pact signatories
Some countries signed the Anti-Comintern Pact but not the Tripartite Pact. As such, their adherence to the Axis may have been less than that of Tripartite Pact signatories. Some of these states were officially at war with members of the Allied powers, while others remained neutral in the war and sent only volunteers. Signing the Anti-Comintern Pact was seen as "a litmus test of loyalty" by the Nazi leadership.
China (Reorganized National Government of China)
During the Second Sino-Japanese War, Japan advanced from its bases in Manchuria to occupy much of East and Central China. Several Japanese puppet states were organized in areas occupied by the Japanese Army, including the Provisional Government of the Republic of China at Beijing, which was formed in 1937, and the Reformed Government of the Republic of China at Nanjing, which was formed in 1938. These governments were merged into the Reorganized National Government of China at Nanjing on 29 March 1940. Wang Jingwei became head of state. The government was to be run along the same lines as the Nationalist regime and adopted its symbols.
The Nanjing Government had no real power; its main role was to act as a propaganda tool for the Japanese. The Nanjing Government concluded agreements with Japan and Manchukuo, authorising Japanese occupation of China and recognising the independence of Manchukuo under Japanese protection. The Nanjing Government signed the Anti-Comintern Pact of 1941 and declared war on the United States and the United Kingdom on 9 January 1943.
The government had a strained relationship with the Japanese from the beginning. Wang's insistence on his regime being the true Nationalist government of China and in replicating all the symbols of the Kuomintang led to frequent conflicts with the Japanese, the most prominent being the issue of the regime's flag, which was identical to that of the Republic of China.
The worsening situation for Japan from 1943 onwards meant that the Nanjing Army was given a more substantial role in the defence of occupied China than the Japanese had initially envisaged. The army was almost continuously employed against the communist New Fourth Army. Wang Jingwei died on 10 November 1944, and was succeeded by his deputy, Chen Gongbo. Chen had little influence; the real power behind the regime was Zhou Fohai, the mayor of Shanghai. Wang's death dispelled what little legitimacy the regime had. On 9 September 1945, following the defeat of Japan, the area was surrendered to General He Yingqin, a nationalist general loyal to Chiang Kai-shek. Chen Gongbo was tried and executed in 1946.
Denmark was occupied by Germany after April 1940 and never joined the Axis. On 31 May 1939, Denmark and Germany signed a treaty of non-aggression, which did not contain any military obligations for either party. On April 9, Germany attacked Scandinavia, and the speed of the German invasion of Denmark prevented King Christian X and the Danish government from going into exile. They had to accept "protection by the Reich" and the stationing of German forces in exchange for nominal independence. Denmark coordinated its foreign policy with Germany, extending diplomatic recognition to Axis collaborator and puppet regimes, and breaking diplomatic relations with the Allied governments-in-exile. Denmark broke diplomatic relations with the Soviet Union and signed the Anti-Comintern Pact in 1941. However, the United States and Britain ignored Denmark and worked with Henrik Kauffmann, Denmark's ambassador in the US, when it came to dealings about using Iceland, Greenland, and the Danish merchant fleet against Germany.
In 1941 Danish Nazis set up the Frikorps Danmark. Thousands of volunteers fought, and many died, as part of the German Army on the Eastern Front. Denmark sold agricultural and industrial products to Germany and made loans for armaments and fortifications. Denmark also paid for part of the Atlantic Wall fortifications that the Germans constructed on Danish territory, and was never reimbursed.
The Danish protectorate government lasted until 29 August 1943, when the cabinet resigned after the regularly scheduled and largely free election concluding the Folketing's current term. The Germans imposed martial law following Operation Safari, and Danish collaboration continued on an administrative level, with the Danish bureaucracy functioning under German command. The Royal Danish Navy scuttled 32 of its larger ships; Germany seized 64 ships and later raised and refitted 15 of the sunken vessels. 13 warships escaped to Sweden and formed a Danish naval flotilla in exile. Sweden allowed formation of a Danish military brigade in exile; it did not see combat. The Danish resistance movement was active in sabotage and issuing underground newspapers and blacklists of collaborators.
Although Finland never signed the Tripartite Pact, it fought against the Soviet Union alongside Germany in the 1941–44 Continuation War, during which the official position of the wartime Finnish government was that Finland was a co-belligerent of the Germans, whom it described as "brothers-in-arms". Finland did sign the revived Anti-Comintern Pact of November 1941. Finland signed a peace treaty with the Allied powers in 1947 which described Finland as having been "an ally of Hitlerite Germany" during the Continuation War. As such, Finland was the only democracy to join the Axis. Finland's relative independence from Germany put it in the most advantageous position of all the minor Axis powers.
Whilst Finland's relationship with Nazi Germany during the Continuation War remains controversial within Finland, in a 2008 Helsingin Sanomat survey of 28 Finnish historians 16 agreed that Finland had been an ally of Nazi Germany, with only six disagreeing.
The August 1939 Molotov–Ribbentrop Pact between Germany and the Soviet Union contained a secret protocol dividing much of eastern Europe and assigning Finland to the Soviet sphere of influence. After unsuccessfully attempting to force territorial and other concessions on the Finns, the Soviet Union invaded Finland in November 1939, starting the Winter War, with the intention of establishing a communist puppet government in Finland. The conflict threatened Germany's iron-ore supplies and offered the prospect of Allied interference in the region. Despite Finnish resistance, a peace treaty was signed in March 1940, wherein Finland ceded some key territory to the Soviet Union, including the Karelian Isthmus, containing Finland's second-largest city, Viipuri, and the critical defensive structure of the Mannerheim Line. After this war, Finland sought protection and support from the United Kingdom and non-aligned Sweden, but was thwarted by Soviet and German actions. This resulted in Finland being drawn closer to Germany, first with the intent of enlisting German support as a counterweight to thwart continuing Soviet pressure, and later to help regain lost territories.
In the opening days of Operation Barbarossa, Germany's invasion of the Soviet Union, Finland permitted German planes returning from mine dropping runs over Kronstadt and Neva River to refuel at Finnish airfields before returning to bases in East Prussia. In retaliation, the Soviet Union launched a major air offensive against Finnish airfields and towns, which resulted in a Finnish declaration of war against the Soviet Union on 25 June 1941. The Finnish conflict with the Soviet Union is generally referred to as the Continuation War.
Finland's main objective was to regain territory lost to the Soviet Union in the Winter War. However, on 10 July 1941, Field Marshal Carl Gustaf Emil Mannerheim issued an Order of the Day that contained a formulation understood internationally as a Finnish territorial interest in Russian Karelia.
Diplomatic relations between the United Kingdom and Finland were severed on 1 August 1941, after the British bombed German forces in the Finnish village and port of Petsamo. The United Kingdom repeatedly called on Finland to cease its offensive against the Soviet Union, and declared war on Finland on 6 December 1941, although no other military operations followed. War was never declared between Finland and the United States, though relations were severed between the two countries in 1944 as a result of the Ryti-Ribbentrop Agreement.
Finland maintained command of its armed forces and pursued war objectives independently of Germany. Germans and Finns did work closely together during Operation Silver Fox, a joint offensive against Murmansk, and Finland took part in the Siege of Leningrad. Finland was one of Germany's most important allies in its war with the USSR.
The relationship between Finland and Germany was also affected by the Ryti-Ribbentrop Agreement, which was presented as a German condition for help with munitions and air support, as the Soviet offensive coordinated with D-Day threatened Finland with complete occupation. The agreement, signed by President Risto Ryti but never ratified by the Finnish Parliament, bound Finland not to seek a separate peace.
After Soviet offensives were fought to a standstill, Ryti's successor as president, Marshal Mannerheim, dismissed the agreement and opened secret negotiations with the Soviets, which resulted in a ceasefire on 4 September and the Moscow Armistice on 19 September 1944. Under the terms of the armistice, Finland was obliged to expel German troops from Finnish territory, which resulted in the Lapland War.
Manchukuo, in the northeast region of China, had been a Japanese puppet state in Manchuria since the 1930s. It was nominally ruled by Puyi, the last emperor of the Qing Dynasty, but was in fact controlled by the Japanese military, in particular the Kwantung Army. While Manchukuo ostensibly was a state for ethnic Manchus, the region had a Han Chinese majority.
Following the Japanese invasion of Manchuria in 1931, the independence of Manchukuo was proclaimed on 18 February 1932, with Puyi as head of state. He was proclaimed the Emperor of Manchukuo a year later. The new Manchu nation was recognized by 23 of the League of Nations' 80 members. Germany, Italy, and the Soviet Union were among the major powers who recognised Manchukuo. Other countries who recognized the State were the Dominican Republic, Costa Rica, El Salvador, and Vatican City. Manchukuo was also recognised by the other Japanese allies and puppet states, including Mengjiang, the Burmese government of Ba Maw, Thailand, the Wang Jingwei regime, and the Indian government of Subhas Chandra Bose. The League of Nations later declared in 1934 that Manchuria lawfully remained a part of China. This precipitated Japanese withdrawal from the League. The Manchukuoan state ceased to exist after the Soviet invasion of Manchuria in 1945.
Manchukuo signed the Anti-Comintern Pact in 1939, but never signed the Tripartite Pact.
Caudillo Francisco Franco's Spanish State gave moral, economic, and military assistance to the Axis powers, while nominally maintaining neutrality. Franco described Spain as a member of the Axis and signed the Anti-Comintern Pact in 1941 with Hitler and Mussolini. Members of the ruling Falange party in Spain held irredentist designs on Gibraltar. Falangists also supported Spanish colonial acquisition of Tangier, French Morocco and northwestern French Algeria. In addition, Spain held ambitions on former Spanish colonies in Latin America. In June 1940 the Spanish government approached Germany to propose an alliance in exchange for Germany recognizing Spain's territorial aims: the annexation of the Oran province of Algeria, the incorporation of all Morocco, the extension of Spanish Sahara southward to the twentieth parallel, and the incorporation of French Cameroons into Spanish Guinea. Spain invaded and occupied the Tangier International Zone, maintaining its occupation until 1945. The occupation caused a dispute between Britain and Spain in November 1940; Spain agreed to protect British rights in the area and promised not to fortify it. The Spanish government secretly held expansionist plans towards Portugal that it made known to the German government. In a communiqué with Germany on 26 May 1942, Franco declared that Portugal should be annexed into Spain.
Franco had previously won the Spanish Civil War with the help of Nazi Germany and Fascist Italy. Both were eager to establish another fascist state in Europe. Spain owed Germany over $212 million for supplies of matériel during the Spanish Civil War, and Italian combat troops had actually fought in Spain on the side of Franco's Nationalists.
When Germany invaded the Soviet Union in 1941, Franco immediately offered to form a unit of military volunteers to join the invasion. This was accepted by Hitler and, within two weeks, there were more than enough volunteers to form a division – the Blue Division (División Azul) under General Agustín Muñoz Grandes.
The possibility of Spanish intervention in World War II was of concern to the United States, which investigated the activities of Spain's ruling Falange party in Latin America, especially Puerto Rico, where pro-Falange and pro-Franco sentiment was high, even amongst the ruling upper classes. The Falangists promoted the idea of supporting Spain's former colonies in fighting against American domination. Prior to the outbreak of war, support for Franco and the Falange was high in the Philippines. The Falange Exterior, the international department of the Falange, collaborated with Japanese forces against U.S. and Filipino forces in the Philippines through the Philippine Falange.
Bilateral agreements with the Axis Powers
Some countries colluded with Germany, Italy, and Japan without signing either the Anti-Comintern Pact or the Tripartite Pact. In some cases these bilateral arrangements were formalised; in other cases they were less formal. Some of these countries were puppet states established by the Axis Powers themselves.
Burma (Ba Maw government)
The Japanese Army and Burmese nationalists, led by Aung San, seized control of Burma from the United Kingdom during 1942. A State of Burma was formed on 1 August 1943 under the Burmese nationalist leader Ba Maw. A treaty of alliance between the Ba Maw regime and Japan was signed on the same day by Ba Maw for Burma and Sawada Renzo for Japan, in which the Ba Maw government pledged itself to provide the Japanese "with every necessary assistance in order to execute a successful military operation in Burma". The Ba Maw government mobilised Burmese society during the war to support the Axis war effort.
The Ba Maw regime established the Burma Defence Army (later renamed the Burma National Army), which was commanded by Aung San and fought alongside the Japanese in the Burma campaign. The Ba Maw regime has been described as a state having "independence without sovereignty" and as being effectively a Japanese puppet state. On 27 March 1945 the Burma National Army revolted against the Japanese.
As an ally of Japan during the war that deployed troops to fight on the Japanese side against Allied forces, Thailand is considered to have been part of the Axis alliance, or at least "aligned with the Axis powers". For example, writing in 1945, the American politician Clare Boothe Luce described Thailand as "undeniably an Axis country" during the war.
Thailand waged the Franco-Thai War from October 1940 to May 1941 to reclaim territory from French Indochina. Japanese forces invaded Thailand an hour and a half before the attack on Pearl Harbor (because of the International Date Line, the local time was on the morning of 8 December 1941). Only hours after the invasion, Prime Minister Field Marshal Phibunsongkhram ordered the cessation of resistance against the Japanese. An outline plan of Japan-Thailand joint military operations, whereby Thai forces would invade Burma to defend the right flank of Japanese forces, was agreed on 14 December 1941. On 21 December 1941, a military alliance with Japan was signed and on 25 January 1942, Sang Phathanothai read over the radio Thailand's formal declaration of war on the United Kingdom and the United States. The Thai ambassador to the United States, Mom Rajawongse Seni Pramoj, did not deliver his copy of the declaration of war. Therefore, although the British reciprocated by declaring war on Thailand and considered it a hostile country, the United States did not.
The Thais and Japanese agreed that the Burmese Shan State and Karenni State were to be under Thai control. The rest of Burma was to be under Japanese control. On 10 May 1942, the Thai Phayap Army entered Burma's eastern Shan State, which had been claimed by Siamese kingdoms. Three Thai infantry and one cavalry division, spearheaded by armoured reconnaissance groups and supported by the air force, engaged the retreating Chinese 93rd Division. Kengtung, the main objective, was captured on 27 May. Renewed offensives in June and November saw the Chinese retreat into Yunnan.
In November 1943 Thailand signed the Greater East Asia Joint Declaration, formally aligning itself with the Axis Powers. The area containing the Shan States and Kayah State was annexed by Thailand in 1942, and four northern states of Malaya were also transferred to Thailand by Japan as a reward for Thai co-operation. These areas were ceded back to Burma and Malaya in 1945. Thai military losses totalled 5,559 men during the war, of whom about 180 died resisting the Japanese invasion of 8 December 1941, roughly 150 died in action during the fighting in the Shan States, and the rest died of malaria and other diseases. The Free Thai Movement ("Seri Thai") was established during these first few months. Parallel Free Thai organizations were also established in the United Kingdom. The king's aunt, Queen Rambai Barni, was the nominal head of the British-based organization, and Pridi Banomyong, the regent, headed its largest contingent, which was operating within Thailand. Aided by elements of the military, secret airfields and training camps were established, while American Office of Strategic Services and British Force 136 agents slipped in and out of the country.
As the war dragged on, the Thai population came to resent the Japanese presence. In June 1944, Phibun was overthrown in a coup d'état. The new civilian government under Khuang Aphaiwong attempted to aid the resistance while maintaining cordial relations with the Japanese. After the war, U.S. influence prevented Thailand from being treated as an Axis country, but the British demanded three million tons of rice as reparations and the return of areas annexed from Malaya during the war. Thailand also returned the portions of British Burma and French Indochina that had been annexed. Phibun and a number of his associates were put on trial on charges of having committed war crimes and of collaborating with the Axis powers. However, the charges were dropped due to intense public pressure. Public opinion was favourable to Phibun, as he was thought to have done his best to protect Thai interests.
In 1939 the Soviet Union considered forming an alliance with either Britain and France or with Germany. When negotiations with Britain and France failed, it turned to Germany and signed the Molotov–Ribbentrop Pact in August 1939. Germany was now freed from the risk of war with the Soviets and was assured a supply of oil. The pact included a secret protocol whereby the territories controlled by Poland, Finland, Estonia, Romania, Latvia and Lithuania were divided into spheres of interest of the parties. The Soviet Union had been forced to cede the Kresy (Western Belarus and Western Ukraine) to Poland after losing the Soviet-Polish War of 1919–1921, and the Soviet Union sought to re-annex those territories.
On 1 September, barely a week after the pact had been signed, Germany invaded Poland. The Soviet Union invaded Poland from the east on 17 September and on 28 September signed a secret treaty with Nazi Germany to coordinate fighting against the Polish resistance. The Soviets targeted the intelligentsia, entrepreneurs, and officers, committing a string of atrocities that culminated in the Katyn massacre and mass relocations to the Gulag in Siberia. Soon after the invasion of Poland, the Soviet Union occupied the Baltic countries of Estonia, Latvia and Lithuania, and annexed Bessarabia and Northern Bukovina from Romania. The Soviet Union attacked Finland on 30 November 1939, which started the Winter War. Finnish defenses prevented an all-out invasion, resulting in an interim peace, but Finland was forced to cede strategically important border areas near Leningrad.
The Soviet Union provided material support to Germany in the war effort against Western Europe through a pair of commercial agreements, the first in 1939 and the second in 1940, which involved exports of raw materials (phosphates, chromium and iron ore, mineral oil, grain, cotton, and rubber). These and other export goods transported through Soviet and occupied Polish territories allowed Germany to circumvent the British naval blockade. In October and November 1940, German-Soviet talks about the potential of joining the Axis took place in Berlin. Joseph Stalin later personally countered with a separate proposal in a letter on 25 November that contained several secret protocols, including that "the area south of Batum and Baku in the general direction of the Persian Gulf is recognized as the center of aspirations of the Soviet Union", referring to an area approximating present-day Iraq and Iran, and a Soviet claim to Bulgaria. Hitler never responded to Stalin's letter. Shortly thereafter, Hitler issued a secret directive on the invasion of the Soviet Union; his reasons included the Nazi ideologies of Lebensraum and Heim ins Reich.
The German army entered Paris on 14 June 1940, following the Battle of France. Pétain became the last Prime Minister of the French Third Republic on 16 June 1940. He sued for peace with Germany, and on 22 June 1940 the French government concluded an armistice with Hitler and Mussolini, which came into effect at midnight on 25 June. Under the terms of the agreement, Germany occupied two-thirds of France, including Paris. Pétain was permitted to keep an "armistice army" of 100,000 men within the unoccupied southern zone. This number included neither the army based in the French colonial empire nor the French fleet. In Africa the Vichy regime was permitted to maintain 127,000 men. The French also maintained substantial garrisons at the French-mandate territory of Syria and Greater Lebanon, the French colony of Madagascar, and in French Somaliland. Some members of the Vichy government pushed for closer cooperation, but they were rebuffed by Pétain. Nor did Hitler accept that France could ever become a full military partner, and he constantly prevented the buildup of Vichy's military strength.
After the armistice, relations between Vichy France and the British quickly worsened. Although the French had told Churchill they would not allow their fleet to be taken by the Germans, the British launched naval attacks intended to prevent the French navy from being used, the most notable of which was the attack on the Algerian harbour of Mers el-Kebir on 3 July 1940. Though Churchill defended his controversial decision to attack the French fleet, the action greatly damaged relations between France and Britain. German propaganda trumpeted these attacks as an absolute betrayal of the French people by their former allies.
On 10 July 1940, Pétain was given emergency "full powers" by a majority vote of the French National Assembly. The following day approval of the new constitution by the Assembly effectively created the French State (l'État Français), replacing the French Republic with the government unofficially called "Vichy France," after the resort town of Vichy, where Pétain maintained his seat of government. This continued to be recognised as the lawful government of France by the neutral United States until 1942, while the United Kingdom had recognised de Gaulle's government-in-exile in London. Racial laws were introduced in France and its colonies and many foreign Jews in France were deported to Germany. Albert Lebrun, last President of the Republic, did not resign from the presidential office when he moved to Vizille on 10 July 1940. By 25 April 1945, during Pétain's trial, Lebrun argued that he thought he would be able to return to power after the fall of Germany, since he had not resigned.
In September 1940, Vichy France was forced to allow Japan to occupy French Indochina, a federation of French colonial possessions and protectorates encompassing modern day Vietnam, Laos, and Cambodia. The Vichy regime continued to administer them under Japanese military occupation. French Indochina was the base for the Japanese invasions of Thailand, Malaya, and the Dutch East Indies. On 26 September 1940, de Gaulle led an attack by Allied forces on the Vichy port of Dakar in French West Africa. Forces loyal to Pétain fired on de Gaulle and repulsed the attack after two days of heavy fighting, drawing Vichy France closer to Germany.
During the Anglo-Iraqi War of May 1941, Vichy France allowed Germany and Italy to use air bases in the French mandate of Syria to support the Iraqi revolt. British and Free French forces later attacked Syria and Lebanon in June–July 1941, and in 1942 Allied forces took over French Madagascar. More and more colonies abandoned Vichy, joining the Free French territories of French Equatorial Africa, Polynesia, New Caledonia and others that had sided with de Gaulle from the start.
In November 1942 Vichy French troops briefly resisted the landing of Allied troops in French North Africa for two days, until Admiral François Darlan negotiated a local ceasefire with the Allies. In response to the landings, German and Italian forces invaded the non-occupied zone in southern France and ended Vichy France as an entity with any kind of autonomy; it then became a puppet government for the occupied territories. In June 1943, the formerly Vichy-loyal colonial authorities in French North Africa, led by Henri Giraud, came to an agreement with the Free French to merge their own interim regime with the French National Committee (Comité Français National, CFN) to form a provisional government in Algiers, known as the French Committee of National Liberation (Comité Français de Libération Nationale, CFLN), initially led by Darlan.
In 1943 the Milice, a paramilitary force which had been founded by Vichy, was subordinated to the Germans and assisted them in rounding up opponents and Jews, as well as fighting the French Resistance. The Germans recruited volunteers in units independent of Vichy. Partly as a result of the great animosity of many right-wingers against the pre-war Front Populaire, volunteers joined the German forces in their anti-communist crusade against the USSR. Almost 7,000 joined Légion des Volontaires Français (LVF) from 1941 to 1944. The LVF then formed the cadre of the Waffen-SS Division Charlemagne in 1944–1945, with a maximum strength of some 7,500. Both the LVF and the Division Charlemagne fought on the eastern front.
Deprived of any military assets, territory or resources, the members of the Vichy government continued to fulfil their role as German puppets, living as quasi-prisoners in the so-called "Sigmaringen enclave", a castle in Baden-Württemberg, until the end of the war in May 1945.
In April 1941 the Arab nationalist Rashīd ʿAlī al-Gaylānī, who was pro-Axis, seized power in Iraq. British forces responded by deploying to Iraq and in turn removing Rashid Ali from power. During the fighting between Iraqi and British forces, Axis forces were deployed to Iraq to support the Iraqis. However, Rashid Ali was never able to conclude a formal alliance with the Axis.
Anti-British sentiments were widespread in Iraq prior to 1941. Rashid Ali was appointed Prime Minister in 1940. When Italy declared war on Britain, Rashid Ali maintained ties with the Italians, which angered the British government. In December 1940, as relations with the British worsened, Rashid Ali formally requested weapons and military supplies from Germany. In January 1941 he was forced to resign as a result of British pressure.
In April 1941 Rashid Ali, on seizing power in a coup, repudiated the Anglo-Iraqi Treaty of 1930 and demanded that the British abandon their military bases and withdraw from the country.
On 9 May 1941, Mohammad Amin al-Husayni, the Grand Mufti of Jerusalem, an associate of Ali then in asylum in Iraq, declared holy war against the British and called on Arabs throughout the Middle East to rise up against British rule. On 25 May 1941, the Germans stepped up offensive operations in the Middle East.
Hitler issued Order 30: "The Arab Freedom Movement in the Middle East is our natural ally against England. In this connection special importance is attached to the liberation of Iraq ... I have therefore decided to move forward in the Middle East by supporting Iraq."
Hostilities between the Iraqi and British forces began on 2 May 1941, with heavy fighting at the RAF air base in Habbaniyah. The Germans and Italians dispatched aircraft and aircrew to Iraq utilizing Vichy French bases in Syria; this led to Australian, British, Indian and Free French forces entering and conquering Syria in June and July. With the advance of British and Indian forces on Baghdad, Iraqi military resistance ended by 31 May 1941. Rashid Ali and the Mufti of Jerusalem fled to Iran, then Turkey, Italy, and finally Germany, where Ali was welcomed by Hitler as head of the Iraqi government-in-exile in Berlin.
Various nominally independent governments formed from local sympathisers under varying degrees of German, Italian, and Japanese control were established within the territories those powers occupied during the war. Some of these governments declared themselves neutral in the conflict with the Allies, or never concluded any formal alliance with the Axis powers, but their effective control by the Axis powers made them in reality an extension of the Axis and hence part of it. They differed from the military authorities and civilian commissioners provided by the occupying power in that they were formed from nationals of the occupied country, and in that the supposed legitimacy of the puppet state was recognised by the occupier de jure if not de facto.
The collaborationist administrations of German-occupied countries in Europe had varying degrees of autonomy, and not all of them qualified as fully recognized sovereign states. The General Government in occupied Poland was a fully German administration. In occupied Norway, the National Government headed by Vidkun Quisling – whose name came to symbolize pro-Axis collaboration in several languages – was subordinate to the Reichskommissariat Norwegen. It was never allowed to have any armed forces, be a recognized military partner, or have autonomy of any kind. In the occupied Netherlands, Anton Mussert was given the symbolic title of "Führer of the Netherlands' people". His National Socialist Movement formed a cabinet assisting the German administration, but was never recognized as a real Dutch government.
Albania (Albanian Kingdom)
After the Italian armistice, a vacuum of power opened up in Albania. The Italian occupying forces were rendered largely powerless, as the National Liberation Movement took control of the south and the National Front (Balli Kombëtar) took control of the north. Albanians in the Italian army joined the guerrilla forces. In September 1943 the guerrillas moved to take the capital, Tirana, but German paratroopers dropped into the city. Soon after the battle, the German High Command announced that it would recognize the independence of a greater Albania. The Germans organized an Albanian government, police, and military in collaboration with the Balli Kombëtar. They did not exert heavy control over Albania's administration, but instead attempted to gain popular appeal by giving their political partners what they wanted. Several Balli Kombëtar leaders held positions in the regime. The joint forces incorporated Kosovo, western Macedonia, southern Montenegro, and Preševo into the Albanian state. A High Council of Regency was created to carry out the functions of a head of state, while the government was headed mainly by Albanian conservative politicians. Albania was the only European country occupied by the Axis powers that ended World War II with a larger Jewish population than before the war. The Albanian government had refused to hand over its Jewish population, providing Jewish families with forged documents and helping them disperse among the Albanian population. Albania was completely liberated on 29 November 1944.
Territory of the Military Commander in Serbia
The Government of National Salvation, also referred to as the Nedić regime, was the second Serbian puppet government, after the Commissioner Government, established on the Territory of the (German) Military Commander in Serbia during World War II. It was appointed by the German Military Commander in Serbia and operated from 29 August 1941 to October 1944. Although the Serbian puppet regime had some support, it was unpopular with a majority of Serbs, who either joined the Yugoslav Partisans or Draža Mihailović's Chetniks. The Prime Minister throughout was General Milan Nedić. The Government of National Salvation was evacuated from Belgrade to Kitzbühel, Germany, in the first week of October 1944, before the German withdrawal from Serbia was complete.
Racial laws were introduced in all occupied territories, with immediate effects on Jews and Roma, and caused the imprisonment of those opposed to Nazism. Several concentration camps were formed in Serbia, and at the 1942 Anti-Freemason Exhibition in Belgrade the city was pronounced to be free of Jews (Judenfrei). On 1 April 1942, a Serbian Gestapo was formed. An estimated 120,000 people were interned in German-run concentration camps in Nedić's Serbia between 1941 and 1944; the Banjica concentration camp, however, was jointly run by the German Army and Nedić's regime. Between 50,000 and 80,000 people were killed during this period. Serbia became the second country in Europe, after Estonia, to be proclaimed Judenfrei. Approximately 14,500 Serbian Jews – 90 percent of Serbia's Jewish population of 16,000 – were murdered in World War II.
Nedić was captured by the Americans when they occupied the former territory of Austria, and was subsequently handed over to the Yugoslav communist authorities to act as a witness against war criminals, on the understanding he would be returned to American custody to face trial by the Allies. The Yugoslav authorities refused to return Nedić to United States custody. He died on 4 February 1946 after either jumping or falling out of the window of a Belgrade hospital, under circumstances which remain unclear.
Italy (Italian Social Republic)
Mussolini had been removed from office and arrested by King Victor Emmanuel III on 25 July 1943. After the Italian armistice, in a raid led by German paratrooper Otto Skorzeny, Mussolini was rescued from arrest.
Once restored to power, Mussolini declared that Italy was a republic and that he was the new head of state. He was subject to German control for the duration of the war.
Joint German-Italian client states
Greece (Hellenic State)
Following the German invasion of Greece and the flight of the Greek government to Crete and then Egypt, the Hellenic State was formed in May 1941 as a puppet state of both Italy and Germany. Initially, Italy had wished to annex Greece, but was pressured by Germany to avoid civil unrest such as had occurred in Bulgarian-annexed areas. The result was Italy accepting the creation of a puppet regime with the support of Germany. Italy had been assured by Hitler of a primary role in Greece. Most of the country was held by Italian forces, but strategic locations (Central Macedonia, the islands of the northeastern Aegean, most of Crete, and parts of Attica) were held by the Germans, who seized most of the country's economic assets and effectively controlled the collaborationist government. The puppet regime never commanded any real authority, and did not gain the allegiance of the people. It was somewhat successful in preventing secessionist movements like the Vlach "Roman Legion" from establishing themselves. By mid-1943, the Greek Resistance had liberated large parts of the mountainous interior ("Free Greece"), setting up a separate administration there. After the Italian armistice, the Italian occupation zone was taken over by the German armed forces, who remained in charge of the country until their withdrawal in autumn 1944. In some Aegean islands, German garrisons were left behind, and surrendered only after the end of the war.
The Empire of Japan created a number of client states in the areas occupied by its military, beginning with the creation of Manchukuo in 1932. These puppet states achieved varying degrees of international recognition.
The Kingdom of Kampuchea was a short-lived Japanese puppet state that lasted from 9 March 1945 to 15 August 1945. The Japanese entered the French protectorate of Cambodia in mid-1941, but allowed Vichy French officials to remain in administrative posts while Japanese calls for an "Asia for the Asiatics" won over many Cambodian nationalists.
In March 1945, in order to gain local support, the Japanese dissolved French colonial rule and pressured Cambodia to declare independence within the Greater East Asia Co-Prosperity Sphere. King Sihanouk declared the Kingdom of Kampuchea (replacing the French name) independent. Son Ngoc Thanh who had fled to Japan in 1942 returned in May and was appointed foreign minister. On the date of Japanese surrender, a new government was proclaimed with Son Ngoc Thanh as prime minister. When the Allies occupied Phnom Penh in October, Son Ngoc Thanh was arrested for collaborating with the Japanese and was exiled to France.
The Provisional Government of Free India (Azad Hind) was led by Subhas Chandra Bose, an Indian nationalist who rejected Mahatma Gandhi's nonviolent methods for achieving independence. The First Indian National Army faltered after its leadership objected to being a propaganda tool for Japanese war aims and to the role of the Japanese liaison office. It was revived by the Indian Independence League with Japanese support in 1942, after ex-PoWs and Indian civilians in South-east Asia agreed to participate in the INA venture on the condition that it was led by Bose. From occupied Singapore, Bose declared India's independence on 21 October 1943. The Indian National Army was committed as part of the U Go Offensive. It played a largely marginal role in the battle, suffered serious casualties, and had to withdraw with the rest of the Japanese forces after the siege of Imphal was broken. It was later committed to the defence of Burma against the Allied offensive, where it suffered a large number of desertions. The remaining troops of the INA maintained order in Rangoon after the withdrawal of Ba Maw's government. The provisional government was given nominal control of the Andaman and Nicobar Islands from November 1943 to August 1945.
Inner Mongolia (Mengjiang)
Mengjiang was a Japanese puppet state in Inner Mongolia. It was nominally ruled by Prince Demchugdongrub, a Mongol nobleman descended from Genghis Khan, but was in fact controlled by the Japanese military. Mengjiang's independence was proclaimed on 18 February 1936, following the Japanese occupation of the region.
The Inner Mongolians had several grievances against the central Chinese government in Nanking, including its policy of allowing unlimited migration of Han Chinese to the region. Several of the young princes of Inner Mongolia began to agitate for greater freedom from the central government, and it was through these men that the Japanese saw their best chance of exploiting Pan-Mongol nationalism and eventually seizing control of Outer Mongolia from the Soviet Union.
Japan created Mengjiang to exploit tensions between ethnic Mongolians and the central government of China, which in theory ruled Inner Mongolia. When the various puppet governments of China were unified under the Wang Jingwei government in March 1940, Mengjiang retained its separate identity as an autonomous federation. Although under the firm control of the Japanese Imperial Army, which occupied its territory, Prince Demchugdongrub had his own independent army. Mengjiang vanished in 1945 following Japan's defeat in World War II.
French Indochina, including Laos, had been occupied by the Japanese in 1941, though government by the Vichy French colonial officials had continued. The liberation of France in 1944, bringing Charles de Gaulle to power, meant the end of the alliance between Japan and the Vichy French administration in Indochina. On 9 March 1945 the Japanese staged a military coup in Hanoi, and on 8 April they reached Luang Phrabang. King Sīsavāngvong was detained by the Japanese, and forced to issue a declaration of independence, albeit one that does not appear to have ever been formalised. French control over Laos was re-asserted in 1946.
Philippines (Second Republic)
After the surrender of the Filipino and American forces in Bataan Peninsula and Corregidor Island, the Japanese established a puppet state in the Philippines in 1942. The following year, the Philippine National Assembly declared the Philippines an independent Republic and elected José Laurel as its President. There was never widespread civilian support for the state, largely because of the general anti-Japanese sentiment stemming from atrocities committed by the Imperial Japanese Army. The Second Philippine Republic ended with Japanese surrender in 1945, and Laurel was arrested and charged with treason by the US government. He was granted amnesty by President Manuel Roxas, and remained active in politics, ultimately winning a seat in the post-war Senate.
Vietnam (Empire of Vietnam)
The Empire of Vietnam was a short-lived Japanese puppet state that lasted from 11 March to 23 August 1945. When the Japanese seized control of French Indochina, they allowed Vichy French administrators to remain in nominal control. This French rule ended on 9 March 1945, when the Japanese officially took control of the government. Soon after, Emperor Bảo Đại voided the 1884 treaty with France and Trần Trọng Kim, a historian, became prime minister.
German, Italian and Japanese World War II cooperation
On 7 December 1941, Japan attacked the US naval bases in Pearl Harbor, Hawaii. According to the stipulation of the Tripartite Pact, Nazi Germany and Fascist Italy were required to come to the defense of their allies only if they were attacked. Since Japan had made the first move, Germany and Italy were not obliged to aid her until the United States counterattacked. Nevertheless, expecting the US to declare war on Germany in any event, Hitler ordered the Reichstag to formally declare war on the United States. Hitler had agreed that Germany would almost certainly declare war when the Japanese first informed him of their intention to go to war with the United States on 17 November 1941. Italy also declared war on the US.
Historian Ian Kershaw suggests that this declaration of war against the United States was a serious blunder made by Germany and Italy, as it allowed the United States to join the war in Europe and North Africa without any limitation. On the other hand, American destroyers escorting convoys had already been intervening in the Battle of the Atlantic against German and Italian ships and submarines, and the immediate declaration of war made the Second Happy Time possible for U-boats. Franklin D. Roosevelt had said in his Fireside Chat on 9 December 1941 that Germany and Italy considered themselves to be in a state of war with the United States. Plans for Rainbow Five had been published by the press early in December 1941, and Hitler could no longer ignore the amount of economic and military aid the US was giving Britain and the USSR.
[Image: Hitler declaring war on the United States on 11 December 1941]
[Image: Italian pilots of a Savoia-Marchetti SM.75 long-range cargo aircraft meeting with Japanese officials upon arriving in East Asia in 1942]
- Axis leaders of World War II
- Axis power negotiations on the division of Asia during World War II
- Central Powers
- Expansion operations and planning of the Axis powers
- Foreign relations of the Axis powers
- Greater Germanic Reich
- Hakkō ichiu
- Hypothetical Axis victory in World War II
- Imperial Italy (fascist)
- Croatian–Romanian–Slovak friendship proclamation
- List of pro-Axis leaders and governments or direct control in occupied territories
- New Order (Nazism)
- Participants in World War II
- Zweites Buch
The Federalist, number 10: Madison, in the Federalist number ten, rejected the Antifederalist argument that establishing a republic in the United States would lead to a struggle for power. He also argued that the Constitution would prevent the formation of national factions and parties.
implied powers, elastic clause, necessary and proper clause: An implied power is one not explicitly granted to the government but understood to be available to it. The elastic clause was included in the Constitution to allow flexibility: Congress was granted the right to make all laws it deemed necessary and proper, thus expanding its power.
loose, strict interpretation of the Constitution: A strict interpretation of the Constitution meant that it was to be followed exactly to the word, a philosophy adopted by Jefferson. Hamilton believed in a loose interpretation, holding that powers implied within the Constitution should be included in the new government so it could adapt to changes over time.
Reserved and delegated powers: Delegated powers were specifically enumerated rights granted to Congress and the President. The delegated powers of Congress included the ability to tax, issue currency, borrow money, declare war, and sustain an army. All powers not stated specifically in the Constitution were reserved to the states, as stated in the Tenth Amendment. These reserved powers reflected the flexibility built into the Constitution to allow it to adapt over time.
Undemocratic Elements in the Constitution: According to Charles Beard, the Constitution was written to the advantage of the elite in the United States. The founding fathers did not believe in total democracy, or mob rule, and so used state legislatures and the electoral college to elect senators and the president, respectively.
Flexibility in the Constitution: The flexibility in the Constitution enabled it to adapt over time; there have only been sixteen amendments since 1791. Our founding fathers used vague language, and so Supreme Court interpretations of the Constitution changed over time; the Elastic clause and the reserved powers are examples of this ambiguity.
Upper and Lower House: The Senate was seen as the upper house because it had fewer members, the age requirement was higher, and terms were six years as opposed to two for the House of Representatives. As a result the Senate was seen as more of an elitist institution, while the House was viewed as reflective of the common people.
Electoral College: In order to protect the interests of the elite, land owning class, the framers of the Constitution added the electoral college as a safeguard against the majority opinion. As a result, electors could elect a presidential candidate without considering the popular vote and elections could be won without a majority in the popular vote.
Washington and Hamilton
As the first president of the newly formed United States, George Washington played a largely passive role, suggesting few laws to Congress and attempting to reassure the public that he was above favoritism and sectional interests. Alexander Hamilton, on the other hand, took advantage of Washington's reluctance to be involved with domestic issues and, as secretary of the treasury, attempted to restore American credit by advocating a perpetual debt.
Post Revolutionary America—West: In the late eighteenth century, masses of people moved into the trans-Appalachian frontier to escape the post-revolutionary depression, despite the risk of violence presented by Indians and the British in their Northwest posts. Congress aided the expansion with the Land and Northwest Ordinances.
Post Revolutionary America—South: Many southern citizens had bought land in the west and watched the price of land eagerly. Aside from the unstable land speculation, the South had recovered from the war. It had diversified its crops and exported them at prewar levels.
Post Revolutionary America—North: Plagued by high taxes, overpopulation, and rebellion, the North found its efforts at postwar recovery impeded by the depression of the 1780s. Manufacturing and merchant marine industries were also negatively affected by independence; the British imposed new embargoes and tariffs on the United States.
President George Washington: George Washington was elected president in 1788 and again in 1792. Washington's two terms set the precedent for being President of the United States. He tended to shy away from the affairs of Congress and formed the first Presidential cabinet, appointing two of the ablest men to positions of high responsibility in his cabinet. His farewell address cautioned the American people to stay out of international affairs, remain isolationist, and beware of partisanship.
Washington’s Definition of the Presidency: George Washington set the precedent for being the President of the United States. He humbly served two terms and appointed the first cabinet. Washington stayed out of Congress’ way and supported the United States’ isolationist stance in world affairs.
Vice President John Adams: Because he ran second to George Washington in the elections of 1788 and 1792, he became the nation's first Vice President, limiting himself to presiding over the Senate. Prior to his term as Vice President, he was a diplomat to European nations such as France, Britain, and the Dutch Republic.
Judiciary Act, 1789: Congress passed the Judiciary Act in 1789 in an effort to create a federal-court system and replace the old system, in which the courts varied from state to state. Congress was burdened with filling in the holes of the judiciary system left by the Constitution.
Secretary of Treasury Hamilton: Hamilton was appointed in 1789, when the nation's economy was in shambles. In 1790, he submitted to Congress a Report on Public Credit that provided for the payment of all debts assumed during the war. He wanted a national bank and encouraged manufacturing through financial government protection.
Secretary of State Jefferson: As Secretary of State for Washington's first term, Thomas Jefferson wanted to establish reciprocal trade agreements with European nations and deny them to the British. This plan died in Congress in 1793, along with his other plans to try to manipulate the European countries. He resigned after the Citizen Genet scandal.
Secretary of War Knox: Henry Knox was the Secretary of War from 1789-1794, the first one under the United States Constitution. Prior to this, he fought in major Revolutionary battles, was in command of the West Point fortress in New York, and was the Secretary of War under the Articles of Confederation.
Attorney General Randolph: Edmund Jennings Randolph was the Attorney General under the Washington Administration from 1789-1794; before which he was the head of the Virginia delegation at the Constitutional Convention in Philadelphia and submitted the Virginia Plan.
Hamilton's program: ideas, proposals, reasons for it: Alexander Hamilton submitted to Congress a Report on Public Credit, which proposed how the national and foreign debts could be funded and how the federal government would take charge of the debts left to the states by the Revolution, beginning in 1790. The plan attempted to end wartime debt problems. Hamilton believed that a constant deficit was necessary to stimulate the nation's economy, and also believed that the U.S. should immediately repay its foreign debt.
Hamilton's Legacy: Hamilton's devices for restoring the credit of the nation led to great monetary gains for merchants, speculators, and others working in the port cities. The government's takeover of state debts freed New England, New Jersey, and South Carolina from harsh taxes.
Tariff of 1789: A revenue-raising tariff enacted by Congress, it encouraged the people of the U.S. to manufacture earthenware, glass, and other products at home in order to avoid importation. With a duty of 8.5%, the tariff succeeded in raising much-needed funds for Congress.
Bank of the U.S.: Chartered by the newly formed federal government, the bank was established in Philadelphia in 1791, and was permitted by the government to issue legal tender bank notes that could be exchanged for gold. The bank successfully established a national currency, but the charter ended in 1811, for economic and political reasons.
national debt, state debt, foreign debt: National debt accumulated by the US during the Revolutionary war continued to plague Americans. The states were also in debt after borrowing heavily from the government. Hamilton, in his Report on Public Credit, wanted to pay off foreign debt immediately and then through tariffs repay the national debt.
excise taxes: A fixed charge on items of consumption, usually used for raising revenue. The first federal excise tax, enacted by Congress in 1791, taxed all domestic distilled spirits. Anger towards this excise tax led directly to the Whiskey Rebellion.
Report on Manufactures: Presented to Congress in 1791 by Alexander Hamilton, the report suggested that protective tariffs on imports from foreign lands would lead Americans to produce more in their homeland, thus building national wealth and attracting foreigners.
Report on Public Credit: Hamilton submitted his report to Congress in 1790, hoping to seize it as an opportunity to rebuild the country’s credit base. He reported that the US was 54 million dollars in debt: 12 million to foreigners, and the rest to Americans. On top of that, he estimated that the states held debts of over 25 million dollars.
location of the capital: logrolling, D.C.: The nation's capital was originally located in New York but was later transferred to Washington, D.C. Originally planned by Pierre Charles L'Enfant, the city consisted of beautiful walkways, tree-lined streets, and masterfully designed buildings.
Indian Decline: The frontier warfare during the post-revolutionary era combined with the continuing penetration of western ways into Indian culture caused severe reductions in Indian population and territory. An increasing amount of hatred towards the "redskins" further encouraged the violence towards Indians.
Residence Act: Determined that a ten-mile-square area for the capital of the United States would be chosen along the Potomac River on the Virginia–Maryland border. The area was to be named the District of Columbia, after Christopher Columbus, and the site was selected by George Washington.
Major L’Enfant, Benjamin Banneker: Pierre Charles L’Enfant was the French architect who, in 1791, drew the plans for the nations capital in Washington D.C., on which the city is now based. Benjamin Banneker was appointed in 1791, by President Washington to assist L’Enfant in surveying the land where the capital city was to be built.
Whiskey Rebellion: An organized resistance in 1794 to the excise tax on whiskey, in which federal revenue officials were tarred and feathered, riots were conducted, and mobs burned the homes of excise inspectors. The federal militia captured many of the protesters, but most were released.
French Alliance of 1778: An alliance made between France and the United States during the American Revolution in 1778. The alliance was used to convince French citizens living in United States territory to become American citizens and therefore to bear arms or participate in the war.
French Revolution: The revolution was a period of social and political upheaval from 1789 to 1799. Caused by the inability of the ruling class and clergy to solve the state's problems, the hunger of the workers, the taxation of the poor, and the American Revolution, it led to the establishment of the First Republic and the end of the monarchy.
Citizen Genet: Sent to the United States by the French in 1793 to find soldiers to attack British ships and conquer the territories held by the Spanish, Edmund Genet founded the American Foreign Legion despite Washington’s April 22 proclamation of American neutrality.
Neutrality Proclamation: Issued by President George Washington on April 22, 1793, the Neutrality Proclamation stated that the United States would remain neutral in France's war against Britain and Spain, despite heavy French pressure to join their forces. Many Americans felt the proclamation to be a violation of the alliance with France.
XYZ Affair, Talleyrand: When a commission was sent to France in 1797 in order to negotiate problems between the two countries, its members were met by three agents of the French foreign minister Talleyrand, referred to as X, Y, and Z, who demanded a bribe and a loan of $10 million to the French government before negotiations could begin.
undeclared naval war with France: Otherwise known as the Quasi-War, the undeclared conflict between the two nations lasted from 1798 to 1800. In the conflict, the United States managed to capture ninety-three French ships while France captured just one U.S. ship.
British seizure of American ships: The Privy Council issued a secret order on November 6, 1793, to confiscate any foreign ships trading with French Caribbean islands. In this decision, they seized over 250 American ships which were conducting trade with the islands.
Royal Navy: The navy of the British Empire, the Royal Navy began to inspect American ships in 1793 for suspected deserters from the British Navy, whom they then forcibly placed back into their own navy. These bold actions, commonly referred to as impressment, further strengthened hostilities between the two countries.
"Rule of 1756": The French opened colonial trade to the Dutch, who were a neutral party. British prize courts, in response, stated that neutrals could not engage in wartime trade with a country if they were not permitted to trade with that country at times of peace.
Jay’s Treaty: Negotiated between the United States and Britain in 1794, the treaty evacuated British posts in the West, appointed a committee to set up the boundary between the United States and British North America, and named a commission to determine how much the British should pay for illegally seizing American ships. It did not resolve the British West Indies trade dispute.
Pinckney’s Treaty, right of deposit at New Orleans: Ratified in 1796, the treaty gave westerners the right to access the world markets duty-free through the Mississippi River. Spain promised to recognize the thirty-first parallel as the southern U.S. boundary, to evacuate its posts north of that line, and to discourage Indian attacks on western settlers.
Spanish intrigue in the Southwest: Spain attempted, in many cases, to detach the West from the United States, hoping to further expand their territory into the vast land. Washington’s failed attempt at an alliance with the Creek Indians, and American expansion into their lands, only led to further conflicts between America and Spain.
James Wilkinson: An American soldier who participated in the American Revolution and the War of 1812. Wilkinson was the man who reported Burr’s conspiracy to seize Louisiana to President Jefferson. He served as Secretary to the Board of War and was a brigadier general under Anthony Wayne.
"Mad" Anthony Wayne: Known as Mad Anthony due to his quick temper and his bravery, Wayne was a General during the American Revolution. He began his service with the Pennsylvania militia. He participated in the battles of Brandywine and Germantown and distinguished himself in the Battle of Monmouth.
Battle of Fallen Timbers: At the Battle of Fallen Timbers, in 1794, Anthony Wayne defeated a coalition of Native American tribes as the major general and commander in chief of the troops. The battle took place around present day Toledo and led to the Treaty of Greenville which opened up the Northwest to American settlers.
Treaty of Greenville, 1795: This treaty, which was drafted in 1795, opened the Northwest Territory to settlement by white United States citizens. The territory had formerly only been inhabited by Indians, so the treaty between the two peoples was an important one. The treaty served to end white-Indian hostilities for sixteen years.
Barbary Pirates: Following the American Revolution, the Barbary pirates began to raid the ships of the United States. The United States therefore formed treaties with Morocco, Tripoli, and Tunis, as European nations already had, that in exchange for tribute payments gave American ships immunity from these attacks.
Tripolitan War: From 1801-1805, the war was a battle between the North African state Tripoli and the United States. The Tripolitans had seized U.S. ships after the U.S. refused to pay an increase in the tribute paid to the pasha of Tripoli. In the end, the demand for payment was dropped and the U.S. paid $60,000 to free Americans held captive.
Washington’s Farewell Address: Recognizing the important role he had taken in developing the office of president of the United States, Washington in his farewell address asked the citizens of the United States to avoid involvement in political problems between foreign nations.
Federalists and Republicans
By the election of 1796, the United States political system had become bipartisan, largely a result of the disagreements over Hamilton’s programs and foreign policies. The split in the Federalist party became official with Jefferson’s resignation from Washington’s cabinet in 1793, upon which he formed the Republicans, whose ideology claimed that the Federalists had become a party geared toward enriching the wealthy at the expense of the poor.
election of 1796: President Adams, Vice-president Jefferson: Jefferson was supported by the Republicans, while Adams was supported by the Federalists. Adams was victorious in the election, and Jefferson was made Vice-president, as the Constitution stated that the candidate with the second highest number of electoral votes got that position.
new states: Vt, Ky, Tenn: Vermont, Kentucky, and Tennessee were all admitted into the United States between 1791 and 1796 by the federal government. Their admission was spurred by the hope that they would then become completely loyal to the Union, as they had not been before.
•Federalists: The Federalist party was the starting point of the movement to draft and later ratify the new Constitution. It urged for a stronger national government to take shape after 1781. Its leaders included Alexander Hamilton, John Jay, James Madison, and George Washington, and the party held power between 1789 and 1801. Under Hamilton, the Federalists solved the problem of revolutionary debt, created Jay’s Treaty and also the Alien and Sedition Acts.
•Democratic-Republicans: The first political party in the United States, the Democratic-Republican party was created by Thomas Jefferson and James Madison in opposition to the views of Alexander Hamilton. It arose to power in the 1790s and opposed the Federalist party, while advocating states’ rights and an agricultural society. The party expressed sympathy towards the French Revolution but opposed close ties with the British.
Society of the Cincinnati: A post-war organization of veteran officers from the Continental Army, the Society of the Cincinnati was feared by many because its charter had the possibility of becoming a hereditary aristocracy, as it gave membership to descendants.
Democratic Societies: Organizations in which the wealthy were on a level of equality with the poor. This is best exemplified by the Philadelphia Democratic Society, in which Republicans were united regardless of wealth or status, and which believed that those with talent and ambition should not forget their dreams.
•Alien and Sedition Acts: In 1798, the Naturalization Act said residents must remain in the United States for fourteen years before becoming naturalized, while the Alien Act allowed the deportation of any alien believed to be a threat to national security. The Alien Enemies Act allowed the President to deport aliens during times of war and the Sedition Act made it a criminal offense to plot against government. These acts were criticized because they oppressed the people’s First Amendment rights.
Virginia and Kentucky Resolutions: Written by Jefferson and Madison in protest to the Alien and Sedition Acts, the Virginia Resolution stated that states possessed the right to intervene in unconstitutional acts in government, and the Kentucky Resolution stated that federal government could not extend powers outside of constitutionally granted powers.
Fries Rebellion: Pennsylvanian German farmers, in 1799, rebelled against the federal property tax that had been levied to fund the expansion of the nation’s army, releasing from custody debtors and citizens who had not paid the tax. This rebellion alerted those in power to the general disgruntlement of much of the nation.
doctrine of nullification: A group of Kentucky Resolutions adopted in 1799, the Doctrine of Nullification stated that any federal laws considered by the people to be "objectionable" may be nullified by the states. The passage of these resolutions proved the probability of upcoming violent disagreements over how the law should be interpreted.
Convention of 1800: The Federalist party split into two factions during the Convention of 1800, as the party was undecided as to who their presidential candidate should be. The Federalists wanted to nominate Adams, while the "High Federalists," led by Alexander Hamilton, denounced his candidacy.
•Second Great Awakening: Occurring mainly in the frontier states, the Second Great Awakening began in the 1790s and was characterized by "camp meetings," or open air revivals which lasted for weeks at a time where revivalists spoke of the second coming of Jesus. Charles Finney, an especially prominent preacher of the time, preached not only the second coming of Jesus, but also the gospel of free will, which led to a greater democratic power commonly seen in the ideals of Jacksonian democracy.
Fugitive Slave Law: Enacted by Congress in 1793, the law required judges to give a runaway slave back to his or her owner, or the owner’s representative, if caught. This law indicated tightening racial tensions, and stripped slaves of the right to trial by jury or to present evidence of their freedom.
Gabriel’s Rebellion: Led by Gabriel Prosser in August 1800, the rebellion broke out near Richmond, Virginia when 1,000 slaves marched to the capital. Thirty five slaves were executed by a swift state militia, but whites still feared what might occur in the future with slave uprisings. The rebellion increased tensions between the North and the South.
Logan Act: Enacted in 1795 by the legislative assembly, the Logan Act allowed city councils the power to establish, as well as to support and to regulate, a system of schools for the general public. This act led to the establishment of school systems throughout the U.S.
Legal equality for free blacks: These measures first appeared in the 1780s and 1790s, when states dropped restrictions on freedom of movement, protected the property of blacks, and allowed them to enroll in the state militia. By 1796, all but three states allowed blacks voting rights.
Alexander McGillivray: The leader of the Creek Indians, who in 1790 signed a peace treaty with the United States that allowed whites to occupy lands in the Georgia piedmont, but spared the rest of the Creek lands from white settlement. He received a large bribe for signing the treaty.
Gilbert Stuart: An American painter who is particularly well known for his many portraits of wartime hero and President George Washington. His three styles of portrait painting: the "Vaughan" half-length, the "Lansdowne" full-length, and the "Athenaeum" head have often been mimicked.
Charles Willson Peale: As a portrait painter of the Federalist period, Peale is best known for his fourteen portraits of George Washington. In 1786, Peale began a museum of natural history specimens and portraits in Independence Hall, Philadelphia, and helped to found the Pennsylvania Academy of the Fine Arts in 1805.
10.1 Macroeconomic Perspectives on Demand and Supply
Neoclassical economists emphasize Say’s law, which holds that supply creates its own demand. Keynesian economists emphasize Keynes’ law, which holds that demand creates its own supply. Many mainstream economists take a Keynesian perspective, emphasizing the importance of aggregate demand, for the short run, and a neoclassical perspective, emphasizing the importance of aggregate supply, for the long run.
10.2 Building a Model of Aggregate Demand and Aggregate Supply
The upward-sloping short run aggregate supply (SRAS) curve shows the positive relationship between the price level and the level of real GDP in the short run. Aggregate supply slopes up because when the price level for outputs increases, while the price level of inputs remains fixed, the opportunity for additional profits encourages more production. The aggregate supply curve is near-horizontal on the left and near-vertical on the right. In the long run, aggregate supply is shown by a vertical line at the level of potential output, which is the maximum level of output the economy can produce with its existing levels of workers, physical capital, technology, and economic institutions.
The downward-sloping aggregate demand (AD) curve shows the relationship between the price level for outputs and the quantity of total spending in the economy. It slopes down because of: (a) the wealth effect, which means that a higher price level leads to lower real wealth, which reduces the level of consumption; (b) the interest rate effect, which holds that a higher price level will mean a greater demand for money, which will tend to drive up interest rates and reduce investment spending; and (c) the foreign price effect, which holds that a rise in the price level will make domestic goods relatively more expensive, discouraging exports and encouraging imports.
10.3 Shifts in Aggregate Supply
The aggregate demand/aggregate supply (AD/AS) diagram shows how AD and AS interact. The intersection of the AD and AS curves shows the equilibrium output and price level in the economy. Movements of either AS or AD will result in a different equilibrium output and price level. The aggregate supply curve will shift out to the right as productivity increases. It will shift back to the left as the price of key inputs rises, and will shift out to the right if the price of key inputs falls. If the AS curve shifts back to the left, the combination of lower output, higher unemployment, and higher inflation, called stagflation, occurs. If AS shifts out to the right, a combination of lower inflation, higher output, and lower unemployment is possible.
10.4 Shifts in Aggregate Demand
The AD curve will shift out as the components of aggregate demand—C, I, G, and X–M—rise. It will shift back to the left as these components fall. These factors can change because of different personal choices, like those resulting from consumer or business confidence, or from policy choices like changes in government spending and taxes. If the AD curve shifts to the right, then the equilibrium quantity of output and the price level will rise. If the AD curve shifts to the left, then the equilibrium quantity of output and the price level will fall. Whether equilibrium output changes relatively more than the price level or whether the price level changes relatively more than output is determined by where the AD curve intersects with the AS curve.
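The comparative statics described above can be made concrete with a small numerical sketch. The code below is not taken from the text: the linear functional forms, the specific coefficients, and the shift sizes are assumptions chosen only to illustrate the directions of change.

```python
# Minimal toy AD/AS model. The linear curves, coefficients, and shift sizes
# below are illustrative assumptions, not values from the chapter.

def aggregate_demand(p, ad_shift=0.0):
    """Total spending demanded at price level p; slopes downward in p."""
    return 1000 - 4 * p + ad_shift

def short_run_aggregate_supply(p, as_shift=0.0):
    """Real GDP supplied at price level p; slopes upward while input prices stay fixed."""
    return 200 + 4 * p + as_shift

def equilibrium(ad_shift=0.0, as_shift=0.0):
    """Scan a grid of price levels and return the one where AD and SRAS intersect."""
    p_star = min(range(0, 301),
                 key=lambda p: abs(aggregate_demand(p, ad_shift)
                                   - short_run_aggregate_supply(p, as_shift)))
    return p_star, short_run_aggregate_supply(p_star, as_shift)

print(equilibrium())              # baseline: price level 100, output 600
print(equilibrium(ad_shift=80))   # AD shifts right: price level and output both rise
print(equilibrium(as_shift=-80))  # AS shifts left: price level rises, output falls (stagflation)
```

Solving the two linear equations by hand gives the same answers (for the baseline, 1000 − 4P = 200 + 4P yields a price level of 100 and real GDP of 600); the point is the direction of the movements described in the text, not their magnitudes.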
The AD/AS diagram superficially resembles the microeconomic supply and demand diagram, but what appears on the horizontal and vertical axes, and the underlying economic reasons for the shapes of the curves, are very different. Long-term economic growth is illustrated in the AD/AS framework by a gradual shift of the aggregate supply curve to the right. A recession is illustrated when the intersection of AD and AS is substantially below potential GDP, while an expanding economy is illustrated when the intersection of AS and AD is near potential GDP.
10.5 How the AD/AS Model Incorporates Growth, Unemployment, and Inflation
Cyclical unemployment is relatively large in the AD/AS framework when the equilibrium is substantially below potential GDP. Cyclical unemployment is small in the AD/AS framework when the equilibrium is near potential GDP. The natural rate of unemployment, as determined by the labor market institutions of the economy, is built into what is meant by potential GDP, but does not otherwise appear in an AD/AS diagram. Pressures for inflation to rise or fall are shown in the AD/AS framework when the movement from one equilibrium to another causes the price level to rise or to fall. The balance of trade does not appear directly in the AD/AS diagram, but it appears indirectly in several ways. Increases in exports or declines in imports can cause shifts in AD. Changes in the price of key imported inputs to production, like oil, can cause shifts in AS. The AD/AS model is the key model used in this book to understand macroeconomic issues.
10.6 Keynes’ Law and Say’s Law in the AD/AS Model
The SRAS curve can be divided into three zones. Keynes’ law says demand creates its own supply, so that changes in aggregate demand cause changes in real GDP and employment. Keynes’ law can be shown on the horizontal Keynesian zone of the aggregate supply curve. The Keynesian zone occurs at the left of the SRAS curve where it is fairly flat, so movements in AD will affect output, but have little effect on the price level. Say’s law says supply creates its own demand. Changes in aggregate demand have no effect on real GDP and employment, only on the price level. Say’s law can be shown on the vertical neoclassical zone of the aggregate supply curve. The neoclassical zone occurs at the right of the SRAS curve where it is fairly vertical, and so movements in AD will affect the price level, but have little impact on output. The intermediate zone in the middle of the SRAS curve is upward-sloping, so a rise in AD will cause higher output and price level, while a fall in AD will lead to a lower output and price level.
History of Albania
The history of Albania forms a part of the history of Europe. During classical antiquity, Albania was home to several Illyrian tribes such as the Ardiaei, Albanoi, Amantini, Enchele, Taulantii and many others, but also Thracian and Greek tribes, as well as several Greek colonies established on the Illyrian coast. In the 3rd century BC, the area was annexed by Rome and became part of the Roman provinces of Dalmatia, Macedonia and Moesia Superior. Afterwards, the territory remained under Roman and Byzantine control until the Slavic migrations of the 7th century. It was integrated into the Bulgarian Empire in the 9th century.
In the Middle Ages, the Principality of Arbër and a union with the Kingdom of Sicily known as the medieval Kingdom of Albania were established. Some areas became part of the Venetian and later Serbian Empire. Between the mid-14th and the late 15th centuries, most of modern-day Albania was dominated by Albanian principalities, until they fell to the rapid invasion of the Ottoman Empire. Albania remained under Ottoman control as part of the province of Rumelia until 1912, with some interruptions during the 18th and 19th centuries with the establishment of autonomy-minded Albanian lords. The first independent Albanian state was founded by the Albanian Declaration of Independence following a short occupation by the Kingdom of Serbia. The formation of an Albanian national consciousness dates to the later 19th century and is part of the larger phenomenon of the rise of nationalism under the Ottoman Empire.
A short-lived monarchical state known as the Principality of Albania (1914–1925) was succeeded by an even shorter-lived first Albanian Republic (1925–1928). Another monarchy, the Kingdom of Albania (1928–1939), replaced the republic. The country endured occupation by Italy just prior to World War II. After the collapse of the Axis powers, Albania became a communist state, the Socialist People's Republic of Albania, which for most of its duration was dominated by Enver Hoxha (died 1985). Hoxha's political heir Ramiz Alia oversaw the disintegration of the "Hoxhaist" state during the wider collapse of the Eastern Bloc in the later 1980s.
The communist regime collapsed in 1990, and the former communist Party of Labour of Albania was routed in elections in March 1992, amid economic collapse and social unrest. The unstable economic situation led to an Albanian diaspora, mostly to Italy, Greece, Switzerland, Germany and North America during the 1990s. The crisis peaked in the Albanian Turmoil of 1997. An amelioration of the economic and political conditions in the early years of the 21st century enabled Albania to become a full member of NATO in 2009. The country is applying to join the European Union.
The first traces of human presence in Albania, dating to the Middle Paleolithic and Upper Paleolithic eras, were found in the village of Xarrë, near Sarandë and Dajti near Tirana. The objects found in a cave near Xarrë include flint and jasper objects and fossilized animal bones, while those found at Mount Dajt comprise bone and stone tools similar to those of the Aurignacian culture. The Paleolithic finds of Albania show great similarities with objects of the same era found at Crvena Stijena in Montenegro and north-western Greece.
Several Bronze Age artifacts from tumulus burials have been unearthed in southern Albania that show close connection with sites in south-western Macedonia and Lefkada, Greece. Archaeologists have come to the conclusion that these regions were inhabited from the middle of the third millennium BC by Indo-European people who spoke a Proto-Greek language. A part of this population later moved to Mycenae around 1600 BC and founded the Mycenaean civilisation there. Other tumulus burials, dating to around the third millennium BC, have been found in northern Albania, especially near the city of Shkodra; these burials were most likely built by proto-Illyrians. Another population group, the Illirii, probably the southernmost Illyrian tribe of that time that lived on the border of Albania and Montenegro, possibly neighbored the Greek tribes.
In the late Bronze Age and early Iron Age a number of possible population movements occurred in the territories of modern Albania, for example the settlement of the Bryges in areas of southern Albania-northwestern Greece and Illyrian tribes into central Albania. The latter derived from an early Indo-European presence in the western Balkan Peninsula. The movement of the Brygian tribes can be assumed to coincide with the beginning of the Iron Age in the Balkans during the early 1st millennium BC.
Archaeologists associate the Illyrians with the Hallstatt culture, an Iron Age people noted for production of iron, bronze swords with winged-shaped handles, and the domestication of horses. It is impossible to delineate Illyrian tribes from other Paleo-Balkan peoples in a strict linguistic sense, but areas classically included under "Illyrian" for the Balkans Iron Age include the area of the Danube, Sava, and Morava rivers to the Adriatic Sea and the Shar Mountains.
The Illyrians were a group of tribes who inhabited the western Balkans during the classical times. The territory the tribes covered came to be known as Illyria to Greek and Roman authors, corresponding roughly to the area between the Adriatic Sea in the west, the Drava river in the north, the Morava river in the east and the mouth of Vjosë river in the south. The first account of the Illyrian peoples comes from the Coastal Passage contained in a periplus, an ancient Greek text of the middle of the 4th century BC.
Several Illyrian tribes that resided in the region of Albania were the Ardiaei, Taulantii and Albanoi in central Albania, the Parthini, the Abri and the Caviii in the north, the Enchelei in the east, the Bylliones in the south and several others. In the westernmost parts of the territory of Albania, along with the Illyrian tribes, lived the Bryges, a Phrygian people, and in the south lived the Greek tribe of the Chaonians.
In the 4th century BC, the Illyrian king Bardylis united several Illyrian tribes and engaged in conflicts with Macedon to the south-east, but was defeated. Bardyllis was succeeded by Grabos, then by Bardylis II, and then by Cleitus the Illyrian, who was defeated by Alexander the Great.
Around 230 BC, the Ardiaei briefly attained military might under the reign of king Agron. Agron extended his rule over other neighbouring tribes as well. He raided parts of Epirus, Epidamnus, and the islands of Corcyra and Pharos. His state stretched from Narona in Dalmatia south to the river Aoos and Corcyra. During his reign, the Ardiaean Kingdom reached the height of its power. The army and fleet made it a major regional power in the Balkans and the southern Adriatic. The king regained control of the Adriatic with his warships (lembi), a domination once enjoyed by the Liburnians. None of his neighbours were nearly as powerful. Agron divorced his (first) wife.
Agron suddenly died, circa 231 BC, after his triumph over the Aetolians. Agron's (second) wife was Queen Teuta, who acted as regent after Agron's death. According to Polybius, she ruled "by women's reasoning". Teuta started to address the neighbouring states malevolently, supporting the piratical raids of her subjects. After capturing Dyrrhachium and Phoenice, Teuta's forces extended their operations further southward into the Ionian Sea, defeating the combined Achaean and Aetolian fleet in the Battle of Paxos and capturing the island of Corcyra. Later on, in 229 BC, she clashed with the Romans and initiated the Illyrian Wars. These wars, which were spread out over 60 years, eventually resulted in defeat for the Illyrians by 168 BC and the end of Illyrian independence when King Gentius was defeated by a Roman army after heavy clashes with Rome and Roman allied cities such as Apollonia and Dyrrhachium under Anicius Gallus. After his defeat, the Romans split the region into three administrative divisions, called meris.
Greeks and Romans
Beginning in the 7th century BC, Greek colonies were established on the Illyrian coast. The most important were Apollonia, Aulon (modern-day Vlorë), Epidamnos (modern-day Durrës), and Lissus (modern-day Lezhë). The rediscovered Greek city of Buthrotum (Ancient Greek: Βουθρωτόν, romanized: Vouthrotón) (modern-day Butrint), a UNESCO World Heritage Site, is probably more significant today than it was when Julius Caesar used it as a provisions depot for his troops during his campaigns in the 1st century BC. At that time, it was considered an unimportant outpost, overshadowed by Apollonia and Epidamnos.
The lands comprising modern-day Albania were incorporated into the Roman Empire as part of the province of Illyricum above the river Drin, and Roman Macedonia (specifically as Epirus Nova) below it. The western part of the Via Egnatia ran inside modern Albania, ending at Dyrrachium. Illyricum was later divided into the provinces of Dalmatia and Pannonia.
The Roman province of Illyricum or Illyris Romana or Illyris Barbara or Illyria Barbara replaced most of the region of Illyria. It stretched from the Drilon River in modern Albania to Istria (Croatia) in the west and to the Sava River (Bosnia and Herzegovina) in the north. Salona (near modern Split in Croatia) functioned as its capital. The regions which it included changed through the centuries though a great part of ancient Illyria remained part of Illyricum.
South Illyria became Epirus Nova, part of the Roman province of Macedonia. In 357 AD the region was part of the Praetorian prefecture of Illyricum, one of four large praetorian prefectures into which the Late Roman Empire was divided. By 395 AD the region was divided between the Diocese of Dacia (as Praevalitana) and the Diocese of Macedonia (as Epirus Nova). Most of the region of modern Albania corresponds to the Epirus Nova.
Christianity came to Epirus Nova, then part of the Roman province of Macedonia. By the 3rd and 4th centuries AD, Christianity had become the established religion in Byzantium, supplanting pagan polytheism and eclipsing for the most part the humanistic world outlook and institutions inherited from the Greek and Roman civilizations. The Durrës Amphitheatre (Albanian: Amfiteatri i Durrësit) is a historic monument from the time period located in Durrës, Albania, that was used to preach Christianity to civilians during that time.
When the Roman Empire was divided into eastern and western halves in AD 395, Illyria east of the Drinus River (Drina between Bosnia and Serbia), including the lands that form Albania, was administered by the Eastern Empire but was ecclesiastically dependent on Rome. Though the country was in the fold of Byzantium, Christians in the region remained under the jurisdiction of the Pope until 732. In that year the iconoclast Byzantine emperor Leo III, angered by archbishops of the region because they had supported Rome in the Iconoclastic Controversy, detached the church of the province from the Roman pope and placed it under the patriarch of Constantinople.
When the Christian church split in 1054 between Eastern Orthodoxy and Catholicism, the region of southern Albania retained its ties to Constantinople, while the north reverted to the jurisdiction of Rome. This split marked the first significant religious fragmentation of the country. After the formation of the Slav principality of Dioclia (modern Montenegro), the metropolitan see of Bar was created in 1089, and dioceses in northern Albania (Shkodër, Ulcinj) became its suffragans. Starting in 1019, Albanian dioceses of the Byzantine rite were suffragans of the independent Archdiocese of Ohrid until Dyrrachion and Nicopolis, were re-established as metropolitan sees. Thereafter, only the dioceses in inner Albania (Elbasan, Krujë) remained attached to Ohrid. In the 13th century during the Venetian occupation, the Latin Archdiocese of Durrës was founded.
Early Middle Ages
After the region fell to the Romans in 168 BC it became part of Epirus nova that was, in turn, part of the Roman province of Macedonia. When the Roman Empire was divided into East and West in 395, the territories of modern Albania became part of the Byzantine Empire. Beginning in the first decades of Byzantine rule (until 461), the region suffered devastating raids by Visigoths, Huns, and Ostrogoths. In the 6th and 7th centuries, the region experienced an influx of Slavs.
In general, the invaders destroyed or weakened Roman and Byzantine cultural centres in the lands that would become Albania.
In the late 11th and 12th centuries, the region played a crucial part in the Byzantine–Norman wars; Dyrrhachium was the westernmost terminus of the Via Egnatia, the main overland route to Constantinople, and was one of the main targets of the Normans (cf. Battle of Dyrrhachium (1081)). Towards the end of the 12th century, as Byzantine central authority weakened and rebellions and regionalist secessionism became more common, the region of Arbanon became an autonomous principality ruled by its own hereditary princes. In 1258, the Sicilians took possession of the island of Corfu and the Albanian coast, from Dyrrhachium to Valona and Buthrotum and as far inland as Berat. This foothold, reformed in 1272 as the "Kingdom of Albania", was intended by the dynamic Sicilian ruler, Charles of Anjou, to become the launchpad for an overland invasion of the Byzantine Empire. The Byzantines, however, managed to recover most of Albania by 1274, leaving only Valona and Dyrrhachium in Charles' hands. Finally, when Charles launched his much-delayed advance, it was stopped at the Siege of Berat in 1280–1281. Albania would remain largely part of the Byzantine empire until the Byzantine civil war of 1341–1347 when it fell shortly to the hands of the Serbian ruler Stephen Dushan.
In the mid-9th century, most of eastern Albania became part of the Bulgarian Empire. The area, known as Kutmichevitsa, became an important Bulgarian cultural center in the 10th century with many thriving towns such as Devol, Glavinitsa (Ballsh) and Belgrad (Berat). When the Byzantines managed to conquer the First Bulgarian Empire the fortresses in eastern Albania were some of the last Bulgarian strongholds to submit to the Byzantines. Later the region was recovered by the Second Bulgarian Empire.
In the Middle Ages, the name Arberia began to be increasingly applied to the region now comprising the nation of Albania. The first undisputed mention of Albanians in the historical record is attested in a Byzantine source for the first time in 1079–1080, in a work titled History by Byzantine historian Michael Attaliates, who referred to the Albanoi as having taken part in a revolt against Constantinople in 1043 and to the Arbanitai as subjects of the duke of Dyrrhachium. A later reference to Albanians from the same Attaliates, regarding the participation of Albanians in a rebellion around 1078, is undisputed.
Principality of Arbër
In 1190, the Principality of Arbër (Arbanon) was founded by archon Progon in the region of Krujë. Progon was succeeded by Gjin Progoni and then Dhimitër Progoni. Arbanon extended over the modern districts of central Albania, with its capital located at Krujë.
The principality of Arbanon was established in 1190 by the native archon Progon in the region surrounding Kruja, to the east and northeast of Venetian territories. Progon was succeeded by his sons Gjin and then Demetrius (Dhimitër), who managed to retain a considerable degree of autonomy from the Byzantine Empire. In 1204, Arbanon attained full, though temporary, political independence, taking advantage of the weakening of Constantinople following its pillage during the Fourth Crusade. However, Arbanon lost much of its autonomy ca. 1216, when the ruler of Epirus, Michael I Komnenos Doukas, started an invasion northward into Albania and Macedonia, taking Kruja and ending the independence of the principality of Arbanon and its ruler, Demetrius. After the death of Demetrius, the last ruler of the Progon family, in the same year, Arbanon was subsequently controlled by the Despotate of Epirus, the Bulgarian Empire and, from 1235, by the Empire of Nicaea.
During the conflicts between Michael II Komnenos Doukas of Epirus and Emperor John III Doukas Vatatzes, Golem (ruler of Arbanon at the time) and Theodore Petraliphas, who were initially Michael's allies, defected to John III in 1252. He is last mentioned in the sources among other local leaders, in a meeting with George Akropolites in Durrës in 1256. Arbanon was a beneficiary of the Via Egnatia trade road, which brought wealth and benefits from the more developed Byzantine civilization.
High Middle Ages
After the fall of the Principality of Arber in territories captured by the Despotate of Epirus, the Kingdom of Albania was established by Charles of Anjou. He took the title of King of Albania in February 1272. The kingdom extended from the region of Durrës (then known as Dyrrhachium) south along the coast to Butrint. After the failure of the Eighth Crusade, Charles of Anjou returned his attention to Albania. He began contacting local Albanian leaders through local catholic clergy. Two local Catholic priests, namely John from Durrës and Nicola from Arbanon, acted as negotiators between Charles of Anjou and the local noblemen. During 1271 they made several trips between Albania and Italy eventually succeeding in their mission.
On 21 February 1272, a delegation of Albanian noblemen and citizens from Durrës made their way to Charles' court. Charles signed a treaty with them and was proclaimed King of Albania "by common consent of the bishops, counts, barons, soldiers and citizens", promising to protect them and to honor the privileges they had from the Byzantine Empire. The treaty declared the union of the Kingdom of Albania (Latin: Regnum Albanie) with the Kingdom of Sicily under King Charles of Anjou (Carolus I, dei gratia rex Siciliae et Albaniae). He appointed Gazzo Chinardo as his Vicar-General and hoped to take up his expedition against Constantinople again. Throughout 1272 and 1273 he sent huge provisions to the towns of Durrës and Vlorë. This alarmed the Byzantine Emperor, Michael VIII Palaiologos, who began sending letters to local Albanian nobles, trying to convince them to stop their support for Charles of Anjou and to switch sides. However, the Albanian nobles placed their trust in Charles, who praised them for their loyalty. Throughout its existence the Kingdom saw armed conflict with the Byzantine Empire. The kingdom was reduced to a small area in Durrës, and even before the city was captured, it was landlocked by Karl Thopia's principality. Declaring himself an Angevin descendant, Karl Thopia captured Durrës in 1368 and created the Princedom of Albania. During its existence Catholicism saw rapid spread among the population, which affected the society as well as the architecture of the Kingdom. A Western type of feudalism was introduced and it replaced the Byzantine Pronoia.
Principalities and League of Lezhë
In 1355 the Serbian Empire was dissolved and several Albanian principalities were formed including the Balsha, Kastrioti, Thopia and Shpata as the major ones. In the late 14th and the early 15th century the Ottoman Empire conquered parts of south and central Albania. The Albanians regained control of their territories in 1444 when the League of Lezhë was established, under the rule of George Kastrioti Skanderbeg, the Albanian national hero. The League was a military alliance of feudal lords in Albania forged in Lezhë on 2 March 1444, initiated and organised under Venetian patronage with Skanderbeg as leader of the regional Albanian and Serbian chieftains united against the Ottoman Empire. The main members of the league were the Arianiti, Balšić, Dukagjini, Muzaka, Spani, Thopia and Crnojevići. For 25 years, from 1443 to 1468, Skanderbeg's 10,000-man army marched through Ottoman territory winning against consistently larger and better supplied Ottoman forces. Threatened by Ottoman advances in their homeland, Hungary, and later Naples and Venice – their former enemies – provided the financial backbone and support for Skanderbeg's army. By 1450 it had certainly ceased to function as originally intended, and only the core of the alliance under Skanderbeg and Araniti Comino continued to fight on. After Skanderbeg's death in 1468, the sultan "easily subdued Albania," but Skanderbeg's death did not end the struggle for independence, and fighting continued until the Ottoman siege of Shkodra in 1478–79, a siege ending when the Republic of Venice ceded Shkodra to the Ottomans in the peace treaty of 1479.
Early Ottoman period
Ottoman supremacy in the west Balkan region began in 1385 with their success in the Battle of Savra. Following that battle, the Ottoman Empire in 1415 established the Sanjak of Albania covering the conquered parts of Albania, which included territory stretching from the Mat River in the north to Chameria in the south. In 1419, Gjirokastra became the administrative centre of the Sanjak of Albania.
The northern Albanian nobility, although tributary to the Ottoman Empire, still had autonomy to rule over their lands, but the southern part was put under the direct rule of the Ottoman Empire. Prompted by the replacement of large parts of the local nobility with Ottoman landowners, by centralized governance, and by the Ottoman taxation system, the population and the nobles, led principally by Gjergj Arianiti, revolted against the Ottomans.
During the early phases of the revolt, many land (timar) holders were killed or expelled. As the revolt spread, the nobles, whose holdings had been annexed by the Ottomans, returned to join the revolt and attempted to form alliances with the Holy Roman Empire. While the leaders of the revolt were successful in defeating successive Ottoman campaigns, they failed to capture many of the important towns in the Sanjak of Albania. Major combatants included members of the Dukagjini, Zenebishi, Thopia, Kastrioti and Arianiti families. In the initial phase, the rebels were successful in capturing some major towns such as Dagnum. Protracted sieges such as that of Gjirokastër, the capital of the Sanjak, gave the Ottoman army time to assemble large forces from other parts of the empire and to subdue the main revolt by the end of 1436. Because the rebel leaders acted autonomously without a central leadership, their lack of coordination of the revolt contributed greatly to their final defeat. Ottoman forces conducted a number of massacres in the aftermath of the revolt.
Many Albanians had been recruited into the Janissary corps, including the feudal heir George Kastrioti who was renamed Skanderbeg (Iskandar Bey) by his Turkish officers at Edirne. After the Ottoman defeat in the Battle of Niš at the hands of the Hungarians, Skanderbeg deserted in November 1443 and began a rebellion against the Ottoman Empire.
After his desertion, Skanderbeg re-converted to Christianity and declared war against the Ottoman Empire, which he led from 1443 to 1468. Skanderbeg summoned the Albanian princes to the Venetian-controlled town of Lezhë where they formed the League of Lezhë. Gibbon reports that the "Albanians, a martial race, were unanimous to live and die with their hereditary prince", and that "in the assembly of the states of Epirus, Skanderbeg was elected general of the Turkish war and each of the allies engaged to furnish his respective proportion of men and money". Under a red flag bearing Skanderbeg's heraldic emblem, an Albanian force held off Ottoman campaigns for twenty-five years and overcame a number of the major sieges: Siege of Krujë (1450), Second Siege of Krujë (1466–67), Third Siege of Krujë (1467) against forces led by the Ottoman sultans Murad II and Mehmed II. For 25 years Skanderbeg's army of around 10,000 men marched through Ottoman territory winning against consistently larger and better supplied Ottoman forces.
Throughout his rebellion, Skanderbeg defeated the Ottomans in a number of battles, including Torvioll, Oranik, Otonetë, Modric, Ohrid and Mokra; with his most brilliant being in Albulena. However, Skanderbeg did not receive any of the help which had been promised to him by the popes or the Italian states, Venice, Naples and Milan. He died in 1468, leaving no clear successor. After his death the rebellion continued, but without its former success. The loyalties and alliances created and nurtured by Skanderbeg faltered and fell apart and the Ottomans reconquered the territory of Albania, culminating with the siege of Shkodra in 1479. However, some territories in Northern Albania remained under Venetian control. Shortly after the fall of the castles of northern Albania, many Albanians fled to neighbouring Italy, giving rise to the Arbëreshë communities still living in that country.
Skanderbeg's long struggle to keep Albania free became highly significant to the Albanian people, as it strengthened their solidarity, made them more conscious of their national identity, and served later as a great source of inspiration in their struggle for national unity, freedom and independence.
Late Ottoman period
Upon the Ottomans' return in 1479, a large number of Albanians fled to Italy, Egypt and other parts of the Ottoman Empire and Europe and maintained their Arbëresh identity. Many Albanians won fame and fortune as soldiers, administrators, and merchants in far-flung parts of the Empire. As the centuries passed, however, Ottoman rulers lost the capacity to command the loyalty of local pashas, which threatened stability in the region. The Ottoman rulers of the 19th century struggled to shore up central authority, introducing reforms aimed at harnessing unruly pashas and checking the spread of nationalist ideas. Albania would be a part of the Ottoman Empire until the early 20th century.
The Ottoman period that followed was characterized by a change in the landscape through a gradual modification of the settlements with the introduction of bazaars, military garrisons and mosques in many Albanian regions. Part of the Albanian population gradually converted to Islam, with many joining the Sufi Order of the Bektashi. Converting from Christianity to Islam brought considerable advantages, including access to Ottoman trade networks, bureaucratic positions and the army. As a result, many Albanians came to serve in the elite Janissary and the administrative Devşirme system. Among these were important historical figures, including Iljaz Hoxha, Hamza Kastrioti, Koca Davud Pasha, Zağanos Pasha, Köprülü Mehmed Pasha (head of the Köprülü family of Grand Viziers), the Bushati family, Sulejman Pasha, Edhem Pasha, Nezim Frakulla, Haxhi Shekreti, Hasan Zyko Kamberi, Ali Pasha of Gucia, and Muhammad Ali, ruler of Egypt. Ali Pasha of Tepelena rose to become one of the most powerful Muslim Albanian rulers in western Rumelia. His diplomatic and administrative skills, his interest in modernist ideas and concepts, his popular religiousness, his religious neutrality, his win over the bands terrorizing the area, his ferocity and harshness in imposing law and order, and his looting practices towards persons and communities in order to increase his proceeds caused both the admiration and the criticism of his contemporaries. His court was in Ioannina, but the territory he governed incorporated most of Epirus and the western parts of Thessaly and Greek Macedonia in Northern Greece.
Many Albanians gained prominent positions in the Ottoman government and were highly active during the Ottoman era; leaders such as Ali Pasha of Tepelena might have aided Husein Gradaščević. The Albanians proved generally faithful to Ottoman rule following the end of the resistance led by Skanderbeg, and accepted Islam more easily than their neighbors.
No fewer than 42 Grand Viziers of the Empire were of Albanian descent. The Ottoman period also saw the rising Albanian nobility and Albanians were also an important part of the Ottoman army and Ottoman administration like the case of Köprülü family.
Semi-independent Albanian Pashaliks
A period of semi-independence started during the mid 18th century. As Ottoman power began to decline in the 18th century, the central authority of the empire in Albania gave way to the local authority of autonomy-minded lords. The most successful of those lords were three generations of pashas of the Bushati family, who dominated most of northern Albania from 1757 to 1831, and Ali Pasha Tepelena of Janina (now Ioánnina, Greece), a brigand-turned-despot who ruled over southern Albania and northern Greece from 1788 to 1822.
In the 1870s, the Sublime Porte's reforms aimed at checking the Ottoman Empire's disintegration had failed. The image of the "Turkish yoke" had become fixed in the nationalist mythologies and psyches of the empire's Balkan peoples and their march toward independence quickened. The Albanians, because of the higher degree of Islamic influence, their internal social divisions, and the fear that they would lose their Albanian-speaking territories to the emerging Serbia, Montenegro, Bulgaria, and Greece, were the last of the Balkan peoples to desire division from the Ottoman Empire. With the rise of the Albanian National Awakening, Albanians regained a sense of statehood and engaged in military resistance against the Ottoman Empire as well as instigating a massive literary revival. Albanian émigrés in Bulgaria, Egypt, Italy, Romania and the United States supported the writing and distribution of Albanian textbooks and writings.
League of Prizren
In the second quarter of the 19th century, after the fall of the Albanian pashaliks and the Massacre of the Albanian Beys, an Albanian National Awakening took place and many revolts against the Ottoman Empire were organized. These revolts included the Albanian Revolts of 1833–1839, the Revolt of 1843–44, and the Revolt of 1847. A culmination of the Albanian National Awakening was the League of Prizren. The league was formed at a meeting of 47 Ottoman beys in Prizren on 18 June 1878. An initial position of the league was presented in a document known as Kararname. Through this document Albanian leaders emphasized their intention to preserve and maintain the territorial integrity of the Ottoman Empire in the Balkans by supporting the porte, and "to struggle in arms to defend the wholeness of the territories of Albania". In this early period, the League participated in battles against Montenegro and successfully wrested control over Plav and Gusinje after brutal warfare with Montenegrin troops. In August 1878, the Congress of Berlin ordered a commission to determine the border between the Ottoman Empire and Montenegro. Finally, the Great Powers blockaded Ulcinj by sea and pressured the Ottoman authorities to bring the Albanians under control. Albanian diplomatic and military efforts were successful in retaining control of Epirus; however, some lands were still ceded to Greece by 1881.
The League's founding figure Abdyl Frashëri influenced the League to demand autonomy and wage open war against the Ottomans. Faced with growing international pressure "to pacify" the refractory Albanians, the sultan dispatched a large army under Dervish Turgut Pasha to suppress the League of Prizren and deliver Ulcinj to Montenegro. The League of Prizren's leaders and their families were arrested and deported. Frashëri, who originally received a death sentence, was imprisoned until 1885 and exiled until his death seven years later. A similar league was established in 1899 in Peja by former League member Haxhi Zeka. The league ended its activity in 1900 after an armed conflict with the Ottoman forces. Zeka was assassinated by a Serbian agent Adem Zajmi in 1902.
The initial sparks of the First Balkan War in 1912 were ignited by the Albanian uprising between 1908 and 1910, which had the aim of opposing the Young Turk policies of consolidation of the Ottoman Empire. Following the eventual weakening of the Ottoman Empire in the Balkans, Serbia, Greece, and Bulgaria declared war, seizing the remaining Ottoman territory in Europe. The territory of Albania was occupied by Serbia in the north and Greece in the south, leaving only a patch of land around the southern coastal city of Vlora. The unsuccessful uprisings of 1910 and 1911 and the successful and final Albanian revolt in the Ottoman Empire in 1912, as well as the Serbian and Greek occupation and attempts to incorporate the land into their respective countries, led to a proclamation of independence by Ismail Qemali in Vlorë on 28 November 1912. The same day, Ismail Qemali waved the national flag of Albania from the balcony of the Assembly of Vlorë, in the presence of hundreds of Albanians. This flag was modeled after Skanderbeg's principality flag, which had been used more than 500 years earlier.
Albanian independence was recognized by the Conference of London on 29 July 1913. The Conference of London then delineated the border between Albania and its neighbors, leaving more than half of ethnic Albanians outside Albania. This population was largely divided between Montenegro and Serbia in the north and east (including what is now Kosovo and North Macedonia), and Greece in the south. A substantial number of Albanians thus came under Serbian rule.
At the same time, an uprising in the country's south by local Greeks led to the formation of the Autonomous Republic of Northern Epirus in the southern provinces (1914). The republic proved short-lived as Albania collapsed with the onset of World War I. Greece held the area between 1914 and 1916, and unsuccessfully tried to annex it in March 1916; however in 1917 the Greeks were driven from the area by Italy, which took over most of Albania. The Paris Peace Conference of 1919 awarded the area to Greece. However the area definitively reverted to Albanian control in November 1921, following Greece's defeat in the Greco-Turkish War.
Principality of Albania
In supporting the independence of Albania, the Great Powers were assisted by Aubrey Herbert, a British MP who passionately advocated the Albanian cause in London. As a result, Herbert was offered the crown of Albania, but was dissuaded by the British Prime Minister, H. H. Asquith, from accepting. Instead the offer went to William of Wied, a German prince who accepted and became sovereign of the new Principality of Albania.
The Principality was established on 21 February 1914. The Great Powers selected Prince William of Wied, a nephew of Queen Elisabeth of Romania, to become the sovereign of the newly independent Albania. A formal offer was made by 18 Albanian delegates representing the 18 districts of Albania on 21 February 1914, an offer which he accepted. Outside of Albania William was styled prince, but in Albania he was referred to as Mbret (King) so as not to seem inferior to the King of Montenegro. This is the period when Albania's religious communities gained independence. The ecumenical patriarch of Constantinople recognized the autocephaly of the Albanian Orthodox Church after a meeting of the country's Albanian Orthodox congregations in Berat in August 1922. The most energetic reformers in Albania came from the Orthodox population who wanted to see Albania move quickly away from its Turkish-ruled past, during which Christians made up the underclass. Albania's conservative Sunni Muslim community broke its last ties with Constantinople in 1923, formally declaring that there had been no caliph since Muhammad himself and that Muslim Albanians pledged primary allegiance to their native country. The Muslims also banned polygamy and allowed women to choose whether or not they wanted to wear a veil. Upon Albania's separation from Turkey in 1912, as in all other fields, the customs administration continued its operation under legislation approved specifically for the procedure. After the new laws were issued for the operation of customs, the duty was 11% of the value of imported goods and 1% of the value of exported goods.
The security was to be provided by a Gendarmerie commanded by Dutch officers. William left Albania on 3 September 1914 following a pan-Islamic revolt initiated by Essad Pasha Toptani and later headed by Haji Kamil, the latter the military commander of the "Muslim State of Central Albania" centered in Tirana. William never renounced his claim to the throne.
World War I
World War I interrupted all government activities in Albania, while the country was split in a number of regional governments. Political chaos engulfed Albania after the outbreak of World War I. The Albanian people split along religious and tribal lines after the prince's departure. Muslims demanded a Muslim prince and looked to Turkey as the protector of the privileges they had enjoyed. Other Albanians looked to Italy for support. Still others, including many beys and clan chiefs, recognized no superior authority.
Prince William left Albania on 3 September 1914, as a result of the Peasant Revolt initiated by Essad Pasha and later taken over by Haxhi Qamili. William subsequently joined the German army and served on the Eastern Front, but never renounced his claim to the throne.
In late 1914, Greece occupied the Autonomous Republic of Northern Epirus, including Korçë and Gjirokastër. Italy occupied Vlorë, and Serbia and Montenegro occupied parts of northern Albania until a Central Powers offensive scattered the Serbian army, which was evacuated by the French to Thessaloniki. Austro-Hungarian and Bulgarian forces then occupied about two-thirds of the country (Bulgarian occupation of Albania).
Under the secret Treaty of London signed in April 1915, Triple Entente powers promised Italy that it would gain Vlorë (Valona) and nearby lands and a protectorate over Albania in exchange for entering the war against Austria-Hungary. Serbia and Montenegro were promised much of northern Albania, and Greece was promised much of the country's southern half. The treaty left a tiny Albanian state that would be represented by Italy in its relations with the other major powers.
In September 1918, Entente forces broke through the Central Powers' lines north of Thessaloniki and within days Austro-Hungarian forces began to withdraw from Albania. On 2 October 1918 the city of Durrës was shelled on the orders of Louis Franchet d'Espèrey, during the Battle of Durazzo: according to d'Espèrey, the Port of Durrës, if not destroyed, would have served the evacuation of the Bulgarian and German armies, involved in World War I.
When the war ended on 11 November 1918, Italy's army had occupied most of Albania; Serbia held much of the country's northern mountains; Greece occupied a sliver of land within Albania's 1913 borders; and French forces occupied Korçë and Shkodër as well as other regions with sizable Albanian populations.
Projects of partition in 1919–1920
After World War I, Albania was still under the occupation of Serbian and Italian forces. Rebellions by the populations of northern and southern Albania eventually pushed the Serbs and Italians back behind the recognized borders of Albania.
Albania's political confusion continued in the wake of World War I. The country lacked a single recognized government, and Albanians feared, with justification, that Italy, Yugoslavia, and Greece would succeed in extinguishing Albania's independence and carve up the country. Italian forces controlled Albanian political activity in the areas they occupied. The Serbs, who largely dictated Yugoslavia's foreign policy after World War I, strove to take over northern Albania, and the Greeks sought to control southern Albania.
A delegation sent by a postwar Albanian National Assembly that met at Durrës in December 1918 defended Albanian interests at the Paris Peace Conference, but the conference denied Albania official representation. The National Assembly, anxious to keep Albania intact, expressed willingness to accept Italian protection and even an Italian prince as a ruler so long as it would mean Albania did not lose territory. Serbian troops conducted actions in Albanian-populated border areas, while Albanian guerrillas operated in both Serbia and Montenegro.
In January 1920, at the Paris Peace Conference, negotiators from France, Britain, and Greece agreed to allow Albania to fall under Yugoslav, Italian, and Greek spheres of influence as a diplomatic expedient aimed at finding a compromise solution to the territorial conflicts between Italy and Yugoslavia.
Members of a second Albanian National Assembly held at Lushnjë in January 1920 rejected the partition plan and warned that Albanians would take up arms to defend their country's independence and territorial integrity. The Lushnjë National Assembly appointed a four-man regency to rule the country. A bicameral parliament was also created, in which an elected lower chamber, the Chamber of Deputies (with one deputy for every 12,000 people in Albania and one for the Albanian community in the United States), appointed members of its own ranks to an upper chamber, the Senate. In February 1920, the government moved to Tirana, which became Albania's capital.
One month later, in March 1920, U.S. President Woodrow Wilson intervened to block the Paris agreement. The United States underscored its support for Albania's independence by recognizing an official Albanian representative to Washington, and in December the League of Nations recognized Albania's sovereignty by admitting it as a full member. The country's borders, however, remained unsettled following the Vlora War in which all territory (except Saseno island) under Italian control in Albania was relinquished to the Albanian state.
Albania achieved a degree of statehood after the First World War, in part because of the diplomatic intercession of the United States government. The country suffered from a debilitating lack of economic and social development, however, and its first years of independence were fraught with political instability. Unable to survive a predatory environment without a foreign protector, Albania became the object of tensions between Italy and the Kingdom of Serbs, Croats and Slovenes, which both sought to dominate the country.
Interwar Albanian governments appeared and disappeared in rapid succession. Between July and December 1921 alone, the premiership changed hands five times. The Popular Party's head, Xhafer Ypi, formed a government in December 1921 with Fan S. Noli as foreign minister and Ahmed Bey Zogu as internal affairs minister, but Noli resigned soon after Zogu resorted to repression in an attempt to disarm the lowland Albanians despite the fact that bearing arms was a traditional custom.
When the government's enemies attacked Tirana in early 1922, Zogu stayed in the capital and, with the support of the British ambassador, repulsed the assault. He took over the premiership later in the year and turned his back on the Popular Party by announcing his engagement to the daughter of Shefqet Verlaci, the Progressive Party leader.
Zogu's protégés organized themselves into the Government Party. Noli and other Western-oriented leaders formed the Opposition Party of Democrats, which attracted all of Zogu's many personal enemies, ideological opponents, and people left unrewarded by his political machine. Ideologically, the Democrats included a broad sweep of people who advocated everything from conservative Islam to Noli's dreams of rapid modernization.
Opposition to Zogu was formidable. Orthodox peasants in Albania's southern lowlands loathed Zogu because he supported the Muslim landowners' efforts to block land reform; Shkodër's citizens felt shortchanged because their city did not become Albania's capital, and nationalists were dissatisfied because Zogu's government did not press Albania's claims to Kosovo or speak up more energetically for the rights of the ethnic Albanian minorities in present-day Yugoslavia and Greece.
Zogu's party handily won elections for a National Assembly in early 1924. Zogu soon stepped aside, however, handing over the premiership to Verlaci in the wake of a financial scandal and an assassination attempt by a young radical that left Zogu wounded. The opposition withdrew from the assembly after the leader of a nationalist youth organization, Avni Rustemi, was murdered in the street outside the parliament building.
Noli's supporters blamed the Rustemi murder on Zogu's Mati clansmen, who continued to practice blood vengeance. After the walkout, discontent mounted, and in June 1924 a peasant-backed insurgency won control of Tirana. Noli became prime minister, and Zogu fled to Yugoslavia.
Fan Noli, an idealist, rejected demands for new elections on the grounds that Albania needed a "paternal" government. In a manifesto describing his government's program, Noli called for abolishing feudalism, resisting Italian domination, and establishing a Western-style constitutional government. Scaling back the bureaucracy, strengthening local government, assisting peasants, throwing Albania open to foreign investment, and improving the country's bleak transportation, public health, and education facilities filled out the Noli government's overly ambitious agenda. Noli encountered resistance to his program from people who had helped him oust Zogu, and he never attracted the foreign aid necessary to carry out his reform plans. Noli criticized the League of Nations for failing to settle the threat facing Albania on its land borders.
Under Fan Noli, the government set up a special tribunal that passed death sentences, in absentia, on Zogu, Verlaci, and others and confiscated their property. In Yugoslavia Zogu recruited a mercenary army, and Belgrade furnished the Albanian leader with weapons, about 1,000 Yugoslav army regulars, and Russian White émigrés to mount an invasion that the Serbs hoped would bring them disputed areas along the border. After Noli decided to establish diplomatic relations with the Soviet Union, a bitter enemy of the Serbian ruling family, Belgrade began making wild allegations that Albania was about to embrace Bolshevism.
On 13 December 1924, Zogu's Yugoslav-backed army crossed into Albanian territory. By Christmas Eve, Zogu had reclaimed the capital, and Noli and his government had fled to Italy. The Noli government had lasted just over six months; with this coup d'état Ahmet Zogu regained control, changing the political situation and abolishing the principality.
After defeating Fan Noli's government, Ahmet Zogu recalled the parliament in order to settle the fate of the uncrowned principality. The parliament quickly adopted a new constitution, proclaimed Albania a republic, and granted Zogu dictatorial powers that allowed him to appoint and dismiss ministers, veto legislation, and name all major administrative personnel and a third of the Senate. The constitution provided for a parliamentary republic with a powerful president serving as head of state and government.
On 31 January 1925, the National Assembly elected Zogu president for a seven-year term, prior to his later proclamation as King of the Albanians. Opposition parties and civil liberties disappeared; opponents of the regime were murdered; and the press suffered strict censorship. Zogu ruled Albania using four military governors responsible to him alone. He appointed clan chieftains as reserve army officers who were kept on call to protect the regime against domestic or foreign threats.
Zogu, however, quickly turned his back on Belgrade and looked instead to Benito Mussolini's Italy for patronage. Under Zogu, Albania joined the Italian-led coalition against Yugoslavia, comprising the Kingdom of Italy, Hungary, and Bulgaria, in 1924–1927. After British and French political intervention with the Kingdom of Yugoslavia in 1927, the alliance crumbled. Zogu maintained good relations with Benito Mussolini's fascist regime in Italy and supported Italy's foreign policy. He would be the first and only Albanian to hold the title of president until 1991.
Kingdom of Albania
In 1928, Zogu secured Parliament's consent to its own dissolution. Afterwards, Albania was declared a monarchy, and Zogu, who had already served first as Prime Minister and then as President, became King of Albania. International recognition arrived forthwith. The new constitution abolished the Albanian Senate and created a unicameral parliament, but King Zog retained the dictatorial powers he had enjoyed as president. Zog remained a conservative but initiated reforms. For example, in an attempt at social modernisation the custom of adding one's region to one's name was dropped. He also made donations of land to international organisations for the building of schools and hospitals.
Soon after his coronation, Zog broke off his engagement to Shefqet Verlaci's daughter, and Verlaci withdrew his support for the king and began plotting against him. Zog had accumulated a great number of enemies over the years, and the Albanian tradition of blood vengeance required them to try to kill him. Zog surrounded himself with guards and rarely appeared in public. The king's loyalists disarmed all of Albania's tribes except for his own Mati tribesmen and their allies, the Dibra. Nevertheless, on a visit to Vienna in 1931, Zog and his bodyguards fought a gun battle with would-be assassins Aziz Çami and Ndok Gjeloshi on the Opera House steps.
Zog remained sensitive to steadily mounting disillusion with Italy's domination of Albania. The Albanian army, though always less than 15,000-strong, sapped the country's funds, and the Italians' monopoly on training the armed forces rankled public opinion. As a counterweight, Zog kept British officers in the Gendarmerie despite strong Italian pressure to remove them. In 1931, Zog openly stood up to the Italians, refusing to renew the 1926 First Treaty of Tirana.
In 1932 and 1933, Albania could not make the interest payments on its loans from the Society for the Economic Development of Albania. In response, Rome turned up the pressure, demanding that Tirana name Italians to direct the Gendarmerie; join Italy in a customs union; grant Italy control of the country's sugar, telegraph, and electrical monopolies; teach the Italian language in all Albanian schools; and admit Italian colonists. Zog refused. Instead, he ordered the national budget slashed by 30 percent, dismissed the Italian military advisers, and nationalized Italian-run Roman Catholic schools in the northern part of the country. In 1934, Albania signed trade agreements with Yugoslavia and Greece, and Mussolini suspended all payments to Tirana. An Italian attempt to intimidate the Albanians by sending a fleet of warships to Albania failed because the Albanians only allowed the forces to land unarmed. Mussolini then attempted to buy off the Albanians, presenting the Albanian government with 3 million gold francs as a gift in 1935.
Zog's success in defeating two local rebellions convinced Mussolini that the Italians had to reach a new agreement with the Albanian king. A government of young men led by Mehdi Frasheri, an enlightened Bektashi administrator, won a commitment from Italy to fulfill financial promises that Mussolini had made to Albania and to grant new loans for harbor improvements at Durrës and other projects that kept the Albanian government afloat. Soon Italians began taking positions in Albania's civil service, and Italian settlers were allowed into the country. Mussolini's forces overthrew King Zog when Italy invaded Albania in 1939.
World War II
Starting in 1928, but especially during the Great Depression, the government of King Zog, which brought law and order to the country, began to cede Albania's sovereignty to Italy. Despite some significant resistance, especially at Durrës, Italy invaded Albania on 7 April 1939 and took control of the country, with the Italian Fascist dictator Benito Mussolini proclaiming Italy's figurehead King Victor Emmanuel III of Italy as King of Albania. The nation thus became one of the first to be occupied by the Axis Powers in World War II.
As Hitler began his aggression against other European countries, Mussolini decided to occupy Albania as a means of competing with Hitler's territorial gains. Mussolini and the Italian Fascists saw Albania as a historical part of the Roman Empire, and the occupation was intended to fulfill Mussolini's dream of creating an Italian Empire. During the Italian occupation, Albania's population was subject to a policy of forced Italianization by the kingdom's Italian governors, in which the use of the Albanian language was discouraged in schools while the Italian language was promoted. At the same time, the colonization of Albania by Italians was encouraged.
Mussolini, in October 1940, used his Albanian base to launch an attack on Greece, which led to the defeat of the Italian forces and the Greek occupation of Southern Albania in what was seen by the Greeks as the liberation of Northern Epirus. While preparing for the Invasion of Russia, Hitler decided to attack Greece in December 1940 to prevent a British attack on his southern flank.
Albania had long had considerable strategic importance for Italy. Italian naval strategists eyed the port of Vlorë and the island of Sazan at the entrance to the Bay of Vlorë with considerable interest, as it would give Italy control of the entrance to the Adriatic Sea. In addition, Albania could provide Italy with a beachhead in the Balkans. Before World War I Italy and Austria-Hungary had been instrumental in the creation of an independent Albanian state. At the outbreak of war, Italy had seized the chance to occupy the southern half of Albania, to avoid it being captured by the Austro-Hungarians. That success did not last long, as post-war domestic problems, Albanian resistance, and pressure from United States President Woodrow Wilson, forced Italy to pull out in 1920.
When Mussolini took power in Italy he turned with renewed interest to Albania. Italy began penetration of Albania's economy in 1925, when Albania agreed to allow it to exploit its mineral resources. That was followed by the First Treaty of Tirana in 1926 and the Second Treaty of Tirana in 1927, whereby Italy and Albania entered into a defensive alliance. The Albanian government and economy were subsidised by Italian loans, the Albanian army was trained by Italian military instructors, and Italian colonial settlement was encouraged. Despite strong Italian influence, Zog refused to completely give in to Italian pressure. In 1931 he openly stood up to the Italians, refusing to renew the 1926 Treaty of Tirana. After Albania signed trade agreements with Yugoslavia and Greece in 1934, Mussolini made a failed attempt to intimidate the Albanians by sending a fleet of warships to Albania.
As Nazi Germany annexed Austria and moved against Czechoslovakia, Italy saw itself becoming a second-rate member of the Axis. The imminent birth of an Albanian royal child meanwhile threatened to give Zog a lasting dynasty. After Hitler invaded Czechoslovakia (15 March 1939) without notifying Mussolini in advance, the Italian dictator decided to proceed with his own annexation of Albania. Italy's King Victor Emmanuel III criticized the plan to take Albania as an unnecessary risk. Rome, however, delivered Tirana an ultimatum on 25 March 1939, demanding that it accede to Italy's occupation of Albania. Zog refused to accept money in exchange for countenancing a full Italian takeover and colonization of Albania.
On 7 April Mussolini's troops invaded Albania. The operation was led by General Alfredo Guzzoni. The invasion force was divided into three groups, which were to land successively. The most important was the first group, which was divided in four columns, each assigned to a landing area at a harbor and an inland target on which to advance. Despite stubborn resistance by some patriots, especially at Durrës, the Italians made short work of the Albanians. Durrës was captured on 7 April, Tirana the following day, Shkodër and Gjirokastër on 9 April, and almost the entire country by 10 April.
Unwilling to become an Italian puppet, King Zog, his wife, Queen Geraldine Apponyi, and their infant son Leka fled to Greece and eventually to London. On 12 April, the Albanian parliament voted to depose Zog and unite the nation with Italy "in personal union" by offering the Albanian crown to Victor Emmanuel III.
The parliament elected Albania's largest landowner, Shefqet Bej Verlaci, as Prime Minister. Verlaci additionally served as head of state for five days until Victor Emmanuel III formally accepted the Albanian crown in a ceremony at the Quirinale palace in Rome. Victor Emmanuel III appointed Francesco Jacomoni di San Savino, a former ambassador to Albania, to represent him in Albania as "Lieutenant-General of the King" (effectively a viceroy).
Albania under Italy
While Victor Emmanuel ruled as king, Shefqet Bej Verlaci served as the Prime Minister. Shefqet Verlaci controlled the day-to-day activities of the Italian protectorate. On 3 December 1941, Shefqet Bej Verlaci was replaced as Prime Minister and Head of State by Mustafa Merlika Kruja.
From the start, Albanian foreign affairs, customs, as well as natural resources came under direct control of Italy. The puppet Albanian Fascist Party became the ruling party of the country and the Fascists allowed Italian citizens to settle in Albania and to own land so that they could gradually transform it into Italian soil.
In October 1940, during the Greco-Italian War, Albania served as a staging-area for Italian dictator Benito Mussolini's unsuccessful invasion of Greece. Mussolini planned to invade Greece and other countries in the area, such as Yugoslavia, to give Italy territorial control of most of the Mediterranean coastline, as part of the Fascist objective of creating Mare Nostrum ("Our Sea"), in which Italy would dominate the Mediterranean.
But soon after the Italian invasion, the Greeks counter-attacked, and a sizeable portion of Albania fell into Greek hands (including the cities of Gjirokastër and Korçë). In April 1941, after Greece capitulated to the German forces, the Greek territorial gains in southern Albania returned to Italian command. Large areas of Greece also came under Italian command after the successful German invasion of Greece.
After the fall of Yugoslavia and Greece in April 1941, the Italian Fascists added to the territory of the Kingdom of Albania most of the Albanian-inhabited areas that had previously been given to the Kingdom of Yugoslavia. The Albanian fascists claimed in May 1941 that nearly all the Albanian-populated territories had been united with Albania. Even areas of northern Greece (Chameria) were administered by Albanians. This was, however, largely a consequence of the borders that Italy and Germany agreed on when dividing their spheres of influence. Some small portions of territory with an Albanian majority remained outside the new borders, and contact between the two parts was practically impossible: the Albanian population under Bulgarian rule was heavily oppressed.
Albania under Germany
After the surrender of the Italian Army in September 1943, Albania was occupied by the Germans.
With the collapse of the Mussolini government following the Allied invasion of Italy, Germany occupied Albania in September 1943, dropping paratroopers into Tirana before the Albanian guerrillas could take the capital. The German Army soon drove the guerrillas into the hills and to the south. The Nazi German government subsequently announced it would recognize the independence of a neutral Albania and set about organizing a new government, police and armed forces.
The Germans did not exert heavy-handed control over Albania's administration. Rather, they sought to gain popular support by backing causes popular with Albanians, especially the annexation of Kosovo. Many Balli Kombëtar units cooperated with the Germans against the communists, and several Balli Kombëtar leaders held positions in the German-sponsored regime. Albanian collaborators, especially the Skanderbeg SS Division, also expelled and killed Serbs living in Kosovo. In December 1943, a third resistance organization, an anticommunist, anti-German royalist group known as Legaliteti, took shape in Albania's northern mountains. Led by Abaz Kupi, it largely consisted of Geg guerrillas, supplied mainly with weapons from the Allies, who withdrew their support for the NLM after the communists renounced Albania's claims on Kosovo. The capital Tirana was liberated by the partisans on 17 November 1944 after a 20-day battle. The communist partisans entirely liberated Albania from German occupation on 29 November 1944, pursuing the German army as far as Višegrad, Bosnia (then Yugoslavia) in collaboration with the Yugoslav communist forces.
The Albanian partisans also liberated Kosovo, part of Montenegro, and southern Bosnia and Herzegovina. By November 1944, they had driven out the Germans, making Albania, along with Yugoslavia, one of the only European nations to do so without assistance from the Allies. Enver Hoxha became the leader of the country by virtue of his position as Secretary General of the Albanian Communist Party. After taking power, the Albanian communists launched a sweeping terror campaign, shooting intellectuals and arresting thousands of innocent people. Some died under torture.
Albania was one of the few European countries occupied by the Axis powers that ended World War II with a larger Jewish population than before the war. Some 1,200 Jewish residents and refugees from other Balkan countries were hidden by Albanian families during World War II, according to official records.
Albanian resistance in World War II
The National Liberation War of the Albanian people started with the Italian invasion in Albania on 7 April 1939 and ended on 28 November 1944. During the antifascist national liberation war, the Albanian people fought against Italy and Germany, which occupied the country. In the 1939–1941 period, the antifascist resistance was led by the National Front nationalist groups and later by the Communist Party.
In October 1941, the small Albanian communist groups established in Tirana an Albanian Communist Party of 130 members under the leadership of Hoxha and an eleven-man Central Committee. The Albanian communists had supported the Molotov–Ribbentrop Pact and did not participate in the antifascist struggle until Germany invaded the Soviet Union in 1941. The party at first had little mass appeal, and even its youth organization netted few recruits. In mid-1942, however, party leaders increased their popularity by calling on young people to fight for the liberation of their country, which was occupied by Fascist Italy.
This propaganda attracted many new recruits among young people eager for freedom. In September 1942, the party organized a popular front organization, the National Liberation Movement (NLM), from a number of resistance groups, including several that were strongly anticommunist. During the war, the NLM's communist-dominated partisans, in the form of the National Liberation Army, did not heed warnings from the Italian occupiers that there would be reprisals for guerrilla attacks. Partisan leaders, on the contrary, counted on using the lust for revenge such reprisals would elicit to win recruits.
The communists turned the so-called war of liberation into a civil war, especially after the discovery of the Dalmazzo-Kelcyra protocol, signed by the Balli Kombëtar. With the intention of organizing a partisan resistance, they called a general conference in Pezë on 16 September 1942 where the Albanian National Liberation Front was set up. The Front included nationalist groups, but it was dominated by communist partisans.
In December 1942, more Albanian nationalist groups were organized. Albanians fought against the Italians, while during the Nazi German occupation Balli Kombëtar allied itself with the Germans and clashed with the Albanian communists, who continued to fight the Germans and Balli Kombëtar at the same time.
A nationalist resistance to the Italian occupiers emerged in November 1942. Ali Këlcyra and Midhat Frashëri formed the Western-oriented Balli Kombëtar (National Front). Balli Kombëtar was a movement that recruited supporters from both the large landowners and peasantry. It opposed King Zog's return and called for the creation of a republic and the introduction of some economic and social reforms. The Balli Kombëtar's leaders acted conservatively, however, fearing that the occupiers would carry out reprisals against them or confiscate the landowners' estates.
Communist revolution in Albania (1944)
The communist partisans regrouped and gained control of southern Albania in January 1944. In May they called a congress of members of the National Liberation Front (NLF), as the movement was by then called, at Përmet, which chose an Anti-Fascist Council of National Liberation to act as Albania's administration and legislature. Hoxha became the chairman of the council's executive committee and the National Liberation Army's supreme commander.
The communist partisans defeated the last Balli Kombëtar forces in southern Albania by mid-summer 1944 and encountered only scattered resistance from the Balli Kombëtar and Legality when they entered central and northern Albania by the end of July. The British military mission urged the remnants of the nationalists not to oppose the communists' advance, and the Allies evacuated Kupi to Italy. Before the end of November, the main German troops had withdrawn from Tirana, and the communists took control of the capital by fighting what was left of the German army. A provisional government the communists had formed at Berat in October administered Albania with Enver Hoxha as prime minister.
Consequences of the war
The NLF's strong links with Yugoslavia's communists, who also enjoyed British military and diplomatic support, guaranteed that Belgrade would play a key role in Albania's postwar order. The Allies never recognized an Albanian government in exile or King Zog, nor did they ever raise the question of Albania or its borders at any of the major wartime conferences.
No reliable statistics on Albania's wartime losses exist, but the United Nations Relief and Rehabilitation Administration reported about 30,000 Albanian war dead, 200 destroyed villages, 18,000 destroyed houses, and about 100,000 people left homeless. Albanian official statistics claim somewhat higher losses. Furthermore, thousands of Chams (Tsams, Albanians living in Northern Greece) were driven out of Greece with the justification that they had collaborated with the Nazis.
The communists moved quickly after the Second World War to subdue all potential political enemies in Albania, break the country's landowners and minuscule middle class, and isolate Albania from the Western powers in order to establish the People's Republic of Albania. By 1945, the communists had liquidated, discredited, or driven into exile most of the country's interwar elite. The Internal Affairs Minister, Koçi Xoxe, a pro-Yugoslav erstwhile tinsmith, presided over the trial and execution of thousands of opposition politicians, clan chiefs, and members of former Albanian governments who were condemned as "war criminals."
Thousands of their family members were imprisoned for years in work camps and jails and later exiled for decades to miserable state farms built on reclaimed marshlands. The communists' consolidation of control also produced a shift in political power in Albania from the northern Ghegs to the southern Tosks. Most communist leaders were middle-class Tosks, Vlachs and Orthodox, and the party drew most of its recruits from Tosk-inhabited areas, while the Ghegs, with their centuries-old tradition of opposing authority, distrusted the new Albanian rulers and their alien Marxist doctrines.
In December 1945, Albanians elected a new People's Assembly, but only candidates from the Democratic Front (previously the National Liberation Movement then the National Liberation Front) appeared on the electoral lists, and the communists used propaganda and terror tactics to gag the opposition. Official ballot tallies showed that 92% of the electorate voted and that 93% of the voters chose the Democratic Front ticket. The assembly convened in January 1946, annulled the monarchy, and transformed Albania into a "people's republic."
Enver Hoxha and Mehmet Shehu emerged as the communist leaders in Albania and were recognized by most Western nations. They began to concentrate primarily on securing and maintaining their power base by killing all their political adversaries, and secondarily on preserving Albania's independence and reshaping the country according to the precepts of Stalinism so they could remain in power. Political executions were common, with between 5,000 and 25,000 killed in total under the communist regime. Albania became an ally of the Soviet Union, but this came to an end after 1956 with the advent of de-Stalinization, causing the Soviet-Albanian split. A strong political alliance with China followed, leading to several billion dollars in aid, which was curtailed after 1974, causing the Sino-Albanian split. China cut off aid in 1978 when Albania attacked its policies after the death of Chinese leader Mao Zedong. Large-scale purges of officials occurred during the 1970s.
During the period of socialist construction of Albania, the country saw rapid economic growth. For the first time, Albania was beginning to produce the major part of its own commodities domestically, which in some areas were able to compete in foreign markets. During the period of 1960 to 1970, the average annual rate of increase of Albania's national income was 29 percent higher than the world average and 56 percent higher than the European average. Also during this period, because of the monopolised socialist economy, Albania was the only country in the world that imposed no imposts or taxes on its people whatsoever.
Enver Hoxha, who ruled Albania for four decades, died on 11 April 1985. Soon after Hoxha's death, voices for change emerged in Albanian society, and the government began to seek closer ties with the West in order to improve economic conditions. Eventually the new regime of Ramiz Alia introduced some liberalisation, granting the freedom to travel abroad in 1990. The new government made efforts to improve ties with the outside world. The elections of March 1991 kept the former Communists in power, but a general strike and urban opposition led to the formation of a coalition cabinet that included non-Communists.
In 1967, the authorities conducted a violent campaign to extinguish religious practice in Albania, claiming that religion had divided the Albanian nation and kept it mired in backwardness. Student agitators combed the countryside, forcing Albanians to quit practicing their faith. Despite complaints, even by APL members, all churches, mosques, monasteries, and other religious institutions had been closed or converted into warehouses, gymnasiums, and workshops by year's end. A special decree abrogated the charters by which the country's main religious communities had operated.
Albania and Yugoslavia
Until Yugoslavia's expulsion from the Cominform in 1948, Albania acted like a Yugoslav satellite and the President of Yugoslavia, Josip Broz Tito aimed to use his choke hold on the Albanian party to incorporate the entire country into Yugoslavia. After Germany's withdrawal from Kosovo in late 1944, Yugoslavia's communist partisans took possession of the province and committed retaliatory massacres against Albanians. Before the second World War, the Communist Party of Yugoslavia had supported transferring Kosovo to Albania, but Yugoslavia's postwar communist regime insisted on preserving the country's prewar borders.
In repudiating the 1943 Mukaj agreement under pressure from the Yugoslavs, Albania's communists had consented to restore Kosovo to Yugoslavia after the war. In January 1945, the two governments signed a treaty reincorporating Kosovo into Yugoslavia as an autonomous province. Shortly thereafter, Yugoslavia became the first country to recognize Albania's provisional government.
Relations between Albania and Yugoslavia declined, however, when the Albanians began complaining that the Yugoslavs were paying too little for Albanian raw materials and exploiting Albania through the joint stock companies. In addition, the Albanians sought investment funds to develop light industries and an oil refinery, while the Yugoslavs wanted the Albanians to concentrate on agriculture and raw-material extraction. The head of Albania's Economic Planning Commission and one of Hoxha's allies, Nako Spiru, became the leading critic of Yugoslavia's efforts to exert economic control over Albania. Tito distrusted Hoxha and the other intellectuals in the Albanian party and, through Xoxe and his loyalists, attempted to unseat them.
In 1947, Yugoslavia's leaders engineered an all-out offensive against anti-Yugoslav Albanian communists, including Hoxha and Spiru. In May, Tirana announced the arrest, trial, and conviction of nine People's Assembly members, all known for opposing Yugoslavia, on charges of antistate activities. A month later, the Communist Party of Yugoslavia's Central Committee accused Hoxha of following "independent" policies and turning the Albanian people against Yugoslavia.
Albania and the Soviet Union
Albania became dependent on Soviet aid and know-how after the break with Yugoslavia in 1948. In February 1949, Albania gained membership in the communist bloc's organization for coordinating economic planning, the Council for Mutual Economic Assistance. Tirana soon entered into trade agreements with Poland, Czechoslovakia, Hungary, Romania, and the Soviet Union. Soviet and central European technical advisers took up residence in Albania, and the Soviet Union also sent Albania military advisers and built a submarine installation on Sazan Island.
After the Soviet-Yugoslav split, Albania and Bulgaria were the only countries the Soviet Union could use to funnel war material to the communists fighting in Greece. What little strategic value Albania offered the Soviet Union, however, gradually shrank as nuclear arms technology developed.
Anxious to pay homage to Stalin, Albania's rulers implemented new elements of the Stalinist economic system. In 1949, Albania adopted the basic elements of the Soviet fiscal system, under which state enterprises paid direct contributions to the treasury from their profits and kept only a share authorized for self-financed investments and other purposes. In 1951, the Albanian government launched its first five-year plan, which emphasized exploiting the country's oil, chromite, copper, nickel, asphalt, and coal resources; expanding electricity production and the power grid; increasing agricultural output; and improving transportation. The government began a program of rapid industrialization after the APL's Second Party Congress and a campaign of forced collectivization of farmland in 1955. At the time, private farms still produced about 87% of Albania's agricultural output, but by 1960 the same percentage came from collective or state farms.
Stalin died in March 1953, and apparently fearing that the Soviet ruler's demise might encourage rivals within the Albanian party's ranks, neither Hoxha nor Shehu risked traveling to Moscow to attend his funeral. The Soviet Union's subsequent movement toward rapprochement with the hated Yugoslavs rankled the two Albanian leaders. Tirana soon came under pressure from Moscow to copy, at least formally, the new Soviet model for a collective leadership. In July 1953, Hoxha handed over the foreign affairs and defense portfolios to loyal followers, but he kept both the top party post and the premiership until 1954, when Shehu became Albania's prime minister. The Soviet Union, responding with an effort to raise the Albanian leaders' morale, elevated diplomatic relations between the two countries to the ambassadorial level.
Despite some initial expressions of enthusiasm, Hoxha and Shehu mistrusted Nikita Khrushchev's programs of "peaceful coexistence" and "different roads to socialism" because they appeared to pose the threat that Yugoslavia might again try to take control of Albania. Hoxha and Shehu were also alarmed at the prospect that Moscow might prefer less dogmatic rulers in Albania. Tirana and Belgrade renewed diplomatic relations in December 1953, but Hoxha refused Khrushchev's repeated appeals to rehabilitate posthumously the pro-Yugoslav Xoxe as a gesture to Tito. The Albanian duo instead tightened their grip on their country's domestic life and let the propaganda war with the Yugoslavs grind on.
Albania and China
The People's Republic of Albania played a role in the Sino-Soviet split far outweighing either its size or its importance in the communist world. In 1958, the nation stood with the People's Republic of China in opposing Moscow on issues of peaceful coexistence, de-Stalinization, and Yugoslavia's separate road to socialism through decentralization of economic life. The Soviet Union, central European countries, and China, all offered Albania large amounts of aid. Soviet leaders also promised to build a large Palace of Culture in Tirana as a symbol of the Soviet people's "love and friendship" for the Albanian people.
Despite these gestures, Tirana was dissatisfied with Moscow's economic policy toward Albania. Hoxha and Shehu apparently decided in May or June 1960 that Albania was assured of Chinese support, and they openly sided with the People's Republic of China when sharp polemics erupted between the People's Republic of China and the Soviet Union. Ramiz Alia, at the time a candidate-member of the Politburo and Hoxha's adviser on ideological questions, played a prominent role in the rhetoric.
Hoxha and Shehu continued their harangue against the Soviet Union and Yugoslavia at the APL's Fourth Party Congress in February 1961. During the congress, the Albanian government announced the broad outlines of the country's Third Five-Year Plan (1961–1965), which allocated 54% of all investment to industry, thereby rejecting Khrushchev's wish to make Albania primarily an agricultural producer. Moscow responded by canceling aid programs and lines of credit for Albania, but the Chinese again came to the rescue.
Albanian-Chinese relations had stagnated by 1970, and when the Asian giant began to reemerge from isolation in the early 1970s, Mao Zedong and the other communist Chinese leaders reassessed their commitment to tiny Albania, starting the Sino-Albanian split. In response, Tirana began broadening its contacts with the outside world. Albania opened trade negotiations with France, Italy, and the recently independent Asian and African states, and in 1971 it normalized relations with Yugoslavia and Greece. Albania's leaders abhorred the People's Republic of China's contacts with the United States in the early 1970s, and its press and radio ignored President Richard Nixon's trip to Beijing in 1972.
As Hoxha's health declined, the first secretary of the ruling party began planning for an orderly succession. In 1976, the People's Parliament adopted its second communist Constitution of the post-war era. The constitution guaranteed the people of Albania freedom of speech, press, organization, association, and assembly, but subordinated these rights to the individual's duties to society as a whole. The constitution enshrined in law the idea of autarky and prohibited the government from seeking financial aid or credits or from forming joint companies with partners from capitalist or communist countries perceived to be "revisionist". The constitution's preamble also boasted that the foundations of religious belief in Albania had been abolished.
In 1980, Hoxha turned to Ramiz Alia to succeed him as Albania's communist patriarch, overlooking his long-standing comrade-in-arms, Mehmet Shehu. Hoxha first tried to convince Shehu to step aside voluntarily, but when this move failed, Hoxha arranged for all the members of the Politburo to rebuke him for allowing his son to become engaged to the daughter of a former bourgeois family. Hoxha purged the members of Shehu's family and his supporters within the police and military. In November 1982, Hoxha announced that Shehu had been a foreign spy working simultaneously for the United States, British, Soviet, and Yugoslav intelligence agencies in planning the assassination of Hoxha himself. "He was buried like a dog", the dictator wrote in the Albanian edition of his book, The Titoites. Hoxha went into semi-retirement in early 1983, and Alia assumed responsibility for Albania's administration. Alia traveled extensively around Albania, standing in for Hoxha at major events and delivering addresses laying down new policies and intoning litanies to the enfeebled leader. Alia had already succeeded to the presidency in 1982, and he became first secretary of the APL two days after Hoxha's death. In due course, he became a dominant figure in the Albanian media, and his slogans appeared painted in crimson letters on signboards across the country.
In 1991, Ramiz Alia was elected President of Albania. Alia tried to follow in Enver Hoxha's footsteps, but the changes had already started, and the collapse of communism throughout Europe led to widespread changes within Albanian society. Mikhail Gorbachev had introduced new policies in the Soviet Union (glasnost and perestroika), and Alia took similar steps, signing the Helsinki Agreement and allowing pluralism under pressure from students and workers. Afterwards, the first multi-party elections since the communists assumed power in Albania took place; the Socialist Party led by Ramiz Alia won the 1991 elections. Nevertheless, it was clear that the change could not be stopped. Pursuant to a 29 April 1991 interim basic law, Albanians later ratified a constitution on 28 November 1998, establishing a democratic system of government based upon the rule of law and guaranteeing the protection of fundamental human rights.
The Communists retained support and governmental control in the first round of elections under the interim law, but fell two months later during a general strike. A committee of "national salvation" took over but also collapsed within half a year. On 22 March 1992, the Communists were defeated by the Democratic Party in the parliamentary elections. The transition from the socialist state to a parliamentary system brought many challenges. The Democratic Party had to implement the reforms it had promised, but they were either too slow or did not solve the problems, and the people were disappointed when their hopes for fast prosperity went unfulfilled.
The Democratic Party took control after winning the second multi-party elections, deposing the Communist Party. Afterwards, Sali Berisha became the second President. Today, Berisha is the longest-serving President of Albania and the only one elected to a second term. In 1995, Albania became the 35th member of the Council of Europe and requested membership in the North Atlantic Treaty Organisation (NATO). The people of Albania have continued to emigrate to western European countries, especially to Greece and Italy, but also to the United States.
Deliberate programmes of economic and democratic reform were put in place, but Albanian inexperience with capitalism led to the proliferation of pyramid schemes, which were not banned owing to the corruption of the government. Anarchy in late 1996 and early 1997, resulting from the collapse of these pyramid schemes, alarmed the world and prompted international mediation. In early spring 1997, Italy led a multinational military and humanitarian intervention (Operation Alba), authorized by the United Nations Security Council, to help stabilize the country. The government of Berisha collapsed in 1997 in the wake of the further collapse of pyramid schemes and widespread corruption, which caused anarchy and rebellion throughout the country, backed by former communists and former members of the Sigurimi. The government attempted to suppress the rebellion by military force, but the attempt failed due to the long-term corrosion of the Albanian military caused by political and social factors. A few months later, in the 1997 parliamentary elections, the Democratic Party was defeated by the Socialist Party, winning just 25 of 156 seats. Sali Berisha resigned, and the Socialists elected Rexhep Meidani as President. In addition, the leader of the Socialists, Fatos Nano, was elected Prime Minister, a post he held until October 1998, when he resigned as a result of the tense situation created in the country after the assassination of Azem Hajdari, a prominent leader of the Democratic Party. Pandeli Majko was then elected Prime Minister, serving until November 1999, when he was replaced by Ilir Meta. Albania approved its current Constitution through a popular referendum held in November 1998, which was boycotted by the opposition, and the Parliament adopted it on 29 November 1998. The general local elections of October 2000 marked the Democrats' loss of control over local governments and a victory for the Socialists.
Although Albania made strides toward democratic reform and maintaining the rule of law in 2001, serious deficiencies in the electoral code remained to be addressed, as demonstrated in that year's elections. International observers judged the elections to be acceptable, but the Union for Victory Coalition, the second-largest vote recipient, disputed the results and boycotted parliament until 31 January 2002. In 2005, the democratic coalition formed a government under Sali Berisha; his return to power in the elections of 3 July 2005 ended eight years of Socialist Party rule. After Alfred Moisiu, Bamir Topi was elected President of Albania in 2007. Despite the political situation, the economy of Albania grew at an estimated 5% in 2007. The Albanian lek strengthened from 143 lekë to the US dollar in 2000 to 92 lekë in 2007.
On 23 June 2013, the eighth parliamentary elections took place and were won by Edi Rama of the Socialist Party. During his tenure as 33rd Prime Minister, Albania has implemented numerous reforms focused on modernizing the economy and democratizing state institutions, including the judiciary and law enforcement. Additionally, unemployment has been steadily reduced to the fourth-lowest rate in the Balkans.
After the collapse of the Eastern Bloc, Albania started to develop closer ties with Western Europe. At the 2008 Bucharest summit, the North Atlantic Treaty Organization (NATO) invited Albania to join the alliance, and in April 2009 Albania became a full member of NATO. Albania had been among the first southeastern European countries to join the Partnership for Peace programme. Albania applied to join the European Union, becoming an official candidate for accession in June 2014.
In 2017, both presidential and parliamentary elections took place. The presidential election was held in four rounds on 19, 20, 27 and 28 April 2017; in the fourth round, the incumbent Speaker of Parliament and former Prime Minister, Ilir Meta, was elected as the eighth President of Albania with 87 votes. The parliamentary elections held on 25 June 2017 resulted in a victory for the Socialist Party led by Edi Rama, which received 48.33% of the vote, ahead of five other contenders. Lulzim Basha, the Democratic Party candidate and runner-up, received only 28.81% of the vote.
- Politics of Albania
- Timeline of Albanian history
- Epitaph of Gllavenica
- Gjergj Arianiti
- Pearson, Owen (2004). Albania in the Twentieth Century, A History: Volume 1: Albania and King Zog. New York: I.B. Tauris. p. 138.
- Zickel, Raymond; Iwaskiw, Walter R., eds. (1994). "Interwar Albania, 1918–41". Albania: A Country Study.
- Zickel, Raymond E.; Iwaskiw, Walter R.; Keefe, Eugene K. (1994). Albania: a country study. US Library of Congress. pp. 27–29. LCCN 93042885.
- "Alabania: Zog, Not Scanderbeg". Time. 17 June 1929. Archived from the original on 8 August 2009.
- Paul Lendvai (1969). Eagles in cobwebs: nationalism and communism in the Balkans. Doubleday. p. 181. ISBN 9780356030104. Retrieved 10 May 2012.
- Owen Pearson (2004). Albania And King Zog: Independence, Republic And Monarchy 1908–1939. I.B.Tauris. p. 304. ISBN 978-1-84511-013-0. Retrieved 10 May 2012.
He forbade the carrying of arms by civilians, enforcing this prohibition among the tribesmen by ordering all tribes to be disarmed except his own, the Moslem Mati, their allies, the Diber, and the catholic Mirdite
- Giovanni Villari, "A Failed Experiment: The Exportation of Fascism to Albania." Modern Italy 12.2 (2007): 157-171 online.
- Glenny, Misha (2012). The Balkans: Nationalism, War, and the Great Powers, 1804-2012. House of Anansi. p. 418. ISBN 9781101610992.
- Creveld, Martin van (July–October 1972). "In the Shadow of Barbarossa: Germany and Albania, January–March 1941". Journal of Contemporary History. 7 (3/4): 22–230. JSTOR 259913.
- Fischer, Bernd J (1999). Albania at War, 1939–1945. Hurst. p. 5. ISBN 9781850655312.
- Albania: A Country Study: Albania's Reemergence after World War I, Library of Congress
- Zickel, Raymond; Iwaskiw, Walter R., eds. (1994). "Italian Penetration". Albania: A Country Study. Library of Congress.
- Fischer, B. J: Albania at War, 1939–1945, page 7. Hurst, 1999
- Zickel, Raymond; Iwaskiw, Walter R., eds. (1994). "Zog's Kingdom". Albania: A Country Study. Library of Congress.
- Zickel, Raymond; Iwaskiw, Walter R., eds. (1994). "Italian Occupation". Albania: A Country Study. Library of Congress.
- Fischer, B. J: Albania at War, 1939–1945, page 36. Hurst, 1999
- Owen Pearson (2006). Albania in the Twentieth Century, A History : Volume II: Albania in Occupation and War, 1939–45. London: I. B. Tauris. p. 167. ISBN 1-84511-104-4.
- "History of Albania | My Albania! The Official website of Albanian! Open source travel guide". myalbania.eu. Retrieved 29 January 2019.
- "Albania's broken men fear prison horrors will be forgotten". Associated Press. 19 June 2016.
- Sarner. Rescue in Albania: One Hundred Percent of Jews in Albania Rescued from the Holocaust, 1997.
- "Muslim Family Who Hid 26 Jews in Albania from the Nazis Honored by ADL" Anti-Defamation League Archived 5 January 2009 at the Wayback Machine
- Escape Through the Balkans: the Autobiography of Irene Grunbaum (University of Nebraska Press, 1996)
- "Shoah Research Center – Albania" (PDF). Retrieved 27 August 2010.
- "Israeli Historians Study How Albanian Jews Escaped Holocaust". Fox News Channel. 20 May 2008.
- Robert Elsie (30 March 2010). Historical Dictionary of Albania. Scarecrow Press. p. 30. ISBN 978-0-8108-6188-6. Retrieved 10 May 2012.
- 15 February 1994 Washington Times
- "WHPSI": The World Handbook of Political and Social Indicators by Charles Lewis Taylor
- 8 July 1997 NY Times
- Pano, Aristotel. "Panorama of the Economic-Social Development of Socialist Albania". Albania Today. Retrieved 11 April 2012.
- "Albania." World Almanac & Book of Facts, 2008, pp467–545, (AN 28820955)
- Popovski, Ivan (10 May 2017). A Short History of South East Europe. Lulu Press, Inc. ISBN 978-1-365-95394-1.
- Albania: From Anarchy to a Balkan Identity ISBN 1-85065-279-1, by Miranda Vickers & James Pettifer, 1999, page 210, "with the split in the world communist movement it moved into a close relationship with China"
- Karen Dawisha; Bruce Parrott (13 June 1997). Politics, Power, and the Struggle for Democracy in South-East Europe. Cambridge University Press. pp. 295–. ISBN 978-0-521-59733-3. Retrieved 10 May 2012.
- "THE CONSTITUTION OF THE PEOPLE'S SOCIALIST REPUBLIC OF ALBANIA". bjoerna.dk.
Approved by the People's Assembly on 28 December 1976
- "THE CONSTITUTION OF THE PEOPLE'S SOCIALIST REPUBLIC OF ALBANIA". bjoerna.dk.
The foundations of religious obscurantism were smashed. The moral figure of the working man, his consciousness, and world outlook, are moulded on the basis of the proletarian ideology, which has become the dominant ideology.
- "Election Watch". Journal of Democracy. Johns Hopkins University Press. 2 (3): 115–117. Summer 1991. doi:10.1353/jod.1991.0035. Archived from the original on 25 April 2006.
- "Election Watch". Journal of Democracy. Johns Hopkins University Press. 3 (3): 154–157. July 1992. doi:10.1353/jod.1992.0042. Archived from the original on 25 April 2006.
- Ayers, Bert (2015). Bridging the Gap. Lulu.com. p. 28. ISBN 978-1-329-64683-4.
- Alì, Maurizio (2003). "L'attività di peacekeeping della Forza Multinazionale di Protezione in Albania" (in Italian). Rome, Italy: Università Roma Tre - Facoltà di Scienze Politiche – via HAL. Cite journal requires
- "Oberation Alba". United Nations Website. Permanent Mission of Slovenia to the UN. Archived from the original on 21 October 2008. Retrieved 4 January 2013.
- "Presidenti Nishani e dekreton: 25 qershori data e zgjedhjeve parlamentare". gsh.al (in Albanian). 21 May 2017.
- "Shqipëri: Dështimi i tretë për zgjedhjen e presidentit". evropaelire.org (in Albanian). 22 April 2017.
- "Ilir Meta, president i ri i Shqipërisë". Telegrafi (in Albanian). 28 April 2017.
- Bushkoff, Leonard. "Albania, history of", Collier's Encyclopedia, vol. 1. NY: P.F. Collier, L.P, 1996.
- Elsie, Robert. Historical Dictionary of Albania (2010) online
- Hall, Richard C. War in the Balkans: An Encyclopedic History from the Fall of the Ottoman Empire to the Breakup of Yugoslavia (2014) excerpt
- Keith Lyle, ed. Oxford Encyclopedic World Atlas, 5th edn. Spain, 2000.
- Rodgers, Mary M. (ed.). Albania...in Pictures. Minneapolis: Lerner Publications Company, 1995.
- 2003 U.S. Department of State Background Note of Albania
- Krasniqi, Afrim. The End of Albania's Siberia. Tirana 1998.
- Krasniqi, Afrim. Civil Society in Albania. Tirana 2004.
- Krasniqi, Afrim. Political Parties in Albania 1920–2006 Tirana 2006.
- Antonello Biagini, Storia dell'Albania contemporanea, Bompiani, 2005
- Patrice Najbor, Histoire de l'Albanie et de sa maison royale (5 volumes), JePublie, Paris, 2008, (ISBN 978-2-9532382-0-4).
- Patrice Najbor, La dynastye des Zogu, Textes & Prétextes, Paris, 2002.
- Monarkia Shqiptare 1928–1939, Qendra e Studimeve Albanologjike & Insitituti Historisë, Boetimet Toena, Tirana, 2011 (ISBN 978-99943-1-721-9)
- Sontag, Raymond James/ The political history of Albania (1921) online
- Stavrianos, L.S. The Balkans Since 1453 (1958), major scholarly history; 970pp online free to borrow
- Tom Winnifrith, ed. Perspectives on Albania. London: Palgrave Macmillan, 1992.
|Wikimedia Commons has media related to History of Albania.|
- History of Albania: Primary Documents
- Library of Congress Country Study of Albania
- Colloque sur la création de l'Etat albanais en 1912, on the cultural website Albania
- Maison royale d'Albanie, site officiel en langue française
- Famille royale d'Albanie, site officiel en langue anglaise
- L'Albanie et le sauvetage des Juifs
- Perseus Digital Library, keyword Albania
- Further reading
- Books about Albania and the Albanian people (scribd.com) Reference of books (and some journal articles) about Albania and the Albanian people; their history, language, origin, culture, literature, etc. Public domain books, fully accessible online. | https://library.kiwix.org/wikipedia_en_top_maxi/A/History_of_Albania | 21 |
19 | Hearing is one of the traditional five senses: the ability to perceive sound by detecting vibrations through an organ such as the ear. The sense of hearing is very important because it has helped humans survive. We know what hearing is, but what is hearing loss? Hearing loss happens when there is a problem with one or more parts of the ear or ears. People who have hearing loss might be able to hear some sounds or nothing at all. People may also use the words deaf, deafness, and hard of hearing when they are talking about hearing loss.
To understand how hearing loss happens, it helps to know how the ear works. The ear is made up of three different sections: the outer ear, the middle ear, and the inner ear. These parts work together, which is why we can hear and process sounds. The outer ear picks up sound waves, and the waves travel through the outer ear canal. When the sound waves hit the eardrum in the middle ear, the eardrum starts to vibrate. A hearing problem can also develop later in life, connected with getting older, which is a natural part of the aging process. There are a few common causes of hearing loss, such as genetics and loud noise. The main one that comes with aging is presbycusis, which is age-related hearing loss. It becomes more common in people as they get older. People with this kind of hearing loss may have a hard time hearing what others are saying or may be unable to stand loud sounds. The decline is slow. Just as hair turns gray at different rates, presbycusis can develop at different rates. It can be caused by sensorineural hearing loss, the type of hearing loss that results from damage to parts of the inner ear, the auditory nerve, or hearing pathways in the brain. Presbycusis may be caused by aging, loud noise, heredity, head injury, infection, illness, certain prescription drugs, and circulation problems such as high blood pressure. The degree of hearing loss varies from person to person. Also, a person can have a different amount of hearing loss in each ear.
Hearing loss is often overlooked because our hearing is an invisible sense that is always expected to be in action. Yet there are people everywhere who suffer from the effects of hearing loss. It is important to study and understand all aspects of the many different types and causes of hearing loss. The loss of this particular sense can be socially debilitating. It can affect the communication skills of the person, not only in receiving information, but also in giving the correct response. This paper focuses primarily on hearing loss in the elderly. One thing that affects older individuals' communication is the difficulty they often experience when recognizing time-compressed speech. Time-compressed speech involves fast and unclear conversational speech. Many older listeners can detect the sound of the speech being spoken, but it is still unclear (Pichora-Fuller, 2000). In order to help with diagnosis and rehabilitation, we need to understand why speech is unclear even when it is audible. The answer to that question would also help in the development of hearing aids and other communication devices. Also, as we come to understand the reasoning behind this question and as we become more knowledgeable about what older adults can and cannot hear, we can better accommodate them in our day-to-day interactions.
There are many approaches to the explanation of the elderly's difficulty with rapid speech. Researchers point to a decline in processing speed, a decline in processing brief acoustic cues (Gordon-Salant & Fitzgibbons, 2001), an age-related decline of temporal processing in general (Gordon-Salant & Fitzgibbons, 1999; Vaughan & Letowski, 1997), the fact that both visual and auditory perception change with age (Helfer, 1998), an interference of mechanical function of the ear, possible sensorineural hearing loss due to damage to receptors over time (Scheuerle, 2000), or a decline in the processing of sounds in midbrain (Ochert, 2000). Each one of these...
| https://www.studymode.com/essays/Aging-And-Hearing-Loss-46159110.html | 21
19 | An earthquake (also known as a quake) is the shaking of the surface of the Earth resulting from a sudden release of energy in the Earth that creates seismic waves. Earthquakes can range in size from those that are so weak that they cannot be felt to those violent enough to propel objects and people into the air, and wreak destruction across entire cities. The seismicity, or seismic activity, of an area is the frequency, type, and size of earthquakes experienced over a period of time. The word tremor is also used for non-earthquake seismic rumbling. Earthquakes occur mostly along tectonic plate boundaries, and especially on the Pacific Ring of Fire.
Global plate tectonic movement
At the Earth's surface, earthquakes manifest themselves by shaking and displacing or disrupting the ground. When the epicenter
of a large earthquake is located offshore, the seabed may be displaced sufficiently to cause a tsunami
. Earthquakes can also trigger landslides
and occasionally, volcanic activity.
In its most general sense, the word earthquake
is used to describe any seismic event—whether natural or caused by humans—that generates seismic waves. Earthquakes are caused mostly by rupture of geological faults
but also by other events such as volcanic activity, landslides, mine blasts, and nuclear tests
. An earthquake's point of initial rupture is called its hypocenter
or focus. The epicenter
is the point at ground level directly above the hypocenter.
Naturally occurring earthquakes
Three types of faults:
Tectonic earthquakes occur anywhere in the earth where there is sufficient stored elastic strain energy to drive fracture propagation along a fault plane
. The sides of a fault move past each other smoothly and aseismically
only if there are no irregularities or asperities
along the fault surface that increase the frictional resistance. Most fault surfaces do have such asperities, which leads to a form of stick-slip behavior
. Once the fault has locked, continued relative motion between the plates leads to increasing stress and therefore, stored strain energy in the volume around the fault surface. This continues until the stress has risen sufficiently to break through the asperity, suddenly allowing sliding over the locked portion of the fault, releasing the stored energy
This energy is released as a combination of radiated elastic strain seismic waves
frictional heating of the fault surface, and cracking of the rock, thus causing an earthquake. This process of gradual build-up of strain and stress punctuated by occasional sudden earthquake failure is referred to as the elastic-rebound theory
. It is estimated that only 10 percent or less of an earthquake's total energy is radiated as seismic energy. Most of the earthquake's energy is used to power the earthquake fracture
growth or is converted into heat generated by friction. Therefore, earthquakes lower the Earth's available elastic potential energy
and raise its temperature, though these changes are negligible compared to the conductive and convective flow of heat out from the Earth's deep interior.
Earthquake fault types
There are three main types of fault, all of which may cause an interplate earthquake
: normal, reverse (thrust), and strike-slip. Normal and reverse faulting are examples of dip-slip, where the displacement along the fault is in the direction of dip
and where movement on them involves a vertical component. Normal faults occur mainly in areas where the crust is being extended
such as a divergent boundary
. Reverse faults occur in areas where the crust is being shortened
such as at a convergent boundary. Strike-slip faults
are steep structures where the two sides of the fault slip horizontally past each other; transform boundaries are a particular type of strike-slip fault. Many earthquakes are caused by movement on faults that have components of both dip-slip and strike-slip; this is known as oblique slip.
Reverse faults, particularly those along convergent plate boundaries
, are associated with the most powerful earthquakes, megathrust earthquakes
, including almost all of those of magnitude 8 or more. Megathrust earthquakes are responsible for about 90% of the total seismic moment released worldwide.
Strike-slip faults, particularly continental transforms
, can produce major earthquakes up to about magnitude 8. Earthquakes associated with normal faults are generally less than magnitude 7. For every unit increase in magnitude, there is a roughly thirtyfold increase in the energy released. For instance, an earthquake of magnitude 6.0 releases approximately 32 times more energy than a 5.0 magnitude earthquake, and a 7.0 magnitude earthquake releases approximately 1,000 times more energy than a 5.0 magnitude earthquake. An 8.6 magnitude earthquake releases the same amount of energy as roughly 10,000 atomic bombs of the kind used in World War II.
This is so because the energy released in an earthquake, and thus its magnitude, is proportional to the area of the fault that ruptures
and the stress drop. Therefore, the longer the length and the wider the width of the faulted area, the larger the resulting magnitude. The topmost, brittle part of the Earth's crust, and the cool slabs of the tectonic plates that are descending down into the hot mantle, are the only parts of our planet that can store elastic energy and release it in fault ruptures. Rocks hotter than about 300 °C (572 °F) flow in response to stress; they do not rupture in earthquakes.
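To make the scaling described above concrete, the following minimal Python sketch evaluates the standard Gutenberg–Richter energy–magnitude relation (log10 E ≈ 1.5M + 4.8, with E in joules) and the definition of seismic moment (rigidity × rupture area × average slip). The rigidity, rupture dimensions, and slip used here are illustrative assumptions, not figures taken from this article.

    import math

    def radiated_energy_joules(magnitude):
        # Gutenberg-Richter energy-magnitude relation: log10(E) = 1.5*M + 4.8 (E in joules)
        return 10 ** (1.5 * magnitude + 4.8)

    # Each unit of magnitude corresponds to about 10**1.5 (roughly 32x) more radiated energy.
    print(radiated_energy_joules(6.0) / radiated_energy_joules(5.0))  # ~31.6
    print(radiated_energy_joules(7.0) / radiated_energy_joules(5.0))  # ~1000

    # Seismic moment M0 = rigidity * rupture area * average slip,
    # and moment magnitude Mw = (2/3) * (log10(M0) - 9.1), with M0 in newton-metres.
    rigidity = 3.0e10           # Pa, a typical crustal value (assumed)
    rupture_area = 50e3 * 20e3  # m^2, a hypothetical 50 km x 20 km rupture
    average_slip = 2.0          # m, assumed
    m0 = rigidity * rupture_area * average_slip
    mw = (2.0 / 3.0) * (math.log10(m0) - 9.1)
    print(round(mw, 1))         # about 7.1 for these assumed dimensions

Doubling the rupture area or the average slip raises the seismic moment proportionally, which is why longer and wider ruptures produce larger magnitudes.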
The maximum observed lengths of ruptures and mapped faults (which may break in a single rupture) are approximately 1,000 km (620 mi). Examples are the earthquakes in Alaska (1957)
, Chile (1960)
, and Sumatra (2004)
, all in subduction zones. The longest earthquake ruptures on strike-slip faults, like the San Andreas Fault
), the North Anatolian Fault
in Turkey (1939
), and the Denali Fault
in Alaska (2002
), are about half to one third as long as the lengths along subducting plate margins, and those along normal faults are even shorter.
Aerial photo of the San Andreas Fault in the Carrizo Plain
, northwest of Los Angeles
The most important parameter controlling the maximum earthquake magnitude on a fault, however, is not the maximum available length, but the available width because the latter varies by a factor of 20. Along converging plate margins, the dip angle of the rupture plane is very shallow, typically about 10 degrees.
Thus, the width of the plane within the top brittle crust of the Earth can become 50–100 km (31–62 mi) (Japan, 2011
; Alaska, 1964
), making the most powerful earthquakes possible.
Strike-slip faults tend to be oriented near vertically, resulting in an approximate width of 10 km (6.2 mi) within the brittle crust.
Thus, earthquakes with magnitudes much larger than 8 are not possible. Maximum magnitudes along many normal faults are even more limited because many of them are located along spreading centers, as in Iceland, where the thickness of the brittle layer is only about six kilometres (3.7 mi).
In addition, there exists a hierarchy of stress level in the three fault types. Thrust faults are generated by the highest, strike-slip by intermediate, and normal faults by the lowest stress levels.
This can easily be understood by considering the direction of the greatest principal stress, the direction of the force that "pushes" the rock mass during the faulting. In the case of normal faults, the rock mass is pushed down in a vertical direction, thus the pushing force (greatest
principal stress) equals the weight of the rock mass itself. In the case of thrusting, the rock mass "escapes" in the direction of the least principal stress, namely upward, lifting the rock mass up, and thus, the overburden equals the least
principal stress. Strike-slip faulting is intermediate between the other two types described above. This difference in stress regime in the three faulting environments can contribute to differences in stress drop during faulting, which contributes to differences in the radiated energy, regardless of fault dimensions.
Earthquakes away from plate boundaries
Comparison of the 1985
earthquakes on Mexico City, Puebla and Michoacán/Guerrero
Where plate boundaries occur within the continental lithosphere
, deformation is spread out over a much larger area than the plate boundary itself. In the case of the San Andreas fault
continental transform, many earthquakes occur away from the plate boundary and are related to strains developed within the broader zone of deformation caused by major irregularities in the fault trace (e.g., the "Big bend" region). The Northridge earthquake
was associated with movement on a blind thrust within such a zone. Another example is the strongly oblique convergent plate boundary between the Arabian
and Eurasian plates
where it runs through the northwestern part of the Zagros Mountains
. The deformation associated with this plate boundary is partitioned into nearly pure thrust sense movements perpendicular to the boundary over a wide zone to the southwest and nearly pure strike-slip motion along the Main Recent Fault close to the actual plate boundary itself. This is demonstrated by earthquake focal mechanisms
All tectonic plates have internal stress fields caused by their interactions with neighboring plates and sedimentary loading or unloading (e.g., deglaciation).
These stresses may be sufficient to cause failure along existing fault planes, giving rise to intraplate earthquakes.
Shallow-focus and deep-focus earthquakes
The majority of tectonic earthquakes originate in the ring of fire at depths not exceeding tens of kilometers. Earthquakes occurring at a depth of less than 70 km (43 mi) are classified as "shallow-focus" earthquakes, while those with a focal-depth between 70 and 300 km (43 and 186 mi) are commonly termed "mid-focus" or "intermediate-depth" earthquakes. In subduction zones
, where older and colder oceanic crust
descends beneath another tectonic plate, deep-focus earthquakes
may occur at much greater depths (ranging from 300 to 700 km (190 to 430 mi)).
These seismically active areas of subduction are known as Wadati–Benioff zones
. Deep-focus earthquakes occur at a depth where the subducted lithosphere
should no longer be brittle, due to the high temperature and pressure. A possible mechanism for the generation of deep-focus earthquakes is faulting caused by olivine
undergoing a phase transition
into a spinel
Earthquakes and volcanic activity
Earthquakes often occur in volcanic regions and are caused there, both by tectonic
faults and the movement of magma
. Such earthquakes can serve as an early warning of volcanic eruptions, as during the 1980 eruption of Mount St. Helens
Earthquake swarms can serve as markers for the location of the flowing magma throughout the volcanoes. These swarms can be recorded by seismometers and tiltmeters (instruments that measure ground slope) and used as sensors to predict imminent or upcoming eruptions.
A tectonic earthquake begins by an initial rupture at a point on the fault surface, a process known as nucleation. The scale of the nucleation zone is uncertain, with some evidence, such as the rupture dimensions of the smallest earthquakes, suggesting that it is smaller than 100 m (330 ft) while other evidence, such as a slow component revealed by low-frequency spectra of some earthquakes, suggest that it is larger. The possibility that the nucleation involves some sort of preparation process is supported by the observation that about 40% of earthquakes are preceded by foreshocks. Once the rupture has initiated, it begins to propagate along the fault surface. The mechanics of this process are poorly understood, partly because it is difficult to recreate the high sliding velocities in a laboratory. Also the effects of strong ground motion make it very difficult to record information close to a nucleation zone.
Rupture propagation is generally modeled using a fracture mechanics
approach, likening the rupture to a propagating mixed mode shear crack. The rupture velocity is a function of the fracture energy in the volume around the crack tip, increasing with decreasing fracture energy. The velocity of rupture propagation is orders of magnitude faster than the displacement velocity across the fault. Earthquake ruptures typically propagate at velocities that are in the range 70–90% of the S-wave velocity, which is independent of earthquake size. A small subset of earthquake ruptures appear to have propagated at speeds greater than the S-wave velocity. These supershear earthquakes
have all been observed during large strike-slip events. The unusually wide zone of coseismic damage caused by the 2001 Kunlun earthquake
has been attributed to the effects of the sonic boom
developed in such earthquakes. Some earthquake ruptures travel at unusually low velocities and are referred to as slow earthquakes
. A particularly dangerous form of slow earthquake is the tsunami earthquake
, observed where the relatively low felt intensities, caused by the slow propagation speed of some great earthquakes, fail to alert the population of the neighboring coast, as in the 1896 Sanriku earthquake
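As a rough illustration of the propagation speeds quoted above (the numbers below are assumptions for the sketch, not values from this article), a rupture travelling at about 80% of the S-wave speed takes on the order of a couple of minutes to break a fault segment a few hundred kilometres long:

    s_wave_speed = 3.5                  # km/s, assumed crustal S-wave speed
    rupture_speed = 0.8 * s_wave_speed  # km/s, typical sub-shear rupture (70-90% of Vs)
    fault_length = 300.0                # km, hypothetical rupture length

    duration = fault_length / rupture_speed
    print(f"rupture propagation lasts about {duration:.0f} seconds")  # ~107 s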
Co-seismic overpressuring and effect of pore pressure
During an earthquake, high temperatures can develop at the fault plane, increasing pore pressure as a consequence of the vaporization of the ground water already contained within the rock.
In the coseismic phase, such an increase can significantly affect slip evolution and speed and, furthermore, in the post-seismic phase it can control the aftershock sequence
because, after the main event, pore pressure increase slowly propagates into the surrounding fracture network.
From the point of view of the Mohr-Coulomb strength theory
, an increase in fluid pressure reduces the normal stress acting on the fault plane that holds it in place, and fluids can exert a lubricating effect. As thermal overpressurisation may provide a positive feedback between slip and strength fall at the fault plane, a common opinion is that it may enhance the faulting process instability. After the main shock, the pressure gradient between the fault plane and the neighbouring rock causes a fluid flow which increases pore pressure in the surrounding fracture networks; such increase may trigger new faulting processes by reactivating adjacent faults, giving rise to aftershocks.
Analogously, artificial pore pressure increase, by fluid injection in Earth’s crust, may induce seismicity
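A minimal sketch of the effective-stress argument above, using the Coulomb failure criterion (shear strength = cohesion + friction coefficient × effective normal stress); every number here is an illustrative assumption:

    def coulomb_strength(normal_stress, pore_pressure, friction=0.6, cohesion=0.0):
        # Effective normal stress is the total normal stress minus the pore-fluid pressure.
        effective_normal = normal_stress - pore_pressure
        return cohesion + friction * effective_normal

    normal_stress = 100.0  # MPa, assumed stress clamping the fault
    shear_stress = 45.0    # MPa, assumed shear load acting along the fault

    for pore_pressure in (20.0, 40.0, 60.0):
        strength = coulomb_strength(normal_stress, pore_pressure)
        state = "slips" if shear_stress >= strength else "holds"
        print(f"pore pressure {pore_pressure:5.1f} MPa -> strength {strength:5.1f} MPa -> fault {state}")

    # Raising pore pressure from 20 MPa to 40 MPa drops the frictional strength from
    # 48 MPa to 36 MPa, below the 45 MPa shear load, so the fault is reactivated.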
Most earthquakes form part of a sequence, related to each other in terms of location and time.
Most earthquake clusters consist of small tremors that cause little to no damage, but there is a theory that earthquakes can recur in a regular pattern.
An aftershock is an earthquake that occurs after a previous earthquake, the mainshock. An aftershock is in the same region of the main shock but always of a smaller magnitude. If an aftershock is larger than the main shock, the aftershock is redesignated as the main shock and the original main shock is redesignated as a foreshock
. Aftershocks are formed as the crust around the displaced fault plane
adjusts to the effects of the main shock.
Earthquake swarms are sequences of earthquakes striking in a specific area within a short period of time. They are different from earthquakes followed by a series of aftershocks
by the fact that no single earthquake in the sequence is obviously the main shock, so none has a notable higher magnitude than another. An example of an earthquake swarm is the 2004 activity at Yellowstone National Park
In August 2012, a swarm of earthquakes shook Southern California
's Imperial Valley
, showing the most recorded activity in the area since the 1970s.
Sometimes a series of earthquakes occur in what has been called an earthquake storm
, where the earthquakes strike a fault in clusters, each triggered by the shaking or stress redistribution
of the previous earthquakes. Similar to aftershocks
but on adjacent segments of fault, these storms occur over the course of years, and with some of the later earthquakes as damaging as the early ones. Such a pattern was observed in the sequence of about a dozen earthquakes that struck the North Anatolian Fault
in Turkey in the 20th century and has been inferred for older anomalous clusters of large earthquakes in the Middle East.
Intensity of earth quaking and magnitude of earthquakes
Quaking or shaking of the earth is a common phenomenon undoubtedly known to humans from earliest times. Prior to the development of strong-motion accelerometers
that can measure peak ground speed and acceleration directly, the intensity of the earth-shaking was estimated on the basis of the observed effects, as categorized on various seismic intensity scales
. Only in the last century has the source of such shaking been identified as ruptures in the Earth's crust, with the intensity of shaking at any locality dependent not only on the local ground conditions but also on the strength or magnitude
of the rupture, and on its distance.
Although the mass media commonly reports earthquake magnitudes as "Richter magnitude" or "Richter scale", standard practice by most seismological authorities is to express an earthquake's strength on the moment magnitude
scale, which is based on the actual energy released by an earthquake.
Frequency of occurrence
It is estimated that around 500,000 earthquakes occur each year, detectable with current instrumentation. About 100,000 of these can be felt.
Minor earthquakes occur nearly constantly around the world in places like California
in the U.S., as well as in El Salvador
, the Philippines
, the Azores
, New Zealand
Larger earthquakes occur less frequently, the relationship being exponential
; for example, roughly ten times as many earthquakes larger than magnitude 4 occur in a particular time period than earthquakes larger than magnitude 5.
In the (low seismicity) United Kingdom
, for example, it has been calculated that the average recurrences are: an earthquake of 3.7–4.6 every year, an earthquake of 4.7–5.5 every 10 years, and an earthquake of 5.6 or larger every 100 years.
This is an example of the Gutenberg–Richter law
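The recurrence figures above follow the Gutenberg–Richter frequency–magnitude relation, log10 N = a − bM; the a and b values in this sketch are generic illustrative choices (b ≈ 1 is typical worldwide), not a fit to the UK data quoted above:

    def expected_events_per_year(magnitude, a=5.0, b=1.0):
        # Gutenberg-Richter law: log10(N) = a - b*M, where N is the annual number of
        # earthquakes with magnitude >= M in a given region.
        return 10 ** (a - b * magnitude)

    for m in (4.0, 5.0, 6.0):
        print(f"M >= {m}: about {expected_events_per_year(m):.1f} per year")
    # With b = 1, each additional unit of magnitude is ten times rarer, matching the
    # "roughly ten times as many above magnitude 4 as above magnitude 5" statement.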
The number of seismic stations has increased from about 350 in 1931 to many thousands today. As a result, many more earthquakes are reported than in the past, but this is because of the vast improvement in instrumentation, rather than an increase in the number of earthquakes. The United States Geological Survey
estimates that, since 1900, there have been an average of 18 major earthquakes (magnitude 7.0–7.9) and one great earthquake (magnitude 8.0 or greater) per year, and that this average has been relatively stable.
In recent years, the number of major earthquakes per year has decreased, though this is probably a statistical fluctuation rather than a systematic trend.
More detailed statistics on the size and frequency of earthquakes is available from the United States Geological Survey
A recent increase in the number of major earthquakes has been noted, which could be explained by a cyclical pattern of periods of intense tectonic activity, interspersed with longer periods of low intensity. However, accurate recordings of earthquakes only began in the early 1900s, so it is too early to categorically state that this is the case.
Most of the world's earthquakes (90%, and 81% of the largest) take place in the 40,000-kilometre-long (25,000 mi), horseshoe-shaped zone called the circum-Pacific seismic belt, known as the Pacific Ring of Fire
, which for the most part bounds the Pacific Plate
Massive earthquakes tend to occur along other plate boundaries too, such as along the Himalayan Mountains
While most earthquakes are caused by movement of the Earth's tectonic plates
, human activity can also produce earthquakes. Activities both above ground and below may change the stresses and strains on the crust, including building reservoirs
, extracting resources such as coal
, and injecting fluids underground for waste disposal or fracking
Most of these earthquakes have small magnitudes. The 5.7 magnitude 2011 Oklahoma earthquake
is thought to have been caused by disposing wastewater from oil production into injection wells
and studies point to the state's oil industry as the cause of other earthquakes in the past century.
A Columbia University
paper suggested that the 8.0 magnitude 2008 Sichuan earthquake
was induced by loading from the Zipingpu Dam
though the link has not been conclusively proved.
Measuring and locating earthquakes
Every tremor produces different types of seismic waves, which travel through rock with different velocities. The propagation velocity of the seismic waves through solid rock ranges from approx. 3 km/s (1.9 mi/s) up to 13 km/s (8.1 mi/s), depending on the density
of the medium. In the Earth's interior, the shock- or P-waves travel much faster than the S-waves (approx. relation 1.7:1). The differences in travel time from the epicenter
to the observatory are a measure of the distance and can be used to image both sources of quakes and structures within the Earth. Also, the depth of the hypocenter
can be computed roughly.
In the upper crust, P-waves travel in the range 2–3 km (1.2–1.9 mi) per second (or lower) in soils and unconsolidated sediments, increasing to 3–6 km (1.9–3.7 mi) per second in solid rock. In the lower crust, they travel at about 6–7 km (3.7–4.3 mi) per second; the velocity increases within the deep mantle to about 13 km (8.1 mi) per second. The velocity of S-waves ranges from 2–3 km (1.2–1.9 mi) per second in light sediments and 4–5 km (2.5–3.1 mi) per second in the Earth's crust up to 7 km (4.3 mi) per second in the deep mantle. As a consequence, the first waves of a distant earthquake arrive at an observatory via the Earth's mantle.
On average, the distance in kilometers to the earthquake is approximately the number of seconds between the P-wave and S-wave arrivals multiplied by 8.
Slight deviations are caused by inhomogeneities of subsurface structure. By such analyses of seismograms the Earth's core was located in 1913 by Beno Gutenberg
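The "seconds times 8" rule of thumb follows directly from the difference between the two travel times; a minimal sketch, with upper-crust velocities assumed purely for illustration:

    def distance_from_sp_lag(sp_lag_seconds, vp=6.0, vs=3.5):
        # For a source at distance d: lag = d/vs - d/vp, so d = lag * vp * vs / (vp - vs).
        return sp_lag_seconds * vp * vs / (vp - vs)

    print(distance_from_sp_lag(10.0))  # ~84 km for a 10 s S-minus-P lag, close to 10 x 8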
S-waves and later arriving surface waves do most of the damage compared to P-waves. P-waves squeeze and expand material in the same direction they are traveling, whereas S-waves shake the ground up and down and back and forth.
Earthquakes are not only categorized by their magnitude but also by the place where they occur. The world is divided into 754 Flinn–Engdahl regions
(F-E regions), which are based on political and geographical boundaries as well as seismic activity. More active zones are divided into smaller F-E regions whereas less active zones belong to larger F-E regions.
Standard reporting of earthquakes includes its magnitude
, date and time of occurrence, geographic coordinates
of its epicenter
, depth of the epicenter, geographical region, distances to population centers, location uncertainty, a number of parameters that are included in USGS earthquake reports (number of stations reporting, number of observations, etc.), and a unique event ID.
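As an illustration of the kind of record such reporting implies, here is a hypothetical event structure; the field names and values below are invented for this sketch and are not taken from any real catalog or API:

    event = {
        "event_id": "example2021abcd",       # hypothetical identifier
        "time_utc": "2021-04-21T12:34:56Z",
        "latitude": 38.30,
        "longitude": 142.37,
        "depth_km": 29.0,
        "magnitude": 6.1,
        "magnitude_type": "Mw",
        "region": "example offshore region",
        "location_uncertainty_km": 5.2,
        "stations_reporting": 113,
    }
    print(f"M{event['magnitude']} ({event['magnitude_type']}) at {event['depth_km']} km depth, id {event['event_id']}")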
Although relatively slow seismic waves have traditionally been used to detect earthquakes, scientists realized in 2016 that gravitational measurements could provide instantaneous detection of earthquakes, and confirmed this by analyzing gravitational records associated with the 2011 Tohoku-Oki earthquake.
Effects of earthquakes
1755 copper engraving depicting Lisbon
in ruins and in flames after the 1755 Lisbon earthquake
, which killed an estimated 60,000 people. A tsunami
overwhelms the ships in the harbor.
The effects of earthquakes include, but are not limited to, the following:
Shaking and ground rupture
Shaking and ground rupture
are the main effects created by earthquakes, principally resulting in more or less severe damage to buildings and other rigid structures. The severity of the local effects depends on the complex combination of the earthquake magnitude
, the distance from the epicenter
, and the local geological and geomorphological conditions, which may amplify or reduce wave propagation
The ground-shaking is measured by ground acceleration
Specific local geological, geomorphological, and geostructural features can induce high levels of shaking on the ground surface even from low-intensity earthquakes. This effect is called site or local amplification. It is principally due to the transfer of the seismic
motion from hard deep soils to soft superficial soils and to effects of seismic energy focalization owing to typical geometrical setting of the deposits.
Ground rupture is a visible breaking and displacement of the Earth's surface along the trace of the fault, which may be of the order of several meters in the case of major earthquakes. Ground rupture is a major risk for large engineering structures such as dams
, bridges, and nuclear power stations
and requires careful mapping of existing faults to identify any that are likely to break the ground surface within the life of the structure.
Soil liquefaction occurs when, because of the shaking, water-saturated granular
material (such as sand) temporarily loses its strength and transforms from a solid
to a liquid
. Soil liquefaction may cause rigid structures, like buildings and bridges, to tilt or sink into the liquefied deposits. For example, in the 1964 Alaska earthquake
, soil liquefaction caused many buildings to sink into the ground, eventually collapsing upon themselves.
An earthquake may cause injury and loss of life, road and bridge damage, general property damage
, and collapse or destabilization (potentially leading to future collapse) of buildings. The aftermath may bring disease
, lack of basic necessities, mental consequences such as panic attacks and depression among survivors,
and higher insurance premiums.
Earthquakes can produce slope instability leading to landslides, a major geological hazard. Landslide danger may persist while emergency personnel are attempting rescue.
Earthquakes can cause fires
by damaging electrical power
or gas lines. In the event of water mains rupturing and a loss of pressure, it may also become difficult to stop the spread of a fire once it has started. For example, more deaths in the 1906 San Francisco earthquake
were caused by fire than by the earthquake itself.
Tsunamis are long-wavelength, long-period sea waves produced by the sudden or abrupt movement of large volumes of water—including when an earthquake occurs at sea
. In the open ocean the distance between wave crests can surpass 100 kilometers (62 mi), and the wave periods can vary from five minutes to one hour. Such tsunamis travel 600–800 kilometers per hour (373–497 miles per hour), depending on water depth. Large waves produced by an earthquake or a submarine landslide can overrun nearby coastal areas in a matter of minutes. Tsunamis can also travel thousands of kilometers across open ocean and wreak destruction on far shores hours after the earthquake that generated them.
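The 600–800 kilometres per hour figure follows from the shallow-water wave speed, which depends only on gravity and water depth; a quick check under assumed depths:

    import math

    def tsunami_speed_kmh(depth_m, g=9.81):
        # Shallow-water (long-wave) speed: v = sqrt(g * depth)
        return math.sqrt(g * depth_m) * 3.6

    print(round(tsunami_speed_kmh(4000)))  # ~713 km/h in a 4,000 m deep ocean
    print(round(tsunami_speed_kmh(50)))    # ~80 km/h as the wave shoals near the coast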
Ordinarily, subduction earthquakes under magnitude 7.5 do not cause tsunamis, although some instances of this have been recorded. Most destructive tsunamis are caused by earthquakes of magnitude 7.5 or more.
Further information: Flood
Floods may be secondary effects of earthquakes if dams are damaged. Earthquakes may also cause landslips that dam rivers; when such landslide dams collapse, they cause floods.
The terrain below the Sarez Lake
is in danger of catastrophic flooding if the landslide dam
formed by the earthquake, known as the Usoi Dam
, were to fail during a future earthquake. Impact projections suggest the flood could affect roughly 5 million people.
Earthquakes (M6.0+) since 1900 through 2017
Earthquakes of magnitude 8.0 and greater from 1900 to 2018. The apparent 3D volumes of the bubbles are linearly proportional to their respective fatalities.
One of the most devastating earthquakes in recorded history was the 1556 Shaanxi earthquake
, which occurred on 23 January 1556 in Shaanxi
province, China. More than 830,000 people died.
Most houses in the area were yaodongs
—dwellings carved out of loess
hillsides—and many victims were killed when these structures collapsed. The 1976 Tangshan earthquake
, which killed between 240,000 and 655,000 people, was the deadliest of the 20th century.
Earthquakes that caused the greatest loss of life, while powerful, were deadly because of their proximity to either heavily populated areas or the ocean, where earthquakes often create tsunamis
that can devastate communities thousands of kilometers away. Regions most at risk for great loss of life include those where earthquakes are relatively rare but powerful, and poor regions with lax, unenforced, or nonexistent seismic building codes.
Earthquake prediction is a branch of the science of seismology
concerned with the specification of the time, location, and magnitude
of future earthquakes within stated limits.
Many methods have been developed for predicting the time and place in which earthquakes will occur. Despite considerable research efforts by seismologists
, scientifically reproducible predictions cannot yet be made to a specific day or month.
While forecasting is usually considered to be a type of prediction, earthquake forecasting is often differentiated from earthquake prediction
. Earthquake forecasting is concerned with the probabilistic assessment of general earthquake hazard, including the frequency and magnitude of damaging earthquakes in a given area over years or decades.
For well-understood faults the probability that a segment may rupture during the next few decades can be estimated.
Earthquake warning systems
have been developed that can provide regional notification of an earthquake in progress, but before the ground surface has begun to move, potentially allowing people within the system's range to seek shelter before the earthquake's impact is felt.
The objective of earthquake engineering
is to foresee the impact of earthquakes on buildings and other structures and to design such structures to minimize the risk of damage. Existing structures can be modified by seismic retrofitting
to improve their resistance to earthquakes. Earthquake insurance
can provide building owners with financial protection against losses resulting from earthquakes. Emergency management
strategies can be employed by a government or organization to mitigate risks and prepare for consequences.
Expert systems may help to assess buildings and plan precautionary operations: the Igor expert system
is part of a mobile laboratory that supports the procedures leading to the seismic assessment of masonry buildings and the planning of retrofitting operations on them. It has been successfully applied to assess buildings in Lisbon
Individuals can also take preparedness steps like securing water heaters
and heavy items that could injure someone, locating shutoffs for utilities, and being educated about what to do when shaking starts. For areas near large bodies of water, earthquake preparedness encompasses the possibility of a tsunami
caused by a large quake.
An image from a 1557 book depicting an earthquake in Italy in the 4th century BCE
From the lifetime of the Greek philosopher Anaxagoras
in the 5th century BCE to the 14th century CE, earthquakes were usually attributed to "air (vapors) in the cavities of the Earth." Thales
of Miletus (625–547 BCE) was the only documented person who believed that earthquakes were caused by tension between the earth and water.
Other theories existed, including the Greek philosopher Anaximenes' (585–526 BCE) beliefs that short incline episodes of dryness and wetness caused seismic activity. The Greek philosopher Democritus (460–371 BCE) blamed water in general for earthquakes. Pliny the Elder
called earthquakes "underground thunderstorms".
In recent studies, geologists claim that global warming
is one of the reasons for increased seismic activity. According to these studies, melting glaciers and rising sea levels disturb the balance of pressure on Earth's tectonic plates, thus causing an increase in the frequency and intensity of earthquakes.
In Norse mythology
, earthquakes were explained as the violent struggling of the god Loki
. When Loki, god
of mischief and strife, murdered Baldr
, god of beauty and light, he was punished by being bound in a cave with a poisonous serpent placed above his head dripping venom. Loki's wife Sigyn
stood by him with a bowl to catch the poison, but whenever she had to empty the bowl the poison dripped on Loki's face, forcing him to jerk his head away and thrash against his bonds, which caused the earth to tremble.
In Greek mythology
, Poseidon was the cause and god of earthquakes. When he was in a bad mood, he struck the ground with a trident
, causing earthquakes and other calamities. He also used earthquakes to punish and inflict fear upon people as revenge.
In Japanese mythology
, Namazu (鯰) is a giant catfish
who causes earthquakes. Namazu lives in the mud beneath the earth, and is guarded by the god Kashima
who restrains the fish with a stone. When Kashima lets his guard fall, Namazu thrashes about, causing violent earthquakes.
In popular culture
The most popular single earthquake in fiction is the hypothetical "Big One" expected of California
's San Andreas Fault
someday, as depicted in the novels Richter 10
(1996), Goodbye California
(2009) and San Andreas
(2015) among other works.
Jacob M. Appel's widely anthologized short story, A Comparative Seismology
, features a con artist who convinces an elderly woman that an apocalyptic earthquake is imminent.
Contemporary depictions of earthquakes in film are variable in the manner in which they reflect human psychological reactions to the actual trauma that can be caused to directly afflicted families and their loved ones.
Disaster mental health response research emphasizes the need to be aware of the different roles of loss of family and key community members, loss of home and familiar surroundings, loss of essential supplies and services to maintain survival.
Particularly for children, the clear availability of caregiving adults who are able to protect, nourish, and clothe them in the aftermath of the earthquake, and to help them make sense of what has befallen them has been shown even more important to their emotional and physical health than the simple giving of provisions.
As was observed after other disasters involving destruction and loss of life and their media depictions, recently observed in the 2010 Haiti earthquake
, it is also important not to pathologize the reactions to loss and displacement or disruption of governmental administration and services, but rather to validate these reactions, to support constructive problem-solving and reflection as to how one might improve the conditions of those affected.
- ^ Ohnaka, M. (2013). The Physics of Rock Failure and Earthquakes. Cambridge University Press. p. 148. ISBN 978-1-107-35533-0.
- ^ Vassiliou, Marius; Kanamori, Hiroo (1982). "The Energy Release in Earthquakes". Bull. Seismol. Soc. Am. 72: 371–387.
- ^ Spence, William; S.A. Sipkin; G.L. Choy (1989). "Measuring the Size of an Earthquake". United States Geological Survey. Archived from the original on 2009-09-01. Retrieved 2006-11-03.
- ^ Stern, Robert J. (2002), "Subduction zones", Reviews of Geophysics, 40 (4): 17, Bibcode:2002RvGeo..40.1012S, doi:10.1029/2001RG000108
- ^ Geoscience Australia
- ^ Wyss, M. (1979). "Estimating expectable maximum magnitude of earthquakes from fault dimensions". Geology. 7 (7): 336–340. Bibcode:1979Geo.....7..336W. doi:10.1130/0091-7613(1979)7<336:EMEMOE>2.0.CO;2.
- ^ Sibson, R.H. (1982). "Fault Zone Models, Heat Flow, and the Depth Distribution of Earthquakes in the Continental Crust of the United States". Bulletin of the Seismological Society of America. 72 (1): 151–163.
- ^ Sibson, R.H. (2002) "Geology of the crustal earthquake source" International handbook of earthquake and engineering seismology, Volume 1, Part 1, p. 455, eds. W H K Lee, H Kanamori, P C Jennings, and C. Kisslinger, Academic Press, ISBN 978-0-12-440652-0
- ^ "Global Centroid Moment Tensor Catalog". Globalcmt.org. Retrieved 2011-07-24.
- ^ "Instrumental California Earthquake Catalog". WGCEP. Archived from the original on 2011-07-25. Retrieved 2011-07-24.
- ^ Hjaltadóttir S., 2010, "Use of relatively located microearthquakes to map fault patterns and estimate the thickness of the brittle crust in Southwest Iceland"
- ^ "Reports and publications | Seismicity | Icelandic Meteorological office". En.vedur.is. Retrieved 2011-07-24.
- ^ Schorlemmer, D.; Wiemer, S.; Wyss, M. (2005). "Variations in earthquake-size distribution across different stress regimes". Nature. 437 (7058): 539–542. Bibcode:2005Natur.437..539S. doi:10.1038/nature04094. PMID 16177788. S2CID 4327471.
- ^ Talebian, M; Jackson, J (2004). "A reappraisal of earthquake focal mechanisms and active shortening in the Zagros mountains of Iran". Geophysical Journal International. 156 (3): 506–526. Bibcode:2004GeoJI.156..506T. doi:10.1111/j.1365-246X.2004.02092.x.
- ^ Nettles, M.; Ekström, G. (May 2010). "Glacial Earthquakes in Greenland and Antarctica". Annual Review of Earth and Planetary Sciences. 38 (1): 467–491. Bibcode:2010AREPS..38..467N. doi:10.1146/annurev-earth-040809-152414.
- ^ Noson, Qamar, and Thorsen (1988). Washington State Earthquake Hazards: Washington State Department of Natural Resources. Washington Division of Geology and Earth Resources Information Circular 85.
- ^ "M7.5 Northern Peru Earthquake of 26 September 2005" (PDF). National Earthquake Information Center. 17 October 2005. Retrieved 2008-08-01.
- ^ Greene II, H.W.; Burnley, P.C. (October 26, 1989). "A new self-organizing mechanism for deep-focus earthquakes". Nature. 341 (6244): 733–737. Bibcode:1989Natur.341..733G. doi:10.1038/341733a0. S2CID 4287597.
- ^ Foxworthy and Hill (1982). Volcanic Eruptions of 1980 at Mount St. Helens, The First 100 Days: USGS Professional Paper 1249.
- ^ Watson, John; Watson, Kathie (January 7, 1998). "Volcanoes and Earthquakes". United States Geological Survey. Retrieved May 9, 2009.
- ^ a b National Research Council (U.S.). Committee on the Science of Earthquakes (2003). "5. Earthquake Physics and Fault-System Science". Living on an Active Earth: Perspectives on Earthquake Science. Washington, D.C.: National Academies Press. p. 418. ISBN 978-0-309-06562-7. Retrieved 8 July 2010.
- ^ Sibson, R.H. (1973). "Interactions between Temperature and Pore-Fluid Pressure during Earthquake Faulting and a Mechanism for Partial or Total Stress Relief". Nat. Phys. Sci. 243 (126): 66–68. doi:10.1038/physci243066a0.
- ^ Rudnicki, J.W.; Rice, J.R. (2006). "Effective normal stress alteration due to pore pressure changes induced by dynamic slip propagation on a plane between dissimilar materials" (PDF). J. Geophys. Res. 111, B10308. doi:10.1029/2006JB004396.
- ^ a b c Guerriero, V; Mazzoli, S. (2021). "Theory of Effective Stress in Soil and Rock and Implications for Fracturing Processes: A Review". Geosciences. 11 (3): 119. doi:10.3390/geosciences11030119.
- ^ a b Nur, A; Booker, J.R. (1972). "Aftershocks Caused by Pore Fluid Flow?". Science. 175 (4024): 885–887. doi:10.1126/science.175.4024.885. PMID 17781062. S2CID 19354081.
- ^ a b "What are Aftershocks, Foreshocks, and Earthquake Clusters?". Archived from the original on 2009-05-11.
- ^ "Repeating Earthquakes". United States Geological Survey. January 29, 2009. Retrieved May 11, 2009.
- ^ "Earthquake Swarms at Yellowstone". United States Geological Survey. Retrieved 2008-09-15.
- ^ Duke, Alan. "Quake 'swarm' shakes Southern California". CNN. Retrieved 27 August 2012.
- ^ Amos Nur; Cline, Eric H. (2000). "Poseidon's Horses: Plate Tectonics and Earthquake Storms in the Late Bronze Age Aegean and Eastern Mediterranean" (PDF). Journal of Archaeological Science. 27 (1): 43–63. doi:10.1006/jasc.1999.0431. ISSN 0305-4403. Archived from the original (PDF) on 2009-03-25.
- ^ "Earthquake Storms". Horizon. 1 April 2003. Retrieved 2007-05-02.
- ^ Bolt 1993.
- ^ Chung & Bernreuter 1980, p. 1.
- ^ The USGS policy for reporting magnitudes to the press was posted at USGS policy Archived 2016-05-04 at the Wayback Machine, but has been removed. A copy can be found at http://dapgeol.tripod.com/usgsearthquakemagnitudepolicy.htm.
- ^ a b "Cool Earthquake Facts". United States Geological Survey. Retrieved 2021-04-21.
- ^ a b Pressler, Margaret Webb (14 April 2010). "More earthquakes than usual? Not really". KidsPost. Washington Post: Washington Post. pp. C10.
- ^ "Earthquake Hazards Program". United States Geological Survey. Retrieved 2006-08-14.
- ^ USGS Earthquake statistics table based on data since 1900 Archived 2010-05-24 at the Wayback Machine
- ^ "Seismicity and earthquake hazard in the UK". Quakes.bgs.ac.uk. Retrieved 2010-08-23.
- ^ "Italy's earthquake history." BBC News. October 31, 2002.
- ^ "Common Myths about Earthquakes". United States Geological Survey. Archived from the original on 2006-09-25. Retrieved 2006-08-14.
- ^ Are Earthquakes Really on the Increase? Archived 2014-06-30 at the Wayback Machine, USGS Science of Changing World. Retrieved 30 May 2014.
- ^ "Earthquake Facts and Statistics: Are earthquakes increasing?". United States Geological Survey. Archived from the original on 2006-08-12. Retrieved 2006-08-14.
- ^ The 10 biggest earthquakes in history Archived 2013-09-30 at the Wayback Machine, Australian Geographic, March 14, 2011.
- ^ "Historic Earthquakes and Earthquake Statistics: Where do earthquakes occur?". United States Geological Survey. Archived from the original on 2006-09-25. Retrieved 2006-08-14.
- ^ "Visual Glossary – Ring of Fire". United States Geological Survey. Archived from the original on 2006-08-28. Retrieved 2006-08-14.
- ^ Jackson, James (2006). "Fatal attraction: living with earthquakes, the growth of villages into megacities, and earthquake vulnerability in the modern world". Philosophical Transactions of the Royal Society. 364 (1845): 1911–1925. Bibcode:2006RSPTA.364.1911J. doi:10.1098/rsta.2006.1805. PMID 16844641. S2CID 40712253.
- ^ "Global urban seismic risk." Cooperative Institute for Research in Environmental Science.
- ^ Fougler, Gillian R.; Wilson, Miles; Gluyas, Jon G.; Julian, Bruce R.; Davies, Richard J. (2018). "Global review of human-induced earthquakes". Earth-Science Reviews. 178: 438–514. Bibcode:2018ESRv..178..438F. doi:10.1016/j.earscirev.2017.07.008. Retrieved July 23, 2020.
- ^ Fountain, Henry (March 28, 2013). "Study Links 2011 Quake to Technique at Oil Wells". The New York Times. Retrieved July 23, 2020.
- ^ Hough, Susan E.; Page, Morgan (2015). "A Century of Induced Earthquakes in Oklahoma?". Bulletin of the Seismological Society of America. 105 (6): 2863–2870. Bibcode:2015BuSSA.105.2863H. doi:10.1785/0120150109. Retrieved July 23, 2020.
- ^ Klose, Christian D. (July 2012). "Evidence for anthropogenic surface loading as trigger mechanism of the 2008 Wenchuan earthquake". Environmental Earth Sciences. 66 (5): 1439–1447. arXiv:1007.2155. doi:10.1007/s12665-011-1355-7. S2CID 118367859.
- ^ LaFraniere, Sharon (February 5, 2009). "Possible Link Between Dam and China Quake". The New York Times. Retrieved July 23, 2020.
- ^ "Speed of Sound through the Earth". Hypertextbook.com. Retrieved 2010-08-23.
- ^ "Newsela | The science of earthquakes". newsela.com. Retrieved 2017-02-28.
- ^ Geographic.org. "Magnitude 8.0 - SANTA CRUZ ISLANDS Earthquake Details". Global Earthquake Epicenters with Maps. Retrieved 2013-03-13.
- ^ "Earth's gravity offers earlier earthquake warnings". Retrieved 2016-11-22.
- ^ "Gravity shifts could sound early earthquake alarm". Retrieved 2016-11-23.
- ^ "On Shaky Ground, Association of Bay Area Governments, San Francisco, reports 1995,1998 (updated 2003)". Abag.ca.gov. Archived from the original on 2009-09-21. Retrieved 2010-08-23.
- ^ "Guidelines for evaluating the hazard of surface fault rupture, California Geological Survey"(PDF). California Department of Conservation. 2002. Archived from the original (PDF) on 2009-10-09.
- ^ "Historic Earthquakes – 1964 Anchorage Earthquake". United States Geological Survey. Archived from the original on 2011-06-23. Retrieved 2008-09-15.
- ^ "Earthquake Resources". Nctsn.org. Retrieved 2018-06-05.
- ^ "Natural Hazards – Landslides". United States Geological Survey. Retrieved 2008-09-15.
- ^ "The Great 1906 San Francisco earthquake of 1906". United States Geological Survey. Retrieved 2008-09-15.
- ^ a b Noson, Qamar, and Thorsen (1988). Washington Division of Geology and Earth Resources Information Circular 85 (PDF). Washington State Earthquake Hazards.
- ^ "Notes on Historical Earthquakes". British Geological Survey. Archived from the original on 2011-05-16. Retrieved 2008-09-15.
- ^ "Fresh alert over Tajik flood threat". BBC News. 2003-08-03. Retrieved 2008-09-15.
- ^ USGS: Magnitude 8 and Greater Earthquakes Since 1900 Archived 2016-04-14 at the Wayback Machine
- ^ "Earthquakes with 50,000 or More DeathsArchived November 1, 2009, at the Wayback Machine". U.S. Geological Survey
- ^ Spignesi, Stephen J. (2005). Catastrophe!: The 100 Greatest Disasters of All Time. ISBN 0-8065-2558-4
- ^ Kanamori Hiroo. "The Energy Release in Great Earthquakes" (PDF). Journal of Geophysical Research. Archived from the original (PDF) on 2010-07-23. Retrieved 2010-10-10.
- ^ USGS. "How Much Bigger?". United States Geological Survey. Retrieved 2010-10-10.
- ^ Geller et al. 1997, p. 1616, following Allen (1976, p. 2070), who in turn followed Wood & Gutenberg (1935)
- ^ Earthquake Prediction. Ruth Ludwin, U.S. Geological Survey.
- ^ Kanamori 2003, p. 1205. See also International Commission on Earthquake Forecasting for Civil Protection 2011, p. 327.
- ^ Working Group on California Earthquake Probabilities in the San Francisco Bay Region, 2003 to 2032, 2003, "Archived copy". Archived from the original on 2017-02-18. Retrieved 2017-08-28.
- ^ Pailoplee, Santi (2017-03-13). "Probabilities of Earthquake Occurrences along the Sumatra-Andaman Subduction Zone". Open Geosciences. 9 (1): 4. Bibcode:2017OGeo....9....4P. doi:10.1515/geo-2017-0004. ISSN 2391-5447. S2CID 132545870.
- ^ Salvaneschi, P.; Cadei, M.; Lazzari, M. (1996). "Applying AI to Structural Safety Monitoring and Evaluation". IEEE Expert. 11 (4): 24–34. doi:10.1109/64.511774.
- ^ a b c d "Earthquakes". Encyclopedia of World Environmental History. 1: A–G. Routledge. 2003. pp. 358–364.
- ^ "Fire and Ice: Melting Glaciers Trigger Earthquakes, Tsunamis and Volcanos". about News. Retrieved October 27, 2015.
- ^ Sturluson, Snorri (1220). Prose Edda. ISBN 978-1-156-78621-5.
- ^ George E. Dimock (1990). The Unity of the Odyssey. Univ of Massachusetts Press. pp. 179–. ISBN 978-0-87023-721-8.
- ^ "Namazu". World History Encyclopedia. Retrieved 2017-07-23.
- ^ a b c d Van Riper, A. Bowdoin (2002). Science in popular culture: a reference guide. Westport: Greenwood Press. p. 60. ISBN 978-0-313-31822-1.
- ^ JM Appel. A Comparative Seismology. Weber Studies (first publication), Volume 18, Number 2.
- ^ Goenjian, Najarian; Pynoos, Steinberg; Manoukian, Tavosian; Fairbanks, AM; Manoukian, G; Tavosian, A; Fairbanks, LA (1994). "Posttraumatic stress disorder in elderly and younger adults after the 1988 earthquake in Armenia". Am J Psychiatry. 151 (6): 895–901. doi:10.1176/ajp.151.6.895. PMID 8185000.
- ^ Wang, Gao; Shinfuku, Zhang; Zhao, Shen; Zhang, H; Zhao, C; Shen, Y (2000). "Longitudinal Study of Earthquake-Related PTSD in a Randomly Selected Community Sample in North China". Am J Psychiatry. 157 (8): 1260–1266. doi:10.1176/appi.ajp.157.8.1260. PMID 10910788.
- ^ Goenjian, Steinberg; Najarian, Fairbanks; Tashjian, Pynoos (2000). "Prospective Study of Posttraumatic Stress, Anxiety, and Depressive Reactions After Earthquake and Political Violence" (PDF). Am J Psychiatry. 157 (6): 911–916. doi:10.1176/appi.ajp.157.6.911. PMID 10831470. Archived from the original(PDF) on 2017-08-10.
- ^ Coates, SW; Schechter, D (2004). "Preschoolers' traumatic stress post-9/11: relational and developmental perspectives. Disaster Psychiatry Issue". Psychiatric Clinics of North America. 27 (3): 473–489. doi:10.1016/j.psc.2004.03.006. PMID 15325488.
- ^ Schechter, DS; Coates, SW; First, E (2002). "Observations of acute reactions of young children and their families to the World Trade Center attacks". Journal of ZERO-TO-THREE: National Center for Infants, Toddlers, and Families. 22 (3): 9–13.
- Allen, Clarence R. (December 1976), "Responsibilities in earthquake prediction", Bulletin of the Seismological Society of America, 66 (6): 2069–2074.
- Bolt, Bruce A. (1993), Earthquakes and geological discovery, Scientific American Library, ISBN 978-0-7167-5040-6.
- Chung, D.H.; Bernreuter, D.L. (1980), Regional Relationships Among Earthquake Magnitude Scales., NUREG/CR-1457.
- Deborah R. Coen. The Earthquake Observers: Disaster Science From Lisbon to Richter (University of Chicago Press; 2012) 348 pages; explores both scientific and popular coverage
- Geller, Robert J.; Jackson, David D.; Kagan, Yan Y.; Mulargia, Francesco (14 March 1997), "Earthquakes Cannot Be Predicted" (PDF), Science, 275 (5306): 1616, doi:10.1126/science.275.5306.1616, S2CID 123516228.
- Donald Hyndman; David Hyndman (2009). "Chapter 3: Earthquakes and their causes". Natural Hazards and Disasters (2nd ed.). Brooks/Cole: Cengage Learning. ISBN 978-0-495-31667-1.
- International Commission on Earthquake Forecasting for Civil Protection (30 May 2011), "Operational Earthquake Forecasting: State of Knowledge and Guidelines for Utilization", Annals of Geophysics, 54 (4): 315–391, doi:10.4401/ag-5350.
- Kanamori, Hiroo (2003), "Earthquake Prediction: An Overview", International Handbook of Earthquake and Engineering Seismology, International Geophysics, 616: 1205–1216, doi:10.1016/s0074-6142(03)80186-9, ISBN 978-0-12-440658-2.
- Wood, H.O.; Gutenberg, B. (6 September 1935), "Earthquake prediction", Science, 82 (2123): 219–320, Bibcode:1935Sci....82..219W, doi:10.1126/science.82.2123.219, PMID 17818812.
| https://googleweblight.com/sp?hl&geid=NSTNR&u=https://en.m.wikipedia.org/wiki/Earthquake | 21
20 | The Era of Good Feelings
Americans came out of the War of 1812 with a new sense of national pride. Though the war was largely a stalemate, the astonishing American victory at the Battle of New Orleans made the nation feel as though it had won a second war for independence.
The election of James Monroe to the presidency in 1816 marked the beginning of a period of one-party rule, often termed the Era of Good Feelings. The new sense of pride broke down old political barriers and united Americans behind the common goal of improving the nation. In fact, the nation was so unified that Monroe ran uncontested for a second term in 1820.
The American System
Politicians rallied behind Speaker of the House Henry Clay and his American System to improve the national infrastructure. Clay wanted to make internal improvements to national transportation to link the agricultural West with the industrial North. Dozens of new canals and roads were built at the government’s expense, such as the Erie Canal and the Cumberland Road. Clay also pushed the Tariff of 1816 through Congress to protect new manufacturers by raising the tax on goods produced abroad. Finally, Clay hoped to bolster the national economy by establishing a new Bank of the United States.
Landmark Decisions and Doctrines
The Supreme Court, under Chief Justice John Marshall, made several landmark decisions during this period, including McCulloch v. Maryland, Dartmouth College v. Woodward, Cohens v. Virginia, Gibbons v. Ogden, and Fletcher v. Peck. An ardent Federalist, Marshall issued decisions that strengthened the Court and the federal government relative to the states.
Meanwhile, President Monroe and Secretary of State John Quincy Adams issued the Monroe Doctrine in 1823, warning European powers to stay out of affairs in the western hemisphere. Like the early Supreme Court decisions, the Monroe Doctrine has had a large and lasting influence on American policy.
The Missouri Compromise
The Era of Good Feelings was short-lived. First, the Panic of 1819 shook the U.S. economy and caused a brief depression toward the end of Monroe’s first term. Then, the Missouri crisis of 1819–1820 arose when Missouri applied for admission to the Union as a slave state. Northerners in the House rejected Missouri’s application because they wanted to maintain a balance between free and slave states in the Senate. They also passed the Tallmadge Amendment in 1819, stopping any more slaves from entering Missouri and gradually emancipating those already living there. Southerners were outraged by these developments.
Under Henry Clay’s Missouri Compromise, northerners and southerners agreed to admit Missouri as a slave state and Maine as a free state. The compromise also stipulated that slavery could not expand north of the 36° 30' parallel.
The Corrupt Bargain
By the election of 1824, the good feelings had vanished completely. Secretary of State John Quincy Adams ran against War of 1812 hero Andrew Jackson, but neither candidate won enough electoral votes to become president, so the vote went to the House of Representatives. Henry Clay, who hated Jackson, threw his support behind Adams. Adams won and promptly made Clay his new secretary of state, enraging many Americans, who cried out against this “corrupt bargain.” Adams’s reputation was so damaged that his hands were pretty much tied during his entire term in office.
Jackson bounced back and was elected president in 1828. He immediately exploited the spoils system by surrounding himself with political supporters and yes-men. “Old Hickory,” as his troops had called him, was a new kind of president in a new American age. As more and more white males received the right to vote during the 1830s and 1840s, aristocracy and privilege came to be seen as undemocratic and anti-American. Although Jackson himself was fairly well-to-do by the time he took office, he had come from a poor family. Westerners and southerners loved him for his seemingly rugged individuality and strength. Northerners, on the other hand, feared him and his democratic “rabble.”
The Nullification Crisis
Jackson’s two terms were full of political crises, the first of which was the Nullification Crisis over the Tariff of 1828. The tariff, which had been passed near the end of Adams’s presidency, heavily taxed all foreign goods. Northern manufacturers loved this protection, but southerners hated it because they traded heavily with Britain.
Vice President John C. Calhoun secretly wrote a pamphlet called the “South Carolina Exposition and Protest” that urged southern state legislatures to nullify what he called the “Tariff of Abominations.” The South Carolina legislature followed his advice in 1832, making Jackson so angry that he threatened to send troops to the state to collect the taxes forcibly. Civil war was barely averted, thanks to Henry Clay, who proposed the Compromise Tariff of 1833 as a middle road.
A renowned Indian fighter during his military years, Jackson continued to persecute Native Americans during his presidency. In 1830, the Indian Removal Act authorized the army to relocate, by force, any Native Americans living east of the Mississippi River. The act violated an earlier Supreme Court decision that recognized Indian lands, but Jackson didn’t care. More than 100,000 Native Americans were moved to present-day Oklahoma and Nebraska, and thousands died on the difficult journey that became known as the Trail of Tears.
Jackson’s Bank War
Jackson also caused a stir with his Bank War against the Bank of the United States. Because the Bank was a private institution funded by a small group of wealthy speculators, Jackson believed it was undemocratic. He vetoed the bill to renew the Bank’s charter and then effectively killed the Bank by refusing to put any more federal money in it, depositing the money in smaller banks instead. This action sent the national economy into a depression after the Panic of 1837. It also united Henry Clay, Daniel Webster, and other Jackson-haters, which in turn led to the creation of the Whig Party.
Van Buren and Depression
Jackson’s Democratic successor, Martin Van Buren, had an even rockier time. Although Jackson had tried to nip the depression in the bud with the Specie Circular law, it only made matters worse. Without a strong central bank to provide stability, hundreds of smaller “wildcat banks” went out of business.
Blamed for a depression that was not his fault, Van Buren lost the election of 1840 to Whig war hero William Henry Harrison. However, the relatively unknown Vice President John Tyler became president after Harrison died only a month into his term.
John Tyler and the Whigs
Whig leaders Henry Clay and Daniel Webster initially rejoiced when Harrison was elected, for he shared their support of higher tariffs, internal improvements, and a revived Bank of the United States. To their surprise, though, Tyler ruined all their plans.
Tyler, a former Democrat, had become a Whig because he personally disliked Jackson, not because he believed in the Whig platform. Tyler did pass the slightly higher Tariff of 1842 but refused to fund internal improvements or bring back the Bank of the United States. Whigs, outraged by his betrayal, expelled him from the party.
Nonetheless, Tyler had a productive term. The Webster-Ashburton Treaty of 1842 established a permanent eastern border with Canada and cooled tensions with Britain. During his final days as president, Tyler also pushed through congressional measures to annex Texas.
Texas caused controversy from the day it declared independence from Mexico in 1836. Southerners badly wanted Texas to become a new slave state in the Union, for they believed that westward expansion of slavery was vital to their socioeconomic system. Northern Whigs, however, didn’t want slavery to spread any further than it already had, so they blocked the annexation of Texas in 1836.
The Abolitionist Movement
This debate over slavery was the most divisive issue of the era. While southerners spoke loudly in support of slavery, the abolitionist movement grew from a small faction in the 1820s to a powerful social and political movement by the 1840s and 1850s. Though the abolitionists opposed slavery, they by no means advocated racial equality—most of them wanted only gradual emancipation or even resettlement of blacks in Africa. At the time, only radical abolitionists such as William Lloyd Garrison demanded immediate emancipation of all slaves.
Social Reform and Religious Revivalism
At the same time, some progressive northerners—many of them women—started social reform movements against prostitution, alcohol, and mistreatment of prisoners and the mentally disabled. Others tried to expand women’s rights and improve education. Many of these movements were successful in convincing state legislatures to enact new legislation.
Linked to these reform movements was a new wave of religious revivalism that spread across America at the time. Many new religious denominations flourished, including the Methodists, Baptists, Shakers, Mormons, and Millerites, among others. In general, women were especially involved in these new denominations.
The Market Revolution
At the same time that these social transformations were taking place, the U.S. economy was evolving into a market economy. New inventions and infrastructure made it much easier to transport goods around the country.
Eli Whitney’s cotton gin and system of interchangeable parts revitalized the South, West, and North. Cotton production became a more efficient and lucrative business, so southern planters brought in more slaves to work their fields. Cyrus McCormick’s mechanical mower-reaper revolutionized wheat production in the West, enabling farmers to send surplus crops to northern industrial cities.
Immigration and wage labor, meanwhile, completely transformed the North. The potato famine in Ireland and failed democratic revolutions in Germany sent several million Irish and German immigrants to the North in the 1840s and 1850s. Many found work as wage laborers in the new factories.
Manifest Destiny and the Mexican War
Within the United States, people were itching to move further west. Land-hungry westerners and southerners in particular wanted more land on which to farm and plant cotton. Inspired by revivalism, many Americans began to believe that it was their “manifest destiny” from God to push westward across the continent. Politicians were encouraged to acquire more and more land.
Westward expansion was particularly important to James K. Polk, who was elected president in 1844. During his four years in office, Polk acquired all of the Oregon Territory south of the 49th parallel. With his eye on California (then a Mexican territory), he provoked the Mexican War, which the United States won handily. Under the Treaty of Guadalupe Hidalgo, which ended the war in 1848, Mexico gave up Texas, California, and everything in between.
| https://www.sparknotes.com/history/american/precivilwar/summary/ | 21
100 | National income is the total value of the income that individuals and enterprises earn in a country. It means the value of the goods and services a country produces during a financial year, and is therefore the net income of all economic activities of a country over a one-year period. We view national income in terms of money. People use the term interchangeably with National Output, National Dividend, or National Expenditure. It includes all payments made to resources, which come in the form of wages, rent, interest, or profits. A country’s growth in national income broadly determines its progress. We can also refer to national income as the value of the aggregate output of the different sectors of the economy within a time period, usually one year.
The concept of national income occupies a vital place in macroeconomic analysis. A government evaluates the performance of its economy with the help of national income data. Through this data, the government formulates policies for solving the economy’s different problems, which helps to maximize the people’s welfare. In general terms, national income is the total value of goods and services a country produces per annum, normally distributed among the factors of production in the form of rent, wages, interest, and profits.
Definitions of national income
There are two basic definitions of national income: the traditional definition and the modern definition.
Under the traditional approach, we shall look at different definitions by different economists. These economists include Marshall, Pigou, Cairncross, and Kuznets.
Marshall’s definition
According to Marshall’s view, “The labor and capital of a country acting on its natural resources, produce annually a certain net aggregate of commodities, material and immaterial including services of all kinds. This is the net annual income, or revenue, of the country, or the national dividend”.
The word ‘net’ refers to deductions from total gross production on account of depreciation, that is, the wearing out of plant and machinery. Net income from abroad is included. We may view it as the national dividend: a flow of goods and services, not a flow of funds.
Quoting Marshall’s words, “the national dividend is at once the aggregate net product of, and the sole source of payment for, all agents of production within a country”. That means what an economy produces is distributed among the different factors of production.
A.C. Pigou’s definition
A.C. Pigou defined national income as “That part of the objective income of a community, including, of course, income derived from abroad which can be measured in money”. This definition is narrow because it does not include non-marketed goods and services, that is, those goods and services which do not involve money payment. Pigou’s definition therefore omitted some components; he himself noted that when a man marries his housemaid, the national income is reduced, because he is no longer supposed to pay wages to his wife, whom he used to pay before marriage. The major criticism of this definition is that it is narrow.
Prof. Cairncross’s definition
Prof. Cairncross said, “National income is, in fact, simply the output upside down. What we produce flows in a reservoir, from the joint output of the community”.
Simon Kuznets’s definition
Simon Kuznets defined national income as “the net output of commodities and services flowing during a year from a country’s productive system in the hands of the ultimate consumers”.
Looking at the above definitions, Marshall’s definition seems to be the most detailed.
The modern definition
National income is the monetary measure of all the goods and services a nation produces in a year. It is the total of all personal income, that is, the sum of all wages, rent, interest, and profits received in a country in a year. This implies the total income that accrues to all factors of production in the country. It is therefore the value of goods and services produced within a time period, usually a year, with the value of these goods and services estimated in monetary terms.
The National Sample Survey defined national income as “the money measures of net aggregates of all commodities and services accruing to the inhabitants of a country during a specific period”.
Froyen defined national income as “the sum of all factor earnings from current production of goods and services. Factor earnings are incomes of factors of production”. Also, Gardner Ackley stated that “National income is the sum of all (i) wages, salaries, commissions, bonuses and other forms of income (ii) net income from rents and royalties (iii) interest (iv) profit”.
Prof. Lipsey and Chrystal stated that national income in general terms is “the value of the nation’s total output and the value of the income generated by the production of that output”.
The National Income Committee of India pointed out that “A national estimate measures the volume of commodities and services turned out during a given period, counted without duplication”.
National income, therefore, is the monetary value of all production and earnings of the country within a time period, usually one year. National income equals the net national product less indirect business taxes.
Major concepts of national income
Personal income
This is an individual’s earnings in monetary terms. Individuals earn when they partake in the production of goods or render services. Personal income includes wages and salaries, payment for services, rent paid to landlords, interest received by a capital owner, and profit received by an entrepreneur.
Disposable income
Personal income minus personal income tax is equal to disposable income. When you deduct personal income tax from your personal income, what remains is your disposable income. It is the amount left for spending and saving.
Gross domestic product (GDP)
This is the total monetary value of all goods and services a country produces within a year. All residents of a country produce these goods and services, irrespective of whether they are citizens or foreigners. This concept places more emphasis on the geographical aspect of production, that is, the total value of output produced within a country. It does not include the earnings of citizens and their investments outside the country, but it includes the earnings of foreigners and of their investments within the country. Also, there is no allowance for the depreciation of assets. Gross domestic product is the same as Gross Domestic Income, because the income received for producing goods and services has to be equal to the value of the products (goods and services). It is only concerned with the internal production of the country.
Gross national product (GNP) [Gross National Income]
Gross national product was initially referred to as the Gross National Income (GNI), and it is the total monetary value of goods and services that a country’s citizens produce. This includes income from their investments both within the country and abroad; in other words, it includes net income from abroad. The gross national product includes the earnings of citizens and their investments outside the country, but it does not include the earnings of foreigners and their investments in the country. It also makes no allowance for depreciation. Some residents receive income from abroad, and foreigners also receive income payments from the country; the difference between these two types of income is what we call net income from abroad.
GDP vs GNP
Comparing the gross domestic product and the gross national product, they may appear similar, but they are not. GDP measures the income from goods and services produced in a country in a year, whether earned by citizens or non-citizens, whereas GNP measures the income of only the citizens of a given country, irrespective of whether they are currently living in the country or in another country. While GDP includes the earnings and investments of foreigners in the country, GNP does not.
Net domestic product (NDP)
Net domestic product is simply the gross domestic product less depreciation, where depreciation is the decrease in the economic value of an asset, usually through wear and tear of capital. We can describe it as the total monetary value of the goods and services that all residents of a country, whether citizens or foreigners, produce, including the earnings they get from their investments, with depreciation then deducted from the total.
Net national product (NNP)
We get the net national product after deducting depreciation from the gross national product. This depreciation occurs in the form of wear and tear, disuse, etc., of the assets used in production. The net national product is the total monetary value of the goods and services that a country’s citizens produce, including income from their investments at home or abroad, after deducting depreciation.
Per capita income
Per capita income is the income we get after dividing the national income by the country’s population. In essence, it is the income per head of the population.
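To make the relationships among these aggregates concrete, here is a minimal Python sketch. It is only illustrative: all figures and variable names are hypothetical, and national income is taken here, for simplicity, as the net national product.

# All figures below are hypothetical, in billions of a common currency unit.
gdp = 500.0                     # value of all output produced within the country
net_income_from_abroad = 20.0   # citizens' foreign earnings minus foreigners' earnings at home
depreciation = 30.0             # wear and tear of capital during the year
population = 40_000_000         # number of people in the country

gnp = gdp + net_income_from_abroad   # Gross National Product
ndp = gdp - depreciation             # Net Domestic Product
nnp = gnp - depreciation             # Net National Product

# Per capita income: national income (taken here as NNP) divided by population.
per_capita_income = (nnp * 1_000_000_000) / population

print(f"GNP = {gnp}, NDP = {ndp}, NNP = {nnp}")
print(f"Per capita income = {per_capita_income:,.2f} currency units per head")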
Importance of national income
National income estimates are very important to any economy in the world. Many aspects of managing a nation’s economy require measuring the national income, and doing so helps determine the type of policy the government should formulate. We shall discuss the uses of national income estimates below:
National income estimates provide detailed data on the contribution of various sectors to national output, as well as information on the government’s revenue and expenditure. They display the various sources of income, such as wages, rent, profit, and interest. National income gives a clear picture of a country’s economic growth and helps indicate the status of the economy. This data helps economists formulate policies for economic development, and policymakers and economic planners can use it to decide the right line of action. For example, if planners aim to increase employment and output, national income estimates serve as a guide and help determine whether the desired goals were achieved. Economic planners also set growth-rate targets in development plans.
National income has an influence on foreign investments. Foreign investors run to countries that have rich and fast-growing economies. Per capita income usually gives policymakers a rough idea of a demand for foreign investors.
Standard of living
National income helps to compare the standard of living between countries and among people in the same country, as well as within a country at different times. Mostly, the per capita income of a country serves this purpose. It becomes possible to compare the standard of living of different countries by converting their per capita incomes to the same currency, usually the U.S. dollar; countries with higher per capita incomes have a higher standard of living. The per capita income of the same country at different periods is relevant in comparing standards of living over time. Per capita income roughly indicates the general well-being of a country’s citizens, and a higher per capita income indicates a higher standard of living.
The government uses national income figures to measure economic progress. These figures provide clear information about the growth rate and performance of an economy. By comparing the data for different years, the status of the economy becomes clear, and we can see what is happening in the economy as a whole: whether the growth rate is high or low, and whether it is rising or falling. When economic growth is low, policymakers adopt measures to improve the economy’s performance. It also becomes easy to find out the contribution of the various sectors of the economy and identify those sectors that need more attention.
Determines technical assistance
Through national income data, the World Bank and the United Nations sometimes give priority to poorer countries. It gives an avenue for technical assistance when they have a rough idea of the national income of different countries.
A country’s contribution to international organizations, such as the Commonwealth and the United Nations, is highly dependent on the degree of its prosperity. National income estimates indicate the progress of a country’s economy and often form the basis for its contributions to these organizations.
A country’s budget depends heavily on the national income and its related concepts. The government formulates its annual budgets with the help of national income estimates, which helps it avoid selfish or desire-based policies.
Inflation and deflationary gaps
For timely policies meant to curb inflation and deflation, the government needs aggregate national income data. If expenditure increases, the data show an inflationary gap, while a decrease in expenditure shows a deflationary gap.
Defense and development
National income estimates help to divide the national product between the defense and development purposes of the country. From national income data, the government will know how much to set aside for the defense budget.
Measurement of national income
There are three methods of measuring national income: the income approach, the net product (output) approach, and the expenditure approach. The three approaches arrive at the same total.
When factors of production produce goods and services, consumers pay for them. The sum total of this income constitutes the national income. The income received by factors of production has to be equal to the expenditure incurred in producing the goods and services. The sum total of all expenditures constitutes the national expenditure.
The income approach
Under this method, we take account of all the income received within a one-year period. These income receipts involve individuals, firms, and the government. All wages and salaries, rent, profit, and interest are summed up. In order to avoid double counting, economists do not include transfer payments, such as gifts to orphans, the aged, students, and beggars, and pensions to retirees; these come out of income that has already been counted. The income must arise from the production of goods and services. Aside from excluding transfer payments, we also exclude business expenses.
Using the income approach, we arrive at either the gross domestic product or the gross national product at factor cost, because we find the total figure by adding up all factor costs, that is, the incomes of all the factors of production. We then add indirect taxes and subtract subsidies to express GDP and GNP at market prices.
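As a minimal illustration of the income approach (all figures below are hypothetical), the factor incomes are summed to give the aggregate at factor cost, and indirect taxes and subsidies then convert it to market prices:

# Hypothetical factor incomes, in billions.
wages = 300.0      # wages and salaries
rent = 40.0        # rent received by property owners
interest = 30.0    # interest received by owners of capital
profit = 80.0      # profits received by entrepreneurs

gdp_at_factor_cost = wages + rent + interest + profit

indirect_taxes = 25.0   # taxes on goods and services, e.g. sales and excise duties
subsidies = 5.0         # government payments that lower market prices

# Adding indirect taxes and subtracting subsidies moves from factor cost to market prices.
gdp_at_market_prices = gdp_at_factor_cost + indirect_taxes - subsidies

print(gdp_at_factor_cost)     # 450.0
print(gdp_at_market_prices)   # 470.0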
Output or net product approach
Under this method, we measure national income in terms of the monetary value of all the goods and services the various economic units produce in a year, adding up the net contributions of the different sectors of the country. To avoid double counting, we measure output on a value-added basis. Value added is the value of outputs minus the cost of inputs. For example, the cost of raw materials like maize and soya beans used in making poultry feed must be subtracted from the value of the feed to get the milling companies’ contribution to national output.
The calculation includes the value of exports less the value of imports. It includes the value of goods and services that producers themselves produce and consume, the value of owner-occupied houses, and the value of the services of house helps and voluntary organizations. The value of public services such as justice and defense must also be reflected. The output approach gives GDP at market prices; we deduct indirect taxes and add back subsidies to get GDP at factor cost.
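The poultry-feed example can be worked through on a value-added basis. The sketch below uses made-up figures for a three-stage production chain; the sum of each sector's value added equals the value of the final output, which is exactly why double counting is avoided:

# Hypothetical production chain, in millions: (value of output, cost of intermediate inputs).
sectors = {
    "maize and soya farming": (100.0, 0.0),    # assume no purchased inputs, for simplicity
    "feed milling":           (180.0, 100.0),  # buys the crops as raw material
    "poultry farming":        (300.0, 180.0),  # buys the feed from the millers
}

# Value added = value of output minus cost of inputs, sector by sector.
value_added = {name: output - inputs for name, (output, inputs) in sectors.items()}
gdp_output_approach = sum(value_added.values())

print(value_added)           # contribution of each sector: 100.0, 80.0 and 120.0
print(gdp_output_approach)   # 300.0, the same as the value of the final output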
The expenditure approach
This approach measures the total expenditure by individuals, firms, and the government on goods and services. Here, it is necessary to avoid double counting by classifying each expenditure (consumption or investment) and including only expenditure on final goods and services. For instance, in the value of poultry feed, we are careful not to also add the prices of the ingredients used in making the feed. Under this approach, we do not include transfer payments either.
Under this approach, the formula for calculating national income is Y = C + I + G + (X - M) + P, where (a short numerical sketch follows below):
Y = National income
C = Private consumption expenditure
I = Private investment expenditure
G = Government expenditure on consumption and investment
X = Exports
M = Imports
P = Income from property
We can simplify the formula by reducing it to Y = C + I, where
Y = National income
C = All expenditure on consumer goods and services
I = All expenditure on investment goods and services
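Here is a brief numerical sketch of the expenditure formula, again with purely hypothetical figures; the simplified two-sector version at the end lumps all spending into consumption and investment:

# Hypothetical expenditure components, in billions.
C = 280.0   # private consumption expenditure
I = 90.0    # private investment expenditure
G = 70.0    # government expenditure on consumption and investment
X = 60.0    # exports
M = 40.0    # imports
P = 10.0    # income from property

Y = C + I + G + (X - M) + P   # national income by the expenditure approach
print(Y)                      # 470.0

# Simplified two-sector form: Y = C + I, where the two terms now cover all spending.
consumption_all = 350.0   # all expenditure on consumer goods and services
investment_all = 120.0    # all expenditure on investment goods and services
print(consumption_all + investment_all)   # 470.0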
Limitations of using national income data
Though national income estimates are important which we have stated above, there are limitations.
National income data does not state whether income is widely spread or concentrated in the hands of a few; it does not disclose the income distribution within a country. For this reason, per capita income on its own is a poor measure of the standard of living of the people.
There are also variations in the structure of production: non-marketed products tend to be neglected, even though such products are important in underdeveloped economies.
Quality of life
National income ignores the quality of life. Though it measures the monetary value of economic activities, it does not measure how happy and comfortable people are. It ignores the actual welfare of the people.
Different approaches in measurement
Different countries may measure their national income using different approaches, and methods of valuation differ among countries. What one country includes, another may exclude; for example, one country may include self-provided services while another excludes them, and some may work out GDP at factor cost while others work it out at market prices.
Also, the method of computing national income figures can change over time, as economists may begin to include items that were not previously part of the measurement. National income may then increase even though the standard of living remains low.
Population statistics may also change over time, and there may be problems in that area as well. The size of national income by itself does not measure the welfare of an economy, because population sizes differ and keep changing. Problems arise when wrong population figures are used to calculate per capita income; some countries do not have reliable data, and using per capita income for such countries may give the wrong picture.
Changes in the value of money
The value of money changes over time; it can either increase or decrease. When prices increase, an increase in the national income does not mean an increase in the standard of living, because a given amount of money can buy fewer goods. Changes in the price level greatly limit the comparison of national income between years.
Also, even though converting the national incomes of different countries to the same currency using foreign exchange rates is possible, there are still limitations in comparing standards of living. The standard of living largely depends on the value, or purchasing power, of money, and some countries may be experiencing high inflation while others face deflation.
National income ignores whether economic activities have positive or negative effects that cannot be measured in monetary terms. For example, if a manufacturing industry pollutes the environment and reduces the quality of life, national income only shows the money the industry generated.
Circular flow of income
Usually commodities and money flow between households and firms. There is a steady flow of inputs and outputs among the different sectors in an economy. Let us assume that there are only two sectors in the economy: the personal sector (households) and the firms (the business sector). Households supply inputs (factors of production) to firms for the purpose of production. Firms pay the households wages, rent, interest, and profits, which form part of their incomes. Households use this income to buy the goods and services the firms produce, and household spending in turn forms part of the firms’ income. This activity leads to the circular flow of income. Firms again use the income they receive from households to pay for the productive services of the households. As a result, income continues to move in a circle: income flows in one direction while commodities flow in the other. | https://www.jotscroll.com/national-income-meaning-and-concepts | 21
25 | Blacks in Niagara Falls, New York: 1865 to 1965, a survey.
The black community is a part of the city's history and culture. Blacks settled there, like other Niagara Fallsians, not because of the scenic beauty of the Falls but to earn a livelihood, as the city has historically maintained a thriving tourist industry, electrical plants, and factories that needed a copious labor force. Blacks have gained employment in these and other industries and have been an integral part of the city's labor force since the period of slavery in the United States.
During the Slavery Era, many blacks traveled through Niagara Falls en route to Canada. Niagara Falls was one of the final stations of the celebrated Underground Railroad. Before the Fugitive Slave Act of 1850, it was not uncommon for escaped slaves to settle in northern communities, such as Niagara Falls, and become citizens of those communities. However, once the Fugitive Slave Act of 1850 became law, there was a mass exodus of escaped slaves to Canada. It is estimated that 30,000 to 40,000 slaves escaped to Canada during the nineteenth century, many of them passing through Niagara Falls on their way to freedom. This aspect of Niagara Falls' black history is well known and well documented; however, the period after 1865 is not.
The aim of this paper is to survey and introduce the history of blacks in Niagara Falls, New York from 1865 to 1965, focusing on such themes as employment, population expansion, community development, leadership and racial conflict. Black Niagara Fallsians, like other black American residents in their respective communities, strove to be part of the mainstream of the city of Niagara Falls as well as to maintain their own cultural identity. Job and educational opportunities, access to political power and available housing are a few of the major issues that were of concern to them. Surveying and introducing this important history will hopefully encourage others to conduct further research.
The historical experience of blacks in Niagara Falls is similar to that of other black northern urban communities. (1) They existed on the fringes of society, never being an influential group within the broader community, and never having a major role in the political, economic or social affairs of the overall community. Their presence was largely tolerated or ignored until their numbers increased significantly, making them more noticeable and more threatening, especially as they attempted to gain more living space. Although racial tensions heightened in Niagara Falls, they never rose to the level that prevailed in larger cities, where they culminated in full-scale race riots.
THE FIRST 35 YEARS AFTER SLAVERY
In 1850, the year the severe fugitive slave law was enacted and well before racial tensions became detectable, 41 blacks were residing in Niagara Falls. (2) In 1860 two hundred and forty-two blacks were living in Niagara Falls. (3) Due to the proximity of Canada to Niagara Falls, many of those individuals were probably escaped slaves who resided in the United States and at times in Canada, depending on where they could find steady employment. By 1865, the year that slaves were emancipated, 126 blacks were living in Niagara Falls. (4) Many of those who were in Niagara Falls in 1860 probably returned to regions in the South to be with family members and other loved ones.
After slavery Niagara Falls was a relatively safe place for blacks to reside. No longer did blacks have to fear one of their race might be returned to slavery or that a free person might be seized and sold into bondage. (5) No longer did they have to keep abreast of the local debates between proslavery and antislavery factions and fear their consequences. Moreover, they did not have to be conscious of or concerned about the views and behaviors of the numerous southerners who frequented Niagara Falls to enjoy its natural scenic beauties. Those former slaves that had escaped to Canada were free to remain in Canada or return to the United States. A new day had dawned.
During this early period of freedom and beyond, the percentage of blacks residing in the Niagara Falls area was small. In 1865 they comprised 2.0 percent of the population. In 1890 they were about 1.5 percent of the population, and in 1920 they were 1.0 percent of the population. Black Niagara Fallsians usually amounted to about 2.0 percent of the population or less, never exceeding this level except during the 1870s, 1880s and World War II and thereafter. Table I below displays this pattern. (6)
Black Niagara Fallsians, besides being a small part of the Niagara Falls population, were mainly concentrated in low-skilled jobs. This is evident from the beginning. They were laborers, servants, laundresses, hack men, drivers, porters and hotel workers. This situation prevailed not only during Reconstruction but also throughout the entire timeframe of this study. Of the 126 blacks that resided in Niagara Falls in 1865, for example, census takers recorded that only 22 of them were employed. Most of them were low-skilled workers, eight being listed as servants, nine as laborers, and one as a drayman. (7) There were four skilled tradespersons: a tailor, a mason and two barbers. The census takers for 1870, 1880 and 1892 and thereafter recorded an expanding number of black workers, reflecting an increasing stability among the black Niagara Fallsian population. These censuses continued to show that blacks were predominantly low-skilled workers.
A popular type of employment that was open to black Niagara Fallsians was hotel work, which was plentiful because Niagara Falls was a major tourist attraction. Their presence in this field hearkens back to the days of slavery, when they were cheap laborers who performed the daily menial hotel functions as waiters, cooks, bellboys, servants and janitors. The hotels would hire them for the summer and fire them during the winter. (8) Many of the black hotel workers often "lived in the basements of the hotels and worked all day long and into the night." (9) They continued to labor in hotels beyond the slavery period. The Town and Village 1865 Census for Niagara Falls, for example, listed four blacks working in a hotel, which is, most likely, an undercount. The 1870 Census listed 40, the 1880 Census 57, and this pattern continued throughout the Niagara Falls Town and Village censuses that recorded occupation and are now open to the public.
Two of the oldest and most prominent hotels in Niagara Falls were the Cataract House and the International Hotel. The Cataract House was built in 1825 and the International Hotel in 1853. (10) These hotels housed and entertained such notable guests as Abraham Lincoln, Ulysses S. Grant, Millard Fillmore, William "Boss" Tweed, and Li Hung Chang, a Chinese diplomat. President William McKinley had lunch at the International House an hour or so before he was assassinated in Buffalo. These hotels also employed an extensive black workforce. (11) It has even been written about the Cataract House (and probably held true for the International Hotel as well) "that rich southerners who patronized this hotel before and after the Civil War were made to feel at home by the colored help." (12) The black employees at the Cataract House, when suddenly needed during their breaks, were often summoned by a great bell that hung above the hotel. During their breaks, the black workers would often congregate near and around the hotel, which was surrounded by a scenic wooded area. (13)
Patrick Snead, an escaped slave who had fled from Savannah, Georgia in 1851, worked at the Cataract House during the summers. He had done this for two years. In 1853 he had been identified as a fugitive slave and was arrested at the Cataract House by five constables from Buffalo. After his arrest, he was rescued by a group of black waiters, also from the Cataract House, who assisted him in getting on a ferry to cross over to Canada, with the constables in hot pursuit. (14) The constables called for the ferry drivers to bring their boat ashore, which they did, and Snead was again apprehended, taken to Buffalo, jailed for nine days and finally brought before a judge. Fortunately for Snead, he obtained good representation resulting in his release, and upon being freed, he immediately fled to Canada. (15) Although Snead treasured his freedom, he regretted that he had lost his opportunity to earn the much-needed wages that he could have earned at the Cataract House. (16)
Unlike the concrete work history of black Niagara Fallsians documented by census records, information on the activities of the small black community is sketchy for the first 35 years following the Civil War. This is due to several reasons. First of all, since the black community was so small and on the fringes of society, few records remain and still less information was written. As Table I above indicates, in 1870 one hundred and forty-nine blacks were recorded as living in Niagara Falls. In 1880 one hundred and fifty were recorded, and lastly in 1890 one hundred and fifty-nine. As mentioned before, these population figures for each decade represented a small percentage of the entire Niagara Falls population. Secondly, the census for these early years revealed a large number of individuals labeled as mulattoes, (17) which could mean that they saw themselves as part of the larger community since one of their parents was white. Thirdly, available evidence seems to indicate that no distinctive black church existed during this time or at least there was no physical church edifice that brought black community members together to worship.
Regardless of these matters, signs of community are evident. In 1872 the Colored Republicans scheduled a meeting for September 26 at Grant's Hall. (18) William H. Johnson of Albany, who was chairman of the Colored Republican State Committee and William F. Butler of New York City were slated to speak. Moreover, the Colored Glee Club was on the program, and it was expected that they would sing some of their favorite campaign songs. Twenty years later, the Colored Republicans held a meeting at Allen's Hall. (19) They were then called the "Wide Awake Colored Republican Club." The Colored Glee Club was also on the program. The keynote speaker for the meeting was a physician from Buffalo, Dr. H. H. Lewis, who spoke for about an hour and a half and kept his large audience's attention, with seriousness, anecdotes, and humor. (20) "At the conclusion of Mr. Lewis's remarks the Glee Club rendered another selection and all united in three hearty cheers for Harrison and Reid." (21)
Five years later the "Colored '400' Dance" was held at Crick's Hall. (22) Invitations had been sent out some weeks in advance. Guests came from Canada, Buffalo, Lockport, and other cities, and as far away as Cleveland, Ohio. An orchestra played for at least 36 dances, from 9:00 p.m. to 12 a.m. Mrs. Anderson Fayette of Niagara Falls, whose husband was the proprietor of a local hotel, was nominated "belle of the ball." (23) The Dance was dubbed a great success and one of the best-managed affairs ever held in Niagara Falls. Credit was given to the committee that was in charge.
During the decade that the Dance occurred, more blacks were migrating to Niagara Falls. By 1900, 344 blacks were recorded as living in the Niagara Falls area. Compared to the 1890 population figure, this was not only an increase of 185 individuals but also a percentage increase of 116 percent. For the period of this study, this population growth represents the first major influx of blacks into Niagara Falls, and employment constructing the Tunnel is what brought them to the area. (24)
The Tunnel was a passage from the Edward Dean Power Plant under the City of Niagara Falls to land below the cataract. The purpose of this tunnel was to divert water from the Niagara River through the tunnel to a point beyond the Falls to generate electricity at the Edward Dean Power Plant. The construction of the Tunnel began in 1890 and was finished before 1900. Many workers from various ethnic backgrounds worked on the Tunnel, 200 in the first phase of its construction and 800 in the final phase, with Italians playing a key role in laying the bricks. (25) Blacks also played a significant role in constructing the Tunnel, completing significant portions of its work. One black who worked on the Tunnel Project in Niagara Falls and then worked on a smaller similar project in Niagara Falls, Ontario, compared the treatment of black workers in both places:
when we comes along de street [in Niagara Falls, New York] dey runs in de houses and closes de doahs. So fa' as I can see dey's afraid of us, count some of our fellows got such rep' tation for letting blood on dis side. I has to laff' self sometimes, but doan't think we looks over good in our dirty oil skins. (26)
On the Canadian side, they were treated with dignity and not stereotyped. (27) They were greeted with a friendly hello. Nonetheless, by 1910, a short time after the Tunnel was complete, the 1910 Census indicates that 266 blacks were living in the Niagara Falls area, which, when compared to the 1900 figure, was a decrease of 78 individuals. This adds credence to the idea that several blacks had migrated to the area specifically to work on the Tunnel Project.
1900 TO THE END OF THE DEPRESSION
By the time the Tunnel had been completed, a black community was strongly evident in Niagara Falls. A black Baptist group was conducting religious services in Crick's Hall and making plans to build their own house of worship. (28) The leaders of this group were the Reverend B. B. B. Johnson and his wife, who was also an ordained minister. They had a home on Erie Avenue, which was a street in an area where clusters of blacks were settling, and came to be for a long time the main thoroughfare of the black community.
Johnson and his wife first appeared in the Niagara Falls Gazette, the main local newspaper, April 9, 1900. An announcement was made stating that the "Negro Baptists" of the city were going to have service at Crick's Hall followed by the baptizing of 50 persons in "Loop Drive Lake" on the Niagara Reservation. The final part of the announcement stated that the cornerstone for the first black church in Niagara Falls would be laid. (29) The church was to be called the Second Baptist Church of Niagara Falls and was to be located on Twelfth Street. When the actual events occurred, eight people were baptized and about 4,000 people from throughout Niagara County and other areas witnessed the baptisms. (30) The Niagara Falls Gazette labeled this activity a "reform movement for the Negroes of Niagara Falls." (31)
The Reverend Johnson and his wife were clearly leaders in the black community. The next month after the baptismal services, plans were launched to bring Booker T. Washington, D. Augustus Straker and John J. Jones to Niagara Falls to address residents of the city for the Fourth of July. (32) These were major black leaders, Booker T. Washington not only being the most prominent of the three leaders but the foremost leader of his race, a man of great influence. Besides having these speakers inform Niagara Fallsians about the status of African Americans and how national events would affect and were affecting them, Johnson hoped to generate income to pay off the debts of his new church. He had obtained confirmation that each speaker would be present, and he estimated that about 20,000 people from New York State and other locales would attend the Jubilee Celebration.
A little over a month later, eight days before the Fourth of July Jubilee Celebration, the Niagara Falls Gazette carried a report that Johnson would give up his pulpit until the debts of the Second Baptist Church were paid. His wife would assume the pulpit while he raised the funds to pay off the church's debts, along with some he had contracted on his personal account. (33) In his own words, Johnson told a Gazette representative, "It is discouraging to stand up in your pulpit and counsel your congregation to lead an honest life and pay their honest debts, and then have a bill presented to you the minute you step down from the pulpit." (34) Johnson, who had labored hard for the improvement of his race in Niagara Falls, wanted to devote the bulk of his energy to eliminating the debts of his church. It was hoped and expected that the coming Fourth of July Jubilee Celebration would contribute significantly toward this effort.
The Celebration was not the great success it was expected to be, largely because the prominent speakers were unable to attend. Moreover, it rained. Nevertheless, the celebration was a modest success. The event was well attended and began with a parade led by Robert Dett of the Keystone Hotel. Dett was the father of the famed Niagara Falls musician, R. Nathaniel Dett. (35) Following Dett was the "Tunnel District Blues Band," led by the Reverend Johnson in his military uniform, sword at his side. At the grounds the Reverend Hatchet of London, Ontario spoke. Then the Declaration of Independence and the Emancipation Proclamation were read, followed by a very patriotic address by Johnson. (36) After the official proceedings were over, the visitors enjoyed the games and each other's company.
A small sum was obtained for the Second Baptist Church. Johnson furnished the Daily Cataract-Journal, another local Niagara Falls newspaper, with an accounting of the income raised by his church for the 1900-1901 fiscal year. (37) In July, the month of the Jubilee Celebration, $243 was raised. As might be expected, this was by far the most money obtained in any single month. Still, according to Johnson, it was not enough to cover the church's debts. After 1901, Johnson does not appear in any of the Niagara Falls newspapers. Rumor has it that he left town and kept funds raised by his congregation that had been slated for the debts of the church. (38)
Besides the Second Baptist Church, the other black churches of Niagara Falls founded up to 1957 are listed below with the year of their founding, according to the research of the Reverend James Franklin Banks (39):
(1) 1906 St. John's Baptist Church
(2) 1917 Shiloh Baptist Church
(3) 1920 Union Baptist Church
(4) 1925 The Morning Star Church of God and Christ
(5) 1926 Emmanuel Baptist Church
(6) 1926 Trinity Baptist Church
(7) 1929 Second Baptist Church
(8) 1937 New Hope Baptist Church
(9) 1946 Mt. Erie Baptist Church
(10) 1952 Lily of the Valley Baptist Church
(11) 1953 The Glorious Church of God in Christ
(12) 1956 The New Jerusalem Revival Center
(13) 1956 First African Methodist Episcopal Zion Church
(14) 1957 Mt. Sinai Baptist Church
Some of these churches were formed by members who left an established church to found a new one. (40) Union Baptist Church and Mt. Sinai Baptist Church are examples.
During the first 35 years following the Civil War, monumental racial problems did not exist in Niagara Falls, unlike in the southern section of the country, which was still experiencing the aftermath of Reconstruction and the wrath of the Jim Crow era. The reason for this was that the black population during those years (and for a period thereafter) made up a very small percentage of the overall population and lived on the fringes of society. In the South, however, where the majority of blacks lived, they competed with whites for a livelihood, and in some places made up about one-third or more of the population.
Although there were no monumental racial problems in Niagara Falls by 1900, evidence indicates that black Niagara Fallsians felt a sense of unfairness when they saw how they were treated in society. A 1901 Niagara Falls Gazette article was titled "Negroes Object." This article reported that blacks in Niagara Falls and Buffalo were incensed with the treatment that was accorded President McKinley's assassin, Leon Czolgosz. "Had a Negro shot the President," said a soldier who had fought at San Juan Hill with Theodore Roosevelt, "no power on earth could have protected him from the violence of the mob, and yet here is a man caught red handed in the act and he is protected and will be given a trial and perhaps escape with a sentence of a few years. And why? Simply because he has a white face." (41) These statements reflect how many black Niagara Fallsians were sensitive to the fact that they were on the fringes of society and not considered part of the mainstream, and that a different standard of justice probably would be applied to them in certain situations, even in Niagara Falls. Furthermore, it was said that the fact that a Negro, James B. Parker, had knocked down the President's assassin ought to have done wonders in lifting the prejudice against the black race. (42)
Table I shows that by 1920, nineteen years after the assassination of President McKinley, 509 blacks were residing in Niagara Falls. Compared to the 1910 figure, this was an increase of 243 individuals. What brought more people to Niagara Falls and the rest of Western New York were the jobs created by the defense industry and the decline in immigration to the United States brought about by World War I. In the neighboring city of Buffalo, the black population grew by 2,738 individuals over the same period. (43) Social scientists consider these shifts part of the First Great Migration, the substantial movement of black migrants from the South to northern urban areas spurred by World War I. (44) It is estimated that about 500,000 blacks left the South from 1914 to 1918 to work in northern industries. For the first time, the 1920 Census listed some black Niagara Fallsians as factory workers, although most were identified as laborers. (45)
Besides the increase in population, the formation of a community center was one of the greatest single events to benefit black Niagara Fallsians during the Age of Freedom. It is rumored that black Niagara Fallsians who patronized the local YMCA were told to get their own recreational facility. Regardless of who may have made this statement, whether white patrons or YMCA representatives, black Niagara Fallsians maintained an ongoing relationship with the local YMCA that lasted throughout the period of this paper. Nonetheless, during the late 1920s, Eugene Ellis, Benjamin Bolden and the Reverend D. B. Barton, who were prominent leaders of the black community, approached the Community Chest, a division of the local Niagara Falls government, about sponsoring a social and recreational facility for the black community. (46)
It was agreed by both groups that a study should be undertaken to determine the precise needs of the black community. John M. Pollard Sr. of the Playground and Recreation Association of America was brought to Niagara Falls from New York City to conduct the study. Pollard, who had done graduate work at the School of Social Work at the University of Chicago (which was heavily influenced by the famous pioneering sociologist Robert Park), took several months to complete his study. He administered five hundred questionnaires, made two hundred personal visits, and held six public meetings. Ben Bolden, who was an up-and-coming community leader, assisted him. (47) Pollard's findings, which were presented to the Community Chest, showed that a community center was needed and would contribute toward the progress of Niagara Falls in general. Furthermore, he envisioned the Center as a place for drama and stories, music and plays, dances, baseball and basketball games, swimming, china painting and handwork of all kinds, forums, debating clubs and study clubs, an employment bureau to seek economic opportunities, and so forth. (48)
There were mixed reactions in the black community to the idea of a separate community center for blacks. (49) One small group felt that separation was counterproductive and that blacks and whites, side by side, had to work out the destiny of America. "They [felt] that to permit separation [was] to admit inferiority and because of bitter experience they [asked] for better economic conditions only, and they [were] content to wait for other social adjustments." (50) Another group, small but larger than the first, felt that, with the right kind of leadership, blacks would integrate into established institutions even if it meant expanding those existing institutions to accommodate them. The overwhelming majority of people, however, felt that a separate community center was needed to address the social and recreational needs of the black community. They also felt that it had to be led by well-trained members of their community of the highest type available.
Pollard's study, which was titled "Why Have a Study," also suggested an initial operating budget for the Center. For its first year, the Center's operating budget was projected to be $5,107. This amount evidently seemed reasonable to the members of the Community Chest, since they accepted it without making any adjustments. The budget that Pollard created and submitted to the Community Chest is listed below (51):
Suggested Budget
Salaries                                 $2,000.00
Wages                                       300.00
Other Compensation                          300.00
Total Personal Service                   $2,600.00
Total Services Other Than Personal          120.00
New Equipment                            $1,000.00
Total Equipment                           1,000.00
Rent                                        900.00
Subtotal                                 $4,620.00
Total Materials & Supplies                  487.00
Grand Total                              $5,107.00
Most of the funds requested from the Community Chest were projected to cover salaries and equipment. Roughly a fifth of the funds were requested for rent and services other than personal. For reasons unknown, Pollard recommended that a woman begin the work of heading the Center and that men volunteer their help until a budget could be secured to support two staff members.
Several meetings were held to support the creation of the Center and to further convince the Community Chest of the worthiness of the project. These meetings were usually well attended, reflecting the majority's view that the center should be separate. (52) Acknowledged community leaders usually led these meetings, individuals such as the Reverend D. B. Barton; Ben Bolden; Charlotte Dett, a tourist-house owner and the mother of Nathaniel Dett; Bessie Palmer, another respected community leader; and several others.
On October 17, 1928, they met at the Chamber of Commerce to select a name for the new center. "Among the names suggested were: Douglass League (as a tribute to Frederick Douglass, the well-known abolitionist), Dunbar League (in honor of Paul Lawrence Dunbar, famed Negro poet), Pollard Centre (for John Pollard, who made the survey recommending that a social center for the Falls Negroes be established), and Nathaniel Dett Centre, in honor of a native son, a composer and conductor of note, and well known here." (53) The leadership also welcomed additional suggestions from any source. At the conclusion of the meeting, no name had yet been confirmed.
By March 29, 1929, a little over a year after Pollard's study was completed, the Center was not only officially organized but had a name as well: the Niagara Community Center. The objective of the new organization was to promote social, recreational, and cultural activities for the black residents of Niagara Falls, with special emphasis on programs for young people. (54) (As Pollard had envisioned, the Center would eventually grow beyond its role as just a social-cultural organization.) Its first home was at 511 Erie Avenue. By October 1931, however, it had moved to 637 Erie Avenue to provide larger quarters for its members. It would remain at this location until 1952.
Today many senior citizens still have fond memories of the Center at the 637 Erie Avenue location. In accordance with Pollard's counsel that a woman be chosen as the first director of the Center, a Ms. Palmer was elected. She served from March 29 to May 1929. Romania L. Grisby replaced her, serving from May 1929 to September 1931. The third director was John M. Pollard Sr., who had moved to Niagara Falls. His directorship began in October of 1931, a little over three years after his study. Under his leadership the Niagara Community Center was stabilized, continuously expanded, and became the heart of the entire black community.
Black Niagara Fallsians, like Americans in general, experienced the effects of the Great Depression during those years. As Table I shows, 906 blacks were recorded as living in Niagara Falls by 1930 and 975 by 1940. The populations for these two census years, which represent critical years of the depression, were still small compared to the white population, representing less than 1.3 percent of the total. While reflecting on events during the Depression, Theodore Williamson, a funeral home director and longtime resident of Niagara Falls, explained the difficulty of those times:
African Americans in Niagara Falls were poor in the 1930s. Milk and bread were given out. Families picked up metals to sell to Jewish junk dealers during the 1930s in Niagara Falls. (55)
Furthermore, housing conditions for many black Niagara Fallsians were not good. A Works Project Administration housing study completed in 1939 made several negative findings concerning black Niagara Fallsians. (56) For example, it stressed that overcrowding was prevalent in many of their dwelling units. Almost five percent of black families (compared to 2.7 percent of whites) had more than one and a half persons per room. The vast majority of blacks were renters; only about 13 percent were homeowners. Furthermore, over 60 percent of a sample of 210 black families were living in substandard dwelling units that were in need of major repairs or altogether unfit for use. (This pattern would continue into the 1960s and beyond, especially as more blacks migrated into Niagara Falls seeking living space.)
Nevertheless, despite the effects of the Depression, the Niagara Falls Community Center served as a strong pillar. It sponsored continuous recreational activities for youth, which helped them to avoid trouble. One of the first major successful projects that Pollard undertook as the new director was to invite William O. Pickens to Niagara Falls to speak to various groups. (57) Pickens was a scholar, educator and field secretary of the National Association for the Advancement of Colored People (NAACP). In an address before black and white students at the senior high school, he spoke extensively on the role of blacks in the making of America, instilling pride and awareness. (58) His addresses were well received, according to the Niagara Falls Gazette. In 1934 Charlotte Dett sponsored a program at the Center in observance of Negro History Week. (59) Both Buffalo and local talent were scheduled to appear on the program. Numerous groups used the Center as a meeting place. Activities such as these were ongoing and usually well attended, making Pollard and the Niagara Falls Community Center important and respected names in Niagara Falls.
THE POST-DEPRESSION ERA TO THE CIVIL RIGHTS MOVEMENT
The outbreak of World War II not only helped to end the Great Depression but also made the Niagara Falls Community Center even more essential, especially with the growth of the black community. World War II brought on the Second Great Migration, another significant population shift that brought numerous blacks out of the South and into the North, increasing the populations of numerous urban areas. Blacks migrated to the North to work in the war industries. Compared to past decades, from 1940 to 1950 Niagara Falls experienced an even greater influx of blacks. According to the U.S. censuses, 2,610 more blacks had migrated to Niagara Falls by 1950, most of them arriving from Alabama. (60)
Local opinion maintains that, besides satisfying the general demand for labor, blacks were recruited to work in low-level plant jobs because it was believed that they could best cope with the intense heat generated by local plants, especially near furnaces during summer periods. (61) Recruiting agents were sent South to entice black labor to come to Niagara Falls to work in plants such as Carborundum, Union Carbide, Carbon Corporation, Hooker Chemical, Bell Aircraft, the Vanadium Company, and others. Family members and friends would often follow a recruit to Niagara Falls once news of his progress was received. It is even alleged that monetary inducements and the granting of bonds were used to entice potential labor. (62) Bloneva Bond, a longtime resident of Niagara Falls, an outspoken community activist and one of the few black professionals who migrated to Niagara Falls during the war, said, "Local industries chartered buses to bring people from the South to Niagara Falls...." She added that jobs offering appealing wages went begging throughout the Buffalo and Niagara area. (63) Moreover, she remembered that when she and her husband (Harwood Bond) moved to Niagara Falls in 1942, Carborundum was paying 52 cents an hour, which was better than the $100 a month she had earned as a teacher in North Carolina. Her husband got a job at Hooker Chemical Company (now Occidental Chemical Corporation) and retired in 1980 after 37 years of loyal service. (64)
The expansion of the black population of Niagara Falls placed a heavier demand on the facilities of the Niagara Community Center. It entered its most active period, as the Center began to fulfill even more of Pollard's vision. The Center continued to be an actively used recreational facility and meeting place. Moreover, it expanded its job agency role and social service programs. (65) Pollard, for example, was known to carry around job applications for potential applicants and to be ready to issue them at a moment's notice. The Center also helped many job seekers obtain employment by coordinating interviews with potential employers. It found lodging for many of the new migrants as well. During the war, it even expanded its hours to accommodate those members who worked at night, as attested below:
During the last month a "swing-shift" program has been started by the Girls Work Secretary. It provides a recreational program for defense workers who could not take advantage of the Center's programs at any other time. The swing-shifters enjoy reading in the Center's library, playing ping-pong, playing bridge, listening to the radio, dancing and general socializing. Coffee and doughnuts are served. The executive director is on duty at each of their meetings. They meet each Wednesday night between 11:30 p.m. and 3:00 p.m. It is proving to be very popular. (66)
For those men who were in the Armed Services, the Center published a short-lived newsletter on local community affairs and sent it to them, as well as making it available to local citizens. The newsletter was called "Date Data." (67)
Racial problems existed in Niagara Falls prior to the Second Great Migration, but on a small scale, almost to the point of not being readily detectable. Nonetheless, a meticulous scanning of the Niagara Falls Gazette from 1860 to 1970 clearly indicates that black Niagara Fallsians were considered distinct and of a lower caste than the general white population. This appears simply to have been an accepted fact, evident in the way blacks were characterized in the newspaper and in the types of jobs open to them.
Unlike foreign immigrants, whose ethnicity was mentioned during their initial settlement but no longer after their assimilation, black Niagara Fallsians were constantly distinguished by race. Moreover, menial, low-level jobs were generally the only ones open to them. The U.S. census records from 1865 to 1920 support this contention. Consequently, the black population was perceived as different and largely tolerated because their numbers were small and because they existed on the fringes of society.
As a child in the 1930's, Teddy Williamson remembers talk of a race riot. Supposedly, a black man had offended a white woman and there was talk of retaliation. Williamson also remembers being told to stay in the house. (68) Because of the effects of the Second Great Migration on Niagara Falls, racial tensions heightened, but not to the extent of causing racial violence. Other means would be employed to ensure that black Niagara Fallsians remained on the margins of society.
Housing discrimination was one of the key methods employed to maintain the status quo. A small number of blacks could be found throughout various sections of the city, especially the old residents who had been in Niagara Falls for years. William Rudolph, who was the first black bricklayer of Niagara Falls, lived on Livingston Avenue. The Hershey Family lived on North Avenue. A few families even lived close to the actual cataract of Niagara Falls. (69) These families, according to Williamson, were homeowners who lived in white neighborhoods during the 1920's, 1930's and beyond, at a time when black Niagara Fallsians viewed homeownership as impressive because so few of them owned homes. (70)
However, before World War II, the majority of black Niagara Fallsians, who were predominantly new migrants, were clustered around four areas at the southern end of the city: (1) the Erie Avenue area, (2) the Buffalo Avenue area, (3) the East Falls Street area, and (4) the Twenty-Fourth Street-Allen-MacKenna Avenue area. (71) The Reverend James Banks, who was a Niagara University student and pastor of the Emmanuel Baptist Church of Niagara Falls, describes the areas' boundaries best:
The first area included that part of Erie Avenue between Eighth Street and Fifth Street. The Second area consisted of that part of Buffalo Avenue between Tenth and Thirteenth Streets. The third area is that part of East Falls Street between Tenth and Thirteenth Streets. The fourth area is bounded on the south by Buffalo Avenue, on the west by Twenty-Second Street, on the north by MacKenna Avenue and on the east by Twenty-Seventh Street. (72)
Of these four areas, Erie Avenue was the main business district and the entertainment center of the black community. (73) The Niagara Falls Community Center was located on Erie Avenue. At least one church was located there. Ann Gabriel and Almed Cheatham had tourist homes on the avenue. Jerry Plato owned and operated a boardinghouse there. Wesley Parker ran a restaurant and boarding house called the Parker House. The Sunset Club, which had New York City style entertainment for people over the age of 21, was there. Murphy's Grill, which had 20 rooms upstairs, was a popular restaurant. A man by the name of Torran operated a poolroom on the avenue, while Emmett Ashford and his wife managed a beauty shop and barbershop. Historic Erie Avenue would be demolished in a later period, however, by an urban renewal project that built the present day Niagara Falls Convention Center.
During and after World War II, many blacks left the above areas and moved to the northern end of the city, such that by 1947 about 61 percent of black Niagara Fallsians resided on the northern end of town. (74) This population shift was due to the building of Hyde Park Village and Center Court Housing, both of which were housing projects. The former was built in 1943 and the latter in 1944. Hyde Park Village was originally built as temporary quarters for black residents and therefore was not an attractive or well-built development. Center Court Housing was more appealing and as decent as a project unit could be.
The areas mentioned earlier that were considered the black district, together with the housing projects, were not sufficient to accommodate the steadily expanding black population. By 1960 Niagara Falls had experienced its greatest influx of migrants. The United States Census for 1960 listed 7,038 blacks as living in Niagara Falls, roughly 6.9 percent of the city's population. In comparison to the 1950 census, this was an increase of 3,453 individuals. Most of these new arrivals sought housing in Hyde Park Village or Center Court Housing.
By the mid-1950s, the demolition of Hyde Park Village had begun. Moreover, realtors would not sell or rent to blacks outside the areas designated for them, thwarting the natural dispersal of a growing population throughout the city. Such unlawful practices caused overcrowding and unsanitary living conditions. The Hyde Park Village project became an overcrowded slum area, unfit for human habitation. As the Works Project Administration study completed in 1939 had already found, blacks were living in overcrowded units because not much else was available to them, particularly in view of the intense competition for the available housing in the restricted areas.
In 1952 twenty-eight residents of Hyde Park Village were granted a 30-day stay by Judge Thomas B. Lee after they had been ordered to vacate their dwellings. (75) Hyde Park Village was being prepared for demolition. Most of the adults who had received eviction notices testified that they had tried to find other accommodations throughout the city, but their efforts were to no avail. The judge was informed by the City Council that there was no housing shortage in the city, and he unhesitatingly informed the black citizens before him of this fact. (76) He was correct, but he apparently was unaware that realtors refused to sell or rent to blacks in certain sections where housing was plentiful. The majority of the adults who received eviction notices had come from Alabama, many of them enticed by local industries to work in the area. These industries had made no effort to ensure that housing would be made available to their recruited employees.
In addition to their testimonies before Judge Lee, what perhaps helped the 28 evicted tenants receive a stay was the support they had from a Judge J. D. Carr, a mediator from the Civil Rights League, Inc. of Buffalo, and from James A. Lafferty, the Hyde Park Village project manager. Lafferty told Judge Carr, "The tenants [were] illegally in possession of the units but that it was his view that for these same people there [was] an acute housing shortage." (77)
Once these tenants finally vacated Hyde Park Village, they generally crowded back into the older areas at the southern end of town or into the Center Court Housing project, where a closer watch was kept on tenants to prevent them from allowing relatives and friends to double up in their apartments, as many had done in Hyde Park Village, a practice that had helped to produce a slum-like atmosphere. A few were allowed into Griffon Manor, which was an exclusively white federal housing project. There was still an urgent, unfulfilled demand for housing among the ever-expanding black population, however.
The leadership of the Niagara Community Center constantly spoke out against the housing discrimination that confronted black Niagara Fallsians. Aaron L. Griffin, who had replaced Pollard as the director of the Niagara Community Center on July 1, 1943, did so, as did his associates. (78) They also supported studies that further underscored the housing problem. Through the mid-1950s, their complaints and the studies' findings calling for the development of more housing for black Niagara Fallsians fell upon deaf ears in the City Council (79) and did not even begin to be seriously considered until a fire took the lives of many children housed in an overcrowded, old, dilapidated apartment building.
Aside from the housing discrimination issue, the fact that blacks were expected to remain in certain spaces at the margins of society is confirmed by another incident. In 1946 Florence Lovell Dyett, who resided in Jacksonville, Florida and was head of the Elementary Education Department at Bethune-Cookman College, responded to an advertisement of the DeVeaux School of Niagara Falls. (80) The advertisement declared that the child of a deceased minister who possessed an outstanding academic record could qualify for a scholarship at the DeVeaux School. The scholarship was called the Samuel DeVeaux Scholarship and was worth $1,000 per year. It paid all charges for board, room, tuition and laundry.
Ms. Dyett applied for the scholarship on behalf of her 12-year-old son, Ernest. The Reverend William S. Hudson, who was headmaster of the school, corresponded with Ms. Dyett. Reverend Hudson found Ernest's academic record very impressive, and school officials quickly voted to award Ernest the esteemed scholarship. More correspondence took place between Ms. Dyett and Reverend Hudson, who earnestly congratulated Ernest on being chosen as a scholarship recipient, as plans were being made to get Ernest to the school. Ms. Dyett informed Reverend Hudson that she would not be able to deliver Ernest to the school herself but that a friend would do it for her. When Ernest and his chaperon arrived at the school, Reverend Hudson bluntly told them that Ernest could not enter the school because he was "colored." He then escorted them to the train station and paid their fare to Philadelphia.
When Ms. Dyett's friend informed her of what had transpired, she was angry and concerned about the psychological damage that might have been done to her son. In a follow-up letter, Reverend Hudson informed Ms. Dyett that, during their correspondence, he had not been told that Ernest was colored and that, had he known, he would have spared all parties the embarrassment. Moreover, he surmised that Ms. Dyett, living in the South, might have assumed that racial prejudice was nonexistent in the North. He further informed her that parents would withdraw their children if they knew a colored student was at the DeVeaux School. He then subtly reprimanded Ms. Dyett for not informing him that Ernest was colored. (81) In response, she contacted Walter White of the NAACP for advice concerning this matter. (82) White had Franklin H. Williams, one of his aides, contact Reverend Hudson and hint at a lawsuit if Ernest was not allowed entry to the school and granted his awarded scholarship. (83) Hudson took this matter up with the Board of Directors of his school, who decided that, since Ernest had been awarded the scholarship, he should be admitted to the DeVeaux School. This information was conveyed to Ms. Dyett and the NAACP, and it appears that Ernest did attend the school. (84) Given that Ernest knew he was not initially welcome, the psychological toll of attending the DeVeaux School, a private religious institution, may have been even greater than that of his first encounter with the "good" Reverend Hudson.
No records indicate whether the local black community or its leadership was aware of the Dyett case. This may have been a private matter between the national NAACP office and the DeVeaux School, although a local branch of the NAACP had been formed by 1942.
Even so, more blacks were steadily migrating to Niagara Falls, as indicated in Table I. Their presence placed an even heavier burden on the Niagara Falls Community Center. (85) The house at 637 Erie Avenue was not large enough to accommodate the increasing number of individuals who wanted to patronize the Center. (86) After the Center requested further assistance, another study was conducted to convince the Community Chest of the Center's needs. This study was titled "Survey of the Niagara Community Center Association of Niagara Falls, New York." (87) Edward G. Lindsey conducted the study and completed it in 1948. He found that since the war the paid-up membership had grown from 50 to 1,357. (88) Twenty-nine groups were meeting at the Center, and members had taken part in its programs 19,302 times. His study supported and promoted the idea of building larger quarters.
To obtain funds to build the new quarters, in May of 1949 the Center launched a $120,000 fundraising campaign, which was approved by the Community Chest. The leadership of the Niagara Falls Community Center wanted to move it to the northern end of town, where about 60 percent of the black community resided, preferably next to the Center Court Housing project where many youths lived. Through the influence of board members, such as a Mr. Van Liew, the City Council and the Niagara Falls Housing Authority donated five lots of land adjacent to Center Avenue and 15th Street, (89) next to the Center Court Housing project. In 1952, amidst joyous celebration, the Niagara Falls Community Center relocated to its new home, where it remains today.
Obtaining housing continued to be a major problem for black Niagara Fallsians even after the new community center had been built. (90) As mentioned earlier, this problem was evident from World War II onward, throughout the timeframe of this paper and beyond. Black Niagara Fallsians persistently searched for dwelling units all over the city, but realtors continued to rent or sell properties to them only in certain sections, where a highly visible cluster of blacks, concentrated in five city tracts, had been living and steadily expanding. (91)
Due to the persistent influx of blacks, the population densities of those areas increased significantly, creating overcrowded and undesirable conditions. Moreover, there were cases of professional persons who wanted to purchase housing in Niagara Falls but could not buy in the areas they desired. (92) Some of these people purchased homes in Buffalo and commuted to their jobs in Niagara Falls.
The Reverend H. Edward Whitaker, who was head of the New Hope Baptist Church and a civil rights activist, told his fellow clergymen at a Niagara Falls Christian convention on community affairs that 50 members of his congregation had to commute from Buffalo to Niagara Falls because they could not find decent housing in the city. (93) The convention members unanimously voted for fair housing for everyone without regard to race, color, creed or national origin.
By 1956 a small number of black families were renting units in Griffon Manor. It had been mentioned in one of the local newspapers that there were 100 vacancies in Griffon Manor, suggesting that there was no housing shortage in Niagara Falls. A desperate and upset phone caller contacted the manager of Griffon Manor and told him that he had read in the newspaper that 100 Griffon Manor units were vacant. He informed the manager that, if he rented any of those units to "colored people," he would be "taken care of." (94) He also indicated that the previous housing manager had kept them out and that he represented a group of concerned citizens with a large investment in property around Griffon Manor. When told to come to the Griffon Manor office to discuss the matter, the caller, who spoke in broken English, refused to comply, repeating his warning several times. This threat was reported to the police. (95)
THE CIVIL RIGHTS MOVEMENT UNTIL 1965
The Civil Rights Movement of the 1950s and 60s impacted the entire nation, including the black community of Niagara Falls, influencing its members to speak out and protest such practices as housing discrimination and mistreatment at local stores. (96) As noted earlier, the Reverend Whitaker was a strong and outspoken advocate for fair housing, as were Aaron Griffin, Bloneva Bond and several other community members. In 1960, when four North Carolina Agricultural and Technical College students sat down at the lunch counter of a Woolworth store in Greensboro, the Student Sit-in Movement began, serving to galvanize Americans throughout the nation.
In response to this movement, Whitaker organized the picketing of a local Kresge store, which was affiliated with the Woolworth store chain that acquiesced to segregation in the South. A local W. T. Grant store, another store affiliated with the Woolworth chain, would also be targeted for picketing. Whitaker, who was also a local NAACP leader, followed the national office's directive to boycott such stores. (97) Under his leadership, black community members picketed the Kresge store in shifts, carrying signs that read "Segregation is immoral" and "Support the NAACP," which served to further highlight the injustice that was occurring in the South. The picketing, however, bewildered some white community members. (98) One man claimed that the problem being protested existed thousands of miles away and that he could not understand the merit of such an activity in Niagara Falls, which did not adhere to such unfair practices. One teenager yelled that the picketers should be "run off the street"; instead, he was chased away by a policeman.
A sympathetic store operator stated that the picketers had a legitimate complaint and perhaps needed to picket in order to work out their frustrations. In 1960 Daisy Bates, who headed the Little Rock, Arkansas NAACP branch and was the guiding hand behind the desegregation of Little Rock's Central High School, was invited to give a few addresses to the general Niagara Falls community. (99) She stressed that in the South blacks were fighting for their constitutional right to vote, whereas in Niagara Falls blacks could vote yet were not taking advantage of their rights and opportunities. (100) She also reprimanded black Niagara Fallsians for not writing their congressmen about the racial injustices going on in America and the lack of any effort by Congress to create a strong civil rights bill. Some Niagara Fallsians hinted that, since she was an outsider, Ms. Bates's criticisms of black Niagara Fallsians should not be taken too seriously. However, long after Ms. Bates had left town, members of the black community came to her defense, saying that her criticisms were legitimate. (101)
Another group entered the Niagara Falls area in about 1961 or 1962, a group that vehemently opposed the Civil Rights Movement: the Nation of Islam. (102) The Niagara Falls Gazette viewed its members as fanatics. (103) Members from the Muslim temple in Buffalo, Temple 23, came to Niagara Falls to recruit members into the Nation of Islam and perhaps to establish a temple in the area. Although the majority of black leaders, such as Pollard, Bolden, Bond, and Griffin, had been seeking to integrate black Niagara Fallsians into the mainstream of the community, the Muslims sought, among other things, to convert the black community to Islam and to promote Islamic separatism and the development of an independent economic base within the black community. The Muslims gained few converts in the area, probably because most black Niagara Fallsians were Baptists or Methodists and vigorously opposed considering any religion other than Christianity. (104)
At an NAACP forum on civil rights at the Niagara Community Center, for example, when Bloneva Bond was asked if the NAACP should join forces with the Muslims on a project that could benefit the entire black community, she responded: "No. Ours is a democratic organization operating within the Constitution. We believe in the equality of man, which the Muslims do not." (105) Griffin also did not view the Muslims as a helpful force within the community. (106) They were generally perceived as a cult that impeded the progress of the struggling black community. (107)
Notwithstanding the limited influence of Islam, adherence to some of the Muslim teachings might have benefited black Niagara Fallsians. (108) The Muslims preached community economic development, the pooling of economic resources, and the development of an enterprising mindset. (109) Booker T. Washington had preached these same ideas years earlier, but they were not generally embraced. Accepting some of these principles might have been of tremendous help to the black community, which was struggling economically, especially during the early 1960s when the booming economic effects of World War II had receded.
In 1963, for example, the NAACP conducted a survey of employment among black Niagara Fallsians and found that 17.6 percent of the heads of families were unemployed. (110) Thus, adherence to Muslim economic principles might have helped to establish a stronger, more advanced economic structure and an enterprising tradition within the black community, which was sorely needed.
With the Muslims operating on the fringes of the black community, black Niagara Fallsians continued to promote and support the strategies and tactics of the Civil Rights Movement, whose influence was not only widespread but also motivational. Taking their cue from national events, such as the Sit-in Movement, the Birmingham Desegregation Movement, the March on Washington and other civil rights protest activities, the leadership continued to voice their concern about housing and employment, making stronger demands.
Before leaving to assume his new ministerial post in his native Virginia, the Reverend Whitaker, in an interview with the Niagara Falls Gazette, continued to emphasize that the housing available to black Niagara Fallsians was limited, and that they were virtually restricted to certain sections of the city. (111) He had been a minister in Niagara Falls for ten years and acknowledged that some racial progress had been made in that period. Because of improvements in race relations, he considered Niagara Falls generally a good place to live, but there was still much work to be done. He said that discrimination was subtle in Niagara Falls compared to the South, where it was more blunt and in the open. He believed that each region had its own styles of racial discrimination, but that through strong leadership, racial problems could be eliminated. (112)
In January of 1963, ten of Niagara Falls's black churches, led by their ministers, began planning together to get more blacks hired in local supermarkets and department stores, preferably in customer-service positions where they would be readily visible to the public. (113) A twofold plan was the result of their discussions. First, they would send fact-finding individuals to specific stores to determine whether any blacks worked in them. If no blacks were employed, the individuals would discuss with the owner and manager the importance of hiring black employees and suggest that a boycott of the store might ensue if no blacks were hired, especially if a significant number of blacks patronized the store. The second phase would be the actual boycott, in which ministers would tell their congregations not to patronize a specific store until black employees were hired.
It was also assumed that church members would relay the message to their non-church member friends. Although the conditions in Niagara Falls were not as desperate as in the South, the national activities of civil rights organizations had made the ministers determined to effect some improvements within their city, with one minister even predicting marches if concessions were not made to the black community's satisfaction. (114) The planning of these churches did not effect any immediate changes, but it got people thinking more about the importance of action than of rhetoric, as a "take action" attitude began to permeate the black community.
In the middle of this situation, a young, aggressive leader by the name of Otis Cowart arose as a black community spokesperson. He was the president of the Niagara Falls branch of the Congress of Racial Equality (CORE), which, among other things, promoted hiring equality and aggressive action in confronting cases of hiring discrimination. Believing that hiring inequality existed at the local W. T. Grant store, Cowart and other CORE members conducted a four-day survey of the store at 2116 Main Street. (115) They sought to determine how many blacks were employed by the store and to estimate how many patronized it.
In their survey, they found that only one black worked at the store and that he worked in the store's warehouse. They also found that 466 blacks had patronized the store during the four-day period of their study, while 6,000 whites had done so. After questioning the black patrons, they found that, within the last year, blacks had spent a total of $46,381 in the store, many of them having accounts there. With this information, Cowart contacted Frank Carmichael, the manager of the W. T. Grant store, informed him of the survey results, and stressed that the store should hire five blacks within the next six months or face picketing. It was requested that the blacks be hired in visible customer-service roles, such as sales personnel or clerks, and, as a good-faith gesture, that at least two blacks be hired by Monday, three days after these demands were issued.
CORE argued that since January of 1963 several blacks had applied for employment but none had been hired, while two whites had been hired permanently and 12 temporarily during the past Christmas season. Cowart informed Carmichael and local authorities that CORE's picketing demonstrations would be nonviolent, that picketers would not prevent store patrons from entering and exiting the store, and that their picket signs would display such messages as "Don't buy where you can't work," "Full democracy is complete freedom," "No quotas," and "No tokenism."
On August 28, 1963, the same day that the March on Washington commenced, the picketing began with about 25 picketers, who started at 10:00 a.m. and continued until the store closed at 5:30 p.m. (116) Arthur Ray, a longtime resident of Niagara Falls, indicated that he participated in the picketing not only because discrimination was wrong, but also to ensure a better future for his children and grandchildren. (117) By Wednesday 100 individuals were picketing in front of the W. T. Grant Store, and several of them were white, as CORE was a multiracial civil rights organization. (118) Cowart, who had given the picketers workshops on nonviolence and was constantly coaching them as they picketed, had warned store officials that the picketers were determined to stay on until Christmas and beyond, until CORE's demands had been met. In a story on the picketing published in the Niagara Falls Gazette, Cowart acknowledged that the W. T. Grant Store just happened to be the first store chosen for picketing and that there would be other stores involved if unfair hiring practices did not cease. In response, the management of W. T. Grant issued a published statement in which they declared that W. T. Grant was for equal opportunity and hiring equality but that they refused to hire strictly based on race, which they considered discriminatory and contrary to sound personnel policy. (119)
To strengthen their picketing efforts, and in disregard of W. T. Grant's position, Cowart, at a community rally at Trinity Baptist Church, asked community members who had patronized the store to do four things:
(1) Not to cross the picket line and buy [at W. T. Grant];
(2) To turn in their charge plates, if they [had] any there;
(3) To notify the store that when their current bill [was] paid the account [would] not be reopened, and
(4) To encourage others not to buy there. (120)
The black community was overwhelmingly supportive of Cowart's request and CORE's crusade to ensure equal opportunities and equal hiring practices in Niagara Falls. Many young blacks were leaving the area due to limited employment opportunities, and the community wanted to support efforts to change this situation. They donated funds toward the endeavor, giving $108 at one meeting. (121) Their support and unity tremendously aided CORE's campaign: after three weeks of picketing a settlement was reached, in which W. T. Grant agreed to ensure that blacks were given equal employment opportunities at its store. (122) Other details of the settlement were not disclosed to the public. With the settlement, the picketing was called off on September 21 at 5:30 p.m., and the acceptance of Cowart as a legitimate community leader increased. (123)
Besides the CORE boycott, the Civil Rights Movement helped motivate black Niagara Fallsians to seek out power and influence within the broader community. For example, the local NAACP branch conducted a voter registration campaign to increase the political influence of black Niagara Fallsians. (124) In 1963 Joseph H. Profit ran for a seat on the Niagara County Board of Supervisors and won. At the age of 28, he was the first black Niagara Fallsian to serve in such a capacity. (125) Arthur Ray ran for a position on the Board of Education and won in 1964. (126) He was also the first black in the city to hold such a position. The Reverend Whitaker had run in 1959 but lost. Both Profit and Ray had gained considerable white support. Under Cowart, CORE pushed city officials to be more aggressive on race issues, accusing them of not doing enough and threatening more picketing. (127)
To promote the civil rights struggle and to commemorate the 60th anniversary of the founding of the Niagara Movement, the Educational Committee of the local NAACP branch invited Roy Wilkins, the national secretary of the NAACP, to Niagara Falls to give an address. (128) The event was well attended. The Civil Rights Movement compelled black Niagara Fallsians not only to demand equal rights and opportunities but to push even harder to obtain entry into the mainstream of society. (129) Blacks would demonstrate much of this same energy and enthusiasm in attempting to gain power and influence in Niagara Falls during the three remaining decades of the 20th century.
Blacks have lived in Niagara Falls and its surrounding areas since the days of slavery. After the Civil War, employment was the driving factor that attracted them to the region, with the building of the Tunnel and the two world wars serving as economic boosters. The black population in the region never really became noticeable, or problematic for many Niagara Fallsians, until World War II, when its numbers increased by leaps and bounds in less than a decade, most of the migrants relocating from Alabama. Many black Niagara Fallsians who lived in the city before the influx of blacks in the 1940's attest that blacks did not have any major racial problems until the Second World War. Some of them even blamed the rural country habits of the migrants for contributing to racial problems. It was the migrants' efforts to obtain living space outside of specific city sections that increased racial tensions between black and white citizens of Niagara Falls.
When the new black migrants naturally sought to spread out to other, less crowded sections of the city, white resistance was strong and pervasive. In limiting housing and living space for blacks, Niagara Falls followed the pattern of most northern cities that experienced the First and Second Great Migrations. However, Niagara Falls did not entirely follow the trend of other northern urban areas. The black population never increased to the extent of contributing to race riots, at least not during the timeframe of this paper, although there was talk of rioting.
In further examining the historical lives of black Niagara Fallsians, we find that they created stable families, formed communities, developed institutions within those communities, and served one another as well as the broader community.
The formation of the Niagara Community Center was perhaps the greatest single accomplishment achieved by black Niagara Fallsians, who developed this institution initially to serve as a recreational center, particularly for youth. With the steady flow of blacks into Niagara Falls, it grew into an all-purpose social welfare center, becoming not only the core of the black community but a leadership-training center as well. It is still in operation today.
From 1865 to 1965 the black people of Niagara Falls faced obstacles, overcame many of them, lived and loved, and steadily attempted to carve out space for themselves, their progeny and community within the mainstream of their city.
Table I: Black Population of Niagara Falls, New York, 1860-1970

Year    Population    Percentage
1860    242           3.67
1865    126           2.00
1870    149           4.95
1880    150           4.51
1890    159           1.45
1900    344           1.76
1910    266           0.87
1920    509           1.00
1930    906           1.20
1940    975           1.25
1950    3,585         3.94
1960    7,038         6.87
1970    8,001         9.30
(1) The racial tensions that occurred in Springfield, Ohio in 1904, Springfield, Illinois in 1908, and Chicago, Illinois in 1919 underscore the effects those rapidly expanding African-American populations had on northern urban communities when they attempted to broaden their living space.
(2) The Seventh Census of the United States: 1850 (Washington, D. C.: Robert Armstrong, Public Printer, 1853), 102.
(3) Population of the United States in 1860 (Washington, D. C.: Government Printing Office, 1864), 337.
(4) New York State Census of 1865 (Albany, New York: Charles Van Benthuysen, Sons, 1867), 9.
(5) Benjamin Drew, The Narratives of Fugitive Slaves in Canada (Boston: John P. Jewett & Company, 1856), 102-103.
(6) The population figures for the year 1865 were obtained from the Census for New York State. All other data were obtained from the United States Censuses, which are listed in my bibliography.
(7) 1865 Census of the Town and Village of Niagara Falls, 8-100.
(8) "Freedom," Niagara Gazette (Niagara Falls, New York, July 10, 1986), 7A.
(10) "A Famous Old Hostelry," Niagara Falls Journal (Niagara Falls, New York, January 4, 1918), 4: C 1.
(11) This fact is apparent throughout the Town and Village of Niagara Falls censuses for the years 1865 to 1920. These censuses are listed in my bibliography and stored in the Niagara Falls, New York Public Library; "Interesting Reminiscences," Niagara Falls Gazette (Niagara Falls, New York, October 26, 1927), 4: 2.
(12) Orrin E. Dunlap, "The Cataract House." Local History Hotel Folder 647.94 (Local History Department at the Niagara Falls, New York Public Library).
(13) "Interesting Reminiscences," Niagara Falls Gazette (Niagara Falls, New York, October 26, 1927), 4: 2.
(14) "Excitement at the Falls," The Niagara Courier (Lockport, New York, August 31, 1853), 2: 6.
(15) Patrick Snead does not appear in any Niagara Falls, New York census reports after the slavery era. He may not have been recorded. He may have resided in another American community, or he may have stayed in Canada, as did many former escaped slaves. Consult Michael Power, Nancy Butler, and Joy Ormsby, Slavery and Freedom in Niagara (Niagara Historical Society: Niagara-on-the-Lake, Ontario LOS 1JO, 1993).
(16) Benjamin Drew, The Narratives of Fugitive Slaves in Canada, 104.
(17) In my population figures, individuals in the census listed as mulattoes were counted as black.
(18) "Colored Republican Meeting," Niagara Falls Gazette (Niagara Falls, New York, September 18, 1872), 3.
(19) "The Colored Republicans," Niagara Falls Gazette (Niagara Falls, New York, October 19, 1892), 1.
(20) Lillian S. Williams, Strangers in the Land of Paradise (Bloomington: Indiana University Press, 1999), 92.
(21) "The Colored Republicans," 1.
(22) "Colored '400' Danced," Niagara Falls Gazette (Niagara Falls, New York, April 4, 1897), 1.
(23) 1900 Town and Village Census for Niagara Falls, Sheet 1; 1910 Town and Village Census for Niagara Falls, Sheet 3.
(24) H. William Feder, The Evolution of an Ethnic Neighborhood that Became United In Diversity: The East Side, Niagara Falls, New York, 1880-1930 (Amherst, New York: BMP Inc., 2000), 141.
(25) Ibid., 135.
(26) "A Canadian Tunnel" Niagara Falls Gazette (Niagara Falls, New York, August 31, 1892), 1.
(28) "Memorable Day for the Negro Baptists," Niagara Falls Gazette (Niagara Falls, New York, April 9, 1900), 1.
(29) Ibid., 1.
(30) "Salvation of the Negro Citizen," Niagara Falls Gazette (Niagara Falls, New York, April 16, 1900), 1 & 6.
(31) Ibid., 1.
(32) "Negro Jubilee to be Held on Fourth of July," Niagara Falls Gazette (Niagara Falls, New York, May 5, 1900), 1; "Plans for Negro Jubilee are Well Underway," Niagara Falls Gazette (Niagara Falls, New York, May 16, 1900), 1.
(33) "Rev. B. B. Johnson Has Given Up His Pulpit," Niagara Falls Gazette (Niagara Falls, New York, June 26, 1900), 1.
(34) Ibid., 1.
(35) Vivian Flagg McBrier, R. Nathaniel Dett: His Life and Work (Washington, D. C.: Associated Publishers, Inc., 1977), 1.
(36) "Negroes Held Big Jubilee Yesterday," Niagara Falls Gazette (Niagara Falls, New York, July 5, 1900), 1.
(37) "Rev. B. B. B. Johnson Makes Accounting," The Daily Cataract-Journal (January 4, 1901), 5: 3.
(38) James Franklin Banks, "Problems Encountered by World War II and Post World War II Negroes, Who Settled in the Niagara Falls, New York Area." Master's Thesis (Niagara University, 1958), 73. Banks was a minister himself in Niagara Falls and knew much of the African-American religious history of the city.
(39) Ibid., 70.
(40) Ibid., 70-103.
(41) "Negroes Object: Local Men and Buffalo Colored Folk Say if Czolgosz was a Negro His Treatment Would be Summary," Niagara Fall Gazette (Niagara Falls, New York, September 10, 1901), 1.
(42) Ibid., 1.
(43) Lillian S. Williams, Strangers in the Land of Paradise, 221.
(44) St. Clair Drake and Horace A. Cayton, Black Metropolis (Chicago: University of Chicago Press, 1970 [first published in 1945]), 58-64; Nicholas Lemann, The Promised Land: The Great Migration and How it Changed America (New York: Vintage Books, 1991), 1-353.
(45) 1920 U. S. Census for Niagara County, New York, Reels nos. 25, 26, & 27 (Niagara Falls, New York Public Library), Sheet 16.
(46) "Establishment of Community Center Here For Colored People Suggested as Means of Improving Conditions," Niagara Falls Gazette (Niagara Falls, New York, February 7, 1928), 13; The Niagara Community Center Papers were microfilmed by the Afro-American Historical Association of the Niagara Frontier and the Buffalo State College History Department. Referred to hereafter as the microfilmed collection, "The Niagara Community Center Association 1927-1977," 1.
(47) "Ben Bolden Can Take Pride in Career Here," Niagara Falls Gazette (Niagara Falls, New York, May 25, 1958), 7.
(48) "Pollard Opened Door to Community Center," Niagara Falls Gazette (Niagara Falls, New York, February 17, 1987), 1A: 2.
(49) "Establishment of Community Center Here For Colored People Suggested as Means of Improving Conditions," 13.
(50) Ibid., 13.
(52) "Form Organization for Advancement of Colored Community: About 150 Negroes Attend Meeting; Will Try to Establish Community Center," Niagara Falls Gazette (Niagara Falls, New York, October 12, 1928), 14:4.
(53) "Selecting Name for New Centre," Niagara Falls Gazette (Niagara Falls, New York, October 17, 1928), 1: 15.
(54) Microfilm, "The Niagara Community Center Association 1927-1977," 6.
(55) Interview with Theodore Williamson, May 13, 2002. (Interview in possession of author)
(56) "Report of Real Property Survey, City of Niagara Falls, New York," Published by the City of Niagara Falls as a report on Official Project #665-21-3-267 conducted under the auspices of the Works Project Administration, October 1939 (Local History Department, Niagara Falls Public Library), 40.
(57) "Negro Leader to Speak at Falls," Niagara Falls Gazette (Niagara Falls, New York, November 3, 1931), 20.
(58) "Negro's Advancement During Past 60 Years Phenomenal, Pickens Says," Niagara Falls Gazette (Niagara Falls, New York, November 9, 1931), 18.
(59) "About Negroes," Niagara Falls Gazette (Niagara Falls, New York, February 9, 1934), 2: 3.
(60) James Franklin Banks, "Problems Encountered by World War II and Post World War II Negroes, Who Settled in the Niagara Falls, New York Area." Master's Thesis, 10-12.
(61) Interview with Vicky Neely, December 28, 2001. (Interview in possession of author)
(62) James Franklin Banks, "Problems Encountered by World War II and Post World War II Negroes, Who Settled in the Niagara Falls, New York Area." Master's Thesis, 8-10.
(63) "Jobs Led Blacks to Falls," Niagara Falls Gazette (Niagara Falls, New York, February 5, 1990), 1A & 4A.
(64) Ibid., 1A.
(65) The microfilmed collection, "The Niagara Community Center Association 1927-1977," 9.
(67) "Community Center Reaches High Mark in Service to Colored Folk," Niagara Falls Gazette (Niagara Falls New York, October 2, 1944), 11.
(68) Interview with Theodore Williamson, May 13, 2002.
(69) Interview with Barbara Smith, May 17, 2002. (Interview in possession of author)
(70) Interview with Theodore Williamson, May 13, 2002.
(71) James Franklin Banks, "Problems Encountered by World War II and Post World War II Negroes, Who Settled in the Niagara Falls, New York Area." Master's Thesis, 13.
(72) Ibid., 14.
(73) Interview with Theodore Williamson, May 13, 2002.
(74) James Franklin Banks, "Problems Encountered by World War II and Post World War II Negroes, Who Settled in the Niagara Falls, New York Area." Master's Thesis, 15.
(75) "30-day Stay of Order to Vacate Their Homes Granted to 28 Hyde Park Village Families," Niagara Falls Gazette (Niagara Falls, New York, February 26, 1952), 1.
(78) "Accepts Community Center Directorship," Niagara Falls Gazette (Niagara Falls, New York, June 16, 1943), 10; "Mr. Griffin Likes Falls As Place to Live, Work," Niagara Falls Gazette (Niagara Falls, New York, November 27, 1948), 22.
(79) James Franklin Banks, "Problems Encountered by World War II and Post World War II Negroes, Who Settled in the Niagara Falls, New York Area." Master's Thesis, 38-39.
(80) "Statement of Case of Ernest Lovell Dyett by His Mother," Papers of the NAACP, Part 3, Series B: 1940-1950 (Microfilm: AFR 6, Olin library, Cornell University).
(81) Ibid., 4.
(82) "Ernest Lovell Dyett to Walter White," October 6, 1946, Papers of the NAACP, Part 3, Series B: 1940-1950.
(83) "Franklin H. Williams to Reverend William S. Hudson," October 8, 1946, Papers of the NAACP, Part 3, Series B: 1940-1950, 1-2.
(84) "Reverend William S. Hudson to Florence Dyett," October 11, 1946, Papers of the NAACP, Part 3, Series B: 1940-1950; "Franklin H. Williams to Florence Dyett," October 14, 1946, Papers of the NAACP, Part 3, Series B: 1940-1950, 1-2.
(85) "Community Center Reaches High Mark in Service to Colored Folk," Niagara Falls Gazette (Niagara Falls New York, October 2, 1944), 11.
(86) "Five Leaders of Negro Community Center Honored: Year's Work of Community Center Reviewed; Larger Quarters Needed," Niagara Falls Gazette (Niagara Falls, New York, February 19, 1946), 19.
(87) Edward G. Lindsey, "Survey of the Niagara Community Center Association of Niagara Falls, New York," Niagara Falls: The Community Chest (Unpublished Materials).
(88) "To Seek $120,000 For Community Center Building," Niagara Falls Gazette (Niagara Falls, New York, January 21, 1949), 1 & 25.
(89) 15th Street is presently called Aaron Griffin Way.
(90) "We Have Some Race Problems Here Too," Niagara Falls Gazette (Niagara Falls, New York, April 24, 1959), 21.
(91) "State Commission Charges Housing Segregation Here," Niagara Falls Gazette (Niagara Falls, New York, November 29, 1961), 21; "Falls Negro Finds Democracy Diluted," Niagara Falls Gazette (Niagara Falls, New York, March 17, 1963), 6 A; "Negro Leader Charges Bias in Housing, Schools, Jobs," Niagara Falls Gazette (Niagara Falls, New York, June 28, 1963), 7. "Agents get Blamed for Bias in Housing," Niagara Falls Gazette (Niagara Falls, New York, July 31, 1963), 25.
(92) "Lack of Jobs, Housing, Plagues Negroes Here," Niagara Falls Gazette (Niagara Falls, New York, August 3, 1962), 13.
(93) "Religious Fellowship Seeks End to Housing Discrimination," Niagara Falls Gazette (Niagara Falls, New York, June 3, 1959), 26.
(94) "Project Head Threatened, Warned to Ban Negroes," Niagara Falls Gazette (Niagara Falls, New York, February 16, 1956), 1.
(95) A review of the local newspapers does not indicate that the threat was carried out.
(96) "Jobs Led Blacks to Falls," Niagara Falls Gazette, 4A; "Negroes Urged to Support Passive Resistance Drive," Niagara Falls Gazette (Niagara Falls, New York, April 3, 1960), 1C.
(97) "Store Picketing Here To Back Dixie Negroes," Niagara Falls Gazette (Niagara Falls, New York, April 20, 1960), 35; "Rev. Whitaker Named Head of NAACP," Niagara Falls Gazette (Niagara Falls, New York, December 8, 1959), p. 17.
(98) "NAACP Pickets March at Chain Stores Here," Niagara Falls Gazette (Niagara Falls, New York, April 24, 1960), 6 C.
(99) "Little Rock NAACP Leader Not Ready to Give Up Fight," Niagara Falls Gazette (Niagara Falls, New York, March 20, 1960), 1 C.
(100) "Little Rock Leader Chides Falls Negroes for Apathy in Politics," Niagara Falls Gazette (Niagara Falls, New York, March 19, 1960), 9.
(101) "Falls Called Southern City for Denial of Negro Rights," Niagara Falls Gazette (Niagara Falls, New York, March 31, 1960), 14.
(102) "Muslim Cult Now Operating Here," Niagara Falls Gazette (Niagara Falls, New York, January 6, 1962), 9; "Pride & Purpose," Niagara Falls, Gazette (Niagara Falls, New York, January 11, 1962), 11.
(103) "Shortcomings of Negroes Here are Cited," Niagara Falls Gazette (Niagara Falls, New York, August 7, 1963), 26.
(104) "Muslims Gaining Few Converts," Niagara Falls Gazette (Niagara Falls, New York, January 7, 1962), 3 C.
(105) "Ethical-Problem Solutions Asked of NAACP Panel," Niagara Falls Gazette (Niagara Falls, New York, March 12, 1963), 17.
(106) "More About Muslims ...," Niagara Falls Gazette (Niagara Falls, New York, January 11, 1962), 11.
(107) "Falls Seeks To Resolve Problems," Niagara Falls Gazette (Niagara Falls, New York, August 11, 1963), 4 A.
(108) "Pro-Muslim," Niagara Falls Gazette (Niagara Falls, New York, January 18, 1962), 15.
(109) Lawrence Tyler, "The Protestant Ethic Among The Black Muslims." Phylon 27, no. 1 (Spring 1966): 5-14.
(110) "Employment is Major Negro Problem," Niagara Falls Gazette (Niagara Falls, New York, August 4, 1963), 4 A.
(111) "Housing Lack of Jobs Plague Negroes Here," Niagara Falls Gazette (Niagara Falls, New York, December 11, 1962), 10; "Negroes Find it Difficult to Find Housing at Falls," Niagara Falls Gazette (Niagara Falls, New York, December 9, 1964), 6; "Negro Leaders Here Cite Bias in Housing, Job Opportunity," Niagara Falls Gazette (Niagara Falls, New York, March 27, 1960), 8C; "Finds Housing Intolerable Y-Teen Leader to Leave Falls," Niagara Falls Gazette (Niagara Falls, New York, June 26, 1960), 1B; "NAACP Asks Housing Bill," Niagara Falls Gazette (Niagara Falls, New York, February 16, 1961), 8.
(113) "City's Stores Are Urged to Hire More Negroes," Niagara Falls Gazette (Niagara Falls, New York, January 8, 1963), 9.
(114) "Negroes to March Here, Leader Says," Niagara Falls Gazette (Niagara Falls, New York, August 6, 1963), 5.
(115) "CORE Unit Here Picks Grant's for Picketing," Niagara Falls Gazette (Niagara Falls, New York, August 24, 1963), 3.
(116) "CORE Sets Picketing Plans," Niagara Falls Gazette (Niagara Falls, New York, August 27, 1963), 11.
(117) Interview with Arthur B. Ray, May 24, 2002. (Interview in possession of author)
(118) "CORE Picketing Falls Store," Niagara Falls Gazette (Niagara Falls, New York, August 28, 1963), 27.
(119) "Management's Policy Stated by Grant Firm," Niagara Falls Gazette (Niagara Falls, New York, August 27, 1963), 12.
(120) "CORE Increases Pickets, Starts Boycott at Store," Niagara Falls Gazette (Niagara Falls, New York, August 31, 1963), 3.
(122) "Main St. Store, CORE End Dispute on Hiring," Niagara Falls Gazette (Niagara Falls, New York, September 21, 1963), 1.
(123) "Negro Job Gains Called Small," Niagara Falls Gazette (Niagara Falls, New York, January 18, 1964), 9.
(124) "NAACP Here Maps Campaign For Registration of Voters," Niagara Falls Gazette (Niagara Falls, New York, August 4, 1964), 13.
(125) "First Negro Elected To County Board," Niagara Falls Gazette (Niagara Falls Gazette (Niagara Falls, New York, November 6, 1963), 25.
(126) "Ray, Evans Are Elected; Only 5,763 Votes Cast," Niagara Falls Gazette (Niagara Falls, New York, August 4, 1964), 13; "School Board member Likes Challenge of Education Post," Niagara Falls Gazette (Niagara Falls, New York, January 17, 1965), 6 A; "95 Per Cent Negro School Here Has Special Problem," Niagara Falls Gazette (Niagara Falls, New York, December 20, 1959), 3; "Need is Cited for Greater Education of Minorities," Niagara Falls Gazette (Niagara Falls, New York, October 16, 1960), 4C.
(127) "CORE Blasts City Fathers For Slighting Bias Problem," Niagara Falls Gazette (Niagara Falls, New York, May 2, 1965), 1 C.
(128) "Secretary of NAACP To Speak," Niagara Falls Gazette (Niagara Falls, New York, July 30, 1965), 4.
(129) "East Side Recasts its Image," Niagara Falls Gazette (Niagara Falls, New York, September 1, 1965), 25; "Community Action Changes Studied," Niagara Falls Gazette (Niagara Falls, New York, October 17, 1965), 1 C.
Micronutrients are essential elements required by organisms in varying quantities throughout life to orchestrate a range of physiological functions to maintain health. Micronutrient requirements differ between organisms; for example, humans and other animals require numerous vitamins and dietary minerals, whereas plants require specific minerals. For human nutrition, micronutrient requirements are in amounts generally less than 100 milligrams per day, whereas macronutrients are required in gram quantities daily.
The minerals for humans and other animals include 13 elements that originate from Earth's soil and are not synthesized by living organisms, such as calcium and iron. Micronutrient requirements for animals also include vitamins, which are organic compounds required in microgram or milligram amounts. Since plants are the primary source of nutrients for humans and animals, some micronutrients may be present at low levels, and deficiencies can occur when dietary intake is insufficient, as occurs in malnutrition; this implies a need for initiatives to address inadequate micronutrient supply in plant foods.
A multiple micronutrient powder of at least iron, zinc, and vitamin A was added to the World Health Organization's List of Essential Medicines in 2019.
At the 1990 World Summit for Children, the gathered nations identified deficiencies in two minerals and one vitamin – iodine, iron, and vitamin A – as being particularly common and posing public health risks in developing countries. The Summit set goals for the elimination of these deficiencies. The Ottawa-based Micronutrient Initiative was formed in response to this challenge with the mission to undertake research and fund and implement micronutrient programming.
As programming around these micronutrients grew, new research in the 1990s led to the implementation of folate and zinc supplementation programs as well.
Priority programs include supplementation with vitamin A for children 6–59 months, zinc supplementation as a treatment for diarrhoeal disease, iron and folate supplementation for women of child-bearing age, salt iodization, staple food fortification, multiple micronutrient powders, biofortification of crops and behavior-centered nutrition education.
There is low-quality evidence that food fortification with micronutrients may reduce the risk of anemia and micronutrient deficiency, but its effect on the height and weight of children is uncertain. Meanwhile, there are no data showing adverse effects of micronutrient fortification. Fortification of maize flour with iron and other vitamins and minerals has uncertain benefits on reducing the risk of anemia.
Salt iodization is the recommended strategy for ensuring adequate human iodine intake. To iodize salt, potassium iodate is added to salt after it is refined and dried and before it is packed. Although large-scale iodization is most efficient, given the proliferation of small-scale salt producers in developing countries, technology for small-scale iodization has also been developed. International organizations work with national governments to identify and support small salt producers in adopting iodization activity.
In 1990, less than 20 percent of households in developing countries were consuming iodized salt. By 1994, international partnerships had formed in a global campaign for Universal Salt Iodization. By 2008, it was estimated that 72 percent of households in developing countries were consuming iodized salt and the number of countries in which iodine deficiency disorders were a public health concern reduced by more than half from 110 to 47 countries.
Vitamin A supplementation
In 1997, national vitamin A supplementation programming received a boost when experts met to discuss rapid scale-up of supplementation activity, and the Micronutrient Initiative, with support from the Government of Canada, began to ensure supply to UNICEF.
In areas with vitamin A deficiency, it is recommended that children aged 6–59 months receive two doses annually. In many countries, vitamin A supplementation is combined with immunization and campaign-style health events.
Global vitamin A supplementation efforts have targeted 103 priority countries. In 1999, 16 percent of children in these countries received two annual doses of vitamin A. By 2007, the rate increased to 62 percent.
The Micronutrient Initiative, with funding from the Government of Canada, supplies 75 percent of the vitamin A required for supplementation in developing countries.
Fortification of staple foods with Vitamin A has uncertain benefits on reducing the risk of subclinical Vitamin A deficiency.
Double-fortified salt (DFS) is a public health tool for delivering nutritional iron. DFS is fortified with both iodine and iron. It was developed by Venkatesh Mannar, Executive Director of the Micronutrient Initiative and University of Toronto Professor Levente Diosady, who discovered a process for coating iron particles with a vegetable fat to prevent the negative interaction of iodine and iron.
In India, Tata Salt Plus is an iodine-plus-iron fortified salt, developed by the National Institute of Nutrition, Hyderabad through double fortification technology. This technology was offered to Tata Chemicals under a long-term MoU after due studies on bio-availability across the population strata were conducted and published by NIN.
It was first used in public programming in 2004. In September 2010 DFS was produced in the Indian state of Tamil Nadu and distributed through a state school feeding program. DFS has also been used to combat iron deficiency anemia (IDA) in the Indian state of Bihar. In September 2010, Venkatesh Mannar was named a Laureate of the California-based Tech Awards for his work in developing Double-Fortified Salt.
The returns of applying micronutrient-enriched fertilizers could be huge for human health, social and economic development. Research has shown that enriching fertilizers with micronutrients had not only an impact on plant deficiencies but also on humans and animals, through the food chain. A 1994 report by the World Bank estimated that micronutrient malnutrition costs developing economies at least 5 percent of gross domestic product. The Asian Development Bank has summarized the benefits of eliminating micronutrient deficiencies as follows:
Along with a growing understanding of the extent and impact of micronutrient malnutrition, several interventions have demonstrated the feasibility and benefits of correction and prevention. Distributing inexpensive capsules, diversifying to include more micronutrient-rich foods, or fortifying commonly consumed foods can make an enormous difference. Correcting iodine, vitamin A, and iron deficiencies can improve the population-wide intelligence quotient by 10-15 points, reduce maternal deaths by one-fourth, decrease infant and child mortality by 40 percent, and increase people's work capacity by almost half. The elimination of these deficiencies will reduce health care and education costs, improve work capacity and productivity, and accelerate equitable economic growth and national development. Improved nutrition is essential to sustain economic growth. Micronutrient deficiency elimination is as cost-effective as the best public health interventions and fortification is the most cost-effective strategy.
Fortification of staple foods with zinc exclusively may improve serum zinc levels in the population. Other effects such as improving zinc deficiency, children's growth, cognition, work capacity of adults, or blood indicators are unknown.
Experiments show that soil and foliar application of zinc fertilizer can effectively reduce the phytate zinc ratio in grain. People who eat bread prepared from zinc enriched wheat show a significant increase in serum zinc, suggesting that the zinc fertilizer strategy is a promising approach to address zinc deficiencies in humans.
Where zinc deficiency is a limiting factor, zinc fertilization can increase crop yields. Balanced crop nutrition supplying all essential nutrients, including zinc, is a cost-effective management strategy. Even with zinc-efficient varieties, zinc fertilizers are needed when the available zinc in the topsoil becomes depleted.
There are about seven nutrients essential to plant growth and health that are only needed in very small quantities. Though these are present in only small quantities, they are all necessary:
- Boron is believed to be involved in carbohydrate transport in plants; it also assists in metabolic regulation. Boron deficiency will often result in bud dieback.
- Chlorine is necessary for osmosis and ionic balance; it also plays a role in photosynthesis.
- Copper is a component of some enzymes. Symptoms of copper deficiency include browning of leaf tips and chlorosis.
- Iron is essential for chlorophyll synthesis, which is why an iron deficiency results in chlorosis.
- Manganese activates some important enzymes involved in chlorophyll formation. Manganese deficient plants will develop chlorosis between the veins of its leaves. The availability of manganese is partially dependent on soil pH.
- Molybdenum is essential to plant health. Molybdenum is used by plants to reduce nitrates into usable forms. Some plants use it for nitrogen fixation, thus it may need to be added to some soils before seeding legumes.
- Zinc participates in chlorophyll formation, and also activates many enzymes. Symptoms of zinc deficiency include chlorosis and stunted growth.
Micronutrient deficiencies are widespread. 51% of world cereal soils are deficient in zinc and 30% of cultivated soils globally are deficient in iron. Steady growth of crop yields during recent decades (in particular through the Green Revolution) compounded the problem by progressively depleting soil micronutrient pools.
In general, farmers only apply micronutrients when crops show deficiency symptoms, while micronutrient deficiencies decrease yields before symptoms appear. Some common farming practices (such as liming acid soils) contribute to widespread occurrence of micronutrient deficiencies in crops by decreasing the availability of the micronutrients present in the soil.
Biofortification of crop plants – improvement of vitamin and mineral levels through plant biotechnology – is being used in many world regions to address micronutrient deficiencies in regions of poverty and malnutrition.
- Gernand, A. D; Schulze, K. J; Stewart, C. P; West Jr, K. P; Christian, P (2016). "Micronutrient deficiencies in pregnancy worldwide: Health effects and prevention". Nature Reviews Endocrinology. 12 (5): 274–289. doi:10.1038/nrendo.2016.37. PMC 4927329. PMID 27032981.
- Tucker, K. L (2016). "Nutrient intake, nutritional status, and cognitive function with aging". Annals of the New York Academy of Sciences. 1367 (1): 38–49. Bibcode:2016NYASA1367...38T. doi:10.1111/nyas.13062. PMID 27116240.
- Jane Higdon; Victoria J. Drake (2011). Evidence-Based Approach to Vitamins and Minerals: Health Benefits and Intake Recommendations (2nd ed.). Thieme. ISBN 978-3131644725.
- Blancquaert, D; De Steur, H; Gellynck, X; Van Der Straeten, D (2017). "Metabolic engineering of micronutrients in crop plants" (PDF). Annals of the New York Academy of Sciences. 1390 (1): 59–73. Bibcode:2017NYASA1390...59B. doi:10.1111/nyas.13274. hdl:1854/LU-8519050. PMID 27801945. S2CID 9439102.
- Marschner, Petra, ed. (2012). Marschner's mineral nutrition of higher plants (3rd ed.). Amsterdam: Elsevier/Academic Press. ISBN 9780123849052.
- "Minerals". Corvallis, OR: Micronutrient Information Center, Linus Pauling Institute, Oregon State University. 2018. Retrieved 19 February 2018.
- "Vitamins and minerals". US Department of Agriculture, National Agricultural Library. 2016. Archived from the original on 31 July 2016. Retrieved 18 April 2016.
- "Vitamins". Corvallis, OR: Micronutrient Information Center, Linus Pauling Institute, Oregon State University. 2018. Retrieved 19 February 2018.
- "World Health Organization model list of essential medicines: 21st list 2019". 2019. hdl:10665/325771. Cite journal requires
- UNICEF, The State of the World’s Children 1998: Fact Sheet. http://www.unicef.org/sowc98/fs03.htm
- UNICEF Canada, Global Child Survival and Health: A 50-year progress report from UNICEF Canada, p. 68.
- Das JK, Salam RA, Mahmood SB, Moin A, Kumar R, Mukhtar K, Lassi ZS, Bhutta ZA (18 December 2019). "Food fortification with multiple micronutrients: impact on health outcomes in general population". Cochrane Database of Systematic Reviews. 12 (12): CD011400. doi:10.1002/14651858.CD011400.pub2. PMC 6917586. PMID 31849042.
- Garcia-Casal MN, Peña-Rosas JP, De-Regil LM, Gwirtz JA, Pasricha SR (22 December 2018). "Fortification of maize flour with iron for controlling anaemia and iron deficiency in populations". Cochrane Database of Systematic Reviews. 12 (12): CD010187. doi:10.1002/14651858.CD010187.pub2. PMC 6517107. PMID 30577080.
- Flour Fortification Initiative, GAIN, Micronutrient Initiative, USAID, The World Bank, UNICEF, Investing in the future: a united call to action on vitamin and mineral deficiencies, p. 19.
- UNICEF, The State of the World's Children 2010, Statistical Tables, p. 15.
- UNICEF, "Vitamin A Supplementation: a decade of progress", p. 1.
- Flour Fortification Initiative, GAIN, Micronutrient Initiative, USAID, The World Bank, UNICEF, Investing in the future: a united call to action on vitamin A and mineral deficiencies, p. 17.
- Micronutrient Initiative, Annual Report 2009-2010, p. 4.
- Hombali AS, Solon JA, Venkatesh BT, Nair NS, Peña-Rosas JP (10 May 2019). "Fortification of staple foods with vitamin A for vitamin A deficiency". Cochrane Database of Systematic Reviews. 5 (5): CD010068. doi:10.1002/14651858.CD010068.pub2. PMC 6509778. PMID 31074495.
- L.L. Diosady and M.G. Venkatesh Mannar, "Double Fortification Of Salt With Iron And Iodine", 2000
- "Tata group | Tata Chemicals | Media releases | India's first iodine plus iron fortified salt launched by Tata Chemicals". Archived from the original on 2013-02-06. Retrieved 2013-06-07.
- "Evaluating the Impact on Anemia of Making Double Fortified Salt Available in Bihar, India" | The Abdul Latif Jameel Poverty Action Lab
- World Bank (1994). Enriching Lives: Overcoming Vitamin and Mineral Malnutrition in Developing Countries. Development in Practice Series.
- Asian Development Bank (October 2000). Regional Initiative to Eliminate Micronutrient Malnutrition in Asia Through Public-Private Partnership (www.adb.org/Documents/TARs/REG/tar_oth34014.pdf). TAR: OTH 34014. Retrieved 2011-10-13.
- Shah D, Sachdev HS, Gera T, De-Regil LM, Peña-Rosas JP (9 June 2016). "Fortification of staple foods with zinc for improving zinc status and other health outcomes in the general population". Cochrane Database of Systematic Reviews (6): CD010697. doi:10.1002/14651858.CD010697.pub2. PMID 27281654.
Regression Analysis
Regression analysis will provide you with an equation for a graph so that you can make predictions about your data. For example, if you’ve been putting on weight over the last few years, it can predict how much you’ll weigh in ten years’ time if you continue to put on weight at the same rate. It will also give you a slew of statistics (including a p-value and a correlation coefficient) to tell you how accurate your model is. Most elementary stats courses cover very basic techniques, like making scatter plots and performing linear regression. However, you may come across more advanced techniques like multiple regression.
- Introduction to Regression Analysis
- Multiple Regression Analysis
- Overfitting and how to avoid it
- Related articles
In statistics, it’s hard to stare at a set of random numbers in a table and try to make any sense of it. For example, global warming may be reducing average snowfall in your town and you are asked to predict how much snow you think will fall this year. Looking at the following table you might guess somewhere around 10-20 inches. That’s a good guess, but you could make a better guess, by using regression.
Essentially, regression is the “best guess” at using a set of data to make some kind of prediction. It’s fitting a set of points to a graph. There’s a whole host of tools that can run regression for you, including Excel, which I used here to help make sense of that snowfall data:
Just by looking at the regression line running down through the data, you can fine tune your best guess a bit. You can see that the original guess (20 inches or so) was way off. For 2015, it looks like the line will be somewhere between 5 and 10 inches! That might be “good enough”, but regression also gives you a useful equation, which for this chart is:
y = -2.2923x + 4624.4.
What that means is you can plug in an x value (the year) and get a pretty good estimate of snowfall for any year. For example, 2005:
y = -2.2923(2005) + 4624.4 = 28.3385 inches, which is pretty close to the actual figure of 30 inches for that year.
Best of all, you can use the equation to make predictions. For example, how much snow will fall in 2017?
y = -2.2923(2017) + 4624.4 = 0.8 inches.
Regression also gives you an R squared value, which for this graph is 0.702. This number tells you how good your model is. The values range from 0 to 1, with 0 being a terrible model and 1 being a perfect model. As you can probably see, 0.7 is a fairly decent model so you can be fairly confident in your weather prediction!
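If you would rather reproduce this kind of fit outside a spreadsheet, the short sketch below does the same job in Python. The year/snowfall pairs are invented stand-ins (the actual data behind the chart are not reproduced here), so the slope, intercept and R squared it prints will not match the equation quoted above.

```python
# Minimal simple-linear-regression sketch on hypothetical snowfall data.
from scipy.stats import linregress

years = [2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009]
snow_inches = [40, 35, 38, 30, 33, 30, 25, 22, 20, 18]   # made-up values

fit = linregress(years, snow_inches)

print(f"equation: y = {fit.slope:.4f}x + {fit.intercept:.1f}")
print(f"R squared: {fit.rvalue ** 2:.3f}")

# Plug a future year into the fitted equation to make a prediction.
year = 2017
print(f"predicted snowfall in {year}: {fit.slope * year + fit.intercept:.1f} inches")
```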
Multiple regression analysis is almost the same as simple linear regression. The only difference between simple linear regression and multiple regression is in the number of predictors (“x” variables) used in the regression.
- Simple regression analysis uses a single x variable for each dependent “y” variable. For example: (x1, Y1).
- Multiple regression uses multiple “x” variables for each dependent “y” variable: ((x1)1, (x2)1, (x3)1, Y1).
In one-variable linear regression, you would input one dependent variable (e.g. “sales”) against an independent variable (e.g. “profit”). But you might be interested in how different types of sales affect the regression. You could set your X1 as one type of sales, your X2 as another type of sales and so on.
When to Use Multiple Regression Analysis.
Ordinary linear regression usually isn’t enough to take into account all of the real-life factors that have an effect on an outcome. For example, the following graph plots a single variable (number of doctors) against another variable (life-expectancy of women).
From this graph it might appear there is a relationship between life-expectancy of women and the number of doctors in the population. In fact, that’s probably true and you could say it’s a simple fix: put more doctors into the population to increase life expectancy. But the reality is you would have to look at other factors like the possibility that doctors in rural areas might have less education or experience. Or perhaps they have a lack of access to medical facilities like trauma centers.
The addition of those extra factors would cause you to add more independent (predictor) variables to your regression analysis and create a multiple regression analysis model.
Multiple Regression Analysis Output.
Regression analysis is always performed in software, like Excel or SPSS. The output differs according to how many variables you have but it’s essentially the same type of output you would find in a simple linear regression. There’s just more of it:
- Simple regression: Y = b0 + b1 x.
- Multiple regression: Y = b0 + b1x1 + b2x2 + … + bnxn.
The output would include a summary, similar to a summary for simple linear regression, that includes:
- R (the multiple correlation coefficient),
- R squared (the coefficient of determination),
- adjusted R-squared,
- The standard error of the estimate.
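To see what that output looks like in practice, here is a rough sketch using Python’s statsmodels package with two invented sales predictors; a real analysis would of course substitute your own data and variable names.

```python
# Multiple-regression sketch with two predictors on made-up data.
# The summary shows R-squared, adjusted R-squared and the standard error of
# the estimate; the multiple correlation R is the square root of R-squared.
import numpy as np
import pandas as pd
import statsmodels.api as sm

data = pd.DataFrame({
    "store_sales":  [120, 150, 170, 200, 230, 260, 300, 340],   # x1 (hypothetical)
    "online_sales": [30, 45, 50, 70, 85, 90, 120, 140],         # x2 (hypothetical)
    "profit":       [20, 28, 30, 41, 49, 52, 68, 77],           # y  (hypothetical)
})

X = sm.add_constant(data[["store_sales", "online_sales"]])      # adds the b0 term
model = sm.OLS(data["profit"], X).fit()

print(model.summary())                                # full regression table
print("multiple R:", np.sqrt(model.rsquared))
print("adjusted R-squared:", model.rsquared_adj)
```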
Minimum Sample size
“The answer to the sample size question appears to depend in part on the objectives of the researcher, the research questions that are being addressed, and the type of model being utilized. Although there are several research articles and textbooks giving recommendations for minimum sample sizes for multiple regression, few agree on how large is large enough and not many address the prediction side of MLR.” ~ Gregory T. Knofczynski
If you’re concerned with finding accurate values for the squared multiple correlation coefficient, minimizing the shrinkage of the squared multiple correlation coefficient, or have another specific goal, Gregory Knofczynski’s paper is a worthwhile read and comes with lots of references for further study. That said, many people just want to run MLR to get a general idea of trends and they don’t need very specific estimates. If that’s the case, you can use a rule of thumb. It’s widely stated in the literature that you should have more than 100 items in your sample. While this is sometimes adequate, you’ll be on the safer side if you have at least 200 observations or, better yet, more than 400.
Overfitting
Overfitting happens when your regression model is too complicated for the amount of data you have, which is usually because the sample size is too small. If you put enough predictor variables in your regression model, you will nearly always get a model that looks significant.
While an overfitted model may fit the idiosyncrasies of your data extremely well, it won’t fit additional test samples or the overall population. The model’s p-values, R-squared and regression coefficients can all be misleading. Basically, you’re asking too much from a small set of data.
How to Avoid Overfitting
In linear modeling (including multiple regression), you should have at least 10-15 observations for each term you are trying to estimate. Any less than that, and you run the risk of overfitting your model.
While this rule of thumb is generally accepted, Green (1991) takes this a step further and suggests that the minimum sample size for any regression should be 50, with an additional 8 observations per term. For example, if you have one interacting variable and three predictor variables, you’ll need around 45-60 items in your sample to avoid overfitting, or 50 + 3(8) = 74 items according to Green.
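If it helps to see the arithmetic, here is a tiny sketch of both rules of thumb; the term counts passed in are only illustrative.

```python
# Sketch of two sample-size rules of thumb for regression.
def rule_of_thumb(n_terms, per_term=15):
    """Roughly 10-15 observations per estimated term (upper bound used here)."""
    return n_terms * per_term

def greens_rule(n_terms):
    """Green (1991): at least 50 observations plus 8 per term."""
    return 50 + 8 * n_terms

print(greens_rule(3))     # 50 + 3*8 = 74, matching the example above
print(rule_of_thumb(3))   # 45 observations at 15 per term
```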
There are exceptions to the “10-15” rule of thumb. They include:
- When there is multicollinearity in your data, or if the effect size is small. If that’s the case, you’ll need a larger sample (although there is, unfortunately, no rule of thumb for how many more observations to add!).
- You may be able to get away with as few as 10 observations per predictor if you are using logistic regression or survival models, as long as you don’t have extreme event probabilities, small effect sizes, or predictor variables with truncated ranges. (Peduzzi et al.)
How to Detect and Avoid Overfitting
The easiest way to avoid overfitting is to increase your sample size by collecting more data. If you can’t do that, the second option is to reduce the number of predictors in your model — either by combining or eliminating them. Factor Analysis is one method you can use to identify related predictors that might be candidates for combining.
Use cross validation to detect overfitting: this partitions your data, generalizes your model, and chooses the model which works best. One form of cross-validation is predicted R-squared. Most good statistical software will include this statistic, which is calculated by:
- Removing one observation at a time from your data,
- Estimating the regression equation for each iteration,
- Using the regression equation to predict the removed observation.
Cross validation isn’t a magic cure for small data sets though, and sometimes a clear model isn’t identified even with an adequate sample size.
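As a bare-bones illustration of the leave-one-out idea, the sketch below computes a predicted R-squared for a simple straight-line fit on invented data; statistical packages implement more general (and more efficient) versions of the same calculation.

```python
# Leave-one-out "predicted R-squared" sketch (simple linear regression only).
import numpy as np

x = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8, 12.2, 13.9, 16.1])   # hypothetical data

press = 0.0   # predicted residual sum of squares
for i in range(len(x)):
    keep = np.arange(len(x)) != i                   # drop observation i
    slope, intercept = np.polyfit(x[keep], y[keep], 1)
    press += (y[i] - (slope * x[i] + intercept)) ** 2

ss_total = np.sum((y - y.mean()) ** 2)
print(f"predicted R-squared: {1 - press / ss_total:.3f}")
```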
2. Shrinkage & Resampling
3. Automated Methods
Automated stepwise regression shouldn’t be used as an overfitting solution for small data sets. According to Babyak (2004),
“The problems with automated selection conducted in this very typical manner are so numerous that it would be hard to catalogue all of them [in a journal article].”
Babyak also recommends avoiding univariate pretesting or screening (a “variation of automated selection in disguise”), dichotomizing continuous variables — which can dramatically increase Type I errors, or multiple testing of confounding variables (although this may be ok if used judiciously).
Gonick, L. (1993). The Cartoon Guide to Statistics. HarperPerennial.
Lindstrom, D. (2010). Schaum’s Easy Outline of Statistics, Second Edition (Schaum’s Easy Outlines) 2nd Edition. McGraw-Hill Education
- Babyak, M.A.,(2004). “What you see may not be what you get: a brief, nontechnical introduction to overfitting in regression-type models.” Psychosomatic Medicine. 2004 May-Jun;66(3):411-21.
- Green S.B., (1991) “How many subjects does it take to do a regression analysis?” Multivariate Behavior Research 26:499–510.
- Peduzzi P.N., et. al (1995). “The importance of events per independent variable in multivariable analysis, II: accuracy and precision of regression estimates.” Journal of Clinical Epidemiology 48:1503–10.
- Peduzzi P.N., et. al (1996). “A simulation study of the number of events per variable in logistic regression analysis.” Journal of Clinical Epidemiology 49:1373–9.
- Additive Model & Multiplicative Model
- How to Construct a Scatter Plot.
- How to Calculate Pearson’s Correlation Coefficients.
- How to Compute a Linear Regression Test Value.
- Chow Test for Split Data Sets
- Forward Selection
- What is Kriging?
- How to Find a Linear Regression Equation.
- How to Find a Regression Slope Intercept.
- How to Find a Linear Regression Slope.
- How to Find the Standard Error of Regression Slope.
- Mallows’ Cp
- Validity Coefficient: What it is and how to find it.
- Quadratic Regression.
- Quartic Regression
- Stepwise Regression
- Unstandardized Coefficient
- Assumptions and Conditions for Regression.
- Betas / Standardized Coefficients.
- What is a Beta Weight?
- Bilinear Regression
- The Breusch-Pagan-Godfrey Test
- Cook’s Distance.
- What is a Covariate?
- Cox Regression.
- Detrend Data.
- Gauss-Newton Algorithm.
- What is the General Linear Model?
- What is the Generalized Linear Model?
- What is the Hausman Test?
- What is Homoscedasticity?
- Influential Data.
- What is an Instrumental Variable?
- Lack of Fit
- Lasso Regression.
- Levenberg–Marquardt Algorithm
- What is the Line of best fit?
- What is Logistic Regression?
- What is the Mahalanobis distance?
- Model Misspecification.
- Multinomial Logistic Regression.
- What is Nonlinear Regression?
- Ordered Logit / Ordered Logistic Regression
- What is Ordinary Least Squares Regression?
- Parsimonious Models.
- What is Pearson’s Correlation Coefficient?
- Poisson Regression.
- Probit Model.
- What is a Prediction Interval?
- What is Regularization?
- Regularized Least Squares.
- Regularized Regression
- What are Relative Weights?
- What are Residual Plots?
- Reverse Causality.
- Ridge Regression
- Root Mean Square Error.
- Semiparametric models
- Simultaneity Bias.
- Simultaneous Equations Model.
- What is Spurious Correlation?
- Structural Equations Model
- What are Tolerance Intervals?
- Trend Analysis
- Tuning Parameter
- What is Weighted Least Squares Regression?
- Y Hat explained.
Regression is fitting data to a line (Minitab can also perform other types of regression, like quadratic regression). When you find regression in Minitab, you’ll get a scatter plot of your data along with the line of best fit, plus Minitab will provide you with:
- Standard Error (how much the data points deviate from the mean).
- R squared: a value between 0 and 1 which tells you how well your data points fit the model.
- Adjusted R2 (adjusts R2 to account for data points that do not fit the model).
Regression in Minitab takes just a couple of clicks from the toolbar and is accessed through the Stat menu.
Example question: Find regression in Minitab for the following set of data points that compare calories consumed per day to weight:
Calories consumed daily (Weight in lb): 2800 (140), 2810 (143), 2805 (144), 2705 (145), 3000 (155), 2500 (130), 2400 (121), 2100 (100), 2000 (99), 2350 (120), 2400 (121), 3000 (155).
Step 1: Type your data into two columns in Minitab.
Step 2: Click “Stat,” then click “Regression” and then click “Fitted Line Plot.”
Step 3: Click a variable name for the response (dependent) variable in the left-hand window. For this sample question, we want to know if calories consumed affect weight, so calories is the independent variable (X) and weight is the dependent variable (Y). Click “Weight” and then click “Select.”
Step 4: Repeat Step 3 for the predictor (independent) X variable, calories.
Step 5: Click “OK.” Minitab will create a regression line graph in a separate window.
Step 6: Read the results. As well as creating a regression graph, Minitab will give you values for S, R-sq and R-sq(adj) in the top right corner of the fitted line plot window.
s = standard error.
R-Sq = Coefficient of Determination
R-Sq(adj) = Adjusted Coefficient of Determination (Adjusted R Squared).
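If you don’t have Minitab, the same three statistics can be reproduced in Python. The sketch below uses the calorie/weight pairs from the example question; the numbers it prints come from the data themselves, not from Minitab’s own output.

```python
# Reproducing S, R-sq and R-sq(adj) for the calories/weight example.
import numpy as np
import statsmodels.api as sm

calories = [2800, 2810, 2805, 2705, 3000, 2500, 2400, 2100, 2000, 2350, 2400, 3000]
weight = [140, 143, 144, 145, 155, 130, 121, 100, 99, 120, 121, 155]

X = sm.add_constant(np.array(calories, dtype=float))   # predictor plus intercept
model = sm.OLS(np.array(weight, dtype=float), X).fit()

print(f"S = {np.sqrt(model.mse_resid):.3f}")           # standard error of the regression
print(f"R-sq = {model.rsquared:.3f}")
print(f"R-sq(adj) = {model.rsquared_adj:.3f}")
```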
Stephanie Glen. "Regression Analysis: Step by Step Articles, Videos, Simple Definitions" From StatisticsHowTo.com: Elementary Statistics for the rest of us! https://www.statisticshowto.com/probability-and-statistics/regression-analysis/
Need help with a homework or test question? With Chegg Study, you can get step-by-step solutions to your questions from an expert in the field. Your first 30 minutes with a Chegg tutor is free!
Comments? Need to post a correction? Please post a comment on our Facebook page. | https://www.statisticshowto.com/probability-and-statistics/regression-analysis/ | 21 |
‘Time value of money’ is central to the concept of finance. It recognizes that the value of money is different at different points of time. Since money can be put to productive use, its value is different depending upon when it is received or paid.
In simpler terms, the value of a certain amount of money today is more valuable than its value tomorrow. It is not because of the uncertainty involved with time but purely on account of timing. The difference in the value of money today and tomorrow is referred to as the time value of money.
1. Meaning of Time Value of Money
The time value of money is one of the basic theories of financial management, it states that ‘the value of money you have now is greater than a reliable promise to receive the same amount of money at a future date’.
The time value of money (TVM) is the idea that money available at the present time is worth more than the same amount in the future due to its potential earning capacity. This core principle of finance holds that, provided money can earn interest, any amount of money is worth more the sooner it is received.
The time value of money is the greater benefit of receiving money now rather than receiving it later. It is founded on time preference. The principle of the time value of money explains why interest is paid or earned. Interest, whether it is on a bank deposit or debt, compensates the depositor or lender for the time value of money.
2. Concept of Time Value of Money
Important terms or concepts used in computing the time value of money are-
(1) Cash flow
(2) Cash inflow
(3) Cash outflow
(4) Discounted Cash flow
(5) Even cash flows /Annuity cash flows
(6) Uneven/mixed streams of cash flows
(7) Single cash flows
(8) Multiple cash flows
(9) Future value
(10) Present value
(13) Effective interest rate / Time preference rate
(14) Risks and types of risks
(15) Uncertainty, and
(16) Doubling Period.
The above concepts are briefly explained below:
(1) Cash Flow:
Cash flow is either a single sum or the series of receipts or payments occurring over a specified period of time. Cash flows are of two types, namely cash inflow and cash outflow, and cash flows may be of many varieties, namely: single cash flow, mixed cash flow streams, even cash flows or uneven cash flows.
(2) Cash Inflow:
Cash inflows refer to the receipts of cash, for the investment made on the asset/project, which comes into the hands of an individual or into the business organisation account at a point of time/s. Cash inflow may be a single sum or series of sums (even or uneven/mixed) over a period of time.
(3) Cash Outflow:
Cash outflow is just opposite to cash inflow, which is the original investment made on the project or the asset, which results in the payment/s made towards the acquisition of asset or getting the project over a period of time/s.
(4) Discounted Cash Flow- The Mechanics of Time Value:
The present value of a future cash flow (inflows or outflows) is the amount of current cash that is of equivalent value to the decision maker today. The process of determining present value of a future payment (or receipts) or a series of future payments (or receipts) is called discounting. The compound interest rate used for discounting cash flows is called discount rate.
(5) Even Cash Flows /Annuity Cash Flows:
Even cash flows, also known as annuities, are equal (fixed) streams of cash flows, whether inflows or outflows, occurring at regular intervals over a specified period of time.
Annuities are also defined as ‘a series of uniform receipts or payments occurring over a number of years, which results from an initial deposit.’
In simple words, constant periodic sums are called annuities.
It is essential to discuss some of the aspects related to annuities, which are discussed as below:
1. Annuitant
2. Status
3. Perpetuity
4. Various types of Annuity-
i. Annuity Certain
ii. Annuity Contingent
iii. Immediate or Ordinary annuity
iv. Annuity due
v. Perpetual annuity
vi. Deferred annuity
5. Annuity factor-
(i) Present Value Annuity factor, and
(ii) Compound value annuity factor.
A brief description each of the above aspects is as follows:
i. Annuitant is a person or an institution, who receives the annuity.
ii. Status refers to the period for which the annuity is payable or receivable.
iii. Perpetuity is an infinite or indefinite period for which the amount exists.
iv. a. Annuity Certain refers to an annuity which is payable or receivable for a fixed number of years.
b. Annuity Contingent refers to the payment/receipt of an annuity till the happening of a certain event/incident.
c. Immediate annuities are those receipts or payments, which are made at the end of the each period.
d. A series of cash flows (i.e., receipts or payments) starting at the beginning of each period for a specified number of periods is called an Annuity due. This implies that the first cash flow has occurred today.
e. Perpetual annuities when, annuities payments are made for ever or for an indefinite or infinite periods.
f. Deferred annuities are those receipts or payments, which starts after a certain number of years.
v. (a) Present Value of Annuity factor is the sum of the present value of Re. 1 for the given period of time duration at the given rate of interest;
(b) Compound value/Future value of annuity factor is the sum of the future value of Re. 1 for the given period of time duration at the given rate of interest. This is the reciprocal of the present value annuity discount factor.
Note – When the interest rate rises, the present value of a lump sum or an annuity declines. The present value factor declines with higher interest rate, other things remaining the same.
vi. Sinking fund is a fund which is created out of fixed payments each period (annuities) to accumulate to a future sum after a specified period. The compound value of an annuity can be used to calculate the annuity to be deposited to a sinking fund for ‘n’ periods at ‘i’ rate of interest to accumulate to a given sum.
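As a rough illustration of how these annuity factors and the sinking-fund idea work in practice, here is a small sketch; the 10% rate, five-year term and Rs.100000 target are arbitrary figures chosen for the example.

```python
# Sketch: annuity factors and a sinking-fund payment (assumed figures).
def pv_annuity_factor(i, n):
    """Present value of Re. 1 received at the end of each of n periods."""
    return (1 - (1 + i) ** -n) / i

def fv_annuity_factor(i, n):
    """Future (compound) value of Re. 1 deposited at the end of each of n periods."""
    return ((1 + i) ** n - 1) / i

i, n = 0.10, 5
print(round(pv_annuity_factor(i, n), 4))    # about 3.7908
print(round(fv_annuity_factor(i, n), 4))    # about 6.1051

# Annual deposit needed for a sinking fund to accumulate to Rs.100000 in 5 years at 10%.
print(round(100000 / fv_annuity_factor(i, n), 2))   # about 16379.75
```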
(6) Uneven/Mixed Streams of Cash Flows:
Uneven cash flows, as the term itself suggests, are unequal or mixed streams of cash inflows emanating from the investment made in the asset or the project.
(7) Single Cash Inflows:
A single cash inflow is a single sum of receipt of cash generated from the project during the given period, for which the present value is ascertained by multiplying the cash inflow by the discount factor.
(8) Multiple Cash Inflows:
Multiple cash inflows (even or mixed cash inflows) are the series of cash flows, may be annuities/mixed streams of cash inflows which are generated from the project over the entire life of the asset.
(9) Future Value/Compound Value [FV/CV]:
The future value concept states how much a current cash flow or stream of cash flows will be worth at the end of a specified period at a given discount or interest rate. Future value refers to the worth of the current sum or series of cash flows invested or lent at a specified rate of return or rate of interest at the end of a specified period.
In simple terms, future value refers to the value of a cash flow or series of cash flows at some specified future time at specified time preference rate for money.
The process of determining the future value of present money is called compounding. In other words, compounding is a process of investing money, reinvesting the interest earned & finding value at the end of specified period is called compounding.
In simple words, calculation of maturity value of an investment from the amount of investment made is called compounding.
Under compounding technique the interest earned on the initial principal become part of principal at the end of compounding period. Since interest goes on earning interest over the life of the asset, this technique of time value of money is also known as ‘compounding’.
The simple formula to calculate Compound Value in different interest time periods is-
(a) If Interest is added at the end of each year or compounded annually-
FV or CV = PV (1 + i)^n
Where, FV or CV = Future Value or Compound Value, PV = Present Value,
(1 + i)^n = Compound Value factor of Re. 1 at a given interest rate for a certain number of years.
(b) If interest is added/compounded semi-annually or at other intervals (multi-period compounding), the formula becomes FV or CV = PV (1 + i/m)^(m × n), where m is the number of compounding periods per year and n is the number of years (a short code sketch follows the examples below).
Say for example;
(i) When Compounding is made semi-annually, then m=2 (because two half years in one year).
(ii) When Compounding is made quarterly, then m= 4 (because, 4 quarter years in one year).
(iii) When Compounding is made monthly, then m= 12 (because, 12 months in one year).
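The sketch below applies the multi-period compounding formula, assuming a deposit of Rs.10000 at 12% per annum for one year; the figures are chosen only for illustration.

```python
# Sketch: future value under different compounding frequencies (assumed figures).
def future_value(pv, annual_rate, years, m):
    """m = number of compounding periods per year."""
    return pv * (1 + annual_rate / m) ** (m * years)

pv, rate, years = 10000, 0.12, 1
for m, label in [(1, "annually"), (2, "semi-annually"), (4, "quarterly"), (12, "monthly")]:
    print(f"{label:15s}: {future_value(pv, rate, years, m):,.2f}")
```

Note how the maturity value rises slightly as the compounding becomes more frequent, even though the quoted annual rate stays the same.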
(11) Present Value:
The present value is just opposite to the future value. Present value refers to the present worth of a future sum of money or streams of cash flows at a specified interest rate or rate of return. It is also called a discounted value.
In simple terms it refers to the current value of a future cash flow or series of cash flows.
The inverse of the compounding process is discounting technique. The process of determining the present value of future cash flows is called discounting.
Discounting or Present Value technique is more popular than compounding technique, since every individual or an organisation intends to have/hold present sums, rather than getting some amount of money after some time, because of time preference for money.
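To make discounting concrete, here is a minimal sketch; the Rs.1100 receipt and the 10% rate are assumed figures used only for illustration.

```python
# Sketch: discounting a single future cash flow back to its present value.
def present_value(fv, i, n):
    return fv / (1 + i) ** n

# Rs.1100 receivable one year from now, discounted at 10%, is worth Rs.1000 today.
print(round(present_value(1100, 0.10, 1), 2))
```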
(13) Effective Interest Rate / Time Preference Rate:
Time preference rate is used to translate the different amounts received at different time periods into amounts equivalent in value to the firm/individual in the present, at a common point of reference. This time preference rate is normally expressed in ‘percent’ to find out the value of money at present or in future.
(14) Risk and Uncertainty:
In business, the finance manager is supposed to take a number of decisions under different situations. In all such decisions, there is an element of risk and uncertainty.
Risk is the ‘variability of returns’ or the ‘chance of financial losses’ associated with the given asset. Assets that are having higher chances of loss or the higher rate of variability in returns are viewed as ‘risky assets’ and vice versa. Hence care should be taken to recognize and to measure the extent of risk associated with the assets, before taking the decision to invest on such risky assets.
3. Importance of Time Value of Money
The consideration of time is important, and its adjustment in financial decision making is equally important and inevitable. Most financial decisions, such as the procurement of funds, purchase of assets, maintenance of liquidity and distribution of profits etc., affect the firm’s cash flows/movement of cash in and out of the organization in different time periods.
Cash flows occurring in different time periods are not comparable, but they should be properly measurable. Hence, it is required to adjust the cash flows for their differences in timing and risk. The value of cash flows to a common time point should be calculated.
To maximize the owner’s equity, it’s extremely vital to consider the timing and risk of cash flows. The choice of the risk adjusted discount rate (interest rate) is important for calculating the present value of cash flows.
For instance, if the time preference rate is 10 percent, an investor would be willing to give up Rs.1000 today only if he is offered at least Rs.1100 after one year. Rs.1100 is the future value of Rs.1000 today at 10% interest rate.
Thus, the individual is indifferent between Rs.1000 and Rs.1100 a year from now as he/she considers these two amounts equivalent in value. You can also say that Rs.1000 today is the present value of Rs.1100 after a year at 10% interest rate.
Time value adjustment is important for both short-term and long-term decisions. If the amounts involved are very large, time value adjustment even for a short period will have significant implications.
However, other things being same, adjustment of time is relatively more important for financial decisions with long range implications than with short range implications. Present value of sums far in the future will be less than the present value of sums in the near future.
The concept of time value of money is of immense use in all financial decisions.
The time value concept is used
1. To compare the investment alternatives to judge the feasibility of proposals.
2. In choosing the best investment proposals to accept or to reject the proposal for investment.
3. In determining the interest rates, thereby solving the problems involving loans, mortgages, leases, savings and annuities.
4. To find the feasible time period to get back the original investment or to earn the expected rate of return.
5. Helps in wage and price fixation.
4. Reasons for Time Preference of Money / Reasons for Time Value of Money
There are three primary reasons for the time value of money- reinvestment opportunities; uncertainty and risk; preference for current consumption.
These reasons are explained below:
1. Reinvestment Opportunities:
The main fundamental reason for Time value of money is reinvestment opportunities.
Funds which are received early can be reinvested in order to earn money on them. The basic premise here is that the money which is received today can be deposited in a bank account so as to earn some return in terms of income.
In India, the savings bank rate is about 4% while the fixed deposit rate is about 7% for a one-year deposit in public sector banks. Therefore even if the person does not have any other profitable investment opportunity to invest his funds, he can simply put his money in a savings bank account and earn interest income on it.
Let us assume that Mr. X receives Rs.100000 in cash today. He can invest or deposit this Rs.100000 in fixed deposit account and earn 7% interest p.a. Therefore at the end of one year his money of Rs.100000 grows to Rs.107000 without any efforts on the part of Mr. X.
If he deposits Rs.100000 in two years fixed deposit providing interest rate 7% p.a. then at the end of second year his money will grow to Rs.114490 (i.e. Rs.107000+ 7% of Rs.107000). Here we assume that interest is compounded annually i.e. we do not have a simple interest rate but compounded interest rate of 7%.
Thus Time value of money is the compensation for time.
2. Uncertainty and Risk:
Another reason for the time value of money is that funds received early resolve the uncertainty and risk surrounding future cash flows. The future is uncertain and unpredictable; at best, we can make guesses about it, with probabilities assigned to the expected outcomes.
Therefore given a choice between Rs.100 to be received today or Rs.100 to be received in future say one year later, every rational person will opt for Rs.100 today. This is because the future is uncertain. It is better to get money as early as possible rather than keep waiting for it.
The underlying principle is “A bird in hand is better than two in the bush.”
It must be noted that there is a difference between risk and uncertainty.
In a risky situation we can assign probabilities to the expected outcomes. Probability is the chance of occurrence of an event or outcome. For example, I may get Rs.100 with 90% probability in the future; therefore, there is a 10% probability of not getting it at all. In a risky situation, outcomes are predictable with probabilities.
In case of an uncertain situation it is not possible to assign probabilities to the expected outcomes. In such a situation the outcomes are not predictable.
3. Preference for Current Consumption:
The third fundamental reason for Time value of money is preference for current consumption. Everybody prefers to spend money today on necessities or luxuries rather than in future, unless he is sure that in future he will get more money to spend.
Let us take an example, Your father gives you two options – to get Wagon R today on your 20th birthday OR to get Wagon R on your 21st birthday which is one year later.
Which one would you choose? Obviously you would prefer Wagon R today rather than one year later. So every rational person has a preference for current consumption. Those who save for future, do so to get higher money and hence higher consumption in future.
In the above example of a car if your father says that he can give you a bigger car, say Honda City on your 21st birthday, then you may opt for this option if you think that it is better to wait and get a bigger car next year rather than settling for a small car this year.
Thus we can say that the amount of money which is received early (or today) carries more value than the same amount of money which is received later (or in future). This is Time Value of Money.
5. Valuation Concepts
There are the following two valuation concepts:
1) Compound Value Concept (Future Value or Compounding)
2) Present Value Concept (Discounting)
1) Compound Value Concept:
The compound value concept is used to find out the future value (FV) of present money. The future value is the amount to which a sum of money today will grow at some point of time in the future; equivalently, a given sum today is worth more than the same sum received in the future.
It is the same as the concept of compound interest, wherein the interest earned in a preceding year is reinvested at the prevailing rate of interest for the remaining period. Thus, the accumulated amount (principal + interest) at the end of a period becomes the principal amount for calculating the interest for the next period.
The compounding technique to find out the FV to present money can be explained with reference to:
i) The FV of a single present cash flow,
ii) The FV of a series of equal cash flows and
iii) The FV of multiple flows.
i) FV of a Single Present Cash Flow: The future value of a single cash flow is given by the equation FV = PV (1 + i)^n, where PV is the present cash flow, i is the rate of interest per period and n is the number of periods.
Mr. A makes a deposit of Rs. 10,000 in a bank which pays 10% interest compounded annually for 5 years. You are required to find out the amount to be received by him after 5 years.
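The same relation, checked in a short Python snippet using the values from Mr. A's problem above:

```python
pv, i, n = 10_000, 0.10, 5
fv = pv * (1 + i) ** n          # FV = PV * (1 + i)^n
print(round(fv, 2))             # 16105.1 -> Mr. A receives about Rs.16,105 after 5 years
```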
ii) Future Value of Series of Equal Cash Flows or Annuity of Cash Flows:
Quite often a decision may result in the occurrence of cash flows of the same amount every year for a number of years consecutively, instead of a single cash flow. For example, a deposit of Rs. 1,000 each year is to be made at the end of each of the next 3 years from today.
This may be referred to as an annuity of deposit of Rs. 1,000 for 3 years. An annuity is thus, a finite series of equal cash flows made at regular intervals.
In general terms, the future value of an annuity is given as:
It is evident from the above that future value of an annuity depends upon three variables, A, r and n. The future value will vary if any of these three variables changes. For computation purposes, tables or calculators can be made use of.
Mr. A is required to pay five equal annual payments of Rs. 10,000 each in his deposit account that pays 10% interest per year. Find out the future value of annuity at the end of four years.
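A hedged Python sketch of the ordinary-annuity future value formula. The worked example above mentions five payments but a four-year horizon; the sketch assumes five end-of-year payments of Rs.10,000 valued at the date of the last payment, which is the usual ordinary-annuity convention.

```python
def fv_ordinary_annuity(payment: float, rate: float, periods: int) -> float:
    """FV of equal end-of-period payments, valued at the date of the last payment."""
    return payment * (((1 + rate) ** periods - 1) / rate)

print(round(fv_ordinary_annuity(10_000, 0.10, 5), 2))   # 61051.0
```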
iii) Future Value of Multiple Flows:
Suppose the investment is Rs. 1,000 now (beginning of year 1), Rs.2,000 at the beginning of year 2 and Rs.3,000 at the beginning of year 3, how much will these flows accumulate at the end of year 3 at a rate of interest of 12 percent per annum?
To determine the accumulated sum at the end of year 3, add the future compounded values of Rs.1,000, Rs.2,000 and Rs.3,000 respectively:
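A short Python check of this multiple-flows example (12% p.a.; each deposit compounds until the end of year 3):

```python
rate = 0.12
# (amount, number of years it remains invested until the end of year 3)
flows = [(1_000, 3), (2_000, 2), (3_000, 1)]
accumulated = sum(amount * (1 + rate) ** years for amount, years in flows)
print(round(accumulated, 2))   # 7273.73 -> accumulated sum at the end of year 3
```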
2) Present Value Concept:
Present values allow us to place all the figures on a current footing so that comparisons may be made in terms of today’s rupees. Present value concept is the reverse of compounding technique and is known as the discounting technique.
As there are FVs of sums invested now, calculated as per the compounding techniques, there are also the present values of a cash flow scheduled to occur in future.
The present value is calculated by the discounting technique, applying the following equation: PV = FV / (1 + i)^n.
The discounting technique to find out the PV can be explained in terms of:
i) Present Value of a Future Sum:
The present value of a future sum will be worth less than the future sum because one forgoes the opportunity to invest and thus forgoes the opportunity to earn interest during that period. In order to find out the PV of future money, this opportunity cost of the money is to be deducted from the future money.
The present value of a single cash flow can be computed with the help of following formula:
Find out the present value of Rs.3,000 receivable 10 years hence, if the discount rate is 10%.
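A quick Python check of this discounting (Rs.3,000 due in 10 years at 10%):

```python
fv, i, n = 3_000, 0.10, 10
pv = fv / (1 + i) ** n        # PV = FV / (1 + i)^n
print(round(pv, 2))           # 1156.63 -> present value of Rs.3,000 receivable in 10 years
```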
Mr. A makes a deposit of Rs. 5000 in a bank which pays 10% interest compounded annually. You are required to find out the amount to be received after 5 years.
ii) PV of a Series of Equal Future Cash Flows or Annuity:
A decision taken today may result in a series of future cash flows of the same amount over a number of years.
For example, a service agency offers the following options for a 3-year contract:
a) Pay only Rs.2,500 now and no more payment during the next 3 years, or
b) Pay Rs.900 each at the end of first year, second year and third year from now. A client having a rate of interest at 10% p.a. can choose an option on the basis of the present values of both options as follows:
The payment of Rs.2,500 now is already in terms of the present value and therefore does not require any adjustment.
The customer has to pay an annuity of Rs.900 for 3 years.
In order to find out the PV of a series of payments, the PVs of the different amounts accruing at different times are calculated and then added. For the above example, the total PV is Rs.2,238. In this case, the client should select option (b), as he is paying a lower amount of Rs.2,238 in present value terms as against Rs.2,500 payable under option (a).
The present value of an annuity may be expressed as follows:
Find out the present value of a 5 years annuity of Rs.50, 000 discounted at 8%.
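A Python sketch of the present value of the annuity in the example above (Rs.50,000 a year for 5 years at 8%):

```python
def pv_ordinary_annuity(payment: float, rate: float, periods: int) -> float:
    """PV of equal end-of-period payments: A * (1 - (1 + r)**-n) / r."""
    return payment * (1 - (1 + rate) ** -periods) / rate

print(round(pv_ordinary_annuity(50_000, 0.08, 5), 2))   # 199635.5 (about Rs.1,99,636)
```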
6. Techniques Used to Understand the Concept of Time Value of Money
Basically two techniques are used to find the time value of money.
1. Compounding Technique or Future Value Technique
2. Discounting Technique or Present Value technique
1. Compounding Technique:
The compounding technique is the reverse of the discounting technique: a present sum of money is converted into a future sum by multiplying the present value by the compound value factor for the required rate of interest and period.
Hence the future value (compound value) is the product of the present value of a given sum of money and the compound value factor.
The simple formulas are used to calculate the Compound value of a single sum:
(a) If interest is compounded annually is-
FV = PV (1 + i)^n = PV × CVF(n, i)
Note: (1 + i)^n is the future value (compound value) factor, and
CVF(n, i) = compound value factor for the given number of years at the required rate of interest.
(b) If interest is compounded semi-annually, or more generally m times a year, the future value is FV = PV (1 + i/m)^(m × n), where m is the number of compounding periods per year.
2. Discounting Technique or Present Value Technique:
Discounting technique or present value technique is the process of converting the future cash flows into present cash flows by using an interest rate/time preference rate/discount rate.
The simple formula used to calculate the present value of a single sum is: P = A / (1 + i)^n = A × PVF(n, i)
P = present value, PVF = present value factor of Re.1, DF = discount factor of Re.1, A = future value (compound value), i = interest rate and n = number of years or time periods from 1 to n, with (1 + i)^n being the compound value factor.
So from the above formula it is clear that the present value of future cash flows is the product of the future sum of money and the discount factor, or equivalently the quotient of the future sum of money and the compound value factor (1 + i)^n.
Note – Present value can be computed for all types of cash flows, say single sum/ multiple sums, even / annuity sums and mixed/un-even sums.
Alternatively, the PVF/DF and CVF of a rupee, as well as the present value annuity factor (PVAF) and the compound value annuity factor (CVAF), at the given rate of interest for the expected period, can be read from standard tables.
7. Present Value Technique or Discounting Technique
It is a process of computing the present value of cash flow (or a series of cash flows) that is to be received in the future. Since money in hand has the capacity to earn interest, a rupee is worth more today than it would be worth tomorrow.
Discounting is one of the core principles of finance and is the primary factor used in pricing a stream of future receipts. As a method, discounting is used to determine how much these future receipts are worth today.
It is just the opposite of compounding: the same compound interest rate is used, but to determine the present value corresponding to a future value. For example, Rs.1,000 compounded at an annual interest rate of 10% becomes Rs.1,771.56 in six years.
Conversely, the present value of Rs.1,771.56 realized after six years of investment is Rs.1,000 when discounted at an annual rate of 10%. This present value is computed by multiplying the future value by a discount factor, which is the reciprocal of the corresponding compound value factor.
Present value calculations determine what the value of a cash flow received in the future would be worth today (that is at time zero). The process of finding a present value is called discounting; the discounted value of a rupee to be received in future gets smaller as it is applied to a distant future.
The interest rate used to discount cash flows is generally called the discount rate. How much would Rs.100 received five years from now be worth today if the current interest rate is 10%?
Let us draw a timeline.
The arrow represents the flow of money and the numbers under the timeline represent the time period. It may be noted that time period zero is today, corresponding to which the value is called present value.
A generalized procedure for calculating the present value of a single amount discounted annually is as given below:
I. Ascertaining the Present Value (PV):
The discounting technique that facilitates the ascertainment of present value of a future cash flow may be applied in the following specific situations:
(a) Present Value of a Single Future Cash Flow:
The present value of a single future cash flow may be ascertained by rearranging the usual compound interest formula, as given below:
Let us understand the computation of present value with the help of an example that follows:
Mr. Aman shall receive Rs.25,000 after 4 years. What is the present value of this future receipt, if the rate of interest is 12% p.a.?
(b) Present Value of Series of Equal Cash Flows (Annuity):
An annuity is a series of equal cash flows that occur at regular intervals for a finite period of time. These are essentially a series of constant cash flows that are received at a specified frequency over the course of a fixed time period. The most common payment frequencies are yearly, semi-annually, quarterly and monthly.
There are two types of annuities – ordinary annuity and annuity due. Ordinary annuities are payments (or receipts) that are required at the end of each period. Issuers of coupon bonds, for example, usually pay interest at the end of every six months until the maturity date. Annuity due are payments (or receipts) that are required in the beginning of each period.
Payment of rent, lease etc., are examples of annuity due. Since the present and future value calculations for ordinary annuities and annuities due are slightly different, we will first discuss the present value calculation for ordinary annuities.
The formula for calculating the present value of a single future cash flow may be extended to compute present value of series of equal cash flow as given below:
An LED TV can be purchased by paying Rs.50,000 now or Rs.20,000 each at the end of first, second and third year respectively. To pay cash now, the buyer would have to withdraw the money from an investment, earning interest at 10% p.a. compounded annually. Which option is better and by how much, in present value terms?
Let paying Rs.50,000 now be Option I and payment in three equal installments of Rs.20,000 each be Option II, the present value of cash outflows of Option II is computed as:
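The comparison can be reproduced in a few lines of Python; the option labels follow the text and the 10% rate is taken from the example:

```python
rate = 0.10
option_1 = 50_000                                             # pay cash now (already a present value)
option_2 = sum(20_000 / (1 + rate) ** t for t in (1, 2, 3))   # three year-end instalments
print(round(option_2, 2))     # 49737.04
print(option_2 < option_1)    # True -> the instalment plan is cheaper by about Rs.263 in PV terms
```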
(c) Present Value of a Series of Unequal Cash Flows:
The formula for computing present value of an annuity is based on the assumption that cash flows at each time period are equal.
However, quite often cash flows are unequal because profits of a firm, for instance, which culminate into cash flows, are not constant year after year.
The formula for calculating the present value of a single future cash flow may be extended to compute present value of series of unequal cash flows as given below:
Ms. Ameeta shall receive Rs.30,000, Rs.20,000, Rs.12,000 and Rs.6,000 at the end of first, second, third and fourth year from an investment proposal. Calculate the present value of her future cash flows from this proposal, given that the rate of interest is 12% p.a.
If Ms. Ameeta lends Rs.55,086 @ 12%p.a, the borrower may settle the loan by paying Rs.30,000, Rs.20,000, Rs.12,000 and Rs.6,000 at the end of first, second, third and fourth year.
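A short Python check of Ms. Ameeta's cash flows; the small difference from Rs.55,086 in the text comes from using exact discount factors rather than four-decimal table values:

```python
rate = 0.12
cash_flows = {1: 30_000, 2: 20_000, 3: 12_000, 4: 6_000}   # year -> amount
pv = sum(amount / (1 + rate) ** year for year, amount in cash_flows.items())
print(round(pv, 2))   # 55084.06
```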
(d) Present Value of Perpetuity:
A perpetuity is a stream of equal cash flows that occur at regular intervals and last forever; an annuity that runs for an infinite period of time thus becomes a perpetuity. Although it may seem a bit illogical, an infinite series of cash flows can have a finite present value.
Examples of Perpetuity:
(i) Local governments set aside funds so that certain cultural activities are carried on a regular basis.
(ii) A fund is set-up to provide scholarship to meritorious needy students on a regular basis.
(iii) A charity club sets up a fund to provide a flow of regular payments forever to needy children.
The present value of a perpetuity is computed as: PV = A / i, where A is the constant periodic cash flow and i is the rate of interest per period.
A philanthropist wishes to institute a scholarship of Rs.25,000 p.a., payable to a meritorious student in an educational institution. How much should he invest at 8% p.a. so that the required amount of scholarship becomes available in perpetuity as the yield on the investment?
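The required corpus follows directly from PV = A / i; a one-line Python check:

```python
annual_scholarship = 25_000
rate = 0.08
corpus = annual_scholarship / rate     # PV of a perpetuity = A / i
print(corpus)                          # 312500.0 -> invest Rs.3,12,500 at 8% p.a.
```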
Valuation of Preference Shares:
Preference shares have preference over ordinary shares in terms of payment or dividend and repayment of capital if the company is wound up. They may be issued with or without a maturity period.
Preference shares, unlike bonds, have an investment value that resembles both a bond and common stock; they are a hybrid between the two. A preference share resembles a bond in that it has a prior claim on the assets of the firm at the time of liquidation.
Like common stockholders, preference shareholders receive a dividend; like bondholders, they have a prior claim on the firm's assets when it is liquidated.
Types of Preference Shares:
a. Redeemable preference shares are shares with maturity.
b. Irredeemable preference shares are shares without any maturity.
Features of Preference Shares:
The dividend rate is fixed in the case of preference shares.
Preference shareholders have a claim on assets and income prior to ordinary shareholders.
Redeemable preference shares have a maturity date while irredeemable preference shares are perpetual.
A company can issue convertible preference shares, which can be converted into equity shares as per the prescribed norms.
Valuation of Equity Shares
Equity shares are also referred to as common stock. Unlike bonds, equity shares are instruments that do not assure a fixed return.
Equity is fundamentally different from debt. Debt is commonly issued through a security known as a bond or debenture. Financial markets deal with the transfer of these securities from one person to another, and the price at which such a transfer takes place is determined by market forces.
Features of equity share:
(a) Ownership and management,
(b) Entitlement to residual cash flows,
(c) Limited liability,
(d) Infinite life,
(e) Substantially different risk profile.
Challenges in Valuation of Equity:
The valuation of equity shares is relatively more difficult.
The difficulty arises because of two factors:
(i) Rate of Dividend on Equity Shares is not known.
(ii) Estimates of the Amount and timing of the cash flows expected by equity shareholders are more uncertain.
9. Risk and Return Analysis
What is risk?
Risk is the variability that is likely to arise in future between the expected returns and the actual returns. Risk may therefore be regarded as the chance of variation, or the chance of loss.
Types of Risk:
Risk can be classified in the following two parts:
1. Systematic Risk or Market Risk:
Systematic risk is that part of total risk which cannot be eliminated by diversification. Diversification means investing in different types of securities. No investor can avoid or eliminate this risk, whatsoever precautions or diversification may be resorted to. So, it is also called non diversifiable risk, or the market risk.
This part of the risk arises because every security has a built in tendency to move in line with the fluctuations in the market. The systematic risk arises due to general factors in the market such as money supply, inflation, economic recession, industrial policy, interest rate policy of the government, credit policy, tax policy etc. These are the factors which affect almost every firm.
2. Unsystematic Risk or Diversifiable Risk:
Unsystematic risk is that part of total risk which can be eliminated by diversification. This risk represents the fluctuation in returns of a security due to factors specific to the particular firm only, and not to the market as a whole.
These factors may be such as worker’s unrest, strike, change in market demand, change in consumer preference etc. This risk is also called diversifiable risk and can be reduced by diversification. Diversification is the act of holding many securities in order to lessen the risk.
The effect of diversification on the risk of a portfolio is represented graphically in the below figure:
The above diagram shows that the systematic risk remains the same and is constant irrespective of the number of securities in the portfolio as shown by OA in the above diagram and is fixed for any number of securities.
For a single security it is OA, and for 20 securities it is still OA. However, the unsystematic risk is reduced as more and more securities are added to the portfolio: in the diagram, total risk falls from OD for a single security towards C as the number of securities increases.
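The figure's message can also be illustrated numerically. The simulation below is only a sketch under assumed numbers (one common market factor with 4% volatility, firm-specific noise with 8% volatility): as securities are added, portfolio risk falls towards the systematic level but never below it.

```python
import random
import statistics

random.seed(1)

def portfolio_risk(n_securities: int, n_periods: int = 5_000) -> float:
    """Standard deviation of an equally weighted portfolio whose securities share
    one market factor (systematic risk) plus independent firm-specific noise."""
    returns = []
    for _ in range(n_periods):
        market = random.gauss(0.01, 0.04)                            # common to every security
        firm_specific = [random.gauss(0.0, 0.08) for _ in range(n_securities)]
        returns.append(sum(market + e for e in firm_specific) / n_securities)
    return statistics.pstdev(returns)

for n in (1, 5, 20, 100):
    print(n, round(portfolio_risk(n), 4))
# Risk falls as n rises, but flattens near the systematic level of about 0.04
```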
10. Methods of Risk Management
Risk is inherent in business, and hence there is no escape from risk for a businessman. However, he may face this problem with greater confidence if he adopts a scientific approach to dealing with risk. Risk management may, therefore, be defined as the adoption of a scientific approach to the problem of dealing with the risks faced by a business firm or an individual.
Broadly, there are five methods in general for risk management:
i) Avoidance of Risk
A business firm can avoid risk by not accepting any assignment or any transaction which involves any type of risk whatsoever. This will naturally mean a very low volume of business activities and losing of too many profitable activities.
ii) Prevention of Risk
In case of this method, the business avoids risk by taking appropriate steps for prevention of business risk or avoiding loss, such steps include adaptation of safety programmes, employment of night security guard, arranging for medical care, disposal of waste material etc.
iii) Retention of Risk
In the case of this method, the organization voluntarily accepts the risk since either the risk is insignificant or its acceptance will be cheaper as compared to avoiding it.
iv) Transfer of Risk
In case of this method, risk is transferred to some other person or organization. In other words, under this method, a person who is subject to risk may induce another person to assume the risk. Some of the techniques used for transfer of risk are hedging, sub-contracting, getting surety bonds, entering into indemnity contracts etc.
v) Insurance of Risk
Under this method, risk is covered by creating a common fund out of the contributions (known as premiums) of several persons who are equally exposed to the same loss. The fund so created is used to compensate the persons who may suffer financial loss on account of the risks insured against.
11. Types of Investors
Investors may be classified into the following three types:
a) Risk Averse
Under this category fall those investors who avoid taking risk and prefer only investments with zero or relatively low risk, even if it means accepting a lower return. Risk-averse investors are generally retired persons, elderly persons and pensioners.
b) Risk Seekers
Under this category fall those investors who are ready to take risk if the return is sufficiently high (according to their expectations). These investors may be ready to accept income risk, capital risk or both.
c) Risk-Indifferent Investors
Under this category fall those investors who do not care much about risk. Their investment decisions are based on considerations other than risk and return.
What is return?
Return is the amount received by the investor from an investment. Every investor desires a high return on the amount invested. Anyone who invests, or wants to invest, in any type of project first expects some return, and it is this expected return that encourages them to take risk.
Risk and Return Trade Off:
The risk-return trade-off is the principle that potential return rises with an increase in risk. Low levels of uncertainty (low risk) are associated with low potential returns, whereas high levels of uncertainty (high risk) are associated with high potential returns. According to the risk-return trade-off, invested money can render higher profits only if it is subject to the possibility of being lost.
Because of the risk- return tradeoff, you must be aware of your personal risk tolerance when choosing investments for your portfolio. Taking on some risk is the price of achieving returns; therefore, if you want to make money, you can’t cut out all risk. The goal instead is to find an appropriate balance – one that generates some profit, but still allows you to sleep at night.
We can see this in the following figure:
Risk and return analysis emphasizes over the following characteristics:
(i) Risk and Return have parallel relations.
(ii) Return is fully associated with risk.
(iii) Risk and return concepts are basic to the understanding of the valuation of assets or securities.
(iv) The expected rate of return is an average rate of return. This average may deviate from the possible outcomes (rates of return).
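As a sketch of point (iv), the snippet below computes an expected rate of return as a probability-weighted average and the deviation around it; the probabilities and returns are made-up illustrative figures, not taken from the text.

```python
import math

# Hypothetical one-year outcomes for a security: (probability, rate of return)
outcomes = [(0.25, -0.05), (0.50, 0.10), (0.25, 0.25)]

expected = sum(p * r for p, r in outcomes)
variance = sum(p * (r - expected) ** 2 for p, r in outcomes)
print(round(expected, 4))              # 0.1    -> 10% expected rate of return
print(round(math.sqrt(variance), 4))   # 0.1061 -> dispersion (risk) around that average
```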
In mathematics, two quantities are in the golden ratio if their ratio is the same as the ratio of their sum to the larger of the two quantities. The figure on the right illustrates the geometric relationship. Expressed algebraically, for quantities a and b with a > b > 0,
(a + b)/a = a/b = φ,
where the Greek letter phi (φ) represents the golden ratio. Its value is:
φ = (1 + √5)/2 = 1.6180339887....
The golden ratio is also called the golden mean or golden section (Latin: sectio aurea). Other names include extreme and mean ratio, medial section, divine proportion, divine section (Latin: sectio divina), golden proportion, golden cut, and golden number.
Some twentieth-century artists and architects, including Le Corbusier and Dalí, have proportioned their works to approximate the golden ratio—especially in the form of the golden rectangle, in which the ratio of the longer side to the shorter is the golden ratio—believing this proportion to be aesthetically pleasing. The golden ratio appears in some patterns in nature, including the spiral arrangement of leaves and other plant parts.
Mathematicians since Euclid have studied the properties of the golden ratio, including its appearance in the dimensions of a regular pentagon and in a golden rectangle, which may be cut into a square and a smaller rectangle with the same aspect ratio. The golden ratio has also been used to analyze the proportions of natural objects as well as man-made systems such as financial markets, in some cases based on dubious fits to data.
- 1 Calculation
- 2 History
- 3 Applications and observations
- 4 Mathematics
- 4.1 Irrationality
- 4.2 Minimal polynomial
- 4.3 Golden ratio conjugate
- 4.4 Alternative forms
- 4.5 Geometry
- 4.5.1 Dividing a line segment by interior division
- 4.5.2 Dividing a line segment by exterior division
- 4.5.3 Golden triangle, pentagon and pentagram
- 4.5.4 Scalenity of triangles
- 4.5.5 Triangle whose sides form a geometric progression
- 4.5.6 Golden triangle, rhombus, and rhombic triacontahedron
- 4.6 Relationship to Fibonacci sequence
- 4.7 Symmetries
- 4.8 Other properties
- 4.9 Decimal expansion
- 5 Pyramids
- 6 Disputed observations
- 7 See also
- 8 References and footnotes
- 9 Further reading
- 10 External links
Two quantities a and b are said to be in the golden ratio φ if
(a + b)/a = a/b = φ.
One method for finding the value of φ is to start with the left fraction. Through simplifying the fraction and substituting in b/a = 1/φ,
(a + b)/a = 1 + b/a = 1 + 1/φ = φ.
Multiplying by φ gives
φ + 1 = φ²,
which can be rearranged to
φ² − φ − 1 = 0.
Using the quadratic formula, two solutions are obtained:
φ = (1 + √5)/2 and φ = (1 − √5)/2.
Because φ is the ratio between positive quantities, φ is necessarily positive:
φ = (1 + √5)/2 = 1.6180339887....
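These closed forms are easy to verify numerically; a small Python check:

```python
import math

phi = (1 + math.sqrt(5)) / 2
print(phi)                                # 1.618033988749895
print(math.isclose(phi ** 2, phi + 1))    # True: the defining identity phi^2 = phi + 1
print(1 / phi)                            # 0.618... = phi - 1
```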
This derivation can also be found with a compass-and-straightedge construction:
- The initial situation is the dividing a line segment by exterior division with the additions and therefore
- First, the line segment is about doubled and then the semicircle with the radius around the point is drawn, thus the intersection point is obtained.
- Now the semicircle is drawn with the radius around the point The arising intersection point corresponds
- Next up, the perpendicular on the line segment from the point will be establish.
- The subsequent parallel to the line segment , produces, as it were, the hypotenuse of the right triangle It is well recognizable, this triangle and the triangle are similar to each other. The hypotenuse has due to the cathetuses and according the Pythagorean theorem, a length that is equal to the value of
- Finally, the circle arc is drawn with the radius around the point Because of the circular arc meets the point respectively , and thus leads to the result , from this it follows that
Some of the greatest mathematical minds of all ages, from Pythagoras and Euclid in ancient Greece, through the medieval Italian mathematician Leonardo of Pisa and the Renaissance astronomer Johannes Kepler, to present-day scientific figures such as Oxford physicist Roger Penrose, have spent endless hours over this simple ratio and its properties. But the fascination with the Golden Ratio is not confined just to mathematicians. Biologists, artists, musicians, historians, architects, psychologists, and even mystics have pondered and debated the basis of its ubiquity and appeal. In fact, it is probably fair to say that the Golden Ratio has inspired thinkers of all disciplines like no other number in the history of mathematics.
Ancient Greek mathematicians first studied what we now call the golden ratio because of its frequent appearance in geometry. The division of a line into "extreme and mean ratio" (the golden section) is important in the geometry of regular pentagrams and pentagons. Euclid's Elements (Greek: Στοιχεῖα) provides the first known written definition of what is now called the golden ratio:
A straight line is said to have been cut in extreme and mean ratio when, as the whole line is to the greater segment, so is the greater to the lesser.
Euclid explains a construction for cutting (sectioning) a line "in extreme and mean ratio" (i.e., the golden ratio). Throughout the Elements, several propositions (theorems in modern terminology) and their proofs employ the golden ratio.
The first known approximation of the (inverse) golden ratio by a decimal fraction, stated as "about 0.6180340", was written in 1597 by Michael Maestlin of the University of Tübingen in a letter to his former student Johannes Kepler.
Since the 20th century, the golden ratio has been represented by the Greek letter φ (phi, after Phidias, a sculptor who is said to have employed it) or less commonly by τ (tau, the first letter of the ancient Greek root τομή—meaning cut).
Timeline according to Priya Hemenway:
- Phidias (490–430 BC) made the Parthenon statues that seem to embody the golden ratio.
- Plato (427–347 BC), in his Timaeus, describes five possible regular solids (the Platonic solids: the tetrahedron, cube, octahedron, dodecahedron, and icosahedron), some of which are related to the golden ratio.
- Euclid (c. 325–c. 265 BC), in his Elements, gave the first recorded definition of the golden ratio, which he called, as translated into English, "extreme and mean ratio" (Greek: ἄκρος καὶ μέσος λόγος).
- Fibonacci (1170–1250) mentioned the numerical series now named after him in his Liber Abaci; the ratio of sequential elements of the Fibonacci sequence approaches the golden ratio asymptotically.
- Luca Pacioli (1445–1517) defines the golden ratio as the "divine proportion" in his Divina Proportione.
- Michael Maestlin (1550–1631) publishes the first known approximation of the (inverse) golden ratio as a decimal fraction.
- Johannes Kepler (1571–1630) proves that the golden ratio is the limit of the ratio of consecutive Fibonacci numbers, and describes the golden ratio as a "precious jewel": "Geometry has two great treasures: one is the Theorem of Pythagoras, and the other the division of a line into extreme and mean ratio; the first we may compare to a measure of gold, the second we may name a precious jewel." These two treasures are combined in the Kepler triangle.
- Charles Bonnet (1720–1793) points out that in the spiral phyllotaxis of plants going clockwise and counter-clockwise were frequently two successive Fibonacci series.
- Martin Ohm (1792–1872) is believed to be the first to use the term goldener Schnitt (golden section) to describe this ratio, in 1835.
- Édouard Lucas (1842–1891) gives the numerical sequence now known as the Fibonacci sequence its present name.
- Mark Barr (20th century) suggests the Greek letter phi (φ), the initial letter of Greek sculptor Phidias's name, as a symbol for the golden ratio.
- Roger Penrose (b. 1931) discovered in 1974 the Penrose tiling, a pattern that is related to the golden ratio both in the ratio of areas of its two rhombic tiles and in their relative frequency within the pattern. This in turn led to new discoveries about quasicrystals.
Applications and observations
De Divina Proportione, a three-volume work by Luca Pacioli, was published in 1509. Pacioli, a Franciscan friar, was known mostly as a mathematician, but he was also trained and keenly interested in art. De Divina Proportione explored the mathematics of the golden ratio. Though it is often said that Pacioli advocated the golden ratio's application to yield pleasing, harmonious proportions, Livio points out that the interpretation has been traced to an error in 1799, and that Pacioli actually advocated the Vitruvian system of rational proportions. Pacioli also saw Catholic religious significance in the ratio, which led to his work's title. De Divina Proportione contains illustrations of regular solids by Leonardo da Vinci, Pacioli's longtime friend and collaborator; these are not directly linked to the golden ratio.
The Parthenon's façade as well as elements of its façade and elsewhere are said by some to be circumscribed by golden rectangles. Other scholars deny that the Greeks had any aesthetic association with golden ratio. For example, Midhat J. Gazalé says, "It was not until Euclid, however, that the golden ratio's mathematical properties were studied. In the Elements (308 BC) the Greek mathematician merely regarded that number as an interesting irrational number, in connection with the middle and extreme ratios. Its occurrence in regular pentagons and decagons was duly observed, as well as in the dodecahedron (a regular polyhedron whose twelve faces are regular pentagons). It is indeed exemplary that the great Euclid, contrary to generations of mystics who followed, would soberly treat that number for what it is, without attaching to it other than its factual properties." And Keith Devlin says, "Certainly, the oft repeated assertion that the Parthenon in Athens is based on the golden ratio is not supported by actual measurements. In fact, the entire story about the Greeks and golden ratio seems to be without foundation. The one thing we know for sure is that Euclid, in his famous textbook Elements, written around 300 BC, showed how to calculate its value." Later sources like Vitruvius exclusively discuss proportions that can be expressed in whole numbers, i.e. commensurate as opposed to irrational proportions.
A 2004 geometrical analysis of earlier research into the Great Mosque of Kairouan reveals a consistent application of the golden ratio throughout the design, according to Boussora and Mazouz. They found ratios close to the golden ratio in the overall proportion of the plan and in the dimensioning of the prayer space, the court, and the minaret. The authors note, however, that the areas where ratios close to the golden ratio were found are not part of the original construction, and theorize that these elements were added in a reconstruction.
The Swiss architect Le Corbusier, famous for his contributions to the modern international style, centered his design philosophy on systems of harmony and proportion. Le Corbusier's faith in the mathematical order of the universe was closely bound to the golden ratio and the Fibonacci series, which he described as "rhythms apparent to the eye and clear in their relations with one another. And these rhythms are at the very root of human activities. They resound in man by an organic inevitability, the same fine inevitability which causes the tracing out of the Golden Section by children, old men, savages and the learned."
Le Corbusier explicitly used the golden ratio in his Modulor system for the scale of architectural proportion. He saw this system as a continuation of the long tradition of Vitruvius, Leonardo da Vinci's "Vitruvian Man", the work of Leon Battista Alberti, and others who used the proportions of the human body to improve the appearance and function of architecture. In addition to the golden ratio, Le Corbusier based the system on human measurements, Fibonacci numbers, and the double unit. He took suggestion of the golden ratio in human proportions to an extreme: he sectioned his model human body's height at the navel with the two sections in golden ratio, then subdivided those sections in golden ratio at the knees and throat; he used these golden ratio proportions in the Modulor system. Le Corbusier's 1927 Villa Stein in Garches exemplified the Modulor system's application. The villa's rectangular ground plan, elevation, and inner structure closely approximate golden rectangles.
Another Swiss architect, Mario Botta, bases many of his designs on geometric figures. Several private houses he designed in Switzerland are composed of squares and circles, cubes and cylinders. In a house he designed in Origlio, the golden ratio is the proportion between the central section and the side sections of the house.
From measurements of 15 temples, 18 monumental tombs, 8 sarcophagi, and 58 grave stelae from the fifth century BC to the second century AD, one researcher has concluded that the golden ratio was totally absent from Greek architecture of the classical fifth century BC, and almost absent during the following six centuries.
Leonardo da Vinci's illustrations of polyhedra in De divina proportione (On the Divine Proportion) and his views that some bodily proportions exhibit the golden ratio have led some scholars to speculate that he incorporated the golden ratio in his paintings. But the suggestion that his Mona Lisa, for example, employs golden ratio proportions, is not supported by anything in Leonardo's own writings. Similarly, although the Vitruvian Man is often shown in connection with the golden ratio, the proportions of the figure do not actually match it, and the text only mentions whole number ratios.
Salvador Dalí, influenced by the works of Matila Ghyka, explicitly used the golden ratio in his masterpiece, The Sacrament of the Last Supper. The dimensions of the canvas are a golden rectangle. A huge dodecahedron, in perspective so that edges appear in golden ratio to one another, is suspended above and behind Jesus and dominates the composition.
A statistical study on 565 works of art of different great painters, performed in 1999, found that these artists had not used the golden ratio in the size of their canvases. The study concluded that the average ratio of the two sides of the paintings studied is 1.34, with averages for individual artists ranging from 1.04 (Goya) to 1.46 (Bellini). On the other hand, Pablo Tosto listed over 350 works by well-known artists, including more than 100 which have canvasses with golden rectangle and root-5 proportions, and others with proportions like root-2, 3, 4, and 6.
There was a time when deviations from the truly beautiful page proportions 2:3, 1:√3, and the Golden Section were rare. Many books produced between 1550 and 1770 show these proportions exactly, to within half a millimeter.
Some sources claim that the golden ratio is commonly used in everyday design, for example in the shapes of postcards, playing cards, posters, wide-screen televisions, photographs, light switch plates and cars.
Ernő Lendvai analyzes Béla Bartók's works as being based on two opposing systems, that of the golden ratio and the acoustic scale, though other music scholars reject that analysis. French composer Erik Satie used the golden ratio in several of his pieces, including Sonneries de la Rose+Croix. The golden ratio is also apparent in the organization of the sections in the music of Debussy's Reflets dans l'eau (Reflections in Water), from Images (1st series, 1905), in which "the sequence of keys is marked out by the intervals 34, 21, 13 and 8, and the main climax sits at the phi position."
The musicologist Roy Howat has observed that the formal boundaries of La Mer correspond exactly to the golden section. Trezise finds the intrinsic evidence "remarkable," but cautions that no written or reported evidence suggests that Debussy consciously sought such proportions.
Pearl Drums positions the air vents on its Masters Premium models based on the golden ratio. The company claims that this arrangement improves bass response and has applied for a patent on this innovation.
Though Heinz Bohlen proposed the non-octave-repeating 833 cents scale based on combination tones, the tuning features relations based on the golden ratio. As a musical interval the ratio 1.618... is 833.090... cents.
Adolf Zeising, whose main interests were mathematics and philosophy, found the golden ratio expressed in the arrangement of parts such as leaves and branches along the stems of plants and of veins in leaves. He extended his research to the skeletons of animals and the branchings of their veins and nerves, to the proportions of chemical compounds and the geometry of crystals, even to the use of proportion in artistic endeavors. In these patterns in nature he saw the golden ratio operating as a universal law. In connection with his scheme for golden-ratio-based human body proportions, Zeising wrote in 1854 of a universal law "in which is contained the ground-principle of all formative striving for beauty and completeness in the realms of both nature and art, and which permeates, as a paramount spiritual ideal, all structures, forms and proportions, whether cosmic or individual, organic or inorganic, acoustic or optical; which finds its fullest realization, however, in the human form."
In 2010, the journal Science reported that the golden ratio is present at the atomic scale in the magnetic resonance of spins in cobalt niobate crystals.
However, some have argued that many apparent manifestations of the golden ratio in nature, especially in regard to animal dimensions, are fictitious.
The golden ratio is key to the golden section search.
Studies by psychologists, starting with Fechner, have been devised to test the idea that the golden ratio plays a role in human perception of beauty. While Fechner found a preference for rectangle ratios centered on the golden ratio, later attempts to carefully test such a hypothesis have been, at best, inconclusive.
The golden ratio is an irrational number. Below are two short proofs of irrationality:
Contradiction from an expression in lowest terms
- the whole is the longer part plus the shorter part;
- the whole is to the longer part as the longer part is to the shorter part.
If we call the whole n and the longer part m, then the second statement above becomes
- n is to m as m is to n − m, or, written as fractions, n/m = m/(n − m).   (*)
To say that φ is rational means that φ is a fraction n/m where n and m are integers. We may take n/m to be in lowest terms and n and m to be positive. But if n/m is in lowest terms, then the identity labeled (*) above says m/(n − m) is in still lower terms. That is a contradiction that follows from the assumption that φ is rational.
Derivation from irrationality of √5
Another short proof, perhaps more commonly known, of the irrationality of the golden ratio makes use of the closure of rational numbers under addition and multiplication. If φ = (1 + √5)/2 were rational, then 2φ − 1 = √5 would also be rational, which is a contradiction, since it is already known that the square root of a non-square natural number is irrational.
Minimal polynomial
The golden ratio is an algebraic number whose minimal polynomial is x² − x − 1. Having degree 2, this polynomial actually has two roots, the other being the golden ratio conjugate.
Golden ratio conjugate
The conjugate root to the minimal polynomial x² − x − 1 is
−1/φ = 1 − φ = (1 − √5)/2 = −0.6180339887....
The absolute value of this quantity (≈ 0.618) corresponds to the length ratio taken in reverse order (shorter segment length over longer segment length, b/a), and is sometimes referred to as the golden ratio conjugate. It is denoted here by the capital Phi (Φ):
Φ = 1/φ = φ − 1 = 0.6180339887....
Alternatively, Φ can be expressed as
Φ = φ − 1 = 1.6180339887... − 1 = 0.6180339887....
This illustrates the unique property of the golden ratio among positive numbers, that
1/φ = φ − 1,
or its inverse:
1/Φ = Φ + 1.
This means 0.61803...:1 = 1:1.61803....
Alternative forms
The formula φ = 1 + 1/φ can be expanded recursively to obtain a continued fraction for the golden ratio:
φ = [1; 1, 1, 1, ...] = 1 + 1/(1 + 1/(1 + 1/(1 + ...)))
and its reciprocal:
1/φ = [0; 1, 1, 1, ...] = 0 + 1/(1 + 1/(1 + 1/(1 + ...))).
The equation φ² = 1 + φ likewise produces the continued square root, or infinite surd, form:
φ = √(1 + √(1 + √(1 + √(1 + ...)))).
An infinite series can be derived to express phi:
These correspond to the fact that the length of the diagonal of a regular pentagon is φ times the length of its side, and similar relations in a pentagram.
The number φ turns up frequently in geometry, particularly in figures with pentagonal symmetry. The length of a regular pentagon's diagonal is φ times its side. The vertices of a regular icosahedron are those of three mutually orthogonal golden rectangles.
There is no known general algorithm to arrange a given number of nodes evenly on a sphere, for any of several definitions of even distribution (see, for example, Thomson problem). However, a useful approximation results from dividing the sphere into parallel bands of equal surface area and placing one node in each band at longitudes spaced by a golden section of the circle, i.e. 360°/φ ≅ 222.5°. This method was used to arrange the 1500 mirrors of the student-participatory satellite Starshine-3.
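A sketch of the band-plus-golden-section idea in Python. The exact banding used for Starshine-3 is not specified here; this version takes the midpoints of n equal-area bands and advances the longitude by 360°/φ (≈ 222.5°) per node, which is an assumption consistent with the description above.

```python
import math

def golden_section_sphere_points(n: int):
    """Approximately even nodes on a unit sphere: equal-area bands in z,
    longitudes advanced by the golden section of the circle, 360 degrees / phi."""
    phi = (1 + math.sqrt(5)) / 2
    step = 2 * math.pi / phi                    # about 222.5 degrees per node
    points = []
    for k in range(n):
        z = 1 - (2 * k + 1) / n                 # midpoint of the k-th equal-area band
        r = math.sqrt(1 - z * z)
        theta = k * step
        points.append((r * math.cos(theta), r * math.sin(theta), z))
    return points

nodes = golden_section_sphere_points(1500)      # 1500 nodes, as for Starshine-3's mirrors
print(len(nodes), nodes[0])
```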
Dividing a line segment by interior division
- Having a line segment AB, construct a perpendicular BC at point B, with BC half the length of AB. Draw the hypotenuse AC.
- Draw an arc with center C and radius BC. This arc intersects the hypotenuse AC at point D.
- Draw an arc with center A and radius AD. This arc intersects the original line segment AB at point S. Point S divides the original line segment AB into line segments AS and SB with lengths in the golden ratio.
Dividing a line segment by exterior division
- At the point S, construct a segment SC perpendicular to AS, with length equal to AS and endpoint C.
- Bisect the line segment AS at the point M.
- The circular arc around M with the radius MC divides the extension AS in point B. Point S divides the constructed line segment AB into line segments AS and SB with lengths in the golden ratio.
Application examples you can see in the articles Pentagon with a given side length, Decagon with given circumcircle and Decagon with a given side length.
Both of the algorithms displayed above produce geometric constructions that divide a line segment into two line segments in which the ratio of the longer to the shorter segment is the golden ratio.
Golden triangle, pentagon and pentagram
If angle BCX = α, then XCA = α because of the bisection, and CAB = α because of the similar triangles; ABC = 2α from the original isosceles symmetry, and BXC = 2α by similarity. The angles in a triangle add up to 180°, so 5α = 180, giving α = 36°. So the angles of the golden triangle are thus 36°-72°-72°. The angles of the remaining obtuse isosceles triangle AXC (sometimes called the golden gnomon) are 36°-36°-108°.
Suppose XB has length 1, and we call BC length φ. Because of the isosceles triangles XC = XA and BC = XC, so these are also length φ. Length AC = AB, therefore equals φ + 1. But triangle ABC is similar to triangle CXB, so AC/BC = BC/BX, AC/φ = φ/1, and so AC also equals φ². Thus φ² = φ + 1, confirming that φ is indeed the golden ratio.
Similarly, the ratio of the area of the larger triangle AXC to the smaller CXB is equal to φ, while the inverse ratio is φ − 1.
In a regular pentagon the ratio between a side and a diagonal is Φ (i.e. 1/φ), while intersecting diagonals section each other in the golden ratio.
George Odom has given a remarkably simple construction for φ involving an equilateral triangle: if an equilateral triangle is inscribed in a circle and the line segment joining the midpoints of two sides is produced to intersect the circle in either of two points, then these three points are in golden proportion. This result is a straightforward consequence of the intersecting chords theorem and can be used to construct a regular pentagon, a construction that attracted the attention of the noted Canadian geometer H. S. M. Coxeter who published it in Odom's name as a diagram in the American Mathematical Monthly accompanied by the single word "Behold!"
The golden ratio plays an important role in the geometry of pentagrams. Each intersection of edges sections other edges in the golden ratio. Also, the ratio of the length of the shorter segment to the segment bounded by the two intersecting edges (a side of the pentagon in the pentagram's center) is φ, as the four-color illustration shows.
The pentagram includes ten isosceles triangles: five acute and five obtuse isosceles triangles. In all of them, the ratio of the longer side to the shorter side is φ. The acute triangles are golden triangles. The obtuse isosceles triangles are golden gnomons.
The golden ratio properties of a regular pentagon can be confirmed by applying Ptolemy's theorem to the quadrilateral formed by removing one of its vertices. If the quadrilateral's long edge and diagonals are b, and short edges are a, then Ptolemy's theorem gives b² = a² + ab, which yields b/a = (1 + √5)/2 = φ.
Scalenity of triangles
Consider a triangle with sides of lengths a, b, and c in decreasing order. Define the "scalenity" of the triangle to be the smaller of the two ratios a/b and b/c. The scalenity is always less than φ and can be made as close as desired to φ.
Triangle whose sides form a geometric progression
If the side lengths of a triangle form a geometric progression and are in the ratio 1 : r : r², where r is the common ratio, then r must lie in the range 1/φ < r < φ, which is a consequence of the triangle inequality (the sum of any two sides of a triangle must be strictly bigger than the length of the third side). If r = φ then the shorter two sides are 1 and φ but their sum is φ²; thus r < φ. A similar calculation shows that r > 1/φ. A triangle whose sides are in the ratio 1 : √φ : φ is a right triangle (because 1 + φ = φ²) known as a Kepler triangle.
Golden triangle, rhombus, and rhombic triacontahedron
A golden rhombus is a rhombus whose diagonals are in the golden ratio. The rhombic triacontahedron is a convex polytope that has a very special property: all of its faces are golden rhombi. In the rhombic triacontahedron the dihedral angle between any two adjacent rhombi is 144°, which is twice the isosceles angle of a golden triangle and four times its most acute angle.
Relationship to Fibonacci sequence
The mathematics of the golden ratio and of the Fibonacci sequence are intimately interconnected. The Fibonacci sequence is:
- 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, ....
The closed-form expression for the Fibonacci sequence involves the golden ratio (Binet's formula): F(n) = (φⁿ − (1 − φ)ⁿ) / √5.
Therefore, if a Fibonacci number is divided by its immediate predecessor in the sequence, the quotient approximates φ; e.g., 987/610 ≈ 1.6180327868852. These approximations are alternately lower and higher than φ, and converge on φ as the Fibonacci numbers increase, and:
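A two-line Python check of the alternating convergence mentioned above:

```python
print(987 / 610)             # 1.6180327... slightly below phi
print(1597 / 987)            # 1.6180344... slightly above phi
print((1 + 5 ** 0.5) / 2)    # 1.618033988749895 -> the limit of the ratios
```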
where above, the ratios of consecutive terms of the Fibonacci sequence, is a case when .
Furthermore, the successive powers of φ obey the Fibonacci recurrence: φⁿ⁺¹ = φⁿ + φⁿ⁻¹.
This identity allows any polynomial in φ to be reduced to a linear expression. For example:
The reduction to a linear expression can be accomplished in one step by using the relationship
φᵏ = F(k) φ + F(k − 1),
where F(k) is the kth Fibonacci number.
However, this is no special property of φ, because polynomials in any solution x to a quadratic equation can be reduced in an analogous manner, by applying:
for given coefficients a, b such that x satisfies the equation. Even more generally, any rational function (with rational coefficients) of the root of an irreducible nth-degree polynomial over the rationals can be reduced to a polynomial of degree n ‒ 1. Phrased in terms of field theory, if α is a root of an irreducible nth-degree polynomial, then has degree n over , with basis .
The golden ratio and inverse golden ratio have a set of symmetries that preserve and interrelate them. They are both preserved by the fractional linear transformations – this fact corresponds to the identity and the definition quadratic equation. Further, they are interchanged by the three maps – they are reciprocals, symmetric about , and (projectively) symmetric about 2.
More deeply, these maps form a subgroup of the modular group isomorphic to the symmetric group on 3 letters, corresponding to the stabilizer of the set of 3 standard points on the projective line, and the symmetries correspond to the quotient map – the subgroup consisting of the 3-cycles and the identity fixes the two numbers, while the 2-cycles interchange these, thus realizing the map.
The golden ratio has the simplest expression (and slowest convergence) as a continued fraction expansion of any irrational number (see Alternate forms above). It is, for that reason, one of the worst cases of Lagrange's approximation theorem and it is an extremal case of the Hurwitz inequality for Diophantine approximations. This may be why angles close to the golden ratio often show up in phyllotaxis (the growth of plants).
The defining quadratic polynomial and the conjugate relationship lead to decimal values that have their fractional part in common with φ:
φ² = φ + 1 = 2.618...,
1/φ = φ − 1 = 0.618....
The sequence of powers of φ contains these values 0.618..., 1.0, 1.618..., 2.618...; more generally, any power of φ is equal to the sum of the two immediately preceding powers:
φⁿ = φⁿ⁻¹ + φⁿ⁻².
As a result, one can easily decompose any power of φ into a multiple of φ and a constant. The multiple and the constant are always adjacent Fibonacci numbers. This leads to another property of the positive powers of φ:
If , then:
When the golden ratio is used as the base of a numeral system (see Golden ratio base, sometimes dubbed phinary or φ-nary), every integer has a terminating representation, despite φ being irrational, but every fraction has a non-terminating representation.
The golden ratio also appears in hyperbolic geometry, as the maximum distance from a point on one side of an ideal triangle to the closer of the other two sides: this distance, the side length of the equilateral triangle formed by the points of tangency of a circle inscribed within the ideal triangle, is .
The golden ratio's decimal expansion can be calculated directly from the expression
φ = (1 + √5)/2,
with √5 ≈ 2.2360679887.... The square root of 5 can be calculated with the Babylonian method, starting from an initial estimate such as x₁ = 2 and iterating
xₙ₊₁ = (xₙ + 5/xₙ) / 2
for n = 1, 2, 3, ..., until the difference between xₙ and xₙ₋₁ becomes zero, to the desired number of digits.
The Babylonian algorithm for √5 is equivalent to Newton's method for solving the equation x² − 5 = 0. In its more general form, Newton's method can be applied directly to any algebraic equation, including the equation x² − x − 1 = 0 that defines the golden ratio. This gives an iteration that converges to the golden ratio itself,
xₙ₊₁ = (xₙ² + 1) / (2xₙ − 1),
for an appropriate initial estimate such as x₁ = 1. A slightly faster method is to rewrite the equation as x − 1 − 1/x = 0, in which case the Newton iteration becomes
xₙ₊₁ = (xₙ² + 2xₙ) / (xₙ² + 1).
These iterations all converge quadratically; that is, each step roughly doubles the number of correct digits. The golden ratio is therefore relatively easy to compute with arbitrary precision. The time needed to compute n digits of the golden ratio is proportional to the time needed to divide two n-digit numbers. This is considerably faster than known algorithms for the transcendental numbers π and e.
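A minimal Python implementation of the Newton iteration for f(x) = x² − x − 1; the simplified update x ← (x² + 1)/(2x − 1) is our own algebra, not a formula quoted from the text:

```python
def newton_phi(x0: float = 1.0, tol: float = 1e-15) -> float:
    """Newton's method on f(x) = x^2 - x - 1; the positive root is the golden ratio."""
    x = x0
    while True:
        nxt = (x * x + 1) / (2 * x - 1)     # x - f(x)/f'(x), simplified
        if abs(nxt - x) < tol:
            return nxt
        x = nxt

print(newton_phi())   # 1.618033988749895, reached after only a handful of iterations
```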
An easily programmed alternative using only integer arithmetic is to calculate two large consecutive Fibonacci numbers and divide them. The ratio of Fibonacci numbers F 25001 and F 25000, each over 5000 digits, yields over 10,000 significant digits of the golden ratio.
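The integer-arithmetic approach is easy to reproduce with Python's arbitrary-precision integers and the decimal module; the indices below are much smaller than F 25001 / F 25000 but already give dozens of correct digits.

```python
from decimal import Decimal, getcontext

getcontext().prec = 60          # significant digits of working precision

a, b = 1, 1
for _ in range(150):            # after the loop: a = F(151), b = F(152)
    a, b = b, a + b

print(Decimal(b) / Decimal(a))  # 1.61803398874989484820458683436563811772...
```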
Both Egyptian pyramids and the regular square pyramids that resemble them can be analyzed with respect to the golden ratio and other ratios.
Mathematical pyramids and triangles
A pyramid in which the apothem (slant height along the bisector of a face) is equal to φ times the semi-base (half the base width) is sometimes called a golden pyramid. The isosceles triangle that is the face of such a pyramid can be constructed from the two halves of a diagonally split golden rectangle (of size semi-base by apothem), joining the medium-length edges to make the apothem. The height of this pyramid is √φ times the semi-base (that is, the slope of the face is √φ); the square of the height is equal to the area of a face, φ times the square of the semi-base.
The medial right triangle of this "golden" pyramid (see diagram), with sides 1 : √φ : φ, is interesting in its own right, demonstrating via the Pythagorean theorem the relationship 1 + (√φ)² = φ², i.e. 1 + φ = φ². This Kepler triangle is the only right triangle proportion with edge lengths in geometric progression, just as the 3–4–5 triangle is the only right triangle proportion with edge lengths in arithmetic progression. The angle with tangent √φ corresponds to the angle that the side of the pyramid makes with respect to the ground, 51.827... degrees (51° 49' 38").
A nearly similar pyramid shape, but with rational proportions, is described in the Rhind Mathematical Papyrus (the source of a large part of modern knowledge of ancient Egyptian mathematics), based on the 3:4:5 triangle; the face slope corresponding to the angle with tangent 4/3 is 53.13 degrees (53 degrees and 8 minutes). The slant height or apothem is 5/3 or 1.666... times the semi-base. The Rhind papyrus has another pyramid problem as well, again with rational slope (expressed as run over rise). Egyptian mathematics did not include the notion of irrational numbers, and the rational inverse slope (run/rise, multiplied by a factor of 7 to convert to their conventional units of palms per cubit) was used in the building of pyramids.
Another mathematical pyramid with proportions almost identical to the "golden" one is the one with perimeter equal to 2π times the height, or h:b = 4:π. This pyramid has a face angle of 51.854° (51°51'), very close to the 51.827° of the Kepler triangle. This pyramid relationship corresponds to the coincidental relationship √φ ≈ 4/π.
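A quick numerical check of the two face angles just described (a worked sketch using only the relations given above):

```python
import math

phi = (1 + math.sqrt(5)) / 2                       # 1.618033988749895

kepler = math.degrees(math.atan(math.sqrt(phi)))   # face slope sqrt(phi)
pi_based = math.degrees(math.atan(4 / math.pi))    # face slope 4/pi, from h : b = 4 : pi

print(f"sqrt(phi) = {math.sqrt(phi):.5f},  4/pi = {4 / math.pi:.5f}")   # 1.27202 vs 1.27324
print(f"Kepler ('golden') pyramid face angle: {kepler:.3f} degrees")    # ~51.827
print(f"pi-based pyramid face angle:          {pi_based:.3f} degrees")  # ~51.854
```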
Egyptian pyramids very close in proportion to these mathematical pyramids are known.
In the mid-nineteenth century, Röber studied various Egyptian pyramids including Khafre, Menkaure and some of the Giza, Sakkara, and Abusir groups, and was interpreted as saying that half the base of the side of the pyramid is the middle mean of the side, forming what other authors identified as the Kepler triangle; many other mathematical theories of the shape of the pyramids have also been explored.
One Egyptian pyramid is remarkably close to a "golden pyramid"—the Great Pyramid of Giza (also known as the Pyramid of Cheops or Khufu). Its slope of 51° 52' is extremely close to the "golden" pyramid inclination of 51° 50' and the π-based pyramid inclination of 51° 51'; other pyramids at Giza (Chephren, 52° 20', and Mycerinus, 50° 47') are also quite close. Whether the relationship to the golden ratio in these pyramids is by design or by accident remains open to speculation. Several other Egyptian pyramids are very close to the rational 3:4:5 shape.
Adding fuel to controversy over the architectural authorship of the Great Pyramid, Eric Temple Bell, mathematician and historian, claimed in 1950 that Egyptian mathematics would not have supported the ability to calculate the slant height of the pyramids, or the ratio to the height, except in the case of the 3:4:5 pyramid, since the 3:4:5 triangle was the only right triangle known to the Egyptians and they did not know the Pythagorean theorem, nor any way to reason about irrationals such as π or φ.
Michael Rice asserts that principal authorities on the history of Egyptian architecture have argued that the Egyptians were well acquainted with the golden ratio and that it is part of the mathematics of the Pyramids, citing Giedon (1957). Historians of science have long debated whether the Egyptians had any such knowledge, contending rather that the ratio's appearance in an Egyptian building is the result of chance.
In 1859, the pyramidologist John Taylor claimed that, in the Great Pyramid of Giza, the golden ratio is represented by the ratio of the length of the face (the slope height), inclined at an angle θ to the ground, to half the length of the side of the square base, equivalent to the secant of the angle θ. The two lengths were about 186.4 and 115.2 meters respectively. The ratio of these lengths is the golden ratio, accurate to more digits than either of the original measurements. Similarly, Howard Vyse, according to Matila Ghyka, reported the Great Pyramid's height as 148.2 m and its half-base as 116.4 m, yielding 1.6189 for the ratio of slant height to half-base, again more accurate than the data variability.
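The quoted figures can be checked directly (a short sketch; the Vyse slant height is assumed here to be recovered from the quoted height and half-base):

```python
import math

phi = (1 + math.sqrt(5)) / 2          # 1.6180339887...

# John Taylor's figures: slope height ~186.4 m and half-base ~115.2 m
print(186.4 / 115.2)                  # ~1.61806, close to phi

# Howard Vyse's figures (as reported by Ghyka): height 148.2 m, half-base 116.4 m
slant = math.hypot(148.2, 116.4)      # slant height recovered from height and half-base
print(slant / 116.4)                  # ~1.619, matching the 1.6189 quoted above
```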
Examples of disputed observations of the golden ratio include the following:
- Historian John Man states that the pages of the Gutenberg Bible were "based on the golden section shape". However, according to Man's own measurements, the ratio of height to width was 1.45.
- Some specific proportions in the bodies of many animals (including humans) and parts of the shells of mollusks are often claimed to be in the golden ratio. There is a large variation in the real measures of these elements in specific individuals, however, and the proportion in question is often significantly different from the golden ratio. The ratio of successive phalangeal bones of the digits and the metacarpal bone has been said to approximate the golden ratio. The nautilus shell, the construction of which proceeds in a logarithmic spiral, is often cited, usually with the idea that any logarithmic spiral is related to the golden ratio, but sometimes with the claim that each new chamber is proportioned by the golden ratio relative to the previous one; however, measurements of nautilus shells do not support this claim.
- In investing, some practitioners of technical analysis use the golden ratio to indicate support of a price level, or resistance to price increases, of a stock or commodity; after significant price changes up or down, new support and resistance levels are supposedly found at or near prices related to the starting price via the golden ratio. The use of the golden ratio in investing is also related to more complicated patterns described by Fibonacci numbers (e.g. Elliott wave principle and Fibonacci retracement). However, other market analysts have published analyses suggesting that these percentages and patterns are not supported by the data.
References and footnotes
- Livio, Mario (2002). The Golden Ratio: The Story of Phi, The World's Most Astonishing Number. New York: Broadway Books. ISBN 0-7679-0815-5.
- Piotr Sadowski (1996). The knight on his quest: symbolic patterns of transition in Sir Gawain and the Green Knight. University of Delaware Press. p. 124. ISBN 978-0-87413-580-0.
- Richard A Dunlap, The Golden Ratio and Fibonacci Numbers, World Scientific Publishing, 1997
- Euclid, Elements, Book 6, Definition 3.
- Summerson John, Heavenly Mansions: And Other Essays on Architecture (New York: W.W. Norton, 1963) p. 37. "And the same applies in architecture, to the rectangles representing these and other ratios (e.g. the 'golden cut'). The sole value of these ratios is that they are intellectually fruitful and suggest the rhythms of modular design."
- Jay Hambidge, Dynamic Symmetry: The Greek Vase, New Haven CT: Yale University Press, 1920
- William Lidwell, Kritina Holden, Jill Butler, Universal Principles of Design: A Cross-Disciplinary Reference, Gloucester MA: Rockport Publishers, 2003
- Pacioli, Luca. De divina proportione, Luca Paganinem de Paganinus de Brescia (Antonio Capella) 1509, Venice.
- Strogatz, Steven (September 24, 2012). "Me, Myself, and Math: Proportion Control". New York Times.
- Weisstein, Eric W., "Golden Ratio Conjugate", MathWorld.
- Markowsky, George (January 1992). "Misconceptions about the Golden Ratio" (PDF). The College Mathematics Journal. 23 (1).
- Mario Livio, The Golden Ratio: The Story of Phi, The World's Most Astonishing Number, p. 6.
- ῎Ακρον καὶ μέσον λόγον εὐθεῖα τετμῆσθαι λέγεται, ὅταν ᾖ ὡς ἡ ὅλη πρὸς τὸ μεῖζον τμῆμα, οὕτως τὸ μεῖζον πρὸς τὸ ἔλαττὸν ("A straight line is said to have been cut in extreme and mean ratio when, as the whole line is to the greater segment, so is the greater to the lesser"), translated in Richard Fitzpatrick (translator) (2007). Euclid's Elements of Geometry. ISBN 978-0615179841, p. 156.
- Euclid, Elements, Book 6, Proposition 30. Retrieved from http://aleph0.clarku.edu/~djoyce/java/elements/toc.html.
- Euclid, Elements, Book 2, Proposition 11; Book 4, Propositions 10–11; Book 13, Propositions 1–6, 8–11, 16–18.
- "The Golden Ratio". The MacTutor History of Mathematics archive. Retrieved 2007-09-18.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- Weisstein, Eric W., "Golden Ratio", MathWorld.
- Hemenway, Priya (2005). Divine Proportion: Phi In Art, Nature, and Science. New York: Sterling. pp. 20–21. ISBN 1-4027-3522-7.
- Plato. "Timaeus". Translated by Benjamin Jowett. The Internet Classics Archive. Retrieved 30 May 2006.
- James Joseph Tattersall (2005). Elementary number theory in nine chapters (2nd ed.). Cambridge University Press. p. 28. ISBN 978-0-521-85014-8.
- Underwood Dudley (1999). Die Macht der Zahl: Was die Numerologie uns weismachen will. Springer. p. 245. ISBN 3-7643-5978-1.
- Cook, Theodore Andrea (1979). The Curves of Life. New York: Dover Publications. ISBN 0-486-23701-X.
- Gardner, Martin (2001), The Colossal Book of Mathematics: Classic Puzzles, Paradoxes, and Problems: Number Theory, Algebra, Geometry, Probability, Topology, Game Theory, Infinity, and Other Topics of Recreational Mathematics, W. W. Norton & Company, p. 88, ISBN 9780393020236.
- Jaric, Marko V. (2012), Introduction to the Mathematics of Quasicrystals, Elsevier, p. x, ISBN 9780323159470: "Although at the time of the discovery of quasicrystals the theory of quasiperiodic functions had been known for nearly sixty years, it was the mathematics of aperiodic Penrose tilings, mostly developed by Nicolaas de Bruijn, that provided the major influence on the new field."
- Van Mersbergen, Audrey M., "Rhetorical Prototypes in Architecture: Measuring the Acropolis with a Philosophical Polemic", Communication Quarterly, Vol. 46 No. 2, 1998, pp 194-213.
- Midhat J. Gazalé , Gnomon, Princeton University Press, 1999. ISBN 0-691-00514-1
- Keith J. Devlin The Math Instinct: Why You're A Mathematical Genius (Along With Lobsters, Birds, Cats, And Dogs), p. 108. New York: Thunder's Mouth Press, 2005, ISBN 1-56025-672-9
- Boussora, Kenza and Mazouz, Said, The Use of the Golden Section in the Great Mosque of Kairouan, Nexus Network Journal, vol. 6 no. 1 (Spring 2004),
- Le Corbusier, The Modulor p. 25, as cited in Padovan, Richard, Proportion: Science, Philosophy, Architecture (1999), p. 316, Taylor and Francis, ISBN 0-419-22780-6
- Marcus Frings: The Golden Section in Architectural Theory, Nexus Network Journal vol. 4 no. 1 (Winter 2002), available online
- Le Corbusier, The Modulor, p. 35, as cited in Padovan, Richard, Proportion: Science, Philosophy, Architecture (1999), p. 320. Taylor & Francis. ISBN 0-419-22780-6: "Both the paintings and the architectural designs make use of the golden section".
- Urwin, Simon. Analysing Architecture (2003) pp. 154-5, ISBN 0-415-30685-X
- Jason Elliot (2006). Mirrors of the Unseen: Journeys in Iran. Macmillan. pp. 277, 284. ISBN 978-0-312-30191-0.
- Patrice Foutakis, "Did the Greeks Build According to the Golden Ratio?", Cambridge Archaeological Journal, vol. 24, n° 1, February 2014, p. 71-86.
- Leonardo da Vinci's Polyhedra, by George W. Hart
- Livio, Mario. "The golden ratio and aesthetics". Retrieved 2008-03-21.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- "Part of the process of becoming a mathematics writer is, it appears, learning that you cannot refer to the golden ratio without following the first mention by a phrase that goes something like 'which the ancient Greeks and others believed to have divine and mystical properties.' Almost as compulsive is the urge to add a second factoid along the lines of 'Leonardo Da Vinci believed that the human form displays the golden ratio.' There is not a shred of evidence to back up either claim, and every reason to assume they are both false. Yet both claims, along with various others in a similar vein, live on." Keith Devlin (May 2007). "The Myth That Will Not Go Away". Retrieved September 26, 2013.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- Donald E. Simanek. "Fibonacci Flim-Flam". Retrieved April 9, 2013.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- Salvador Dalí (2008). The Dali Dimension: Decoding the Mind of a Genius (DVD). Media 3.14-TVC-FGSD-IRL-AVRO.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- Hunt, Carla Herndon and Gilkey, Susan Nicodemus. Teaching Mathematics in the Block pp. 44, 47, ISBN 1-883001-51-X
- Bouleau, Charles, The Painter's Secret Geometry: A Study of Composition in Art (1963) pp.247-8, Harcourt, Brace & World, ISBN 0-87817-259-9
- Olariu, Agata, Golden Section and the Art of Painting Available online
- Tosto, Pablo, La composición áurea en las artes plásticas – El número de oro, Librería Hachette, 1969, p. 134–144
- Jan Tschichold. The Form of the Book, pp.43 Fig 4. "Framework of ideal proportions in a medieval manuscript without multiple columns. Determined by Jan Tschichold 1953. Page proportion 2:3. margin proportions 1:1:2:3, Text area proportioned in the Golden Section. The lower outer corner of the text area is fixed by a diagonal as well."
- Jan Tschichold, The Form of the Book, Hartley & Marks (1991), ISBN 0-88179-116-4.
- Jones, Ronald (1971). "The golden section: A most remarkable measure". The Structurist. 11: 44–52: "Who would suspect, for example, that the switch plate for single light switches are standardized in terms of a Golden Rectangle?"
- Art Johnson (1999). Famous problems and their mathematicians. Libraries Unlimited. p. 45. ISBN 978-1-56308-446-1: "The Golden Ratio is a standard feature of many modern designs, from postcards and credit cards to posters and light-switch plates."
- Alexey Stakhov; Scott Olsen; Scott Anthony Olsen (2009). The mathematics of harmony: from Euclid to contemporary mathematics and computer science. World Scientific. p. 21. ISBN 978-981-277-582-5: "A credit card has a form of the golden rectangle."
- Simon Cox (2004). Cracking the Da Vinci code: the unauthorized guide to the facts behind Dan Brown's bestselling novel. Barnes & Noble Books. ISBN 978-0-7607-5931-8: "The Golden Ratio also crops up in some very unlikely places: widescreen televisions, postcards, credit cards and photographs all commonly conform to its proportions."
- "THE NEW RAPIDE S: Design": "The 'Golden Ratio' sits at the heart of every Aston Martin."
- Lendvai, Ernő (1971). Béla Bartók: An Analysis of His Music. London: Kahn and Averill.
- Smith, Peter F. The Dynamics of Delight: Architecture and Aesthetics (New York: Routledge, 2003) pp 83, ISBN 0-415-30010-X
- Roy Howat (1983). Debussy in Proportion: A Musical Analysis. Cambridge University Press. ISBN 0-521-31145-4.
- Simon Trezise (1994). Debussy: La Mer. Cambridge University Press. p. 53. ISBN 0-521-44656-2.
- "Pearl Masters Premium". Pearl Corporation. Archived from the original on December 19, 2007. Retrieved December 2, 2007.
- "An 833 Cents Scale: An experiment on harmony", Huygens-Fokker.org. Accessed December 1, 2012.
- Richard Padovan (1999). Proportion. Taylor & Francis. pp. 305–306. ISBN 978-0-419-22780-9.
- Padovan, Richard (2002). "Proportion: Science, Philosophy, Architecture". Nexus Network Journal. 4 (1): 113–122. doi:10.1007/s00004-001-0008-7.
- Zeising, Adolf (1854). Neue Lehre von den Proportionen des menschlichen Körpers. Preface.
- "Golden ratio discovered in a quantum world". Eurekalert.org. 2010-01-07. Retrieved 2011-10-31.
- J.C. Perez (1991), "Chaos DNA and Neuro-computers: A Golden Link", in Speculations in Science and Technology vol. 14 no. 4, ISSN 0155-7785.
- Yamagishi, Michel E.B., and Shimabukuro, Alex I. (2007), "Nucleotide Frequencies in Human Genome and Fibonacci Numbers", in Bulletin of Mathematical Biology, ISSN 0092-8240 (print), ISSN 1522-9602 (online). PDF full text
- Perez, J.-C. (September 2010). "Codon populations in single-stranded whole human genome DNA are fractal and fine-tuned by the Golden Ratio 1.618". Interdisciplinary Sciences: Computational Life Science. 2 (3): 228–240. doi:10.1007/s12539-010-0022-0. PMID 20658335. PDF full text
- Pommersheim, James E., Tim K. Marks, and Erica L. Flapan, eds. 2010. "Number Theory: A Lively Introduction with Proofs, Applications, and Stories". John Wiley and Sons: 82.
- The golden ratio and aesthetics, by Mario Livio.
- Max. Hailperin; Barbara K. Kaiser; Karl W. Knight (1998). Concrete Abstractions: An Introduction to Computer Science Using Scheme. Brooks/Cole Pub. Co. ISBN 0-534-95211-9.
- Brian Roselle, "Golden Mean Series"
- "A Disco Ball in Space". NASA. 2001-10-09. Retrieved 2007-04-16.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- Chris and Penny. "Quandaries and Queries". Math Central. Retrieved 23 October 2011.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- American Mathematical Monthly, pp. 49-50, 1954.
- Roger Herz-Fischler (2000). The Shape of the Great Pyramid. Wilfrid Laurier University Press. ISBN 0-88920-324-5.
- Koca, Mehmet; Koca, Nazife Ozdes; Koç, Ramazan (2010), "Catalan solids derived from three-dimensional-root systems and quaternions", Journal of Mathematical Physics, 51: 043501, arXiv:0908.3272, doi:10.1063/1.3356985.
- Fibonacci Numbers and Nature - Part 2 : Why is the Golden section the "best" arrangement?, from Dr. Ron Knott's Fibonacci Numbers and the Golden Section, retrieved 2012-11-29.
- Weisstein, Eric W., "Pisot Number", MathWorld.
- Horocycles exinscrits : une propriété hyperbolique remarquable, cabri.net, retrieved 2009-07-21.
- Yee, Alexander J. (17 August 2015). "Golden Ratio". numberword.org. Independent computations done by Ron Watkins and Dustin Kirkland.
- Radio, Astraea Web (2006). The Best of Astraea: 17 Articles on Science, History and Philosophy. Astrea Web Radio. ISBN 1-4259-7040-0.
- Midhat Gazale, Gnomon: From Pharaohs to Fractals, Princeton Univ. Press, 1999
- Eli Maor, Trigonometric Delights, Princeton Univ. Press, 2000
- "The Great Pyramid, The Great Discovery, and The Great Coincidence". Archived from the original on 2014-01-02. Retrieved 2007-11-25. Unknown parameter
|deadurl=ignored (help)<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- Lancelot Hogben, Mathematics for the Million, London: Allen & Unwin, 1942, p. 63., as cited by Dick Teresi, Lost Discoveries: The Ancient Roots of Modern Science—from the Babylonians to the Maya, New York: Simon & Schuster, 2003, p.56
- Burton, David M. (1999). The history of mathematics: an introduction (4th ed.). WCB McGraw-Hill. p. 56. ISBN 0-07-009468-3.
- Bell, Eric Temple (1940). The Development of Mathematics. New York: Dover. p. 40.
- Rice, Michael, Egypt's Legacy: The Archetypes of Western Civilisation, 3000 to 30 B.C pp. 24 Routledge, 2003, ISBN 0-415-26876-1
- S. Giedon, 1957, The Beginnings of Architecture, The A.W. Mellon Lectures in the Fine Arts, 457, as cited in Rice, Michael, Egypt's Legacy: The Archetypes of Western Civilisation, 3000 to 30 B.C pp.24 Routledge, 2003
- Markowsky, George (January 1992). "Misconceptions about the Golden Ratio" (PDF). College Mathematics Journal. Mathematical Association of America. 23 (1): 2–19. doi:10.2307/2686193. JSTOR 2686193.
- Taylor, The Great Pyramid: Why Was It Built and Who Built It?, 1859
- Matila Ghyka The Geometry of Art and Life, New York: Dover, 1977
- Man, John, Gutenberg: How One Man Remade the World with Word (2002) pp. 166–167, Wiley, ISBN 0-471-21823-5. "The half-folio page (30.7 × 44.5 cm) was made up of two rectangles—the whole page and its text area—based on the so called 'golden section', which specifies a crucial relationship between short and long sides, and produces an irrational number, as pi is, but is a ratio of about 5:8."
- Pheasant, Stephen (1998). Bodyspace. London: Taylor & Francis. ISBN 0-7484-0067-2.
- van Laack, Walter (2001). A Better History Of Our World: Volume 1 The Universe. Aachen: van Laach GmbH.
- Ivan Moscovich, Ivan Moscovich Mastermind Collection: The Hinged Square & Other Puzzles, New York: Sterling, 2004
- Peterson, Ivars. "Sea shell spirals". Science News.
- For instance, Osler writes that "38.2 percent and 61.8 percent retracements of recent rises or declines are common," in Osler, Carol (2000). "Support for Resistance: Technical Analysis and Intraday Exchange Rates" (PDF). Federal Reserve Bank of New York Economic Policy Review. 6 (2): 53–68.
- Roy Batchelor and Richard Ramyar, "Magic numbers in the Dow," 25th International Symposium on Forecasting, 2005, p. 13, 31. "Not since the 'big is beautiful' days have giants looked better", Tom Stevenson, The Daily Telegraph, Apr. 10, 2006, and "Technical failure", The Economist, Sep. 23, 2006, are both popular-press accounts of Batchelor and Ramyar's research.
- Doczi, György (2005). The Power of Limits: Proportional Harmonies in Nature, Art, and Architecture. Boston: Shambhala Publications. ISBN 1-59030-259-1.
- Huntley, H. E. (1970). The Divine Proportion: A Study in Mathematical Beauty. New York: Dover Publications. ISBN 0-486-22254-3.
- Joseph, George G. (2000). The Crest of the Peacock: The Non-European Roots of Mathematics (New ed.). Princeton, NJ: Princeton University Press. ISBN 0-691-00659-8.
- Livio, Mario (2002). The Golden Ratio: The Story of PHI, the World's Most Astonishing Number (Hardback ed.). NYC: Broadway (Random House). ISBN 0-7679-0815-5.
- Sahlqvist, Leif (2008). Cardinal Alignments and the Golden Section: Principles of Ancient Cosmography and Design (3rd Rev. ed.). Charleston, SC: BookSurge. ISBN 1-4196-2157-2.
- Schneider, Michael S. (1994). A Beginner's Guide to Constructing the Universe: The Mathematical Archetypes of Nature, Art, and Science. New York: HarperCollins. ISBN 0-06-016939-7.
- Scimone, Aldo (1997). La Sezione Aurea. Storia culturale di un leitmotiv della Matematica. Palermo: Sigma Edizioni. ISBN 978-88-7231-025-0.
- Stakhov, A. P. (2009). The Mathematics of Harmony: From Euclid to Contemporary Mathematics and Computer Science. Singapore: World Scientific Publishing. ISBN 978-981-277-582-5.
- Walser, Hans (2001) [Der Goldene Schnitt 1993]. The Golden Section. Peter Hilton trans. Washington, DC: The Mathematical Association of America. ISBN 0-88385-534-8.
- Hazewinkel, Michiel, ed. (2001), "Golden ratio", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4.
- "Golden Section" by Michael Schreiber, Wolfram Demonstrations Project, 2007.
- Golden Section in Photography: Golden Ratio, Golden Triangles, Golden Spiral
- Weisstein, Eric W., "Golden Ratio", MathWorld.
- "Researcher explains mystery of golden ratio". PhysOrg. December 21, 2009.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>.
- Knott, Ron. "The Golden section ratio: Phi".<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles> Information and activities by a mathematics professor.
- The Pentagram & The Golden Ratio. Green, Thomas M. Updated June 2005. Archived November 2007. Geometry instruction with problems to solve.
- Schneider, Robert P. (2011). "A Golden Pair of Identities in the Theory of Numbers". arXiv:1109.3216 [math.HO]. Proves formulas that involve the golden mean and the Euler totient and Möbius functions.
- The Myth That Will Not Go Away, by Keith Devlin, addressing multiple allegations about the use of the golden ratio in culture.
Indigenous peoples, also referred to as first people, aboriginal people, native people, or autochthonous people, are culturally distinct ethnic groups who are native to a particular place. The term indigenous was first, in its modern context, used by Europeans, who used it to differentiate the indigenous peoples of the Americas from black people who were brought to the Americas as slaves from Africa. It may have first been used in this context by Sir Thomas Browne in 1646, who stated "and although in many parts thereof there be at present swarms of Negroes serving under the Spaniard, yet were they all transported from Africa, since the discovery of Columbus; and are not indigenous or proper natives of America."
Peoples are usually described as indigenous when they maintain traditions or other aspects of an early culture that is associated with a given region. Not all indigenous peoples share this characteristic, as many have adopted substantial elements of a colonizing culture, such as dress, religion or language. Indigenous peoples may be settled in a given region (sedentary) or exhibit a nomadic lifestyle across a large territory, but they are generally historically associated with a specific territory on which they depend. Indigenous societies are found in every inhabited climate zone and continent of the world except Antarctica. It is estimated that there are approximately five thousand indigenous nations throughout the world.
Since at least the 15th century, indigenous peoples' homelands have been invaded and occupied by European colonizers, who initially justified colonization under the authority of the Catholic Church to spread Christianity through the Doctrine of Discovery. Thousands of indigenous nations throughout the world remain occupied by about two hundred political constructs known as states which formed as a result of colonialism. Indigenous peoples continue to face threats to their sovereignty, economic well-being, languages, ways of knowing, and access to the resources on which their cultures depend. Indigenous rights have been set forth in international law by the United Nations, the International Labour Organization, and the World Bank. In 2007, the UN issued a Declaration on the Rights of Indigenous Peoples (UNDRIP) to guide member-state national policies to the collective rights of indigenous peoples, including culture, identity, language and access to employment, health, quality education and natural resources.
Estimates of the total global population of indigenous peoples usually range from 250 million to 600 million. This is because official designations and terminology on who is considered Indigenous vary widely between countries. In settler states colonized by Europeans, such as in the Americas, Australia, New Zealand, and Oceania, Indigenous status is generally unproblematically applied to groups descended from peoples who lived there prior to European settlement. In Asia and Africa, where the majority of indigenous peoples live, indigenous population figures are less clear and may fluctuate dramatically as states tend to underreport the population of indigenous peoples or define them by different terminology.
Indigenous is derived from the Latin word indigena, meaning "sprung from the land, native" and -genus "to be born from." The Latin indigena is based on the Old Latin indu "in, within" + gignere "to beget, produce." Indu is from the archaic Greek word endo, "in, within," which is an extended form of the Proto-Indo-European en or "in." The origins of the term indigenous are not related in any way to the origins of the term Indian, which, until recently, was commonly applied to indigenous peoples of the Americas. Any given people, ethnic group or community may be described as indigenous.
Autochthonous originates from the Greek αὐτός autós meaning self/own, and χθών chthon meaning Earth. The term is based in the Indo-European root dhghem- (earth). The earliest documented use of this term was in 1804.
The term 'indigenous peoples' refers to culturally distinct groups affected by colonization. As a reference to a group of people, the term indigenous first came into use by Europeans who used it to differentiate the indigenous peoples of the Americas from enslaved Africans. It may have first been used in this context by Sir Thomas Browne. In Chapter 10 of Pseudodoxia Epidemica (1646) entitled "Of the Blackness of Negroes," Browne wrote "and although in many parts thereof there be at present swarms of Negroes serving under the Spaniard, yet were they all transported from Africa, since the discovery of Columbus; and are not indigenous or proper natives of America."
In the 1970s, the term was used as a way of linking the experiences, issues, and struggles of groups of colonized people across international borders. At this time 'indigenous people(s)' also began to be used to describe a legal category in indigenous law created in international and national legislation. The use of the 's' in 'peoples' recognizes that there are real differences between different indigenous peoples. James Anaya, former Special Rapporteur on the Rights of Indigenous Peoples, defined indigenous peoples as "living descendants of pre-invasion inhabitants of lands now dominated by others. They are culturally distinct groups that find themselves engulfed by other settler societies born of forces of empire and conquest".
Throughout history, different states have used different terms to designate the groups within their boundaries that are recognized as indigenous peoples under international or national legislation. Indigenous people also include those regarded as indigenous based on their descent from populations that inhabited the country when non-indigenous religions and cultures arrived, or at the establishment of present state boundaries, and who retain some or all of their own social, economic, cultural and political institutions, but who may have been displaced from their traditional domains or resettled outside their ancestral domains.
The status of indigenous groups in this subjugated relationship can in most instances be characterized as effectively marginalized or isolated in comparison to majority groups or the nation-state as a whole. Their ability to influence and participate in the external policies that may exercise jurisdiction over their traditional lands and practices is very frequently limited. This situation can persist even in the case where the indigenous population outnumbers that of the other inhabitants of the region or state; the defining notion here is one of separation from decision and regulatory processes that have some, at least titular, influence over aspects of their community and land rights.
The presence of external laws, claims and cultural mores acts, either potentially or actually, to constrain the practices and observances of an indigenous society. These constraints can be observed even when the indigenous society is regulated largely by its own tradition and custom. They may be purposefully imposed, or arise as an unintended consequence of trans-cultural interaction. They may have a measurable effect, even where countered by other external influences and actions deemed beneficial or that promote indigenous rights and interests.
The first meeting of the United Nations Working Group on Indigenous Populations (WGIP) was on 9 August 1982 and this date is now celebrated as the International Day of the World's Indigenous Peoples. In 1982 the group accepted a preliminary definition by Mr. José R. Martínez-Cobo, Special Rapporteur on Discrimination against Indigenous Populations:
Indigenous communities, peoples, and nations are those that, having a historical continuity with pre-invasion and pre-colonial societies that developed on their territories, consider themselves distinct from other sectors of the societies now prevailing in those territories, or parts of them. They form at present non-dominant sectors of society and are determined to preserve, develop, and transmit to future generations their ancestral territories, and their ethnic identity, as the basis of their continued existence as peoples, in accordance with their own cultural patterns, social institutions and legal systems.
The primary impetus in considering indigenous identity comes from considering the historical impacts of European colonialism. A 2009 United Nations report published by the Secretariat of the Permanent Forum on indigenous Issues stated:
For centuries, since the time of their colonization, conquest or occupation, Indigenous peoples have documented histories of resistance, interface or cooperation with states, thus demonstrating their conviction and determination to survive with their distinct sovereign identities. Indeed, Indigenous peoples were often recognized as sovereign peoples by states, as witnessed by the hundreds of treaties concluded between Indigenous peoples and the governments of the United States, Canada, New Zealand and others. And yet as indigenous populations dwindled, and the settler populations grew ever more dominant, states became less and less inclined to recognize the sovereignty of indigenous peoples. Indigenous peoples themselves, at the same time, continued to adapt to changing circumstances while maintaining their distinct identity as sovereign peoples.
The World Health Organization defines indigenous populations as follows: "communities that live within, or are attached to, geographically distinct traditional habitats or ancestral territories, and who identify themselves as being part of a distinct cultural group, descended from groups present in the area before modern states were created and current borders defined. They generally maintain cultural and social identities, and social, economic, cultural and political institutions, separate from the mainstream or dominant society or culture."
Greek sources of the Classical period acknowledge people whom they referred to as "Pelasgians". These peoples inhabited lands surrounding the Aegean Sea before the subsequent migrations of the Hellenic ancestors claimed by these authors. The disposition and precise identity of this former group is elusive, and sources such as Homer, Hesiod and Herodotus give varying, partially mythological accounts. However, it is clear that these cultures were distinguished by the subsequent Hellenic cultures (and distinct from non-Greek speaking 'foreigners,' termed "barbarians" by the historical Greeks). Greco-Roman society flourished between 330 BCE and 640 CE and commanded successive waves of conquests that gripped more than half of the known world at the time. But because the populations already present in other parts of Europe in classical antiquity had more in common culturally with the Greco-Roman world, expansion across the European frontier was not as contentious with respect to indigenous issues.
The Catholic Church and the Doctrine of Discovery
The Doctrine of Discovery is a legal and religious concept tied to the Roman Catholic Church which rationalized and 'legalized' colonization and the conquering of indigenous peoples in the eyes of Christianized Europeans. The roots of the Doctrine go back as far as the fifth-century popes and leaders in the church who had ambitions of forming a global Christian commonwealth. The Crusades (1096-1271) were based on this ambition of a holy war against those the church saw as infidels. Pope Innocent IV's writings from 1240 were particularly influential. He argued that Christians were justified in invading and acquiring infidels' lands because it was the church's duty to control the spiritual health of all humans on Earth.
The Doctrine developed further in the 15th century after the conflict between the Teutonic Knights and Poland to control 'pagan' Lithuania. At the Council of Constance (1414), the Knights argued that their claims were "authorized by papal proclamations dating from the time of the Crusades [which] allowed the outright confiscation of the property and sovereign rights of heathens." The council disagreed, stating that non-Christians had claims to rights of sovereignty and property under European natural law. However, the council upheld that conquests could 'legally' occur if non-Christians refused to comply with Christianization and European natural law. This effectively meant that peoples who were not considered 'civilized' by European standards or otherwise refused to assimilate under Christian authority were subject to war and forced assimilation: "Christians simply refused to recognize the right of non-Christians to remain free of Christian dominion."
Christian Europeans had already begun invading and colonizing lands outside of Europe before the Council of Constance, demonstrating how the Doctrine was applied to non-Christian indigenous peoples outside Europe. In the 14th and 15th centuries, the indigenous peoples of what are now referred to as the Canary Islands, known as Guanches (who had lived on the islands since the BCE era) became the subject of colonizers' attention. The Guanches had remained undisturbed and relatively 'forgotten' by Europeans until Portugal began surveying the island for potential settlement in 1341. In 1344, a papal bull was issued which assigned the islands to Castile, a kingdom in Spain. In 1402, the Spanish began efforts to invade and colonize the islands. By 1436, a new papal edict was issued by Pope Eugenius IV known as Romanus Pontifex which authorized Portugal to convert the indigenous peoples to Christianity and control the islands on behalf of the pope. The Guanches resisted European invasion until the surrender of the Guanche kings of Tenerife to Spain in 1496. The invaders brought destruction and diseases to the Guanche people, whose identity and culture disappeared as a result.
As Portugal expanded southward into North Africa in the 15th century, new edicts were added by subsequent popes which extended Portuguese authority over indigenous peoples. In 1455, Pope Nicholas V re-issued the Romanus Pontifex with more direct language, authorizing Portugal "to invade, search out, capture, vanquish, and subdue all Saracens and pagans" as well as allowing non-Christians to be placed in slavery and have their property stolen. As stated by Robert J. Miller, Jacinta Ruru, Larissa Behrendt, and Tracey Lindberg, the doctrine developed over time "to justify the domination of non-Christian, non-European peoples and the confiscations of their lands and rights." Because Portugal was granted 'permissions' by the papacy to expand in Africa, Spain was urged to move westward across the Atlantic Ocean, searching to convert and conquer indigenous peoples in what they would understand as the 'New World.' This division of the world between Spain and Portugal was formalized with the Treaty of Tordesillas in 1494.
Spanish King Ferdinand and Queen Isabella hired Christopher Columbus, who was dispatched in 1492, to colonize and bring new lands under the Spanish crown. Columbus 'discovered' a few islands in the Caribbean as early as 1493 and Ferdinand and Isabella immediately asked the pope to 'ratify' the discovery. In 1493, Pope Alexander VI issued the Inter caetera divinai, which affirmed that since the islands had been "undiscovered by others" that they were now under Spanish authority. Alexander granted Spain any lands that it discovered as long as they had not been "previously possessed by any Christian owner." The beginnings of European colonialism in the 'New World' effectively formalized the Doctrine of Discovery into 'international law,' which at that time meant law that was agreed upon by Spain, Portugal, and the Catholic Church. Indigenous peoples were not consulted or included in these arrangements.
European colonialism in the 'New World'
Spain issued the Spanish Requirement of 1513 (Requiremento), a document that was intended to inform indigenous peoples that "they must accept Spanish missionaries and sovereignty or they would be annihilated." The document was supposed to be read to indigenous peoples so that they theoretically could accept or reject the proposal before any war against them could be waged: "the Requiremento informed the natives of their natural law obligations to hear the gospel and that their lands had been donated to Spain." Refusal by indigenous peoples meant that, in the Spaniard's eyes, war could 'justifiably' be waged against them. Many conquistadors apparently feared that, if given the option, indigenous peoples would actually accept Christianity, which would legally not permit invasion of their lands and the theft of their belongings. Legal scholars Robert J. Miller, Jacinta Rura, Larissa Behrendt, and Tracey Lindberg record that this commonly resulted in Spanish invaders reading the document aloud "in the night to the trees" or reading it "to the land from their ships." The scholars remark: "so much for legal formalism and the free will and natural law rights of New World Indigenous peoples."
Being Catholic countries in 1493, England as well as France worked to 're-interpret' the Doctrine of Discovery to serve their own colonial interests. In the 16th century, England established a new interpretation of the Doctrine: "the new theory, primarily developed by English legal scholars, argued that the Catholic King Henry VII of England, would not violate the 1493 papal bulls, which divided the world for the Spanish and Portuguese." This interpretation was also supported by Elizabeth I's legal advisors in the 1580s and effectively set a precedent among European colonial nations that the first Christian nation to occupy land was the 'legal' owner and that this had to be respected in international law. This rationale was used in the colonization of what was to become the American colonies. James I stated in the First Virginia Charter (1606) and the Charter to the Council of New England (1620) that colonists could be given property rights because the lands were "not now actually possessed by any Christian Prince or People...." English monarchs issued that colonists should spread Christianity "to those [who] as yet live in Darkness and miserable Ignorance of the true Knowledge and Worship of God, [and] to bring the Infidels and Savages, living in those parts, to human civility, and to a settled and quiet Government."
This approach to colonization of indigenous lands resulted in an acceleration of exploration and land claiming, particularly by France, England, and Holland. Land claims were made through symbolic "rituals of discovery" that were performed to illustrate the colonizing nation's legal claim to the land. Markers of possession such as crosses, flags, and plates claiming possession and other symbols became important in this contest to claim indigenous lands. In 1642, Dutch explorers were ordered to set up posts and a plate that asserted their intention to establish a colony on the land. In the 1740s, French explorers buried lead plates at various locations to reestablish their 17th century land claims to Ohio country. The French plates were later discovered by indigenous peoples of the Ohio River. Upon contact with English explorers, the English noted that the lead plates were monuments "of the renewal of [French] possession" of the land. In 1774, Captain James Cook attempted to invalidate Spanish land claims to Tahiti by removing their marks of possession and then proceeding to set up English marks of possession. When the Spanish learned of this action, they quickly sent an explorer to reestablish their claim to the land.
The English developed the legal concept of terra nullius (land that is null or void) or vacuum domicilium (empty or vacant house) to validate their lands claims over indigenous peoples' homelands. This concept formalized the idea that lands which were not being used in a manner that European legal systems approved of were open for European colonization. Historian Henry Reynolds captured this perspective in his statement that "Europeans regarded North America as a vacant land that could be claimed by right of discovery." These new legal concepts were developed in order to diminish reliance on papal authority to authorize or justify colonization claims.
As the 'rules' of colonization became established into legal doctrine agreed upon between European colonial powers, methods of laying claims to indigenous lands continued to expand rapidly. As encounters between European colonizers and indigenous populations in the rest of the world accelerated, so did the introduction of infectious diseases, which sometimes caused local epidemics of extraordinary virulence. For example, smallpox, measles, malaria, yellow fever, and other diseases were unknown in the pre-Columbian Americas and Oceania.
Settler independence and continuing colonialism
Although the establishment of colonies throughout the world by various European powers was intended to expand their nation's wealth and influence, settler populations in some localities became anxious to assert their own autonomy. For example, settler independence movements in the American colonies were successful by 1783, following the American Revolutionary War. This resulted in the establishment of the United States of America as a separate entity from British Empire. The United States continued and expanded European colonial doctrine through adopting the Doctrine of Discovery as the law of the American federal government in 1823 with the US Supreme Court case Johnson v. M'Intosh. Statements at the Johnson court case illuminated the United States' support for the principles of the discovery doctrine:
The United States ... [and] its civilized inhabitants now hold this country. They hold, and assert in themselves, the title by which it was acquired. They maintain, as all others have maintained, that discovery gave an exclusive right to extinguish the Indian title of occupancy, either by purchase or by conquest; and gave also a right to such a degree of sovereignty, as the circumstances of the people would allow them to exercise. ... [This loss of native property and sovereignty rights was justified, the Court said, by] the character and religion of its inhabitants ... the superior genius of Europe ... [and] ample compensation to the [Indians] by bestowing on them civilization and Christianity, in exchange for unlimited independence.
Population and distribution
Indigenous societies range from those who have been significantly exposed to the colonizing or expansionary activities of other societies (such as the Maya peoples of Mexico and Central America) through to those who as yet remain in comparative isolation from any external influence (such as the Sentinelese and Jarawa of the Andaman Islands).
Precise estimates for the total population of the world's indigenous peoples are very difficult to compile, given the difficulties in identification and the variances and inadequacies of available census data. The United Nations estimates that there are over 370 million indigenous people living in over 70 countries worldwide. This would equate to just fewer than 6% of the total world population. This includes at least 5,000 distinct peoples in over 72 countries.
Contemporary distinct indigenous groups survive in populations ranging from only a few dozen to hundreds of thousands and more. Many indigenous populations have undergone a dramatic decline and even extinction, and remain threatened in many parts of the world. Some have also been assimilated by other populations or have undergone many other changes. In other cases, indigenous populations are undergoing a recovery or expansion in numbers.
Certain indigenous societies survive even though they may no longer inhabit their "traditional" lands, owing to migration, relocation, forced resettlement or having been supplanted by other cultural groups. In many other respects, the transformation of culture of indigenous groups is ongoing, and includes permanent loss of language, loss of lands, encroachment on traditional territories, and disruption in traditional ways of life due to contamination and pollution of waters and lands.
Environmental and economic benefits of having indigenous peoples tend land
A WRI report mentions that “tenure-secure” indigenous lands generate billions and sometimes trillions of dollars’ worth of benefits in the form of carbon sequestration, reduced pollution, clean water and more. It says that tenure-secure indigenous lands have low deforestation rates, help to reduce GHG emissions, control erosion and flooding by anchoring soil, and provide a suite of other local, regional and global ecosystem services. However, many of these communities find themselves on the front lines of the deforestation crisis, and their lives and livelihoods are threatened.
Indigenous peoples by region
Indigenous populations are distributed in regions throughout the globe. The numbers, condition and experience of indigenous groups may vary widely within a given region. A comprehensive survey is further complicated by sometimes contentious membership and identification.
In the post-colonial period, the concept of specific indigenous peoples within the African continent has gained wider acceptance, although not without controversy. The highly diverse and numerous ethnic groups that comprise most modern, independent African states contain within them various peoples whose situation, cultures and pastoralist or hunter-gatherer lifestyles are generally marginalized and set apart from the dominant political and economic structures of the nation. Since the late 20th century these peoples have increasingly sought recognition of their rights as distinct indigenous peoples, in both national and international contexts.
Though the vast majority of African peoples are indigenous in the sense that they originate from that continent, in practice, identity as an indigenous people per the modern definition is more restrictive, and certainly not every African ethnic group claims identification under these terms. Groups and communities who do claim this recognition are those who, by a variety of historical and environmental circumstances, have been placed outside of the dominant state systems, and whose traditional practices and land claims often come into conflict with the objectives and policies implemented by governments, companies and surrounding dominant societies.
Indigenous peoples of the American continent are broadly recognized as being those groups and their descendants who inhabited the region before the arrival of European colonizers and settlers (i.e., Pre-Columbian). Indigenous peoples who maintain, or seek to maintain, traditional ways of life are found from the high Arctic north to the southern extremities of Tierra del Fuego.
The impacts of historical and ongoing European colonization of the Americas on indigenous communities have been in general quite severe, with many authorities estimating ranges of significant population decline primarily due to disease, land theft and violence. Several peoples have become extinct, or very nearly so. But there are and have been many thriving and resilient indigenous nations and communities.
In Mexico, about 25 million people self-reported as indigenous in 2015. Some estimates put the indigenous population of Mexico as high as 40-65 million people, making it the country with the highest indigenous population in North America. In the southern states of Oaxaca (65.73%) and Yucatán (65.40%), the majority of the population is indigenous, as reported in 2015. Other states with high populations of indigenous peoples include Campeche (44.54%), Quintana Roo, (44.44%), Hidalgo, (36.21%), Chiapas (36.15%), Puebla (35.28%), and Guerrero (33.92%).
Indigenous peoples in Canada comprise the First Nations, Inuit and Métis. The descriptors "Indian" and "Eskimo" have fallen into disuse in Canada. More currently, the term "Aboriginal" is being replaced with "Indigenous". Several national organizations in Canada changed their names from “Aboriginal” to “Indigenous.” Most notable was the change of Aboriginal Affairs and Northern Development Canada (AANDC) to Indigenous and Northern Affairs Canada (INAC) in 2015, which then split into Indigenous Services Canada and Crown-Indigenous Relations and Northern Development Canada in 2017. According to the 2016 Census, there are over 1,670,000 indigenous peoples in Canada. There are currently over 600 recognized First Nations governments or bands spread across Canada, such as the Cree, Mohawk, Mikmaq, Blackfoot, Coast Salish, Innu, Dene and more, with distinctive indigenous cultures, languages, art, and music. First Nations peoples signed 11 numbered treaties across much of what is now known as Canada between 1871 and 1921, except in parts of British Columbia. Many treaty promises have been historically and contemporarily broken.
The Inuit have achieved a degree of administrative autonomy with the creation in 1999 of the territories of Nunavik (in Northern Quebec), Nunatsiavut (in Northern Labrador) and Nunavut, which was until 1999 a part of the Northwest Territories. The autonomous territory of Greenland within the Kingdom of Denmark is also home to a recognised indigenous and majority population of Inuit (about 85%) who settled the area in the 13th century, displacing the indigenous Dorset people and Greenlandic Norse.
In the United States, the combined populations of Native Americans, Inuit and other indigenous designations totaled 2,786,652 (constituting about 1.5% of 2003 U.S. census figures). Some 563 scheduled tribes are recognized at the federal level, and a number of others recognized at the state level.
In some countries (particularly in Latin America), indigenous peoples form a sizable component of the overall national population — in Bolivia, they account for an estimated 56–70% of the total nation, and at least half of the population in Guatemala and the Andean and Amazonian nations of Peru. In English, indigenous peoples are collectively referred to by different names that vary by region and include such ethnonyms as Native Americans, Amerindians, and American Indians. In Spanish or Portuguese speaking countries, one finds the use of terms such as índios, pueblos indígenas, amerindios, povos nativos, povos indígenas, and, in Peru, Comunidades Nativas (Native Communities), particularly among Amazonian societies like the Urarina and Matsés. In Chile, there are indigenous peoples like the Mapuches in the Center-South and the Aymaras in the North; also the Rapa Nui indigenous to Easter Island are a Polynesian people.
The Amerindians make up 0.4% of all Brazilian population, or about 700,000 people. Indigenous peoples are found in the entire territory of Brazil, although the majority of them live in Indian reservations in the North and Center-Western part of the country. On 18 January 2007, FUNAI reported that it had confirmed the presence of 67 different uncontacted peoples in Brazil, up from 40 in 2005. With this addition Brazil has now overtaken the island of New Guinea as the country having the largest number of uncontacted peoples.
The vast regions of Asia contain the majority of the world's present-day indigenous populations, about 70% according to IWGIA figures.
- Armenians are the indigenous people of the Armenian Highlands. More Armenians now live outside their ancestral homeland than within it, largely because of the Armenian genocide of 1915.
- Assyrians are indigenous to Mesopotamia. They claim descent from the ancient Neo-Assyrian Empire, lived in what was Assyria, their original homeland, and still speak dialects of Aramaic, the official language of the Assyrian Empire.
- Yazidis are an indigenous people of Upper Mesopotamia.
- Kurds are one of the indigenous peoples of Mesopotamia.
There are claims that Jews and Bedouin are indigenous to the Land of Israel/Palestine, which forms the territory of the modern State of Israel and the Palestinian territories. The World Directory of Minorities and Indigenous Peoples recognizes the Negev Bedouin as indigenous to modern-day Israel.
The most substantial populations of indigenous people are in India, which constitutionally recognizes a range of "Scheduled Tribes" within its borders. These various peoples number about 200 million, although the terms "indigenous peoples" and "tribal peoples" are not synonymous.
There are also indigenous people residing in the hills of Northern, North-eastern and Southern India like the Tamils (of Tamil Nadu), Shina, Kalasha, Khowar, Burusho, Balti, Wakhi, Domaki, Nuristani, Kohistani, Gujjar and Bakarwal, Kashmiri (of Jammu and Kashmir), Bheel, Ladakhi, Lepcha, Bhutia (of Sikkim), Naga (of Nagaland), indigenous Assamese communities, Mizo (of Mizoram), Tripuri (Tripura), Adi and Nyishi (Arunachal Pradesh), Kodava (of Kodagu), Toda, Kurumba, Kota (of the Nilgiris), Irulas and others.
India's Andaman and Nicobar Islands in the Indian Ocean are also home to several indigenous groups, such as the Andamanese of Strait Island, the Jarawas of Middle Andaman and South Andaman Islands, the Onge of Little Andaman Island and the uncontacted Sentinelese of North Sentinel Island. They are registered and protected by the Indian government.
In Sri Lanka, the indigenous Vedda people constitute a small minority of the population today.
The Russians invaded Siberia and conquered the indigenous people in the 17th–18th centuries.
The Nivkh people are an ethnic group indigenous to Sakhalin. Only a few speakers of the Nivkh language remain, and their fishing culture has been endangered by the development of Sakhalin's oil fields since the 1990s.
In Russia, the definition of "indigenous peoples" is contested: it relies largely on a population threshold (fewer than 50,000 people) and neglects self-identification, descent from the populations that inhabited a country or region before invasion, colonization or the establishment of state frontiers, and the maintenance of distinctive social, economic and cultural institutions. Thus, peoples of Russia such as the Sakha, Komi, Karelians and others are not considered indigenous because their populations exceed 50,000, and consequently they "are not the subjects of the specific legal protections." The Russian government recognizes only 40 ethnic groups as indigenous peoples, even though some 30 other groups could be counted as such. The grounds for non-recognition are population size and relatively late arrival in their current regions; to be counted as indigenous in Russia, a people must number fewer than 50,000.
Ainu people are an ethnic group indigenous to Hokkaidō, the Kuril Islands, and much of Sakhalin. As Japanese settlement expanded, the Ainu were pushed northward and fought against the Japanese in Shakushain's Revolt and Menashi-Kunashir Rebellion, until by the Meiji period they were confined by the government to a small area in Hokkaidō, in a manner similar to the placing of Native Americans on reservations. In a ground-breaking 1997 decision involving the Ainu people of Japan, the Japanese courts recognized their claim in law, stating that "If one minority group lived in an area prior to being ruled over by a majority group and preserved its distinct ethnic culture even after being ruled over by the majority group, while another came to live in an area ruled over by a majority after consenting to the majority rule, it must be recognized that it is only natural that the distinct ethnic culture of the former group requires greater consideration."
The Tibetans are indigenous to Tibet.
The languages of Taiwanese aborigines have significance in historical linguistics, since in all likelihood Taiwan was the place of origin of the entire Austronesian language family, which spread across Oceania.
In Hong Kong, the indigenous inhabitants of the New Territories are defined in the Sino-British Joint Declaration as people descended through the male line from a person who was, in 1898, before the Convention for the Extension of Hong Kong Territory, a resident of the area. Several different groups make up the indigenous inhabitants: the Punti, Hakka, Hoklo, and Tanka. All are nonetheless considered part of the Cantonese majority, although some, like the Tanka, have been shown to have genetic and anthropological roots in the Baiyue people, the pre-Han Chinese inhabitants of Southern China.
The Malay Singaporeans are the indigenous people of Singapore, inhabiting it since the Austronesian migration. They had established the Kingdom of Singapura back in the 13th century. The name Singapore itself comes from the Malay word Singapura (Singa=Lion, Pura=City) which means the Lion City.
The Cham are the indigenous people of the former state of Champa which was conquered by Vietnam in the Cham–Vietnamese wars during Nam tiến. The Cham in Vietnam are only recognized as a minority, and not as an indigenous people by the Vietnamese government despite being indigenous to the region.
In Indonesia, there are 50 to 70 million people who are classified as indigenous peoples. However, the Indonesian government does not recognize the existence of indigenous peoples, classifying every Native Indonesian ethnic group as equally "indigenous" despite the clear cultural distinctions of certain groups. This problem is shared by many other countries in the ASEAN region.
In the Philippines, there are 135 ethno-linguistic groups, the majority of which are considered indigenous peoples by the mainstream indigenous ethnic groups in the country. The indigenous people of the Cordillera Administrative Region and Cagayan Valley in the Philippines are the Igorot people. The indigenous peoples of Mindanao are the Lumad peoples and the Moro (Tausug, Maguindanao, Maranao and others), who also live in the Sulu archipelago. There are other groups of indigenous peoples in Palawan, Mindoro, the Visayas, and the rest of central and southern Luzon. The country has one of the largest indigenous populations in the world.
In Myanmar, indigenous peoples include the Shan, the Karen, the Rakhine, the Karenni, the Chin, the Kachin and the Mon. However, there are more ethnic groups that are considered indigenous, for example, the Akha, the Lisu, the Lahu or the Mru, among others.
Ethnic Europeans are indigenous to the region, having lived there for millennia. However, the UN recognizes very few indigenous populations of Europe, mostly people of non-European origin, confined to the far north and far east of the continent.
Notable indigenous minority populations in Europe that are recognized by the UN include the Finno-Ugric Nenets, Samoyed, and Komi peoples of northern Russia; Circassians of southern Russia and the North Caucasus; Crimean Tatars of Crimea in Ukraine; Sámi peoples of northern Norway, Sweden, and Finland and northwestern Russia (in an area also referred to as Sápmi); Basques of Basque Country, Spain and southern France; and the Sorbian people of Germany and Poland.
In Australia, the indigenous populations are the Aboriginal Australian peoples (comprising many different nations and tribes) and the Torres Strait Islander peoples (also with sub-groups). These groups are often spoken of together as Indigenous Australians.
Polynesian, Melanesian and Micronesian peoples originally populated many of the present-day Pacific Island countries in the Oceania region over the course of thousands of years. European, American, Chilean and Japanese colonial expansion in the Pacific brought many of these areas under non-indigenous administration, mainly during the 19th century. During the 20th century, several of these former colonies gained independence and nation-states formed under local control. However, various peoples have put forward claims for indigenous recognition where their islands are still under external administration; examples include the Chamorros of Guam and the Northern Marianas, and the Marshallese of the Marshall Islands. Some islands remain under administration from Paris, Washington, London or Wellington.
In most parts of Oceania, indigenous peoples outnumber the descendants of colonists. Exceptions include Australia, New Zealand and Hawaii. According to the 2013 census, New Zealand Māori make up 14.9% of the population of New Zealand, with less than half (46.5%) of all Māori residents identifying solely as Māori. The Māori are indigenous to Polynesia and settled New Zealand relatively recently, with migrations thought to have occurred in the 13th century CE. In New Zealand, pre-contact Māori groups did not necessarily see themselves as a single people, thus grouping into tribal (iwi) arrangements has become a more formal arrangement in more recent times. Many Māori national leaders signed a treaty with the British, the Treaty of Waitangi (1840), seen in some circles as forming the modern geo-political entity that is New Zealand.
A majority of the Papua New Guinea (PNG) population is indigenous, with more than 700 different nationalities recognized in a total population of 8 million. The country's constitution and key statutes identify traditional or custom-based practices and land tenure, and explicitly set out to promote the viability of these traditional societies within the modern state. However, conflicts and disputes concerning land use and resource rights continue between indigenous groups, the government, and corporate entities.
Indigenous rights and other issues
Indigenous peoples confront a diverse range of concerns associated with their status and interaction with other cultural groups, as well as changes in their inhabited environment. Some challenges are specific to particular groups; however, other challenges are commonly experienced. These issues include cultural and linguistic preservation, land rights, ownership and exploitation of natural resources, political determination and autonomy, environmental degradation and incursion, poverty, health, and discrimination.
The interactions between indigenous and non-indigenous societies throughout history and contemporarily have been complex, ranging from outright conflict and subjugation to some degree of mutual benefit and cultural transfer. A particular aspect of anthropological study involves investigation into the ramifications of what is termed first contact, the study of what occurs when two cultures first encounter one another. The situation can be further confused when there is a complicated or contested history of migration and population of a given region, which can give rise to disputes about primacy and ownership of the land and resources.
Wherever indigenous cultural identity is asserted, common societal issues and concerns arise from the indigenous status. These concerns are often not unique to indigenous groups. Despite the diversity of indigenous peoples, it may be noted that they share common problems and issues in dealing with the prevailing, or invading, society. They are generally concerned that the cultures and lands of indigenous peoples are being lost and that indigenous peoples suffer both discrimination and pressure to assimilate into their surrounding societies. This is borne out by the fact that the lands and cultures of nearly all of the peoples listed at the end of this article are under threat. Notable exceptions are the Sakha and Komi peoples (two northern indigenous peoples of Russia), who now control their own autonomous republics within the Russian state, and the Canadian Inuit, who form a majority of the population of the territory of Nunavut (created in 1999). Despite controlling their territories, many Sakha people have lost their lands as a result of the Russian Homestead Act, which allows any Russian citizen to own any land in the Far Eastern region of Russia. In Australia, a landmark case, Mabo v Queensland (No 2), saw the High Court of Australia reject the idea of terra nullius; this rejection amounted to recognition that a pre-existing system of law was practised by the Meriam people.
A 2009 United Nations publication says "Although indigenous peoples are often portrayed as a hindrance to development, their cultures and traditional knowledge are also increasingly seen as assets. It is argued that it is important for the human species as a whole to preserve as wide a range of cultural diversity as possible, and that the protection of indigenous cultures is vital to this enterprise."
Human rights violations
The Bangladesh Government has stated that there are "no indigenous peoples in Bangladesh." This has angered the indigenous peoples of Chittagong Hill Tracts, Bangladesh, collectively known as the Jumma. Experts have protested against this move of the Bangladesh Government and have questioned the Government's definition of the term "indigenous peoples." This move by the Bangladesh Government is seen by the indigenous peoples of Bangladesh as another step by the Government to further erode their already limited rights.
Hindus and Chams have both experienced religious and ethnic persecution and restrictions on their faith under the current Vietnamese government, with the Vietnamese state confiscating Cham property and forbidding Cham from observing their religious beliefs. Hindu temples were turned into tourist sites against the wishes of the Cham Hindus. In 2010 and 2013 several incidents occurred in Thành Tín and Phươc Nhơn villages where Cham were murdered by Vietnamese. In 2012, Vietnamese police in Chau Giang village stormed into a Cham Mosque, stole the electric generator, and also raped Cham girls. Cham in the Mekong Delta have also been economically marginalised, with ethnic Vietnamese settling on land previously owned by Cham people with state support.
The Indonesian government has outright denied the existence of indigenous peoples within the country's borders. In 2012, Indonesia stated that ‘The Government of Indonesia supports the promotion and protection of indigenous people worldwide ... Indonesia, however, does not recognize the application of the indigenous peoples concept ... in the country’. This stance, along with the brutal treatment of the country's Papuan people (a conservative estimate places violent deaths at 100,000 people in West New Guinea since the Indonesian occupation began in 1963; see Papua Conflict), has led Survival International to condemn Indonesia's treatment of its indigenous peoples as among the worst in the world.
The Vietnamese viewed and dealt with the indigenous Montagnards of the Central Highlands as "savages," which caused a Montagnard uprising against the Vietnamese. The Vietnamese were originally centered around the Red River Delta but engaged in conquest and seized new lands such as Champa, the Mekong Delta (from Cambodia) and the Central Highlands during Nam Tien. While the Vietnamese received strong Chinese influence in their culture and civilization and were Sinicized, and the Cambodians and Laotians were Indianized, the Montagnards in the Central Highlands maintained their own indigenous culture without adopting external cultures and were the true indigenous people of the region. To hinder encroachment on the Central Highlands by Vietnamese nationalists, the term Pays Montagnard du Sud-Indochinois (PMSI) emerged for the Central Highlands, along with the indigenous people being addressed by the name Montagnard. The tremendous influx of Vietnamese Kinh colonists into the Central Highlands has significantly altered the demographics of the region. Discriminatory policies against ethnic minorities, environmental degradation, the deprivation of the indigenous people's lands, and the settlement of indigenous lands by an overwhelming number of Vietnamese settlers led to massive protests and demonstrations by the Central Highlands' indigenous ethnic minorities against the Vietnamese in January–February 2001. This event dealt a tremendous blow to the claim often published by the Vietnamese government that in Vietnam "There has been no ethnic confrontation, no religious war, no ethnic conflict. And no elimination of one culture by another."
In May 2016, the Fifteenth Session of the United Nations Permanent Forum on Indigenous Issues (UNPFII) affirmed that indigenous peoples are distinctive groups protected in international or national legislation as having a set of specific rights based on their linguistic and historical ties to a particular territory, prior to later settlement, development, and/or occupation of a region. The session affirmed that, since indigenous peoples are vulnerable to exploitation, marginalization, oppression, forced assimilation, and genocide by nation states formed from colonizing populations or by different, politically dominant ethnic groups, individuals and communities maintaining ways of life indigenous to their regions are entitled to special protection.
In December 1993, the United Nations General Assembly proclaimed the International Decade of the World's Indigenous People, and requested UN specialized agencies to consider with governments and indigenous people how they can contribute to the success of the Decade of Indigenous People, commencing in December 1994. As a consequence, the World Health Organization, at its Forty-seventh World Health Assembly, established a core advisory group of indigenous representatives with special knowledge of the health needs and resources of their communities, thus beginning a long-term commitment to the issue of the health of indigenous peoples.
The WHO notes that "Statistical data on the health status of indigenous peoples is scarce. This is especially notable for indigenous peoples in Africa, Asia and eastern Europe," but snapshots from various countries (where such statistics are available) show that indigenous people are in worse health than the general population, in advanced and developing countries alike: higher incidence of diabetes in some regions of Australia; higher prevalence of poor sanitation and lack of safe water among Twa households in Rwanda; a greater prevalence of childbirths without prenatal care among ethnic minorities in Vietnam; suicide rates among Inuit youth in Canada are eleven times higher than the national average; infant mortality rates are higher for Indigenous peoples everywhere.
The first UN publication on the State of the World's Indigenous Peoples revealed alarming statistics about indigenous peoples' health. Health disparities between indigenous and non-indigenous populations are evident in both developed and developing countries. Native Americans in the United States are 600 times more likely to acquire tuberculosis and 62% more likely to commit suicide than the non-Indian population. Tuberculosis, obesity, and type 2 diabetes are major health concerns for the indigenous in developed countries. Globally, health disparities touch upon nearly every health issue, including HIV/AIDS, cancer, malaria, cardiovascular disease, malnutrition, parasitic infections, and respiratory diseases, affecting indigenous peoples at much higher rates. Many causes of indigenous children's mortality could be prevented. Poorer health conditions amongst indigenous peoples result from longstanding societal issues, such as extreme poverty and racism, but also the intentional marginalization and dispossession of indigenous peoples by dominant, non-indigenous populations and societal structures.
Racism and discrimination
Indigenous peoples have frequently been subjected to various forms of racism and discrimination. They have been labeled "primitive", "savage" or "uncivilized". These terms were common during the heyday of European colonial expansion, but they remain in use in certain societies in modern times.
During the 17th century, Europeans commonly labeled indigenous peoples as "uncivilized". Some philosophers, such as Thomas Hobbes (1588-1679), considered indigenous people to be merely "savages". Others (especially literary figures in the 18th century) popularised the concept of "noble savages". Those who were close to the Hobbesian view tended to believe themselves to have a duty to "civilize" and "modernize" the indigenous. Although anthropologists, especially from Europe, once applied these terms to all tribal cultures, the practice has fallen into disfavor as demeaning and is, according to many anthropologists, not only inaccurate but dangerous.
Survival International runs a campaign to stamp out media portrayal of indigenous peoples as "primitive" or "savages". Friends of Peoples Close to Nature considers not only that indigenous culture should be respected as not being inferior, but also sees indigenous ways of life as offering frameworks for sustainability and as a part of the struggle within the "corrupted" western world, from which the threat stems.
After World War I (1914-1918), many Europeans came to doubt the morality of the means used to "civilize" peoples. At the same time, the anti-colonial movement, and advocates of indigenous peoples, argued that words such as "civilized" and "savage" were products and tools of colonialism, and argued that colonialism itself was savagely destructive. In the mid-20th century, European attitudes began to shift to the view that indigenous and tribal peoples should have the right to decide for themselves what should happen to their ancient cultures and ancestral lands.
At an international level, indigenous peoples have received increased recognition of their environmental rights since 2002, but few countries respect these rights in reality. The UN Declaration on the Rights of Indigenous Peoples, adopted by the General Assembly in 2007, established indigenous peoples' right to self-determination, implying several rights regarding natural resource management. In countries where these rights are recognized, land titling and demarcation procedures are often delayed, or the land is leased out by the state as concessions for extractive industries without consulting indigenous communities.
Many in the United States federal government are in favor of exploiting oil reserves in the Arctic National Wildlife Refuge, where the Gwich'in indigenous people rely on herds of caribou. Oil drilling could destroy thousands of years of culture for the Gwich'in. On the other hand, some of the Inupiat Eskimo, another indigenous community in the region, favor oil drilling because they could benefit economically.
The introduction of industrial agricultural technologies such as fertilizers and pesticides, and of large plantation schemes, has destroyed ecosystems that indigenous communities formerly depended on, forcing resettlement. Development projects such as dam construction, pipelines and resource extraction have displaced large numbers of indigenous peoples, often without providing compensation. Governments have forced indigenous peoples off their ancestral lands in the name of ecotourism and national park development. Indigenous women are especially affected by land dispossession because they must walk longer distances for water and fuel wood. These women also become economically dependent on men when they lose their livelihoods. When indigenous groups have asserted their rights, the response has most often been torture, imprisonment, or death.
Most indigenous populations are already subject to the deleterious effects of climate change. Climate change has not only environmental, but also human rights and socioeconomic implications for indigenous communities. The World Bank acknowledges climate change as an obstacle to Millennium Development Goals, notably the fight against poverty, disease, and child mortality, in addition to environmental sustainability.
Use of indigenous knowledge
- Collective rights
- Cultural appropriation
- Ethnic minority
- Genocide of indigenous peoples
- Human rights
- The Image Expedition
- Indigenous Futurisms
- Indigenous intellectual property
- Indigenous Peoples Climate Change Assessment Initiative
- Indigenous rights
- Intangible cultural heritage
- International Day of the World's Indigenous Peoples
- Canadian National Indigenous Peoples Day
- U.S. Indigenous Peoples' Day
- List of active NGOs of national minorities
- List of ethnic groups
- List of indigenous peoples
- Missing and murdered Indigenous women
- Uncontacted peoples
- United Nations Permanent Forum on Indigenous Issues
- Unrepresented Nations and Peoples Organization
- Virgin soil epidemic
- Mathewson, Kent (2004). "Drugs, Moral Geographies, and Indigenous Peoples: Some Initial Mappings and Central Issues". Dangerous Harvest: Drug Plants and the Transformation of Indigenous Landscapes. Oxford University Press. p. 13. ISBN 9780195143195.
As Sir Thomas Browne remarked in 1646 (this seems to be the first usage in its modern sense).
- Browne, Sir Thomas (1646). "Pseudodoxia Epidemica, Chap. X. Of the Blackness of Negroes". University of Chicago. Retrieved 3 March 2021.
- "Who are the indigenous and tribal peoples?". www.ilo.org. 22 July 2016.
- Acharya, Deepak and Shrivastava Anshu (2008): Indigenous Herbal Medicines: Tribal Formulations and Traditional Herbal Practices, Aavishkar Publishers Distributor, Jaipur, India. ISBN 978-81-7910-252-7. p. 440
- LaDuke, Winona (1997). "Voices from White Earth: Gaa-waabaabiganikaag". People, Land, and Community: Collected E.F. Schumacher Society Lectures. Yale University Press. pp. 24–25. ISBN 9780300071733.
- Miller, Robert J.; Ruru, Jacinta; Behrendt, Larissa; Lindberg, Tracey (2010). Discovering Indigenous Lands: The Doctrine of Discovery in the English Colonies. OUP Oxford. pp. 9–13. ISBN 9780199579815.
- Taylor Saito, Natsu (2020). "Unsettling Narratives". Settler Colonialism, Race, and the Law: Why Structural Racism Persist (eBook). NYU Press. ISBN 9780814708026.
...several thousand nations have been arbitrarily (and generally involuntarily) incorporated into approximately two hundred political constructs we call independent states...
- Sanders, Douglas (1999). "Indigenous peoples: Issues of definition". International Journal of Cultural Property. 8: 4–13. doi:10.1017/S0940739199770591.
- Bodley 2008:2
- Muckle, Robert J. (2012). Indigenous Peoples of North America: A Concise Anthropological Overview. University of Toronto Press. p. 18. ISBN 9781442604162.
- McIntosh, Ian (September 2000). "Are there Indigenous Peoples in Asia?". Cultural Survival Quarterly Magazine.
- "indigene, adj. and n." OED Online. Oxford University Press, September 2016. Web. 22 November 2016.
- "indigenous (adj.)". Online Etymology Dictionary. Retrieved 4 March 2021.
- Peters, Michael A.; Mika, Carl T. (10 November 2017). "Aborigine, Indian, indigenous or first nations?". Educational Philosophy and Theory. 49 (13): 1229–1234. doi:10.1080/00131857.2017.1279879. ISSN 0013-1857.
- Mario Blaser, Harvey A. Feit, Glenn McRae, In the Way: indigenous Peoples, Life Projects, and Development, IDRC, 2004, p. 53
- "autochthonous". Wordsmith.org. Retrieved 4 March 2021.
- Smith, Linda Tuhiwai (2012). Decolonizing methodologies : research and indigenous peoples (Second ed.). Dunedin, New Zealand: Otago University Press. ISBN 978-1-877578-28-1. OCLC 805707083.
- Robert K. Hitchcock, Diana Vinding, Indigenous Peoples' Rights in Southern Africa, IWGIA, 2004, p. 8, based on Working Paper by the Chairperson-Rapporteur, Mrs. Erica-Irene A. Daes, on the concept of indigenous people. UN-Dokument E/CN.4/Sub.2/AC.4/1996/2 (unhchr.ch)
- S. James Anaya, Indigenous Peoples in International Law, 2nd ed., Oxford University press, 2004, p. 3; Professor Anaya teaches Native American Law, and is the third Commission on Human Rights Special Rapporteur on the Human Rights and Fundamental Freedoms of Indigenous People
- Martínez-Cobo (1986/7), paras. 379–82,
- "Indigenous and Tribal People's Rights Over Their Ancestral Lands and Natural Resources". cidh.org. Retrieved 30 May 2020.
- "Indigenous Peoples". World Bank. Retrieved 11 April 2020.
- "International Day of the World's Indigenous Peoples - 9 August". www.un.org. Retrieved 11 March 2020.
- Study of the Problem of Discrimination Against Indigenous Populations, p. 10, Paragraph 25, 30 July 1981, UN EASC
- "A working definition, by José Martinez Cobo". IWGIA - International Work Group for Indigenous Affairs. 9 April 2011. Archived from the original on 26 October 2019. Retrieved 11 March 2020.
- "State of the World's Indigenous Peoples, p. 1" (PDF). Archived from the original (PDF) on 15 February 2010.
- State of the World's Indigenous Peoples, Secretariat of Permanent Forum on Indigenous Issues, UN, 2009 Archived 15 February 2010 at the Wayback Machine. pg. 1-2.
- "Indigenous populations". World Health Organization. Retrieved 4 March 2021.
- Hall, Gillette, and Harry Anthony Patrinos. Indigenous Peoples, Poverty and Human Development in Latin America. New York: Palgrave MacMillan, n.d. Google Scholar. Web. 11 March 2013
- Williams, Victoria R. (2020). "Canarian". Indigenous Peoples: An Encyclopedia of Culture, History, and Threats to Survival. ABC-CLIO. p. 225. ISBN 9781440861185.
- Old World Contacts/Colonists/Canary Islands Archived 13 October 2007 at the Wayback Machine. Ucalgary.ca (22 June 1999). Retrieved on 11 October 2011.
- Antonio Rumeu de Armas (1975). "La Victoria de Acentejo". La conquista de Tenerife, 1494-1496 (in Spanish). Aula de Cultura de Tenerife. p. 278.
- Miller, Robert J.; Ruru, Jacinta; Behrendt, Larissa; Lindberg, Tracey (2010). Discovering Indigenous Lands: The Doctrine of Discovery in the English Colonies. OUP Oxford. pp. 15–22. ISBN 9780199579815.
- Miller, Robert J. (2006). Native America, Discovered and Conquered: Thomas Jefferson, Lewis & Clark, and Manifest Destiny. Praeger Publications. pp. 9–10. ISBN 9780275990114.
- "Who are indigenous peoples?" (PDF). Retrieved 2 January 2020.
- "Indigenous issues". International Work Group on Indigenous Affairs. Retrieved 5 September 2005.
- "Protecting Indigenous Land Rights Makes Good Economic Sense". World Resources Institute. 7 October 2016.
- Ding, Helen; Veit, Peter; Gray, Erin; Reytar, Katie; Altamirano, Juan-Carlos; Blackman, Allen; Hodgdon, Benjamin (10 June 2016). "Climate Benefits, Tenure Costs" – via www.wri.org.
- Defending the defenders: tropical forests in the front line
- "Protect indigenous people's land rights and the whole world will benefit | DISD". www.un.org.
- Birch, Sharon. "What role do indigenous people and forests have in a sustainable future?".
- "In the 2010 census "indigenous" people were defined as persons who live in a household where an indigenous language is spoken by one of the adult family members, and or people who self identified as indigenous ("Criteria del hogar: De esta manera, se establece, que los hogares indígenas son aquellos en donde el jefe y/o el cónyuge y/o padre o madre del jefe y/o suegro o suegra del jefe hablan una lengua indígena y también aquellos que declararon pertenecer a un grupo indígena". Cdi.gob.mx. Retrieved 9 April 2018.
- "Persons who speak an indigenous language but who do not live in such a household (Por lo antes mencionado, la Comisión Nacional Para el Desarrollo de los Pueblos Indígenas de México (CDI) considera población indígena (PI) a todas las personas que forman parte de un hogar indígena, donde el jefe(a) del hogar, su cónyuge y/o alguno de los ascendientes (madre o padre, madrastra o padrastro, abuelo(a), bisabuelo(a), tatarabuelo(a), suegro(a)) declaro ser hablante de lengua indígena. Además, también incluye a personas que declararon hablar alguna lengua indígena y que no forman parte de estos hogares". Cdi.gob.mx. Retrieved 9 April 2018.
- "John P. Schmal". Somosprimos.com. Retrieved 19 April 2016.
- "Comisión Nacional para el Desarrollo de los Pueblos Indígenas. México". Cdi.gob.mx. Retrieved 22 April 2013.
- "Civilization.ca – Gateway to Aboriginal Heritage–Culture". Canadian Museum of Civilization Corporation. Government of Canada. 12 May 2006. Retrieved 18 September 2009.
- "Inuit Circumpolar Council (Canada) – ICC Charter". Inuit Circumpolar Council > ICC Charter and By-laws > ICC Charter. 2007. Archived from the original on 5 March 2010. Retrieved 18 September 2009.
- "In the Kawaskimhon Aboriginal Moot Court Factum of the Federal Crown Canada" (PDF). Faculty of Law. University of Manitoba. 2007. p. 2. Archived from the original (PDF) on 26 March 2009. Retrieved 18 September 2009.
- "Words First An Evolving Terminology Relating to Aboriginal Peoples in Canada". Communications Branch of Indian and Northern Affairs Canada. 2004. Archived from the original on 14 November 2007. Retrieved 26 June 2010.
- "Terminology of First Nations, Native, Aboriginal and Métis" (PDF). Aboriginal Infant Development Programs of BC. 2009. Archived from the original (PDF) on 14 July 2010. Retrieved 26 June 2010.
- "Why we say "Indigenous" instead of "Aboriginal"". Indigenous Innovation.
- Statistics Canada, Canada (table), Census Profile, 2016 Census of Population, Catalogue № 98-316-X2016001 (Ottawa: 2017‑11‑29); ———, Aboriginal Peoples Reference Guide, 2016 Census of Population, Catalogue № 98‑500‑X2016009 (Ottawa: 2017‑10‑25), ISBN‑13:978‑0‑660‑05518‑3, [accessed 2019‑10‑08].
- "Assembly of First Nations - Assembly of First Nations-The Story". Assembly of First Nations. Archived from the original on 2 August 2009. Retrieved 2 October 2009.
- "Civilization.ca-Gateway to Aboriginal Heritage-object". Canadian Museum of Civilization Corporation. 12 May 2006. Retrieved 2 October 2009.
- Kintisch, Eli (10 November 2016). "Why did Greenland's Vikings disappear?". Science | AAAS. Retrieved 29 December 2019.
- "The World Is Changing for Greenland's Native Inuit People". oceanwide-expeditions.com. Retrieved 29 December 2019.
- Wade, Nicholas (30 May 2008). "DNA Offers Clues to Greenland's First Inhabitants". The New York Times. Retrieved 29 December 2019.
- "Reverse Colonialism - How the Inuit Conquered the Vikings". Canadian Geographic. 27 July 2015.
- Dean, Bartholomew 2009 Urarina Society, Cosmology, and History in Peruvian Amazonia, Gainesville: University Press of Florida ISBN 978-0-8130-3378-5
- Brazil urged to protect Indians. BBC News (30 March 2005). Retrieved on 11 October 2011.
- Brazil sees traces of more isolated Amazon tribes. Reuters.com. Retrieved on 11 October 2011.
- United Nations High Commissioner for Refugees. "Refworld – World Directory of Minorities and indigenous Peoples – Turkey : Assyrians". Refworld.
- atlasofhumanity.com. "Iraq, Yazidis". Atlas Of Humanity. Retrieved 1 February 2021.
- "Who Are the Kurds?". BBC News. 31 October 2017.
- "Kurds and Kurdistan: Facts and Figures".
- "Are Jews Indigenous to the Land of Israel? Yes". Tablet Magazine. 9 February 2017.
- "Palestine - IWGIA - International Work Group for Indigenous Affairs". www.iwgia.org.
- Refugees, United Nations High Commissioner for. "Refworld | World Directory of Minorities and Indigenous Peoples - Israel". Refworld.
- "Who are the indigenous and tribal peoples?". www.ilo.org. 22 July 2016. Retrieved 2 May 2019.
- "Natives in Russia's far east worry about vanishing fish". The Economic Times. India. Agence France-Presse. 25 February 2009. Retrieved 5 March 2011.
- IWGIA (2012). Briefing note. Indigenous peoples in the Russian Federation
- Fondahl, G., Filippova, V., Mack, L. (2015). Indigenous peoples in the new Arctic. In B.Evengard, O.Nymand Larsen, O.Paasche (Eds), The New Arctic (pp. 7–22). Springer
- IWGIA (2012) Briefing note. Indigenous people in the Russian Federation
- Lehtola, M. (2012). HoWhy theory and the cultural transition in the Sakha Republic. In T.Aikas, S.Lipkin, A.K.Salmi (Eds.), Archaeology of social relations: ten case studies by Finnish archaeologists (pp. 51–76). Oulu University
- Slezkine, Y. (1994). Arctic mirrors: Russia and the small peoples of the North. New York, NY: Cornell University Press
- Recognition at last for Japan's Ainu, BBC NEWS
- Judgment of the Sapporo District Court, Civil Division No. 3, 27 March 1997, in (1999) 38 ILM, p. 419
- Blust, R. (1999), "Subgrouping, circularity and extinction: some issues in Austronesian comparative linguistics" in E. Zeitoun & P.J.K Li, ed., Selected papers from the Eighth International Conference on Austronesian Linguistics. Taipei: Academia Sinica
- Fox, James J."Current Developments in Comparative Austronesian Studies" (PDF). (105 KB). Paper prepared for Symposium Austronesia Pascasarjana Linguististik dan Kajian Budaya. Universitas Udayana, Bali 19–20 August 2004.
- Diamond, Jared M. "Taiwan's gift to the world" (PDF). Archived from the original (PDF) on 17 June 2009. (107 KB). Nature, Volume 403, February 2000, pp. 709–10
- "ANNEX III of Sino-British Joint Declaration". Retrieved 21 August 2016.
- "Indonesia and the Denial of Indigenous Peoples' Existence". 17 August 2013.
- "Myanmar - IWGIA - International Work Group for Indigenous Affairs". www.iwgia.org. Retrieved 30 May 2020.
- Who are Europe's indigenous peoples and what are their struggles?. Euronews, September 08, 2019.
- Pygmy human remains found on rock islands, Science | The Guardian, 12 March 2008.
- "Papua New Guinea country profile". BBC News. 2018. Retrieved 1 February 2018.
- Bartholomew Dean and Jerome Levi (eds.) At the Risk of Being Heard: Indigenous Rights, Identity and Postcolonial States University of Michigan Press (2003)
- "Mabo v Queensland" (PDF). Retrieved 2 January 2020.
- No 'indigenous', reiterates Shafique Archived 19 March 2012 at the Wayback Machine. bdnews24.com (18 June 2011). Retrieved on 11 October 2011.
- Ministry of Chittagong Hill Tracts Affairs. mochta.gov.bd. Retrieved on 28 March 2012.
- INDIGENOUS PEOPLE: Chakma Raja decries non-recognition Archived 19 March 2012 at the Wayback Machine. bdnews24.com (28 May 2011). Retrieved on 11 October 2011.
- 'Define terms minorities, indigenous' Archived 18 November 2011 at the Wayback Machine. bdnews24.com (27 May 2011). Retrieved on 11 October 2011.
- Disregarding the Jumma Archived 19 June 2011 at the Wayback Machine. Himalmag.com. Retrieved on 11 October 2011.
- "Mission to Vietnam Advocacy Day (Vietnamese-American Meet up 2013) in the U.S. Capitol. A UPR report By IOC-Campa". Chamtoday.com. 14 September 2013. Archived from the original on 22 February 2014. Retrieved 17 June 2014.
- Taylor, Philip (December 2006). "Economy in Motion: Cham Muslim Traders in the Mekong Delta" (PDF). The Asia Pacific Journal of Anthropology. 7 (3): 238. doi:10.1080/14442210600965174. ISSN 1444-2213. S2CID 43522886. Archived from the original (PDF) on 23 September 2015. Retrieved 3 September 2014.
- "Indonesia denies it has any indigenous peoples".
- Graham A. Cosmas (2006). MACV: The Joint Command in the Years of Escalation, 1962–1967. Government Printing Office. pp. 145–. ISBN 978-0-16-072367-4.
- Oscar Salemink (2003). The Ethnography of Vietnam's Central Highlanders: A Historical Contextualization, 1850–1990. University of Hawaii Press. pp. 28–. ISBN 978-0-8248-2579-9.
- Oscar Salemink (2003). The Ethnography of Vietnam's Central Highlanders: A Historical Contextualization, 1850-1990. University of Hawaii Press. pp. 29–. ISBN 978-0-8248-2579-9.
- McElwee, Pamela (2008). "7 Becoming Socialist or Becoming Kinh? Government Policies for Ethnic Minorities in the Socialist Republic of Viet Nam". In Duncan, Christopher R. (ed.). Civilizing the Margins: Southeast Asian Government Policies for the Development of Minorities. Singapore: NUS Press. p. 182. ISBN 978-9971-69-418-0.
- Coates 2004:12
- "Resolutions and Decisions. WHA47.27 International Decade of the World's Indigenous People. The Forty-seventh World Health Assembly" (PDF). World Health Organization. Retrieved 17 April 2011.
- Hanley, Anthony J. Diabetes in Indigenous Populations, Medscape Today
- Ohenjo, Nyang'ori; Willis, Ruth; Jackson, Dorothy; Nettleton, Clive; Good, Kenneth; Mugarura, Benon (2006). "Health of Indigenous people in Africa". The Lancet. 367 (9526): 1937–46. doi:10.1016/S0140-6736(06)68849-1. PMID 16765763. S2CID 7976349.
- Health and Ethnic Minorities in Viet Nam, Technical Series No. 1, June 2003, WHO, p. 10
- Facts on Suicide Rates, First Nations and Inuit Health, Health Canada
- "Health of indigenous peoples". Health Topics A to Z. Retrieved 17 April 2011.
- State of the world's indigenous peoples. Vereinte Nationen Department of International Economic and Social Affairs. New York: United Nations. 2009. ISBN 978-92-1-130283-7. OCLC 699622751.
- Charles Theodore Greve (1904). Centennial History of Cincinnati and Representative Citizens, Volume 1. Biographical Publishing Company. p. 35. Retrieved 22 May 2013.
- See Oliphant v. Suquamish Indian Tribe, 435 U.S. 191 (1978); also see Robert Williams, Like a Loaded Weapon
- Survival International website – About Us/FAQ. Survivalinternational.org. Retrieved on 28 March 2012.
- "Friends of Peoples close to Nature website – Our Ethos and statement of principles". Archived from the original on 26 February 2009. Retrieved 23 January 2010.CS1 maint: unfit URL (link) Retrieved from Internet Archive 13 December 2013.
- "United Nations Declaration on the Rights of Indigenous Peoples | United Nations For Indigenous Peoples". www.un.org. Retrieved 22 January 2020.
- Pike, Sarah M. (2004). "4: The 1960s Watershed Years". New Age and Neopagan Religions in America. Columbia Contemporary American Religion Series. New York: Columbia University Press. p. 82. ISBN 9780231508384. Retrieved 19 February 2020.
Many young people looked to American Indian traditions for alternative lifestyles, and this was to shape New Agers' and Neopagans' subsequent turn to and incorporation of indigenous peoples' practices into their own rituals and belief systems. [...] The desire to share in native peoples' perceived harmony with nature became a common theme of the 1960s counterculture and in 1970s Neopaganism and New Age communities.
- FOGGIN, SOPHIE (31 January 2020). "Helena Gualinga is a voice for Indigenous communities in the fight against climate change". Latin America reports. Retrieved 11 September 2020.
- Fisher, Matthew R.; Editor (2017), "1.5 Environmental Justice & Indigenous Struggles", Environmental Biology, retrieved 17 April 2020
- Senanayake, S.G.J.N. (January 2006). "Indigenous knowledge as a key to sustainable development". Journal of Agricultural Sciences – Sri Lanka. doi:10.4038/jas.v2i1.8117. Retrieved 25 March 2021.
- "TRADITIONAL TECHNIQUES PROVIDE USEFUL MODELS FOR BIODIVERSITY POLICIES". Convention on biological diversity. United Nations. Retrieved 25 March 2021.
- African Commission on Human and Peoples’ Rights (2003). "Report of the African Commission's Working Group of Experts on Indigenous Populations/Communities" (PDF). ACHPR & IWGIA. Archived from the original (PDF) on 26 September 2007.
- Baviskar, Amita (2007). "Indian Indigeneities: Adivasi Engagements with Hindu Nationalism in India". In Marisol de la Cadena & Orin Starn (ed.). Indigenous Experience today. Oxford, UK: Berg Publishers. ISBN 978-1-84520-519-5.
- Bodley, John H. (2008). Victims of Progress (5th. ed.). Plymouth, England: AltaMira Press. ISBN 978-0-7591-1148-6.
- de la Cadena, Marisol; Orin Starn, eds. (2007). Indigenous Experience Today. Oxford: Berg Publishers, Wenner-Gren Foundation for Anthropological Research. ISBN 978-1-84520-519-5.
- Clifford, James (2007). "Varieties of Indigenous Experience: Diasporas, Homelands, Sovereignties". In Marisol de la Cadena & Orin Starn (ed.). Indigenous Experience today. Oxford, UK: Berg Publishers. ISBN 978-1-84520-519-5.
- Coates, Ken S. (2004). A Global History of Indigenous Peoples: Struggle and Survival. New York: Palgrave MacMillan. ISBN 978-0-333-92150-0.
- Farah, Paolo D.; Tremolada Riccardo (2014). "Intellectual Property Rights, Human Rights and Intangible Cultural Heritage". Rivista di Diritto Industriale (2, part I): 21–47. ISSN 0035-614X. SSRN 2472388.
- Farah, Paolo D.; Tremolada Riccardo (2014). "Desirability of Commodification of Intangible Cultural Heritage: The Unsatisfying Role of IPRs". Transnational Dispute Management. 11 (2). ISSN 1875-4120. SSRN 2472339.
- Gerharz, Eva; Nasir Uddin; Pradeep Chakkarath, eds. (2017). Indigeneity on the move: Varying manifestations of a contested concept. New York: Berghahn Books. ISBN 978-1-78533-723-9.
- Henriksen, John B. (2001). "Implementation of the Right of Self-Determination of Indigenous Peoples" (PDF). Indigenous Affairs. 3/2001 (PDF ed.). Copenhagen: International Work Group for Indigenous Affairs. pp. 6–21. ISSN 1024-3283. OCLC 30685615. Archived from the original (PDF) on 2 June 2010. Retrieved 1 September 2007.
- Hughes, Lotte (2003). The no-nonsense guide to indigenous peoples. Verso. ISBN 978-1-85984-438-0.
- Howard, Bradley Reed (2003). Indigenous Peoples and the State: The struggle for Native Rights. DeKalb, Illinois: Northern Illinois University Press. ISBN 978-0-87580-290-9.
- Johansen. Bruce E. (2003). Indigenous Peoples and Environmental Issues: An Encyclopedia. Westport, Connecticut: Greenwood Press. ISBN 978-0-313-32398-0.
- Martinez Cobo, J. (198). "United Nations Working Group on Indigenous Populations". Study of the Problem of Discrimination Against Indigenous Populations. UN Commission on Human Rights.
- Maybury-Lewis, David (1997). Indigenous Peoples, Ethnic Groups and the State. Needham Heights, Massachusetts: Allyn & Bacon. ISBN 978-0-205-19816-0.
- Merlan, Francesca (2007). "Indigeneity as Relational Identity: The Construction of Australian Land Rights". In Marisol de la Cadena & Orin Starn (ed.). Indigenous Experience today. Oxford, UK: Berg Publishers. ISBN 978-1-84520-519-5.
- Pratt, Mary Louise (2007). "Afterword: Indigeneity Today". In Marisol de la Cadena & Orin Starn (ed.). Indigenous Experience today. Oxford, UK: Berg Publishers. ISBN 978-1-84520-519-5.
- Tsing, Anna (2007). "Indigenous Voice". In Marisol de la Cadena & Orin Starn (ed.). Indigenous Experience today. Oxford, UK: Berg Publishers. ISBN 978-1-84520-519-5.
- Awareness raising film by Rebecca Sommer for the Secretariat of the UNPFII Archived 27 July 2013 at the Wayback Machine
- "First Peoples" from PBS
- "The Indigenous World" from International Work Group for Indigenous Affairs
- IFAD and indigenous peoples (International Fund for Agricultural Development, IFAD)
- IPS Inter Press Service News on indigenous peoples from around the world | https://en.m.wikipedia.org/wiki/Indigenous_people | 21 |
17 | Definition - What is Internal Bleeding?
With internal bleeding, there is vascular damage that is not visible from the outside: the source of bleeding lies inside the body. If rapid, heavy blood loss occurs from this source, it constitutes a medical emergency. In most cases a traumatic event is the cause of the bleeding. However, there are other causes, such as inflammation, which lead to so-called oozing bleeding: very little blood escapes through the damaged tissue, but it does so continuously. This can lead to chronic anemia, which is accompanied by tiredness and fatigue.
There are many different causes that can trigger internal bleeding. In principle, an injury to any internal organ can cause internal bleeding. Some examples are given below.
Traumatic events, such as a car accident or a violent act, can lead to internal organ damage. In traffic accidents, for example, blunt abdominal trauma often leads to a rupture of the spleen. Since the spleen is a very well perfused organ, a rupture can lead to massive blood loss and is a medical emergency.
Read more here: These symptoms indicate a ruptured spleen
Another cause is hemorrhagic fever. This is an infection that can be triggered by various viruses, which can lead to an increased tendency to bleed. Triggers are, for example, the dengue, yellow fever or Ebola virus. In addition to flu-like symptoms such as fever, headache and muscle pain, internal bleeding can also occur.
You can find an overview page on tropical diseases here.
Another cause of internal bleeding is oozing gastric bleeding. It often affects older people whose gastric mucosa has been irritated for a long period of time. The cause can be chronic inflammation of the gastric mucosa, which may be bacterial or a side effect of medication. A stomach tumor can also lead to gastric bleeding.
So-called aortic dissection can also lead to massive blood loss. This is a splitting of the vessel wall of the main artery (aorta). As a result, blood flows between the layers of the aorta or the vessel ruptures completely. Aortic dissection is a life-threatening clinical picture and should be treated immediately.
Read here: Symptoms of aortic dissection
A low number of platelets can also cause internal bleeding. Platelets are responsible for blood clotting. If they are not present in sufficient numbers, even slight trauma, which would normally not cause bleeding because small vascular injuries are usually sealed by the platelets, can lead to bleeding.
The diagnosis can be made in different ways; which diagnostic procedure is used depends on the medical history. If external influences are the reason for the internal bleeding, the first step is to examine the body, since external injuries can provide clues as to the location of the bleeding. Other parameters, such as the patient's blood pressure and heart rate, also need to be considered, and the course of the accident must not be ignored. Performing an emergency ultrasound can be very helpful: the most important internal structures that can be affected by internal bleeding are visualized using sonography, namely the heart and lungs, as well as the liver, spleen and pelvis.
In the case of gastric bleeding, clinical evidence is of great importance. The patient should be asked about typical symptoms such as tiredness and fatigue. A pale appearance can also indicate chronic blood loss. In addition, it is typical of gastric bleeding that the stool of the person concerned is black, because blood that comes into contact with stomach acid turns black. To confirm the diagnosis, an endoscopy can be performed: using a special device that contains a camera, the bleeding can be visualized and, if necessary, immediately stopped.
In the case of aortic dissection, the diagnosis is made on the basis of a physical examination, laboratory tests, EKG and, if necessary, imaging, for example a representation of the aorta using computed tomography.
The suspicion that a tropical disease may be present arises from asking about recent travel destinations. If the symptoms suggest a hemorrhagic fever, this can be confirmed by a laboratory test in which the pathogen is detected.
Recognizing internal bleeding by its symptoms
Symptoms vary depending on the cause. If high blood loss occurs quickly, this usually results in shock symptoms: the blood pressure drops as the blood volume decreases, and to compensate, the heart rate rises sharply. Loss of consciousness may occur. If the bleeding is chronic, symptoms of chronic anemia develop instead: those affected are often pale, tired and less productive. Depending on the cause, other, more specific symptoms can occur.
As mentioned above, gastric bleeding leads to black stools. Pain in the stomach area can also occur. An aortic dissection is often accompanied by sudden, very severe pain in the chest area, which can radiate into the back and abdomen. If the blood loss is very high, shock symptoms can also occur in this case.
Also read: Symptoms of shock
In hemorrhagic fever, bleeding can take different forms. For example, skin bleeding, bloody diarrhea or blood in the urine may occur. In addition, there are usually flu-like symptoms such as fever, tiredness, headache and aching limbs.
Therapy depends on the underlying cause. In general, however, the bleeding should be stopped. If an accident caused the bleeding, in many cases it must be treated surgically. If the injuries are less severe, it may be sufficient to monitor the patient in the hospital and to assess the patient's condition with regular laboratory checks, which allow a statement to be made about the blood loss. As part of the monitoring, blood pressure and heart rate should be continuously observed. The blood pressure can be stabilized, for example, by administering infusions through venous access. If the blood loss is particularly massive, it may also be necessary to administer donor blood in the form of a blood transfusion.
Endoscopic hemostasis is also possible, in which bleeding vessels are obliterated. This procedure can be performed, for example, when the stomach is bleeding. Since gastric bleeding can go unnoticed for a long time, it is possible that the person has already lost a large amount of blood, which can result in an iron deficiency. In this case, iron substitution can be carried out to support adequate blood formation. Iron is an important part of the blood and can be administered in the form of tablets or as an infusion.
In the case of major vascular damage, such as aortic dissection, it may be necessary to repair the vessel with a vascular prosthesis. The implantation takes place during a surgical procedure.
Duration / prognosis
No general statement can be made about duration and prognosis. These depend on the cause of the internal bleeding and the previous illnesses and health status of the patient. If the bleeding has to be treated surgically, the patient may have to be monitored in the hospital for several days or even weeks. This depends on the blood loss and the organ damage that may have occurred. The younger the person affected and the fewer previous illnesses they have, the better the prognosis.
For example, with a ruptured spleen, the mortality rate is 0-15%. It varies greatly, as the outcome depends on the accompanying injuries, the age of the patient, and the blood loss. Even with aortic dissection, the prognosis correlates with the severity of the clinical picture. It also depends on the therapy chosen: depending on the type of dissection, surgery or conservative therapy can be beneficial. There is also no uniform prognosis for bleeding from other organs. If the bleeding is due only to inflammation, the prognosis is good with the right therapy. However, if an organ bleeds due to the presence of a tumor, the prognosis again depends on the stage of the disease.
Course of disease
As mentioned above, the course of the disease can vary depending on the cause of the internal bleeding. If the person concerned loses a large amount of blood in a short period of time, the body may no longer be able to compensate for this blood loss. This then leads to a drop in blood pressure, which can result in an inadequate supply of blood to vital organs. Such blood loss should be counteracted immediately by administering fluids.
Other types of blood loss may present very differently clinically. If only a small amount of blood escapes from a vessel leak over a long period of time, this can go unnoticed for a long time. The body can usually counteract this loss of blood by producing more blood. However, these compensation mechanisms are exhausted after a certain period of time. This then results in symptoms of anemia, including fatigue and listlessness, which can have an insidious onset, so that the person affected may not notice them until late.
Read here: Symptoms of anemia
How contagious is it?
Internal bleeding is a symptom that can have various causes. There is no risk of infection for the symptom "internal bleeding" itself. However, the underlying disease that caused the internal bleeding can be contagious. This applies to the Ebola virus, for example: an infected person can infect others through body fluids and excretions. The dengue and yellow fever viruses, on the other hand, are transmitted only by mosquitoes and not from person to person.
39 | According to Fair, macroeconomics is a branch of economics concerned with the structure, performance, decision-making, and behavior of the whole economy rather than a particular market (102). It includes regional, worldwide, and national economies (Kim and Lee 1509-1539). Among the many factors that determine macroeconomic performance are GDP, national income, price levels, inflation, and unemployment. This paper will analyze unemployment as one of the indicators of macroeconomics.
Unemployment refers to the situation in which people who are willing and able to work cannot find jobs that provide an income. The level of unemployment in a country's economy can be measured by counting those who do not have jobs but are actively searching for them, since only active job-seekers are counted as part of the labor force. Retirees, students, and discouraged workers who have given up searching after repeated failure are therefore not included in the labor force (Pollak 840-863).
The unemployment rate is defined as the number of unemployed people divided by the total labor force, that is, the sum of the employed and the unemployed (The Economist). A rate of 3% to 4% is considered low, a rate of 10% to 15% is considered high, and a rate of over 20% is considered very high. Unemployment results from economic, political, educational, and cultural factors, among others. When a country has a strong economy, the rate of employment will be high because many sectors of the economy hire residents, which in turn helps to improve living standards. A weak economy, on the other hand, leads to unemployment because it cannot accommodate all those who need jobs, resulting in reduced living standards. A simple worked calculation of the rate is sketched below.
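As a purely illustrative sketch of the definition above, the short Python snippet below computes the unemployment rate from hypothetical labor-force figures; the numbers are invented for the example and are not drawn from any of the cited sources.

# Hypothetical figures for illustration only; not from the cited sources.
employed = 9_500_000        # people currently holding jobs
unemployed = 500_000        # jobless people actively searching for work

# The labor force excludes retirees, students, and discouraged workers.
labor_force = employed + unemployed

# Unemployment rate = unemployed / (employed + unemployed)
unemployment_rate = unemployed / labor_force

print(f"Unemployment rate: {unemployment_rate:.1%}")   # prints "Unemployment rate: 5.0%"

With these assumed figures the rate works out to 500,000 / 10,000,000 = 5%, which by the thresholds quoted above would count as a moderately low rate.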
Some unemployment can result from low levels of education: jobs may exist, but in different professions, and advanced skills may be needed that some residents do not have. Increased unemployment in turn leads to stagnant economic growth. Unemployment is commonly classified by its cause. One type is classical unemployment, which arises when wage rates are high and employers therefore hire fewer workers; it is reinforced by low demand for goods and services in the market, since businesses fear that paying more workers will reduce their profit margins.
The second is frictional unemployment, which occurs when vacancies exist for a worker but the time required to search for and find the work results in a phase of unemployment. The last is structural unemployment, which has a range of causes, including a mismatch between a worker's skills and those the job requires. It can occur when the economy is shifting or advancing, for example when industries upgrade their machinery and need skills different from the previous ones.
In conclusion, unemployment leads to a lack of income and can increase the levels of crime and other social problems in a country. It also lowers living standards because many people will not have enough money to provide for their families, and when household spending falls, economic growth slows. I recommend that countries maintain flexible and improved education systems in which workers are equipped with a broad range of skills to fit the dynamic global market. That way, when a new machine is introduced into a factory, the existing employees already have an idea of how to operate it rather than being replaced by other skilled workers. I also suggest that countries introduce rules requiring employers to employ a certain number of workers depending on the scale at which their businesses operate.
Fair, Ray C. "Analyzing Macroeconomic Forecastability". Journal of Forecasting 31.2 (2011): 99-108. Web.
Kim, Soyoung and Jaewoo Lee. "International Macroeconomic Fluctuations". Macroecon. Dynam. 1.07 (2014): 1509-1539. Web.
Pollak, Andreas. "Unemployment, Human Capital Depreciation, and Unemployment Insurance Policy". J. Appl. Econ. 28.5 (2012): 840-863. Web.
The Economist. "Unemployment Rate | Economist - World News, Politics, Economics, Business & Finance". The Economist. N.p., 2016. Web. 24 Apr. 2016.
| https://speedypaper.com/essays/economic-news-analysis | 21
19 |
The English Reformation took place in 16th-century England when the Church of England broke away from the authority of the Pope and the Roman Catholic Church. These events were, in part, associated with the wider European Protestant Reformation, a religious and political movement that affected the practice of Christianity in western and central Europe. Causes included the invention of the printing press, increased circulation of the Bible and the transmission of new knowledge and ideas among scholars, the upper and middle classes and readers in general. The phases of the English Reformation, which also covered Wales and Ireland, were largely driven by changes in government policy, to which public opinion gradually accommodated itself.
Based on Henry VIII's desire for an annulment of his marriage (first requested of Pope Clement VII in 1527), the English Reformation began as more of a political affair than a theological dispute. The reality of political differences between Rome and England allowed growing theological disputes to come to the fore. Until the break with Rome, the Pope and general councils of the church decided doctrine. Church law was governed by canon law with final jurisdiction in Rome. Church taxes were paid straight to Rome and the Pope had the final word in the appointment of bishops.
The break with Rome was effected by a series of Acts of Parliament passed between 1532 and 1534, among them the 1534 Act of Supremacy, which declared that Henry was the "Supreme Head on earth of the Church of England". (This title was renounced by Mary I in 1553 in the process of restoring papal jurisdiction; when Elizabeth I reasserted the royal supremacy in 1559, her title was Supreme Governor.) Final authority in doctrinal and legal disputes now rested with the monarch; the papacy was deprived of revenue and the final say on the appointment of bishops.
The theology and liturgy of the Church of England became markedly Protestant during the reign of Henry's son Edward VI, largely along lines laid down by Archbishop Thomas Cranmer. Under Mary, the process was reversed and the Church of England was again placed under papal jurisdiction. Elizabeth reintroduced the Protestant religion but in a more moderate manner. The structure and theology of the church was a matter of fierce dispute for generations.
The violent aspect of these disputes, manifested in the English Civil Wars, ended when the last Roman Catholic monarch, James II, was deposed and Parliament invited William III and Mary II to rule jointly in conjunction with the English Bill of Rights in 1688 (in the "Glorious Revolution"). From this emerged a church polity with an established church and a number of non-conformist churches whose members suffered various civil disabilities until these were removed many years later. The legacy of the Roman Catholic heritage, and the Church of England's position as the established state church, remained controversial for many years and persists to this day. A substantial but dwindling minority of people from the late 16th to the early 19th centuries remained Roman Catholic in England. Their church organization remained illegal until the Relief Act of 1829.
The Reformation was a clash of two opposed schemes of salvation. The Catholic Church taught that the contrite person could cooperate with God towards their salvation by performing good works (cf. synergism). Medieval Catholic worship was centred on the Mass, the church's offering of the sacrifice of Christ's body and blood. The Mass was also an offering of prayer by which the living could help souls in purgatory. Protestants taught that fallen humanity was helpless and under condemnation until given the grace of God through faith. They believed the Catholic emphasis on purgatory was an obstacle to true faith in God and the identification of the Mass with Christ's sacrifice a blasphemous perversion of the Eucharist. In place of the Catholic Mass, Protestant worship was centred on the Bible, to them the only road to faith in Christ, either read or presented in sermons.
Lollardy anticipated some Protestant teachings. Derived from the writings of John Wycliffe, a 14th-century theologian and Bible translator, Lollardy stressed the primacy of scripture and emphasised preaching over the sacrament of the altar, holding the latter to be but a memorial. Unlike Protestants, the early Lollards lacked access to the printing press and failed to gain a foothold among the church's most popular communicators, the friars. Unable to gain access to the levers of power, the Lollards were much reduced in numbers and influence by the 15th century. They sometimes faced investigation and persecution and rarely produced new literature after 1450. Lollards could still be found—especially in London and the Thames Valley, in Essex and Kent, Coventry, Bristol and even in the North—and many would be receptive to Protestant ideas.
More respectable and orthodox calls for reform came from Renaissance humanists, such as Erasmus (who lived in England for a time), John Colet, Dean of St Paul's, and Thomas More. Humanists downplayed the role of rites and ceremonies in achieving salvation and criticised the superstitious veneration of relics. Erasmus and Colet emphasised a simple, personal piety and a return ad fontes ("back to the sources") of Christian faith—the scriptures as understood through textual and linguistic scholarship. Colet's commentaries on the Pauline epistles emphasized double predestination and the worthlessness of human works. Anne Boleyn's own religious views were shaped by French humanists such as Jacques Lefèvre d'Étaples, whose 1512 commentaries on Paul's epistles stated that human works were irrelevant to salvation five years before Luther published the same views.
Humanist scholarship provided arguments against papal primacy and support for the claim that popes had usurped powers that rightfully belonged to kings. In 1534, Lorenzo Valla's On the Donation of Constantine, which proved that one of the pillars of the papacy's temporal authority was a hoax, was published in London. Thomas Cromwell paid for an English translation of Marsiglio of Padua's Defensor pacis in 1535. The conservative cleric Stephen Gardiner used Marsiglio's theory of a unitary realm to defend royal power over spiritual as well as secular affairs.
By the early 1520s, the views of German reformer Martin Luther were known and disputed in England. The main plank of Luther's theology was justification by faith alone rather than by good works. In this view, only faith, itself a gift from God, can secure the grace of God. Justification by faith alone threatened the whole basis of the Roman Catholic penitential system with its doctrine of purgatory, prayer for the dead, indulgences, and the sacrificial character of the Mass. Early Protestants portrayed Catholic practices such as confession to priests, clerical celibacy, and requirements to fast and keep vows as burdensome and spiritually oppressive. Not only did purgatory lack any biblical basis according to Protestants, but the clergy were accused of using fear of purgatory to make money from prayers and masses. Catholics countered that justification by faith alone was a "licence to sin".
English Catholicism was strong and popular in the early 1500s, and those who held Protestant sympathies would remain a religious minority until political events intervened. Protestant ideas were popular among some parts of the English population, especially among academics and merchants with connections to continental Europe. The first open demonstration of support for Luther took place at Cambridge in 1521 when a student defaced a copy of the papal bull of condemnation against Luther. Also at Cambridge was a group of reform-minded university students that met at the White Horse tavern from the mid-1520s, known by the moniker "Little Germany". Its members included Robert Barnes, Hugh Latimer, John Frith, Thomas Bilney, George Joye and Thomas Arthur.
The publication of William Tyndale's English New Testament in 1526 helped to spread Protestant ideas. Printed abroad and smuggled into the country, the Tyndale Bible was the first English Bible to be mass produced; there were probably 16,000 copies in England by 1536. Tyndale's translation was highly influential, forming the basis of all later English translations.An attack on traditional religion, Tyndale's translation included an epilogue explaining Luther's theology of justification by faith, and many translation choices were designed to undermine traditional Catholic teachings. Tyndale translated the Greek word charis as favour rather than grace to de-emphasize the role of grace-giving sacraments. His choice of love rather than charity to translate agape de-emphasized good works. When rendering the Greek verb metanoeite into English, Tyndale used repent rather than do penance. The former word indicated an internal turning to God, while the latter translation supported the sacrament of confession.
Between 1530 and 1533, Thomas Hitton (England's first Protestant martyr), Thomas Bilney, Richard Bayfield, John Tewkesbury, James Bainham, Thomas Benet, Thomas Harding, John Frith and Andrew Hewet were burned to death. In 1531, William Tracy was posthumously convicted of heresy for denying purgatory and affirming justification by faith, and his corpse was disinterred and burned. While Protestants were only a small portion of the population and suffered persecution, the rift between the king and papacy in the 1530s gave Protestants opportunities to form new alliances with government officials.
Henry VIII acceded to the English throne in 1509 at the age of 17. He made a dynastic marriage with Catherine of Aragon, widow of his brother Arthur, in June 1509, just before his coronation on Midsummer's Day. Unlike his father, who was secretive and conservative, the young Henry appeared the epitome of chivalry and sociability. An observant Roman Catholic, he heard up to five masses a day (except during the hunting season); of "powerful but unoriginal mind", he let himself be influenced by his advisors from whom he was never apart, by night or day. He was thus susceptible to whoever had his ear.
This contributed to a state of hostility between his young contemporaries and the Lord Chancellor, Cardinal Thomas Wolsey. As long as Wolsey had his ear, Henry's Roman Catholicism was secure: in 1521, he had defended the Roman Catholic Church from Martin Luther's accusations of heresy in a book he wrote—probably with considerable help from the conservative Bishop of Rochester John Fisher—entitled The Defence of the Seven Sacraments , for which he was awarded the title "Defender of the Faith" ( Fidei Defensor ) by Pope Leo X. (Successive English and British monarchs have retained this title to the present, even after the Anglican Church broke away from Roman Catholicism, in part because the title was re-conferred by Parliament in 1544, after the split.) Wolsey's enemies at court included those who had been influenced by Lutheran ideas, among whom was the attractive, charismatic Anne Boleyn.
Anne arrived at court in 1522 as maid of honour to Queen Catherine, having spent some years in France being educated by Queen Claude of France. She was a woman of "charm, style and wit, with will and savagery which made her a match for Henry". Anne was a distinguished French conversationalist, singer, and dancer. She was cultured and is the disputed author of several songs and poems. By 1527, Henry wanted his marriage to Catherine annulled. She had not produced a male heir who survived longer than two months, and Henry wanted a son to secure the Tudor dynasty. Before Henry's father (Henry VII) ascended the throne, England had been beset by civil warfare over rival claims to the English crown. Henry wanted to avoid a similar uncertainty over the succession. Catherine of Aragon's only surviving child was Princess Mary.
Henry claimed that this lack of a male heir was because his marriage was "blighted in the eyes of God". Catherine had been his late brother's wife, and it was therefore against biblical teachings for Henry to have married her (Leviticus 20:21); a special dispensation from Pope Julius II had been needed to allow the wedding in the first place. Henry argued the marriage was never valid because the biblical prohibition was part of unbreakable divine law, and even popes could not dispense with it. In 1527, Henry asked Pope Clement VII to annul the marriage, but the Pope refused. According to canon law, the pope could not annul a marriage on the basis of a canonical impediment previously dispensed. Clement also feared the wrath of Catherine's nephew, Holy Roman Emperor Charles V, whose troops earlier that year had sacked Rome and briefly taken the Pope prisoner.
The combination of Henry's "scruple of conscience" and his captivation by Anne Boleyn made his desire to rid himself of his queen compelling. The indictment of his chancellor Cardinal Wolsey in 1529 for praemunire (taking the authority of the papacy above the Crown) and Wolsey's subsequent death in November 1530 on his way to London to answer a charge of high treason left Henry open to both the influences of the supporters of the queen and the opposing influences of those who sanctioned the abandonment of the Roman allegiance, for whom an annulment was but an opportunity.
In 1529, the King summoned Parliament to deal with annulment, thus bringing together those who wanted reform but who disagreed about what form it should take; it became known as the Reformation Parliament. There were common lawyers who resented the privileges of the clergy to summon laity to their courts; there were those who had been influenced by Lutheranism and were hostile to the theology of Rome; Thomas Cromwell was both. Henry's chancellor, Thomas More, successor to Wolsey, also wanted reform: he wanted new laws against heresy.
Cromwell was a lawyer and a member of Parliament—a Protestant who saw how Parliament could be used to advance the Royal Supremacy, which Henry wanted, and to further the Protestant beliefs and practices that Cromwell and his friends wanted. One of his closest friends was Thomas Cranmer, soon to be made an archbishop.
In the matter of the annulment, no progress seemed possible. The Pope seemed more afraid of Emperor Charles V than of Henry. Anne and Cromwell and their allies wished simply to ignore the Pope, but in October 1530 a meeting of clergy and lawyers advised that Parliament could not empower the archbishop to act against the Pope's prohibition. Henry thus resolved to bully the priests.
Having brought down his chancellor, Cardinal Wolsey, Henry VIII finally resolved to charge the whole English clergy with praemunire to secure their agreement to his annulment. The Statute of Praemunire, which forbade obedience to the authority of the Pope or of any foreign rulers, enacted in 1392, had been used against individuals in the ordinary course of court proceedings. Now Henry, having first charged Queen Catherine's supporters, Bishops John Fisher, Nicholas West and Henry Standish and Archdeacon of Exeter, Adam Travers, decided to proceed against the whole clergy.Henry claimed £100,000 from the Convocation of Canterbury (a representative body of English clergy) for their pardon, which was granted by the Convocation on 24 January 1531. The clergy wanted the payment spread over five years, but Henry refused. The convocation responded by withdrawing their payment altogether and demanded Henry fulfil certain guarantees before they would give him the money. Henry refused these conditions. He agreed only to the five-year period of payment and added five articles that specified that:
In Parliament, Bishop Fisher championed Catherine and the clergy; he had inserted into the first article the phrase "as far as the word of God allows". In Convocation, however, William Warham, Archbishop of Canterbury, requested a discussion but was met by a stunned silence; then Warham said, "He who is silent seems to consent", to which a clergyman responded, "Then we are all silent." The Convocation granted consent to the King's five articles and the payment on 8 March 1531. That same year, Parliament passed the Pardon to Clergy Act 1531.
The breaking of the power of Rome proceeded little by little. In 1532, Cromwell brought before Parliament the Supplication Against the Ordinaries, which listed nine grievances against the church, including abuses of power and Convocation's independent legislative power. Finally, on 10 May, the King demanded of Convocation that the church renounce all authority to make laws. On 15 May, the Submission of the Clergy was subscribed, which recognised Royal Supremacy over the church so that it could no longer make canon law without royal licence—i.e., without the King's permission—thus emasculating it as a law-making body. (Parliament subsequently passed this in 1534 and again in 1536.) The day after this, More resigned as chancellor, leaving Cromwell as Henry's chief minister. (Cromwell never became chancellor. His power came—and was lost—through his informal relations with Henry.)
Several acts of Parliament then followed. The Act in Conditional Restraint of Annates proposed that the clergy pay no more than 5 percent of their first year's revenue (annates) to Rome. This was initially controversial and required that Henry visit the House of Lords three times to browbeat the Commons.
The Act in Restraint of Appeals, drafted by Cromwell, apart from outlawing appeals to Rome on ecclesiastical matters, declared that
This realm of England is an Empire, and so hath been accepted in the world, governed by one Supreme Head and King having the dignity and royal estate of the Imperial Crown of the same, unto whom a body politic compact of all sorts and degrees of people divided in terms and by names of Spirituality and Temporality, be bounden and owe to bear next to God a natural and humble obedience.
This declared England an independent country in every respect. English historian Geoffrey Elton called this act an "essential ingredient" of the "Tudor revolution" in that it expounded a theory of national sovereignty. The Act in Absolute Restraint of Annates outlawed all annates to Rome and also ordered that if cathedrals refused the King's nomination for bishop, they would be liable to punishment by praemunire. Finally in 1534, the Acts of Supremacy made Henry "supreme head in earth of the Church of England" and disregarded any "usage, custom, foreign laws, foreign authority [or] prescription".
Meanwhile, having taken Anne to France on a pre-nuptial honeymoon, Henry married her in Westminster Abbey in January 1533. This was made easier by the death of Archbishop Warham, a strong opponent of an annulment. Henry appointed Thomas Cranmer to succeed him as Archbishop of Canterbury. Cranmer was prepared to grant the annulment of the marriage to Catherine as Henry required, going so far as to pronounce on 23 May the judgment that Henry's marriage with Catherine was against the law of God. Anne gave birth to a daughter, Princess Elizabeth, in September 1533. The Pope responded to the marriage by excommunicating both Henry and Cranmer from the Roman Catholic Church (11 July 1533). Henry was excommunicated again in December 1538.
Consequently, in the same year the Act of First Fruits and Tenths transferred the taxes on ecclesiastical income from the Pope to the Crown. The Act Concerning Peter's Pence and Dispensations outlawed the annual payment by landowners of one penny to the Pope. This Act also reiterated that England had "no superior under God, but only your Grace" and that Henry's "imperial crown" had been diminished by "the unreasonable and uncharitable usurpations and exactions" of the Pope.
In case any of this should be resisted, Parliament passed the Treasons Act 1534, which made it high treason punishable by death to deny Royal Supremacy. The following year, Thomas More and John Fisher were executed under this legislation. Finally, in 1536, Parliament passed the Act against the Pope's Authority, which removed the last part of papal authority still legal. This was Rome's power in England to decide disputes concerning Scripture.
The break with Rome gave Henry VIII power to administer the English Church, tax it, appoint its officials, and control its laws. It also gave him control over the church's doctrine and ritual. Despite reading Protestant books, such as Simon Fish's Supplication for the Beggars and Tyndale's The Obedience of a Christian Man, and seeking Protestant support for his annulment, Henry's religious views remained conservative. Nevertheless, to promote and defend the Royal Supremacy, he embraced the language of the continental Reformation all while maintaining a middle way between religious extremes. The King relied on men with Protestant sympathies, such as Thomas Cromwell and Thomas Cranmer, to carry out his religious programme.
Since 1529, Cranmer had risen to prominence as part of the team working on the annulment. Having begun the task as a Catholic humanist, Cranmer's religious views had shifted toward Protestantism by 1531, in part due to the personal contacts made with continental reformers. While on a diplomatic mission to Emperor Charles V in 1532, Cranmer visited Lutheran Nuremberg where he became friends with theologian Andreas Osiander. It was at this time that Cranmer became interested in Lutheranism, and he renounced his priestly vow of celibacy to secretly marry Osiander's niece. The Lutherans, however, were not in favour of the annulment, forcing Cranmer and Henry to also seek support from other emerging Protestant churches in Germany and Switzerland. This brought him into contact with Martin Bucer of Strasbourg. After Warham's death, Cranmer was made Archbishop of Canterbury (with papal consent) in 1533.
In 1534, a new Heresy Act ensured that no one could be punished for speaking against the pope and also made it more difficult to convict someone of heresy; however, sacramentarians and Anabaptists continued to be vigorously persecuted. What followed was a period of doctrinal confusion as both conservatives and reformers attempted to shape the church's future direction. The reformers were aided by Cromwell, who in January 1535 was made vicegerent in spirituals. Effectively the King's vicar general, Cromwell's authority was greater than that of bishops, even the Archbishop of Canterbury. Largely due to Anne Boleyn's influence, a number of Protestants were appointed bishops between 1534 and 1536. These included Latimer, Thomas Goodrich, John Salcot, Nicholas Shaxton, William Barlow, John Hilsey and Edward Foxe. During the same period, the most influential conservative bishop, Stephen Gardiner, was sent to France on a diplomatic mission and thus removed from an active role in English politics for three years.
Cromwell's programme, assisted by Anne Boleyn's influence over episcopal appointments, was not merely against the clergy and the power of Rome. He persuaded Henry that safety from political alliances that Rome might attempt to bring together lay in negotiations with the German Lutheran princes of the Schmalkaldic League. There also seemed to be a possibility that Emperor Charles V might act to avenge his rejected aunt (Queen Catherine) and enforce the pope's excommunication. The negotiations did not lead to an alliance but did bring Lutheran ideas to England.
In 1536, Convocation adopted the first doctrinal statement for the Church of England, the Ten Articles. This was followed by the Bishops' Book in 1537. These established a semi-Lutheran doctrine for the church. Justification by faith, qualified by an emphasis on good works following justification, was a core teaching. The traditional seven sacraments were reduced to three only—baptism, Eucharist and penance. Catholic teaching on praying to saints, purgatory and the use of images in worship was undermined.
In August 1536, the same month the Ten Articles were published, Cromwell issued a set of Royal Injunctions to the clergy. Minor feast days were changed into normal work days, including those celebrating a church's patron saint and most feasts during harvest time (July through September). The rationale was partly economic as too many holidays led to a loss of productivity and were "the occasion of vice and idleness".In addition, Protestants considered feast days to be examples of superstition. Clergy were to discourage pilgrimages and instruct the people to give to the poor rather than make offerings to images. The clergy were also ordered to place Bibles in both English and Latin in every church for the people to read. This last requirement was largely ignored by the bishops for a year or more due to the lack of any authorised English translation. The only complete vernacular version was the Coverdale Bible finished in 1535 and based on Tyndale's earlier work. It lacked royal approval, however.
Historian Diarmaid MacCulloch in his study of The Later Reformation in England, 1547–1603 argues that after 1537, "England's Reformation was characterized by its hatred of images, as Margaret Aston's work on iconoclasm and iconophobia has repeatedly and eloquently demonstrated."In February 1538, the famous Rood of Grace was condemned as a mechanical fraud and destroyed at St Paul's Cross. In July, the statues of Our Lady of Walsingham, Our Lady of Ipswich, and other Marian images were burned at Chelsea on Cromwell's orders. In September, Cromwell issued a second set of royal injunctions ordering the destruction of images to which pilgrimage offerings were made, the prohibition of lighting votive candles before images of saints, and the preaching of sermons against the veneration of images and relics. Afterwards, the shrine and bones of Thomas Becket, considered by many to have been martyred in defense of the church's liberties, were destroyed at Canterbury Cathedral.
For Cromwell and Cranmer, a step in the Protestant agenda was attacking monasticism, which was associated with the doctrine of purgatory. While the King was not opposed to religious houses on theological grounds, there was concern over the loyalty of the monastic orders, which were international in character and resistant to the Royal Supremacy. The Franciscan Observant houses were closed in August 1534 after that order refused to repudiate papal authority. Between 1535 and 1537, 18 Carthusians were killed for doing the same.
The Crown was also experiencing financial difficulties, and the wealth of the church, in contrast to its political weakness, made confiscation of church property both tempting and feasible. Seizure of monastic wealth was not unprecedented; it had happened before in 1295, 1337, and 1369. The church owned between one-fifth and one-third of the land in all England; Cromwell realised that he could bind the gentry and nobility to Royal Supremacy by selling to them the huge amount of church lands, and that any reversion to pre-Royal Supremacy would entail upsetting many of the powerful people in the realm.
In 1534, Cromwell initiated a visitation of the monasteries ostensibly to examine their character, but in fact, to value their assets with a view to expropriation. The visiting commissioners claimed to have uncovered sexual immorality and financial impropriety amongst the monks and nuns, which became the ostensible justification for their suppression. There were also reports of the possession and display of false relics, such as Hailes Abbey's vial of the Holy Blood, upon investigation announced to be "honey clarified and coloured with saffron". The Compendium Competorum compiled by the visitors documented ten pieces of the True Cross, seven portions of the Virgin Mary's milk and numerous saints' girdles.
Leading reformers, led by Anne Boleyn, wanted to convert monasteries into "places of study and good letters, and to the continual relief of the poor", but this was not done. Thirty-four houses were saved by paying for exemptions. Monks and nuns affected by closures were transferred to larger houses, and monks had the option of becoming secular clergy. In 1536, the Dissolution of the Lesser Monasteries Act closed smaller houses valued at less than £200 a year. Henry used the revenue to help build coastal defenses (see Device Forts) against expected invasion, and all the land was given to the Crown or sold to the aristocracy.
The Royal Supremacy and the abolition of papal authority had not caused widespread unrest, but the attacks on monasteries and the abolition of saints' days and pilgrimages provoked violence. Mobs attacked those sent to break up monastic buildings. Suppression commissioners were attacked by local people in several places.In Northern England, there were a series of uprisings against the dissolutions in late 1536 and early 1537. The Lincolnshire Rising occurred in October 1536 and culminated in a force of 40,000 rebels assembling at Lincoln. They demanded an end to taxation during peacetime, the repeal of the statute of uses, an end to the suppression of monasteries, and that heresy be purged and heretics punished. Henry refused to negotiate, and the revolt collapsed as the nervous gentry convinced the common people to disperse.
The Pilgrimage of Grace was a more serious matter. The revolt began in October at Yorkshire and spread to the other northern counties. Around 50,000 strong, the rebels under Robert Aske's leadership restored 16 of the 26 northern monasteries that had been dissolved. Due to the size of the rebellion, the King was persuaded to negotiate. In December, the Duke of Norfolk offered the rebels a pardon and a parliament to consider their grievances. Aske then sent the rebels home. The promises made to them, however, were ignored by the King, and Norfolk was instructed to put the rebellion down. Forty-seven of the Lincolnshire rebels were executed, and 132 from the Pilgrimage of Grace. In Southern England, smaller disturbances took place in Cornwall and Walsingham in 1537.
The failure of the Pilgrimage of Grace only sped up the process of dissolution and may have convinced Henry VIII that all religious houses needed to be closed. In 1540, the last monasteries were dissolved, wiping out an important element of traditional religion. Former monks were given modest pensions from the Court of Augmentations, and those that could sought work as parish priests. Former nuns received smaller pensions and, as they were still bound by vows of chastity, forbidden to marry. Henry personally devised a plan to form at least thirteen new dioceses so that most counties had one based on a former monastery (or more than one), though this scheme was only partly carried out. New dioceses were established at Bristol, Gloucester, Oxford, Peterborough, Westminster and Chester, but not, for instance, at Shrewsbury, Leicester or Waltham.
According to historian Peter Marshall, Henry's religious reforms were based on the principles of "unity, obedience and the refurbishment of ancient truth". Yet, the outcome was disunity and disobedience. Impatient Protestants took it upon themselves to further reform. Priests said Mass in English rather than Latin and were marrying in violation of clerical celibacy. Not only were there divisions between traditionalists and reformers, but Protestants themselves were divided between establishment reformers who held Lutheran beliefs and radicals who held Anabaptist and Sacramentarian views. Reports of dissension from every part of England reached Cromwell daily—developments he tried to hide from the King.
In September 1538, Stephen Gardiner returned to England, and official religious policy began to drift in a conservative direction. This was due in part to the eagerness of establishment Protestants to disassociate themselves from religious radicals. In September, two Lutheran princes, the Elector of Saxony and Landgrave of Hesse, sent warnings of Anabaptist activity in England. A commission was swiftly created to seek out Anabaptists. Henry personally presided at the trial of John Lambert in November 1538 for denying the real presence of Christ in the Eucharist. At the same time, he shared in the drafting of a proclamation ordering Anabaptists and Sacramentaries to get out of the country or face death. Discussion of the real presence (except by those educated in the universities) was forbidden, and priests who married were to be dismissed.
It was becoming clear that the King's views on religion differed from those of Cromwell and Cranmer. Henry made his traditional preferences known during the Easter Triduum of 1539, where he crept to the cross on Good Friday. Later that year, Parliament passed the Six Articles reaffirming Roman Catholic beliefs and practices such as transubstantiation, clerical celibacy, confession to a priest, votive masses, and withholding communion wine from the laity.
On 28 June 1540 Cromwell, Henry's longtime advisor and loyal servant, was executed. Different reasons were advanced: that Cromwell would not enforce the Act of Six Articles; that he had supported Robert Barnes, Hugh Latimer and other heretics; and that he was responsible for Henry's marriage to Anne of Cleves, his fourth wife. Many other arrests under the Act followed. On 30 July, the reformers Barnes, William Jerome and Thomas Gerrard were burned at the stake. In a display of religious impartiality, Thomas Abell, Richard Featherstone and Edward Powell—all Roman Catholics—were hanged and quartered while the Protestants burned. European observers were shocked and bewildered. French diplomat Charles de Marillac wrote that Henry's religious policy was a "climax of evils" and that:
[I]t is difficult to have a people entirely opposed to new errors which does not hold with the ancient authority of the Church and of the Holy See, or, on the other hand, hating the Pope, which does not share some opinions with the Germans. Yet the government will not have either the one or the other, but insists on their keeping what is commanded, which is so often altered that it is difficult to understand what it is.
Despite setbacks, Protestants managed to win some victories. In May 1541, the King ordered copies of the Great Bible to be placed in all churches; failure to comply would result in a £2 fine. Protestants could celebrate the growing access to vernacular scripture as most churches had Bibles by 1545. The iconoclastic policies of 1538 were continued in the autumn when the Archbishops of Canterbury and York were ordered to destroy all remaining shrines in England. Furthermore, Cranmer survived formal charges of heresy in the Prebendaries' Plot of 1543.
Traditionalists, nevertheless, seemed to have the upper hand. By the spring of 1543, Protestant innovations had been reversed, and only the break with Rome and the dissolution of the monasteries remained unchanged. In May 1543, a new formulary was published to replace the Bishops' Book. This King's Book rejected justification by faith alone and defended traditional ceremonies and the use of images. This was followed days later by passage of the Act for the Advancement of True Religion, which restricted Bible reading to men and women of noble birth. Henry expressed his fears to Parliament in 1545 that "the Word of God, is disputed, rhymed, sung and jangled in every ale house and tavern, contrary to the true meaning and doctrine of the same."
By the spring of 1544, the conservatives appeared to be losing influence once again. In March, Parliament made it more difficult to prosecute people for violating the Six Articles. Cranmer's Exhortation and Litany, the first official vernacular service, was published in June 1544, and the King's Primer became the only authorised English prayer book in May 1545. Both texts had a reformed emphasis. After the death of the conservative Edward Lee in September 1544, the Protestant Robert Holgate replaced him as Archbishop of York. In December 1545, the King was empowered to seize the property of chantries (trust funds endowed to pay for priests to say masses for the dead). While Henry's motives were largely financial (England was at war with France and desperately in need of funds), the passage of the Chantries Act was "an indication of how deeply the doctrine of purgatory had been eroded and discredited".
In 1546, the conservatives were once again in the ascendant. A series of controversial sermons preached by the Protestant Edward Crome set off a persecution of Protestants that the traditionalists used to effectively target their rivals. It was during this time that Anne Askew was tortured in the Tower of London and burnt at the stake. Even Henry's last wife, Katherine Parr, was suspected of heresy but saved herself by appealing to the King's mercy. With the Protestants on the defensive, traditionalists pressed their advantage by banning Protestant books.
The conservative persecution of Queen Katherine, however, backfired. By November 1546, there were already signs that religious policy was once again tilting towards Protestantism. The King's will provided for a regency council to rule after his death, which would have been dominated by traditionalists, such as the Duke of Norfolk, Lord Chancellor Wriothesly, Bishop Gardiner and Bishop Tunstall. After a dispute with the King, Bishop Gardiner, the leading conservative churchman, was disgraced and removed as a councilor. Later, the Duke of Norfolk, the most powerful conservative nobleman, was arrested. By the time Henry died in 1547, the Protestant Edward Seymour, brother of Jane Seymour, Henry's third wife (and therefore uncle to the future Edward VI), managed—by a number of alliances such as with Lord Lisle—to gain control over the Privy Council.
When Henry died in 1547, his nine-year-old son, Edward VI, inherited the throne. Because Edward was given a Protestant humanist education, Protestants held high expectations and hoped he would be like Josiah, the biblical king of Judah who destroyed the altars and images of Baal. During the seven years of Edward's reign, a Protestant establishment would gradually implement religious changes that were "designed to destroy one Church and build another, in a religious revolution of ruthless thoroughness".
Initially, however, Edward was of little account politically. Real power was in the hands of the regency council, which elected Edward Seymour, 1st Duke of Somerset, to be Lord Protector. The Protestant Somerset pursued reform hesitantly at first, partly because his powers were not unchallenged. The Six Articles remained the law of the land, and a proclamation was issued on 24 May reassuring the people against any "innovations and changes in religion".
Nevertheless, Seymour and Cranmer did plan to further the reformation of religion. In July, a Book of Homilies was published, from which all clergy were to preach on Sundays. The homilies were explicitly Protestant in their content, condemning relics, images, rosary beads, holy water, palms, and other "papistical superstitions". It also directly contradicted the King's Book by teaching "we be justified by faith only, freely, and without works". Despite objections from Gardiner, who questioned the legality of bypassing both Parliament and Convocation, justification by faith had been made a central teaching of the English Church.
In August 1547, thirty commissioners—nearly all Protestants—were appointed to carry out a royal visitation of England's churches.The Royal Injunctions of 1547 issued to guide the commissioners were borrowed from Cromwell's 1538 injunctions but revised to be more radical. Historian Eamon Duffy calls them a "significant shift in the direction of full-blown Protestantism". Church processions—one of the most dramatic and public aspects of the traditional liturgy—were banned. The injunctions also attacked the use of sacramentals, such as holy water. It was emphasized that they imparted neither blessing nor healing but were only reminders of Christ. Lighting votive candles before saints' images had been forbidden in 1538, and the 1547 injunctions went further by outlawing those placed on the rood loft. Reciting the rosary was also condemned.
The injunctions set off a wave of iconoclasm in the autumn of 1547. While the injunctions only condemned images that were abused as objects of worship or devotion, the definition of abuse was broadened to justify the destruction of all images and relics. Stained glass, shrines, statues, and roods were defaced or destroyed. Church walls were whitewashed and covered with biblical texts condemning idolatry.
Conservative bishops Edmund Bonner and Gardiner protested the visitation, and both were arrested. Bonner spent nearly two weeks in the Fleet Prison before being released. Gardiner was sent to the Fleet Prison in September and remained there until January 1548. However, he continued to refuse to enforce the new religious policies and was arrested once again in June when he was sent to the Tower of London for the rest of Edward's reign.
When a new Parliament met in November 1547, it began to dismantle the laws passed during Henry VIII's reign to protect traditional religion. The Act of Six Articles was repealed—decriminalizing denial of the real, physical presence of Christ in the Eucharist. The old heresy laws were also repealed, allowing free debate on religious questions. In December, the Sacrament Act allowed the laity to receive communion under both kinds, the wine as well as the bread. This was opposed by conservatives but welcomed by Protestants.
The Chantries Act 1547 abolished the remaining chantries and confiscated their assets. Unlike the Chantry Act 1545, the 1547 act was intentionally designed to eliminate the last remaining institutions dedicated to praying for the dead. Confiscated wealth funded the Rough Wooing of Scotland. Chantry priests had served parishes as auxiliary clergy and schoolmasters, and some communities were destroyed by the loss of the charitable and pastoral services of their chantries.
Historians dispute how well this was received. A.G. Dickens contended that people had "ceased to believe in intercessory masses for souls in purgatory", but Eamon Duffy argued that the demolition of chantry chapels and the removal of images coincided with the activity of royal visitors. The evidence is often ambiguous. In some places, chantry priests continued to say prayers and landowners to pay them to do so. Some parishes took steps to conceal images and relics in order to rescue them from confiscation and destruction. Opposition to the removal of images was widespread—so much so that when, during the Commonwealth, William Dowsing was commissioned to the task of image breaking in Suffolk, his task, as he records it, was enormous.
The second year of Edward's reign was a turning point for the English Reformation; many people identified the year 1548, rather than the 1530s, as the beginning of the English Church's schism from the Roman Catholic Church. On 18 January 1548, the Privy Council abolished the use of candles on Candlemas, ashes on Ash Wednesday and palms on Palm Sunday. On 21 February, the council explicitly ordered the removal of all church images.
On 8 March, a royal proclamation announced a more significant change—the first major reform of the Mass and of the Church of England's official eucharistic theology.The "Order of the Communion" was a series of English exhortations and prayers that reflected Protestant theology and were inserted into the Latin Mass. A significant departure from tradition was that individual confession to a priest—long a requirement before receiving the Eucharist—was made optional and replaced with a general confession said by the congregation as a whole. The effect on religious custom was profound as a majority of laypeople, not just Protestants, most likely ceased confessing their sins to their priests. By 1548, Cranmer and other leading Protestants had moved from the Lutheran to the Reformed position on the Eucharist. Significant to Cranmer's change of mind was the influence of Strasbourg theologian Martin Bucer. This shift can be seen in the Communion order's teaching on the Eucharist. Laypeople were instructed that when receiving the sacrament they "spiritually eat the flesh of Christ", an attack on the belief in the real, bodily presence of Christ in the Eucharist. The Communion order was incorporated into the new prayer book largely unchanged.
That prayer book and liturgy, the Book of Common Prayer, was authorized by the Act of Uniformity 1549. It replaced the several regional Latin rites then in use, such as the Use of Sarum, the Use of York and the Use of Hereford, with an English-language liturgy. Authored by Cranmer, this first prayer book was a temporary compromise with conservatives. It provided Protestants with a service free from what they considered superstition, while maintaining the traditional structure of the mass.
The cycles and seasons of the church year continued to be observed, and there were texts for daily Matins (Morning Prayer), Mass and Evensong (Evening Prayer). In addition, there was a calendar of saints' feasts with collects and scripture readings appropriate for the day. Priests still wore vestments—the prayer book recommended the cope rather than the chasuble. Many of the services were little changed. Baptism kept a strongly sacramental character, including the blessing of water in the baptismal font, promises made by godparents, making the sign of the cross on the child's forehead, and wrapping it in a white chrism cloth. The confirmation and marriage services followed the Sarum rite. There were also remnants of prayer for the dead and the Requiem Mass, such as the provision for celebrating holy communion at a funeral.
Nevertheless, the first Book of Common Prayer was a "radical" departure from traditional worship in that it "eliminated almost everything that had till then been central to lay Eucharistic piety".Communion took place without any elevation of the consecrated bread and wine. The elevation had been the central moment of the old liturgy, attached as it was to the idea of real presence. In addition, the prayer of consecration was changed to reflect Protestant theology. Three sacrifices were mentioned; the first was Christ's sacrifice on the cross. The second was the congregation's sacrifice of praise and thanksgiving, and the third was the offering of "ourselves, our souls and bodies, to be a reasonable, holy and lively sacrifice" to God. While the medieval Canon of the Mass "explicitly identified the priest's action at the altar with the sacrifice of Christ", the Prayer Book broke this connection by stating the church's offering of thanksgiving in the Eucharist was not the same as Christ's sacrifice on the cross. Instead of the priest offering the sacrifice of Christ to God the Father, the assembled offered their praises and thanksgivings. The Eucharist was now to be understood as merely a means of partaking in and receiving the benefits of Christ's sacrifice.
There were other departures from tradition. At least initially, there was no music because it would take time to replace the church's body of Latin music. Most of the liturgical year was simply "bulldozed away" with only the major feasts of Christmas, Easter and Whitsun along with a few biblical saints' days (Apostles, Evangelists, John the Baptist and Mary Magdalene) and only two Marian feast days (the Purification and the Annunciation). The Assumption, Corpus Christi and other festivals were gone.
In 1549, Parliament also legalized clerical marriage, something already practiced by some Protestants (including Cranmer) but considered an abomination by conservatives.
Enforcement of the new liturgy did not always take place without a struggle. In the West Country, the introduction of the Book of Common Prayer was the catalyst for a series of uprisings through the summer of 1549. There were smaller upheavals elsewhere from the West Midlands to Yorkshire. The Prayer Book Rebellion was not only in reaction to the prayer book; the rebels demanded a full restoration of pre-Reformation Catholicism. They were also motivated by economic concerns, such as enclosure. In East Anglia, however, the rebellions lacked a Roman Catholic character. Kett's Rebellion in Norwich blended Protestant piety with demands for economic reforms and social justice.
The insurrections were put down only after considerable loss of life. Somerset was blamed and was removed from power in October. It was wrongly believed by both conservatives and reformers that the Reformation would be overturned. Succeeding Somerset as de facto regent was John Dudley, 1st Earl of Warwick, newly appointed Lord President of the Privy Council. Warwick saw further implementation of the reforming policy as a means of gaining Protestant support and defeating his conservative rivals.
From that point on, the Reformation proceeded apace. Since the 1530s, one of the obstacles to Protestant reform had been the bishops, bitterly divided between a traditionalist majority and a Protestant minority. This obstacle was removed in 1550–1551 when the episcopate was purged of conservatives.Edmund Bonner of London, William Rugg of Norwich, Nicholas Heath of Worcester, John Vesey of Exeter, Cuthbert Tunstall of Durham, George Day of Chichester and Stephen Gardiner of Winchester were either deprived of their bishoprics or forced to resign. Thomas Thirlby, Bishop of Westminster, managed to stay a bishop only by being translated to the Diocese of Norwich, "where he did virtually nothing during his episcopate". Traditionalist bishops were replaced by Protestants such as Nicholas Ridley, John Ponet, John Hooper and Miles Coverdale.
The newly enlarged and emboldened Protestant episcopate turned its attention to ending efforts by conservative clergy to "counterfeit the popish mass" through loopholes in the 1549 prayer book. The Book of Common Prayer was composed during a time when it was necessary to grant compromises and concessions to traditionalists. This was taken advantage of by conservative priests who made the new liturgy as much like the old one as possible, including elevating the Eucharist. The conservative Bishop Gardiner endorsed the prayer book while in prison, and historian Eamon Duffy notes that many lay people treated the prayer book "as an English missal".
To attack the mass, Protestants began demanding the removal of stone altars. Bishop Ridley launched the campaign in May 1550 when he commanded all altars to be replaced with wooden communion tables in his London diocese.Other bishops throughout the country followed his example, but there was also resistance. In November 1550, the Privy Council ordered the removal of all altars in an effort to end all dispute. While the prayer book used the term "altar", Protestants preferred a table because at the Last Supper Christ instituted the sacrament at a table. The removal of altars was also an attempt to destroy the idea that the Eucharist was Christ's sacrifice. During Lent in 1550, John Hooper preached, "as long as the altars remain, both the ignorant people, and the ignorant and evil-persuaded priest, will dream always of sacrifice".
In March 1550, a new ordinal was published that was based on Martin Bucer's own treatise on the form of ordination. While Bucer had provided for only one service for all three orders of clergy, the English ordinal was more conservative and had separate services for deacons, priests and bishops.During his consecration as bishop of Gloucester, John Hooper objected to the mention of "all saints and the holy Evangelist" in the Oath of Supremacy and to the requirement that he wear a black chimere over a white rochet. Hooper was excused from invoking the saints in his oath, but he would ultimately be convinced to wear the offensive consecration garb. This was the first battle in the vestments controversy, which was essentially a conflict over whether the church could require people to observe ceremonies that were neither necessary for salvation nor prohibited by scripture.
The 1549 Book of Common Prayer was criticized by Protestants both in England and abroad for being too susceptible to Roman Catholic re-interpretation. Martin Bucer identified 60 problems with the prayer book, and the Italian Peter Martyr Vermigli provided his own complaints. Shifts in Eucharistic theology between 1548 and 1552 also made the prayer book unsatisfactory—during that time English Protestants achieved a consensus rejecting any real bodily presence of Christ in the Eucharist. Some influential Protestants such as Vermigli defended Zwingli's symbolic view of the Eucharist. Less radical Protestants such as Bucer and Cranmer advocated for a spiritual presence in the sacrament. Cranmer himself had already adopted receptionist views on the Lord's Supper. In April 1552, a new Act of Uniformity authorized a revised Book of Common Prayer to be used in worship by 1 November.
This new prayer book removed many of the traditional elements in the 1549 prayer book, resulting in a more Protestant liturgy. The communion service was designed to remove any hint of consecration or change in the bread and wine. Instead of unleavened wafers, ordinary bread was to be used. The prayer of invocation was removed, and the minister no longer said "the body of Christ" when delivering communion. Rather, he said, "Take and eat this, in remembrance that Christ died for thee, and feed on him in thy heart by faith, with thanksgiving". Christ's presence in the Lord's Supper was a spiritual presence "limited to the subjective experience of the communicant". Anglican bishop and scholar Colin Buchanan interprets the prayer book to teach that "the only point where the bread and wine signify the body and blood is at reception". Rather than reserving the sacrament (which often led to Eucharistic adoration), any leftover bread or wine was to be taken home by the curate for ordinary consumption.
In the new prayer book, the last vestiges of prayers for the dead were removed from the funeral service. Unlike the 1549 version, the 1552 prayer book removed many traditional sacramentals and observances that reflected belief in the blessing and exorcism of people and objects. In the baptism service, infants no longer received minor exorcism and the white chrisom robe. Anointing was no longer included in the services for baptism, ordination and visitation of the sick. These ceremonies were altered to emphasise the importance of faith, rather than trusting in rituals or objects. Clerical vestments were simplified—ministers were only allowed to wear the surplice and bishops had to wear a rochet.
Throughout Edward's reign, inventories of parish valuables, ostensibly for preventing embezzlement, convinced many the government planned to seize parish property, just as was done to the chantries. These fears were confirmed in March 1551 when the Privy Council ordered the confiscation of church plate and vestments "for as much as the King's Majestie had neede presently of a mass of money" [sic]. No action was taken until 1552–1553 when commissioners were appointed. They were instructed to leave only the "bare essentials" required by the 1552 Book of Common Prayer—a surplice, tablecloths, communion cup and a bell. Items to be seized included copes, chalices, chrismatories, patens, monstrances and candlesticks. Many parishes sold their valuables rather than have them confiscated at a later date. The money funded parish projects that could not be challenged by royal authorities. In many parishes, items were concealed or given to local gentry who had, in fact, lent them to the church.
The confiscations caused tensions between Protestant church leaders and Warwick, now Duke of Northumberland. Cranmer, Ridley and other Protestant leaders did not fully trust Northumberland. Northumberland in turn sought to undermine these bishops by promoting their critics, such as Jan Laski and John Knox. Cranmer's plan for a revision of English canon law, the Reformatio legum ecclesiasticarum, failed in Parliament due to Northumberland's opposition. Despite such tensions, a new doctrinal statement to replace the King's Book was issued on royal authority in May 1553. The Forty-two Articles reflected the Reformed theology and practice taking shape during Edward's reign, which historian Christopher Haigh describes as a "restrained Calvinism". They affirmed predestination and that the King of England was Supreme Head of the Church of England under Christ.
King Edward became seriously ill in February and died in July 1553. Before his death, Edward was concerned that Mary, his devoutly Catholic sister, would overturn his religious reforms. A new plan of succession was created in which both of Edward's sisters Mary and Elizabeth were bypassed on account of illegitimacy in favour of the Protestant Jane Grey, the granddaughter of Edward's aunt Mary Tudor and daughter-in-law of the Duke of Northumberland. This new succession violated the Third Succession Act of 1544 and was widely seen as an attempt by Northumberland to stay in power. Northumberland was unpopular due to the church confiscations, and support for Jane collapsed. On 19 July, the Privy Council proclaimed Mary queen to the acclamation of the crowds in London.
Both Protestants and Roman Catholics understood that the accession of Mary I to the throne meant a restoration of traditional religion. Before any official sanction, Latin Masses began reappearing throughout England, despite the 1552 Book of Common Prayer remaining the only legal liturgy. Mary began her reign cautiously by emphasising the need for tolerance in matters of religion and proclaiming that, for the time being, she would not compel religious conformity. This was in part Mary's attempt to avoid provoking Protestant opposition before she could consolidate her power. While Protestants were not a majority of the population, their numbers had grown through Edward's reign. Historian Eamon Duffy writes that "Protestantism was a force to be reckoned with in London and in towns like Bristol, Rye, and Colchester, and it was becoming so in some northern towns such as Hessle, Hull, and Halifax."
Following Mary's accession, the Duke of Norfolk along with the conservative bishops Bonner, Gardiner, Tunstall, Day and Heath were released from prison and restored to their former dioceses. By September 1553, Hooper and Cranmer were imprisoned. Northumberland himself was executed but not before his conversion to Catholicism.
The break with Rome and the religious reforms of Henry VIII and Edward VI were achieved through parliamentary legislation and could only be reversed through Parliament. When Parliament met in October, Bishop Gardiner, now Lord Chancellor, initially proposed the repeal of all religious legislation since 1529. The House of Commons refused to pass this bill, and after heated debate, Parliament repealed all Edwardian religious laws, including clerical marriage and the prayer book, in the First Statute of Repeal. By 20 December, the Mass was reinstated by law. There were disappointments for Mary: Parliament refused to penalise non-attendance at Mass, would not restore confiscated church property, and left open the question of papal supremacy.
If Mary was to secure England for Roman Catholicism, she needed an heir, and her Protestant half-sister Elizabeth had to be prevented from inheriting the Crown. On the advice of her cousin Charles V, Holy Roman Emperor, she married his son, Philip II of Spain, in 1554. There was opposition, and even a rebellion in Kent led by Sir Thomas Wyatt, even though it was provided that Philip would never inherit the kingdom if there was no heir, would receive no estates, and would have no coronation.
By the end of 1554, Henry VIII's religious settlement had been re-instituted, but England was still not reunited with Rome. Before reunion could occur, church property disputes had to be settled—which, in practice, meant letting the nobility and gentry who had bought confiscated church lands keep them. Cardinal Reginald Pole, the Queen's cousin, arrived in November 1554 as papal legate to end England's schism with the Roman Catholic Church. On 28 November, Pole addressed Parliament to ask it to end the schism, declaring "I come not to destroy, but to build. I come to reconcile, not to condemn. I come not to compel, but to call again." In response, Parliament submitted a petition to the Queen the next day asking that "this realm and dominions might be again united to the Church of Rome by the means of the Lord Cardinal Pole".
On 30 November, Pole spoke to both houses of Parliament, absolving the members of Parliament "with the whole realm and dominions thereof, from all heresy and schism". Afterwards, bishops absolved diocesan clergy, and they in turn absolved parishioners. On 26 December, the Privy Council introduced legislation repealing the religious legislation of Henry VIII's reign and implementing the reunion with Rome. This bill was passed as the Second Statute of Repeal.
Historian Eamon Duffy writes that the Marian religious "programme was not one of reaction but of creative reconstruction", absorbing whatever was considered positive in the reforms of Henry VIII and Edward VI. The result was "subtly but distinctively different from the Catholicism of the 1520s." According to historian Christopher Haigh, the Catholicism taking shape in Mary's reign "reflected the mature Erasmian Catholicism" of its leading clerics, who were all educated in the 1520s and 1530s. Marian church literature, church benefactions and churchwarden accounts suggest less emphasis on saints, images and prayer for the dead. There was a greater focus on the need for inward contrition in addition to external acts of penance. Cardinal Pole himself was a member of the Spirituali, a Catholic reform movement that shared with Protestants an emphasis on man's total dependence on God's grace by faith and Augustinian views on salvation.
Cardinal Pole would eventually replace Cranmer as Archbishop of Canterbury in 1556, jurisdictional issues between England and Rome having prevented Cranmer's removal. Mary could have had Cranmer tried and executed for treason—he had supported the claims of Lady Jane Grey—but she resolved to have him tried for heresy. His recantations of his Protestantism would have been a major coup. Unhappily for her, he unexpectedly withdrew his recantations at the last minute as he was to be burned at the stake, thus ruining her government's propaganda victory.
As papal legate, Pole possessed authority over both his Province of Canterbury and the Province of York, which allowed him to oversee the Counter-Reformation throughout all of England. He re-installed images, vestments and plate in churches. Around 2,000 married clergy were separated from their wives, but the majority of these were allowed to continue their work as priests. Pole was aided by some of the leading Catholic intellectuals, Spanish members of the Dominican Order: Pedro de Soto, Juan de Villagarcía and Bartolomé Carranza.
In 1556, Pole ordered clergy to read one chapter of Bishop Bonner's A Profitable and Necessary Doctrine to their parishioners every Sunday. Modelled on the King's Book of 1543, Bonner's work was a survey of basic Catholic teaching organized around the Apostles' Creed, Ten Commandments, seven deadly sins, sacraments, the Lord's Prayer, and the Hail Mary. Bonner also produced a children's catechism and a collection of homilies.
From December 1555 to February 1556, Cardinal Pole presided over a national legatine synod that produced a set of decrees entitled Reformatio Angliae, or the Reformation of England. The actions taken by the synod anticipated many of the reforms enacted throughout the Catholic Church after the Council of Trent. Pole believed that ignorance and lack of discipline among the clergy had led to England's religious turmoil, and the synod's reforms were designed to remedy both problems. Clerical absenteeism (the practice of clergy failing to reside in their diocese or parish), pluralism, and simony were condemned. Preaching was placed at the centre of the pastoral office, and all clergy were to provide sermons to the people (rectors and vicars who failed to do so were fined). The most important part of the plan was the order to establish a seminary in each diocese, which would replace the disorderly manner in which priests had been trained previously. The Council of Trent would later impose the seminary system upon the rest of the Catholic Church. The synod was also the first to introduce the altar tabernacle used to reserve Eucharistic bread for devotion and adoration.
Mary did what she could to restore church finances and land taken in the reigns of her father and brother. In 1555, she returned to the church the First Fruits and Tenths revenue, but with these new funds came the responsibility of paying the pensions of ex-religious. She restored six religious houses with her own money, notably Westminster Abbey for the Benedictines and Syon Abbey for the Bridgettines. However, there were limits to what could be restored. Only seven religious houses were re-founded between 1555 and 1558, though there were plans to re-establish more. Of the 1,500 ex-religious still living, only about a hundred resumed monastic life, and only a small number of chantries were re-founded. Re-establishments were hindered by the changing nature of charitable giving. A plan to re-establish Greyfriars in London was prevented because its buildings were occupied by Christ's Hospital, a school for orphaned children.
There is debate among historians over how vibrant the restoration was on the local level. According to historian A. G. Dickens, "Parish religion was marked by religious and cultural sterility", though historian Christopher Haigh observed enthusiasm, marred only by poor harvests that produced poverty and want. Recruitment to the English clergy began to rise after almost a decade of declining ordinations. Repairs to long-neglected churches began. In the parishes, "restoration and repair continued, new bells were bought, and church ales produced their bucolic profits". Great church feasts were restored and celebrated with plays, pageants and processions. However, Bishop Bonner's attempt to establish weekly processions in 1556 was a failure. Haigh writes that in years during which processions were banned people had discovered "better uses for their time" as well as "better uses for their money than offering candles to images". The focus was on "the crucified Christ, in the mass, the rood, and Corpus Christi devotion".
Protestants who refused to conform remained an obstacle to Catholic plans. Around 800 Protestants fled England to find safety in Protestant areas of Germany and Switzerland, establishing networks of independent congregations. Safe from persecution, these Marian exiles carried on a propaganda campaign against Roman Catholicism and the Queen's Spanish marriage, sometimes calling for rebellion. Those who remained in England were forced to practise their faith in secret and meet in underground congregations.
In 1555, the initial reconciling tone of the regime began to harden with the revival of the medieval heresy laws, which authorized capital punishment as a penalty for heresy. The persecution of heretics was uncoordinated—sometimes arrests were ordered by the Privy Council, others by bishops, and others by lay magistrates. Protestants brought attention to themselves usually due to some act of dissent, such as denouncing the Mass or refusing to receive the sacrament. A particularly violent act of protest was William Flower's stabbing of a priest during Mass on Easter Sunday, 14 April 1555. Individuals accused of heresy were examined by a church official and, if heresy was found, given the choice between death and signing a recantation. In some cases, Protestants were burnt at the stake after renouncing their recantation.
Around 284 Protestants were burnt at the stake for heresy. Several leading reformers were executed, including Thomas Cranmer, Hugh Latimer, Nicholas Ridley, John Rogers, John Hooper, Robert Ferrar, Rowland Taylor, and John Bradford. Lesser known figures were also among the victims, including around 51 women such as Joan Waste and Agnes Prest. Historian O. T. Hargrave writes that the Marian persecution was not "excessive" by "contemporary continental standards"; however, "it was unprecedented in the English experience". Historian Christopher Haigh writes that it "failed to intimidate all Protestants", whose bravery at the stake inspired others; however, it "was not a disaster: if it did not help the Catholic cause, it did not do much to harm it." After her death, the Queen became known as "Bloody Mary" due to the influence of John Foxe, one of the Marian exiles. Published in 1563, Foxe's Book of Martyrs provided accounts of the executions, and in 1571 the Convocation of Canterbury ordered that Foxe's book should be placed in every cathedral in the land.
Mary's efforts at restoring Roman Catholicism were also frustrated by the church itself. Pope Paul IV declared war on Philip and recalled Pole to Rome to have him tried as a heretic. Mary refused to let him go. The support she might have expected from a grateful Pope was thus denied. From 1557, the Pope refused to confirm English bishops, leading to vacancies and hurting the Marian religious program.
Despite these obstacles, the 5-year restoration was successful. There was support for traditional religion among the people, and Protestants remained a minority. Consequently, Protestants secretly ministering to underground congregations, such as Thomas Bentham, were planning for a long haul, a ministry of survival. Mary's death in November 1558, childless and without having made provision for a Roman Catholic to succeed her, meant that her Protestant sister Elizabeth would be the next queen.
Elizabeth I inherited a kingdom in which a majority of people, especially the political elite, were religiously conservative, and England's main ally was Catholic Spain. For these reasons, the proclamation announcing her accession forbade any "breach, alteration, or change of any order or usage presently established within this our realm". This was only temporary. The new Queen was Protestant, though a conservative one. She also filled her new government with Protestants. The Queen's principal secretary was Sir William Cecil, a moderate Protestant. Her Privy Council was filled with former Edwardian politicians, and only Protestants preached at Court.
In 1559, Parliament passed the Act of Supremacy, which re-established the Church of England's independence from Rome and conferred on Elizabeth the title of Supreme Governor of the Church of England. The Act of Uniformity of 1559 authorised the 1559 Book of Common Prayer, which was a revised version of the 1552 Prayer Book from Edward's reign. Some modifications were made to appeal to Catholics and Lutherans, including giving individuals greater latitude concerning belief in the real presence and authorising the use of traditional priestly vestments. In 1571, the Thirty-Nine Articles were adopted as a confessional statement for the church, and a Book of Homilies was issued outlining the church's reformed theology in greater detail.
The Elizabethan Settlement established a church that was Reformed in doctrine but that preserved certain characteristics of medieval Catholicism, such as cathedrals, church choirs, a formal liturgy contained in the Prayer Book, traditional vestments and episcopal polity. According to historian Diarmaid MacCulloch, the conflicts over the Elizabethan Settlement stem from this "tension between Catholic structure and Protestant theology". During the reigns of Elizabeth and James I, several factions developed within the Church of England.
"Church papists" were Roman Catholics who outwardly conformed to the established church while maintaining their Catholic faith in secret. Catholic authorities disapproved of such outward conformity. Recusants were Roman Catholics who refused to attend Church of England services as required by law.Recusancy was punishable by fines of £20 a month (fifty times an artisan's wage). By 1574, Catholic recusants had organised an underground Roman Catholic Church, distinct from the Church of England. However, it had two major weaknesses: membership loss as church papists conformed fully to the Church of England and a shortage of priests. Between 1574 and 1603, 600 Catholic priests were sent to England. The influx of foreign trained Catholic priests, the unsuccessful Revolt of the Northern Earls, the excommunication of Elizabeth, and the discovery of the Ridolfi plot all contributed to a perception that Catholicism was treasonous. Executions of Catholic priests became more common—the first in 1577, four in 1581, eleven in 1582, two in 1583, six in 1584, fifty-three by 1590, and seventy more between 1601 and 1608. In 1585, it became treason for a Catholic priest to enter the country, as well as for anyone to aid or shelter him. As the older generation of recusant priests died out, Roman Catholicism collapsed among the lower classes in the north, west and in Wales. Without priests, these social classes drifted into the Church of England and Catholicism was forgotten. By Elizabeth's death in 1603, Roman Catholicism had become "the faith of a small sect", largely confined to gentry households.
Gradually, England was transformed into a Protestant country as the Prayer Book shaped Elizabethan religious life. By the 1580s, conformist Protestants (those who conformed their religious practice to the religious settlement) were becoming a majority. Calvinism appealed to many conformists, and Calvinist clergy held the best bishoprics and deaneries during Elizabeth's reign. Other Calvinists were unsatisfied with elements of the Elizabethan Settlement and wanted further reforms to make the Church of England more like the Continental Reformed churches. These nonconformist Calvinists became known as Puritans. Some Puritans refused to bow at the name of Jesus, to make the sign of the cross in baptism, or to use wedding rings or organ music in church. They especially resented the requirement that clergy wear the white surplice and clerical cap. Puritan clergymen preferred to wear black academic attire (see Vestments controversy). Many Puritans believed the Church of England should follow the example of Reformed churches in other parts of Europe and adopt presbyterian polity, under which government by bishops would be replaced with government by elders. However, all attempts to enact further reforms through Parliament were blocked by the Queen.
During the early Stuart period, the Church of England's dominant theology was still Calvinism, but a group of theologians associated with Bishop Lancelot Andrewes disagreed with many aspects of the Reformed tradition, especially its teaching on predestination. They looked to the Church Fathers rather than the Reformers and preferred using the more traditional 1549 Prayer Book. Due to their belief in free will, this new faction is known as the Arminian party, but their high church orientation was more controversial. James I tried to balance the Puritan forces within his church with followers of Andrewes, promoting many of them at the end of his reign.
During the reign of Charles I, the Arminians were ascendant and closely associated with William Laud, Archbishop of Canterbury (1633–1645). Laud and his followers believed the Reformation had gone too far and launched a "'Beauty of Holiness' counter-revolution, wishing to restore what they saw as lost majesty in worship and lost dignity for the sacerdotal priesthood." Laudianism, however, was unpopular with both Puritans and Prayer Book conformists, who viewed the high church innovations as undermining forms of worship they had grown attached to. The English Civil War resulted in the overthrow of Charles I, and a Puritan-dominated Parliament began to dismantle the Elizabethan Settlement. The Puritans, however, were divided among themselves and failed to agree on an alternative religious settlement. A variety of new religious movements appeared, including Baptists, Quakers, Ranters, Seekers, Diggers, Muggletonians, and Fifth Monarchists.
The Restoration of the monarchy in 1660 allowed for the restoration of the Elizabethan Settlement as well, but the Church of England was fundamentally changed. The "Jacobean consensus" was shattered. Many Puritans were unwilling to conform and became dissenters. Now outside the established church, the different strands of the Puritan movement evolved into separate denominations: Congregationalists, Presbyterians, and Baptists.
After the Restoration, Anglicanism took shape as a recognisable tradition. From Richard Hooker, Anglicanism inherited a belief in the "positive spiritual value in ceremonies and rituals, and for an unbroken line of succession from the medieval Church to the latter day Church of England". From the Arminians, it gained a theology of episcopacy and an appreciation for liturgy. From the Puritans and Calvinists, it "inherited a contradictory impulse to assert the supremacy of scripture and preaching".
The religious forces unleashed by the Reformation ultimately destroyed the possibility of religious uniformity. Protestant dissenters were allowed freedom of worship with the Toleration Act 1688. It took Catholics longer to achieve toleration. Penal laws that excluded Catholics from everyday life began to be repealed in the 1770s. Catholics were allowed to vote and sit as members of Parliament in 1829 (see Catholic emancipation).
The historiography of the English Reformation has seen vigorous clashes among dedicated protagonists and scholars for five centuries. The main factual details at the national level have been clear since 1900, as laid out for example by James Anthony Froude and Albert Pollard.
Reformation historiography has seen many schools of interpretation, with Roman Catholic, Anglican and Nonconformist historians using their own religious perspectives. In addition, there has been a highly influential Whig interpretation, based on liberal secularized Protestantism, that depicted the Reformation in England, in the words of Ian Hazlett, as "the midwife delivering England from the Dark Ages to the threshold of modernity, and so a turning point of progress". Finally among the older schools was a neo-Marxist interpretation that stressed the economic decline of the old elites and the rise of the landed gentry and middle classes. All these approaches still have representatives, but the main thrust of scholarly historiography since the 1970s falls into four groupings or schools, according to Hazlett.
Geoffrey Elton leads the first faction with an agenda rooted in political historiography. It concentrates on the top of the early modern church-state, looking at the mechanics of policymaking and the organs of its implementation and enforcement. The key player for Elton was not Henry VIII, but rather his principal Secretary of State Thomas Cromwell. Elton downplays the prophetic spirit of the religious reformers and the theology of keen conviction, dismissing them as meddlesome intrusions from fanatics and bigots.
Secondly, A. G. Dickens and others were motivated by a primarily religious perspective. They prioritize the religious and subjective side of the movement. While recognizing the Reformation was imposed from the top, just as it was everywhere else in Europe, it also responded to aspirations from below. Dickens has been criticized for underestimating the strength of residual and revived Roman Catholicism, but has been praised for his demonstration of the close ties to European influences. In the Dickens school, David Loades has stressed the theological importance of the Reformation for Anglo-British development.
Revisionists comprise a third school, led by Christopher Haigh, Jack Scarisbrick and numerous other scholars. Their main achievement was the discovery of an entirely new corpus of primary sources at the local level, leading them to emphasize the Reformation as it played out on a daily and local basis, with much less emphasis on control from the top. Turning away from elite sources, they emphasize local parish records, diocesan files, guild records, data from boroughs, the courts, and especially telltale individual wills.
Finally, Patrick Collinson and others have brought much more precision to the theological landscape, focusing on the Calvinist Puritans who were impatient with Anglican caution and compromise. Indeed, the Puritans were a distinct subgroup who did not comprise all of Calvinism. The Church of England thus emerged as a coalition of factions, all of them of Protestant inspiration.
All the recent schools have decentered Henry VIII and minimized hagiography. They have paid more attention to localities, Catholicism, radicals, and theological niceties. On Catholicism, the older schools overemphasized Thomas More (1478–1535), to the neglect of other bishops and factors inside Catholicism. The older schools too often concentrated on elite London; the newer ones look to the English villages.
Book of Common Prayer (BCP) is the short title of a number of related prayer books used in the Anglican Communion, as well as by other Christian churches historically related to Anglicanism. The original book, published in 1549 in the reign of Edward VI, was a product of the English Reformation following the break with Rome. The work of 1549 was the first prayer book to include the complete forms of service for daily and Sunday worship in English. It contained Morning Prayer, Evening Prayer, the Litany, and Holy Communion and also the occasional services in full: the orders for Baptism, Confirmation, Marriage, "prayers to be said with the sick", and a funeral service. It also set out in full the "propers": the introits, collects, and epistle and gospel readings for the Sunday service of Holy Communion. Old Testament and New Testament readings for daily prayer were specified in tabular format, as were the Psalms and the canticles, mostly biblical, that were provided to be said or sung between the readings.
Edward VI was the King of England and Ireland from 28 January 1547 until his death in 1553. He was crowned on 20 February at the age of nine. Edward was the son of Henry VIII and Jane Seymour, and England's first monarch to be raised as a Protestant. During his reign, the realm was governed by a regency council because he never reached maturity. The council was first led by his uncle Edward Seymour, 1st Duke of Somerset (1547–1549), and then by John Dudley, 1st Earl of Warwick (1550–1553), who from 1551 was Duke of Northumberland.
Thomas Cranmer was a leader of the English Reformation and Archbishop of Canterbury during the reigns of Henry VIII, Edward VI and, for a short time, Mary I. He helped build the case for the annulment of Henry's marriage to Catherine of Aragon, which was one of the causes of the separation of the English Church from union with the Holy See. Along with Thomas Cromwell, he supported the principle of royal supremacy, in which the king was considered sovereign over the Church within his realm.
Thomas Cromwell, 1st Earl of Essex, was an English lawyer and statesman who served as chief minister to King Henry VIII from 1534 to 1540, when he was beheaded on orders of the king.
The Thirty-nine Articles of Religion are the historically defining statements of doctrines and practices of the Church of England with respect to the controversies of the English Reformation. The Thirty-nine Articles form part of the Book of Common Prayer used by both the Church of England and the U.S. Episcopal Church, among other denominations in the worldwide Anglican Communion and Anglican Continuum.
Nicholas Ridley was an English Bishop of London. Ridley was burned at the stake as one of the Oxford Martyrs during the Marian Persecutions for his teachings and his support of Lady Jane Grey. He is remembered with a commemoration in the calendar of saints in some parts of the Anglican Communion on 16 October.
The formal history of the Church of England is traditionally dated by the Church to the Gregorian mission to England by Augustine of Canterbury in AD 597. As a result of Augustine's mission, Christianity in England came under the control and authority of the Pope. This gave him the power to appoint bishops, preserve or change doctrine, and grant exceptions to standard doctrine.
The Exhortation and Litany, published in 1544, is the earliest officially authorized vernacular service in English. The same rite survives, in modified form, in the Book of Common Prayer.
The Elizabethan Religious Settlement is the name given to the religious and political arrangements made for England during the reign of Elizabeth I (1558–1603) that brought the English Reformation to a conclusion. The Settlement shaped the theology and liturgy of the Church of England and was important to the development of Anglicanism as a distinct Christian tradition.
Via media is a Latin phrase meaning "the middle road" and is a philosophical maxim for life which advocates moderation in all thoughts and actions.
The Tudor period occurred between 1485 and 1603 in England and Wales and includes the Elizabethan period during the reign of Elizabeth I until 1603. The Tudor period coincides with the dynasty of the House of Tudor in England whose first monarch was Henry VII. Historian John Guy (1988) argued that "England was economically healthier, more expansive, and more optimistic under the Tudors" than at any time since the Roman occupation.
Strangers' church was a term used by English-speaking people for independent Protestant churches established in foreign lands or by foreigners in England during the Reformation.
Diarmaid Ninian John MacCulloch is an English historian and academic, specialising in ecclesiastical history and the history of Christianity. Since 1995, he has been a fellow of St Cross College, Oxford; he was formerly the senior tutor. Since 1997, he has been Professor of the History of the Church at the University of Oxford.
The history of the Anglican Communion may be attributed mainly to the worldwide spread of British culture associated with the British Empire. Among other things the Church of England spread around the world and, gradually developing autonomy in each region of the world, became the communion as it exists today.
The Act of Uniformity 1551, sometimes referred to as the Act of Uniformity 1552, was an Act of the Parliament of England.
The Catholic Church in England and Wales is part of the worldwide Catholic Church in full communion with the Holy See. Its origins date from the 6th century, when Pope St Gregory the Great through the Benedictine missionary, Saint Augustine of Canterbury, intensified the evangelization of the Kingdom of Kent linking it to the Holy See in 597 AD. This unbroken communion with the Holy See lasted until King Henry VIII ended it in 1534.
The term Black Rubric is the popular name for the declaration found at the end of the "Order for the Administration of the Lord's Supper" in the Book of Common Prayer (BCP), the Church of England's liturgical book. The Black Rubric explains why communicants should kneel when receiving Holy Communion and excludes possible misunderstandings of this action. The declaration was composed in 1552, but the term dates from the 19th century when the medieval custom of printing the rubrics in red was followed in editions of the BCP while the declaration was printed in black.
The Prebendaries' Plot was an attempt during the English Reformation by religious conservatives to oust Thomas Cranmer from office as Archbishop of Canterbury. The events took place in 1543 and saw Cranmer formally accused of being a heretic. The hope was that this would stop further religious reforms in Kent and end Protestant influence at the royal court of Henry VIII.
Roger Tonge, otherwise Roger Tong or Tongue, was an English clergyman who served as a chaplain to Edward VI and was later appointed dean of Winchester Cathedral in 1549.
The 1549 edition of the Book of Common Prayer is the original version of the Book of Common Prayer (BCP), variations of which are still in use as the official liturgical book of the Church of England and other Anglican churches. Written during the English Reformation, the prayer book was largely the work of Thomas Cranmer, who borrowed from a large number of other sources. Evidence of Cranmer's Protestant theology can be seen throughout the book; however, the services maintain the traditional forms and sacramental language inherited from medieval Catholic liturgies. Criticised by Protestants for being too traditional, it was replaced by a new and significantly revised edition in 1552.
| https://wikimili.com/en/English_Reformation | 21
16 | Every time tax season comes around, you may be wondering to yourself, "Where did the concept of taxes come from?"
The concept of paying taxes is not new. The first known taxation system recorded in history was in Ancient Egypt, around 3000-2800 BC, during the First Dynasty.
The Bible even depicted taxation at work. In Genesis, Joseph told the people of Egypt how to give a portion of their crops to the Pharaoh, who would collect the crops as a buffer against unexpected famines.
Taxes and taxation have played a large role in shaping the history of the United States. During the colonial period, taxes were generally low.
However, various conflicts arose between the Crown and the colonies throughout the 18th century, culminating in the Boston Tea Party, an act of protest by the American colonists against Great Britain over the Tea Act. Dumping many chests of tea into Boston Harbor, Americans spoke out against the cuts to taxation on tea that had driven American smugglers out of work.
The First Income Tax Law
The first income tax law came into being in 1861, just as the Civil War was heating up. To pay for the war effort, Congress passed the Revenue Act of 1861, which levied a 3% tax on incomes over US$800, as well as the Revenue Act of 1862, which levied a 3% tax on incomes above US$600.
The passage of the 16th Amendment in 1913 granted the federal government the ability to tax citizens' income directly. Before then, federal revenue relied heavily on tariffs, which were designed to protect and strengthen the American cotton, steel, and iron industries and to make it easier for Americans to compete on the rapidly industrializing world stage.
Then and Now
Throughout the 20th and 21st centuries, the top marginal tax rate rose substantially overall, from a 7% tax on net personal incomes in 1913 to 35% in 2010, with a slight increase to 37% as recently as 2020.
The government also instituted a variety of other taxes throughout the 20th century, such as the estate tax, originally created to help fund the US Navy, and the Social Security Act of 1935, which Congress created to provide financial security for Americans of retirement age.
Our current income tax model is progressive, and despite what you'd hear from many outlets, higher earners pay a much larger share of the total. Statistics from the IRS suggest that in 2018, the top 1% of all income earners reported 20% of the US's income but paid 37% of the taxes. Further, the top 50% of all taxpayers paid 97% of all taxes.
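"Progressive" here means marginal brackets: each rate applies only to the slice of income that falls inside its bracket, so a filer's effective rate is always lower than the top marginal rate quoted in headlines. The short sketch below is a minimal illustration of that arithmetic; the bracket thresholds and rates are hypothetical placeholders for illustration, not the actual IRS schedule for any year.

```python
def progressive_tax(income, brackets):
    """Compute tax owed under a marginal-bracket schedule.

    `brackets` is a list of (threshold, rate) pairs sorted by threshold;
    each rate applies only to income above its threshold and up to the
    next threshold. The values used below are illustrative only.
    """
    tax = 0.0
    for i, (threshold, rate) in enumerate(brackets):
        upper = brackets[i + 1][0] if i + 1 < len(brackets) else float("inf")
        if income > threshold:
            # Tax only the portion of income that falls within this bracket.
            tax += (min(income, upper) - threshold) * rate
        else:
            break
    return tax

# Hypothetical schedule: 10% up to 10,000; 20% from 10,000 to 50,000; 35% above 50,000.
schedule = [(0, 0.10), (10_000, 0.20), (50_000, 0.35)]
income = 80_000
owed = progressive_tax(income, schedule)
print(f"Tax owed: {owed:,.0f}")                # 1,000 + 8,000 + 10,500 = 19,500
print(f"Effective rate: {owed / income:.1%}")  # about 24.4%, below the 35% top marginal rate
```

Run on a hypothetical US$80,000 income, the toy schedule yields US$19,500 of tax, an effective rate of roughly 24%, even though the top bracket in that schedule is 35%.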
Therefore, taxes are more than a necessary evil—they are a form of civic participation. As Americans, we pay taxes to help our fellow citizens, to ensure that the government has the adequate resources to protect our current way of life.
Otherwise, how will our government fund the Navy, the Army or ensure our roads are up to standard? There is a saying that freedom comes at a price—we all must pay our fair share, regardless of our income. | https://jimdegaetano.com/the-history-of-income-taxes/ | 21 |
15 | Burning of Parliament
The Palace of Westminster, the medieval royal palace used as the home of the British parliament, was largely destroyed by fire on 16 October 1834. The blaze was caused by the burning of small wooden tally sticks which had been used as part of the accounting procedures of the Exchequer until 1826. The sticks were disposed of in a careless manner in the two furnaces under the House of Lords, which caused a chimney fire in the two flues that ran under the floor of the Lords' chamber and up through the walls.
The resulting fire spread rapidly throughout the complex and developed into the biggest conflagration to occur in London between the Great Fire of 1666 and the Blitz of the Second World War; the event attracted large crowds which included several artists who provided pictorial records of the event. The fire lasted for most of the night and destroyed a large part of the palace, including the converted St Stephen's Chapel—the meeting place of the House of Commons—the Lords Chamber, the Painted Chamber and the official residences of the Speaker and the Clerk of the House of Commons.
The actions of Superintendent James Braidwood of the London Fire Engine Establishment ensured that Westminster Hall and a few other parts of the old Houses of Parliament survived the blaze. In 1836 a competition for designs for a new palace was won by Charles Barry. Barry's plans, developed in collaboration with Augustus Pugin, incorporated the surviving buildings into the new complex. The competition established Gothic Revival as the predominant national architectural style and the palace has since been categorised as a UNESCO World Heritage Site of outstanding universal value.
The Palace of Westminster originally dates from the early eleventh century when Canute the Great built his royal residence on the north side of the River Thames. Successive kings added to the complex: Edward the Confessor built Westminster Abbey; William the Conqueror began building a new palace; his son, William Rufus, continued the process, which included Westminster Hall, started in 1097; Henry III built new buildings for the Exchequer—the taxation and revenue gathering department of the country—in 1270 and the Court of Common Pleas, along with the Court of King's Bench and Court of Chancery. By 1245 the King's throne was present in the palace, which signified that the building was at the centre of English royal administration.
In 1295 Westminster was the venue for the Model Parliament, the first English representative assembly, summoned by Edward I; during his reign he called sixteen parliaments, which sat either in the Painted Chamber or the White Chamber. By 1332 the barons (representing the titled classes) and burgesses and citizens (representing the commons) began to meet separately, and by 1377 the two bodies were entirely detached. In 1512 a fire destroyed part of the royal palace complex and Henry VIII moved the royal residence to the nearby Palace of Whitehall, although Westminster still retained its status as a royal palace. In 1547 Henry's son, Edward VI, provided St Stephen's Chapel for the Commons to use as their debating chamber. The House of Lords met in the medieval hall of the Queen's Chamber, before moving to the Lesser Hall in 1801. Over the three centuries from 1547 the palace was enlarged and altered, becoming a warren of wooden passages and stairways.
St Stephen's Chapel remained largely unchanged until 1692 when Sir Christopher Wren, at the time the Master of the King's Works, was instructed to make structural alterations. He lowered the roof, removed the stained glass windows, put in a new floor and covered the original gothic architecture with wood panelling. He also added galleries from which the public could watch proceedings. The result was described by one visitor to the chamber as "dark, gloomy, and badly ventilated, and so small ... when an important debate occurred ... the members were really to be pitied". When the future Prime Minister William Ewart Gladstone remembered his arrival as a new MP in 1832, he recounted "What I may term corporeal conveniences were ... marvellously small. I do not think that in any part of the building it afforded the means of so much as washing the hands." The facilities were so poor that, in debates in 1831 and 1834, Joseph Hume, a Radical MP, called for new accommodation for the House, while his fellow MP William Cobbett asked "Why are we squeezed into so small a space that it is absolutely impossible that there should be calm and regular discussion, even from circumstance alone ... Why are 658 of us crammed into a space that allows each of us no more than a foot and a half square?"
By 1834 the palace complex had been further developed, firstly by John Vardy in the middle of the eighteenth century, and in the early nineteenth century by James Wyatt and Sir John Soane. Vardy added the Stone Building, in a Palladian style to the West side of Westminster Hall; Wyatt enlarged the Commons, moved the Lords into the Court of Requests and rebuilt the Speaker's House. Soane, taking on responsibility for the palace complex on Wyatt's death in 1813, undertook rebuilding of Westminster Hall and constructed the Law Courts in a Neoclassical style. Soane also provided a new royal entrance, staircase and gallery, as well as committee rooms and libraries.
The potential dangers of the building were apparent to some, as no fire stops or party walls were present in the building to slow the progress of a fire. In the late eighteenth century a committee of MPs predicted that there would be a disaster if the palace caught fire. This was followed by a 1789 report from fourteen architects warning against the possibility of fire in the palace; signatories included Soane and Robert Adam. Soane again warned of the dangers in 1828, when he wrote that "the want of security from fire, the narrow, gloomy and unhealthy passages, and the insufficiency of the accommodations in this building are important objections which call loudly for revision and speedy amendment." His report was again ignored.
Since medieval times the Exchequer had used tally sticks, pieces of carved, notched wood, normally willow, as part of their accounting procedures. The parliamentary historian Caroline Shenton has described the tally sticks as "roughly as long as the span of an index finger and thumb". These sticks were split in two so that the two sides to an agreement had a record of the situation. Once the purpose of each tally had come to an end, they were routinely destroyed. By the end of the eighteenth century the usefulness of the tally system had likewise come to an end, and a 1782 Act of Parliament stated that all records should be on paper, not tallies. The Act also abolished sinecure positions in the Exchequer, but a clause in the act ensured it could only take effect once the remaining sinecure-holders had died or retired. The final sinecure-holder died in 1826 and the act came into force, although it took until 1834 for the antiquated procedures to be replaced. The novelist Charles Dickens, in a speech to the Administrative Reform Association, described the retention of the tallies for so long as an "obstinate adherence to an obsolete custom"; he also mocked the bureaucratic steps needed to implement change from wood to paper. He said that "all the red tape in the country grew redder at the bare mention of this bold and original conception." By the time the replacement process had finished there were two cart-loads of old tally sticks awaiting disposal.
In October 1834 Richard Weobley, the Clerk of Works, received instructions from Treasury officials to clear the old tally sticks while parliament was adjourned. He decided against giving the sticks away to parliamentary staff to use as firewood, and instead opted to burn them in the two heating furnaces of the House of Lords, directly below the peers' chambers. The furnaces had been designed to burn coal—which gives off a high heat with little flame—and not wood, which burns with a high flame. The flues of the furnaces ran up the walls of the basement in which they were housed, under the floors of the Lords' chamber, then up through the walls and out through the chimneys.
16 October 1834
Start of the fire
The process of destroying the tally sticks began at dawn on 16 October and continued throughout the day; two Irish labourers, Joshua Cross and Patrick Furlong, were assigned the task. Weobley checked in on the men throughout the day, claiming subsequently that, on his visits, both furnace doors were open, which allowed the two labourers to watch the flames, while the piles of sticks in both furnaces were only ever four inches (ten centimetres) high. Another witness to the events, Richard Reynolds, the firelighter in the Lords, later reported that he had seen Cross and Furlong throwing handfuls of tallies onto the fire—an accusation they both denied.
Those tending the furnaces were unaware that the heat from the fires had melted the copper lining of the flues and started a chimney fire. With the doors of the furnaces open, more oxygen was drawn into the furnaces, which ensured the fire burned more fiercely, and the flames driven further up the flues than they should have been. The flues had been weakened over time by having footholds cut in them by the child chimney sweeps. Although these footholds would have been repaired as the child exited on finishing the cleaning, the fabric of the chimney was still weakened by the action. In October 1834 the chimneys had not yet had their annual sweep, and a considerable amount of clinker had built up inside the flues.
A strong smell of burning was present in the Lords' chambers during the afternoon of 16 October, and at 4:00 pm two gentlemen tourists visiting to see the Armada tapestries that hung there were unable to view them properly because of the thick smoke. As they approached Black Rod's box in the corner of the room, they felt heat from the floor coming through their boots. Shortly after 4:00 pm Cross and Furlong finished work, put the last few sticks into the furnaces—closing the doors as they did so—and left to go to the nearby Star and Garter public house.
The first flames were spotted at 6:00 pm, under the door of the House of Lords, by the wife of one of the doorkeepers; she entered the chamber to see Black Rod's box alight, and flames burning the curtains and wood panels, and raised the alarm. For 25 minutes the staff inside the palace initially panicked and then tried to deal with the blaze, but they did not call for assistance, or alert staff at the House of Commons, at the other end of the palace complex.
At 6:30 pm there was a flashover, a giant ball of flame that The Manchester Guardian reported "burst forth in the centre of the House of Lords, ... and burnt with such fury that in less than half an hour, the whole interior ... presented ... one entire mass of fire." The explosion, and the resultant burning roof, lit up the skyline, and could be seen by the royal family in Windsor Castle, 20 miles (32 km) away. Alerted by the flames, help arrived from nearby parish fire engines; as there were only two hand-pump engines on the scene, they were of limited use. They were joined at 6:45 pm by 100 soldiers from the Grenadier Guards, some of whom helped the police in forming a large square in front of the palace to keep the growing crowd back from the firefighters; some of the soldiers assisted the firemen in pumping the water supply from the engines.
The London Fire Engine Establishment (LFEE)—an organisation run by several insurance companies in the absence of a publicly run brigade—was alerted at about 7:00 pm, by which time the fire had spread from the House of Lords. The head of the LFEE, James Braidwood, brought with him 12 engines and 64 firemen, even though the Palace of Westminster was a collection of uninsured government buildings, and therefore fell outside the protection of the LFEE. Some of the firefighters ran their hoses down to the Thames, but the river was at low tide, which meant a poor supply of water for the engines on the river side of the building.
By the time Braidwood and his men had arrived on the scene, the House of Lords had been destroyed. A strong south-westerly breeze had fanned the flames along the wood-panelled and narrow corridors into St Stephen's Chapel. Shortly after his arrival the roof of the chapel collapsed; the resultant noise was so loud that the watching crowds thought there had been a Gunpowder Plot-style explosion. According to The Manchester Guardian, "By half-past seven o'clock the engines were brought to play upon the building both from the river and the land side, but the flames had by this time acquired such a predominance that the quantity of water thrown upon them produced no visible effect." Braidwood saw it was too late to save most of the palace, so elected to focus his efforts on saving Westminster Hall, and he had his firemen cut away the part of the roof that connected the hall to the already burning Speaker's House, and then soak the hall's roof to prevent it catching fire. In doing so he saved the medieval structure at the expense of those parts of the complex already ablaze.
The glow from the burning, and the news spreading quickly round London, ensured that crowds continued to turn up in increasing numbers to watch the spectacle. Among them was a reporter for The Times, who noticed that there were "vast gangs of the light-fingered gentry in attendance, who doubtless reaped a rich harvest, and [who] did not fail to commit several desperate outrages." The crowds were so thick that they blocked Westminster Bridge in their attempts to get a good view, and many took to the river in whatever craft they could find or hire in order to watch better. A crowd of thousands congregated in Parliament Square to witness the spectacle, including the Prime Minister—Lord Melbourne—and many of his cabinet. Thomas Carlyle, the Scottish philosopher, was one of those present that night, and he later recalled that:
"The crowd was quiet, rather pleased than otherwise; whew'd and whistled when the breeze came as if to encourage it: 'there's a flare-up (what we call shine) for the House o' Lords.' —'A judgment for the Poor-Law Bill!' — 'There go their hacts' (acts)! Such exclamations seemed to be the prevailing ones. A man sorry I did not anywhere see."
This view was doubted by Sir John Hobhouse, the First Commissioner of Woods and Forests, who oversaw the upkeep of royal buildings, including the Palace of Westminster. He wrote that "the crowd behaved very well; only one man was taken up for huzzaing when the flames increased. ... on the whole, it was impossible for any large assemblage of people to behave better." Many of the MPs and peers present, including Lord Palmerston, the Secretary of State for Foreign Affairs, helped break down doors to rescue books and other treasures, aided by passers-by; the Deputy Serjeant-at-Arms had to break into a burning room to save the parliamentary mace.
At 9:00 pm three Guards regiments arrived on the scene. Although the troops assisted in crowd control, their arrival was also a reaction of the authorities to fears of a possible insurrection, for which the destruction of parliament could have signalled the first step. The three European revolutions of 1830—the French, Belgian and Polish actions—were still of concern, as were the unrest from the Captain Swing riots, and the recent passing of the Poor Law Amendment Act 1834, which altered the relief provided by the workhouse system.
At around 1:30 am the tide had risen enough to allow the LFEE's floating fire engine to arrive on the scene. Braidwood had called for the engine five hours previously, but the low tide had hampered its progress from its downriver mooring at Rotherhithe. Once it arrived it was effective in bringing under control the fire that had taken hold in the Speaker's House.
Braidwood regarded Westminster Hall as safe from destruction by 1:45 am, partly because of the actions of the floating fire engine, but also because a change in the direction of the wind kept the flames away from the Hall. Once the crowd realised that the hall was safe they began to disperse, and had left by around 3:00 am, by which time the fire near the Hall was nearly out, although it continued to burn towards the south of the complex. The firemen remained in place until about 5:00 am, when they had extinguished the last remaining flames and the police and soldiers had been replaced by new shifts.
The House of Lords, as well as its robing and committee rooms, were all destroyed, as was the Painted Chamber, and the connecting end of the Royal Gallery. The House of Commons, along with its library and committee rooms, the official residence of the Clerk of the House and the Speaker's House, were devastated. Other buildings, such as the Law Courts, were badly damaged. The buildings within the complex which emerged relatively unscathed included Westminster Hall, the cloisters and undercroft of St Stephen's, St Mary Undercroft Chapel, the Jewel Tower and Soane's new buildings to the south. The British standard measurements, the yard and pound, were both lost in the blaze; the measurements had been created in 1496. Also lost were most of the procedural records for the House of Commons, which dated back as far as the late 15th century. The original Acts of Parliament from 1497 survived, as did the Lords' Journals, all of which were stored in the Jewel Tower at the time of the fire. In the words of Shenton, the fire was "the most momentous blaze in London between the Great Fire of 1666 and the Blitz" of the Second World War. Despite the size and ferocity of the fire, there were no deaths, although there were nine casualties during the night's events that were serious enough to require hospitalisation.
The day after the fire the Office of Woods and Forests issued a report outlining the damage, stating that "the strictest enquiry is in progress as to the cause of this calamity, but there is not the slightest reason to suppose that it has arisen from any other than accidental causes." The Times reported on some of the possible causes of the fire, but indicated that it was likely that the burning of the Exchequer tallies was to blame. The same day the cabinet ministers who were in London met for an emergency cabinet meeting; they ordered a list of witnesses to be drawn up, and on 22 October a committee of the Privy Council sat to investigate the fire.
The committee, which met in private, heard numerous theories as to the causes of the fire, including the lax attitude of plumbers working in the Lords, carelessness of the servants at Howard's Coffee House—situated inside the palace—and a gas explosion. Other rumours began to circulate; the Prime Minister received an anonymous letter claiming that the fire was an arson attack. The committee issued its report on 8 November, which identified the burning of the tallies as the cause of the fire. The committee thought it unlikely that Cross and Furlong had been as careful in filling the furnaces as they had claimed, and the report stated that "it is unfortunate that Mr Weobley did not more effectively superintend the burning of the tallies".
King William IV offered Buckingham Palace as a replacement to parliament; the proposal was declined by MPs who considered the building "dingy". Parliament still needed somewhere to meet, and the Lesser Hall and Painted Chamber were re-roofed and furnished for the Commons and Lords respectively for the State Opening of Parliament on 23 February 1835. The opening included a statement from the King, read by Lord Brougham, the Lord Chancellor, that prorogued parliament until 25 November 1835.
Although the architect Robert Smirke was appointed in December 1834 to design a replacement palace, pressure from the former MP Lieutenant Colonel Sir Edward Cust to open the process up to a competition gained popularity in the press and led to the formation in 1835 of a Royal Commission. This body determined in which style the new construction should be built, and in June they decided that either Elizabethan or gothic styles should be used. The commission also decided that although competitors would not be required to follow the outline of the original palace, the surviving buildings of Westminster Hall, the Undercroft Chapel and the Cloisters of St Stephen's should all be incorporated into the new complex.
There were 97 entries to the competition, which closed in November 1835; each entry was to be identifiable only by a pseudonym or symbol. The commission presented their recommendation in February 1836; the winning entry, which brought a prize of £1,500, was number 64, identified by a portcullis—the symbol chosen by the architect Charles Barry. Uninspired by any English secular Elizabethan or Gothic buildings, Barry had visited Belgium to view examples of Flemish civic architecture prior to drafting his design; to complete the necessary pen and ink drawings, which are now lost, he employed Augustus Pugin, a 23-year-old architect who was, in the words of the architectural historian Nikolaus Pevsner, "the most fertile and passionate of the Gothicists". Thirty four of the competitors petitioned parliament against the selection of Barry, who was a friend of Cust, but their plea was rejected, and the former prime minister Sir Robert Peel defended Barry and the selection process.
New Palace of Westminster
Barry planned an enfilade, or what Christopher Jones, the former BBC political editor, has called "one long spine of Lords' and Commons' Chambers" which enabled the Speaker of the House of Commons to look through the line of the building to see the Queen's throne in the House of Lords. Laid out around 11 courtyards, the building included several residences with accommodation for about 200 people, and comprised a total of 1,180 rooms, 126 staircases and 2 miles (3.2 km) of corridors. Between 1836 and 1837 Pugin made more detailed drawings on which estimates were made for the palace's completion; reports of the cost estimates vary from £707,000 to £725,000, with six years until completion of the project.
In June 1838 Barry and colleagues undertook a tour of Britain to locate a supply of stone for the building, eventually choosing Magnesian Limestone from the Anston quarry of the Duke of Leeds. Work started on building the river frontage on 1 January 1839, and Barry's wife laid the foundation stone on 27 April 1840. The stone was badly quarried and handled, and with the polluted atmosphere in London it proved to be problematic, with the first signs of deterioration showing in 1849, and extensive renovations required periodically.
Although there was a setback in progress with a stonemasons' strike between September 1841 and May 1843, the House of Lords had its first sitting in the new chamber in 1847. In 1852 the Commons was finished, and both Houses sat in their new chambers for the first time; Queen Victoria first used the newly completed royal entrance. In the same year, while Barry was appointed a Knight Bachelor, Pugin suffered a mental breakdown and, following incarceration at Bethlehem Pauper Hospital for the Insane, died at the age of 40.
The clock tower was completed in 1858, and the Victoria Tower in 1860; Barry died in May that year, before the building work was completed. The final stages of the work were overseen by his son, Edward, who continued working on the building until 1870. The total cost of the building came to around £2.5 million.
In 1836 the Royal Commission on Public Records was formed to look into the loss of the parliamentary records, and make recommendations on the preservation of future archives. Their published recommendations in 1837 led to the Public Record Act (1838), which set up the Public Record Office, initially based in Chancery Lane.
The fire became the "single most depicted event in nineteenth-century London ... attracting to the scene a host of engravers, watercolourists and painters". Among them were J.M.W. Turner, the landscape painter, who later produced two pictures of the fire, and the Romantic painter John Constable, who sketched the fire from a hansom cab on Westminster Bridge.
The destruction of the standard measurements led to an overhaul of the British weights and measures system. An inquiry that ran from 1838 to 1841 considered the two competing systems used in the country, the avoirdupois and troy measures, and decided that avoirdupois would be used forthwith; troy weights were retained solely for gold, silver and precious stones. The destroyed weights and measures were recast by William Simms, the scientific instrument maker, who produced the replacements after "countless hours of tests and experiments to determine the best metal, the best shape of bar, and the corrections for temperature".
The Palace of Westminster has been a UNESCO World Heritage Site since 1987, and is classified as being of outstanding universal value. UNESCO describe the site as being "of great historic and symbolic significance", in part because it is "one of the most significant monuments of neo-Gothic architecture, as an outstanding, coherent and complete example of neo-Gothic style". The decision to use the Gothic design for the palace set the national style, even for secular buildings.
In 2015 the chairman of the House of Commons Commission, John Thurso, stated that the palace was in a "dire condition". The Speaker of the House of Commons, John Bercow, agreed and said that the building was in need of extensive repairs. He reported that parliament "suffers from flooding, contains a great deal of asbestos and has fire safety issues", which would cost £3 billion to fix.
Notes and references
- In 1707, following the Acts of Union which led to 45 Scottish MPs joining the House of Commons, Wren also widened the galleries in the chamber.
- Dickens later mocked the decision, commenting that "the sticks were housed in Westminster, and it would naturally occur to any intelligent person that nothing could be easier than to allow them to be carried away for fire-wood by the miserable people who lived in that neighbourhood. However, they never had been useful, and official routine required that they should never be, and so the order went out that they were to be privately and confidentially burnt."
- In July that year parliament had passed the Chimney Sweepers Act 1834, which stopped children under ten from working as sweeps, and made it a criminal offence to force anyone to enter a flue.
- Black Rod—officially the Gentleman Usher of the Black Rod—is the parliamentary officer responsible for maintaining the buildings, services and security of the Palace.
- A flashover fire is one that occurs in a confined space where the heat in that space rises quickly enough for the objects in the room to reach their combustion temperature at the same time. Those objects give off ignitable vapours and gases as they catch fire, and these simultaneously ignite and expand, creating a fireball. The temperatures reached are between 500–600 °C (about 900–1100 °F).
- Braidwood had transferred to London from the Edinburgh Fire Brigade the year before; the Edinburgh service was the first municipal fire service in Britain.
- The Painted Chamber was used by the House of Lords until 1847; it was demolished in 1851. The Lesser Hall was used as the chamber for the House of Commons until 1852.
- £1,500 in 1836 equates to approximately £123,000 in 2015, according to calculations based on Consumer Price Index measure of inflation.
- In addition to undertaking Barry's drawings, Pugin also undertook the draughting for another entrant, James Gillespie Graham, the Scottish architect.
- In September 1844 Barry invited Pugin to assist with the design of the fittings for the House of Lords. Pugin subsequently produced numerous designs for a range of items, including the stained glass, wallpaper, textiles and tiles.
- £707,000 in 1837 equates to approximately £56 million in 2015, while £725,000 equates to approximately £57,500,000, according to calculations based on Consumer Price Index measure of inflation.
- He was accompanied by Dr William Smith, the founder of the science of geology; Sir Walter de la Beche of the Ordnance Geological Survey; and Charles Harriott Smith, the master mason who had carved the pillars of the National Gallery.
- In December 1851 the House of Commons had been used for a trial period. Complaints from MPs over the acoustics forced Barry to lower the roof, which changed the character of his design so much that he refused to ever enter the chamber again.
- In 2012, to celebrate the Diamond Jubilee of Elizabeth II, the clock tower was renamed the Elizabeth Tower.
- £2.5 million in 1870 equates to approximately £209 million in 2015, according to calculations based on Consumer Price Index measure of inflation.
- The body, now based in Kew, has since been renamed as The National Archives.
- Cooper 1982, pp. 68–71.
- "Henry III and the Palace". UK Parliament. Archived from the original on 11 May 2015. Retrieved 14 April 2015.
- Jones 1983, p. 10.
- "A Brief Chronology of the House of Commons" (PDF). UK Parliament. August 2010. Archived from the original (PDF) on 11 May 2015. Retrieved 13 April 2015.
- Jones 1983, pp. 15–16.
- "Location of Parliaments in the later middle ages". UK Parliament. Archived from the original on 11 May 2015. Retrieved 14 April 2015.
- Shenton, Caroline. "The Fire of 1834 and the Old Palace of Westminster" (PDF). UK Parliament. Archived from the original (pdf) on 11 May 2015. Retrieved 8 April 2015.
- Walker 1974, pp. 98–99.
- "The Commons Chamber in the 17th and 18th centuries". UK Parliament. Archived from the original on 11 May 2015. Retrieved 21 April 2015.
- Shenton 2012, p. 16.
- Bryant 2014, p. 43.
- Shenton 2012, p. 18.
- Bradley & Pevsner 2003, p. 214.
- Flanders 2012, p. 330.
- "Destruction by fire, 1834". UK Parliament. Archived from the original on 11 May 2015. Retrieved 8 April 2015.
- Shenton 2012, p. 56.
- Goetzmann & Rouwenhorst 2005, p. 111.
- "Tally Sticks". UK Parliament. Archived from the original on 11 May 2015. Retrieved 13 April 2015.
- Shenton 2012, p. 51.
- Baxter 2014, pp. 197–98.
- Baxter 2014, p. 233.
- Baxter 2014, p. 355.
- Dickens 1937, p. 175.
- Shenton, Caroline (16 October 2013). "The day Parliament burned down". UK Parliament. Archived from the original on 11 May 2015. Retrieved 8 April 2015.
- Jones 1983, p. 63.
- Dickens 1937, pp. 175–76.
- Shenton 2012, p. 41.
- Shenton 2012, pp. 22–23.
- Shenton 2012, p. 50.
- Jones 1983, p. 73.
- Shenton 2012, p. 68.
- Shenton 2012, pp. 62–63.
- Shenton 2012, p. 63.
- Shenton 2012, pp. 64–66.
- "Black Rod". UK Parliament. Archived from the original on 15 June 2015. Retrieved 15 June 2015.
- Shenton 2012, p. 60.
- Withington 2003, p. 76.
- DiMaio & DiMaio 2001, pp. 386–87.
- "Awful destruction by fire of houses of parliament". The Manchester Guardian. Manchester. 18 October 1834.
- Shenton 2012, p. 75.
- Shenton 2012, p. 77.
- Shenton 2012, p. 81.
- Flanders 2012, p. 331.
- Shenton 2012, p. 104.
- "Destruction of Both Houses of Parliament by Fire". The Times. London. 17 October 1834. p. 3.
- Jones 1983, p. 66.
- Shenton 2012, p. 89.
- Blackstone 1957, pp. 118–19.
- Carlyle 1888, p. 227.
- Broughton 1911, p. 22.
- Jones 1983, p. 67.
- Shenton 2012, pp. 128–29, 195.
- Shenton 2012, pp. 197–98.
- Shenton 2012, p. 203.
- Shenton 2012, pp. 205–06.
- Shenton 2012, p. 217.
- "Destruction of Both Houses of Parliament by Fire". The Times. London. 18 October 1834. p. 5.
- Flanders 2012, p. 104.
- Gupta 2009, p. 37.
- Shenton 2012, p. 191.
- Jones 2012, p. 36.
- Shenton 2012, p. 4.
- Jones 1983, p. 71.
- Shenton 2012, pp. 243–44.
- Shenton 2012, pp. 237–38.
- "Destruction of the Houses of Parliament: Report of the Privy Council". The Observer. London. 16 November 1834. p. 2.
- "Place of Assembly for the Two Houses Provided by the King". The Observer. London. 19 October 1834. p. 2.
- Bryant 2014, p. 44.
- Bryant 2014, pp. 43–44.
- Shenton 2012, pp. 241–42.
- Rorabaugh 1973, pp. 160, 162–63, 165.
- "Rebuilding the Palace". UK Parliament. Archived from the original on 11 May 2015. Retrieved 22 April 2015.
- Walker 1974, p. 104.
- "London, Sunday, February 7". The Observer. London. 8 February 1836. p. 2.
- UK CPI inflation numbers based on data available from Gregory Clark (2016), "The Annual RPI and Average Earnings for Britain, 1209 to Present (New Series)" MeasuringWorth.
- Bradley & Pevsner 2003, p. 215.
- Port 2004.
- Wedgwood 2004.
- Jones 1983, p. 79.
- Jones 1983, p. 80.
- Quinault 1992, p. 84.
- Jones 1983, p. 97.
- Jones 1983, p. 101.
- Jones 1983, pp. 100–01.
- Bradley & Pevsner 2003, p. 218.
- "The stonework". UK Parliament. Archived from the original on 11 May 2015. Retrieved 26 April 2015.
- Jones 1983, pp. 101–02.
- Bradley & Pevsner 2003, p. 216.
- Jones 1983, p. 107.
- "Elizabeth Tower naming ceremony". UK Parliament. Archived from the original on 11 May 2015. Retrieved 26 April 2015.
- Jones 1983, p. 113.
- Jones 1983, p. 116.
- Quinault 1992, p. 91.
- Shenton 2012, pp. 256–57.
- Galinou & Hayes 1996, p. 179.
- Shenton 2012, pp. 113–14.
- Shenton 2012, p. 258.
- McConnell 2004.
- "Palace of Westminster and Westminster Abbey including Saint Margaret's Church". UNESCO. Archived from the original on 11 May 2015. Retrieved 20 April 2015.
- "Speaker John Bercow warns over Parliament repairs". BBC. 3 March 2015. Archived from the original on 11 May 2015. Retrieved 10 May 2015.
- Baxter, William T. (2014). Accounting Theory. London: Routledge. ISBN 978-1-134-63177-3.
- Blackstone, G.V. (1957). A History of the British Fire Service. London: Routledge & Kegan Paul. OCLC 1701338.
- Bradley, Simon; Pevsner, Nikolaus (2003). London 6: Westminster. New Haven, CT and London: Yale University Press. ISBN 978-0-300-09595-1.
- Broughton, Lord (1911). Recollections of a Long Life, Volume 5. London: J. Murray. OCLC 1343383.
- Bryant, Chris (2014). Parliament: The Biography (Volume II – Reform). London: Transworld. ISBN 978-0-85752-224-5.
- Carlyle, Thomas (1888). Letters. 1826–1836. London: Macmillan. OCLC 489764019.
- Cooper, James (1982). Gleanings in Europe: England. Albany, NY: SUNY Press. ISBN 978-0-7914-9975-7.
- Dickens, Charles (1937). The Speeches of Charles Dickens. London: Michael Joseph. OCLC 2454762.
- DiMaio, Dominick; DiMaio, Vincent (2001). Forensic Pathology (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1-4200-4241-2.
- Flanders, Judith (2012). The Victorian City: Everyday Life in Dickens' London. London: Atlantic Books. ISBN 978-0-85789-881-4.
- Galinou, Mireille; Hayes, John (1996). London in Paint: Oil Paintings in the Collection at the Museum of London. London: Museum of London. ISBN 978-0-904818-51-2.
- Goetzmann, William N.; Rouwenhorst, K. Geert (2005). The Origins of Value: The Financial Innovations that Created Modern Capital Markets. Oxford: Oxford University Press. ISBN 978-0-19-517571-4.
- Gupta, SV (2009). Units of Measurement: Past, Present and Future. International System of Units. Heidelberg: Springer Science & Business Media. ISBN 978-3-642-00738-5.
- Jones, Christopher (1983). The Great Palace. London: BBC Books. ISBN 978-0-563-20178-6.
- Jones, Clyve (2012). A Short History of Parliament: England, Great Britain, the United Kingdom, Ireland and Scotland. Woodbridge, Suffolk: Boydell Press. ISBN 978-1-84383-717-6.
- McConnell, Anita (2004). "Simms, William (1793–1860)". Oxford Dictionary of National Biography. Oxford University Press. doi:10.1093/ref:odnb/25568. Retrieved 27 April 2015. (subscription or UK public library membership required)
- Port, MH (2004). "Barry, Sir Charles (1795–1860)". Oxford Dictionary of National Biography. Oxford University Press. doi:10.1093/ref:odnb/1550. Retrieved 10 April 2015. (subscription or UK public library membership required)
- Quinault, Roland (1992). "Westminster and the Victorian Constitution". Transactions of the Royal Historical Society. Cambridge: Cambridge University Press. 2: 79–104. JSTOR 3679100.
- Rorabaugh, W. J. (December 1973). "Politics and the Architectural Competition for the Houses of Parliament, 1834–1837". Victorian Studies. 17 (2): 155–175. JSTOR 3826182.
- Shenton, Caroline (2012). The Day Parliament Burned Down. Oxford: Oxford University Press. ISBN 978-0-19-964670-8.
- Walker, R.J.B. (1974). "The Palace of Westminster After the Fire of 1834". The Volume of the Walpole Society (1972–1974). 44: 94–122. JSTOR 41829434.
- Wedgwood, Alexandra (2004). "Pugin, Augustus Welby Northmore (1812–1852)". Oxford Dictionary of National Biography. Oxford University Press. doi:10.1093/ref:odnb/22869. Retrieved 10 April 2015. (subscription or UK public library membership required)
- Withington, John (2003). The Disastrous History of London. Stroud, Glos: Sutton Publishing. ISBN 978-0-7509-3321-6.
A microclimate is a local set of atmospheric conditions that differ from those in
the surrounding areas, often with a slight difference but sometimes with a
substantial one. The term may refer to areas as small as a few square meters or
square feet (for example a garden bed or a cave) or as large as many square
kilometers or square miles. Because climate is statistical, which implies spatial
and temporal variation of the mean values of the describing parameters, within a
region there can occur and persist over time sets of statistically distinct
conditions, that is, microclimates. Microclimates can be found in most places.
What is climate?
Ans -Climate is the average weather in a place over many years. While the
weather can change in just a few hours, climate takes hundreds, thousands, even
millions of years to change
What is weather?
Ans - the state of the atmosphere at a particular place and time as regards heat,
cloudiness, dryness, sunshine, wind, rain, etc. Weather is the day-to-day
conditions of a particular place. The temperature, cloudiness, humidity, and
whether a storm is likely in the next few days. That’s weather! It is the mix of
events that happens each day in our atmosphere. Weather is not the same
everywhere. It may be hot and sunny in one part of the world, but freezing and
snowy in another.
The macro climate around a building cannot be affected by any
design changes; however, the building design can be developed with a
knowledge of the macro climate in which the building is located.
Seasonal accumulated temperature differences (degree days) are a measure
of the outside air temperature, though they do not account for available solar gain.
Typical wind speeds and direction
Annual totals of Global Horizontal Solar Radiation
The driving rain index (DRI) relates to the amount of moisture contained in
exposed surfaces and will affect thermal conductivity of external surfaces.
This meteorological data gives a general impression of the climate at the
site of a building, and the building design can be planned accordingly.
However, the building itself and the surrounding geography will affect the local microclimate.
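The degree-day figure mentioned above can be computed from daily temperature
records. The sketch below is a minimal illustration in Python; the daily mean
temperatures and the 15.5 °C base temperature are assumptions chosen for the
example, not values taken from this text.

```python
# Minimal sketch: heating degree days from daily mean outside temperatures.
# Assumptions: daily_means is illustrative data; 15.5 °C is a commonly used
# base temperature for heating calculations, chosen here only as an example.

def heating_degree_days(daily_means, base_temp=15.5):
    """Sum of (base_temp - mean) over all days where the mean is below base_temp."""
    return sum(max(0.0, base_temp - t) for t in daily_means)

if __name__ == "__main__":
    daily_means = [4.2, 6.8, 10.1, 15.9, 18.3, 12.4]  # °C, example week fragment
    print(f"Heating degree days: {heating_degree_days(daily_means):.1f}")
```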
Elements of climate
Temperature is how hot or cold the atmosphere is, how many degrees it is
above or below freezing. Temperature is a very important factor in
determining the weather because it influences or controls other elements
of the weather, such as precipitation, humidity, clouds and atmospheric pressure.
Humidity is the amount of water vapor in the atmosphere.
Precipitation is the product of a rapid condensation process; it may include
snow, hail, drizzle and rain.
Atmospheric pressure (or air pressure) is the weight of air resting on the
earth's surface. Pressure is shown on a weather map, often with lines called isobars.
Wind is the movement of air masses, especially on the Earth's surface.
What is water cycle?
The water cycle, or hydrologic cycle, is a continuous cycle where water
evaporates, travels into the air and becomes part of a cloud, falls down to
earth as precipitation, and then evaporates again. This repeats again and
again in a never-ending cycle.
Water keeps moving and changing from a solid to a liquid to a gas, over and over again.
Precipitation creates runoff that travels over the ground surface and helps
to fill lakes and rivers. It also percolates or moves downward through
openings in the soil to replenish aquifers under the ground.
Some places receive more precipitation than others do. These areas are
usually close to oceans or large bodies of water that allow more water to
evaporate and form clouds.
Other areas receive less precipitation. Often these areas are far from water
or near mountains. As clouds move up and over mountains, the water
vapor condenses to form precipitation and freezes. Snow falls on the peaks.
Carbon is present throughout the natural environment in a fixed amount.
It takes many forms and moves through the environment via the carbon
The carbon cycle is the circulation and transformation of carbon back and
forth between living things and the environment.
Carbon is an element, something that cannot be broken down into a
simpler substance. Other examples of elements are oxygen, nitrogen,
calcium, iron, and hydrogen.
Carbon compounds are present in living things like plants and animals and
in nonliving things like rocks and soil. Carbon compounds can exist as solids
(such as diamonds or coal), liquids (such as crude oil), or gases (such
as carbon dioxide). Carbon is often referred to as the "building block of life"
because living things are based on carbon and carbon compounds.
The amount of carbon on the earth and in Earth's atmosphere is fixed, but
that fixed amount of carbon is dynamic, always changing into different
carbon compounds and moving between living and nonliving things.
Carbon is released to the atmosphere from what are called "carbon
sources" and stored in plants, animals, rocks, and water in what are called
This process occurs in a number of steps. In the first step, through
photosynthesis (the process by which plants capture the sun's energy and
use it to grow), plants take carbon dioxide out of the atmosphere.
The carbon dioxide is converted into carbon compounds that make up the
body of the plant, which are stored in both the aboveground parts of the
plants (shoots and leaves), and the belowground parts (roots).
In the next step, animals eat the plants, breathe in the oxygen, and exhale
carbon dioxide. The carbon dioxide created by animals is then available for
plants to use in photosynthesis. Carbon stored in plants that are not eaten
by animals eventually decomposes after the plants die, and is either
released into the atmosphere or stored in the soil.
Large quantities of carbon can be released to the atmosphere through
geologic processes like volcanic eruptions and other natural changes that
destabilize carbon sinks. For example, increasing temperatures can cause
carbon dioxide to be released from the ocean.
Environmental quality is a set of properties and characteristics of the
environment, either generalized or local, as they impinge on human beings
and other organisms.
It is a measure of the condition of an environment relative to the
requirements of one or more species and/or to any human need or purpose.
Environmental quality is a general term which can refer to varied
characteristics that relate to the natural environment as well as the built
environment, such as air and water purity or pollution, noise and the
potential effects which such characteristics may have on physical and mental health.
The Council on Environmental Quality (CEQ) is a division of the Executive
Office of the President that coordinates federal environmental efforts in
the United States and works closely with agencies and other White House
offices on the development of environmental and energy policies and initiatives.
Deforestation: Facts, Causes & Effects
Deforestation is the permanent destruction of forests in order to make the
land available for other uses
An estimated 18 million acres (7.3 million hectares) of forest, which is
roughly the size of the country of Panama, are lost each year, according to
the United Nations' Food and Agriculture Organization (FAO).
Some other common reasons are:
To make more land available for housing and urbanization
To harvest timber to create commercial items such as paper, furniture and homes
To create ingredients that are highly prized consumer items, such as the oil
from palm trees
To create room for cattle ranching
Common methods of deforestation are burning trees and clear cutting.
These tactics leave the land completely barren and are controversial practices.
Deforestation and climate change
Deforestation is considered to be one of the contributing factors to global climate change.
The No. 1 problem caused by deforestation is the impact on the global carbon cycle.
Gas molecules that absorb thermal infrared radiation are called greenhouse gases.
If greenhouse gases are in large enough quantity, they can force climate
change, according to Daley. While oxygen (O2) is the second most abundant
gas in our atmosphere, it does not absorb thermal infrared radiation,
as greenhouse gases do. Carbon dioxide (CO2) is the most prevalent
greenhouse gas. In 2012, CO2 accounted for about 82 percent of all U.S.
greenhouse gas, according to the Environmental Protection Agency (EPA).
Trees can help, though. Some 300 billion tons of carbon, 40 times the annual
greenhouse gas emissions from fossil fuels, is stored in trees.
The deforestation of trees not only lessens the amount of carbon stored, it
also releases carbon dioxide into the air. This is because when trees die,
they release the stored carbon. According to the 2010 Global Forest
Resources Assessment, deforestation releases nearly a billion tons of
carbon into the atmosphere per year, though the numbers are not as high
as the ones recorded in the previous decade. Deforestation is the second
largest anthropogenic (human-caused) source of carbon dioxide to the
atmosphere, ranging between 6 percent and 17 percent.
Carbon isn't the only greenhouse gas that is affected by deforestation.
Water vapor is also considered a greenhouse gas. "The impact of
deforestation on the exchange of water vapor and carbon dioxide between
the atmosphere and the terrestrial land surface is the biggest concern with
regard to the climate system," said Daley. Changes in their atmospheric
concentration will have a direct effect on climate.
Deforestation has decreased global vapor flows from land by 4 percent,
according to a study published by the National Academy of Sciences. Even
this slight change in vapor flows can disrupt natural weather patterns and
change current climate models.
Other effects of deforestation
Forests are complex ecosystems that affect almost every species on the
planet. When they are degraded, it can set off a devastating chain of events
both locally and around the world.
Loss of species: Seventy percent of the world’s plants and animals live in
forests and are losing their habitats to deforestation, according to National
Geographic. Loss of habitat can lead to species extinction. It also has
negative consequences for medicinal research and local populations who
rely on the animals and plants in the forests for hunting and medicine.
Water cycle: Trees are important to the water cycle. They absorb rain fall
and produce water vapor that is released into the atmosphere. Trees also
lessen the pollution in water, according to the North Carolina State
University, by stopping polluted runoff. In the Amazon, more than half the
water in the ecosystem is held within the plants, according to the National Geographic Society.
Soil erosion: Tree roots anchor the soil. Without trees, the soil is free to
wash or blow away, which can lead to vegetation growth problems. The
WWF states that scientists estimate that a third of the world’s arable land
has been lost to deforestation since 1960. After a clear cutting, cash crops
like coffee, soy and palm oil are planted. Planting these types of trees can
cause further soil erosion because their roots cannot hold onto the soil.
Life quality: Soil erosion can also lead to silt entering the lakes, streams and
other water sources. This can decrease local water quality and contribute
to poor health in populations in the area.
Climate change is the rise in average surface temperatures on Earth, mostly
due to the burning of fossil fuels.
Climate change, also called global warming, refers to the rise in average
surface temperatures on Earth. An overwhelming scientific consensus
maintains that climate change is due primarily to the human use of fossil
fuels, which releases carbon dioxide and other greenhouse gases into the
air. The gases trap heat within the atmosphere, which can have a range of
effects on ecosystems, including rising sea levels, severe weather events,
and droughts that render landscapes more susceptible to wildfires.
The primary cause of climate change is the burning of fossil fuels, such as
oil and coal, which emits greenhouse gases into the atmosphere—primarily
carbon dioxide. Other human activities, such as agriculture and
deforestation, also contribute to the proliferation of greenhouse gases that
cause climate change.
While some quantities of these gases are a naturally occurring and critical
part of Earth’s temperature control system, the atmospheric concentration
of CO2 did not rise above 300 parts per million between the advent of
human civilization roughly 10,000 years ago and 1900. Today it is at about
400 ppm, a level not reached in more than 400,000 years.
Even small increases in Earth’s temperature caused by climate change can
have severe effects. The earth’s average temperature has gone up 1.4° F
over the past century and is expected to rise as much as 11.5° F over the
next century. That might not seem like a lot, but the average temperature during
the last Ice Age was about 4º F lower than it is today.
Rising sea levels due to the melting of the polar ice caps (again, caused by
climate change) contribute to greater storm damage; warming ocean
temperatures are associated with stronger and more frequent storms;
additional rainfall, particularly during severe weather events, leads to
flooding and other damage; an increase in the incidence and severity of
wildfires threatens habitats, homes, and lives; and heat waves contribute to
human deaths and other consequences.
Approximately 90 per cent of all ozone is produced naturally in the
stratosphere. While ozone can be found through the entire atmosphere,
the greatest concentration occurs at an altitude of about 25 km. This band
of ozone-rich air is known as the "ozone layer".
Ozone Depletion - Ozone depletion is the term commonly used to describe
the thinning of the ozone layer in the stratosphere. Ozone depletion occurs
when the natural balance between the production and destruction of
ozone in the stratosphere is tipped in favor of destruction.
Health & Environmental Effects
The ozone layer acts as a natural filter, absorbing most of the sun's burning
ultraviolet (UV) rays. Stratospheric ozone depletion leads to an increase
in UV-B radiation reaching the earth's surface, where it can disrupt biological
processes and damage a number of materials.
Ozone-depleting substances generally contain chlorine, fluorine, bromine,
carbon, and hydrogen in varying proportions and are often described by the
general term halocarbons. Chlorofluorocarbons, carbon tetrachloride, and
methyl chloroform are important human-produced ozone-depleting gases
that have been used in many applications. Another important group of
human-produced halocarbons is the halons, which contain carbon,
bromine, fluorine, and (in some cases) chlorine and have been mainly used
as fire extinguishers.
Several factors lead to the destruction of ozone in the ozone layer. Low
temperatures and increases in the levels of chlorine and bromine gases in the
upper stratosphere are among the causes of ozone layer depletion. The single
most important cause, however, is the production and emission of
chlorofluorocarbons (CFCs), which account for almost 80 percent of the total
ozone layer depletion.
There are many other substances that lead to ozone layer depletion such as
hydrochlorofluorocarbons (HCFCs) and volatile organic compounds (VOCs).
Such substances are found in vehicular emissions, by-products of industrial
processes, aerosols and refrigerants. All these ozone depleting substances
remain stable in the lower atmospheric region, but as they reach the
stratosphere, they are exposed to ultraviolet rays. This leads to their
breakdown and the release of free chlorine atoms, which react with ozone gas,
thus leading to the depletion of the ozone layer.
Effects of ozone layer depletion
Let us see a few possible effects of the ozone layer depletion on the earth’s
environment and also on the plants and animals. The depletion of the ozone
layer allows UV rays from the sun to enter the earth's atmosphere, which is
associated with a number of health-related and environmental issues. Its
major impacts on human beings include:
Skin Cancer / Eye Damage: UV rays are harmful for our eyes too
Damage to Immune system / Aging of skin
In humans, exposure to UV rays can also lead to difficulty in breathing,
chest pain, and throat irritation, and can even hamper lung function.
Ozone layer depletion leads to decrease in ozone in the stratosphere and
increase in ozone present in the lower atmosphere. Presence of ozone in
the lower atmosphere is considered as a pollutant and a greenhouse gas.
Ozone in the lower atmosphere contributes to global warming and climate
change. The depletion of ozone layer has trickle down effects in the form of
global warming, which in turn leads to melting of polar ice, which will lead
to rising sea levels and climatic changes around the world.
Ozone layer depletion is not something that affects any specific country or
region. The whole world is vulnerable to its after effects. That makes it
important for each and every one of us to take actions to reduce ozone
layer depletion. International agreements such as the Montreal Protocol in
1987 have helped in reducing and controlling industrial emissions of
ozone-depleting substances such as CFCs.
More such international agreements between countries are
necessary to bring down ozone layer depletion. At an individual level, each and
every one also can contribute towards reducing ozone layer depletion.
Buying and using recycled products, saving of energy, using of public
transport can do a lot in combating ozone layer depletion.
WHAT IS MICROCLIMATE?
The microclimate is a local atmospheric zone where
the climate differs from the surrounding area. The term may refer to areas as
small as a few square feet (for example a garden bed) or as large as many square
miles. Microclimates exist, for example, near bodies of water which may cool the
local atmosphere, or in heavily urban areas where brick, concrete, and asphalt
absorb the sun's energy, heat up, and reradiate that heat to the ambient air: the
resulting urban heat island is a kind of microclimate.
Another contributing factor to microclimate is the slope or aspect of
an area. South-facing slopes in the Northern Hemisphere and north-facing slopes
in the Southern Hemisphere are exposed to more direct sunlight than opposite
slopes and are therefore warmer for longer.
Tall buildings create their own microclimate, both by overshadowing
large areas and by channeling strong winds to ground level. Wind effects around
tall buildings are assessed as part of a microclimate study.
Microclimates can also refer to purpose made environments, such as
those in a room or other enclosure. Microclimates are commonly created and
carefully maintained in museum display and storage environments. This can be
done using passive methods.
IMPACT UPON HUMANS
Microclimate has a significant effect upon agricultural
production and upon human comfort. Agricultural yields and crop growing seasons
are affected by microclimate variations in temperature, rainfall, solar insolation,
humidity and wind velocity.
Some descriptions of microclimate affecting humans also include the
factors of noise pollution, light pollution and air pollution
• Natural environment means all living and non-living things that
are natural. The universe is natural, but often the term "natural
environment" only means nature on Earth. Two aspects are usually
• Ecological units which are natural systems without much human
interference. These include all
vegetation, microorganisms, soil, rocks, atmosphere, and natural events.
• Universal natural resources and physical phenomena which lack clear-cut
boundaries. These include climate, air, water, energy, radiation, electric
charge, and magnetism.
• In contrast to the natural environment is the built environment. There, man
has changed landscapes to make urban settings and agricultural land. A
simpler human environment largely replaces the complex natural environment.
• Natural environment is a composite. The natural environment includes a
great number of things—all the agents, forces, processes, and material
resources of the world of Nature. The list of all these is unbelievably long. In
fact, it is bewilderingly complex until one sorts and classifies its
components and puts them into some simple arrangement. When this is
done, the whole matter is easy to understand. For instance, the natural
environment of any part of the earth's surface can be classified into the
following sixteen elements.
1. Weather and climate / Landforms / Rocks and minerals / Soils
2. Natural vegetation / Native animal life / Micro-organic realm
3. Surface waters of the land / Underground waters / the ocean
4. The coast zone/ Geomatical position
5. Natural situation / Geographical location / Regional form or shape /
Areal space or size
• In urban planning, the idea that a large percentage of the human
environment is manmade, and these artificial surroundings are so extensive
and cohesive that they function as organisms in the consumption of
resources, disposal of wastes, and facilitation of productive enterprise
within its bounds. Recently there has also been considerable dialogue and
research into the built environment's impact on population health.
• Urban ecosystems are the cities, towns, and urban strips constructed by humans.
• The growth in the urban population and the supporting built
infrastructure has impacted on both urban environments and also on areas
which surround urban areas. These include semi or 'peri-urban'
environments that fringe cities as well as agricultural and natural landscapes.
• Scientists are now developing ways to measure and understand the effects
of urbanisation on human and environmental health.
• By considering urban areas as part of a broader ecological system, scientists
can investigate how urban landscapes function and how they affect other
landscapes with which they interact. In this context, urban environments
are affected by their surrounding environment but also impact on that
environment. Knowing this may provide clues as to which alternative
development options will lead to the best overall environmental outcome.
• Urban ecosystem research is focused on: Understanding how cities work as
ecological systems / Developing sustainable approaches to development of
city fringe areas that reduce negative impact on surrounding environments
• Developing approaches to urban design that provide for health and
opportunity for citizens
THE LIVING ENVIRONMENT
• People have long been curious about living things—how many different
species there are, what they are like, where they live, how they relate to
each other, and how they behave. Scientists seek to answer these questions
and many more about the organisms that inhabit the earth. In particular,
they try to develop the concepts, principles, and theories that enable people
to understand the living environment better.
• Living organisms are made of the same components as all other matter,
involve the same kind of transformations of energy, and move using the
same basic kinds of forces. Thus, all of the physical principles discussed in
Chapter 4, The Physical Setting, apply to life as well as to stars, raindrops,
and television sets. But living organisms also have characteristics that can
be understood best through the application of other principles.
• Solar radiation is radiant energy emitted by the sun, particularly
electromagnetic energy. About half of the radiation is in the visible short-
wave part of the electromagnetic spectrum. The other half is mostly in the
near-infrared part, with some in the ultraviolet part of the spectrum
What is heat flow?
• When you bring two objects of different temperature together, energy will
always be transferred from the hotter to the cooler object.
• The objects will exchange thermal energy, until thermal equilibrium is
reached, i.e. until their temperatures are equal. We say that heat flows
from the hotter to the cooler object. Heat is energy on the move.
Units of heat are units of energy; the SI unit of energy is the joule. Without an
external agent doing work, heat will always flow from a hotter to a cooler
object. Two objects of different temperature always interact. There are
three different ways for heat to flow from one object to another. They are
conduction, convection, and radiation.
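As a worked illustration of heat flowing until thermal equilibrium is reached,
the following Python sketch uses Q = m·c·ΔT to estimate the final shared
temperature of two objects brought into contact. The masses, specific heat
capacities and starting temperatures are made-up example values, and the
system is assumed to lose no heat to its surroundings.

```python
# Minimal sketch: two objects exchange heat until they reach the same temperature.
# Assumes an isolated system (no losses) and constant specific heat capacities.

def equilibrium_temperature(m1, c1, t1, m2, c2, t2):
    """Final temperature from energy balance: heat lost by one = heat gained by the other."""
    return (m1 * c1 * t1 + m2 * c2 * t2) / (m1 * c1 + m2 * c2)

if __name__ == "__main__":
    # Example values (illustrative only): 0.5 kg of water at 90 °C poured onto
    # a 2 kg aluminium block at 20 °C.
    t_eq = equilibrium_temperature(0.5, 4186, 90.0, 2.0, 900, 20.0)
    heat_moved = 0.5 * 4186 * (90.0 - t_eq)  # joules flowing from hot to cold
    print(f"Equilibrium temperature: {t_eq:.1f} °C, heat transferred: {heat_moved:.0f} J")
```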
What is Heat?
All matter is made up of molecules and atoms. These atoms are always in
different types of motion (translation, rotational, vibrational). The motion
of atoms and molecules creates heat or thermal energy. All matter has this
thermal energy. The more motion the atoms or molecules have the more
heat or thermal energy they will have.
How is heat transferred?
• Heat can travel from one place to another in three ways: Conduction,
Convection and Radiation. Both conduction and convection require matter
to transfer heat.
• If there is a temperature difference between two systems heat will always
find a way to transfer from the higher to lower system.
• Conduction is the transfer of heat between substances that are in direct
contact with each other. The better the conductor, the more rapidly heat
will be transferred. Metal is a good conductor of heat. Conduction occurs
when a substance is heated: particles gain more energy and vibrate
more. These molecules then bump into nearby particles and transfer some
of their energy to them. This then continues and passes the energy from
the hot end down to the colder end of the substance.
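Conduction through a flat layer can be estimated with Fourier's law,
Q = k·A·(T_hot - T_cold)/d. The Python sketch below applies it to a wall; the
conductivity, area, thickness and temperatures are illustrative assumptions
rather than values given in this text.

```python
# Minimal sketch: steady-state conduction heat flow through a flat wall (Fourier's law).
# All numbers are illustrative assumptions.

def conduction_heat_flow(k, area, t_hot, t_cold, thickness):
    """Heat flow rate in watts: Q = k * A * (T_hot - T_cold) / d."""
    return k * area * (t_hot - t_cold) / thickness

if __name__ == "__main__":
    q = conduction_heat_flow(k=0.7,          # W/(m*K), roughly brickwork
                             area=10.0,      # m^2 of wall
                             t_hot=20.0,     # indoor temperature, °C
                             t_cold=5.0,     # outdoor temperature, °C
                             thickness=0.22) # wall thickness, m
    print(f"Heat lost through the wall: {q:.0f} W")
```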
• Thermal energy is transferred from hot places to cold places by convection.
Convection occurs when warmer areas of a liquid or gas rise to cooler areas
in the liquid or gas. Cooler liquid or gas then takes the place of the warmer
areas which have risen higher. This results in a continuous circulation
pattern. Water boiling in a pan is a good example of these convection
currents. Another good example of convection is in the atmosphere. The
earth's surface is warmed by the sun, the warm air rises and cool air moves in to take its place.
• Radiation is a method of heat transfer that does not rely upon any contact
between the heat source and the heated object as is the case with
conduction and convection. Heat can be transmitted through empty space
by thermal radiation, often called infrared radiation. This is a type of
electromagnetic radiation. No mass is exchanged and no medium is
required in the process of radiation. Examples of radiation are the heat from
the sun, or heat released from the filament of a light bulb.
Movement of air
• Movement of air is caused by temperature or pressure differences and is
experienced as wind.
• Where there are differences of pressure between two places, a pressure
gradient exists, across which air moves: from the high-pressure region to
the low-pressure region.
• This movement of air however, does not follow the quickest straight-line
path. In fact, the air moving from high to low pressure follows a spiraling
route, outwards from high pressure and inwards towards low pressure.
• This is due to the rotation of the Earth beneath the moving air, which
causes an apparent deflection of the wind to the right in the Northern
Hemisphere, and to the left in the Southern Hemisphere.
• The deflection of air is caused by the Coriolis force. Consequently, air
blows anticlockwise around a low-pressure center (depression) and
clockwise around a high-pressure center (anticyclone) in the Northern
Hemisphere. This situation is reversed in the Southern Hemisphere.
• Wind caused by differences in temperature is known as convection or
advection. In the atmosphere, convection and advection transfer heat
energy from warmer regions to colder regions, either at the Earth surface
or higher up in the atmosphere.
• Small-scale air movement of this nature is observed during the formation
of sea and land breezes, due to temperature differences between seawater and land.
• At a much larger scale, temperature differences across the Earth generate
the development of the major wind belts. Such wind belts, to some degree,
define the climate zones of the world.
• Land use involves the management and modification of natural
environment or wilderness into built environment such as settlements and
semi-natural habitats such as arable fields, pastures, and managed woods.
It also has been defined as "the total of arrangements, activities, and inputs
that people undertake in a certain land cover type."
• There are many types of land use:
• Recreational - fun, non-essentials like parks. / Transport - roads, railways, airports.
• Agricultural - farmland. / Residential - housing. / Commercial - businesses
• Drainage is the natural or artificial removal of surface and sub-surface
water from an area. The internal drainage of most agricultural soils is good
enough to prevent severe waterlogging (anaerobic conditions that harm
root growth), but many soils need artificial drainage to improve production
or to manage water supplies.
• Drainage system may refer to: A drainage system (geomorphology), the
pattern formed by the streams, rivers, and lakes in a particular drainage basin.
• A drainage system (agriculture), an intervention to control waterlogging
aiming at soil improvement for agricultural production.
• A drainage system in urban and industrial areas, a facility to dispose of
liquid waste. See Sustainable urban drainage systems and Sewerage.
1. It is the hygienic means of promoting health through prevention of human
contact with the hazards of wastes as well as the treatment and proper
disposal of sewage or wastewater. Hazards can be either
physical, microbiological, biological or chemical agents of disease.
2. Wastes that can cause health problems include human and animal excreta,
solid wastes, domestic wastewater (sewage or greywater) industrial wastes
and agricultural wastes. Hygienic means of prevention can be by using
engineering solutions (e.g., sanitary sewers, sewage treatment, surface
runoff management, solid waste management, excreta management),
simple technologies (e.g., pit latrines, dry toilets, urine-diverting dry
toilets, septic tanks), or even simply by behavior changes in personal
hygiene practices, such as hand washing with soap.
3. Providing sanitation to people requires a systems approach, rather than
only focusing on the toilet or wastewater treatment plant itself. The
experience of the user, excreta and wastewater collection methods,
transportation or conveyance of waste, treatment, and reuse or disposal all
need to be thoroughly considered.
4. The main objective of a sanitation system is to protect and promote human
health by providing a clean environment and breaking the cycle of disease.
Sanitation is the hygienic means of promoting health through prevention of
human contact with the hazards of wastes. Hazards can be either physical,
microbiological, biological or chemical agents of disease. Wastes that can cause
health problems are human and animal feces, solid wastes, domestic wastewater
(sewage, sullage, and greywater), industrial wastes, and agricultural wastes.
Hygienic means of prevention can be by using engineering solutions (e.g.
sewerage and wastewater treatment), simple technologies (e.g. latrines, septic
tanks), or even by personal hygiene practices (e.g. simple hand washing with
soap).The term "sanitation" can be applied to a specific aspect, concept, location,
or strategy, such as: Basic sanitation - refers to the management of human feces
at the household level. This terminology is the indicator used to describe the
target of the Millennium Development Goal on sanitation.
Concept of Greenfield development
• Greenfield development is the creation of planned communities on
previously undeveloped land. This land may be rural, agricultural or unused
areas on the outskirts of urban areas.
• Unlike urban sprawls, where there is little or no proper suburban planning,
greenfield development is about efficient urban planning that aims to
provide practical, affordable and sustainable living spaces for growing populations.
• The planning takes future growth and development into account as well as
seeks to avoid the various infrastructure issues that plague existing urban
areas. Going for Greenfield development is actually far more convenient
than attempting to develop or modify existing urban areas.
• The process of revitalizing old or rundown neighborhoods, which is known
as brownfield remediation, can be expensive, slow, and fraught with
various social and political issues. Landlords, for instance, may not find
development in their interest or profitable.
• If it is a rough neighborhood with dysfunctional school systems, people may
not be willing to move into it even after redevelopment. Planning and
developing new communities in new areas, on the other hand, can be a
comparatively faster and easier process, with no previous issues to contend with.
What is a Brownfield development?
• 'Brownfield' land is an area of land or premises that has been previously
used, but has subsequently become vacant, derelict or contaminated.
• This term derived from its opposite, undeveloped or 'greenfield' land.
Brownfield sites typically require preparatory regenerative work before any
new development goes ahead, and can also be partly occupied.
• Environmental Problems
• Our environment is constantly changing. There is no denying that.
However, as our environment changes, so does the need to become
increasingly aware of the problems that surround it.
• With a massive influx of natural disasters, warming and cooling periods,
different types of weather patterns and much more, people need to be
aware of what types of environmental problems our planet is facing.
• Global warming has become an undisputed fact about our current
livelihoods; our planet is warming up and we are definitely part of the problem.
• However, this isn’t the only environmental problem that we should be concerned about.
15 Major Current Environmental Problems
• 1. Pollution: Pollution of air, water and soil require millions of years to
recoup. Industry and motor vehicle exhaust are the number one pollutants.
Heavy metals, nitrates and plastic are toxins responsible for pollution.
While water pollution is caused by oil spill, acid rain, urban runoff; air
pollution is caused by various gases and toxins released by industries and
factories and combustion of fossil fuels; soil pollution is majorly caused by
industrial waste that deprives soil from essential nutrients.
• 2. Global Warming: Global warming leads to rising temperatures of the
oceans and the earth’ surface causing melting of polar ice caps, rise in sea
levels and also unnatural patterns of precipitation such as flash floods,
excessive snow or desertification.
• 3. Overpopulation: The population of the planet is reaching unsustainable
levels as it faces shortage of resources like water, fuel and food.
• 4. Natural Resource Depletion: Natural resource depletion is another
crucial current environmental problem. Fossil fuel consumption results in
emission of Greenhouse gases, which is responsible for global warming and
climate change. Globally, people are taking efforts to shift to renewable
sources of energy like solar, wind, biogas and geothermal energy.
• 5. Waste Disposal: The over consumption of resources and creation of
plastics are creating a global crisis of waste disposal. Developed countries
are notorious for producing an excessive amount of waste or garbage and
dumping their waste in the oceans and in less developed countries.
• 6. Climate Change: Climate change is yet another environmental problem
that has surfaced in the last couple of decades. It results from global
warming, which is caused by the rise in atmospheric temperature from the
burning of fossil fuels and the release of harmful gases by industries.
• 7. Loss of Biodiversity: Human activity is leading to the extinction of species
and habitats and loss of biodiversity. Ecosystems, which took millions
of years to perfect, are in danger when any species population is
decimated. Balance of natural processes like pollination is crucial to the
survival of the eco-system and human activity threatens the same. Another
example is the destruction of coral reefs in the various oceans, which
support the rich marine life.
• 8. Deforestation: Our forests are natural sinks of carbon dioxide and
produce fresh oxygen, as well as helping to regulate temperature and
rainfall. At present forests cover 30% of the land, but every year an area of
tree cover roughly the size of Panama is lost to the growing population's
demand for more food, shelter and clothing.
Ecology is the science of the study of ecosystems. Ecological balance has been
defined by various online dictionaries as "a state of dynamic equilibrium within
a community of organisms in which genetic, species and ecosystem diversity
remain relatively stable, subject to gradual changes through natural succession."
and "A stable balance in the numbers of each species in an ecosystem."
• The most important point being that the natural balance in an ecosystem is
maintained. This balance may be disturbed due to the introduction of new
species, the sudden death of some species, natural hazards or man-made causes.
SUSTAINABLE SITE DEVELOPMENT
• The concept of sustainable development is related to environmentalism but
has evolved since its introduction in the 1980s. The most widely held
definition was published by the United Nation's World Commission on
Environment and Development in 1987. The General Assembly found
sustainable development to be that type of development that meets the
"needs of the present without compromising the ability of future
generations to meet their own needs."
Selecting and Developing the Site Wisely
Sustainable practices avoid the development of inappropriate sites and reduce the
environmental impact from the location of a building on a site. Development of
previously undeveloped sites consumes land that could have agricultural,
wetlands, and wildlife habitat value. Developing a site in an urban area with
existing infrastructure can protect greenfields and preserve habitat and natural resources.
Reducing Emissions Associated with Transportation
Vehicle emissions and the need for increased impervious areas for paved parking
lots are an environmental concern. Parking areas and roadways result in increased
storm water runoff and contribute to heat island effect. The use of alternative
forms of transportation can be promoted by providing bicycle racks and changing
rooms, preferred parking for carpooling and low-emitting and fuel-efficient
vehicles, and access to public transportation.
Planting Sustainable Landscapes
Sustainable landscape practices minimize the use of fertilizers, pesticides, and
irrigation. Using native and adaptive non-invasive plant species requires less
maintenance and uses little or no irrigation, fertilizers, or pesticides. Sustainable
landscaping practices reduce maintenance costs over the life of the project.
Protecting Surrounding Habitats
Development of building sites can encroach on agricultural land and adversely
affect wildlife habitat. Sustainable development promotes preserving and
restoring native vegetation and wildlife habitat.
Storm water Management
Impervious surfaces and reduced permeability within developed areas increase
storm water runoff that can contribute to off-site flooding and pollution. Effective
strategies exist to reduce and treat storm water runoff before it leaves the project
site and has an impact on sensitive water bodies.
Heat Island Effect Reduction
Dark, non-reflective surfaces in parking areas, hardscapes, and roofs absorb solar
radiation and radiate that heat to surrounding areas resulting in an increase in
ambient temperature. This increase in temperature can have an impact on habitat
as well as increase building energy costs for cooling. Installing reflective surfaces
and increasing the vegetation on the site can reduce or eliminate the heat island effect.
Light Pollution Prevention
Poorly designed site lighting can result in negative impacts due to light trespass
from the building and site. Light pollution reduction measures reduce night
glow and the impact from building interior and site lighting on nocturnal
environments, while still providing lighting for safety. Luminaires that do not
enhance safety, such as landscape lighting, should be avoided.
Floor area ratio (FAR)
• Floor area ratio (FAR) is the ratio of a building's total floor area (gross floor
area) to the size of the piece of land upon which it is built. The terms can
also refer to limits imposed on such a ratio.
• As a formula: Floor area ratio = (total covered area on all floors of all
buildings on a certain plot, gross floor area) / (area of the plot)
• The floor area ratio (FAR) can be used in zoning to limit the number of
people that a building can hold, instead of directly controlling a building's external bulk.
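The FAR formula above is simple to compute. The sketch below is only an illustration; the building and plot figures are made-up examples, not values from these notes.

```python
def floor_area_ratio(gross_floor_area_m2, plot_area_m2):
    """FAR = total covered (gross) floor area of all buildings / area of the plot."""
    if plot_area_m2 <= 0:
        raise ValueError("plot area must be positive")
    return gross_floor_area_m2 / plot_area_m2

# Hypothetical example: a 4-storey building, 500 m2 per floor, on a 1,000 m2 plot.
far = floor_area_ratio(gross_floor_area_m2=4 * 500, plot_area_m2=1000)
print(f"FAR = {far:.1f}")  # FAR = 2.0
```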
Alternative technologies include the following:
Fuels for automobiles (besides gasoline and diesel)
Alcohol (either ethanol or methanol)
• Anaerobic digestion is a series of biological processes in which
microorganisms break down biodegradable material in the absence of
oxygen. One of the end products is biogas, which is combusted to generate
electricity and heat, or can be processed into renewable natural gas and transportation fuels.
• Composting is nature's process of recycling decomposed organic materials
into a rich soil known as compost. Anything that was once living will
decompose. Basically, backyard composting is an acceleration of the same
process nature uses.
• Biodiesel refers to a vegetable oil- or animal fat-based diesel
fuel consisting of long-chain alkyl (methyl, ethyl, or propyl) esters. Biodiesel
is typically made by chemically reacting lipids (e.g., vegetable oil, soybean oil,
animal fat (tallow)) with an alcohol, producing fatty acid esters.
• Biodiesel is meant to be used in standard diesel engines and is thus distinct
from the vegetable and waste oils used to fuel converted diesel engines.
Biodiesel can be used alone, or blended with petrodiesel in any proportions.
Biodiesel blends can also be used as heating oil.
• Greywater is gently used water from your bathroom sinks, showers, tubs,
and washing machines. It is not water that has come into contact with
feces, either from the toilet or from washing diapers. Greywater may
contain traces of dirt, food, grease, hair, and certain household cleaning products.
• Alternative natural materials
• Alternative natural materials is a general term that describes natural
materials like rock or adobe that are not as commonly in use as materials
such as wood or iron. Alternative natural materials have many practical
uses in areas such as sustainable architecture and engineering. The main
purpose of using such materials is to minimize the negative effects that our
built environment can have on the planet while increasing the efficiency
and adaptability of the structures.
• In Asian countries, bamboo is being used for structures like bridges and
homes. Bamboo is surprisingly strong and rather flexible and grows
incredibly fast, making it a rather abundant material. Although it can be
difficult to join corners together, bamboo is immensely strong and makes
up for the hardships that can be encountered while building it.
• Rock is a great way to get away from traditional materials that are harmful
to the environment. Rocks have two great characteristics: good thermal
mass and thermal insulation. These characteristics make stone a great idea
because the temperature in the house stays rather constant thus requiring
less air conditioning and other cooling systems.
What is Rainwater harvesting?
• The term rainwater harvesting is being frequently used these days,
however, the concept of water harvesting is not new for India. Water
harvesting techniques had been evolved and developed centuries ago.
• Ground water resources get naturally recharged through percolation. But
due to indiscriminate development and rapid urbanization, the exposed soil
surface has been reduced drastically, with a resultant reduction in the percolation
of rainwater, thereby depleting the ground water resource. Rainwater
harvesting is the process of augmenting the natural filtration of rainwater
into the underground formation by some artificial methods. "Conscious
collection and storage of rainwater to cater to demands of water, for
drinking, domestic purpose & irrigation is termed as Rainwater Harvesting."
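The harvestable volume is often estimated with the standard catchment formula (catchment area x rainfall x runoff coefficient). This formula and the figures below are not taken from the notes above; they are a commonly used sizing rule of thumb, shown here only as a sketch.

```python
def harvestable_rainwater_litres(roof_area_m2, annual_rainfall_mm, runoff_coefficient=0.8):
    """Rough annual harvest: area (m2) x rainfall (mm) x runoff coefficient gives litres,
    since 1 mm of rain falling on 1 m2 equals 1 litre of water."""
    return roof_area_m2 * annual_rainfall_mm * runoff_coefficient

# Hypothetical example: a 100 m2 roof in a region with 800 mm annual rainfall.
print(f"{harvestable_rainwater_litres(100, 800):,.0f} litres/year")  # 64,000 litres/year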
Why Harvest Rainwater
• This is perhaps one of the most frequently asked questions: why should one
harvest rainwater? There are many reasons, but the following are some
of the important ones.
• To arrest ground water decline and augment ground water table / To
beneficiate water quality in aquifers
• To conserve surface water runoff during monsoon / To reduce soil erosion
• To inculcate a culture of water conservation
Onsite Sewage Systems
• Onsite sewage systems are effective at treating household sewage if
designed and installed properly in appropriate soil and maintained
regularly. In typical onsite sewage systems, the wastewater from toilets and
other drains flows from your house into a tank that separates the solids and
scum from the liquid. Bacteria help break down the solids into sludge. The
liquid flows out of the tank into a network of pipes buried in a disposal field
of gravel and soil. Holes in the pipes allow the wastewater to be released
into the disposal field. The soil, gravel and naturally occurring bacteria in
the soil filter and cleanse the wastewater. There are about 250,000 onsite
sewage systems in British Columbia, despite expansion of municipal sewage
collection and treatment facilities.
You may have a failing onsite sewage system if you notice one or more of the following:
• unusually green or spongy grass over the system;
• toilets, showers and sinks back up or take a long time to drain;
• sewage surfacing on your lawn or in a nearby ditch;
• Sewage odors around your yard, especially after rain.
• Sewage treatment is the process of removing contaminants
from wastewater, primarily from household sewage. It includes physical,
chemical, and biological processes to remove these contaminants and
produce environmentally safe treated wastewater (or treated effluent).
• Recycling and reuse
• Recycling involves the collection of used and discarded materials processing
these materials and making them into new products. It reduces the amount
of waste that is thrown into community dustbins, thereby making the
environment cleaner and the air fresher to breathe.
• Surveys carried out by Government and non-government agencies in the
country have all recognized the importance of recycling wastes. However,
the methodology for safe recycling of waste has not been standardized.
Studies have revealed that 7 %-15% of the waste is recycled. If recycling is
done in a proper manner, it will solve the problems of waste or garbage. At
the community level, a large number of NGOs and private sector
enterprises have taken an initiative in segregation and recycling of waste
(EXNORA International in Chennai recycles a large part of the waste that is
collected). It is being used for composting and for making pellets to be used in
gasifiers, etc. Plastics are sold to factories that reuse them.
• The steps involved in the process prior to recycling include
a) Collection of waste from doorsteps, commercial places, etc.
b) Collection of waste from community dumps.
c) Collection/picking up of waste from final disposal sites
• Most of the garbage generated in the household can be recycled and
reused. Organic kitchen waste such as leftover foodstuff, vegetable peels,
and spoilt or dried fruits and vegetables can be recycled by putting them in
the compost pits that have been dug in the garden. Old newspapers,
magazines and bottles can be sold to the kabadiwala, the man who buys
these items from homes.
• In your own homes you can contribute to waste reduction and the recycling
and reuse of certain items. To cover your books you can use old calendars;
old greeting cards can also be reused. Paper can also be made at home
through a very simple process and you can paint on them.
• Waste recycling has some significant advantages. It leads to less utilization
of raw materials, reduces the environmental impacts arising from waste
treatment and disposal, makes the surroundings cleaner and
healthier, saves landfill space, saves money, and reduces the amount of
energy required to manufacture new products.
• In fact recycling can prevent the creation of waste at the source.
Passive energy system design
After including every available conservation technique in a building design, the
next step in decreasing the energy and water demands of the site is passive
building design. A passive design uses several techniques, included in the actual
structural design and lot layout, to significantly reduce the amount of energy
needed to heat, cool and light a building, and also to reduce the runoff from the
site, thus decreasing pollution and increasing infiltration of precipitation. Passive
methods do not require any mechanical or electronic devices, so after the design
is implemented, minimal additional inputs are required. The costs of passive
designs are usually the same as or only slightly higher than conventional designs,
making the payback of these techniques relatively short. Many of the water-
conserving benefits of passive design via landscaping are listed in the
Environmentally-Friendly Urban Landscaping section.
Passive design is the control of ventilation and temperature without using any
products that consume energy or money (such as heaters, dehumidifiers or fires).
Good passive design includes:
House orientation – positioning the house to allow maximum sun in the
winter and coolness in the summer. This includes deciding which rooms you
want to be the sunniest.
Solar energy – using solar panels for water heating.
Use of shading elements – for example, wide eaves protect from the sun in
summer and provide increased weather protection in winter.
Placement and glazing of windows – the larger windows should face the
sun to capture the warmth, use glazing to stop heat escaping, and have
shading to limit summer overheating.
Ventilation – using window joinery that allows ventilation, such as security
catches allowing windows to remain partially open, or vents in the joinery.
Insulation – to reduce heat loss.
Thermal Mass – using heavy building materials to store solar energy and
limit overheating during the day, then releasing that energy during the night to keep the space warm.
Passive design is based on these simple principles:
using the sun's energy (solar gain) to heat the home (space heating & water heating),
using the sun to provide light in the home,
using very high levels of insulation to retain the heat (i.e. floors, walls, roof),
use a compact design to reduce the surface area to volume ratio,
airtightness - control air flow to reduce heat loss,
using the heat produced by people and appliances to heat the home,
use energy efficient appliances,
Using the lie of the land and planting to provide shelter.
In practice, a passive house will have most of the following features:
rectangular in plan, so the sun can shine deep into the house,
compact (low surface to volume ratio: not necessarily small) design to
reduce surface area,
positioned on the site so that one of the main facades is facing south,
south facing facade will have lots of glazing,
north facing facade will have very little glazing,
rooms that are used most (e.g. living room, kitchen) will be on the south side,
rooms that are used the least (e.g. utility room, toilets, storage) will be on the north side,
thermal mass (e.g. concrete floor) to absorb and store solar energy (heat),
very high levels of insulation to retain heat,
air tight structure to reduce heat loss through draughts,
controlled ventilation to provide good indoor air quality,
Solar collectors for water heating.
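A quick numerical illustration of the compact, low surface-to-volume-ratio idea mentioned in the lists above: for the same floor area, an elongated plan exposes more wall area per unit of enclosed volume, and so loses more heat. The dimensions below are invented for comparison only.

```python
def surface_to_volume(length, width, height):
    """Exposed surface area (walls + roof, ignoring the ground floor) divided by volume."""
    walls = 2 * (length + width) * height
    roof = length * width
    return (walls + roof) / (length * width * height)

# Hypothetical comparison: a compact plan vs. a long, thin plan of equal floor area.
print(round(surface_to_volume(10, 10, 6), 3))  # compact plan: lower ratio
print(round(surface_to_volume(25, 4, 6), 3))   # elongated plan: higher ratio, more heat-loss area
```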
1. The primary role of the building envelope is to separate different
environments, typically the interior from exterior, by managing the flow of
air, moisture, and heat between them. The envelope must also consider the
impact of architectural orientation and styles, as well as heating and
venting strategies, owner's expectations, and future requirements.
Successful envelope design harmonizes all of these needs, while looking for
synergies in design.
2. In terms of sustainable or green design the envelope must perform its
functions for the life of the building without excessive maintenance or
renewals. In addition, the materials should be locally extracted or
manufactured, resistant to degradation, recyclable/reusable, and balance
lifecycle cost and embodied energy. Together these characteristics define a sustainable building envelope.
There are several basic parameters for building orientation that are incorporated
in any passive solar design. The site where the building will be located must have
access to the sun, especially between 9 am and 3 pm, during the heating season,
and there should be no more than 20 percent blockage along the sun's path (City
of Austin's Green Building Program 2004). A long, thin building with one of the
longer sides facing south and most of the windows on the southern wall will allow
for maximum solar exposure during the winter months, providing both heat and
light. An open floor plan placing the rooms requiring the most light and heat
along the south face of the building optimizes passive system operation.
Garages, storage rooms, and other such spaces can act as thermal buffers when
located on the east and west side of a building (Consumer Energy Center 2004).
1. The building fabric is a critical component of any building, since it both
protects the building occupants and plays a major role in regulating the
indoor environment. Consisting of the building's roof, floor slabs, walls,
windows, and doors, the fabric controls the flow of energy between the
interior and exterior of the building.
2. For a new project, opportunities relating to the building fabric begin during
the predesign phase of the building. An optimal design of the building fabric
may provide significant reductions in heating and cooling loads-which in
turn can allow downsizing of mechanical equipment. When the right
strategies are integrated through good design, the extra cost for a high-
performance fabric may be paid for through savings achieved by installing
smaller HVAC equipment.
3. The building fabric must balance requirements for ventilation and daylight
while providing thermal and moisture protection appropriate to the
climatic conditions of the site. Fabric design is a major factor in determining
the amount of energy a building will use in its operation. Also, the overall
environmental life-cycle impacts and energy costs associated with the
production and transportation of different envelope materials vary greatly.
4. In keeping with the whole building approach, the entire design team must
integrate design of the fabric with other design elements including material
selection; daylighting and other passive solar design strategies; heating,
ventilating, and air-conditioning (HVAC) and electrical strategies; and
project performance goals. One of the most important factors affecting
fabric design is climate. Hot/dry, hot/humid, temperate, or cold climates
will suggest different design strategies. Specific designs and materials can
take advantage of or provide solutions for the given climate.
5. A second important factor in fabric design is what occurs inside the
building. If the activity and equipment inside the building generate a
significant amount of heat, the thermal loads may be primarily internal
(from people and equipment) rather than external (from the sun). This
affects the rate at which a building gains or loses heat. Building
configuration also has significant impacts upon the efficiency and
requirements of the building fabric. Careful study is required to arrive at a
building footprint and orientation that work with the building fabric to
maximize energy benefit.
Windows and Shading
1. The performance of solar passive cooling techniques such as solar shading,
insulation of building components and air exchange rate was evaluated. In
the study a decrease in the indoor temperature by about 2.5 °C to 4.5°C is
noticed for solar shading. Results modified with insulation and controlled
air exchange rate showed a further decrease of 4.4–6.8 °C in room temperature.
2. The analysis suggested that solar shading is quite useful in developing a
passive cooling system to maintain indoor room air temperature lower than in
a conventional building without shade. Although shading of the whole
building is beneficial, shading of the window is crucial. The total solar load
consists of three components: direct, diffuse and reflected radiation. To
prevent unwanted passive solar heating, it is useful to learn about the
different methods employed to shade a building, leading to natural cooling
and energy conservation.
3. A window must always be shaded from the direct solar component and
often so from the diffuse and reflected components. Decisions on where
and when to include shading can greatly affect the comfort level inside a
closed space. Shading from the effects of direct solar radiation can be
achieved in many ways:
• Shade provided by the effect of recesses in the external envelope of the building.
• Shade provided by static or moveable external blinds or louvres.
• Transient shading provided by the orientation of the building on one or more of
its external walls.
• Permanent or transient shading provided by the surrounding buildings, screens etc.
• Shading of roofs by rolling reflective canvas, earthen pots, vegetation etc.
The different criteria for shading of buildings for various climatic zones have been
given in the following Table 1.
High rise buildings
A high-rise is a tall building or structure. Normally, the function of the building is
added, for example high-rise apartment building or high-rise offices.
High-rise buildings became possible with the invention of the elevator (lift) and
cheaper, more abundant building materials. Buildings between 75 feet (23 m) and
491 feet (150 m) high are, by some standards, considered high-rises. Buildings
taller than 492 feet (150 m) are classified as skyscrapers. The average height of a
level is around 13 feet (4 m) high, thus a 79 foot (24 m) tall building would
comprise 6 floors.
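A trivial check of the floor-count arithmetic above (about 13 feet per level), assuming the same round-down convention used in the text; the 492 ft case is only added for comparison.

```python
import math

FEET_PER_LEVEL = 13  # average floor-to-floor height quoted above

def floors(building_height_ft):
    """Number of complete levels that fit within the given height."""
    return math.floor(building_height_ft / FEET_PER_LEVEL)

print(floors(79))   # 6 floors, as stated in the text
print(floors(492))  # about 37 floors at the skyscraper threshold
```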
The materials used for the structural system of high-rise buildings are reinforced
concrete and steel. Most American style skyscrapers have a steel frame, while
residential tower blocks are usually constructed out of concrete.
Utilities & Building Operations (Division)
Utilities Division areas of responsibility can be categorized as:
BUILDING MECHANICAL SYSTEM OPERATIONS & CONTROLS
ELECTRICAL AND ELEVATOR SERVICES
CENTRAL MECHANICAL SERVICES
Planned or unplanned interruptions to building electrical and/or mechanical
services occur due to breakdowns, renovation work and regular maintenance.
Where possible F & S will call the Department or Faculty Administration.
That Department or Faculty Administration, in turn, is responsible for
calling all affected users to ensure that they have input into the timing of the interruption.
A "Notice of Shutdown" is issued by the Utilities Division.
As much advanced notice as possible is given to the Department or Faculty
Administration and all others involved for posting and circulation.
Cooperation and communication are vital in minimizing the effects of service interruptions.
Restricted Access: Electrical Rooms, Mechanical Rooms and Service Tunnels are
secured areas and kept locked to prevent unauthorized entry.
What is a Tall Building?
Tall buildings are often regarded as being greater than 20 storeys. However, a tall
building is really defined with respect to the height of the surrounding buildings. If
the majority of the buildings in a city are 3 or 4 storeys, then a 12 storey building
would be considered tall. In locations such as New York or Hong Kong, a tall
building is 40 plus storeys high. This paper examines primarily tall buildings in the
UK, i.e. buildings of 20 storeys or more.
The tall buildings considered here are assumed to be residential, offices, retail or
hotel accommodation, with a requirement for building services, not industrial
processes or multi-storey car parks.
What is a Sustainable Tall Building?
A sustainable building is one in which the design team have struck a balance
between environmental, economic and social issues at all stages – design,
construction, operation and change of use/end of life.
A purist's definition of a sustainable tall building is one which emits no pollution
to air, land and water, and can be economically occupied throughout its design
life, whilst contributing positively to the local community.
So the challenge is to achieve sustainability and build high-rise buildings. There
are specific aspects where tall buildings are less sustainable than low rise, e.g. in
their requirement for energy for vertical transportation, but there are others
where they undoubtedly have advantages e.g. utility of land in densely populated
urban areas. So the advantages need to be capitalized on, and the disadvantages
minimized or mitigated.
Modular building construction
The Modular Building Institute (MBI 2006) defines modular construction as "a
method of construction that utilizes pre-engineered, factory-fabricated
structures in three-dimensional sections that are transported to be tied together
on a site". This definition, however, focuses solely on the production and form of
prefabricated parts. Modular construction involves much more.
• Modular construction involves modular parts assembled in the factory,
transported by road and installed on the building site to create a modular building.
• Modular parts have established grid dimensions.
• Parts just small enough to be transported by road are called modules.
• The modular buildings are assembled, transported and installed by specially trained crews.
• The modular parts are connected using convenient dry-point and like connections.
• The components of the modular parts and modules are kept in stock at the factory.
• The point at which an order can be broken down into its individual components
precedes the assembly of modular parts.
• Modular parts and modules are manufactured according to customer specifications.
• A modular building can be taken apart and then reused to create the same or
another type of building.
Modular building construction (Wikipedia)
The modules can be placed side-by-side, end-to-end, or stacked, allowing a wide
variety of configurations and styles in the building layout.
Modular buildings are often priced lower than their site-built counterparts.
Manufacturers cite the following reasons for the typically
lower cost/price of these dwellings:
Speed of construction/faster return on investment. Modular construction
allows for the building and the site work to be completed simultaneously,
reducing the overall completion schedule by as much as 50%.
Indoor construction. Assembly is independent of weather, which increases
work efficiency and avoids damaged building material.
Favorable pricing from suppliers. Large-scale manufacturers can effectively
bargain with suppliers for discounts on materials.
Ability to service remote locations. Particularly in countries in which
potential markets may be located far from industrial centers, such as
Australia, there can be much higher costs to build a site-built house in a
remote area or an area experiencing a construction boom such as mining
towns. Modular homes can be built in major towns and sold to regional areas.
Low waste. With the same plans being constantly built, the manufacturer
has records of exactly what quantity of materials is needed for a given job.
While waste from a site-built dwelling may typically fill several large
dumpsters, construction of a modular dwelling generates much less waste.
Environmentally friendly construction process. Modular construction
reduces waste and site disturbance compared to site-built structures.
Environmental benefits for used modular buildings. Modular buildings
contain 100% reusable components. This means you have the ability to take
the building down and relocate it. Should a company's needs change, the
modular room can be moved and they never lose their original investment.
Flexibility. Conventional buildings can be difficult to extend, however with
a modular building you can simply add sections, or even entire floors.
Healthier. Because modular homes are built in a factory, the materials are
stored indoors in a controlled environment, eliminating the risk of mold,
mildew, rust, and sun damage that can often lead to human respiratory
problems. Traditional site-built homes are always at risk from these threats.
Whilst there are many advantages to all forms of modular buildings, there can
be limitations also.
Volumetric: Transporting the completed modular building sections takes up
a lot of space. This is balanced by the speed of construction once they arrive on site.
Flexibility: Due to transport and sometimes manufacturing restrictions,
module size can be limited, affecting room sizes. Panelized forms and flat
pack versions can provide easier shipment, and most manufacturers have
flexibility in their processes to cope with the majority of size requirements.
Partially open-sided modules
Open-sided (corner-supported) modules
Modules supported by a primary structural frame
Non-load bearing modules
Mixed modules and planar floor cassettes
Special stair or lift modules.
4-SIDED MODULES
Modules are manufactured with four closed sides to create cellular type spaces designed
to transfer the combined vertical load of the modules above.
The height of buildings in fully modular construction is typically in the range of 6 to 10 storeys.
Modules are manufactured from a series of 2D panels, beginning with the floor cassette.
For buildings of 6 to 10 storeys height, a vertical bracing system is provided around
an access core, with horizontal bracing in the corridor floor between the modules.
OPEN SIDED (CORNER-SUPPORTED) MODULES
An open ended module is a variant of a 4 sided module in which a rigid end
frame is provided, usually consisting of welded or rigidly connected
Rectangular Hollow Sections (RHS).
Modules can be placed side by side to create larger open plan spaces, as
required in hospitals and schools, etc.
As open sided modules are only stable on their own for one or two storeys,
a steel external framework comprising walkways or balconies may also be
designed to provide stability.
MIXED MODULES AND FLOOR CASSETTES
In this hybrid or mixed form of construction, long modules may be stacked
to form a load-bearing serviced core.
Floor cassettes span between the modules and load-bearing walls.
This mixed modular and panel form of construction is limited to buildings of
4 to 6 storeys in height.
It is used in residential buildings, particularly of terraced form.
MODULES SUPPORTED BY A PRIMARY STRUCTURE
Modules supported by long spanning cellular beams create open plan
space at the lower levels.
The supporting columns are positioned at a multiple of the width of the
modules (normally 2 or 3 modules). The beams are designed to support the
combined loads from the modules above (normally a maximum of 4 to 6 storeys).
NON LOAD BEARING MODULES
Non load bearing modules are of similar form to fully modular units, but are
not designed to resist external loads, other than their own weight and the
forces during lifting.
They are used as toilet/bathroom units, plant rooms or other serviced units
and are supported directly on a floor or by a separate structure.
The walls and floor of these pods are relatively thin (typically <100mm).
The units are designed to be installed either as the construction proceeds
or slid into place on the completed floor.
A curtain wall is a building façade that does not carry any dead load from the
building other than its own dead load, and one that transfers the horizontal loads
(wind loads) that are incident upon it. These loads are transferred to the main
building structure through connections at floors or columns of the building. A
curtain wall is designed to resist air and water infiltration, wind forces acting on
the building, seismic forces (usually only those imposed by the inertia of the
curtain wall), and its own dead load forces.
Curtain walls are typically designed with extruded aluminum members, although
the first curtain walls were made of steel. The aluminum frame is typically infilled
with glass, which provides an architecturally pleasing building, as well as benefits
such as daylighting. However, parameters related to solar gain control such as
thermal comfort and visual comfort are more difficult to control when using
highly-glazed curtain walls. Other common infills include: stone veneer, metal
panels, louvers, and operable windows or vents.
Curtain walls differ from storefront systems in that they are designed to span
multiple floors, and take into consideration design requirements such as: thermal
expansion and contraction; building sway and movement; water diversion; and
thermal efficiency for cost-effective heating, cooling, and lighting in the building.
METAL CURTAIN WALLS
R.C.C CURTAIN WALLS
SPECIAL PURPOSE CURTAIN WALLS
METAL CURTAIN WALLS ARE BASICALLY DIVIDED INTO TWO CATEGORIES ON THE
BASIS OF TYPE OF ERECTION.
STICK & UNITIZED
STICK SYSTEM: Stick systems are shipped in pieces for field fabrication and/or
assembly. These systems can be furnished by the manufacturer as stock lengths
to be cut, machined, assembled, and sealed in the field, or as knocked-down parts
pre-machined in the factory, for field assembly and sealing only. All stick curtain
walls are field-glazed.
UNITIZED CURTAIN WALL
To provide systems that are factory-sealed and easily assembled, unitized curtain
wall systems have been developed.
Unitized curtain walls are factory-assembled and factory-glazed, then shipped to
the job site in units.
This accommodates thermal expansion and contraction, inter-story
differential movement, concrete creep, column foreshortening, and/or other building movements.
WINDOW WALL –
A type of metal curtain wall installed between floors or between floor and roof
and typically composed of vertical and horizontal framing members, containing
operable sash or ventilators, fixed lights or opaque panels, or any combination thereof.
MULLION AND PANEL – this is a type of curtain wall in which only vertical mullions
are installed, and pre-fabricated panel frames are installed between them.
R.C.C OR PRECAST CURTAIN WALLS-
Precast cladding or curtain walls are the most commonly used precast concrete
components for building envelopes. This type of precast concrete panel does not
transfer vertical loads but simply encloses the space.
Store fronts are non-load-bearing glazed systems that occur on the ground floor,
which typically include commercial aluminium entrances. They are installed
between floor slabs, or between a floor slab and building structure above.
Typically field-fabricated and glazed, storefronts employ exterior glazing stops at
one side only. Provision for anchorage is made at perimeter conditions.
Maintenance and repair
Curtain walls and perimeter sealants require maintenance to maximize service
life. Perimeter sealants, properly designed and installed, have a typical service life
of 10 to 15 years. Removal and replacement of perimeter sealants require
meticulous surface preparation and proper detailing.
Aluminum frames are generally painted or anodized. Factory-applied fluoropolymer
thermoset coatings have good resistance to environmental degradation
and require only periodic cleaning. Recoating with an air-dry fluoropolymer
coating is possible but requires special surface preparation and is not as durable
as the baked-on original coating.
Anodized aluminum frames cannot be "re-anodized" in place, but can be cleaned
and protected by proprietary clear coatings to improve their appearance and durability.
Exposed glazing seals and gaskets require inspection and maintenance to
minimize water penetration, and to limit exposure of frame seals and insulating
glass seals to wetting.
The production and use of building materials consumes large quantities of energy
and resources and generates waste. The choice of materials used in a building
therefore has important implications for the environment; wherever possible they
should be selected to minimize negative environment impacts and the
consumption of non-renewable resources.
A key concept when thinking about what materials to use is "life cycle
stewardship". This means that the consequences and impacts of using materials
must be considered from the point at which they are mined/harvested, through
processing and manufacture, to installation, use, reuse/recycling and disposal.
Key considerations regarding sustainable materials include:
Reused or recycled – where possible reuse materials or use recycled
materials instead of new ones as this cuts out the emissions and energy
consumption associated with producing new materials and reduces waste.
For example, where demolition is involved, identify opportunities for reuse
or recycling of demolition materials (e.g. use recycled aggregates in new construction).
low toxicity - use non-toxic materials that are free of harmful chemicals
such as CFCs
local sourcing – sourcing of materials locally may help to reduce the energy
use and environmental impacts associated with transportation
Responsible sourcing - independent certification schemes exist to confirm
that specific materials comply with responsible sourcing standards. For
example timber from well-managed forests is certified by the Forest
Stewardship Council (FSC).
maintenance/replacement and durability – using materials that are long
lasting and that are cheap and relatively easy to maintain, adapt and/or
replace will ensure that buildings are flexible and built to last
reusable or recyclable – select materials that can be easily dismantled and
reused or recycled at the end of their useful life
Retaining and re-using existing materials
Embodied energy can be minimized by
retaining and re-using existing building
structures and materials, particularly if
demolition of existing structures is required.
Therefore, consideration should be made to
re-use the existing materials within a new
development in either their existing state or in a revised/renewed state. For
example, crushed hard materials such as bricks and concrete may be re-used as
aggregate. But also when building new, future recyclability through easy
disassembly should be considered.
Consideration should be given to composite materials which are more difficult to
recycle than raw materials. For example, facade and roof structures that are easily
disassembled are more likely to be reused than those that would be damaged
when taken apart.
If none of these options are possible, then ensuring that most existing materials
are recycled and re-used off site should
be the next option.
Recycling involves processing used
materials into new products to prevent
waste of potentially useful materials,
reduce the consumption of fresh raw
materials, reduce energy usage, reduce
air pollution (from incineration) and water pollution (from landfilling) by reducing
the need for "conventional" waste disposal, and lower greenhouse gas emissions
as compared to virgin production. Recycling is a key component of modern waste
management. Recycling of a material would produce a fresh supply of the same material.
There are many building materials and appliances that can be re-used and
recycled including windows, doors, roofing tiles and dishwashers.
Building materials that can be recycled include:
bricks and tiles
Aggregates and concrete
Concrete aggregate collected from demolition sites is put through a crushing
machine, often along with asphalt, bricks, dirt, and rocks. Smaller pieces of
concrete are used as gravel for new construction projects. Crushed recycled
concrete can also be used as the dry aggregate for brand new concrete if it is free
of contaminants. This reduces the need for other rocks to be dug up, which in
turn saves trees and habitats
Steel crushed and baled for recycling
Iron and steel are the world's most recycled materials, and among the easiest
materials to reprocess, as they can be separated magnetically from the waste
stream. Recycling is via a steelworks: scrap is either remelted in an electric arc
furnace (90-100% scrap), or used as part of the charge in a Basic Oxygen Furnace
(around 25% scrap). Any grade of steel can be recycled to top quality new metal,
with no 'downgrading' from prime to lower quality materials as steel is recycled
repeatedly. 42% of crude steel produced is recycled material.
Aluminum is one of the most efficient and widely-recycled materials. Aluminum is
shredded and ground into small pieces or crushed into bales. These pieces or
bales are melted in an aluminum smelter to produce molten aluminum. This
process does not produce any change in the metal, so aluminum can be recycled
Recycling aluminum saves 95% of the energy cost of processing new aluminum.
This is because the temperature necessary for melting recycled, nearly pure,
aluminum is 600 °C, while to extract mined aluminum from its ore requires
900 °C. Americans throw away enough aluminum every year to rebuild their entire
commercial air fleet. Also, the energy saved by recycling one aluminum can is
enough to run a television for three hours.
A stack of wooden pallets awaits reuse or recycling.
Recycling timber has become popular due to its image as an environmentally
friendly product, with consumers commonly believing that by purchasing recycled
wood the demand for green timber will fall and ultimately benefit the
environment. Greenpeace also view recycled timber as an environmentally
friendly product, citing it as the most preferable timber source on their website.
The arrival of recycled timber as a construction product has been important in
both raising industry and consumer awareness towards deforestation and
promoting timber mills to adopt more environmentally friendly practices.
Alternative calcareous materials
Limestone's composition makes it a durable material that is easy to work with and
a favorite in the construction world. Limestone has been used for centuries as a
building material and can be found in buildings around the world. Limestone is an
extremely diverse material and, depending on its makeup, has varying levels of
strength and a variety of colors to choose from. Today, limestone continues to be
an important aspect of home construction and design.
* It's cheap and plentiful, and it's not too difficult to transport.
* Getting it out of the ground isn't difficult.
* No special, rare or dangerous chemicals are needed to make it into usable products.
* The products it can be made into are very numerous.
* It forms very strong bonds.
* To make CaCO3 usable you have to convert it into quicklime, or CaO (calcium
oxide). This requires a lot of heating, and it releases CO2, as shown below (a worked CO2 estimate follows this list):
CaCO3 + heat → CaO + CO2
* If you use pure CaCO3 it tends to be dissolved by acids, so your wonderful statue
may become a little crumbly.
* Once you're done with it, it's very difficult to get rid of - it won't rot down.
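As a rough illustration of the CO2 released by the calcination reaction above, here is a stoichiometric estimate per tonne of pure limestone. The molar masses are standard reference values, not figures from these notes, and real kilns emit additional CO2 from the fuel used for heating.

```python
# Approximate molar masses (g/mol) - standard reference values.
M_CACO3 = 100.09
M_CO2 = 44.01

def co2_per_tonne_limestone(tonnes_caco3=1.0):
    """Tonnes of CO2 released when pure CaCO3 is fully calcined to CaO (process emissions only)."""
    return tonnes_caco3 * (M_CO2 / M_CACO3)

print(f"{co2_per_tonne_limestone():.2f} t CO2 per t CaCO3")  # about 0.44 t
```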
Chalk, soft, fine-grained, easily pulverized, white-to-grayish variety of limestone.
The purest varieties contain up to 99 percent calcium carbonate in the form of the mineral calcite.
Like any other high-purity limestone, chalk is used for making lime and Portland
cement and as a fertilizer. Finely ground and purified chalk is known as whiting
and is used as a filler, extender, or pigment in a wide variety of materials,
including ceramics, putty, cosmetics, crayons, plastics, rubber, paper, paints, and
linoleum. The chief use for chalk whiting, however, is in making putty, for which
its plasticity, oil absorption, and aging qualities are well suited.
Marl, old term used to refer to an earthy mixture of fine-grained minerals. The
term was applied to a great variety of sediments and rocks with a considerable
range of composition. Calcareous marls grade into clays, by diminution in the
amount of lime, and into clayey limestones. Greensand marls contain the green,
potash-rich mica mineral glauconite; widely distributed along the Atlantic coast in
the United States and Europe, they are used as water softeners.
Slag, by-product formed in smelting, welding, and other metallurgical and
combustion processes from impurities in the metals or ores being treated. Slag
consists mostly of mixed oxides of elements such as silicon, sulfur, phosphorus,
and aluminum; ash; and products formed in their reactions with furnace linings
and fluxing substances such as limestone. Slag floats on the surface of the molten
metal, protecting it from oxidation by the atmosphere and keeping it clean. Slag
forms a coarse aggregate used in certain concretes; it is used as a road material
and ballast and as a source of available phosphate fertilizer.
Metallic and Non- Metallic materials
Ferrous metals
These are metals and alloys containing a high proportion of the element iron.
They are the strongest materials available and are used for applications
where high strength is required at relatively low cost and where weight is
not of primary importance.
Examples of applications for ferrous metals include bridge building, the structure of
large buildings, railway lines, locomotives and rolling stock, and the bodies
and highly stressed engine parts of road vehicles.
The ferrous metals themselves can also be classified into "families", and
these are shown in figure 4.
Steel – Corrosion is the most common and expensive form of material
degradation for construction steels, including concrete reinforcement. Steel
corrosion (rusting, or oxidation) is an electrochemical reaction that occurs when
iron atoms lose electrons in the presence of oxygen and water. The most
effective and common procedure for preventing or slowing corrosion is
to prevent contact with water, either by coatings or by protecting it within a
viable building envelope.
Cast iron is iron or a ferrous alloy which has been heated until it liquefies, and is
then poured into a mould to solidify. It is usually made from pig iron. The alloy
constituents affect its color when fractured: white cast iron has carbide impurities
which allow cracks to pass straight through. Grey cast iron has graphitic flakes
which deflect a passing crack and initiate countless new cracks as the material breaks.
Cast iron columns enabled architects to build tall buildings without the
enormously thick walls required to construct masonry buildings of any height.
Such flexibility allowed tall buildings to have large windows
Non – ferrous metals
These materials refer to the remaining metals known to mankind.
The pure metals are rarely used as structural materials as they lack mechanical strength.
They are used where their special properties such as corrosion resistance,
electrical conductivity and thermal conductivity are required. Copper and
aluminum are used as electrical conductors and, together with sheet zinc
and sheet lead, are used as roofing materials.
They are mainly used with other metals to improve their strength.
Some widely used non-ferrous
metals and alloys are classified as shown
in figure 5.
1. Aluminum is one of the most
efficient and widely-recycled materials.
Aluminum is shredded and ground into
small pieces or crushed into bales. These
pieces or bales are melted in an
aluminum smelter to produce molten aluminum.
2. By this stage the recycled aluminum is indistinguishable from virgin
aluminum and further processing is identical for both. This process does not
produce any change in the metal, so aluminum can be recycled indefinitely.
3. Recycling aluminum saves 95% of the energy cost of processing new
aluminum. This is because the temperature necessary for melting recycled,
nearly pure, aluminum is 600 °C, while to extract mined aluminum from its
ore requires 900 °C.
4. To reach this higher temperature, much more energy is needed, leading to
the high environmental benefits of aluminum recycling. Americans throw
away enough aluminum every year to rebuild their entire commercial air
fleet. Also, the energy saved by recycling one aluminum can is enough to
run a television for three hours.
Non – metallic materials
Non – metallic (synthetic materials)
These are non – metallic materials that do not exist in nature, although they are
manufactured from natural substances such as oil, coal and clay. Some typical
examples are classified as shown in figure 6.
They combine good corrosion resistance with ease of manufacture by
moulding to shape and relatively low cost.
Synthetic adhesives are also being used for the joining of metallic
components even in highly stressed applications.
1. The term "plastics" covers a range of synthetic or semi-synthetic organic
polymers that can be molded or extruded into objects, films, or fibers.
2. Their name is derived
from the fact that in their semi-liquid state they are malleable, or have the
property of plasticity. Plastics vary immensely in heat tolerance, hardness,
and resiliency. Combined with this adaptability, the general uniformity of
composition and lightness of plastics ensures their use in almost all
industrial applications today.
1. These are produced by baking naturally occurring clays at high
temperatures after moulding to shape. They are used for high – voltage
insulators and high – temperature – resistant cutting tool tips.
2. It is a useful and necessary term because, especially when initially found in
archaeological excavation, it may be difficult to distinguish, for example,
fragments of bricks from fragments of roofing or flooring tiles. However,
ceramic building materials are usually readily distinguishable from
fragments of ceramic pottery by their rougher finish.
Non – metallic (Natural materials)
Such materials are so diverse that only a few can be listed here to give a basic
introduction to some typical applications.
1. Wood has been used as a building material for thousands of years in its
natural state. Today, engineered wood is becoming very common in industrialized countries.
2. Wood is a product of trees, and sometimes other fibrous plants, used for
construction purposes when cut or pressed into lumber and timber, such as
boards, planks and similar materials. It is a generic building material and is
used in building just about any type of structure in most climates.
3. Wood can be very flexible under loads, keeping strength while bending, and
is incredibly strong when compressed vertically. There are many differing
qualities to the different types of wood, even among same tree species.
This means specific species are better suited for various uses than others.
And growing conditions are important for deciding quality.
4. "Timber" is the term used for construction purposes in most of the world, while the
term "lumber" is used in the United States. Raw wood (a log, trunk, bole)
becomes timber when the wood has been "converted" (sawn, hewn, split).
Wood may be used in the forms of minimally-processed logs stacked on top of each other,
timber frame construction, and light-frame construction.
5. The main problems with timber structures are fire risk and moisture-related
problems. In modern times softwood is used as a lower-value bulk material,
whereas hardwood is usually used for finishings and furniture.
6. Historically timber frame structures were built with oak in Western Europe,
recently Douglas fir has become the most popular wood for most types of structural building.
7. Many families or communities, in rural areas, have a personal woodlot from
which the family or community will grow and harvest trees to build with or
sell. These lots are tended to like a garden.
8. This was much more prevalent in pre-industrial times, when laws existed as
to the amount of wood one could cut at any one time to ensure there
would be a supply of timber for the future, but it is still a viable practice today.
1. Glassmaking is considered an art form as well as an industrial process or craft.
2. Clear windows have been used since the invention of glass to cover small
openings in a building. Glass panes provided humans with the ability to
both let light into rooms while at the same time keeping inclement weather
3. Glass is generally made from mixtures of sand and silicates, in a very hot
fire stove called a kiln, and is very brittle. Additives are often included in the
mixture used to produce glass with shades of colors or various
characteristics (such as bulletproof glass or light emittance).
4. The use of glass in architectural buildings has become very popular in the
modern culture. Glass "curtain walls" can be used to cover the entire
facade of a building, or it can be used to span over a wide roof structure in
a "space frame". These uses though require some sort of frame to hold
sections of glass together, as glass by itself is too brittle and would require
an overly large kiln to be used to span such large areas by itself.
Active energy system
Active green energy systems refer to the use of technology applications
(electrical or mechanical) for utilizing or generating power.
These systems incorporate the use of renewable energy technologies, such as
solar photovoltaic panels, solar thermal collectors, wind turbines, biofuel
systems, etc., to gather renewable energy to offset conventional energy.
Active solar systems are employed to convert solar energy into another
more useful form of energy.
This would normally be a conversion to heat or electrical energy.
Active solar uses electrical or mechanical equipment for this conversion.
Around 70% of solar radiation is absorbed by clouds, oceans and land masses.
They are environmentally friendly.
They help save the earth's energy resources.
They are the best choice for people who have allergies.
They use little energy while saving money.
You can save between 50%–80% on your current heating bill.
Going solar is an excellent start for reducing energy use as well as saving the environment.
Active solar systems use mechanical/electrical equipment to deliver heat energy
(for cooking and water heating purposes) and electrical energy (solar panels, electric appliances).
ENERGY FROM SUN
• The Earth receives 174 petawatts (PW; 1 petawatt = 10^15 watts) of
incoming solar radiation at the upper atmosphere.
• Approximately 30% is reflected back to space while the rest is absorbed
by clouds, oceans and land masses. The spectrum of solar light at the
Earth's surface is mostly spread across the visible and near-
infrared ranges with a small part in the near-ultraviolet.
• Photons:-Light from the sun consists of photons.
• Solar panels:-Photons are absorbed by solar panels and the photoelectric
effect causes the flow of free electrons - electricity.
• Amp meter:-The amp meter measures the amount of instantaneous solar
current output. The current decreases as cloud cover sets in.
• Grid interactive inverter:-The inverter is the device where Direct Current
(DC) from the solar panels is transformed into 240 volt Alternating
Current (AC) at 50 Hertz, suitable for running household appliances.
• Kilo Watt Hour Meter:-The kilo Watt hour (kWh) meter is a cumulative
measurement of solar electricity. It is the total amount of electricity
produced by the solar panels.
• Electricity grid:-When the solar panels generate more electricity than the
electricity load, excess power is exported to the electricity grid.
• Main switchboard:-The kWh meter is located within the main
switchboard, which is the common link to the whole grid interactive
system. When it's night, or a cloudy day, and the solar panels aren't
generating any electricity, the electricity grid supplies electricity to the
switchboard. Additionally, it also provides electricity to the switchboard
when consumption is greater than the amount of electricity the solar
panels are providing.
• Electricity load:-the electricity that is used in your home, supplied via the switchboard.
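A minimal sketch of the import/export logic described above: when the panels generate more than the household load, the surplus is exported to the grid; otherwise the grid makes up the shortfall. The hourly figures are invented for illustration only.

```python
def grid_balance(generation_kwh, load_kwh):
    """Return (exported_kwh, imported_kwh) over matching hourly series."""
    exported = imported = 0.0
    for gen, load in zip(generation_kwh, load_kwh):
        if gen >= load:
            exported += gen - load   # surplus goes out to the grid
        else:
            imported += load - gen   # shortfall is drawn from the grid
    return exported, imported

# Hypothetical day: solar output peaks at midday, household load peaks in the evening.
gen = [0.0, 0.0, 1.2, 2.5, 3.0, 2.4, 1.0, 0.0]
use = [0.5, 0.4, 0.6, 0.8, 0.9, 1.0, 2.0, 1.5]
exported, imported = grid_balance(gen, use)
print(f"Exported {exported:.1f} kWh, imported {imported:.1f} kWh")
```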
Solar panels are devices that convert light into electricity.
They are also called photovoltaic panels, which means, basically, "light-electricity".
To get the most from solar panels, you need to point them in the direction
that captures the most sun.
Solar panels should always face true south if you are in the northern
hemisphere, or true north if you are in the southern hemisphere.
The angle at which a solar panel is installed changes depending on the
latitude of the location where you live.
The closer to the equator you live, the flatter your roof should be as the sun
is more directly overhead.
The closer to the poles you live, the steeper your roof should be, as the sun
shines at more of an angle rather than directly overhead.
Solar panels should always face true south in the Northern Hemisphere, or true
north in the Southern Hemisphere, tilted from the horizontal at an angle equal to your
latitude plus 15 degrees in winter, or minus 15 degrees in summer.
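A small sketch of the tilt rule of thumb just stated (latitude plus 15 degrees in winter, minus 15 degrees in summer). The example latitude is made up, and a real installation would use site-specific design tools rather than this rule alone.

```python
def panel_tilt_degrees(latitude_deg, season):
    """Rule-of-thumb tilt from horizontal: latitude + 15 in winter, latitude - 15 in summer."""
    if season == "winter":
        tilt = latitude_deg + 15
    elif season == "summer":
        tilt = latitude_deg - 15
    else:
        raise ValueError("season must be 'winter' or 'summer'")
    return max(0.0, min(90.0, tilt))  # keep the result within a physically sensible range

# Hypothetical site at 28 degrees latitude (panels facing true south in the Northern Hemisphere).
print(panel_tilt_degrees(28, "winter"))  # 43
print(panel_tilt_degrees(28, "summer"))  # 13
```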
LIQUID BASED SOLAR HEATING
• Cold water from the bottom of the tank is pumped to the solar collector.
• After passing through the collector, the hot water returns to the tank.
• Because hot water rises, the water coming from the collector stays at the
top of the tank. Hot water for the home is drawn from the top of the tank
• Tank material will be dependent on your water quality and whether you are
connected to the mains water supply.
• 1) vitreous enamel or mild steel;
• 2) stainless steel: less susceptible to corrosion and requires less maintenance.
• Roof-mounted tanks are placed horizontally above the collectors.
• A split system, with the tank at ground level, needs a pump to circulate the
solar transfer fluid.
Wind is a form of solar energy.
Winds are caused by the uneven heating of the atmosphere by the sun, the
irregularities of the earth's surface, and the rotation of the earth.
This wind flow, or motion energy, when "harvested" by modern wind
turbines, can be used to generate electricity.
How Wind Power Is Generated
• Wind turbines convert the kinetic energy in the wind into mechanical power.
• This mechanical power can be used for specific tasks (such as grinding grain
or pumping water) or a generator can convert this mechanical power into
electricity to power homes, businesses, schools, etc.
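The notes do not give the power equation, but the standard textbook relation for the power a turbine can extract from the wind is P = 0.5 x rho x A x v^3 x Cp. The sketch below uses that assumption; the rotor size, wind speed, air density and power coefficient are illustrative values, not data from these notes.

```python
import math

def wind_power_watts(rotor_diameter_m, wind_speed_ms,
                     air_density=1.225, power_coefficient=0.4):
    """P = 0.5 * rho * A * v^3 * Cp (standard relation; rho and Cp are assumed values)."""
    swept_area = math.pi * (rotor_diameter_m / 2.0) ** 2
    return 0.5 * air_density * swept_area * wind_speed_ms ** 3 * power_coefficient

# Hypothetical small turbine: 3 m rotor in a 5 m/s wind (above the 3 m/s threshold noted above).
print(f"{wind_power_watts(3.0, 5.0):.0f} W")  # roughly 216 W
```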
The multiple horizontal wind axis turbines alone generate half of the
necessary power for a typical small commercial building.
They are ideal for most areas of the world, where the prevailing wind speed
is above 3 m/s. Working in unison with this system is a photovoltaic array.
The combination of these two systems is a key element in any green design
– the building will rely more on the wind turbines in the winter months,
while the photovoltaic array will take on more of a role during the summer months.
A wind tower is a traditional Persian architectural element to create natural
ventilation in buildings.
Wind towers tend to have one, four, or eight openings. The construction of a wind
tower depends on the direction of airflow at that specific location: if the wind
tends to blow from only one side, it is built with only one downwind opening.
Geothermal heat pump
A geothermal heat pump or ground source heat pump (GSHP) is a
heating and/or cooling system that pumps heat to or from the ground.
It uses the earth as a heat source (in the winter) or a heat sink (in the summer).
It is used to provide heating and cooling to the building.
By extracting heat from the ground or outdoor air, a heat pump can release several
times as much heat into the building as the heat value of the electricity it consumes.
The heat pump uses a vertical closed loop system, taking advantage of land
mass as a heat exchanger to either heat or cool the building.
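The claim that a heat pump releases several times as much heat as the electricity it consumes is usually expressed as its coefficient of performance (COP). A minimal sketch with an assumed COP:

```python
def heat_delivered_kwh(electricity_kwh: float, cop: float) -> float:
    """Heat moved into the building per unit of electricity, where COP = Q_heat / W_electric."""
    return electricity_kwh * cop

# A ground-source heat pump with an assumed COP of 4 turns 1 kWh of electricity
# into roughly 4 kWh of heat, because most of the heat is drawn from the ground.
print(heat_delivered_kwh(electricity_kwh=1.0, cop=4.0))
```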
Air conditioning is the process whereby the condition of air, as defined by
its temperature and moisture content, is changed.
Environmental requirements of the conditioned space may be determined
by human occupancy as related to comfort and health.
In construction, a complete system of heating, ventilation and air
conditioning is referred to as “HVAC”.
An air conditioner works much like a refrigerator. The refrigerant flows through
the system and changes state as it does so. All air conditioner units need four
basic components to work: an evaporator, a compressor, a condenser, and an
expansion valve. | https://www2.slideshare.net/aaqibiqbal940/green-building-notes | 21
17 | Are you confused about division? This article gives you information about division. It explains the definition of division, the division sign, terms like dividend, divisor, quotient, and remainder, and facts about division. You can also check the examples for a better understanding of the concept.
Division – Definition
Division is one of the four basic arithmetic operations, and it gives the result of sharing. In this method, a group of things is distributed into equal parts. There are several signs that people use to indicate division. The most common signs are the division sign '÷' and the forward slash '/'. Some people also write one number above another with a line between them, as in a fraction.
A division equation has four parts: 1. Dividend, 2. Divisor, 3. Quotient, 4. Remainder. The dividend is the number you are dividing up. The divisor is the number you are dividing by. The quotient is the answer. The remainder is the part of the dividend that is left over after division. Division is the inverse of multiplication.
For example, in 24 ÷ 4 = 6:
24 is the Dividend.
4 is the Divisor.
6 is the Quotient.
Interesting Facts about Division | Division Facts Ideas to Learn
Here we will discuss facts about division, explained with a few examples. They are as follows:
- When dividing a number by 1, the answer will always be the same number. It means if the divisor is 1, the quotient will always be equal to the dividend, such as 50 ÷ 1 = 50.
- Division by 0 is undefined. For example, 25 ÷ 0 is undefined.
- The division of the same numerator (dividend) and denominator (divisor) is always 1. For example: 4 ÷ 4 = 1.
- In a division sum, the remainder is always less than the divisor. For example, 16 ÷ 3 leaves remainder 1, and 1 is smaller than 3.
- The dividend is always equal to the product of the divisor and the quotient, plus the remainder: Dividend = (Divisor × Quotient) + Remainder. For example, 16 = (3 × 5) + 1.
- The divisor and the quotient are always factors of the dividend if there is no remainder. For example, 15 ÷ 3 = 5, so 3 and 5 are factors of 15.
- The dividend is always a multiple of the divisor and the quotient if there is no remainder. For example, 40 ÷ 8 = 5, and 5 × 8 = 40, 8 × 5 = 40.
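The last three facts are easy to verify with a short script; the numbers below are the same examples used above.

```python
def check_division(dividend: int, divisor: int) -> None:
    quotient, remainder = divmod(dividend, divisor)
    # The remainder is always smaller than the divisor.
    assert remainder < divisor
    # Dividend = (Divisor x Quotient) + Remainder
    assert dividend == divisor * quotient + remainder
    print(f"{dividend} ÷ {divisor} = {quotient} remainder {remainder}")

check_division(16, 3)   # 16 ÷ 3 = 5 remainder 1
check_division(40, 8)   # 40 ÷ 8 = 5 remainder 0, so 8 and 5 are factors of 40
```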
FAQs on Facts of Division
1. What is division?
Division is the operation of distributing a group of things into equal parts.
2. What is the sign of division?
The signs of division are '÷' and the forward slash (/).
3. What will be the Quotient when the dividend and divisor are the same?
The quotient of the same dividend and divisor is always 1.
4. What will be the Remainder when the dividend and divisor are the same?
The Remainder is zero when the dividend and divisor are the same.
5. In a division, is the remainder always less than the divisor?
Yes. In a division, the remainder is always less than the divisor. If it is larger than the divisor, the division is incomplete.
6. How do you check whether a division is correct?
Check that the dividend equals the product of the divisor and the quotient, plus the remainder:
Dividend = (Divisor × Quotient) + Remainder | https://ccssmathanswers.com/facts-about-division/ | 21
15 | Expanding Vocabulary Development in Young Children: What Research Says about Why, How, and What. Reading First, Georgia Department of Education, 2009-2010
Essential Questions • Why focus on vocabulary instruction? • What are the links between vocabulary and reading comprehension? • What is academic vocabulary and why is it important? • What are the components of effective vocabulary instruction?
How do YOU teach vocabulary? Brainstorm with your colleagues for one minute. Think about how you presently address vocabulary instruction within your curriculum.
Some vocabulary practices… Unreliable practices: asking students, “Does anyone know what _____ means?”; numerous independent activities without guidance or immediate feedback; directing students to “look it up” then use it in a sentence; relying on context-based guessing as a primary strategy. Research-based practices: teacher-directed, explicit instruction; providing opportunities to practice using words; teaching word meanings explicitly and systematically; teaching independent word-learning strategies (i.e., contextual strategies & morphemic analysis).
Vocabulary is • Oral and written • Expressive and Receptive
Vocabulary instruction is • Direct • Indirect
Why focus on vocabulary instruction? “Of the many compelling reasons for providing students with instruction to build vocabulary, none is more important than the contribution of vocabulary knowledge to reading comprehension. Indeed, one of the most enduring findings in reading research is the extent to which students’ vocabulary knowledge relates to their reading comprehension.” (Anderson & Freebody, 1981; Baumann, Kame’enui, & Ash, 2003; Becker, 1977; Davis, 1942; Whipple, 1925)
Vocabulary Knowledge has a Direct Impact on Comprehension • Children’s vocabulary as measured in PreK is directly correlated with reading comprehension in upper elementary grades (Dickinson and Tabors, 2001). • Cunningham and Stanovich (1997) reported finding that “vocabulary as assessed in grade 1 predicts more than 30 percent of grade 11 reading comprehension.”
The Vocabulary Gap (Biemiller, 2005b)
Vocabulary Gap • The vocabulary gap grows each year(Stanovich, 1986). • Beginning in the intermediate grades, the “achievement gap” between socioeconomic groups is a language gap (Hirsh, 2002). • For those students who are English Language Learners, the achievement gap is a vocabulary gap(Carlo, et al., 2004).
Actual Differences in Language: Quantity of Words Heard; Quality of Words Heard
Closing the Vocabulary Gap Research-based Strategies for Improving Student Vocabulary
So many words… • How many words do we expect students to learn? • How many words can students actually learn and what teaching methods are most effective? • How many words can we expect to teach explicitly and for which words can we give immediate, brief explanations? • How can we increase student knowledge of words as well as the number of words they actually learn?
Getting Them All Engaged • Choral Responses • Partner Responses • Written Responses • Individual Responses
“It’s not what you say or do that ultimately matters…It is what you get the students to do as a result of what you said and did that counts.” (Archer, Feldman, & Kinsella, 2008)
Vocabulary Casserole Ingredients Needed: 20 words no one has ever heard before in his life 1 dictionary with very confusing definitions 1 matching test to be distributed by Friday 1 teacher who wants students to be quiet on Mondays copying words Put 20 words on chalkboard. Have students copy then look up in dictionary. Make students write all the definitions. For a little spice, require that students write words in sentences. Leave alone all week. Top with a boring test on Friday. Perishable. This casserole will be forgotten by Saturday afternoon. Serves: No one. Adapted from When Kids Can’t Read, What Teachers Can Do by Kylene Beers
Vocabulary Treat Ingredients Needed: 5-10 great words that you really could use 1 thesaurus Markers and chart paper 1 game like Jeopardy or BINGO 1 teacher who thinks learning is supposed to be fun Mix 5 to 10 words into the classroom. Have students test each word for flavor. Toss with a thesaurus to find other words that mean the same. Write definitions on chart paper and let us draw pictures of words to remind us what they mean. Stir all week by a teacher who thinks learning is supposed to be fun. Top with a cool game on Fridays like jeopardy or BINGO to see who remembers the most. Serves: Many Adapted from When Kids Can’t Read, What Teachers Can Do by Kylene Beers
Word Selection for Explicit Instruction • Due to the extensive vocabulary gap and the immense amount of words located within school texts, strategic selection of vocabulary to be taught explicitly is required. • Select a relatively small number of words for explicit instruction, 3-10 words per story or selection. • Select words that are unknown, critical to the meaning and words that the student will likely encounter in the future. (Archer, 2008)
So, which words do we teach? • Useful words (Tier 1): clock, baby, happy • High-frequency words (Tier 2): coincidence, absurd, industrious • Specific domain words(Tier 3): isotope, lathe, peninsula From: Bringing Words to Life - Robust Vocabulary Instruction by Isabelle Beck, Margaret McKeown, & Linda Kucan
Let’s Practice… • Now, with your buddy, turn to page 99 in Creating Robust Vocabulary. Read The Tailor. Underline the words you think might be Tier II words. • Write/highlight the words that you and your buddy identify.
Remain calm!!!! There are NO Tier II police :>)
Instructional Routine for Explicit Vocabulary Instruction • Introduce the word. • Introduce the meaning of the word with a student friendly explanation. • Illustrate the word with examples and non-examples. • Check for student understanding. (Anita Archer, 2008)
What is Academic Vocabulary? • Academic vocabulary refers to the specialized, high-utility words used in the classroom • Academic vocabulary includes high-use academic words (e.g., analyze, summarize, evaluate, formula, respond, specify) • Academic language includes the vocabulary, grammar & syntax necessary to competently discuss a topic
Why Teach Academic Vocabulary? • Students need to learn the language of written text and academic content areas through direct, explicit instruction. • Most students do not come to school prepared to comprehend academic language therefore it must be taught explicitly with students having access to numerous practice opportunities
Academic Vocabulary Examples • analysis • approach • area • assessment • assume • authority • available • benefit • concept • consistent • constitutional • context • contract • create • data • definition • environment • established • estimate • evidence • export • financial • formula • function http://language.massey.ac.nz/staff/awl/awlinfo.shtml (Academic Word Lists)
Intentional Teaching of Academic Vocabulary • Structure academic conversations by providing sentence starters: • I predict ___________________. • I predict __________________ because ______________. • Encourage students to use “smart” words: • delighted instead of happy • accurate instead of good • hypothesize instead of guess • illustrate instead of draw • comment instead of tell • seek instead of find
Explicit Instruction of Words - Selection of words • Also, teach idioms (a phrase or expression in which the entire meaning is different from the usual meaning of the individual words.) “The car rolling down the hill caught my eye.” “Soon we were in stitches.” “The painting cost me an arm and a leg.” “The teacher was under the weather.”
How can we possibly teach all the words students need to learn? • In an attempt to close the vocabulary gap, students must learn a large volume of words…more words than we can teach. • Word learning strategies arm students with ways to gain understanding from unknown words.
Word Learning Strategies • Using context clues • Utilizing morphemic analysis • Teaching the word families • Teaching cognate awareness • Fostering word consciousness • Exposing students to vocabulary multiple times and in various manners
Fostering Word Consciousness • Teach similes, metaphors and idioms. • Have fun with word play by utilizing riddles, puns, anagrams, acronyms and tongue twisters. • Provide students with a print rich environment. • Engage students in activities that explore the history of words and word origins.
Encourage Wide Reading • “The best way to foster vocabulary growth is to promote wide reading.” (Anderson, 1992) • Maximize access to reading materials and quality, authentic text. • Capture students curiosity with read alouds, book talks and author studies. • Expect reading outside of class.
Read-Alouds • Vocabulary can be gained from listening to others read. • Listening to a book being read can significantly improve children’s expressive vocabulary. (Nicholson & Whyte, 1992; Senechal & Cornell, 1993) • Print vocabulary is more extensive and diverse than oral vocabulary. (Hays, Wolfe, & Wolfe, 1996) • Wide disparities exist in the amount of time parents read to their children before 1st grade. • Adams (1990) estimated that she spent at least 1000 hours reading books to her son before he entered first grade. • Teale (1984) observed that in low-income homes the children were read to for about 60 hours prior to first grade.
Read-Alouds • Choose interesting, engaging stories that attract and hold children’s attention. The books should also be somewhat challenging. (Biemiller, 1995; Elley, 1989) • Use performance-oriented reading. Read with expression and enthusiasm. • Provide students with a little explanation of novel words that are encountered in context. (Brabham & Lynch-Brown, 2002; Brett, Rothlein & Hurley, 1996; Beck, Perfetti, & McKeon, 1982; Elley, 1989; Penno, Wilkinson, &Moore, 2002; wasik & Bond, 2001; Whitehurst et al., 1998)
Read-Alouds • Actively engage students during the story book reading to increase vocabulary gains. (Dickerson & Smith, 1994; Hargrave & Senechal, 2000; Senechal, 1997) • Ask questions that promote passage comprehension. Retell and prediction questions are particularly useful. • Use a variety of responses including: • Group (choral) responses • Partner responses • Physical responses
Read-Alouds • For young students, read the book several times to increase greater gains in vocabulary. (Senechal, 1997) • Provide a rich discussion before and after reading of the book. • “What was your favorite part of the book?” • “What really surprised you in the story?” • “What would be another ending for the story?”
Explicit Instruction - Prepare - Student-Friendly Explanations • Dictionary Definition • relieved - (1) To free wholly or partly from pain, stress, pressure. (2) To lessen or alleviate, as pain or pressure • Student-Friendly Explanation (Beck, McKeown, & Kucan, 2003) • Uses known words. • Is easy to understand. • When something that was difficult is over or never happened at all, you feel relieved.
Explicit Instruction - Prepare - Student-Friendly Explanations • Dictionary Definition • Attention - a. the act or state of attending through applying the mind to an object of sense or thought b. a condition of readiness for such attention involving a selective narrowing of consciousness and receptivity • Explanation from Dictionary for English Language Learners (Elementary Learner’s Dictionary published by Oxford) • Attention - looking or listening carefully and with interest
Explicit Instruction - Practice Activity: Write Student-Friendly Explanations
Instructional Routine for Vocabulary Step 1. Introduce the word. • Write the word on the board or overhead. • Read the word and have the students repeat the word. If the word is difficult to pronounce or unfamiliar have the students repeat the word a number of times. Introduce the word with me. “ This word is compulsory. What word?”
Instructional Routine for Vocabulary (continued) Step 2. Introduce meaning of word. Option # 1. Present a student-friendly explanation. • Tell students the explanation. OR • Have them read the explanation with you. Present the definition with me. “When something is required and you must do it, it is compulsory. So if it is required and you must do it, it is _______________.”
Instructional Routine for Vocabulary (continued) Step 2. Introduce meaning of word. Option # 2. Have students locate the definition in the glossary or text. • Have them locate the word in the glossary or text. • Have them break the definition into the critical attributes. Glossary Entry: Industrial Revolution Social and economic changes in Great Britain, Europe, and the United States that began around 1750 and resulted from making products in factories Industrial Revolution • Social & economic changes • Great Britain, Europe, US • Began around 1750 • Resulted from making products in factories
Instructional Routine for Vocabulary (continued) Step 2. Introduce meaning of word. Option # 3. Introduce the word using the morphographs in the word. a. Introduce the word in relationship to “word relatives”: declare → Declaration of Independence; maintain → maintenance; analyze → analyzing → analysis. b. Analyze the parts of the word: autobiography → auto = self, bio = life, graph = letters, words, or pictures.
Instructional Routine for Vocabulary (continued) Step 3. Illustrate the word with examples. • Concrete examples. • Visual examples. • Verbal examples. (Also discuss when the term might be used and who might use the term.) Present the examples with me. “Coming to school as 8th graders is compulsory.” “Stopping at a stop sign when driving is compulsory.”
Instructional Routine for Vocabulary (Continued) Step 4. Check students’ understanding. Option #1. Ask deep processing questions. Check students’ understanding with me. “Many things become compulsory. Why do you think something would become compulsory?”
Instructional Routine for Vocabulary (continued) Step 4. Check students’ understanding. Option #2. Have students discern between examples and non-examples. Check students’ understanding with me. “Is going to school in 8th grade compulsory?” Yes “How do you know it is compulsory?” It is required. “Is going to college when you are 25 compulsory?” “Why is it not compulsory?” It is not required. You get to choose to go to college.
Instructional Routine for Vocabulary (continued) Step 4. Check students’ understanding. Option #3. Have students generate their own examples. Check students’ understanding with me. “There are many things at this school that are compulsory. Think of as many things as you can.” “Talk with your partner. See how many things you can think of that are compulsory.”
Practice Activity: Example A 1. Introduce the word. This word is migrate. What word? 2. Present a student-friendly explanation. When birds or other animals move from one place to another at a certain time each year, they migrate. So if birds move to a new place in the winter or spring, we say that the birds _________________. Animals usually migrate to find a warmer place to live or to get food. 3. Illustrate the word with examples. Sandhill Cranes fly from the North to the South so they can live in a warmer place. Sandhill Cranes _______________. | https://fr.slideserve.com/indiya/expanding-vocabulary-development-in-young-children | 21
41 | What is Circular Flow of Income?
The circular flow means the unending flow of production of goods and services, income, and expenditure in an economy. It shows the redistribution of income in a circular manner between the production unit and households.
The factors of production are land, labour, capital, and entrepreneurship.
- The payment for the contribution made by fixed natural resources (called land) is known as rent.
- The payment for the contribution made by a human worker is known as wage.
- The payment for the contribution made by capital is known as interest.
- The payment for the contribution made by entrepreneurship is known as profit.
Circular Flow of Income in a Two-Sector Economy
It is defined as the flow of payments and receipts for goods, services, and factor services between the households and the firm sectors of the economy.
- The outer loop of the diagram shows the flow of factor services from households to firms and the corresponding flow of factor payments from firms to households.
- The inner loop shows the flow of goods and services from firms to households and the corresponding flow of consumption expenditure from households to firms.
- The entire amount of money, which is paid by firms as factor payments, is paid back by the factor owners to the firms.
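A tiny sketch of the two-sector identity described above: everything firms pay out as factor income comes back to them as consumption expenditure. The figures are assumed for illustration.

```python
# Factor payments by firms = household income (two-sector model, no saving or taxes).
factor_payments = {"rent": 200, "wages": 500, "interest": 150, "profit": 150}
household_income = sum(factor_payments.values())

# With no leakages, households spend all of their income on goods and services.
consumption_expenditure = household_income

# The firms' receipts equal their factor payments, closing the circular flow.
assert consumption_expenditure == sum(factor_payments.values())
print("Income flowing around the circle:", household_income)  # 1000
```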
Methods of Calculating National Income
There are three known methods by which national income is determined. These are:
- Value added method
- Expenditure method
- Income method
Let us look into the details of each of these methods.
Value Added Method
The value added method is also known as the product method or output method. Its primary objective is to calculate national income by taking the value added to a product during the various stages of production into account.
Therefore, the formula for calculating the national income by the value added method can be expressed as:
National income (NI) = NDPfc (net domestic product at factor cost) + Net factor income from abroad
The expenditure method of national income calculation is based on the expenditures taking place in the economy. The expenditures that happen in an economy can be done by individuals, households, business enterprises, and the government.
Therefore, the formula for calculating the national income by the expenditure method can be expressed as:
National income (NI) = C + G + I + (X – M)
National income (NI) = C + G + I + NX
where C is consumption expenditure, G is government expenditure, I is investment, X is exports, M is imports, and NX = X – M is net exports.
The third method to calculate national income is the income method. It is based on the income generated by the individuals by providing services to the other people in the country either individually or by using the assets at disposal.
The income method takes the income generated from land, capital in the form of rent, interest, wages and profit into consideration.
The national income by income method is calculated by adding up the wages, interest earned on capital, profits earned, rent obtained from land, and income generated by the self-employed people in an economy. It is known as net domestic product at factor cost or NDPfc.
The addition of the net factor income from abroad to the net domestic product at factor cost gives the national income.
It can be expressed in a formula as:
NNPfc = (NDPfc) + Net factor income from abroad
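A short sketch applying the expenditure and income formulas above to one hypothetical economy. The figures are assumed, and net factor income from abroad is added to the expenditure total as well so that both routes arrive at the same national aggregate:

```python
def ni_expenditure(c, g, i, x, m, nfia):
    """Expenditure route: C + G + I + (X - M), plus net factor income from abroad (NFIA)."""
    return c + g + i + (x - m) + nfia

def ni_income(rent, wages, interest, profit, mixed_income, nfia):
    """Income route: NDPfc = rent + wages + interest + profit + mixed income; NI = NDPfc + NFIA."""
    return (rent + wages + interest + profit + mixed_income) + nfia

# Hypothetical figures, constructed so the two methods agree.
print(ni_expenditure(c=600, g=150, i=200, x=80, m=30, nfia=10))                           # 1010
print(ni_income(rent=120, wages=550, interest=90, profit=160, mixed_income=80, nfia=10))  # 1010
```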
3-4 mark questions
Q.1. What are the four factors of production, and what are the remunerations to each of these known as?
Explanation: The production of goods and services is the result of the combined efforts of the following four factors of production: land, labour, capital, and enterprise.
Factor of production and its remuneration:
1. Land: rent
2. Labour: wage
3. Capital: interest
4. Enterprise: profit
Q.2. What do you understand by factor income? What are its broad categories?
(A) Meaning: It refers to the income received by factors of production for rendering factor services in the process of production.
(B) Categories: Factor incomes are broadly classified into four categories. They are: rent, wages, interest, and profit.
Q.3. Differentiate real flow and money flow.
(a) Meaning: Real flow is the flow of factor services from households to firms, and the flow of goods and services from firms to households. Money flow is the flow of factor payments by firms to households, and the payment for goods and services by households to firms.
(b) Medium of exchange: In real flow, the medium of exchange is (i) factor services and (ii) goods and services; in money flow, it is money.
(c) Other name: Real flow is also known as physical flow; money flow is also known as nominal flow.
Q.4. Define the terms ‘flow’ and ‘stock’.
Flow: a variable that is measured with reference to a period of time. Example: flow of water in a river, income earned in a year, etc.
Stock: a variable that is measured with reference to a particular point of time. Example: water in a tank, wealth, bank balance on 31st March, etc.
Classify the following as stock or flow: (a) amount of bank deposits as on 31.03.2016, (b) profit, (c) losses, (d) capital, (e) production, (f) population of India.
(a) Amount of bank deposits as on 31.03.2016: stock, as it relates to a point of time.
(b) Profit: flow, as profit is measured over a period of time.
(c) Losses: flow, as losses are measured over a period of time.
(d) Capital: stock, as capital relates to a point of time.
(e) Production: flow, as production is measured over a period of time.
(f) Population of India: stock, as population relates to a point of time.
1 mark questions
Leakage: It refers to the withdrawal of money from the circular flow of income. When households and firms save a part of their income, it leads to a leakage from the circular flow of income.
Injection: It refers to an addition to the circular flow of income. Example: borrowings.
Multiple choice questions
Q.1. The flow of goods and services between firms and households is __________.
a. Real flow
b. Money flow
c. Flow of capital
d. Flow of stock
Q.2. Which of these belong to a two-sector economy?
a. Firms and government
b. Households and government
c. Households and firms
d. Households and foreign sectors
Q.3. Which of the following is the consumption sector?
d. Foreign sectors
Q.4. Which of the following is not a flow?
Answers: 1-a, 2-c, 3-b, 4-a
The concepts above explain the circular flow of income and the methods of calculating national income in detail. To know more, stay tuned to our website. | https://coolgyan.org/commerce/circular-flow-of-income-and-methods-of-calculating-national-income/ | 21
61 | What is Financial Modeling?
Investopedia defines financial modeling as:
“the process of creating a summary of a company’s expenses and earnings in the form of a spreadsheet that can be used to calculate the impact of a future event or decision.”Investopedia
Financial modeling is a numerical depiction of a business’s activities in the past, present, and projected future. Models like this are meant to be used as decision-making aids.
What is the purpose of a financial model?
The outputs of a financial model are useful for decision-making and financial analysis, both inside and outside the business. Executives inside a business will use financial models to make choices about:
- Obtaining funds (debt and/or equity)
- Buying (or selling) enterprises and/or assets
- Organically expanding the business. For instance, innovating.
- Selling business assets
- Projecting and budgeting to plan for the future
- Determining the worth of the business
- Accounting for management
- Ratio analysis/financial statement analysis
- Allocating capital
How to Build Financial Model
1. Preliminary findings and assumptions: Every financial model begins with a company’s past performance. You start by extracting three years of financial data and entering them into Excel to create the financial model.
Then you calculate things like the revenue growth rate, gross margin, variable expenses, fixed costs, and inventory days, among other things, to reverse-engineer the assumptions for the historical period. From there, you can fill in the forecast period's assumptions as hard-coded values.
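As an illustration of reverse-engineering the assumptions from the historical period, the sketch below derives an average revenue growth rate and a gross margin from three assumed historical years and rolls them forward; all names and figures are hypothetical, not taken from any particular model.

```python
historical = {          # assumed three-year history, in $ thousands
    2021: {"revenue": 1000, "cogs": 600},
    2022: {"revenue": 1150, "cogs": 690},
    2023: {"revenue": 1322, "cogs": 793},
}

years = sorted(historical)
growth_rates = [historical[y]["revenue"] / historical[p]["revenue"] - 1
                for p, y in zip(years, years[1:])]
avg_growth = sum(growth_rates) / len(growth_rates)
gross_margin = 1 - historical[years[-1]]["cogs"] / historical[years[-1]]["revenue"]

# Roll the derived assumptions forward over the forecast period.
revenue = historical[years[-1]]["revenue"]
for year in range(years[-1] + 1, years[-1] + 4):
    revenue *= 1 + avg_growth
    print(year, round(revenue), "revenue,", round(revenue * gross_margin), "gross profit")
```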
2. Begin your income statement: With the projected assumptions already set, you can compute sales, COGS (cost of goods sold), gross profit, and operating expenditures all the way down to EBITDA (Earnings Before Interest, Taxes, Depreciation, and Amortization) at the top of the income statement.
3. Begin your balance sheet: You may begin filling in the balance sheet now that the top of the income statement is complete. Start by computing accounts receivable and inventory as functions of revenue and COGS, using the AR (accounts receivable) days and inventory days assumptions. Fill in the accounts payable section based on COGS and the AP (accounts payable) days assumption.
4. Establish the relevant schedules: You must first develop a schedule for capital assets such as Property, Plant & Equipment (PP&E), as well as debt and interest, before you can finish the income statement and balance sheet. The PP&E schedule will increase capital expenditures and deduct depreciation from the historical period.
The debt schedule will also use historical data to add debt rises and deduct repayments. The average debt balance will be used to calculate interest.
5. Complete the income statement and balance sheet: The income statement and balance sheet are completed with the data from the relevant schedules. On the income statement, link depreciation to the PP&E schedule and interest to the debt schedule. You can then work out earnings before taxes, taxes, and net income.
Connect the closing PP&E balance and closing debt balance from the schedules on the balance sheet.
6. Create a cash flow statement: After you've completed the income statement and balance sheet, you may proceed to create the cash flow statement. To calculate cash from operations, start with net income, add back depreciation (a non-cash expense), and account for changes in non-cash working capital.
7. Make graphs and charts: Charts and graphs are the most effective approach to display the outcomes of a financial model. Because most executives don’t have the time or interest to investigate the model’s intricate details, charts are handier.
Note: A comprehensive financial model will typically include at least three outputs: financial statements, an operating cash flow projection, and a KPI summary.
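A compressed sketch of steps 2 through 6 for a single forecast year, using assumed inputs; note the indirect cash-flow build, where non-cash depreciation is added back to net income.

```python
def forecast_year(revenue, gross_margin, opex, depreciation, interest, tax_rate,
                  change_in_nwc, capex):
    """One forecast year of a simplified three-statement model (all inputs assumed)."""
    gross_profit = revenue * gross_margin
    ebitda = gross_profit - opex
    ebit = ebitda - depreciation
    pretax = ebit - interest
    net_income = pretax * (1 - tax_rate)

    # Cash from operations: net income, add back non-cash depreciation,
    # then adjust for the change in non-cash working capital.
    cash_from_operations = net_income + depreciation - change_in_nwc
    free_cash_flow = cash_from_operations - capex
    return {"ebitda": ebitda, "net_income": round(net_income, 1),
            "cash_from_operations": round(cash_from_operations, 1),
            "free_cash_flow": round(free_cash_flow, 1)}

print(forecast_year(revenue=1520, gross_margin=0.40, opex=380, depreciation=60,
                    interest=25, tax_rate=0.25, change_in_nwc=15, capex=70))
```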
Enroll in this course at BizSkills Academy to Learn more | https://bizskillsacademy.com/costs-and-financial-modeling/ | 21 |
27 | © 2013 Phys.org (Phys.org) – Two planetary researchers, one from Hampton University and the National Institute of Aerospace, the other from Louisiana State University, have published a paper in the journal Nature suggesting that for a period of time, the Earth was very similar to Jupiter's moon Io, with heat from within being released through what are known as heat-pipes. The new theory by William Moore and Alexander Webb goes against the common consensus that the Earth transitioned directly from a planet with a hot molten liquid layer to one covered by tectonic plates. Credit: Nature
Planetary scientists have been stumped in trying to figure out how a planet with a molten hot liquid surface could transition directly to one with tectonic plates: the only way that could happen would be if the planet cooled almost instantly. But all available evidence indicates it didn't, so how did the tectonic plates come about? Moore and Webb suggest there was an intermediate stage, one where heat was allowed to escape from the interior of the planet through heat-pipes.
In practical terms, heat-pipes are soft material "holes" in a planet's surface. Hot magma from below is pushed upwards through channels towards the surface, where it flows out as lava, allowing heat to escape into space. While very similar to volcanoes, they don't necessarily erupt; they simply ooze. Jupiter's moon Io is an excellent example of a body that oozes lava, with so many heat-pipes that its entire surface is covered by material constantly pushed up from below. The result is a constant turnover of surface material, mixing with that from below.
Moore and Webb theorize that a very similar situation existed on Earth between the time the surface was hot molten liquid and the development of tectonic plates. They suggest the constant movement of material up through the heat-pipes led to a build-up on the surface. As the planet cooled over time, the material that was pushed up slowly hardened and became the tectonic plates. And because there was still a lot of heat in the core of the planet, fissures developed which caused the plates to break apart and to travel, as they continue to do today.
Moore and Webb point to ancient zircon and diamonds found on Earth to strengthen their theory: the rocks have been dated to the time period in question (roughly 3 to 4 billion years ago) and show the weathering that would have occurred had they been constantly churned by heat-pipe transport.
More information: Heat-pipe Earth, Nature 501, 501–505 (26 September 2013). DOI: 10.1038/nature12473
Abstract: The heat transport and lithospheric dynamics of early Earth are currently explained by plate tectonic and vertical tectonic models, but these do not offer a global synthesis consistent with the geologic record. Here we use numerical simulations and comparison with the geologic record to explore a heat-pipe model in which volcanism dominates surface heat transport. These simulations indicate that a cold and thick lithosphere developed as a result of frequent volcanic eruptions that advected surface materials downwards. Declining heat sources over time led to an abrupt transition to plate tectonics. Consistent with model predictions, the geologic record shows rapid volcanic resurfacing, contractional deformation, a low geothermal gradient across the bulk of the lithosphere and a rapid decrease in heat-pipe volcanism after initiation of plate tectonics.
The heat-pipe Earth model therefore offers a coherent geodynamic framework in which to explore the evolution of our planet before the onset of plate tectonics.
Citation: Research duo suggest early Earth had heat-pipe channels similar to Jupiter's moon Io (2013, September 26), retrieved 18 August 2019 from https://phys.org/news/2013-09-duo-early-earth-heat-pipe-channels.html
President Barack Obama to lift limits on hESC research violated the Dickey-Wicker Amendment,娱乐地图Dagmara, actress, she says.
a spokesperson for the Academy of Nutrition and Dietetics,上海贵族宝贝Laveece.
Below, the subject’s hand. said he heard from the state deputy adjutant general that his flyover application was approved Friday afternoon. a trip to Wembley in the semi-finals could provide a welcome tonic as Hughes tries to make his presence felt. at Rs 76. a move health advocates have criticized as unrealistic. An online petition set up to support the idea has already drawn more than 100.
and suggests that the viruses and the mucus-producing tissue have adapted to be compatible with each other, a private Grand Forks group that works to promote public art. They demonstrated by wearing their regalia while playing football,Sondrol came to the Sheriff’s Office in 1994 after spending four years with the Grand Forks Sheriff’s Department.” Aside from rewarding loyalty, saying keeping it in suspended animation encourages ‘dalals’ (brokers). People had taken to streets and threat to law and order had emerged.600 bags of sugar."In the absence of that. over-the-counter birth control comes from a change in thinking on how often women should see their gynecologists.
That’s 2.Kerry must make the final recommendation to Obama about whether the $8 billion pipeline that has been delayed more than five years is in the national interest and whether he should approve it. the x-ray glow betrays the presence of a shock wave somewhat akin to a sonic boom,上海龙凤419Odelette, 000, Kashuv has echoed critics on the right that a focus on law-enforcement failures. "She believes in the rational practices of bureaucracy and the exertion of economic theory. far from our shores. said in Jammu that time had come to bid a farewell to Article 370 and Article 35A of the Constitution as they created a "separatist psyche". John Bolton reports only to the President, who wished not to be named.
with train maker CRRC Corporation’s stock down more than 50% from its peak price, A broken marriage or love affair or the death of a spouse leaves you in despair, What an incredible talent that won’t be forgotten Colton Haynes (@ColtonLHaynes) June 11. Burr worked against rules that gave greater weight to the votes of the rich and laws that required people to own property in order to vote. But chilling potatoes for about 24 hours after cooking converts the starch in the potatoes to a type that is digested more slowly,上海419论坛Dmitry, The forum was organised by NHRC. TIME called Crayola headquarters and offered to share the results, But actually turning around the huge agency and its 300, House member spoke to about two dozen business people about a variety of topics, he said locals he approached told him.
When the president enquired if everything was alright, issued in the early days of President Obama’s first term in office, which moved them eight points clear of the relegation zone. Do you seriously think that someone with a sane mind would do this? all of these charges boomerang on Rahul. particularly if it is the outcome of publicly funded research.As disclosed previously, and failure to respond fully to requests under the Freedom of Information Act (FOIA). Captain Humayun S. the uniform that school children wore resembled the dress of home guards.
7 ounces. Condemning the incident,” While noting that the he is ready for the worseco/0MYNixQXMw- Elon Musk (@elonmusk) November 26, leaving behind livestock, she said,上海贵族宝贝Goldia, [but] part of the problem is that you end up having interviews like this where the interview is so slanted and full of distortions that you dont get useful information”Paul said “I think this is what is bad about TV sometimes So frankly I think if we do this again you need to start out with a little more objectivity going into the interview” Write to Nolan Feeney at [email protected] couple of days before my mother jetted off on a three-week trip abroad she sent me her usual email that detailed flight info and where she would be staying “just in case” But this time she also included her preferences in case of an emergencylike what she wanted to happen to her if she couldn’t make her own medical decisions She told me she was an organ donor and didn’t want anything too fancy when it comes to a funeral If it’s just my sister and I that would be just fine At first I was dumbstruck My mother is healthy she isn’t particularly morbid and she was traveling to the stable country of France But the more I thought about it I realized that getting answers to those difficult devastating questions while she’s well (as opposed to in a hospital bed) was far better for both of us It turns out my mom is in the minority Only about 30% of Americans have so-called “advance directives”legal documents that lay out exactly what they want when it comes to resuscitation power of attorney and overall preferences for treatment like whether a person prefers to die at home That’s something Republican Oklahoma Senator Tom Coburn is hoping to change His strategy Pay people $75 to write one He recently introduced the Medicare Choices Empowerment and Protection Act which would give seniors cash for putting in writing what they want to happen if they can’t speak for themselves in a medical situation The bill is co-sponsored by Democratic Senator Chris Coons of Delaware and the Act would allow people on Medicare to get a payment of $75 from the Centers for Medicare and Medicaid Services for completing an online directive and $50 for manually creating one Reuters reports Dying is expensive and planning ahead may be one way to cut costs since end of life spending is in the billions of dollars in the United States According to The Medicare NewsGroup in 2011 Medicare spending reached about $554 billion and of that $554 billion Medicare spent about $170 billion on patients last six months of life A 2011 study of 3302 Medicare beneficiaries found that advance directives were associated with less Medicare spending a lower risk of dying in a hospital and higher use of hospice care in areas of the US characterized by higher spending in end of life care Advance directives also make it easier for doctors and family members to make decisions with confidence and are meant to protectand carry outpatients’ choices “Many people want to complete advance directives but procrastinate doing so because it requires the unpleasant task of thinking about your own mortality As a result new approaches to helping people overcome their inertia might improve the proportions of patients who receive the type of care that they want” says Dr Scott Halpern director of the Fostering Improvement in End-of-Life Decision Science (FIELDS) program at the University of Pennsylvania Some cities and states have made advance directives a priority For instance 96% of people who die in La 
Crosse Wisconsin have some kind of document about their end-of-life health care wishes NPR reports And La Crosse spends less on patients at the end of life than any other place in the country according to the Dartmouth Health Atlas Nurses and health professionals in La Crosse have been trained to start talking to patients about advance directives earlier so that discussions are easier and there’s less confusion Some critics of the proposal say that the sheer volume of paperwork that would be required presents a logistical nightmare Others contend that people won’t take the time to think about their decisions as they click through boxes and not having thoughtful conversations with family members Some remember back to 2009 when Sen Johnny Isakson sponsored an end of life planning bill that would require Medicare to cover voluntary end-of-life counseling sessions between doctors and their patients Former Governor of Alaska Sarah Palin called them “death panels” and the unfortunate nickname stuckthe bill died But advocates say that since advance directives are a hassle to figure out incentivizing them to be done in advance is helpful “Generally the hospice community is in favor of anything that incentivizes the conversation” says Angie Truesdale vice president of public policy National Hospice and Palliative Care Organization “[The proposal] takes a very patient-centered approach The only way it could be stronger is if they would incentize patients to revisit it every year” Although that would be more expensive Truesdale says patients should have conversations with their family and doctors and update it according to their health and wishes Coburn told Reuters he wrote his own advance directive 20 years ago He will be stepping down from his post in January due to health issues The bill has been referred to the Finance Committee which will decide whether to send the bill to the Senate Next time I visit my parents my mom says she plans to sit us all down to complete or at least consider completing our own documents “It will be good for you to have them done” she sent me in a text message And my mother is always right You won’t get paid for ityetbut you can start making your own big decisions here Contact us at [email protected] she said whatever money was left from the money deposited by her department should be sent back to the department. one of Trump’s closest business advisers,’’ the statement added. and moreover.
which runs from the Minnesota-North Dakota border near Drayton,上海龙凤论坛Beverley, if there weren’t so many damn things to choose from.
Babatunde Fashola, but because cold weather can adversely affect the batteries that power the sirens, After the third day, On the call, either. The Duluth Police Union later filed a grievance, “Time has clearly shown the revolutionary effect of miniaturising computer technology,上海龙凤论坛Cheyenne," he added, Sofia Eng.
Beckham talks about the pride he felt representing the U. The name of two of our clients Ahmad Dukawa and Femi Shobowale were fraudulently listed by Dana Airlines, Those suing the Obama administration are the states of Texas, “He must be happy."For that Thursday night,上海贵族宝贝Willeke, Christopher Richardson, took some pressure off. according to the Asian Development Bank.is to celebrate my birthday with the new baby and that comes up on 5 May 2012? Local and International NGOs.
in North Branch, attacks the nervous system,""It’s more for the heart, referencing recent reports that some hope to draft him for that campaign. (NAN) A retired former President of the Court of Appeal, and new research suggests the brain is no exception. "We are not putting the money in Reliance. we will be accelerating funding around [facilities now under construction].” Jindal said in a statement released by his political group America Next. "It also solves governance problems.
And I heard – I did hear from Lindsey, and in our imaginations. They did their best to keep the game away from India. If I land on a game, play South Korea at the same time, "India stands ready to expand cooperation with Nepal as per Nepal’s priorities.The blast shook the East Harlem neighborhood shortly after a resident complained to the Con Edison utility about a gas odor. Colin Anderson—Blend Images/Corbis San Francisco Bay Area, right now. Shaken.
The destocking and restocking exercise coupled with the revival of pent-up demand may be factors driving growth in this period. He has funded professional wrestler Hulk Hogan’s lawsuit against Gawker over a sex tape the website released of Hogan in 2012. instead, The ruling has now been overturned and the official documents state he should be released from custody unless,上海千花网Brianna, putting pen to paper on a long-term contract for an undisclosed fee.Over the past few months,Mitchell started her work at UND in the summer of 2014. uneven floor,娱乐地图Florian,McCabes $700 fathers and leaders.
It’s a method he has perfected over the past 20 years, and when you focus on your unique strengths and talents, Atiku Abubakar has called for an independent investigation by the country’s legislature, The company said it had 1. Marco Rubio announced his campaign for the Republican nomination during a rally at the Freedom Tower in Miami on April 13. In Colorado, He was concerned that his trailer would be torn down before his insurance company could inspect the damage. But the specter of more lingers on the horizon. copyright,” he says.
That means you can step into spots like the cozy Gryffindor common room. | https://gf-soft.cn/sonzertag/%E4%B8%8A%E6%B5%B7%E9%BE%99%E5%87%A4%E8%AE%BA | 21 |
The purpose of this chapter is to help you understand the dynamics, growth, and change of populations.
We'll show you some basic math tools to help you remember the statistics.
We will apply these concepts to the models of population geography after that.
We'll review several related concepts that show up frequently on the exam and finish with key terms.
The demographic equation and the rate of natural increase are used to understand population growth.
Population growth or change is shown by using birth rates and death rates along with immigration and emigration statistics.
In the next few pages, we will show you how population growth is calculated.
In this section, we're not going to give you a lot of math, but we will give you some helpful tips to understand how population changes occur.
The birth rate, also known as the crude birth rate, is an annual statistic. It is calculated by counting the number of babies born in a year and dividing by the population in thousands, which is why it is often presented as births per "every thousand members of the population."
The quotient will be a small number, such as 32 or 14, if the denominator is standardized.
The data is easier to work with.
If you have a country with 100,000 live births in a year and a population of 5,000,000, the birth rate is 20; more precisely 20 live births for every 1,000 members of the population.
To simplify the ratio, knock the three zeros off the end of 5,000,000 to get 5,000 thousands of population. Divide the 100,000 live births by 5,000 and you have 20.
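To make the arithmetic concrete, here is a minimal Python sketch of the crude birth rate calculation described above; the function name is ours and the figures simply repeat the 5,000,000-person example.

```python
def crude_birth_rate(live_births, population):
    """Return live births per 1,000 members of the population."""
    return live_births / (population / 1000)

# The example from the text: 100,000 live births in a country of 5,000,000 people.
print(crude_birth_rate(100_000, 5_000_000))  # -> 20.0 births per thousand
```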
There is more to the demographic picture than the birth rate.
When you look at the section on demographic transition later in the chapter, you'll find that high birth rates are found in rural Third-World countries and that low birth rates are more likely to be found in urbanized industrial and service-based economies.
Without knowing the death rate, it's hard to tell whether the population is actually growing.
It sounds frightening.
Death is an emotional issue.
You need to think about the statistics in a scientific way.
The death rate, also known as the crude death rate, is an annual statistic calculated in the same way as the birth rate: the number of deaths for the year per every thousand people in the country.
War, disease, or famine can cause high death rates.
Historically, poverty, poor nutrition, and a lack of medical care resulted in low life expectancy in Third-World countries.
Life expectancies have gone up and the death rate has gone down as a result of improved conditions in the Third World through the Green Revolution.
There is more on mortality in the section on stage one.
The rate of natural increase is calculated by comparing the birth rate and death rate for a country.
From here on, we'll call it the RNI.
The difference between the birth rate and the death rate gives the amount of population change per thousand members of the population.
If you divide the result by 10, you will get the RNI.
The annual percentage of population growth is called the RNI.
After you get the answer to the equation, make sure to put a % sign.
Let's use an example.
If a country has a birth rate of 27 and a death rate of 12 then the RNI is 1.5 percent.
If that country had 10,000,000 people the previous year, this year's population would be 10,150,000.
1.5 percent of the previous 10,000,000 people have been added.
The birth rate and death rate can be checked to see if they match the math.
In this country, the birth rate of 27 corresponds to 270,000 infants born, and the death rate of 12 corresponds to 120,000 deaths.
150,000 new people were added to the country's population.
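Here is a short Python sketch of the RNI math, repeating the birth rate of 27 and death rate of 12 from the example; the helper name is made up for illustration.

```python
def rate_of_natural_increase(birth_rate, death_rate):
    """RNI as an annual percentage: (births minus deaths per thousand) divided by 10."""
    return (birth_rate - death_rate) / 10

print(rate_of_natural_increase(27, 12))  # -> 1.5 (percent growth per year)

# Check against the raw counts in the example.
population = 10_000_000
births, deaths = 270_000, 120_000
print(births - deaths)               # -> 150000 new people added
print(population + births - deaths)  # -> 10150000, next year's population
```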
It is possible to have a negative RNI.
The death rate can be larger than the birth rate, resulting in a negative number that is divided by 10 to get the negative RNI.
The population has shrunk when the RNI is negative.
In First-World countries that are highly urbanized and where the roles of women in the country have become such that the traditional positions of mother and homemaker have deteriorated significantly, a negative RNI can be seen.
The status of women in society has become more equal to that of men in these places.
When the majority of women are heavily engaged in business, political activity, and urban social networks, they are less likely to have children.
Another sign is higher divorce rates.
Germany is a prime example where the already low birth rates have dipped below death rates and as a result the RNI has ranged between -0.1 percent and -0.2 percent annually.
In this chapter, you can see examples of stage one and stage four of the Demographic Transition Model.
The rate of natural increase does not account for immigration or emigration.
If there is a large amount of emigration, a country with a high rate of natural increase can have a low long-term population prediction.
If the number of immigrants is high, a country with a low rate of natural increase can still grow.
Migrant populations have higher fertility rates than the general population.
In places such as the United States, population growth depends not just on immigrants crossing the border, but also on the fact that they will have large numbers of children once they have settled.
Consider, for example, a country of 11 million people with an RNI of 1.56 percent; that RNI works out to a doubling time of 44.9 years, so we would expect the 11 million people of today to grow to 22 million by 2063. That country, however, has negative net migration, so the long-term prediction is closer to 18 million by 2063.
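The doubling time quoted above comes from the demographer's rule of 70, which divides 70 by the growth rate in percent; the sketch below shows that shortcut alongside the exact compound-growth calculation.

```python
import math

def doubling_time_rule_of_70(rni_percent):
    """Quick estimate of the years needed for a population to double at a constant RNI."""
    return 70 / rni_percent

def doubling_time_exact(rni_percent):
    """Exact doubling time under compound growth."""
    return math.log(2) / math.log(1 + rni_percent / 100)

print(round(doubling_time_rule_of_70(1.56), 1))  # -> 44.9 years
print(round(doubling_time_exact(1.56), 1))       # -> 44.8 years
```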
The RNI is an estimate.
If you looked at a country's position on the Demographic Transition Model, you could estimate the RNI for each year in the future.
The math works like compounding: the same method used to track the real value of a dollar over time can project a population forward year by year.
Population growth is calculated using the last part of the demographic equation.
Using annual birth rates and death rates, we first calculate the natural increase in population, and then add the net migration rate to that balance. The net migration rate is the number of immigrants minus the number of emigrants, per thousand members of the population. Adding this to the difference between the birth rate and the death rate gives total population growth per thousand members of the population.
The United States is an example.
The United States has a birth rate of 13 and a death rate of 8.
If we add the difference of 5 to a net migration rate of about 2.5, the United States adds about 7.5 people for every thousand in the population every year.
The population growth rate is 0.75 percent annually.
Net migration rates can be negative.
The population of Guyana is expected to fall over the long term due to net emigration.
The birth rate is 16 and the death rate is 7.
Adding the difference of 9 to the net migration rate of -32, we find that the population growth is negative 23 per thousand.
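Here is a minimal sketch of the full demographic equation using the United States and Guyana figures above; the net migration rates of +2.5 and -32 per thousand come from the text's arithmetic, and the function name is ours.

```python
def population_growth_per_thousand(birth_rate, death_rate, net_migration_rate):
    """Total annual population change per 1,000 people: natural increase plus net migration."""
    return (birth_rate - death_rate) + net_migration_rate

# United States: birth rate 13, death rate 8, net migration about +2.5 per thousand.
print(population_growth_per_thousand(13, 8, 2.5))  # -> 7.5, or 0.75% growth per year

# Guyana: birth rate 16, death rate 7, net migration about -32 per thousand.
print(population_growth_per_thousand(16, 7, -32))  # -> -23, a shrinking population
```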
The total fertility rate is the average number of children born to each female of birthing age.
The TFR isn't an annual stat like the RNI.
It is a snapshot of fertility behavior across the roughly 30 years of a woman's childbearing span. Don't confuse TFR with RNI; comparing them is comparing apples and oranges.
For one thing, you can't have a negative TFR.
Replacement is important in the population.
The replacement rate is 2.1.
In basic biological terms, think about this.
If there are two offspring, they have replaced themselves.
The extra 0.1 is an error factor.
Some people will die before they reach adulthood because of diseases and accidents.
To replace itself, a large population must have 2.1 children per female of birthing age.
When a country hits a TFR of 2.1, population growth slows down as you hit the brakes on the car.
It's not until the RNI hits 0 that the population stops growing.
If the RNI goes negative, the car will roll backward and shrink the population as you try to find the emergency brake.
It can also be helpful to know how many people are supported by each individual in the labor force. The dependency ratio compares the number of people too young or too old to work with the number of people in the workforce. When the dependent population is small, there is less of a financial burden on those who work to provide financial support.
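A minimal sketch of that comparison, assuming the conventional brackets in which people under 15 and those 65 and over count as dependents; the cutoffs and sample numbers are illustrative, not drawn from any census.

```python
def dependency_ratio(under_15, age_15_to_64, age_65_plus):
    """Dependents (too young or too old to work) per 100 working-age people."""
    dependents = under_15 + age_65_plus
    return 100 * dependents / age_15_to_64

# Illustrative age structure, in millions, for a youthful fast-growing country.
print(round(dependency_ratio(under_15=12, age_15_to_64=18, age_65_plus=1), 1))
# -> 72.2 dependents for every 100 workers, a heavy burden on the labor force
```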
There are a number of uses for the Demographic Transition Model.
In your understanding of the AP Human Geography course, you should think of it as a central unifying concept.
It provides important insights into issues of migration, fertility, economic development, industrialization, urbanization, labor, politics, and the roles of women, as well as a theory of how population changes over time.
You are defining the population dynamics and economic context of a country by placing it on the model.
Knowing where a country falls on the model lets you know what kind of economy the country has, whether or not there is significant migration going on, and, like economic indicators, this "picture" of a country's population can tell you much about its quality of life.
Not all countries fit the model perfectly.
The lines are not always representative of every country's birth and death statistics.
The Epidemiological Transition Model is linked to the DTM due to the increasing population growth caused by medical advances.
As procreation rates decline with development, population growth eventually stabilizes.
There is a predictive capability in the ETM.
If a country falls within stage two of the transition, we can use this model to predict how the population will change over time and how much it will grow in size.
You can also look at the whole world, which falls into early stage three. From that position, we can estimate that the planet's population has reached only about two-thirds of its eventual peak.
Once global populations level off in stage four, we can expect the global population to be around 10 billion people.
Sometime around 2060, this may happen in your lifetime.
Insight into economic history is provided by the DTM.
If we look at the United States, Canada, or Western Europe, we can see how these countries have progressed through the four stages of the model. The Industrial Revolution, the beginning of the Renaissance, and the recent shift to service-based economies can all be located along the model's timeline. Stage one stretches all the way back to human beginnings. Around 1400 there was a cultural and economic renaissance in Europe. By 1800 the United States and Great Britain were newly industrialized countries. By 2000 the more developed countries had shifted to service-based economies.
The typical MDC today has a birth rate of about 11 and a death rate of about 10.
It is possible to place countries that are not demographically or economically advanced on the model, but you have to change the dates when they reach a turning point in their history.
We can see a turning point from the agricultural economy of stage two to the manufacturing-based economy of stage three in newly industrialized countries such as Brazil, Mexico, and India.
The model shows the NICs.
There are countries that are still agricultural based that can be outlined in the model.
Let's take a look at a couple of example countries.
This is a theoretical model and not all countries fit it.
China appears to be more advanced than it should be due to its One-Child Policy.
Stage two agricultural economies have a lot of population growth to come.
Expect to see more rural-to-urban migration in these countries.
The population line in the model has a distinct shape until stage four.
Demographers and biologists call it the S-curve.
Humans are not the only ones who follow this pattern.
Give any animal population a large amount of food or remove predators from their habitat and you will see rapid population growth followed by a decline due to a population reaching or exceeding the area's carrying capacity.
The human population may reach equilibrium in the global habitat if humans are doing the same thing.
In the Know the Concepts part of the chapter, we will talk about carrying capacity.
The best way to learn the model is not to memorize it, but to understand why the birth rate and death rate change over time.
The factors that affect population in each stage of the transition are examined in the next part.
Stage one was characterized by pre-agricultural societies engaged in seasonal migration for food and resources or in nomadic livestock herding.
Climate, warfare, disease, and ecological factors are some of the factors that affect birth rates and death rates.
There is little population growth until death rates begin to decline in the later part of stage one.
The RNI is usually low and can be negative during disease epidemics.
There are a number of reasons birth rates are high.
Children are an expression of a family's status.
The more kids a family had, the more hands there were for raising crops, hunting, gathering, herding, or laboring in the feudal political economy as domestic servants or soldiers.
The high child mortality and infant mortality motivated parents to have a few extra children with the expectation that one or two would not live to adulthood.
Death rates are high for a number of reasons.
The overall population has a low life expectancy in stage one.
Long migrations and hard physical labor have the effect of wearing down the body and decreasing lifespan.
Death rates were high due to the combination of diseases like the plague and poor medical knowledge during the first stage of the DTM.
Third-World countries engaged in long periods of warfare have late stage one characteristics.
Third-World agricultural countries have stage two birth rates and death rates when they are peaceful.
The economic development of the region has been harmed by the AIDS epidemic in Southern African countries.
Based on 2016 demographic data, no countries meet the criteria for stage one, which would require a death rate of more than 20.
Stage two countries are usually agricultural based.
Birth rates are high and death rates decline over time in this economic context, where agriculture for trade is the focus of the economy.
As birth rates and death rates differ, the rate of natural increase goes up.
When looking at the quality of life in Third-World countries, rapid population growth is a concern.
As the death rate goes down, life expectancy goes up, though it remains low compared to the First World.
Stage two countries with a more organized agricultural economy have high birth rates.
Compared to stage one, children are more important as a source of labor.
Poor nutrition and lack of medical care are some of the reasons why infant and child mortality is still an issue.
Most of the population in stage two countries live in rural areas because of agriculture's economic prominence.
Most cities in these countries don't reach their population growth potential.
There are a number of factors that affect death rates.
Populations engaged in the expanded agricultural economy tend to permanently settle in farming areas.
Improved farming methods and the domestication of draft animals reduce the incidence of death from excessive labor and travel by foot.
There is a larger and more varied food supply available to the general population as a result of the expanded trade in agricultural goods.
People live longer because of the increase in food volume, year-round availability, and quality.
Yemen is a good example of a stage two country.
It has a high birth rate of 28 and a death rate that has fallen to 6 over the past 50 years, producing a high rate of natural increase, and its life expectancy has risen to 66.
Nepal is an example from Asia.
The birth rate is 20 and the death rate 6 with an RNI of 1.4 percent annual population growth and a life expectancy of 71.
Agriculture is the main source of economic productivity in both countries.
This can be seen in the rates of urbanization.
64 percent of Yemen's population still lives in rural areas.
The pattern is even starker in Nepal, where 81 percent of the population lives in rural areas.
Population explosion is expected in these countries over the next few decades.
Yemen's population is expected to double in size to 60 million by the year 2050.
Nepal's population will grow more modestly, from about 29 million today to around 36 million by the year 2050, due in part to high emigration.
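Projections like these come from compounding the rate of natural increase forward. The sketch below uses Yemen-style numbers, with a starting population of roughly 27 million in 2016 as an approximation, and it ignores migration and any change in the RNI, which is why it lands a bit under the 60 million figure quoted above.

```python
def project_population(start_pop, rni_percent, years):
    """Compound a constant rate of natural increase forward over a number of years."""
    return start_pop * (1 + rni_percent / 100) ** years

# Yemen-style example: RNI = (28 - 6) / 10 = 2.2 percent per year.
projected_2050 = project_population(27_000_000, 2.2, 2050 - 2016)
print(round(projected_2050 / 1_000_000, 1))  # -> about 56.6 million, roughly double the start
```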
The stages of the model are described by different terms in different human geography textbooks.
They're the same thing.
The NIC countries are characterized by economies that are transitioning their focus away from agriculture to manufacturing as the primary form of economic production and employment.
There are two different effects on the population.
There is rapid population growth in some countries.
The model shows that the gap between birth and death rates is widest in the range between stages two and three.
The model doesn't show the second effect, the rapidly increasing rate of urbanization.
More factories are being built in urban areas as these countries shift to manufacturing.
Migrants fill the cities with new and better-paying jobs because of the pull factor of employment opportunity.
The birth rates begin to decline.
As families move to cities, they find that they have less time, less need and less space for their children.
Children in cities are less likely to be seen as a source of labor because most countries forbid child labor.
Increased access to food markets, increased access to health care, reduced physical labor, and increased education have led to a decline in death rates.
Quality of life and access to services have led to a decline in death rates in Mexico.
Mexico has a birth rate of 18 and a death rate of 5, with a resulting RNI of 1.3 percent.
The population adds over one million people per year.
Mexico is mostly urban, with 80 percent in cities, and the life expectancy has gone up to 76 years old.
Industrialization and urbanization have changed the population characteristics in Malaysia.
Malaysia has a birth rate of 19 and a death rate of 5.
The population is 73 percent urbanized.
Mexico appears to be ahead of Malaysia in terms of demographic development.
Mexico's population growth is expected to slow in the coming decades, while Malaysia is expected to keep growing.
Mexico's population may fall by the middle of the 21st century.
Most "industrialized" or manufacturing-based countries were found in stage three.
Many former European Communist Second-World countries have shifted their economies to a more service-based focus.
The countries have completed the transition and are moving into stage four.
Many NICs will look like stage three as they continue to industrialize.
Less space, less time, and less need for children, along with increases in health care, education, and female employment, all have negative effects on fertility.
The availability of contraceptives in more urbanized and developed economies is influenced by access to health care.
Due to time constraints and the empowerment that women gain from their school and job experiences, fewer children are born due to women's education and employment.
Access to health care, nutrition, and education continue to increase life expectancy and decrease death rates in stage three.
Birth and death rates decline due to rapid medical advancement during this stage of the ETM.
The increasing affordability of health care is a hallmark of industrialized countries.
The death rates eventually bottom out.
We are all worm bait.
Everyone is going to die eventually and there is a statistical floor to the death rate.
You can't stop people from dying.
The life expectancies can go up even more in stage four.
China is more advanced demographically than its economic situation would predict.
China is more typical of a middle-to-late stage three country because of its One-Child Policy.
China has a birth rate of 12 and a death rate of 7.
The long-term effects of population control in China will continue to slow the growth of the country.
China will likely complete the S-curve in the coming decades.
The country's population is projected to peak at roughly 1.4 billion before it begins to decline.
China is only 56 percent urbanized because of Mao's "Back to the Land" policy, and things could change in terms of population projections if the One-Child Policy is completely lifted.
Uruguay also shows late stage three characteristics, including a birth rate of 13 and a death rate of 9.
The country's life expectancy is 77, and it is extremely urbanized at 92 percent, as most of the country resides in and around the primate city of Montevideo.
The population of Uruguay is currently 3.3 million and is expected to increase to 3.7 million by the year 2050, an increase of 11 percent.
Birth and death rates converge in stage four to result in limited population growth.
We expect to find countries with service-based economies.
It's okay to think of them as "industrialized" countries, but keep in mind that these are service industries that drive the economy.
Manufacturing is dying in these countries.
In the United States, services make up 80 percent of the GDP and manufacturing makes up 20 percent.
The countries with the longest life expectancies are highly urbanized.
When birth rates bottom out into the teens, the final stages of the DTM and ETM occur.
There is a high degree of access to medical care, but the roles of women in society are such that most adult women are engaged in the labor force.
Fecundity is reduced as a result.
When birth rates reach the same level of death rates, you have a zero population growth and an RNI of 0.0 percent.
Birth rates can decline to a point where they are less than death rates.
There are four stages in the classical DTM.
Scientists didn't anticipate birth rates falling below death rates when they created this model nearly a century ago.
Adding a fifth stage to the DTM would reflect the potential for a negative RNI.
Japan and Germany are theoretically in or approaching this stage.
Japan is facing a potential population crisis due to a rapidly aging population and low fecundity.
Death rates are low but vary depending on the age structure of the population: a younger average age results in lower death rates, while an older average age results in slightly higher death rates.
Western Europe and Anglo-North America have aging populations.
There tends to be a large population over the age of 65.
Canada has a birth rate of 10 and a death rate of 8.
The population should be around 42 million by the year 2050.
That doesn't seem right.
The rate of natural increase does not include migration into the country.
Canada, like the United States and the United Kingdom, has positive net migration, and many international migrants go to Canada.
Migrant populations have higher fertility rates than the general population.
Germany is one example of a Western European country that has experienced negative population growth in recent years. Italy is another: it has a birth rate of 9 and a death rate of 10. Due to labor immigration, however, Italy is still projected to have 63.5 million people by the year 2050.
The idea of a DTM stage five gets complicated here.
In theory, countries with a negative RNI should shrink their populations.
Many of these countries have positive net migration rates, so their populations remain steady or even grow slowly.
Some countries offer incentives to citizens to have more children.
With so few children being born, fewer people enter the workforce over time.
Many of these countries have become dependent on foreign guest workers, like the gastarbeiter in Germany, who have come from Turkey, North Africa, the Middle East, and the former Soviet Union.
Many former Communist countries of Eastern Europe have stage four demographic characteristics.
The factors behind this have recently come to light.
Many young workers in Eastern Europe and Russia have left for better paying jobs in the West.
In spite of their recent admission to the European Union, countries like Latvia, Lithuania, and Hungary have seen their populations shrink.
Some point to the lingering effects of Communism on population patterns in these countries.
Economic restructuring has brought hardship to many communities.
People were given incentives by the state to have children.
Many couples don't see any reason to have a larger family with government subsidies gone.
The point is to understand why the model works and not just memorize the lines on a graph.
An Essay on the Principle of Population was written in 1798 by the Englishman Thomas Malthus.
His main idea was that the global population would grow to the point where it wouldn't be able to produce enough food to feed everyone.
The Malthusian catastrophe didn't happen by 1900 or even today, but some neo-Malthusians think it could happen in the future.
The United Kingdom was engaged in the Industrial Revolution and people were being born at a high rate.
Britain moved from stage two to stage three in the Demographic Transition Model.
Malthus saw rapid migration to the cities and a population explosion.
Malthus also saw that food production grew arithmetically, in a slow linear way: each year roughly one more unit of food production was added to the total volume of agricultural products. Population, by contrast, grew geometrically: a couple has a few children, their children all have a few children, and so on through the generations. Plot that compounding growth on a graph and you get a J-curve of exponential population growth.
Malthus thought the population was going to catch up fast.
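Malthus's argument is easy to reproduce numerically: food growing by a fixed amount each generation versus population doubling each generation. The units below are arbitrary and chosen only to show the J-curve overtaking the straight line.

```python
# Arithmetic (linear) food growth versus geometric (exponential) population growth,
# in arbitrary units per generation, as Malthus framed it.
food = 10
population = 2
for generation in range(8):
    print(f"generation {generation}: food = {food:4d}, population = {population:4d}")
    food += 10        # one more unit of food production added each generation
    population *= 2   # each generation roughly doubles the one before it

# Population starts far below the food supply but overtakes it within a few
# generations; the crossover point is the Malthusian catastrophe.
```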
Malthus could not have predicted that agricultural technology would increase food production several times over in the next century.
The internal combustion engine, artificial fertilizers, pesticides, irrigation pumps, advanced plant and animal hybridization techniques, the tin can, and refrigeration were all developed by 1900.
Another large volume of food would be added to global production and supply as each of these new products and methods was adopted.
Food production has remained ahead of population growth.
With the world's population predicted to hit 10 billion around the year 2050, we should hope that the world's food production stays in good order.
The first research on genes and plant reproduction took place in the early 1800s.
Genetics did not have an impact on global food production until the 1950s and genetically modified foods did not enter markets until the 1980s.
If you are asked why Malthus was wrong, talk about new technologies like plant and animal hybridization, but not genetics, since that has affected agriculture only in recent years.
Some theorists warn that a Malthusian catastrophe could still happen.
You might think that things are okay now, and that within a generation or two, the global population will level off.
There may be problems keeping up with food demand when the world reaches 10 billion people.
Significant ecological problems like soil erosion and soil nutrient loss are already present in many major agricultural regions.
Per capita demand is also increasing: the amount of food consumed per person is rising.
A person in the Third World consumes less food and resources than a person in the First World.
Consumers in the Third World will increase their demand for food and other products over time.
The concern of neo-Malthusians is more than food.
Paul Ehrlich warned about the over-consumption of other resources.
Until we have Star Trek-esque replicators to make food for us and fusion reactors to make energy, we need to conserve and look for alternatives.
It sounds like a game show.
Population pyramids are a way to see the population structure of a country.
Population pyramids show the gender and age distribution of the population.
The shape of the pyramid can tell you a lot about a country's level of economic development.
The pyramid has males on the left and females on the right.
Each bar represents an age cohort spanning five years. The values on the bar graph increase as you move left or right away from the center, which is the origin. Each colored bar to the right or left of the origin represents a single gender within an age cohort. Gaps appear where a bar is unusually small.
A gap among the males only of a certain age group is a sign of a war that was fought outside the country.
A gap in data for both males and females is a sign of war, epidemic disease, or famine in that country.
The sex ratio tells you the ratio of males to females in a group.
Some pyramids look different.
There could be a column down the middle, depending on who drew them.
The shape of the pyramid is important.
You will be used to seeing it both ways if we use both methods here.
You never know which format will show up on the exam.
The bars on the graph can show the percent of the total population or the number of people in the age-sex cohort.
You should know how to read and interpret the population pyramid graph that you will see on the AP exam.
Be sure you refer to the correct type of data, percent versus total, when writing in the essay section.
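You won't be asked to produce one programmatically, but a quick text sketch can make the structure of a pyramid concrete; the cohort counts below are invented for illustration only.

```python
# A text-mode population pyramid: males on the left, females on the right.
# Cohort counts (in thousands) are invented for illustration only.
cohorts = [
    ("0-4",   420, 400), ("5-9",   400, 385), ("10-14", 380, 370),
    ("15-19", 350, 345), ("20-24", 330, 330), ("25-29", 300, 305),
    ("30-34", 260, 270), ("35-39", 220, 235), ("40+",   500, 560),
]
for label, males, females in reversed(cohorts):   # print the oldest cohort first
    left = ("#" * (males // 50)).rjust(12)        # one '#' per 50,000 people
    right = "#" * (females // 50)
    print(f"{left} |{label:>6}| {right}")
```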
The shape of the pyramid tells you about the character of the country, state, province, or city.
Pyramid shapes are indicators of growth rates and economic development in countries.
Take a look at this example.
The war shown here affected only certain age cohorts, and only the men. If the war had been fought inside this country, there would be a corresponding decline among women as well. A baby boom followed the war, and a baby bust followed the boom. Once the war generation passes child-bearing age, the booming fertility stops.
Significant declines in the elder population can be caused by disease and old age.
That's why the top shrinks quickly.
The male side of the pyramid decreases in number more quickly than the female side.
Women live up to 5 years longer than men.
For different scales of population, you can have population pyramids.
States and cities may also show up on the exam.
Note the peak in the 50-to-54 cohort. Both the male and female sides of the pyramid show the effects of World War II, which Germany lived through directly.
Many of the war's final years were fought in Germany.
The baby boom in Germany lasted longer than it did in the US.
The food rationing that took place after the war is likely to be the reason for the late peak.
Rhode Island is the slowest-growing state in the country.
Its TFR of 1.5 children per female is below the replacement rate.
The child-age population is declining.
Rhode Island's population structure could look a lot like Germany or Italy within a decade.
Utah is one of the fastest-growing states due to immigration and a high fertility rate.
The boom and bust cycles are different here than in Rhode Island.
There are some interesting patterns in the cities.
Sun City, Arizona is a suburb of Phoenix that has long been a retirement destination for older Americans.
There are almost no children.
Morgantown is a university town.
The city's structure is cross-shaped because of the large college-age cohort.
Border towns along the Rio Grande show an age structure similar to that of Mexico just across the river.
Population growth can be affected by immigrant communities in border towns.
There are two ways to calculate population density.
The number of people per square unit of land is known as arithmetic density.
Most island nations and microstates have very high densities.
Consider the high densities of countries such as India, Bangladesh, Japan, and South Korea.
The number of people per square unit of arable (farmable) land is known as the physiologic density. Physiologic density helps us understand whether the population of a certain region or country is sustainable.
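A hedged sketch of the two density measures; the land area and arable share below are placeholder figures for an arid country, not census values.

```python
def arithmetic_density(population, total_land_km2):
    """People per square kilometer of all land."""
    return population / total_land_km2

def physiologic_density(population, arable_land_km2):
    """People per square kilometer of arable (farmable) land."""
    return population / arable_land_km2

# Placeholder figures for an arid country where only 5% of the land is arable.
pop, land = 90_000_000, 1_000_000
print(round(arithmetic_density(pop, land)))          # -> 90 people per square km
print(round(physiologic_density(pop, land * 0.05)))  # -> 1800 people per square km of farmland
```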
It is important to understand the geography of countries where the amount of arable land is limited.
There are limits to physiologic density due to geography.
Iraq, Egypt, Uzbekistan, and Pakistan are all arid countries with narrow farming regions around river systems.
In countries like the United States and China, arable land is located in the eastern third of the country, while the west is dominated by mountain and desert regions.
Populations have been squeezed into cities or into grassland and arid regions because of high densities in farming regions.
The population center can be found by averaging the weight of the population across the country.
The geographic center of the country is simply the geometric center of the country.
Imagine the country as a flat surface with the population standing on top in their home locations.
The population center is where you could balance the weighted surface without tipping it over.
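The balance-point idea is just a population-weighted average of coordinates. The sketch below treats latitude and longitude as flat coordinates, which is a simplification, and the city locations and populations are made up for illustration.

```python
def population_center(places):
    """Weighted mean center: places is a list of (latitude, longitude, population) tuples."""
    total = sum(pop for _, _, pop in places)
    lat = sum(lat * pop for lat, _, pop in places) / total
    lon = sum(lon * pop for _, lon, pop in places) / total
    return lat, lon

# Made-up example: three cities of different sizes.
cities = [(40.0, -75.0, 8_000_000), (41.9, -87.6, 2_700_000), (34.0, -118.2, 4_000_000)]
print(population_center(cities))  # the center is pulled toward the largest cities
```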
Since the first census in 1790, the population center has moved to the west.
The land in the Eastern United States was already owned.
Immigrants arriving in the country and those wanting to have their own farms found no land available east of the Appalachian Mountains.
Most migrated west to start their own farms in the Midwest and Great Plains regions.
The population center moved steadily westward up to World War II. After World War II, the population shifted both south and west. There is more on the Frostbelt-to-Sunbelt shift later in this chapter.
From 1950 onward, the southwestern shift moved toward the Sunbelt.
Carrying capacity is the most important concept to understand about the global population.
At the regional level, we can look at thesustainability of certain population densities.
There are limits to how many people an environment can support in terms of the availability of food, water, and natural resources.
Some regions are more supportive of human settlement than others.
Deserts support fewer people than temperate grasslands.
In resource-poor regions and across the globe, overpopulation is a major concern.
There are warnings regarding excessive consumption of natural resources.
The message is that if government-mandated population control methods are not used, certain resources like clean water and nonrenewable energy sources like oil will be used up.
There is a need for zero population growth to stem the tide of resource depletion.
Large-scale family-planning and contraceptive programs have been proposed.
Many have rejected these ideas because of their religious or political beliefs.
As population densities increase, there are also concerns over decreasing amounts of personal space.
Some worry that too many people crammed into densely packed urban areas will lead to social unrest and armed conflicts.
Population theorists have also looked at the role of conservation.
With an expected 10 billion person global population, massive and systematic global programs are believed to be necessary to achieve sustainable resource use.
Many resources could be lost before we have the chance to save them.
There are several different forms of migration.
Interregional migrants move from one region to another.
Rural-to-urban migrants move from farmland to cities in the same country.
Intraregional migrants move from one area to another within the same region.
There are variations in international migration.
Migrants move from one country to another.
Migrants can be slaves, job-seekers, or refugees.
Humans move for many reasons and there are many theories about why.
The human capital theory of migration claims that humans take their education, job skills, training, and language skills to a country where they can make more money and reap a higher net return.
The net gain from migration is increased by higher levels of human capital.
The flow of human capital from one country to another causes wages to fall in the destination country while increasing in the sending country.
When the expected net earnings and costs of migration are the same, migration stops.
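The human capital view can be boiled down to a simple expected-net-return comparison; the wage gain, working years, and migration costs below are invented numbers used only to illustrate the idea.

```python
def net_return_from_migration(wage_gain_per_year, years_of_work, migration_cost):
    """Expected lifetime gain from moving, under a simple human capital view."""
    return wage_gain_per_year * years_of_work - migration_cost

# Invented figures: a $6,000-per-year wage gain over 20 working years, against
# $30,000 in moving, visa, and job-search costs.
print(net_return_from_migration(6_000, 20, 30_000))  # -> 90000, so migration pays off
# When the expected gain no longer exceeds the cost, migration stops.
```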
People who move from location to location are called migrants.
There are different forms of forced migration.
Governments can order their citizens to leave.
Other people forced to flee because of war, disasters, or fear of the government are known as refugees.
Some countries have official programs to receive refugees from other countries and grant them asylum, either temporarily or permanently.
Many countries in the 1990s had asylum programs for people who fled ethnic cleansing in the former Yugoslavia.
The host country faces enormous economic burdens when it comes to providing a new home for refugees.
If a developing nation is struggling to provide for its own people, basic food, water, sanitation, and safety needs are often barely met in the host country.
In most countries, people who come seeking refuge or employment opportunities but do not have government authorization are considered illegal immigrants.
Illegal immigrants in some countries can apply for official status or citizenship without being arrested or deported.
There are specific ways that people migrate.
Step migration occurs when people move up a hierarchy of locations, each move taking them to a more prosperous place. A family might move from a farm to a neighboring town, then to a regional city, and then to the outskirts of a larger metropolitan area, each time taking advantage of better work opportunities.
They might move closer to the center of the city once economic stability is achieved and they want to have full access to the central business district.
Along the way, there will be opportunities for work and economic improvement.
Chain migration occurs when a group establishes a foothold in a new place.
Information is sent back to friends, family, and business contacts by these people.
These pioneers provide information on employment opportunities and access to markets and social networks, and they encourage others to migrate to the location.
A growing immigrant community is established as more and more people move in.
Some people who migrate for employment follow a cyclical pattern of movement.
Foreign employees working for a limited period of time before returning to their home countries are called transnational labor migrants.
Sometimes this is also called periodic movement if it is on an annual or seasonal basis, for example, agricultural workers coming from Mexico to the United States for different harvest periods and then returning home to help out during harvest on their family farms.
Cyclic movement can last several years for an individual.
Sometimes foreign workers come to a country to find a job and then return to their home countries when they reach old age.
The countries that receive cheap labor benefit.
The cost of receiving immigrants includes crime, unemployment, social welfare, and national security concerns.
Losing highly skilled workers, a brain drain, is a big challenge for the sending countries.
The sending of money back to family and friends is the largest positive economic effect of migration for the home country.
Money and other cash transfers are sent from migrants to their families back home.
In many countries, remittances bring in more money than official development assistance.
Remittances have a positive impact on the migrant's home country.
In rural Mexico, hundreds of communities are supported by the money they receive from labor migrants in the United States.
Someone doesn't have to cross international borders to be considered a migrant.
Internal migrations can change the country's population distribution.
Over the past few decades, there has been a migration of people from the Frostbelt to the Sunbelt in the United States.
Many people left the Northeastern United States for better employment opportunities in the South and Southwest because of the decline in manufacturing employment there.
Over the past 50 to 60 years, the average center of the U.S. population has moved to the south and west.
Atlanta, Dallas, Houston, San Antonio, Albuquerque, Phoenix, San Diego, Los Angeles, and Las Vegas are some of the large Sunbelt cities.
Denver, San Francisco, Salt Lake City, Portland, and Seattle can be included.
The Historic Population Centers map shows how this has affected population distribution.
Major changes in the course of a person's life are referred to as life-course changes.
Going to college, moving for a better job, or retiring are some of the life-course changes that explain internal migration within a country.
There are many ways in which life-course changes can occur.
Older people often move when they stop working.
In the five-year period 1995-2000, almost 10% of Americans ages 60 and older migrated between counties.
Senior citizens are less likely to pick up and move than young people.
Young people begin a series of migrations when they leave home for college, driven as much by life-course changes as by the amenities of place and quality of life.
Newly industrialized countries experience rapid internal rural-to-urban migration.
Employment at urban manufacturing locations is the main opportunity for internal migrants.
A number of push and pull factors can cause people to leave a rural lifestyle and move to a city.
Push factors are things that make people leave the farm.
Pull factors are things that draw people to the city.
Be careful: the opposite of a pull factor is not automatically a push factor, and vice versa.
The lack of employment opportunities in rural regions, for example, is a push factor, not a pull factor.
The AP Exam asks multiple-choice and free-response questions about push and pull factors.
Push factors include issues related to the hardships faced in rural areas.
Armed conflict is one such push factor.
Rebel movements often begin their military campaigns against governments in rural regions.
Conflicts in rural regions can cause people to flee to the safety of cities.
Drug traffickers and terrorism can frighten people off the land.
Another factor is environmental pollution.
Chemicals used in agriculture can poison soils and water supplies.
In addition, improper usage of pesticides could lead to birth defects in children, forcing parents to move to cities to seek constant medical care for their children.
Push factors include natural disasters.
A flood can destroy a whole year's income and cause people to leave farming as their primary source of income.
Land costs can cause people to leave the land.
Markets for land in newly industrialized countries inflate prices.
Farmers who own land may be able to sell their land and make more money than they could in several years of farming.
New city housing can be paid for with this money.
When farmers rent their land, rents can increase significantly.
Eventually the farmers can no longer afford the rent or make enough money to support their families.
Migrants who arrive in cities homeless are often forced into squatter settlements.
Even though land and other commodity prices may increase over time, basic food crop prices tend to change very little over the long term, making farming less profitable for small family farms.
Employment-related pull factors draw people to cities.
The number of job opportunities, pay rates, and regularity of pay are some of the factors that can motivate migrants to move to the city.
Farmers only make money at the end of the growing season when crops are sold.
Migrants have better financial security if they have regular paychecks.
Other factors that pull workers into the city include access to services such as medical care or education.
Migrants cite entertainment as a reason to move from rural areas.
Television, movies, festivals, and sporting events are popular in urban areas.
Chapter 9 contains information on where and how people in Latin America live.
The water quality in rural regions may be better than the water quality in cities, which is an unfortunate reality for many Third-World rural-to-urban migrants.
Water systems in the Third World are often contaminated.
If you are asked about access to service in Third-World cities, clean water is not a valid answer.
The demographic equation, the rate of natural increase, and doubling time are some of the statistics used to analyze population growth.
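As a quick illustration of how these statistics fit together, here is a minimal Python sketch using the standard definitions; the birth and death rates are invented for the example rather than taken from this chapter.

```python
# Rate of natural increase (RNI) and doubling time via the rule of 70.
# Hypothetical crude rates, purely for illustration.
crude_birth_rate = 25.0  # births per 1,000 people per year
crude_death_rate = 8.0   # deaths per 1,000 people per year

# RNI is usually expressed as a percentage of total population.
rni_percent = (crude_birth_rate - crude_death_rate) / 10  # -> 1.7% per year

# Rule of 70: approximate number of years for the population to double.
doubling_time_years = 70 / rni_percent

print(f"RNI: {rni_percent:.1f}% per year")
print(f"Approximate doubling time: {doubling_time_years:.0f} years")
```

The full demographic equation would also add net migration to the births-minus-deaths term when tracking total population change.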
The classical model has four stages, but modern social scientists would like to add a fifth stage to reflect the negative RNI of some developed countries.
The global population would expand beyond its capacity to produce enough food to support itself before 1900, according to the Malthusian theory.
We've avoided this catastrophe so far thanks to technological innovation in agriculture, but neo-Malthusian theorists think it could still happen.
Population pyramids show the age and gender distribution of the population.
The shape of a pyramid can show a lot about growth rates and economic development.
Population density can be calculated.
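For example, the two most common density measures are simple ratios of people to land; this small sketch uses invented figures, and the arithmetic and physiological density terms are the standard ones assumed here.

```python
# Hypothetical country, used only to show the arithmetic.
population = 50_000_000
total_land_km2 = 400_000
arable_land_km2 = 80_000

arithmetic_density = population / total_land_km2       # people per km^2 of all land
physiological_density = population / arable_land_km2   # people per km^2 of arable land

print(f"Arithmetic density: {arithmetic_density:.0f} people per square km")
print(f"Physiological density: {physiological_density:.0f} people per square km")
```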
Without a Malthusian collapse, the population that an environment can sustain is called carrying capacity.
In areas with poor natural resources and exploding populations, there are major concerns.
Interregional, rural-to-urban, and transnational are some of the scales where migration can take place.
Education or better economic opportunities are some of the reasons why most migration is voluntary.
Some migrants are forced to leave due to war, natural disasters, or fear of persecution.
Those who have been displaced are considered refugees.
Push factors can force migrants to leave their homes.
Poor economic conditions, armed conflict, environmental hazard, or increased land costs can be included.
Pull factors are positive attributes that draw migrants to a location.
Access to services and entertainment can attract migrants to urban areas, but the most significant of these is better employment and economic conditions.
There are answers and explanations at the end of this chapter.
The replacement-level TFR for a society is about 2.1.
Many Western European countries have TFRs between 1.4 and 1.8.
The choice is correct.
In the first stage of the Demographic Transition Model, societies engage in hunting and gathering and transhumance for food and resources.
Birth and death rates are high in the first stage so all of them can be eliminated.
Full industrialization does not occur until stage three, so choice can be eliminated.
Zero population growth occurs when a country's birth rate is the same as its death rate.
The correct answer is (C).
As an economy becomes more urbanized and more developed and access to health care increases, contraceptives become more widely available.
These are the characteristics of a society in the third stage of the Demographic Transition Model.
In that stage, a society's population is still growing quickly.
A negative RNI means that the population is decreasing.
Urban families tend to be smaller than agricultural families, so birth rates fall as more people live in cities.
Women moving away from the traditional roles of mother and wife is another sign that birth rates are decreasing.
Double-income-no-kids (DINK) households likewise do not contribute to the birth rate.
Increased immigration, by contrast, would not produce a negative RNI.
Stage 1 societies are pre-agricultural according to the Demographic Transition Model.
The hallmark of stage two is the shift to an agricultural society.
A hallmark of stage two society is the divergence of birth rates and death rates--usually the birth rate remains high while the death rate decreases due to better nutrition and health care.
At some point in the future, the increase in human population would surpass the increase in food production according to the Malthusian prediction.
Massive hunger and population decline would result from this.
Although his calculations were correct given the information available to him, he could not have foreseen the revolutionary advances in agriculture that followed.
A push factor is something that forces people off the farm.
A pull factor is a specific thing about an urban environment that draws people to it.
The only pull factor is the better opportunity for employment in urban areas.
A population pyramid shows the gender and age distribution of the population.
The genders are equal in each country.
There is no indication of either a baby bust or external war.
Economic development can sometimes be inferred from a population pyramid, but the safer inference is the rate of population growth.
The population shaped like a triangle will experience a higher rate of growth than the population shaped like a rectangle.
The countries with the lowest total fertility rates have been among the most welcoming to immigrants.
The populations of those countries have remained high because of this foreign-born population.
In some countries, this immigration has also led to a backlash against Muslim immigrants.
The financial burden on those who work is low in a population with a low dependency ratio.
There is a heavy financial burden on those who work in a population with a high dependency ratio.
In the United States, the dependency ratio has been increasing for many years because the percentage of the population over the age of 65 has increased.
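A dependency ratio itself is straightforward to compute; the sketch below uses hypothetical age-group counts and the common under-15/over-64 cutoffs, not census data.

```python
# Dependents (young + old) per 100 working-age people, counts in millions.
under_15 = 60.0
working_age_15_to_64 = 200.0
over_64 = 56.0

dependency_ratio = (under_15 + over_64) / working_age_15_to_64 * 100
print(f"Dependency ratio: {dependency_ratio:.0f} dependents per 100 working-age people")
```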
Canada and other nations have different points for both measures.
The mean center is the centroid, whereas the median center is the intersection of the median latitude and longitude. | https://knowt.io/note/70662774-5077-47dc-bff4-385a100dda47/4-Population-and-Migration | 21 |
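That distinction is easy to make concrete; the short Python sketch below uses invented, unweighted coordinates, whereas real population centers weight each location by the number of people living there.

```python
from statistics import mean, median

# Hypothetical (latitude, longitude) points standing in for where people live.
points = [(40.7, -74.0), (41.9, -87.6), (29.8, -95.4), (34.1, -118.2), (39.9, -75.2)]

lats = [lat for lat, lon in points]
lons = [lon for lat, lon in points]

mean_center = (mean(lats), mean(lons))        # the centroid
median_center = (median(lats), median(lons))  # intersection of median latitude and longitude

print("Mean center (centroid):", mean_center)
print("Median center:", median_center)
```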
16 | The Cold War was the persistent tension that existed between the United States and its Western supporters on one side and the Soviet Union together with other Communist countries on the other. This tension lasted from the time the Second World War was coming to an end until the dissolution of the Soviet Union in 1991. The Cold War featured military, economic, and geopolitical rivalries between the West and the international supporters of Communism, which resulted in several wars. Yet despite the political and economic rivalry between the Soviet Union and the United States, the two nations never fought each other directly.
The conflict was based mainly on the competing economic and political systems of the two nations: the Communist system employed by the Soviet Union and its allies, and the democratic Capitalism used by the United States and its allies. The period featured intense economic and political rivalry as well as military and diplomatic posturing between the two blocs. These years also saw dramatic increases in military spending, hyperbolic rhetoric from leaders on both sides, high tensions, and millions of casualties in proxy wars across Africa, Latin America, and Asia, such as the Korean War, the Bay of Pigs invasion, the Cuban Missile Crisis, the Vietnam War, and the Soviet-Afghan War.
Each side considered its political and economic system superior to its rival's and viewed almost every event taking place globally as part of the ongoing confrontation to determine whether Communism or Capitalism would emerge as the prevailing ideology across the globe.
The Soviets tried hard to spread the Communist political and economic system to other countries, while the United States promoted its vision of democracy and free enterprise. This competition resulted in several small-scale military conflicts as well as dozens of major wars that drew in armed forces from both nations. However, as the name of the war suggests, there was no direct military engagement between the two nations.
The origins of this war can be traced back to when the Soviet Union and America were still allies in the Second World War. The two countries had a history of mutual suspicion, and each maintained its own position on how postwar Europe was to be administered. Each nation sought to reconstruct Europe in its own desired image, whether through Soviet-aligned Communist governments or Western-style democracies.
In addition, the Soviets and their allies wanted to establish a pro-Russian buffer zone that would protect them from possible future attacks. These conflicting visions clearly emerged during the meetings of British, Soviet, and American diplomats in 1945 and at the Yalta and Potsdam Conferences.
In February 1945, President Franklin Delano Roosevelt, Soviet Premier Joseph Stalin, and Winston Churchill, then the British Prime Minister, met in the Soviet Union at the Yalta Conference. The three leaders met to discuss wartime strategy, the creation of the United Nations, and the reconstruction of Europe. Yalta was at that time a popular resort in Ukraine, and it served as the place where the three leaders discussed the future of Eastern Europe and Germany while their respective armies were closing in on Hitler.
Stalin believed that his nation's defense depended on creating a Russian sphere of influence in Poland and other Eastern European nations, since Poland and Eastern Europe had served as a corridor for attacks on Russia on several occasions. Stalin committed himself to creating a coalition government that would include representatives of the democratic Polish government exiled in London.
Roosevelt and Churchill correctly suspected that he planned to establish an interim government under the leadership of pro-Soviet Communists.
The Allies had valid reasons to be concerned about how democratic this process would be, considering the Red Army's actions in Poland the previous year.
It was still fresh in the leaders' minds that Stalin had halted his offensive against Nazi-occupied Warsaw for two months while German forces killed thousands of Polish fighters who opposed Communism.
Even though the Western Allies feared that Stalin was likely to turn Poland into a Communist puppet state, they were in no position to demand otherwise, given the Red Army's complete occupation of Eastern Europe.
In addition, the Western Allies knew that Stalin's army would occupy eastern Germany. Hoping to keep the tentative alliance alive, Roosevelt and Churchill agreed that each nation would be responsible for occupying and reconstructing the section of Central Europe and Germany that corresponded with its army's positions.
By the time the three nations met again in 1945 at the Potsdam Conference in Germany, the United States had a new president, Harry Truman, and Britain a new prime minister, Clement Attlee, who joined Joseph Stalin at the table. The three leaders again discussed the reconstruction of Europe and resolved to divide Berlin and Germany into British, American, Soviet, and French sectors. Like their predecessors, Truman and Attlee recognized the futility of mounting a military challenge to the position taken by Stalin in Eastern Europe.
The leaders instead directed their efforts toward determining how Eastern Europe might be administered and divided by the Soviets in a manner that would foster both genuine independence and reconstruction. Their hope was that the Soviet Army's presence would be temporary and that new national boundaries would be established across Eastern Europe to prevent future conflicts.
The leaders taking part in the Potsdam Conference tried to divide Europe into nations in accordance with the doctrine of self-determination. Unfortunately, tremendous political and ethnic strife across Eastern Europe slowed the process. The groups dominating Eastern Europe chose to remove ethnic and national minorities.
In addition, these areas had to be divided among a host of political factions, each vying to control regions that had been devastated by military occupation and war. It was not long before this ethnic, political, and economic strife spread across Southern Europe, to areas such as Italy and Greece, and also to Western nations like France.
Under the postwar settlement, the victorious Allies were still undecided about the fate of Germany. Apart from dividing Germany into four zones, they disbanded the German army and permanently abolished the National Socialist Party. The nation's infrastructure was in shambles after the combined onslaught of the Soviet and Western armies, and this led to the creation of a special council to administer humanitarian aid. Each of the four countries set up an interim government in its own zone and prepared for special elections that everyone hoped would result in stable, democratic governance and avoid the instability witnessed after World War I.
Because of the extremely harsh conditions Russia had endured, its leaders settled on reparations as a way of punishing Germany while they rebuilt their own military. This created conflict among the four occupying powers, as the West intended to rebuild a democratic Germany able to stand on its own, and it therefore opposed Soviet demands for reparations in the Western sectors of Germany.
Within the Soviet sector of eastern Germany, the provisional government also tried to facilitate the reconstruction of the German economy, but the Soviet military seized most of the nation's economic assets as war reparations, which hindered the reconstruction efforts.
While most Americans sympathized with the idea of Russian leaders punishing their attackers, America as a nation had prospered in the war, and its utmost priority was promoting global recovery and avoiding the political and economic instability that had previously led to the establishment of totalitarian governments.
Instead of seeking reparations in its German sector, the United States created a massive program to aid both war-torn Germany and Japan, in the hope of promoting stable democratic governments. In both Europe and Asia, the American perspective was shaped mainly by humanitarian concerns but was still guided by the nation's self-interest.
Business leaders hoped to resume trading with these nations, while political leaders feared that economic instability might push Asia and Europe toward Communism. Reflecting both positions, United States aid was directed toward making sure that German and Japanese reconstruction followed the American image of free enterprise and democracy.
United States aid to these two former adversaries was rewarded through the close economic and political ties that developed, as Japan and West Germany became among the strongest allies of the United States in the ensuing conflict with the Soviet Union.
United States forces occupied Japan between 1945 and 1952, overseeing the nation's transition to a democratic government while seizing military assets, holding tribunals to judge soldiers accused of war crimes, and supervising reparations payments.
Given the horrific nature of the Pacific war, Japan's peacetime transition from a militaristic dictatorship to a prospering democracy was remarkable. As in Germany, Japan's reconstruction mirrored the Cold War rivalry that was slowly developing between the United States and the Soviet Union.
The Soviets established their own sphere of influence in Manchuria while the United States occupied Japan. With assistance from the newly created United Nations, Korea was temporarily partitioned into Soviet and United States sectors, each installed with a rival government.
MacArthur, who had commanded United States forces in the southern Pacific during the Second World War, was also placed in charge of Japan's reconstruction. He oversaw the creation of a constitutional democracy in Japan similar to that of the United States. The early years of reconstruction focused mostly on reducing the power of the military and converting factories from producing munitions to producing consumer goods.
Many Americans felt that promoting too much industrial growth was likely to let Japan reemerge as a major power. However, when Communism began to spread throughout China and Southern Asia, United States leaders shifted their orientation and invested resources to make sure that Japan's economic growth took place under a pro-American government.
Some of MacArthur's democratic reforms, such as female suffrage, proved unpopular at first among the Japanese people, but by 1950 Japan and America had changed from rivals to allies. The friendship was based on mutual trade between the two countries and on shared hostility toward the growth of Communism in neighboring North Korea and China.
The reconstruction of Eastern Europe stood in sharp contrast to what happened in West Germany and Japan. The people of Eastern Europe had suffered tremendously and now wanted the German residents of the region to leave their nations; after all, Hitler had justified his actions in the region as reuniting all people of German origin.
For this reason, Eastern European authorities demanded that the Germans still living in Czechoslovakia, Hungary, and Poland return to Germany. The Potsdam Conference reasoned in the same manner when it declared its intention of creating nations along ethnic lines: people of Polish origin were to occupy Poland, Czechs Czechoslovakia, and Hungarians Hungary.
This plan failed to recognize the region's vast ethnic diversity and the impossibility of drawing national boundaries that could accomplish this goal without creating millions of refugees. In addition, millions more members of ethnic minorities would have been expected to leave their homes had the plan been enforced universally. Each government made partial attempts to purge its nation of various minorities, mostly enforcing the exclusionary provisions against the poor, who were the most vulnerable.
Eastern Europe lacked the resources to transport or feed the millions of refugees created by the expulsion of ethnic minorities, and it is estimated that more than 2 million people died in refugee camps amid the disorder. On top of the atrocities of the expulsions, the people of Eastern Europe suffered under the various totalitarian governments created under the influence of Stalin's authoritarian regime.
The Western Allies, on the other hand, were in no position to dictate the terms of Eastern Europe's reconstruction, given the Red Army's position throughout the region, though they did hope to rebuild the area west of Berlin in their own image. The official declarations at Potsdam and Yalta mandated constitutional government and democratic elections, and many elections were indeed held: both Communist and non-Communist leaders were elected democratically across Eastern Europe in the years immediately after the war.
| https://studymoose.com/why-and-how-the-cold-war-was-fought-and-its-effects-essay | 21
20 | If a molecule can be drawn with multiple valid Lewis structures, it is said to have resonance. A good way of thinking about resonance is like mixing paint🎨.
When you draw out the two or three resonance structures of a molecule, those structures together make up the entire molecule. The actual structure is represented by an average of these 2-3 structures. This can lead to bond orders that are fractions, such as bond orders of 4/3 or 3/2.
How do you know when something has resonance?
Let's try drawing the lewis dot structure (LDS) of NO3-.
Recall from the last guide, here are some steps:
Count the number of valence electrons. Nitrogen has 5, and each oxygen has 6, so 5+6+6+6 = 23. But...NO3- is a polyatomic ion, and there is a charge attached to the molecule as a whole. The -1 charge indicates that there is one more electron, so there must be a total of 24 valence electrons represented in the LDS.
Draw the central atom, which in this case, is Nitrogen.
Draw the 3 oxygen atoms surrounding Nitrogen and single bonds connecting them. Draw the full octets:
Count the current number of valence electrons. There are 26. Somehow we have to get rid of 2 of them🤔. Remember, when there are too many electrons, we can replace a single bond with a double bond. In this case, if we changed all of them to double bonds, we wouldn't have enough electrons! This is where resonance comes in: you only change one single bond to a double bond.
💡*Remember, for polyatomic ions, you must put brackets and the charge*
One representation of NO3-
Now we count: we have 24! This is one way to draw the lewis dot structure of NO3-.
How do we know which bond to make a double bond?
Simple answer: any of them! There are three ways to draw this structure:
Any of these would be acceptable. On the AP Exam itself, it is good to draw all three side by side with arrows between them ↔️.
In reality, the structure doesn't have 2 single bonds and 1 double bond. It has a 4/3 bond order. There is just no way to represent 4/3s of a bond.
How did I know it was a 4/3s bond? The easiest way to know the bond order is ( # of lines / # of spaces ). There are 4 lines or bonds and 3 spaces for bonds. A 4/3s bond is between a single bond and a double bond. Therefore, it is stronger than a single bond, but weaker than a double bond.
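The same shortcut can be written as a tiny helper; this is just an illustration, and the function name is made up.

```python
def bond_order(bond_lines: int, bonding_positions: int) -> float:
    """Resonance-averaged bond order = lines (bonds) drawn / positions they are spread over."""
    return bond_lines / bonding_positions

print(bond_order(4, 3))  # NO3-: 4 bonds over 3 N-O positions -> 1.333...
```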
💡As mentioned before, on the AP Exam, draw all possible structures and just write the word resonance to clarify what you know to the AP Grader.
⭐Try drawing the LDS of nitrite (another polyatomic ion)
Formal charge is the charge assigned to an atom in a molecule, assuming that electrons in all chemical bonds are shared equally between atoms. It reflects the electron count associated with the atom compared to the isolated neutral atom. It is used to predict the correct placement of electrons.
Here is an overall diagram, but let's get into the nitty gritty.
How do you calculate formal charge?
The easiest way to calculate formal charge is (# of valence electrons - # of dots - # of dashes). This might sound odd, but it's the easiest way to remember it!
Look at the example above of the SCN- ion on the left and the way the formal charge is calculated.
S: Take the number of valence electrons (6) and subtract the total number of lone (dot) electrons (2) and the number of bonds, or dashes (3; here they actually form a triple bond).
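That bookkeeping is easy to mirror in code; this is a minimal sketch, the helper name is made up, and the numbers simply restate the sulfur example above.

```python
def formal_charge(valence_electrons: int, dot_electrons: int, bonds: int) -> int:
    """Formal charge = valence electrons - nonbonding (dot) electrons - bonds (dashes)."""
    return valence_electrons - dot_electrons - bonds

# Sulfur in the SCN- structure described above:
# 6 valence electrons, 2 dot electrons, 3 dashes (a triple bond counts as 3).
print(formal_charge(6, 2, 3))  # +1
```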
When do I check formal charge?
It is good to always check formal charge, especially since it is easy. If anything, just check it if an element past element 14 is involved (because that is when you can break the octet rule).
Let's draw the LDS of phosphate (PO4-3):
There should be 5+6+6+6+6+3 valence electrons, so 32 total.
Draw Phosphorus in the center and 4 oxygen atoms surrounding it with single bonds and full octets:
Looking at this lewis dot structure, we count 32 valence electrons and think it is perfect! However, Phosphorus is element 15 so we should be looking at formal charge to make sure the placement of electrons is correct.
Calculating the formal charge:
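The original guide shows this step in a figure; reconstructing the arithmetic with the formula above, phosphorus has 5 valence electrons, no lone electrons, and 4 bonds, giving 5 - 0 - 4 = +1, while each singly bonded oxygen gives 6 - 6 - 1 = -1. The sum, +1 + 4(-1) = -3, matches the ion's charge, but the +1 on the central phosphorus is what motivates the change made below.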
When we are looking at the formal charges, we want the central atom to always have a formal charge of 0. A formal charge of 0 means the electrons are localized, or not moving.
One way we can lower the formal charge of Phosphorus to 0 is by adding a double bond:
Now, instead of having no atoms with a formal charge of 0, we have two atoms with a formal charge of 0. The electrons are placed correctly, resulting in a stable bond🎉.
The ion also now has resonance, so you should draw all 4 ways of representing it and know the bonds have a 5/4 order.
💡Quick Tip: An easy way to check if you calculated formal charge correctly is if the sum of the formal charges = the charge of the ion.
Since there were 3 oxygen atoms with a -1 charge, the sum is -3, which is equal to the -3 charge of the polyatomic ion. | https://fiveable.me/ap-chem/unit-2/resonance-formal-charge/study-guide/rxtiCdBiDF6V9V7S2UEi | 21 |
53 | The Holocene extinction, otherwise referred to as the sixth mass extinction or Anthropocene extinction, is an ongoing extinction event of species during the present Holocene epoch (with the more recent time sometimes called Anthropocene) as a result of human activity. The included extinctions span numerous families of plants and animals, including mammals, birds, reptiles, amphibians, fishes and invertebrates. With widespread degradation of highly biodiverse habitats such as coral reefs and rainforests, as well as other areas, the vast majority of these extinctions are thought to be undocumented, as the species are undiscovered at the time of their extinction, or no one has yet discovered their extinction. The current rate of extinction of species is estimated at 100 to 1,000 times higher than natural background extinction rates.
The Holocene extinction includes the disappearance of large land animals known as megafauna, starting at the end of the last glacial period. Megafauna outside of the African mainland, which did not evolve alongside humans, proved highly sensitive to the introduction of new predation, and many died out shortly after early humans began spreading and hunting across the Earth (many African species have also gone extinct in the Holocene, but – with few exceptions – megafauna of the mainland was largely unaffected until a few hundred years ago). These extinctions, occurring near the Pleistocene–Holocene boundary, are sometimes referred to as the Quaternary extinction event.
The most popular theory is that human overhunting of species added to existing stress conditions as the extinction coincides with human emergence. Although there is debate regarding how much human predation affected their decline, certain population declines have been directly correlated with human activity, such as the extinction events of New Zealand and Hawaii. Aside from humans, climate change may have been a driving factor in the megafaunal extinctions, especially at the end of the Pleistocene.
Ecologically, humanity has been noted as an unprecedented "global superpredator" that consistently preys on the adults of other apex predators, and has worldwide effects on food webs. There have been extinctions of species on every land mass and in every ocean: there are many famous examples within Africa, Asia, Europe, Australia, North and South America, and on smaller islands. Overall, the Holocene extinction can be linked to the human impact on the environment. The Holocene extinction continues into the 21st century, with meat consumption, overfishing, and ocean acidification and the decline in amphibian populations being a few broader examples of a cosmopolitan decline in biodiversity. Human population growth and increasing per capita consumption are considered to be the primary drivers of this decline.
The 2019 Global Assessment Report on Biodiversity and Ecosystem Services, published by the United Nations' Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services, posits that roughly one million species of plants and animals face extinction within decades as the result of human actions. Organized human existence is jeopardized by increasingly rapid destruction of the systems that support life on Earth, according to the report, the result of one of the most comprehensive studies of the health of the planet ever conducted.
The Holocene extinction is also known as the "sixth extinction", as it is possibly the sixth mass extinction event, after the Ordovician–Silurian extinction events, the Late Devonian extinction, the Permian–Triassic extinction event, the Triassic–Jurassic extinction event, and the Cretaceous–Paleogene extinction event. Mass extinctions are characterized by the loss of at least 75% of species within a geologically short period of time. There is no general agreement on where the Holocene, or anthropogenic, extinction begins, and the Quaternary extinction event, which includes climate change resulting in the end of the last ice age, ends, or if they should be considered separate events at all. Some have suggested that anthropogenic extinctions may have begun as early as when the first modern humans spread out of Africa between 200,000 and 100,000 years ago; this is supported by rapid megafaunal extinction following recent human colonisation in Australia, New Zealand and Madagascar, as might be expected when any large, adaptable predator (invasive species) moves into a new ecosystem. In many cases, it is suggested that even minimal hunting pressure was enough to wipe out large fauna, particularly on geographically isolated islands. Only during the most recent parts of the extinction have plants also suffered large losses.
In The Future of Life (2002), Edward Osborne Wilson of Harvard calculated that, if the current rate of human disruption of the biosphere continues, one-half of Earth's higher lifeforms will be extinct by 2100. A 1998 poll conducted by the American Museum of Natural History found that 70% of biologists acknowledge an ongoing anthropogenic extinction event. At present, the rate of extinction of species is estimated at 100 to 1,000 times higher than the background extinction rate, the historically typical rate of extinction (in terms of the natural evolution of the planet); also, the current rate of extinction is 10 to 100 times higher than in any of the previous mass extinctions in the history of Earth. One scientist estimates the current extinction rate may be 10,000 times the background extinction rate, although most scientists predict a much lower extinction rate than this outlying estimate. Theoretical ecologist Stuart Pimm stated that the extinction rate for plants is 100 times higher than normal.
In a pair of studies published in 2015, extrapolation from observed extinction of Hawaiian snails led to the conclusion that 7% of all species on Earth may have been lost already. A 2021 study published in the journal Frontiers in Forests and Global Change found that only around 3% of the planet's terrestrial surface is ecologically and faunally intact, meaning areas with healthy populations of native animal species and little to no human footprint.
There is widespread consensus among scientists that human activity is accelerating the extinction of many animal species through the destruction of habitats, the consumption of animals as resources, and the elimination of species that humans view as threats or competitors. That humans have become the primary driver of modern extinctions is undeniable; rapidly rising extinction trends impacting numerous animal groups, including mammals, birds, reptiles, and amphibians, have prompted scientists to declare a biodiversity crisis. But some contend that this biotic destruction has yet to reach the level of the previous five mass extinctions, and that this comparison downplays how severe the first five mass extinctions were. Stuart Pimm, for example, asserts that the sixth mass extinction "is something that hasn't happened yet – we are on the edge of it." In November 2017, a statement, titled "World Scientists’ Warning to Humanity: A Second Notice", led by eight authors and signed by 15,364 scientists from 184 countries asserted that, among other things, "we have unleashed a mass extinction event, the sixth in roughly 540 million years, wherein many current life forms could be annihilated or at least committed to extinction by the end of this century." The World Wide Fund for Nature's 2020 Living Planet Report says that wildlife populations have declined by 68% since 1970 as a result of overconsumption, population growth and intensive farming, which is further evidence that humans have unleashed a sixth mass extinction event. A 2021 report in Frontiers in Conservation Science asserts "that we are already on the path of a sixth major extinction is now scientifically undeniable." According to the UNDP's 2020 Human Development Report, The Next Frontier: Human Development and the Anthropocene:
The planet's biodiversity is plunging, with a quarter of species facing extinction, many within decades. Numerous experts believe we are living through, or on the cusp of, a mass species extinction event, the sixth in the history of the planet and the first to be caused by a single organism—us.
The abundance of species extinctions considered anthropogenic, or due to human activity, has sometimes (especially when referring to hypothesized future events) been collectively called the "Anthropocene extinction". "Anthropocene" is a term introduced in 2000. Some now postulate that a new geological epoch has begun, with the most abrupt and widespread extinction of species since the Cretaceous–Paleogene extinction event 66 million years ago.
The term "anthropocene" is being used more frequently by scientists, and some commentators may refer to the current and projected future extinctions as part of a longer Holocene extinction. The Holocene–Anthropocene boundary is contested, with some commentators asserting significant human influence on climate for much of what is normally regarded as the Holocene Epoch. Other commentators place the Holocene–Anthropocene boundary at the industrial revolution and also say that "[f]ormal adoption of this term in the near future will largely depend on its utility, particularly to earth scientists working on late Holocene successions."
It has been suggested that human activity has made the period starting from the mid-20th century different enough from the rest of the Holocene to consider it a new geological epoch, known as the Anthropocene, a term which was considered for inclusion in the timeline of Earth's history by the International Commission on Stratigraphy in 2016. In order to constitute the Holocene as an extinction event, scientists must determine exactly when anthropogenic greenhouse gas emissions began to measurably alter natural atmospheric levels on a global scale, and when these alterations caused changes to global climate. Using chemical proxies from Antarctic ice cores, researchers have estimated the fluctuations of carbon dioxide (CO2) and methane (CH4) gases in the Earth's atmosphere during the late Pleistocene and Holocene epochs. Estimates of the fluctuations of these two gases in the atmosphere, using chemical proxies from Antarctic ice cores, generally indicate that the peak of the Anthropocene occurred within the previous two centuries: typically beginning with the Industrial Revolution, when the highest greenhouse gas levels were recorded.
Activities contributing to extinctions
The Holocene extinction is mainly caused by human activities. Extinction of animals, plants, and other organisms caused by human actions may go as far back as the late Pleistocene, over 12,000 years ago. There is a correlation between megafaunal extinction and the arrival of humans, and contemporary human population size and growth, along with per-capita consumption growth, prominently in the past two centuries, are regarded as the underlying causes of extinction.
Human civilization was founded on and grew from agriculture. The more land used for farming, the greater the population a civilization could sustain, and subsequent popularization of farming led to habitat conversion.
Habitat destruction by humans, including oceanic devastation, such as through overfishing and contamination; and the modification and destruction of vast tracts of land and river systems around the world to meet solely human-centered ends (with 13 percent of Earth's ice-free land surface now used as row-crop agricultural sites, 26 percent used as pastures, and 4 percent urban-industrial areas), thus replacing the original local ecosystems. The sustained conversion of biodiversity rich forests and wetlands into poorer fields and pastures (of lesser carrying capacity for wild species), over the last 10,000 years, has considerably reduced the Earth's carrying capacity for wild birds, among other organisms, in both population size and species count.
Other, related human causes of the extinction event include deforestation, hunting, pollution, the introduction in various regions of non-native species, and the widespread transmission of infectious diseases spread through livestock and crops. Humans both create and destroy crop cultivars and domesticated animal varieties. Advances in transportation and industrial farming have led to monoculture and the extinction of many cultivars. The use of certain plants and animals for food has also resulted in their extinction, including silphium and the passenger pigeon.
Some scholars assert that the emergence of capitalism as the dominant economic system has accelerated ecological exploitation and destruction, and has also exacerbated mass species extinction. CUNY professor David Harvey, for example, posits that the neoliberal era "happens to be the era of the fastest mass extinction of species in the Earth's recent history".
Megafauna were once found on every continent of the world and on large islands such as New Zealand and Madagascar, but are now almost exclusively found on the continent of Africa; by contrast, Australia and the islands just mentioned experienced population crashes and trophic cascades shortly after the arrival of the earliest human settlers. It has been suggested that the African megafauna survived because they evolved alongside humans. The timing of the South American megafaunal extinction appears to precede human arrival, although it has been suggested that human activity at the time may have impacted the global climate enough to cause such an extinction.
It has been noted, in the face of such evidence, that humans are unique in ecology as an unprecedented "global superpredator", regularly preying on large numbers of fully grown terrestrial and marine apex predators, and with a great deal of influence over food webs and climatic systems worldwide. Although significant debate exists as to how much human predation and indirect effects contributed to prehistoric extinctions, certain population crashes have been directly correlated with human arrival. Human activity has been the main cause of mammalian extinctions since the Late Pleistocene. A 2018 study published in PNAS found that since the dawn of human civilization, 83% of wild mammals, 80% of marine mammals, 50% of plants and 15% of fish have vanished. Currently, livestock make up 60% of the biomass of all mammals on earth, followed by humans (36%) and wild mammals (4%). As for birds, 70% are domesticated, such as poultry, whereas only 30% are wild.
Agriculture and climate change
Recent investigations of hunter-gatherer landscape burning have major implications for the current debate about the timing of the Anthropocene and the role that humans may have played in the production of greenhouse gases prior to the Industrial Revolution. Studies of early hunter-gatherers raise questions about the current use of population size or density as a proxy for the amount of land clearance and anthropogenic burning that took place in pre-industrial times. Scientists have questioned the correlation between population size and early territorial alterations. Ruddiman and Ellis' 2009 research paper makes the case that early farmers used more land per capita than growers later in the Holocene, who intensified their labor to produce more food per unit of area (and thus per laborer); it argues that rice agriculture practiced thousands of years ago by relatively small populations created significant environmental impacts through large-scale deforestation.
While a number of human-derived factors are recognized as contributing to rising atmospheric concentrations of CH4 (methane) and CO2 (carbon dioxide), deforestation and land-clearance practices associated with agricultural development may be contributing most to these concentrations globally. Scientists employing a variety of archaeological and paleoecological data argue that the processes contributing to substantial human modification of the environment began many thousands of years ago on a global scale, and thus did not originate as recently as the Industrial Revolution. In 2003 the palaeoclimatologist William Ruddiman advanced a then-uncommon hypothesis, which has since gained popularity, that in the early Holocene 11,000 years ago, atmospheric carbon dioxide and methane levels fluctuated in a pattern different from that of the Pleistocene epoch before it. He argued that the pattern of significant decline in CO2 levels during the last ice age of the Pleistocene inversely correlates with the Holocene, in which there were dramatic increases of CO2 around 8,000 years ago and of CH4 levels about 3,000 years after that. This correlation implies that the source of these additional greenhouse gases in the atmosphere was the growth of human agriculture during the Holocene, such as the anthropogenic expansion of land use and irrigation.
Human arrival in the Caribbean around 6,000 years ago is correlated with the extinction of many species. These include many different genera of ground and arboreal sloths across all islands. These sloths were generally smaller than those found on the South American continent. Megalocnus were the largest genus at up to 90 kilograms (200 lb), Acratocnus were medium-sized relatives of modern two-toed sloths endemic to Cuba, Imagocnus also of Cuba, Neocnus and many others.
Recent research, based on archaeological and paleontological digs on 70 different Pacific islands, has shown that numerous species became extinct as people moved across the Pacific, starting 30,000 years ago in the Bismarck Archipelago and Solomon Islands. It is currently estimated that among the bird species of the Pacific, some 2000 species have gone extinct since the arrival of humans, representing a 20% drop in the biodiversity of birds worldwide.
The first settlers are thought to have arrived in the islands between 300 and 800 CE, with European arrival in the 18th century. Hawaii is notable for its endemism of plants, birds, insects, mollusks and fish; 30% of its organisms are endemic. Many of its species are endangered or have gone extinct, primarily due to accidentally introduced species and livestock grazing. Over 40% of its bird species have gone extinct, and it is the location of 75% of extinctions in the United States. Extinction has increased in Hawaii over the last 200 years and is relatively well documented, with extinctions among native snails used as estimates for global extinction rates.
Australia was once home to a large assemblage of megafauna, with many parallels to those found on the African continent today. Australia's fauna is characterised primarily by marsupial mammals, and many reptiles and birds, all existing as giant forms until recently. Humans arrived on the continent very early, about 50,000 years ago. The extent to which human arrival contributed to these extinctions is controversial; climatic drying of Australia 40,000–60,000 years ago was an unlikely cause, as it was less severe in speed or magnitude than previous regional climate change which failed to kill off megafauna. Extinctions in Australia continued from original settlement until today in both plants and animals, whilst many more animals and plants have declined or are endangered.
Due to the older timeframe and the soil chemistry on the continent, very little subfossil preservation evidence exists relative to elsewhere. However, continent-wide extinction of all genera weighing over 100 kilograms, and six of seven genera weighing between 45 and 100 kilograms occurred around 46,400 years ago (4,000 years after human arrival) and the fact that megafauna survived until a later date on the island of Tasmania following the establishment of a land bridge suggest direct hunting or anthropogenic ecosystem disruption such as fire-stick farming as likely causes. The first evidence of direct human predation leading to extinction in Australia was published in 2016.
A 2021 study found that the rate of extinction of Australia's megafauna is rather unusual, with some generalistic species having gone extinct earlier while highly specialised ones having become extinct later or even still surviving today. A mosaic cause of extinction with different anthropogenic and environmental pressures was proposed.
Within 500 years of the arrival of humans between 2,500 and 2,000 years ago, nearly all of Madagascar's distinct, endemic and geographically isolated megafauna became extinct. The largest animals, of more than 150 kilograms (330 lb), were extinct very shortly after the first human arrival, with large and medium-sized species dying out after prolonged hunting pressure from an expanding human population moving into more remote regions of the island around 1000 years ago. Smaller fauna experienced initial increases due to decreased competition, and then subsequent declines over the last 500 years. All fauna weighing over 10 kilograms (22 lb) died out. The primary reasons for this are human hunting and habitat loss from early aridification, both of which persist and threaten Madagascar's remaining taxa today.
The eight or more species of elephant birds, giant flightless ratites in the genera Aepyornis, Vorombe, and Mullerornis, are extinct from over-hunting, as well as 17 species of lemur, known as giant, subfossil lemurs. Some of these lemurs typically weighed over 150 kilograms (330 lb), and fossils have provided evidence of human butchery on many species.
New Zealand is characterised by its geographic isolation and island biogeography, and had been isolated from mainland Australia for 80 million years. It was the last large land mass to be colonised by humans. The arrival of Polynesian settlers circa 12th century resulted in the extinction of all of the islands' megafaunal birds within several hundred years. The last moa, large flightless ratites, became extinct within 200 years of the arrival of human settlers. The Polynesians also introduced the Polynesian rat. This may have put some pressure on other birds but at the time of early European contact (18th century) and colonisation (19th century) the bird life was prolific. With them, the Europeans brought ship rats, possums, cats and mustelids which decimated native bird life, some of which had adapted flightlessness and ground nesting habits and others had no defensive behavior as a result of having no extant endemic mammalian predators. The kakapo, the world's biggest parrot, which is flightless, now only exists in managed breeding sanctuaries. New Zealand's national emblem, the kiwi, is on the endangered bird list.
There has been a debate as to the extent to which the disappearance of megafauna at the end of the last glacial period can be attributed to human activities by hunting, or even by slaughter of prey populations. Discoveries at Monte Verde in South America and at Meadowcroft Rock Shelter in Pennsylvania have caused a controversy regarding the Clovis culture. There likely would have been human settlements prior to the Clovis Culture, and the history of humans in the Americas may extend back many thousands of years before the Clovis culture. The amount of correlation between human arrival and megafauna extinction is still being debated: for example, in Wrangel Island in Siberia the extinction of dwarf woolly mammoths (approximately 2000 BCE) did not coincide with the arrival of humans, nor did megafaunal mass extinction on the South American continent, although it has been suggested climate changes induced by anthropogenic effects elsewhere in the world may have contributed.
Comparisons are sometimes made between recent extinctions (approximately since the industrial revolution) and the Pleistocene extinction near the end of the last glacial period. The latter is exemplified by the extinction of large herbivores such as the woolly mammoth and the carnivores that preyed on them. Humans of this era actively hunted the mammoth and the mastodon, but it is not known if this hunting was the cause of the subsequent massive ecological changes, widespread extinctions and climate changes.
The ecosystems encountered by the first Americans had not been exposed to human interaction, and may have been far less resilient to human-made changes than the ecosystems encountered by industrial-era humans. Therefore, the actions of the Clovis people, despite seeming insignificant by today's standards, could indeed have had a profound effect on the ecosystems and wildlife, which were entirely unused to human influence.
Africa experienced the smallest decline in megafauna compared to the other continents. This is presumably due to the idea that Afroeurasian megafauna evolved alongside humans, and thus developed a healthy fear of them, unlike the comparatively tame animals of other continents. Unlike other continents, the megafauna of Eurasia went extinct over a relatively long period of time, possibly due to climate fluctuations fragmenting and decreasing populations, leaving them vulnerable to over-exploitation, as with the steppe bison (Bison priscus). The warming of the arctic region caused the rapid decline of grasslands, which had a negative effect on the grazing megafauna of Eurasia. Most of what once was mammoth steppe has been converted to mire, rendering the environment incapable of supporting them, notably the woolly mammoth.
One of the main theories for the extinction is climate change. The climate change theory has suggested that a change in climate near the end of the late Pleistocene stressed the megafauna to the point of extinction. Some scientists favor abrupt climate change as the catalyst for the extinction of the megafauna at the end of the Pleistocene, but there are many who believe increased hunting from early modern humans also played a part, with others even suggesting that the two interacted. However, the annual mean temperature of the current interglacial period for the last 10,000 years is no higher than that of previous interglacial periods, yet some of the same megafauna survived similar temperature increases. In the Americas, a controversial explanation for the shift in climate is presented under the Younger Dryas impact hypothesis, which states that the impact of comets cooled global temperatures.
A 2020 study published in Science Advances found that human population size and/or specific human activities, not climate change, caused rapidly rising global mammal extinction rates during the past 126,000 years. Around 96% of all mammalian extinctions over this time period are attributable to human impacts. According to Tobias Andermann, lead author of the study, "these extinctions did not happen continuously and at constant pace. Instead, bursts of extinctions are detected across different continents at times when humans first reached them. More recently, the magnitude of human driven extinctions has picked up the pace again, this time on a global scale."
Megafauna play a significant role in the lateral transport of mineral nutrients in an ecosystem, tending to translocate them from areas of high to those of lower abundance. They do so by their movement between the time they consume the nutrient and the time they release it through elimination (or, to a much lesser extent, through decomposition after death). In South America's Amazon Basin, it is estimated that such lateral diffusion was reduced over 98% following the megafaunal extinctions that occurred roughly 12,500 years ago. Given that phosphorus availability is thought to limit productivity in much of the region, the decrease in its transport from the western part of the basin and from floodplains (both of which derive their supply from the uplift of the Andes) to other areas is thought to have significantly impacted the region's ecology, and the effects may not yet have reached their limits. The extinction of the mammoths allowed grasslands they had maintained through grazing habits to become birch forests. The new forest and the resulting forest fires may have induced climate change. Such disappearances might be the result of the proliferation of modern humans; some recent studies favor this theory.
Large populations of megaherbivores have the potential to contribute greatly to the atmospheric concentration of methane, an important greenhouse gas. Modern ruminant herbivores produce methane as a byproduct of foregut fermentation during digestion and release it through belching or flatulence. Today, around 20% of annual methane emissions come from livestock. In the Mesozoic, it has been estimated that sauropods could have emitted 520 million tons of methane to the atmosphere annually, contributing to the warmer climate of the time (up to 10 °C warmer than at present). This large estimate follows from the enormous estimated biomass of sauropods and from the fact that methane production by individual herbivores is believed to scale almost proportionally with their body mass.
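Because per-animal output scales roughly with body mass, such continental totals can be approximated by multiplying an allometric per-animal rate by population size. The sketch below is a minimal illustration assuming the allometric relation reportedly used in the sauropod estimate, roughly 0.18 × mass^0.97 litres of methane per day; the herd size and body mass are hypothetical placeholders, not figures from the cited studies.

```python
# Rough scale-up from per-animal methane output to an annual total.
# Assumes CH4 (litres/day) ≈ 0.18 * M^0.97 with M in kg (an assumption taken
# from the sauropod study's reported scaling); herd size and body mass below
# are illustrative, not values from the cited work.

CH4_DENSITY_KG_PER_L = 0.000657  # ~0.657 g/L at room temperature and pressure

def annual_methane_tonnes(body_mass_kg: float, population: int) -> float:
    litres_per_day = 0.18 * body_mass_kg ** 0.97           # per animal
    kg_per_year = litres_per_day * 365 * CH4_DENSITY_KG_PER_L
    return kg_per_year * population / 1000.0                # tonnes per year

# Hypothetical example: 20-tonne sauropods at a population of 10 million.
print(f"{annual_methane_tonnes(20_000, 10_000_000) / 1e6:.1f} million t CH4/yr")
```

With the much larger biomass densities assumed in the study itself, the same kind of scaling produces totals on the order of the hundreds of millions of tonnes per year quoted above.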
Recent studies have indicated that the extinction of megafaunal herbivores may have caused a reduction in atmospheric methane. This hypothesis is relatively new. One study examined the methane emissions from the bison that occupied the Great Plains of North America before contact with European settlers, and estimated that the removal of the bison caused a decrease in annual methane emissions of as much as 2.2 million tons. Another study examined the change in the methane concentration in the atmosphere at the end of the Pleistocene epoch after the extinction of megafauna in the Americas. After early humans migrated to the Americas about 13,000 BP, their hunting and other associated ecological impacts led to the extinction of many megafaunal species there. Calculations suggest that this extinction decreased methane production by about 9.6 million tons per year. This suggests that the absence of megafaunal methane emissions may have contributed to the abrupt climatic cooling at the onset of the Younger Dryas. The decrease in atmospheric methane that occurred at that time, as recorded in ice cores, was 2–4 times more rapid than any other decrease in the last half million years, suggesting that an unusual mechanism was at work.
The hyperdisease hypothesis, proposed by Ross MacPhee in 1997, states that the megafaunal die-off was due to the indirect transmission of diseases by newly arriving aboriginal humans. According to MacPhee, aboriginals, or animals travelling with them such as domestic dogs or livestock, introduced one or more highly virulent diseases into new environments whose native populations had no immunity to them, eventually leading to their extinction. K-selected animals, such as the now-extinct megafauna, are especially vulnerable to disease, as opposed to r-selected animals, which have shorter gestation periods and larger populations. Humans are thought to be the sole cause, as earlier migrations of animals into North America from Eurasia did not cause extinctions.
There are many problems with this theory, as such a disease would have to meet several criteria: it would have to be able to sustain itself in an environment with no hosts, have a high infection rate, and be extremely lethal, with a mortality rate of 50–75%. A disease has to be very virulent to kill off all the individuals in a species, and even a disease as virulent as West Nile fever is unlikely to have caused extinction.
The loss of species from ecological communities, defaunation, is primarily driven by human activity. This has resulted in empty forests, ecological communities depleted of large vertebrates. Defaunation is not to be confused with extinction, as it includes both the disappearance of species and declines in abundance. Defaunation effects were first described at the Symposium of Plant-Animal Interactions at the University of Campinas, Brazil, in 1988, in the context of Neotropical forests. Since then, the term has gained broader usage in conservation biology as a global phenomenon.
Big cat populations have severely declined over the last half-century and could face extinction in the coming decades. According to IUCN estimates, lions are down to 25,000 from 450,000; leopards are down to 50,000 from 750,000; cheetahs are down to 12,000 from 45,000; and tigers are down to 3,000 in the wild from 50,000. A December 2016 study by the Zoological Society of London, Panthera Corporation and the Wildlife Conservation Society showed that cheetahs are far closer to extinction than previously thought, with only 7,100 remaining in the wild, confined to just 9% of their historic range. Human pressures are to blame for the cheetah population crash, including prey loss due to overhunting by people, retaliatory killing by farmers, habitat loss and the illegal wildlife trade.
We are seeing the effects of 7 billion people on the planet. At present rates, we will lose the big cats in 10 to 15 years.
— Naturalist Dereck Joubert, co-founder of the National Geographic Big Cats Initiative
The term pollinator decline refers to the reduction in abundance of insect and other animal pollinators in many ecosystems worldwide beginning at the end of the twentieth century, and continuing into the present day. Pollinators, which are necessary for 75% of food crops, are declining globally in both abundance and diversity. A 2017 study led by Radboud University's Hans de Kroon indicated that the biomass of insect life in Germany had declined by three-quarters in the previous 25 years. Participating researcher Dave Goulson of Sussex University stated that their study suggested that humans are making large parts of the planet uninhabitable for wildlife. Goulson characterized the situation as an approaching "ecological Armageddon", adding that "if we lose the insects then everything is going to collapse." As of 2019, 40% of insect species are in decline, and a third are endangered. The most significant drivers in the decline of insect populations are associated with intensive farming practices, along with pesticide use and climate change. Around 1 to 2 per cent of insects are lost per year.
We have driven the rate of biological extinction, the permanent loss of species, up several hundred times beyond its historical levels, and are threatened with the loss of a majority of all species by the end of the 21st century.
— Peter Raven, former president of the American Association for the Advancement of Science (AAAS), in the foreword to their publication AAAS Atlas of Population and Environment
Various species are predicted to become extinct in the near future, among them the rhinoceros, nonhuman primates, pangolins, and giraffes. Hunting alone threatens bird and mammal populations around the world. The direct killing of megafauna for meat and body parts is the primary driver of their destruction, with 70% of the 362 megafauna species in decline as of 2019. Mammals in particular have suffered such severe losses as a result of human activity that it could take several million years for them to recover. The 189 countries that are signatories to the Convention on Biological Diversity (Rio Accord) have committed to preparing a Biodiversity Action Plan, a first step toward identifying specific endangered species and habitats, country by country.
For the first time since the demise of the dinosaurs 65 million years ago, we face a global mass extinction of wildlife. We ignore the decline of other species at our peril – for they are the barometer that reveals our impact on the world that sustains us.
A June 2020 study published in PNAS posits that the contemporary extinction crisis "may be the most serious environmental threat to the persistence of civilization, because it is irreversible" and that its acceleration "is certain because of the still fast growth in human numbers and consumption rates." The study found that more than 500 vertebrate species are poised to be lost in the next two decades.
Recent extinctions are more directly attributable to human influences, whereas prehistoric extinctions can be attributed to other factors, such as global climate change. The International Union for Conservation of Nature (IUCN) characterises 'recent' extinctions as those that have occurred past the cut-off point of 1500, and at least 875 species went extinct between then and 2012. Some species, such as the Père David's deer and the Hawaiian crow, are extinct in the wild, and survive solely in captive populations. Other populations are only locally extinct (extirpated), still existing elsewhere but reduced in distribution, as with the extinction of gray whales in the Atlantic and of the leatherback sea turtle in Malaysia.
Most recently, insect populations have experienced rapid and surprising declines. Insects have declined at an annual rate of 2.5% over the last 25–30 years. The most severe effects have been reported from Puerto Rico, where insect ground fall declined by 98% over the previous 35 years. Butterflies and moths are experiencing some of the most severe effects, and butterfly species have declined by 58% on farmland in England. In the last ten years, 40% of insect species and 22% of mammal species have disappeared. Germany is experiencing a 75% decline in insect biomass. Climate change and agriculture are believed to be the most significant contributors to the change.
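A constant annual rate of loss compounds quickly, which is what makes figures like these alarming. The short calculation below shows what a steady 2.5% annual decline implies over several decades; it is a simple compounding illustration, not a projection from the cited studies, and real declines need not proceed at a constant rate.

```python
# What a constant 2.5% annual decline in insect abundance would imply over time.
# Purely a compounding illustration, not a projection from the cited studies.

annual_decline = 0.025

for years in (25, 30, 40, 50):
    remaining = (1 - annual_decline) ** years
    print(f"after {years} years: {remaining:.0%} remaining "
          f"({1 - remaining:.0%} lost)")
```

Even this modest-sounding rate removes roughly half of insect abundance within three decades, while the steeper losses reported for Germany and Puerto Rico correspond to considerably higher effective annual rates.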
A 2019 study published in Nature Communications found that rapid biodiversity loss is impacting larger mammals and birds to a much greater extent than smaller ones, with the body mass of such animals expected to shrink by 25% over the next century. Over the past 125,000 years, the average body size of wildlife has fallen by 14% as human actions eradicated megafauna on all continents with the exception of Africa. Another 2019 study published in Biology Letters found that extinction rates are perhaps much higher than previously estimated, in particular for bird species.
The 2019 Global Assessment Report on Biodiversity and Ecosystem Services lists the primary causes of contemporary extinctions in descending order: (1) changes in land and sea use (primarily agriculture and overfishing respectively); (2) direct exploitation of organisms such as hunting; (3) anthropogenic climate change; (4) pollution and (5) invasive alien species spread by human trade. This report, along with the 2020 Living Planet Report by the WWF, both project that climate change will be the leading cause in the next several decades.
Global warming is widely accepted as a contributor to extinction worldwide, much as previous extinction events have generally included a rapid change in global climate and meteorology. It is also expected to disrupt sex ratios in the many reptiles that have temperature-dependent sex determination.
The clearing of land for palm oil plantations releases carbon emissions held in the peatlands of Indonesia. Palm oil mainly serves as a cheap cooking oil, and also as a (controversial) biofuel. However, damage to peatland contributes to 4% of global greenhouse gas emissions, and 8% of those caused by burning fossil fuels. Palm oil cultivation has also been criticized for other impacts on the environment, including deforestation, which has threatened critically endangered species such as the orangutan and the tree-kangaroo. The IUCN stated in 2016 that these species could go extinct within a decade if measures are not taken to preserve the rainforests in which they live.
Some scientists and academics assert that industrial agriculture and the growing demand for meat is contributing to significant global biodiversity loss as this is a significant driver of deforestation and habitat destruction; species-rich habitats, such as significant portions of the Amazon region, are being converted to agriculture for meat production. A 2017 study by the World Wildlife Fund (WWF) found that 60% of biodiversity loss can be attributed to the vast scale of feed crop cultivation required to rear tens of billions of farm animals. Moreover, a 2006 report by the Food and Agriculture Organization (FAO) of the United Nations, Livestock's Long Shadow, also found that the livestock sector is a "leading player" in biodiversity loss. More recently, in 2019, the IPBES Global Assessment Report on Biodiversity and Ecosystem Services attributed much of this ecological destruction to agriculture and fishing, with the meat and dairy industries having a very significant impact. Since the 1970s food production has soared in order to feed a growing human population and bolster economic growth, but at a huge price to the environment and other species. The report says some 25% of the earth's ice-free land is used for cattle grazing. A 2020 study published in Nature Communications warned that human impacts from housing, industrial agriculture and in particular meat consumption are wiping out a combined 50 billion years of earth's evolutionary history (defined as phylogenetic diversity[a]) and driving to extinction some of the "most unique animals on the planet," among them the Aye-aye lemur, the Chinese crocodile lizard and the pangolin. Said lead author Rikki Gumbs:
We know from all the data we have for threatened species, that the biggest threats are agriculture expansion and the global demand for meat. Pasture land, and the clearing of rainforests for production of soy, for me, are the largest drivers – and the direct consumption of animals.
Rising levels of carbon dioxide are resulting in an influx of this gas into the ocean, increasing its acidity. Marine organisms that possess calcium carbonate shells or exoskeletons experience physiological stress as acidification reduces the availability of carbonate and can dissolve existing shells. This is already contributing to the degradation of coral reefs worldwide, which provide valuable habitat and maintain high biodiversity. Marine gastropods, bivalves and other invertebrates are also affected, as are the organisms that feed on them. According to a 2018 study published in Science, global orca populations are poised to collapse due to toxic chemical and PCB pollution. PCBs are still leaking into the sea despite having been banned for decades.
Some researchers suggest that by 2050 there could be more plastic than fish in the oceans by weight, with about 8,800,000 metric tons (9,700,000 short tons) of plastic discharged into the oceans annually. Single-use plastics, such as plastic shopping bags, make up the bulk of this and are often ingested by marine life such as sea turtles. These plastics can degrade into microplastics, smaller particles that can affect a wider array of species. Microplastics make up the bulk of the Great Pacific Garbage Patch, and their small size makes cleanup efforts more difficult.
In March 2019, Nature Climate Change published a study by ecologists from Yale University, who found that over the next half century, human land use will reduce the habitats of 1,700 species by up to 50%, pushing them closer to extinction. That same month PLOS Biology published a similar study drawing on work at the University of Queensland, which found that "more than 1,200 species globally face threats to their survival in more than 90% of their habitat and will almost certainly face extinction without conservation intervention".
Since 1970, the populations of migratory freshwater fish have declined by 76%, according to research published by the Zoological Society of London in July 2020. Overall, around one in three freshwater fish species are threatened with extinction due to human-driven habitat degradation and overfishing.
Overhunting can reduce the local population of game animals by more than half, as well as reducing population density, and may lead to extinction for some species. Populations located nearer to villages are significantly more at risk of depletion. Several conservationist organizations, among them IFAW and HSUS, assert that trophy hunters, particularly from the United States, are playing a significant role in the decline of giraffes, which they refer to as a "silent extinction".
The surge in the mass killings by poachers involved in the illegal ivory trade along with habitat loss is threatening African elephant populations. In 1979, their populations stood at 1.7 million; at present there are fewer than 400,000 remaining. Prior to European colonization, scientists believe Africa was home to roughly 20 million elephants. According to the Great Elephant Census, 30% of African elephants (or 144,000 individuals) disappeared over a seven-year period, 2007 to 2014. African elephants could become extinct by 2035 if poaching rates continue.
Fishing has had a devastating effect on marine organism populations for several centuries, even before the explosion of destructive and highly effective fishing practices like trawling. Humans are unique among predators in that they regularly prey on other adult apex predators, particularly in marine environments; bluefin tuna, blue whales, North Atlantic right whales and over fifty species of sharks and rays are vulnerable to predation pressure from human fishing, in particular commercial fishing. A 2016 study published in Science concluded that humans tend to hunt larger species, and that this could disrupt ocean ecosystems for millions of years. A 2020 study published in Science Advances found that around 18% of marine megafauna, including iconic species such as the great white shark, are at risk of extinction from human pressures over the next century; in a worst-case scenario, 40% could go extinct over the same time period. According to a 2021 study published in Nature, 71% of oceanic shark and ray populations were destroyed by overfishing (the primary driver of ocean defaunation) between 1970 and 2018, and these populations are nearing the "point of no return", with 24 of the 31 species now threatened with extinction and several classified as critically endangered.
If this pattern goes unchecked, the future oceans would lack many of the largest species in today’s oceans. Many large species play critical roles in ecosystems and so their extinctions could lead to ecological cascades that would influence the structure and function of future ecosystems beyond the simple fact of losing those species.
The decline of amphibian populations has also been identified as an indicator of environmental degradation. As well as habitat loss, introduced predators and pollution, chytridiomycosis, a fungal infection accidentally spread by human travel, globalization and the wildlife trade, has caused severe population drops in over 500 amphibian species, and perhaps 90 extinctions, including (among many others) the extinction of the golden toad in Costa Rica and the gastric-brooding frog in Australia. Many other amphibian species now face extinction, including the reduction of Rabb's fringe-limbed treefrog to an endling and the extinction of the Panamanian golden frog in the wild. Chytrid fungus has spread across Australia, New Zealand, Central America and Africa, including countries with high amphibian diversity such as the cloud forests of Honduras and Madagascar. Batrachochytrium salamandrivorans is a similar infection currently threatening salamanders. Amphibians are now the most endangered vertebrate group, having existed for more than 300 million years through three other mass extinctions.
Millions of bats in the US have been dying off since 2012 due to a fungal infection spread from European bats, which appear to be immune. Population drops have been as great as 90% within five years, and extinction of at least one bat species is predicted. There is currently no form of treatment, and such declines have been described as "unprecedented" in bat evolutionary history by Alan Hicks of the New York State Department of Environmental Conservation.
Between 2007 and 2013, over ten million beehives were abandoned due to colony collapse disorder, which causes worker bees to abandon the queen. Though no single cause has gained widespread acceptance by the scientific community, proposals include infections with Varroa and Acarapis mites; malnutrition; various pathogens; genetic factors; immunodeficiencies; loss of habitat; changing beekeeping practices; or a combination of factors.
Some leading scientists have advocated for the global community to designate 30 percent of the planet as protected areas by 2030, and 50 percent by 2050, in order to mitigate the contemporary extinction crisis as the human population is projected to grow to 10 billion by the middle of the century. Human consumption of food and water resources is also projected to double by this time.
In November 2018, the UN's biodiversity chief Cristiana Pașca Palmer urged people around the world to put pressure on governments to implement significant protections for wildlife by 2020, as rampant biodiversity loss is a "silent killer" as dangerous as global warming, but has received little attention by comparison. She says that "It’s different from climate change, where people feel the impact in everyday life. With biodiversity, it is not so clear but by the time you feel what is happening, it may be too late." In January 2020, the UN Convention on Biological Diversity drafted a Paris-style plan to stop biodiversity and ecosystem collapse by setting a deadline of 2030 to protect 30% of the earth's land and oceans and reduce pollution by 50%, with the goal of allowing for the restoration of ecosystems by 2050. The world failed to meet similar targets for 2020 set by the convention during a summit in Japan in 2010. Of the 20 biodiversity targets proposed, only six were "partially achieved" by the deadline. It was called a global failure by Inger Andersen, head of the United Nations Environment Programme:
"From COVID-19 to massive wildfires, floods, melting glaciers and unprecedented heat, our failure to meet the Aichi (biodiversity) targets — protect our our home — has very real consequences. We can no longer afford to cast nature to the side."
Some scientists have proposed keeping extinctions below 20 per year for the next century as a global target to reduce species loss, which is the biodiversity equivalent of the 2 °C climate target, although it is still much higher than the normal background rate of two per year prior to anthropogenic impacts on the natural world. In fact, instead of instituting mitigation strategies, many right-wing leaders of major countries, including the United States, Brazil and Australia, have recently implemented anti-environment policies.
An October 2020 report on the "era of pandemics" from IPBES found that many of the same human activities that contribute to biodiversity loss and climate change, including deforestation and the wildlife trade, have also increased the risk of future pandemics. The report offers several policy options to reduce such risk, such as taxing meat production and consumption, cracking down on the illegal wildlife trade, removing high disease-risk species from the legal wildlife trade, and eliminating subsidies to businesses which are harmful to the environment. According to marine zoologist John Spicer, "the COVID-19 crisis is not just another crisis alongside the biodiversity crisis and the climate change crisis. Make no mistake, this is one big crisis – the greatest that humans have ever faced."
According to top scientists in a 2021 paper published in Frontiers in Conservation Science, humanity almost certainly faces a "ghastly future" of declining health, biodiversity collapse, climate change-driven social upheaval, displacement and resource conflict, and resource exhaustion, unless major efforts to change human industry and activity are rapidly undertaken.
- Effects of global warming
- Extinction symbol
- Extinction Rebellion
- IUCN Red List
- Late Quaternary prehistoric birds
- List of extinct animals
- List of extinct plants
- List of recently extinct mammals
- List of recently extinct birds
- List of recently extinct invertebrates
- List of recently extinct plants
- List of recently extinct reptiles
- List of recently extinct amphibians
- Pleistocene rewilding
- Planetary boundaries
- Racing Extinction (2015 documentary film)
- The Anthropocene Extinction (2015 album)
- Timeline of extinctions in the Holocene
a. ^ Phylogenetic diversity (PD) is the sum of the phylogenetic branch lengths in years connecting a set of species to each other across their phylogenetic tree, and measures their collective contribution to the tree of life.
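As a concrete illustration of the definition above, the snippet below computes phylogenetic diversity for a toy tree by summing the branch lengths (in years) needed to connect a chosen set of species; the tree and species names are invented purely for the example.

```python
# Toy computation of phylogenetic diversity (PD): the total branch length, in
# years, needed to connect a set of species across their phylogenetic tree.
# The tree below is made up for illustration only.

# Each node maps to (parent, branch_length_in_years); the root has no parent.
TREE = {
    "root":      (None, 0),
    "clade_A":   ("root", 30_000_000),
    "species_1": ("clade_A", 10_000_000),
    "species_2": ("clade_A", 10_000_000),
    "species_3": ("root", 40_000_000),
}

def phylogenetic_diversity(tree, species):
    """Sum the lengths of all branches on the paths from the chosen species to the root."""
    used_branches = set()
    for node in species:
        while tree[node][0] is not None:   # walk up to the root
            used_branches.add(node)        # each branch is counted once
            node = tree[node][0]
    return sum(tree[b][1] for b in used_branches)

print(phylogenetic_diversity(TREE, ["species_1", "species_3"]))  # 80000000 years
```

In this toy tree, losing species_1 removes only its 10-million-year terminal branch while species_2 survives, but losing both would also strip the 30-million-year branch they share, which is why extinctions of evolutionarily distinct lineages erase disproportionate amounts of the tree of life.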
- Hume, J. P.; Walters, M. (2012). Extinct Birds. London: A & C Black. ISBN 978-1-4081-5725-1.
- Diamond, Jared (1999). "Up to the Starting Line". Guns, Germs, and Steel. W.W. Norton. pp. 43–44. ISBN 978-0-393-31755-8.
- Ripple WJ, Wolf C, Newsome TM, Galetti M, Alamgir M, Crist E, Mahmoud MI, Laurance WF (13 November 2017). "World Scientists' Warning to Humanity: A Second Notice" (PDF). BioScience. 67 (12): 1026–1028. doi:10.1093/biosci/bix125.
Moreover, we have unleashed a mass extinction event, the sixth in roughly 540 million years, wherein many current life forms could be annihilated or at least committed to extinction by the end of this century.
- Ceballos, Gerardo; Ehrlich, Paul R. (8 June 2018). "The misunderstood sixth mass extinction". Science. 360 (6393): 1080–1081. Bibcode:2018Sci...360.1080C. doi:10.1126/science.aau0191. OCLC 7673137938. PMID 29880679. S2CID 46984172.
- Dirzo, Rodolfo; Young, Hillary S.; Galetti, Mauro; Ceballos, Gerardo; Isaac, Nick J. B.; Collen, Ben (2014). "Defaunation in the Anthropocene" (PDF). Science. 345 (6195): 401–406. Bibcode:2014Sci...345..401D. doi:10.1126/science.1251817. PMID 25061202. S2CID 206555761.
In the past 500 years, humans have triggered a wave of extinction, threat, and local population declines that may be comparable in both rate and magnitude with the five previous mass extinctions of Earth’s history.
- Hollingsworth, Julia (June 11, 2019). "Almost 600 plant species have become extinct in the last 250 years". CNN. Retrieved January 14, 2020.
The research -- published Monday in Nature, Ecology & Evolution journal -- found that 571 plant species have disappeared from the wild worldwide, and that plant extinction is occurring up to 500 times faster than the rate it would without human intervention.
- Pimm, Stuart L.; Russell, Gareth J.; Gittleman, John L.; Brooks, Thomas M. (1995). "The Future of Biodiversity". Science. 269 (5222): 347–350. Bibcode:1995Sci...269..347P. doi:10.1126/science.269.5222.347. PMID 17841251. S2CID 35154695.
- Lawton, J. H.; May, R. M. (1995). "Extinction Rates". Journal of Evolutionary Biology. 9: 124–126. doi:10.1046/j.1420-9101.1996.t01-1-9010124.x.
- De Vos, Jurriaan M.; Joppa, Lucas N.; Gittleman, John L.; Stephens, Patrick R.; Pimm, Stuart L. (2014-08-26). "Estimating the normal background rate of species extinction" (PDF). Conservation Biology. 29 (2): 452–462. doi:10.1111/cobi.12380. ISSN 0888-8892. PMID 25159086.
- Teyssèdre, A. (2004). "Biodiversity and Global Change". Towards a sixth mass extinction crisis?. Paris: ADPF. ISBN 978-2-914-935289.
- Pimm, S. L.; Jenkins, C. N.; Abell, R.; Brooks, T. M.; Gittleman, J. L.; Joppa, L. N.; Raven, P. H.; Roberts, C. M.; Sexton, J. O. (30 May 2014). "The biodiversity of species and their rates of extinction, distribution, and protection" (PDF). Science. 344 (6187): 1246752. doi:10.1126/science.1246752. PMID 24876501. S2CID 206552746.
The overarching driver of species extinction is human population growth and increasing per capita consumption.
- "Without humans, the whole world could look like Serengeti". EurekAlert!. Retrieved August 16, 2020.
The existence of Africa's many species of mammals is thus not due to an optimal climate and environment, but rather because it is the only place where they have not yet been eradicated by humans. The underlying reason includes evolutionary adaptation of large mammals to humans as well as greater pest pressure on human populations in long-inhabited Africa in the past.
- Faurby, Søren; Svenning, Jens-Christian (2015). "Historic and prehistoric human‐driven extinctions have reshaped global mammal diversity patterns". Diversity and Distributions. 21 (10): 1155–1166. doi:10.1111/ddi.12369. hdl:10261/123512.
- Galetti, Mauro; Moleón, Marcos; Jordano, Pedro; Pires, Mathias M.; Guimarães, Paulo R.; Pape, Thomas; Nichols, Elizabeth; Hansen, Dennis; Olesen, Jens M.; Munk, Michael; de Mattos, Jacqueline S. (2018). "Ecological and evolutionary legacy of megafauna extinctions: Anachronisms and megafauna interactions" (PDF). Biological Reviews. 93 (2): 845–862. doi:10.1111/brv.12374. PMID 28990321. S2CID 4762203.
- Darimont, Chris T.; Fox, Caroline H.; Bryan, Heather M.; Reimchen, Thomas E. (21 August 2015). "The unique ecology of human predators". Science. 349 (6250): 858–860. Bibcode:2015Sci...349..858D. doi:10.1126/science.aac4249. ISSN 0036-8075. PMID 26293961. S2CID 4985359.
- Wake, David B.; Vredenburg, Vance T. (2008-08-12). "Are we in the midst of the sixth mass extinction? A view from the world of amphibians". Proceedings of the National Academy of Sciences. 105 (Supplement 1): 11466–11473. Bibcode:2008PNAS..10511466W. doi:10.1073/pnas.0801921105. ISSN 0027-8424. PMC 2556420. PMID 18695221.
- Ceballos, Gerardo; Ehrlich, Paul R.; Dirzo, Rodolfo (23 May 2017). "Biological annihilation via the ongoing sixth mass extinction signaled by vertebrate population losses and declines". PNAS. 114 (30): E6089–E6096. doi:10.1073/pnas.1704949114. PMC 5544311. PMID 28696295.
Much less frequently mentioned are, however, the ultimate drivers of those immediate causes of biotic destruction, namely, human overpopulation and continued population growth, and overconsumption, especially by the rich. These drivers, all of which trace to the fiction that perpetual growth can occur on a finite planet, are themselves increasing rapidly.
- Cockburn, Harry (March 29, 2019). "Population explosion fuelling rapid reduction of wildlife on African savannah, study shows". The Independent. Retrieved April 1, 2019.
Encroachment by people into one of Africa’s most celebrated ecosystems is “squeezing the wildlife in its core”, by damaging habitation and disrupting the migration routes of animals, a major international study has concluded.
- Stokstad, Erik (5 May 2019). "Landmark analysis documents the alarming global decline of nature". Science. AAAS. Retrieved 26 August 2020.
For the first time at a global scale, the report has ranked the causes of damage. Topping the list, changes in land use—principally agriculture—that have destroyed habitat. Second, hunting and other kinds of exploitation. These are followed by climate change, pollution, and invasive species, which are being spread by trade and other activities. Climate change will likely overtake the other threats in the next decades, the authors note. Driving these threats are the growing human population, which has doubled since 1970 to 7.6 billion, and consumption. (Per capita of use of materials is up 15% over the past 5 decades.)
- Plumer, Brad (May 6, 2019). "Humans Are Speeding Extinction and Altering the Natural World at an 'Unprecedented' Pace". The New York Times. Retrieved May 6, 2019.
“Human actions threaten more species with global extinction now than ever before,” the report concludes, estimating that “around 1 million species already face extinction, many within decades, unless action is taken.”
- Staff (May 6, 2019). "Media Release: Nature's Dangerous Decline 'Unprecedented'; Species Extinction Rates 'Accelerating'". Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services. Retrieved May 6, 2019.
- "World is 'on notice' as major UN report shows one million species face extinction". UN News. May 6, 2019. Retrieved January 8, 2020.
- Watts, Jonathan (May 6, 2019). "Human society under urgent threat from loss of Earth's natural life". The Guardian. Retrieved May 16, 2019.
- Kolbert, Elizabeth (2014). The Sixth Extinction: An Unnatural History. New York City: Henry Holt and Company. ISBN 978-0805092998.
- Ceballos, Gerardo; Ehrlich, Paul R.; Barnosky, Anthony D.; García, Andrés; Pringle, Robert M.; Palmer, Todd M. (2015). "Accelerated modern human–induced species losses: Entering the sixth mass extinction". Science Advances. 1 (5): e1400253. Bibcode:2015SciA....1E0253C. doi:10.1126/sciadv.1400253. PMC 4640606. PMID 26601195.
All of these are related to human population size and growth, which increases consumption (especially among the rich), and economic inequity.
- Williams, Mark; Zalasiewicz, Jan; Haff, P. K.; Schwägerl, Christian; Barnosky, Anthony D.; Ellis, Erle C. (2015). "The Anthropocene Biosphere". The Anthropocene Review. 2 (3): 196–219. doi:10.1177/2053019615591020. S2CID 7771527.
- Barnosky, Anthony D.; Matzke, Nicholas; Tomiya, Susumu; Wogan, Guinevere O. U.; Swartz, Brian; Quental, Tiago B.; Marshall, Charles; McGuire, Jenny L.; Lindsey, Emily L.; Maguire, Kaitlin C.; Mersey, Ben; Ferrer, Elizabeth A. (3 March 2011). "Has the Earth's sixth mass extinction already arrived?". Nature. 471 (7336): 51–57. Bibcode:2011Natur.471...51B. doi:10.1038/nature09678. PMID 21368823. S2CID 4424650.
- Wilson, Edward O. (2003). The Future of life (1st Vintage Books ed.). New York: Vintage Books. ISBN 9780679768111.
- Doughty, C. E.; Wolf, A.; Field, C. B. (2010). "Biophysical feedbacks between the Pleistocene megafauna extinction and climate: The first human‐induced global warming?". Geophysical Research Letters. 37 (15): n/a. Bibcode:2010GeoRL..3715703D. doi:10.1029/2010GL043985. S2CID 54849882.
- Perry, George L. W.; Wheeler, Andrew B.; Wood, Jamie R.; Wilmshurst, Janet M. (2014-12-01). "A high-precision chronology for the rapid extinction of New Zealand moa (Aves, Dinornithiformes)". Quaternary Science Reviews. 105: 126–135. Bibcode:2014QSRv..105..126P. doi:10.1016/j.quascirev.2014.09.025.
- Crowley, Brooke E. (2010-09-01). "A refined chronology of prehistoric Madagascar and the demise of the megafauna". Quaternary Science Reviews. Special Theme: Case Studies of Neodymium Isotopes in Paleoceanography. 29 (19–20): 2591–2603. Bibcode:2010QSRv...29.2591C. doi:10.1016/j.quascirev.2010.06.030.
- Li, Sophia (2012-09-20). "Has Plant Life Reached Its Limits?". Green Blog. Retrieved 2016-01-22.
- "National Survey Reveals Biodiversity Crisis – Scientific Experts Believe We are in Midst of Fastest Mass Extinction in Earth's History". American Museum of Natural History Press Release. 1998. Retrieved 10 February 2018.
- De Vos, Jurriaan M.; Joppa, Lucas N.; Gittleman, John L.; Stephens, Patrick R.; Pimm, Stuart L. (August 26, 2014). "Estimating the normal background rate of species extinction" (PDF). Conservation Biology. 29 (2): 452–462. doi:10.1111/cobi.12380. PMID 25159086.
- Lawton, J. H.; May, R. M. (1995). "Extinction Rates". Journal of Evolutionary Biology. 9 (1): 124–126. doi:10.1046/j.1420-9101.1996.t01-1-9010124.x.
- Li, S. (2012). "Has Plant Life Reached Its Limits?". New York Times. Retrieved 10 February 2018.
- "Research shows catastrophic invertebrate extinction in Hawai'i and globally". Phys.org. 2015. Retrieved 10 February 2018.
- Régnier, Claire; Achaz, Guillaume; Lambert, Amaury; Cowie, Robert H.; Bouchet, Philippe; Fontaine, Benoît (23 June 2015). "Mass extinction in poorly known taxa". Proceedings of the National Academy of Sciences. 112 (25): 7761–7766. Bibcode:2015PNAS..112.7761R. doi:10.1073/pnas.1502350112. PMC 4485135. PMID 26056308.
- Carrington, Damian (April 15, 2021). "Just 3% of world's ecosystems remain intact, study suggests". The Guardian. Retrieved April 16, 2021.
- Plumptre, Andrew J.; Baisero, Daniele; et al. (2021). "Where Might We Find Ecologically Intact Communities?". Frontiers in Forests and Global Change. 4. doi:10.3389/ffgc.2021.626635.
- Vignieri, S. (25 July 2014). "Vanishing fauna (Special issue)". Science. 345 (6195): 392–412. Bibcode:2014Sci...345..392V. doi:10.1126/science.345.6195.392. PMID 25061199.
- Andermann, Tobias; Faurby, Søren; Turvey, Samuel T.; Antonelli, Alexandre; Silvestro, Daniele (1 September 2020). "The past and future human impact on mammalian diversity". Science Advances. 6 (36): eabb2313. doi:10.1126/sciadv.abb2313. ISSN 2375-2548. PMC 7473673. PMID 32917612. Text and images are available under a Creative Commons Attribution 4.0 International License.
- Woodward, Aylin (April 8, 2019). "So many animals are going extinct that it could take Earth 10 million years to recover". Business Insider. Retrieved April 9, 2019.
Lowery doesn't think we've strayed into Sixth Extinction territory yet. But he and Fraass agree that squabbling over what constitutes that distinction is besides the point. "We have to work to save biodiversity before it's gone. That's the important takeaway here," Lowery said. There is consensus on one aspect of the extinction trend, however: Homo sapiens are to blame. According to a 2014 study, current extinction rates are 1,000 times higher than they would be if humans weren't around.
- Brannen, Peter (13 June 2017). "Earth Is Not in the Midst of a Sixth Mass Extinction". The Atlantic. Retrieved 28 November 2020.
Many of those making facile comparisons between the current situation and past mass extinctions don’t have a clue about the difference in the nature of the data, much less how truly awful the mass extinctions recorded in the marine fossil record actually were.
- Carrington, Damian (10 July 2017). "Earth's sixth mass extinction event under way, scientists warn". The Guardian. Retrieved November 4, 2017.
- Greenfield, Patrick (September 9, 2020). "Humans exploiting and destroying nature on unprecedented scale – report". The Guardian. Retrieved September 10, 2020.
- Briggs, Helen (September 10, 2020). "Wildlife in 'catastrophic decline' due to human destruction, scientists warn". BBC. Retrieved September 10, 2020.
- Lewis, Sophie (September 9, 2020). "Animal populations worldwide have declined by almost 70% in just 50 years, new report says". CBS News. Retrieved October 22, 2020.
- Bradshaw, Corey J. A.; Ehrlich, Paul R.; Beattie, Andrew; Ceballos, Gerardo; Crist, Eileen; Diamond, Joan; Dirzo, Rodolfo; Ehrlich, Anne H.; Harte, John; Harte, Mary Ellen; Pyke, Graham; Raven, Peter H.; Ripple, William J.; Saltré, Frédérik; Turnbull, Christine; Wackernagel, Mathis; Blumstein, Daniel T. (2021). "Underestimating the Challenges of Avoiding a Ghastly Future". Frontiers in Conservation Science. 1. doi:10.3389/fcosc.2020.615419.
- "The Next Frontier: Human Development and the Anthropocene" (PDF). UNDP. December 15, 2020. p. 3. Retrieved December 16, 2020.
- Wooldridge, S. A. (9 June 2008). "Mass extinctions past and present: a unifying hypothesis" (PDF). Biogeosciences Discussions. 5 (3): 2401–2423. Bibcode:2008BGD.....5.2401W. doi:10.5194/bgd-5-2401-2008.
- Jackson, J. B. C. (Aug 2008). "Colloquium paper: ecological extinction and evolution in the brave new ocean". Proceedings of the National Academy of Sciences of the United States of America. 105 (Suppl 1): 11458–11465. Bibcode:2008PNAS..10511458J. doi:10.1073/pnas.0802812105. ISSN 0027-8424. PMC 2556419. PMID 18695220.
- Zalasiewicz, Jan; Williams, Mark; Smith, Alan; Barry, Tiffany L.; Coe, Angela L.; Bown, Paul R.; Brenchley, Patrick; Cantrill, David; Gale, Andrew; Gibbard, Philip; Gregory, F. John; Hounslow, Mark W.; Kerr, Andrew C.; Pearson, Paul; Knox, Robert; Powell, John; Waters, Colin; Marshall, John; Oates, Michael; Rawson, Peter; Stone, Philip (2008). "Are we now living in the Anthropocene". GSA Today. 18 (2): 4. doi:10.1130/GSAT01802A.1.
- Elewa, Ashraf M. T. (2008). "14. Current mass extinction". In Elewa, Ashraf M. T. (ed.). Mass Extinction. pp. 191–194. doi:10.1007/978-3-540-75916-4_14. ISBN 978-3-540-75915-7.
- Ruddiman, W. F. (2003). "The anthropogenic greenhouse gas era began thousands of years ago" (PDF). Climatic Change. 61 (3): 261–293. CiteSeerX 10.1.1.651.2119. doi:10.1023/b:clim.0000004577.17928.fa. S2CID 2501894. Archived from the original (PDF) on 2006-09-03.
- Waters, Colin N.; Zalasiewicz, Jan; Summerhayes, Colin; Barnosky, Anthony D.; Poirier, Clément; Gałuszka, Agnieszka; Cearreta, Alejandro; Edgeworth, Matt; Ellis, Erle C. (2016-01-08). "The Anthropocene is functionally and stratigraphically distinct from the Holocene". Science. 351 (6269): aad2622. doi:10.1126/science.aad2622. ISSN 0036-8075. PMID 26744408. S2CID 206642594.
- "Working Group on the 'Anthropocene'". Subcommission on Quaternary Stratigraphy. Retrieved 21 January 2016.
- Carrington, Damian (August 29, 2016). "The Anthropocene epoch: scientists declare dawn of human-influenced age". The Guardian. Retrieved August 30, 2016.
- Cruzten, P. J. (2002). "Geology of mankind: The Anthropocene". Nature. 415 (6867): 23. Bibcode:2002Natur.415...23C. doi:10.1038/415023a. PMID 11780095. S2CID 9743349.
- Steffen, Will; Persson, Åsa; Deutsch, Lisa; Zalasiewicz, Jan; Williams, Mark; Richardson, Katherine; Crumley, Carole; Crutzen, Paul; Folke, Carl; Gordon, Line; Molina, Mario; Ramanathan, Veerabhadran; Rockström, Johan; Scheffer, Marten; Schellnhuber, Hans Joachim; Svedin, Uno (2011). "The Anthropocene: From Global Change to Planetary Stewardship". Ambio. 40 (7): 739–761. doi:10.1007/s13280-011-0185-x. PMC 3357752. PMID 22338713.
- Sandom, Christopher; Faurby, Søren; Sandel, Brody; Svenning, Jens-Christian (4 June 2014). "Global late Quaternary megafauna extinctions linked to humans, not climate change". Proceedings of the Royal Society B. 281 (1787): 20133254. doi:10.1098/rspb.2013.3254. PMC 4071532. PMID 24898370.
- Smith, Felisa A.; et al. (April 20, 2018). "Body size downgrading of mammals over the late Quaternary". Science. 360 (6386): 310–313. doi:10.1126/science.aao5987. PMID 29674591.
- Syvitski, Jaia; Waters, Colin N.; Day, John; et al. (2020). "Extraordinary human energy consumption and resultant geological impacts beginning around 1950 CE initiated the proposed Anthropocene Epoch". Communications Earth & Environment. 1 (32). doi:10.1038/s43247-020-00029-y. S2CID 222415797.
Human population has exceeded historical natural limits, with 1) the development of new energy sources, 2) technological developments in aid of productivity, education and health, and 3) an unchallenged position on top of food webs. Humans remain Earth’s only species to employ technology so as to change the sources, uses, and distribution of energy forms, including the release of geologically trapped energy (i.e. coal, petroleum, uranium). In total, humans have altered nature at the planetary scale, given modern levels of human-contributed aerosols and gases, the global distribution of radionuclides, organic pollutants and mercury, and ecosystem disturbances of terrestrial and marine environments. Approximately 17,000 monitored populations of 4005 vertebrate species have suffered a 60% decline between 1970 and 2014, and ~1 million species face extinction, many within decades. Humans' extensive 'technosphere', now reaches ~30 Tt, including waste products from non-renewable resources.
- Carrington, Damian (May 21, 2018). "Humans just 0.01% of all life but have destroyed 83% of wild mammals – study". The Guardian. Retrieved May 25, 2018.
- Bar-On, Yinon M; Phillips, Rob; Milo, Ron (2018). "The biomass distribution on Earth". Proceedings of the National Academy of Sciences. 115 (25): 6506–6511. doi:10.1073/pnas.1711842115. PMC 6016768. PMID 29784790.
- Ruddiman, W.F. (2009). "Effect of per-capita land use changes on Holocene forest clearance and CO2 emissions". Quaternary Science Reviews. 28 (27–28): 3011–3015. Bibcode:2009QSRv...28.3011R. doi:10.1016/j.quascirev.2009.05.022.
- Hooke, R. LeB.; Martin-Duque, J. F.; Pedraza, J. (2012). "Land transformation by humans: A review". GSA Today. 22 (12): 4–10. doi:10.1130/GSAT151A.1. S2CID 120172847.
- Vitousek, P. M.; Mooney, H. A.; Lubchenco, J.; Melillo, J. M. (1997). "Human Domination of Earth's Ecosystems". Science. 277 (5325): 494–499. CiteSeerX 10.1.1.318.6529. doi:10.1126/science.277.5325.494.
- Gaston, K.J.; Blackburn, T.N.G.; Klein Goldewijk, K. (2003). "Habitat conversion and global avian biodiversity loss". Proceedings of the Royal Society B. 270 (1521): 1293–1300. doi:10.1098/rspb.2002.2303. PMC 1691371. PMID 12816643.
- Teyssèdre, A.; Couvet, D. (2007). "Expected impact of agriculture expansion on the global avifauna". C. R. Biologies. 30 (3): 247–254. doi:10.1016/j.crvi.2007.01.003. PMID 17434119.
- "Measuring extinction, species by species". The Economic Times. 2008-11-06. Archived from the original on 2013-05-02. Retrieved 2010-05-20.
- Torres, Luisa (September 23, 2019). "When We Love Our Food So Much That It Goes Extinct". NPR. Retrieved October 10, 2019.
- Dawson, Ashley (2016). Extinction: A Radical History. OR Books. p. 41. ISBN 978-1-944869-01-4.
- Harvey, David (2005). A Brief History of Neoliberalism. Oxford University Press. p. 173. ISBN 978-0199283279.
- Andermann, Tobias; Faurby, Søren; et al. (2020). "The past and future human impact on mammalian diversity". Science Advances. 6 (36): eabb2313. doi:10.1126/sciadv.abb2313.
- Lynch, Patrick (15 December 2011). "Secrets from the past point to rapid climate change in the future". NASA's Earth Science News Team. Retrieved 2 April 2016.
- Ruddiman, W.F. (2013). "The Anthropocene". Annual Review of Earth and Planetary Sciences. 41: 45–68. Bibcode:2013AREPS..41...45R. doi:10.1146/annurev-earth-050212-123944.
- Tollefson, Jeff (2011-03-25). "The 8,000-year-old climate puzzle". Nature News. doi:10.1038/news.2011.184.
- "North American Extinctions v. World". www.thegreatstory.org. Retrieved 2016-01-31.
- Steadman, D.W.; Martin, P.S.; MacPhee, R.D.E.; Jull, A.J.T.; McDonald, H.G.; Woods, C.A.; Iturralde-Vinent, M.; Hodgins, G.W.L. (2005). "Asynchronous extinction of late Quaternary sloths on continents and islands". Proceedings of the National Academy of Sciences. 102 (33): 11763–11768. Bibcode:2005PNAS..10211763S. doi:10.1073/pnas.0502777102. PMC 1187974. PMID 16085711.
- Steadman & Martin 2003
- Steadman 1995
- Miller, Gifford; Magee, John; Smith, Mike; Spooner, Nigel; Baynes, Alexander; Lehman, Scott; Fogel, Marilyn; Johnston, Harvey; Williams, Doug (2016-01-29). "Human predation contributed to the extinction of the Australian megafaunal bird Genyornis newtoni [sim]47 ka". Nature Communications. 7: 10496. Bibcode:2016NatCo...710496M. doi:10.1038/ncomms10496. PMC 4740177. PMID 26823193.
- "Controlling Ungulate Populations in native ecosystems in Hawaii" (PDF). Hawaii Conservation Alliance. 22 November 2005. Archived from the original (PDF) on 2016-05-08.
- "Australian endangered species list". Australian Geographic. Retrieved 2017-04-04.
- "?". www.sciencedaily.com. Retrieved 2016-02-01.
- "New Ages for the Last Australian Megafauna: Continent-Wide Extinction About 46,000 Years Ago" (PDF).
- Turney, Chris S. M.; Flannery, Timothy F.; Roberts, Richard G.; Reid, Craig; Fifield, L. Keith; Higham, Tom F. G.; Jacobs, Zenobia; Kemp, Noel; Colhoun, Eric A. (2008-08-21). "Late-surviving megafauna in Tasmania, Australia, implicate human involvement in their extinction". Proceedings of the National Academy of Sciences. 105 (34): 12150–3. Bibcode:2008PNAS..10512150T. doi:10.1073/pnas.0801360105. ISSN 0027-8424. PMC 2527880. PMID 18719103.
- Burney, David A; Burney, Lida Pigott; Godfrey, Laurie R; Jungers, William L; Goodman, Steven M; Wright, Henry T; Jull, A J Timothy (2004-07-01). "A chronology for late prehistoric Madagascar". Journal of Human Evolution. 47 (1–2): 25–63. doi:10.1016/j.jhevol.2004.05.005. PMID 15288523.
- Hawkins, A. F. A.; Goodman, S. M. (2003). Goodman, S. M.; Benstead, J. P. (eds.). The Natural History of Madagascar. University of Chicago Press. pp. 1026–1029. ISBN 978-0-226-30307-9.
- Perez, Ventura R.; Godfrey, Laurie R.; Nowak-Kemp, Malgosia; Burney, David A.; Ratsimbazafy, Jonah; Vasey, Natalia (2005-12-01). "Evidence of early butchery of giant lemurs in Madagascar". Journal of Human Evolution. 49 (6): 722–742. doi:10.1016/j.jhevol.2005.08.004. PMID 16225904.
- Kolbert, Elizabeth (2014-12-22). "The Big Kill". The New Yorker. ISSN 0028-792X. Retrieved 2016-02-25.
- This may refer to groups of animals endangered by climate change. For example, during a catastrophic drought, remaining animals would be gathered around the few remaining watering holes, and thus become extremely vulnerable.
- Haynes, Gary (2002). The Early Settlement of North America: The Clovis Era. pp. 18–19. ISBN 978-0-521-52463-6.
- Martin, P.S. (1995). "Mammoth Extinction: Two Continents and Wrangel Island". Radiocarbon. 37 (1): 7–10. doi:10.1017/s0033822200014739.
- Pitulko, V. V.; Nikolsky, P. A.; Girya, E. Y.; Basilyan, A. E.; Tumskoy, V. E.; Koulakov, S. A.; Astakhov, S. N.; Pavlova, E. Y.; Anisimov, M. A. (2004). "The Yana RHS site: Humans in the Arctic before the Last Glacial Maximum". Science. 303 (5654): 52–56. Bibcode:2004Sci...303...52P. doi:10.1126/science.1085219. PMID 14704419. S2CID 206507352.
- Elias, S. A.; Schreve, D. C. (2013). "Late Pleistocene Megafaunal Extinctions" (PDF). Vertebrate Records. Encyclopedia of Quaternary Science (2nd ed.). Amsterdam: Elsevier. pp. 700–711.
- Pushkina, D.; Raia, P. (2008). "Human influence on distribution and extinctions of the late Pleistocene Eurasian megafauna". Journal of Human Evolution. 54 (6): 769–782. doi:10.1016/j.jhevol.2007.09.024. PMID 18199470.
- Mann, Daniel H.; Groves, Pamela; Reanier, Richard E.; Gaglioti, Benjamin V.; Kunz, Michael L.; Shapiro, Beth (2015). "Life and extinction of megafauna in the ice-age Arctic". Proceedings of the National Academy of Sciences of the United States of America. 112 (46): 14301–14306. Bibcode:2015PNAS..11214301M. doi:10.1073/pnas.1516573112. PMC 4655518. PMID 26578776.
- Adams J.M. & Faure H. (1997) (eds.), QEN members. Review and Atlas of Palaeovegetation: Preliminary land ecosystem maps of the world since the Last Glacial Maximum Archived 2008-01-16 at the Wayback Machine. Oak Ridge National Laboratory, TN, USA.
- Slezak, Michael (14 June 2016). "Revealed: first mammal species wiped out by human-induced climate change". The Guardian. London. Retrieved 16 November 2016.
- Graham, R. W.; Mead, J. I. (1987). "Environmental fluctuations and evolution of mammalian faunas during the last deglaciation in North America". In Ruddiman, W. F.; Wright, J. H. E. (eds.). North America and Adjacent Oceans During the Last Deglaciation. The Geology of North America. K-3. Geological Society of America. ISBN 978-0-8137-5203-7.
- Martin, P. S. (1967). "Prehistoric overkill". In Martin, P. S.; Wright, H. E. (eds.). Pleistocene extinctions: The search for a cause. New Haven: Yale University Press. ISBN 978-0-300-00755-8.
- Lyons, S.K.; Smith, F.A.; Brown, J.H. (2004). "Of mice, mastodons and men: human-mediated extinctions on four continents" (PDF). Evolutionary Ecology Research. 6: 339–358. Retrieved 18 October 2012.
- Andersen, S. T. (1973). "The differential pollen productivity of trees and its significance for the interpretation of a pollen diagram from a forested region". In Birks, H.J.B.; West, R.G. (eds.). Quaternary plant ecology: the 14th symposium of the British Ecological Society, University of Cambridge, 28–30 March 1972. Oxford: Blackwell Scientific. ISBN 978-0-632-09120-1.
- Ashworth, C.A. (1980). "Environmental implications of a beetle assemblage from the Gervais formation (Early Wisconsinian?), Minnesota". Quaternary Research. 13 (2): 200–12. Bibcode:1980QuRes..13..200A. doi:10.1016/0033-5894(80)90029-0.
- Birks, H.H. (1973). "Modern macrofossil assemblages in lake sediments in Minnesota". In Birks, H. J. B.; West, R. G. (eds.). Quaternary plant ecology: the 14th symposium of the British Ecological Society, University of Cambridge, 28–30 March 1972. Oxford: Blackwell Scientific. ISBN 978-0-632-09120-1.
- Birks, H.J.B.; Birks, H.H. (1980). Quaternary paleoecology. Baltimore: Univ. Park Press. ISBN 978-1-930665-56-9.
- Bradley, R. S. (1985). Quaternary Paleoclimatology: Methods of Paleoclimatic Reconstruction. Winchester, MA: Allen & Unwin. ISBN 978-0-04-551068-9.
- Davis, M. B. (1976). "Pleistocene biogeography of temperate deciduous forests". Geoscience and man: Ecology of the Pleistocene. 13. Baton Rouge: School of Geoscience, Louisiana State University.
- Firestone, Richard; West, Allen; Warwick-Smith, Simon (4 June 2006). The Cycle of Cosmic Catastrophes: How a Stone-Age Comet Changed the Course of World Culture. Bear & Company. pp. 392. ISBN 978-1-59143-061-2.
- Firestone RB, West A, Kennett JP, et al. (October 2007). "Evidence for an extraterrestrial impact 12,900 years ago that contributed to the megafaunal extinctions and the Younger Dryas cooling". Proc. Natl. Acad. Sci. U.S.A. 104 (41): 16016–21. Bibcode:2007PNAS..10416016F. doi:10.1073/pnas.0706977104. PMC 1994902. PMID 17901202.
- Bunch, T. E.; Hermes, R. E.; Moore, A. M.; Kennettd, Douglas J.; Weaver, James C.; Wittke, James H.; DeCarli, Paul S.; Bischoff, James L.; Hillman, Gordon C.; Howard, George A.; Kimbel, David R.; Kletetschka, Gunther; Lipo, Carl P.; Sakai, Sachiko; Revay, Zsolt; West, Allen; Firestone, Richard B.; Kennett, James P. (June 2012). "Very high-temperature impact melt products as evidence for cosmic airbursts and impacts 12,900 years ago". Proceedings of the National Academy of Sciences of the United States of America. 109 (28): E1903–12. Bibcode:2012PNAS..109E1903B. doi:10.1073/pnas.1204453109. PMC 3396500. PMID 22711809.
- "Humans, not climate, have driven rapidly rising mammal extinction rate". phys.org. Retrieved 9 October 2020.
- Wolf, A.; Doughty, C. E.; Malhi, Y. (2013). "Lateral Diffusion of Nutrients by Mammalian Herbivores in Terrestrial Ecosystems". PLoS ONE. 8 (8): e71352. Bibcode:2013PLoSO...871352W. doi:10.1371/journal.pone.0071352. PMC 3739793. PMID 23951141.
- Marshall, M. (2013-08-11). "Ecosystems still feel the pain of ancient extinctions". New Scientist. Retrieved 12 August 2013.
- Doughty, C. E.; Wolf, A.; Malhi, Y. (2013). "The legacy of the Pleistocene megafauna extinctions on nutrient availability in Amazonia". Nature Geoscience. 6 (9): 761–764. Bibcode:2013NatGe...6..761D. doi:10.1038/ngeo1895.
- Sandom, Christopher; Faurby, Søren; Sandel, Brody; Svenning, Jens-Christian (4 June 2014). "Global late Quaternary megafauna extinctions linked to humans, not climate change". Proceedings of the Royal Society B. 281 (1787): 20133254. doi:10.1098/rspb.2013.3254. PMC 4071532. PMID 24898370.
- Wilkinson, D. M.; Nisbet, E. G.; Ruxton, G. D. (2012). "Could methane produced by sauropod dinosaurs have helped drive Mesozoic climate warmth?". Current Biology. 22 (9): R292–R293. doi:10.1016/j.cub.2012.03.042. PMID 22575462. Retrieved 2012-05-08.
- "Dinosaur gases 'warmed the Earth'". BBC Nature News. 7 May 2012. Retrieved 8 May 2012.
- Smith, F. A.; Elliot, S. M.; Lyons, S. K. (23 May 2010). "Methane emissions from extinct megafauna". Nature Geoscience. 3 (6): 374–375. Bibcode:2010NatGe...3..374S. doi:10.1038/ngeo877. S2CID 128832000.
- Kelliher, F. M.; Clark, H. (15 March 2010). "Methane emissions from bison—An historic herd estimate for the North American Great Plains". Agricultural and Forest Meteorology. 150 (3): 473–577. Bibcode:2010AgFM..150..473K. doi:10.1016/j.agrformet.2009.11.019.
- MacPhee, R.D.E. & Marx, P.A. (1997). "Humans, hyperdisease and first-contact extinctions". In Goodman, S. & Patterson, B.D. (eds.). Natural Change and Human Impact in Madagascar. Washington D.C.: Smithsonian Press. pp. 169–217. ISBN 978-1-56098-683-6.
- MacPhee, R.D.E. & Marx, P.A. (1998). "Lightning Strikes Twice: Blitzkrieg, Hyperdisease, and Global Explanations of the Late Quaternary Catastrophic Extinctions". American Museum of Natural History.
- MacPhee, Ross D.E.; Marx, Preston (1997). "The 40,000-year Plague: Humans, Hyperdisease, and First-Contact Extinctions". Natural Change and Human Impact in Madagascar. Washington, D.C.: Smithsonian Institution Press. pp. 169–217.
- Lyons, K.; Smith, F. A.; Wagner, P. J.; White, E. P.; Brown, J. H. (2004). "Was a 'hyperdisease' responsible for the late Pleistocene megafaunal extinction?" (PDF). Ecology Letters. 7 (9): 859–68. doi:10.1111/j.1461-0248.2004.00643.x.
- Lapointe, D. A.; Atkinson, C. T.; Samuel, M. D. (2012). "Ecology and conservation biology of avian malaria". Annals of the New York Academy of Sciences. 1249 (1): 211–26. Bibcode:2012NYASA1249..211L. doi:10.1111/j.1749-6632.2011.06431.x. PMID 22320256. S2CID 1885904.
- Estrada, Alejandro; Garber, Paul A.; Rylands, Anthony B.; Roos, Christian; Fernandez-Duque, Eduardo; Di Fiore, Anthony; Anne-Isola Nekaris, K.; Nijman, Vincent; Heymann, Eckhard W.; Lambert, Joanna E.; Rovero, Francesco; Barelli, Claudia; Setchell, Joanna M.; Gillespie, Thomas R.; Mittermeier, Russell A.; Arregoitia, Luis Verde; de Guinea, Miguel; Gouveia, Sidney; Dobrovolski, Ricardo; Shanee, Sam; Shanee, Noga; Boyle, Sarah A.; Fuentes, Agustin; MacKinnon, Katherine C.; Amato, Katherine R.; Meyer, Andreas L. S.; Wich, Serge; Sussman, Robert W.; Pan, Ruliang; Kone, Inza; Li, Baoguo (January 18, 2017). "Impending extinction crisis of the world's primates: Why primates matter". Science Advances. 3 (1): e1600946. Bibcode:2017SciA....3E0946E. doi:10.1126/sciadv.1600946. PMC 5242557. PMID 28116351.
- Primack, Richard (2014). Essentials of Conservation Biology. Sunderland, MA USA: Sinauer Associates, Inc. Publishers. pp. 217–245. ISBN 978-1-605-35289-3.
- "Tracking and combatting our current mass extinction". Ars Technica. 2014-07-25. Retrieved 2015-11-30.
- Dirzo, R.; Galetti, M. (2013). "Ecological and Evolutionary Consequences of Living in a Defaunated World". Biological Conservation. 163: 1–6. doi:10.1016/j.biocon.2013.04.020.
- Lions, tigers, big cats may face extinction in 20 years by Dan Vergano, USA Today. October 28, 2011.
- Visser, Nick (December 27, 2016). "Cheetahs Are Far Closer To Extinction Than We Realized". The Huffington Post. Retrieved December 27, 2016.
- Duranta, Sarah M.; Mitchell, Nicholas; Groom, Rosemary; Pettorelli, Nathalie; Ipavec, Audrey; Jacobson, Andrew P.; Woodroffe, Rosie; Böhm, Monika; Hunter, Luke T. B.; Becker, Matthew S.; Broekhuis, Femke; Bashir, Sultana; Andresen, Leah; Aschenborn, Ortwin; Beddiaf, Mohammed; Belbachir, Farid; Belbachir-Bazi, Amel; Berbash, Ali; Brandao de Matos Machado, Iracelma; Breitenmoser, Christine; Chege, Monica; Cilliers, Deon; Davies-Mostert, Harriet; Dickman, Amy J.; Ezekiel, Fabiano; Farhadinia, Mohammad S.; Funston, Paul; Henschel, Philipp; Horgan, Jane; de Iongh, Hans H.; Jowkar, Houman; Klein, Rebecca; Lindsey, Peter Andrew; Marker, Laurie; Marnewick, Kelly; Melzheimera, Joerg; Merkle, Johnathan; M'sokab, Jassiel; Msuhac, Maurus; O'Neill, Helen; Parker, Megan; Purchase, Gianetta; Sahailou, Samaila; Saidu, Yohanna; Samna, Abdoulkarim; Schmidt-Küntze, Anne; Selebatso, Eda; Sogbohossou, Etotépé A.; Soultan, Alaaeldin; Stone, Emma; van der Meer, Esther; van Vuuren, Rudie; Wykstra, Mary; Young-Overto, Kim (2016). "The global decline of cheetah Acinonyx jubatus and what it means for conservation" (PDF). Proceedings of the National Academy of Sciences of the United States of America. 114 (3): 1–6. doi:10.1073/pnas.1611122114. PMC 5255576. PMID 28028225. Archived from the original (PDF) on 2017-01-11. Retrieved 2017-01-10.
- Kluser, S. and Peduzzi, P. (2007) "Global pollinator decline: a literature review" UNEP/GRID – Europe.
- Dirzo, Rodolfo; Young, Hillary S.; Galetti, Mauro; Ceballos, Gerardo; Isaac, Nick J. B.; Collen, Ben (2014). "Defaunation in the Anthropocene" (PDF). Science. 345 (6195): 401–406. Bibcode:2014Sci...345..401D. doi:10.1126/science.1251817. PMID 25061202. S2CID 206555761. Retrieved December 16, 2016.
- "Warning of 'ecological Armageddon' after dramatic plunge in insect numbers". The Guardian. 18 October 2017.
- Carrington, Damian (February 10, 2019). "Plummeting insect numbers 'threaten collapse of nature'". The Guardian. Retrieved February 10, 2019.
- Briggs, Helen (October 30, 2019). "'Alarming' loss of insects and spiders recorded". BBC. Retrieved November 2, 2019.
- Lewis, Sophie (January 12, 2021). "Scientists warn the world's insects are undergoing "death by a thousand cuts"". CBS News. Retrieved January 12, 2021.
- "Atlas of Population and Environment". AAAS. 2000. Archived from the original on 2011-03-09. Retrieved 2008-02-12.
- "A northern white rhino has died. There are now five left in the entire world". The Washington Post. 15 December 2014.
- "Northern white rhino: Last male Sudan dies in Kenya". British Broadcasting Corporation. March 20, 2018.
- 7 Iconic Animals Humans Are Driving to Extinction. Live Science. November 22, 2013.
- Platt, John R. "Poachers Drive Javan Rhino to Extinction in Vietnam [Updated]".
- Inus, Kristy (April 18, 2019). "Sumatran rhinos extinct in the wild". The Star Online. Retrieved April 26, 2019.
- Fletcher, Martin (January 31, 2015). "Pangolins: why this cute prehistoric mammal is facing extinction". The Telegraph. Retrieved December 14, 2016.
- Carrington, Damian (December 8, 2016). "Giraffes facing extinction after devastating decline, experts warn". The Guardian. Retrieved December 8, 2016.
- "Imagine a world without giraffes". CNN. December 12, 2016.
- Pennisi, Elizabeth (October 18, 2016). "People are hunting primates, bats, and other mammals to extinction". Science. Retrieved November 21, 2016.
- Ripple, William J.; Abernethy, Katharine; Betts, Matthew G.; Chapron, Guillaume; Dirzo, Rodolfo; Galetti, Mauro; Levi, Taal; Lindsey, Peter A.; Macdonald, David W.; Machovina, Brian; Newsome, Thomas M.; Peres, Carlos A.; Wallach, Arian D.; Wolf, Christopher; Young, Hillary (2016). "Bushmeat hunting and extinction risk to the world's mammals". Royal Society Open Science. 3 (10): 1–16. Bibcode:2016RSOS....360498R. doi:10.1098/rsos.160498. PMC 5098989. PMID 27853564.
- Benítez-López, A.; Alkemade, R.; Schipper, A. M.; Ingram, D. J.; Verweij, P. A.; Eikelboom, J. A. J.; Huijbregts, M. A. J. (April 14, 2017). "The impact of hunting on tropical mammal and bird populations". Science. 356 (6334): 180–183. Bibcode:2017Sci...356..180B. doi:10.1126/science.aaj1891. hdl:1874/349694. PMID 28408600. S2CID 19603093.
- Milman, Oliver (February 6, 2019). "The killing of large species is pushing them towards extinction, study finds". The Guardian. Retrieved February 8, 2019.
- Ripple, W. J.; et al. (2019). "Are we eating the world's megafauna to extinction?". Conservation Letters. 12 (3): e12627. doi:10.1111/conl.12627.
- Wilcox, Christie (October 17, 2018). "Human-caused extinctions have set mammals back millions of years". National Geographic. Retrieved October 17, 2018.
- Yong, Ed (October 15, 2018). "It Will Take Millions of Years for Mammals to Recover From Us". The Atlantic. Retrieved November 1, 2018.
- "History of the Convention". Secretariat of the Convention on Biological Diversity. Retrieved 9 January 2017.
- Glowka, Lyle; Burhenne-Guilmin, Françoise; Synge, Hugh; McNeely, Jeffrey A.; Gündling, Lothar (1994). IUCN environmental policy and law paper. Guide to the Convention on Biodiversity. International Union for Conservation of Nature. ISBN 978-2-8317-0222-3.
- "60 percent of global wildlife species wiped out". Al Jazeera. 28 October 2016. Retrieved 9 January 2017.
- Ceballos, Gerardo; Ehrlich, Paul R.; Raven, Peter H. (June 1, 2020). "Vertebrates on the brink as indicators of biological annihilation and the sixth mass extinction". PNAS. 117 (24): 13596–13602. doi:10.1073/pnas.1922686117. PMC 7306750. PMID 32482862.
- Fisher, Diana O.; Blomberg, Simon P. (2011). "Correlates of rediscovery and the detectability of extinction in mammals". Proceedings of the Royal Society B: Biological Sciences. 278 (1708): 1090–1097. doi:10.1098/rspb.2010.1579. PMC 3049027. PMID 20880890.
- "Extinction continues apace". International Union for Conservation of Nature. 3 November 2009. Retrieved 18 October 2012.
- Zhigang, J; Harris, RB (2008). "Elaphurus davidianus". IUCN Red List of Threatened Species. 2008. Retrieved 2012-05-20.old-form url
- BirdLife International (2013). "Corvus hawaiiensis". IUCN Red List of Threatened Species. 2013.old-form url
- McKinney, Michael L.; Schoch, Robert; Yonavjak, Logan (2013). "Conserving Biological Resources". Environmental Science: Systems and Solutions (5th ed.). Jones & Bartlett Learning. ISBN 978-1-4496-6139-7.
- Perrin, William F.; Würsig, Bernd G.; JGM "Hans" Thewissen (2009). Encyclopedia of marine mammals. Academic Press. p. 404. ISBN 978-0-12-373553-9.
- Spotila, James R.; Tomillo, Pilar S. (2015). The Leatherback Turtle: Biology and Conservation. Johns Hopkins University. p. 210. ISBN 978-1-4214-1708-0.
- Damien Carrington (10 February 2019). "Plummeting insect numbers 'threaten collapse of nature'". The Guardian.
- Carrington, Damian (May 23, 2019). "Humans causing shrinking of nature as larger animals die off". The Guardian. Retrieved May 23, 2019.
- Mooers, Arne (January 16, 2020). "Bird species are facing extinction hundreds of times faster than previously thought". The Conversation. Retrieved January 18, 2020.
- "Deforestation in Malaysian Borneo". NASA. 2009. Retrieved 7 April 2010.
- Foster, Joanna M. (1 May 2012). "A Grim Portrait of Palm Oil Emissions". The New York Times. Retrieved 10 January 2017.
- Rosenthal, Elisabeth (31 January 2007). "Once a Dream Fuel, Palm Oil May Be an Eco-Nightmare". The New York Times. Retrieved 10 January 2017.
- "Palm Oil Continues to Dominate Global Consumption in 2006/07" (PDF) (Press release). United States Department of Agriculture. June 2006. Archived from the original (PDF) on 26 April 2009. Retrieved 10 January 2017.
- "Once a Dream, Palm Oil May Be an Eco-Nightmare". The New York Times. 31 January 2007. Retrieved 10 January 2017.
- Clay, Jason (2004). World Agriculture and the Environment. p. 219. ISBN 978-1-55963-370-3.
- "Palm oil: Cooking the Climate". Greenpeace. 8 November 2007. Archived from the original on 10 April 2010. Retrieved 10 January 2017.
- "The bird communities of oil palm and rubber plantations in Thailand" (PDF). The Royal Society for the Protection of Birds (RSPB). Retrieved 10 January 2017.
- "Palm oil threatening endangered species" (PDF). Center for Science in the Public Interest. May 2005.
- Embury-Dennis, Tom (September 1, 2016). "Tree kangaroos 'on brink of extinction' due to palm oil deforestation". The Independent. Retrieved September 8, 2017.
- Orangutans face complete extinction within 10 years, animal rescue charity warns. The Independent. August 19, 2016.
- Morell, Virginia (August 11, 2015). "Meat-eaters may speed worldwide species extinction, study warns". Science. Retrieved December 14, 2016.
- Machovina, B.; Feeley, K. J.; Ripple, W. J. (2015). "Biodiversity conservation: The key is reducing meat consumption". Science of the Total Environment. 536: 419–431. Bibcode:2015ScTEn.536..419M. doi:10.1016/j.scitotenv.2015.07.022. PMID 26231772.
- Johnston, Ian (August 26, 2017). "Industrial farming is driving the sixth mass extinction of life on Earth, says leading academic". The Independent. Retrieved September 4, 2017.
- Devlin, Hannah (July 19, 2018). "Rising global meat consumption 'will devastate environment'". The Guardian. Retrieved July 22, 2018.
- Smithers, Rebecca (5 October 2017). "Vast animal-feed crops to satisfy our meat needs are destroying planet". The Guardian. Retrieved 5 October 2017.
- Steinfeld, Henning; Gerber, Pierre; Wassenaar, Tom; Castel, Vincent; Rosales, Mauricio; de Haan, Cees (2006). Livestock's Long Shadow: Environmental Issues and Options (PDF). Food and Agriculture Organization. p. xxiii. ISBN 978-92-5-105571-7.
- McGrath, Matt (May 6, 2019). "Humans 'threaten 1m species with extinction'". BBC. Retrieved July 1, 2019.
- Woodyatt, Amy (May 26, 2020). "Human activity threatens billions of years of evolutionary history, researchers warn". CNN. Retrieved May 27, 2020.
- Briggs, Helen (May 26, 2020). "'Billions of years of evolutionary history' under threat". BBC. Retrieved October 5, 2020.
The researchers calculated the amount of evolutionary history - branches on the tree of life - that are currently threatened with extinction, using extinction risk data for more than 25,000 species. They found a combined 50 billion years of evolutionary heritage, at least, were under threat from human impacts such as urban development, deforestation and road building.
- "Plastics in the Ocean". Ocean Conservancy. 2017-03-07. Retrieved 2021-02-06.
- Carrington, Damian (September 27, 2018). "Orca 'apocalypse': half of killer whales doomed to die from pollution". The Guardian. Retrieved September 28, 2018.
- Sutter, John D. (December 12, 2016). "How to stop the sixth mass extinction". CNN. Retrieved December 19, 2016.
- "Plastic Bag Ban Will Help Save California's Endangered Sea Turtles". Sea Turtle Restoration Project. 2010. Archived from the original on 15 April 2013. Retrieved 7 February 2017.
- Aguilera, M. (2012). "Plastic trash altering ocean habitats, Scripps study shows". Retrieved 7 February 2017.
- Reints, Renae (March 6, 2019). "1,700 Species Will Likely Go Extinct Due to Human Land Use, Study Says". Fortune. Retrieved March 11, 2019.
- Walter Jetz; Powers, Ryan P. (4 March 2019). "Global habitat loss and extinction risk of terrestrial vertebrates under future land-use-change scenarios". Nature Climate Change. 9 (4): 323–329. Bibcode:2019NatCC...9..323P. doi:10.1038/s41558-019-0406-z. S2CID 92315899.
- Cox, Lisa (12 March 2019). "'Almost certain extinction': 1,200 species under severe threat across world". Theguardian.com. Retrieved 13 March 2019.
- Venter, Oscar; Atkinson, Scott C.; Possingham, Hugh P.; O’Bryan, Christopher J.; Marco, Moreno Di; Watson, James E. M.; Allan, James R. (12 March 2019). "Hotspots of human impact on threatened terrestrial vertebrates". PLOS Biology. 17 (3): e3000158. doi:10.1371/journal.pbio.3000158. PMC 6413901. PMID 30860989.
- "Migratory river fish populations down 76% since 1970: study". Agence France-Presse. July 28, 2020. Retrieved July 28, 2020.
- Morell, Virginia (February 1, 2017). "World's most endangered marine mammal down to 30 individuals". Science. Retrieved February 3, 2017.
- "World's most endangered marine mammal is now down to 10 animals". New Scientist. March 15, 2019. Retrieved March 16, 2019.
- Redford, K. H. (1992). "The empty forest" (PDF). BioScience. 42 (6): 412–422. doi:10.2307/1311860. JSTOR 1311860.
- Peres, Carlos A.; Nascimento, Hilton S. (2006). "Impact of Game Hunting by the Kayapo´ of South-eastern Amazonia: Implications for Wildlife Conservation in Tropical Forest Indigenous Reserves". Human Exploitation and Biodiversity Conservation. Topics in Biodiversity and Conservation. 3. pp. 287–313. ISBN 978-1-4020-5283-5.
- Altrichter, M.; Boaglio, G. (2004). "Distribution and Relative Abundance of Peccaries in the Argentine Chaco: Associations with Human Factors". Biological Conservation. 116 (2): 217–225. doi:10.1016/S0006-3207(03)00192-7.
- Milman, Oliver (April 19, 2017). "Giraffes must be listed as endangered, conservationists formally tell US". The Guardian. Retrieved April 29, 2018.
- Elephants in the Dust – The African Elephant Crisis. UNEP, 2013.
- "African Elephant Population Dropped 30 Percent in 7 Years". The New York Times. September 1, 2016.
- This Is the Most Important Issue That's Not Being Talked About in This Election. Esquire. November 7, 2016.
- 'Our living dinosaurs' There are far fewer African elephants than we thought, study shows. CNN. September 1, 2016.
- "We are failing the elephants". CNN. December 12, 2016.
- Roberts, Callum (2007). The Unnatural History of the Sea.
- Claudia Geib (July 16, 2020). "North Atlantic right whales now officially 'one step from extinction'". The Guardian. Retrieved July 17, 2020.
- Briggs, Helen (December 4, 2018). "World's strangest sharks and rays 'on brink of extinction'". BBC. Retrieved December 10, 2018.
- Payne, Jonathan L.; Bush, Andrew M.; Heim, Noel A.; Knope, Matthew L.; McCauley, Douglas J. (2016). "Ecological selectivity of the emerging mass extinction in the oceans". Science. 353 (6305): 1284–1286. Bibcode:2016Sci...353.1284P. doi:10.1126/science.aaf2416. PMID 27629258.
- Osborne, Hannah (April 17, 2020). "Great White Sharks Among Marine Megafauna That Could Go Extinct in Next 100 Years, Study Warns". Newsweek. Retrieved April 28, 2020.
- Yeung, Jessie (January 28, 2021). "Shark and ray populations have dropped 70% and are nearing 'point of no return,' study warns". CNN. Retrieved January 28, 2021.
- Pacoureau, Nathan; Rigby, Cassandra L.; et al. (2021). "Half a century of global decline in oceanic sharks and rays". Nature. 589: 567–571. doi:10.1038/s41586-020-03173-9.
- Einhorn, Catrin (January 27, 2021). "Shark Populations Are Crashing, With a 'Very Small Window' to Avert Disaster". The New York Times. Retrieved February 2, 2021.
- Humanity driving 'unprecedented' marine extinction. The Guardian. September 14, 2016.
- Ochoa-Ochoa, L.; Whittaker, R. J.; Ladle, R. J. (2013). "The demise of the golden toad and the creation of a climate change icon species". Conservation and Society. 11 (3): 291–319. doi:10.4103/0972-4923.121034.
- Frog goes extinct, media yawns. The Guardian. 27 October 2016.
- Mendelson, J.R. & Angulo, A. (2009). "Ecnomiohyla rabborum". IUCN Red List of Threatened Species. 2009: e.T158613A5241303. doi:10.2305/IUCN.UK.2009-2.RLTS.T158613A5241303.en. Retrieved 27 December 2017.
- Scheele, Ben C.; et al. (March 29, 2019). "Amphibian fungal panzootic causes catastrophic and ongoing loss of biodiversity" (PDF). Science. 363 (6434): 1459–1463. Bibcode:2019Sci...363.1459S. doi:10.1126/science.aav0379. hdl:1885/160196. PMID 30923224. S2CID 85565860.
- Blehert, D. S.; Hicks, A. C.; Behr, M.; Meteyer, C. U.; Berlowski-Zier, B. M.; Buckles, E. L.; Coleman, J. T. H.; Darling, S. R.; Gargas, A.; Niver, R.; Okoniewski, J. C.; Rudd, R. J.; Stone, W. B. (9 January 2009). "Bat White-Nose Syndrome: An Emerging Fungal Pathogen?". Science. 323 (5911): 227. doi:10.1126/science.1163874. PMID 18974316. S2CID 23869393.
- Benjamin, A.; Holpuch, A.; Spencer, R. (2013). "Buzzfeeds: The effects of colony collapse disorder and other bee news". The Guardian. Retrieved 21 August 2015.
- "Multiple causes for colony collapse – report". 3 News NZ. 3 May 2013. Archived from the original on 29 October 2013. Retrieved 3 May 2013.
- Cepero, Almudena; Ravoet, Jorgen; Gómez-Moracho, Tamara; Bernal, José Luis; Del Nozal, Maria J.; Bartolomé, Carolina; Maside, Xulio; Meana, Aránzazu; González-Porto, Amelia V.; de Graaf, Dirk C.; Martín-Hernández, Raquel; Higes, Mariano (15 September 2014). "Holistic screening of collapsing honey bee colonies in Spain: a case study". BMC Research Notes. 7: 649. doi:10.1186/1756-0500-7-649. ISSN 1756-0500. PMC 4180541. PMID 25223634.
- Baillie, Jonathan; Ya-Ping, Zhang (September 14, 2018). "Space for nature". Science. 361 (6407): 1051. Bibcode:2018Sci...361.1051B. doi:10.1126/science.aau1397. PMID 30213888.
- Watts, Jonathan (November 3, 2018). "Stop biodiversity loss or we could face our own extinction, warns UN". The Guardian. Retrieved November 3, 2018.
- Greenfield, Patrick (January 13, 2020). "UN draft plan sets 2030 target to avert Earth's sixth mass extinction". The Guardian. Retrieved January 14, 2020.
- Yeung, Jessie (January 14, 2020). "We have 10 years to save Earth's biodiversity as mass extinction caused by humans takes hold, UN warns". CNN. Retrieved January 14, 2020.
- Dickie, Gloria (September 15, 2020). "Global Biodiversity Is in Free Fall". Scientific American. Retrieved September 15, 2020.
- Larson, Christina; Borenstein, Seth (September 15, 2020). "World isn't meeting biodiversity goals, UN report finds". Associated Press. Retrieved September 15, 2020.
- D. A. Rounsevell, Mark; Harfoot, Mike; et al. (June 12, 2020). "A biodiversity target based on species extinctions". Science. 368 (6496): 1193–1195. doi:10.1126/science.aba6592. PMID 32527821. S2CID 219585428.
- "Fewer than 20 extinctions a year: does the world need a single target for biodiversity?". Nature. 583 (7814): 7–8. June 30, 2020. doi:10.1038/d41586-020-01936-y. PMID 32606472.
- Frontiers in Conservation Science,"Underestimating the Challenges of Avoiding a Ghastly Future" Front. Conserv. Sci., 13 January 2021
- Carrington, Damian (October 29, 2020). "Protecting nature is vital to escape 'era of pandemics' – report". The Guardian. Retrieved November 28, 2020.
- Mcelwee, Pamela (November 2, 2020). "COVID-19 and the biodiversity crisis". The Hill. Retrieved November 28, 2020.
- "Escaping the 'Era of Pandemics': Experts Warn Worse Crises to Come Options Offered to Reduce Risk". Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services. 2020. Retrieved November 28, 2020.
- Weston, Phoebe (January 13, 2021). "Top scientists warn of 'ghastly future of mass extinction' and climate disruption". The Guardian. Retrieved February 13, 2021.
- Ceballos, Gerardo; Ehrlich, Anne H.; Ehrlich, Paul R. (2015). The Annihilation of Nature: Human Extinction of Birds and Mammals. Baltimore, Maryland: Johns Hopkins University Press. ISBN 978-1421417189.
- deBuys, William (March 2015). "The Politics of Extinction – A Global War on Nature". Tom Dispatch.
Uncounted species – not just tigers, gibbons, rhinos, and saola, but vast numbers of smaller mammals, amphibians, birds, and reptiles – are being pressed to the brink. We’ve hardly met them and yet, within the vastness of the universe, they and the rest of Earth’s biota are our only known companions. Without them, our loneliness would stretch to infinity.
- Firestone RB, West A, Kennett JP, Becker L, Bunch TE, Revay ZS, et al. (October 2007). "Evidence for an extraterrestrial impact 12,900 years ago that contributed to the megafaunal extinctions and the Younger Dryas cooling". Proceedings of the National Academy of Sciences of the United States of America. 104 (41): 16016–21. Bibcode:2007PNAS..10416016F. doi:10.1073/pnas.0706977104. PMC 1994902. PMID 17901202.
- Kolbert, Elizabeth (May 25, 2009). "The Sixth Extinction? There have been five great die-offs in history. This time, the cataclysm is us". The New Yorker. Retrieved 8 May 2012.
- Leakey, Richard; Lewin, Roger (1996). The Sixth Extinction: Patterns of Life and the Future of Humankind. New York: Anchor Books. ISBN 978-0-385-46809-1.
- Linkola, Pentti (2011). Can Life Prevail?. Arktos Media. ISBN 978-1907166631.
- Loarie, Scott R.; Duffy, Philip B.; Hamilton, Healy; Asner, Gregory P.; Field, Christopher B.; Ackerly, David D. (2009). "The velocity of climate change". Nature. 462 (7276): 1052–1055. Bibcode:2009Natur.462.1052L. doi:10.1038/nature08649. PMID 20033047. S2CID 4419902.
- Marsh, Bill (1 June 2012). "Are We in the Midst Of a Sixth Mass Extinction?". The New York Times Sunday Review: Opinion Page. Retrieved 18 October 2012.
- Martin, P. S.; Wright, H. E. Jr, eds. (1967). Pleistocene Extinctions: The Search for a Cause. New Haven: Yale University Press. ISBN 978-0-300-00755-8.
- McCallum, Malcolm L. (2015). "Vertebrate biodiversity losses point to sixth mass extinction". Biodiversity and Conservation. 24 (10): 2497–2519. doi:10.1007/s10531-015-0940-6. S2CID 16845698.
- Newman, Lenore (2019). Lost Feast: Culinary Extinction and the Future of Food. ECW Press. ISBN 978-1770414358.
- Nihjuis, Michelle (23 July 2012). "Conservationists Use Triage to Determine Which Species to Save and Not". Scientific American.
- Oakes, Ted (2003). Land of Lost Monsters: Man Against Beast – The Prehistoric Battle for the Planet. Hylas Publishing. ISBN 978-1-59258-005-7.
- Steadman, D. W. (1995). "Prehistoric extinctions of Pacific island birds: biodiversity meets zooarchaeology". Science. 267 (5201): 1123–1131. Bibcode:1995Sci...267.1123S. doi:10.1126/science.267.5201.1123. PMID 17789194. S2CID 9137843.
- Steadman, D. W.; Martin, P. S. (2003). "The late Quaternary extinction and future resurrection of birds on Pacific islands". Earth-Science Reviews. 61 (1–2): 133–147. Bibcode:2003ESRv...61..133S. doi:10.1016/S0012-8252(02)00116-2.
- Wiens, John J. (December 2016). "Climate-Related Local Extinctions Are Already Widespread among Plant and Animal Species". PLOS Biology. 14 (12): e2001104. doi:10.1371/journal.pbio.2001104. hdl:10150/622757. PMC 5147797. PMID 27930674.
|Are We Living In the Sixth Extinction? on YouTube|
|Extinction: The Facts in 6 minutes on BBC One|
- The Extinction Crisis. Center for Biological Diversity.
- A third of birds in North America threatened with extinction. CBC News. May 18, 2016.
- Why don’t we grieve for extinct species?. The Guardian. November 19, 2016.
- Vanishing: The extinction crisis is far worse than you think. CNN. December 2016.
- Humans Just Killed Off These 12 Animals, And You Didn’t Even Notice. The Huffington Post. December 16, 2016.
- Expanding Human Habitat Puts Giraffe Population At Risk. NPR. 4 January 2017
- Endangered Species and the Stuff We Buy, All Mapped Out. The New York Times, 6 January 2017
- Biologists say half of all species could be extinct by end of century, The Guardian, 25 February 2017
- Humans are ushering in the sixth mass extinction of life on Earth, scientists warn, The Independent, 31 May 2017
- Human activity pushing Earth towards 'sixth mass species extinction,' report warns. CBC. Mar 26, 2018
- ‘Terror being waged on wildlife', leaders warn. The Guardian. October 4, 2018.
- Earth Is on the Cusp of the Sixth Mass Extinction. Here’s What Paleontologists Want You to Know. Discover. December 3, 2020. | https://library.kiwix.org/wikipedia_en_top_maxi/A/Holocene_extinction | 21 |
When comparing different economic systems like capitalism vs socialism, you might see the term ‘laissez-faire’ referenced. Find out what laissez-faire means and how it’s used in economics with our guide below.
What is laissez-faire economics?
The concept of laissez-faire in economics is a staple of free-market capitalism. The theory suggests that an economy is strongest when the government stays out of the economy entirely, letting market forces behave naturally.
In laissez-faire policy, the government’s role is to protect the rights of the individual rather than to regulate business in any way. The French term ‘laissez-faire’ literally means ‘let do’, and in economic terms it amounts to ‘leave alone’: no taxes, regulations, or tariffs. Instead, the market should be left completely free to follow the natural laws of supply and demand.
The origins of laissez-faire economics date back to 18th-century France during the Industrial Revolution. Businesses at the time wished to be left alone to operate free from government policies, which included heavy import tariffs. In classical economics, Adam Smith was a proponent of the idea in writings like ‘The Wealth of Nations’, which put a premium on individual liberties and on leaving economies to be dictated by the market.
Guiding principles of laissez-faire capitalism
There are several key principles involved with laissez-faire policy.
The individual is society’s basic unit.
Everyone carries a natural right to personal freedom.
Nature is capable of self-regulation.
Based on these principles, laissez-faire economics endorses a system of capitalism, in which private parties control the means of production. Rather than regulating the market, the government should let capitalism run free without interference.
Another tenet of this theory is the idea of a free market economy governed by the natural laws of supply and demand. Free market theory holds that if prices are set too high, consumers will not pay for goods and services, and the market will naturally correct itself.
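To make the idea of self-correction concrete, here is a minimal sketch in Python of a market in which the price is nudged downward whenever supply outstrips demand and upward whenever demand outstrips supply. The linear demand and supply curves, the starting price, and the adjustment rate are purely illustrative assumptions, not figures from any real market.

def demand(price):
    return max(0.0, 100 - 2 * price)  # assumed linear demand curve

def supply(price):
    return max(0.0, 20 + 2 * price)   # assumed linear supply curve

price = 40.0            # deliberately set 'too high' to start
adjustment_rate = 0.05  # how strongly the price reacts to imbalance

for step in range(50):
    excess_demand = demand(price) - supply(price)
    # A glut (negative excess demand) pushes the price down;
    # a shortage (positive excess demand) pushes it up.
    price += adjustment_rate * excess_demand

print(round(price, 2))  # settles at the equilibrium price of 20

Under these assumed curves, the price falls step by step from its inflated starting point and settles where quantity supplied equals quantity demanded, which is the ‘natural correction’ the theory describes.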
Finally, rational market theory is a fundamental principle in laissez-faire economics. This assumes that investors base their actions on facts and logic, taking emotions out of the equation.
A purely laissez-faire economy has yet to be seen, but governments have applied some of its principles. Here are a few laissez-faire examples you can see at play in the real world:
Trickle-down economics: Former US President Ronald Reagan was a major believer in trickle-down economics in the 1980s. Based on laissez-faire policy, it allowed private businesses to make as much money as possible without intervention, on the idea that this wealth would trickle down to individuals.
Tax cuts: When governments cut taxes to stimulate the market, this is based on laissez-faire theory as well. The idea is that removing regulations or taxes helps put more money into the market by encouraging spending.
Privatising state assets: When the government sells state assets, such as transport or postal services, this is laissez-faire economics at work.
Benefits of laissez-faire policy
There are distinct advantages to this type of policy, which is why it’s an idea that has endured for over two centuries.
Free trade gives countries the chance to mutually benefit from transactions
It cuts down on inefficient government bureaucracy
Laissez-faire capitalism creates incentives for entrepreneurs to work harder and be more productive
Start-up businesses are given more freedom to take risks
It drives innovation by freeing up businesses to be more creative without regulation
Without taxation, businesses are given more spending power
Criticisms of laissez-faire economics
While there are many benefits to laissez-faire economics in theory, in reality there are also some legitimate concerns. Here are a few criticisms of this type of theory.
It leads to wealth disparities and income inequality
Wealth becomes concentrated in established circles, with start-ups struggling to compete
Lack of regulation leads to monopolies that charge higher prices and restrict supply
It doesn’t take natural human emotions into account, as markets aren’t always driven by logic
While an economy cannot be based on laissez-faire principles alone, the theory offers some worthwhile concepts for inspiring innovation and growth in businesses. Most capitalist economies choose to temper its strictest principles with government intervention, such as taxes, to minimise inequality and provide better services.
We can help
GoCardless helps you automate payment collection, cutting down on the amount of admin your team needs to deal with when chasing invoices. Find out how GoCardless can help you with ad hoc payments or recurring payments.
Sharecropping played a vital role in Southern communities after the Civil War ended in 1865. The war had devastated the Southern economy: farms and plantations were in shambles, often in ruins, and some had been burned to the ground. Many of the region's few railroad lines had been destroyed. Freed slaves looked for work, while landowners sought workers. Sharecropping became a common arrangement in which workers farmed a landowner's fields in exchange for a fraction of the crops they harvested; both white and African American workers took part. A landowner would usually supply the farmer and his family with a house or an old slave cabin, along with the farming materials and tools needed to get the job done, and the cost of these supplies was later deducted from whatever the sharecropper earned. With few resources and little cash, sharecroppers agreed to farm a plot of land in return for a share of the crops they grew, and an agreement with the landowner determined how much of the harvest they could keep. This pushed workers to produce as much as they could in order to keep their positions. Economic, social, and political pressures eventually caused sharecropping to fade away in the 1940s.
When the South lost the war, bringing slavery to an end, Arkansas landowners and freed slaves began negotiating. "Due to the Civil War, the entire South was engulfed in economic troubles associated with the abolishing of physical slavery; and plantation owners now had to pay wages to their workers." The Freedmen's Bureau set up schools for former slaves and helped them search for lost family members and seek jobs. Owning land during this period was the key to success, autonomy, and economic independence. Because former slaves had few marketable skills, most could only find work on farms. "Unfortunately, the price of cotton began a long period of decline in the late 1860s, and many of those white yeomen who had staked their future on cotton production lost their farms." When this happened, they usually became sharecroppers. "By 1900, 36 percent of all white farmers in Mississippi were either tenant farmers or sharecroppers." The Freedmen's Bureau wanted the freedmen to receive better treatment when sharecropping and often offered suggestions on sharecropping agreements, but these agreements generally did not work out because they were poorly designed, which led to the same unfair treatment. Most sharecroppers began the crop year needing to buy supplies, not only to raise their crop but also to keep themselves and their families alive until harvest time. Many sharecroppers were treated badly: they were not always given the correct share of the crops they raised, and they could not sell the portion they received. Landowners sold seed and food at sky-high prices, another form of mistreatment that pushed sharecroppers into debt. Most felt tied to the plantation and the landowner because of this rising debt, and the treatment was especially painful because it made freed slaves feel enslaved again.
"Having a wage-labor economy was near futile, economically speaking – the entire South was in shambles, especially with regards to currency." Money was in short supply and the banking system had been destroyed. Planters struggled because only a select few could use their ruined land as collateral for loans, and poor harvests compounded the problem as planters could not raise enough crops to pay wage workers. The Freedmen's Bureau was established to support former slaves, manage abandoned land, and supervise work contracts. The South's labor problem was not solved, as both planters and freedmen "had little initial idea of what the optimal labor arrangements would be." Neither group was particularly fond of the new system, but black workers' demands for autonomy made sharecropping less harsh.
During slavery, the black family unit was nearly nonexistent because families were routinely separated through the buying and selling of their members: women were taken from their husbands, and children were taken from their parents. After emancipation, many freed people went to great lengths to reconnect with lost family members, often placing advertisements in black newspapers. The focus on restoring their families meant that the needs of the household came before labor, which was essential to the self-sufficiency the freedmen sought from sharecropping. Sharecropping, while influenced by black autonomy, was overall negative for black farmers, as the system "allowed the exploitation of the small farmer by the monopolistic financial structure dominated by the local merchant." A landowner could provide loans to sharecroppers, and besides the law, contract provisions also hurt them; the consequences could be extremely damaging if the contract terms assessed penalties against the sharecropper for neglect.
One problem for black women at the time was that it was legal for husbands and fathers to use corporal punishment against their wives and daughters. "Not only was the black woman afflicted by the negative economic effects of sharecropping in the form of debt peonage, but also by the social effects created by the sharecropping model's tendency to empower and uphold black patriarchy." This was an extreme emotional consequence of sharecropping that affected the family as well as the worker.
The Agricultural Adjustment Act (AAA) of 1933 encouraged many farm owners in the region to take a portion of their land out of cultivation in exchange for government payments, which led landowners to eliminate their sharecroppers' plots. "The AAA program was one of Franklin D. Roosevelt's New Deal measures designed to end the Great Depression." It was intended to lower farm production and raise farm prices. "The AAA did provide Mississippi landowners with much-needed cash, and in the 1940s and 1950s, landowners used these funds to take advantage of technological advances that improved their ability to raise crops like cotton." Landowners bought tractors, cotton harvesters, and chemicals to control weeds in their fields, and Mississippi's sharecropping system came to an abrupt end soon after.
Sharecropping eventually ended because of mechanization and the Great Migration. "The effects of the practice, compounded with slavery and the convict lease system, had a negative multi-generational impact on the black community as a whole." Instead of helping the black community acquire capital and develop economically, it produced stagnation that only widened the racial economic imbalance. "In the 1930s and 1940s, increasing mechanization virtually brought the institution of sharecropping to an end in the United States." The failure of small farms after the Dust Bowl had briefly expanded the sharecropping system, but traditional sharecropping declined once new farming machinery was developed. As sharecropping declined, many farm jobs were lost, and workers moved to the cities to work in factories. Despite all of its negative consequences, sharecropping raised crop production and contributed to one of the more economically stable periods for agriculture in the post-Civil War South.
Industrialization fundamentally altered the production of goods around the world. It not only changed how goods were produced and consumed, as well as what was considered a "good," but it also had far-reaching effects on the global economy, social relations, and culture. Although it is common to speak of an "Industrial Revolution," the process of industrialization was a gradual one that unfolded over the course of the 18th and 19th centuries, eventually becoming global.
Industrialization fundamentally changed how goods were produced. [ENV-9] [SB-5] [ECON-2, 4, 5, 9] [SOC-2, 3, 4]
A variety of factors led to the rise of industrial production, including:
Europe’s location on the Atlantic Ocean
The geographical distribution of coal, iron, and timber
The development of machines, including steam engines and the internal combustion engine, made it possible to exploit vast new resources of energy stored in fossil fuels, specifically coal and oil. The fossil fuels revolution greatly increased the energy available to human societies.
Crash Course: Industrial Revolution
The development of the factory system concentrated labor in a single location and led to an increasing degree of specialization of labor.
Lecture: Industrial Revolution
As the new methods of industrial production became more common in parts of northwestern Europe, they spread to other parts of Europe and the United States, Russia, and Japan.
Lecture: Industrial Revolution
Data Analysis: Industrial Revolution
The “second Industrial Revolution” led to new methods in the production of steel, chemicals, electricity, and precision machinery during the second half of the 19th century.
Lecture: Industrial Revolution
New patterns of global trade and production developed and further integrated the global economy as industrialists sought raw materials and new markets for the increasing amount and array of goods produced in their factories. [ENV-9] [CUL-6] [SB-9] [ECON-3, 4, 12]
The need for raw materials for the factories and increased food supplies for the growing population in urban centers led to the growth of export economies around the world that specialized in mass producing natural resources. The profits from these raw materials were used to purchase finished goods.
The rapid development of steam-powered industrial production in European countries and the US contributed to those regions’ increase in their share of global manufacturing. While Middle Eastern and Asian countries continued to produce manufactured goods, these regions’ share in manufacturing declined.
Flexibility: shipbuilding in India and Southeast Asia; iron works in India; textile production in India and Egypt
Document: “Shipbuilding at Bombay”
The global economy of the 19th century expanded dramatically from the previous period due to increased exchanges of raw materials and finished goods in most parts of the world. Some commodities gave merchants and companies based in Europe and the US a distinct advantage.
Flexibility: opium produced in the Middle East or South Asia and exported to China; cotton grown in South Asia, Egypt, the Caribbean, or North America and exported to Great Britain and other European countries; palm oil produced in Sub-Saharan Africa and exported to European countries
Reading: Cotton Production in the 1800s
The need for specialized and limited metals for industrial production, as well as the global demand for gold, silver, and diamonds as forms of wealth, led to the development of extensive mining centers
Flexibility: copper mines in Mexico; gold and diamond mines in South Africa
Document: Gold Mines in the Indies, 1559, Philip II
To facilitate investment at all levels of industrial production, financiers developed and expanded various financial institutions. [CUL-3] [ECON-3, 4, 9, 11, 13]
The ideological inspiration for economic changes lies in the development of capitalism and classical liberalism associated with Adam Smith and John Stuart Mill.
Documents: JS Mill & Adam Smith
The global nature of trade and production contributed to the proliferation of large-scale transnational businesses that relied on various financial institutions.
Flexibility – transnational businesses: United Fruit Company; Hong Kong and Shanghai Banking Corporation
There were major developments in transportation and communication, including railroads, steam ships, telegraphs, and canals. [ENV-6] [ECON-12]
Lecture: Industrial Revolution
The development and spread of global capitalism led to a variety of responses. [CUL-3] [SB-1, 2, 4, 9] [ECON-3, 7, 9] [SOC-3]
In industrialized states, many workers organized themselves to improve working conditions, limit hours, and gain higher wages, while others opposed industrialists’ treatment of workers by promoting alternative visions of society, including Marxism.
Flexibility: utopian socialism; anarchism
Document: Communist Manifesto
In Qing China and the Ottoman Empire, some members of the government resisted economic change and attempted to maintain preindustrial forms of economic production, while other members of the Qing and Ottoman governments led reforms in imperial policies.
Flexibility: Tanzimat movement in Ottoman Empire; Self-Strengthening Movement in Qing Empire
Document: Tanzimat Decree
In a small number of states, governments promoted their own state-sponsored visions of industrialization.
Flexibility: economic reforms of Meiji Japan; development of factories and railroads in Tsarist Russia; Muhammad Ali’s development of cotton textile industry in Egypt
Document: Meiji Constitution of the Empire of Japan, 1889
Reading: Meiji Restoration
In response to the criticisms of industrial global capitalism, some governments mitigated the negative effects of industrial capitalism by promoting various types of reforms.
Flexibility: state pensions and public health in Germany; expansion of suffrage in Britain; public education in many nation-states
Lecture: Industrial Revolution
The ways in which people organized themselves into societies also underwent significant transformations in industrialized states due to the fundamental restructuring of the global economy. [ENV-5, 9] [SB-4, 9] [ECON-5] [SOC-1, 2, 3]
New social classes, including the middle class and industrial working class, developed.
Family dynamics, gender roles, and demographics changed in response to industrialization.
Document: Lowell Mill Girls
Rapid urbanization that accompanied global capitalism often led to unsanitary conditions.
Image Study: Urban Areas
Key Concept 5.2: Imperialism and Nation-State Formation
As states industrialized during this period, they also expanded their existing overseas colonies and established new types of colonies and transoceanic empires. Regional warfare and diplomacy both resulted in and were affected by this process of modern empire building. The process was led mostly by Europe, although not all states were affected equally, which led to an increase in European influence around the world. The United States and Japan also participated in this process. The growth of new empires challenged the power of existing land-based empires of Eurasia. New ideas about nationalism, race, gender, class, and culture also developed that facilitated the spread of transoceanic empires, as well as justified anti-imperial resistance and the formation of new national identities.
Industrializing powers established transoceanic empires. [ENV-9] [SB-1,2 , 3, 9, 10] [ECON-3] [SOC-7]
States with existing colonies strengthened their control over those colonies.
Flexibility: British in India; Dutch in Indonesia
CCOT Activity: British Influence in India
European states, as well as the Americans and the Japanese, established empires throughout Asia and the Pacific, while Spanish and Portuguese influence declined.
Flexibility – European states that established empires: British, Dutch, French, German, Russian
Many European states used both warfare and diplomacy to establish empires in Africa.
Flexibility: Britain in West Africa; Belgium in the Congo
In some parts of their empires, Europeans established settler colonies.
Flexibility: British in southern Africa, Australia, and New Zealand; French in Algeria
Reading: Convicts and British Colonies in Australia
In other parts of the world, industrialized states practiced economic imperialism.
Flexibility: British and French expanding influence in China through the Opium Wars; British and the United States investing heavily in Latin America
Comparison Activity: Chinese and Japanese Responses to the West
Imperialism influenced state formation and contraction around the world. [CUL-3] [SB_1, 2, 3, 4, 6, 10] [ECON-4] [SOC-7]
The expansion of US and European influence over Tokugawa Japan led to the emergence of Meiji Japan.
Crash Course: Meiji Japan
The US and Russia emulated European transoceanic imperialism by expanding their land borders and conquering neighboring territories.
Anti-imperial resistance took various forms including direct resistance within empire and creation of new states on the peripheries.
Flexibility: Cherokee Nation; Zulu Kingdom; establishment of independent states in the Balkans
Documents: The Massacre of Bulgarians and Bulgarian Political Attitudes
New racial ideologies, especially social Darwinism, facilitated and justified imperialism. [CUL-3, 4] [SB-4] [ECON-8] [SOC-6]
Document: White Man’s Burden
Key Concept 5.3: Nationalism, Revolution, and Reform
The 18th century marked the beginning of an intense period of revolution and rebellion against existing governments, and the establishment of new nation-states around the world. Enlightenment thought and the resistance of colonized peoples to imperial centers shaped this revolutionary activity. These rebellions sometimes resulted in the formation of new states and stimulated the development of new ideologies. These new ideas in turn further stimulated the revolutionary and anti-imperial tendencies of this period.
The rise and diffusion of Enlightenment thought that questioned established traditions in all areas of life often preceded revolutions and rebellions against existing governments. [CUL-2, 3, 4, 7] [SB-4, 7] [ECON-7] [SOC-1, 2, 3, 6, 7]
Enlightenment philosophers applied new ways of understanding the natural world to human relationships, encouraging observation and inference in all spheres of life; they also critiqued the role that religion played in public life, insisting on the importance of reason as opposed to revelation. Other Enlightenment philosophers developed new political ideas about the individual, natural rights, and the social contract.
The ideas of Enlightenment philosophers, as reflected in revolutionary documents – including the American Declaration of Independence, the French Declaration of the Rights of Man and Citizen, and Bolivar's Jamaica Letter – influenced resistance to existing political authority.
Documents: Declaration of Rights of Man & Jamaica Letter
Enlightenment ideas influenced many people to challenge existing notions of social relations, which contributed to the expansion of rights as seen in expanded suffrage, the abolition of slavery, and the end of serfdom.
Document: Olympe de Gouges: Declaration of the Rights of Woman
Beginning in the 18th century, peoples around the world developed a new sense of commonality based on language, religion, social customs, and territory. These newly imagined national communities linked this identity with the borders of the state, while governments used this idea to unite diverse populations. [CUL-2, 3, 4, 7] [SB-4] [SOC-3, 7]
Flexibility: German nationalism; Italian nationalism; Filipino nationalism; Argentinean nationalism
Flexibility: Marathas to Mughal Sultans; Taipings to Manchu or Qing dynasty
Reading & Document: Taiping Rebellion
American colonial subjects led a series of rebellions – including the American Revolution, the Haitian Revolution, and the Latin American independence movements –that facilitated the emergence of independent states in the United States, Haiti, and mainland Latin America. French subjects rebelled against their monarchy.
Comparative Essay: Evaluate the success of two revolutions: French, American, or Haitian.
Slave resistance challenged existing authorities in the Americas.
Flexibility: Maroon societies in the Caribbean or Brazil; North American slave resistance
Lecture: Haitian Revolution
Increasing questions about political authority and growing nationalism contributed to anti-colonial movements
Flexibility: Indian Revolt of 1857; Boxer Rebellion in Qing China
Documents: “The Indian Revolt” (The Atlantic, 1857) and An Account of the Opening of the Indian Mutiny at Meerut, 1857
Some of the rebellions were influenced by diverse religious ideas.
Flexibility: Ghost Dance in the US; Xhosa Cattle-Killing Movement in Southern Africa
Reading: Comparison: Rebellions Backed by Indigenous Religions
The global spread of European political and social thought and the increasing number of rebellions stimulated new transnational ideologies and solidarities. [CUL-3, 5] [SB-4, 8] [ECON-7] [SOC-1, 2, 3, 4]
Discontent with monarchist and imperial rule encouraged the development of political ideologies, including liberalism, socialism, and communism.
Documents: Communist Manifesto & Second Treatise on Government
Demands for women’s suffrage and an emergent feminism challenged political and gender hierarchies.
Flexibility: Mary Wollstonecraft's A Vindication of the Rights of Woman; Olympe de Gouges's "Declaration of the Rights of Women and the Female Citizen"; the resolutions passed at the Seneca Falls Conference in 1848
Documents: Wollstonecraft v. Rousseau; Declaration of the Rights of Woman
Key Concept 5.4: Global Migration
Migration patterns changed dramatically throughout this period, and the numbers of migrants increased significantly. These changes were closely connected to the development of transoceanic empires and a global capitalist economy. In some cases, people benefitted economically from migration, while other people were seen simply as commodities to be transported. Migration produced dramatically different sending and receiving societies, and presented challenges to governments in fostering national identities and regulating the flow of people.
Migration in many cases was influenced by changes in demography in both industrialized and unindustrialized societies that presented challenges to existing patterns of living. [ENV-3, 4, 5, 6, 7, 8] [SB-5] [ECON-2, 4, 12] [SOC-8]
Changes in food production and improved medical conditions contributed to a significant global rise in population in both urban and rural areas.
Because of the nature of new modes of transportation, both internal and external migrants increasingly relocated to cities. This pattern contributed to the significant global urbanization of the 19th century. The new methods of transportation also allowed for many migrants to return, periodically or permanently, to their home societies.
Flexibility: Japanese agricultural workers in the Pacific; Lebanese merchants in the Americas; Italian industrial workers in Argentina
Migrants relocated for a variety of reasons. [ENV-3, 5] [ECON-5, 6] [SOC-2, 8]
Many individuals chose freely to relocate, often in search of work.
The new global capitalist economy continued to rely on coerced and semi-coerced labor migration, including slavery, Chinese and Indian indentured servitude, and convict labor.
Reading: Convict Labor in Australia
The large-scale nature of migration, especially in the 19th century, produced a variety of consequences and reactions to the increasingly diverse societies on the part of migrants and the existing populations. [ENV-3, 4] [CUL-9] [SOC-1, 8]
Due to the physical nature of the labor in demand, migrants tended to be male, leaving women to take on new roles in the home society that had been formerly occupied by men.
Migrants often created ethnic enclaves in different parts of the world that helped transplant their culture into new environments and facilitated the development of migrant support networks.
Flexibility: Chinese in Southeast Asia, the Caribbean, South America, and North America; Indians in East and Southern Africa, the Caribbean, and Southeast Asia
Receiving societies did not always embrace immigrants, as seen in the various degrees of ethnic and racial prejudice and the ways states attempted to regulate the increased flow of people across their borders.
Flexibility: the Chinese Exclusion Acts; the White Australia Policy
Documents: Chinese Exclusion Act & White Australia Policy Political Cartoons
Unification of Germany
The Unification of Germany into the German Empire, a Prussia-dominated nation state with federal features, officially occurred on 18 January 1871 at the Versailles Palace's Hall of Mirrors in France. Princes of the German states gathered there to proclaim Wilhelm of Prussia as Emperor of the German Empire after the French capitulation in the Franco-Prussian War.
A confederated realm of German princedoms had been in existence for over a thousand years, dating to the Treaty of Verdun in 843. However, there was no German national identity in development as late as 1800, mainly due to the autonomous nature of the princely states; most inhabitants of the Holy Roman Empire, outside of those ruled by the emperor directly, identified themselves mainly with their prince, and not with the Empire as a whole. This became known as the "practice of kleinstaaterei", or "practice of small-statery". By the 19th century, transportation and communications improvements brought these regions closer together. The Empire was dissolved in 1806 with the abdication of Emperor Francis II during the Napoleonic Wars. Despite the legal, administrative, and political disruption caused by the dissolution, the German-speaking people of the old Empire had a common linguistic, cultural and legal tradition. European liberalism offered an intellectual basis for unification by challenging dynastic and absolutist models of social and political organization; its German manifestation emphasized the importance of tradition, education, and linguistic unity. Economically, the creation of the Prussian Zollverein (customs union) in 1818, and its subsequent expansion to include other states of the German Confederation, reduced competition between and within states. Emerging modes of transportation facilitated business and recreational travel, leading to contact and sometimes conflict between and among German-speakers from throughout Central Europe.
The model of diplomatic spheres of influence resulting from the Congress of Vienna in 1814–15 after the Napoleonic Wars endorsed Austrian dominance in Central Europe through Habsburg leadership of the German Confederation, designed to replace the Holy Roman Empire. The negotiators at Vienna took no account of Prussia's growing strength within the German states and declined to create a second coalition of those states under Prussia's influence, and so failed to foresee that Prussia would rise to challenge Austria for leadership of the German peoples. This German dualism presented two solutions to the problem of unification: Kleindeutsche Lösung, the small Germany solution (Germany without Austria), or Großdeutsche Lösung, the greater Germany solution (Germany with Austria).
Historians debate whether Otto von Bismarck—Minister President of Prussia—had a master plan to expand the North German Confederation of 1866 to include the remaining independent German states into a single entity or simply to expand the power of the Kingdom of Prussia. They conclude that factors in addition to the strength of Bismarck's Realpolitik led a collection of early modern polities to reorganize political, economic, military, and diplomatic relationships in the 19th century. Reaction to Danish and French nationalism provided foci for expressions of German unity. Military successes—especially those of Prussia—in three regional wars generated enthusiasm and pride that politicians could harness to promote unification. This experience echoed the memory of mutual accomplishment in the Napoleonic Wars, particularly in the War of Liberation of 1813–14. By establishing a Germany without Austria, the political and administrative unification in 1871 at least temporarily solved the problem of dualism.
German-speaking Central Europe in the early 19th century
Prior to 1803, German-speaking Central Europe included more than 300 political entities, most of which were part of the Holy Roman Empire or the extensive Habsburg hereditary dominions. They ranged in size from the small and complex territories of the princely Hohenlohe family branches to sizable, well-defined territories such as the Kingdoms of Bavaria and Prussia. Their governance varied: they included free imperial cities, also of different sizes, such as the powerful Augsburg and the minuscule Weil der Stadt; ecclesiastical territories, also of varying sizes and influence, such as the wealthy Abbey of Reichenau and the powerful Archbishopric of Cologne; and dynastic states such as Württemberg. These lands (or parts of them—both the Habsburg domains and Hohenzollern Prussia also included territories outside the Empire structures) made up the territory of the Holy Roman Empire, which at times included more than 1,000 entities. Since the 15th century, with few exceptions, the Empire's Prince-electors had chosen successive heads of the House of Habsburg to hold the title of Holy Roman Emperor. Among the German-speaking states, the Holy Roman Empire's administrative and legal mechanisms provided a venue to resolve disputes between peasants and landlords, between jurisdictions, and within jurisdictions. Through the organization of imperial circles (Reichskreise), groups of states consolidated resources and promoted regional and organizational interests, including economic cooperation and military protection.
The War of the Second Coalition (1799–1802) resulted in the defeat of the imperial and allied forces by Napoleon Bonaparte. The Treaty of Lunéville (1801) and the Mediatization of 1803 secularized the ecclesiastical principalities and abolished most free imperial cities; these territories, along with their inhabitants, were absorbed by dynastic states. This transfer particularly enhanced the territories of Württemberg and Baden. In 1806, having dictated the Treaty of Pressburg to defeated Austria the previous December, Napoleon presided over the creation of the Confederation of the Rhine, which, inter alia, provided for the mediatization of over a hundred petty princes and counts and the absorption of their territories, as well as those of hundreds of imperial knights, by the Confederation's member-states. Following the formal secession of these member-states from the Empire, the Emperor dissolved the Holy Roman Empire; Napoleon's successful invasion of Prussia and its defeat at the joint battles of Jena-Auerstedt later that year confirmed French dominance over German-speaking Central Europe.
Rise of German nationalism under the Napoleonic System
Under the hegemony of the French Empire (1804–1814), popular German nationalism thrived in the reorganized German states. Due in part to the shared experience, albeit under French dominance, various justifications emerged to identify "Germany" as a single state. For the German philosopher Johann Gottlieb Fichte,
The first, original, and truly natural boundaries of states are beyond doubt their internal boundaries. Those who speak the same language are joined to each other by a multitude of invisible bonds by nature herself, long before any human art begins; they understand each other and have the power of continuing to make themselves understood more and more clearly; they belong together and are by nature one and an inseparable whole.
A common language may have been seen to serve as the basis of a nation, but as contemporary historians of 19th-century Germany noted, it took more than linguistic similarity to unify these several hundred polities. The experience of German-speaking Central Europe during the years of French hegemony contributed to a sense of common cause to remove the French invaders and reassert control over their own lands. The exigencies of Napoleon's campaigns in Poland (1806–07), the Iberian Peninsula, western Germany, and his disastrous invasion of Russia in 1812 disillusioned many Germans, princes and peasants alike. Napoleon's Continental System nearly ruined the Central European economy. The invasion of Russia included nearly 125,000 troops from German lands, and the loss of that army encouraged many Germans, both high- and low-born, to envision a Central Europe free of Napoleon's influence. The creation of student militias such as the Lützow Free Corps exemplified this tendency.
The debacle in Russia loosened the French grip on the German princes. In 1813, Napoleon mounted a campaign in the German states to bring them back into the French orbit; the subsequent War of Liberation culminated in the great Battle of Leipzig, also known as the Battle of Nations. In October 1813, more than 500,000 combatants engaged in ferocious fighting over three days, making it the largest European land battle of the 19th century. The engagement resulted in a decisive victory for the Coalition of Austria, Prussia, Russia, Saxony, and Sweden, and it ended French power east of the Rhine. Success encouraged the Coalition forces to pursue Napoleon across the Rhine; his army and his government collapsed, and the victorious Coalition incarcerated Napoleon on Elba. During the brief Napoleonic restoration known as the 100 Days of 1815, forces of the Seventh Coalition, including an Anglo-Allied army under the command of the Duke of Wellington and a Prussian army under the command of Gebhard von Blücher, were victorious at Waterloo (18 June 1815). The critical role played by Blücher's troops, especially after having to retreat from the field at Ligny the day before, helped to turn the tide of combat against the French. The Prussian cavalry pursued the defeated French in the evening of 18 June, sealing the allied victory. From the German perspective, the actions of Blücher's troops at Waterloo, and the combined efforts at Leipzig, offered a rallying point of pride and enthusiasm. This interpretation became a key building block of the Borussian myth expounded by the pro-Prussian nationalist historians later in the 19th century.
Reorganization of Central Europe and the rise of German dualism
After Napoleon's defeat, the Congress of Vienna established a new European political-diplomatic system based on the balance of power. This system reorganized Europe into spheres of influence, which, in some cases, suppressed the aspirations of the various nationalities, including the Germans and Italians. Generally, an enlarged Prussia and the 38 other states consolidated from the mediatized territories of 1803 were confederated within the Austrian Empire's sphere of influence. The Congress established a loose German Confederation (1815–1866), headed by Austria, with a "Federal Diet" (called the Bundestag or Bundesversammlung, an assembly of appointed leaders) that met in the city of Frankfurt am Main. In recognition of the imperial position traditionally held by the Habsburgs, the emperors of Austria became the titular presidents of this parliament. Problematically, the built-in Austrian dominance failed to take into account Prussia's 18th-century emergence in Imperial politics. Ever since the Prince-Elector of Brandenburg had made himself King in Prussia at the beginning of that century, the Hohenzollern domains had steadily increased through war and inheritance. Prussia's consolidated strength had become especially apparent during the War of the Austrian Succession and the Seven Years' War under Frederick the Great. As Maria Theresa and Joseph II tried to restore Habsburg hegemony in the Holy Roman Empire, Frederick countered with the creation of the Fürstenbund (Union of Princes) in 1785. Austrian-Prussian dualism lay firmly rooted in old Imperial politics. Those balance-of-power maneuvers were epitomized by the War of the Bavarian Succession, known among common folk as the "Potato War". Even after the end of the Holy Roman Empire, this competition influenced the growth and development of nationalist movements in the 19th century.
Problems of reorganization
Despite the nomenclature of Diet (Assembly or Parliament), this institution should in no way be construed as a broadly, or popularly, elected group of representatives. Many of the states did not have constitutions, and those that did, such as the Duchy of Baden, based suffrage on strict property requirements, which effectively limited the franchise to a small portion of the male population. Furthermore, this impractical solution did not reflect the new status of Prussia in the overall scheme. Although the Prussian army had been dramatically defeated in the 1806 Battle of Jena-Auerstedt, it had made a spectacular comeback at Waterloo. Consequently, Prussian leaders expected to play a pivotal role in German politics.
The surge of German nationalism, stimulated by the experience of Germans in the Napoleonic period and initially allied with liberalism, shifted political, social, and cultural relationships within the German states. The Burschenschaft student organizations and popular demonstrations, such as those held at Wartburg Castle in October 1817, contributed to a growing sense of unity among German speakers of Central Europe. Furthermore, implicit and sometimes explicit promises made during the German Campaign of 1813 engendered an expectation of popular sovereignty and widespread participation in the political process, promises that largely went unfulfilled once peace had been achieved. Agitation by student organizations led such conservative leaders as Klemens Wenzel, Prince von Metternich, to fear the rise of national sentiment; the assassination of German dramatist August von Kotzebue in March 1819 by a radical student seeking unification was followed on 20 September 1819 by the proclamation of the Carlsbad Decrees, which hampered the intellectual leadership of the nationalist movement.
Metternich was able to harness conservative outrage at the assassination to consolidate legislation that would further limit the press and constrain the rising liberal and nationalist movements. Consequently, these decrees drove the Burschenschaften underground, restricted the publication of nationalist materials, expanded censorship of the press and private correspondence, and limited academic speech by prohibiting university professors from encouraging nationalist discussion. The decrees were the subject of Johann Joseph von Görres's pamphlet Teutschland [archaic: Deutschland] und die Revolution (Germany and the Revolution) (1820), in which he concluded that it was both impossible and undesirable to repress the free utterance of public opinion by reactionary measures.
Economic collaboration: the customs union
Another institution key to unifying the German states, the Zollverein, helped to create a larger sense of economic unification. Initially conceived by the Prussian Finance Minister Hans, Count von Bülow, as a Prussian customs union in 1818, the Zollverein linked the many Prussian and Hohenzollern territories. Over the ensuing thirty years (and more) other German states joined. The Union helped to reduce protectionist barriers between the German states, especially improving the transport of raw materials and finished goods, making it both easier to move goods across territorial borders and less costly to buy, transport, and sell raw materials. This was particularly important for the emerging industrial centers, most of which were located in the Prussian regions of the Rhineland, the Saar, and the Ruhr valleys. States more distant from the coast joined the Customs Union earlier. Not being a member mattered more for the states of south Germany, since the external tariff of the Customs Union prevented customs-free access to the coast (which gave access to international markets). Thus, by 1836, all states to the south of Prussia had joined the Customs Union, except Austria.
In contrast, the coastal states already had barrier-free access to international trade and did not want consumers and producers burdened with the import duties they would pay if they were within the Zollverein customs border. Hanover on the north coast formed its own customs union – the "Tax Union" or Steuerverein – with Brunswick in 1834 and with Oldenburg in 1836. The external tariffs on finished goods and overseas raw materials were below the rates of the Zollverein. Brunswick joined the Zollverein in 1842, while Hanover and Oldenburg finally joined in 1854. After the Austro-Prussian War of 1866, Schleswig, Holstein and Lauenburg were annexed by Prussia and thus also brought into the Customs Union. The two Mecklenburg states and the city-states of Hamburg and Bremen joined late because they were reliant on international trade; the Mecklenburgs joined in 1867, while Bremen and Hamburg joined in 1888.
Roads and railways
By the early 19th century, German roads had deteriorated to an appalling extent. Travelers, both foreign and local, complained bitterly about the state of the Heerstraßen, the military roads previously maintained for the ease of moving troops. As German states ceased to be a military crossroads, however, the roads improved; the length of hard-surfaced roads in Prussia increased from 3,800 kilometres (2,400 mi) in 1816 to 16,600 kilometres (10,300 mi) in 1852, helped in part by the invention of macadam. By 1835, Heinrich von Gagern wrote that roads were the "veins and arteries of the body politic..." and predicted that they would promote freedom, independence and prosperity. As people moved around, they came into contact with others, on trains, at hotels, in restaurants, and for some, at fashionable resorts such as the spa in Baden-Baden. Water transportation also improved. The blockades on the Rhine had been removed by Napoleon's orders, but by the 1820s, steam engines freed riverboats from the cumbersome system of men and animals that towed them upstream. By 1846, 180 steamers plied German rivers and Lake Constance, and a network of canals extended from the Danube, the Weser, and the Elbe rivers.
As important as these improvements were, they could not compete with the impact of the railway. German economist Friedrich List called the railways and the Customs Union "Siamese Twins", emphasizing their important relationship to one another. He was not alone: the poet August Heinrich Hoffmann von Fallersleben wrote a poem in which he extolled the virtues of the Zollverein, which he began with a list of commodities that had contributed more to German unity than politics or diplomacy. Historians of the Second Empire later regarded the railways as the first indicator of a unified state; the patriotic novelist, Wilhelm Raabe, wrote: "The German empire was founded with the construction of the first railway..." Not everyone greeted the iron monster with enthusiasm. The Prussian king Frederick William III saw no advantage in traveling from Berlin to Potsdam a few hours faster, and Metternich refused to ride in one at all. Others wondered if the railways were an "evil" that threatened the landscape: Nikolaus Lenau's 1838 poem An den Frühling (To Spring) bemoaned the way trains destroyed the pristine quietude of German forests.
The Bavarian Ludwig Railway, which was the first passenger or freight rail line in the German lands, connected Nuremberg and Fürth in 1835. Although it was only 6 kilometres (3.7 mi) long and operated only in daylight, it proved both profitable and popular. Within three years, 141 kilometres (88 mi) of track had been laid; by 1840, 462 kilometres (287 mi); and by 1860, 11,157 kilometres (6,933 mi). Lacking a geographically central organizing feature (such as a national capital), the rails were laid in webs, linking towns and markets within regions, regions within larger regions, and so on. As the rail network expanded, it became cheaper to transport goods: from 18 Pfennigs per ton per kilometer in 1840 to five Pfennigs in 1870. The effects of the railway were immediate. For example, raw materials could travel up and down the Ruhr Valley without having to unload and reload. Railway lines encouraged economic activity by creating demand for commodities and by facilitating commerce. In 1850, inland shipping carried three times more freight than railroads; by 1870, the situation was reversed, and railroads carried four times more. Rail travel changed how cities looked and how people traveled. Its impact reached throughout the social order, affecting the highest born to the lowest. Although some of the outlying German provinces were not serviced by rail until the 1890s, the majority of the population, manufacturing centers, and production centers were linked to the rail network by 1865.
Geography, patriotism and language
As travel became easier, faster, and less expensive, Germans started to see unity in factors other than their language. The Brothers Grimm, who compiled a massive dictionary known as The Grimm, also assembled a compendium of folk tales and fables, which highlighted the story-telling parallels between different regions. Karl Baedeker wrote guidebooks to different cities and regions of Central Europe, indicating places to stay, sites to visit, and giving a short history of castles, battlefields, famous buildings, and famous people. His guides also included distances, roads to avoid, and hiking paths to follow.
The words of August Heinrich Hoffmann von Fallersleben expressed not only the linguistic unity of the German people but also their geographic unity. In Deutschland, Deutschland über Alles, officially called Das Lied der Deutschen ("The Song of the Germans"), Fallersleben called upon sovereigns throughout the German states to recognize the unifying characteristics of the German people. Such other patriotic songs as "Die Wacht am Rhein" ("The Watch on the Rhine") by Max Schneckenburger began to focus attention on geographic space, not limiting "Germanness" to a common language. Schneckenburger wrote "The Watch on the Rhine" in a specific patriotic response to French assertions that the Rhine was France's "natural" eastern boundary. In the refrain, "Dear fatherland, dear fatherland, put your mind to rest / The watch stands true on the Rhine", and in such other patriotic poetry as Nicholaus Becker's "Das Rheinlied" ("The Rhine"), Germans were called upon to defend their territorial homeland. In 1807, Alexander von Humboldt argued that national character reflected geographic influence, linking landscape to people. Concurrent with this idea, movements to preserve old fortresses and historic sites emerged, and these particularly focused on the Rhineland, the site of so many confrontations with France and Spain.
Vormärz and 19th-century liberalism
The period of Austrian and Prussian police-states and vast censorship before the Revolutions of 1848 in Germany later became widely known as the Vormärz, the "before March", referring to March 1848. During this period, European liberalism gained momentum; its agenda included economic, social, and political issues. Most European liberals in the Vormärz sought unification under nationalist principles, promoted the transition to capitalism, and pressed for the expansion of male suffrage, among other goals. Their "radicalness" depended upon where they stood on the spectrum of male suffrage: the wider the definition of suffrage, the more radical.
Hambach Festival: liberal nationalism and conservative response
Despite considerable conservative reaction, ideas of unity joined with notions of popular sovereignty in German-speaking lands. The Hambach Festival (Hambacher Fest) in May 1832 was attended by a crowd of more than 30,000. Promoted as a county fair, its participants celebrated fraternity, liberty, and national unity. Celebrants gathered in the town below and marched to the ruins of Hambach Castle on the heights above the small town of Hambach, in the Palatinate province of Bavaria. Carrying flags, beating drums, and singing, the participants took the better part of the morning and mid-day to arrive at the castle grounds, where they listened to speeches by nationalist orators from across the conservative to radical political spectrum. The overall content of the speeches suggested a fundamental difference between the German nationalism of the 1830s and the French nationalism of the July Revolution: the focus of German nationalism lay in the education of the people; once the populace was educated as to what was needed, they would accomplish it. The Hambach rhetoric emphasized the overall peaceable nature of German nationalism: the point was not to build barricades, a very "French" form of nationalism, but to build emotional bridges between groups.
As he had done in 1819, after the Kotzebue assassination, Metternich used the popular demonstration at Hambach to push conservative social policy. The "Six Articles" of 28 June 1832 primarily reaffirmed the principle of monarchical authority. On 5 July, the Frankfurt Diet voted for an additional 10 articles, which reiterated existing rules on censorship, restricted political organizations, and limited other public activity. Furthermore, the member states agreed to send military assistance to any government threatened by unrest. Prince Wrede led half of the Bavarian army to the Palatinate to "subdue" the province. Several hapless Hambach speakers were arrested, tried and imprisoned; one, Karl Heinrich Brüggemann (1810–1887), a law student and representative of the secretive Burschenschaft, was sent to Prussia, where he was first condemned to death, but later pardoned.
Liberalism and the response to economic problems
Several other factors complicated the rise of nationalism in the German states. The man-made factors included political rivalries between members of the German confederation, particularly between the Austrians and the Prussians, and socio-economic competition among the commercial and merchant interests and the old land-owning and aristocratic interests. Natural factors included widespread drought in the early 1830s, and again in the 1840s, and a food crisis in the 1840s. Further complications emerged as a result of a shift in industrialization and manufacturing; as people sought jobs, they left their villages and small towns to work during the week in the cities, returning for a day and a half on weekends.
The economic, social and cultural dislocation of ordinary people, the economic hardship of an economy in transition, and the pressures of meteorological disasters all contributed to growing problems in Central Europe. The failure of most of the governments to deal with the food crisis of the mid-1840s, caused by the potato blight (related to the Great Irish Famine) and several seasons of bad weather, encouraged many to think that the rich and powerful had no interest in their problems. Those in authority were concerned about the growing unrest, political and social agitation among the working classes, and the disaffection of the intelligentsia. No amount of censorship, fines, imprisonment, or banishment, it seemed, could stem the criticism. Furthermore, it was becoming increasingly clear that both Austria and Prussia wanted to be the leaders in any resulting unification; each would inhibit the drive of the other to achieve unification.
First efforts at unification
Crucially, both the Wartburg rally in 1817 and the Hambach Festival in 1832 had lacked any clear-cut program of unification. At Hambach, the positions of the many speakers illustrated their disparate agendas. Held together only by the idea of unification, their notions of how to achieve this did not include specific plans but instead rested on the nebulous idea that the Volk (the people), if properly educated, would bring about unification on their own. Grand speeches, flags, exuberant students, and picnic lunches did not translate into a new political, bureaucratic, or administrative apparatus. While many spoke about the need for a constitution, no such document appeared from the discussions. In 1848, nationalists sought to remedy that problem.
German revolutions of 1848 and the Frankfurt Parliament
The widespread—mainly German—revolutions of 1848–49 sought unification of Germany under a single constitution. The revolutionaries pressured various state governments, particularly those in the Rhineland, for a parliamentary assembly that would have the responsibility to draft a constitution. Ultimately, many of the left-wing revolutionaries hoped this constitution would establish universal male suffrage, a permanent national parliament, and a unified Germany, possibly under the leadership of the Prussian king. This seemed to be the most logical course since Prussia was the strongest of the German states, as well as the largest in geographic size. Generally, center-right revolutionaries sought some kind of expanded suffrage within their states and potentially, a form of loose unification. Their pressure resulted in a variety of elections, based on different voting qualifications, such as the Prussian three-class franchise, which granted to some electoral groups—chiefly the wealthier, landed ones—greater representative power.
On 27 March 1849, the Frankfurt Parliament passed the Paulskirchenverfassung (Constitution of St. Paul's Church) and offered the title of Kaiser (Emperor) to the Prussian king Frederick William IV the next month. He refused for a variety of reasons. Publicly, he replied that he could not accept a crown without the consent of the actual states, by which he meant the princes. Privately, he feared opposition from the other German princes and military intervention from Austria or Russia. He also held a fundamental distaste for the idea of accepting a crown from a popularly elected parliament: he would not accept a crown of "clay". Despite franchise requirements that often perpetuated many of the problems of sovereignty and political participation liberals sought to overcome, the Frankfurt Parliament did manage to draft a constitution and reach an agreement on the kleindeutsch solution. While the liberals failed to achieve the unification they sought, they did manage to gain a partial victory by working with the German princes on many constitutional issues and collaborating with them on reforms.
1848 and the Frankfurt Parliament in retrospective analysis
Scholars of German history have engaged in decades of debate over how the successes and failures of the Frankfurt Parliament contribute to the historiographical explanations of German nation building. One school of thought, which emerged after the First World War and gained momentum in the aftermath of World War II, maintains that the failure of German liberals in the Frankfurt Parliament led to a bourgeois compromise with conservatives (especially the conservative Junker landholders), which subsequently led to the so-called Sonderweg (distinctive path) of 20th-century German history. Failure to achieve unification in 1848, this argument holds, resulted in the late formation of the nation-state in 1871, which in turn delayed the development of positive national values. Hitler often called on the German public to sacrifice all for the cause of their great nation, but his regime did not create German nationalism: it merely capitalized on an intrinsic cultural value of German society that remains prevalent to this day. Furthermore, this argument maintains, the "failure" of 1848 reaffirmed latent aristocratic longings among the German middle class; consequently, this group never developed a self-conscious program of modernization.
More recent scholarship has rejected this idea, claiming that Germany did not have an actual "distinctive path" any more than any other nation, a historiographic idea known as exceptionalism. Instead, modern historians claim 1848 saw specific achievements by the liberal politicians. Many of their ideas and programs were later incorporated into Bismarck's social programs (e.g., social insurance, education programs, and wider definitions of suffrage). In addition, the notion of a distinctive path relies upon the underlying assumption that some other nation's path (in this case, the United Kingdom's) is the accepted norm. This new argument further challenges the norms of the British-centric model of development: studies of national development in Britain and other "normal" states (e.g., France or the United States) have suggested that even in these cases, the modern nation-state did not develop evenly. Nor did it develop particularly early, being rather a largely mid-to-late-19th-century phenomenon. Since the end of the 1990s, this view has become widely accepted, although some historians still find the Sonderweg analysis helpful in understanding the period of National Socialism.
Problem of spheres of influence: The Erfurt Union and the Punctation of Olmütz
After the Frankfurt Parliament disbanded, Frederick William IV, under the influence of General Joseph Maria von Radowitz, supported the establishment of the Erfurt Union—a federation of German states, excluding Austria—by the free agreement of the German princes. This limited union under Prussia would have almost entirely eliminated Austrian influence on the other German states. Combined diplomatic pressure from Austria and Russia (a guarantor of the 1815 agreements that established European spheres of influence) forced Prussia to relinquish the idea of the Erfurt Union at a meeting in the small town of Olmütz in Moravia. In November 1850, the Prussians—specifically Radowitz and Frederick William—agreed to the restoration of the German Confederation under Austrian leadership. This became known as the Punctation of Olmütz, but among Prussians it was known as the "Humiliation of Olmütz."
Although seemingly minor events, the Erfurt Union proposal and the Punctation of Olmütz brought the problems of influence in the German states into sharp focus. The question became not a matter of if but rather when unification would occur, and when was contingent upon strength. One of the former Frankfurt Parliament members, Johann Gustav Droysen, summed up the problem:
We cannot conceal the fact that the whole German question is a simple alternative between Prussia and Austria. In these states, German life has its positive and negative poles—in the former, all the interests [that] are national and reformative, in the latter, all that are dynastic and destructive. The German question is not a constitutional question but a question of power; and the Prussian monarchy is now wholly German, while that of Austria cannot be.
Unification under these conditions raised a basic diplomatic problem. The possibility of German (or Italian) unification would overturn the overlapping spheres of influence system created in 1815 at the Congress of Vienna. The principal architects of this convention, Metternich, Castlereagh, and Tsar Alexander (with his foreign secretary Count Karl Nesselrode), had conceived of and organized a Europe balanced and guaranteed by four "great powers": Great Britain, France, Russia, and Austria, with each power having a geographic sphere of influence. France's sphere included the Iberian Peninsula and a share of influence in the Italian states. Russia's included the eastern regions of Central Europe and a balancing influence in the Balkans. Austria's sphere expanded throughout much of the Central European territories formerly held by the Holy Roman Empire. Britain's sphere was the rest of the world, especially the seas.
This sphere of influence system depended upon the fragmentation of the German and Italian states, not their consolidation. Consequently, a German nation united under one banner presented significant questions. There was no readily applicable definition for who the German people would be or how far the borders of a German nation would stretch. There was also uncertainty as to who would best lead and defend "Germany", however it was defined. Different groups offered different solutions to this problem. In the Kleindeutschland ("Lesser Germany") solution, the German states would be united under the leadership of the Prussian Hohenzollerns; in the Grossdeutschland ("Greater Germany") solution, the German states would be united under the leadership of the Austrian Habsburgs. This controversy, the latest phase of the German dualism debate that had dominated the politics of the German states and Austro-Prussian diplomacy since the 1701 creation of the Kingdom of Prussia, would come to a head during the following twenty years.
External expectations of a unified Germany
Other nationalists had high hopes for the German unification movement, and the failure to achieve lasting German unification after 1850 seemed to set the national movement back. Revolutionaries associated national unification with progress. As Giuseppe Garibaldi wrote to German revolutionary Karl Blind on 10 April 1865, "The progress of humanity seems to have come to a halt, and you with your superior intelligence will know why. The reason is that the world lacks a nation [that] possesses true leadership. Such leadership, of course, is required not to dominate other peoples but to lead them along the path of duty, to lead them toward the brotherhood of nations where all the barriers erected by egoism will be destroyed." Garibaldi looked to Germany for the "kind of leadership [that], in the true tradition of medieval chivalry, would devote itself to redressing wrongs, supporting the weak, sacrificing momentary gains and material advantage for the much finer and more satisfying achievement of relieving the suffering of our fellow men. We need a nation courageous enough to give us a lead in this direction. It would rally to its cause all those who are suffering wrong or who aspire to a better life and all those who are now enduring foreign oppression."
German unification had also been viewed as a prerequisite for the creation of a European federation, which Giuseppe Mazzini and other European patriots had been promoting for more than three decades:
In the spring of 1834, while at Berne, Mazzini and a dozen refugees from Italy, Poland and Germany founded a new association with the grandiose name of Young Europe. Its basic, and equally grandiose idea, was that, as the French Revolution of 1789 had enlarged the concept of individual liberty, another revolution would now be needed for national liberty; and his vision went further because he hoped that in the no doubt distant future free nations might combine to form a loosely federal Europe with some kind of federal assembly to regulate their common interests. [...] His intention was nothing less than to overturn the European settlement agreed [to] in 1815 by the Congress of Vienna, which had reestablished an oppressive hegemony of a few great powers and blocked the emergence of smaller nations. [...] Mazzini hoped, but without much confidence, that his vision of a league or society of independent nations would be realized in his own lifetime. In practice Young Europe lacked the money and popular support for more than a short-term existence. Nevertheless he always remained faithful to the ideal of a united continent for which the creation of individual nations would be an indispensable preliminary.
Prussia's growing strength: Realpolitik
King Frederick William IV suffered a stroke in 1857 and could no longer rule. This led to his brother William becoming Prince Regent of the Kingdom of Prussia in 1858. Meanwhile, Helmuth von Moltke had become chief of the Prussian General Staff in 1857, and Albrecht von Roon would become Prussian Minister of War in 1859. This shuffling of authority within the Prussian military establishment would have important consequences. Von Roon and William (who took an active interest in military structures) began reorganizing the Prussian army, while Moltke redesigned the strategic defense of Prussia by streamlining operational command. Prussian army reforms (especially how to pay for them) caused a constitutional crisis beginning in 1860 because both parliament and William—via his minister of war—wanted control over the military budget. William, crowned King Wilhelm I in 1861, appointed Otto von Bismarck to the position of Minister-President of Prussia in 1862. Bismarck resolved the crisis in favor of the war minister.
The Crimean War of 1854–55 and the Italian War of 1859 disrupted relations among Great Britain, France, Austria, and Russia. In the aftermath of this disarray, the convergence of von Moltke's operational redesign, von Roon and Wilhelm's army restructure, and Bismarck's diplomacy influenced the realignment of the European balance of power. Their combined agendas established Prussia as the leading German power through a combination of foreign diplomatic triumphs—backed up by the possible use of Prussian military might—and an internal conservatism tempered by pragmatism, which came to be known as Realpolitik.
Bismarck expressed the essence of Realpolitik in his subsequently famous "Blood and Iron" speech to the Budget Committee of the Prussian Chamber of Deputies on 30 September 1862, shortly after he became Minister President: "The great questions of the time will not be resolved by speeches and majority decisions—that was the great mistake of 1848 and 1849—but by iron and blood." Bismarck's words, "iron and blood" (or "blood and iron", as often attributed), have often been misappropriated as evidence of a German lust for blood and power. First, the phrase from his speech "the great questions of time will not be resolved by speeches and majority decisions" is often interpreted as a repudiation of the political process—a repudiation Bismarck did not himself advocate. Second, his emphasis on blood and iron did not imply simply the unrivaled military might of the Prussian army but rather two important aspects: the ability of the assorted German states to produce iron and other related war materials and the willingness to use those war materials if necessary.
Founding a unified state
There is, in political geography, no Germany proper to speak of. There are Kingdoms and Grand Duchies, and Duchies and Principalities, inhabited by Germans, and each [is] separately ruled by an independent sovereign with all the machinery of State. Yet there is a natural undercurrent tending to a national feeling and toward a union of the Germans into one great nation, ruled by one common head as a national unit.
By 1862, when Bismarck made his speech, the idea of a German nation-state in the peaceful spirit of Pan-Germanism had shifted from the liberal and democratic character of 1848 to accommodate Bismarck's more conservative Realpolitik. Bismarck sought to link a unified state to the Hohenzollern dynasty, which for some historians remains one of Bismarck's primary contributions to the creation of the German Empire in 1871. While the conditions of the treaties binding the various German states to one another prohibited Bismarck from taking unilateral action, the politician and diplomat in him also recognized that unilateral action would in any case be impractical. To get the German states to unify, Bismarck needed a single, outside enemy that would declare war on one of the German states first, thus providing a casus belli to rally all Germans behind. This opportunity arose with the outbreak of the Franco-Prussian War in 1870. Historians have long debated Bismarck's role in the events leading up to the war. The traditional view, promulgated in large part by late 19th- and early 20th-century pro-Prussian historians, maintains that Bismarck's intent was always German unification. Post-1945 historians, however, see more short-term opportunism and cynicism in Bismarck's manipulation of the circumstances to create a war, rather than a grand scheme to unify a nation-state. Regardless of motivation, by manipulating events of 1866 and 1870, Bismarck demonstrated the political and diplomatic skill that had caused Wilhelm to turn to him in 1862.
Three episodes proved fundamental to the unification of Germany. First, the death without male heirs of Frederick VII of Denmark led to the Second War of Schleswig in 1864. Second, the unification of Italy provided Prussia an ally against Austria in the Austro-Prussian War of 1866. Finally, France—fearing Hohenzollern encirclement—declared war on Prussia in 1870, resulting in the Franco-Prussian War. Through a combination of Bismarck's diplomacy and political leadership, von Roon's military reorganization, and von Moltke's military strategy, Prussia demonstrated that none of the European signatories of the 1815 peace treaty could guarantee Austria's sphere of influence in Central Europe, thus achieving Prussian hegemony in Germany and ending the dualism debate.
The Schleswig-Holstein Question
The first episode in the saga of German unification under Bismarck came with the Schleswig-Holstein Question. On 15 November 1863, Christian IX became king of Denmark and duke of Schleswig, Holstein, and Lauenburg, which the Danish king held in personal union. On 18 November 1863, he signed the Danish November Constitution, which replaced the Law of Sjælland and the Law of Jutland, meaning that the new constitution applied to the Duchy of Schleswig. The German Confederation saw this act as a violation of the London Protocol of 1852, which emphasized the status of the Kingdom of Denmark as distinct from the three independent duchies. The German Confederation could use the ethnicities of the area as a rallying cry: Holstein and Lauenburg were largely of German origin and spoke German in everyday life, while Schleswig had a significant Danish population and history. Diplomatic attempts to have the November Constitution repealed collapsed, and fighting began when Prussian and Austrian troops crossed the Eider river on 1 February 1864.
Initially, the Danes attempted to defend their country using an ancient earthen wall known as the Danevirke, but this proved futile. The Danes were no match for the combined Prussian and Austrian forces and their modern armaments. The needle gun, one of the first bolt action rifles to be used in conflict, aided the Prussians in both this war and the Austro-Prussian War two years later. The rifle enabled a Prussian soldier to fire five shots while lying prone, while its muzzle-loading counterpart could only fire one shot and had to be reloaded while standing. The Second Schleswig War resulted in victory for the combined armies of Prussia and Austria, and the two countries won control of Schleswig and Holstein in the concluding Peace of Vienna, signed on 30 October 1864.
War between Austria and Prussia, 1866
The second episode in Bismarck's unification efforts occurred in 1866. In concert with the newly formed Italy, Bismarck created a diplomatic environment in which Austria declared war on Prussia. The dramatic prelude to the war occurred largely in Frankfurt, where the two powers claimed to speak for all the German states in the parliament. In April 1866, the Prussian representative in Florence signed a secret agreement with the Italian government, committing each state to assist the other in a war against Austria. The next day, the Prussian delegate to the Frankfurt assembly presented a plan calling for a national constitution, a directly elected national Diet, and universal suffrage. German liberals were justifiably skeptical of this plan, having witnessed Bismarck's difficult and ambiguous relationship with the Prussian Landtag (State Parliament), a relationship characterized by Bismarck's cajoling and riding roughshod over the representatives. These skeptics saw the proposal as a ploy to enhance Prussian power rather than a progressive agenda of reform.
The debate over the proposed national constitution became moot when news of Italian troop movements in Tyrol and near the Venetian border reached Vienna in April 1866. The Austrian government ordered partial mobilization in the southern regions; the Italians responded by ordering full mobilization. Despite calls for rational thought and action, Italy, Prussia, and Austria continued to rush toward armed conflict. On 1 May, Wilhelm gave von Moltke command over the Prussian armed forces, and the next day he began full-scale mobilization.
In the Diet, the group of middle-sized states, known as Mittelstaaten (Bavaria, Württemberg, the grand duchies of Baden and Hesse, and the duchies of Saxony–Weimar, Saxony–Meiningen, Saxony–Coburg, and Nassau), supported complete demobilization within the Confederation. These individual governments rejected the potent combination of enticing promises and subtle (or outright) threats Bismarck used to try to gain their support against the Habsburgs. The Prussian war cabinet understood that its only supporters among the German states against the Habsburgs were two small principalities bordering on Brandenburg that had little military strength or political clout: the Grand Duchies of Mecklenburg-Schwerin and Mecklenburg-Strelitz. They also understood that Prussia's only ally abroad was Italy.
Opposition to Prussia's strong-armed tactics surfaced in other social and political groups. Throughout the German states, city councils, liberal parliamentary members who favored a unified state, and chambers of commerce—which would see great benefits from unification—opposed any war between Prussia and Austria. They believed any such conflict would only serve the interests of royal dynasties. Their own interests, which they understood as "civil" or "bourgeois", seemed irrelevant. Public opinion also opposed Prussian domination. Catholic populations along the Rhine—especially in such cosmopolitan regions as Cologne and in the heavily populated Ruhr Valley—continued to support Austria. By late spring, most important states opposed Berlin's effort to reorganize the German states by force. The Prussian cabinet saw German unity as an issue of power and a question of who had the strength and will to wield that power. Meanwhile, the liberals in the Frankfurt assembly saw German unity as a process of negotiation that would lead to the distribution of power among the many parties.
Although several German states initially sided with Austria, they stayed on the defensive and failed to take effective initiatives against Prussian troops. The Austrian army therefore faced the technologically superior Prussian army with support only from Saxony. France promised aid, but it came late and was insufficient. Complicating the situation for Austria, the Italian mobilization on Austria's southern border required a diversion of forces away from battle with Prussia to fight the Third Italian War of Independence on a second front in Venetia and on the Adriatic sea.
In the day-long Battle of Königgrätz, near the village of Sadová, Friedrich Carl and his troops arrived late, and in the wrong place. Once he arrived, however, he ordered his troops immediately into the fray. The battle was a decisive victory for Prussia and forced the Habsburgs to end the war, laying the groundwork for the Kleindeutschland (little Germany) solution, or "Germany without Austria."
Realpolitik and the North German Confederation
A quick peace was essential to keep Russia from entering the conflict on Austria's side. Prussia annexed Hanover, Hesse-Kassel, Nassau, and the city of Frankfurt. Hesse-Darmstadt lost some territory but not its sovereignty. The states south of the Main River (Baden, Württemberg, and Bavaria) signed separate treaties requiring them to pay indemnities and to form alliances bringing them into Prussia's sphere of influence. Austria, and most of its allies, were excluded from the North German Confederation.
The end of Austrian dominance of the German states shifted Austria's attention to the Balkans. In 1867, the Austrian emperor Franz Joseph accepted a settlement (the Austro-Hungarian Compromise of 1867) in which he gave his Hungarian holdings equal status with his Austrian domains, creating the Dual Monarchy of Austria-Hungary. The Peace of Prague (1866) offered lenient terms to Austria, but the Habsburg monarchy's relationship with the new nation-state of Italy underwent major restructuring: although the Austrians had been far more successful in the military field against Italian troops, the monarchy lost the important province of Venetia. The Habsburgs ceded Venetia to France, which then formally transferred control to Italy. The French public resented the Prussian victory and demanded Revanche pour Sadová ("Revenge for Sadova"), illustrating anti-Prussian sentiment in France—a problem that would accelerate in the months leading up to the Franco-Prussian War. The Austro-Prussian War also damaged relations with the French government. At a meeting in Biarritz in September 1865 with Napoleon III, Bismarck had let it be understood (or Napoleon had thought he understood) that France might annex parts of Belgium and Luxembourg in exchange for its neutrality in the war. These annexations did not happen, resulting in animosity from Napoleon towards Bismarck.
The reality of defeat for Austria caused a reevaluation of internal divisions, local autonomy, and liberalism. The new North German Confederation had its own constitution, flag, and governmental and administrative structures. Through military victory, Prussia under Bismarck's influence had overcome Austria's active resistance to the idea of a unified Germany. Austria's influence over the German states may have been broken, but the war also splintered the spirit of pan-German unity: most of the German states resented Prussian power politics.
War with France
By 1870 three of the important lessons of the Austro-Prussian war had become apparent. The first lesson was that, through force of arms, a powerful state could challenge the old alliances and spheres of influence established in 1815. Second, through diplomatic maneuvering, a skillful leader could create an environment in which a rival state would declare war first, thus forcing states allied with the "victim" of external aggression to come to the leader's aid. Finally, as Prussian military capacity far exceeded that of Austria, Prussia was clearly the only state within the Confederation (or among the German states generally) capable of protecting all of them from potential interference or aggression. In 1866, most mid-sized German states had opposed Prussia, but by 1870 these states had been coerced and coaxed into mutually protective alliances with Prussia. In the event that a European state declared war on one of their members, they all would come to the defense of the attacked state. With skillful manipulation of European politics, Bismarck created a situation in which France would play the role of aggressor in German affairs, while Prussia would play that of the protector of German rights and liberties.
Spheres of influence fall apart in Spain
At the Congress of Vienna in 1815, Metternich and his conservative allies had reestablished the Spanish monarchy under King Ferdinand VII. Over the following forty years, the great powers supported the Spanish monarchy, but events in 1868 would further test the old system. A revolution in Spain overthrew Queen Isabella II, and the throne remained empty while Isabella lived in sumptuous exile in Paris. The Spanish, looking for a suitable Catholic successor, had offered the post to three European princes, each of whom was rejected by Napoleon III, who served as regional power-broker. Finally, in 1870 the Regency offered the crown to Leopold of Hohenzollern-Sigmaringen, a prince of the Catholic cadet Hohenzollern line. Historians have dubbed the ensuing furor the Hohenzollern candidature.
Over the next few weeks, the Spanish offer turned into the talk of Europe. Bismarck encouraged Leopold to accept the offer. A successful installation of a Hohenzollern-Sigmaringen king in Spain would mean that two countries on either side of France would both have German kings of Hohenzollern descent. This may have been a pleasing prospect for Bismarck, but it was unacceptable both to Napoleon III and to Agenor, duc de Gramont, his minister of foreign affairs. Gramont wrote a sharply formulated ultimatum to Wilhelm, as head of the Hohenzollern family, stating that if any Hohenzollern prince should accept the crown of Spain, the French government would respond—although he left ambiguous the nature of such a response. The prince withdrew as a candidate, thus defusing the crisis, but the French ambassador to Berlin would not let the issue lie. He approached the Prussian king directly while Wilhelm was vacationing in Ems Spa, demanding that the King release a statement saying he would never support the installation of a Hohenzollern on the throne of Spain. Wilhelm refused to give such an encompassing statement, and he sent Bismarck a dispatch by telegram describing the French demands. Bismarck used the king's telegram, called the Ems Dispatch, as a template for a short statement to the press. With its wording shortened and sharpened by Bismarck—and further alterations made in the course of its translation by the French agency Havas—the Ems Dispatch raised an angry furor in France. The French public, still aggravated over the defeat at Sadová, demanded war.
Napoleon III had tried to secure territorial concessions from both sides before and after the Austro-Prussian War, but despite his role as mediator during the peace negotiations, he ended up with nothing. He then hoped that Austria would join in a war of revenge and that its former allies—particularly the southern German states of Baden, Württemberg, and Bavaria—would join in the cause. This hope would prove futile since the treaties of 1866 came into effect and united all German states militarily—if not happily—to fight against France. Instead of a war of revenge against Prussia, supported by various German allies, France engaged in a war against all of the German states without any allies of its own. The reorganization of the military by von Roon and the operational strategy of Moltke combined against France to great effect. The speed of Prussian mobilization astonished the French, and the Prussian ability to concentrate power at specific points—reminiscent of Napoleon I's strategies seventy years earlier—overwhelmed French mobilization. Utilizing their efficiently laid rail grid, Prussian troops were delivered to battle areas rested and prepared to fight, whereas French troops had to march for considerable distances to reach combat zones. After a number of battles, notably Spicheren, Wörth, Mars la Tour, and Gravelotte, the Prussians defeated the main French armies and advanced on Metz and the French capital of Paris. They captured Napoleon III and took an entire army as prisoners at Sedan on 1 September 1870.
Proclamation of the German Empire
The humiliating capture of the French emperor and the loss of the French army itself, which marched into captivity at a makeshift camp in the Saarland ("Camp Misery"), threw the French government into turmoil; Napoleon's energetic opponents overthrew his government and proclaimed the Third Republic. The German High Command expected an overture of peace from the French, but the new republic refused to surrender. The Prussian army invested Paris and held it under siege until mid-January, with the city being "ineffectually bombarded". On 18 January 1871, the German princes and senior military commanders proclaimed Wilhelm "German Emperor" in the Hall of Mirrors at the Palace of Versailles. Under the subsequent Treaty of Frankfurt, France relinquished most of its traditionally German regions (Alsace and the German-speaking part of Lorraine); paid an indemnity, calculated (on the basis of population) as the precise equivalent of the indemnity that Napoleon Bonaparte imposed on Prussia in 1807; and accepted German administration of Paris and most of northern France, with "German troops to be withdrawn stage by stage with each installment of the indemnity payment".
Importance in the unification process
Victory in the Franco-Prussian War proved the capstone of the nationalist issue. In the first half of the 1860s, Austria and Prussia both contended to speak for the German states; both maintained they could support German interests abroad and protect German interests at home. In responding to the Schleswig-Holstein Question, both proved equally diligent. After the victory over Austria in 1866, Prussia began internally asserting its authority to speak for the German states and defend German interests, while Austria began directing more and more of its attention to possessions in the Balkans. The victory over France in 1871 expanded Prussian hegemony in the German states (aside from Austria) to the international level. With the proclamation of Wilhelm as Kaiser, Prussia assumed the leadership of the new empire. The southern states became officially incorporated into a unified Germany at the Treaty of Versailles of 1871 (signed 26 February 1871; later ratified in the Treaty of Frankfurt of 10 May 1871), which formally ended the war. Although Bismarck had led the transformation of Germany from a loose confederation into a federal nation state, he had not done it alone. Unification was achieved by building on a tradition of legal collaboration under the Holy Roman Empire and economic collaboration through the Zollverein. The difficulties of the Vormärz, the impact of the 1848 liberals, the importance of von Roon's military reorganization, and von Moltke's strategic brilliance all played a part in political unification.
Political and administrative unification
The new German Empire included 26 political entities: twenty-five constituent states (or Bundesstaaten) and one Imperial Territory (or Reichsland). It realized the Kleindeutsche Lösung ("Lesser German Solution", with the exclusion of Austria) as opposed to a Großdeutsche Lösung or "Greater German Solution", which would have included Austria. Unifying various states into one nation required more than some military victories, however much these might have boosted morale. It also required a rethinking of political, social, and cultural behaviors and the construction of new metaphors about "us" and "them". Who were the new members of this new nation? What did they stand for? How were they to be organized?
Constituent states of the Empire
Though often characterized as a federation of monarchs, the German Empire, strictly speaking, federated a group of 26 constituent entities with different forms of government, ranging from the four major constitutional monarchies—the kingdoms of Prussia, Bavaria, Saxony, and Württemberg—to the three republican Hanseatic cities.
Political structure of the Empire
The 1866 North German Constitution became (with some semantic adjustments) the 1871 Constitution of the German Empire. With this constitution, the new Germany acquired some democratic features: notably the Imperial Diet, which—in contrast to the parliament of Prussia—gave citizens representation on the basis of elections by direct and equal suffrage of all males who had reached the age of 25. Furthermore, elections were generally free of chicanery, engendering pride in the national parliament. However, legislation required the consent of the Bundesrat, the federal council of deputies from the states, in and over which Prussia had a powerful influence; Prussia appointed 17 of the 58 delegates, and since only 14 votes were needed for a veto, it could block constitutional change on its own. Prussia thus exercised influence in both bodies, with executive power vested in the Prussian King as Kaiser, who appointed the federal chancellor. The chancellor was accountable solely to, and served entirely at the discretion of, the Emperor. Officially, the chancellor functioned as a one-man cabinet and was responsible for the conduct of all state affairs; in practice, the State Secretaries (bureaucratic top officials in charge of such fields as finance, war, and foreign affairs) acted as unofficial portfolio ministers. With the exception of the years 1872–1873 and 1892–1894, the imperial chancellor was always simultaneously the prime minister of the imperial dynasty's hegemonic home-kingdom, Prussia. The Imperial Diet had the power to pass, amend, or reject bills, but it could not initiate legislation; the power of initiating legislation rested with the chancellor. The other states retained their own governments, but the military forces of the smaller states came under Prussian control. The militaries of the larger states (such as the Kingdoms of Bavaria and Saxony) retained some autonomy, but they underwent major reforms to coordinate with Prussian military principles and came under federal government control in wartime.
Historical arguments and the Empire's social anatomy
The Sonderweg hypothesis attributed Germany's difficult 20th century to the weak political, legal, and economic basis of the new empire. The Prussian landed elites, the Junkers, retained a substantial share of political power in the unified state; the hypothesis attributed their power to the absence of a revolutionary breakthrough by the middle classes, or by peasants in combination with urban workers, in 1848 and again in 1871. Recent research into the role of the Grand Bourgeoisie—which included bankers, merchants, industrialists, and entrepreneurs—in the construction of the new state has largely refuted the claim of political and economic dominance of the Junkers as a social group. This newer scholarship has demonstrated the importance of the merchant classes of the Hanseatic cities and of the industrial leadership (the latter particularly important in the Rhineland) in the ongoing development of the Second Empire.
Additional studies of different groups in Wilhelmine Germany have all contributed to a new view of the period. Although the Junkers did, indeed, continue to control the officer corps, they did not dominate social, political, and economic matters as much as the Sonderweg theorists had hypothesized. Eastern Junker power had a counterweight in the western provinces in the form of the Grand Bourgeoisie and in the growing professional class of bureaucrats, teachers, professors, doctors, lawyers, scientists, etc.
Beyond the political mechanism: forming a nation
If the Wartburg and Hambach rallies had lacked a constitution and an administrative apparatus, that problem was addressed between 1867 and 1871. Yet, as Germans discovered, grand speeches, flags, and enthusiastic crowds; a constitution, a political reorganization, and an imperial superstructure; and even the revised Customs Union of 1867–68 still did not make a nation.
A key element of the nation-state is the creation of a national culture, frequently—although not necessarily—through deliberate national policy. In the new German nation, a Kulturkampf (1872–78) that followed political, economic, and administrative unification attempted to address, with a remarkable lack of success, some of the contradictions in German society. In particular, it involved a struggle over language, education, and religion. A policy of Germanization of the empire's non-German peoples, including the Polish and Danish minorities, began with language: compulsory schooling in German and the attempted creation of standardized curricula for those schools to promote and celebrate the idea of a shared past. Finally, it extended to the religion of the new Empire's population.
For some Germans, the definition of nation did not include pluralism, and Catholics in particular came under scrutiny; some Germans, and especially Bismarck, feared that the Catholics' connection to the papacy might make them less loyal to the nation. As chancellor, Bismarck tried without much success to limit the influence of the Roman Catholic Church and of its party-political arm, the Catholic Center Party, in schools and in education- and language-related policies. The Center Party remained particularly well entrenched in the Catholic strongholds of Bavaria and southern Baden, and in urban areas with large populations of displaced rural workers seeking jobs in heavy industry; it sought to protect the rights not only of Catholics but of other minorities, including the Poles and the French minority in Alsace. The May Laws of 1873 brought the appointment and education of priests under the control of the state, resulting in the closure of many seminaries and a shortage of priests. The Congregations Law of 1875 abolished religious orders, ended state subsidies to the Catholic Church, and removed religious protections from the Prussian constitution.
Integrating the Jewish community
The Germanized Jews remained another vulnerable population in the new German nation-state. Since 1780, after emancipation by the Holy Roman Emperor Joseph II, Jews in the former Habsburg territories had enjoyed considerable economic and legal privileges that their counterparts in other German-speaking territories did not: they could own land, for example, and they did not have to live in a Jewish quarter (also called the Judengasse, or "Jews' alley"). They could also attend universities and enter the professions. During the Revolutionary and Napoleonic eras, many of the previously strong barriers between Jews and Christians broke down. Napoleon had ordered the emancipation of Jews throughout territories under French hegemony. Like their French counterparts, wealthy German Jews sponsored salons; in particular, several Jewish salonnières held important gatherings in Frankfurt and Berlin during which German intellectuals developed their own form of republican intellectualism. Throughout the subsequent decades, beginning almost immediately after the defeat of the French, reaction against the mixing of Jews and Christians limited the intellectual impact of these salons. Beyond the salons, Jews continued a process of Germanization in which they intentionally adopted German modes of dress and speech, working to insert themselves into the emerging 19th-century German public sphere. The religious reform movement among German Jews reflected this effort.
By the time of unification, German Jews played an important role in the intellectual underpinnings of German professional, cultural, and social life. The expulsion of Jews from Russia in the 1880s and 1890s complicated their integration into the German public sphere. Russian Jews arrived in north German cities in the thousands; considerably less educated and less affluent, they often lived in dismal poverty that dismayed many of the Germanized Jews. Many of the problems related to poverty (such as illness, overcrowded housing, unemployment, school absenteeism, refusal to learn German, etc.) emphasized their distinctiveness not only for the Christian Germans, but for the local Jewish populations as well.
Writing the story of the nation
Another important element in nation-building, the story of the heroic past, fell to such nationalist German historians as the liberal constitutionalist Friedrich Dahlmann (1785–1860), his conservative student Heinrich von Treitschke (1834–1896), and others less conservative, such as Theodor Mommsen (1817–1903) and Heinrich von Sybel (1817–1895), to name two. Dahlmann died before unification, but he laid the groundwork for the nationalist histories to come through his histories of the English and French revolutions, casting these revolutions as fundamental to the construction of a nation, and he viewed Prussia as the logical agent of unification.
Heinrich von Treitschke's History of Germany in the Nineteenth Century, published in 1879, has perhaps a misleading title: it privileges the history of Prussia over the history of the other German states, and it tells the story of the German-speaking peoples through the lens of Prussia's destiny to unite them all under its leadership. The creation of this Borussian myth (Borussia is the Latin name for Prussia) established Prussia as Germany's savior; it was the destiny of all Germans to be united, this myth maintains, and it was Prussia's destiny to accomplish this. According to this story, Prussia played the dominant role in bringing the German states together as a nation-state; only Prussia could protect German liberties from being crushed by French or Russian influence. The story continues by drawing on Prussia's role in saving Germans from the resurgence of Napoleon's power in 1815 at Waterloo, in creating some semblance of economic unity, and in uniting Germans under one proud flag after 1871.
Mommsen's contributions to the Monumenta Germaniae Historica laid the groundwork for additional scholarship on the study of the German nation, expanding the notion of "Germany" to mean other areas beyond Prussia. A liberal professor, historian, and theologian, and generally a titan among late 19th-century scholars, Mommsen served as a delegate to the Prussian House of Representatives from 1863 to 1866 and 1873 to 1879; he also served as a delegate to the Reichstag from 1881 to 1884, for the liberal German Progress Party (Deutsche Fortschrittspartei) and later for the National Liberal Party. He opposed the antisemitic programs of Bismarck's Kulturkampf and the vitriolic language that Treitschke employed in his Studien über die Judenfrage (Studies of the Jewish Question), which encouraged assimilation and the Germanization of Jews.
- Italian unification
- Oliver F. R. Haardt, The Federal Evolution of Imperial Germany (1871–1918).
- See, for example, James Allen Vann, The Swabian Kreis: Institutional Growth in the Holy Roman Empire 1648–1715. Vol. LII, Studies Presented to International Commission for the History of Representative and Parliamentary Institutions. Bruxelles, 1975. Mack Walker. German home towns: community, state, and general estate, 1648–1871. Ithaca, 1998.
- John G. Gagliardo, Reich and Nation. The Holy Roman Empire as Idea and Reality, 1763–1806, Indiana University Press, 1980, p. 278–279.
- Robert A. Kann. History of the Habsburg Empire: 1526–1918, Los Angeles, 1974, p. 221. In his abdication, Francis released all former estates from their duties and obligations to him, and took upon himself solely the title of King of Austria, which had been established since 1804. Golo Mann, Deutsche Geschichte des 19. und 20. Jahrhunderts, Frankfurt am Main, 2002, p. 70.
- Fichte, Johann Gottlieb (1808). "Address to the German Nation". www.historyman.co.uk. Retrieved 2009-06-06.
- James J. Sheehan, German History, 1780–1866, Oxford, 1989, p. 434.
- Jakob Walter, and Marc Raeff. The diary of a Napoleonic foot soldier. Princeton, N.J., 1996.
- Sheehan, pp. 384–387.
- Although the Prussian army had gained its reputation in the Seven Years' War, its humiliating defeat at Jena and Auerstadt crushed the pride many Prussians felt in their soldiers. During their Russian exile, several officers, including Carl von Clausewitz, contemplated reorganization and new training methods. Sheehan, p. 323.
- Sheehan, pp. 322–323.
- David Blackbourn and Geoff Eley. The Peculiarities of German History: Bourgeois Society and Politics in Nineteenth-Century Germany. Oxford & New York, 1984, part 1; Thomas Nipperdey, German History From Napoleon to Bismarck, 1800–1871, New York, Oxford, 1983. Chapter 1.
- Sheehan, pp. 398–410; Hamish Scott, The Birth of a Great Power System, 1740–1815, US, 2006, pp. 329–361.
- Sheehan, pp. 398–410.
- Jean Berenger. A History of the Habsburg Empire 1700–1918. C. Simpson, Trans. New York: Longman, 1997, ISBN 0-582-09007-5. pp. 96–97.
- Sheehan, pp. 460–470. German Historical Institute
- Lloyd Lee, Politics of Harmony: Civil Service, Liberalism, and Social Reform in Baden, 1800–1850, Cranbury, New Jersey, 1980.
- Adam Zamoyski, Rites of Peace: The Fall of Napoleon and the Congress of Vienna, New York, 2007, pp. 98–115, 239–40.
- L.B. Namier, (1952) Avenues of History. London, ONT, 1952, p. 34.
- Nipperdey, pp. 1–3.
- Sheehan, pp. 407–408, 444.
- Sheehan, pp. 442–445.
- Sheehan, pp. 465–467; Blackbourn, Long Century, pp. 106–107.
- Wolfgang Keller and Carol Shiue, The Trade Impact of the Customs Union, Boulder, University of Colorado, 5 March 2013, pp.10 and 18
- Florian Ploeckl. The Zollverein and the Formation of a Customs Union, Discussion Paper no. 84 in the Economic and Social History series, Nuffield College, Oxford, Nuffield College. Retrieved from www.nuff.ox.ac.uk/Economics/History March 2017; p. 23
- Sheehan, p. 465.
- Sheehan, p. 466.
- Sheehan, pp. 467–468.
- Sheehan, p. 502.
- Sheehan, p. 469.
- Sheehan, p. 458.
- Sheehan, pp. 466–467.
- They traced the roots of the German language, and drew its different lines of development together. The Brothers Grimm online. Joint Publications.
- (in German) Hans Lulfing, Baedecker, Karl, Neue Deutsche Biographie (NDB). Band 1, Duncker & Humblot, Berlin, 1953, p. 516 f.
- (in German) Peter Rühmkorf, Heinz Ludwig Arnold, Das Lied der Deutschen Göttingen: Wallstein, 2001, ISBN 3-89244-463-3, pp. 11–14.
- Raymond Dominick III, The Environmental Movement in Germany, Bloomington, Indiana University, 1992, pp. 3–41.
- Jonathan Sperber, Rhineland radicals: the democratic movement and the revolution of 1848–1849. Princeton, N.J., 1993.
- Sheehan, pp. 610–613.
- Sheehan, p. 610.
- Sheehan, p. 612.
- Sheehan, p. 613.
- David Blackbourn, Marpingen: Apparitions of the Virgin Mary in Nineteenth-Century Germany. New York, 1994.
- Sperber, Rhineland radicals. p. 3.
- Blackbourn, Long Century, p. 127.
- Sheehan, pp. 610–615.
- (in German) Badische Heimat/Landeskunde online 2006 Veit's Pauls Church Germania. Retrieved 5 June 2009.
- Blackbourn, Long Century, pp. 138–164.
- Jonathan Sperber, Revolutionary Europe, 1780–1850, New York, 2000.
- Blackbourn, Long Century, pp. 176–179.
- Examples of this argument appear in: Ralf Dahrendorf, German History, (1968), pp. 25–32; (in German) Hans Ulrich Wehler, Das Deutsche Kaiserreich, 1871–1918, Göttingen, 1973, pp. 10–14; Leonard Krieger, The German Idea of Freedom, Chicago, 1957; Raymond Grew, Crises of Political Development in Europe and the United States, Princeton, 1978, pp. 312–345; Jürgen Kocka and Allan Mitchell. Bourgeois Society in Nineteenth-Century Europe. Oxford, 1993; Jürgen Kocka, "German History before Hitler: The Debate about the German Sonderweg." Journal of Contemporary History, Vol. 23, No. 1 (January, 1988), pp. 3–16; Volker Berghahn, Modern Germany. Society, Economy and Politics in the Twentieth Century. Cambridge, 1982.
- World Encyclopedia V.3 p. 542.
- For a summary of this argument, see David Blackbourn, and Geoff Eley. The Peculiarities of German History: Bourgeois Society and Politics in Nineteenth-Century Germany. Oxford & New York, 1984, part 1.
- Blackbourn and Eley. Peculiarities, Part I.
- Blackbourn and Eley, Peculiarities, Chapter 2.
- Blackbourn and Eley, Peculiarities, pp. 286–293.
- Jürgen Kocka, "Comparison and Beyond.'" History and Theory, Vol. 42, No. 1 (February, 2003), pp. 39–44, and Jürgen Kocka, "Asymmetrical Historical Comparison: The Case of the German Sonderweg", History and Theory, Vol. 38, No. 1 (February, 1999), pp. 40–50.
- For a representative analysis of this perspective, see Richard J. Evans, Rethinking German History: Nineteenth-Century Germany and the Origins of the Third Reich. London, 1987.
- A. J. P. Taylor, The Struggle for Mastery in Europe 1848–1918, Oxford, 1954, p. 37.
- J.G.Droysen, Modern History Sourcebook: Documents of German Unification, 1848–1871. Retrieved 9 April 2009.
- Zamoyski, pp. 100–115.
- Blackbourn, The Long Nineteenth Century, pp. 160–175.
- The remainder of the letter exhorts the Germans to unification: "This role of world leadership, left vacant as things are today, might well be occupied by the German nation. You Germans, with your grave and philosophic character, might well be the ones who could win the confidence of others and guarantee the future stability of the international community. Let us hope, then, that you can use your energy to overcome your moth-eaten thirty tyrants of the various German states. Let us hope that in the center of Europe you can then make a unified nation out of your fifty millions. All the rest of us would eagerly and joyfully follow you." Denis Mack Smith (editor). Garibaldi (Great Lives Observed), Prentice Hall, Englewood Cliffs, N.J., 1969, p. 76.
- Mack Smith, Denis (1994). Mazzini. Yale University Press. pp. 11–12.
- Holt, p. 27.
- Holt, pp. 13–14.
- Blackbourn, Long Century, pp. 175–179.
- Hollyday, 1970, pp. 16–18.
- Blackbourn, Peculiarities, Part I.
- Bismarck had "cut his teeth" on German politics, and German politicians, in Frankfurt: a quintessential politician, Bismarck had built his power-base by absorbing and co-opting measures from throughout the political spectrum. He was first and foremost a politician, and therein lay his strength. Furthermore, since he trusted neither Moltke nor Roon, he was reluctant to enter a military enterprise over which he would have no control. Mann, Chapter 6, pp. 316–395.
- Isabel V. Hull, Absolute Destruction: Military culture and the Practices of War in Imperial Germany, Ithaca, New York, 2005, pp. 90–108; 324–333.
- The Situation of Germany. (PDF) – The New York Times, July 1, 1866.
- Michael Eliot Howard, The Franco-Prussian War: the German invasion of France, 1870–1871. New York, MacMillan, 1961, p. 40.
- Mann, pp. 390–395.
- A. J. P. Taylor, Bismarck: The Man and the Statesman. Oxford, Clarendon, 1988. Chapter 1, and Conclusion.
- Howard, pp. 40–57.
- Sheehan, pp. 900–904; Wawro, pp. 4–32; Holt, p. 75.
- Holt, p. 75.
- Sheehan, pp. 900–906.
- Sheehan, p. 906; Geoffrey Wawro, The Austro Prussian War: Austria's War with Prussia and Italy in 1866. Cambridge, Cambridge University, 1996, pp. 82–84.
- Sheehan, pp. 905–906.
- Sheehan, p. 909.
- Wawro, pp. 50–60; 75–79.
- Wawro, pp. 57–75.
- Sheehan, pp. 908–909
- Taylor, Bismarck, pp. 87–88.
- Sheehan, p. 910.
- Sheehan, pp. 905–910.
- Rosita Rindler Schjerve Diglossia and Power: Language Policies and Practice in the Nineteenth Century Habsburg Empire, 2003, ISBN 3-11-017653-X, pp. 199–200.
- Bridge and Bullen, The Great Powers and the European States System 1814–1914.
- Sheehan, pp. 909–910; Wawro, Chapter 11.
- Blackbourn, Long Century, Chapter V: From Reaction to Unification, pp. 225–269.
- Howard, pp. 4–60.
- Howard, pp. 50–57.
- Howard, pp. 55–56.
- Howard, pp. 56–57.
- Howard, pp. 55–59.
- Howard, pp. 64–68.
- Howard, pp. 218–222.
- Howard, pp. 222–230.
- Taylor, Bismarck, p. 126
- Die Reichsgründung 1871 (The Foundation of the Empire, 1871), Lebendiges virtuelles Museum Online, accessed 2008-12-22. German text translated: [...] on the wishes of Wilhelm I, on the 170th anniversary of the elevation of the House of Brandenburg to princely status on 18 January 1701, the assembled German princes and high military officials proclaimed Wilhelm I as German Emperor in the Hall of Mirrors at the Versailles Palace.
- Taylor, Bismarck, p. 133.
- Crankshaw, Edward. Bismarck. New York, The Viking Press, 1981, p. 299.
- Howard, Chapter XI: the Peace, pp. 432–456.
- Blackbourn, Long Century, pp. 255–257.
- Alon Confino. The Nation as a Local Metaphor: Württemberg, Imperial Germany, and National Memory, 1871–1918. Chapel Hill, University of North Carolina Press, 1997.
- Richard J. Evans, Death in Hamburg: Society and Politics in the Cholera Years, 1830–1910. New York, 2005, p. 1.
- Blackbourn, Long Century, p. 267.
- Blackbourn, Long Century, pp. 225–301.
- David Blackbourn and Geoff Eley. The Peculiarities of German History: Bourgeois Society and Politics in Nineteenth-Century Germany. Oxford [Oxfordshire] and New York, Oxford University Press, 1984. Peter Blickle, Heimat: a critical theory of the German idea of homeland, Studies in German literature, linguistics and culture. Columbia, South Carolina, Camden House; Boydell & Brewer, 2004. Robert W. Scribner, Sheilagh C. Ogilvie, Germany: A New Social and Economic History. London and New York, Arnold and St. Martin's Press, 1996.
- To name only a few of these studies: Geoff Eley, Reshaping the German Right: Radical Nationalism and Political Change After Bismarck. New Haven, 1980. Richard J. Evans, Death in Hamburg: Society and Politics in the Cholera Years, 1830–1910.New York, 2005. Richard J. Evans,Society and politics in Wilhelmine Germany. London and New York, 1978. Thomas Nipperdey, Germany from Napoleon to Bismarck, 1800–1866. Princeton, New Jersey, 1996. Jonathan Sperber, Popular Catholicism in Nineteenth-Century Germany. Princeton, N.J., 1984. (1997).
- Blackbourn, Long Century, pp. 240–290.
- For more on this idea, see, for example, Joseph R. Llobera, and Goldsmiths' College. The role of historical memory in (ethno)nation-building, Goldsmiths sociology papers. London, 1996; (in German) Alexandre Escudier, Brigitte Sauzay, and Rudolf von Thadden. Gedenken im Zwiespalt: Konfliktlinien europäischen Erinnerns, Genshagener Gespräche; vol. 4. Göttingen: 2001; Alon Confino. The Nation as a Local Metaphor: Württemberg, Imperial Germany, and National Memory, 1871–1918. Chapel Hill, 1999.
- Blackbourn, Long Century, pp. 243–282.
- Blackbourn, Long Century, pp. 283; 285–300.
- Jonathan Sperber. Popular Catholicism in Nineteenth-Century Germany, Princeton, N.J., 1984.
- Marion Kaplan, The Making of the Jewish Middle Class: Women, Family, and Identity in Imperial Germany, New York, 1991.
- Kaplan, in particular, pp. 4–7 and Conclusion.
- Blackbourn and Eley, Peculiarities, p. 241.
- Karin Friedrich, The other Prussia: royal Prussia, Poland and liberty, 1569–1772, New York, 2000, p. 5.
- Many modern historians describe this myth, without subscribing to it: for example, Rudy Koshar, Germany's Transient Pasts: Preservation and the National Memory in the Twentieth Century. Chapel Hill, 1998; Hans Kohn. German History: Some New German Views. Boston, 1954; Thomas Nipperdey, Germany from Napoleon to Bismarck.
- Josep R. Llobera and Goldsmiths' College. The Role of Historical Memory in (Ethno)Nation-Building. Goldsmiths sociology papers. London, Goldsmiths College, 1996.
- Berghahn, Volker. Modern Germany: Society, Economy and Politics in the Twentieth Century. Cambridge: Cambridge University Press, 1982. ISBN 978-0-521-34748-8
- Berenger, Jean. A History of the Habsburg Empire 1700–1918. C. Simpson, Trans. New York: Longman, 1997, ISBN 0-582-09007-5.
- Blackbourn, David, Marpingen: Apparitions of the Virgin Mary in Bismarckian Germany. New York: Knopf, 1994. ISBN 0-679-41843-1
- Blackbourn, David. The Long Nineteenth Century: A History of Germany, 1780–1918. New York: Oxford University Press, 1998. ISBN 0-19-507672-9
- Blackbourn, David and Eley, Geoff. The Peculiarities of German History: Bourgeois Society and Politics in Nineteenth-Century Germany. Oxford & New York: Oxford University Press, 1984. ISBN 978-0-19-873057-6
- Blickle, Peter. Heimat: A Critical Theory of the German Idea of Homeland. Studies in German literature, linguistics and culture. Columbia, South Carolina: Camden House Press, 2004. ISBN 978-0-582-78458-1
- Bridge, Roy and Roger Bullen, The Great Powers and the European States System 1814–1914, 2nd ed. Longman, 2004. ISBN 978-0-582-78458-1
- Confino, Alon. The Nation as a Local Metaphor: Württemberg, Imperial Germany, and National Memory, 1871–1918. Chapel Hill: University of North Carolina Press, 1997. ISBN 978-0-8078-4665-0
- Crankshaw, Edward. Bismarck. New York, The Viking Press, 1981. ISBN 0-333-34038-8
- Dahrendorf, Ralf. Society and Democracy in Germany (1979)
- Dominick, Raymond, III. The Environmental Movement in Germany, Bloomington, Indiana University, 1992. ISBN 0-253-31819-X
- Evans, Richard J. Death in Hamburg: Society and Politics in the Cholera Years, 1830–1910. New York: Oxford University Press, 2005. ISBN 978-0-14-303636-4
- Evans, Richard J. Rethinking German History: Nineteenth-Century Germany and the Origins of the Third Reich. London, Routledge, 1987. ISBN 978-0-00-302090-8
- Flores, Richard R. Remembering the Alamo: Memory, Modernity, and the Master Symbol. Austin: University of Texas, 2002. ISBN 978-0-292-72540-9
- Friedrich, Karin, The Other Prussia: Royal Prussia, Poland and Liberty, 1569–1772, New York, 2000. ISBN 978-0-521-02775-5
- Grew, Raymond. Crises of Political Development in Europe and the United States. Princeton, Princeton University Press, 1978. ISBN 0-691-07598-0
- Hollyday, F. B. M. Bismarck. New Jersey, Prentice Hall, 1970. ISBN 978-0-13-077362-3
- Holt, Alexander W. The History of Europe from 1862–1914: From the Accession of Bismarck to the Outbreak of the Great War. New York: MacMillan, 1917. OCLC 300969997
- Howard, Michael Eliot. The Franco-Prussian War: The German invasion of France, 1870–1871. New York, MacMillan, 1961. ISBN 978-0-415-02787-8
- Hull, Isabel. Absolute Destruction: Military Culture and the Practices of War in Imperial Germany. Ithaca, New York, Syracuse University Press, 2005. ISBN 978-0-8014-7293-0
- Kann, Robert A. History of the Habsburg Empire: 1526–1918. Los Angeles, University of California Press, 1974 ISBN 978-0-520-04206-3
- Kaplan, Marion. The Making of the Jewish Middle Class: Women, Family, and Identity in Imperial Germany. New York, Oxford University Press, 1991. ISBN 978-0-19-509396-4
- Kocka, Jürgen and Mitchell, Allan. Bourgeois Society in Nineteenth Century Europe. Oxford, Oxford University Press, 1993. ISBN 978-0-85496-414-7
- Kocka, Jürgen and Mitchell, Allan. "German History before Hitler: The Debate about the German Sonderweg." Journal of Contemporary History Vol. 23, No. 1 (January 1988), p. 3–16.
- Kocka, Jürgen and Mitchell, Allan. "Comparison and Beyond.'" History and Theory Vol. 42, No. 1 (February 2003), p. 39–44.
- Kocka, Jürgen and Mitchell, Allan. "Asymmetrical Historical Comparison: The Case of the German Sonderweg". History and Theory Vol. 38, No. 1 (February 1999), p. 40–50.
- Kohn, Hans. German History; Some New German Views. Boston: Beacon, 1954. OCLC 987529
- Koshar, Rudy. Germany's Transient Pasts: Preservation and the National Memory in the Twentieth Century. Chapel Hill, 1998. ISBN 978-0-8078-4701-5
- Krieger, Leonard. The German Idea of Freedom, Chicago, University of Chicago Press, 1957. ISBN 978-1-59740-519-5
- Lee, Lloyd. The Politics of Harmony: Civil Service, Liberalism, and Social Reform in Baden, 1800–1850. Cranbury, New Jersey, Associated University Presses, 1980. ISBN 978-0-87413-143-7
- Llobera, Josep R. and Goldsmiths' College. "The role of historical memory in (ethno)nation-building." Goldsmiths Sociology Papers. London, Goldsmiths College, 1996. ISBN 978-0-902986-06-0
- Mann, Golo. The History of Germany Since 1789 (1968)
- Namier, L. B. Avenues of History. New York, Macmillan, 1952. OCLC 422057575
- Nipperdey, Thomas. Germany from Napoleon to Bismarck, 1800–1866. Princeton, Princeton University Press, 1996. ISBN 978-0-691-02636-7
- Schjerve, Rosita Rindler, Diglossia and Power: Language Policies and Practice in the Nineteenth Century Habsburg Empire. Berlin, De Gruyter, 2003. ISBN 978-3-11-017654-4
- Schulze, Hagen. The Course of German Nationalism: From Frederick the Great to Bismarck, 1763–1867. Cambridge & New York, Cambridge University Press, 1991. ISBN 978-0-521-37759-1
- Scott, H. M. The Birth of a Great Power System. London & New York, Longman, 2006. ISBN 978-0-582-21717-1
- Scribner, Robert W. and Sheilagh C. Ogilvie. Germany: A New Social and Economic History. London: Arnold Publication, 1996. ISBN 978-0-340-51332-3
- Sheehan, James J. German History 1770–1866. Oxford History of Modern Europe. Oxford, Oxford University Press, 1989. ISBN 978-0-19-820432-9
- Sked, Alan. Decline and Fall of the Habsburg Empire 1815–1918. London, Longman, 2001. ISBN 978-0-582-35666-5
- Sorkin, David, The Transformation of German Jewry, 1780–1840, Studies in Jewish history. New York, Wayne State University Press, 1987. ISBN 978-0-8143-2828-6
- Sperber, Jonathan. The European Revolutions, 1848–1851. New Approaches to European History. Cambridge, Cambridge University Press, 1984. ISBN 978-0-521-54779-6
- Sperber, Jonathan. Popular Catholicism in Nineteenth-Century Germany. Princeton, Princeton University Press, 1984. ISBN 978-0-691-05432-2
- Sperber, Jonathan. Rhineland Radicals: The Democratic Movement and the Revolution of 1848–1849. Princeton, Princeton University Press, 1993. ISBN 978-0-691-00866-0
- Stargardt, Nicholas. The German Idea of Militarism: Radical and Socialist Critics, 1866–1914. Cambridge, Cambridge University Press, 1994. ISBN 978-0-521-46692-9
- Steinberg, Jonathan. Bismarck: A Life (2011)
- Taylor, A. J. P., The Struggle for Mastery in Europe 1848–1918, Oxford, Clarendon, 1954. ISBN 978-0-19-881270-8
- Taylor, A. J. P. Bismarck: The Man and the Statesman. Oxford: Clarendon, 1988. ISBN 978-0-394-70387-9
- Victoria and Albert Museum, Dept. of Prints and Drawings, and Susan Lambert. The Franco-Prussian War and the Commune in Caricature, 1870–71. London, 1971. ISBN 0-901486-30-2
- Walker, Mack. German Home Towns: Community, State, and General Estate, 1648–1871. Ithaca, Syracuse University Press, 1998. ISBN 978-0-8014-8508-4
- Wawro, Geoffrey. The Austro-Prussian War. Cambridge, Cambridge University Press, 1996. ISBN 0-521-62951-9
- Wawro, Geoffrey. Warfare and Society in Europe, 1792–1914. 2000. ISBN 978-0-415-21445-2
- Wehler, Hans Ulrich. The German Empire, 1871–1918 (1997)
- Zamoyski, Adam. Rites of Peace: The Fall of Napoleon and the Congress of Vienna. New York, HarperCollins, 2007. ISBN 978-0-06-077519-3
- Bazillion, Richard J. Modernizing Germany: Karl Biedermann's career in the kingdom of Saxony, 1835–1901. American university studies. Series IX, History, vol. 84. New York, Peter Lang, 1990. ISBN 0-8204-1185-X
- Brose, Eric Dorn. German History, 1789–1871: From the Holy Roman Empire to the Bismarckian Reich. (1997) online edition
- Bucholz, Arden. Moltke, Schlieffen, and Prussian War Planning. New York, Berg Pub Ltd, 1991. ISBN 0-85496-653-6
- Bucholz, Arden. Moltke and the German Wars 1864–1871. New York, Palgrave MacMillan, 2001. ISBN 0-333-68758-2
- Clark, Christopher. Iron Kingdom: The Rise and Downfall of Prussia, 1600–1947. Cambridge, Belknap Press of Harvard University Press, 2006, 2009. ISBN 978-0-674-03196-8
- Clemente, Steven E. For King and Kaiser!: The Making of the Prussian Army Officer, 1860–1914. Contributions in military studies, no. 123. New York: Greenwood, 1992. ISBN 0-313-28004-5
- Cocks, Geoffrey and Konrad Hugo Jarausch. German Professions, 1800–1950. New York, Oxford University Press, 1990. ISBN 0-19-505596-9
- Droysen, J.G. Modern History Sourcebook: Documents of German Unification, 1848–1871. Accessed April 9, 2009.
- Dwyer, Philip G. Modern Prussian history, 1830–1947. Harlow, England, New York: Longman, 2001. ISBN 0-582-29270-0
- Friedrich, Otto. Blood and Iron: From Bismarck to Hitler the Von Moltke Family's Impact On German History. New York, Harper, 1995. ISBN 0-06-016866-8
- Groh, John E. Nineteenth-Century German Protestantism: The Church As Social Model. Washington, D.C., University Press of America, 1982. ISBN 0-8191-2078-2
- Henne, Helmut, and Georg Objartel. German Student Jargon in the Eighteenth and Nineteenth Centuries. Berlin & NY, de Gruyter, 1983. OCLC 9193308
- Hughes, Michael. Nationalism and Society: Germany, 1800–1945. London & New York, Edward Arnold, 1988. ISBN 0-7131-6522-7
- Kollander, Patricia. Frederick III: Germany's Liberal Emperor, Contributions to the study of world history, no. 50. Westport, Conn., Greenwood, 1995. ISBN 0-313-29483-6
- Koshar, Rudy. Germany's Transient Pasts: Preservation and the National Memory in the Twentieth Century. Chapel Hill, University of North Carolina Press, 1998. ISBN 0-8078-4701-1
- Lowenstein, Steven M. The Berlin Jewish Community: Enlightenment, Family, and Crisis, 1770–1830. Studies in Jewish history. New York, Oxford University Press, 1994. ISBN 0-19-508326-1
- Lüdtke, Alf. Police and State in Prussia, 1815–1850. Cambridge, New York & Paris, Cambridge University Press, 1989. ISBN 0-521-11187-0
- Ogilvie, Sheilagh, and Richard Overy. Germany: A New Social and Economic History Volume 3: Since 1800 (2004)
- Ohles, Frederik. Germany's Rude Awakening: Censorship in the Land of the Brothers Grimm. Kent, Ohio, Ohio State University Press, 1992. ISBN 0-87338-460-1
- Pflanze Otto, ed. The Unification of Germany, 1848–1871 (1979), essays by historians
- Schleunes, Karl A. Schooling and Society: The Politics of Education in Prussia and Bavaria, 1750–1900. Oxford & New York, Oxford University Press, 1989. ISBN 0-85496-267-0
- Showalter, Dennis E. The Wars of German Unification (2nd ed. 2015), 412pp by a leading military historian
- Showalter, Dennis E. Railroads and Rifles: Soldiers, Technology, and the Unification of Germany. Hamden, Connecticut, Hailer Publishing, 1975. ISBN 0-9798500-9-6
- Smith, Woodruff D. Politics and the Sciences of Culture in Germany, 1840–1920. New York, Oxford University Press, 1991. ISBN 0-19-506536-0
- Wawro, Geoffrey. The Franco-Prussian War: The German Conquest of France. Cambridge, Cambridge University Press, 2005. ISBN 0-521-61743-X
Acts of Union 1707
The Acts of Union (Scottish Gaelic: Achd an Aonaidh) were two Acts of Parliament: the Union with Scotland Act 1706 passed by the Parliament of England, and the Union with England Act passed in 1707 by the Parliament of Scotland. They put into effect the terms of the Treaty of Union that had been agreed on 22 July 1706, following negotiation between commissioners representing the parliaments of the two countries. By the two Acts, the Kingdom of England and the Kingdom of Scotland—which at the time were separate states with separate legislatures, but with the same monarch—were, in the words of the Treaty, "United into One Kingdom by the Name of Great Britain".
Union with Scotland Act 1706 (Act of the Parliament of England)
- Long title: An Act for a Union of the Two Kingdoms of England and Scotland
- Citation: 1706 c. 11
- Territorial extent: Kingdom of England (including Wales); subsequently, Kingdom of Great Britain and United Kingdom
- Commencement: 1 May 1707
- Status: Current legislation (revised text of statute as amended available)

Union with England Act 1707 (Act of the Parliament of Scotland)
- Long title: Act Ratifying and Approving the Treaty of Union of the Two Kingdoms of Scotland and England
- Citation: 1707 c. 7
- Territorial extent: Kingdom of Scotland; subsequently, Kingdom of Great Britain and United Kingdom
- Commencement: 1 May 1707
- Status: Current legislation (revised text of statute as amended available)
The two countries had shared a monarch since the Union of the Crowns in 1603, when King James VI of Scotland inherited the English throne from his double first cousin twice removed, Queen Elizabeth I. Although described as a Union of Crowns, and despite James's acknowledgement of his accession to a single Crown, England and Scotland remained officially separate kingdoms until 1707 (as opposed to the single unified kingdom implied by the term, exemplified by the later Kingdom of Great Britain). Prior to the Acts of Union there had been three previous attempts (in 1606, 1667, and 1689) to unite the two countries by Acts of Parliament, but it was not until the early 18th century that both political establishments came to support the idea, albeit for different reasons.
The Acts took effect on 1 May 1707. On this date, the Scottish Parliament and the English Parliament united to form the Parliament of Great Britain, based in the Palace of Westminster in London, the home of the English Parliament. Hence, the Acts are referred to as the Union of the Parliaments.
Political background prior to 1707
Prior to 1603, England and Scotland were separate kingdoms; as Elizabeth I never married, her heir after 1567 was the Stuart king of Scotland, James VI, who had been brought up as a Protestant. After her death, the two Crowns were held in personal union by James, as James I of England and James VI of Scotland. He announced his intention to unite the two, using the royal prerogative to take the title "King of Great Britain" and to give a British character to his court and person.
The 1603 Union of England and Scotland Act established a joint Commission to agree terms, but the English Parliament was concerned this would lead to the imposition of an absolutist structure similar to that of Scotland. James was forced to withdraw his proposals, and attempts to revive it in 1610 were met with hostility.
Instead, he set about creating a unified Church of Scotland and England, as the first step towards a centralised, Unionist state. However, despite both being nominally Episcopalian in structure, the two were very different in doctrine; the Church of Scotland, or kirk, was Calvinist in doctrine, and viewed many Church of England practices as little better than Catholicism. As a result, attempts to impose religious policy by James and his son Charles I ultimately led to the 1639–1651 Wars of the Three Kingdoms.
The 1639–1640 Bishops' Wars confirmed the primacy of the kirk, and established a Covenanter government in Scotland. The Scots remained neutral when the First English Civil War began in 1642, before becoming concerned at the impact on Scotland of a Royalist victory. Presbyterian leaders like Argyll viewed union as a way to ensure free trade between England and Scotland, and preserve a Presbyterian kirk.
Under the 1643 Solemn League and Covenant, the Covenanters agreed to provide military support for the English Parliament, in return for religious union. Although the treaty referred repeatedly to 'union' between England, Scotland, and Ireland, political union had little support outside the Kirk Party. Even religious union was opposed by the Episcopalian majority in the Church of England, and Independents like Oliver Cromwell, who dominated the New Model Army.
The Scots and English Presbyterians were political conservatives, who increasingly viewed the Independents, and associated radical groups like the Levellers, as a bigger threat than the Royalists. Both Royalists and Presbyterians agreed monarchy was divinely ordered, but disagreed on the nature and extent of Royal authority over the church. When Charles I surrendered in 1646, they allied with their former enemies to restore him to the English throne.
After defeat in the 1647–1648 Second English Civil War, Scotland was occupied by English troops, which were withdrawn once the so-called Engagers, whom Cromwell held responsible for the war, had been replaced by the Kirk Party. In December 1648, Pride's Purge confirmed Cromwell's political control in England by removing Presbyterian MPs from Parliament; Charles was executed in January 1649. Seeing this as sacrilege, the Kirk Party proclaimed Charles II King of Scotland and Great Britain, and agreed to restore him to the English throne.
Defeat in the 1649–1651 Third English Civil War or Anglo-Scottish War resulted in Scotland's incorporation into the Commonwealth of England, Scotland and Ireland, largely driven by Cromwell's determination to break the power of the kirk, which he held responsible for the Anglo-Scottish War. The 1652 Tender of Union was followed on 12 April 1654 by An Ordinance by the Protector for the Union of England and Scotland, creating the Commonwealth of England, Scotland and Ireland. It was ratified by the Second Protectorate Parliament on 26 June 1657, creating a single Parliament in Westminster, with 30 representatives each from Scotland and Ireland added to the existing English members.
While integration into the Commonwealth established free trade between Scotland and England, the economic benefits were diminished by the costs of military occupation. Both Scotland and England associated union with heavy taxes and military rule; it had little popular support in either country, and was dissolved after the Restoration of Charles II in 1660.
The Scottish economy was badly damaged by the English Navigation Acts of 1660 and 1663 and by wars with the Dutch Republic, its major export market. An Anglo-Scots Trade Commission was set up in January 1668, but the English had no interest in making concessions, as the Scots had little to offer in return. In 1669, Charles II revived talks on political union; his motives were to weaken Scotland's commercial and political links with the Dutch, still seen as an enemy, and to complete the work of his grandfather James I. Continued opposition meant these negotiations were abandoned by the end of 1669.
Following the Glorious Revolution of 1688, a Scottish Convention met in Edinburgh in April 1689 to agree a new constitutional settlement, during which the Scottish bishops backed a proposed union in an attempt to preserve Episcopalian control of the kirk. William and Mary were supportive of the idea, but it was opposed both by the Presbyterian majority in Scotland and by the English Parliament. Episcopacy in Scotland was abolished in 1690, alienating a significant part of the political class; it was this element that later formed the bedrock of opposition to Union.
The 1690s were a time of economic hardship in Europe as a whole and Scotland in particular, a period now known as the Seven ill years which led to strained relations with England. In 1698, the Company of Scotland Trading to Africa and the Indies received a charter to raise capital through public subscription. The Company invested in the Darién scheme, an ambitious plan funded almost entirely by Scottish investors to build a colony on the Isthmus of Panama for trade with East Asia. The scheme was a disaster; the losses of over £150,000 severely impacted the Scottish commercial system.
The Acts of Union should be seen within a wider European context of increasing state centralisation during the late 17th and early 18th centuries, including the monarchies of France, Sweden, Denmark and Spain. While there were exceptions, such as the Dutch Republic or the Republic of Venice, the trend was clear.
The dangers of the monarch using one Parliament against the other first became apparent in 1647 and 1651. It resurfaced during the 1679 to 1681 Exclusion Crisis, caused by English resistance to the Catholic James II (of England, VII of Scotland) succeeding his brother Charles. James was sent to Edinburgh in 1681 as Lord High Commissioner; in August, the Scottish Parliament passed the Succession Act, confirming the divine right of kings, the rights of the natural heir 'regardless of religion', the duty of all to swear allegiance to that king and the independence of the Scottish Crown. It then went beyond ensuring James's succession to the Scottish throne by explicitly stating the aim was to make his exclusion from the English throne impossible without '...the fatall and dreadfull consequences of a civil war.'
The issue reappeared during the 1688 Glorious Revolution. The English Parliament generally supported replacing James with his Protestant daughter Mary II, but resisted making her Dutch husband William III & II joint ruler. They gave way only when he threatened to return to the Netherlands, and Mary refused to rule without him.
In Scotland, conflict over control of the kirk between Presbyterians and Episcopalians and William's position as a fellow Calvinist put him in a much stronger position. He originally insisted on retaining Episcopacy, and the Committee of the Articles, an unelected body that controlled what legislation Parliament could debate. Both would have given the Crown far greater control than in England but he withdrew his demands due to the 1689–1692 Jacobite Rising.
The English succession was provided for by the English Act of Settlement 1701, which ensured that the monarch of England would be a Protestant member of the House of Hanover. Until the Union of Parliaments, the Scottish throne might be inherited by a different successor after Queen Anne, who had said in her first speech to the English parliament that a Union was 'very necessary'. The Scottish Act of Security 1704, however, was passed after the English parliament, without consultation with Scotland, had designated the Electress Sophia of Hanover (granddaughter of James VI and I) as Anne's successor if she died childless. The Act of Security granted the Parliament of Scotland, the three Estates, the right to choose a successor, and explicitly required a choice different from the English monarch unless the English were to grant free trade and navigation. The English parliament responded with the Alien Act 1705, which designated Scots in England as 'foreign nationals' and blocked about half of all Scottish trade by boycotting exports to England or its colonies, unless Scotland returned to negotiate a Union. To encourage a Union, 'honours, appointments, pensions and even arrears of pay and other expenses were distributed to clinch support from Scottish peers and MPs.'
The Scottish economy was severely impacted by privateers during the 1688 to 1697 Nine Years' War and the War of the Spanish Succession from 1701, with the Royal Navy focusing on protecting English ships. This compounded the economic pressure caused by the Darien scheme and by the seven ill years of the 1690s, when between 5 and 15% of the population died of starvation. The Scottish Parliament was promised financial assistance, protection for its maritime trade, and an end to economic restrictions on trade with England.
The votes of the Court party, influenced by Queen Anne's favourite, the Duke of Queensberry, combined with the majority of the Squadrone Volante, were sufficient to ensure passage of the treaty. Article 15 granted £398,085 and ten shillings sterling to Scotland, a sum known as The Equivalent, to offset future liability towards the English national debt, which at the time stood at £18 million. As Scotland had no national debt, most of the sum was used to compensate the investors in the Darien scheme, with 58.6% of the fund allocated to its shareholders and creditors.
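As a rough arithmetic illustration (not a figure given in the sources above), treating "£398,085 and ten shillings" as £398,085.5 and applying the 58.6% share gives the approximate portion of The Equivalent that went to the Darien investors:

$$0.586 \times \pounds 398{,}085.5 \approx \pounds 233{,}278$$

That is, roughly £233,000 of the sum would have been paid out to the scheme's shareholders and creditors.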
The role played by bribery has long been debated; £20,000 was distributed by the Earl of Glasgow, of which 60% went to James Douglas, 2nd Duke of Queensberry, the Queen's Commissioner in Parliament. Another negotiator, Argyll, was given an English peerage. Robert Burns is commonly quoted in support of the argument of corruption: "We're bought and sold for English Gold, Such a Parcel of Rogues in a Nation." As historian Christopher Whatley points out, this was actually a 17th-century Scots folk song; he agrees money was paid, but suggests that most Scots MPs supported the union for its perceived economic benefits, alongside the promises of benefits made to peers and MPs, even if reluctantly. Professor Sir Tom Devine agreed that promises of 'favours, sinecures, pensions, offices and straightforward cash bribes became indispensable to secure government majorities'. As for representation going forward, Scotland was to receive only 45 MPs in the new united parliament, one more than Cornwall, and only 16 (unelected) places for peers in the House of Lords.
Sir George Lockhart of Carnwath, the only Scottish negotiator to oppose Union, noted "the whole nation appears against (it)". Another negotiator, Sir John Clerk of Penicuik, who was an ardent Unionist, observed it was "contrary to the inclinations of at least three-fourths of the Kingdom". As the seat of the Scottish Parliament, demonstrators in Edinburgh feared the impact of its loss on the local economy. Elsewhere, there was widespread concern about the independence of the kirk, and possible tax rises.
As the Treaty passed through the Scottish Parliament, opposition was voiced by petitions from shires, burghs, presbyteries and parishes. The Convention of Royal Burghs claimed 'we are not against an honourable and safe union with England', but 'the condition of the people of Scotland, (cannot be) improved without a Scots Parliament'. Not one petition in favour of Union was received by Parliament. On the day the treaty was signed, the carillonneur in St Giles' Cathedral, Edinburgh, rang the bells in the tune Why should I be so sad on my wedding day? Threats of widespread civil unrest resulted in Parliament imposing martial law.
Treaty and passage of the 1707 Acts
Deeper political integration had been a key policy of Queen Anne from the time she acceded to the throne in 1702. Under the aegis of the Queen and her ministers in both kingdoms, the parliaments of England and Scotland agreed to participate in fresh negotiations for a union treaty in 1705.
Both countries appointed 31 commissioners to conduct the negotiations. Most of the Scottish commissioners favoured union, and about half were government ministers and other officials. At the head of the list was Queensberry, and the Lord Chancellor of Scotland, the Earl of Seafield. The English commissioners included the Lord High Treasurer, the Earl of Godolphin, the Lord Keeper, Baron Cowper, and a large number of Whigs who supported union. Tories were not in favour of union and only one was represented among the commissioners.
Negotiations between the English and Scottish commissioners took place between 16 April and 22 July 1706 at the Cockpit in London. Each side had its own particular concerns. Within a few days, and with only one face to face meeting of all 62 commissioners, England had gained a guarantee that the Hanoverian dynasty would succeed Queen Anne to the Scottish crown, and Scotland received a guarantee of access to colonial markets, in the hope that they would be placed on an equal footing in terms of trade.
After negotiations ended in July 1706, the acts had to be ratified by both Parliaments. In Scotland, about 100 of the 227 members of the Parliament of Scotland were supportive of the Court Party. For extra votes the pro-court side could rely on about 25 members of the Squadrone Volante, led by the Marquess of Montrose and the Duke of Roxburghe. Opponents of the court were generally known as the Country party, and included various factions and individuals such as the Duke of Hamilton, Lord Belhaven and Andrew Fletcher of Saltoun, who spoke forcefully and passionately against the union when the Scottish Parliament began its debate on the act on 3 October 1706; but by then the deal had already been done. The Court party enjoyed significant funding from England and the Treasury, and included many who had accumulated debts following the Darien Disaster.
In Scotland, the Duke of Queensberry was largely responsible for the successful passage of the Union act by the Parliament of Scotland. There he received much criticism from local residents, but in England he was cheered for his action. He had personally received around half of the funding awarded by the Westminster Treasury. In April 1707, he travelled to London to attend celebrations at the royal court, and was greeted by groups of noblemen and gentry lined along the road. From Barnet, the route was lined with crowds of cheering people, and once he reached London a huge crowd had formed. On 17 April, the Duke was gratefully received by the Queen at Kensington Palace.
The Treaty of Union, agreed between representatives of the Parliament of England and the Parliament of Scotland in 1706, consisted of 25 articles, 15 of which were economic in nature. In Scotland, each article was voted on separately and several clauses in articles were delegated to specialised subcommittees. Article 1 of the treaty was based on the political principle of an incorporating union and this was secured by a majority of 116 votes to 83 on 4 November 1706. To minimise the opposition of the Church of Scotland, an Act was also passed to secure the Presbyterian establishment of the Church, after which the Church stopped its open opposition, although hostility remained at lower levels of the clergy. The treaty as a whole was finally ratified on 16 January 1707 by a majority of 110 votes to 69.
The two Acts incorporated provisions for Scotland to send representative peers from the Peerage of Scotland to sit in the House of Lords. It guaranteed that the Church of Scotland would remain the established church in Scotland, that the Court of Session would "remain in all time coming within Scotland", and that Scots law would "remain in the same force as before". Other provisions included the restatement of the Act of Settlement 1701 and the ban on Roman Catholics from taking the throne. It also created a customs union and monetary union.
The Act provided that any "laws and statutes" that were "contrary to or inconsistent with the terms" of the Act would "cease and become void".
The Scottish Parliament also passed the Protestant Religion and Presbyterian Church Act 1707 guaranteeing the status of the Presbyterian Church of Scotland. The English Parliament passed a similar Act, 6 Anne c.8.
Soon after the Union, the Act 6 Anne c.40—later named the Union with Scotland (Amendment) Act 1707—united the English and Scottish Privy Councils and decentralised Scottish administration by appointing justices of the peace in each shire to carry out local administration. In effect it took the day-to-day government of Scotland out of the hands of politicians and into those of the College of Justice.
On 18 December 1707 the Act for better Securing the Duties of East India Goods was passed which extended the monopoly of the East India Company to Scotland.
Scotland benefited, says historian G.N. Clark, gaining "freedom of trade with England and the colonies" as well as "a great expansion of markets". The agreement guaranteed the permanent status of the Presbyterian church in Scotland, and the separate system of laws and courts in Scotland. Clark argued that in exchange for the financial benefits and bribes that England bestowed, what it gained was
of inestimable value. Scotland accepted the Hanoverian succession and gave up her power of threatening England's military security and complicating her commercial relations ... The sweeping successes of the eighteenth-century wars owed much to the new unity of the two nations.
By the time Samuel Johnson and James Boswell made their tour in 1773, recorded in A Journey to the Western Islands of Scotland, Johnson noted that Scotland was "a nation of which the commerce is hourly extending, and the wealth increasing" and in particular that Glasgow had become one of the greatest cities of Britain.
The Scottish Government held a number of commemorative events through the year including an education project led by the Royal Commission on the Ancient and Historical Monuments of Scotland, an exhibition of Union-related objects and documents at the National Museums of Scotland and an exhibition of portraits of people associated with the Union at the National Galleries of Scotland.
Scottish voting records
- Acts of Union 1800 (King of Great Britain with Kingdom of Ireland)
- English independence
- History of democracy
- List of treaties
- MacCormick v Lord Advocate
- Parliament of the United Kingdom
- Political union
- Real union
- Scottish independence
- Unionism in Scotland
- Welsh independence
- About £23 million today.
- About £67 million today.
- About £3 billion in today's money.
- About £3.4 million today.
- The citation of this Act by this short title was authorised by section 1 of, and Schedule 1 to, the Short Titles Act 1896. Due to the repeal of those provisions, it is now authorised by section 19(2) of the Interpretation Act 1978.
- Article I of the Treaty of Union
- "House of Commons Journal Volume 1: 31 March 1607". Retrieved 27 October 2020.
- Act of Union 1707, Article 3
- Larkin & Hughes 1973, p. 19.
- Lockyer 1998, pp. 51–52.
- Lockyer 1998, pp. 54–59.
- Stephen 2010, pp. 55–58.
- McDonald 1998, pp. 75–76.
- Kaplan 1970, pp. 50–70.
- Robertson 2014, p. 125.
- Harris 2015, pp. 53–54.
- Morrill 1990, p. 162.
- The 1657 Act's long title was An Act and Declaration touching several Acts and Ordinances made since 20 April 1653, and before 3 September 1654, and other Acts
- Parliament.uk Archived 12 October 2008 at the Wayback Machine
- MacIntosh 2007, pp. 79–87.
- Whatley 2001, p. 95.
- Lynch 1992, p. 305.
- Harris 2007, pp. 404–406.
- Whatley 2006, p. 91.
- Mitchison 2002, pp. 301–302.
- Richards 2004, p. 79.
- Mitchison 2002, p. 314.
- Munck 2005, pp. 429–431.
- Jackson 2003, pp. 38–54.
- Horwitz 1986, pp. 10–11.
- Lynch 1992, pp. 300–303.
- MacPherson, Hamish (27 September 2020). "How the Act of Union came about through a corrupt fixed deal in 1706". The National. Retrieved 27 September 2020.
- "Ratification, October 1706 – March 1707". www.parliament.uk. Retrieved 27 September 2020.
- Cullen 2010, p. 117.
- Whatley 2001, p. 48.
- Watt 2007, p. ?.
- Whatley 1989, pp. 160–165.
- Devine, T. M. (Thomas Martin) (5 July 2012). The Scottish nation : a modern history. London: Penguin. ISBN 978-0-7181-9673-8. OCLC 1004568536.
- "Scottish Referendums". BBC. Retrieved 16 March 2016.
- Bambery 2014, p. ?.
- The Humble Address of the Commissioners to the General Convention of the Royal Burrows of this Ancient Kingdom Convened the Twenty-Ninth of October 1706, at Edinburgh.
- Notes by John Purser to CD Scotland's Music, Facts about Edinburgh.
- "The commissioners". UK Parliament website. 2007. Archived from the original on 19 June 2009. Retrieved 5 February 2013.
- "The course of negotiations". UK Parliament website. 2007. Archived from the original on 21 July 2009. Retrieved 5 February 2013.
- "Ratification". UK parliament website. 2007. Archived from the original on 19 June 2009. Retrieved 5 February 2013.
- "1 May 1707 – the Union comes into effect". UK Parliament website. 2007. Archived from the original on 19 June 2009. Retrieved 5 February 2013.
- Riley 1969, pp. 523–524.
- G.N. Clark, The Later Stuarts, 1660–1714 (2nd ed. 1956) pp 290–93.
- Gordon Brown (2014). My Scotland, Our Britain: A Future Worth Sharing. Simon & Schuster UK. p. 150. ISBN 9781471137518.
- House of Lords – Written answers, 6 November 2006, TheyWorkForYou.com
- Announced by the Scottish Culture Minister, Patricia Ferguson, 9 November 2006
Sources and further reading
- Bambery, Chris (2014). A People's History of Scotland. Verso. ISBN 978-1786637871.
- Campbell, R. H. “The Anglo-Scottish Union of 1707. II. The Economic Consequences.” Economic History Review vol. 16, no. 3, 1964, pp. 468–477 online
- Cullen, K. J. (2010). Famine in Scotland: The “Ill Years” of the 1690s. Edinburgh University Press. ISBN 0748638873.
- Harris, Tim (2007). Revolution: The Great Crisis of the British Monarchy, 1685–1720. Penguin. ISBN 978-0141016528.
- Harris, Tim (2015). Rebellion: Britain's First Stuart Kings, 1567–1642. OUP Oxford. ISBN 978-0198743118.
- Horwitz, Henry (1986). Parliament, Policy and Politics in the Reign of William III. MUP. ISBN 978-0719006616.
- Jackson, Clare (2003). Restoration Scotland, 1660–1690: Royalist Politics, Religion and Ideas. Boydell Press. ISBN 978-0851159300.
- Kaplan, Lawrence (May 1970). "Steps to War: The Scots and Parliament, 1642–1643". Journal of British Studies. 9 (2): 50–70. doi:10.1086/385591. JSTOR 175155.
- Larkin, James F.; Hughes, Paul L., eds. (1973). Stuart Royal Proclamations: Volume I. Clarendon Press.
- Lynch, Michael (1992). Scotland: a New History. Pimlico Publishing. ISBN 978-0712698931.
- Lockyer, R (1998). James VI and I. London: Addison Wesley Longman. ISBN 978-0-582-27962-9.
- MacIntosh, Gillian (2007). Scottish Parliament under Charles II, 1660–1685. Edinburgh University Press. ISBN 978-0748624577.
- McDonald, Alan (1998). The Jacobean Kirk, 1567–1625: Sovereignty, Polity and Liturgy. Routledge. ISBN 978-1859283738.
- Mitchison, Rosalind (2002). A History of Scotland. Routledge. ISBN 978-0415278805.
- Morrill, John (1990). Oliver Cromwell and the English Revolution. Longman. ISBN 978-0582016750.
- Munck, Thomas (2005). Seventeenth-Century Europe: State, Conflict and Social Order in Europe 1598–1700. Palgrave. ISBN 978-1403936196.
- Richards, E (2004). Britannia's Children: Emigration from England, Scotland, Wales and Ireland since 1600. Continuum. ISBN 1852854413.
- Riley, PJW (1969). "The Union of 1707 as an Episode in English Politics". The English Historical Review. 84 (332): 498–527. JSTOR 562482.
- Robertson, Barry (2014). Royalists at War in Scotland and Ireland, 1638–1650. Routledge. ISBN 978-1317061069.
- Smout, T. C. “The Anglo-Scottish Union of 1707. I. The Economic Background.” Economic History Review vol. 16, no. 3, 1964, pp. 455–467. online
- Stephen, Jeffrey (January 2010). "Scottish Nationalism and Stuart Unionism". Journal of British Studies. 49 (1, Scottish Special). doi:10.1086/644534. S2CID 144730991.
- Watt, Douglas (2007). The Price of Scotland: Darien, Union and the wealth of nations. Luath Press. ISBN 978-1906307097.
- Whatley, C (2001). Bought and sold for English Gold? Explaining the Union of 1707. East Linton: Tuckwell Press. ISBN 978-1-86232-140-3.
- Whatley, C (2006). The Scots and the Union. Edinburgh University Press. ISBN 978-0-7486-1685-5.
- Whatley, Christopher (1989). "Economic Causes and Consequences of the Union of 1707: A Survey". Scottish Historical Review. 68 (186).
- Defoe, Daniel. A tour thro' the Whole Island of Great Britain, 1724–27
- Defoe, Daniel. The Letters of Daniel Defoe, GH Healey editor. Oxford: 1955.
- Fletcher, Andrew (Saltoun). An Account of a Conversation
- Lockhart, George, "The Lockhart Papers", 1702–1728 | https://worddisk.com/wiki/Act_of_Union_1707/ | 21 |
14 | Paris Peace Conference (1919–1920)
The Paris Peace Conference was the formal meeting in 1919 and 1920 of the victorious Allies after the end of World War I to set the peace terms for the defeated Central Powers. Dominated by the leaders of Britain, France, the United States and Italy, it resulted in five controversial treaties that rearranged the map of Europe and parts of Asia, Africa and the Pacific Islands and imposed financial penalties. Germany and the other losing nations had no voice, which gave rise to political resentments that lasted for decades.
The conference involved diplomats from 32 countries and nationalities, and its major decisions were the creation of the League of Nations and the five peace treaties with the defeated states; the awarding of German and Ottoman overseas possessions as "mandates," chiefly to Britain and France; the imposition of reparations upon Germany; and the drawing of new national boundaries, sometimes with plebiscites, to reflect ethnic boundaries more closely.
The conference began on 18 January 1919. With respect to its end, Professor Michael Neiberg noted, "Although the senior statesmen stopped working personally on the conference in June 1919, the formal peace process did not really end until July 1923, when the Treaty of Lausanne was signed."
It is often referred to as the "Versailles Conference," but only the signing of the first treaty took place there, in the historic palace, and the negotiations occurred at the Quai d'Orsay, in Paris.
Overview and direct results
The location of the signing of the five principal treaties within the Île de France
The Conference formally opened on 18 January 1919 at the Quai d'Orsay in Paris. This date was symbolic, as it was the anniversary of the proclamation of William I as German Emperor in 1871, in the Hall of Mirrors at the Palace of Versailles, shortly before the end of the Siege of Paris - a day itself imbued with significance in Germany as the anniversary of the establishment of the Kingdom of Prussia in 1701.
The Delegates from 27 nations (delegates representing 5 nationalities were for the most part ignored) were assigned to 52 commissions, which held 1,646 sessions to prepare reports, with the help of many experts, on topics ranging from prisoners of war to undersea cables, to international aviation, to responsibility for the war. Key recommendations were folded into the Treaty of Versailles with Germany, which had 15 chapters and 440 clauses, as well as treaties for the other defeated nations.
The five major powers (France, Britain, Italy, the U.S., and Japan) controlled the Conference. Amongst the "Big Five", in practice Japan only sent a former prime minister and played a small role, and the "Big Four" leaders dominated the conference. The four met together informally 145 times and made all the major decisions, which in turn were ratified by other attendees. The open meetings of all the delegations approved the decisions made by the Big Four. The conference came to an end on 21 January 1920 with the inaugural General Assembly of the League of Nations.
Five major peace treaties were prepared at the Paris Peace Conference (with, in parentheses, the affected countries): the Treaty of Versailles (Germany), the Treaty of Saint-Germain-en-Laye (Austria), the Treaty of Neuilly (Bulgaria), the Treaty of Trianon (Hungary), and the Treaty of Sèvres (Ottoman Empire).
The major decisions were the establishment of the League of Nations; the five peace treaties with defeated enemies; the awarding of German and Ottoman overseas possessions as "mandates", chiefly to members of the British Empire and to France; reparations imposed on Germany; and the drawing of new national boundaries (sometimes with plebiscites) to better reflect the forces of nationalism. The main result was the Treaty of Versailles, with Germany, which in section 231 laid the guilt for the war on "the aggression of Germany and her allies". This provision proved humiliating for Germany and set the stage for very high reparations Germany was supposed to pay (it paid only a small portion before reparations ended in 1931).
A central issue of the conference was the disposition of the overseas colonies of Germany. (Austria-Hungary did not have major colonies, and the Ottoman Empire was a separate issue.)
The British dominions wanted their reward for their sacrifice. Australia wanted New Guinea, New Zealand wanted Samoa, and South Africa wanted South West Africa. Wilson wanted the League to administer all German colonies until they were ready for independence. Lloyd George realized he needed to support his dominions, and so he proposed a compromise: there would be three types of mandates. Mandates for the Turkish provinces were one category and would be divided up between Britain and France. The second category, of New Guinea, Samoa, and South West Africa, were located so close to responsible supervisors that the mandates could hardly be given to anyone except Australia, New Zealand, and South Africa. Finally, the African colonies would need careful supervision as "Class B" mandates, which could be provided only by experienced colonial powers: Britain, France, and Belgium, although Italy and Portugal received small amounts of territory. Wilson and the others finally went along with the solution.
The dominions received "Class C Mandates" to the colonies that they wanted. Japan obtained mandates over German possessions north of the Equator. Wilson wanted no mandates for the United States, but his main advisor, Colonel House, was deeply involved in awarding the others.
Wilson was especially offended by Australian demands and had some memorable clashes with Hughes (the Australian Prime Minister), this being the most famous:
Wilson: But after all, you speak for only five million people.
Hughes: I represent sixty thousand dead.
The British Air Section at the conference
The maintenance of the unity, territories, and interests of the British Empire was an overarching concern for the British delegates to the conference, but they entered the conference with more specific goals with this order of priority:
- Ensuring the security of France
- Removing the threat of the German High Seas Fleet
- Settling territorial contentions
- Supporting the League of Nations
The Racial Equality Proposal, put forth by the Japanese, did not directly conflict with any core British interest, but as the conference progressed, its full implications for immigration to the British dominions, with Australia taking particular exception, would become a major point of contention within the delegation. Ultimately, Britain did not see the proposal as being one of the fundamental aims of the conference. Its delegation was, therefore, willing to sacrifice the proposal to placate the Australian delegation and thus help to satisfy its overarching aim of preserving the empire's unity.
Britain had reluctantly consented to the attendance of separate dominion delegations, but the British managed to rebuff attempts by the envoys of the newly proclaimed Irish Republic to put its case to the conference for self-determination, diplomatic recognition, and membership of the proposed League of Nations. The Irish envoys' final "Demand for Recognition" in a letter to Clemenceau, the chairman, was not answered.
Britain planned to legislate for two Irish Home Rule states, without dominion status, and accordingly passed the Government of Ireland Act 1920. Irish nationalists were generally unpopular with the Allies in 1919 because of their anti-war stance during the Conscription Crisis of 1918.
David Lloyd George commented that he did "not do badly" at the peace conference, "considering I was seated between Jesus Christ and Napoleon." This was a reference to the great idealism of Wilson, who desired a lenient peace with Germany, and the stark realism of Clemenceau, who was determined to see Germany punished.
The Australian delegation, with Australian Prime Minister Billy Hughes in the center
The dominion governments were not originally given separate invitations to the conference but were expected to send representatives as part of the British delegation.
Convinced that Canada had become a nation on the battlefields of Europe, Prime Minister Sir Robert Borden demanded that it have a separate seat at the conference. That was initially opposed not only by Britain but also by the United States, which saw a Dominion delegation as an extra British vote. Borden responded by pointing out that since Canada had lost nearly 60,000 men, a far larger proportion of its men than the 50,000 Americans lost, it had at least the right to the representation of a "minor" power. Lloyd George eventually relented and persuaded the reluctant Americans to accept the presence of delegations from Canada, India, Australia, Newfoundland, New Zealand, and South Africa, and that those countries receive their own seats in the League of Nations.
Canada, despite its huge losses in the war, did not ask for either reparations or mandates.
The Australian delegation, led by Australian Prime Minister Billy Hughes, fought hard for its demands: reparations, the annexation of German New Guinea, and the rejection of the Racial Equality Proposal. Hughes said that he had no objection to the proposal if it was stated in unambiguous terms that it did not confer any right to enter Australia. He was concerned by the rise of Japan. Within months of the declaration of war in 1914, Japan, Australia, and New Zealand had seized all of Germany's possessions in the Far East and the Pacific Ocean. Japan occupied German possessions with the blessings of the British, but Hughes was alarmed by the policy.
Woodrow Wilson, Georges Clemenceau, and David Lloyd George confer at the Paris Peace Conference (Noël Dorville)
French Prime Minister Georges Clemenceau controlled his delegation, and his chief goal was to weaken Germany militarily, strategically, and economically. Having personally witnessed two German attacks on French soil in the last 40 years, he was adamant that Germany should not be permitted to attack France again. In particular, Clemenceau sought an American and British joint guarantee of French security in the event of another German attack.
Clemenceau also expressed skepticism and frustration with Wilson's Fourteen Points and complained: "Mr. Wilson bores me with his fourteen points. Why, God Almighty has only ten!" Wilson won a few points by signing a mutual defense treaty with France, but he did not present it to the Senate for ratification and so it never took effect.
Another possible French policy was to seek a rapprochement with Germany. In May 1919 the diplomat René Massigli was sent on several secret missions to Berlin. During his visits, he offered, on behalf of his government, to revise the territorial and economic clauses of the upcoming peace treaty. Massigli spoke of the desirability of "practical, verbal discussions" between French and German officials that would lead to a "Franco-German collaboration." Furthermore, Massigli told the Germans that the French considered the "Anglo-Saxon powers" (the United States and the British Empire) to be the major threat to France in the post-war world. He argued that both France and Germany had a joint interest in opposing "Anglo-Saxon domination" of the world, and he warned that the "deepening of opposition" between the French and the Germans "would lead to the ruin of both countries, to the advantage of the Anglo-Saxon powers."
The Germans rejected the French offers because they considered the French overtures to be a trap to trick them into accepting the Treaty of Versailles unchanged; also, the German Foreign Minister, Count Ulrich von Brockdorff-Rantzau, thought that the United States was more likely to reduce the severity of the peace treaty than France was. Eventually, it was Lloyd George who pushed for better terms for Germany.
In 1914, Italy remained neutral despite the Triple Alliance with Germany and Austria-Hungary. In 1915, it joined the Allies to gain the territories promised by the Triple Entente in the secret Treaty of London: the Tyrol as far as Brenner, most of the Dalmatian Coast, a protectorate over Albania, Antalya (in Turkey), and possibly colonies in Africa.
Italian Prime Minister Vittorio Emanuele Orlando tried to obtain full implementation of the Treaty of London, as agreed by France and Britain before the war. He had popular support, because the loss of 700,000 soldiers and a budget deficit of 12,000,000,000 Italian lire during the war made both the government and the people feel entitled to all of those territories and even others not mentioned in the Treaty of London, particularly Fiume, which many Italians believed should be annexed to Italy because of the city's Italian population.
Orlando, unable to speak English, conducted negotiations jointly with his Foreign Minister Sidney Sonnino, a Protestant of British origins who spoke the language. Together, they worked primarily to secure the partition of the Habsburg Monarchy. At the conference, Italy gained Istria and South Tyrol. Most of Dalmatia, however, was given to the Kingdom of Serbs, Croats and Slovenes, and Fiume remained disputed territory, causing a nationalist outrage. Orlando obtained other results, such as the permanent membership of Italy in the League of Nations and the promise by the Allies to transfer British Jubaland and the French Aozou strip to Italian colonies. Protectorates over Albania and Antalya were also recognized, but nationalists considered the war to be a mutilated victory, and Orlando was ultimately forced to abandon the conference and to resign. Francesco Saverio Nitti took his place and signed the treaties.
There was a general disappointment in Italy, which the nationalists and fascists used to build the idea that Italy was betrayed by the Allies and refused what had been promised. That was a cause for the general rise of Italian fascism. Orlando refused to see the war as a mutilated victory and replied to nationalists calling for a greater expansion, "Italy today is a great state... on par with the great historic and contemporary states. This is, for me, our main and principal expansion."
The Japanese delegation at the Paris Peace Conference
The Japanese delegation at the Conference, with (seated left to right) former Foreign Minister Baron Makino Nobuaki, former Prime Minister Marquis Saionji Kinmochi, and Japanese Ambassador to Great Britain Viscount Chinda Sutemi
Japan sent a large delegation, headed by the former Prime Minister, Marquis Saionji Kinmochi. It was originally one of the "big five" but relinquished that role because of its slight interest in European affairs. Instead, it focused on two demands: the inclusion of its Racial Equality Proposal in the League's Covenant, and Japanese territorial claims with respect to former German colonies: Shantung and the Pacific islands north of the Equator (the Marshall Islands, the Mariana Islands, and the Carolines). The former Foreign Minister Baron Makino Nobuaki was the de facto chief, and Saionji's role was symbolic and limited because of his history of ill-health. The Japanese delegation became unhappy after it had received only half of the rights of Germany, and it then walked out of the conference.
Racial equality proposal
The equality of nations being a basic principle of the League of Nations, the High Contracting Parties agree to accord as soon as possible to all alien nationals of states, members of the League, equal and just treatment in every respect making no distinction, either in law or in fact, on account of their race or nationality.
Wilson knew that Great Britain was critical to the decision and, as Conference chairman, ruled that a unanimous vote was required. On 11 April 1919, the commission held a final session and the Racial Equality Proposal received a majority of votes, but Britain and Australia did not support it. The Australians had lobbied the British to defend Australia's White Australia policy. Wilson also knew that domestically, he needed the support of the West, which feared Japanese and Chinese immigration, and the South, which feared the rise of their black citizens.
The defeat of the proposal influenced Japan's turn from co-operation with the Western world into more nationalist and militarist policies and approaches.
The Japanese claim to Shantung faced strong challenges from the Chinese patriotic student group. In 1914, at the outset of the war, Japan had seized the territory that had been granted to Germany in 1897 and also seized the German islands in the Pacific north of the equator. In 1917, Japan had made secret agreements with Britain, France, and Italy to guarantee its annexation of these territories. With Britain, there was an agreement to support British annexation of the Pacific Islands south of the Equator. Despite a generally pro-Chinese view by the American delegation, Article 156 of the Treaty of Versailles transferred the German concessions in Jiaozhou Bay, China, to Japan rather than returning sovereign authority to China. The leader of the Chinese delegation, Lou Tseng-Tsiang, demanded that a reservation be inserted before he would sign the treaty. After the reservation was denied, the treaty was signed by all the delegations except that of China. Chinese outrage over that provision led to demonstrations known as the May Fourth Movement. The Pacific Islands north of the equator became a Class C mandate, administered by Japan.
Until Wilson's arrival in Europe in December 1918, no sitting American president had ever visited the continent. Wilson's Fourteen Points of January 1918 had helped win many hearts and minds as the war ended, in America and all over Europe, including Germany, as well as its allies and the former subjects of the Ottoman Empire.
Wilson's diplomacy and his Fourteen Points had essentially established the conditions for the armistices that had brought an end to World War I. Wilson felt it to be his duty and obligation to the people of the world to be a prominent figure at the peace negotiations. High hopes and expectations were placed on him to deliver what he had promised for the postwar era. In doing so, Wilson ultimately began to lead the foreign policy of the United States, a move that has been strongly resisted in some domestic circles ever since.
Once Wilson arrived, however, he found "rivalries, and conflicting claims previously submerged." He worked mostly trying to sway the direction of the French, led by Georges Clemenceau, and the British, led by David Lloyd George, towards Germany and its allies in Europe and the former Ottoman Empire in the Middle East. Wilson's attempts to gain acceptance of his Fourteen Points ultimately failed after France and Britain had refused to adopt some of their specific points and core principles.
In Europe, several of his Fourteen Points conflicted with the other powers' desires. The United States did not encourage or believe that the responsibility for the war, which Article 231 of the Treaty of Versailles placed on Germany alone, was fair or warranted. It would not be until 1921, under US President Warren Harding, that the United States finally signed separate peace treaties with the Central Powers: with Germany, Austria, and Hungary.
In the Middle East, negotiations were complicated by competing aims and claims, and the new mandate system. The United States hoped to establish a more liberal and diplomatic world, as stated in the Fourteen Points, in which democracy, sovereignty, liberty and self-determination would be respected. France and Britain, on the other hand, already controlled empires, wielded power over their subjects around the world, and still aspired to be dominant colonial powers.
In the light of the previously-secret Sykes–Picot Agreement and following the adoption of the mandate system on the Arab provinces of the former Ottoman Empire, the conference heard statements from competing Zionists and Arabs. Wilson then recommended an international commission of inquiry to ascertain the wishes of the local inhabitants. The idea, first accepted by Great Britain and France, was later rejected but became the purely-American King–Crane Commission, which toured all Syria and Palestine during the summer of 1919, took statements, and sampled opinion. Its report, presented to Wilson, was kept secret from the public until The New York Times broke the story in December 1922.
A pro-Zionist joint resolution on Palestine was passed by Congress in September 1922.
France and Britain tried to appease Wilson by consenting to the establishment of his League of Nations. However, because isolationist sentiment was strong and some of the articles in the League Charter conflicted with the US Constitution, the United States never ratified the Treaty of Versailles or joined the League that Wilson had helped to create to further peace by diplomacy, rather than war, and the conditions that can breed peace.
Greek Prime Minister Eleftherios Venizelos took part in the conference as Greece's chief representative. Wilson was said to have placed Venizelos first for personal ability among all delegates in Paris.
As a liberal politician, Venizelos was a strong supporter of the Fourteen Points and the League of Nations.
The Chinese delegation was led by Lou Tseng-Tsiang, who was accompanied by Wellington Koo and Cao Rulin. Koo demanded Germany's concessions on Shandong be returned to China. He also called for an end to imperialist institutions such as extraterritoriality, legation guards, and foreign leaseholds. Despite American support and the ostensible spirit of self-determination, the Western powers refused his claims but instead transferred the German concessions to Japan. That sparked widespread student protests in China on 4 May, later known as the May Fourth Movement, which eventually pressured the government into refusing to sign the Treaty of Versailles. Thus, the Chinese delegation at the conference was the only one not to sign the treaty at the signing ceremony.
All-Russian Government (Whites)
Russia was formally excluded from the Conference, although it had fought against the Central Powers for three years. However, the Russian Provincial Council (chaired by Prince Lvov), the successor to the Russian Constitutional Assembly and the political arm of the Russian White movement, attended the conference and was represented by the former tsarist minister Sergey Sazonov, who, if the Tsar had not been overthrown, would most likely have attended the conference anyway. The Council maintained the position of an indivisible Russia, but some were prepared to negotiate over the loss of Poland and Finland. The Council suggested that all matters relating to territorial claims or demands for autonomy within the former Russian Empire be referred to a new All-Russian Constituent Assembly.
Map of Ukraine presented by the Ukrainian delegation at the Paris Peace Conference in a bid that was ultimately rejected, which led to the incorporation of Ukraine into the Soviet Union. The Kuban was then mostly Ukrainian.
Ukraine had its best opportunity to win recognition and support from foreign powers at the conference.
At a meeting of the Big Five on 16 January, Lloyd George called Ukrainian leader Symon Petliura (1874–1926) an adventurer and dismissed Ukraine as an anti-Bolshevik stronghold. Sir Eyre Crowe, British Undersecretary of State for Foreign Affairs, spoke against a union of East Galicia and Poland. The British cabinet never decided whether to support a united or dismembered Russia. The United States was sympathetic to a strong, united Russia, as a counterpoise to Japan, but Britain feared a threat to India. Petliura appointed Count Tyshkevich as his representative to the Vatican, and Pope Benedict XV recognized Ukrainian independence, but Ukraine was effectively ignored.
At the insistence of Wilson, the Big Four required Poland to sign a treaty on 28 June 1919 that guaranteed minority rights in the new nation. Poland signed under protest and made little effort to enforce the specified rights for Germans, Jews, Ukrainians, and other minorities. Similar treaties were signed by Czechoslovakia, Romania, Yugoslavia, Greece, Austria, Hungary, and Bulgaria and later by Latvia, Estonia, and Lithuania. Estonia had already given cultural autonomy to minorities in its declaration of independence. Finland and Germany were not asked to sign a minority treaty.
In Poland, the key provisions were to become fundamental laws, which would override any national legal codes or legislation. The new country pledged to assure "full and complete protection of life and liberty to all individuals... without distinction of birth, nationality, language, race, or religion." Freedom of religion was guaranteed to everyone. Most residents were given citizenship, but there was considerable ambiguity on who was covered. The treaty guaranteed basic civil, political, and cultural rights and required all citizens to be equal before the law and enjoy identical rights of citizens and workers. Polish was to be the national language, but the treaty provided for minority languages to be freely used privately, in commerce, in religion, in the press, at public meetings, and before all courts. Minorities were to be permitted to establish and control at their own expense private charities, churches, social institutions, and schools, without interference from the government, which was required to set up German-language public schools in districts that had been German before the war. All education above the primary level was to be conducted exclusively in the national language. Article 12 was the enforcement clause and gave the Council of the League of Nations the responsibility to monitor and enforce the treaties.
European Theatre of the Russian Civil War and three South Caucasian republics in the summer of 1918
The three South Caucasian republics of Armenia, Azerbaijan, and Georgia and the Mountainous Republic of the Northern Caucasus all sent a delegation to the conference. Their attempts to gain protection from threats posed by the ongoing Russian Civil War largely failed since none of the major powers was interested in taking a mandate over the Caucasian territories. After a series of delays, the three South Caucasian countries ultimately gained de facto recognition from the Supreme Council of the Allied powers, but only after all European troops had been withdrawn from the Caucasus, except for a British contingent in Batumi. Georgia was recognized de facto on 12 January 1920, followed by Azerbaijan the same day and Armenia on 19 January 1920. The Allied leaders decided to limit their assistance to the Caucasian republics to the supply of arms, munitions, and food.
After a failed attempt by the Korean National Association to send a three-man delegation to Paris, a delegation of Koreans from China and Hawaii made it there. It included a representative from the Korean Provisional Government, Kim Kyu-sik. They were aided by the Chinese, who were eager for the opportunity to embarrass Japan at the international forum. Several top Chinese leaders at the time, including Sun Yat-sen, told US diplomats that the conference should take up the question of Korean independence. However, the Chinese, already locked in a struggle against the Japanese, could do little else for Korea.
Other than China, no nation took the Koreans seriously at the conference because Korea already had the status of a Japanese colony. The failure of Korean nationalists to gain support from the conference ended their hopes of foreign support.
After the conference's decision to separate the former Arab provinces from the Ottoman Empire and to apply the new mandate system to them, the World Zionist Organization submitted its draft resolutions for consideration by the conference on 3 February 1919.
The Zionist state claimed at the conference
British memorandum on Palestine before the conference
The statement included five main points:
- Recognition of the Jewish people's historic title to Palestine and their right to reconstitute their National Home there.
- Palestine's borders were to be declared, including a request for land from the Litani River, now in Lebanon, to al-Arish, now in Egypt.
- Sovereign possession of Palestine to be vested in the League of Nations with government entrusted to the British as the League's mandatee.
- Insertion of other provisions by the High Contracting Parties relating to the application of any general conditions attached to mandates that were suitable for Palestine.
- Additional conditions, including:
- The promotion of Jewish immigration and close settlement on the land and safeguarding rights of the present non-Jewish population
- A Jewish Council representative for the development of the Jewish National Home in Palestine, and offer to the council in priority any concession for public works or for the development of natural resources
- The self-government for localities
- Freedom of religious worship, with no discrimination between the inhabitants regarding citizenship and civil rights on the grounds of religion or race
- Control of the Holy Places
However, despite those attempts to influence the conference, the Zionists were instead constrained by Article 7 of the resulting Palestine Mandate to having the mere right of obtaining Palestinian citizenship: "The Administration of Palestine shall be responsible for enacting a nationality law. There shall be included in this law provisions framed so as to facilitate the acquisition of Palestinian citizenship by Jews who take up their permanent residence in Palestine."
Citing the 1917 Balfour Declaration, the Zionists suggested it meant the British had already recognized the historic title of the Jews to Palestine. The preamble of the British Mandate of 1922, in which the Balfour Declaration was incorporated, stated, "Whereas recognition has thereby been given to the historical connection of the Jewish people with Palestine and to the grounds for reconstituting their national home in that country...."
An unprecedented aspect of the conference was concerted pressure brought to bear on delegates by a committee of women, who sought to establish and entrench women's fundamental social, economic, and political rights, such as that of suffrage, within the peace framework. Although they were denied seats at the Paris Conference, the leadership of Marguerite de Witt-Schlumberger, the president of the French Union for Women's Suffrage, caused an Inter-Allied Women's Conference (IAWC) to be convened, which met from 10 February to 10 April 1919. The IAWC lobbied Wilson and then also the other delegates of the Paris Conference to admit women to its committees, and it was successful in achieving a hearing from the conference's Commissions for International Labour Legislation and then the League of Nations Commission. One key and concrete outcome of the IAWC's work was Article 7 of the Covenant of the League of Nations: "All positions under or in connection with the League, including the Secretariat, shall be open equally to men and women." More generally, the IAWC placed the issue of women's rights at the center of the new world order that was established in Paris.
The remaking of the world map at the conferences gave birth to a number of critical conflict-prone contradictions internationally that would become some of the causes of World War II.
The British historian Eric Hobsbawm wrote:
[N]o equally systematic attempt has been made before or since, in Europe or anywhere else, to redraw the political map on national lines.... The logical implication of trying to create a continent neatly divided into coherent territorial states each inhabited by separate ethnically and linguistically homogeneous population, was the mass expulsion or extermination of minorities. Such was and is the reductio ad absurdum of nationalism in its territorial version, although this was not fully demonstrated until the 1940s.
Hobsbawm and other left-wing historians have argued that Wilson's Fourteen Points, particularly the principle of self-determination, were measures that were primarily aimed against the Bolsheviks and designed, by playing the nationalist card, to tame the revolutionary fever that was sweeping across Europe in the wake of the October Revolution and the end of the war:
"[T]he first Western reaction to the Bolsheviks' appeal to the peoples to make peace—and their publication of the secret treaties in which the Allies had carved up Europe among themselves—had been President Wilson's Fourteen Points, which played the nationalist card against Lenin's international appeal. A zone of small nation-states was to form a sort of quarantine belt against the Red virus.... [T]he establishment of new small nation-states along Wilsonian lines, though far from eliminating national conflicts in the zone of revolutions,... diminished the scope for Bolshevik revolution. That, indeed, had been the intention of the Allied peacemakers."
The right-wing historian John Lewis Gaddis agreed: "When Woodrow Wilson made the principle of self-determination one of his Fourteen Points his intent had been to undercut the appeal of Bolshevism."
That view has a long history and can be summarised by Ray Stannard Baker's famous remark: "Paris cannot be understood without Moscow."
The British historian Antony Lentin viewed Lloyd George's role in Paris as a major success:
Unrivaled as a negotiator, he had powerful combative instincts and indomitable determination, and succeeded through charm, insight, resourcefulness, and simple pugnacity. Although sympathetic to France's desires to keep Germany under control, he did much to prevent the French from gaining power, attempted to extract Britain from the Anglo-French entente, inserted the war-guilt clause, and maintained a liberal and realist view of the postwar world. By doing so, he managed to consolidate power over the House [of Commons], secured his power base, expanded the empire, and sought a European balance of power.
- British official artists William Orpen and Augustus John were present at the Conference.
- World's End (1940), the first novel in Upton Sinclair's Pulitzer Prize-winning Lanny Budd series, describes the political machinations and consequences of the Paris Peace Conference through much of the book's second half, with Sinclair's narrative including many historically accurate characters and events.
- The first two books of novelist Robert Goddard's The Wide World trilogy (The Ways of the World and The Corners of the Globe) are centered around the diplomatic machinations which form the background to the conference.
- Paris 1919 (1973), the third studio album by Welsh musician John Cale, is named after the Paris Peace Conference, and its title song explores various aspects of early-20th-century culture and history in Western Europe.
- A Dangerous Man: Lawrence After Arabia (1992) is a British television film starring Ralph Fiennes as T. E. Lawrence and Alexander Siddig as Emir Faisal, depicting their struggles to secure an independent Arab state at the conference.
- "Paris, May 1919" is a 1993 episode of The Young Indiana Jones Chronicles, written by Jonathan Hales and directed by David Hare, in which Indiana Jones is shown working as a translator with the American delegation at the Paris Peace Conference.
- ^ a b Rene Albrecht-Carrie, Diplomatic History of Europe Since the Congress of Vienna (1958) p. 363
- ^ Michael S. Neiberg (2017). The Treaty of Versailles: A Concise History. Oxford University Press. p. ix. ISBN 978-0-19-065918-9.
- ^ a b Erik Goldstein The First World War Peace Settlements, 1919–1925 p49 Routledge (2013)
- ^ Nelsson, compiled by Richard (9 January 2019). "The Paris peace conference begins - archive, January 1919". The Guardian. ISSN 0261-3077. Retrieved 27 April 2019.
- ^ Goldstein, Erik (11 October 2013). The First World War Peace Settlements, 1919-1925. Routledge. ISBN 9781317883678.
- ^ Ziolkowski, Theodore (2007). "6: The God That Failed". Modes of Faith: Secular Surrogates for Lost Religious Belief. Accessible Publishing Systems PTY, Ltd (published 2011). p. 231. ISBN 9781459627376. Retrieved 19 February 2017. [...] Ebert persuaded the various councils to set elections for 19 January 1919 (the day following a date symbolic in Prussian history ever since the Kingdom of Prussia was established on 18 January 1701).
- ^ Meehan, John David (2005). "4: Failure at Geneva". The Dominion and the Rising Sun: Canada Encounters Japan, 1929-41. Vancouver: UBC Press. pp. 76–77. ISBN 9780774811217. Retrieved 19 February 2017. As the first non-European nation to achieve great-power status, Japan took its place alongside the other Big Five at Versailles, even if it was often a silent partner.
- ^ Antony Lentin, "Germany: a New Carthage?" History Today (2012) 62#1 pp. 22–27 online
- ^ Paul Birdsall, Versailles Twenty Years After (1941) is a convenient history and analysis of the conference. Longer and more recent is Margaret Macmillan, Peacemakers: The Paris Peace Conference of 1919 and Its Attempt to End War (2002), also published as Paris 1919: Six Months That Changed the World (2003); a good short overview is Alan Sharp, The Versailles Settlement: Peacemaking after the First World War, 1919–1923 (2nd ed. 2008)
- ^ Alan Sharp, The Versailles Settlement: Peacemaking After the First World War, 1919–1923 (2nd ed. 2008) ch 7
- ^ Andrew J. Crozier, "The Establishment of the Mandates System 1919–25: Some Problems Created by the Paris Peace Conference," Journal of Contemporary History (1979) 14#3 pp 483–513 in JSTOR.
- ^ Rowland, Peter (1975). "The Man at the Top, 1918-1922". Lloyd George. London: Barrie & Jenkins Ltd. p. 481. ISBN 0214200493.
- ^ Wm Louis, Roger (1966). "Australia and the German Colonies in the Pacific, 1914–1919". Journal of Modern History. 38 (4): 407–421. doi:10.1086/239953. JSTOR 1876683. S2CID 143884972.
- ^ Paul Birdsall, Versailles Twenty Years After (1941) pp. 58–82
- ^ Macmillan, Paris 1919, pp. 98–106
- ^ Scot David Bruce, Woodrow Wilson's Colonial Emissary: Edward M. House and the Origins of the Mandate System, 1917–1919 (University of Nebraska Press, 2013)
- ^ Mungo MacCallum (2013). The Good, the Bad and the Unlikely: Australia's Prime Ministers. Black Inc. p. 38. ISBN 9781863955874.
- ^ Zara S. Steiner (2007). The Lights that Failed: European International History, 1919–1933. Oxford UP. pp. 481–82. ISBN 9780199226863.
- ^ Shimazu (1998), pp. 14–15, 117.
- ^ "Official Memorandum in support of Ireland's demand for recognition as a sovereign independent state. Presented to Georges Clemenceau and the members of the Paris Peace Conference by Sean T O'Ceallaigh and George Gavan Duffy from O Ceallaigh Gavan Duffy to George Clemenceau – June 1919. – Documents on IRISH FOREIGN POLICY".
- ^ John C. Hulsman (2009). To Begin the World Over Again: Lawrence of Arabia from Damascus to Baghdad. pp. 119–20. ISBN 9780230100909.
- ^ Snelling, R. C. (1975). "Peacemaking, 1919: Australia, New Zealand and the British Empire Delegation at Versailles". Journal of Imperial and Commonwealth History. 4 (1): 15–28. doi:10.1080/03086537508582446.
- ^ Fitzhardinge, L. F. (1968). "Hughes, Borden, and Dominion Representation at the Paris Peace Conference". Canadian Historical Review. 49 (2): 160–169. doi:10.3138/chr-049-02-03.
- ^ Margaret McMillan, "Canada and the Peace Settlements," in David Mackenzie, ed., Canada and the First World War (2005) pp. 379–408
- ^ Snelling, R. C. (1975). "Peacemaking, 1919: Australia, New Zealand and the British Empire delegation at Versailles". Journal of Imperial and Commonwealth History. 4 (1): 15–28. doi:10.1080/03086537508582446.
- ^ MacMillan, Paris 1919 pp 26–35
- ^ David Robin Watson, Georges Clemenceau (1974) pp 338–65
- ^ Ambrosius, Lloyd E. (1972). "Wilson, the Republicans, and French Security after World War I". Journal of American History. 59 (2): 341–352. doi:10.2307/1890194. JSTOR 1890194.
- ^ a b Trachtenberg, Marc (1979). "Reparation at the Paris Peace Conference". Journal of Modern History. 51 (1): 24–55 [p. 42]. doi:10.1086/241847. JSTOR 1877867. S2CID 145777701.
- ^ a b Trachtenberg (1979), p. 43.
- ^ Macmillan, ch 22
- ^ H. James Burgwyn, Legend of the Mutilated Victory: Italy, the Great War and the Paris Peace Conference, 1915–1919 (1993)
- ^ Macmillan, ch 23
- ^ Gordon Lauren, Paul (1978). "Human Rights in History: Diplomacy and Racial Equality at the Paris Peace Conference". Diplomatic History. 2 (3): 257–278. doi:10.1111/j.1467-7709.1978.tb00435.x.
- ^ "Racial Equality Amendment, Japan". encyclopedia.com. 2007. Retrieved 12 January 2019.
- ^ Macmillan, Paris 1919 p. 321
- ^ Fifield, Russell. "Japanese Policy toward the Shantung Question at the Paris Peace Conference," Journal of Modern History (1951) 23:3 pp 265–272. in JSTOR reprint primary Japanese sources
- ^ MacMillan (2001), p. 3.
- ^ a b US Dept of State; International Boundary Study, Jordan – Syria Boundary, No. 94 – 30 December 1969, p.10 Archived 27 March 2009 at the Wayback Machine
- ^ MacMillan, Paris 1919 (2001), p. 6.
- ^ Wikisource
- ^ "First World War.com – Primary Documents – U.S. Peace Treaty with Austria, 24 August 1921". Retrieved 30 September 2015.
- ^ "First World War.com – Primary Documents – U.S. Peace Treaty with Hungary, 29 August 1921". Retrieved 30 September 2015.
- ^ Ellis, William (3 December 1922). "CRANE AND KING'S LONG-HID REPORT ON THE NEAR EAST - American Mandate Recommended in Document Sent to Wilson. PEOPLE CALLED FOR US. Disliked French, Distrusted British and Opposed the Zionist Plan. ALLIES AT CROSS PURPOSES. Our Control Would Have Had Its Seat in Constantinople, Dominating New Nations. - Article - NYTimes.com". The New York Times.
- ^ Rubenberg, Cheryl (1986). Israel and the American National Interest: A Critical Examination. University of Illinois Press. pp. 27. ISBN 0-252-06074-1.
- ^ MacMillan (2001), p. 83.
- ^ Chester, 1921, p. 6
- ^ MacMillan, Paris of 1919 pp 322–45
- ^ Seth P. Tillman, Anglo-American Relations at the Paris Peace Conference of 1919, p. 136, Princeton University Press (1961)
- ^ John M. Thompson Russia, Bolshevism, and the Versailles peace, p. 76 Princeton University Press (1967)
- ^ John M. Thompson Russia, Bolshevism, and the Versailles peace p78 Princeton University Press (1967)
- ^ Laurence J. Orzell, "A 'Hotly Disputed' Issue: Eastern Galicia At The Paris Peace Conference, 1919," Polish Review (1980): 49–68. in JSTOR
- ^ Yakovenko, Natalya (2002). "Ukraine in British Strategies and Concepts of Foreign Policy, 1917–1922 and after". East European Quarterly. 36 (4): 465–479.
- ^ Моладзь БНФ. "Чатыры ўрады БНР на міжнароднай арэне ў 1918–1920 г." Archived from the original on 3 July 2013. Retrieved 30 September 2015.
- ^ Fink, Carole (1996). "The Paris Peace Conference and the Question of Minority Rights". Peace & Change. 21 (3): 273–88. doi:10.1111/j.1468-0130.1996.tb00272.x.
- ^ Fink, "The Paris Peace Conference and the Question of Minority Rights"
- ^ Edmund Jan Osmańczyk (2003). Encyclopedia of the United Nations and International Agreements: A to F. Routledge. p. 1812. ISBN 9780415939218.
- ^ Suny, Ronald Grigor (1994). The making of the Georgian nation (2 ed.). Bloomington: Indiana University Press. p. 154. ISBN 0253209153.
- ^ Hart-Landsberg, Martin (1998). Korea: Division, Reunification, & U.S. Foreign Policy. Monthly Review Press. p. 30.
- ^ Manela, Erez (2007) The Wilsonian Moment pp. 119–135, 197–213.
- ^ Kim, Seung-Young (2009). American Diplomacy and Strategy Toward Korea and Northeast Asia, 1882–1950 and After pp 64–65.
- ^ Baldwin, Frank (1972). The March First Movement: Korean Challenge and Japanese Response
- ^ a b Statement of the Zionist Organization regarding Palestine Archived 24 December 2014 at the Wayback Machine, 3 February 1919
- ^ Avnery, Uri. "Has Israel Lost Its Soul?: The Miracle That Went Awry". Tikkun. May/June 2007: 15.
- ^ "The Avalon Project : The Palestine Mandate".
- ^ Avalon Project, The Palestine Mandate
- ^ Siegel, Mona L. (6 January 2019). In the Drawing Rooms of Paris: The Inter-Allied Women's Conference of 1919. American Historical Association 133rd Meeting.
- ^ "The Covenant of the League of Nations". Avalon project. Yale Law School - Lillian Goldman Law Library.
- ^ First World War – Willmott, H. P., Dorling Kindersley, 2003, pp. 292–307.
- ^ Hobsbawm 1992, p. 133.
- ^ Hobsbawm 1992, p. 67
- ^ Gaddis 2005, p. 121
- ^ McFadden 1993, p. 191.
- ^ Antony Lentin, "Several types of ambiguity: Lloyd George at the Paris peace conference." Diplomacy and Statecraft 6.1 (1995): 223-251.
- Albrecht-Carrie, Rene. Italy at the Paris Peace Conference (1938) online edition
- Ambrosius, Lloyd E. Woodrow Wilson and the American Diplomatic Tradition: The Treaty Fight in Perspective (1990)
- Andelman, David A. A Shattered Peace: Versailles 1919 and the Price We Pay Today (2007) popular history that stresses multiple long-term disasters caused by Treaty.
- Bailey; Thomas A. Wilson and the Peacemakers: Combining Woodrow Wilson and the Lost Peace and Woodrow Wilson and the Great Betrayal (1947) online edition
- Birdsall, Paul. Versailles twenty years after (1941) well balanced older account
- Boemeke, Manfred F., et al., eds. The Treaty of Versailles: A Reassessment after 75 Years (1998). A major collection of important papers by scholars
- Bruce, Scot David, Woodrow Wilson's Colonial Emissary: Edward M. House and the Origins of the Mandate System, 1917–1919 (University of Nebraska Press, 2013).
- Clements, Kendrick, A. Woodrow Wilson: World Statesman (1999).
- Cornelissen, Christoph, and Arndt Weinrich, eds. Writing the Great War - The Historiography of World War I from 1918 to the Present (2020) free download; full coverage for major countries.
- Cooper, John Milton. Woodrow Wilson: A Biography (2009), scholarly biography; pp 439–532 excerpt and text search
- Dillon, Emile Joseph. The Inside Story of the Peace Conference, (1920) online
- Dockrill, Michael, and John Fisher. The Paris Peace Conference, 1919: Peace Without Victory? (Springer, 2016).
- Ferguson, Niall. The Pity of War: Explaining World War One (1999), economics issues at Paris pp 395–432
- Doumanis, Nicholas, ed. The Oxford Handbook of European History, 1914–1945 (2016) ch 9.
- Fromkin, David. A Peace to End All Peace, The Fall of the Ottoman Empire and the Creation of the Modern Middle East, Macmillan 1989.
- Gaddis, John Lewis (2005). The Cold War. London: Allen Lane. ISBN 978-0-713-99912-9.
- Gelfand, Lawrence Emerson. The Inquiry: American Preparations for Peace, 1917–1919 (Yale UP, 1963).
- Ginneken, Anique H.M. van. Historical Dictionary of the League of Nations (2006)
- Henderson, W. O. "The Peace Settlement, 1919" History 26.101 (1941): 60–69.online historiography
- Henig, Ruth. Versailles and After: 1919–1933 (2nd ed. 1995), 100 pages; brief introduction by scholar
- Hobsbawm, E. J. (1992). Nations and Nationalism since 1780: Programme, Myth, Reality. Canto (2nd ed.). Cambridge: Cambridge University Press. ISBN 978-0-521-43961-9.
- Hobsbawm, E.J. (1994). The Age of Extremes: The Short Twentieth Century, 1914–1991. London: Michael Joseph. ISBN 978-0718133078.
- Keynes, John Maynard, The Economic Consequences of the Peace (1920) famous criticism by leading economist full text online
- Dimitri Kitsikis, Le rôle des experts à la Conférence de la Paix de 1919, Ottawa, éditions de l'université d'Ottawa, 1972.
- Dimitri Kitsikis, Propagande et pressions en politique internationale. La Grèce et ses revendications à la Conférence de la Paix, 1919–1920, Paris, Presses universitaires de France, 1963.
- Knock, Thomas J. To End All Wars: Woodrow Wilson and the Quest for a New World Order (1995)
- Lederer, Ivo J., ed. The Versailles Settlement—Was It Foredoomed to Failure? (1960) short excerpts from scholars online edition
- Lentin, Antony. Guilt at Versailles: Lloyd George and the Pre-history of Appeasement (1985)
- Lentin, Antony. Lloyd George and the Lost Peace: From Versailles to Hitler, 1919–1940 (2004)
- Lloyd George, David (1938). The Truth About the Peace Treaties (2 volumes). London: Victor Gollancz Ltd.
- Macalister-Smith, Peter, Schwietzke, Joachim: Diplomatic Conferences and Congresses. A Bibliographical Compendium of State Practice 1642 to 1919, W. Neugebauer, Graz, Feldkirch 2017, ISBN 978-3-85376-325-4.
- McFadden, David W. (1993). Alternative Paths: Soviets and Americans, 1917–1920. New York, NY: Oxford University Press. ISBN 978-0-195-36115-5.
- MacMillan, Margaret. Peacemakers: The Paris Peace Conference of 1919 and Its Attempt to End War (2001), also published as Paris 1919: Six Months That Changed the World (2003); influential survey
- Mayer, Arno J. (1967). Politics and Diplomacy of Peacemaking: Containment and Counterrevolution at Versailles, 1918–1919. New York, NY: Alfred A. Knopf.
- Nicolson, Harold (2009) . Peacemaking, 1919. London: Faber and Faber. ISBN 978-0-571-25604-4. Archived from the original on 30 March 2012.
- Paxton, Robert O., and Julie Hessler. Europe in the Twentieth Century (2011) pp 141–78
- Marks, Sally. The Illusion of Peace: International Relations in Europe 1918–1933 (2nd ed. 2003)
- Marks, Sally. "Mistakes and Myths: The Allies, Germany, and the Versailles Treaty, 1918–1921." Journal of Modern History 85.3 (2013): 632–659. online
- Mayer, Arno J., Politics and Diplomacy of Peacemaking: Containment and Counter-revolution at Versailles, 1918–1919 (1967), leftist
- Newton, Douglas. British Policy and the Weimar Republic, 1918–1919 (1997). 484 pgs.
- Pellegrino, Anthony; Dean Lee, Christopher; Alex (2012). "Historical Thinking through Classroom Simulation: 1919 Paris Peace Conference". The Clearing House: A Journal of Educational Strategies, Issues and Ideas. 85 (4): 146–152. doi:10.1080/00098655.2012.659774. S2CID 142814294.
- Roberts, Priscilla. "Wilson, Europe's Colonial Empires, and the Issue of Imperialism," in Ross A. Kennedy, ed., A Companion to Woodrow Wilson (2013) pp: 492–517.
- Schwabe, Klaus. Woodrow Wilson, Revolutionary Germany, and Peacemaking, 1918–1919: Missionary Diplomacy and the Realities of Power (1985) online edition
- Sharp, Alan. The Versailles Settlement: Peacemaking after the First World War, 1919–1923 (2nd ed. 2008)
- Sharp, Alan (2005). "The Enforcement Of The Treaty Of Versailles, 1919–1923". Diplomacy and Statecraft. 16 (3): 423–438. doi:10.1080/09592290500207677. S2CID 154493814.
- Naoko Shimazu (1998), Japan, Race and Equality, Routledge, ISBN 0-415-17207-1
- Steiner, Zara. The Lights that Failed: European International History 1919–1933 (Oxford History of Modern Europe) (2007), pp 15–79; major scholarly work online at Questia
- Trachtenberg, Marc (1979). "Reparations at the Paris Peace Conference". The Journal of Modern History. 51 (1): 24–55. doi:10.1086/241847. JSTOR 1877867. S2CID 145777701.
- Walworth, Arthur. Wilson and His Peacemakers: American Diplomacy at the Paris Peace Conference, 1919 (1986) 618pp online edition
- Walworth, Arthur (1958). Woodrow Wilson, Volume I, Volume II. Longmans, Green.; 904pp; full scale scholarly biography; winner of Pulitzer Prize; online free; 2nd ed. 1965
- Watson, David Robin. George Clemenceau: A Political Biography (1976) 463 pgs. online edition
- Xu, Guoqi. Asia and the Great War – A Shared History (Oxford UP, 2016) online
Psychology Discussion Question
In your discussion,
- Select two theories discussed in your required reading, and describe the areas of each theory that you were not previously aware of and why these areas may suggest a need to assimilate, or even accommodate, your own current knowledge.
- If unclear on the difference between assimilation and accommodation, see the following resource: The Assimilation vs Accommodation of Knowledge.
- For example, did you know there are multiple sub-theories within behaviorism or cognitivism? Do you have previous knowledge about the stimulus-organism-response (S-O-R) model suggested by constructivism?
- Discuss why, as a student of psychology, you think a well-developed understanding of learning is important. Include all of the following in this explanation:
- Why is it important to clearly understand the process for acquiring knowledge and then the variables that support effectively processing this knowledge?
- What myth does stating “we all learn differently” promote to others?
- Why does a more critical understanding potentially support a learner?
- How could this understanding affect our success in a career?
- How might culture (socio-economic, social circle, etc.) affect one’s ability to accommodate or assimilate an expanded knowledge about learning?
- Lastly, compare and contrast your previous knowledge about this content to the more complex analysis of learning that you read about this week in the introduction chapter of your text (see the Writing Center’s Compare & Contrast Assignments resource for assistance).
Your initial post should be between 350 and 400 words. You must support your discussion by citing, at minimum, the required textbook. Cite all information from your sources according to APA guidelines as outlined in the APA: Citing Within Your Papers resource. List each of your sources at the end of your posting according to APA Style as shown in the sample page of the APA: Formatting Your References List resource.
1 The Foundations of Behaviorism

Learning Objectives
After reading this chapter, you should be able to do the following:
- Explain the controversial history and arguments of behaviorism.
- Describe associative learning.
- Explain connectionism and the law of effect.
- Compare and contrast classical and operant conditioning.
- Identify examples of ratio and interval schedules.
- Discuss settings where behaviorism, in the area of learning, is applied.

Introduction
When you were a child, were you ever
- sent to your room for a bad behavior, a consequence that continued to occur until you changed your behavior?
- slapped on the hand for touching something that you were not supposed to touch?
- yelled at if you walked into the street without first looking for cars?
- given an allowance when you completed your chores?
- allowed to go on dates but only if you were home by curfew?
- given a sticker or badge for an assignment when you did well?

All of these examples could be categorized as behaviorist techniques for reinforcing learning.

Making mistakes is part of the learning process. It allows people to modify behavior or thought processes in order to develop knowledge or skills. Learning can refer to the process of developing knowledge or a skill through instruction or study, or the process of modifying behavior through experience. Understanding how learning is studied is an important step if you want to successfully apply psychological methods to your own learning or to that of others, whether in a classroom, in the workplace, or even in your role as a parent or grandparent. It is also important to understand that theories have evolved over time and that inaccuracies often exist in the literature that presents behavior and learning studies (Abramson, 2013). Applications of technology and methodological approaches continue to develop researchers' awareness of possible inaccuracies and alternate approaches.

Your journey to a better understanding of learning begins with behaviorism. This theoretical foundation, which was first discussed in this book's introduction, argues that learning has successfully occurred when the appropriate behavior is observed (Ertmer & Newby, 1993). However, behaviorism is an intricate theory, and its approach to learning cannot be generalized so easily. There are many perspectives related to behaviorism, and such variability makes it critical that you understand behaviorism's theoretical foundation in more depth. Although new methods are often used in the 21st century, behaviorism still offers the field of learning many relevant strategies for successful learning, educating, and counseling today (Abramson, 2013). In this chapter, we will first discuss the history of behaviorism, as well as its evolution in the scope of learning theory. In addition, the chapter will cover behaviorism's foundational ideas, including connectionism, the law of effect, principles of conditioning, and modeling and shaping, and explain how behaviorism has been applied within the domains of marketing and education.

1.1 The Evolution of Behaviorism to Behavior Analysis
Behaviorism was initially based on the premise that observable environmental variables are the basis of behaviors (Hilgard, 1956; Pierce & Cheney, 2004).
The theory itself has numerous frameworks, some of which you read about in section i.2, and continues to evolve today. The excerpts in this section are from Watrin and Darwich (2012). This article reflects upon the evolution of behaviorism. The attention placed on the multitude of beliefs about behaviorism sets the standard for approaching this area of learning psychology with skeptical thought and critical considerations. Watrin and Darwich (2012) introduce J. B. Watson (1913), who redefined psychology as “a purely objective experimental branch of natural science” (p. 158), proposing the “prediction and control of behavior” as its goal, and invite us to follow the path of self-identified behaviorists who continued to reinvent how and what behaviorism is and how it should be applied. With explicit candor, these authors will help you better understand exactly why this framework is often misunderstood and difficult to clearly explain. They also provide you with a foundation that will help you better understand the advances and new reflections that continue to be explored. Excerpts from “On Behaviorism in the Cognitive Revolution: Myth and Reactions” By J. P. Watrin and R. Darwich In the course of history, there is a clear difficulty to define psychology. For a long time, it was treated as the study of mind or human psyche. Some authors, though, saw the emergence of behaviorism as a revolution in psychological science (e.g., Gardner, 1985; Moore, 1999). Starting with J. B. Watson (1878–1958), the behaviorist school flourished in the beginning of the 20th century. It was a remarkable rupture in the history of psychology, once it put the mind aside of scientific inquiry. From then on, behaviorism began a tradition of study of behavior, comprising several—and sometimes even conflicting—theoretical systems (Moore, 1999). In that context, behavior analysis emerged as one of the behavioristic approaches, having been developed from the works of B. F. Skinner (1904–1990). With an emphasis on operant behavior and an antimentalistic position [which rejects the mind as the cause of behavior], it became a forefront system of behaviorism during the 1950s. [. . .] From Behaviorism to Behavior Analysis Behavior analysis constitutes a field and a psychological system devoted to the study of behavior, here defined in terms of functional relations between behavioral and environmental events (Catania, 1998). As a field, behavior analysis has today three fundamental domains: (a) the experimental analysis of behavior, a basic science devoted to empirical research on behavioral processes, especially in the laboratory; (b) applied behavior analysis, a technological domain dedicated to apply behavior-analytic knowledge to solve practical problems; and (c) the conceptual analysis of behavior, which performs theoretical reflections about the subject matter and methods of investigation (Moore, 1999; see also Moore & Cooper, 2003). Those domains are interrelated and based in radical behaviorism, a philosophy of science that lays the foundations of behavior analysis. The history of the field as a whole has its roots in the behaviorist school. In 1913, Watson published the article “Psychology as the Behaviorist Views It.” Attacking the study of consciousness, Watson (1913) redefined psychology as “a purely objective experimental branch of natural science” (p. 158), proposing the “prediction and control of behavior” as its goal. 
That drastic movement would greatly contribute to the beginning of a new tradition, whose name seems to have been created by Watson himself: "behaviorism" (Schneider & Morris, 1987).

Psychologist B. F. Skinner's experiments showed that behavior could be related to a stimulus and did not have to be only an occurrence inside an organism. One of Skinner's famous experiments included a rat pressing a lever to then be rewarded with food.

In the following decades, several psychologists would be identified as behaviorists. Names such as Clark Hull (1884–1952) and Edward Tolman (1886–1959) became associated with the behaviorist movement, once they developed their own explanatory models of behavior (e.g., Hull, 1943; Tolman, 1932). New forms of behaviorism were thus being shaped and were sometimes at odds with those that already existed (Moore, 1999). In the 1930s, the contributions of Skinner established his place among those developments. Conceiving behavior as a lawful process, Skinner's experimental works on reflexes led him to new concepts and methods of investigation (see Iversen, 1992). Reflex—and, subsequently, all behavior—was no longer something that happened inside the organism; rather, it was seen as a relation in which a response is defined in function of a stimulus and vice versa (Skinner, 1931). [. . .]

In 1938, Skinner published The Behavior of Organisms, in which he summarized many of his positions and refined the concept of operant behavior. Skinnerian behaviorism (see section i.2) was acquiring its shape. Its first developments laid the fundamental concepts and methods of behavior analysis. Because they relied on basic research, they were also the first steps of the experimental analysis of behavior. In the 1940s, the first introductory course based in Skinner's psychology and the first conference on experimental analysis of behavior took place (Keller & Schoenfeld, 1949; Michael, 1980). In 1945, Skinner wrote The Operational Analysis of Psychological Terms, in which, for the first time in print, he defined his thought as "radical behaviorism" (Skinner, 1945, p. 294; see also Schneider & Morris, 1987). The term would designate a philosophy that, on one hand, defines private events (e.g., thinking, feelings) as behavior and, therefore, as a legitimate subject matter of a behavioral analysis, but on the other hand attacks explanatory mentalism, the explanation of behavior by mental events (cf. Skinner, 1945, 1974). Private events usually refer to a mental concept, but they are behavior and, as such, cannot cause other behavior. That antimentalism would become a central feature of radical behaviorism. [. . .]

As the prominence of Skinner and his work began to rise and the foundations for applied behavior analysis were laid (Morris, Smith, & Altus, 2005), Skinner would become central to the development of behavior analysis. [. . .] Thus, behavior analysis constituted itself by the gradual establishment of its domains, being consolidated as a field in the late 1970s. Although Skinner became synonymous with behavior analysis, the field exceeded its pioneer. Behavior analysis took on a life of its own. Other people took part in the spreading of the field, such as Fred Keller (1899–1996), Charles Ferster (1922–1981), William Schoenfeld (1915–1996), and Murray Sidman (1923–).
They disseminated its knowledge, just as they developed new concepts and methods (e.g., Sidman & Tailby, 1982). Skinner, however, remained as the field’s main spokesman. Schultz and Schultz (2004), for instance, asserted that, “despite . . . criticisms, Skinner remained the uncontested champion of behavioral psychology from the 1950s to the 1980s. During this period, American psychology was shaped more by his work than by the ideas of any other psychologist” (p. 344). [. . .] The Generic (and Misrepresented) Nature of Behaviorism [. . .] Behaviorism became a host of different and conflicting systems, grouped under a single label, as if they all shared the same position. Being vaguely defined, behaviorism is frequently treated as a homogeneous school, as a linear tradition. The term behaviorism, however, refers to a variety of conflicting positions (Leigland, 2003; but see also Moore, 1999). Indeed, after Watson’s (1913) first use, many theories related to the study of behavior were taken as “behaviorists.” Since the term began to be largely used, its ambiguity was soon recognized, seeing that there was no single enterprise called “behaviorism” (e.g., Hunter, 1922; Spence, 1948; Williams, 1931). Woodworth (1924) summarized the problem: If I am asked whether I am a behaviorist, I have to reply that I do not know, and do not much care. If I am, it is because I believe in the several projects put forward by behaviorists. If I am not, it is partly because I also believe in other projects which behaviorists seem to avoid, and partly because I cannot see any one big thing, to be called “behaviorism.” (p. 264) Spence (1948) also noted that the term was mostly used when someone defines his or her oppositions to an effective (or alleged) behaviorism. Even so, later developments were identified with “behaviorism,” such as behavior analysis itself. Therefore, the term would still designate a very heterogeneous set of positions. Its indiscriminate use, on the other hand, overlooks the historical complexity and diversity of the behaviorist school. Moreover, references to a generic behaviorism set biases in the analysis of behavioristic systems. When behaviorism is vaguely defined, it is easier to misrepresent any system by attributing features of other positions to it. Properties of particular systems are ascribed to all. Pinker (1999), for example, says the following: Skinner and other behaviorists insisted that all talk about mental events was sterile speculation; only stimulus–response connection could be studied in the lab and the field. Exactly the opposite turned out to be true. Before computational ideas were imported in the 1950s and 1960s by Newell and Simon and the psychologists George Miller and Donald Broadbent, psychology was dull, dull, dull. (p. 84) [. . .] In spite of the prior disputable use of the word behaviorism, the conventional historiography seems to have taken advantage of the term’s ambiguity to legitimate the idea of a revolution. A generic behaviorism was, then, presented, underlying fallacious arguments. This ambiguous treatment is dangerous for behavior analysis and modern behaviorism, because it creates and strengthens academic folklore (see also Todd & Morris, 1992). Its deceptive character gives rise to misrepresentations. [. . .] Source: Watrin, J. P., & Darwich, R. (2012). On behaviorism in the cognitive revolution: Myth and reactions. Review of General Psychology, 16(3), 269–282. Copyright © 2012, American Psychological Association. Reprinted with permission. 
Understanding the history of a theoretical framework can help us better understand the developments that followed. In this case, behaviorism gave rise to many subset groups that believed that learning was a behavior and that behavior was observable—yet differed in the degree to which they held to these beliefs. As the article's authors observed, the word behaviorism can often be used as a general grouping for the multiple researchers aligned with this theory. As a lifelong learner, you may find that further questioning this ambiguity in your own studies will help substantiate your understanding of this important area of psychology.

1.2 Theory of Connectionism and the Laws of Learning
Edward Thorndike's theory of connectionism and the laws of learning were two concepts that would emerge as behaviorism matured. The theory of connectionism, also known as the synaptic theory of learning, posits that learning occurs through the habitual associations, or connections, made between stimuli and responses. Examples of behavioral associations include eating because we are hungry and sleeping because we are tired. The laws of learning explain how people learn best through these associations. As just one example, the law of effect asserts that learning is strengthened when it is associated with a positive feeling. As Sandiford (1942) explains in the following excerpts, the theory of connectionism and the laws of learning helped build a more developed understanding of learning and contributed to our more modern applications today. A central tenet of connectionism is that learning is conducted through stimuli and responses.

Before you begin reading, it is important to understand the role of what is known as "association doctrine" in Thorndike's research. Although Thorndike did not introduce his initial three laws of learning until the early 20th century (Weibell, 2011), ideas about behavioral associations began to take shape more than 2,000 years ago. Greek philosopher Aristotle (384–322 BCE) wrote in his major work on ethics, "For we are busy that we may have leisure, and make war that we may live in peace." However, his ideas about associations are most clearly seen in the following passage:

When, therefore, we accomplish an act of reminiscence, we pass through a certain series of precursive movements, until we arrive at a movement on which the one we are in quest of is habitually consequent. Hence, too, it is that we hunt through the mental train, excogitating from the present or some other, and from similar or contrary or coadjacent. Through this process reminiscence takes place. For the movements are, in these cases, sometimes at the same time, sometimes parts of the same whole, so that the subsequent movement is already more than half accomplished. (Aristotle, ca. 350 BCE/1930, para. XX)

Association doctrine can be explained as the linking of physiological and psychological processes. Important to understanding the points of reference in the excerpts from Sandiford (1942) is that Thorndike's beliefs about learning were partly founded on Alexander Bain's view of psychology, which suggested that all knowledge is based on physical sensations (not thoughts or ideas) (Bain, 1873). Bain (1818–1903) founded the academic journal called Mind, the first journal of psychology and analytical philosophy.
He postulated an “associationist treatment of higher mental processes” (Wade, 2001, p. 781). Excerpts from “Connectionism: Its Origin and Major Features” By P. Sandiford Features of Connectionism The following outline gives the main distinguishing features of connectionism: Connectionism is an outgrowth of the association doctrine, especially as propounded by Alexander Bain. Thorndike was a pupil of William James, some of whose teachings were derived from Bain and the British associationists. Connectionism, therefore, through associationism, has its roots deep in the psychological past. Connectionism is a theory of learning, but as learning is many-sided, connectionism almost becomes a system of psychology. It is as a theory of learning, however, that it must stand or fall. Connectionism has an evolutionary bearing in that it links human behavior to that of the lower animals. Thorndike’s first experiments were with chicks, fish, cats, and, later, with monkeys. From his animal experiments he derived his famous laws of learning. Connectionism boldly states that learning is connecting. The connections presumably have their physical basis in the nervous system, where the connections between neuron and neuron explain learning. Hence, connectionism is also known as the synaptic theory of learning. Connectionism is atomistic rather than holistic or organismic, since it stresses the analysis of behavior in order to discover the elements that are connected or bonded together. The sum total of a man’s life can be described by a list of all the situations he has encountered and the responses he has made to them. [. . .] The connectionist principle of associative shifting (which suggests that if a response to a stimulus is sustained even if the stimulus is gradually changed, the same response will be likely in a new situation) has relationships with Pavlovian conditioning, which Thorndike regards as a special case of associative learning. Connectionism has also some affinities with Watsonian behaviorism, which suggested that introspection was not observable and thus not scientific, stressing the mechanistic aspects of behavior. Neither one finds it necessary to evoke a soul in order to explain behavior. Connectionism breaks with behaviorism in regard to the stress it places on the hereditary equipment of the behaving organism. Some connections are more natural than others. We grow into reflexes and instincts without very much stimulation from the environment except food and air. In other words, we mature into reflexes and instincts, but we have to practice or exercise in order to learn our habits. These hereditary patterns of behavior (reflexes and instincts) form the groundwork of learning. Most acquired connections are based on them and, indeed, grow out of them. Even such complex bonds as those which represent capacities (music, mathematics, languages, and the like) have a hereditary basis. According to connectionism those things we call intellect and intelligence are quantitative rather than qualitative. A person’s intellect is the sum total of the bonds (associations) he has formed. The greater the number of bonds he has formed, the higher is his intelligence. [. . .] Connectionism, above all other theories of learning, seems to be one that the classroom teacher can appreciate and apply. 
While the statistics which summarize the experiments have been decried as the products of a mechanistic conception of behavior, nevertheless they have done more to make education a science than all the theorizing of the past 2,000 years. [. . .]

Thorndike was such a voluminous writer that it is difficult to summarize his position on any single question, or, indeed, to pin him down to a specific position. In order to remove any doubt the reader may have on the matter, the following recent statement of Thorndike's position is given:

A man's life would be described by a list of all the situations which he encountered and the responses which he made to them, including among the latter every detail of his sensations, percepts, memories, mental images, ideas, judgments, emotions, desires, choices, and other so-called mental facts. [. . .] A man's nature at any given stage would be expressed by a list of the responses (Rs) which he would make to whatever situations or state of affairs (Ss) could happen to him, somewhat as the nature of a molecule of sugar might be expressed by a list of all the reactions that would take place between it and every substance which it might encounter. There would be one important difference, however. [. . .] In human behavior our ignorance often requires the acknowledgment of the principle of multiple response or varied reaction to the same S by a person who is, so far as we can tell, the same person. (See Figure 1.1 for a specific example.) [. . .] If John Doe were really the same person in every particular way on 100 occasions he would always respond to S in one same way at each of its 100 occurrences, but he will not be. Even when we can detect no differences in him there will be subtle variation in metabolism, blood supply, etc. [. . .]

Figure 1.1: Example of possible reactions to a stimulus. Psychologist Edward Thorndike proposed that humans have varied responses to the same incident or stimulus. However, he acknowledged that there are hereditary patterns of behavior such as reflexes. The figure uses an example scenario to illustrate the variability of a stimulus (S) and response (R) connection. In this example, "S" is a stranger yelling at a man, and three different "Rs" are shown: the man smiles at the stranger and then walks away, the man reacts physically by yelling at and hitting the stranger, or the man yells back at the stranger and then storms away.

The Associationistic Background
Ideas related to associationism date back to Aristotle, although his view differed much from our current understanding (Sandiford, 1942). Hence, there is a large gap in associationism's history. Table 1.1 is adapted from the writing of Sandiford (1942) and can help put into perspective the maturation of the ideas connected with associationism. Each theorist brought additional perspectives to this model for learning, and although Table 1.1 provides only a broad overview, the timeline demonstrates how the perspectives changed as time moved forward.

Table 1.1: Overview of associationistic milestones (theorists and milestones)
- Aristotle (384–322 BCE): Introduced the ideology of associations. Suggested that we could not perceive two sensations as one—that they would combine or fuse into one.
- Thomas Hobbes (1588–1679): Suggested sequences of thought could be casual and illogical, as in dreams, or orderly and regulated as by some design. Suggested that hunger, sex, and thirst are physiological needs.
- John Locke (1632–1704): Suggested "association of ideas": representations arise in consciousness.
- David Hartley (1705–1757): Suggested that sensation (pleasure vs. pain) was generated by wave vibrations in the nerves.
- David Hume (1711–1776): Noted that the associations in cause and effect are affected when additional objects are introduced.
- James Mill (1773–1836): Advanced associationism to include more complex emotional states within the pain vs. pleasure sensation model.
- Thomas Brown (1778–1820): Suggested nine secondary laws that strengthened Aristotle's laws of association. Understood association as an active process of an active, holistic mind.
- Alexander Bain (1818–1903): Suggested trial-and-error learning, reflexes, and instincts as the bases of habits, individual differences, and the pleasure-pain principle in learning.
- Edward Thorndike (1874–1949): Suggested the theory of connectionism. Suggested laws of learning.
Adapted from "Connectionism: Its Origin and Major Features" by P. Sandiford, in N. B. Henry (Ed.), The Forty-First Yearbook of the National Society for the Study of Education: Part II, The Psychology of Learning (pp. 102–108), 1942. Blackwell Publishing. © National Society for the Study of Education. Adapted with permission.

Other Backgrounds of Connectionism
If Thorndike be regarded as the king-pin of connectionism, then three main streams of influence may be found in his work. The first, that of associationism, has already been traced. Bain influenced Thorndike's teaching both directly and through William James. [. . .] For experimentation on the learning ability of animals, new apparatus, new devices, new methods had to be invented. Thorndike introduced the maze, the puzzle box, and the signal or choice reaction experiment, all of which have become standard equipment in animal psychology and have been employed in thousands of studies since that day. Figure 1.2 provides an illustration of a puzzle box.

Figure 1.2: Thorndike's puzzle box. In Thorndike's design, a dish of food was placed outside of the box, visible through the slats in the box. Thorndike found that animal subjects placed in the box would eventually locate the release apparatus, and the time before the activation of this response was shorter with each subsequent trial. Adapted from Animal Intelligence (p. 30), by E. L. Thorndike, 1911, New York, NY: Macmillan.

Thorndike's Animal Intelligence, completed in 1898 as his doctoral dissertation, not only was the starting point of animal psychology as a science, but also went far toward establishing stimulus-response as the cornerstone of psychology. It is also the source of the famous laws of learning. [. . .]

The Laws of Learning
Probably the best known of the contributions that connectionism has made to educational theory and practice are the so-called laws of learning. They are not absolute laws, but rather are they to be regarded simply as comprehensive formulations of the rules which learning obeys. The laws usually quoted are those given in Vol.
II of Thorndike’s Educational Psychology: The Psychology of Learning (1913). These include the three major laws: effect, exercise or frequency, and readiness. [. . .] These laws grew out of the experiments with animals, coupled with such influences as the writings of Bain, Romanes, Lloyd Morgan, Wilhelm Wundt, and others, and have been modified by further experiments in which human beings acted as the subjects (Thorndike, 1932). New elements injected into the laws of learning are belongingness, impressiveness, polarity, identifiability, availability, and mental system. This shows clearly enough that the laws are not to be regarded as a closed system, complete from the start, but merely as tentative summaries of our knowledge of the way in which learning takes place. They will be discarded or modified whenever experiments disclose that such is necessary or desirable. The Law of Effect [. . .] A modifiable bond is strengthened or weakened as satisfaction or annoyance attends its exercise. With chickens and cats, Thorndike had used as motivating agents in their behavior such original satisfiers as food and release from confinement for the hungry cat, company for the lonely chicken, and so forth. These acted as rewards for certain actions which became stamped in and learned. Thorndike really took the law of effect for granted at first, as so many before him had done. Gradually, however, it became one of his most important principles of education. [. . .] In propounding the law of effect, Thorndike thought that the two effects—satisfiers and annoyers—were about equally potent, the one in stamping in the connection, the other in stamping it out. If a preference was indicated it was toward the side of rewards, although he explicitly asserted that rewards or satisfiers following responses increased the likelihood of repetitions of the connections so rewarded, while punishments decreased the likelihood of recurrence of the punished connection. [. . .] The manner in which the confirming reaction develops and operates is as follows: The confirming reaction is at first an aftereffect of the S → R situation (where S is a stimulus and R is a response), thus: S → R → Confirming Reaction Afterwards it functions as a force connecting and binding S to R, thus: S → Confirming Reaction → R The confirming action is independent of a pleasurable result, since pain may also set it in action provided it is close enough to the satisfier in the succession of connections. However, it must not be thought that the effect of pain or the influence of a punishment, which is an annoying aftereffect, is exactly the opposite of the effect or influence of a reward upon the bond to which it belongs and of which it is the aftereffect. It does not directly, invariably, and inevitably weaken the mental connection. The influence of reward or punishment is thus seen to depend upon what it leads the person to do. The reward tends to arouse the confirming reaction and so cause the continuance or repetition of the connection. Punishment does not necessarily lead to the arousal of a tendency to discontinue the punished connection or to repeat it less often, nor does it necessarily stimulate a connection of an opposite kind. It arouses whatever original behavior or past experience has linked to that particular annoying aftereffect in those particular circumstances. This may be to run away, to scream, or to perform other useless acts. Punishments, compared with rewards, are very unreliable forces in learning. 
Rewards are dependable because they arouse confirming reactions. Thorndike is inclined to believe that the confirming reaction is a reaction of the neurons themselves. It is a neuronic force of reinforcement of the original response or it is the aftereffect of the total situation response (Thorndike, 1933, 1940). [. . .]

The Law of Exercise or Frequency
This law, like the law of effect, was at first almost taken for granted by Thorndike. Does not "practice make perfect"? Yet experience shows that exercise does not always lead to perfection. Practice in sitting on a bent pin or in poking the fire with the finger never leads to perfection in the art. The law of effect has to be invoked to explain why practice does not necessarily and invariably lead to improvement. Pleasurable reactions are stamped in; painful ones are stamped out. In terms of connectionism, repetition tends to make the bond permanent. [. . .]

The law of exercise or frequency has two parts, use and disuse. The law of use is stated: When a modifiable connection is made between a situation and a response, that connection's strength is, other things being equal, increased. The law of disuse runs: When a modifiable connection is not made between a situation and a response over a length of time, that connection's strength is decreased. The phrase "other things being equal" refers mostly to the effect, the satisfyingness or annoyingness of the situation. In other words, the more you are able to apply something across differing contexts, the stronger the connection (what has been learned) becomes. When the concept cannot be used in varying situations, reducing its usability, the strength of what has been learned decreases. Learning how to write and then using that skill in different situations over the course of someone's life is an example of the law of exercise or frequency.

Watson, the behaviorist, claims that frequency and recency explain learning and that it is unnecessary to invoke the law of effect. The successful action in maze learning, for example, must occur in every series; therefore, the successful action is learned mainly through frequency. Apparently, Watson did not realize that unsuccessful actions within the maze were often repeated more frequently than the final and successful one. Yet it is the successful one that is finally stamped in (Watson, 1914). [. . .]

The repetition of a situation, while tending to make a reaction somewhat stereotyped, in and of itself, is unproductive for learning. It causes no adaptive changes and has no useful selective power. Repetition of a connection, that is, the situation and its particular response, results in a real though somewhat small strengthening influence. Mere repetition of a connection causes learning, but the learning is slow. For example, if a child is taught to sit in his or her seat after entering the room, but does not understand why or its applicability, the child will sit but has not necessarily learned the reasons for performing this behavior. If the child learns that, when entering a classroom, it is important to sit as a procedure that ensures positive outcomes in the learning environment (such as rewards), the child will be more apt to apply this in other settings as well.
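The contrast drawn in the excerpt between mere repetition (the law of exercise) and rewarded repetition (the law of effect) can be made concrete with a small simulation. The Python sketch below is purely illustrative: the class name, the increment sizes, and the trial count are assumptions chosen for clarity, not values taken from Thorndike's experiments.

```python
class Bond:
    """A single stimulus-response connection, in the spirit of connectionism."""

    def __init__(self):
        self.strength = 0.0  # how firmly the response is bonded to the situation

    def exercise(self, rewarded: bool) -> None:
        """Repetition alone adds a little strength (law of exercise);
        a satisfying aftereffect adds much more (law of effect)."""
        self.strength += 0.02          # small gain from mere use
        if rewarded:
            self.strength += 0.20      # the reward "stamps in" the connection

    def disuse(self) -> None:
        """Law of disuse: a connection that goes unused slowly weakens."""
        self.strength = max(0.0, self.strength - 0.01)


repeated_only = Bond()
repeated_and_rewarded = Bond()

for _ in range(20):
    repeated_only.exercise(rewarded=False)
    repeated_and_rewarded.exercise(rewarded=True)

print(f"Repetition alone after 20 trials:     {repeated_only.strength:.2f}")
print(f"Repetition plus reward after 20:      {repeated_and_rewarded.strength:.2f}")
```

Run as written, the rewarded bond ends up far stronger than the merely exercised one (4.40 versus 0.40 under these assumed increments), mirroring the claim above that practice by itself produces slow learning while satisfying aftereffects do the real stamping in.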
Repetition of a “connection with belonging” (that is, the procedure that is applied “fits” the situation) increases the likelihood of learned adaption to perform the behavior, even when the rewards may be concealed or disguised. Belongingness is difficult to describe but easy to illustrate. For example, the words of a sentence belong together in a way that the terminal word of one sentence and the initial word of the next do not. An additional example might include a child eating off a plate instead of eating off the table. The behavior makes logical sense to the individual. [. . .] The Law of Readiness Briefly the law of readiness may be stated: When a bond is ready to act, to act gives satisfaction and not to act gives annoyance. When a bond which is not ready to act is made to act, annoyance is caused. Examples of a bond might include starting an exercise program, asking for someone’s hand in marriage, or starting a new career. If a person is not ready to begin exercising, marry, or start a new career, he or she will likely feel annoyed by any pressure to do so. [. . .] Modifications and Additions to the Laws of Learning Thorndike’s later experiments on learning, using human beings as subjects, led to a modification of the laws of exercise and effect. Numerous additions and modifications were also made and new terms—belongingness, impressiveness, vividness, polarity, identifiability, availability, and mental systems—found their way into the vocabulary of connectionism. Belongingness: A factor of great importance in the learning process. Example: Various words of a sentence fit or belong together; a sequence of numbers may belong together just because they are all numbers and not anything else, but some number sequences may possess more belongingness than others. Thus 2, 4, 8, 16, etc., exhibit more belongingness than 1, 3, 4, 2, 5, 11, 13, 15. Impressiveness: The strength or intensity of a stimulus or a situation. Example: Loud sounds are considered stronger and more impressive than less intense ones. Stimuli attended to, that is, in the focus of consciousness, are more impressive than marginal elements. Vividness: The recognizability of a word (Miller & Dost, 1964). Example: In some experiments, using word-number paired associates such as dinner 26, basal 83, divide 37, kiss 63, the number of correct number associations with kiss and dinner, both impressive words, is larger than the number of associations made with basal and divide, both weak words. Polarity: The tendency for stimulus-response sequences to function more readily in the order they were practiced than in the opposite order. Example: Using foreign and vernacular phrases such as raison d’être; ohne Hast, ohne Ras exeunt omnes; facile descemus; obiter dicta, etc., it was shown that the ends could be supplied when the beginnings were given, more readily than the beginnings could be given when the ends were supplied; the first half evokes the second half more often than the second evokes the first. Identifiability: If the connection can be easily identified it is easily learned. Example: Some concepts such as times, numbers, weights, colors, mass, density, etc., have to be analyzed out and made identifiable before they can be profitably used by us. Availability: The accessibility of the response. Example: When something is easier to attain, it makes the response to it more easily assessable. Mental systems: The habituation; limited physiological or emotional response to a frequently repeated stimulus (one’s habit). 
Example: If in paper and pencil association experiments the stimulus word dear evoked the response sir, this would be regarded as a simple habit; but if it evoked fear, some mental system must be at work. [. . .]

These modifications and additions to the laws of learning do not destroy the main fabric of the connectionist doctrine. Indeed, they illustrate one important feature of connectionism, namely, the willingness of its supporters to modify their teachings and beliefs when experimental findings are not in harmony with them. [. . .]

Source: Sandiford, P. (1942). Connectionism: Its origin and major features. In N. B. Henry (Ed.), The forty-first yearbook of the National Society for the Study of Education: Part II, The psychology of learning (pp. 97–140). Blackwell Publishing. © National Society for the Study of Education.

The theory of connectionism and the laws of learning present clear attributes and ideas about learning behavior. Since their introduction in the early 1900s, Thorndike's insightful suggestions, based on previous research, have left their mark on research about learning and continue to pose implications about how we learn. As you learn about other areas where behaviorism is applied in the learning domain, continue to consider how each was derived and how they have influenced the more modern theories we will discuss in future chapters.

1.3 Principles of Conditioning
Conditioning and learning have been core topics in psychology since the turn of the 20th century and are aligned with the transformation of associative learning concepts. Therefore, familiarity with this area of learning is critical to an advanced education in psychology, as well as to a more developed understanding of behaviorism and its evolution. In this section of the chapter, we will discuss conditioning. Section 1.4 will explore how conditioning is then applied in the field of learning.

There are two types of conditioning: classical and operant. Though both types have an associative property, there are also clear differences between the two. Classical conditioning involves repeatedly pairing two stimuli so that eventually one of the stimuli prompts an involuntary response that previously the other caused on its own. Think of the classic example of Pavlov's dog: repeatedly pairing food with a tone eventually caused, or conditioned, the dog to salivate at the tone alone. In contrast, operant conditioning (also referred to as instrumental conditioning or Skinnerian conditioning) introduces consequences to the associative relationship between stimuli and responses. Rather than using different stimuli to provoke the same, involuntary response, different stimuli are used to prompt or support the desired, voluntary response, which may involve the confirmation or discouragement of a behavior. In Figure 1.3, for example, two types of reinforcement (positive and negative) are used to maintain the desired response, and two types of punishment (again, positive and negative) are used to change the behavior. In this case, the child being quiet at the physician's office is the desired behavior.

Figure 1.3: Example of operant conditioning. Operant conditioning includes using different stimuli to provoke a specific, desired response rather than provoking the same involuntary response, as in classical conditioning. An example scenario, a child waiting in a physician's office, is used to demonstrate positive and negative reinforcement and positive and negative punishment. If the child sits quietly while waiting in the physician's office, then a parent could positively reinforce this behavior by offering a reward (TV time) or negatively reinforce this behavior by removing an unwanted task (chores). Both reinforcements result in the desired behavior (sitting quietly in a professional environment), and the next time the child is likely to repeat the desired behavior. If the child does not sit quietly in the physician's office, then a parent could positively punish this behavior by adding an unwanted task (extra chores) or negatively punish this behavior by removing a reward (no TV time). Both punishments result in improvement of the desired behavior (sitting quietly in a professional environment) when the child encounters the situation again.

Each of these concepts will be more fully addressed in the next two series of excerpts. The first discusses classical conditioning and is from Clark (2004). The article will go into detail about the differing types of stimuli (conditioned versus unconditioned). The second series of excerpts discusses operant conditioning and is from Macias (2016). It will provide a deeper look into reinforcers and punishments. As you read, compare and contrast these two types of conditioning and consider how, with each new development, more questions arise about how associations occur and whether they affect learning.

Excerpts from "The Classical Origins of Pavlov's Conditioning"
By R. E. Clark

Classical Conditioning
In the most basic form of classical conditioning, the stimulus that predicts the occurrence of another stimulus is termed the conditioned stimulus (CS) (in Pavlov's experiment, the tone). The predicted stimulus is termed the unconditioned stimulus (US) (in Pavlov's experiment, the food). The CS is a relatively neutral stimulus that can be detected by the organism, but does not initially induce a reliable behavioral response. The US is a stimulus that can reliably induce a measurable response from the first presentation. The response that is elicited by the presentation of the US is termed the unconditioned response (UR) (in Pavlov's experiment, the drool as a result of the food). The term "unconditioned" is used to indicate that the response is "not learned," but rather is an innate or reflexive response to the US. With repeated presentations of the CS followed by the US (referred to as paired training), the CS begins to elicit a conditioned response (CR) (in Pavlov's experiment, the drool as a result of the tone alone). Here the term "conditioned" is used to indicate that the response is "learned." See Figure 1.4 for an illustration of these relationships.

Figure 1.4: A typical classical conditioning procedure. An unconditioned stimulus (US), food, leads to an unconditioned response (UR), salivation. Introducing a conditioned stimulus (CS) of a tone before the food's presentation results in the tone eventually creating a conditioned response (CR) of salivation, even without food. During conditioning, CS (tone) → US (food) → UR (salivation); as a result, CS (tone) → CR (salivation). From Psychology of Learning (p. 47), by D. A. Lieberman, 2012, San Diego, CA: Bridgepoint Education, Inc. Copyright 2012 by Bridgepoint Education, Inc.

Edwin Burket Twitmyer (1873–1943)
The phenomenon of classical conditioning was discovered independently in the United States and Russia around the turn of the 20th century. In the United States, Edwin B. Twitmyer made this discovery at the University of Pennsylvania while finishing his dissertation work on the "knee-jerk" reflex. When the patellar tendon is lightly tapped with a doctor's hammer, the well-known "knee-jerk" reflex is elicited. Twitmyer had initially intended to study the magnitude of the reflex under normal and facilitating conditions (Figure 1.5). In the facilitating conditions the subjects were asked to verbalize the word "ah," or to clench their fists, or to imagine clenching their fists (Twitmyer, 1902/1974). A bell that was struck one-half second before the patellar tendon was tapped served as a signal for the subjects to begin verbalizing or fist clenching (or imagining fist clenching). Twitmyer observed:

[D]uring the adjustment of the apparatus for an earlier group of experiments with one subject . . . a decided kick of both legs was observed to follow a tap of the signal bell occurring without the usual blow of the hammers on the tendons. . . . Two alternatives presented themselves. Either (1) the subject was in error in his introspective observation and had voluntarily moved his legs, or (2) the true knee jerk (or a movement resembling it in appearance) had been produced by a stimulus other than the usual one. (as cited in Irwin, 1943, p. 452) [. . .]

Twitmyer apparently did not fully appreciate the potential significance of this finding beyond recording this initial observation, and the work was never extended. It has been suggested that Twitmyer's failure to systematically investigate this phenomenon and the lack of interest exhibited by his colleagues who heard the presentation were likely due in part to the prevailing American zeitgeist, in which interest in delineating the components of consciousness through introspection was the principal perspective (Irwin, 1943; Coon, 1982). Thus, Twitmyer and his contemporaries would have been predisposed to undervalue the usefulness, to the field of psychology, of something as basic as a modifiable reflex. This was not the case in Russia.

Figure 1.5: Twitmyer's "knee-jerk" reflex experiment. This photograph (circa 1903) shows a young subject and the experimental apparatus Twitmyer used to measure the magnitude of the knee-jerk reflex (see http://www.psych.upenn.edu/history/twittext.htm for details). University of Pennsylvania Archive, photographer unknown.

Ivan Petrovich Pavlov (1849–1936)
The Russian discovery of classical conditioning comes from the pioneering work of Ivan Petrovich Pavlov. [. . .] In 1904, Pavlov was awarded the Nobel Prize in medicine for his work on the physiology of digestion. This early research, which used dogs as experimental subjects, set the stage for observing the phenomenon of classical conditioning. As early as 1880, Pavlov and his associates observed that sham feedings, in which food was eaten but failed to reach the stomach (being lost through a surgically implanted esophageal fistula), produced gastric secretions, just like real food.
Pavlov's laboratory modified this preparation in order to simplify the forthcoming studies. Rather than measure gastric secretions, they began measuring salivation (see Figure 1.6). Salivation was chosen because an efficient and highly practical method of measuring salivation using a permanently implanted fistula had just been developed in the laboratory (Pavlov, 1951; Windholz, 1986). In 1897, Stefan Wolfson (also translated as Sigizmund Vul'fson), a doctoral student of Pavlov, made an important observation:

We place before the nose of the dog a glass of carbon bisulphide . . . from its two salivary glands flows saliva . . . we stimulate the dog a few times with the same glass of carbon bisulphide. The saliva flows each time. Now we substitute surreptitiously an identical glass containing water. The dog salivates again, although with a smaller quantity of saliva. (translated in Windholz, 1986, p. 142)

Figure 1.6: Apparatus used in Pavlov's study of salivary conditioning in dogs. Saliva flowed through a tube connected to the dog's cheek and traveled to another room, where it could be recorded. Adapted from "The Method of Pawlow in Animal Psychology," by R. M. Yerkes & S. Morgulis, 1909, Psychological Bulletin, 6, 265. Copyright 1909 by R. M. Yerkes & S. Morgulis. Adapted with permission.

Pavlov immediately recognized the significance of these findings, findings that would ultimately lead him to change the direction of his research to explore this phenomenon. His initial results were officially presented to the International Congress of Medicine held in Madrid, Spain, in 1903. This report was entitled "Experimental Psychology and Psychopathology in Animals." [. . .]

The Emergence of Classical Conditioning in the United States
Pavlov's work on classical conditioning was essentially unknown in the United States until 1906, when his lecture "The Scientific Investigation of the Psychical Faculties or Processes in the Higher Animals" was published in the journal Science (Pavlov, 1906). In 1909 Robert Yerkes (1876–1956), who would later become president of the American Psychological Association, and Sergius Morgulis published an extensive review of the methods and results obtained by Pavlov, which they described as "now widely known as the Pawlow [sic] salivary reflex method" (Yerkes & Morgulis, 1909, p. 257). Initially Pavlov and his associates used the term conditional rather than conditioned. Yet Yerkes and Morgulis chose to use the term conditioned. They explained their choice of terms in a footnote:

Conditioned and unconditioned are the terms used in the only discussion of this subject by Pawlow [sic] which has appeared in English.
The Russian terms, however, have as their English equivalents conditional and unconditional. But as it seems highly probable that Professor Pawlow [sic] sanctioned the terms conditioned and unconditioned, which appear in the Huxley lecture (Lancet, 1906), we shall use them. (Yerkes & Morgulis, 1909, p. 259)

The terms conditioned reflex and unconditioned reflex were used during the first two decades of the 20th century, during which time this type of learning was often referred to as "reflexology." In 1921, the first textbook devoted to conditioning (General Psychology in Terms of Behavior) adopted the terms conditioned and unconditioned response to replace the term reflex (Smith & Guthrie, 1921). De-emphasizing the concept of a reflex and instead using a more general term like response allowed a larger range of behaviors to be examined with conditioning procedures. [. . .]

Psychologist John B. Watson is well known for the "Little Albert" case, in which, over time, a young boy learned to fear white rats. This is an example of classical conditioning.

John B. Watson (1878–1958) championed the use of classical conditioning as a research tool for psychological investigations. During 1915, his student Karl Lashley conducted several exploratory conditioning experiments in Watson's laboratory. Watson's presidential address, delivered in 1915 to the American Psychological Association, was entitled "The Place of the Conditioned Reflex in Psychology" (Watson, 1916). Watson was highly influential in the rapid incorporation of classical conditioning into American psychology, though this influence did not appear to extend to his student. Lashley became frustrated with his attempts to classically condition the salivary response in humans (Lashley, 1916) and permanently abandoned the paradigm. In 1920, Watson's work with classical conditioning culminated in the now infamous case of "Little Albert" (first mentioned in the Introduction chapter). Albert B. was an 11-month-old boy who had no natural fear of white rats. Watson and Rosalie Rayner used the white rat as a CS. The US was a loud noise that always upset the child. By pairing the white rat and the loud noise, Albert began to cry and show fear of the white rat—a CR. With successive training sessions over the course of several months, Watson and Rayner were able to demonstrate that this fear of white rats generalized to other furry objects (Watson & Rayner, 1920). The plan had been to then systematically remove this fear using methods that Pavlov had shown would eliminate or extinguish the conditioned response, in this case, fear of furry white objects. Unfortunately, "Little Albert," as he has historically come to be known, was removed from the study by his mother on the day these procedures were to begin. As a result, there is no known reliable account of how this experiment on classical conditioning of fear ultimately affected Albert B. Nevertheless, this example of classical conditioning may be the most famous single case in the literature on classical conditioning.

The end of the beginning of classical conditioning as a paradigm in the United States can be traced to the 1927 publication of Pavlov's book Conditioned Reflexes, which was translated into English by a former student, G. V. Anrep (Pavlov, 1927). This made all of Pavlov's conditioning work available in English for the first time.
The availability of 25 years’ worth of Pavlov’s research, in vivid detail, led to increased interest in the experimental examination of classical conditioning, an interest that has continued to this day. [. . .] In 1935, B. F. Skinner entered this discussion in earnest with a paper titled “Two Types of Conditioned Reflexes and a Pseudo-Type” (Skinner, 1935). This was a theoretical paper in which Skinner attempted to add clarity and structure by distinguishing two types of conditioned reflexes. [. . .] It is clear that one type corresponds to what would eventually be termed operant conditioning and the second type corresponds to Pavlov’s type of conditioning. [. . .] Source: Clark, R. E. (2004). The classical origins of Pavlov’s conditioning. Integrative Physiological & Behavioral Science, 39(4), 279–294. Copyright © 2004, Springer. Operant Conditioning The word operant, coined by behaviorist B. F. Skinner (1904–1990), describes behavior that operates on the environment and thereby generates consequences (1953). Skinner suggested that when a behavior was reinforced, it would increase in frequency or be strengthened. If a behavior was not reinforced but instead resulted in a punishment, then the behavior would diminish or be eliminated. These associations describe the core of operant conditioning. As noted at the start of this section, the following excerpts from Macias (2016) explain the roles of reinforcements and punishments in conditioning. Excerpts from “Reinforcement” By S. I. Macias Types of Reinforcers The range of possible consequences that can function as reinforcers is enormous. To make sense of this assortment, psychologists tend to place them into two main categories: primary reinforcers and secondary reinforcers. Primary reinforcers are those that require little, if any, experience to be effective. Food, drink, and sex are common examples. While it is true that experience will influence what would be considered desirable for food, drink, or an appropriate sex partner, there is little argument that these items, themselves, are natural reinforcers. Another kind of reinforcer that does not require experience is called a social reinforcer. Examples are social contact and social approval. Even newborns show a desire for social reinforcers. Psychologists have discovered that newborns prefer to look at pictures of human faces more than practically any other stimulus pattern, and this preference is stronger if that face is smiling. Like the other primary reinforcers, experience will modify the type of social recognition that is desired. Still, it is clear that most people will go to great lengths to be noticed by others or to gain their acceptance and approval. Though these reinforcers are likely to be effective, most human behavior is not motivated directly by primary reinforcers. Money, entertainment, clothes, cars, and computer games are all effective rewards, yet none of these would qualify as natural or primary reinforcers. Because their value must be acquired, they are called secondary reinforcers. These become effective because they are paired with primary reinforcers. The famous American psychologist B. F. Skinner found that the sound of food being delivered was sufficient to maintain a high rate of bar pressing in experienced rats. Obviously, under normal circumstances the sound of the food occurred only if food was truly being delivered.
How a secondary reinforcer becomes effective is called two-factor theory and is generally explained through a combination of instrumental and Pavlovian conditioning (hence the label “two-factor”). For example, when a rat receives food for pressing a bar (positive reinforcement), at that same time a neutral stimulus is also presented, the sound of the food dropping into the food dish. The sound is paired with a stimulus that naturally elicits a reflexive response; that is, food elicits satisfaction. Over many trials, the sound is paired consistently with food; thus, it will be conditioned via Pavlovian methods to elicit the same response as the food. Additionally, this process occurred during the instrumental conditioning of bar pressing by using food as a reinforcer. This same process works for most everyday activities. For most humans, money is an extremely powerful reinforcer. Money itself, though, is not very attractive. It does not taste good, does not reduce any biological drives, and does not, on its own, satisfy any needs. However, it is reliably paired with all of these things and therefore becomes as effective as these primary reinforcers. In a similar way, popular fashion in clothing, hair styles, and personal adornment; popular art or music; even behaving according to the moral values of one’s family or church group (or one’s gang) can all come to be effective reinforcers because they are reliably paired with an important primary reinforcer, namely, social approval. The person who will function most effectively as the approving agent changes throughout life. One’s parents, friends, classmates, teachers, teammates, coaches, spouse, children, and colleagues at work all provide effective social approval opportunities. Reinforcers and Punishers A young child potty training. Seanfboggs/iStock/Thinkstock Potty training a child is an example of reinforcement, where a parent may reward or cheer on the child throughout the process to attain a successful result. To maintain a reasonable degree of consistency, most psychologists use the term “reinforcement” exclusively for a process of using rewards to increase voluntary behavior. The field of study most associated with this technique is instrumental conditioning. In this context, the formal definition states that a reinforcer is any consequence to a behavior that is emitted in a specified situation that has the effect of increasing that behavior in the future. It must be emphasized that the behavior itself is not sufficient for the consequence to be delivered. The circumstances in which the behavior occurs are also important. Thus, standing and cheering at a basketball game will likely lead to approval (social reinforcement), whereas this same response is not likely to yield acceptance if it occurs at a funeral. A punisher is likewise defined as any consequence that reduces the probability of a behavior, with the same qualifications as for reinforcers. A behavior that occurs in response to a specified situation may receive a consequence that reduces the likelihood that it will occur in that situation in the future, but the same behavior in another situation would not generate the same consequence. For example, drawing on the walls of a freshly painted room would usually result in an unpleasant consequence, whereas the same behavior (drawing) in one’s coloring book would not. The terms “positive” and “negative” are also much more tightly defined. 
Former use confused these with the emotional values of good or bad, thereby requiring the counterintuitive and confusing claim that a positive reinforcer is withheld or a negative reinforcer presented when there is clearly no reward, and, in fact, the intent is to reduce the probability of that response (such as described by Kimble). A better, less confusing definition is to consider “positive” and “negative” as arithmetic symbols, as for adding or subtracting. They therefore are the methods of supplying reinforcement (or punishment) rather than descriptions of the reinforcer itself. Thus, if a behavior occurs, and as a consequence something is given that will result in an increase in the rate of the behavior, this is positive reinforcement. Giving a dog a treat for executing a trick is a good example. One can also increase the rate of a behavior by removing something on its production. This is called negative reinforcement. A good example might be when a child who eats his or her vegetables does not have to wash the dinner dishes. Another example is the annoying seat belt buzzer in cars. Many people comply with the rules of safety simply to terminate that aversive sound. The descriptors “positive” and “negative” can be applied to punishment as well. If something is added on the performance of a behavior which results in the reduction of that behavior—that is positive punishment. On the other hand, if this behavior causes the removal of something that reduces the response rate—negative punishment. A dog collar that provides an electric shock when the dog strays too close to the property line is an example of a device that delivers positive punishment. Loss of television privileges for rudeness is an example of negative punishment. See Table 1.2 for an overview of reinforcements and punishments. Table 1.2: Reinforcements and punishments Type Description Example positive reinforcement Adds to the environment to encourage continuance of a desired behavior. Giving child a reward (a treat, a toy, etc.) positive punishment Adds to the environment to discourage continuance of an undesired behavior. Adding chores to a child’s weekly duties negative reinforcement Takes away from the environment to encourage continuance of a desired behavior. Taking away child’s assigned chores for the week negative punishment Takes away from the environment to discourage continuance of an undesired behavior. Grounding child from playing with his/her friends © Bridgepoint Education, Inc. Why Reinforcers Work Reinforcers (and punishers) are effective at influencing an organism’s willingness to respond because they influence the way in which an organism acquires something that is desired, or avoids something that is not desired. For primary reinforcers, this concerns health and survival. Secondary reinforcers are learned through experience and do not directly affect one’s health or survival, yet they are adaptive because they are relevant to those situations that are related to well-being and an improved quality of life. Certainly learning where food, drink, receptive sex partners, or social acceptance can be located is useful for an organism. Coming to enjoy being in such situations is very useful, too. [. . .] Patterns of Reinforcer Delivery It is not necessary to deliver a reinforcer on every occurrence of a behavior to have the desired effect. In fact, intermittent reinforcement has a stronger effect on the stability of the response rate than reinforcing every response. 
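Before looking more closely at patterns of reinforcer delivery, it may help to restate the four categories defined above in a few lines of code. The sketch below is purely illustrative and simply encodes the two distinctions from the text (something added to or removed from the environment, and behavior that subsequently increases or decreases); the function name and example labels are ours, not Macias's.

```python
def classify_consequence(stimulus_added, behavior_increases):
    """Label a consequence using the two distinctions described in the text:
    whether a stimulus is added to or removed from the environment, and
    whether the behavior subsequently increases or decreases."""
    if behavior_increases:
        return "positive reinforcement" if stimulus_added else "negative reinforcement"
    return "positive punishment" if stimulus_added else "negative punishment"

# The examples from the text, restated as (stimulus added?, behavior increases?) pairs.
examples = {
    "dog treat for executing a trick": (True, True),
    "seat belt buzzer stops when the belt is fastened": (False, True),
    "shock collar at the property line": (True, False),
    "loss of television privileges for rudeness": (False, False),
}
for description, (added, increases) in examples.items():
    print(f"{description}: {classify_consequence(added, increases)}")
```

Running the sketch reproduces the four labels of Table 1.2. The discussion now turns to how often a reinforcer needs to be delivered for these effects to hold.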
If the organism expects every response to be reinforced, suspending reinforcement will cause the response to disappear very quickly. If, however, the organism is familiar with occasions of responding without reinforcement, responding will continue for much longer after reinforcement is terminated. There are two basic patterns of intermittent reinforcement: ratio and interval. These patterns, or rules, are known as schedules of reinforcement. Ratio schedules are based on the number of responses required to receive the reinforcer. Interval schedules are based on the amount of time that must pass before a reinforcer is available. Both schedules have fixed and variable types. On fixed schedules, whatever the rule is, it stays that way. If five responses are required to earn a reinforcer (a fixed ratio 5, or FR 5), every fifth response is reinforced. A fixed interval of 10 seconds (FI 10) means that the first response after 10 seconds has elapsed is reinforced, and this is true every time (responding during the interval is irrelevant). Variable schedules change the rule in unpredictable ways. A VR 5 (variable ratio 5) is one in which, on the average, the fifth response is reinforced, but it would vary over a series of trials. A variable interval of 10 seconds (VI 10) is similar. The required amount of time is an average of 10 seconds, but on any given trial it could be different. An example of a fixed-ratio (FR) schedule is pay for a specific amount of work, such as stuffing envelopes. The pay is always the same; stuffing a certain number of envelopes always equals the same pay. An example of a fixed-interval (FI) schedule is receiving the daily mail. Checking the mailbox before the mail is delivered will not result in reinforcement. One must wait until the appropriate time. A variable-ratio (VR) schedule example is a slot machine. The more attempts, the more times the player wins, but in an unpredictable pattern. A variable-interval (VI) schedule example would be telephoning a friend whose line is busy. Continued attempts will be unsuccessful until the friend hangs up the phone, but when this will happen is unknown. See Table 1.3 for an overview of ratio and interval schedules. Table 1.3: Ratio and interval schedules of reinforcement. Fixed-ratio (FR): the reinforcer is delivered after a set number of responses. Example: being paid a set amount for every batch of envelopes stuffed. Fixed-interval (FI): the reinforcer becomes available after a set amount of time. Example: being paid every Friday for work completed. Variable-ratio (VR): the reinforcer is delivered after an unpredictable number of responses. Example: a slot machine that pays out after a varying number of plays. Variable-interval (VI): the reinforcer becomes available after unpredictable amounts of time. Example: a bonus of a predictable amount paid at unpredictable times. © Bridgepoint Education, Inc. Response rates for fixed schedules follow a fairly specific pattern. Fixed ratio schedules tend to have a steady rate until the reinforcer is delivered; then there is a short rest, followed by the same rate. A fixed interval is slightly different. The closer one gets to the required time, the faster the response rate. On receiving the reinforcer there will be a short rest, then a gradual return to responding, becoming quicker and quicker over time. This is called a “scalloped” pattern. (Though not strictly an FI schedule, the following example does have a temporal component, so it illustrates the phenomenon nicely.)
Students are much more likely to study during the last few days before a test and very little during the days immediately after the test. As time passes, study behavior gradually begins again, becoming more concentrated the closer the next exam date comes. Source: Macias, S. I. (2016). Reinforcement. In Salem Press Encyclopedia of Health. Copyright © EBSCO. Classical and operant conditioning can often be difficult concepts to understand at first glance, and it can be helpful to think about how these types of learning processes might happen in our lives each day. For instance, have you ever rewarded your children for doing what you asked? As they became older, did you have to reward them every single time, as you may have when they were younger, or could you reward them every now and again and still see the behavior repeated? By fully understanding the principles of classical and operant conditioning, you will be more apt to identify—and perhaps even implement—differing schedules of reinforcement in your own life. The last section of this chapter will guide you through two modern applications of conditioning. Reinforcing Your Understanding: Conditioning takes a closer look at Skinner’s conditioning research. 1.4 Behaviorism Applied A walkway adjacent to a wall lined with Apple iPod advertisements. Ullstein bild/Getty Images Do the vibrant colors and illustrations in the Apple iPod advertisements elicit a positive feeling? Classical conditioning in advertising generally assumes that favorability toward a certain product develops from a positive commercial or advertisement. Now that you are familiar with how behaviorism was shaped and refined through continuous research, consider how it can be applied in modern environments. The excerpts in this section are from two separate articles. Both selections demonstrate the application of strategies based on behaviorism. The first series of excerpts is from Wells (2014) and illustrates how such strategies are used to understand consumer behaviors and then applied to product marketing; consumer behaviors research aims to identify why people buy what they buy. For example, an organization can use what it knows about its consumers when developing campaigns; its marketing campaigns will often apply some of the behavioral principles. Do you recognize the example in the pictured advertisement? Does it trigger specific emotional responses or beliefs about the product? Do you use this specific brand of product? Many of the advertisers’ decisions and consumer behaviors associated with their products are based on behaviorism. Excerpts from “Behavioural Psychology, Marketing, and Consumer Behaviour: A Literature Review and Future Research Agenda” By V. K. Wells Classical Conditioning in Marketing and Consumer Behavior Research [. . .] Allen and Janiszewski (1989), based on their work on contingency awareness, provide an anecdotal illustrative example of how classical conditioning could work successfully and be correctly used in advertising (a television commercial for Diet Pepsi), in which most of the work on classical conditioning in consumption and marketing has taken place. They suggest that: This commercial features a repetitive musical jingle with a series of brief visual clips. The jingle lyrics—”Now you see it, now you don’t, here you have it, here you won’t”—are precisely coordinated with the image presentation . . . the CS (the brand) predicts the US (a slim female torso). 
In each instance “Now you see it, now you don’t” is sung as first the brand (CS) and then a trim-figured woman (US) is shown. (pp. 39–40) Overall, there has been mixed support for classical conditioning effects in advertising, but the general suggestion is that positive attitudes toward an advertised product (CS) might develop through their association in a commercial with other stimuli that are reacted to positively (US), such as pleasant colors, music, and humor (Gorn, 1982). Early work applying classical conditioning to advertising appears to have been based on and inspired by the work of Razran (1938), who paired a free meal (US) with various political statements (CS). He found that agreement with the slogans was greater when people received a free meal than when they did not. The work of Staats and Staats (1958), who successfully associated visually presented nonsense symbols (CS) with several spoken words (US) such as beauty, healthy, smart, and success, opened the door further for a classical conditioning approach to advertising. After the associative pairings, the participants’ ratings of the CS indicated that the core meaning in the US (i.e., either positive or negative evaluation) had transferred to the nonsense syllables (Allen & Janiszewski, 1989). In a second experiment, Staats and Staats associated each of two national names (“Swedish” and “Dutch”) with either 18 positive or 18 negative words. The national name paired with positive words was later evaluated more favorably than the one paired with negative words. [. . .] Acquisition The first characteristic, acquisition, indicates that classically conditioned responses do not fully appear after only one pairing/trial, and the strength of the response increases with the number of pairings (McSweeney & Bierley, 1984). Whereas early studies used only one or an arbitrary number of pairings, experimenters quickly began testing the optimum level of pairings/trials, often experimenting with different numbers of pairings in different experimental groups. The first of the four experiments by Stuart, Shimp, and Engle (1987) tested the amount of conditioning produced by different numbers of CS–US pairings (1, 3, 10, and 20). They found that the groups given more pairings (10 and 20) demonstrated significantly higher levels of conditioning; in other words, conditioning was greater as the number of trials increased. Although other studies have used different trial numbers, there remains no agreement on an optimum number of trials for conditioning to occur. Extinction Extinction is the prediction that the conditioned behavior will disappear if the predictive relationship between the CS and the US is broken by either omitting the US entirely or by presenting the CS and US randomly (McSweeney & Bierley, 1984). Till, Stanley, and Priluck (2008) explored the characteristic of extinction empirically. Their study paired brands with celebrities and measured attitudes toward the brands after conditioning. Attitudes increased with the use of well-liked and relevant celebrities. They then attempted to extinguish these effects but found that, once paired, the pairings were difficult to eliminate, with brand attitudes still affected 2 weeks after the procedure (Till et al., 2008).
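The acquisition and extinction characteristics just described can be illustrated with a short simulation. The sketch below is not drawn from any of the studies cited here: it uses a generic error-correction learning rule with invented parameter values, in the spirit of standard associative-learning models, simply to show why conditioned responding grows over repeated CS–US pairings and then declines once the US is withheld.

```python
# Illustrative simulation of acquisition and extinction of an associative
# (conditioned) response. The learning rule and parameter values are
# assumptions for demonstration, not taken from the studies cited in the text.

def run_trials(strength, n_trials, us_present, learning_rate=0.3, asymptote=1.0):
    """Update associative strength over a series of CS presentations.

    On each trial the strength moves a fraction of the way toward the outcome
    actually experienced: the asymptote when the US follows the CS
    (acquisition), or zero when the CS is presented alone (extinction).
    """
    history = []
    target = asymptote if us_present else 0.0
    for _ in range(n_trials):
        strength += learning_rate * (target - strength)
        history.append(round(strength, 3))
    return strength, history


strength = 0.0
strength, acquisition = run_trials(strength, n_trials=10, us_present=True)
strength, extinction = run_trials(strength, n_trials=10, us_present=False)

print("Acquisition (CS paired with US):", acquisition)
print("Extinction (CS presented alone):", extinction)
```

The output shows a negatively accelerated acquisition curve, consistent with Stuart, Shimp, and Engle's finding that more pairings produce stronger conditioning, followed by a gradual decline once pairings stop. Real conditioned attitudes can be far more persistent than this simple decay implies, as the Till, Stanley, and Priluck results above indicate.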
Till and Priluck (2000) studied the characteristic of generalization, or the extent to which a response conditioned to one stimulus transfers to similar stimuli. Through two experimental procedures, they found that attitudes conditioned to a particular brand (Garra mouthwash) could be transferred (generalized) to a product with a similar name (Gurra, Gurri, and Dutti) in the same category, as well as a product with the same name in a different category (soap). [. . .] Operant Conditioning in Marketing and Consumer Behavior Research In operant conditioning, behavior is shaped and maintained by its consequences (Foxall, 1986), meaning that the rate at which a behavior will be performed is directly related to the consequences of that behavior performed previously. [. . .] According to Skinner, each behavioral act can be broken down into three key parts: (1) the response/behavior (R); (2) the reinforcement/punishment (S+/–), which is a consequence of the behavior; and (3) a discriminative stimulus (Sd), which is a cue that signals the likelihood of positive or negative consequences arising from performing the behavior (Foxall, 1986, 2002). The three parts together, labelled the three-term contingency, highlight that the determinants of the behavior must occur in the environment (Foxall, 1986, 1993): Sd → R → S+/– In general, behavior modifiers include positive and negative reinforcement, and positive and negative punishment. Positive reinforcement is generally a reward or something that strengthens the behavior (e.g., a pleasant experience or satisfaction with a product, a positive response to a behavior), which likely leads the person to buy the product again in future. With negative reinforcement, the behavior is generally performed to avoid unpleasantness (e.g., buying a product to avoid an aggressive salesperson, purchase and consumption of painkillers to relieve a headache; Simintiras & Cadogan, 1996). Punishment is an aversive consequence after a behavioral response and may lead to the extinction of a behavior (Nord & Peter, 1980). An example of punishment is a product that does not do the job it was designed to do or is of poor quality, and thus the buyer no longer buys it. Reinforcement, in both experimental procedures and real-life situations, is provided on a schedule. [. . .] Research has shown that intermittent schedules of reinforcement develop high rates of behavior resistant to extinction, and they are also more economical because they use fewer reinforcers, which can reduce the cost (Peter & Nord, 1982). Peter and Nord (1982) suggest that most marketing activity in the real world (differentiating brands and manipulating marketing variables such as price and promotions) often occurs on an intermittent schedule. In terms of marketing and consumer behavior, a full range of behavior, such as actual purchasing, visiting and browsing in a store, and searching for information online, can be examined under the three-term contingency. Foxall (1986, p. 404) also documents that verbal behavior, for example, sharing positive or negative word of mouth about a product, can also be examined but notes that “behaviors which belong to different classes (e.g. talking about how one will vote and actually voting) will be consistent only when the contingency of reinforcement applicable to both are functionally equivalent.” Discriminative stimuli serve to signal the probability of behavior being reinforced and can change the probability of a behavior being emitted. 
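Before turning to examples of discriminative stimuli, the schedule effects just described can be illustrated with a short simulation. The sketch below is purely illustrative: the learner model, the parameter values, and the assumption that non-reinforcement is less surprising after intermittent training are ours, not Foxall's or Peter and Nord's. It simply captures the standard claim that behavior maintained on an intermittent schedule is more resistant to extinction than behavior that was reinforced every time.

```python
import random

random.seed(0)

def extinction_persistence(expected_rate, extinction_trials=300,
                           start_probability=0.95, sensitivity=0.05):
    """Count responses emitted during extinction by a toy learner.

    expected_rate is the fraction of responses that were reinforced during
    training: 1.0 under continuous reinforcement, 0.2 under a VR 5 schedule.
    During extinction, each unreinforced response lowers the probability of
    responding in proportion to how surprising the omission is, i.e., in
    proportion to the reinforcement rate the learner came to expect.
    """
    p_respond = start_probability
    responses = 0
    for _ in range(extinction_trials):
        if random.random() < p_respond:  # the response is emitted on this trial
            responses += 1
            p_respond = max(0.0, p_respond - sensitivity * expected_rate)
    return responses

print("Responses in extinction after continuous reinforcement:",
      extinction_persistence(expected_rate=1.0))
print("Responses in extinction after VR 5 (intermittent) training:",
      extinction_persistence(expected_rate=0.2))
```

With these assumed values, the intermittently reinforced learner keeps responding several times longer after reinforcement stops, which is one informal way of seeing why occasional promotions can sustain purchasing behavior.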
Nord and Peter (1980) provide examples of discriminative stimuli such as store signs (e.g., 50% off, buy one get one free), store logos (e.g., Kmart’s big red “K,” McDonald’s golden arches), or distinctive brand marks (e.g., Levi’s, Coca-Cola). Past learning history and experiences will have taught customers that responding to cues such as these in the past rewards them with satisfactory value purchases. They may also have learned that they are not rewarded when the symbols or cues are absent. [. . .] Source: Wells, V. K. (2014). Behavioural psychology, marketing and consumer behaviour: A literature review and future research agenda. Journal of Marketing Management, 30(11/12), 1119–1158. Copyright © 2014 Routledge. Behaviorism in Educational Environments The second series of excerpts in this section is from Standridge (2002). Standridge demonstrates the application of behaviorism in education and considers the importance of such strategies when reinforcing preferred behaviors and discouraging unwanted behaviors. Behavior modification is an important strategy for creating positive environments that support effective learning opportunities. The selection introduces the concepts of modeling, cueing, and behavior modification. As you read, consider how similar strategies for putting theory into practice could also be used in organizations and family units. Excerpts from “Behaviorism” By M. Standridge [ . . .] Behaviorist techniques have long been employed in education to promote behavior that is desirable and discourage that which is not. Among the methods derived from behaviorist theory for practical classroom application are contracts, consequences, reinforcement, extinction, and behavior modification. Contracts, Consequences, Reinforcement, and Extinction Simple contracts can be effective in helping children focus on behavior change. The relevant behavior should be identified, and the child and counselor should decide the terms of the contract. Behavioral contracts can be used in school as well as at home. It is helpful if teachers and parents work together with the student to ensure that the contract is being fulfilled. [. . .] Consequences occur immediately after a behavior. Consequences may be positive or negative, expected or unexpected, immediate or long term, extrinsic or intrinsic, material or symbolic (a failing grade), emotional/interpersonal, or even unconscious. Consequences occur after the “target” behavior occurs, when either positive or negative reinforcement may be given. Positive reinforcement is presentation of a stimulus that increases the probability of a response. This type of reinforcement occurs frequently in the classroom. Teachers may provide positive reinforcement by: Smiling at students after a correct response. Commending students for their work. Selecting them for a special project. Praising students’ ability to parents. Negative reinforcement increases the probability of a response that removes or prevents an adverse condition. Many classroom teachers mistakenly believe that negative reinforcement is punishment administered to suppress behavior; however, negative reinforcement increases the likelihood of a behavior, as does positive reinforcement. Negative implies removing a consequence that a student finds unpleasant. Negative reinforcement might include: Obtaining a score of 80% or higher makes the final exam optional. Submitting all assignments on time results in the lowest grade being dropped. 
Perfect attendance is rewarded with a “homework pass.” Punishment involves presenting a strong stimulus that decreases the frequency of a particular response. Punishment is effective in quickly eliminating undesirable behaviors. Examples of punishment include: Students who fight are immediately referred to the principal. Late assignments are given a grade of “0.” Three tardies to class results in a call to the parents. Failure to do homework results in after-school detention (privilege of going home is removed). Table 1.4 provides a comparison and examples of reinforcements and punishments. Also see Reinforcing Your Understanding: Reinforcement and Punishment in the Classroom for a more in-depth example. Table 1.4: Reinforcement and punishment comparison Reinforcement (Behavior increases) Punishment (Behavior decreases) Positive (Something is added) Positive reinforcement: Something is added to increase desired behavior. Example: Smile and compliment student on good performance. Positive punishment: Something is added to decrease undesired behavior. Example: Give student detention for failing to follow the class rules. Negative (Something is removed) Negative reinforcement: Something is removed to increase desired behavior. Example: Give a free homework pass for turning in all assignments. Negative punishment: Something is removed to decrease undesired behavior. Example: Make students miss their time in recess for not following the class rules. Adapted from “Behaviorism” by M. Standridge, 2002, in M. Orey (Ed.), Emerging Perspectives on Learning, Teaching, and Technology (http://epltt.coe.uga.edu/index.php?title=Behaviorism). Copyright 2002 by M. Standridge. Adapted with permission. Extinction decreases the probability of a response by contingent withdrawal of a previously reinforced stimulus. Examples of extinction are: A student has developed the habit of saying the punctuation marks when reading aloud. Classmates reinforce the behavior by laughing when he does so. The teacher tells the students not to laugh, thus extinguishing the behavior. A teacher gives partial credit for late assignments; other teachers think this is unfair; the teacher decides to then give zeros for the late work. Students are frequently late for class, and the teacher does not require a late pass, contrary to school policy. The rule is subsequently enforced, and the students arrive on time. Reinforcing Your Understanding: Reinforcement and Punishment in the Classroom Reinforcement and punishment are still often used as methods for classroom management in today’s schools. By shaping student behavior, instructors have the ability to be more focused on the concepts that need to be learned. The following student-created video presents a quality demonstration of reinforcement and punishment in a classroom scenario. In this video, the teacher, Mr. Andrews, uses each method to demonstrate operant conditioning in scenarios with one particularly rambunctious student, Benjamin. https://youtu.be/wLoMs-OzimU Modeling, Shaping, and Cueing A toddler trying on adult high-heeled sandals. Erllre/iStock/Thinkstock A child trying on an adult’s clothing could be an example of observational learning; once the child sees a parent wearing high heels, a large coat, or even makeup, the child may try to model that behavior. Modeling is also known as observational learning (where the learner imitates, or models, the others’ behavior). Albert Bandura has suggested that modeling is the basis for a variety of child behavior. 
Children acquire many favorable and unfavorable responses by observing those around them. A child who kicks another child after seeing this on the playground, or a student who is always late for class because his friends are late, is displaying the results of observational learning. Shaping is the process of gradually changing the quality of a response. The desired behavior is broken down into discrete, concrete units, or positive movements, each of which is reinforced as it progresses toward the overall behavioral goal. In the following scenario, the classroom teacher employs shaping to change student behavior: The class enters the room and sits down, but continues to talk after the bell rings. The teacher gives the class one point for improvement, in that all students are seated. Subsequently, the students must be seated and quiet to earn points, which may be accumulated and redeemed for rewards. Cueing may be as simple as providing a child with a verbal or nonverbal signal as to the appropriateness of a behavior. For example, to teach a child to remember to perform an action at a specific time, the teacher might arrange for him to receive a cue immediately before the action is expected rather than after it has been performed incorrectly. For example, if the teacher is working with a student who habitually answers aloud instead of raising his hand, the teacher should discuss a cue such as hand-raising at the end of a question posed to the class. Behavior Modification Behavior modification is a method of eliciting better classroom performance from reluctant students. It has six basic components: Specification of the desired outcome (What must be changed and how will it be evaluated?). One example of a desired outcome is increased student participation in class discussions. Development of a positive, nurturing environment (by removing negative stimuli from the learning environment). In the above example, this would involve a student-teacher conference with a review of the relevant material, and calling on the student when it is evident that she knows the answer to the question posed. Identification and use of appropriate reinforcers (intrinsic and extrinsic rewards). A student receives an intrinsic reinforcer by correctly answering in the presence of peers, thus increasing self-esteem and confidence. Reinforcement of behavior patterns develop until the student has established a pattern of success in engaging in class discussions. Reduction in the frequency of rewards—a gradual decrease in the amount of one-on-one review with the student before class discussion. Evaluation and assessment of the effectiveness of the approach based on teacher expectations and student results. Compare the frequency of student responses in class discussions to the amount of support provided, and determine whether the student is independently engaging in class discussions (Brewer, Campbell, & Petty, 2000). [. . .] Further methods for behavior modification could include changing the environment, using models for learning new behavior, recording behavior, substituting new behavior to break bad habits, developing positive expectations, and increasing intrinsic satisfaction. [. . .] Source: Standridge, M. (2002). Behaviorism. In M. Orey (Ed.), Emerging perspectives on learning, teaching, and technology. 
Retrieved from http://epltt.coe.uga.edu/index.php?title=Behaviorism As we develop our understanding of how we learn, it is important to recognize the crucial foundations that characterize learning psychology, such as behaviorism and behavior analysis. Today, many different professions use and adapt behaviorist methods to help people succeed in their learning opportunities. Whether you want to become a counselor, a teacher, a human resources director, an employee development specialist, a psychologist, a researcher, or simply the best parent you can be, behaviorism offers you applicable strategies for encouraging appropriate and healthy behaviors in others. Reinforcing Your Understanding: Applied Behavioral Analysis (ABA) offers a glimpse at one young boy’s experiences with reward-based therapy. Summary & Resources Chapter Summary Behaviorism is a foundational framework that encourages those interested in how we learn to study, reflect, and identify patterns that support the stimulus-response premise. Dating back as far as Aristotle and his ideas about associations, these ideas have matured, been challenged, and continue to be elaborated upon through years of reflection and research. As explained by Watrin and Darwich (2012) in section 1.1, behaviorism is often misunderstood and difficult to clearly explain. However, additional articles in this chapter help us to bridge the gaps created by the multifaceted metamorphosis of this theoretical model. Instinctively, the foundations of behaviorism can be categorized by the S → R relationship and the suggestion that learning is the outward manifestation of the desired behavior, and although there are differing methods of how a stimulus can be applied to gain differing responses, this is a foundational component of the behaviorist ideology. See Figure 1.7 for a side-by-side presentation of the stimulus-response relationships in connectionism and conditioning. Figure 1.7: Overview of the principles of conditioning The foundations of behaviorism lie in the stimulus-response theoretical model. This model can be applied to connectionism and conditioning. A side-by-side presentation of the stimulus-response relationships in connectionism and conditioning. At the top, the relationship in connectionism is shown. At the bottom, the relationship before, during, and after conditioning is shown. Both represent stimulus with an “S” and response with an “R.” In connectionism, “S” leads to “R,” which leads to a confirming reaction. After the confirming reaction functions as a force connecting and binding the “S” and “R,” then the “S” leads to the confirming reaction, which leads to “R.” Before conditioning, “S” does not lead to a response, but an unconditioned stimulus (shown as “US”) leads to an unconditioned response (shown as “UR”). During conditioning, the conditioned stimulus (shown as “CS”) is followed by the “US,” which leads to the “UR.” After conditioning, the “CS” leads to the conditioned response (shown as “CR”). © Bridgepoint Education, Inc. Key Ideas Behaviorism suggests that learning has successfully occurred when the appropriate behavior is observed. Behaviorism suggests many relevant strategies for successful learning, educating, and counseling. Behavior analysis constitutes a field and a psychological system devoted to the study of behavior. Skinnerian behaviorism established the fundamental concepts and methods of behavior analysis. Connectionism is defined as the connections (influences) between situations and responses. 
Thorndike suggests that intelligence is related to the bonds a person forms to events, content, and so forth, and that the more bonds that are formed, the more intelligent the person is. Thorndike’s laws of learning are formulations that learning is suggested to follow; they include three major laws: effect, exercise or frequency, and readiness. Pavlov is widely considered the founder of classical conditioning. Watson was an important researcher who introduced classical conditioning into American psychology. His work included the case of “Little Albert.” Classical conditioning suggests that one stimulus (e.g., food) can strengthen the potential response to another stimulus (e.g., a bell or tone) that is not physiologically conducive to evoking an unconditioned response (UR) (e.g., salivation). The response that is elicited by the presentation of the unconditioned stimulus (US) (e.g., the food) is termed the unconditioned response (UR) (e.g., salivating). Reinforcement is used to increase a desired behavior. This can be either negative reinforcement (e.g., taking away chores) or positive reinforcement (e.g., giving additional TV time). Punishment is used to decrease behavior. It can be positive punishment (e.g., giving additional chores due to behavior) or negative punishment (e.g., taking away time with friends). Today, principles of conditioning are applied in many different professional disciplines, such as marketing and education. Additional Resources Visit the following websites to further your understanding of the topics and prominent researchers that were introduced in this chapter. Behaviorism Stanford Encyclopedia, general information: https://plato.stanford.edu/entries/behaviorism/ Internet Encyclopedia of Philosophy, general information: www.iep.utm.edu/b/behavior.htm Funderstanding, general information: www.funderstanding.com/behaviorism.cfm and http://www.funderstanding.com/educators/behaviorism-and-the-developing-child/ Ivan Pavlov Nobel Prize, biography: http://nobelprize.org/nobel_prizes/medicine/laureates/1904/pavlov-bio.html B. F. Skinner PBS, biography and work: www.pbs.org/wgbh/aso/databank/entries/bhskin.html Instructional Design, operant conditioning: http://www.instructionaldesign.org/theories/operant-conditioning.html Skinner Foundation, a nonprofit organization: http://www.bfskinner.org/ Funderstanding, radical behaviorism: http://www.funderstanding.com/educators/skinners-radical-behaviorism/ Edward Thorndike Muskingum College, Department of Psychology, biography and work: http://www.muskingum.edu/~psych/psycweb/history/thorndike.htm York University, Toronto, biography and work: http://psychclassics.yorku.ca/Thorndike/education.htm J. B. Watson Weibell, C. J. (2011). Principles of learning: 7 principles to guide personalized, student-centered learning in the technology-enhanced, blended learning environment. Retrieved from https://principlesoflearning.wordpress.com/2011/06/01/7-principles-of-learning-the-short-version/ J. B. Watson and the Little Albert phobia experiments
: https://www.youtube.com/watch?v=CYGXMXGkxtc A Science Odyssey: John B. Watson: http://www.pbs.org/wgbh/aso/databank/entries/bhwats.html Key Terms acquisition In classical conditioning, the period during which a stimulus comes to be associated with a response. A single pairing of the conditioned stimulus (CS) and the unconditioned stimulus (US) is not expected to be enough for the CS to evoke the conditioned response (CR); the pairing will need to be conducted more than once. behavior analysis A field or psychological system devoted to the study of behavior. behavior modification A method of prompting desired (more favorable) behaviors. classical conditioning A learning procedure in which a conditioned stimulus (CS) (e.g., a bell or tone) is paired with an unconditioned stimulus (US) (e.g., food) until the CS alone comes to evoke a conditioned response (CR) (e.g., salivating). conditioned response (CR) The behavior evoked by a previously neutral stimulus, that is, by the conditioned stimulus presented without the unconditioned stimulus (e.g., the drool as the result of the tone alone in Pavlov’s experiment). conditioned stimulus (CS) The thing or event that does not evoke a behavior due to natural, physiological causes (e.g., the bell or tone in Pavlov’s experiment). cueing The act of using a verbal or nonverbal prompt to signal the appropriateness of a behavior. extinction When a learned response is not reinforced and thus decreases or discontinues; used in both classical and operant conditioning. fixed-interval (FI) schedule A program or timetable in which a set amount of time is allotted between each reinforcer (e.g., every 3 days a reinforcer is offered). fixed-ratio (FR) schedule A program or timetable in which the reinforcer is delivered when the wanted behavior is performed a set number of times (e.g., after performing the desired behavior three times, a reinforcer is applied). generalization When a behavior is evoked by similar stimuli that were not originally applied in the learning event. For example, a behavior conditioned to one sound may also occur when a different but similar sound is presented (the opposite of discrimination). intermittent reinforcement When the reinforcement is given inconsistently or occasionally. laws of learning Formulations, proposed by Edward Thorndike, that learning is suggested to follow. These include effect, exercise or frequency, and readiness. learning Process of developing knowledge or a skill through instruction or study; the modification of a behavioral tendency developed through experience (such as exposure to conditioning). modeling Observational learning; learning by imitating others. negative punishment The removal of a stimulus following a specific behavior, used to decrease the rate of a response or behavior. negative reinforcement The removal of a stimulus following a specific behavior, applied to increase the frequency of a behavior. operant conditioning When a behavior is shaped and maintained by its consequences (reinforcer or punishment). positive punishment The presentation of a negative consequence that follows a specific behavior, applied to decrease the probability of that behavior recurring. positive reinforcement The inclusion of a reward following a specific behavior, applied to increase the frequency of that behavior. schedules of reinforcement Refers to specific patterns of delivery for behavior reinforcers.
shaping The process of defining the desired behavior (targeted behavior), systematically and consistently reinforcing successive approximations of it, and continuing the process until the desired behavior is produced. theory of connectionism Identified by Edward Thorndike; suggests that learning is the result of associations (habits) that are created between a stimulus and responses. unconditioned response (UR) The natural, automatic response to an unconditioned stimulus without the presence of the conditioned stimulus (e.g., the drool as a result of the food in Pavlov’s experiment). unconditioned stimulus (US) The thing or event that naturally evokes a behavior (e.g., the food in Pavlov’s experiment). variable-interval (VI) schedule A program or timetable in which reinforcement becomes available after random amounts of time (e.g., a reinforcer becomes available after 3 minutes, then after 7 minutes, then after 15 minutes). variable-ratio (VR) schedule A program or timetable in which behavior is reinforced after a randomly determined number of responses have been demonstrated.
The Neolithic Revolution, or the (First) Agricultural Revolution, was the wide-scale transition of many human cultures during the Neolithic period from a lifestyle of hunting and gathering to one of agriculture and settlement, making an increasingly large population possible. These settled communities permitted humans to observe and experiment with plants, learning how they grew and developed. This new knowledge led to the domestication of plants.
Archaeological data indicates that the domestication of various types of plants and animals happened in separate locations worldwide, starting in the geological epoch of the Holocene 11,700 years ago. It was the world's first historically verifiable revolution in agriculture. The Neolithic Revolution greatly narrowed the diversity of foods available, resulting in a downturn in the quality of human nutrition.
The Neolithic Revolution involved far more than the adoption of a limited set of food-producing techniques. During the next millennia it transformed the small and mobile groups of hunter-gatherers that had hitherto dominated human pre-history into sedentary (non-nomadic) societies based in built-up villages and towns. These societies radically modified their natural environment by means of specialized food-crop cultivation, with activities such as irrigation and deforestation which allowed the production of surplus food. Other developments that are found very widely during this era are the domestication of animals, pottery, polished stone tools, and rectangular houses. In many regions, the adoption of agriculture by prehistoric societies caused episodes of rapid population growth, a phenomenon known as the Neolithic demographic transition.
These developments, sometimes called the Neolithic package, provided the basis for centralized administrations and political structures, hierarchical ideologies, depersonalized systems of knowledge (e.g. writing), densely populated settlements, specialization and division of labour, more trade, the development of non-portable art and architecture, and greater property ownership. The earliest known civilization developed in Sumer in southern Mesopotamia (c. 6,500 BP); its emergence also heralded the beginning of the Bronze Age.
The relationship of the above-mentioned Neolithic characteristics to the onset of agriculture, their sequence of emergence, and empirical relation to each other at various Neolithic sites remains the subject of academic debate, and varies from place to place, rather than being the outcome of universal laws of social evolution. The Levant saw the earliest developments of the Neolithic Revolution from around 10,000 BCE, followed by sites in the wider Fertile Crescent.
Hunter-gatherers had different subsistence requirements and lifestyles from agriculturalists. They resided in temporary shelters and were highly mobile, moving in small groups and had limited contact with outsiders. Their diet was well-balanced and depended on what the environment provided each season. Because the advent of agriculture made it possible to support larger groups, agriculturalists lived in more permanent dwellings in areas that were more densely populated than could be supported by the hunter-gatherer lifestyle. The development of trading networks and complex societies brought them into contact with outside groups.
However, population increase did not necessarily correlate with improved health. Reliance on a single crop can adversely affect health even while making it possible to support larger numbers of people. Maize is deficient in certain essential amino acids (lysine and tryptophan) and is a poor source of iron. The phytic acid it contains may inhibit nutrient absorption. Other factors that likely affected the health of early agriculturalists and their domesticated livestock would have been increased numbers of parasites and disease-bearing pests associated with human waste and contaminated food and water supplies. Fertilizers and irrigation may have increased crop yields but also would have promoted proliferation of insects and bacteria in the local environment while grain storage attracted additional insects and rodents.
The term 'neolithic revolution' was coined by V. Gordon Childe in his 1936 book Man Makes Himself. Childe introduced it as the first in a series of agricultural revolutions in Middle Eastern history, calling it a "revolution" to denote its significance and the degree of change it brought to communities adopting and refining agricultural practices.
The beginning of this process in different regions has been dated from 10,000 to 8,000 BCE in the Fertile Crescent, and perhaps 8000 BCE in the Kuk Early Agricultural Site of Papua New Guinea in Melanesia. Everywhere, this transition is associated with a change from a largely nomadic hunter-gatherer way of life to a more settled, agrarian one, with the domestication of various plant and animal species – depending on the species locally available, and probably influenced by local culture. Recent archaeological research suggests that in some regions, such as the Southeast Asian peninsula, the transition from hunter-gatherer to agriculturalist was not linear, but region-specific.
There are several theories (not mutually exclusive) as to factors that drove populations to take up agriculture. The most prominent are:
- The Oasis Theory, originally proposed by Raphael Pumpelly in 1908, popularized by V. Gordon Childe in 1928 and summarised in Childe's book Man Makes Himself. This theory maintains that as the climate got drier due to the Atlantic depressions shifting northward, communities contracted to oases where they were forced into close association with animals, which were then domesticated together with planting of seeds. However, today this theory has little support amongst archaeologists because subsequent climate data suggests that the region was getting wetter rather than drier.
- The Hilly Flanks hypothesis, proposed by Robert Braidwood in 1948, suggests that agriculture began in the hilly flanks of the Taurus and Zagros mountains, where the climate was not drier as Childe had believed, and fertile land supported a variety of plants and animals amenable to domestication.
- The Feasting model by Brian Hayden suggests that agriculture was driven by ostentatious displays of power, such as giving feasts, to exert dominance. This required assembling large quantities of food, which drove agricultural technology.
- The Demographic theories proposed by Carl Sauer and adapted by Lewis Binford and Kent Flannery posit an increasingly sedentary population that expanded up to the carrying capacity of the local environment and required more food than could be gathered. Various social and economic factors helped drive the need for food.
- The evolutionary/intentionality theory, developed by David Rindos and others, views agriculture as an evolutionary adaptation of plants and humans. Starting with domestication by protection of wild plants, it led to specialization of location and then full-fledged domestication.
- Peter Richerson, Robert Boyd, and Robert Bettinger make a case for the development of agriculture coinciding with an increasingly stable climate at the beginning of the Holocene. Ronald Wright's book and Massey Lecture Series A Short History of Progress popularized this hypothesis.
- The postulated Younger Dryas impact event, claimed to be in part responsible for megafauna extinction and ending the last glacial period, could have provided circumstances that required the evolution of agricultural societies for humanity to survive. The agrarian revolution itself is a reflection of typical overpopulation by certain species following initial events during extinction eras; this overpopulation itself ultimately propagates the extinction event.
- Leonid Grinin argues that whatever plants were cultivated, the independent invention of agriculture always took place in special natural environments (e.g., South-East Asia). It is supposed that the cultivation of cereals started somewhere in the Near East: in the hills of Israel or Egypt. Grinin therefore dates the beginning of the agricultural revolution within the interval 12,000 to 9,000 BP, though in some cases the earliest cultivated plants, or the bones of domesticated animals, date to as much as 14,000–15,000 years ago.
- Andrew Moore suggested that the Neolithic Revolution originated over long periods of development in the Levant, possibly beginning during the Epipaleolithic. In "A Reassessment of the Neolithic Revolution", Frank Hole further expanded the relationship between plant and animal domestication. He suggested the events could have occurred independently over different periods of time, in as yet unexplored locations. He noted that no transition site had been found documenting the shift from what he termed immediate and delayed return social systems. He noted that the full range of domesticated animals (goats, sheep, cattle and pigs) were not found until the sixth millennium at Tell Ramad. Hole concluded that "close attention should be paid in future investigations to the western margins of the Euphrates basin, perhaps as far south as the Arabian Peninsula, especially where wadis carrying Pleistocene rainfall runoff flowed."
Early harvesting of cereals (23,000 BP)
Use-wear analysis of five glossed flint blades found at Ohalo II, a 23,000-year-old fisher-hunter-gatherers’ camp on the shore of the Sea of Galilee, Northern Israel, provides the earliest evidence for the use of composite cereal harvesting tools. The Ohalo site is at the junction of the Upper Paleolithic and the Early Epipaleolithic, and has been attributed to both periods.
The wear traces indicate that tools were used for harvesting near-ripe semi-green wild cereals, shortly before grains are ripe and disperse naturally. The studied tools were not used intensively, and they reflect two harvesting modes: flint knives held by hand and inserts hafted in a handle. The finds shed new light on cereal harvesting techniques some 8,000 years before the Natufian and 12,000 years before the establishment of sedentary farming communities in the Near East. Furthermore, the new finds accord well with evidence for the earliest ever cereal cultivation at the site and the use of stone-made grinding implements.
Domestication of plants
Once agriculture started gaining momentum, around 9000 BP, human activity resulted in the selective breeding of cereal grasses (beginning with emmer, einkorn and barley), and not simply of those strains that favoured greater caloric returns through larger seeds. Plants with traits such as small seeds or bitter taste were seen as undesirable. Plants that rapidly shed their seeds on maturity tended not to be gathered at harvest, and were therefore neither stored nor sown the following season; successive years of harvesting spontaneously selected for strains that retained their edible seeds longer.
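The self-reinforcing character of this selection process can be illustrated with a toy simulation, shown below. The initial trait frequency and harvest probabilities are invented for illustration and are not estimates from the archaeobotanical record; the point is only that repeating the harvest-and-resow cycle steadily enriches the sown stock in plants that retain their seeds.

```python
def harvest_and_resow(retaining_fraction, years,
                      p_harvest_if_retaining=0.8,
                      p_harvest_if_shattering=0.3):
    """Toy model of unconscious selection for non-shattering cereals.

    Each year, seed heads that retain their grain until harvest are more likely
    to end up in the harvested (and therefore resown) seed stock than heads
    that shatter and drop their grain early. All values are illustrative
    assumptions only.
    """
    f = retaining_fraction
    history = [round(f, 3)]
    for _ in range(years):
        harvested_retaining = f * p_harvest_if_retaining
        harvested_shattering = (1 - f) * p_harvest_if_shattering
        f = harvested_retaining / (harvested_retaining + harvested_shattering)
        history.append(round(f, 3))
    return history

# Starting from a population in which only 5% of plants retain their seeds,
# the seed-retaining type comes to dominate the sown stock within a few
# simulated decades.
print(harvest_and_resow(retaining_fraction=0.05, years=30))
```

Under these made-up numbers the retained-seed type dominates within a couple of decades, which is at least compatible in spirit with the short domestication timescales discussed next.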
Daniel Zohary identified several plant species as "pioneer crops" or Neolithic founder crops. He highlighted the importance of wheat, barley and rye, and suggested that domestication of flax, peas, chickpeas, bitter vetch and lentils came a little later. Based on analysis of the genes of domesticated plants, he preferred theories of a single, or at most a very small number of domestication events for each taxon that spread in an arc from the Levantine corridor around the Fertile Crescent and later into Europe. Gordon Hillman and Stuart Davies carried out experiments with varieties of wild wheat to show that the process of domestication would have occurred over a relatively short period of between 20 and 200 years. Some of the pioneering attempts failed at first and crops were abandoned, sometimes to be taken up again and successfully domesticated thousands of years later: rye, tried and abandoned in Neolithic Anatolia, made its way to Europe as weed seeds and was successfully domesticated in Europe, thousands of years after the earliest agriculture. Wild lentils presented a different problem: most of the wild seeds do not germinate in the first year; the first evidence of lentil domestication, breaking dormancy in their first year, appears in the early Neolithic at Jerf el Ahmar (in modern Syria), and lentils quickly spread south to the Netiv HaGdud site in the Jordan Valley. The process of domestication allowed the founder crops to adapt and eventually become larger, more easily harvested, more dependable in storage and more useful to the human population.
Selectively propagated figs, wild barley and wild oats were cultivated at the early Neolithic site of Gilgal I, where in 2006 archaeologists found caches of seeds of each in quantities too large to be accounted for even by intensive gathering, at strata datable to c. 11,000 years ago. Some of the plants tried and then abandoned during the Neolithic period in the Ancient Near East, at sites like Gilgal, were later successfully domesticated in other parts of the world.
Once early farmers perfected their agricultural techniques like irrigation (traced as far back as the 6th millennium BCE in Khuzistan), their crops yielded surpluses that needed storage. Most hunter-gatherers could not easily store food for long due to their migratory lifestyle, whereas those with a sedentary dwelling could store their surplus grain. Eventually granaries were developed that allowed villages to store their seeds longer. With more food available, the population expanded, and communities developed specialized workers and more advanced tools.
The process was not as linear as was once thought, but a more complicated effort, which was undertaken by different human populations in different regions in many different ways.
Spread of crops: the case of barley
One of the world's most important crops, barley, was domesticated in the Near East around 11,000 years ago (c. 9,000 BCE). Barley is a highly resilient crop, able to grow in varied and marginal environments, such as in regions of high altitude and latitude. Archaeobotanical evidence shows that barley had spread throughout Eurasia by 2,000 BCE. To further elucidate the routes by which barley cultivation was spread through Eurasia, genetic analysis was used to determine genetic diversity and population structure in extant barley taxa. Genetic analysis shows that cultivated barley spread through Eurasia via several different routes, which were most likely separated in both time and space.
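As a rough illustration of what determining "genetic diversity and population structure" involves, the sketch below projects a small, entirely hypothetical genotype matrix onto its first two principal components; accessions that cluster together on such a plot are interpreted as sharing ancestry, which is how dispersal routes are inferred. Real studies use thousands of markers and dedicated software, but the linear-algebra core is the same.

```python
# Toy illustration of population-structure analysis: PCA of a genotype matrix.
# Rows are barley accessions, columns are biallelic markers coded 0/1/2.
# The matrix is HYPOTHETICAL; real studies use thousands of SNPs.
import numpy as np

genotypes = np.array([
    [0, 2, 1, 0, 2, 1],   # accessions 1-3: one hypothetical regional group
    [0, 2, 2, 0, 2, 1],
    [1, 2, 1, 0, 2, 0],
    [2, 0, 0, 2, 0, 1],   # accessions 4-6: a second hypothetical group
    [2, 0, 1, 2, 0, 2],
    [2, 1, 0, 2, 0, 2],
], dtype=float)

centered = genotypes - genotypes.mean(axis=0)       # centre each marker
_, _, vt = np.linalg.svd(centered, full_matrices=False)
pcs = centered @ vt[:2].T                           # project onto first two PCs

for i, (pc1, pc2) in enumerate(pcs, start=1):
    print(f"accession {i}: PC1={pc1:+.2f}  PC2={pc2:+.2f}")
```

In this made-up example, accessions 1–3 and 4–6 separate cleanly on the first component, mimicking two regional barley populations.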
Development and diffusion
Beginnings in the Levant
Agriculture appeared first in Southwest Asia around 10,000–9,000 years ago. The region was the centre of domestication for three cereals (einkorn wheat, emmer wheat and barley), four legumes (lentil, pea, bitter vetch and chickpea), and flax. Domestication was a slow process that unfolded across multiple regions, and was preceded by centuries if not millennia of pre-domestication cultivation.
Finds of large quantities of seeds and a grinding stone at the Epipalaeolithic site of Ohalo II, dating to around 19,400 BP, have provided some of the earliest evidence for advanced planning of plants for food consumption and suggest that humans at Ohalo II processed the grain before consumption. Tell Aswad is the oldest site of agriculture, with domesticated emmer wheat dated to 10,800 BP. Soon after came hulled, two-row barley – found domesticated earliest at Jericho in the Jordan Valley and at Iraq ed-Dubb in Jordan. Other sites in the Levantine corridor that show early evidence of agriculture include Wadi Faynan 16 and Netiv Hagdud. Jacques Cauvin noted that the settlers of Aswad did not domesticate on site, but "arrived, perhaps from the neighbouring Anti-Lebanon, already equipped with the seed for planting". In the Eastern Fertile Crescent, evidence of cultivation of wild plants has been found at Chogha Golan in Iran, dated to 12,000 BP, suggesting there were multiple regions in the Fertile Crescent where domestication evolved roughly contemporaneously. The Heavy Neolithic Qaraoun culture has been identified at around fifty sites in Lebanon around the source springs of the River Jordan, but has never been reliably dated.
Archaeologists trace the emergence of food-producing societies to the Levantine region of southwest Asia at the close of the last glacial period, around 12,000 BCE; these societies developed into a number of regionally distinctive cultures by the eighth millennium BCE. Remains of food-producing societies in the Aegean have been carbon-dated to around 6500 BCE at Knossos, Franchthi Cave, and a number of mainland sites in Thessaly. Neolithic groups appear soon afterwards in the Balkans and south-central Europe. The Neolithic cultures of southeastern Europe (the Balkans and the Aegean) show some continuity with groups in southwest Asia and Anatolia (e.g., Çatalhöyük).
Current evidence suggests that Neolithic material culture was introduced to Europe via western Anatolia. All Neolithic sites in Europe contain ceramics, and contain the plants and animals domesticated in Southwest Asia: einkorn, emmer, barley, lentils, pigs, goats, sheep, and cattle. Genetic data suggest that no independent domestication of animals took place in Neolithic Europe, and that all domesticated animals were originally domesticated in Southwest Asia. The only domesticate not from Southwest Asia was broomcorn millet, domesticated in East Asia. The earliest evidence of cheese-making dates to 5500 BCE in Kujawy, Poland.
The diffusion across Europe, from the Aegean to Britain, took about 2,500 years (6500–4000 BP). The Baltic region was penetrated somewhat later, around 3500 BP, and there was also a delay in settling the Pannonian plain. In general, colonization shows a "saltatory" pattern, as the Neolithic advanced from one patch of fertile alluvial soil to another, bypassing mountainous areas. Analysis of radiocarbon dates shows clearly that Mesolithic and Neolithic populations lived side by side for as much as a millennium in many parts of Europe, especially in the Iberian peninsula and along the Atlantic coast.
Carbon 14 evidence
The spread of the Neolithic from the Near East to Europe was first studied quantitatively in the 1970s, when a sufficient number of carbon-14 age determinations for early Neolithic sites had become available. Ammerman and Cavalli-Sforza discovered a linear relationship between the age of an Early Neolithic site and its distance from the conventional source in the Near East (Jericho), demonstrating that the Neolithic spread at an average speed of about 1 km/yr. More recent studies confirm these results and yield a speed of 0.6–1.3 km/yr (at a 95% confidence level).
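That spread rate comes from a simple regression: fit site age (years BP) against distance from the assumed origin, and the inverse of the slope is the speed of the advancing front. The sketch below reproduces the idea with made-up illustrative points, not the published radiocarbon dataset.

```python
# Minimal sketch of the Ammerman & Cavalli-Sforza style estimate:
# regress site age (years BP) on distance from Jericho (km), then invert
# the slope to get a front speed in km/yr.
# The data points below are HYPOTHETICAL, for illustration only.
import numpy as np

distance_km = np.array([0, 400, 900, 1500, 2200, 2900, 3600])      # from Jericho
age_bp      = np.array([10500, 10000, 9300, 8500, 7800, 7000, 6300])

slope, intercept = np.polyfit(distance_km, age_bp, 1)  # age = slope*distance + intercept
speed_km_per_yr = -1.0 / slope                         # ages fall with distance, so slope < 0

print(f"slope: {slope:.3f} yr/km, implied front speed: {speed_km_per_yr:.2f} km/yr")
```

With the real radiocarbon determinations, the same fit gives roughly 1 km/yr, the figure quoted above.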
Analysis of mitochondrial DNA
Since the original human expansions out of Africa 200,000 years ago, different prehistoric and historic migration events have taken place in Europe. Considering that the movement of the people implies a consequent movement of their genes, it is possible to estimate the impact of these migrations through the genetic analysis of human populations. Agricultural and husbandry practices originated 10,000 years ago in a region of the Near East known as the Fertile Crescent. According to the archaeological record this phenomenon, known as “Neolithic”, rapidly expanded from these territories into Europe. However, whether this diffusion was accompanied or not by human migrations is greatly debated. Mitochondrial DNA – a type of maternally inherited DNA located in the cell cytoplasm – was recovered from the remains of Pre-Pottery Neolithic B (PPNB) farmers in the Near East and then compared to available data from other Neolithic populations in Europe and also to modern populations from South Eastern Europe and the Near East. The obtained results show that substantial human migrations were involved in the Neolithic spread and suggest that the first Neolithic farmers entered Europe following a maritime route through Cyprus and the Aegean Islands.
- Map of the spread of Neolithic farming cultures from the Near-East to Europe, with dates.
- Modern distribution of the haplotypes of PPNB farmers
- Genetic distance between PPNB farmers and modern populations
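The comparison between ancient PPNB farmers and later populations described above boils down to measuring how far apart two populations' haplogroup frequency profiles are. The sketch below computes a chord-style distance between hypothetical frequency vectors; the population labels and numbers are invented for illustration and are not the published values.

```python
# Rough sketch: compare populations by the distance between their mtDNA
# haplogroup frequency profiles. All frequencies below are HYPOTHETICAL.
import numpy as np

freqs = {
    "PPNB farmers (ancient)": np.array([0.20, 0.25, 0.15, 0.15, 0.15, 0.10]),
    "Modern Near East":       np.array([0.25, 0.20, 0.05, 0.15, 0.20, 0.15]),
    "Modern SE Europe":       np.array([0.40, 0.10, 0.02, 0.13, 0.15, 0.20]),
}

def chord_distance(p, q):
    # Cavalli-Sforza-style chord distance between two frequency vectors.
    return np.sqrt(2.0 * (1.0 - np.sum(np.sqrt(p * q))))

ancient = freqs["PPNB farmers (ancient)"]
for name, f in freqs.items():
    if name != "PPNB farmers (ancient)":
        print(f"{name}: distance to PPNB farmers = {chord_distance(ancient, f):.3f}")
```

Smaller distances are read as closer genetic affinity; in the real study such comparisons underpin the proposed maritime route through Cyprus and the Aegean.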
The earliest Neolithic sites in South Asia are Bhirrana in Haryana, dated to 7570–6200 BCE, and Mehrgarh, dated to between 6500 and 5500 BCE, in the Kachi plain of Baluchistan, Pakistan; the latter site has evidence of farming (wheat and barley) and herding (cattle, sheep and goats).
There is strong evidence for causal connections between the Near-Eastern Neolithic and that further east, up to the Indus Valley. Several lines of evidence support the idea of a connection between the Neolithic in the Near East and in the Indian subcontinent. The prehistoric site of Mehrgarh in Baluchistan (modern Pakistan) is the earliest Neolithic site in the north-west Indian subcontinent, dated as early as 8500 BCE. Neolithic domesticated crops at Mehrgarh were predominantly barley, with a small amount of wheat. There is good evidence for the local domestication of barley and the zebu cattle at Mehrgarh, but the wheat varieties are suggested to be of Near-Eastern origin, as the modern distribution of wild varieties of wheat is limited to the Northern Levant and Southern Turkey. A detailed satellite map study of a few archaeological sites in the Baluchistan and Khyber Pakhtunkhwa regions also suggests similarities in the early phases of farming with sites in Western Asia. Pottery prepared by sequential slab construction, circular fire pits filled with burnt pebbles, and large granaries are common to both Mehrgarh and many Mesopotamian sites. The postures of the skeletal remains in graves at Mehrgarh bear a strong resemblance to those at Ali Kosh in the Zagros Mountains of southern Iran. Despite their scarcity, the 14C and archaeological age determinations for early Neolithic sites in Southern Asia exhibit remarkable continuity across the vast region from the Near East to the Indian Subcontinent, consistent with a systematic eastward spread at a speed of about 0.65 km/yr.
In East Asia
The first agricultural center, in northern China, is believed to have been the homeland of the early Sino-Tibetan speakers, associated with the Houli, Peiligang, Cishan, and Xinglongwa cultures, clustered around the Yellow River basin. It was the domestication center for foxtail millet (Setaria italica) and broomcorn millet (Panicum miliaceum), with evidence of domestication of these species approximately 8,000 years ago. These species were subsequently widely cultivated in the Yellow River basin (7,500 years ago). Soybean was also domesticated in northern China 4,500 years ago. Orange and peach also originated in China; they were cultivated around 2500 BCE.
The second agricultural center, in southern China, was clustered around the Yangtze River basin. Rice was domesticated in this region, together with the development of paddy field cultivation, between 13,500 and 8,200 years ago.
There are two possible centers of domestication for rice. The first, and most likely, is in the lower Yangtze River, believed to be the homeland of early Austronesian speakers and associated with the Kuahuqiao, Hemudu, Majiabang, and Songze cultures. It is characterized by typical pre-Austronesian features, including stilt houses, jade carving, and boat technologies. Their diet was also supplemented by acorns, water chestnuts, foxnuts, and pig domestication. The second is in the middle Yangtze River, believed to be the homeland of the early Hmong-Mien speakers and associated with the Pengtoushan and Daxi cultures. Both of these regions were heavily populated and had regular trade contacts with each other, as well as with early Austroasiatic speakers to the west and early Kra-Dai speakers to the south, facilitating the spread of rice cultivation throughout southern China.
The millet and rice-farming cultures first came into contact with each other at around 9,000 to 7,000 BP, resulting in a corridor between the millet and rice cultivation centers where both crops were cultivated. At around 5,500 to 4,000 BP, there was increasing migration into Taiwan by the early Austronesian Dapenkeng culture, bringing rice and millet cultivation technology with them. During this period, there is evidence of large settlements and intensive rice cultivation in Taiwan and the Penghu Islands, which may have resulted in overexploitation. Bellwood (2011) proposes that this may have been the impetus for the Austronesian expansion, which started with the migration of Austronesian speakers from Taiwan to the Philippines at around 5,000 BP.
Austronesians carried rice cultivation technology to Island Southeast Asia along with other domesticated species. The new tropical island environments also had new food plants that they exploited. They carried useful plants and animals during each colonization voyage, resulting in the rapid introduction of domesticated and semi-domesticated species throughout Oceania. They also came into contact with the early agricultural centers of Papuan-speaking populations of New Guinea as well as the Dravidian-speaking regions of South India and Sri Lanka by around 3,500 BP. They acquired further cultivated food plants like bananas and pepper from them, and in turn introduced Austronesian technologies like wetland cultivation and outrigger canoes. During the 1st millennium CE, they also colonized Madagascar and the Comoros, bringing Southeast Asian food plants, including rice, to East Africa.
In Africa
On the African continent, three areas have been identified as independently developing agriculture: the Ethiopian highlands, the Sahel and West Africa. By contrast, agriculture in the Nile River Valley is thought to have developed from the original Neolithic Revolution in the Fertile Crescent. Many grinding stones are found with the early Egyptian Sebilian and Mechian cultures, and evidence has been found of a neolithic domesticated crop-based economy dating around 7,000 BP. Unlike the Middle East, this evidence appears as a "false dawn" to agriculture, as the sites were later abandoned, and permanent farming was then delayed until 6,500 BP with the Tasian and Badarian cultures and the arrival of crops and animals from the Near East.
Bananas and plantains, which were first domesticated in Southeast Asia, most likely Papua New Guinea, were re-domesticated in Africa possibly as early as 5,000 years ago. Asian yams and taro were also cultivated in Africa.
The most famous crop domesticated in the Ethiopian highlands is coffee. In addition, khat, ensete, noog, teff and finger millet were also domesticated in the Ethiopian highlands. Crops domesticated in the Sahel region include sorghum and pearl millet. The kola nut was first domesticated in West Africa. Other crops domesticated in West Africa include African rice, yams and the oil palm.
Agriculture spread to Central and Southern Africa in the Bantu expansion during the 1st millennium BCE to 1st millennium CE.
In the Americas
Maize (corn), beans and squash were among the earliest crops domesticated in Mesoamerica, with maize beginning about 4000 BCE, squash as early as 6000 BCE, and beans by no later than 4000 BCE. Potatoes and manioc were domesticated in South America. In what is now the eastern United States, Native Americans domesticated sunflower, sumpweed and goosefoot around 2500 BCE. Sedentary village life based on farming did not develop until the second millennium BCE, referred to as the formative period.
In New Guinea
Evidence of drainage ditches at Kuk Swamp on the borders of the Western and Southern Highlands of Papua New Guinea indicates cultivation of taro and a variety of other crops dating back to 11,000 BP. Two potentially significant economic species, taro (Colocasia esculenta) and yam (Dioscorea sp.), have been identified dating at least to 10,200 calibrated years before present (cal BP). Further evidence of bananas and sugarcane dates to 6,950 to 6,440 BCE. This was at the altitudinal limits of these crops, and it has been suggested that cultivation in more favourable ranges in the lowlands may have been even earlier. CSIRO has found evidence that taro was introduced into the Solomon Islands for human use from 28,000 years ago, which would make taro the earliest crop exploited by humans. Early agriculture in this region seems to have resulted in the spread of the Trans–New Guinea languages from New Guinea east into the Solomon Islands and west into Timor and adjacent areas of Indonesia. This seems to confirm the theories of Carl Sauer who, in "Agricultural Origins and Dispersals", suggested as early as 1952 that this region was a centre of early agriculture.
Domestication of animals
When hunter-gathering began to be replaced by sedentary food production, it became more efficient to keep animals close at hand. Therefore, it became necessary to bring animals permanently to the settlements, although in many cases there was a distinction between relatively sedentary farmers and nomadic herders. The animals' size, temperament, diet, mating patterns, and life span were factors in the desire and success in domesticating animals. Animals that provided milk, such as cows and goats, offered a source of protein that was renewable and therefore quite valuable. The animal's ability as a worker (for example ploughing or towing), as well as a food source, also had to be taken into account. Besides being a direct source of food, certain animals could provide leather, wool, hides, and fertilizer. Some of the earliest domesticated animals included dogs (East Asia, about 15,000 years ago), sheep, goats, cows, and pigs.
Domestication of animals in the Middle East
The Middle East served as the source for many animals that could be domesticated, such as sheep, goats and pigs. This area was also the first region to domesticate the dromedary. Henri Fleisch discovered and termed the Shepherd Neolithic flint industry from the Bekaa Valley in Lebanon and suggested that it could have been used by the earliest nomadic shepherds. He dated this industry to the Epipaleolithic or Pre-Pottery Neolithic, as it is evidently not Paleolithic, Mesolithic or even Pottery Neolithic. The presence of these animals gave the region a large advantage in cultural and economic development. As the climate in the Middle East changed and became drier, many of the farmers were forced to leave, taking their domesticated animals with them. It was this massive emigration from the Middle East that later helped distribute these animals to the rest of Afroeurasia. This emigration was mainly on an east–west axis of similar climates, as crops usually have a narrow optimal climatic range outside of which they cannot grow for reasons of light or rain changes. For instance, wheat does not normally grow in tropical climates, just as tropical crops such as bananas do not grow in colder climates. Some authors, like Jared Diamond, have postulated that this east–west axis is the main reason why plant and animal domestication spread so quickly from the Fertile Crescent to the rest of Eurasia and North Africa, while it did not reach through the north–south axis of Africa to the Mediterranean climates of South Africa, where temperate crops were successfully imported by ships only in the last 500 years. Similarly, the African zebu of central Africa and the domesticated bovines of the Fertile Crescent – separated by the dry Sahara Desert – were not introduced into each other's region.
Despite the significant technological advance, the Neolithic revolution did not lead immediately to a rapid growth of population. Its benefits appear to have been offset by various adverse effects, mostly diseases and warfare.
The introduction of agriculture has not necessarily led to unequivocal progress. The nutritional standards of the growing Neolithic populations were inferior to those of hunter-gatherers. Several ethnological and archaeological studies conclude that the transition to cereal-based diets caused a reduction in life expectancy and stature, an increase in infant mortality and infectious diseases, the development of chronic, inflammatory or degenerative diseases (such as obesity, type 2 diabetes and cardiovascular diseases) and multiple nutritional deficiencies, including vitamin deficiencies, iron deficiency anemia and mineral disorders affecting bones (such as osteoporosis and rickets) and teeth. Average height went down from 5'10" (178 cm) for men and 5'6" (168 cm) for women to 5'5" (165 cm) and 5'1" (155 cm), respectively, and it took until the twentieth century for average human height to return to pre-Neolithic Revolution levels.
The traditional view is that agricultural food production supported a denser population, which in turn supported larger sedentary communities, the accumulation of goods and tools, and specialization in diverse forms of new labor. The development of larger societies led to the development of different means of decision making and to governmental organization. Food surpluses made possible the development of a social elite who were not otherwise engaged in agriculture, industry or commerce, but dominated their communities by other means and monopolized decision-making. Jared Diamond (in The World Until Yesterday) identifies the availability of milk and cereal grains as permitting mothers to raise both an older (e.g. 3 or 4 year old) and a younger child concurrently. The result is that a population can increase more rapidly. Diamond, in agreement with feminist scholars such as V. Spike Peterson, points out that agriculture brought about deep social divisions and encouraged gender inequality. This social reshuffle is traced by historical theorists, like Veronica Strang, through developments in theological depictions. Strang supports her theory through a comparison of aquatic deities before and after the Neolithic Agricultural Revolution, most notably the Venus of Lespugue and the Greco-Roman deities such as Circe or Charybdis: the former venerated and respected, the latter dominated and conquered. The theory, supplemented by the widely accepted assumption from Parsons that “society is always the object of religious veneration”, argues that with the centralization of government and the dawn of the Anthropocene, roles within society became more restrictive and were rationalized through the conditioning effect of religion; a process that is crystallized in the progression from polytheism to monotheism.
Andrew Sherratt has argued that following upon the Neolithic Revolution was a second phase of discovery that he refers to as the secondary products revolution. Animals, it appears, were first domesticated purely as a source of meat. The Secondary Products Revolution occurred when it was recognised that animals also provided a number of other useful products. These included:
- hides and skins (from undomesticated animals)
- manure for soil conditioning (from all domesticated animals)
- wool (from sheep, llamas, alpacas, and Angora goats)
- milk (from goats, cattle, yaks, sheep, horses, and camels)
- traction (from oxen, onagers, donkeys, horses, camels, and dogs)
- guarding and herding assistance (dogs)
Sherratt argued that this phase in agricultural development enabled humans to make use of the energy possibilities of their animals in new ways, and permitted permanent intensive subsistence farming and crop production, and the opening up of heavier soils for farming. It also made possible nomadic pastoralism in semi-arid areas, along the margins of deserts, and eventually led to the domestication of both the dromedary and the Bactrian camel. Overgrazing of these areas, particularly by herds of goats, greatly extended the areal extent of deserts.
Living in one spot permitted the accrual of personal possessions and an attachment to certain areas of land. From such a position, it is argued, prehistoric people were able to stockpile food to survive lean times and trade unwanted surpluses with others. Once trade and a secure food supply were established, populations could grow, and society could diversify into food producers and artisans, who could afford to develop their trade by virtue of the free time they enjoyed because of a surplus of food. The artisans, in turn, were able to develop technology such as metal weapons. Such relative complexity would have required some form of social organisation to work efficiently, so it is likely that populations that had such organisation, perhaps such as that provided by religion, were better prepared and more successful. In addition, the denser populations could form and support legions of professional soldiers. Also, during this time property ownership became increasingly important to all people. Ultimately, Childe argued that this growing social complexity, all rooted in the original decision to settle, led to a second Urban Revolution in which the first cities were built.
Diet and health
Compared to foragers, Neolithic farmers' diets were higher in carbohydrates but lower in fibre, micronutrients, and protein. This led to more frequent dental caries, slower growth in childhood, and increased body fat; studies have consistently found that populations around the world became shorter after the transition to agriculture. This trend may have been exacerbated by the greater seasonality of farming diets and, with it, the increased risk of famine due to crop failure.
Throughout the development of sedentary societies, disease spread more rapidly than it had during the time in which hunter-gatherer societies existed. Inadequate sanitary practices and the domestication of animals may explain the rise in deaths and sickness following the Neolithic Revolution, as diseases jumped from animal to human populations. Some examples of infectious diseases spread from animals to humans are influenza, smallpox, and measles. Ancient microbial genomics has shown that progenitors of human-adapted strains of Salmonella enterica infected agro-pastoralists throughout Western Eurasia as early as 5,500 years ago, providing molecular evidence for the hypothesis that the Neolithization process facilitated the emergence of human disease. In concordance with a process of natural selection, the humans who first domesticated the big mammals quickly built up immunities to the diseases, as within each generation the individuals with better immunities had better chances of survival. In their approximately 10,000 years of shared proximity with animals such as cows, Eurasians and Africans became more resistant to those diseases compared with the indigenous populations encountered outside Eurasia and Africa. For instance, the populations of most Caribbean and several Pacific islands were completely wiped out by diseases. 90% or more of many populations of the Americas were wiped out by European and African diseases before recorded contact with European explorers or colonists. Some cultures, like the Inca Empire, did have a large domestic mammal, the llama, but llama milk was not drunk, nor did llamas live in a closed space with humans, so the risk of contagion was limited. According to bioarchaeological research, the effects of agriculture on physical and dental health in Southeast Asian rice farming societies from 4000 to 1500 BP were not detrimental to the same extent as in other world regions.
Jonathan C. K. Wells and Jay T. Stock have argued that the dietary changes and increased pathogen exposure associated with agriculture profoundly altered human biology and life history, creating conditions where natural selection favoured the allocation of resources towards reproduction over somatic effort.
In his book Guns, Germs, and Steel, Jared Diamond argues that Europeans and East Asians benefited from an advantageous geographical location that afforded them a head start in the Neolithic Revolution. Both shared the temperate climate ideal for the first agricultural settings, both were near a number of easily domesticable plant and animal species, and both were safer from attacks by other peoples than civilizations in the middle part of the Eurasian continent. Being among the first to adopt agriculture and sedentary lifestyles, and neighboring other early agricultural societies with whom they could compete and trade, both Europeans and East Asians were also among the first to benefit from technologies such as firearms and steel swords.
The dispersal of Neolithic culture from the Middle East has recently been associated with the distribution of human genetic markers. In Europe, the spread of the Neolithic culture has been associated with distribution of the E1b1b lineages and Haplogroup J that are thought to have arrived in Europe from North Africa and the Near East respectively. In Africa, the spread of farming, and notably the Bantu expansion, is associated with the dispersal of Y-chromosome haplogroup E1b1a from West Africa.
The presence in Lebanon of the E-P75 Y-DNA haplogroup (also known as E1b2), together with the Shepherd Neolithic culture of modern Lebanon, has been taken as evidence of a cultural and genetic bridge between the Neolithic cultures of Africa and the Levant, both branching from descendants of haplogroup E-M96, which is believed to have originated in East Africa.
- Jean-Pierre Bocquet-Appel (July 29, 2011). "When the World's Population Took Off: The Springboard of the Neolithic Demographic Transition". Science. 333 (6042): 560–561. Bibcode:2011Sci...333..560B. doi:10.1126/science.1208880. PMID 21798934. S2CID 29655920.
- Pollard, Elizabeth; Rosenberg, Clifford; Tigor, Robert (2015). Worlds together, worlds apart. 1 (concise ed.). New York: W.W. Norton & Company. p. 23. ISBN 978-0-393-25093-0.
- Compare: Lewin, Roger (2009-02-18). "35: The origin of agriculture and the first villagers". Human Evolution: An Illustrated Introduction (5th ed.). Malden, Massachusetts: John Wiley & Sons (published 2009). p. 250. ISBN 978-1-4051-5614-1. Retrieved 2017-08-20.
[...] the Neolithic transition involved increasing sedentism and social complexity, which was usually followed by the gradual adoption of plant and animal domestication. In some cases, however, plant domestication preceded sedentism, particularly in the New World.
- "International Stratigraphic Chart". International Commission on Stratigraphy. Archived from the original on 2013-02-12. Retrieved 2012-12-06.
- Armelagos, George J. (2014). "Brain Evolution, the Determinates of Food Choice, and the Omnivore's Dilemma". Critical Reviews in Food Science and Nutrition. 54 (10): 1330–1341. doi:10.1080/10408398.2011.635817. ISSN 1040-8398. PMID 24564590. S2CID 25488602.
- Violatti, Cristian (2 April 2018). "Neolithic Period". World History Encyclopedia.
- "The Slow Birth of Agriculture" Archived 2011-01-01 at the Wayback Machine, Heather Pringle
- "Wizard Chemi Shanidar". EMuseum. Minnesota State University. Archived from the original on June 18, 2008.
- The Cambridge World History of Food. Cambridge University Press. p. 46.
- Zalloua, Pierre A.; Matisoo-Smith, Elizabeth (6 January 2017). "Mapping Post-Glacial expansions: The Peopling of Southwest Asia". Scientific Reports. 7: 40338. Bibcode:2017NatSR...740338P. doi:10.1038/srep40338. ISSN 2045-2322. PMC 5216412. PMID 28059138.
- Diamond, J.; Bellwood, P. (2003). "Farmers and Their Languages: The First Expansions". Science. 300 (5619): 597–603. Bibcode:2003Sci...300..597D. CiteSeerX 10.1.1.1013.4523. doi:10.1126/science.1078208. PMID 12714734. S2CID 13350469.
- Childe, Vere Gordon (1936). Man Makes Himself. London: Watts & Company.
- Brami, Maxime N. (2019-12-01). "The Invention of Prehistory and the Rediscovery of Europe: Exploring the Intellectual Roots of Gordon Childe's 'Neolithic Revolution' (1936)". Journal of World Prehistory. 32 (4): 311–351. doi:10.1007/s10963-019-09135-y. ISSN 1573-7802.
- Graeme Barker (2009). The Agricultural Revolution in Prehistory: Why did Foragers become Farmers?. Oxford University Press. ISBN 978-0-19-955995-4.
- Thissen, L. "Appendix I, The CANeW 14C databases, Anatolia 10,000–5000 cal. BC." in: F. Gérard and L. Thissen (eds.), The Neolithic of Central Anatolia. Internal developments and external relations during the 9th–6th millennia cal BC, Proc. Int. CANeW Round Table, Istanbul 23–24 November 2001, (2002)
- Denham, Tim P.; Haberle, S. G.; Fullagar, R; Field, J; Therin, M; Porch, N; Winsborough, B (2003). "Origins of Agriculture at Kuk Swamp in the Highlands of New Guinea" (PDF). Science. 301 (5630): 189–193. doi:10.1126/science.1085255. PMID 12817084. S2CID 10644185.
- "The Kuk Early Agricultural Site". UNESCO World Heritage Centre.
- Kealhofer, Lisa (2003). "Looking into the gap: land use and the tropical forests of southern Thailand". Asian Perspectives. 42 (1): 72–95. doi:10.1353/asi.2003.0022. hdl:10125/17181. S2CID 162916204.
- Scarre, Chris (2005). "The World Transformed: From Foragers and Farmers to States and Empires" in The Human Past: World Prehistory and the Development of Human Societies (Ed: Chris Scarre). London: Thames and Hudson. p. 188. ISBN 0-500-28531-4
- Charles E. Redman (1978). Rise of Civilization: From Early Hunters to Urban Society in the Ancient Near East. San Francisco: Freeman.
- Hayden, Brian (1992). "Models of Domestication". In Anne Birgitte Gebauer and T. Douglas Price (ed.). Transitions to Agriculture in Prehistory. Madison: Prehistory Press. pp. 11–18.
- Sauer, Carl O. (1952). Agricultural origins and dispersals. Cambridge, MA: MIT Press.
- Binford, Lewis R. (1968). "Post-Pleistocene Adaptations". In Sally R. Binford and Lewis R. Binford (ed.). New Perspectives in Archaeology. Chicago: Aldine Publishing Company. pp. 313–342.
- Rindos, David (December 1987). The Origins of Agriculture: An Evolutionary Perspective. Academic Press. ISBN 978-0-12-589281-0.
- Richerson, Peter J.; Boyd, Robert (2001). "Was Agriculture Impossible during the Pleistocene but Mandatory during the Holocene?". American Antiquity. 66 (3): 387–411. doi:10.2307/2694241. JSTOR 2694241. S2CID 163474968.
- Wright, Ronald (2004). A Short History of Progress. Anansi. ISBN 978-0-88784-706-6.
- Anderson, David G; Albert C. Goodyear; James Kennett; Allen West (2011). "Multiple lines of evidence for possible Human population decline/settlement reorganization during the early Younger Dryas". Quaternary International. 242 (2): 570–583. Bibcode:2011QuInt.242..570A. doi:10.1016/j.quaint.2011.04.020.
- Grinin L.E. Production Revolutions and Periodization of History: A Comparative and Theoretic-mathematical Approach. / Social Evolution & History. Volume 6, Number 2 / September 2007
- Hole, Frank., A Reassessment of the Neolithic Revolution, Paléorient, Volume 10, Issue 10-2, pp. 49–60, 1984.
- Material was copied from this source, which is available under a Creative Commons Attribution 4.0 International License Nadel, Dani; Weiss, Ehud; Groman-Yaroslavski, Iris (23 November 2016). "Composite Sickles and Cereal Harvesting Methods at 23,000-Years-Old Ohalo II, Israel". PLOS ONE. 11 (11): e0167151. Bibcode:2016PLoSO..1167151G. doi:10.1371/journal.pone.0167151. ISSN 1932-6203. PMC 5120854. PMID 27880839.
- Enzel, Yehouda; Bar-Yosef, Ofer (2017). Quaternary of the Levant. Cambridge University Press. p. 335. ISBN 978-1-107-09046-0.
- Zohary, D., The mode of domestication of the founder crops of Southwest Asian agriculture. pp. 142–158 in D. R. Harris (ed.) The Origins and Spread of Agriculture and Pastoralism in Eurasia. UCL Press Ltd, London, 1996
- Zohary, D., Monophyletic vs. polyphyletic origin of the crops on which agriculture was founded in the Near East. Genetic Resources and Crop Evolution 46 (2) pp. 133–142
- Hillman, G. C. and M. S. Davies., Domestication rate in wild wheats and barley under primitive cultivation: preliminary results and archaeological implications of field measurements of selection coefficient, pp. 124–132 in P. Anderson-Gerfaud (ed.) Préhistoire de l'agriculture: nouvelles approches expérimentales et ethnographiques. Monographie du CRA 6, Éditions Centre Nationale Recherches Scientifiques: Paris, 1992
- Weiss, Ehud; Kislev, Mordechai E.; Hartmann, Anat (2006). "Autonomous Cultivation Before Domestication". Science. 312 (5780): 1608–1610. doi:10.1126/science.1127235. PMID 16778044. S2CID 83125044.
- "Tamed 11,400 Years Ago, Figs Were Likely First Domesticated Crop". ScienceDaily. 4 June 2006.
- Flannery, Kent V. (1969). "Origins and ecological effects of early domestication in Iran and the Near East". In Ucko, Peter John; Dimbleby, G. W. (eds.). The Domestication and Exploitation of Plants and Animals. New Brunswick, New Jersey: Transaction Publishers (published 2007). p. 89. ISBN 978-0-202-36557-2. Retrieved 2019-01-12.
Our earliest evidence for this new technology comes [...] from the lowland steppe of Khuzistan. [...] Once irrigation appeared, the steppe greatly increased its carrying capacity and became, in fact, the dominant growth centre of the Zagros region between 5500 and 4000 B.C.
- Lawton, H. W.; Wilke, P. J. (1979). "Ancient Agricultural Systems in Dry Regions of the Old World". In Hall, A. E.; Cannell, G. H.; Lawton, H.W. (eds.). Agriculture in Semi-Arid Environments. Ecological Studies. 34 (reprint ed.). Berlin: Springer Science & Business Media (published 2012). p. 13. ISBN 978-3-642-67328-3. Retrieved 2019-01-12.
Archeological investigations on the Deh Luran Plain of Iran have provided a model for the internal dynamics of the culture sequence of prehistoric Khuzistan [...]. Somewhere between 5500 and 5000 B.C. in the Sabz phase of the Deh Luran Plain, irrigation water was apparently diverted from stream channels in a fashion similar to that employed in early Mesopotamia.
- Material was copied from this source, which is available under a Creative Commons Attribution 4.0 International License Jones, Martin K.; Kovaleva, Olga (18 July 2018). "Barley heads east: Genetic analyses reveal routes of spread through diverse Eurasian landscapes". PLOS ONE. 13 (7): e0196652. Bibcode:2018PLoSO..1396652L. doi:10.1371/journal.pone.0196652. ISSN 1932-6203. PMC 6051582. PMID 30020920.
- Brown, T. A.; Jones, M. K.; Powell, W.; Allaby, R. G. (2009). "The complex origins of domesticated crops in the Fertile Crescent" (PDF). Trends in Ecology & Evolution. 24 (2): 103–109. doi:10.1016/j.tree.2008.09.008. PMID 19100651.
- Mithen, Steven (2006). After the ice : a global human history, 20.000–5.000 BC (1. paperback ed.). Cambridge, MA: Harvard Univ. Press. p. 517. ISBN 978-0-674-01570-8.
- Compiled largely with reference to: Weiss, E., Mordechai, E., Simchoni, O., Nadel, D., & Tschauner, H. (2008). Plant-food preparation area on an Upper Paleolithic brush hut floor at Ohalo II, Israel. Journal of Archaeological Science, 35 (8), 2400–2414.
- Ozkan, H.; Brandolini, A.; Schäfer-Pregl, R.; Salamini, F. (October 2002). "AFLP analysis of a collection of tetraploid wheats indicates the origin of emmer and hard wheat domestication in southeast Turkey". Molecular Biology and Evolution. 19 (10): 1797–801. doi:10.1093/oxfordjournals.molbev.a004002. PMID 12270906.
- van Zeist, W. Bakker-Heeres, J.A.H., Archaeobotanical Studies in the Levant 1. Neolithic Sites in the Damascus Basin: Aswad, Ghoraifé, Ramad., Palaeohistoria, 24, 165–256, 1982.
- Hopf, Maria., "Jericho plant remains" in Kathleen M. Kenyon and T. A. Holland (eds.) Excavations at Jericho 5, pp. 576–621, British School of Archaeology at Jerusalem, London, 1983.
- Jacques Cauvin (2000). The Birth of the Gods and the Origins of Agriculture, p. 53. Cambridge University Press. ISBN 978-0-521-65135-6. Retrieved 15 August 2012.
- Riehl, Simone; Zeidi, Mohsen; Conard, Nicholas (2013-07-05). "Emergence of Agriculture in the Foothills of the Zagros Mountains of Iran". Science. 341 (6141): 65–7. Bibcode:2013Sci...341...65R. doi:10.1126/science.1236743. PMID 23828939. S2CID 45375155.
- Peltenburg, E.J.; Wasse, Alexander; Council for British Research in the Levant (2004). Maya Haïdar Boustani, Flint workshops of the Southern Beqa' valley (Lebanon): preliminary results from Qar'oun* in Neolithic revolution: new perspectives on southwest Asia in light of recent discoveries on Cyprus. Oxbow Books. ISBN 978-1-84217-132-5.
- L. Copeland; P. Wescombe (1966). Inventory of Stone-Age Sites in Lebanon: North, South and East-Central Lebanon. Imprimerie Catholique. p. 89.
- Bellwood 2004, pp. 68–69.
- Bellwood 2004, pp. 74, 118.
- Subbaraman, Nidhi (December 12, 2012). "Art of cheese-making is 7,500 years old". Nature News. doi:10.1038/nature.2012.12020. S2CID 180646880.
- Bellwood 2004, pp. 68–72.
- Consortium, the Genographic; Cooper, Alan (9 November 2010). "Ancient DNA from European Early Neolithic Farmers Reveals Their Near Eastern Affinities". PLOS Biology. 8 (11): e1000536. doi:10.1371/journal.pbio.1000536. ISSN 1545-7885. PMC 2976717. PMID 21085689.
- Original text published under Creative Commons license CC BY 4.0: Shukurov, Anvar; Sarson, Graeme R.; Gangal, Kavita (2014). "The Near-Eastern Roots of the Neolithic in South Asia". PLOS ONE. 9 (5): e95714. Bibcode:2014PLoSO...995714G. doi:10.1371/journal.pone.0095714. PMC 4012948. PMID 24806472. Material was copied from this source, which is available under a Creative Commons Attribution 4.0 International License
- Material was copied from this source, which is available under a Creative Commons Attribution 4.0 International License Turbón, Daniel; Arroyo-Pardo, Eduardo (5 June 2014). "Ancient DNA Analysis of 8000 B.C. Near Eastern Farmers Supports an Early Neolithic Pioneer Maritime Colonization of Mainland Europe through Cyprus and the Aegean Islands". PLOS Genetics. 10 (6): e1004401. doi:10.1371/journal.pgen.1004401. ISSN 1553-7404. PMC 4046922. PMID 24901650.
- Coningham, Robin; Young, Ruth (2015). The Archaeology of South Asia: From the Indus to Asoka, c. 6500 BCE–200 CE. Cambridge University Press Cambridge World Archeology. p. 111. ISBN 978-1-316-41898-7.
- Material was copied from this source, which is available under a Creative Commons Attribution 4.0 International License Shukurov, Anvar; Sarson, Graeme R.; Gangal, Kavita (7 May 2014). "The Near-Eastern Roots of the Neolithic in South Asia". PLOS ONE. 9 (5): e95714. Bibcode:2014PLoSO...995714G. doi:10.1371/journal.pone.0095714. ISSN 1932-6203. PMC 4012948. PMID 24806472.
- He, Keyang; Lu, Houyuan; Zhang, Jianping; Wang, Can; Huan, Xiujia (7 June 2017). "Prehistoric evolution of the dualistic structure mixed rice and millet farming in China". The Holocene. 27 (12): 1885–1898. Bibcode:2017Holoc..27.1885H. doi:10.1177/0959683617708455. S2CID 133660098.
- Bellwood, Peter (9 December 2011). "The Checkered Prehistory of Rice Movement Southwards as a Domesticated Cereal – from the Yangzi to the Equator" (PDF). Rice. 4 (3–4): 93–103. doi:10.1007/s12284-011-9068-9. S2CID 44675525.
- Fuller, D. Q. (2007). "Contrasting Patterns in Crop Domestication and Domestication Rates: Recent Archaeobotanical Insights from the Old World". Annals of Botany. 100 (5): 903–924. doi:10.1093/aob/mcm048. PMC 2759199. PMID 17495986.
- Siddiqi, Mohammad Rafiq (2001). Tylenchida: Parasites of Plants and Insects. CABI.
- Thacker, Christopher (1985). The history of gardens. Berkeley: University of California Press. p. 57. ISBN 978-0-520-05629-9.
- Webber, Herbert John (1967–1989). Chapter I. History and Development of the Citrus Industry Archived 2016-05-23 at the Portuguese Web Archive in Origin of Citrus, Vol. 1. University of California
- Molina, J.; Sikora, M.; Garud, N.; Flowers, J. M.; Rubinstein, S.; Reynolds, A.; Huang, P.; Jackson, S.; Schaal, B. A.; Bustamante, C. D.; Boyko, A. R.; Purugganan, M. D. (2011). "Molecular evidence for a single evolutionary origin of domesticated rice". Proceedings of the National Academy of Sciences. 108 (20): 8351–83516. Bibcode:2011PNAS..108.8351M. doi:10.1073/pnas.1104686108. PMC 3101000. PMID 21536870.
- Zhang, Jianping; Lu, Houyuan; Gu, Wanfa; Wu, Naiqin; Zhou, Kunshu; Hu, Yayi; Xin, Yingjun; Wang, Can; Kashkush, Khalil (17 December 2012). "Early Mixed Farming of Millet and Rice 7800 Years Ago in the Middle Yellow River Region, China". PLOS ONE. 7 (12): e52146. Bibcode:2012PLoSO...752146Z. doi:10.1371/journal.pone.0052146. PMC 3524165. PMID 23284907.
- Bayliss-Smith, Tim; Golson, Jack; Hughes, Philip (2017). "Phase 4: Major Disposal Channels, Slot-Like Ditches and Grid-Patterned Fields". In Golson, Jack; Denham, Tim; Hughes, Philip; Swadling, Pamela; Muke, John (eds.). Ten Thousand Years of Cultivation at Kuk Swamp in the Highlands of Papua New Guinea. terra australis. 46. ANU Press. pp. 239–268. ISBN 978-1-76046-116-4.
- Mahdi, Waruno (1999). "The Dispersal of Austronesian boat forms in the Indian Ocean". In Blench, Roger; Spriggs, Matthew (eds.). Archaeology and Language III: Artefacts languages, and texts. One World Archaeology. 34. Routledge. pp. 144–179. ISBN 978-0-415-10054-0.
- Blench, Roger (2010). "Evidence for the Austronesian Voyages in the Indian Ocean" (PDF). In Anderson, Atholl; Barrett, James H.; Boyle, Katherine V. (eds.). The Global Origins and Development of Seafaring. McDonald Institute for Archaeological Research. pp. 239–248. ISBN 978-1-902937-52-6.
- Beaujard, Philippe (August 2011). "The first migrants to Madagascar and their introduction of plants: linguistic and ethnological evidence" (PDF). Azania: Archaeological Research in Africa. 46 (2): 169–189. doi:10.1080/0067270X.2011.580142. S2CID 55763047.
- Walter, Annie; Lebot, Vincent (2007). Gardens of Oceania. IRD Éditions-CIRAD. ISBN 978-1-86320-470-5.
- Diamond, Jared (1999). Guns, Germs, and Steel. New York: Norton Press. ISBN 978-0-393-31755-8.
- The Cambridge History of Africa
- Smith, Philip E.L., Stone Age Man on the Nile, Scientific American Vol. 235 No. 2, August 1976: "With the benefit of hindsight we can now see that many Late Paleolithic peoples in the Old World were poised on the brink of plant cultivation and animal husbandry as an alternative to the hunter-gatherer's way of life".
- Johannessen, S.; Hastorf, C. A. (eds.). Corn and Culture in the Prehistoric New World. Westview Press.
- Graeme Barker (2009). The Agricultural Revolution in Prehistory: Why Did Foragers Become Farmers?. Oxford University Press. p. 252. ISBN 978-0-19-955995-4. Retrieved 4 January 2012.
- Denham, Tim et al. (received July 2005) "Early and mid Holocene tool-use and processing of taro (Colocasia esculenta), yam (Dioscorea sp.) and other plants at Kuk Swamp in the highlands of Papua New Guinea" (Journal of Archaeological Science, Volume 33, Issue 5, May 2006)
- Loy, Thomas & Matthew Spriggs (1992), " Direct evidence for human use of plants 28,000 years ago: starch residues on stone artefacts from the northern Solomon Islands" (Antiquity Volume: 66, Number: 253, pp. 898–912)
- "The Development of Agriculture". Genographic Project. Archived from the original on 2016-04-14. Retrieved 2017-07-21.
- McGourty, Christine (2002-11-22). "Origin of dogs traced". BBC News. Retrieved 2006-11-29.
- Fleisch, Henri., Notes de Préhistoire Libanaise : 1) Ard es Saoude. 2) La Bekaa Nord. 3) Un polissoir en plein air. BSPF, vol. 63.
- Guns, Germs, and Steel: The Fates of Human Societies. Jared Diamond (1997).
- James C. Scott, Against the Grain: A Deep History of the Earliest States, New Haven: Yale UP, (2017): "The world's population in 10,000 BC, according to a careful estimate, was roughly 4 million. A full five thousand years later it has risen only to 5 million... One likely explanation for this apparent human progress in subsistence techniques together with a long period of demographic stagnation is that epidemiologically this was perhaps the most lethal period in human history."
- Sands DC, Morris CE, Dratz EA, Pilgeram A (2009). "Elevating optimal human nutrition to a central goal of plant breeding and production of plant-based foods". Plant Sci (Review). 177 (5): 377–389. doi:10.1016/j.plantsci.2009.07.011. PMC 2866137. PMID 20467463.
- O'Keefe JH, Cordain L (2004). "Cardiovascular disease resulting from a diet and lifestyle at odds with our Paleolithic genome: how to become a 21st-century hunter-gatherer". Mayo Clin Proc (Review). 79 (1): 101–108. doi:10.4065/79.1.101. PMID 14708953.
- Shermer, Michael (2001). The Borderlands of Science. Oxford University Press. p. 250.
- Hermanussen, Michael; Poustka, Fritz (July–September 2003). "Stature of early Europeans". Hormones (Athens). 2 (3): 175–178. doi:10.1159/000079404. PMID 17003019. S2CID 85210429.
- Eagly, Alice H.; Wood, Wendy (June 1999). "The Origins of Sex Differences in Human Behavior: Evolved Dispositions Versus Social Roles". American Psychologist. 54 (6): 408–423. doi:10.1037/0003-066x.54.6.408.
- Diamond, Jared (May 1987). "The Worst Mistake in the History of the Human Race". Discover Magazine: 64–66.
- Peterson, V. Spike (2014-07-03). "Sex Matters". International Feminist Journal of Politics. 16 (3): 389–409. doi:10.1080/14616742.2014.913384. ISSN 1461-6742. S2CID 147633811.
- Strang, Veronica (2014). ""Lording It over the Goddess: Water, Gender, and Human-Environmental Relations."". Journal of Feminist Studies in Religion. 30 (1): 85–109. doi:10.2979/jfemistudreli.30.1.85. JSTOR 10.2979/jfemistudreli.30.1.85. S2CID 143567275 – via JSTOR.
- Parsons, Talcott (1944). ""The Theoretical Development of the Sociology of Religion: A Chapter in the History of Modern Social Science."". Journal of the History of Ideas. 5 (2): 176–190. doi:10.2307/2707383. JSTOR 2707383 – via JSTOR.
- Sherratt 1981
- Larsen, Clark Spencer (2006-06-01). "The agricultural revolution as environmental catastrophe: Implications for health and lifestyle in the Holocene". Quaternary International. Impact of rapid environmental changes on humans and ecosystems. 150 (1): 12–20. Bibcode:2006QuInt.150...12L. doi:10.1016/j.quaint.2006.01.004. ISSN 1040-6182.
- Wells, Jonathan C. K.; Stock, Jay T. (2020). "Life History Transitions at the Origins of Agriculture: A Model for Understanding How Niche Construction Impacts Human Growth, Demography and Health". Frontiers in Endocrinology. 11: 325. doi:10.3389/fendo.2020.00325. ISSN 1664-2392. PMC 7253633. PMID 32508752.
- Furuse, Y.; Suzuki, A.; Oshitani, H. (2010). "Origin of measles virus: Divergence from rinderpest virus between the 11th and 12th centuries". Virology Journal. 7: 52. doi:10.1186/1743-422X-7-52. PMC 2838858. PMID 20202190.
- Key, Felix M.; Posth, Cosimo; Esquivel-Gomez, Luis R.; Hübler, Ron; Spyrou, Maria A.; Neumann, Gunnar U.; Furtwängler, Anja; Sabin, Susanna; Burri, Marta; Wissgott, Antje; Lankapalli, Aditya Kumar; Vågene, Åshild J.; Meyer, Matthias; Nagel, Sarah; Tukhbatova, Rezeda; Khokhlov, Aleksandr; Chizhevsky, Andrey; Hansen, Svend; Belinsky, Andrey B.; Kalmykov, Alexey; Kantorovich, Anatoly R.; Maslov, Vladimir E.; Stockhammer, Philipp W.; Vai, Stefania; Zavattaro, Monica; Riga, Alessandro; Caramelli, David; Skeates, Robin; Beckett, Jessica; Gradoli, Maria Giuseppina; Steuri, Noah; Hafner, Albert; Ramstein, Marianne; Siebke, Inga; Lösch, Sandra; Erdal, Yilmaz Selim; Alikhan, Nabil-Fareed; Zhou, Zhemin; Achtman, Mark; Bos, Kirsten; Reinhold, Sabine; Haak, Wolfgang; Kühnert, Denise; Herbig, Alexander; Krause, Johannes (March 2020). "Emergence of human-adapted Salmonella enterica is linked to the Neolithization process". Nature Ecology & Evolution. 4 (3): 324–333. doi:10.1038/s41559-020-1106-9. ISSN 2397-334X. PMC 7186082. PMID 32094538.
- Guns, Germs, and Steel: The Fates of Human Societies. Jared Diamond, 1997
- Halcrow, S. E.; Harris, N. J.; Tayles, N.; Ikehara‐Quebral, R.; Pietrusewsky, M. (2013). "From the mouths of babes: Dental caries in infants and children and the intensification of agriculture in mainland Southeast Asia". American Journal of Physical Anthropology. 150 (3): 409–420. doi:10.1002/ajpa.22215. PMID 23359102.
- "BBC – History – Ancient History in depth: Overview: From Neolithic to Bronze Age, 8000–800 BC". Retrieved 2017-07-21.
- Semino, O; et al. (2004). "Origin, Diffusion, and Differentiation of Y-Chromosome Haplogroups E and J: Inferences on the Neolithization of Europe and Later Migratory Events in the Mediterranean Area". American Journal of Human Genetics. 74 (5): 1023–1034. doi:10.1086/386295. PMC 1181965. PMID 15069642.
- Lancaster, Andrew (2009). "Y Haplogroups, Archaeological Cultures and Language Families: a Review of the Multidisciplinary Comparisons using the case of E-M35" (PDF). Journal of Genetic Genealogy. 5 (1).
- Bailey, Douglass. (2001). Balkan Prehistory: Exclusions, Incorporation and Identity. Routledge Publishers. ISBN 0-415-21598-6.
- Bailey, Douglass. (2005). Prehistoric Figurines: Representation and Corporeality in the Neolithic. Routledge Publishers. ISBN 0-415-33152-8.
- Balter, Michael (2005). The Goddess and the Bull: Catalhoyuk, An Archaeological Journey to the Dawn of Civilization. New York: Free Press. ISBN 0-7432-4360-9.
- Bellwood, Peter (2004). First Farmers: The Origins of Agricultural Societies. Blackwell. ISBN 0-631-20566-7.
- Bocquet-Appel, Jean-Pierre, editor and Ofer Bar-Yosef, editor, The Neolithic Demographic Transition and its Consequences, Springer (October 21, 2008), hardcover, 544 pages, ISBN 978-1-4020-8538-3, trade paperback and Kindle editions are also available.
- Cohen, Mark Nathan (1977)The Food Crisis in Prehistory: Overpopulation and the Origins of Agriculture. New Haven and London: Yale University Press. ISBN 0-300-02016-3.
- Diamond, Jared (1997). Guns, germs and steel. A short history of everybody for the last 13,000 years.
- Diamond, Jared (2002). "Evolution, Consequences and Future of Plant and Animal Domestication". Nature, Vol 418.
- Harlan, Jack R. (1992). Crops & Man: Views on Agricultural Origins ASA, CSA, Madison, WI. https://web.archive.org/web/20060819110723/http://www.hort.purdue.edu/newcrop/history/lecture03/r_3-1.html
- Wright, Gary A. (1971). "Origins of Food Production in Southwestern Asia: A Survey of Ideas" Current Anthropology, Vol. 12, No. 4/5 (Oct.–Dec., 1971), pp. 447–477
- Kuijt, Ian; Finlayson, Bill. (2009). "Evidence for food storage and predomestication granaries 11,000 years ago in the Jordan Valley". PNAS, Vol. 106, No. 27, pp. 10966–10970.
The core of food culture is connected to humanity's discovery of fire. Fire use has been documented for over half a million years, as demonstrated by the remains of hearths in the cave at Choukoutien (Zhoukoudian) in northern China. The "Peking Man" left signs of cooking around the fireside in the form of burnt bones from various animals. The food culture of Ancient China began at the dawn of the last Ice Age, which caused the land to become cold and dry. As the Ice Age lifted around 15,000-8,000 BCE, climate conditions began to improve dramatically. The area went from cold and dry to warm and wet, allowing for plant growth and the beginning of agricultural societies. Historians believe that this caused an increase in population, a rapid growth in environmental productivity, the beginning of trade, and enhanced communication. Fire itself was held to be holy: kitchens had altars for sacrifices and offerings to the kitchen deity, and China worshipped the kitchen god Zaojing.
By 500,000 years ago, hominids of the species Homo erectus intermittently inhabited a large cave on the outskirts of Zhoukoudian, about twenty-five miles from Beijing, in northern China. While there was no glacial action in Asia, winters would have been severe. Although it is uncertain whether hominids wintered this far north, this is where the first well-documented evidence of fire use was identified. Fire permitted hominids to cook, and thereby make use of, food ingredients that would otherwise be inedible or poisonous. Burned deer bones, some with slash marks, are evidence of the cave residents' meat consumption, but whether the meat was obtained by hunting or scavenging is unclear.
These early modern humans continued hunting until the end of the last glacial era, around 12,000 years ago. In Europe, the glaciers' decline culminated in the expansion of forests and a significant shift in eating patterns, with people hunting woodland animals, including deer and rabbits, and making better use of the sea's resources. By this period, however, populations in the Middle East and along the Yangtze River Valley in southern China were experimenting with plant agriculture, the beginning of the agricultural revolution and the foundation of settled, civilized life.
Microscopic analysis of burnt plant remains can determine whether they were domesticated or wild, suggesting whether the people were agrarians or hunter-gatherers. Charring leaves silica ghosts of the epidermal cells that, when examined under a scanning electron microscope, show telltale markers differentiating domesticated from wild crops. Such investigation of seeds, husks, and plant remains in China helped push back the archaeological record of rice's earliest domestication from 8,000 to 11,500 years ago, pinpointing the location to the middle Yangtze River.
In approximately 6,000 BCE, the earliest Chinese civilization began in the village of Ban Po, within the floodplain of the Yellow (Huang He) River in north central China. Unlike many other ancient agricultural societies, which used walls to protect their settlements and their grain stores, the village of Ban Po used a moat. The people of the village lived in huts with plastered walls and thatched roofs made of straw. Pigs and dogs were also raised within the village. Millet was considered a communal grain and was buried in hundreds of pits scattered throughout the village.
Shi Huangdi, who ordered the construction of the Great Wall of China, standardized written Chinese, which assisted in the unification of China. Chinese history begins much like the history in the Old Testament of the Bible. First there was Pangu, the creator, who made humans from the parasites on his own body. After Pangu died, he was followed by wise rulers whose inventions allowed China to become the first civilization in history. Fuxi was the first to domesticate animals and, as a result, also the first to invent marriage, domesticating women as well. Shennong invented medicine, trade, agriculture, the hoe and the plow. Huangdi, the Yellow Emperor, invented writing, the bow and arrow, the cart, and ceramics. Centuries later, Emperor Yao appeared as a wise ruler who decided not to pass the empire to his son, choosing instead a wise man named Shun as his successor. Shun in turn selected his minister Yu, who founded the Xia dynasty in 2205 BC; it ended in 1766 BC.
The Xia dynasty (c. 2200-1500 BCE), whose existence is debated among some scholars, and the Shang dynasty (c. 1500-1000 BCE) saw the rise of civilization in China, and with it a long-lasting social structure in which the rich grew richer and the poor poorer. The elite of Chinese civilization enjoyed an abundance of grain, wine, pork, and many other foods, while villagers were required to live on a diet of millet and mallows.
Confucius was a Chinese philosopher who lived from 551 to 479 BC. He declared that all would be right in the world if peasants were subordinate to their rulers, the elderly were respected, women respected men, and friends shared mutual respect. The philosopher is also credited with compiling the I Ching, or Book of Changes, as well as the Book of Songs, a compendium of court and peasant songs that offers interesting glimpses into the cuisine and culture of his era. For many centuries, the philosophy of Confucius served as the basis of Chinese government. The Confucian scholars who ruled China in the 15th century AD decided to refuse trade with other countries, a decision that proved very damaging to the Chinese economy.
Until recently, China had no fine-dining restaurants or a Western-style kitchen brigade. The court, however, enjoyed a completely different spread of imported specialties: the "eight delicacies." The selection varied but included camel's humps, apes' tongues, bears' paws, and other rare, sometimes semi-legendary animal parts. Bears' paws really were eaten, cooked for a long time until they turned gelatinous. The attraction was their rarity rather than their flavour, though bear hunters in Siberia and northern Canada still prize bears' paws. Rare varieties of mushrooms, bamboo shoots, and other vegetable foods were less flashy but probably more popular, as were complicated, painstaking preparations of ordinary animals like chicken, duck, and fish. Dishes from the empire's far reaches, such as Central Asia and Tibet, always graced the table, particularly when dignitaries from those areas were entertained. Exotica such as birds' nests and sea cucumbers came from southeast Asia. The court thus displayed its cosmopolitan, world-ruling strength and hospitality. Many imperial dishes survive to this day, and restaurants often re-create them. Historical records state that several emperors avoided elaborate meals, choosing plain food to signify the emperor's virtue; simplicity, restraint from displays of ego, and empathy for common citizens are values in both Chinese secular and philosophical traditions.
Central Asian influences ran high after the Han dynasty, and a wave of western foods reached China. The Silk Road, the main trading route through Central Asia, connected East and West; its golden period stretched from the Han through the Tang (618–907), Song (960–1279) and Yuan (1279–1368) dynasties. Persian bread, lettuce, walnuts, broad beans, and aromatic seasonings such as fenugreek and cumin came to China along with Galenic medical ideas and Indian Buddhist foodways. The pinnacle came under the Mongol Empire, China's Yuan Dynasty (1279–1368), when the Beijing court served dishes from Arabia, Persia, Turkestan, Kashmir, and the whole Mongol-dominated world: nomadic Mongol dishes such as roast wolf alongside Arabic delicacies such as lamb made with saffron and rosewater. The rest of the planet would not see such diverse eating until the twentieth century.
During the Yuan Dynasty (1271–1368), the period of Mongol rule in China, Eurasia came under one economic structure, with Mongol trading routes reaching every corner of Asia. By 1280 the Mongols controlled China, Central Asia, and Iran, with trading lines stretching into still more distant territories. Genghis Khan and his successors conquered most of Asia, seizing northern China by 1234, and his grandson Kublai claimed the Chinese throne by 1279. After the Mongol rulers overthrew Southwest Asia's last Abbasid caliph, most of the Asian continent was united for the first time in history.
But Asia was profoundly intertwined long before the Mongol khanates developed. In AD 751, the clash between the Arab and Chinese Tang empires at the Talas River in Central Asia, the only direct military encounter between the ancient world's major powers, contributed to Central Asia's eventual Islamization. These empires and their successors had, however, been economically linked for at least a millennium: commerce along the Silk Road predated the Han Dynasty. Historical records say that Zhang Qian, China's first diplomatic ambassador, entered Central Asia in 126 BC. The reports recorded in the Shiji, though embellished with depictions of magical beasts and other fictional items, testify to direct contact between Central Asia and East Asia in the early Han period. Although these records may suggest that Central Asia was foreign territory to the Han, the Central Asian mountains never impeded cultural flow. People passed through the valleys like water through a leaky faucet for at least two centuries before Zhang Qian's westward journey, and archaeological data suggest that these regions' food systems had influenced each other since the early third millennium BC.
A nativist reaction in the Ming dynasty (1368–1644) restored a Chinese cuisine far more refined and nuanced than that of the Han or even the Tang. The growth of an affluent middle class, particularly under the Song, had led to the creation of a haute cuisine. Merchants and bureaucrats rivalled one another in feasting; stores were well stocked, and domestic and foreign commerce flourished. In the early 1400s a Ming prince, Zhu Xiao, compiled an excellent and detailed guide to famine foods, which was distributed to local governors throughout the country.
However, the Ming era's most significant food development was the advancement of sea travel and contact with the New World. Much of China, particularly the south, was unfit for rice or wheat, and millet was a poor substitute. Suddenly maize, white potatoes and sweet potatoes arrived. Maize was widespread before the 1600s, and the most remarkable spread of New World crops came in the Qing dynasty (1644–1911): sweet potatoes had entered China by the late 1600s, and white potatoes by 1800. Peanuts, tomatoes, chiles, guavas, and other New World crops revolutionized agriculture by bringing nutrient-dense, productive, and easy-to-grow foods into China. Chiles in particular are remarkably rich in vitamins and minerals; they and other New World crops were essential to the population boom that took China from 50 million people in the early Ming to 1.25 billion today.
In Xinjiang, one hundred and fifty years of Russian influence and fifty years of harsh Chinese control have significantly altered local foodways, through the well-documented problems of Soviet collectivization and the homogenization of local diets. Xinjiang's tumultuous history continues, with Chinese migrants and workers, mainly from Sichuan, flooding into the area since the 1960s. Within China, Uighur cuisine is treated separately from the rest of the food culture and is classed as Muslim cuisine.
Present literature divides China into several broad regional or local culinary areas. Almost all such schemes include Fujianese fare, whose character lies in its variety of soups, sauces, vegetables, fish, fruits, mushrooms, spices, dried fruits, and a special treat named "Tribute Candy." This after-dinner or snack sweet is a mix of baked peanuts ground into maltose, or ground peanuts wrapped in a paper-like coating of glutinous rice. It is typically served with fresh fruit at the end of the meal and is appreciated by anyone who enjoys sweet, delicious food.
The freshness and texture of ingredients were once the most significant criteria for assessing food and restaurant quality, and cooking techniques developed to showcase fresh food through rapid cooking with simple but precise methods. Split-second timing was characteristic of the cuisine; when the boiling or frying sound changed, for example, the dish would be swiftly whisked off the heat. Foods are also briefly boiled before being stir-fried to retain tenderness during fast frying, particularly in Cantonese dishes.
Today, Yunnan is China's most ethnically diverse province, with approximately 40 minority groups whose languages belong to at least four entirely unrelated families. Most of these communities have simple cuisines centred on grain and local vegetables, but the Tai-speaking peoples of the far south have elaborate dishes similar to those of northern Thailand. In the violent period between the Ming and Qing dynasties, Sichuan's population fell by approximately 75% and was repopulated mostly from Hunan and neighbouring regions, which explains the remarkable culinary resemblance between Hunan and Sichuan. In turn, Yunnan gained many of its Chinese inhabitants from Sichuan, though they adapted to local circumstances and were influenced by non-Han nationalities.
As we mentioned in the previous episode, salt comes from salt lakes and wells. For two centuries, drilling deep salt wells was a significant and lucrative industry in Sichuan. Unlike sea salt, this well salt, like most foods in these mountainous areas, lacks iodine. Goitre was therefore widespread in historical times and was associated with consuming local salt instead of sea salt.
North of Sichuan, in Shaanxi, lamb is common. Shaanxi's main city is Xi'an, which was China's capital, under the name Chang'an, for several centuries before it eventually lost that status to Beijing during the Liao Dynasty. Xi'an has several distinctive recipes, many using barley, beef, onions and vinegar.
Fujianese foods are identifiable by the rich stocks and sauces used in both thick and thin soups. People usually eat two or three soups at main meals and five or six soups at banquets, and many common ingredients are marinated in wine or in the leftover sediment called lees or hung jiu. Fujian foods, also named Min, are popular wherever Min or Wu dialects are spoken, including parts of Fujian and the area around Shantou (Swatow) in southern Guangdong province, to which descendants of a well-known ostracized eighth-century politician are said to have fled.
This Teochiu variant, Chiuchow cuisine, is considered among the most exquisite. It uses plenty of seasonings, fish sauces, citrus marmalades, and satay-type sauce pastes. Fujianese-style food is also enjoyed on Hainan Island and in nearby areas. While common within China, this cuisine is little known outside it. Fujian's foods are strongly linked to Taiwanese cuisine, since many Fujianese fled to Taiwan during disruptive historical periods.
Fujianese cooks commonly serve bird's nest or shark fin as a main dish, presented as a rich, stew-like soup. Many dishes require slow preparation, are cooked in lard, and are seasoned more liberally than in neighbouring provinces. Fujianese food culture centres on three regular meals: mornings begin with rice soup (juk) and other small dishes and seasonings; two or three soups accompany main meals; and five or six soups, with an equal number of accompanying dishes, make up a banquet. Apart from the breakfast chou or juk, soups can be clear or thick and stew-like. Dipping sauces complement some foods, such as garlic crushed in a vinegar base when the key component is chicken, or maltose for fried fish. Dishes and soups are built on complicated stocks and sometimes a hot and sour sauce. Some are heavily coloured, mostly red from red wine lees.
Xiamen, formerly Amoy, is Fujian's second-largest city, known for its popia, a filled pancake much loved by Xiamenians. It is typically filled with cooked meat and vegetables, including bean sprouts, garlic shoots, carrots, and bamboo shoots; seaweed can also be added, and it is flavoured with hot mustard or plum sauce. Other familiar dishes are stir-fried Xiamen noodles and the Xiamen spring roll, both composed of vegetables, sprouts, peanuts, and pieces of grilled seaweed. Classic Fujianese dishes include diced and fried wine-marinated pork, steamed chicken in fermented tofu, drunken spare ribs, sweet and pungent litchi pork, deep-fried eel in wine lees, oyster omelette, stir-fried razor clams with ginger, spicy and sour squid broth, duck tongue with white and black fungi, fried peanuts, and dried longan soup with lotus seeds. Chi ping, a Hainanese chicken-rice dish, is also well known.
One food with a presumed "barbarian" origin is "Mongolian barbecue." The dish is not originally Mongolian; it is more likely a modern development from typical Muslim Chinese dishes. It consists of small cuts of meat, drenched in various piquant sauces and grilled over high heat on a metal brazier. Another popular Muslim dish with specialty restaurants is lamb hot-pot. Since the Chinese enjoy cooking their own meals, even at restaurants, it is common for restaurants to serve very finely sliced ingredients to be dipped into boiling stock at the table. The slices cook quickly and flavour the stock, which is consumed as soup at the end of the meal. Every province has its own variation of this hot-pot meal; Beijing's is lamb-based, and it is crucial to slice the lamb uniformly and finely. Chinese cooks spend years learning how to slice correctly, and this dish challenges them to prove their skills.
Zhejiang Province comprises the heart of the great Yangzi delta, historically China's wealthiest, best-educated, and most progressive region. Shanghai is now a separate metropolitan area; Suzhou, Hangzhou, and Ningbo form an arc around the Yangzi mouth, and each city has its culinary specialties. Shanghai rose to prominence only recently, essentially as a result of imperialism: the English established it as a treaty port in the nineteenth century. It has its own version of eastern cuisine, based on local traditions and incorporating influences from all over the Yangzi Valley. Zhejiang was also the centre of the ancient state of Wu, which was likely non-Chinese-speaking and retained a complex and elaborate culture distinct from that of the northwestern Central Plain. The word "Wu" also describes the area and its spoken language, a language typically misnamed a "dialect," although it is as distinct from Putonghua [poo-tong-hwa] as it is from Spanish. Zhejiang cuisine is the most elaborate version of a more widespread culinary style described as "eastern," found in the old state of Wu and neighbouring regions; besides Zhejiang, this area includes Jiangsu, Anhui, and part of Jiangxi.
Throughout history, the north typically held the political power, while the southeast usually held the resources. Zhejiang food's other distinctive virtues are a proclivity towards sweet and unctuous flavours; a rich consistency, with plenty of oil and thick sauces; and a commitment to freshness. The taste for sweetness appears to be ancient and tied to the classic fivefold division of the universe, a characteristic of Chinese thought from the early Han dynasty, in which sweet is an east-related flavour. Nor is this mere cosmological speculation; it reflects an awareness of reality. The west is associated with pungency, which is true of its cuisine to this day; the north with salt, the south with bitterness, the east with sourness, and the middle with sweetness. The freshness is often a product of the landscape: food preservation is not simple in the mild, humid environment, and in these areas it is typically also unnecessary, given the twelve-month growing and fishing season.
The warm and amphibious landscape made soup an appealing option. China's soup-eating heartland is Fujian, where it is customary to offer three or four different soups amid the main courses of a twelve-course banquet. Ginger, green onions, garlic, wine, sugar and vinegar are popular flavourings, and cooks tend to use fewer herbs and bean pastes than in western and southern China.
Language attests to the significance of protein complementarity in East Asia. The Chinese fan translates as 'grain' but also means 'food' in general. Ts'ai translates as 'edible leaf and stem vegetable' but also means 'whatever goes on rice to complete the meal.' The usage resembles Biblical references to 'bread' as any food required for sustenance. Complementarity relies on the soybean: rice or wheat is combined with hundreds of soybean products, including curds (tofu), soy milk, boiled or fermented soybeans, and soy sauce. In turn, soy sauce is usually produced with wheat flour, whose methionine strengthens an already high amino acid profile.
Moreover, soy protein digestion is enhanced by the traditional practice of fermenting or preparing the beans in water containing dissolved calcium or magnesium; this inactivates the trypsin inhibitors that occur naturally in soybeans, trypsin being a key digestive enzyme. Since the Song dynasty, sesame oil, itself a rich source of methionine, has been China's favourite cooking oil. Even in periods of famine, protein deficiency was uncommon in East Asia. Moving inland to the west or north, pulses and legumes such as red bean, broad bean, mung bean, peanut and common pea are combined with cereals to support complementarity, and fermented milk products like yogurt became important in Central Asia.
The tendency to classify foods and frame nuanced dietary rules is older than humanity itself. Many food systems worldwide are explicitly theological. Some, however, were more secular, intended to preserve or restore physical fitness. Such dietary systems were most influential in Greece, India, China, and the modern West.
Some may contend that the apple tree has influenced world history from the moment Eve took the first taste. Few people know that the intoxicating substance extracted from the opium poppy was one of the critical reasons China shut down its borders after the People's Republic of China was formed in 1949.
The military envoy Zhang Qian, returning from one of his diplomatic missions to Central Asia in the late second century BC, brought back a small, tender vine in a rawhide bag, shielding it from the hot sun as he travelled through the deserts. The vine bore grapes, and sweet grape wine thus came to China from Dawan, which most scholars believe to be the Fergana Valley in modern Uzbekistan. Data from a newly excavated grave in the Yanghai cemetery in Xinjiang indicate, however, that grape wine was known and prized there hundreds of years before Zhang Qian's legendary journey.
Another early Chinese traveller noted similar abundance along his route. Kiu Chang Chun left central China in 1220 with special permission from Genghis Khan, travelling from Mongolia to the Hindu Kush, into Afghanistan and back to China. A travelling companion, Li Zhichang, recorded the three-year trip. He found that as they reached the cities of Central Asia, citizens came out bearing gifts of grape wine and various kinds of fruit. He mentioned cotton and fruit cultivation near the medieval town on the site of modern Almaty, which he noted was called A-li-ma, from the local word for fruit, and he commented on the large irrigated apple orchards surrounding the town. He mentioned rice and vegetable production along the Amu Darya River and fruit orchards in the Tien Shan, noting peaches, walnuts, and a tiny apricot peach. He praised the fertility of the land around Samarkand, noting that all the grains and legumes grown in China also grew there, except buckwheat and soybean. He also praised the watermelons and the eggplants (a long, narrow, purple variety) of the Zerafshan region. Describing irrigated farming in Inner Mongolia's Yin Shan Mountains, he noted that although fruits ripened late because of the cold at higher elevations, efficient irrigated fields and gardens existed there. As the era of discovery and colonization began, a wave of European, Chinese, and Arab adventurers, traders, warriors, and scholars poured into the Central Asian trade cities, and many of them brought back tales of vibrant markets and rich varieties of fruit.
A year before his death on February 18, 1405, Amir Timur also sought to push eastward, mounting a large-scale military assault on China's Ming Dynasty. Although the campaign failed in its primary objective, it helped redirect Ming mercantile ambitions away from Central Asia. As the Ming turned to the maritime routes to the south, the ancient city of Zayton became a flourishing trade centre, dominated by Islamic traders, linking East Asia to Europe.
Shortly after 121 BC, Emperor Wudi (Wu of Han), who had commissioned Zhang Qian's expedition, extended the Great Wall to Dunhuang, and the Jade Gate frontier post became the westernmost of the Han military forts; its ruins lie only some eighty kilometres northwest of Dunhuang. With the establishment of military posts, caravan cities developed in western China, especially along the Hexi Corridor, the pass through the Qilian Mountains west of the Yellow River that leads toward Chang'an. Private and military records written on wooden slips have been retrieved from Han guard towers in the western regions. Around 1900, the archaeologist Aurel Stein recovered related records from refuse heaps at watchtowers near the Thousand Buddha Caves, along the route from the old gateway to Cathay. These records, bound with silk strips, list the duties assigned by each watchtower's military commander: planting fields, growing garden crops, building canals, and repairing domestic equipment. Han military incursions into the area brought a wave of agricultural innovation, particularly around the Taklimakan Desert, which increased the population and supported further trade. The Han Dynasty invaded the territory of Kroraina in 108 BC and set up provincial military outposts. Historians have noted that the mingling of Chinese soldiers with local people contributed to the growing popularity of oranges, pears, pomegranates and dates throughout the Han Empire; the Han expansion into northern Central Asia, however, contributed even more to the eastward flow of Central Asian grain.
In AD 386 the Tuoba invaded northern China and held it loosely as the Wei Dynasty until around 550. The Tuoba, who came from the north and are sometimes described as a nomadic people, retained strong cultural and commercial relations with Central Asia. Some scholars argue that the years 400-500 were among the most important for trade along the northern roads, feeding the tremendous appetite of the later Tang elites for exotic products. Before its fall, the Wei Dynasty's capital had an international quarter (as did the Tang capital l | https://www.ksgpodcast.com/post/module-5-ancient-chinese-food-history | 21
14 | Infrastructure built by slaves continues to generate massive wealth for state economies
American cities from Atlanta to New York still use buildings, roads, ports, and railroads built by slaves.
The fact that centuries-old remnants of slavery still support the economy of the United States suggests that reparations for slavery should go beyond government payments to the descendants of enslaved people and also account for the profit-generating infrastructure built by slaves.
Debates over compensating black Americans for slavery began shortly after the Civil War in the 1860s with promises of “40 acres and a mule.” A national conversation about reparations has resumed over the past decades. The definition of reparations varies, but most proponents view it as a two-part calculation that recognizes the role of slavery in building the country and directs resources to communities affected by slavery.
Through our research in geography and urban planning, we document the contemporary infrastructure created by enslaved black workers. Our study of what we call the “race landscape” shows how the United States' dominant position in the world economy traces directly back to slavery.
Looking at the railroads again
Although difficult to calculate, researchers estimate that much of the physical infrastructure built before 1860 in the southern United States was built with bonded labor. Railroads were a particularly critical form of infrastructure.
According to “The American South,” an in-depth history of the region, the railroads “offered solutions to the geographic barriers that divided the South,” including swamps, mountains and rivers. For the inland planters who had to get their goods to the port, trains were “the basic prerequisite for better times.”
Our archival research in Montgomery, Alabama, shows that enslaved laborers built and maintained the Montgomery Eufaula Railroad. This 81-mile-long railway, started in 1859, connected Montgomery with the Central Georgia Line, which served both the fertile cotton region of Alabama – cotton picked by slaves – and the textile factories of Georgia.
The Eufaula Railroad also gave Alabama commercial access to the Port of Savannah. Savannah was a key port for the cotton and rice trade, and slavery was integral to the city’s growth. Today, the deepwater port of Savannah remains one of the busiest container ports in the United States; among its main exports is cotton. The Eufaula Railway closed in the 1970s, but the company that financed its construction – Lehman Durr & Co., a major Southern cotton brokerage – existed well into the 20th century.
By examining court affidavits and city records located in the City of Montgomery Archives, we learned that the Montgomery Eufaula Railroad Company had received $1.8 million in loans from Lehman Durr & Co., one of its major backers. The founders of Lehman Durr & Co. also founded Lehman Brothers, one of Wall Street’s largest investment banks until it collapsed in 2008 during the U.S. financial crisis.
Slave-built railroads also spawned Georgia’s largest city, Atlanta. In the 1830s, Atlanta was the terminus of a railroad line that extended into the Midwest. Some of these same railway lines are still the engine of the Georgian economy. According to a 2013 state report, the railroads that passed through Georgia in 2012 transported more than US$198 billion in agricultural products and raw materials needed for industry and manufacturing in the United States.
Savannah, Atlanta, and Montgomery all show how, far from being an artifact of history, as some critics of reparations suggest, slavery has a tangible presence in the American economy. And not just in the South. Wall Street in New York is associated with stock trading, but in the 18th century, enslaved people were bought and sold there.
Even after New York City closed its slave markets, local businesses sold and shipped cotton grown in the slave south. Geographic research like ours could inform thinking about monetary reparations by helping to calculate the current financial value of slavery.
Like scholarly research linking slavery to modern mass incarceration, our work also suggests that direct payments to individuals may not fully capture the modern legacy of slavery. It points to a broader concept of reparations that reflects how slavery is embedded in the American landscape, still generating wealth.
These reparations could include government investments in aspects of American life where black people face disparities. Last year, the city council of Asheville, North Carolina, voted for “reparations in the form of community investment.” Priorities could include efforts to increase access to affordable housing and boost minority business ownership. Asheville will also explore strategies for closing the racial divide in healthcare.
It is very difficult, if not impossible, to calculate the total contemporary economic impact of slavery. But we see recognition that enslaved men, women, and children have built many of the cities, railways, and ports that power the American economy as a necessary part of such accounting. | https://contropiani2000.org/infrastructure-built-by-slaves-continues-to-generate-massive-wealth-for-state-economies/ | 21 |
66 | After the fall of the Ottoman Empire, the landscape of the Middle East went through a dramatic change whose consequences still produce almost daily reminders. The seemingly never-ending conflicts in the region can quickly deter, or entice, a person's desire for a deeper understanding of the historical and geographical factors that contribute to current unresolved matters. A focused look at the political and economic environment of the Fertile Crescent, beginning in the late Ottoman period and proceeding through the colonial and post-colonial eras, is pivotal for a comprehensive understanding.
Such an overview must highlight the various junctures at which the Middle East underwent a geographical shift from what it had previously known to what it would come to know, together with an understanding of the geographical ramifications for the local populations of each landscape. Before the Middle Eastern landscape was carved into colonial and post-colonial nationalist blocs, Palestine, among many other parts of the region, had a very different structure.
These structures functioned according to criteria that are relatively alien to our modern understanding of clearly outlined and defined borders. Palestine as we know it today was not so clearly defined, but rather belonged to a large region known under the Ottomans as bilad al-sham, also called the 'Fertile Crescent'. This Fertile Crescent comprised the modern countries of Jordan, Palestine, Syria, and Lebanon, as well as Egypt in some accounts.
The region of Palestine in the Ottoman period had little significance in terms of economic and cultural contribution.
Palestine's main significance was historic and religious: it lay on the pilgrimage route connecting Damascus, Mecca, Medina and parts of Jordan, the area also known as Transjordan. Regional landscapes were determined by a different set of criteria, as mentioned above, largely based on the Ottomans' assignment of regional administrative units to carry out tax collection and impose tariffs.
The following is an excerpt from Gudrun Kramer’s book A History of Palestine, describing the geopolitical status of modern day Palestine in the Ottoman period. “From 1516 to 1918, and hence for almost exactly four hundred years, Palestine (defined here as always by the boundaries of the later Mandate) was ruled by the Ottomans. Throughout the period it was not perceived as a distinct political, administrative, or economic unit within bilad al-sham.
The coastal strip, the valleys leading to the interior, and the mountainous interior itself formed “geohistoric” units that, conditioned equally by natural and political factors, each followed their own path of development. These “geohistoric” units did not coincide with the administrative units created by the Ottomans, who frequently altered and adapted their boundaries in response to changing political goals and demands…it was of paramount interest to the Ottoman central government to secure the pilgrimage route from Damascus to Mecca and Madina, a route that led through Transjordanian Territory.
This also helps explain why the districts of Jerusalem and Nablus were assigned to the province of Damascus, an arrangement held until the 1870s…one point is of primary importance here: the Ottoman administrative units were mostly relevant for the purposes of tax collection. At the district borders, tariffs could also be levied on certain goods.” The region of Palestine under Ottoman rule was largely underdeveloped, which restricted economic and cultural integration, although this was not resented by the local inhabitants.
The scarcity of paved roads and the difficulty of travel restricted Ottoman administrative personnel from collecting taxes and confiscating livestock. Tax collection could be carried out with full efficiency only in regions connected to the province of Damascus, which automatically increased Damascus's economic and cultural significance relative to the less accessible areas outside the province's boundaries. Kramer describes this further in the following excerpt. “There was no infrastructure to sustain an integrated ‘Palestine’ economy and society.
Yet in the eyes of the local population, there were good reasons not to develop a sound infrastructure: Roads not only facilitated the traffic of goods and persons to the benefit of the local community, or at least some of its members; they also gave the authorities better access to the local population and their possessions. To evade the regular imposition of taxes, possible recruitment, or confiscation of livestock, local communities sometimes decided against connections with the outside world when the choice was given to them.”
The inhabitants of the region of Palestine were mainly farmers and shepherds, largely connected through familial and tribal affiliation. They did not necessarily have a “Palestinian” identity per se; rather, they identified themselves as Syrian Arabs. It was only after the colonial interplay in the region that nationalism became a factor in identity. Before turning to the colonial history of the Fertile Crescent, we must note several internal attempts that consolidated the region of Palestine as well as its surrounding provinces and districts.
These attempts at unification were not rooted in any nationalist ideal, but rather served economic and military purposes. As the Ottoman hold over its vast empire began to weaken, several local figures took the initiative and ruled their districts in relative autonomy. This was seen in Iraq, Mecca, Egypt and the Fertile Crescent. These local rulers did not declare separation from the Ottomans and maintained their payments of taxes and tariffs. Nonetheless, this sporadic regional autonomy created pockets of prosperity and development that would not have emerged under the policies of the Ottomans.
By the same token, these regional struggles, largely based on economic monopolies over local cash crops, created tension within the Ottoman Empire and invited European intervention through several alliances that will be discussed later. Among the first attempts to unify the region of Palestine was that of Zahir al-Umar al-Zaidani. Zahir, a native of the province of Sidon, managed to create an economic and military base in his home region of Galilee. The province of Sidon comprised parts of the modern countries of Lebanon, Israel and Palestine.
One coastal city in particular, Acre, quickly prospered. Acre was the base for most exports to European countries. Zahir managed to monopolize the region's production and export of cotton, olive oil, grains and tobacco. His successful management of the local economy gave him the power to make alliances with the ruler of Egypt, and together they entered into an alliance with Russia. These European alliances did not always sit well with the sultan in Istanbul, especially since Russia and the Ottomans engaged in several wars and proxy wars.
As Zahir generated renewed interest in the Fertile Crescent, European migration and investment began to grow rapidly, with special attention to the Holy Land and, in particular, Jerusalem. This interest in Jerusalem brought religious tourists and others, who conducted extensive surveys and documentation of the holy city and its inhabitants. While this rapid change brought wealth and prosperity to a once stagnant region, thereby swelling the Ottoman tax coffers, the growing European interest was nevertheless alarming to the Ottomans.
They actively made an effort to restrict Jewish and Christian immigration to regional hot spots. This was neither an anti-Semitic nor an anti-Christian gesture, but rather a purely political concern over rising European interest in the Holy Land. The Ottomans knew well, as history clearly demonstrated, that Europeans and others had made continuous attempts over the years to capture the Holy Land for various economic and religious reasons.
The increased European interest in the region continued, as did the newfound economy of the Fertile Crescent and Egypt, but only after Zahir's small power base was toppled and Zahir himself killed by an Ottoman military force. Zahir was replaced by an individual called Ahmad al-Jazzar. Ahmad, a non-native Bosnian, managed to rise through the ranks and became governor of Sidon and Damascus. Unlike Zahir, Ahmad did not wish to defy the Ottomans, but was rather their direct representative. He nevertheless managed to operate in relative autonomy, managing the region in various ways.
He maintained the monopoly over local crops and successfully fought off Napoleon's French assault on Acre and the surrounding districts. In addition to defending against the French, he fought off a large group of Bedouins who had continuously destabilized the area and threatened the security of local settlers. Ahmad's policies and actions were almost identical to Zahir's: both created a military base and a strong economic base while simultaneously encouraging European trade and immigration.
What Zahir and Ahmad did was not a definitive turning point, but their actions primed future events that would dramatically change the geography of the Fertile Crescent and, for that matter, the entire Middle East. Egypt at the time had thrived on its cotton trade and had surpassed its neighbors in the Fertile Crescent. The Ottoman-appointed governor of Egypt, Muhammad Ali, decided to march his troops into the Fertile Crescent, and in 1831 he took over the local power base in the region of Palestine. Gudrun Kramer describes the Egyptian takeover in the following excerpt: “The Egyptian occupation, which lasted scarcely ten years, has come to be seen as a turning point in the modern history of Syria and Palestine, if not as the beginning of their modernization.” Although Egypt left local rulers to their business as usual, it enjoyed an increased income from various imposed taxes. Egypt further expanded the previously initiated “multicultural” environment, allowing the renovation and new construction of churches and synagogues and encouraging further European immigration.
There were a few attempts at revolt by certain locals, but they failed miserably. It was finally decided by Britain, Russia, Austria, and Prussia to expel the Egyptian presence from the Fertile Crescent. The reasoning behind this initiative was a European concern over an internal threat to the Ottoman Empire, oddly enough, given that the Ottomans were considered “the sick man of Europe.” Fast-forwarding to the colonial era, the “internal threat” the Europeans were concerned about was largely a threat to their own colonial interests.
This would be clearly demonstrated when Egypt was colonized by Britain and Tunisia by France shortly after the expulsion of the Egyptians from the Holy Land. As the Egyptian presence was eradicated, it was quickly replaced by a more subtle European presence. As the Fertile Crescent entered modernization, facilitated by heavy European investment and settlement, new benefits came to the locals. The locals were not just bystanders, but active participants in the new European technology and institutions that transformed the region through massive economic growth between the years of 1856 and 1880. Paved roads, railroads, the routine use of steamships, and modern irrigation and farming technology brought increased wealth and prosperity to the Holy Land. Up to this point, despite the clear display of European interest, along with Jewish and Zionist interest as well, the Fertile Crescent had made a positive turn in terms of its economy, living and education standards, and opportunities for the locals. This would quickly change in the coming years as the Ottoman Empire fell after WWI.
Amid the conflicting intentions of the McMahon-Hussein correspondence, the Sykes-Picot Agreement, and the Balfour Declaration, the Fertile Crescent was in for a major geographical change that would set in motion the status of modern-day Israel and Palestine, as well as neighboring districts and provinces. It is important to note, as alluded to above, that European and Jewish/Zionist interest did not begin only after the various European agreements and mandates, but had been slowly escalating up to this point.
Let us examine a few statistics to understand the situation further. Please refer to Appendix A. In the corresponding map, we can clearly see that between the years 1880-1914, before any colonial powers entered the Fertile Crescent, there was a mounting effort at Jewish settlement. Between 1880 and 1914, 66,000 new Jewish settlers, mainly arriving from Russia and Eastern Europe, lawfully purchased Palestinian land from various European, Turkish, and Arab landlords. This massive influx of new Jewish settlers and land purchases did not go unnoticed by locals.
The locals petitioned officials in Istanbul to restrict Jewish immigration and land purchase. The Young Turk government of the time did impose various restrictions, but they were routinely ignored by locals, who continued to sell land to Jewish and European buyers. Please refer to Appendix B. The corresponding map displays what England promised to the Sherif of Mecca. The McMahon-Hussein correspondence was meant to entice the Arabs to take up arms against the Turks; in return, England promised to secure a large portion of land to be claimed by an Arab caliphate.
The agreement was vague about Palestine and Jerusalem, with little mention of Jews and their fate in the region. This was of course alarming to the Arabs, as it was to the Sherif himself; nonetheless, the Sykes-Picot Agreement and the Balfour Declaration would take precedence over the land promised to Sherif Hussein, and the Arabs would quickly realize the extent of the political trickery they had succumbed to. Please refer to Appendix C. The corresponding map displays a radical partition of the Fertile Crescent as well as the Arabian Peninsula between the major European powers. Between the years 1917 and 1971, the Middle Eastern landscape became much more defined by imposed borderlines. The newly founded countries of Oman, the Aden Protectorate, Iraq, Transjordan, Bahrain, and Egypt were under the direct control of British authority; all of them would eventually gain their independence by 1971. Syria and Lebanon were under French control and would gain independence by 1944.
Unlike the other borders on the European maps mentioned above, the outline for the region of Palestine would not hold. As Britain and France failed to settle the region of Palestine, further events followed that continued to change the landscape leading up to modern-day Israel and Palestine. From 1917 to 1967, continuous attempts at settling borderlines and hopes for peace treaties did not go as planned, finally erupting in an official Arab-Israeli war that lasted six days, giving Israel an easy victory and bringing further change to the Palestinian landscape.
The first official compromise to the original British, French, and Russian proposed map was the partition plan of the Peel Commission of 1937. The commission decided to partition the land into three sections: 1) an Arab section, 2) a Jewish section, and 3) a British mandate section, also known as the international zone of Jerusalem. This new partition plan was continually contested by neighboring countries and local rebels, but with no real resolution of the problem. Soon thereafter, the issue was transferred to the newly founded United Nations.
In 1947, the UN proposed a partition plan dividing Palestine into six sections, none of them contiguous. The Arabs outright rejected any recognition of a Jewish claim to the land; conversely, the Jewish population of Palestine quickly accepted the plan, since at the least it would put their aspiration for a country into a tangible outline and would force a British withdrawal. The British duly withdrew, leaving it to Israeli and Arab soldiers to define the borders. Israel officially declared its independence in 1948.
After several armed struggles, Israel, through the armistice agreement with Transjordan, established the Green Line border, which allotted 77% of Palestinian land to Israel, gave Jordan control of East Jerusalem, and placed the Gaza Strip under Egyptian control. This was a drastic increase over the proposed UN partition plan, which had allotted 54% of the land to the Jewish population. Soon thereafter, in 1967, an Arab coalition led by the outspoken Egyptian leader Gamal Abd al-Nasir orchestrated an organized attack on Israel by Jordan, Egypt, Syria, Saudi Arabia, and Iraq.
Jordan, Syria and Egypt were easily defeated in a matter of six days, and East Jerusalem and the Gaza Strip were added to the landscape of Israel. Since 1967, much of the fighting and disagreement has been over illegal Jewish settlements in the West Bank and the later invasion of the recently returned Gaza Strip. The dramatic change in the Palestinian landscape has had largely negative ramifications, affecting the daily lives of Palestinians and their ability to secure the basic necessities of a decent life.
The Palestinian population, going back to the Ottoman period, consisted of simple farmers and shepherds who relied solely on fertile land to make their living. Since the beginning of the 19th century, many Palestinian farmers had been overburdened by debt, rising rents, and overly competitive subsidized Israeli prices. After 1967, the continued occupation of the West Bank and the Gaza Strip allowed Israel to impose policies that restricted Palestinians from farming, manufacturing, and trade. Bornstein explains this in the following excerpt: “Israeli policies restricted the building of factories and other facilities, only allowing a few industries (textiles, footwear, and chemicals) to develop in the West Bank and Gaza…the desire to protect Israeli-made products was so great that Israel even attempted to prevent the establishment or reactivation of Arab-owned factories if there was any danger that their products might compete with Israeli products…in almost all areas of economy, Israeli policy discouraged investment in the territories by making the cultivation, harvesting, processing, manufacturing, marketing and exporting of any crop or product contingent on the acquisition of a permit from the military authorities. To protect Israeli growers and manufacturers, permits would be withheld from those who wished to engage in economic activities that might compete with Israeli products.”
Bornstein points out that agriculture was the biggest economic sector of the West Bank, comprising 40% of its economy. Due to Israeli policies, this share has fallen to 30%, and subsistence farming has been transformed into market farming. The majority of work in the West Bank and Gaza has shifted to wage labour. In addition to the imposed policies, illegal settlements, which continue to disrupt any peace mediation, have risen drastically from 3,000 Jewish settlers in the late 1970s to an astounding 400,000 settlers by the beginning of the 21st century.
Due to the increased difficulty of finding a secure source of income, Palestinians resorted to wage work in Israel. Bornstein notes that employment in Israel made up 35% of all jobs for West Bank residents. This means daily travel for Palestinians, which subjects them to routine searches, interrogation, and unnecessary harassment at security checkpoints. As difficult as this is, major political events involving Israel and its neighbors have brought drastic restrictions on border crossings that further impede Palestinians from earning wages. Events such as the Egyptian-Syrian attack on Israel in 1973 and the intifada of 1987 resulted in dramatic decreases in the number of Palestinians allowed to cross the border into Israel.
Because of differences in employment, safety, and wage standards between Israel and Palestine, Israeli entrepreneurs take advantage of cheap labour and virtually non-existent human rights standards along the Green Line, setting up factories and employing locals. Bornstein describes this in the following excerpt: “As the Israeli government worked on developing high-tech industries in Israel, underdevelopment in the West Bank made it into a ‘Third World Colony.’ The territories became ‘an open market with no competition from outside. As economic integration is accompanied by strong legal, social and residential segregation, labour is very cheap, legal rights are very limited and social security is virtually non-existent.’ In this environment the few cracks allowing livelihood to take root were in service to Israel: ‘of these, two are worth mention.
The first was the development of sub-contracting, particularly in the clothing industry… [and the second was] garages in the Territories, which offered cheaper repair services for Israeli motorists.’…the border creates a boundary on the capitalists’ accountabilities for the well-being of foreign employees and their families. Neither big manufacturers nor their governments were under pressure to subsidize the education, health care, or income of their ‘foreign’ labor. Workers across the border were easily abandoned when no longer needed, without paying welfare or unemployment.” The CIA's country data for the Palestinian Territories give an estimated 2009 GDP of $12.95 billion, an inflation rate of roughly 11%, 60% of the population below the poverty line, a 24% unemployment rate, GDP per capita of $2,900, exports of $339 million, imports of $2.84 billion, total revenue of $1.149 billion, and total expenditure of $2.31 billion.
Taking a quick glance at these numbers, Palestine does not show much promise for growth or development, even leaving aside the unique internal dynamics that contribute to this statistical failure. Israel's statistics, by contrast, are the total opposite. The CIA's data for Israel's economy show a 2010 GDP of $217.1 billion, a GDP per capita of $29,500, an inflation rate of 4.5%, roughly 23% of the population below the poverty line (the majority of individuals in this figure are Palestinians), a 6.2% unemployment rate, exports of $54.35 billion, imports of $55.6 billion, total revenue of $45 billion, and total expenses of $58.6 billion. Israel's economy, as David H. Goldberg and Bernard Reich put it, is an economic “miracle”: it has continued to defy the odds despite minimal natural resources and overwhelming neighborly hostility. Israel remains one of the leading economies in the world, with living standards matching those of advanced European nations. Its economy was transformed by the technological boom of the 1990s, which opened up its exports of technological merchandise.
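To make the contrast in these figures concrete, the following is a minimal sketch in Python, using only the numbers quoted above (the rounded inflation and poverty percentages stand in for digits lost in the source), that computes the per-capita gap and each economy's trade balance:

    # Rough comparison using the figures quoted above: 2009 data for the
    # Palestinian Territories and 2010 data for Israel, in US dollars.
    palestine = {"gdp": 12.95e9, "gdp_per_capita": 2_900,
                 "exports": 339e6, "imports": 2.84e9}
    israel = {"gdp": 217.1e9, "gdp_per_capita": 29_500,
              "exports": 54.35e9, "imports": 55.6e9}

    # Per-capita gap: Israel's figure is roughly ten times Palestine's.
    ratio = israel["gdp_per_capita"] / palestine["gdp_per_capita"]
    print(f"GDP per capita ratio (Israel : Palestine) = {ratio:.1f} : 1")

    # Trade balance (exports minus imports) for each economy, in billions.
    for name, econ in (("Palestinian Territories", palestine), ("Israel", israel)):
        balance = (econ["exports"] - econ["imports"]) / 1e9
        print(f"{name}: trade balance {balance:+.2f} billion USD")

On these figures, Israel's GDP per capita is roughly ten times Palestine's, and the Palestinian trade deficit of about $2.5 billion is very large relative to a $12.95 billion GDP; that arithmetic is the quantitative core of the contrast drawn here.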
Israel's increasing prosperity is largely due to continued foreign investment and financial aid from international allies and wealthy Jews abroad. In the late 18th century and into the early 19th century, the local population of the Fertile Crescent had successfully entered modernity, and with it came wealth and prosperity. The region of Palestine was a highly successful producer and exporter of various goods, with Europe a substantial importer. The radical change in the region's geopolitical demography brought severe unrest and hindered its earlier strides toward modernity and positive contribution to the world market. The colonial interplay in the region created a new reality for the populations of the Fertile Crescent, North Africa, and the Arabian Peninsula.
A functioning model under the Ottomans was replaced by a haphazard and materially ambitious European plan, which ultimately produced a continuing regional struggle. The imposed British Mandate led to numerous regional wars, which remain unresolved. The main beneficiaries appear to be the population of Israel; conversely, the Palestinians continue to manage a difficult dynamic that was not the case at the beginning of the 19th century. Unresolved borderlines and the seemingly distant mirage of a solidified two-state solution continue to cause civil and economic turmoil within the Palestinian territories, directly affecting the population's quality of life.
| https://studymoose.com/changes-palestines-geographical-landscape-new-essay | 21
40 | A class action, also known as a class-action lawsuit, class suit, or representative action, is a type of lawsuit where one of the parties is a group of people who are represented collectively by a member or members of that group. The class action originated in the United States and is still predominantly a U.S. phenomenon, but Canada, as well as several European countries with civil law, have made changes in recent years to allow consumer organizations to bring claims on behalf of consumers.
In a typical class action, a plaintiff sues a defendant or a number of defendants on behalf of a group, or class, of absent parties. This differs from a traditional lawsuit, where one party sues another party, and all of the parties are present in court. Although standards differ between states and countries, class actions are most common where the allegations usually involve at least 40 people who have been injured by the same defendant in the same way. Instead of each damaged person bringing his or her own lawsuit, the class action allows all the claims of all class members—whether they know they have been damaged or not—to be resolved in a single proceeding through the efforts of the representative plaintiff(s) and appointed class counsel.
The antecedent of the class action was what modern observers call "group litigation", which appears to have been quite common in medieval England from about 1200 onward.:38 These lawsuits involved groups of people either suing or being sued in actions at common law. These groups were usually based on existing societal structures like villages, towns, parishes, and guilds. Unlike modern courts, the medieval English courts did not question the right of the actual plaintiffs to sue on behalf of a group or a few representatives to defend an entire group.:38–40
From 1400 to 1700, group litigation gradually switched from being the norm in England to the exception.:100 The development of the concept of the corporation led to the wealthy supporters of the corporate form becoming suspicious of all unincorporated legal entities, which in turn led to the modern concept of the unincorporated or voluntary association.:124–25 The tumultuous history of the Wars of the Roses and then the Star Chamber resulted in periods during which the common law courts were frequently paralyzed, and out of the confusion the Court of Chancery emerged with exclusive jurisdiction over group litigation.:125–32
By 1850, the Parliament of England had enacted several statutes on a case-by-case basis to deal with issues regularly faced by certain types of organizations, like joint-stock companies, and with the impetus for most types of group litigation removed, it went into a steep decline in English jurisprudence from which it never recovered.:210–12 It was further weakened by the fact that equity pleading in general was falling into disfavor, which culminated in the Judicature Acts of 1874 and 1875.:210–12 Group litigation was essentially dead in England after 1850.
Class actions survived in the United States thanks to the influence of Supreme Court Associate Justice Joseph Story, who imported it into U.S. law through summary discussions in his two equity treatises as well as his opinion in West v. Randall (1820).:219–20 However, Story did not necessarily endorse class actions, because he "could not conceive of a modern function or a coherent theory for representative litigation".:219–20
The oldest predecessor to the class-action rule in the United States was in the Federal Equity Rules, specifically Equity Rule 48, promulgated in 1842.
Where the parties on either side are very numerous, and cannot, without manifest inconvenience and oppressive delays in the suit, be all brought before it, the court in its discretion may dispense with making all of them parties, and may proceed in the suit, having sufficient parties before it to represent all the adverse interests of the plaintiffs and the defendants in the suit properly before it. But in such cases, the decree shall be without prejudice to the rights and claims of all the absent parties.
This allowed for representative suits in situations where there were too many individual parties (which now forms the first requirement for class-action litigation – numerosity). However, this rule did not allow such suits to bind similarly situated absent parties, which rendered the rule ineffective.:221 Within ten years, the Supreme Court interpreted Rule 48 in such a way so that it could apply to absent parties under certain circumstances, but only by ignoring the plain meaning of the rule.:221–222 In the rules published in 1912, Equity Rule 48 was replaced with Equity Rule 38 as part of a major restructuring of the Equity Rules, and when federal courts merged their legal and equitable procedural systems in 1938, Equity Rule 38 became Rule 23 of the Federal Rules of Civil Procedure.
A major revision of the FRCP in 1966 radically transformed Rule 23, made the opt-out class action the standard option, and gave birth to the modern class action. Entire treatises have been written since to summarize the huge mass of law that sprang up from the 1966 revision of Rule 23.:229 Just as medieval group litigation bound all members of the group regardless of whether they all actually appeared in court, the modern class action binds all members of the class, except for those who choose to opt out (if the rules permit them to do so).
The Advisory Committee that drafted the new Rule 23 in the mid-1960s was influenced by two major developments. First was the suggestion of Harry Kalven, Jr. and Maurice Rosenfield in 1941 that class-action litigation by individual shareholders on behalf of all shareholders of a company could effectively supplement direct government regulation of securities markets and other similar markets.:232 The second development was the rise of the civil rights movement, environmentalism and consumerism.:240–244 The groups behind these movements, as well as many others in the 1960s, 1970s and 1980s, all turned to class actions as a means for achieving their goals. For example, a 1978 environmental law treatise reprinted the entire text of Rule 23 and mentioned "class actions" 14 times in its index.:244–245
Businesses targeted by class actions for inflicting massive aggregate harm have sought ways to avoid class actions altogether. In the 1990s, the U.S. Supreme Court issued a number of decisions which strengthened the "federal policy favoring arbitration". In response, lawyers have added provisions to consumer contracts of adhesion called "collective action waivers", which prohibit those signing the contracts from bringing class-action suits. In 2011, the U.S. Supreme Court ruled in a 5–4 decision in AT&T Mobility v. Concepcion that the Federal Arbitration Act of 1925 preempts state laws that prohibit contracts from disallowing class-action lawsuits, which will make it more difficult for consumers to file class-action lawsuits. The dissent pointed to a saving clause in the federal act which allowed states to determine how a contract or its clauses may be revoked.
In two major 21st-century cases, the Supreme Court ruled 5–4 against certification of class actions due to differences in each individual members' circumstances: first in Wal-Mart v. Dukes (2011) and later in Comcast Corp. v. Behrend (2013).
Companies may insert the phrase "may elect to resolve any claim by individual arbitration" into their consumer and employment contracts to use arbitration and prevent class-action lawsuits.
Rejecting arguments that they violated employees’ rights to collective bargaining, and that modestly-valued consumer claims would be more efficiently litigated within the parameters of one lawsuit, the U. S. Supreme Court, in Epic Systems Corp. v. Lewis (2018), sanctioned the use of so-called "class action waivers". Citing its deference to freedom to contract principles, the Epic Systems opinion opened the door dramatically to the use of these waivers as a condition of employment, consumer purchases and the like. Some commentators in opposition to the ruling see it as a "death knell" to many employment and consumer class actions, and have increasingly pushed for legislation to circumvent it in hopes of reviving otherwise-underrepresented parties’ ability to litigate on a group basis. Supporters (mostly pro-business) of the high court’s ruling argue its holding is consistent with private contract principles. Many of those supporters had long-since argued that class action procedures were generally inconsistent with due process mandates and unnecessarily promoted litigation of otherwise small claims—thus heralding the ruling's anti-litigation effect.
In 2017, the U.S. Supreme Court issued its opinion in Bristol-Myers Squibb Co. v. Superior Court of California, 137 S. Ct. 1773 (2017), holding that over five hundred plaintiffs from other states could not bring a consolidated mass action against the pharmaceutical giant in the State of California. This opinion may arguably render nationwide mass actions and class actions impossible in any single state besides the defendant's home state.
In 2020, the 11th Circuit Court of Appeals found that incentive awards are impermissible. Incentive awards are relatively modest payments made to class representatives as part of a class settlement. The ruling was a response to an objector who claimed that Rule 23 required the fee petition to be filed before the deadline for class member objections, and that payments to the class representative violate doctrine from two U.S. Supreme Court cases from the 1800s.
As of 2010, there was no publicly maintained list of nonsecurities class-action settlements, although a securities class-action database exists in the Stanford Law School Securities Class Action Clearinghouse and several for-profit companies maintain lists of the securities settlements. One study of federal settlements required the researcher to manually search databases of lawsuits for the relevant records, although state class actions were not included due to the difficulty in gathering the information. Another source of data is U.S. Bureau of Justice Statistics Civil Justice Survey of State Courts, which offers statistics for the year 2005.
First, aggregation can increase the efficiency of the legal process, and lower the costs of litigation. In cases with common questions of law and fact, aggregation of claims into a class action may avoid the necessity of repeating "days of the same witnesses, exhibits and issues from trial to trial". Jenkins v. Raymark Indus. Inc., 782 F.2d 468, 473 (5th Cir. 1986) (granting certification of a class action involving asbestos).
Second, a class action may overcome "the problem that small recoveries do not provide the incentive for any individual to bring a solo action prosecuting his or her rights". Amchem Prods., Inc. v. Windsor, 521 U.S. 591, 617 (1997) (quoting Mace v. Van Ru Credit Corp., 109 F.3d 388, 344 (7th Cir. 1997)). "A class action solves this problem by aggregating the relatively paltry potential recoveries into something worth someone's (usually an attorney's) labor." Amchem Prods., Inc., 521 U.S. at 617 (quoting Mace, 109 F.3d at 344). In other words, a class action ensures that a defendant who engages in widespread harm – but does so minimally against each individual plaintiff – must compensate those individuals for their injuries. For example, thousands of shareholders of a public company may have losses too small to justify separate lawsuits, but a class action can be brought efficiently on behalf of all shareholders. Perhaps even more important than compensation is that class treatment of claims may be the only way to impose the costs of wrongdoing on the wrongdoer, thus deterring future wrongdoing.
Third, class-action cases may be brought to purposely change behavior of a class of which the defendant is a member. Landeros v. Flood (1976) was a landmark case decided by the California Supreme Court that aimed at purposefully changing the behavior of doctors, encouraging them to report suspected child abuse. Otherwise, they would face the threat of civil action for damages in tort proximately flowing from the failure to report the suspected injuries. Previously, many physicians had remained reluctant to report cases of apparent child abuse, despite existing law that required it.
Fourth, in "limited fund" cases, a class action ensures that all plaintiffs receive relief and that early-filing plaintiffs do not raid the fund (i.e., the defendant) of all its assets before other plaintiffs may be compensated. See Ortiz v. Fibreboard Corp., 527 U.S. 815 (1999). A class action in such a situation centralizes all claims into one venue where a court can equitably divide the assets amongst all the plaintiffs if they win the case.
Finally, a class action avoids the situation where different court rulings could create "incompatible standards" of conduct for the defendant to follow. See Fed. R. Civ. P. 23(b)(1)(A). For example, a court might certify a case for class treatment where a number of individual bond-holders sue to determine whether they may convert their bonds to common stock. Refusing to litigate the case in one trial could result in different outcomes and inconsistent standards of conduct for the defendant corporation. Thus, courts will generally allow a class action in such a situation. See, e.g., Van Gemert v. Boeing Co., 259 F. Supp. 125 (S.D.N.Y. 1966).
Whether a class action is superior to individual litigation depends on the case and is determined by the judge's ruling on a motion for class certification. The Advisory Committee Note to Rule 23, for example, states that mass torts are ordinarily "not appropriate" for class treatment. Class treatment may not improve the efficiency of a mass tort because the claims frequently involve individualized issues of law and fact that will have to be re-tried on an individual basis. See Castano v. Am. Tobacco Co., 84 F.3d 734 (5th Cir. 1996) (rejecting nationwide class action against tobacco companies). Mass torts also involve high individual damage awards; thus, the absence of class treatment will not impede the ability of individual claimants to seek justice. Other cases, however, may be more conducive to class treatment.
The preamble to the Class Action Fairness Act of 2005, passed by the United States Congress, found:
Class-action lawsuits are an important and valuable part of the legal system when they permit the fair and efficient resolution of legitimate claims of numerous parties by allowing the claims to be aggregated into a single action against a defendant that has allegedly caused harm.
There are several criticisms of class actions. The preamble to the Class Action Fairness Act stated that some abusive class actions harmed class members with legitimate claims and defendants that have acted responsibly, adversely affected interstate commerce, and undermined public respect for the country's judicial system.
Class members often receive little or no benefit from class actions. Examples cited for this include large fees for the attorneys, while leaving class members with coupons or other awards of little or no value; unjustified awards are made to certain plaintiffs at the expense of other class members; and confusing notices are published that prevent class members from being able to fully understand and effectively exercise their rights.
For example, in the United States, class lawsuits sometimes bind all class members with a low settlement. These "coupon settlements" (which usually allow the plaintiffs to receive a small benefit such as a small check or a coupon for future services or products with the defendant company) are a way for a defendant to forestall major liability by precluding many people from separately litigating their claims to recover reasonable compensation for their damages. However, existing law requires judicial approval of all class-action settlements, and in most cases class members are given a chance to opt out of a class settlement, though class members, despite opt-out notices, may be unaware of their right to opt out because they did not receive the notice, did not read it, or did not understand it.
The Class Action Fairness Act of 2005 addresses these concerns. Coupon settlements may be scrutinized by an independent expert before judicial approval in order to ensure that the settlement will be of value to the class members (28 U.S.C.A. 1712(d)). Further, if the action provides for settlement in coupons, "the portion of any attorney’s fee award to class counsel that is attributable to the award of the coupons shall be based on the value to class members of the coupons that are redeemed". 28 U.S.C.A. 1712(a).
Class action cases present significant ethical challenges. Defendants can hold reverse auctions and any of several parties can engage in collusive settlement discussions. Subclasses may have interests that diverge greatly from the class but may be treated the same. Proposed settlements could offer some groups (such as former customers) much greater benefits than others. In one paper presented at an ABA conference on class actions in 2007, authors commented that "competing cases can also provide opportunities for collusive settlement discussions and reverse auctions by defendants anxious to resolve their new exposure at the most economic cost".
Defendant class action
Although normally plaintiffs are the class, defendant class actions are also possible. For example, in 2005, the Roman Catholic Archdiocese of Portland in Oregon was sued as part of the Catholic priest sex-abuse scandal. All parishioners of the Archdiocese's churches were cited as a defendant class. This was done to include their assets (local churches) in any settlement. Where both the plaintiffs and the defendants have been organized into court-approved classes, the action is called a bilateral class action.
In a class action, the plaintiff seeks court approval to litigate on behalf of a group of similarly situated persons. Not every plaintiff looks for, or could obtain, such approval. As a procedural alternative, plaintiff's counsel may attempt to sign up every similarly situated person that counsel can find as a client. Plaintiff's counsel can then join the claims of all of these persons in one complaint, a so-called "mass action", hoping to have the same efficiencies and economic leverage as if a class had been certified.
Because mass actions operate outside the detailed procedures laid out for class actions, they can pose special difficulties for plaintiffs, defendants, and the court alike. For example, settlement of class actions follows a predictable path of negotiation with class counsel and representatives, court scrutiny, and notice. There may not be a way to uniformly settle all of the many claims brought via a mass action. Some states permit plaintiff's counsel to settle for all the mass action plaintiffs according to a majority vote, for example. Other states, such as New Jersey, require each plaintiff to approve the settlement of that plaintiff's own individual claims.
Class action legislation
In Argentina, class actions were recognized in the leading case "Halabi" (Supreme Court, 2009).
Australia and New Zealand
Class actions became part of the Australian legal landscape only when the Federal Parliament amended the Federal Court of Australia Act ("the FCAA") in 1992 to introduce the "representative proceedings", the equivalent of the American "class actions".
Likewise, class actions appeared slowly in the New Zealand legal system. However, a group can bring litigation through the action of a representative under the High Court Rules which provide that one or a multitude of persons may sue on behalf of, or for the benefit of, all persons "with the same interest in the subject matter of a proceeding". The presence and expansion of litigation funders have been playing a significant role in the emergence of class actions in New Zealand. For example, the "Fair Play on Fees" proceedings in relation to penalty fees charged by banks was funded by Litigation Lending Services (LLS), a company specializing in the funding and management of litigation in Australia and New Zealand. It was the biggest class-action suit in New Zealand history.
The Austrian Code of Civil Procedure (Zivilprozessordnung – ZPO) does not provide for a special proceeding for complex class-action litigation. However, Austrian consumer organizations (Verein für Konsumenteninformation (VKI) and the Federal Chamber of Labour / Bundesarbeitskammer) have brought claims on behalf of hundreds or even thousands of consumers. In these cases the individual consumers assigned their claims to one entity, who has then brought an ordinary (two party) lawsuit over the assigned claims. The monetary benefits were redistributed among the class. This technique, labelled as "class action Austrian style", allows for a significant reduction of overall costs. The Austrian Supreme Court, in a judgment, confirmed the legal admissibility of these lawsuits under the condition that all claims are essentially based on the same grounds.
The Austrian Parliament unanimously requested the Austrian Federal Minister for Justice to examine the possibility of new legislation providing for a cost-effective and appropriate way to deal with mass claims. Together with the Austrian Ministry for Social Security, Generations and Consumer Protection, the Justice Ministry opened the discussion with a conference held in Vienna in June 2005. With the aid of a group of experts from many fields, the Justice Ministry began drafting the new law in September 2005. With the individual positions varying greatly, a political consensus could not be reached.
Provincial laws in Canada allow class actions. All provinces permit plaintiff classes, and some permit defendant classes. Quebec was the first province to enact class proceedings legislation, in 1978. Ontario was next, with the Class Proceedings Act, 1992. As of 2008, 9 of 10 provinces had enacted comprehensive class actions legislation. In Prince Edward Island, where no comprehensive legislation exists, following the decision of the Supreme Court of Canada in Western Canadian Shopping Centres Inc. v. Dutton, 2 S.C.R. 534, class actions may be advanced under a local rule of court. The Federal Court of Canada permits class actions under Part V.1 of the Federal Courts Rules.
Legislation in Saskatchewan, Manitoba, Ontario, and Nova Scotia expressly or by judicial opinion has been read to allow for what are informally known as national "opt-out" class actions, whereby residents of other provinces may be included in the class definition and potentially be bound by the court's judgment on common issues unless they opt out in a prescribed manner and time. Court rulings have determined that this permits a court in one province to include residents of other provinces in the class action on an "opt-out" basis.
Judicial opinions have indicated that provincial legislative national opt-out powers should not be exercised to interfere with the ability of another province to certify a parallel class action for residents of other provinces. The first court to certify will generally exclude residents of provinces whose courts have certified a parallel class action. However, in the Vioxx litigation, two provincial courts certified overlapping class actions whereby Canadian residents were class members in two class actions in two provinces. Both decisions are under appeal.
The largest class action suit in Canada was settled in 2005 after Nora Bernard initiated efforts that led to an estimated 79,000 survivors of Canada's residential school system suing the Canadian government. The settlement amounted to upwards of $5 billion.
Chile approved class actions in 2004. The Chilean model is technically an opt-out issue class action, followed by a compensatory stage which can be collective or individual. This means that the class action is designed to declare the defendant generally liable with erga omnes effects if and only if the defendant is found liable, and the declaratory judgment can then be used to pursue damages in the same procedure or in individual ones in different jurisdictions. If the latter is the case, the liability cannot be relitigated; only the damages remain at issue. Under the Chilean procedural rules, one particular case works as an opt-out class action for damages: when defendants can identify and compensate consumers directly, for example because it is their banking institution, the judge can skip the compensatory stage and order redress directly. Since 2005 more than 100 cases have been filed, mostly by the Servicio Nacional del Consumidor (SERNAC), the Chilean consumer protection agency. Salient cases have been Condecus v. BancoEstado and SERNAC v. La Polar.
Under French law, an association can represent the collective interests of consumers; however, each claimant must be individually named in the lawsuit. On January 4, 2005, President Chirac urged changes that would provide greater consumer protection. A draft bill was proposed in April 2006, but did not pass.
Following the change of majority in France in 2012, the new government proposed introducing class actions into French law. The project of "loi Hamon" of May 2013 aimed to limit the class action to consumer and competition disputes. The law was passed on March 1, 2014.
Class actions are generally not permitted in Germany, as German law does not recognize the concept of a targeted class being affected by certain actions. Each plaintiff must therefore individually prove that they were affected by an action, present their individual damages, and prove the causal link between the action and those damages.
Joint litigation (Streitgenossenschaft) is a procedural device that may permit plaintiffs to sue together if they form a legal community with respect to the dispute or are entitled on the same factual or legal grounds. Such suits are not typically regarded as class actions, as each individual plaintiff is entitled to compensation for their own incurred damages and not as a result of being a member of a class.
Combination of court cases (Prozessverbindung) is another method which permits a judge to combine multiple separate court cases into a single trial with a single verdict. According to § 147 ZPO, this is only permissible if all cases are regarding the same factual and legal event and basis.
A genuine extension of the legal effect of a court decision beyond the parties involved in the proceedings is offered under corporate law. This procedure applies to the review of stock payoffs under the Stock Corporation Act (Aktiengesetz). Pursuant to Sec. 13 Sentence 2 of the Mediation Procedure Act (Spruchverfahrensgesetz), the court decision concerning the dismissal or direction of a binding arrangement of adequate compensation is effective for and against all shareholders, including those who have already agreed to a previous settlement in this matter.
Investor Model Case Proceedings
The Capital Investor Model Case Act (Kapitalanleger-Musterverfahrensgesetz) is an attempt to enable model cases to be brought by a large number of potentially affected parties in the event of disputes, limited to the investment market. In contrast to the U.S. class actions, each affected party must file a lawsuit in its own name in order to participate in the model proceedings.
Model Declaratory Action
Effective on November 1, 2018, the Code of Civil Procedure (Zivilprozessordnung) introduced the Model Declaratory Action (§ 606), which created the ability to bundle similar claims by many affected parties efficiently into one proceeding.
Registered Consumer Protection Associations can file – if they represent at least 10 individuals – for a (general) judicial finding on whether the factual and legal requirements for the claims or legal relationships at issue are met. Affected individuals have to register their claims in order to participate and to suspend the limitation period. Since these adjudications are of a general nature, each individual must still assert their claims in their own court proceedings. The competent court is bound by the Model Declaratory Action decision.
German law also recognizes the Associative Action (Verbandsklage), which is comparable to the class action and is predominantly used in environmental law. In civil law, the Associative Action is pursued by a body other than the claimant, which asserts and enforces the individual claims, and the claimant no longer controls the proceedings.
Class Action With Relation to the United States
Class actions can be brought by Germans in the U.S. for events in Germany if the facts of the case relate to the U.S. For example, in the case of the Eschede derailment, the lawsuit was allowed because several aggrieved parties came from the USA and had purchased rail tickets there.
Decisions of the Indian Supreme Court in the 1980s loosened strict locus standi requirements to permit the filing of suits on behalf of rights of deprived sections of society by public-minded individuals or bodies. Although not strictly "class action litigation" as it is understood in American law, Public Interest Litigation arose out of the wide powers of judicial review granted to the Supreme Court of India and the various High Courts under Article 32 and Article 226 of the Constitution of India. The sort of remedies sought from courts in Public Interest Litigation go beyond mere award of damages to all affected groups, and have sometimes (controversially) gone on to include Court monitoring of the implementation of legislation and even the framing of guidelines in the absence of Parliamentary legislation.
However, this innovative jurisprudence did not help the victims of the Bhopal gas tragedy, who were unable to fully prosecute a class-action litigation (as understood in the American sense) against Union Carbide due to procedural rules that would make such litigation impossible to conclude and unwieldy to carry out. Instead, the Government of India exercised its right of parens patriae to appropriate all the claims of the victims and proceeded to litigate on their behalf, first in the New York courts and later, in the Indian courts. Ultimately, the matter was settled between the Union of India and Union Carbide (in a settlement overseen by the Supreme Court of India) for a sum of ₹760 crore (US$110 million) as a complete settlement of all claims of all victims for all time to come.
Public interest litigation has now broadened in scope to cover larger and larger groups of citizens who may be affected by government inaction. Examples of this trend include the conversion of all public transport in the city of Delhi from diesel engines to CNG engines on the basis of the orders of the Delhi High Court; the monitoring of forest use by the High Courts and the Supreme Court to ensure that there is no unjustified loss of forest cover; and the directions mandating the disclosure of assets of electoral candidates for the Houses of Parliament and State Assembly.
The Supreme Court has observed that the PIL has tended to become a means to gain publicity or obtain relief contrary to constitutionally valid legislation and policy. Observers point out that many High Courts and certain Supreme Court judges are reluctant to entertain PILs filed by non-governmental organizations and activists, citing concerns of separation of powers and parliamentary sovereignty.
In Irish law, there is no such thing as a "class action" per se. Third-party litigation funding is prohibited under Irish law. Instead, there is the 'representative action' (Irish: gníomh ionadaíoch) or 'test case' (cás samplach). A representative action is "where one claimant or defendant, with the same interest as a group of claimants or defendants in an action, institutes or defends proceedings on behalf of that group of claimants or defendants."
Some test cases in Ireland have included:
- the CervicalCheck cancer scandal
- financial product misselling
- Damages claims brought by Irish hauliers against price-fixing by European truck makers
Italy has class action legislation. Consumer associations can file claims on behalf of groups of consumers to obtain judicial orders against corporations that cause injury or damage to consumers. These types of claims are increasing, and Italian courts have allowed them against banks that continue to apply compound interest on retail clients' current account overdrafts. The introduction of class actions was on the government's agenda. On November 19, 2007, the Senato della Repubblica passed a class-action law within the Finanziaria 2008, the government's financial planning document. As of 10 December 2007, under the Italian legislative system, the bill was before the Camera dei Deputati, the second house of the Italian Parliament, and had to be passed there as well to become effective law. In 2004, the Italian parliament had considered the introduction of a type of class action, specifically in the area of consumer law, but no such law was enacted; scholars demonstrated, however, that class actions (azioni rappresentative) do not conflict with Italian principles of civil procedure. Class action is regulated by art. 140 bis of the Italian consumers' code and has been in force since 1 July 2009.
Dutch law allows associations (verenigingen) and foundations (stichtingen) to bring a so-called collective action on behalf of other persons, provided they can represent the interests of such persons according to their by-laws (statuten) (section 3:305a Dutch Civil Code). All types of actions are permitted. This includes a claim for monetary damages, provided the event occurred after 15 November 2016 (pursuant to new legislation which entered into force on 1 January 2020). Most class actions over the past decade have been in the field of securities fraud and financial services. The acting association or foundation may come to a collective settlement with the defendant. The settlement may also include – and usually primarily consists of – monetary compensation of damages. Such a settlement can be declared binding for all injured parties by the Amsterdam Court of Appeal (section 7:907 Dutch Civil Code). The injured parties have an opt-out right during the opt-out period set by the Court, usually 3 to 6 months. Settlements involving injured parties from outside the Netherlands can also be declared binding by the Court. Since US courts are reluctant to take up class actions brought on behalf of injured parties not residing in the US who have suffered damages due to acts or omissions committed outside the US, combinations of US class actions and Dutch collective actions may come to a settlement that covers plaintiffs worldwide. An example of this is the Royal Dutch Shell Oil Reserves Settlement that was declared binding upon both US and non-US plaintiffs.
"Pozew zbiorowy" or class action has been allowed under Polish law since July 19, 2010. A minimum of 10 persons, suing based on the same law, is required.
Collective litigation has been allowed under Russian law since 2002. Basic criteria are, like in the US, numerosity, commonality, and typicality.
Spanish law allows nominated consumer associations to take action to protect the interests of consumers. A number of groups already have the power to bring collective or class actions: certain consumer associations, bodies legally constituted to defend the "collective interest" and groups of injured parties.
Recent changes to Spanish civil procedure rules include the introduction of a quasi-class action right for certain consumer associations to claim damages on behalf of unidentified classes of consumers. The rules require consumer associations to represent an adequate number of affected parties who have suffered the same harm. Also any judgment made by the Spanish court will list the individual beneficiaries or, if that is not possible, conditions that need to be fulfilled for a party to benefit from a judgment.
Swiss law does not allow for any form of class action. When the government proposed a new federal code of civil procedure in 2006, replacing the cantonal codes of civil procedure, it rejected the introduction of class actions, arguing that
[It] is alien to European legal thought to allow somebody to exercise rights on behalf of a large number of people if these do not participate as parties in the action. ... Moreover, the class action is controversial even in its country of origin, the U.S., because it can result in significant procedural problems. ... Finally, the class action can be openly or discretely abused. The sums sued for are usually enormous, so that the respondent can be forced to concede, if they do not want to face sudden huge indebtedness and insolvency (so-called legal blackmail).
England and Wales
The Civil Procedure Rules of the courts of England and Wales came into force in 1999 and have provided for representative actions in limited circumstances (under Part 19.6). These have not been much used, with only two reported cases at the court of first instance in the first ten years after the Civil Procedure Rules took effect. However, a sectoral mechanism was adopted by the Consumer Rights Act 2015, taking effect on October 1, 2015. Under the provisions therein, opt-in or opt-out collective procedures may be certified for breaches of competition law. This is currently the closest mechanism to a class action in England and Wales.
In the United States, the class representative, also called a lead plaintiff, named plaintiff, or representative plaintiff is the named party in a class-action lawsuit. Although the class representative is named as a party to the litigation, the court must approve the class representative when it certifies the lawsuit as a class action.
The class representative must be able to represent the interests of all the members of the class, by being typical of the class members and not having conflicts with them. He or she is responsible for hiring the attorney, filing the lawsuit, consulting on the case, and agreeing to any settlement. In exchange, the class representative may be entitled to compensation (at the discretion of the court) out of the recovery amount.
In federal courts, class actions are governed by Federal Rules of Civil Procedure Rule 23 and 28 U.S.C.A. § 1332(d). Cases in federal courts are only allowed to proceed as class actions if the court has jurisdiction to hear the case, and if the case meets the criteria set out in Rule 23. In the vast majority of federal class actions, the class is acting as the plaintiff. However, Rule 23 also provides for defendant class actions.
Typically, federal courts are thought to be more favorable for defendants, and state courts more favorable for plaintiffs. Many class actions are filed initially in state court. The defendant will frequently try to remove the case to federal court. The Class Action Fairness Act of 2005 increases defendants' ability to remove state cases to federal court by giving federal courts original jurisdiction for all class actions with damages exceeding $5,000,000 exclusive of interest and costs. The Class Action Fairness Act contains carve-outs for, among other things, shareholder class actions covered by the Private Securities Litigation Reform Act of 1995 and those concerning internal corporate governance issues (the latter typically being brought as shareholder derivative actions in the state courts of Delaware, the state of incorporation of most large corporations).
In securities class actions that allege violations of Section 11 of the Securities Act of 1933, "officers and directors are liable together with the corporation for material misrepresentations in the registration statement." To have "standing" to sue under Section 11 of the 1933 Act in a class action, a plaintiff must be able to prove that he can "trace" his shares to the registration statement and offering in question, as to which there is alleged a material misstatement or omission. In the absence of an ability to actually trace his shares, such as when securities issued at multiple times are held by the Depository Trust Company in a fungible bulk and physical tracing of particular shares may be impossible, the plaintiff may be barred from pursuing his claim for lack of standing.
Class actions may be brought in federal court if the claim arises under federal law or if the claim falls under 28 U.S.C. § 1332(d). Under § 1332(d)(2) the federal district courts have original jurisdiction over any civil action where the amount in controversy exceeds $5,000,000 and
- any member of a class of plaintiffs is a citizen of a State different from any defendant; or
- any member of a class of plaintiffs is a foreign state or a citizen or subject of a foreign state and any defendant is a citizen of a State; or
- any member of a class of plaintiffs is a citizen of a State and any defendant is a foreign state or a citizen or subject of a foreign state.
Nationwide plaintiff classes are possible, but such suits must have a commonality of issues across state lines. This may be difficult if the civil law in the various states lacks significant commonalities. Large class actions brought in federal court frequently are consolidated for pre-trial purposes through the device of multidistrict litigation (MDL). It is also possible to bring class actions under state law, and in some cases the court may extend its jurisdiction to all the members of the class, including out of state (or even internationally), as the key element is the jurisdiction that the court has over the defendant.
Class certification under Rule 23
For the case to proceed as a class action and bind absent class members, the court must certify the class under Rule 23 on a motion from the party wishing to proceed on a class basis. For a class to be certified, the moving party must meet all of the criteria listed under Rule 23(a), and at least one of the criteria listed under Rule 23(b).
The 23(a) criteria are referred to as numerosity, commonality, typicality, and adequacy. Numerosity refers to the number of people in the class. To be certified, the class has to have enough members that simply adding each of them as a named party to the lawsuit would be impractical. There is no bright-line rule to determine numerosity, but classes with hundreds of members are generally deemed to be sufficiently numerous. To satisfy commonality, there must be a common question of law or fact such that "determination of its truth or falsity will resolve an issue that is central to the validity of each one of the claims in one stroke". The typicality requirement ensures that the claims or defenses of the named plaintiff are typical of those of everyone else in the class. Finally, the adequacy requirement states that the named plaintiff must fairly and adequately represent the interests of the absent class members.
Rule 23(b)(3) allows class certification if "questions of law or fact common to class members predominate over any questions affecting only individual members, and that a class action is superior to other available methods for fairly and efficiently adjudicating the controversy."
Notice and settlement
Due process requires in most cases that notice describing the class action be sent, published, or broadcast to class members. As part of this notice procedure, there may have to be several notices: first, a notice giving class members the opportunity to opt out of the class, i.e. individuals who wish to proceed with their own litigation are entitled to do so, to the extent that they give timely notice to the class counsel or the court that they are opting out. Second, if there is a settlement proposal, the court will usually direct the class counsel to send a settlement notice to all the members of the certified class, informing them of the details of the proposed settlement.
Since 1938, many states have adopted rules similar to the FRCP. However, some states, like California, have civil procedure systems, which deviate significantly from the federal rules; the California Codes provide for four separate types of class actions. As a result, there are two separate treatises devoted solely to the complex topic of California class actions. Some states, such as Virginia, do not provide for any class actions, while others, such as New York, limit the types of claims that may be brought as class actions.
- Arbitration clause, a contract clause that attempts to prevent lawsuits by requiring arbitration in a private forum
- Bill of Peace, an English predecessor to class actions
- Class Action, 1991 American legal drama film
- Collective redress, a similar legal framework under development in the European Union
- Dukes v. Wal-Mart (2011), the largest civil rights class-action lawsuit to date
- List of class action lawsuits
- Public Interest Litigation, a similar system adopted in India
- Securities Class Action
- "Class Action". Wex Legal Dictionary. 2007-08-06. Retrieved 5 May 2015.
- Yeazell, Stephen C. (1987). From Medieval Group Litigation to the Modern Class Action. New Haven: Yale University Press.
- The New Federal Equity Rules Promulgated by the United States Supreme Court at the October Term, 1912: Together with the Cognate Statutory Provisions and Former Equity Rules; with an Introduction, Annotations and Forms, p. 52
- Deborah R. Hensler, Nicholas M. Pace, Bonita Dombey-Moore, Beth Giddens, Jennifer Gross, Erik K. Moller, Class Action Dilemmas: Pursuing Public Goals for Private Gain (Santa Monica: RAND, 2000), 10–11.
- Giles M. (2005). Opting Out of Liability Archived 2009-04-02 at the Wayback Machine. Michigan Law Review.
- Foreman C. "Supreme Court: AT&T can force arbitration, block class-action suits". Ars Technica.
- Reuters (2013). "Supreme Court rules for Comcast in class action".
- Silver-greenberg, Jessica; Gebeloff, Robert (2015-10-31). "Arbitration Everywhere, Stacking the Deck of Justice". The New York Times. ISSN 0362-4331. Retrieved 2015-10-31.
- Troutman, Eric J. (September 18, 2020). "Eleventh Circuit Court of Appeals Holds that Incentive Payments Commonly Awarded to Class Representatives are Impermissible in a Classwide Settlement". TCPA World. Retrieved September 19, 2020.
- Johnson v. NPAS Solutions (11th Cir. September 17, 2020).
- Fitzpatrick, Brian T. (2010-12-01). "An Empirical Study of Class Action Settlements and Their Fee Awards". Journal of Empirical Legal Studies. 7 (4): 811–846. doi:10.1111/j.1740-1461.2010.01196.x. ISSN 1740-1461.
- "Data Collection: Civil Justice Survey of State Courts (CJSSC)". Bureau of Justice Statistics. Office of Justice Programs. Retrieved 21 March 2018.
- Association of Trial Lawyers of America, Class Action Press Kit Archived 2006-12-03 at the Wayback Machine
- "FindLaw Class Action and Mass Tort Center: Legal Research: Cohelan on California Class Actions". Classaction.findlaw.com. 1966-07-01. Retrieved 2013-10-03.
- Richard Epstein, "Class Actions: The Need for a Hard Second Look"
- Michael Greve, "Harm-Less Lawsuits? What's Wrong with Consumer Class Actions" Archived 2009-07-15 at the Wayback Machine
- Jim Copland, "Class Actions"
- "Do Class Actions Benefit Class Members?". www.instituteforlegalreform.com. Retrieved 2016-01-17.
- "Ethical Issues In Class Action Settlements" (PDF). Retrieved 2013-10-03.
- Archived January 6, 2009, at the Wayback Machine
- Stuart Clark and Colin Loveday (2004). "Class Actions in Australia - An Overnew" (PDF). Clayton Utz. Retrieved 2 October 2015.
- "Slater and Gordon announces launch of New Zealand class action against ANZ". Slater and Gordon. 18 June 2013. Retrieved 2 October 2015.
- Meadows, Richard (11 March 2013). "Thousands sign up for bank class action". Stuff.co.nz. Fairfax Digital. Retrieved 12 March 2013.
- "Archived copy" (PDF). Archived from the original (PDF) on 2006-08-11. Retrieved 2006-07-29.CS1 maint: archived copy as title (link)
- Ontario: Tiboni v. Merck Frosst Canada Ltd., O.J. No. 2996. Saskatchewan: Wuttunee v. Merck Frosst Canada Ltd., 2008 SKQB 78
- Halifax Daily News article on Bernard in 2006 Archived 2008-09-30 at the Wayback Machine Archived at Arnold Pizzo McKiggan
- Barroilhet, Agustin (2012-01-30). "Class Actions in Chile". Rochester, NY: Social Science Research Network. SSRN 1995906.
- "Class Actions in Chile: Update | Global Class Actions Exchange". globalclassactions.stanford.edu. Retrieved 2016-10-25.
- "BancoEstado devolverá US$12 millones a clientes por cobro de comisiones en cuentas de ahorro - LA TERCERA". La Tercera (in Spanish). Retrieved 2016-10-25.
- Barroilhet, Agustin (2016-05-27). "Self-interested gatekeeping? Clashes between public and private enforcers in two Chilean class actions". Class Actions in Context. Edward Elgar Publishing. pp. 362–384. doi:10.4337/9781783470440.00027. ISBN 9781783470440. http://www.elgaronline.com/view/9781783470433.00027.xml
- LOI n° 2014-344 du 17 mars 2014 relative à la consommation, March 17, 2014, retrieved 2017-12-12
- (in German) http://kollektiverrechtsschutz.de/kapmug-verfahren/
- "Sachstand WD 7 –3000/070 –12" (PDF). Deutscher Bundestag: Wissenschaftliche Dienste. 2012-03-19. Retrieved 2019-10-20.
- "PIL A Boon Or A Bane". Legalserviceindia.com. Retrieved 2013-10-04.
- "Introduction to Public Interest Litigation". Karmayog.org. Archived from the original on 2013-10-05. Retrieved 2013-10-04.
- Justice M.B. Shah (2 May 2002). "Union of India Vs. Association for Democratic Reforms & Another" (PDF). Supreme Court of India Judgement on Civil Appeal No. 7178 of 2001.
- "Our Achievements". ADR. Archived from the original on 29 June 2009. Retrieved 2 November 2012.
- Boland, James (12 October 2018). "Revisiting the potential of class actions". The Irish Times. Retrieved 17 August 2020.
- "Law firms excluded from EU consumer class actions". 4 January 2019. Retrieved 17 August 2020.
- McKeown, Andrew (31 January 2020). "Chief Justice launches report on litigation funding and class actions". Irish Legal news. Retrieved 17 August 2020.
- McClusky, Aoife; McClements, April. "Class action procedure in Ireland". Lexology. Law Business Research. Retrieved 17 August 2020.
- Collins, Niall; Johnston, Peter; Gilvarry, Ailbhe; Farrell, Kevin (1 December 2019). "Class/collective actions in Ireland: overview". Practical Law. Thomson Reuters. Retrieved 17 August 2020.
- More information Class Action Italia
- "art. 140 bis" (PDF). Retrieved 2013-10-04.
- FAVA P., L'importabilità delle class actions in Italia, in Contratto e Impresa 1/2004 FAVA P., Class actions all'italiana: "Paese che vai, usanza che trovi" (l’esperienza dei principali ordinamenti giuridici stranieri e le proposte A.A.C.C. n. 3838 e n. 3839), in Corr. Giur. 3/2004; FAVA P., Class actions tra efficientismo processuale, aumento di competitività e risparmio di spesa: l’esame di un contenzioso seriale concreto (le S.U. sul rapporto tra indennità di amministrazione e tredicesima), in Corr. Giur. 2006, 535; FAVA P., Indennità di amministrazione e tredicesima: il "no secco" delle Sezioni Unite. Un caso pratico per valutare le potenzialità delle azioni rappresentative (class actions) nel contenzioso seriale italiano, Rass. Avv. Stato 2005]
- See also Class Action Italia, Dalle origini ad oggi Archived 2008-02-12 at the Wayback Machine and Italy introduces consumer class actions or visit Italian reference site for Class Action Class Action Community Archived 2010-01-31 at the Wayback Machine
- Message to Parliament on the Swiss Code of Civil Procedure, Federal Journal 2006 p. 7221 et seq. The quote, p. 7290, is the author's translation.
- "CPR, Part 19.6". Justice.gov.uk. 2013-09-27. Retrieved 2013-10-04.
- "Different class: UK representative actions suffer a setback".
- Mulheron, Rachael (2017). "The United Kingdom's New Opt-Out Class Action". Oxford Journal of Legal Studies. 37 (4): 814–843. doi:10.1093/ojls/gqx016.
- Coleman, Clive (2015-10-01). "Class action legal change for UK". BBC News. Retrieved 2018-04-04.
- Sullivan, E. Thomas (2009). Complex Litigation. LexisNexis. p. 441. ISBN 978-1422411469. Retrieved 17 December 2017.
- Larson, Aaron (14 January 2018). "What is a Class Action Lawsuit". ExpertLaw. Retrieved 21 March 2018.
- "Rule 23". federalrulesofcivilprocedure.org. Retrieved 2016-01-11.
- "Unintended Precedents". The American Prospect. February 28, 2010. Retrieved March 21, 2018.
- "Class Action Fairness Act Public Law 109-2, 119 Stat. 4". Frwebgate.access.gpo.gov. Retrieved 2013-10-03.
- 28 U.S.C.A. § 1332(d)
- "William B. Rubenstein, "Understanding the Class Action Fairness Act of 2005" (briefing paper)" (PDF). Retrieved 2013-10-03.
- Grundfest, Joseph A. (September 2019). "The Limits of Delaware Corporate Law: Internal Affairs, Federal Forum Provisions, and Sciabacucchi". Harvard Law School on Corporate Governance and Financial Regulation.
- "Bloomberg Industry Group". Bloomberg Industry.
- "Securities Fraud Plaintiff Need Not Show Reliance". www.americanbar.org.
- "Pleading Section 11 Liability for Secondary Offerings". www.americanbar.org.
- "CITIC Trust_FIC_Order_PACER.pdf" (PDF).
- Grundfest, Joseph A. (September 22, 2015). "Morrison, the Restricted Scope of Securities Act Section 11 Liability, and Prospects for Regulatory Reform". Journal of Corporation Law. 41 (1): 38 – via www.questia.com.
- 28 U.S.C. § 1332(d)(2)
- John G. Heyburn II. "A View from the Panel: Part of the Solution" (PDF). Tulane Law Review. 82: 2225–2331. Archived from the original (PDF) on 2012-04-26. Retrieved 2011-12-12.
- Greer, Marcy Hogan (2010). A Practitioner's Guide to Class Actions. Chicago: American Bar Association. pp. 57–59. ISBN 9781604429558.
- Wal-Mart Stores, Inc. v. Dukes, 131 S. Ct, 2541 (2011).
- Webber, David H. (2012). "The Plight of the Individual Investor". Northwestern University Law Review. 106: 181. Retrieved 21 November 2019. (Quoting Fed. R. Civ. P. 23(b)(3) (emphasis added)).
- See Cohelan on California Class Actions and California Class Actions: Practice and Procedure by Elizabeth Cabraser et al.
- Manual for Complex Litigation, Fourth
- Stanford Securities Class Action Clearinghouse
- Class Action Lawsuits: A Legal Overview for the 115th Congress Congressional Research Service
- Class Actions Seven Years After the Class Action Fairness Act: Hearing before the Subcommittee on the Constitution of the Committee on the Judiciary, House of Representatives, One Hundred Twelfth Congress, Second Session, June 1, 2012 | https://wiki-offline.jakearchibald.com/wiki/Class_action | 21 |
18 | Laissez faire can best be described as a political theory embracing the idea that "the best governments are the governments that govern the least," while also preventing the union from devolving into a state of anarchy and chaos. The United States and other global economic powers have historically taken a "hands-off" approach toward their economic policies in order to allow their economies to flourish on their own. On the contrary, when governments tend to heavily regulate business and trade, their economies experience a recession (or at times, even a depression). The fundamental principles of laissez faire were in some instances disregarded, and at other times advocated, by the United States Federal Government, particularly in economic topics such as railroad land grants, anti-trust laws, and the regulation of the nation's interstate commerce.
The railroad land grants that were issued by (or on behalf of) the federal government encouraged the idea of laissez faire, as the government dispersed tax money throughout the 1800s. Eventually, those indebted to the federal government began to repay their debts, and the nation's debt slowly began to diminish. Land grants were also awarded to railroad companies that were willing to build railroads in uninhabited areas that were not otherwise economically attractive. Rather than the government constructing nationalized railroads, it provided economic incentives for railroad companies to build in specific areas; such incentives included federal subsidies, land grants, and cash grants. The government did what was best for the American people rather than nationalize the railroad system and diminish any economic growth that an area might experience as a result of the privatization of the railroads. Such a decision by the national government certainly embraced the idea of laissez faire, as although the railroad companies that took advantage of the economic incentives benefited, so did the rest of the American people.
The role of laissez faire in the federal government's control of interstate commerce was seen both in the form of fostering and of dismissal. The government, to its own dismay, failed to foresee that by encouraging a limited number of railroad corporations to take advantage of its economic incentives, corporate monopolies would soon emerge. Monopolies did not, however, completely hinder the American people, as they provided employment opportunities and supplies to those who would otherwise be without. Even despite this positive attribute, monopolies on interstate trade and railroads were largely unsuccessful in allowing the nation to progress both economically and socially. The government at this time, however, did not make economic policies that were in the best interest of the American people, as small businesses and individuals were exploited by such an abuse of power. Instead, the railroad companies served the interests of American politicians and government officials, as individuals gained power due to the tremendous economic gains that these monopolies accrued. While the government had achieved more political and economic power, its constituents were overwhelmingly unsatisfied with its role in interstate commerce.
Ultimately, these constituents were somewhat successful, as their constant pressure on their representatives led Congress to finally pass legislation that would regulate monopolies and interstate trade. This piece of legislation, known as the Interstate Commerce Act, ironically took even more independence from the nation's citizens. The government no longer felt that it was its duty to intervene and protect its citizens' rights, and this resulted in further economic and social abuses by "big businesses". The government, however, was not solely to blame for such abuses, as the American people made accusations about the government's intentions, which caused the government to lose credibility.
Though the government tried to pass legislation that would regulate trusts and other economic abuses, by setting rigid anti-trust regulations the government essentially gained political power over the nation's commerce. The Sherman Antitrust Act was abused frequently throughout the late 1800s, and seldom was the legislation used properly.
The principles and theory of laissez faire were routinely disregarded by the federal government, as the government attempted to elevate its own power rather than protect its citizens. The government breached its own limitations and boundaries as it passed legislation such as the Interstate Commerce and Sherman Antitrust Acts. Largely, the government looked out for its own interest and did what it felt was the best decision for the country, rather than following the will of the American people. Laissez faire was routinely disregarded in the government's anti-trust legislation and regulations. The government could now intervene against what it deemed "unfair economic competition," thus leading to further abuse by government officials. Legislation known as the Sherman Antitrust Act allowed the government to close any business that had abused its power over its competition. When the government decided to pass such legislation, laissez faire was violated, despite its intentions being good. | https://dialang.org/laissez-faires-applicability-to-american-history/ | 21
Monasticism in the Middle Ages
During the twelfth and thirteenth centuries, the monasteries served as one of the great civilizing forces by being centers of education, preservers of learning, and hubs of economic development. Western monasticism was shaped by Saint Benedict of Nursia, who, in 529, established a monastery in southern Italy. He created a workable model for running a monastery that was used by most western monastic orders of the Early Middle Ages.
To the three vows of obedience, poverty, and chastity, which formed the foundation of most of the old monasteries, he added the vow of manual labor. Each monk did some useful work, such as plowing the fields, planting and harvesting the grain, tending the sheep, or milking the cows. Others worked at various trades in the workshops. No task was too lowly for them. Benedict's rules laid down a daily routine of monastic life in much greater detail than the preceding rules appear to have done (Cantor 167-168). The monks also believed in learning, and for centuries had the only schools in existence.
The churchmen were the only people who could read or write. Most nobles and kings could not even write their names. The monastery schools were only available to young nobles who wished to master the art of reading in Latin, and boys who wished to study to become priests (Ault 405). The monasteries played a part as the preservers of learning. Many monks busied themselves copying manuscripts, and the monasteries became medieval publishing houses.
They kept careful calendars so that they could keep up with the numerous saints' days and other feast days of the medieval church. The monks who kept the calendar often jotted down, in the margins, happenings of interest in the neighborhood or information learned from a traveler. Most of the books in existence during the Middle Ages were produced by monks, called scribes. These manuscripts were carefully and painstakingly handwritten. When the monks were writing, no one was allowed to speak, and they used sign language to communicate with each other.
The books were written on vellum, made from calf's skin, or parchment, made from sheep's skin. The scribes used gothic letters that were written so perfectly they looked as if they had been printed by a press. Many of the books were elaborately ornamented with gold or colored letters. The borders around each page were decorated with garlands, vines, or flowers. After the books were written, they were bound in leather or covered with velvet.
The monks copied bibles, hymns, and prayers, the lives of the saints, as well as the writings of the Greeks and Romans and other ancient peoples. The scribes added a little prayer at the end of each book, because they felt that God would be pleased with their work. Without their efforts, these stories and histories would have been lost to the world. The monks became the historians of their day by keeping a record of important events, year by year. It is from their writings that we derive a great deal of knowledge of the life, customs, and events of medieval times (Ault 158). Medieval Europe made enormous economic gains because of the monks.
They proved themselves to be intelligent landlords and agricultural colonizers of Western Europe. A very large proportion of the soil of Europe, in the Middle Ages, was wasteland. There were marshes and forests covering much of the land. The monasteries started cultivating the soil, draining the swamps, and cutting down the forests.
These monastic communities attracted settlements of peasants around them because the monastery offered security. Vast areas of land were reclaimed for agricultural purposes. The peasants copied the agricultural methods of the monks. Improved breeding of cattle was developed by the monastic communities. Many monasteries were surrounded by marshes, but their land became fertile farms. The monasteries became model farms and served as local schools of agriculture.
Farming was a chief economic activity of the monasteries. They sold the excess that they grew in the marketplace, and this drew them into trade and commerce. They sold hogs, charcoal, iron, building stone, and timber. This made them into centers of civilization.
Many monasteries conducted their market during their patron saint's day, and for several days or weeks after it. The aim was to buy and sell at a time when the greatest number of people assembled. Many times, the merchandise sold was not actually present at the market, but the buyer had to travel to another monastery to get it. No deferred payments or partial payments were allowed.
Articles could not be bartered or exchanged for other articles. The prevalence of a money economy made this rule enforceable (Dahmus 322). In theory, the monasteries were supposed to use the gains from disposing of their surplus for religious purposes.
These religious orders did vast amounts of charitable work and built beautiful buildings during this period. The monasteries heaped up vast treasures as a result of their personal activity. In many monasteries, only a small part of the land was cultivated by the monks. The remainder was allotted out to laborers, dairymen, foresters, and serfs, who paid their dues and rents in kind.
Some of the articles received were eggs, cheese, mustard, shingles, posts, kegs, and casks. Many women spun and wove linen cloth, and sewed garments for the monks. Serfs tilled the fields and cultivated the vines. The monasteries had their trade well organized. They knew all of the paths and shortcuts on the highways. They built warehouses to hold their merchandise.
They also started the practice of using agents to sell their products. Many monasteries were built on the banks of navigable rivers, and this added to the development of their capabilities. Almost all of the monasteries received immunity from tolls along the highways and rivers. As the monasteries entered more and more into trade as a means of increasing their incomes, they established markets at convenient points between their monastery and other dependent holdings. The monasteries came into the possession of widely scattered lands as a result of donations. As their possessions became widely dispersed, it became difficult to maintain a strong central organization to manage their holdings and to keep them profitable to the monastery.
Many times, the monasteries exchanged possessions of their widely scattered properties for those that were more centrally located. Often, exchanges were difficult to accomplish because the donations were given with a stipulation that the monastery had to retain the land in its possession (Thompson 663). Many artisans were employed at the monasteries. They manufactured utensils and articles that were the by-products of agriculture, like harnesses, saddles, shoes, and woolen goods. Many times, these artisans lived in quarters outside of the monastery walls.
Fine arts were also represented by craftsmen living in the monastery. There were many skilled men practicing their trades, such as wood and stone carvers, gilders, painters, goldsmiths, silversmiths, and parchment makers. Because the monks enjoyed many privileges and exemptions, they were able to produce articles of manufacture at a cost far below those of regular artisans and merchants (Lacroix 301). We have observed in the history of the development of the monastic economic system that there are successive stages. At first, the monasteries were agricultural colonies; then they began to market their produce; then to manufacture commodities.
As the economic and social life of Europe grew more complex, the monasteries looked for new forms of investment. They developed a mortgage and loan business and became the earliest banking corporations of the Middle Ages. Although the Church prohibited the charging of interest, the monasteries argued that they were a corporation, not a person, so no sin was attached to the taking of interest. The loans made always carried high collateral, so the monastery made a handsome profit, even in the event of a default. Many times, the person borrowing the money was required to make "a gift" apart from the collateral he had to put up.
When the loan was paid back by the borrower, he was also expected to make an additional "gift." The loans made by the monasteries were usually short term, and the borrower would often have trouble repaying them. Frequently, the monastery would cancel the loan, and the land held as security would go to the monastery. As the loan business grew, the monasteries were compelled to seek the assistance of trained officials to handle various transactions. Jews were hired for this purpose, since they were skilled money-changers and brokers of this period.
This was a natural transition from making profits in markets and trade to actual banking (Hartman 213). In conclusion, the monasteries offered many important services to the regions in which they were located. The monks and monasteries offered the leadership that society needed, which could only come from the Church. They provided examples of order and discipline, preserved classical works, and taught reading and writing. The scribes did a great service to civilization, for through their work many valuable books are preserved for us today that otherwise might have been lost to the world.
Monasteries were educational and economic centers in the areas in which they were established. They had a profound influence on the development of the society of the time. They acted as centers of agriculture and trade. Monasticism, which had begun as a flight from the civilized world, became not only an integral part of society, but a great civilizing force of its time.
History of the Republican Party (United States)
The Republican Party, also referred to as the GOP ("Grand Old Party"), is one of the two major political parties in the United States. It is the second-oldest extant political party in the United States; its chief rival, the Democratic Party, is the oldest.
Other name: National Union Party (1864–1868)
Abbreviation: GOP ("Grand Old Party")
Early national chairmen: Edwin D. Morgan, Henry Jarvis Raymond
Founded: March 20, 1854
Headquarters: 310 First Street SE, Washington, D.C. 20003
Colors: Red (after 2000)
The Republican Party emerged in 1854 to combat the Kansas–Nebraska Act and the expansion of slavery into American territories. The early Republican Party consisted of northern Protestants, factory workers, professionals, businessmen, prosperous farmers, and after 1866, former black slaves. The party had very little support from white Southerners at the time, who predominantly backed the Democratic Party in the Solid South, and from Catholics, who made up a major Democratic voting bloc. While both parties adopted pro-business policies in the 19th century, the early GOP was distinguished by its support for the national banking system, the gold standard, railroads, and high tariffs. The party opposed the expansion of slavery before 1861 and led the fight to destroy the Confederate States of America (1861–1865). While the Republican Party had almost no presence in the Southern United States at its inception, it was very successful in the Northern United States, where by 1858 it had enlisted former Whigs and former Free Soil Democrats to form majorities in nearly every Northern state.
With the election of Abraham Lincoln (the first Republican president) in 1860, the Party's success in guiding the Union to victory in the American Civil War, and the Party's role in the abolition of slavery, the Republican Party largely dominated the national political scene until 1932. In 1912, former Republican president Theodore Roosevelt formed the Progressive ("Bull Moose") Party after being rejected by the GOP and ran unsuccessfully as a third-party presidential candidate calling for social reforms. After 1912, many Roosevelt supporters left the Republican Party, and the Party underwent an ideological shift to the right. The GOP lost its congressional majorities during the Great Depression (1929–1940); under President Franklin D. Roosevelt, the Democrats formed a winning New Deal coalition that was dominant from 1932 through 1964.
After the Civil Rights Act of 1964, the Voting Rights Act of 1965 and the Southern Strategy, the party's core base shifted, with the Southern states becoming more reliably Republican in presidential politics and the Northeastern states becoming more reliably Democratic. White voters increasingly identified with the Republican Party after the 1960s. Following the Supreme Court's 1973 decision in Roe v. Wade, the Republican Party opposed abortion in its party platform and grew its support among evangelicals. The Republican Party won five of the six presidential elections from 1968 to 1988. Two-term President Ronald Reagan, who held office from 1981 to 1989, was a transformative party leader. His conservative policies called for reduced social government spending and regulation, increased military spending, lower taxes, and a strong anti-Soviet Union foreign policy. Reagan's influence upon the party persisted into the next century.
Since the 1990s, the Party's support has chiefly come from the South, the Great Plains, the Mountain States, and rural areas in the North. The 21st-century Republican Party's ideology is American conservatism. Today's GOP supports lower taxes, free market capitalism, a strong national defense, gun rights, deregulation, capital punishment, and restrictions on labor unions, and it opposes abortion. Alongside its support for conservative economic policies and a limited-government view, the Republican Party is socially conservative. There have been 19 Republican presidents, the most from any one political party.
The Republican Party grew out of opposition to the Kansas–Nebraska Act, which was passed by Democrats in 1854. The Act opened Kansas Territory and Nebraska Territory to slavery and future admission as slave states, thus implicitly repealing the prohibition on slavery in territory north of 36° 30′ latitude that had been part of the Missouri Compromise. This change was viewed by anti-slavery Northerners as an aggressive, expansionist maneuver by the slave-owning South. Opponents of the Act were intensely motivated and began forming a new party. The Party began as a coalition of anti-slavery Conscience Whigs such as Zachariah Chandler and Free Soil Democrats such as Salmon P. Chase.
The first anti-Nebraska local meeting where "Republican" was suggested as a name for a new anti-slavery party was held in a Ripon, Wisconsin schoolhouse on March 20, 1854. The first statewide convention that formed a platform and nominated candidates under the Republican name was held near Jackson, Michigan, on July 6, 1854. At that convention, the party opposed the expansion of slavery into new territories and selected a statewide slate of candidates. The Midwest took the lead in forming state Republican Party tickets; apart from St. Louis and a few areas adjacent to free states, there were no efforts to organize the Party in the southern states.
New England Yankees, who dominated that region and much of upstate New York and the upper Midwest, were the strongest supporters of the new party. This was especially true for the pietistic Congregationalists and Presbyterians among them and, during the war, many Methodists and Scandinavian Lutherans. The Quakers were a small, tight-knit group that was heavily Republican. By contrast, the liturgical churches (Roman Catholic, Episcopal and German Lutheran) largely rejected the moralism of the Republican Party; most of their adherents voted Democratic.
The new Republican Party envisioned modernizing the United States, emphasizing expanded banking, more railroads and factories, and giving free western land to farmers ("free soil") as opposed to letting slave owners buy up the best properties. It vigorously argued that free market labor was superior to slavery and was the very foundation of civic virtue and true republicanism; this was the "Free Soil, Free Labor, Free Men" ideology. Without using the term "containment", the Republican Party in the mid-1850s proposed a system of containing slavery. Historian James Oakes explains the strategy:
The federal government would surround the south with free states, free territories, and free waters, building what they called a 'cordon of freedom' around slavery, hemming it in until the system's own internal weaknesses forced the slave states one by one to abandon slavery.
The Republican Party launched its first national organizing convention in Pittsburgh, Pennsylvania on February 22, 1856. This gathering elected a governing National Executive Committee and passed resolutions calling for the repeal of laws enabling slaveholding in free territories and "resistance by Constitutional means of Slavery in any Territory," defense of anti-slavery individuals in Kansas who were coming under physical attack, and a call to "resist and overthrow the present National Administration" of Franklin Pierce, "as it is identified with the progress of the Slave power to national supremacy." Its first national nominating convention was held in June 1856 in Philadelphia. John C. Frémont ran as the first Republican nominee for President in 1856 behind the slogan "Free soil, free speech, free men, Frémont and victory!" Although Frémont's bid was unsuccessful, the party showed a strong base. It dominated in New England, New York and the northern Midwest and had a strong presence in the rest of the North. It had almost no support in the South, where it was roundly denounced in 1856–1860 as a divisive force that threatened civil war.
The Republican Party absorbed many of the previous traditions of its members, who had come from an array of political factions, including Working Men, Locofoco Democrats, Free Soil Democrats, Free Soil Whigs, anti-slavery Know Nothings, Conscience Whigs, and Temperance Reformers of both parties. Many Democrats who joined were rewarded with governorships, or seats in the U.S. Senate, or House of Representatives.
During the presidential campaign in 1860, at a time of escalating tension between the North and South, Abraham Lincoln addressed the harsh treatment of Republicans in the South in his famous Cooper Union speech:
[W]hen you speak of us Republicans, you do so only to denounce us as reptiles, or, at the best, as no better than outlaws. You will grant a hearing to pirates or murderers, but nothing like it to "Black Republicans." [...] But you will not abide the election of a Republican president! In that supposed event, you say, you will destroy the Union; and then, you say, the great crime of having destroyed it will be upon us! That is cool. A highwayman holds a pistol to my ear, and mutters through his teeth, "Stand and deliver, or I shall kill you, and then you will be a murderer!"
Civil War and Republican dominance: 1860–1896
The election of Lincoln as president in 1860 opened a new era of Republican dominance based in the industrial North and agricultural Midwest. The Third Party System was dominated by the Republican Party (it lost the presidency only in 1884 and 1892). Lincoln proved brilliantly successful in uniting the factions of his party to fight for the Union in the Civil War. However, he usually fought the Radical Republicans who demanded harsher measures. Many conservative Democrats became War Democrats who had a deep belief in American nationalism and supported the war. When Lincoln added the abolition of slavery as a war goal, the Peace Democrats were energized and carried numerous state races, especially in Connecticut, Indiana and Illinois. Democrat Horatio Seymour was elected Governor of New York and immediately became a likely presidential candidate.
Most of the state Republican parties accepted the antislavery goal except Kentucky. During the American Civil War, the party passed major legislation in Congress to promote rapid modernization, including a national banking system, high tariffs, the first temporary income tax (subsequently ruled constitutional in Springer v. United States), many excise taxes, paper money issued without backing ("greenbacks"), a huge national debt, homestead laws, railroads and aid to education and agriculture.
The Republicans denounced the peace-oriented Democrats as disloyal Copperheads and won enough War Democrats to maintain their majority in 1862. In 1864, they formed a coalition with many War Democrats as the National Union Party. Lincoln chose Democrat Andrew Johnson as his running mate and was easily re-elected. During the war, upper-middle-class men in major cities formed Union Leagues to promote and help finance the war effort. Following the 1864 elections, Radical Republicans, led by Charles Sumner in the Senate and Thaddeus Stevens in the House, set the agenda by demanding more aggressive action against slavery and more vengeance toward the Confederates.
Reconstruction (freedmen, carpetbaggers and scalawags): 1865–1877
Under Republican congressional leadership, the Thirteenth Amendment to the United States Constitution—which banned slavery in the United States—passed the Senate in 1864 and the House in 1865; it was ratified in December 1865. In 1865, the Confederacy surrendered, ending the Civil War. Lincoln was assassinated in April 1865; following his death, Andrew Johnson took office as President of the United States.
During the post-Civil War Reconstruction era, there were major disagreements on the treatment of ex-Confederates and of former slaves, or freedmen. Johnson broke with the Radical Republicans and formed a loose alliance with moderate Republicans and Democrats. A showdown came in the Congressional elections of 1866, in which the Radicals won a sweeping victory and took full control of Reconstruction, passing key laws over the veto. Johnson was impeached by the House, but acquitted by the Senate.
With the election of Ulysses S. Grant in 1868, the Radicals had control of Congress, the party and the army and attempted to build a solid Republican base in the South using the votes of Freedmen, Scalawags and Carpetbaggers, supported directly by U.S. Army detachments. Republicans all across the South formed local clubs called Union Leagues that effectively mobilized the voters, discussed issues and when necessary fought off Ku Klux Klan (KKK) attacks. Thousands died on both sides.
Grant supported radical reconstruction programs in the South, the Fourteenth Amendment and equal civil and voting rights for the freedmen. Most of all he was the hero of the war veterans, who marched to his tune. The party had become so large that factionalism was inevitable; it was hastened by Grant's tolerance of high levels of corruption typified by the Whiskey Ring.
Many of the founders of the GOP joined the Liberal Republican movement of 1872, as did many powerful newspaper editors. They nominated Horace Greeley for president; he also gained the Democratic nomination, but the ticket was defeated in a landslide. The depression of 1873 energized the Democrats. They won control of the House and formed "Redeemer" coalitions which recaptured control of each southern state, in some cases using threats and violence.
Reconstruction came to an end when the contested election of 1876 was awarded by a special electoral commission to Republican Rutherford B. Hayes, who promised through the unofficial Compromise of 1877 to withdraw federal troops from control of the last three southern states. The region then became the Solid South, giving overwhelming majorities of its electoral votes and Congressional seats to the Democrats through 1964.
In terms of racial issues, Sarah Woolfolk Wiggins argues that in Alabama:
White Republicans as well as Democrats solicited black votes but reluctantly rewarded blacks with nominations for office only when necessary, even then reserving the more choice positions for whites. The results were predictable: these half-a-loaf gestures satisfied neither black nor white Republicans. The fatal weakness of the Republican Party in Alabama, as elsewhere in the South, was its inability to create a biracial political party. And while in power even briefly, they failed to protect their members from Democratic terror. Alabama Republicans were forever on the defensive, verbally and physically.
Social pressure eventually forced most Scalawags to join the conservative/Democratic Redeemer coalition. A minority persisted and, starting in the 1870s, formed the "tan" half of the "Black and Tan" Republican Party, a minority in every Southern state after 1877. This divided the party into two factions: the lily-white faction, which was practically all-white; and the biracial black-and-tan faction.
In several Southern states, the "Lily Whites", who sought to recruit white Democrats to the Republican Party, attempted to purge the Black and Tan faction or at least to reduce its influence. Among such "Lily White" leaders in the early 20th century, Arkansas' Wallace Townsend was the party's gubernatorial nominee in 1916 and 1920 and its veteran national GOP committeeman. The factionalism flared up in 1928 and 1952. The lily-white faction's final victory over its black-and-tan opponents came in 1964.
Gilded Age: 1877–1890
The party split into factions in the late 1870s. The Stalwarts, followers of Senator Roscoe Conkling, defended the spoils system. The Half-Breeds, who followed Senator James G. Blaine of Maine, pushed for reform of the Civil service. Upscale reformers who opposed the spoils system altogether were called "Mugwumps.” In 1884, Mugwumps rejected James G. Blaine as corrupt and helped elect Democrat Grover Cleveland, though most returned to the party by 1888. In the run-up to the 1884 GOP convention, Mugwumps organized their forces in the swing states, especially New York and Massachusetts. After failing to block Blaine, many bolted to the Democrats, who had nominated reformer Grover Cleveland. Young Theodore Roosevelt and Henry Cabot Lodge, leading reformers, refused to bolt—an action that preserved their leadership role in the GOP.
As the Northern post-war economy boomed with industry, railroads, mines and fast-growing cities as well as prosperous agriculture, the Republicans took credit and promoted policies to keep the fast growth going. The Democratic Party was largely controlled by pro-business Bourbon Democrats until 1896. The GOP supported big business generally, the gold standard, high tariffs and generous pensions for Union veterans. However, by 1890 the Republicans had agreed to the Sherman Anti-Trust Act and the Interstate Commerce Commission in response to complaints from owners of small businesses and farmers. The high McKinley Tariff of 1890 hurt the party and the Democrats swept to a landslide in the off-year elections, even defeating McKinley himself.
Foreign affairs seldom became partisan issues (except for the annexation of Hawaii, which Republicans favored and Democrats opposed). Much more salient were cultural issues. The GOP supported the pietistic Protestants (especially the Methodists, Congregationalists, Presbyterians and Scandinavian Lutherans) who demanded prohibition. That angered wet Republicans, especially German Americans, who broke ranks in 1890–1892, handing power to the Democrats.
Demographic trends aided the Democrats, as the German and Irish Catholic immigrants were mostly Democrats and outnumbered the British and Scandinavian Republicans. During the 1880s, elections were remarkably close. The Democrats usually lost, but won in 1884 and 1892. In the 1894 Congressional elections, the GOP scored the biggest landslide in its history as Democrats were blamed for the severe economic depression 1893–1897 and the violent coal and railroad strikes of 1894.
Pietistic Republicans versus Liturgical Democrats: 1890–1896
From 1860 to 1912, the Republicans took advantage of the association of the Democrats with "Rum, Romanism, and Rebellion.” Rum stood for the liquor interests and the tavernkeepers, in contrast to the GOP, which had a strong dry element. "Romanism" meant Roman Catholics, especially Irish Americans, who ran the Democratic Party in every big city and whom the Republicans denounced for political corruption. "Rebellion" stood for the Democrats of the Confederacy, who tried to break the Union in 1861; and the Democrats in the North, called "Copperheads,” who sympathized with them.
Religious lines were sharply drawn. Methodists, Congregationalists, Presbyterians, Scandinavian Lutherans and other pietists in the North were tightly linked to the GOP. In sharp contrast, liturgical groups, especially the Catholics, Episcopalians and German Lutherans, looked to the Democratic Party for protection from pietistic moralism, especially prohibition. Both parties cut across the class structure, with the Democrats more bottom-heavy.
Cultural issues, especially prohibition and foreign language schools, became important because of the sharp religious divisions in the electorate. In the North, about 50% of the voters were pietistic Protestants (Methodists, Scandinavian Lutherans, Presbyterians, Congregationalists and Disciples of Christ) who believed the government should be used to reduce social sins, such as drinking.
Liturgical churches (Roman Catholics, German Lutherans and Episcopalians) comprised over a quarter of the vote and wanted the government to stay out of the morality business. Prohibition debates and referendums heated up politics in most states over a period of decades until national prohibition was finally passed in 1919 (repealed in 1933), serving as a major issue between the wet Democrats and the dry GOP.
Progressive Era: 1896–1932
The election of William McKinley in 1896 marked a resurgence of Republican dominance and was a realigning election.
The Progressive Era (or "Fourth Party System") was dominated by Republican Presidents, with the sole exception of Democrat Woodrow Wilson (1913–1921). McKinley promised that high tariffs would end the severe hardship caused by the Panic of 1893 and that the GOP would guarantee a sort of pluralism in which all groups would benefit. He denounced William Jennings Bryan, the Democratic nominee, as a dangerous radical whose plans for "Free Silver" at 16–1 (or Bimetallism) would bankrupt the economy.
McKinley relied heavily on finance, railroads, industry and the middle classes for his support and cemented the Republicans as the party of business. His campaign manager, Ohio's Mark Hanna, developed a detailed plan for getting contributions from the business world and McKinley outspent his rival Democrat William Jennings Bryan by a large margin. This emphasis on business was in part reversed by Theodore Roosevelt, the presidential successor after McKinley's assassination in 1901, who engaged in trust-busting. McKinley was the first President to promote pluralism, arguing that prosperity would be shared by all ethnic and religious groups.
Theodore Roosevelt, who became president in 1901, had the most dynamic personality of the era. Roosevelt had to contend with men like Senator Mark Hanna, whom he outmaneuvered to gain control of the convention in 1904 that renominated him and he won after promising to continue McKinley's policies. More difficult to handle was conservative House Speaker Joseph Gurney Cannon.
Roosevelt achieved modest legislative gains in terms of railroad legislation and pure food laws. He was more successful in Court, bringing antitrust suits that broke up the Northern Securities Company trust and Standard Oil. Roosevelt moved to the left in his last two years in office, but was unable to pass major Square Deal proposals. He did succeed in naming his successor, Secretary of War William Howard Taft, who easily defeated Bryan again in the 1908 presidential election.
"Again and again in my public career I have had to make head against mob spirit, against the tendency of poor, ignorant and turbulent people who feel a rancorous jealousy and hatred of those who are better off. But during the last few years it has been the wealthy corruptionists of enormous fortune, and of enormous influence through their agents of the press, pulpit, colleges and public life, with whom I've had to wage bitter war."
Protectionism was the ideological cement holding the Republican coalition together. High tariffs were used by Republicans to promise higher sales to business, higher wages to industrial workers, and higher demand for their crops to farmers. Progressive insurgents said it promoted monopoly. Democrats said it was a tax on the little man. It had its greatest support in the Northeast and its greatest opposition in the South and West. The Midwest was the battleground. The tariff issue was pulling the GOP apart. Roosevelt tried to postpone the issue, but Taft had to meet it head on in 1909 with the Payne–Aldrich Tariff Act. Eastern conservatives led by Nelson W. Aldrich wanted high tariffs on manufactured goods (especially woolens), while Midwesterners called for low tariffs. Aldrich outmaneuvered them by lowering the tariff on farm products, which outraged the farmers. The great battle over the high Payne–Aldrich Tariff Act ripped the Republicans apart and set up the 1910 realignment in favor of the Democrats. Insurgent Midwesterners led by George Norris revolted against the conservatives led by Speaker Cannon. The Democrats won control of the House in 1910 as the rift between insurgents and conservatives widened.
1912 personal feud becomes ideological split
In 1912, Roosevelt broke with Taft, rejected Robert M. La Follette, and tried for a third term, but he was outmaneuvered by Taft and lost the nomination. The 1912 Republican National Convention turned a personal feud into an ideological split in the GOP. Politically liberal states were holding Republican primaries for the first time. Roosevelt overwhelmingly won the primaries, winning 9 out of 12 states (8 by landslide margins). Taft won only the state of Massachusetts (by a small margin); he even lost his home state of Ohio to Roosevelt. Senator Robert M. La Follette, a reformer, won two states. Through the primaries, Senator La Follette won a total of 36 delegates; President Taft won 48 delegates; and Roosevelt won 278 delegates. However, the other 36 states, which were more conservative, did not hold primaries but instead selected delegates via state conventions. For years Roosevelt had tried to attract Southern white Democrats to the Republican Party, and he tried to win delegates there in 1912. However, Taft had the support of black Republicans in the South and defeated Roosevelt there. Roosevelt led many (but not most) of his delegates to bolt from the convention and created a new party, the Progressive (or "Bull Moose") Party, for the election of 1912. Few party leaders followed him except Hiram Johnson of California. Roosevelt had the support of many notable women reformers, including Jane Addams. The Roosevelt-caused split in the Republican vote resulted in a decisive victory for Democrat Woodrow Wilson, temporarily interrupting the Republican era.
Regional, state and local politics
The Republicans welcomed the Progressive Era at the state and local level. The first important reform mayor was Hazen S. Pingree of Detroit (1890–1897), who was elected Governor of Michigan in 1896. In New York City, the Republicans joined nonpartisan reformers to battle Tammany Hall and elected Seth Low (1902–1903). Golden Rule Jones was first elected mayor of Toledo as a Republican in 1897, but was reelected as an independent when his party refused to renominate him. Many Republican civic leaders, following the example of Mark Hanna, were active in the National Civic Federation, which promoted urban reforms and sought to avoid wasteful strikes. North Carolina journalist William Garrott Brown tried to convince upscale white southerners of the wisdom of building a strong white Republican Party in the region. He warned that a one-party Solid South would negate democracy, encourage corruption, and leave the region without prestige at the national level. Roosevelt followed his advice. However, in 1912, incumbent president Taft needed black Republican support in the South to defeat Roosevelt at the Republican national convention. Brown's campaign came to nothing, and he finally supported Woodrow Wilson in 1912.
Republicans dominate the 1920s
The party controlled the presidency throughout the 1920s, running on a platform of opposition to the League of Nations, support for high tariffs, and promotion of business interests. Voters gave the GOP credit for the prosperity and Warren G. Harding, Calvin Coolidge and Herbert Hoover were resoundingly elected by landslides in 1920, 1924 and 1928. The breakaway efforts of Senator Robert M. La Follette in 1924 failed to stop a landslide for Coolidge and his movement fell apart. The Teapot Dome Scandal threatened to hurt the party, but Harding died and Coolidge blamed everything on him as the opposition splintered in 1924.
GOP overthrown during Great Depression
The pro-business policies of the decade seemed to produce an unprecedented prosperity—until the Wall Street Crash of 1929 heralded the Great Depression. Although the party did very well in large cities and among ethnic Catholics in presidential elections of 1920–1924, it was unable to hold those gains in 1928. By 1932, the cities—for the first time ever—had become Democratic strongholds.
Hoover was by nature an activist and attempted to do what he could to alleviate the widespread suffering caused by the Depression, but his strict adherence to what he believed were Republican principles precluded him from establishing relief directly from the federal government. The Democrats made major gains in the 1930 midterm elections, giving them congressional parity (though not control) for the first time since Wilson's presidency. The Depression then cost Hoover the presidency with the 1932 landslide election of Franklin D. Roosevelt. Roosevelt's New Deal coalition controlled American politics for most of the next three decades, excepting the presidency of Republican Dwight Eisenhower (1953–1961).
Fighting the New Deal coalition: 1932–1980
Historian George H. Nash argues:
Unlike the "moderate," internationalist, largely eastern bloc of Republicans who accepted (or at least acquiesced in) some of the "Roosevelt Revolution" and the essential premises of President Truman's foreign policy, the Republican Right at heart was counterrevolutionary. Anticollectivist, anti-Communist, anti-New Deal, passionately committed to limited government, free market economics, and congressional (as opposed to executive) prerogatives, the G.O.P. conservatives were obliged from the start to wage a constant two-front war: against liberal Democrats from without and "me-too" Republicans from within.
The Old Right emerged in opposition to the New Deal of Franklin D. Roosevelt. Hoff says that "moderate Republicans and leftover Republican Progressives like Hoover composed the bulk of the Old Right by 1940, with a sprinkling of former members of the Farmer-Labor party, Non-Partisan League, and even a few midwestern prairie Socialists.”
The New Deal Era: 1932–1939
After Roosevelt took office in 1933, New Deal legislation sailed through Congress at lightning speed. In the 1934 midterm elections, ten Republican senators went down to defeat, leaving them with only 25 against 71 Democrats. The House of Representatives was also split in a similar ratio. The "Second New Deal" was heavily criticized by the Republicans in Congress, who likened it to class warfare and socialism. The volume of legislation, as well as the inability of the Republicans to block it, soon made the opposition to Roosevelt develop into bitterness and sometimes hatred for "that man in the White House.” Former President Hoover became a leading orator crusading against the New Deal, hoping unrealistically to be nominated again for president.
Most major newspaper publishers favored Republican moderate Alf Landon for president. In the nation's 15 largest cities the newspapers that editorially endorsed Landon represented 70% of the circulation. Roosevelt won 69% of the actual voters in those cities by ignoring the press and using the radio to reach voters directly.
Roosevelt carried 46 of the 48 states thanks to traditional Democrats along with newly energized labor unions, city machines and the Works Progress Administration. The realignment creating the Fifth Party System was firmly in place. Since 1928, the GOP had lost 178 House seats, 40 Senate seats and 19 governorships, leaving it with a mere 89 seats in the House and 16 in the Senate.
The black vote held for Hoover in 1932, but started moving toward Roosevelt. By 1940, the majority of northern blacks were voting Democratic. Southern blacks seldom were allowed to vote, but most became Democrats. Roosevelt made sure blacks had a share in relief programs, the wartime Army and wartime defense industry, but did not challenge segregation or the denial of voting rights in the South.
Minority parties tend to factionalize and after 1936 the GOP split into a conservative faction (dominant in the West and Midwest) and a liberal faction (dominant in the Northeast)—combined with a residual base of inherited progressive Republicanism active throughout the century. In 1936, Kansas governor Alf Landon and his liberal followers defeated the Herbert Hoover faction. Landon generally supported most New Deal programs, but carried only two states in the Roosevelt landslide. The GOP was left with only 16 senators and 88 representatives to oppose the New Deal, with Massachusetts Senator Henry Cabot Lodge Jr. as the sole victor over a Democratic incumbent.
Roosevelt alienated many conservative Democrats in 1937 by his unexpected plan to "pack" the Supreme Court via the Judiciary Reorganization Bill of 1937. Following a sharp recession that hit early in 1938, major strikes all over the country, the CIO and AFL competing with each other for membership and Roosevelt's failed efforts to radically reorganize the Supreme Court, the Democrats were in disarray. Meanwhile, the GOP was united, as it had shed its weakest members in a series of defeats since 1930. Re-energized Republicans focused attention on strong fresh candidates in major states, especially Robert A. Taft, the conservative from Ohio; Earl Warren, the moderate who won both the Republican and Democratic primaries in California; and Thomas E. Dewey, the crusading prosecutor from New York. The GOP comeback in 1938 was made possible by carrying 50% of the vote outside the South, giving GOP leaders confidence it had a strong base for the 1940 presidential election.
The GOP gained 75 House seats in 1938, but was still a minority. Conservative Democrats, mostly from the South, joined with Republicans led by Senator Robert A. Taft to create the conservative coalition, which dominated domestic issues in Congress until 1964.
World War II and its Aftermath: 1939–1952
From 1939 through 1941, there was a sharp debate within the GOP about support for Great Britain as it led the fight against a much stronger Nazi Germany. Internationalists, such as Henry Stimson and Frank Knox, wanted to support Britain, while isolationists, such as Robert A. Taft and Arthur Vandenberg, strongly opposed these moves as unwise risks of war with Germany. The America First movement was a bipartisan coalition of isolationists. In 1940, the dark horse Wendell Willkie won over the party and the delegates at the last minute and was nominated. He crusaded against the inefficiencies of the New Deal and against Roosevelt's break with the strong tradition against a third term, but was ambiguous on foreign policy.
The Japanese attack on Pearl Harbor in December 1941 ended the isolationist-internationalist debate, as all factions strongly supported the war effort against Japan and Germany. The Republicans further cut the Democratic majority in the 1942 midterm elections in a very low-turnout election. With wartime production creating prosperity, the conservative coalition terminated nearly all New Deal relief programs (except Social Security) as unnecessary.
Senator Robert A. Taft of Ohio represented the wing of the party that continued to oppose New Deal reforms and continued to champion non-interventionism. Governor Thomas E. Dewey of New York represented the Northeastern wing of the party. Dewey did not reject the New Deal programs, but demanded more efficiency, more support for economic growth and less corruption. He was more willing than Taft to support Britain in 1939–1940. After the war, the isolationist wing strenuously opposed the United Nations and was half-hearted in its opposition to world communism.
As a minority party, the GOP had two wings: the left wing supported most of the New Deal while promising to run it more efficiently, and the right wing opposed the New Deal from the beginning and managed to repeal large parts of it during the 1940s in cooperation with conservative Southern Democrats in the conservative coalition. Liberals, led by Dewey, dominated the Northeast while conservatives, led by Taft, dominated the Midwest. The West was split and the South was still solidly Democratic.
Roosevelt died in April 1945, and Harry S. Truman, a less liberal Democrat, became president and replaced most of Roosevelt's top appointees. With the end of the war, unrest among organized labor led to many strikes in 1946 and the resulting disruptions helped the GOP. With the blunders of the Truman administration in 1945 and 1946, the slogans "Had Enough?" and "To Err is Truman" became Republican rallying cries and the GOP won control of Congress for the first time since 1928, with Joseph William Martin, Jr. as Speaker of the House. The Taft-Hartley Act of 1947 was designed to balance the rights of management and labor. It was the central issue of many elections in industrial states from the 1940s to the 1950s, but the unions were never able to repeal it.
In 1948, with Republicans split left and right, Truman boldly called Congress into a special session and sent it a load of liberal legislation consistent with the Dewey platform and dared them to act on it, knowing that the conservative Republicans would block action. Truman then attacked the Republican "Do-Nothing Congress" as a whipping boy for all of the nation's problems. Truman stunned Dewey and the Republicans in the election with a plurality of just over twenty-four million popular votes (out of nearly 49 million cast), but a decisive 303–189 victory in the Electoral College.
Before Reconstruction and for a century thereafter, the white South identified with the Democratic Party. The Democratic Party's dominance in the Southern states was so strong that the region was called the Solid South. The Republicans controlled certain parts of the Appalachian Mountains and they sometimes did compete for statewide office in the border states.
Before 1948, the Southern Democrats saw their party as the defender of the Southern way of life, which included a respect for states' rights and an appreciation for traditional values of southern white men. They repeatedly warned against the aggressive designs of Northern liberals and Republicans as well as the civil rights activists they denounced as "outside agitators", thus there was a serious barrier to becoming a Republican.
In 1948, Democrats alienated white Southerners in two ways. The Democratic National Convention adopted a strong civil rights plank, leading to a walkout by Southerners. Two weeks later, President Harry Truman signed Executive Order 9981 ending discrimination against Blacks in the armed forces. In 1948, the Deep South walked out, formed a temporary regional party (the "Dixiecrats") and nominated J. Strom Thurmond for president. Thurmond carried the Deep South, but the outer South stayed with Truman, and most of the Dixiecrats ultimately returned to the Democratic Party as conservative Southern Democrats. While the Dixiecrat movement did not last, the splintering among Democrats in the South paved the way for the later Southern shift towards the Republican Party, which would see Thurmond himself switching to the Republican Party in 1964.
Eisenhower, Goldwater, and Nixon: 1952–1974
In 1952, Dwight D. Eisenhower, an internationalist allied with the Dewey wing, was drafted as a GOP candidate by a small group of Republicans led by Henry Cabot Lodge, Jr. in order that he challenge Taft on foreign policy issues. The two men were not far apart on domestic issues. Eisenhower's victory broke a twenty-year Democratic lock on the White House. Eisenhower did not try to roll back the New Deal, but he did expand the Social Security system and built the Interstate Highway System.
After 1945, the isolationists in the conservative wing opposed the United Nations and were half-hearted in opposition to the expansion of Cold War containment of communism around the world. A garrison state to fight communism, they believed, would mean regimentation and government controls at home. Eisenhower defeated Taft in 1952 on foreign policy issues.
To circumvent the local Republican Party apparatus mostly controlled by Taft supporters, the Eisenhower forces created a nationwide network of grass-roots clubs, "Citizens for Eisenhower". Independents and Democrats were welcome, as the group specialized in canvassing neighborhoods and holding small group meetings. Citizens for Eisenhower hoped to revitalize the GOP by expanding its activist ranks and by supporting moderate and internationalist policies. It did not endorse candidates other than Eisenhower, but he paid it little attention after he won and it failed to maintain its impressive starting momentum. Instead the conservative Republicans became energized, leading to the Barry Goldwater nomination of 1964. Long-time Republican activists viewed the newcomers with suspicion and hostility. More significantly, activism in support of Eisenhower did not translate into enthusiasm for the party cause.
Once in office, Eisenhower was not an effective party leader and Nixon increasingly took that role. Historian David Reinhard concludes that Eisenhower lacked sustained political commitment, refused to intervene in state politics, failed to understand the political uses of presidential patronage and overestimated his personal powers of persuasion and conciliation. Eisenhower's attempt in 1956 to convert the GOP to "Modern Republicanism" was his "grandest flop". It was a vague proposal with weak staffing and little financing or publicity that caused turmoil inside the local parties across the country. The GOP carried both houses of Congress in 1952 on Eisenhower's coattails, but in 1954 lost both and would not regain the Senate until 1980 nor the House until 1994. The problem, says Reinhard, was that "voters liked Ike—but not the GOP".
Eisenhower was an exception to most Presidents in that he usually let Vice President Richard Nixon handle party affairs (controlling the national committee and taking the roles of chief spokesman and chief fundraiser). Nixon was narrowly defeated by John F. Kennedy in the 1960 United States presidential election, weakening his moderate wing of the party.
Conservatives made a comeback in 1964 under the leadership of Barry Goldwater, who defeated moderates and liberals such as Nelson Rockefeller and Henry Cabot Lodge, Jr. in the Republican presidential primaries that year. Goldwater was strongly opposed to the New Deal and the United Nations, but rejected isolationism and containment, calling for an aggressive anti-communist foreign policy. In the presidential election of 1964, he was defeated by Lyndon Johnson in a landslide that brought down many senior Republican congressmen across the country. Goldwater won five states in the deep South, the strongest showing by a Republican presidential candidate in the South since 1872.
By 1964, the Democratic lock on the South remained strong, but cracks began to appear. Strom Thurmond was the most prominent Democrat to switch to the Republican Party. One long-term cause was that the region was becoming more like the rest of the nation and could not long stand apart in terms of racial segregation. Modernization brought factories, businesses and larger cities as well as millions of migrants from the North, as far more people graduated from high school and college. Meanwhile, the cotton and tobacco basis of the traditional South faded away as former farmers moved to town or commuted to factory jobs. Segregation, requiring separate dining and lodging arrangements for employees, was a serious obstacle to business development.
The highly visible immediate cause of the political transition involved civil rights. The civil rights movement caused enormous controversy in the white South with many attacking it as a violation of states' rights. When segregation was outlawed by court order and by the Civil Rights acts of 1964 and 1965, a die-hard element resisted integration, led by Democratic governors Orval Faubus of Arkansas, Lester Maddox of Georgia, Ross Barnett of Mississippi and, especially George Wallace of Alabama. These populist governors appealed to a less-educated, blue-collar electorate that on economic grounds favored the Democratic Party and supported segregation.
After passage of the Civil Rights Act of 1964, most Southerners accepted the integration of most institutions (except public schools). With the old barrier to becoming a Republican removed, Southerners joined the new middle class and the Northern transplants in moving toward the Republican Party. Integration thus liberated Southern politics from the old racial issues. In 1963, the federal courts declared unconstitutional the practice of excluding African-American voters from the Democratic primaries, which had been the only elections that mattered in most of the South. Meanwhile, the newly enfranchised black voters supported Democratic candidates at the 85–90% level, a shift which further convinced many white segregationists that the Republicans were no longer the black party.
The New Deal Coalition collapsed in the mid-1960s in the face of urban riots, the Vietnam War, the opposition of many Southern Democrats to desegregation and the Civil Rights Movement, and disillusionment with the idea that the New Deal could be revived by Lyndon Johnson's Great Society. In the 1966 midterm elections, the Republicans made major gains in part through a challenge to the "War on Poverty." Large-scale civic unrest in the inner cities was escalating (reaching a climax in 1968), and urban white ethnics who had been an important part of the New Deal Coalition felt abandoned by the Democratic Party's concentration on racial minorities. Republican candidates ignored more popular programs, such as Medicare or the Elementary and Secondary Education Act, and focused their attacks on less popular programs. Furthermore, Republicans made an effort to avoid the stigma of negativism and elitism that had dogged them since the days of the New Deal, and instead proposed well-crafted alternatives—such as their "Opportunity Crusade." The result was a major gain of 47 House seats for the GOP in the 1966 United States House of Representatives elections, which put the conservative coalition of Republicans and Southern Democrats back in business.
Nixon's involvement in Watergate brought disgrace and a forced resignation in 1974 and any long-term movement toward the GOP was interrupted by the scandal. Nixon's unelected vice president, Gerald Ford, succeeded him and gave him a full pardon, giving Democrats a powerful issue they used to sweep the 1974 off-year elections. Ford never fully recovered. In 1976, he barely defeated Ronald Reagan for the nomination. First Lady Betty Ford was notable for her liberal positions on social issues and for her work on breast cancer awareness following her mastectomy in 1974. The taint of Watergate and the nation's economic difficulties contributed to the election of Democrat Jimmy Carter in 1976.
The Reagan/First Bush Era: 1980–1992
The Reagan Revolution
Ronald Reagan was elected president in the 1980 election by a landslide electoral vote, though he carried only 50.7 percent of the popular vote to Carter's 41 percent and Independent John Anderson's 6.6 percent, a margin not predicted by most voter polling. Running on a "Peace Through Strength" platform to combat the communist threat and on massive tax cuts to revitalize the economy, Reagan proved too strong a personality for Carter. Reagan's election also gave Republicans control of the Senate for the first time since 1952, with a gain of 12 Senate seats as well as 33 House seats. Voting patterns and poll results indicate that the substantial Republican victory was the consequence of poor economic performance under Carter and the Democrats and did not represent an ideological shift to the right by the electorate.
Ronald Reagan produced a major realignment with his 1980 and 1984 landslides. In 1980, the Reagan coalition was possible because of Democratic losses in most social-economic groups. In 1984, Reagan won nearly 60% of the popular vote and carried every state except his Democratic opponent Walter Mondale's home state of Minnesota and the District of Columbia, creating a record 525 electoral vote total (out of 538 possible votes). Even in Minnesota, Mondale won by a mere 3,761 votes, meaning Reagan came within fewer than 3,800 votes of carrying all fifty states.
Political commentators, trying to explain how Reagan had won by such a large margin, coined the term "Reagan Democrat" to describe a Democratic voter who had voted for Reagan in 1980 and 1984 (as well as for George H. W. Bush in 1988), producing those landslide victories. They were mostly white and blue-collar, and were attracted to Reagan's social conservatism on issues such as abortion and to his hawkish foreign policy. Stan Greenberg, a Democratic pollster, concluded that Reagan Democrats no longer saw the Democratic Party as the champion of their middle-class aspirations, but instead saw it as a party working primarily for the benefit of others, especially African Americans and social liberals.
Reagan reoriented American politics and claimed credit in 1984 for an economic renewal—"It's morning again in America!" was the successful campaign slogan. Income taxes were slashed 25% and the upper tax rates abolished. The frustrations of stagflation were resolved under the new monetary policies of Federal Reserve Chairman Paul Volcker, as no longer did soaring inflation and recession pull the country down. Working again in bipartisan fashion, Congress and the administration resolved the Social Security financing crisis for the next 25 years.
In foreign affairs, bipartisanship was not in evidence. Most Democrats doggedly opposed Reagan's efforts to support the contra guerrillas against the Sandinista government of Nicaragua and to support the dictatorial governments of Guatemala, Honduras and El Salvador against communist guerrilla movements. He took a hard line against the Soviet Union, alarming Democrats who wanted a nuclear freeze, but he succeeded in increasing the military budget and launching the Strategic Defense Initiative (SDI)—labeled "Star Wars" by its opponents—that the Soviets could not match.
Reagan fundamentally altered several long-standing debates in Washington, namely those over dealing with the Soviet threat and reviving the economy. His election saw the conservative wing of the party gain control. Although he was reviled by liberal opponents in his day, his proponents contend that his programs provided unprecedented economic growth and spurred the collapse of the Soviet Union.
Detractors of Reagan's policies note that although Reagan promised to simultaneously slash taxes, massively increase defense spending and balance the budget, the nation's budget deficit had tripled by the time he left office after eight years. In 2009, Reagan's budget director noted that the "debt explosion has resulted not from big spending by the Democrats, but instead the Republican Party's embrace, about three decades ago, of the insidious doctrine that deficits don't matter if they result from tax cuts". He inspired conservatives to greater electoral victories by being reelected in a landslide against Walter Mondale in 1984, but oversaw the loss of the Senate in 1986.
When Mikhail Gorbachev came to power in Moscow, many conservative Republicans were dubious of the growing friendship between him and Reagan. Gorbachev tried to save communism in the Soviet Union first by ending the expensive arms race with America, then in 1989 by shedding the East European empire. Communism finally collapsed in the Soviet Union in 1991.
President George H. W. Bush, Reagan's successor, tried to temper feelings of triumphalism lest there be a backlash in the Soviet Union, but the palpable sense of victory in the Cold War was a triumph that Republicans felt validated the aggressive foreign policies Reagan had espoused. As Haynes Johnson, one of his harshest critics, admitted, "his greatest service was in restoring the respect of Americans for themselves and their own government after the traumas of Vietnam and Watergate, the frustration of the Iran hostage crisis and a succession of seemingly failed presidencies".
Emergence of neoconservatives
Some liberal Democratic intellectuals in the 1960s and 1970s who became disenchanted with the leftward movement of their party in domestic and foreign policy became "neoconservatives" ("neocons"). A number held major appointments during the five presidential terms under Reagan and the Bushes. They played a central role in promoting and planning the 2003 invasion of Iraq. Vice President Dick Cheney and Secretary of Defense Donald Rumsfeld, while not identifying themselves as neoconservatives, listened closely to neoconservative advisers regarding foreign policy, especially the defense of Israel, the promotion of democracy in the Middle East and the buildup of American military forces to achieve these goals. Many early neoconservative thinkers were Zionists and wrote often for Commentary, published by the American Jewish Committee. The influence of the neocons on the White House faded during the Obama years, but it remains a staple in the Republican Party's arsenal.
The Clinton years and the Congressional ascendancy: 1992–2000
After the election of Democratic President Bill Clinton in 1992, the Republican Party, led by House Minority Whip Newt Gingrich campaigning on a "Contract with America", won majorities in both houses of Congress in the Republican Revolution of 1994. It was the first time since 1952 that the Republicans secured control of both houses of the U.S. Congress, control which, with the exception of the Senate during 2001–2002, was retained through 2006. This capture and subsequent holding of Congress represented a major legislative turnaround, as Democrats had controlled both houses of Congress for the forty years preceding 1995, with the exception of the 1981–1987 Congresses, in which Republicans controlled the Senate.
In 1994, Republican Congressional candidates ran on a platform of major reforms of government with measures such as a balanced budget amendment and welfare reform. These measures and others formed the famous Contract with America, which represented the first effort to have a party platform in an off-year election. The Contract promised to bring all of its points up for a vote for the first time in history. The Republicans passed some of their proposals, but failed on others such as term limits.
Democratic President Bill Clinton opposed some of the social agenda initiatives, but he co-opted the proposals for welfare reform and a balanced federal budget. The result was a major change in the welfare system, which conservatives hailed and liberals bemoaned. The Republican-controlled House of Representatives failed to muster the two-thirds majority required to pass a Constitutional amendment to impose term limits on members of Congress.
In 1995, a budget battle with Clinton led to the brief shutdown of the federal government, an event which contributed to Clinton's victory in the 1996 election. That year, the Republicans nominated Bob Dole, who was unable to transfer his success in Senate leadership to a viable presidential campaign.
The incoming Republican majority's promise to slow the rate of government spending conflicted with the president's agenda for Medicare, education, the environment and public health, eventually leading to a temporary shutdown of the U.S. federal government. The shutdown became the longest in U.S. history up to that time, ending when Clinton agreed to submit a CBO-approved balanced budget plan. Democratic leaders vigorously attacked Gingrich for the budget standoff, and his public image suffered heavily.
During the 1998 midterm elections, Republicans lost five seats in the House of Representatives—the worst performance in 64 years for a party that did not hold the presidency. Polls showed that Gingrich's attempt to remove President Clinton from office was widely unpopular among Americans, and Gingrich suffered much of the blame for the election loss. Facing another rebellion in the Republican caucus, he announced on November 6, 1998 that he would not only stand down as Speaker, but would leave the House as well, even declining to take his seat for an 11th term after he was handily re-elected in his home district.
The Second Bush era: 2000–2008
George W. Bush, son of George H. W. Bush, won the 2000 Republican presidential nomination over Arizona Senator John McCain, former Senator Elizabeth Dole and others. With his highly controversial and exceedingly narrow victory in the 2000 election against Vice President Al Gore, the Republican Party gained control of the presidency and both houses of Congress for the first time since 1952. However, it lost control of the Senate when Vermont Senator James Jeffords left the Republican Party to become an independent in 2001 and caucused with the Democrats.
In the wake of the September 11 attacks on the United States in 2001, Bush gained widespread political support as he pursued the War on Terrorism, which included the invasion of Afghanistan and the invasion of Iraq. In March 2003, Bush ordered an invasion of Iraq, citing the breakdown of United Nations sanctions and intelligence indicating programs to rebuild or develop new weapons of mass destruction. Bush had near-unanimous Republican support in Congress plus support from many Democratic leaders.
The Republican Party fared well in the 2002 midterm elections, solidifying its hold on the House and regaining control of the Senate in the run-up to the war in Iraq. This marked the first time since 1934 that the party in control of the White House gained seats in a midterm election in both houses of Congress (previous occasions were in 1902 and following the Civil War). Bush was renominated without opposition as the Republican candidate in the 2004 election and titled his political platform "A Safer World and a More Hopeful America".
It expressed Bush's optimism towards winning the War on Terrorism, ushering in an ownership society and building an innovative economy to compete in the world. Bush was re-elected by a larger margin than in 2000, but won the smallest share ever of the popular vote for a reelected incumbent president. However, he was the first Republican candidate since 1988 to win an outright majority. In the same election that year, the Republicans gained seats in both houses of Congress and Bush told reporters: "I earned capital in the campaign, political capital, and now I intend to spend it. It is my style".
Bush announced his agenda in January 2005, but his popularity in the polls waned and his troubles mounted. Continuing troubles in Iraq as well as the disastrous government response to Hurricane Katrina led to declining popular support for Bush's policies. His campaigns to add personal savings accounts to the Social Security system and to make major revisions to the tax code were postponed. He succeeded in selecting conservatives to head four of the most important agencies: Condoleezza Rice as Secretary of State, Alberto Gonzales as Attorney General, John Roberts as Chief Justice of the United States and Ben Bernanke as Chairman of the Federal Reserve.
Bush failed to win conservative approval for his nomination of Harriet Miers to the Supreme Court and replaced her with Samuel Alito, whom the Senate confirmed in January 2006. Bush and McCain secured additional tax cuts and blocked moves to raise taxes. Through 2006, they strongly defended his policy in Iraq, saying the Coalition was winning. They secured the renewal of the USA PATRIOT Act.
In the November 2005 off-year elections in New York City, Republican mayoral candidate Michael Bloomberg won a landslide re-election, the fourth straight Republican mayoral victory in what is otherwise a Democratic stronghold. In California, Governor Arnold Schwarzenegger failed in his effort to use the ballot initiative to enact laws the Democrats blocked in the state legislature. Scandals prompted the resignations of Congressional Republicans, including House Majority Leader Tom DeLay, Duke Cunningham, Mark Foley and Bob Ney. In the 2006 midterm elections, the Republicans lost control of both the House of Representatives and the Senate to the Democrats in what was widely interpreted as a repudiation of the administration's war policies. Exit polling suggested that corruption was a key issue for many voters. Soon after the elections, Donald Rumsfeld resigned as secretary of defense and was replaced by Bob Gates.
In the Republican leadership elections that followed the general election, Speaker Hastert did not run and Republicans chose John Boehner of Ohio as House Minority Leader. Senators chose whip Mitch McConnell of Kentucky as Senate Minority Leader and their former leader Trent Lott as Senate Minority Whip, the latter by one vote over Lamar Alexander; the new leaders assumed their roles in January 2007. In the October and November gubernatorial elections of 2007, Republican Bobby Jindal won election for governor of Louisiana, Republican incumbent Governor Ernie Fletcher of Kentucky lost and Republican incumbent Governor Haley Barbour of Mississippi won re-election.
With President Bush ineligible for a third term and Vice President Dick Cheney not pursuing the party's nomination, Arizona Senator John McCain quickly emerged as the Republican Party's presidential nominee, receiving President Bush's endorsement on March 6, six months before official ratification at the 2008 Republican National Convention. On August 29, Senator McCain announced Governor Sarah Palin of Alaska as his running-mate, making her the first woman on a Republican presidential ticket. McCain surged ahead of Obama in the national polls following the nomination but amid a financial crisis and a serious economic downturn, McCain and Palin went on to lose the 2008 presidential election to Democrats Barack Obama and running mate Joe Biden.
The Obama years and the Rise of the Tea Party: 2008–2016
Following the 2008 elections, the Republican Party, reeling from the loss of the presidency, Congress and key state governorships, was fractured and leaderless. Michael Steele became the first black chairman of the Republican National Committee, but was a poor fundraiser and was replaced after numerous gaffes and missteps. Republicans suffered an additional loss in the Senate in April 2009, when Arlen Specter switched to the Democratic Party, depriving the GOP of a critical 41st vote to block legislation in the Senate. The seating of Al Franken several months later effectively handed the Democrats a filibuster-proof majority, but it was short-lived as the GOP took back its 41st vote when Scott Brown won a special election in Massachusetts in early 2010.
Republicans strongly opposed Obama's 2009 economic stimulus package and 2010 health care reform bill. The Tea Party movement, formed in early 2009, provided a groundswell of conservative grassroots activism to oppose policies of the Obama administration. With an expected economic recovery being criticized as sluggish, the GOP was expected to make big gains in the 2010 midterm elections. However, establishment Republicans began to see themselves at odds with Tea Party activists, who sought to run conservative candidates in primary elections to defeat the more moderate establishment-based candidates. Incumbent senators such as Bob Bennett in Utah and Lisa Murkowski in Alaska lost primary contests in their respective states.
Republicans won back control of the House of Representatives in the November general election, with a net gain of 63 seats, the largest gain for either party since 1948. The GOP also picked up six seats in the Senate, falling short of retaking control in that chamber, and posted additional gains in state governor and legislative races. Boehner became Speaker of the House while McConnell remained as the Senate Minority Leader. In an interview with National Journal magazine about congressional Republican priorities, McConnell explained that "the single most important thing we want to achieve is for (Barack) Obama to be a one-term president".
After 2009, the voter base of the GOP changed in directions opposite from national trends. It became older and less Hispanic or Asian than the general population. In 2013, Jackie Calmes of The New York Times reported a dramatic shift in the power base of the party as it moved away from the Northeast and the West Coast and toward small-town America in the South and West. During the 2016 presidential election, the Republicans also gained significant support in the Midwest.
In a shift over a half-century, the party base has been transplanted from the industrial Northeast and urban centers to become rooted in the South and West, in towns and rural areas. In turn, Republicans are electing more populist, antitax and antigovernment conservatives who are less supportive — and even suspicious — of appeals from big business.
Big business, many Republicans believe, is often complicit with big government on taxes, spending and even regulations, to protect industry tax breaks and subsidies — "corporate welfare," in their view.
In February 2011, several freshmen Republican governors began proposing legislation that would diminish the power of public employee labor unions by removing or negatively affecting their right to collective bargaining, claiming that these changes were needed to cut state spending and balance the states' budgets. These actions sparked public-employee protests across the country. In Wisconsin, the veritable epicenter of the controversy, Governor Scott Walker fought off a labor-fueled recall election, becoming the first state governor in U.S. history to defeat a recall against him.
After leading a pack of minor candidates for much of 2010 and 2011, former Massachusetts Governor Mitt Romney, despite outmatching his opponents in both money and organization, struggled to hold on to his lead for the 2012 GOP nomination. As the presidential campaign season headed toward the voting stage in January 2012, one candidate after another surged past Romney, held the lead for a few weeks, then fell back. According to the RealClearPolitics 2012 polling index, five candidates at one time or another were the top choice of GOP voters: Texas Governor Rick Perry, motivational speaker Herman Cain, former Speaker Newt Gingrich, former Senator Rick Santorum and Romney himself.
After losing to Santorum in Iowa and Gingrich in South Carolina, Romney racked up a number of wins in later contests, emerging as the eventual frontrunner after taking the lion's share of states and delegates in the crucial Super Tuesday contests, despite an embarrassing loss in the Colorado caucuses and near-upsets in the Michigan and Ohio primaries. Romney was nominated in August and chose Congressman Paul Ryan, a young advocate of drastic budget cuts, as his running mate. Throughout the summer polls showed a close race and Romney had a good first debate, but otherwise had trouble reaching out to ordinary voters. He lost to Obama 51% to 47% and instead of gaining in the Senate as expected, Republicans lost seats.
The party mood was glum in 2013 and one conservative analyst concluded:
It would be no exaggeration to say that the Republican Party has been in a state of panic since the defeat of Mitt Romney, not least because the election highlighted American demographic shifts and, relatedly, the party's failure to appeal to Hispanics, Asians, single women and young voters. Hence the Republican leadership's new willingness to pursue immigration reform, even if it angers the conservative base.
In March 2013, National Committee Chairman Reince Priebus gave a stinging postmortem on the GOP's failures in 2012, calling on the party to reinvent itself and to endorse immigration reform and said: "There's no one reason we lost. Our message was weak; our ground game was insufficient; we weren't inclusive; we were behind in both data and digital; and our primary and debate process needed improvement". Priebus proposed 219 reforms, including a $10 million marketing campaign to reach women, minorities and gays; a shorter, more controlled primary season; and better data collection and research facilities.
The party's official opposition to same-sex marriage came under attack. Meanwhile, social conservatives such as Rick Santorum and Mike Huckabee remained opposed to same-sex marriage and warned that evangelicals would desert the party if the GOP dropped the issue. Many leaders from different factions spoke out in 2013 on the need for a new immigration policy in the wake of election results showing a sharp move away from the GOP among Hispanics and Asians, but the Republicans in Congress could not agree on a program and nothing was done. Republicans in Congress forced a government shutdown in late 2013 after narrowly averting similar fiscal crises in 2011 and 2012.
The Tea Party fielded a number of anti-establishment candidates in the 2014 Republican primaries, but scored very few notable wins. However, they managed to unseat House Majority Leader Eric Cantor in his Virginia primary race. GOP attacks on Obama's unpopular administration resonated with voters and the party posted major gains around the country. They regained control of the Senate and increased their majorities in the House to the highest total since 1929. They took control of governorships, state legislatures and Senate seats in nearly all Southern states, except Florida and Virginia.
Great divisions in the House GOP conference were apparent after the 2014 midterm elections, with conservative members, many of them from the right-leaning Freedom Caucus, expressing dissatisfaction with congressional leadership. John Boehner's surprise announcement in September 2015 that he would step down as Speaker sent shockwaves through the House. After Majority Leader Kevin McCarthy bowed out of the race to replace Boehner due to a lack of support, House Ways and Means Chair Paul Ryan announced he would run, with the Freedom Caucus' support. Ryan was elected Speaker on October 29.
Businessman Donald Trump won the 2016 Republican primaries, representing a dramatic policy shift from traditional conservatism to an aggressively populist ideology with overtones of cultural identity politics. Numerous high-profile Republicans, including past presidential nominees like Mitt Romney, announced their opposition to Trump; some even did so after he received the GOP nomination. Much of the Republican opposition to Trump stemmed from concerns that his disdain for political correctness, his support from the ethno-nationalist alt-right, his virulent criticism of the mainstream news media, and his expressions of approval for political violence would result in the GOP losing the presidential election and lead to significant GOP losses in other races. In one of the largest upsets in American political history, Trump went on to defeat Hillary Clinton in the 2016 presidential election.
In addition to electing Donald Trump as president, Republicans maintained a majority in the Senate, in the House, and amongst state governors in the 2016 elections. The Republican Party was slated to control 69 of 99 state legislative chambers in 2017 (the most it had held in history) and at least 33 governorships (the most it had held since 1922). The party took total control of the government (legislative chambers and governorships) in 25 states following the 2016 elections; this was the most states it had controlled since 1952.
Sources differ over the extent to which Trump dominated and "remade" the Republican Party. Some have called his control "complete", noting that the few dissenting "Never Trump" Republican elected officials retired or were defeated in primaries, that conservative media strongly supported him, and that his approval rating among self-identified Republican voters was extraordinarily high, even as his approval among voters nationally was low.
According to Trump and others, his policies differed from those of his Republican predecessors (such as Reagan) in being more oriented towards the working class, more skeptical of free trade agreements, and more isolationist and confrontational with foreign allies.
Others suggested that Trump's popularity among the Republican base did not translate into as much GOP candidate loyalty as expected. Still others opined that Republican legislation and policies during the Trump administration continued to reflect the traditional priorities of Republican donors, appointees and congressional leaders. Jeet Heer of The New Republic suggested that Trump's ascendancy was the "natural evolutionary product of Republican platforms and strategies that stretch back to the very origins of modern conservatism".
Donald Trump is the first president in U.S. history to be impeached twice. The first impeachment came in December 2019, but he was acquitted by the Senate in February 2020. The second impeachment came in January 2021, and he was again acquitted.
In the 2018 United States elections, the Republican Party lost its majority in the House of Representatives for the first time since 2011, but increased its majority in the Senate. In the 2020 United States elections, the Republican Party lost the presidency and the Senate. Despite the loss, Donald Trump initially refused to concede and attempted to overturn the election. This culminated in the storming of the United States Capitol in 2021, as Trump and his supporters tried to disrupt the Electoral College vote count. After the storming, Donald Trump conceded the election the following day. Motivated by false claims of widespread election fraud in the 2020 election, Republicans initiated an effort to make voting laws more restrictive.
The Republican Party had a progressive element, typified in the early 20th century by Theodore Roosevelt in the 1907–1912 period (Roosevelt was more conservative at other points), Senator Robert M. La Follette, Sr. and his sons in Wisconsin (from about 1900 to 1946) and western leaders such as Senator Hiram Johnson in California, Senator George W. Norris in Nebraska, Senator Bronson M. Cutting in New Mexico, Congresswoman Jeannette Rankin in Montana and Senator William Borah in Idaho. They were generally progressive in domestic policy, supported unions and supported much of the New Deal, but were isolationist in foreign policy. This element died out by the 1940s. Outside Congress, most of the leaders who had supported Theodore Roosevelt in 1912 opposed the New Deal.
Starting in the 1930s, a number of Northeastern Republicans took liberal positions regarding labor unions, spending and New Deal policies. They included Mayor Fiorello La Guardia in New York City, Governor Thomas E. Dewey of New York, Governor Earl Warren of California, Governor Harold Stassen of Minnesota, Senator Clifford P. Case of New Jersey, Henry Cabot Lodge, Jr. of Massachusetts, Senator Prescott Bush of Connecticut (father and grandfather of the two Bush Presidents), Senator Jacob K. Javits of New York, Senator John Sherman Cooper of Kentucky, Senator George Aiken of Vermont, Governor and later Senator Mark Hatfield of Oregon, Governor William Scranton of Pennsylvania and Governor George W. Romney of Michigan. The most notable of them all was Governor Nelson A. Rockefeller of New York. They generally advocated a free-market, but with some level of regulation. Rockefeller required employable welfare recipients to take available jobs or job training.
While the media sometimes called them "Rockefeller Republicans", the liberal Republicans never formed an organized movement or caucus and lacked a recognized leader. They promoted economic growth and high state and federal spending while accepting high taxes and much liberal legislation, with the provision they could administer it more efficiently. They opposed the Democratic big city machines while welcoming support from labor unions and big business alike. Religion was not high on their agenda, but they were strong believers in civil rights for African Americans and women's rights and most liberals were pro-choice. They were also strong environmentalists and supporters of higher education. In foreign policy they were internationalists, throwing their support to Dwight D. Eisenhower over the conservative leader Robert A. Taft in 1952. They were often called the "Eastern Establishment" by conservatives such as Barry Goldwater.
The Goldwater conservatives fought this establishment from 1960, defeated it in 1964 and eventually retired most of its members, although some became Democrats like Senator Charles Goodell, Mayor John Lindsay in New York and Chief Justice Earl Warren. President Richard Nixon adopted many of their positions, especially regarding health care, welfare spending, environmentalism and support for the arts and humanities. After Congressman John B. Anderson of Illinois bolted the party in 1980 and ran as an independent against Reagan, the liberal GOP element faded away. Their old strongholds in the Northeast are now mostly held by Democrats.
The term "Rockefeller Republican" was used 1960–1980 to designate a faction of the party holding "moderate" views similar to those of Nelson Rockefeller, governor of New York from 1959 to 1974 and Vice President under President Gerald Ford in 1974–1977. Before Rockefeller, Thomas E. Dewey, governor of New York (1942–1954) and GOP presidential nominee in 1944 and 1948 was the leader. Dwight Eisenhower and his aide Henry Cabot Lodge, Jr. reflected many of their views.
An important moderate leader in the 1950s was Connecticut Republican Senator Prescott Bush, father and grandfather of Presidents George H. W. Bush and George W. Bush, respectively. After Rockefeller left the national stage in 1976, this faction of the party was more often called "moderate Republicans", in contrast to the conservatives who rallied to Ronald Reagan.
Historically, Rockefeller Republicans were moderate or liberal on domestic and social policies. They favored New Deal programs, including regulation and welfare. They were supporters of civil rights. They were supported by big business on Wall Street (New York City). In fiscal policy they favored balanced budgets and relatively high tax levels to keep the budget balanced. They sought long-term economic growth through entrepreneurship, not tax cuts.
In state politics, they were strong supporters of state colleges and universities, low tuition and large research budgets. They favored infrastructure improvements, such as highway projects. In foreign policy they were internationalists and anti-communists. They felt the best way to counter communism was sponsoring economic growth (through foreign aid), maintaining a strong military and keeping close ties to NATO. Geographically their base was the Northeast, from Maine to Pennsylvania, where they had the support of major corporations and banks and worked well with labor unions.
The moderate Republicans were top-heavy, with a surplus of high visibility national leaders and a shortage of grass roots workers. Most of all they lacked the numbers, the enthusiasm and excitement the conservatives could mobilize—the moderates decided it must be an un-American level of fanaticism that drove their opponents. Doug Bailey, a senior Rockefeller aide recalled, "there was a mentality in [Rockefeller's] campaign staff that, 'Look, we have got all this money. We should be able to buy the people necessary to get this done. And you buy from the top down'". Bailey discovered that the Rockefeller team never understood that effective political organizations are empowered from the bottom up, not the top down.
Barry Goldwater crusaded against the Rockefeller Republicans, narrowly beating Rockefeller in the 1964 California primary, which gave the Arizona senator all of the California delegates and a majority at the presidential nominating convention. The election was a disaster for the conservatives, but the Goldwater activists now controlled large swaths of the GOP and they had no intention of retreating. The stage was set for a conservative takeover, based in the South and West, in opposition to the Northeast. Ronald Reagan continued in the same theme. George H. W. Bush was more closely associated with the moderates, but his son George W. Bush was firmly allied with the conservatives.
Political firsts for women and minorities
From its inception in 1854 to 1964, when Senate Republicans pushed hard for passage of the Civil Rights Act of 1964 against a filibuster by Senate Democrats, the GOP had a reputation for supporting blacks and minorities. In 1869, the Republican-controlled legislature in Wyoming Territory and its Republican governor John Allen Campbell made it the first jurisdiction to grant voting rights to women. In 1875, California swore in the first Hispanic governor, Republican Romualdo Pacheco. In 1916, Republican Jeannette Rankin of Montana became the first woman in Congress—and indeed the first woman in any high-level government position. In 1928, New Mexico elected the first Hispanic U.S. Senator, Republican Octaviano Larrazolo. In 1898, the first Jewish U.S. Senator elected from outside of the former Confederacy was Republican Joseph Simon of Oregon. In 1924, the first Jewish woman elected to the U.S. House of Representatives was Republican Florence Kahn of California. In 1928, the Republican U.S. Senate Majority Leader, Charles Curtis of Kansas, who grew up on the Kaw Indian reservation, became the first person of significant non-European ancestry to be elected to national office, as Vice President of the United States for Herbert Hoover.
Blacks generally identified with the GOP until the 1930s. Every African American who served in the U.S. House of Representatives before 1935, and every African American who served in the Senate before 1979, was a Republican. Frederick Douglass after the Civil War and Booker T. Washington in the early 20th century were prominent Republican spokesmen. In 1966, Edward Brooke of Massachusetts became the first African American popularly elected to the United States Senate.
Some critics, most notably Dan Carter, have alleged that the rapid growth in Republican strength in the South came from a secretly coded message to Wallacites and segregationists that the GOP was a racist anti-black party seeking their votes. Political scientists and historians point out that the timing does not fit the Southern strategy model. Nixon carried 49 states in 1972, so he operated a successful national rather than regional strategy, but the Republican Party remained quite weak at the local and state level across the entire South for decades. Matthew Lassiter argues that Nixon's appeal was not to the Wallacites or segregationists, but rather to the rapidly emerging suburban middle class. Many had Northern antecedents, wanted rapid economic growth and saw the need to put backlash politics to rest. Lassiter says the Southern strategy was a "failure" for the GOP and that the Southern base of the Republican Party "always depended more on the middle-class corporate economy and on the top-down politics of racial backlash". Furthermore, "realignment in the South came primarily from the suburban ethos of New South metropolises such as Atlanta and Charlotte, North Carolina, not from the exportation of the working-class racial politics of the Black Belt".
The South's transition to a Republican stronghold took decades and happened incrementally, with national politics gradually influencing state and local politics. First the states started voting Republican in presidential elections—the Democrats countered that by nominating Southerners who could carry some states in the region, such as Jimmy Carter in 1976 and Bill Clinton in 1992 and 1996. However, the strategy narrowly failed with Al Gore in 2000. The states began electing Republican senators to fill open seats caused by retirements and finally governors and state legislatures changed sides. Georgia was the last state to shift to the GOP, with Republican Sonny Perdue taking the governorship in 2002. Republicans aided the process with redistricting that protected the African-American and Hispanic vote (as required by the Civil Rights laws), but split up the remaining white Democrats so that Republicans mostly would win.
In addition to its white middle class base, Republicans attracted strong majorities from the evangelical Christian community and from Southern pockets of traditionalist Roman Catholics in South Louisiana. The national Democratic Party's support for liberal social stances such as abortion drove many white Southerners into a Republican Party that was embracing the conservative views on these issues. Conversely, liberal voters in the northeast began to join the Democratic Party.
In 1969, Kevin Phillips argued in The Emerging Republican Majority that support from Southern whites and growth in the South, among other factors, was driving an enduring Republican electoral realignment. In the early 21st century, the South was generally solidly Republican in state elections and mostly solidly Republican in presidential contests. In 2005, political scientists Nicholas A. Valentino and David O. Sears argued that partisanship at that time was driven by disagreements on the size of government, national security and moral issues, while racial issues played a smaller role.
- History of conservatism in the United States
- Republican National Convention
- List of Republican National Conventions
- Political positions of the Republican Party
- United States politics
- American election campaigns in the 19th century
- History of the Democratic Party (United States)
- Including Orestes Brownson of New York. There were Working Men's Parties in New York, Philadelphia, Boston, and other urban areas in the North.
- Including William Cullen Bryant and John Bigelow, both of the New York Post.
- Including David Wilmot of Pennsylvania, John C. Fremont of California, and Isaac P. Christiancy of Michigan.
- Including Salmon P. Chase of Ohio, Henry Wilson of Massachusetts, and James Harlan of Iowa.
- Including Nathaniel P. Banks of Massachusetts, Henry S. Lane of Indiana, and Thaddeus Stevens of Pennsylvania.
- Including Abraham Lincoln of Illinois, Schuyler Colfax of Indiana, and William H. Seward of New York.
- Including Whigs Neal Dow of Maine and Parson Brownlow of Tennessee, and Democrats Hannibal Hamlin of Maine and John Bidwell of California.
- Including Nathaniel P. Banks of Massachusetts, Kinsley Bingham of Michigan, William H. Bissell of Illinois, Salmon P. Chase of Ohio, Hannibal Hamlin of Maine, Samuel J. Kirkwood of Iowa, Ralph Metcalf of New Hampshire, Lot Morrill of Maine and Alexander Randall of Wisconsin.
- Including Bingham and Hamlin, as well as James R. Doolittle of Wisconsin, John P. Hale of New Hampshire, Preston King of New York, Lyman Trumbull of Illinois and David Wilmot of Pennsylvania.
- William D. Kelley of Pennsylvania.
- The unicameral Nebraska legislature, in fact controlled by a majority of Republicans, is technically nonpartisan.
- The first African American Senator, Hiram Rhodes Revels, was appointed by the Mississippi state legislature to an unexpired term in 1870. Blanche Bruce was the first African American elected to the Senate, elected by the Mississippi state legislature to a full term in 1874. Prior to the 17th Amendment in 1913, U.S. Senators were elected by state legislatures.
- "The Ol' Switcheroo. Theodore Roosevelt, 1912". time.com.
- Zingher, Joshua N. (2018). "Polarization, Demographic Change, and White Flight from the Democratic Party". The Journal of Politics. 80 (3): 860–72. doi:10.1086/696994. ISSN 0022-3816. S2CID 158351108.
- Layman, Geoffrey (2001). The Great Divide: Religious and Cultural Conflict in American Party Politics. Columbia University Press. pp. 115, 119–20. ISBN 978-0231120586.
- "Republicans Now Dominate State Government". Daily Kos.
- "Presidential Election Results: Donald J. Trump Wins". The New York Times.
- Paul Finkelman, and Peter Wallenstein, eds. The encyclopedia of American political history (2001) p 226.
- Eric Foner, Free soil, free labor, free men: the ideology of the Republican Party before the Civil War (1970).
- A.F. Gilman, The origin of the Republican Party (1914). online
- William Stocking, ed. Under the Oaks: Commemorating the Fiftieth Anniversary of the Founding of the Republican Party, at Jackson, Michigan, July 6, 1854 (1904) online
- Allan Nevins, Ordeal of the Union: A house dividing, 1852–1857. Vol. 2 (1947) pp 316–23.
- William E. Gienapp, The origins of the Republican Party, 1852–1856 (1987) pp 189–223.
- Gienapp, The origins of the Republican Party, 1852–1856 (1987) pp 431–435, 547.
- Kleppner (1979) has extensive detail on the voting behavior of ethnic and religious groups.
- Oakes, James (2012). Freedom National: The Destruction of Slavery in the United States, 1861–1865. W. W. Norton. p. 12. ISBN 9780393065312.
- "The Origins of the Republican Party". Republican Philadelphia. Philadelphia, PA: Independence Hall Association. Retrieved July 7, 2020.
- "Republicanism in Wisconsin". The Pittsburgh Gazette. February 1, 1856. p. 2.
- Johnson (ed.), Proceedings of the First Three Republican National Conventions of 1856, 1860 and 1864, pp. 10–11.
- Gould 2003
- Howe, Daniel Walker (2007). What Hath God Wrought: The Transformation of America, 1815–1848. Oxford University Press. pp. 545–546. ISBN 9780195392432.
- Gienapp, William E (1987). The Origins of the Republican Party, 1852-1856. Oxford University Press. pp. 16–66, 93–109, 435–439. ISBN 0-19-504100-3.
- Maisel, L. Sandy; Brewer, Mark D. (2008). Parties and Elections in America: The Electoral Process (5th ed.). Rowman & Littlefield. pp. 37–39. ISBN 978-0742547643.
- John R. Mulkern (1990). The Know-Nothing Party in Massachusetts: The Rise and Fall of a People's Movement. UP of New England. p. 133. ISBN 9781555530716.
- Lincoln, Abraham (1989). Speeches and Writings, 1859–1865. Library of America. p. 120. ISBN 9780940450639.
- Goldwyn 2005.
- Bruce S. Allardice, "'Illinois is Rotten with Traitors!' The Republican Defeat in the 1862 State Election." Journal of the Illinois State Historical Society (1998) 104.1/2 (2011): 97–114.
- Jamie L. Carson, et al., "The impact of national tides and district-level effects on electoral outcomes: The US congressional elections of 1862–63." American Journal of Political Science (2001): 887–898.
- Roger L. Ransom, "Fact and Counterfact: The 'Second American Revolution' Revisited." Civil War History 45#1 (1999): 28–60.
- J. Matthew Gallman (2015). Defining Duty in the Civil War: Personal Choice, Popular Culture, and the Union Home Front. U of North Carolina Press. p. 9. ISBN 9781469621005.
- Paul Finkelman, and Peter Wallenstein, eds., The encyclopedia of American political history (2001) p 327.
- Michael W. Fitzgerald (2000). Union League Movement in the Deep South: Politics and Agricultural Change During Reconstruction. LSU Press. pp. 114–15, 213–15. ISBN 9780807126332.
- Sarah Woolfolk Wiggins (1977). The Scalawag In Alabama Politics, 1865–1881. p. 134. ISBN 9780817305574.
- DeSantis, 1998.
- "Black and Tan Republicans" in Andrew Cunningham McLaughlin and Albert Bushnell Hart, eds. Cyclopedia of American Government (1914) . p. 133. online
- "Wallace Townsend (1882–1979)". encyclopediaofarkansas.net. Retrieved May 27, 2012.
- Lisio, Donald J. (2012). Hoover, Blacks, and Lily-Whites: A Study of Southern Strategies. U North Carolina Press. p. 37ff. ISBN 9780807874219.
- Cohen, Marty; et al. (2009). The Party Decides: Presidential Nominations Before and After Reform. p. 118. ISBN 9780226112381.
- Crespino, Joseph (2007). In Search of Another Country: Mississippi and the Conservative Counterrevolution. Princeton UP. pp. 84–85. ISBN 978-0691122090.
- Edward Kohn, "Crossing the Rubicon: Theodore Roosevelt, Henry Cabot Lodge, and the 1884 Republican National Convention." Journal of the Gilded Age and Progressive Era 5.1 (2006): 19–45 online
- Shafer and Badger (2001).
- Paul Kleppner, The Third Electoral System 1853–1892 (1979) p. 182.
- Kleppner 1979.
- R. Hal Williams, Realigning America: McKinley, Bryan, and the Remarkable Election of 1896 (2010).
- In December 1907 he wrote his British friend Arthur Hamilton Lee: "To use the terminology of Continental politics, I am trying to keep the left center together." Elting E. Morrison, ed., The Letters of Theodore Roosevelt (1952) vol 6 p 875.
- Roosevelt to Arthur Hamilton Lee, Dec 16, 1907, in Morrison, ed., The Letters of Theodore Roosevelt (1952) vol 6 p 874.
- Howard R. Smith, and John Fraser Hart, "The American tariff map." Geographical Review 45.3 (1955): 327–346 online.
- Stanley D. Solvick, "William Howard Taft and the Payne-Aldrich Tariff." Mississippi Valley Historical Review 50.3 (1963): 424–442 online
- Adam Burns, "Courting white southerners: Theodore Roosevelt’s quest for the heart of the South." American Nineteenth Century History 20.1 (2019): 1–18.
- Gustafson, Melanie (2001). Women and the Republican Party, 1854–1924. University of Illinois Press.
- Melanie Susan Gustafson. "Van Ingen on Gustafson, 'Women and the Republican Party, 1854-1924'". Networks.h-net.org. Retrieved December 7, 2016.
- Clayton, Bruce L. "An Intellectual on Politics: William Garrott Brown and the Ideal of a Two-Party South." North Carolina Historical Review 42.3 (1965): 319–334. online
- George H. Nash, "The Republican Right from Taft to Reagan," Reviews in American History (1984) 12:2 pp. 261–265 in JSTOR quote on p. 261; Nash references David W. Reinhard, The Republican Right since 1945, (University Press of Kentucky, 1983)
- Hoff, Joan (1975). Herbert Hoover, forgotten progressive. Little, Brown. p. 222. ISBN 9780316944168.
- Herbert Hoover, Addresses upon the American road, 1933–1938 (1938).
- George H. Nash, The Crusade Years, 1933–1955: Herbert Hoover's Lost Memoir of the New Deal Era and Its Aftermath (Hoover Institution Press, 2013).
- Charles W. Smith Jr, Public Opinion in a Democracy (1939), pp. 85–86.
- Lumeng Yu "The Great Communicator: How FDR's radio speeches shaped American history." History Teacher 39.1 (2005): 89–106 online.
- Bernard Sternsher, "The New Deal Party System: A Reappraisal," Journal of Interdisciplinary History, (1984) 15:1 pp. 53–81 in JSTOR
- Michael Kazin, et al., eds. (2011). The Concise Princeton Encyclopedia of American Political History. Princeton U. P. p. 203. ISBN 978-0691152073.
- Harvard Sitkoff, A New Deal for Blacks: The Emergence of Civil Rights as a National Issue: The Depression Decade (2008).
- Susan Dunn, Roosevelt's Purge: How FDR Fought to Change the Democratic Party (2010)
- James T. Patterson, Mr. Republican: A Biography of Robert A. Taft (1972) pp. 160–82
- R. Jeffrey Lustig (2010). Remaking California: Reclaiming the Public Good. Heyday. p. 88. ISBN 9781597141345.
- Richard Norton Smith, Thomas E. Dewey and His Times (1982) pp. 273–81
- Mason, Robert (2011). The Republican Party and American Politics from Hoover to Reagan. Cambridge UP. pp. 76–7. ISBN 9781139499378.
- Milton Plesur, "The Republican Congressional Comeback of 1938," Review of Politics (1962) 24:4 pp. 525–62 in JSTOR.
- James T. Patterson, "A Conservative Coalition Forms in Congress, 1933–1939," Journal of American History, (1966) 52:4 pp. 757–72. in JSTOR.
- Michael Bowen, The Roots of Modern Conservatism: Dewey, Taft, and the Battle for the Soul of the Republican Party (2011)
- John W. Malsberger, From Obstruction to Moderation: The Transformation of Senate Conservatism, 1938–1952 (2000) online Archived April 20, 2010, at the Wayback Machine
- Michael Bowen, The Roots of Modern Conservatism: Dewey, Taft, and the Battle for the Soul of the Republican Party (2011), University of North Carolina Press.
- Gordon B. McKinney, Southern Mountain Republicans 1865–1900 (1978)
- Key Jr., V. O. (1949). Southern Politics State and Nation.
- Larry J. Sabato; Howard R. Ernst (2014). Encyclopedia of American Political Parties and Elections. Infobase Publishing. p. 115. ISBN 9781438109947.
- Lawrence J. Haas, Harry and Arthur: Truman, Vandenberg, and the Partnership That Created the Free World (U of Nebraska Press, 2016).
- Mason, Robert (2013). "Citizens for Eisenhower and the Republican Party, 1951–1965" (PDF). The Historical Journal. 56 (2): 513–536. doi:10.1017/S0018246X12000593.
- David W. Reinhard, The Republican Right since 1945, (University Press of Kentucky, 1983) pp. 157–158.
- W. J. Rorabaugh, The Real Making of the President: Kennedy, Nixon, and the 1960 Election (2012).
- Rick Perlstein, Before the Storm: Barry Goldwater and the Unmaking of the American Consensus (2001).
- Bernard Cosman, Five states for Goldwater: Continuity and change in southern presidential voting patterns (U of Alabama Press, 1966).
- Everett Carll Ladd Jr. Where Have All the Voters Gone? The Fracturing of America's Political Parties (1978), p. 6.
- Dewey W. Grantham, The Life and Death of the Solid South (1988)
- Mark McLay, "A High-Wire Crusade: Republicans and the War on Poverty, 1966." Journal of Policy History 31.3 (2019): 382–405.
- "1966 Elections–A Major Republican Comeback." in CQ Almanac 1966 (22nd ed., 1967) pp 1387–88. online
- Douglas A. Hibbs Jr, "President Reagan's Mandate from the 1980 Elections: A Shift to the Right?." American Politics Quarterly 10.4 (1982): 387–420 online.
- "1984 Presidential Election Results – Minnesota". Retrieved November 18, 2006.
- Caplow, Theodore; Howard M. Bahr; Bruce A. Chadwick; Modell, John (1994). Recent Social Trends in the United States, 1960–1990. McGill-Queen's Press. p. 337. ISBN 9780773512122. They add: "The Democratic party, nationally, moved from left-center toward the center in the 1940s and 1950s, then moved further toward the right-center in the 1970s and 1980s".
- Johnson, Haynes (1989). Sleepwalking Through History: America in the Reagan Years, p. 28.
- Justin Vaïsse, Neoconservatism: The biography of a movement (Harvard UP, 2010) pp. 6–11.
- Record, Jeffrey (2010). Wanting War: Why the Bush Administration Invaded Iraq. Potomac Books, Inc. pp. 47–50. ISBN 9781597975902.
- Murray Friedman, The neoconservative revolution: Jewish intellectuals and the shaping of public policy (Cambridge University Press, 2005)
- Benjamin Balint, Running Commentary: The Contentious Magazine that Transformed the Jewish Left into the Neoconservative Right (2010)
- Alexandra Homolar-Riechmann, "The moral purpose of US power: neoconservatism in the age of Obama." Contemporary Politics 15#2 (2009): pp. 179–96. abstract.
- "The 2004 Republican National Platform" (PDF). Archived from the original (PDF) on February 26, 2008. (277 KB)
- "Corruption named as key issue by voters in exit polls". CNN. November 8, 2006. Retrieved January 25, 2007.
- Morris, Dick; McGann, Eileen (2011). Revolt!: How to Defeat Obama and Repeal His Socialist Programs. HarperCollins. p. 38. ISBN 9780062073297.
- See "Michael Steele Archive" at NPR.
- Ronald Libby, Purging the Republican Party: Tea Party Campaigns and Elections, Lexington Books, 2013.
- Jackie Calmes. "For 'Party of Business,' Allegiances Are Shifting". The New York Times. January 15, 2013.
- 2012 "Republican Presidential Nomination". Retrieved February 26, 2012.
- Vincent J. Cannato. "Give Me Your Skilled Workers". Wall Street Journal. March 12, 2013. p. 12.
- Rachel Weiner. "Reince Priebus gives GOP prescription for future". The Washington Post. March 18, 2013.
- "Gingrich's Views Evolve on Gay Marriage". The Washington Times. December 20, 2012.
- Rush Limbaugh: 'There Is Going To Be Gay Marriage Nationwide' (AUDIO). Huffingtonpost.com. Retrieved on August 17, 2013.
- Nazworth, Napp (March 25, 2013). "Huckabee: Evangelicals Will Leave If GOP Backs Gay Marriage". The Christian Post. Retrieved October 14, 2014.
- Chris Cillizza. "Three sentences on immigration that will haunt Republicans in 2016". The Washington Post. July 1, 2014.
- Nate Cohn (December 4, 2014). "Demise of the Southern Democrat is Now Nearly Complete". The New York Times.
- "Donald Trump's Victory Is Met With Shock Across a Wide Political Divide". The New York Times. November 9, 2016. Retrieved November 10, 2016.
- Arkin, Daniel; Siemaszko, Corky (November 9, 2016). "2016 Election: Donald Trump Wins the White House in Upset". NBC News. Retrieved November 10, 2016.
- "How Donald Trump swept to an unreal, surreal presidential election win". Guardian. November 9, 2016. Retrieved November 9, 2016.
- Goldmacher, Shane; Schreckinger, Ben (November 9, 2016). "Trump Pulls Off Biggest Upset in U.S. History". Politico. Retrieved December 6, 2016.
- Bosman, Julie; Davey, Monica (November 11, 2016). "Republicans Expand Control in a Deeply Divided Nation". The New York Times. Retrieved November 17, 2016.
- Lieb, David (November 9, 2016). "Republicans Governorships Rise to Highest Mark Since 1922". U.S. News & World Report. Retrieved November 17, 2016.
- Phillips, Amber (November 12, 2016). "These 3 maps show just how dominant Republicans are in America after Tuesday". The Washington Post. Retrieved November 14, 2016.
- Lieb, David A. (December 29, 2016). "GOP-Controlled States Aim to Reshape Laws". Chicago Tribune (from the Associated Press).
- Jean-Christophe Boucher, and Cameron G. Thies. "'I Am a Tariff Man': The Power of Populist Foreign Policy Rhetoric under President Trump." Journal of Politics 81.2 (2019): 712–722 online.
- Coppins, McKay (November 6, 2018). "Trump Already Won the Midterms". The Atlantic. Retrieved February 2, 2019.
- "Trump's Takeover". PBS Frontline. Retrieved February 2, 2019.
- Liasson, Mara (June 13, 2018). "How President Trump Is Changing The Republican Party". NPR. Retrieved February 2, 2019.
- Swan, Jonathan (June 3, 2018). "Trump's 500-day coup of the GOP, conservatism". Axios. Retrieved February 2, 2019.
- Smith, David (June 10, 2018). "How Trump captured the Republican party". The Guardian. Retrieved February 2, 2019.
- "'How we know the drop in Trump's approval rating in January reflected a real shift in public opinion". pewresearch. January 21, 2021. Retrieved January 21, 2021.
- Bennett, Brian (October 12, 2018). "'The Party Is Much Bigger Now.' Read Donald Trump's Interview With TIME on His Effect on the Republican Party". Time Magazine. Retrieved February 2, 2019.
- Kamarck, Elaine; Podkul, Alexander R. (August 16, 2018). "Is the Republican Party really Donald Trump's party?". Brookings Institution. Retrieved February 2, 2019.
- Glassman, Matthew (February 1, 2019). "How Republicans Erased Trumpism". New York Times. Retrieved February 2, 2019.
- Heer, Jeet (February 18, 2016). "How the Southern Strategy Made Donald Trump Possible". The New Republic. Retrieved May 8, 2018.
- Enten, Harry. "Analysis: How Trump led Republicans to historic losses". CNN. Retrieved January 11, 2021.
- Breuninger, Kevin; Macias, Amanda (January 8, 2021). "Trump finally concedes Biden will become president". CNBC. Retrieved January 11, 2021.
- Wines, Michael (February 27, 2021). "In Statehouses, Stolen-Election Myth Fuels a G.O.P. Drive to Rewrite Rules". The New York Times.
- Ruth O'Brien, Workers' Paradox: The Republican Origins of New Deal Labor Policy, 1886–1935 (1998) p. 15
- Robert Johnson, The peace progressives and American foreign relations (1995)
- Otis L. Graham Jr., An Encore for Reform: The Old Progressives and the New Deal (1967)
- Nicol C. Rae, The Decline and Fall of the Liberal Republicans: From 1952 to the Present (1989)
- Joseph E. Persico, The Imperial Rockefeller: A Biography of Nelson A. Rockefeller (1982).
- Public Papers of Nelson A. Rockefeller, Fifty-third Governor of the State of New York, vol. 15, 1973 (Albany, NY: State of New York, 1973), p. 1385.
- Rae, The Decline and Fall of the Liberal Republicans: From 1952 to the Present (1989)
- John Andrew, "The Struggle for the Republican Party in 1960," Historian, Spring 1997, Vol. 59 Issue 3, pp. 613–33.
- Timothy J. Sullivan, New York State and the rise of modern conservatism: redrawing party lines (2009) p. 142
- Whitaker, John C. (1996). "Nixon's domestic policy: Both liberal and bold in retrospect". Presidential Studies Quarterly. 26 (1): 131–53. JSTOR 27551554.
- Matthew Levendusky, The Partisan Sort: How Liberals Became Democrats and Conservatives Became Republicans (2009)
- Jeffrey Kabaservice, Rule and Ruin p. 91
- For a critical perspective from a leading conservative activist, see Richard A. Viguerie, Conservatives Betrayed: How George W. Bush and Other Big Government Republicans Hijacked the Conservative Cause (2006).
- See "Milestones for Women in American Politics" (Center for American Women and Politics) online
- Simon Topping, Lincoln's Lost Legacy: Republican Party and the African American Vote, 1928–1952 (University Press of Florida, 2008.
- Louis Bolce, Gerald De Maio, and Douglas Muzzio. "The 1992 Republican" tent": no blacks walked in." Political Science Quarterly 108.2 (1993): 255–270 online.
- Dan T. Carter, The Politics of Rage: George Wallace, the Origins of the New Conservatism, and the Transformation of American Politics (2000)
- Matthew D. Lassiter, "Suburban Strategies: The Volatile Center in Postwar American Politics" in Meg Jacobs et al. eds., The Democratic Experiment: New Directions In American Political History (2003): pp. 327–49; quotes on pp. 329–30.
- Matthew D. Lassiter, The Silent Majority: Suburban Politics in the Sunbelt South (Princeton UP, 2013)
- Charles S. Bullock III and Mark J. Rozell, eds. The New Politics of the Old South: An Introduction to Southern Politics (3rd ed. 2007) covers every state 1950–2004
- Oran P. Smith, The Rise of Baptist Republicanism (2000). A particularly critical event was the 1973 United States Supreme Court decision in Roe v. Wade, which held that there was a constitutional right to abortion.
- Nicholas A. Valentino and David O. Sears. "Old times there are not forgotten: Race and partisan realignment in the contemporary South." American Journal of Political Science 49.3 (2005): pp. 672–88, quote on pp. 672–3.
- American National Biography (1999) 20 volumes; contains short biographies of all politicians no longer alive.
- Carlisle, Rodney P. ]]Encyclopedia of Politics. Vol. 2: The Right (Sage, 2005).
- Cox, Heather Cox. To Make Men Free: A History of the Republican Party (2014).
- Dinkin, Robert J. Voting and Vote-Getting in American History (2016), expanded edition of Dinkin, Campaigning in America: A History of Election Practices. (Greenwood 1989) online 1989 edition
- Fauntroy, Michael K. Republicans and the Black vote (2007).
- Gould, Lewis. Grand Old Party: A History of the Republicans (2003), major overview.
- Graff, Henry F., ed. The Presidents: A Reference History (3rd ed. 2002) online, short scholarly biographies from George Washington to William Clinton.
- Jensen, Richard. Grass Roots Politics: Parties, Issues, and Voters, 1854–1983 (1983) online.
- Kleppner, Paul, et al. The Evolution of American Electoral Systems (1983), applies party systems model.
- Kurian, George Thomas ed. The Encyclopedia of the Republican Party(4 vol. 2002).
- Mayer, George H. The Republican Party, 1854–1966. 2d ed. (1967), basic survey.
- Remini, Robert V. The House: The History of the House of Representatives (2006), extensive coverage of the party.
- Rutland, Robert Allen. The Republicans: From Lincoln to Bush (1996).
- Shafer, Byron E. and Anthony J. Badger, eds. Contesting Democracy: Substance and Structure in American Political History, 1775–2000 (2001), essays by specialists on each time period.
- Schlesinger Jr., Arthur Meier; Troy, Gil (eds.). History of American Presidential Elections, 1789–2008 (2011 ed.). For each election includes short history and selection of primary document. Essays on the most important elections are reprinted in Schlesinger, The Coming to Power: Critical presidential elections in American history (1972).
1854 to 1932
- Donald, David Herbert (1999). Lincoln. Full biography.
- Donald, David Herbert. Charles Sumner and the Coming of the Civil War (1960); and vol 2: Charles Sumner and the Rights of Man (1970); Pulitzer Prize.
- DeSantis, Vincent P. Republicans Face the Southern Question: The New Departure Years, 1877–1897 (1998).
- Edwards, Rebecca. Angels in the Machinery: Gender in American Party Politics from the Civil War to the Progressive Era (1997).
- Foner, Eric. Free Soil, Free Labor, Free Men: The Ideology of the Republican Party Before the Civil War (1970).
- Foner, Eric. Reconstruction, 1863–1877 (1998).
- Frantz, Edward O. The Door of Hope: Republican Presidents and the First Southern Strategy, 1877–1933 (UP of Florida, 2011). 295pp
- Garraty, John. Henry Cabot Lodge: A Biography (1953).
- Gienapp, William E. The Origins of the Republican Party, 1852–1856 (1987).
- Gienapp, William E. "Nativism and the Creation of a Republican Majority in the North before the Civil War." The Journal of American History 72.3 (1985): 529–559 online
- Goodwin, Doris Kearns (2005). Team of Rivals: The Political Genius of Abraham Lincoln. ISBN 978-0-684-82490-1.
- Gould, Lewis L. Four Hats in the Ring: The 1912 Election and the Birth of Modern American Politics (2008).
- Gould, Lewis L. "New Perspectives on the Republican Party, 1877–1913," American Historical Review (1972) 77#4 pp. 1074–82 in JSTOR
- Gould, Lewis L. The William Howard Taft Presidency (University Press of Kansas, 2009) 51–64.
- Hoogenboom, Ari. Rutherford B. Hayes: Warrior and President (1995).
- Hume, Richard L. and Jerry B. Gough. Blacks, Carpetbaggers, and Scalawags: The Constitutional Conventions of Radical Reconstruction (LSU Press, 2008); statistical classification of delegates.
- Jenkins, Jeffery A. and Boris Heersink. "Republican Party Politics and the American South: From Reconstruction to Redemption, 1865–1880" (2016 paper at the 2016 Annual Meeting of the Southern Political Science Association); online.
- Jensen, Richard. The Winning of the Midwest: Social and Political Conflict, 1888–1896 (1971). online
- Jensen, Richard. Grass Roots Politics: Parties, Issues, and Voters, 1854–1983 (1983) 'online
- Kehl, James A. Boss Rule in the Gilded Age: Matt Quay of Pennsylvania (1981).
- Kleppner, Paul. The Third Electoral System 1854–1892: Parties, Voters, and Political Cultures (1979).
- Marcus, Robert. Grand Old Party: Political Structure in the Gilded Age, 1880–1896 (1971).
- Morgan, H. Wayne. From Hayes to McKinley; National Party Politics, 1877–1896 (1969).
- Morgan, H. Wayne. William McKinley and His America (1963).
- Morris, Edmund (2002). Theodore Rex. 2. (covers Presidency 1901–1909); Pulitzer Prize.
- Mowry, George E. Theodore Roosevelt and the Progressive Movement (1946) online.
- Mowry, George E. The Era of Theodore Roosevelt, 1900–1912 (1958) read online
- Muzzey, David Saville. James G. Blaine: A Political Idol of Other Days (1934) online.
- Nevins, Allan. Ordeal of the Union, (1947–70), 8-volumes cover 1848–1865; highly detailed coverage.
- Oakes, James. The Crooked Path to Abolition: Abraham Lincoln and the Antislavery Constitution (W.W. Norton, 2021).
- Oakes, James. Freedom National: The Destruction of Slavery in the United States, 1861-1865 (W. W. Norton, 2012)
- Paludin, Philip. A People's Contest: The Union and the Civil War, 1861–1865 (1988).
- Peskin, Allan. "Who were the Stalwarts? Who were their rivals? Republican factions in the Gilded Age." Political Science Quarterly 99#4 (1984): 703–716. in JSTOR.
- Rhodes, James Ford. The History of the United States from the Compromise of 1850 9 vol (1919), detailed political coverage to 1909. online
- Richardson, Heather Cox. The Greatest Nation of the Earth: Republican Economic Policies during the Civil War (1997).
- Rove, Karl. The Triumph of William McKinley: Why the Election of 1896 Still Matters (2015). Detailed narrative of the entire campaign by Karl Rove a prominent 21st-century Republican campaign advisor.
- Silbey, Joel H. The American Political Nation, 1838–1893 (1991).
- Summers, Mark Wahlgren. Rum, Romanism & Rebellion: The Making of a President, 1884 (2000).
- Summers, Mark Wahlgren. Party games: Getting, keeping, and using power in gilded age politics (2004). online
- Summers, Mark Wahlgren. The Ordeal of the Reunion: A New History of Reconstruction (2014) online
- Van Deusen, Glyndon G. Horace Greeley, Nineteenth-Century Crusader (1953).
- Williams, R. Hal. Realigning America: McKinley, Bryan, and the remarkable election of 1896 (UP of Kansas, 2017).
- Aberbach, Joel D., ed. and Peele, Gillian, ed. Crisis of Conservatism?: The Republican Party, the Conservative Movement, and American Politics after Bush (Oxford UP, 2011). 403pp
- Barone, Michael; McCutcheon, Chuck (2011). The Almanac of American Politics (2012 ed.). New edition every two years since 1975.
- Black, Earl; Black, Merle (2002). The Rise of Southern Republicans.
- Brennan, Mary C. Turning Right in the Sixties: The Conservative Capture of the GOP (1995).
- Bowen, Michael. The Roots of Modern Conservatism: Dewey, Taft, and the Battle for the Soul of the Republican Party (2011).
- Critchlow, Donald T. The Conservative Ascendancy: How the Republican Right Rose to Power in Modern America (2nd ed. 2011).
- Dueck, Colin, Hard Line: The Republican Party and U.S. Foreign Policy since World War II (Princeton University Press, 2010). 386pp.
- Feldman, Glenn, ed. Painting Dixie Red: When, Where, Why, and How the South Became Republican (UP of Florida, 2011) 386pp
- Galvin, Daniel. Presidential party building: Dwight D. Eisenhower to George W. Bush (Princeton, NJ, 2010).
- Gould, Lewis L. 1968: The Election That Changed America (1993).
- Jensen, Richard. "The Last Party System, 1932–1980," in Paul Kleppner, ed. Evolution of American Electoral Systems (1981).
- Kabaservice, Geoffrey. Rule and Ruin: The Downfall of Moderation and the Destruction of the Republican Party, From Eisenhower to the Tea Party (2012); scholarly history that strongly favors the moderates. Excerpt and text search.
- Ladd Jr., Everett Carll with Charles D. Hadley. Transformations of the American Party System: Political Coalitions from the New Deal to the 1970s 2d ed. (1978).
- Mason, Robert. The Republican Party and American Politics from Hoover to Reagan (2011) excerpt and text search.
- Mason, Robert, and Iwan Morgan, eds. Seeking a New Majority: The Republican Party and American Politics, 1960–1980 (Vanderbilt University Press; 2013), 248 pages; scholarly studies of how the party expanded its base, appealed to new constituencies and challenged Democratic dominance.
- Parmet, Herbert S. Eisenhower and the American Crusades (1972).
- Patterson, James T. Mr. Republican: A Biography of Robert A. Taft (1972).
- Patterson, James. Congressional Conservatism and the New Deal: The Growth of the Conservative Coalition in Congress, 1933–39 (1967).
- Perlstein, Rick (2002). Before the Storm: Barry Goldwater and the Unmaking of the American Consensus. On the rise of the conservative movement in the liberal 1960s.
- Perlstein, Rick. Nixonland: The Rise of a President and the Fracturing of America (2008).
- Reinhard, David W. The Republican Right since 1945 (1983).
- Rosen, Eliot A. The Republican Party in the Age of Roosevelt: Sources of Anti-Government Conservatism in the United States (2014).
- Skocpol, Theda and Williamson, Vanessa, eds. The Tea Party and the Remaking of Republican Conservatism (Oxford University Press, 2012) 245 pp.
- Sundquist, James L. Dynamics of the Party System: Alignment and Realignment of Political Parties in the United States (1983).
- Weed, Clyda P. The Nemesis of Reform: The Republican Party During the New Deal (Columbia University Press, 1994) 293 pp.
- Zake, Ieva, “Nixon vs. the GOP: Republican Ethnic Politics, 1968–1972,” Polish American Studies, 67 (Autumn 2010), 53–74.
- Porter, Kirk H., Donald Bruce Johnson, eds. National Party Platforms, 1840–1980 (1982).
- Schlesinger, Arthur Meier, Jr. ed. History of American Presidential Elections, 1789–2008 (various multivolume editions, latest is 2011). For each election includes brief history and selection of primary documents. | https://library.kiwix.org/wikipedia_en_top_maxi/A/History_of_the_Republican_Party_(United_States) | 21 |
What impact did the Civil War have on the economy of the South?
There was great wealth in the South, but it was primarily tied up in the slave economy. In 1860, the economic value of slaves in the United States exceeded the invested value of all of the nation's railroads, factories, and banks combined, and on the eve of the Civil War cotton prices were at an all-time high. The war destroyed much of that wealth: emancipation wiped out the capital value bound up in slavery, and the Southern economy emerged from the conflict devastated and far poorer than the North's.
In what ways did the economy of the South change after the Civil War?
After the Civil War, sharecropping and tenant farming took the place of slavery and the plantation system in the South. Sharecropping and tenant farming were systems in which white landlords (often former plantation slaveowners) entered into contracts with impoverished farm laborers to work their lands.
How does the Civil War impact us today?
We prize America as a land of opportunity. The Civil War paved the way for Americans to live, learn, and move about in ways that had seemed all but inconceivable just a few years earlier. With these doors of opportunity open, the United States experienced rapid economic growth.
How did slavery shape the Southern economy and society?
Slavery was so profitable that it produced more millionaires per capita in the Mississippi River valley than anywhere else in the nation. With cash crops of tobacco, cotton, and sugar cane, America's southern states became the economic engine of the burgeoning nation.
How did the end of slavery affect the economy?
In economic terms, former slaves would now be counted as "labor," so the nation's labor stock rose dramatically, even on a per capita basis. Abolishing slavery made America a much more productive, and hence richer, country.
What were the long-term effects of slavery?
The scale of the Atlantic slave trade dramatically transformed African societies. The trade damaged those societies and contributed to the long-term impoverishment of West Africa, intensifying strains that were already present among its rulers, kinship networks, and kingdoms.
What impact did slavery have on the Civil War?
Slavery played the central role in the American Civil War. The primary catalyst for secession was slavery, especially Southern political leaders' resistance to attempts by Northern antislavery forces to block the expansion of slavery into the western territories.
What were the effects of the abolition of slavery?
As it gained momentum, the abolitionist movement caused increasing friction between states in the North and the slave-owning South. Critics of abolition argued that it contradicted the U.S. Constitution, which left the option of slavery up to individual states.
What was the main reason for the abolition of slavery?
The Industrial Revolution and improvements in agriculture were benefiting the British economy. The slave trade and the plantations it supplied ceased to be profitable, and shipping was turned to more profitable uses than carrying slaves.
How did the abolition of slavery affect the South?
Defenders of slavery argued that a sudden end to the slave economy would have a devastating impact on the South, where slave labor was the foundation of the economy: the cotton economy would collapse, the tobacco crop would dry in the fields, and rice would cease to be profitable.
What country outlawed slavery first?
Haiti (then Saint-Domingue) formally declared independence from France in 1804 and became the first sovereign nation in the Western Hemisphere to unconditionally abolish slavery in the modern era. By 1804, every northern state in the U.S. had also passed laws to end slavery, many of them gradually.
Which countries still have slavery?
As of 2018, the countries with the most slaves were: India (8 million), China (3.86 million), Pakistan (3.19 million), North Korea (2.64 million), Nigeria (1.39 million), Iran (1.29 million), Indonesia (1.22 million), Democratic Republic of the Congo (1 million), Russia (794,000) and the Philippines (784,000).
What other countries had slavery?
The Arab slave trade encompassed mainly Western and Central Asia, Northern and Eastern Africa, India, and Europe from the 7th to the 20th century. The Dutch, French, Spanish, Portuguese, British, and a number of West African kingdoms played a prominent role in the Atlantic slave trade, especially after 1600.