https://en.wikipedia.org/wiki/Geography%20of%20Cape%20Verde
Geography of Cape Verde
Cape Verde (formally the Republic of Cabo Verde) is a group of arid Atlantic islands which are home to distinct communities of plants, birds, and reptiles. The islands constitute the unique Cape Verde Islands dry forests ecoregion, according to the World Wildlife Fund. Location and description The Cape Verde Islands are located in the Atlantic Ocean, some 500 km off the west coast of the continent of Africa. The landscape varies from dry plains to high active volcanoes with cliffs rising steeply from the ocean. The climate is arid. The total land area is 4,072 km2. The archipelago consists of ten islands and five islets, divided into the windward (Barlavento) and leeward (Sotavento) groups. The six islands in the Barlavento group are Santo Antão, São Vicente, Santa Luzia, São Nicolau, Sal, and Boa Vista. The islands in the Sotavento group are Maio, Santiago, Fogo, and Brava. All but Santa Luzia are inhabited. Three islands – Sal, Boa Vista, and Maio – are generally level and lack natural water supplies. High mountains are found on Santiago, Fogo, Santo Antão, and São Nicolau. Sand carried by strong winds has caused erosion on all islands, especially the windward ones. Sheer, jagged cliffs rise from the sea on several of the mountainous islands. The lack of natural vegetation in the uplands and on the coast also contributes to soil erosion. The interior valleys support denser natural vegetation. Data Geographic coordinates Area Total: 4,072 km2 Land: 4,072 km2 Water: 0 km2 (inland water is negligible) Area – comparative: About 1.5 times as large as Luxembourg Coastline Maritime claims (measured from claimed archipelagic baselines) Territorial sea: Contiguous zone: Exclusive economic zone: EEZ area: Continental shelf: 5,591 km2 Coral reefs: 0.09% of world Sea mounts: 0.04% of world Terrain: Steep, rugged, rocky, volcanic. Elevation extremes Lowest point: Atlantic Ocean 0 m Highest point: Mount Fogo (a volcano on Fogo Island) Natural resources: Salt, basalt rock, limestone, kaolin, fish, clay, gypsum Land use agricultural land: 18.6% (2018 est.) arable land: 11.7% (2018 est.) permanent crops: 0.7% (2018 est.) permanent pasture: 6.2% (2018 est.) forest: 21% (2018 est.) other: 60.4% (2018 est.) Irrigated land (2012) Total renewable water resources: 0.3 km3 (2017) Freshwater withdrawal (domestic/industrial/agricultural) total: 0.02 km3/yr (6%/1%/93%) per capita: 48.57 m3/yr (2004) Natural hazards Prolonged droughts; seasonal harmattan wind produces obscuring dust; volcanically and seismically active. Geography - note Strategic location 500 km from the west coast of Africa near major north-south sea routes; important communications station; important sea and air refueling site. Table of islands Borders Cabo Verde shares maritime boundaries with Mauritania and Senegal. Cabo Verde has signed treaties with Senegal and Mauritania delimiting the specific boundaries. However, the two treaties conflict in their delimitation of the precise borders. Due to its numerous islands, Cabo Verde has a large Exclusive Economic Zone. Climate Rainfall is irregular, historically causing periodic droughts and famines. Desalination plants now provide water to more than half the country's population. Experiments with fog collectors have been conducted since 1962; however, as of 2009 such collectors had not been expanded beyond the Serra Malagueta community on Santiago Island. The average precipitation per year in Praia is low. 
During the winter, storms blowing from the Sahara sometimes cloud the sky; however, sunny days are the norm year round. The clearest skies are found between February and June, with very little rainfall during these months. The Harmattan, a wind laden with Saharan dust, blows between November and March and is broadly similar to the "Calima" that affects the Canary Islands. The ocean near Cabo Verde is an area of tropical cyclone formation; since these storms have the whole Atlantic over which to develop as they move westward, they are among the most intense hurricanes and are called Cape Verde-type hurricanes. The Cape Verde islands are an environmentally degraded area. Most islands do not receive the monsoon every year, and rainfall on many of them is very limited. If rain arrives, it is usually between August and October. This first "rainy season" brings high temperatures and high humidity that condenses as dew on the mountains. The other rainy season is between December and June, when the northeast trade winds are common; during this season only altitudes above 600 m tend to receive regular rain. The island of Sal receives an average of 0 mm in May. When the rain comes, if it comes, it can be very strong; half of the rain in a particular year often falls in a single storm. Most of the Cape Verde islands are dry, but on islands with high mountains that lie farther from the continental land mass, orography makes the humidity much higher, supporting a rainforest-like habitat, though one heavily degraded by the strong human presence. Northeastern slopes of high mountains often receive a lot of rain while southwestern slopes do not. This is because the former are shaded "umbria" areas: north-facing hillsides of the mountainous areas that are oriented away from the sun in the Northern Hemisphere, on the shady, orographically sheltered side. The amount of solar radiation they receive is therefore much lower than it would be without the island's relief, which intercepts much of the sunlight. In terms of botanical ecology, these umbria areas are identified as being cool and moist. The Canary Current, which flows south from the region of the Canary Islands, has a cooling effect on the islands of Cabo Verde, making the air temperature more bearable than one would otherwise expect at this latitude. Conversely, the islands do not receive the upwellings (cold streams) that affect the West African coast, so the air temperature is cooler than in Senegal while the sea is actually warmer. On islands with steep orographic relief, such as São Miguel, the humid air condenses and soaks the plants, rocks, soil, logs, and moss, covering the slopes with rich woods and luxuriant vegetation. Hurricanes often begin forming in the waters around the islands of Cabo Verde, but they rarely reach full strength close to the islands. A Cape Verde-type hurricane forms in the area south of the islands, near São Miguel, from a tropical wave coming off the African continent during the rainy season; the storm picks up strength as it crosses the warm waters of the Atlantic. The laurel forest is a type of cloud forest; cloud forests develop preferentially on mountains, where the dense moisture from the sea or ocean is precipitated by the action of the relief. 
When the terrain opposes a front of warm, moist air, it forces that air mass to rise; the air cools toward its dew point and part of its moisture condenses, falling as rain or fog and creating an especially cool habitat saturated with moisture in the air and soil. The balance between the dry, warm influence of the subtropical anticyclone, with its hot and dry summers, and the orography that brings in cool, wet air is responsible for this climate. At higher latitudes the impact of the storms increases; in their journey from west to east they sweep the western coasts of continents, carrying high humidity and dumping heavy rains, and precipitation multiplies when these air masses cross mountains on the way. The resulting climate is wetter, with an annual oscillation of temperature moderated by the proximity of the ocean. These cloud forests appear mostly in favourable areas known geographically as umbrias: north-facing hillsides or slopes of the mountainous areas that are oriented away from the sun (since the islands are in the Northern Hemisphere), between 600 and 1,500 metres, benefiting from the humidity provided by the trade winds, which forms a sea of clouds. In botanical ecology, the mountain umbria is identified with coolness and moisture. Flora Cape Verde is the driest archipelago of the Macaronesia ecoregion, and the one with the greatest influence of African species owing to its geographical position near the African mainland and the Sahel. Originally the islands of Cabo Verde supported extensive savanna and dry forest cover, but most of it was cleared for agricultural land, which, together with the arid climate and rugged terrain, has led to widespread soil erosion and desertification. However, the archipelago can be divided into four broad ecological zones (arid, semiarid, subhumid and humid), according to altitude and average annual rainfall, which ranges from 200 mm in the arid coastal areas to more than 1,000 mm in the humid mountains. Most precipitation is due to condensation of the ocean mist. Today much of the forest cover comprises relatively immature agroforestry plantations that use non-native species such as Prosopis juliflora, Leucaena leucocephala and Jatropha curcas. The native laurel forest species survive only in wet mountainous areas. On the lower and drier islands the vegetation, before human colonization, consisted of savanna or steppe, with the flattest inland portions supporting semi-desert plants; at higher altitudes, a form of arid shrubland was also present. These islands were covered with savanna on the plains and arid shrubland on the mountainsides, but after over 500 years of human habitation (following Portuguese colonisation) nearly all the original vegetation has been cleared in favour of widespread agriculture, including the grazing of goats, sheep and cattle and the planting of imported crop species. There are some remaining patches of dry forest high on steep mountainsides, including a number of endemic plant species, but these are inaccessible and hard to study. On the higher and somewhat wetter islands, exclusively in mountainous areas such as Santo Antão, the climate is suitable for the development of dry monsoon forest and laurel forest, and this vegetation is believed to have been present in the past. 
However, most vegetation has now been converted to agriculture, and forest fragments are restricted to areas where cultivation is not possible, such as mountain peaks and steep slopes. The demand for wood has resulted in deforestation and desertification. Of particular note is the endemic type of humid subtropical laurel forest, the Macaronesian laurisilva, found on several of the Macaronesian islands of the North Atlantic, namely the Madeira Islands, the Azores, the Cape Verde Islands and the Canary Islands, as well as in Macaronesian enclaves on the African mainland; these forests are a relic of the Pliocene subtropical forests and support numerous endemic species. These laurisilva forests are found on the islands of Macaronesia in the eastern Atlantic, in particular the Azores, Madeira Islands, and western Canary Islands, from 400 m to 1,200 m elevation. Trees of the genera Apollonias (Lauraceae), Ocotea (Lauraceae), Persea (Lauraceae), Clethra (Clethraceae), Dracaena (Ruscaceae), and Picconia (Oleaceae) are characteristic. The Madeira Islands laurel forest was designated a World Heritage Site by UNESCO in 1999. Fauna There are four endemic bird species, including the Raso lark, along with more common swifts, larks, warblers, and sparrows. The islands are an important breeding site for seabirds including the Cape Verde shearwater and Fea's petrel (Pterodroma feae), which breeds only here and in Madeira. Santiago Island holds the only breeding site of the endemic and critically endangered Bourne's heron. The 11 endemic reptile species include a giant gecko (Tarentola gigas), and there are other geckos and skinks in abundance. The giant skink (Macroscincus coctei) is now thought to be extinct. Threats and protection Almost all of the natural environment has been destroyed by conversion to agriculture and logging for firewood, as well as natural soil erosion, all of which has threatened several species of birds and reptiles. The remaining original forest exists at high altitudes only. Newer problems include illegal beach sand extraction and overfishing, while nesting birds are vulnerable to introduced mammals, including cats and rats. Environment - international agreements Party to: Biodiversity, Climate Change, Climate Change-Kyoto Protocol, Climate Change-Paris Agreement, Comprehensive Nuclear Test Ban, Desertification, Endangered Species, Environmental Modification, Hazardous Wastes, Law of the Sea, Marine Dumping, Nuclear Test Ban, Ozone Layer Protection, Ship Pollution, Wetlands Extreme points Northernmost point - Ponta do Sol on Santo Antão Island Southernmost point - Ponta Nho Martinho on Brava Westernmost point - Ponta Chao de Mongrade on Santo Antão* Easternmost point - Ponta Meringuel on Boa Vista *Note: this is also the westernmost point of Africa References
https://en.wikipedia.org/wiki/Politics%20of%20Cape%20Verde
Politics of Cape Verde
Politics of Cape Verde takes place in a framework of a semi-presidential representative democratic republic, whereby the Prime Minister of Cape Verde is the head of government and the President of the Republic of Cape Verde is the head of state, and of a multi-party system. Executive power is exercised by the president and the government. Legislative power is vested in both the government and the National Assembly. The judiciary is independent of the executive and the legislature. The constitution, first approved in 1980 and substantially revised in 1992, forms the basis of government organization. It declares that the government is the "organ that defines, leads, and executes the general internal and external policy of the country" and is responsible to the National Assembly. Political conditions Following independence in 1975, the African Party for the Independence of Guinea and Cape Verde (PAIGC) established a one-party political system. This became the African Party for the Independence of Cape Verde (PAICV) in 1980, as Cape Verde sought to distance itself from Guinea-Bissau, following unrest in that country. In 1991, following growing pressure for a more pluralistic society, multi-party elections were held for the first time. The opposition party, the Movement for Democracy (Movimento para a Democracia, MpD), won the legislative elections, and formed the government. The MpD candidate also defeated the PAICV candidate in the presidential elections. In the 1996 elections, the MpD increased their majority, but in the 2001 the PAICV returned to power, winning both the Legislative and the Presidential elections. Generally, Cape Verde enjoys a stable democratic system. The elections have been considered free and fair, there is a free press, and the rule of law is respected by the State. In acknowledgment of this, Freedom House granted Cape Verde two first places in its annual Freedom in the World report, a perfect score. It is the only African country to receive this score. The Prime Minister is the head of the government and as such proposes other ministers and secretaries of state. The Prime Minister is nominated by the National Assembly and appointed by the President. The President is the head of state and is elected by popular vote for a five-year term; the most recent elections were held in 2021. Also in the legislative branch, the National Assembly (Assembleia Nacional) has 72 members, elected for a five-year term by proportional representation. Movement for Democracy (MpD) ousted the ruling African Party for the Independence of Cape Verde (PAICV) for the first time in 15 years in the 2016 parliamentary election. The leader of MpD, Ulisses Correia e Silva has been prime minister since 2016. Jorge Carlos Almeida Fonseca was elected president in August 2011 and re-elected in October 2016. He is also supported by MpD. In April 2021, the ruling centre-right Movement for Democracy (MpD) of Prime Minister Ulisses Correia e Silva, won the parliamentary election. In October 2021, opposition candidate and former prime minister, Jose Maria Neves of PAICV, won Cape Verde's presidential election. On 9 November 2021, Jose Maria Neves was sworn in as the new President of Cape Verde. Political parties and elections Courts and criminal law The judicial system is composed of the Supreme Court and the regional courts. Of the five Supreme Court judges, one is appointed by the President, one by the National Assembly, and three by the Superior Judiciary Council. 
This council consists of the President of the Supreme Court, the Attorney General, eight private citizens, two judges, two prosecutors, the senior legal inspector of the Attorney General's office, and a representative of the Ministry of Justice. Judges are independent and may not belong to a political party. In October 2000, a female judge who was known for taking strict legal measures in cases of domestic violence was transferred from the capital to the countryside. Separate courts hear civil, constitutional and criminal cases. Appeal is to the Supreme Court. Reforms to strengthen an overburdened judiciary were implemented in 1998. Free legal counsel is provided to indigents, defendants are presumed innocent until proven guilty, and trials are public. Judges must lay charges within 24 hours of arrests. The Constitution provides for an independent judiciary, and the government generally respects this provision in practice. The constitution provides for the right to a fair trial and due process, and an independent judiciary usually enforces this right. Unlike in the previous year, there were no reports of politicization and biased judgement in the judiciary. Cases involving former public office holders still are under investigation. For example, investigations continued in the case of the former prime minister accused of embezzlement in the privatization of ENACOL (a parastatal oil supply firm) in which he allegedly embezzled approximately $16,250 (2 million Cape Verdean escudos) from the buyers of the parastatal. The case of four persons accused of church desecration in 1996 also was under investigation. These individuals filed a complaint with the Attorney General against the judiciary police for alleged fabrication of evidence. The constitution provides for the right to a fair trial. Defendants are presumed to be innocent; they have the right to a public, non-jury trial; to counsel; to present witnesses; and to appeal verdicts. Regional courts adjudicate minor disputes on the local level in rural areas. The Ministry of Justice does not have judicial powers; such powers lie with the courts. The judiciary generally provides due process rights; however, the right to an expeditious trial is constrained by a seriously overburdened and understaffed judicial system. A backlog of cases routinely leads to trial delays of 6 months or more; more than 10,780 cases were pending at year's end. In addition the right of victims to compensation and recovery for pain and mental suffering are overlooked, due both to the low damage assessments imposed and ineffective enforcement of court sentences. Administrative divisions Cape Verde is divided into 22 municipalities (concelhos, singular - concelho): Boa Vista, Brava, Maio, Mosteiros, Paul, Porto Novo, Praia, Ribeira Grande, Ribeira Grande de Santiago, Sal, Santa Catarina, Santa Catarina do Fogo, Santa Cruz, São Domingos, São Filipe, São Lourenço dos Órgãos, São Miguel, São Nicolau, São Salvador do Mundo, São Vicente, Tarrafal, Tarrafal de São Nicolau. Voting rights for non citizens Article 24 of the Cape Verde Constitution states that alinea 3.: "Rights not conferred to foreigners and apatrids may be attributed to citizens of countries with Portuguese as an official language, except for access to functions of sovereignty organs, service in the armed forces or in the diplomatic career." alinea 4. 
"Active and passive electoral capacity can be attributed by law to foreigners and apatrid residents on the national territory for the elections of the members of the organs of the local municipalities." The website of the governmental Institute of Cape Verde Communities states that such a measure was adopted "to stimulate reciprocity from host countries of Cape Verdian migrants". A law nr. 36/V/97 was promulgated on August 25, 1997 regulating the "Statute of Lusophone Citizen", concerning nationals from any country member of the Community of Portuguese Language Countries (article 2), stating in its article 3 that "The lusophone citizen with residence in Cape Verde is recognized the active and passive electoral capacity for municipal elections, under conditions of the law. The lusophone citizen with residence in Cape Verde has the right to exercise political activity related to his electoral capacity." International organization participation ACCT, ACP, AfDB, AU, CCC, ECA, ECOWAS, FAO, G-77, IBRD, ICAO, ICRM, IDA, IFAD, IFC, IFRCS, ILO, IMF, IMO, Intelsat, Interpol, IOC, IOM (observer), ITU, ITUC, NAM, OAU, OPCW, UN, UNCTAD, UNESCO, UNIDO, UPU, WHO, WIPO, WMO, WTO (applicant) Sources External links Government of Cape Verde National Assembly of Cape Verde Official site of the President of Cape Verde Chief of State and Cabinet Members Supreme Court EU Relations with Cape Verde
https://en.wikipedia.org/wiki/Cape%20Verdean%20Armed%20Forces
Cape Verdean Armed Forces
The Cape Verdean Armed Forces, also written Cabo Verdean Armed Forces (FACV), are the military of Cape Verde. They include two branches, the National Guard and the Coast Guard. History Before 1975, Cape Verde was an overseas province of Portugal, having a small Portuguese military garrison that included both Cape Verdean and European Portuguese soldiers. At the same time, some Cape Verdeans were serving in the People's Revolutionary Armed Forces (FARP), the military wing of the African Party for the Independence of Guinea and Cape Verde that was fighting for the joint independence of Guinea and Cape Verde in the Guinea-Bissau War of Independence. The FARP became the national armed forces of Guinea-Bissau when its independence was recognized by Portugal in 1974. The Armed Forces of Cape Verde were created when the country became independent in 1975, being also officially designated the People's Revolutionary Armed Forces (FARP). The Cape Verdean FARP consisted of two independent branches, the Army and the Coast Guard. In the early 1990s, the designation "FARP" was dropped and the military of Cape Verde was designated the Cape Verdean Armed Forces (FACV). In 2007, the FACV started a major reorganization that included the transformation of the Army into the National Guard. Together with the Cape Verdean Police, the FACV carried out Operation Flying Launch, a successful operation to put an end to a drug trafficking group which smuggled cocaine from Colombia to the Netherlands and Germany using Cape Verde as a reorder point. The operation took more than three years, being a secret operation during the first two years, and ended in 2010. Although located in Africa, Cape Verde has always had close relations with Europe. Because of this, it has been argued that Cape Verde may be eligible for entry into the Organization for Security and Co-operation in Europe and NATO. The most recent engagement of the FACV was the Monte Tchota massacre that resulted in 11 deaths. Structure The Cape Verdean Armed Forces are part of the Ministry of National Defense of Cape Verde and include: the military bodies of command: Chief of Staff of the Armed Forces Office of the CEMFA Staff of the Armed Forces (EMFA) Personnel Command Logistics Command the National Guard the Coast Guard National Guard The National Guard (Guarda Nacional) is the main branch of the Cape Verdean Armed Forces for the military defense of the country, responsible for carrying out land and maritime operations and for supporting internal security. It includes: Territorial commands: 1st Military Region Command 2nd Military Region Command 3rd Military Region Command Corps: Military Police Corps Marine Corps Artillery Corps There is no general command of the National Guard. Each military region command is headed by a lieutenant-colonel directly subordinate to the Chief of Staff of the Armed Forces, and includes units of the three corps. Coast Guard The Coast Guard (Guarda Costeira) is the branch of the Cape Verdean Armed Forces responsible for the defense and protection of the country's economic interests in the waters under national jurisdiction and for providing air and naval support to land and amphibious operations. It includes: Coast Guard Command Maritime Security Operations Center (COSMAR) Naval Squadron Air Squadron The Coast Guard is headed by an officer with the rank of lieutenant-colonel. The Naval and Air Squadrons incorporate, respectively, all the vessels and aircraft of the Cape Verdean Armed Forces. 
Ranks Rank insignia are worn by commissioned officers and by enlisted personnel of the National Guard and Coast Guard. Equipment Armored vehicles 10 BRDM-2 Artillery 12 82-PM-41 6 120-PM-43 mortar Anti-aircraft 9K32 Strela-2 18 ZPU-1 12 ZU-23-2 Aircraft The Cape Verdean Army used to have its own air arm; after personnel training received from the USSR in 1982, three Antonov An-26 aircraft were delivered to Cape Verde – these were believed to be the only military aircraft possessed by the nation. However, these three aircraft were supplemented in 1991 by a Dornier 228 light aircraft equipped for use by the Coast Guard and, in the late 1990s, by an EMB-110 aircraft from Brazil, similarly equipped for maritime operations. The government has been in negotiations with China to acquire multirole helicopters for both military and civilian use. Current Inventory Vessels 1 Kondor I patrol craft – 360 tons full load – commissioned 1970 1 Peterson MK 4 patrol craft – 22 tons – commissioned 1993 1 other patrol craft – 55 tons 1 Damen Stan 5009 patrol vessel – Guardiao (P511) – commissioned 2012 References Further reading: Defense Intelligence Agency, Military Intelligence Summary - Africa South of the Sahara, DDB 2680-104-85, ICOD 15 October 1984, declassified by letter dated April 29, 2014. External links The World Factbook http://www.nationmaster.com/country/cv-cape-verde/mil-military https://web.archive.org/web/20110721071532/http://praia.usembassy.gov/about-us/security-assistance-office.html http://www.snpc.cv/ Cape Verde Military Police Government of Cape Verde
https://en.wikipedia.org/wiki/Economy%20of%20the%20Cayman%20Islands
Economy of the Cayman Islands
The economy of the Cayman Islands, a British overseas territory located in the western Caribbean Sea, is mainly fueled by the tourism sector and by the financial services sector, together representing 50–60 percent of the country's gross domestic product (GDP). The Cayman Islands Investment Bureau, a government agency, has been established with the mandate of promoting investment and economic development in the territory. Because of the territory’s economic success and it being a popular banking destination for wealthy individuals and businesses, it is often dubbed the ‘financial capital’ of the Caribbean. The emergence of what are now considered the Cayman Islands' "twin pillars of economic development" (tourism and international finance) started in the 1950s with the introduction of modern transportation and telecommunications. History From the earliest settlement of the Cayman Islands, economic activity was hindered by isolation and a limited natural resource base. The harvesting of sea turtles to resupply passing sailing ships was the first major economic activity on the islands, but local stocks were depleted by the 1790s. Agriculture, while sufficient to support the small early settler population, has always been limited by the scarcity of arable land. Fishing, shipbuilding, and cotton production boosted the economy during the early days of settlement. In addition, settlers scavenged shipwreck remains from the surrounding coral reefs. The boom in the Cayman Islands' international finance industry can also be at least partly attributed to the British overseas territory having no direct taxation. A popular legend attributes the tax-free status to the heroic acts of the inhabitants during a maritime tragedy in 1794, often referred to as "Wreck of the Ten Sails". The wreck involved nine British merchant vessels and their naval escort, the frigate HMS Convert, that ran aground on the reefs off Grand Cayman. Due to the rescue efforts by the Caymanians using canoes, the loss of life was limited to eight. However, records from the colonial era indicate that Cayman Islands, then a dependency of Jamaica, was not tax-exempt during the period that followed. In 1803, the inhabitants signed a petition addressed to the Jamaican governor asking him to grant them a tax exemption from the "Transient Tax on Wreck Goods". Sir Vassel Johnson, the second caymanian to be knighted, was a pioneer of Cayman's financial services industry. Cayman Islands Past Governor Stuart Jack said 'As one of the architects of modern Cayman, especially the financial industry, Sir Vassel guided the steady growth of these Islands as the first financial secretary. His remarkable vision set the foundation for the prosperity and economic stability of these islands. Without his input, Cayman might well have remained the islands that time forgot.' International finance The Cayman Islands' tax-free status has attracted numerous banks and other companies to its shores. More than 92,000 companies were registered in the Cayman Islands as of 2014, including almost 600 banks and trust companies, with banking assets exceeding $500 billion. Numerous large corporations are based in the Cayman Islands, including, for example, Semiconductor Manufacturing International Corporation (SMIC). The Cayman Islands Stock Exchange was opened in 1997. Financial services industry The Cayman Islands is a major international financial centre. 
The largest sectors are "banking, hedge fund formation and investment, structured finance and securitisation, captive insurance, and general corporate activities". Regulation and supervision of the financial services industry is the responsibility of the Cayman Islands Monetary Authority (CIMA). Sir Vassel Johnson was a pioneer of Cayman's financial services industry. Sir Vassel, who became the only Caymanian ever knighted in 1994, served as the Cayman Islands financial secretary from 1965 through 1982 and then as an Executive Council member from 1984 through 1988. In his government roles, Sir Vassel was a driving force in shaping the Cayman Islands financial services industry. The Cayman Islands is the fifth-largest banking centre in the world, with $1.5 trillion in banking liabilities . In March 2017 there were 158 banks, 11 of which were licensed to conduct banking activities with domestic (Cayman-based) and international clients, and the remaining 147 were licensed to operate on an international basis with only limited domestic activity. Financial services generated KYD$1.2 billion of GDP in 2007 (55% of the total economy), 36% of all employment and 40% of all government revenue. In 2010, the country ranked fifth internationally in terms of value of liabilities booked and sixth in terms of assets booked. It has branches of 40 of the world's 50 largest banks. The Cayman Islands is the second largest captive domicile (Bermuda is largest) in the world with more than 700 captives, writing more than US$7.7 billion of premiums and with US$36.8 billion of assets under management. There are a number of service providers. These include global financial institutions including HSBC, Deutsche Bank, UBS, and Goldman Sachs; over 80 administrators, leading accountancy practices (incl. the Big Four auditors), and offshore law practices including Maples & Calder. They also include wealth management such as Rothschilds private banking and financial advice. Since the introduction of the Mutual Funds Law in 1993, which has been copied by jurisdictions around the world, the Cayman Islands has grown to be the world's leading offshore hedge fund jurisdiction. In June 2008, it passed 10,000 hedge fund registrations, and over the year ending June 2008 CIMA reported a net growth rate of 12% for hedge funds. Starting in the mid-late 1990s, offshore financial centres, such as the Cayman Islands, came under increasing pressure from the OECD for their allegedly harmful tax regimes, where the OECD wished to prevent low-tax regimes from having an advantage in the global marketplace. The OECD threatened to place the Cayman Islands and other financial centres on a "black list" and impose sanctions against them. However, the Cayman Islands successfully avoided being placed on the OECD black list in 2000 by committing to regulatory reform to improve transparency and begin information exchange with OECD member countries about their citizens. In 2004, under pressure from the UK, the Cayman Islands agreed in principle to implement the European Union Savings Directive (EUSD), but only after securing some important benefits for the financial services industry in the Cayman Islands. As the Cayman Islands is not subject to EU laws, the implementation of the EUSD is by way of bilateral agreements between each EU member state and the Cayman Islands. The government of the Cayman Islands agreed on a model agreement, which set out how the EUSD would be implemented with the Cayman Islands. 
A report published by the International Monetary Fund (IMF), in March 2005, assessing supervision and regulation in the Cayman Islands' banking, insurance and securities industries, as well as its money laundering regime, recognised the jurisdiction's comprehensive regulatory and compliance frameworks. "An extensive program of legislative, rule and guideline development has introduced an increasingly effective system of regulation, both formalizing earlier practices and introducing enhanced procedures", noted IMF assessors. The report further stated that "the supervisory system benefits from a well-developed banking infrastructure with an internationally experienced and qualified workforce as well as experienced lawyers, accountants and auditors", adding that, "the overall compliance culture within Cayman is very strong, including the compliance culture related to AML (anti-money laundering) obligations". On 4 May 2009, the United States President, Barack Obama, declared his intentions to curb the use of financial centres by multinational corporations. In his speech, he singled out the Cayman Islands as a tax shelter. The next day, the Cayman Island Financial Services Association submitted an open letter to the president detailing the Cayman Islands' role in international finance and its value to the US financial system. The Cayman Islands was ranked as the world's second most significant tax haven on the Tax Justice Network's "Financial Secrecy Index" from 2011, scoring slightly higher than Luxembourg and falling behind only Switzerland. In 2013, the Cayman Islands was ranked by the Financial Secrecy Index as the fourth safest tax haven in the world, behind Hong Kong but ahead of Singapore. In the first conviction of a non-Swiss financial institution for US tax evasion conspiracy, two Cayman Islands financial institutions pleaded guilty in Manhattan Federal Court in 2016 to conspiring to hide more than $130 million in Cayman Islands bank accounts. The companies admitted to helping US clients hide assets in offshore accounts, and agreed to produce account files of non-compliant US taxpayers. Foreign Account Tax Compliance Act On 30 June 2014, the tax jurisdiction of the Cayman Islands was deemed to have an inter-governmental agreement (IGA) with the United States of America with respect to the "Foreign Account Tax Compliance Act" of the United States of America. The Model 1 Agreement recognizes: The Tax Information Exchange Agreement (TIEA) between the United States of America and The Cayman Islands which was signed in London, United Kingdom on 29 November 2013. Page 1 – Clause 2 of the FATCA Agreement. The Government of Great Britain and Northern Ireland provided a copy of the Letter of Entrustment which was sent to the Government of the Cayman Islands, to the Government of the United States of America "via diplomatic note of October 16, 2013". The Letter of Entrustment dated 20 October 2013, The Govt of Great Britain and Northern Ireland, authorized the Govt of the Cayman Islands to sign an agreement on information exchange to facilitate the Implementation of the Foreign Account Tax Compliance Act – Page 1 – Clause 10. On 26 March 2017, the US Treasury site disclosed that the Model 1 agreement and related agreement were "In Force" on 1 July 2014. 
Sanctions and Anti-Money Laundering Act Under the UK Sanctions and Anti-Money Laundering Act of 2018, beneficial ownership of companies in British overseas territories such as the Cayman Islands must be publicly registered for disclosure by 31 December 2020. The Government of the Cayman Islands plans to challenge this law, arguing that it violates the constitutional sovereignty granted to the islands. The British National Crime Agency said in September 2018 that the authorities in the Cayman Islands were not supplying information about the beneficial ownership of firms registered in the Cayman Islands. Tourism Tourism is also a mainstay, accounting for about 70% of GDP and 75% of foreign currency earnings. The tourist industry is aimed at the luxury market and caters mainly to visitors from North America. Unspoiled beaches, duty-free shopping, scuba diving, and deep-sea fishing draw almost a million visitors to the islands each year. Due to the well-developed tourist industry, many citizens work in service jobs in that sector. Diversification The Cayman Islands is seeking to diversify beyond its two traditional industries, and invest in health care and technology. Health City Cayman Islands, opened in 2014, is a medical tourism hospital in East End, led by surgeon Devi Shetty. Cayman Enterprise City is a special economic zone that was opened in 2011 for technology, finance, and education investment. Cayman Sea Salt (producing gourmet sea salt) and Cayman Logwood products are now made in the Cayman Islands. Standard of living Because the islands cannot produce enough goods to support the population, about 90% of their food and consumer goods must be imported. In addition, the islands have few natural fresh water resources. Desalination of sea water is used to address this. Despite those challenges, the Caymanians enjoy one of the highest outputs per capita and one of the highest standards of living in the world. Education is compulsory to the age of 16 and is free to all Caymanian children. Most schools follow the British educational system. Ten primary schools, one special education school and two high schools ('junior high' and 'senior high') are operated by the government, along with eight private high schools. In addition, there is a law school, a university college and a medical school. Poverty relief is provided by the Needs Assessment Unit, a government agency established by the Poor Persons (Relief) Law in January 1964. References See also Economy of the Caribbean Cayman Islands dollar Cayman Islands Monetary Authority Cayman Islands Stock Exchange Central banks and currencies of the Caribbean List of countries by credit rating List of Commonwealth of Nations countries by GDP List of Latin American and Caribbean countries by GDP growth List of Latin American and Caribbean countries by GDP (nominal) List of Latin American and Caribbean countries by GDP (PPP) List of countries by tax revenue as percentage of GDP List of countries by future gross government debt List of countries by leading trade partners
https://en.wikipedia.org/wiki/Transport%20in%20the%20Cayman%20Islands
Transport in the Cayman Islands
The transport infrastructure of the Cayman Islands consists of a public road network, two seaports, and three airports. Roads As of 2000, the Cayman Islands had a total of 488 miles (785 km) of paved highway. Driving is on the left, and speed is reckoned in miles per hour, as in the UK. The legal blood alcohol content is 100 mg per 100 ml (0.1%), the highest in the world. Seaports Two ports, Cayman Brac and George Town, serve the islands. One hundred and twenty-three ships (of 1,000 GT or more) are registered in the Cayman Islands, with a total capacity of 2,402,058 GT. Some foreign ships (including vessels from Cyprus, Denmark, Greece, Norway, the UK, and US) are registered in the Cayman Islands under a flag of convenience. (All figures are 2002 estimates.) Airports There are three airports on the Islands. The main airport is Owen Roberts International Airport, serving Grand Cayman; Charles Kirkconnell International Airport serves Cayman Brac, and Edward Bodden Airfield serves Little Cayman. Buses A fleet of share-taxi minibuses serves Grand Cayman. A daily service starts at 6.00 from the depot and runs as follows from George Town to: West Bay — every 15 minutes: 6.00–23.00 (24.00 on Fr, Sa). CI$1.50 each way. Bodden Town — every 30 minutes: 6.00–23.00 (24.00 on Fr, Sa). CI$1.50 each way. East End and North Side — every hour, 6.00–21.00 (24.00 on Fr). CI$2 each way. Colour-coded logos on the front and rear of the buses (white mini-vans) identify the routes. See also Cayman Islands References
https://en.wikipedia.org/wiki/Central%20African%20Armed%20Forces
Central African Armed Forces
The Central African Armed Forces (FACA) are the armed forces of the Central African Republic and have been barely functional since the outbreak of the civil war in 2012. Today they are among the world's weakest armed forces, dependent on international support to provide security in the country. In recent years the government has struggled to form a unified national army. It consists of the Ground Force (which includes the air service), the gendarmerie, and the National Police. Its disloyalty to the president came to the fore during the mutinies in 1996–1997, and the army has faced internal problems since then. It has been strongly criticised by human rights organisations for acts of terror, including killings, torture and sexual violence. In 2013, when militants of the Séléka rebel coalition seized power and overthrew President Bozizé, they executed many FACA troops. History Role of military in domestic politics The military has played an important role in the history of the Central African Republic. The immediate former president, General François Bozizé, was a former army chief-of-staff, and his government included several high-level military officers. Among the country's five presidents since independence in 1960, three have been former army chiefs-of-staff, who have taken power through coups d'état. No president with a military background has, however, ever been succeeded by a new military president. The country's first president, David Dacko, was overthrown by his army chief-of-staff, Jean-Bédel Bokassa, in 1966. Following the ousting of Bokassa in 1979, David Dacko was restored to power, only to be overthrown once again in 1981 by his new army chief of staff, General André Kolingba. In 1993, Ange-Félix Patassé became the Central African Republic's first elected president. He soon became unpopular within the army, resulting in violent mutinies in 1996–1997. In May 2001, there was an unsuccessful coup attempt by Kolingba and once again Patassé had to turn to friends abroad for support, this time Libya and DR Congo. Some months later, at the end of October, Patassé sacked his army chief-of-staff, François Bozizé, and attempted to arrest him. Bozizé then fled to Chad and gathered a group of rebels. In 2002, he seized Bangui for a short period, and in March 2003 took power in a coup d'état. Importance of ethnicity When General Kolingba became president in 1981, he implemented an ethnicity-based recruitment policy for the administration. Kolingba was a member of the Yakoma people from the south of the country, which made up approximately 5% of the total population. During his rule, members of the Yakoma were granted all key positions in the administration and made up a majority of the military. This later had disastrous consequences when Kolingba was replaced by a member of a northerner tribe, Ange-Félix Patassé. Army mutinies of 1996–1997 Soon after the 1993 election, Patassé became unpopular within the army, not least because of his inability to pay their wages (partly due to economic mismanagement and partly because France suddenly ended its economic support for the soldiers' wages). Another reason for the irritation was that most of FACA consisted of soldiers from Kolingba's ethnic group, the Yakoma. During Patassé's rule they had become increasingly marginalised, while he created militias favouring his own Gbaya tribe, as well as neighbouring Sara and Kaba. 
This resulted in army mutinies in 1996–1997, where fractions of the military clashed with the presidential guard, the Unité de sécurité présidentielle (USP) and militias loyal to Patassé. On April 18, 1996, between 200 and 300 soldiers mutinied, claiming that they had not received their wages since 1992–1993. The confrontations between the soldiers and the presidential guard resulted in 9 dead and 40 wounded. French forces provided support (Operation Almandin I) and acted as negotiators. The unrest ended when the soldiers were finally paid their wages by France and the President agreed not to start legal proceedings against them. On May 18, 1996, a second mutiny was led by 500 soldiers who refused to be disarmed, denouncing the agreement reached in April. French forces were once again called to Bangui (Operation Almadin II), supported by the militaries of Chad and Gabon. 3,500 foreigners were evacuated during the unrest, which left 43 persons dead and 238 wounded. On May 26, a peace agreement was signed between France and the mutineers. The latter were promised amnesty, and were allowed to retain their weapons. Their security was ensured by the French military. On November 15, 1996, a third mutiny took place, and 1,500 French soldiers were flown in to ensure the safety of foreigners. The mutineers demanded the discharge of the president. On 6 December, a negotiation process started, facilitated by Gabon, Burkina-Faso, Chad and Mali. The military — supported by the opposition parties — insisted that Patassé had to resign. In January, 1997, however, the Bangui Agreements were signed and the French EFAO troop were replaced by the 1,350 soldiers of the Mission interafricaine de surveillance des Accords de Bangui (MISAB). In March, all mutineers were granted amnesty. The fighting between MISAB and the mutineers continued with a large offensive in June, resulting in up to 200 casualties. After this final clash, the mutineers calmed. After the mutinies, President Patassé suffered from a typical "dictator's paranoia", resulting in a period of cruel terror executed by the presidential guard and various militias within the FACA loyal to the president, such as the Karako. The violence was directed against the Yakoma tribe, of which it is estimated that 20,000 persons fled during this period. The oppression also targeted other parts of the society. The president accused his former ally France of supporting his enemies and sought new international ties. When he strengthened his presidential guard (creating the FORSIDIR, see below), Libya sent him 300 additional soldiers for his own personal safety. When former President Kolingba attempted a coup d'état in 2001 (which was, according to Patassé, supported by France), the Movement for the Liberation of the Congo (MLC) of Jean-Pierre Bemba in DR Congo came to his rescue. Crimes conducted by Patassé's militias and Congolese soldiers during this period are now being investigated by the International Criminal Court, who wrote that "sexual violence appears to have been a central feature of the conflict", having identified more than 600 rape victims. Present situation The FACA has been dominated by soldiers from the Yakoma ethnic group since the time of Kolingba. It has hence been considered disloyal by the two northerner presidents Patassé and Bozizé, both of whom have equipped and run their own militias outside FACA. The military also proved its disloyalty during the mutinies in 1996–1997. 
Although Francois Bozizé had a background in FACA himself (being its chief-of-staff from 1997 to 2001), he was cautious, retaining the defence portfolio himself and appointing his son Jean-Francis Bozizé as cabinet director in charge of running the Ministry of Defence. He kept his old friend General Antoine Gambi as Chief of Staff. Due to his failure to curb deepening unrest in the northern part of the country, Gambi was replaced in July 2006 with Bozizé's old friend from the military academy, Jules Bernard Ouandé. Military's relations with society The forces assisting Bozizé in seizing power in 2003 were not paid what they were promised and started looting, terrorising and killing ordinary citizens. Summary executions took place with the implicit approval of the government. The situation has deteriorated since early 2006, and the regular army and the presidential guard regularly commit extortion, torture, killings and other human rights violations. There is no possibility for the national judicial system to investigate these cases. At the end of 2006, there were an estimated 150,000 internally displaced people in CAR. During a UN mission in the northern part of the country in November 2006, the mission had a meeting with a prefect who said that he could not maintain law and order over the military and the presidential guards. The FACA currently conducts summary executions and burns houses. On the route between Kaga-Bandoro and Ouandago some 2,000 houses have been burnt, leaving an estimated 10,000 persons homeless. Reform of the army Both the Multinational Force in the Central African Republic (FOMUC) and France are assisting in the current reform of the army. One of the key priorities of the reform of the military is to make it more ethnically diverse. It should also integrate Bozizé's own rebel group (mainly consisting of members of his own Gbaya tribe). Many of the Yakoma soldiers who left the country after the mutinies in 1996–1997 have now returned and must also be reintegrated into the army. At the same time, BONUCA holds seminars on topics such as the relationship between the military and civil parts of society. In 2018 Russia sent mercenaries to help train and equip the CAR military, and by 2020 Russia had increased its influence in the region. Army equipment Most of the army's heavy weapons and equipment were destroyed or captured by Séléka militants during the 2012–2014 civil war. In the immediate aftermath of the war, the army was only in possession of 70 rifles. The majority of its arsenals were plundered during the fighting by the Séléka coalition and other armed groups. Thousands of the army's small arms were also distributed to civilian supporters of former President Bozizé in 2013. Prior to 2014, the army's stocks of arms and ammunition were primarily of French, Soviet, and Chinese origin. In 2018, the army's equipment stockpiles were partly revitalized by a donation of 900 pistols, 5,200 rifles, and 270 unspecified rocket launchers from Russia. Infantry weapons Vehicles Artillery Foreign military presence in support of the Government Peacekeeping and peace-enforcing forces Since the mutinies, a number of peacekeeping and peace-enforcing international missions have been present in the Central African Republic. There has been discussion of the deployment of a regional United Nations (UN) peacekeeping force in both Chad and the Central African Republic, in order to potentially shore up the ineffectual Darfur Peace Agreement. 
The missions deployed in the country during the last 10 years are the following: Chad In addition to the multilateral forces, CAR has received bilateral support from other African countries, such as the Libyan and Congolese assistance to Patassé mentioned above. Bozizé is in many ways dependent on Chad's support. Chad has an interest in CAR, since it needs to ensure calmness close to its oil fields and the pipeline leading to the Cameroonian coast, close to CAR's troubled northwest. Before seizing power, Bozizé built up his rebel force in Chad, trained and augmented by the Chadian military. Chadian President Déby assisted him actively in taking the power in March 2003 (his rebel forces included 100 Chadian soldiers). After the coup, Chad provided another 400 soldiers. Current direct support includes 150 non-FOMUC Chadian troops that patrol the border area near Goré, a contingent of soldiers in Bangui, and troops within the presidential lifeguard. The CEMAC Force includes 121 Chadian soldiers. France There has been an almost uninterrupted French military presence in Central African Republic since independence, regulated through agreements between the two Governments. French troops were allowed to be based in the country and to intervene in cases of destabilisation. This was particularly important during the cold war era, when Francophone Africa was regarded as a natural French sphere of influence. Additionally, the strategic location of the country made it a more interesting location for military bases than its neighbours, and Bouar and Bangui were hence two of the most important French bases abroad. However, in 1997, following Lionel Jospin's expression "Neither interference nor indifference", France came to adopt new strategic principles for its presence in Africa. This included a reduced permanent presence on the continent and increased support for multilateral interventions. In Central African Republic, the Bouar base and the Béal Camp (at that time home to 1,400 French soldiers) in Bangui were shut down, as the French concentrated its African presence to Abidjan, Dakar, Djibouti, Libreville and N'Djamena and the deployment of a Force d'action rapide, based in France. However, due to the situation in the country, France has retained a military presence. During the mutinies, 2,400 French soldiers patrolled the streets of Bangui. Their official task was to evacuate foreign citizens, but this did not prevent direct confrontations with the mutineers (resulting in French and mutineer casualties). The level of French involvement resulted in protests among the Central African population, since many sided with the mutineers and accused France of defending a dictator against the people's will. Criticism was also heard in France, where some blamed their country for its protection of a discredited ruler, totally incapable of exerting power and managing the country. After the mutinies in 1997, the MISAB became a multilateral force, but it was armed, equipped, trained and managed by France. The Chadian, Gabonese and Congolese troops of the current Force multinationale en Centrafrique (FOMUC) mission in the country also enjoy logistical support from French soldiers. A study carried out by the US Congressional Research Service revealed that France has again increased its arms sales to Africa, and that during the 1998–2005 period it was the leading supplier of arms to the continent. Components and units Air Force The Air Force is almost inoperable. 
Lack of funding has almost grounded the air force apart from an AS 350 Ecureuil delivered in 1987. Mirage F1 planes from the French Air Force regularly patrolled troubled regions of the country and also participated in direct confrontations until they were withdrawn and retired in 2014. According to some sources, Bozizé used the money he got from the mining concession in Bakouma to buy two old Mi-8 helicopters from Ukraine and one Lockheed C-130 Hercules, built in the 1950s, from the USA. In late 2019 Serbia offered two new Soko J-22 Orao attack aircraft to the CAR Air Force, but it is unknown whether the order was approved by the Air Force. The air force otherwise operates 7 light aircraft, including a single helicopter. Garde républicaine (GR) The Presidential Guard (garde présidentielle) or Republican Guard is officially part of FACA but it is often regarded as a separate entity under the direct command of the President. Since 2010 the Guard has received training from South Africa and Sudan, with Belgium and Germany providing support. GR consists of so-called patriots who fought for Bozizé when he seized power in 2003 (mainly from the Gbaya tribe), together with soldiers from Chad. They are guilty of numerous assaults on the civil population, such as terror, aggression, and sexual violence. Only a couple of months after Bozizé's seizure of power, in May 2003, taxi and truck drivers conducted a strike against these outrages. However, post-civil-war leaders have been cautious in attempting to significantly reform the Republican Guard. New amphibious force Bozizé has created an amphibious force. It is called the Second Battalion of the Ground Forces and it patrols the Ubangi river. The staff of the sixth region in Bouali (mainly made up of members of the former president's bodyguard) was transferred to the city of Mongoumba, located on the river. This city had previously been plundered by MLC forces that had crossed the CAR/Congo border. The riverine patrol force has approximately one hundred personnel and operates seven patrol boats. Veteran Soldiers A program for disarmament and reintegration of veteran soldiers is currently taking place. A national commission for disarmament, demobilisation and reintegration was put in place in September 2004. The commission is in charge of implementing a program wherein approximately 7,500 veteran soldiers will be reintegrated into civilian life and receive education. Discontinued groups and units that are no longer part of FACA Séléka rebels: the French document Spécial investigation: Centrafrique, au cœur du chaos envisions Séléka rebels as mercenaries under the command of the president. In the documentary the Séléka fighters seem to use a large number of M16 rifles in their fight against the Anti-balaka forces. FORSIDIR: The presidential bodyguard, the Unité de sécurité présidentielle (USP), was in March 1998 transformed into the Force spéciale de défense des institutions républicaines (FORSDIR). In contrast to the army – which consisted mainly of southern Yakoma members and was therefore considered unreliable by the northern president – this unit consisted of northerners loyal to the president. Before eventually being dissolved in January 2000, this highly controversial group became feared for its terror and troubled Patassé's relations with important international partners, such as France. Of its 1,400 staff, 800 were subsequently reintegrated into FACA, under the command of the chief-of-staff. 
The remaining 400 recreated the USP (once again under the command of the chief-of-staff).
Unité de sécurité présidentielle (USP): the USP was Patassé's presidential guard before and after FORSDIR. When he was overthrown by Bozizé in 2003, the USP was dissolved; while some of its soldiers have been absorbed by FACA, others are believed to have joined the pro-Patassé Democratic Front of the Central African People, a rebel group fighting FACA in the north of the country.
The Patriots or Liberators: accompanied Bozizé when he seized power in March 2003. They are now part of Bozizé's guard, the Garde républicaine, together with soldiers from Chad.
Office central de répression du banditisme (OCRB): the OCRB was a special police unit created to fight looting after the army mutinies of 1996 and 1997. It has carried out numerous summary executions and arbitrary detentions, for which it has never been brought to trial.
MLPC militia: the Mouvement de libération du peuple centrafricain (MLPC) was the political party of former president Patassé, and its armed wing was already active during the 1993 election. The militia was strengthened during the 1996 and 1997 mutinies, particularly through its Karako contingent. Its core consisted of Sara people from Chad and the Central African Republic, but during the mutinies it recruited many young people in Bangui.
RDC militia: the Rassemblement démocratique centrafricain (RDC) is the militia of the party of General Kolingba, who led the country during the 1980s. The RDC's militia is said to have camps in Mobaye and to have ties with former officials of Kolingba's "cousin" Mobutu Sese Seko in DR Congo.

References
"France donates equipment to CAR," Jane's Defence Weekly, 28 January 2004, p. 20. First of three planned battalions of the new army completed training and graduated on 15 January [2004]. See also JDW 12 November 2003.
Africa Research Bulletin: Political, Social and Cultural Series, Volume 43, Issue 12, Pages 16909A – 16910A, Published Online: 26 January 2007: Operation Boali, French aid mission to FACA

External links
CIA World Factbook
US Department of State – Bureau of African Affairs: Background note
"Spécial investigation: Centrafrique, au cœur du chaos", Giraf Prod, 13 January 2014

Government of the Central African Republic
Military of the Central African Republic
https://en.wikipedia.org/wiki/Chad
Chad
Chad, officially the Republic of Chad, is a landlocked country at the crossroads of North and Central Africa. It is bordered by Libya to the north, Sudan to the east, the Central African Republic to the south, Cameroon to the southwest, Nigeria to the southwest (at Lake Chad), and Niger to the west. Chad has a population of 16 million, of which 1.6 million live in the capital and largest city of N'Djamena. Chad has several regions: a desert zone in the north, an arid Sahelian belt in the centre and a more fertile Sudanian savanna zone in the south. Lake Chad, after which the country is named, is the second-largest wetland in Africa. Chad's official languages are Arabic and French. It is home to over 200 different ethnic and linguistic groups. Islam (55.1%) and Christianity (41.1%) are the main religions practiced in Chad.

Beginning in the 7th millennium BC, human populations moved into the Chadian basin in great numbers. By the end of the 1st millennium AD, a series of states and empires had risen and fallen in Chad's Sahelian strip, each focused on controlling the trans-Saharan trade routes that passed through the region. France conquered the territory by 1920 and incorporated it as part of French Equatorial Africa. In 1960, Chad obtained independence under the leadership of François Tombalbaye. Resentment towards his policies in the Muslim north culminated in the eruption of a long-lasting civil war in 1965. In 1979 the rebels conquered the capital and put an end to the South's hegemony. The rebel commanders then fought amongst themselves until Hissène Habré defeated his rivals. The Chadian–Libyan conflict erupted in 1978 with the Libyan invasion and ended in 1987 after a French military intervention (Operation Épervier). Hissène Habré was overthrown in turn in 1990 by his general Idriss Déby. With French support, a modernization of the Chad National Army was initiated in 1991. From 2003, the Darfur crisis in Sudan spilt over the border and destabilised the nation. Already poor, the nation and people struggled to accommodate the hundreds of thousands of Sudanese refugees who live in and around camps in eastern Chad. While many political parties participated in Chad's legislature, the National Assembly, power lay firmly in the hands of the Patriotic Salvation Movement during the presidency of Idriss Déby, whose rule was described as authoritarian. After President Déby was killed by FACT rebels in April 2021, the Transitional Military Council led by his son Mahamat Déby assumed control of the government and dissolved the Assembly.

Chad remains plagued by political violence and recurrent attempted coups d'état. It ranks second-lowest on the Human Development Index (0.394 in 2021, placing it 190th) and is a least developed country, being one of the poorest and most corrupt countries in the world. Most of its inhabitants live in poverty as subsistence herders and farmers. Since 2003 crude oil has become the country's primary source of export earnings, superseding the traditional cotton industry. Chad has a poor human rights record, with frequent abuses such as arbitrary imprisonment, extrajudicial killings, and limits on civil liberties by both security forces and armed militias.

History
In the 7th millennium BC, ecological conditions in the northern half of Chadian territory favored human settlement, and its population increased considerably.
Some of the most important African archaeological sites are found in Chad, mainly in the Borkou-Ennedi-Tibesti Region; some date to earlier than 2000 BC. For more than 2,000 years, the Chadian Basin has been inhabited by agricultural and sedentary people. The region became a crossroads of civilizations. The earliest of these was the legendary Sao, known from artifacts and oral histories. The Sao fell to the Kanem Empire, the first and longest-lasting of the empires that developed in Chad's Sahelian strip by the end of the 1st millennium AD. Two other states in the region, the Sultanate of Bagirmi and the Wadai Empire, emerged in the 16th and 17th centuries. The power of Kanem and its successors was based on control of the trans-Saharan trade routes that passed through the region. These states, at least tacitly Muslim, never extended their control to the southern grasslands except to raid for slaves. In Kanem, about a third of the population were slaves.

French colonial expansion led to the creation of a French military territory in Chad in 1900. By 1920, France had secured full control of the colony and incorporated it as part of French Equatorial Africa. French rule in Chad was characterised by an absence of policies to unify the territory and sluggish modernisation compared to other French colonies. The French primarily viewed the colony as an unimportant source of untrained labour and raw cotton; France introduced large-scale cotton production in 1929. The colonial administration in Chad was critically understaffed and had to rely on the dregs of the French civil service. Only the Sara of the south were governed effectively; the French presence in the Islamic north and east was nominal. The educational system was affected by this neglect.

After World War II, France granted Chad the status of overseas territory and its inhabitants the right to elect representatives to the French National Assembly and a Chadian assembly. The largest political party was the Chadian Progressive Party (PPT), based in the southern half of the colony. Chad was granted independence on 11 August 1960 with the PPT's leader, François Tombalbaye, an ethnic Sara, as its first president. Two years later, Tombalbaye banned opposition parties and established a one-party system. Tombalbaye's autocratic rule and insensitive mismanagement exacerbated inter-ethnic tensions. In 1965, Muslims in the north, led by the National Liberation Front of Chad (FROLINAT), began a civil war. Tombalbaye was overthrown and killed in 1975, but the insurgency continued. In 1979 the rebel factions led by Hissène Habré took the capital, and all central authority in the country collapsed. Armed factions, many from the north's rebellion, contended for power.

The disintegration of Chad caused the collapse of France's position in the country. Libya moved to fill the power vacuum and became involved in Chad's civil war. Libya's adventure ended in disaster in 1987; the French-supported president, Hissène Habré, evoked a united response from Chadians of a kind never seen before and forced the Libyan army off Chadian soil. Habré consolidated his dictatorship through a power system that relied on corruption and violence, with thousands of people estimated to have been killed under his rule. The president favoured his own Toubou ethnic group and discriminated against his former allies, the Zaghawa. His general, Idriss Déby, overthrew him in 1990.
Attempts to prosecute Habré led to his placement under house arrest in Senegal in 2005; in 2013, Habré was formally charged with war crimes committed during his rule. In May 2016, he was found guilty of human-rights abuses, including rape, sexual slavery, and ordering the killing of 40,000 people, and sentenced to life in prison. Déby attempted to reconcile the rebel groups and reintroduced multiparty politics. Chadians approved a new constitution by referendum, and in 1996, Déby easily won a competitive presidential election. He won a second term five years later. Oil exploitation began in Chad in 2003, bringing with it hopes that Chad would, at last, have some chances of peace and prosperity. Instead, internal dissent worsened, and a new civil war broke out. Déby unilaterally modified the constitution to remove the two-term limit on the presidency; this caused an uproar among the civil society and opposition parties. In 2006 Déby won a third mandate in elections that the opposition boycotted. Ethnic violence in eastern Chad has increased; the United Nations High Commissioner for Refugees has warned that a genocide like that in Darfur may yet occur in Chad. In 2006 and in 2008 rebel forces attempted to take the capital by force, but failed on both occasions. An agreement for the restoration of harmony between Chad and Sudan, signed 15 January 2010, marked the end of a five-year war. The fix in relations led to the Chadian rebels from Sudan returning home, the opening of the border between the two countries after seven years of closure, and the deployment of a joint force to secure the border. In May 2013, security forces in Chad foiled a coup against President Idriss Déby that had been in preparation for several months. Chad is currently one of the leading partners in a West African coalition in the fight against Boko Haram and other Islamist militants. Chad's army announced the death of Déby on 20 April 2021, following an incursion in the northern region by the FACT group, during which the president was killed amid fighting on the front lines. Déby's son, General Mahamat Idriss Déby, has been named interim president by a Transitional Council of military officers. That transitional council has replaced the Constitution with a new charter, granting Mahamat Déby the powers of the presidency and naming him head of the armed forces. Geography Chad is a large landlocked country spanning north-central Africa. It covers an area of , lying between latitudes 7° and 24°N, and 13° and 24°E, and is the twentieth-largest country in the world. Chad is, by size, slightly smaller than Peru and slightly larger than South Africa. Chad is bounded to the north by Libya, to the east by Sudan, to the west by Niger, Nigeria and Cameroon, and to the south by the Central African Republic. The country's capital is from the nearest seaport, Douala, Cameroon. Because of this distance from the sea and the country's largely desert climate, Chad is sometimes referred to as the "Dead Heart of Africa". The dominant physical structure is a wide basin bounded to the north and east by the Ennedi Plateau and Tibesti Mountains, which include Emi Koussi, a dormant volcano that reaches above sea level. Lake Chad, after which the country is named (and which in turn takes its name from the Kanuri word for "lake"), is the remains of an immense lake that occupied of the Chad Basin 7,000 years ago. 
Although in the 21st century it covers only , and its surface area is subject to heavy seasonal fluctuations, the lake is Africa's second largest wetland. Chad is home to six terrestrial ecoregions: East Sudanian savanna, Sahelian Acacia savanna, Lake Chad flooded savanna, East Saharan montane xeric woodlands, South Saharan steppe and woodlands, and Tibesti-Jebel Uweinat montane xeric woodlands. The region's tall grasses and extensive marshes make it favourable for birds, reptiles, and large mammals. Chad's major rivers—the Chari, Logone and their tributaries—flow through the southern savannas from the southeast into Lake Chad. Each year a tropical weather system known as the intertropical front crosses Chad from south to north, bringing a wet season that lasts from May to October in the south, and from June to September in the Sahel. Variations in local rainfall create three major geographical zones. The Sahara lies in the country's northern third. Yearly precipitations throughout this belt are under ; only occasional spontaneous palm groves survive, all of them south of the Tropic of Cancer. The Sahara gives way to a Sahelian belt in Chad's centre; precipitation there varies from per year. In the Sahel, a steppe of thorny bushes (mostly acacias) gradually gives way to the south to East Sudanian savanna in Chad's Sudanese zone. Yearly rainfall in this belt is over . Wildlife Chad's animal and plant life correspond to the three climatic zones. In the Saharan region, the only flora is the date-palm groves of the oasis. Palms and acacia trees grow in the Sahelian region. The southern, or Sudanic, zone consists of broad grasslands or prairies suitable for grazing. As of 2002, there were at least 134 species of mammals, 509 species of birds (354 species of residents and 155 migrants), and over 1,600 species of plants throughout the country. Elephants, lions, buffalo, hippopotamuses, rhinoceroses, giraffes, antelopes, leopards, cheetahs, hyenas, and many species of snakes are found here, although most large carnivore populations have been drastically reduced since the early 20th century. Elephant poaching, particularly in the south of the country in areas such as Zakouma National Park, is a severe problem. The small group of surviving West African crocodiles in the Ennedi Plateau represents one of the last colonies known in the Sahara today. Chad had a 2018 Forest Landscape Integrity Index mean score of 6.18/10, ranking it 83rd globally out of 172 countries. Extensive deforestation has resulted in loss of trees such as acacias, baobab, dates and palm trees. This has also caused loss of natural habitat for wild animals; one of the main reasons for this is also hunting and livestock farming by increasing human settlements. Populations of animals like lions, leopards and rhino have fallen significantly. Efforts have been made by the Food and Agriculture Organization to improve relations between farmers, agro-pastoralists and pastoralists in the Zakouma National Park (ZNP), Siniaka-Minia, and Aouk reserve in southeastern Chad to promote sustainable development. As part of the national conservation effort, more than 1.2 million trees have been replanted to check the advancement of the desert, which incidentally also helps the local economy by way of financial return from acacia trees, which produce gum arabic, and also from fruit trees. 
Poaching is a serious problem in the country, particularly of elephants for the profitable ivory industry and a threat to lives of rangers even in the national parks such as Zakouma. Elephants are often massacred in herds in and around the parks by organized poaching. The problem is worsened by the fact that the parks are understaffed and that a number of wardens have been murdered by poachers. Demographics Chad's national statistical agency projected the country's 2015 population between 13,630,252 and 13,679,203, with 13,670,084 as its medium projection; based on the medium projection, 3,212,470 people lived in urban areas and 10,457,614 people lived in rural areas. The country's population is young: an estimated 47% is under 15. The birth rate is estimated at 42.35 births per 1,000 people, and the mortality rate at 16.69. The life expectancy is 52 years. Chad's population is unevenly distributed. Density is in the Saharan Borkou-Ennedi-Tibesti Region but in the Logone Occidental Region. In the capital, it is even higher. About half of the nation's population lives in the southern fifth of its territory, making this the most densely populated region. Urban life is concentrated in the capital, whose population is mostly engaged in commerce. The other major towns are Sarh, Moundou, Abéché and Doba, which are considerably smaller but growing rapidly in population and economic activity. Since 2003, 230,000 Sudanese refugees have fled to eastern Chad from war-ridden Darfur. With the 172,600 Chadians displaced by the civil war in the east, this has generated increased tensions among the region's communities. Polygamy is common, with 39% of women living in such unions. This is sanctioned by law, which automatically permits polygamy unless spouses specify that this is unacceptable upon marriage. Although violence against women is prohibited, domestic violence is common. Female genital mutilation is also prohibited, but the practice is widespread and deeply rooted in tradition; 45% of Chadian women undergo the procedure, with the highest rates among Arabs, Hadjarai, and Ouaddaians (90% or more). Lower percentages were reported among the Sara (38%) and the Toubou (2%). Women lack equal opportunities in education and training, making it difficult for them to compete for the relatively few formal-sector jobs. Although property and inheritance laws based on the French code do not discriminate against women, local leaders adjudicate most inheritance cases in favour of men, according to traditional practice. Largest cities, towns, and municipalities Ethnic groups The peoples of Chad carry significant ancestry from Eastern, Central, Western, and Northern Africa. Chad has more than 200 distinct ethnic groups, which create diverse social structures. The colonial administration and independent governments have attempted to impose a national society, but for most Chadians the local or regional society remains the most important influence outside the immediate family. Nevertheless, Chad's people may be classified according to the geographical region in which they live. In the south live sedentary people such as the Sara, the nation's main ethnic group, whose essential social unit is the lineage. In the Sahel sedentary peoples live side by side with nomadic ones, such as the Arabs, the country's second major ethnic group. The north is inhabited by nomads, mostly Toubous. Languages Chad's official languages are Arabic and French, but over 100 languages and dialects are spoken. 
Due to the important role played by itinerant Arab traders and settled merchants in local communities, Chadian Arabic has become a lingua franca. Religion Chad is a religiously diverse country. Various estimates, including from Pew Research Center in 2010, found that 52–58% of the population was Muslim, while 39–44% were Christian, with 22% being Catholic and a further 17% being Protestant. According to a 2012 Pew Research survey, 48% of Muslim Chadians professed to be Sunni, 21% Shia, 4% Ahmadi and 23% non-denominational Muslim. Islam is expressed in diverse ways; for example, 55% of Muslim Chadians belong to Sufi orders. Its most common expression is the Tijaniyah, an order followed by the 35% of Chadian Muslims which incorporates some local African religious elements. In 2020, the ARDA estimated the vast majority of Muslims Chadians to be Sunni belonging to the Sufi brotherhood Tijaniyah. A small minority of the country's Muslims (5-10%) hold more fundamentalist practices, which, in some cases, may be associated with Saudi-oriented Salafi movements. Roman Catholics represent the largest Christian denomination in the country. Most Protestants, including the Nigeria-based "Winners' Chapel", are affiliated with various evangelical Christian groups. Members of the Baháʼí and Jehovah's Witnesses religious communities also are present in the country. Both faiths were introduced after independence in 1960 and therefore are considered to be "new" religions in the country. A small proportion of the population continues to practice indigenous religions. Animism includes a variety of ancestor and place-oriented religions whose expression is highly specific. Christianity arrived in Chad with the French and American missionaries; as with Chadian Islam, it syncretises aspects of pre-Christian religious beliefs. Muslims are largely concentrated in northern and eastern Chad, and animists and Christians live primarily in southern Chad and Guéra. Many Muslims also reside in southern Chad but the Christian presence in the north is minimal. The constitution provides for a secular state and guarantees religious freedom; different religious communities generally co-exist without problems. Chad is home to foreign missionaries representing both Christian and Islamic groups. Itinerant Muslim preachers, primarily from Sudan, Saudi Arabia, and Pakistan, also visit. Saudi Arabian funding generally supports social and educational projects and extensive mosque construction. Education Educators face considerable challenges due to the nation's dispersed population and a certain degree of reluctance on the part of parents to send their children to school. Although attendance is compulsory, only 68 percent of boys attend primary school, and more than half of the population is illiterate. Higher education is provided at the University of N'Djamena. At 33 percent, Chad has one of the lowest literacy rates of Sub-Saharan Africa. In 2013, the U.S. Department of Labor's Findings on the Worst Forms of Child Labor in Chad reported that school attendance of children aged 5 to 14 was as low as 39%. This can also be related to the issue of child labor as the report also stated that 53% of children aged 5 to 14 were working, and that 30% of children aged 7 to 14 combined work and school. A more recent DOL report listed cattle herding as a major agricultural activity that employed underage children. 
Government and politics Chad's constitution provides for a strong executive branch headed by a president who dominates the political system. The president has the power to appoint the prime minister and the cabinet, and exercises considerable influence over appointments of judges, generals, provincial officials and heads of Chad's para-statal firms. In cases of grave and immediate threat, the president, in consultation with the National Assembly, may declare a state of emergency. The president is directly elected by popular vote for a five-year term; in 2005 constitutional term limits were removed, allowing a president to remain in power beyond the previous two-term limit. Most of Déby's key advisers are members of the Zaghawa ethnic group, although southern and opposition personalities are represented in government. Chad's legal system is based on French civil law and Chadian customary law where the latter does not interfere with public order or constitutional guarantees of equality. Despite the constitution's guarantee of judicial independence, the president names most key judicial officials. The legal system's highest jurisdictions, the Supreme Court and the Constitutional Council, have become fully operational since 2000. The Supreme Court is made up of a chief justice, named by the president, and 15 councillors, appointed for life by the president and the National Assembly. The Constitutional Court is headed by nine judges elected to nine-year terms. It has the power to review legislation, treaties and international agreements prior to their adoption. The National Assembly makes legislation. The body consists of 155 members elected for four-year terms who meet three times per year. The Assembly holds regular sessions twice a year, starting in March and October, and can hold special sessions when called by the prime minister. Deputies elect a National Assembly president every two years. The president must sign or reject newly passed laws within 15 days. The National Assembly must approve the prime minister's plan of government and may force the prime minister to resign through a majority vote of no confidence. However, if the National Assembly rejects the executive branch's programme twice in one year, the president may disband the Assembly and call for new legislative elections. In practice, the president exercises considerable influence over the National Assembly through his party, the Patriotic Salvation Movement (MPS), which holds a large majority. Until the legalisation of opposition parties in 1992, Déby's MPS was the sole legal party in Chad. Since then, 78 registered political parties have become active. In 2005, opposition parties and human rights organisations supported the boycott of the constitutional referendum that allowed Déby to stand for re-election for a third term amid reports of widespread irregularities in voter registration and government censorship of independent media outlets during the campaign. Correspondents judged the 2006 presidential elections a mere formality, as the opposition deemed the polls a farce and boycotted them. Chad is listed as a failed state by the Fund for Peace (FFP). Chad had the seventh-highest rank in the Fragile States Index in 2021. Corruption is rife at all levels; Transparency International's Corruption Perceptions Index for 2021 ranked Chad 164th among the 180 countries listed. Critics of former President Déby had accused him of cronyism and tribalism. In southern Chad, bitter conflicts over land are becoming more and more common. 
They frequently turn violent. Long-standing community culture is being eroded – and so are the livelihoods of many farmers. Longtime Chad President Idriss Déby's death on 20 April 2021, resulted in both the nation's National Assembly and government being dissolved and national leadership being replaced with a transitional military council consisting of military officers and led by his son Mahamat Kaka. The constitution is currently suspended, pending replacement with one drafted by a civilian National Transitional Council, yet to be appointed. The military council has stated that elections will be held at the end of an 18-month transitional period. Internal opposition and foreign relations Déby faced armed opposition from groups who are deeply divided by leadership clashes but were united in their intention to overthrow him. These forces stormed the capital on 13 April 2006, but were ultimately repelled. Chad's greatest foreign influence is France, which maintains 1,000 soldiers in the country. Déby relied on the French to help repel the rebels, and France gives the Chadian army logistical and intelligence support for fear of a complete collapse of regional stability. Nevertheless, Franco-Chadian relations were soured by the granting of oil drilling rights to the American Exxon company in 1999. There have been numerous rebel groups in Chad throughout the last few decades. In 2007, a peace treaty was signed that integrated United Front for Democratic Change soldiers into the Chadian Army. The Movement for Justice and Democracy in Chad also clashed with government forces in 2003 in an attempt to overthrow President Idriss Déby. In addition, there have been various conflicts with Khartoum's Janjaweed rebels in eastern Chad, who killed civilians by use of helicopter gunships. Presently, the Union of Resistance Forces (UFR) are a rebel group that continues to battle with the government of Chad. In 2010, the UFR reportedly had a force estimating 6,000 men and 300 vehicles. Military The CIA World Factbook estimates the military budget of Chad to be 4.2% of GDP as of 2006. Given the then GDP ($7.095 bln) of the country, military spending was estimated to be about $300 million. This estimate however dropped after the end of the Civil war in Chad (2005–2010) to 2.0% as estimated by the World Bank for the year 2011. Administrative divisions Since 2012 Chad has been divided into 23 regions. The subdivision of Chad in regions came about in 2003 as part of the decentralisation process, when the government abolished the previous 14 prefectures. Each region is headed by a presidentially appointed governor. Prefects administer the 61 departments within the regions. The departments are divided into 200 sub-prefectures, which are in turn composed of 446 cantons. The cantons are scheduled to be replaced by communautés rurales, but the legal and regulatory framework has not yet been completed. The constitution provides for decentralised government to compel local populations to play an active role in their own development. To this end, the constitution declares that each administrative subdivision be governed by elected local assemblies, but no local elections have taken place, and communal elections scheduled for 2005 have been repeatedly postponed. Economy The United Nations' Human Development Index ranks Chad as the seventh poorest country in the world, with 80% of the population living below the poverty line. The GDP (purchasing power parity) per capita was estimated as US$1,651 in 2009. 
Chad is part of the Bank of Central African States, the Customs and Economic Union of Central Africa (UDEAC) and the Organization for the Harmonization of Business Law in Africa (OHADA). Chad's currency is the CFA franc. In the 1960s, the mining industry of Chad produced sodium carbonate, or natron. There have also been reports of gold-bearing quartz in the Biltine Prefecture. However, years of civil war have scared away foreign investors; those who left Chad between 1979 and 1982 have only recently begun to regain confidence in the country's future. In 2000 major direct foreign investment in the oil sector began, boosting the country's economic prospects. Uneven inclusion in the global political economy as a site for colonial resource extraction (primarily cotton and crude oil), a global economic system that neither promotes nor encourages the development of Chadian industrialization, and the failure to support local agricultural production have meant that the majority of Chadians live in daily uncertainty and hunger.

Over 80% of Chad's population relies on subsistence farming and livestock raising for its livelihood. The crops grown and the locations of herds are determined by the local climate. In the southernmost 10% of the territory lies the nation's most fertile cropland, with rich yields of sorghum and millet. In the Sahel only the hardier varieties of millet grow, and with much lower yields than in the south. On the other hand, the Sahel is ideal pastureland for large herds of commercial cattle and for goats, sheep, donkeys and horses. The Sahara's scattered oases support only some dates and legumes. Chad's cities face serious difficulties of municipal infrastructure; only 48% of urban residents have access to potable water and only 2% to basic sanitation.

Before the development of the oil industry, cotton dominated industry and the labour market and accounted for approximately 80% of export earnings. Cotton remains a primary export, although exact figures are not available. Rehabilitation of Cotontchad, a major cotton company weakened by a decline in world cotton prices, has been financed by France, the Netherlands, the European Union, and the International Bank for Reconstruction and Development (IBRD). The parastatal is now expected to be privatised. Other than cotton, cattle and gum arabic are the dominant exports.

According to the United Nations, Chad has been affected by a humanitarian crisis since at least 2001. The country hosts over 280,000 refugees from Sudan's Darfur region, over 55,000 from the Central African Republic, as well as over 170,000 internally displaced persons. In February 2008, in the aftermath of the Battle of N'Djamena, UN Under-Secretary-General for Humanitarian Affairs John Holmes expressed "extreme concern" that the crisis would have a negative effect on the ability of humanitarians to deliver life-saving assistance to half a million beneficiaries, most of whom – according to him – rely heavily on humanitarian aid for their survival. UN spokesperson Maurizio Giuliano stated to The Washington Post: "If we do not manage to provide aid at sufficient levels, the humanitarian crisis might become a humanitarian catastrophe". In addition, organizations such as Save the Children have suspended activities due to killings of aid workers.

Chad has made some progress in reducing poverty: the national poverty rate declined from 55% to 47% between 2003 and 2011.
However, the amount of poor people increased from 4.7 million (2011) to 6.5 million (2019) in absolute amounts. By 2018, 4.2 out of 10 people still live below the poverty line. Infrastructure Transport Civil war crippled the development of transport infrastructure; in 1987, Chad had only of paved roads. Successive road rehabilitation projects improved the network to by 2004. Nevertheless, the road network is limited; roads are often unusable for several months of the year. With no railways of its own, Chad depends heavily on Cameroon's rail system for the transport of Chadian exports and imports to and from the seaport of Douala. Chad had an estimated 59 airports, only 9 of which had paved runways. An international airport serves the capital and provides regular nonstop flights to Paris and several African cities. Energy Chad's energy sector has had years of mismanagement by the parastatal Chad Water and Electric Society (STEE), which provides power for 15% of the capital's citizens and covers only 1.5% of the national population. Most Chadians burn biomass fuels such as wood and animal manure for power. ExxonMobil leads a consortium of Chevron and Petronas that has invested $3.7 billion to develop oil reserves estimated at one billion barrels in southern Chad. Oil production began in 2003 with the completion of a pipeline (financed in part by the World Bank) that links the southern oilfields to terminals on the Atlantic coast of Cameroon. As a condition of its assistance, the World Bank insisted that 80% of oil revenues be spent on development projects. In January 2006 the World Bank suspended its loan programme when the Chadian government passed laws reducing this amount. On 14 July 2006, the World Bank and Chad signed a memorandum of understanding under which the Government of Chad commits 70% of its spending to priority poverty reduction programmes. Telecommunications The telecommunication system is basic and expensive, with fixed telephone services provided by the state telephone company SotelTchad. In 2000, there were only 14 fixed telephone lines per 10,000 inhabitants in the country, one of the lowest telephone densities in the world. Gateway Communications, a pan-African wholesale connectivity and telecommunications provider also has a presence in Chad. In September 2013, Chad's Ministry for Posts and Information & Communication Technologies (PNTIC) announced that the country will be seeking a partner for fiber optic technology. Chad is ranked last in the World Economic Forum's Network Readiness Index (NRI) – an indicator for determining the development level of a country's information and communication technologies. Chad ranked number 148 out of 148 overall in the 2014 NRI ranking, down from 142 in 2013. In September 2010 the mobile phone penetration rate was estimated at 24.3% over a population estimate of 10.7 million. Culture Because of its great variety of peoples and languages, Chad possesses a rich cultural heritage. The Chadian government has actively promoted Chadian culture and national traditions by opening the Chad National Museum and the Chad Cultural Centre. Six national holidays are observed throughout the year, and movable holidays include the Christian holiday of Easter Monday and the Muslim holidays of Eid ul-Fitr, Eid ul-Adha, and Eid Milad Nnabi. Cuisine Millet is the staple food of Chadian cuisine. It is used to make balls of paste that are dipped in sauces. In the north this dish is known as alysh; in the south, as biya. 
Fish is popular, which is generally prepared and sold either as salanga (sun-dried and lightly smoked Alestes and Hydrocynus) or as banda (smoked large fish). Carcaje is a popular sweet red tea extracted from hibiscus leaves. Alcoholic beverages, though absent in the north, are popular in the south, where people drink millet beer, known as billi-billi when brewed from red millet, and as coshate when from white millet. Music The music of Chad includes a number of instruments such as the kinde, a type of bow harp; the kakaki, a long tin horn; and the hu hu, a stringed instrument that uses calabashes as loudspeakers. Other instruments and their combinations are more linked to specific ethnic groups: the Sara prefer whistles, balafones, harps and kodjo drums; and the Kanembu combine the sounds of drums with those of flute-like instruments. The music group Chari Jazz formed in 1964 and initiated Chad's modern music scene. Later, more renowned groups such as African Melody and International Challal attempted to mix modernity and tradition. Popular groups such as Tibesti have clung faster to their heritage by drawing on sai, a traditional style of music from southern Chad. The people of Chad have customarily disdained modern music. However, in 1995 greater interest has developed and fostered the distribution of CDs and audio cassettes featuring Chadian artists. Piracy and a lack of legal protections for artists' rights remain problems to further development of the Chadian music industry. Literature As in other Sahelian countries, literature in Chad has seen an economic, political and spiritual drought that has affected its best known writers. Chadian authors have been forced to write from exile or expatriate status and have generated literature dominated by themes of political oppression and historical discourse. Since 1962, 20 Chadian authors have written some 60 works of fiction. Among the most internationally renowned writers are Joseph Brahim Seïd, Baba Moustapha, Antoine Bangui and Koulsy Lamko. In 2003 Chad's sole literary critic, Ahmat Taboye, published his to further knowledge of Chad's literature internationally and among youth and to make up for Chad's lack of publishing houses and promotional structure. Media and cinema Chad's television audience is limited to N'Djamena. The only television station is the state-owned Télé Tchad. Radio has a far greater reach, with 13 private radio stations. Newspapers are limited in quantity and distribution, and circulation figures are small due to transportation costs, low literacy rates, and poverty. While the constitution defends liberty of expression, the government has regularly restricted this right, and at the end of 2006 began to enact a system of prior censorship on the media. The development of a Chadian film industry, which began with the short films of Edouard Sailly in the 1960s, was hampered by the devastations of civil wars and from the lack of cinemas, of which there is currently only one in the whole country (the Normandie in N'Djamena). The Chadian feature film industry began growing again in the 1990s, with the work of directors Mahamat-Saleh Haroun, Issa Serge Coelo and Abakar Chene Massar. Haroun's film Abouna was critically acclaimed, and his Daratt won the Grand Special Jury Prize at the 63rd Venice International Film Festival. The 2010 feature film A Screaming Man won the Jury Prize at the 2010 Cannes Film Festival, making Haroun the first Chadian director to enter, as well as win, an award in the main Cannes competition. 
Issa Serge Coelo directed the films Daresalam and DP75: Tartina City. Sports Football is Chad's most popular sport. The country's national team is closely followed during international competitions and Chadian footballers have played for French teams. Basketball and freestyle wrestling are widely practiced, the latter in a form in which the wrestlers put on traditional animal hides and cover themselves with dust. See also Outline of Chad Index of Chad-related articles Notes References Alphonse, Dokalyo (2003) "Cinéma: un avenir plein d'espoir" , Tchad et Culture 214. "Background Note: Chad". September 2006. United States Department of State. Bambé, Naygotimti (April 2007); "", 256. Botha, D.J.J. (December 1992); "S.H. Frankel: Reminiscences of an Economist", The South African Journal of Economics 60 (4): 246–255. Boyd-Buggs, Debra & Joyce Hope Scott (1999); Camel Tracks: Critical Perspectives on Sahelian Literatures. Lawrenceville: Africa World Press. "Chad". Country Reports on Human Rights Practices 2006, 6 March 2007. Bureau of Democracy, Human Rights, and Labor, U.S. Department of State. "Chad". Country Reports on Human Rights Practices 2004, 28 February 2005. Bureau of Democracy, Human Rights, and Labor, U.S. Department of State. "Chad". International Religious Freedom Report 2006. 15 September 2006. Bureau of Democracy, Human Rights, and Labor, U.S. Department of State. "Amnesty International Report 2006 ". Amnesty International Publications. "Chad" (PDF). African Economic Outlook 2007. OECD. May 2007. "Chad". The World Factbook. United States Central Intelligence Agency. 15 May 2007. "Chad" (PDF). Women of the World: Laws and Policies Affecting Their Reproductive Lives – Francophone Africa. Center for Reproductive Rights. 2000 . Freedom of the Press: 2007 Edition. Freedom House, Inc. "Chad". Human Rights Instruments. United Nations Commission on Human Rights. 12 December 1997. "Chad". Encyclopædia Britannica. (2000). Chicago: Encyclopædia Britannica, Inc. "Chad, Lake". Encyclopædia Britannica. (2000). "Chad – Community Based Integrated Ecosystem Management Project" (PDF). 24 September 2002. World Bank. (PDF). Cultural Profiles Project. Citizenship and Immigration Canada. "Chad Urban Development Project" (PDF). 21 October 2004. World Bank. "Chad: Humanitarian Profile – 2006/2007" (PDF). 8 January 2007. Office for the Coordination of Humanitarian Affairs. "Chad Livelihood Profiles" (PDF). March 2005. United States Agency for International Development. "Chad Poverty Assessment: Constraints to Rural Development" (PDF). World Bank. 21 October 1997. "Chad (2006) ". Country Report: 2006 Edition. Freedom House, Inc. . Country Analysis Briefs. January 2007. Energy Information Administration. "Chad leader's victory confirmed", BBC News, 14 May 2006. "Chad may face genocide, UN warns", BBC News, 16 February 2007. Chapelle, Jean (1981); . Paris: L'Harmattan. Chowdhury, Anwarul Karim & Sandagdorj Erdenbileg (2006); . New York: United Nations. Collelo, Thomas (1990); Chad: A Country Study, 2d ed. Washington: U.S. GPO. Dadnaji, Dimrangar (1999); East, Roger & Richard J. Thomas (2003); Profiles of People in Power: The World's Government Leaders. Routledge. Dinar, Ariel (1995); Restoring and Protecting the World's Lakes and Reservoirs. World Bank Publications. Gondjé, Laoro (2003); "", 214. "Chad: the Habré Legacy" . Amnesty International. 16 October 2001. Lange, Dierk (1988). 
"The Chad region as a crossroad" (PDF), in UNESCO General History of Africa – Africa from the Seventh to the Eleventh Century, vol. 3: 436–460. University of California Press. (PDF). . N. 3. September 2004. Macedo, Stephen (2006); Universal Jurisdiction: National Courts and the Prosecution of Serious Crimes Under International Law. University of Pennsylvania Press. Malo, Nestor H. (2003); "", 214. Manley, Andrew; "Chad's vulnerable president", BBC News, 15 March 2006. "Mirren crowned 'queen' at Venice", BBC News, 9 September 2006. Ndang, Tabo Symphorien (2005); " " (PDF). 4th PEP Research Network General Meeting. Poverty and Economic Policy. Pollack, Kenneth M. (2002); Arabs at War: Military Effectiveness, 1948–1991. Lincoln: University of Nebraska Press. "Rank Order – Area ". The World Factbook. United States Central Intelligence Agency. 10 May 2007. "Republic of Chad – Public Administration Country Profile " (PDF). United Nations, Department of Economic and Social Affairs. November 2004. Spera, Vincent (8 February 2004); . United States Department of Commerce. "Symposium on the evaluation of fishery resources in the development and management of inland fisheries". CIFA Technical Paper No. 2. FAO. 29 November – 1 December 1972. "". . UNESCO, Education for All. "" (PDF). International Crisis Group. 1 June 2006. Wolfe, Adam; , PINR, 6 December 2006. World Bank (14 July 2006). World Bank, Govt. of Chad Sign Memorandum of Understanding on Poverty Reduction. Press release. World Population Prospects: The 2006 Revision Population Database. 2006. United Nations Population Division. "Worst corruption offenders named", BBC News, 18 November 2005. Young, Neil (August 2002); An interview with Mahamet-Saleh Haroun, writer and director of Abouna ("Our Father"). External links Chad. The World Factbook. Central Intelligence Agency. Chad country study from Library of Congress Chad profile from the BBC News Key Development Forecasts for Chad from International Futures 1960 establishments in Africa Arabic-speaking countries and territories Central African countries Countries in Africa French-speaking countries and territories Landlocked countries Least developed countries Member states of the African Union Member states of the Organisation internationale de la Francophonie Member states of the Organisation of Islamic Cooperation Member states of the United Nations Republics Saharan countries States and territories established in 1960
https://en.wikipedia.org/wiki/History%20of%20Chile
History of Chile
The territory of Chile has been populated since at least 3000 BC. By the 16th century, Spanish conquistadors began to colonize the region of present-day Chile, and the territory was a colony between 1540 and 1818, when it gained independence from Spain. The country's economic development was successively marked by the export of first agricultural produce, then saltpeter and later copper. The wealth of raw materials led to an economic upturn, but also led to dependency, and even wars with neighboring states. Chile was governed during most of its first 150 years of independence by different forms of restricted government, where the electorate was carefully vetted and controlled by an elite. Failure to address economic and social inequities and the increasing political awareness of the less-affluent population, as well as indirect intervention and economic funding of the main political groups by the CIA as part of the Cold War, led to political polarization under Socialist President Salvador Allende. This in turn resulted in the 1973 coup d'état and the military dictatorship of General Augusto Pinochet, whose subsequent 17-year regime was responsible for many human rights violations and deep market-oriented economic reforms. In 1990, Chile made a peaceful transition to democracy and initiated a succession of democratic governments.

Early history (pre-1540)
About 10,000 years ago, migrating Native Americans settled in the fertile valleys and coastal areas of what is present-day Chile. Pre-Hispanic Chile was home to over a dozen different Amerindian societies. The currently prevalent theories are that the initial arrival of humans on the continent took place either along the Pacific coast, in a rather rapid southward expansion long preceding the Clovis culture, or even by trans-Pacific migration. These theories are backed by findings at the Monte Verde archaeological site, which predates the Clovis site by thousands of years. Specific sites of very early human habitation in Chile include the Cueva del Milodon and the Pali Aike Crater's lava tube.

Despite such diversity, it is possible to classify the indigenous people into three major cultural groups: the northern people, who developed rich handicrafts and were influenced by pre-Incan cultures; the Araucanian culture, who inhabited the area between the river Choapa and the island of Chiloé and lived primarily off agriculture; and the Patagonian culture, composed of various nomadic tribes who supported themselves through fishing and hunting (and who, in the Pacific coast migration scenario, would be descended partly from the most ancient settlers). No elaborate, centralized, sedentary civilization reigned supreme. The Araucanians, a fragmented society of hunters, gatherers, and farmers, constituted the largest Native American group in Chile. A mobile people who engaged in trade and warfare with other indigenous groups, they lived in scattered family clusters and small villages. Although the Araucanians had no written language, they did use a common tongue. Those in what became central Chile were more settled and more likely to use irrigation. Those in the south combined slash-and-burn agriculture with hunting. Of the three Araucanian groups, the one that mounted the fiercest resistance to the attempts at seizure of their territory was the Mapuche, meaning "people of the land."
The Inca Empire briefly extended their empire into what is now northern Chile, where they collected tribute from small groups of fishermen and oasis farmers but were not able to establish a strong cultural presence in the area. As the Spaniards would after them, the Incas encountered fierce resistance and so were unable to exert control in the south. During their attempts at conquest in 1460 and again in 1491, the Incas established forts in the Central Valley of Chile, but they could not colonize the region. The Mapuche fought against the Sapa Tupac Inca Yupanqui (c. 1471–1493) and his army. The result of the bloody three-day confrontation known as the Battle of the Maule was that the Inca conquest of the territories of Chile ended at the Maule river, which subsequently became the boundary between the Incan empire and the Mapuche lands until the arrival of the Spaniards. Scholars speculate that the total Araucanian population may have numbered 1.5 million at most when the Spaniards arrived in the 1530s; a century of European conquest and disease reduced that number by at least half. During the conquest, the Araucanians quickly added horses and European weaponry to their arsenal of clubs and bows and arrows. They became adept at raiding Spanish settlements and, albeit in declining numbers, managed to hold off the Spaniards and their descendants until the late 19th century. The Araucanians' valor inspired the Chileans to mythologize them as the nation's first national heroes, a status that did nothing, however, to elevate the wretched living standard of their descendants. The Chilean Patagonia located south of the Calle-Calle River in Valdivia was composed of many tribes, mainly Tehuelches, who were considered giants by Spaniards during Magellan's voyage of 1520. The name Patagonia comes from the word patagón used by Magellan to describe the native people whom his expedition thought to be giants. It is now believed the Patagons were actually Tehuelches with an average height of 1.80 m (~5′11″) compared to the 1.55 m (~5′1″) average for Spaniards of the time. The Argentine portion of Patagonia includes the provinces of Neuquén, Río Negro, Chubut and Santa Cruz, as well as the eastern portion of Tierra del Fuego archipelago. The Argentine politico-economic Patagonic Region includes the Province of La Pampa. The Chilean part of Patagonia embraces the southern part of Valdivia, Los Lagos in Lake Llanquihue, Chiloé, Puerto Montt and the Archaeological site of Monte Verde, also the fiords and islands south to the regions of Aisén and Magallanes, including the west side of Tierra del Fuego and Cape Horn. European conquest and colonization (1540–1810) The first European to sight Chilean territory was Ferdinand Magellan, who crossed the Strait of Magellan on November 1, 1520. However, the title of discoverer of Chile is usually assigned to Diego de Almagro. Almagro was Francisco Pizarro's partner, and he received the Southern area (Nueva Toledo). He organized an expedition that brought him to central Chile in 1537, but he found little of value to compare with the gold and silver of the Incas in Peru. Left with the impression that the inhabitants of the area were poor, he returned to Peru, later to be garotted following defeat by Hernando Pizarro in a Civil War. After this initial excursion there was little interest from colonial authorities in further exploring modern-day Chile. 
However, Pedro de Valdivia, captain of the army, realizing the potential for expanding the Spanish empire southward, asked Pizarro's permission to invade and conquer the southern lands. With a couple of hundred men, he subdued the local inhabitants and founded the city of Santiago de Nueva Extremadura, now Santiago de Chile, on February 12, 1541. Although Valdivia found little gold in Chile he could see the agricultural richness of the land. He continued his explorations of the region west of the Andes and founded over a dozen towns and established the first encomiendas. The greatest resistance to Spanish rule came from the Mapuche people, who opposed European conquest and colonization until the 1880s; this resistance is known as the Arauco War. Valdivia died at the Battle of Tucapel, defeated by Lautaro, a young Mapuche toqui (war chief), but the European conquest was well underway. The Spaniards never subjugated the Mapuche territories; various attempts at conquest, both by military and peaceful means, failed. The Great Uprising of 1598 swept all Spanish presence south of the Bío-Bío River except Chiloé (and Valdivia which was decades later reestablished as a fort), and the great river became the frontier line between Mapuche lands and the Spanish realm. North of that line cities grew up slowly, and Chilean lands eventually became an important source of food for the Viceroyalty of Peru. Valdivia became the first governor of the Captaincy General of Chile. In that post, he obeyed the viceroy of Peru and, through him, the King of Spain and his bureaucracy. Responsible to the governor, town councils known as Cabildo administered local municipalities, the most important of which was Santiago, which was the seat of a Royal Appeals Court (Real Audiencia) from 1609 until the end of colonial rule. Chile was the least wealthy realm of the Spanish Crown for most of its colonial history. Only in the 18th century did a steady economic and demographic growth begin, an effect of the reforms by Spain's Bourbon dynasty and a more stable situation along the frontier. Independence (1810–1818) The drive for independence from Spain was precipitated by the usurpation of the Spanish throne by Napoleon's brother Joseph Bonaparte. The Chilean War of Independence was part of the larger Spanish American independence movement, and it was far from having unanimous support among Chileans, who became divided between independentists and royalists. What started as an elitist political movement against their colonial master, finally ended as a full-fledged civil war between pro-Independence Criollos who sought political and economic independence from Spain and royalist Criollos, who supported the continued allegiance to and permanence within the Spanish Empire of the Captaincy General of Chile. The struggle for independence was a war within the upper class, although the majority of troops on both sides consisted of conscripted mestizos and Native Americans. The beginning of the Independence movement is traditionally dated as of September 18, 1810, when a national junta was established to govern Chile in the name of the deposed king Ferdinand VII. Depending on what terms one uses to define the end, the movement extended until 1821 (when the Spanish were expelled from mainland Chile) or 1826 (when the last Spanish troops surrendered and Chiloé was incorporated into the Chilean republic). The independence process is normally divided into three stages: Patria Vieja, Reconquista, and Patria Nueva. 
Chile's first experiment with self-government, the "Patria Vieja" (old fatherland, 1810–1814), was led by José Miguel Carrera, an aristocrat then in his mid-twenties. The military-educated Carrera was a heavy-handed ruler who aroused widespread opposition. Another of the earliest advocates of full independence, Bernardo O'Higgins, captained a rival faction that plunged the Criollos into civil war. For him and certain other members of the Chilean elite, the initiative for temporary self-rule quickly escalated into a campaign for permanent independence, although other Criollos remained loyal to Spain. Among those favouring independence, conservatives fought with liberals over the degree to which French revolutionary ideas would be incorporated into the movement. After several efforts, Spanish troops from Peru took advantage of the internecine strife to reconquer Chile in 1814, when they reasserted control by the Battle of Rancagua on October 12. O'Higgins, Carrera and many of the Chilean rebels escaped to Argentina. The second period was characterized by the Spanish attempts to reimpose arbitrary rule during the period known as the Reconquista of 1814–1817 ("Reconquest": the term echoes the Reconquista in which the Christian kingdoms retook Iberia from the Muslims). During this period, the harsh rule of the Spanish loyalists, who punished suspected rebels, drove more and more Chileans into the insurrectionary camp. More members of the Chilean elite were becoming convinced of the necessity of full independence, regardless of who sat on the throne of Spain. As the leader of guerrilla raids against the Spaniards, Manuel Rodríguez became a national symbol of resistance. In exile in Argentina, O'Higgins joined forces with José de San Martín. Their combined army freed Chile with a daring assault over the Andes in 1817, defeating the Spaniards at the Battle of Chacabuco on February 12 and marking the beginning of the Patria Nueva. San Martín considered the liberation of Chile a strategic stepping-stone to the emancipation of Peru, which he saw as the key to hemispheric victory over the Spanish. Chile won its formal independence when San Martín defeated the last large Spanish force on Chilean soil at the Battle of Maipú on April 5, 1818. San Martín then led his Argentine and Chilean followers north to liberate Peru; and fighting continued in Chile's southern provinces, the bastion of the royalists, until 1826. A declaration of independence was officially issued by Chile on February 12, 1818, and formally recognized by Spain in 1840, when full diplomatic relations were established. Republican era (1818–1891) Constitutional organization (1818–1833) From 1817 to 1823, Bernardo O'Higgins ruled Chile as supreme director. He won plaudits for defeating royalists and founding schools, but civil strife continued. O'Higgins alienated liberals and provincials with his authoritarianism, conservatives and the church with his anticlericalism, and landowners with his proposed reforms of the land tenure system. His attempt to devise a constitution in 1818 that would legitimize his government failed, as did his effort to generate stable funding for the new administration. O'Higgins's dictatorial behavior aroused resistance in the provinces. This growing discontent was reflected in the continuing opposition of partisans of Carrera, who was executed by the Argentine regime in Mendoza in 1821, like his two brothers were three years earlier. 
Although opposed by many liberals, O'Higgins angered the Roman Catholic Church with his liberal beliefs. He maintained Catholicism's status as the official state religion but tried to curb the church's political powers and to encourage religious tolerance as a means of attracting Protestant immigrants and traders. Like the church, the landed aristocracy felt threatened by O'Higgins, resenting his attempts to eliminate noble titles and, more important, to eliminate entailed estates. O'Higgins's opponents also disapproved of his diversion of Chilean resources to aid San Martín's liberation of Peru. O'Higgins insisted on supporting that campaign because he realized that Chilean independence would not be secure until the Spaniards were routed from the Andean core of the empire. However, amid mounting discontent, troops from the northern and southern provinces forced O'Higgins to resign. Embittered, O'Higgins departed for Peru, where he died in 1842. After O'Higgins went into exile in 1823, civil conflict continued, focusing mainly on the issues of anticlericalism and regionalism. Presidents and constitutions rose and fell quickly in the 1820s. The civil struggle's harmful effects on the economy, and particularly on exports, prompted conservatives to seize national control in 1830. In the minds of most members of the Chilean elite, the bloodshed and chaos of the late 1820s were attributable to the shortcomings of liberalism and federalism, which had been dominant over conservatism for most of the period. The political camp became divided by supporters of O'Higgins, Carrera, liberal Pipiolos and conservative Pelucones, being the two last the main movements that prevailed and absorbed the rest. The abolition of slavery in 1823—long before most other countries in the Americas—was considered one of the Pipiolos' few lasting achievements. One Pipiolo leader from the south, Ramón Freire, rode in and out of the presidency several times (1823–1827, 1828, 1829, 1830) but could not sustain his authority. From May 1827 to September 1831, with the exception of brief interventions by Freire, the presidency was occupied by Francisco Antonio Pinto, Freire's former vice president. In August 1828, Pinto's first year in office, Chile abandoned its short-lived federalist system for a unitary form of government, with separate legislative, executive, and judicial branches. By adopting a moderately liberal constitution in 1828, Pinto alienated both the federalists and the liberal factions. He also angered the old aristocracy by abolishing estates inherited by primogeniture (mayorazgo) and caused a public uproar with his anticlericalism. After the defeat of his liberal army at the Battle of Lircay on April 17, 1830, Freire, like O'Higgins, went into exile in Peru. Conservative Era (1830–1861) Although never president, Diego Portales dominated Chilean politics from the cabinet and behind the scenes from 1830 to 1837. He installed the "autocratic republic", which centralized authority in the national government. His political program enjoyed support from merchants, large landowners, foreign capitalists, the church, and the military. Political and economic stability reinforced each other, as Portales encouraged economic growth through free trade and put government finances in order. Portales was an agnostic who said that he believed in the clergy but not in God. 
He realized the importance of the Roman Catholic Church as a bastion of loyalty, legitimacy, social control and stability, as had been the case in the colonial period. He repealed Liberal reforms that had threatened church privileges and properties. The "Portalian State" was institutionalized by the Chilean Constitution of 1833. One of the most durable charters ever devised in Latin America, the Portalian constitution lasted until 1925. The constitution concentrated authority in the national government, more precisely, in the hands of the president, who was elected by a tiny minority. The chief executive could serve two consecutive five-year terms and then pick a successor. Although the Congress had significant budgetary powers, it was overshadowed by the president, who appointed provincial officials. The constitution also created an independent judiciary, guaranteed inheritance of estates by primogeniture, and installed Catholicism as the state religion. In short, it established an autocratic system under a republican veneer. Portales also achieved his objectives by wielding dictatorial powers, censoring the press, and manipulating elections. For the next forty years, Chile's armed forces would be distracted from meddling in politics by skirmishes and defensive operations on the southern frontier, although some units got embroiled in domestic conflicts in 1851 and 1859. The first Portalian president was General Joaquín Prieto, who served two terms (1831–1836, 1836–1841). President Prieto had four main accomplishments: implementation of the 1833 constitution, stabilization of government finances, defeat of provincial challenges to central authority, and victory over the Peru-Bolivia Confederation. During the presidencies of Prieto and his two successors, Chile modernized through the construction of ports, railroads, and telegraph lines, some built by United States entrepreneur William Wheelwright. These innovations facilitated the export-import trade as well as domestic commerce. Prieto and his adviser, Portales, feared the efforts of Bolivian general Andrés de Santa Cruz to unite with Peru against Chile. These qualms exacerbated animosities toward Peru dating from the colonial period, now intensified by disputes over customs duties and loans. Chile also wanted to become the dominant South American military and commercial power along the Pacific. Santa Cruz united Peru and Bolivia in the Peru–Bolivian Confederation in 1836 with a desire to expand control over Argentina and Chile. Portales got Congress to declare war on the Confederation. Portales was assassinated by mutinous soldiers in 1837. General Manuel Bulnes defeated the Confederation in the Battle of Yungay in 1839. After this success, Bulnes was elected president in 1841. He served two terms (1841–1846, 1846–1851). His administration concentrated on the occupation of the territory, especially the Strait of Magellan and the Araucanía. During this period the Venezuelan scholar Andrés Bello made important intellectual contributions, most notably the founding of the University of Chile. But political tensions, including a liberal rebellion, led to the Chilean Civil War of 1851, in which the conservatives ultimately defeated the liberals. The last conservative president was Manuel Montt, who also served two terms (1851–1856, 1856–1861), but his unpopular administration led to another liberal rebellion in 1859. The liberals triumphed in 1861 with the election of José Joaquín Pérez as president.
Liberal era (1861–1891) The political revolt brought little social change, however, and 19th century Chilean society preserved the essence of the stratified colonial social structure, which was greatly influenced by family politics and the Roman Catholic Church. A strong presidency eventually emerged, but wealthy landowners remained powerful. Toward the end of the 19th century, the government in Santiago consolidated its position in the south by persistently suppressing the Mapuche during the Occupation of the Araucanía. In 1881, it signed the Boundary Treaty of 1881 between Chile and Argentina confirming Chilean sovereignty over the Strait of Magellan, but conceding all of oriental Patagonia, and a considerable fraction of the territory it had during colonial times. As a result of the War of the Pacific with Peru and Bolivia (1879–1883), Chile expanded its territory northward by almost one-third and acquired valuable nitrate deposits, the exploitation of which led to an era of national affluence. In the 1870s, the church influence started to diminish slightly with the passing of several laws that took some old roles of the church into the State's hands such as the registry of births and marriages. In 1886, José Manuel Balmaceda was elected president. His economic policies visibly changed the existing liberal policies. He began to violate the constitution and slowly began to establish a dictatorship. Congress decided to depose Balmaceda, who refused to step down. Jorge Montt, among others, directed an armed conflict against Balmaceda, which soon extended into the 1891 Chilean Civil War. Defeated, Balmaceda fled to Argentina's embassy, where he committed suicide. Jorge Montt became the new president. Parliamentary era (1891–1925) The so-called Parliamentary Republic was not a true parliamentary system, in which the chief executive is elected by the legislature. It was, however, an unusual regime in presidentialist Latin America, for Congress really did overshadow the rather ceremonial office of the president and exerted authority over the chief executive's cabinet appointees. In turn, Congress was dominated by the landed elites. This was the heyday of classic political and economic liberalism. For many decades thereafter, historians derided the Parliamentary Republic as a quarrel-prone system that merely distributed spoils and clung to its laissez-faire policy while national problems mounted. The characterization is epitomized by an observation made by President Ramón Barros Luco (1910–1915), reputedly made in reference to labor unrest: "There are only two kinds of problems: those that solve themselves and those that can't be solved." At the mercy of Congress, cabinets came and went frequently, although there was more stability and continuity in public administration than some historians have suggested. Chile also temporarily resolved its border disputes with Argentina with the Puna de Atacama Lawsuit of 1899, the Boundary treaty of 1881 between Chile and Argentina and the 1902 General Treaty of Arbitration, though not without engaging in an expensive naval arms race beforehand. Political authority ran from local electoral bosses in the provinces through the congressional and executive branches, which reciprocated with payoffs from taxes on nitrate sales. Congressmen often won election by bribing voters in this clientelistic and corrupt system. Many politicians relied on intimidated or loyal peasant voters in the countryside, even though the population was becoming increasingly urban. 
The lackluster presidents and ineffectual administrations of the period did little to respond to the country's dependence on volatile nitrate exports, spiraling inflation, and massive urbanization. In recent years, however, particularly when the authoritarian regime of Augusto Pinochet is taken into consideration, some scholars have reevaluated the Parliamentary Republic of 1891–1925. Without denying its shortcomings, they have lauded its democratic stability. They have also hailed its control of the armed forces, its respect for civil liberties, its expansion of suffrage and participation, and its gradual admission of new contenders, especially reformers, to the political arena. In particular, two young parties grew in importance – the Democrat Party, with roots among artisans and urban workers, and the Radical Party, representing urban middle sectors and provincial elites. By the early 20th century, both parties were winning increasing numbers of seats in Congress. The more leftist members of the Democrat Party became involved in the leadership of labor unions and broke off to launch the Socialist Workers' Party (Partido Obrero Socialista – POS) in 1912. The founder of the POS and its best-known leader, Luis Emilio Recabarren, also founded the Communist Party of Chile (Partido Comunista de Chile – PCCh) in 1922.
Presidential era (1925–1973)
By the 1920s, the emerging middle and working classes were powerful enough to elect a reformist president, Arturo Alessandri Palma. Alessandri appealed to those who believed the social question should be addressed, to those worried by the decline in nitrate exports during World War I, and to those weary of presidents dominated by Congress. Promising "evolution to avoid revolution", he pioneered a new campaign style of appealing directly to the masses with florid oratory and charisma. After winning a seat in the Senate representing the mining north in 1915, he earned the sobriquet "Lion of Tarapacá." As a dissident Liberal running for the presidency, Alessandri attracted support from the more reformist Radicals and Democrats and formed the so-called Liberal Alliance. He received strong backing from the middle and working classes as well as from the provincial elites. Students and intellectuals also rallied to his banner. At the same time, he reassured the landowners that social reforms would be limited to the cities. Alessandri soon discovered that his efforts to lead would be blocked by the conservative Congress. Like Balmaceda, he infuriated the legislators by going over their heads to appeal to the voters in the congressional elections of 1924. His reform legislation was finally rammed through Congress under pressure from younger military officers, who were sick of the neglect of the armed forces, political infighting, social unrest, and galloping inflation. A double military coup set off a period of great political instability that lasted until 1932. First, military right-wingers opposing Alessandri seized power in September 1924, and then reformers in favor of the ousted president took charge in January 1925. The Saber noise (ruido de sables) incident of September 1924, provoked by the discontent of young officers, mostly lieutenants from the middle and working classes, led to the establishment of the September Junta led by General Luis Altamirano and the exile of Alessandri.
However, fears of a conservative restoration in progressive sectors of the army led to another coup in January, which ended with the establishment of the January Junta as interim government while waiting for Alessandri's return. The latter group was led by two colonels, Carlos Ibáñez del Campo and Marmaduke Grove. They returned Alessandri to the presidency that March; he enacted his promised reforms by decree, and a new Constitution encapsulating them was ratified in a plebiscite in September 1925. The new constitution gave increased powers to the presidency. Alessandri broke with classical liberalism's laissez-faire policies by creating a Central Bank and imposing a revenue tax. However, social discontent was also crushed, leading to the Marusia massacre in March 1925, followed by the La Coruña massacre. The longest lasting of the ten governments between 1924 and 1932 was that of General Carlos Ibáñez, who briefly held power in 1925 and then again between 1927 and 1931 in what was a de facto dictatorship. When constitutional rule was restored in 1932, a strong middle-class party, the Radicals, emerged. It became the key force in coalition governments for the next 20 years. The Seguro Obrero Massacre took place on September 5, 1938, in the midst of a heated three-way election campaign between the ultraconservative Gustavo Ross Santa María, the radical Popular Front's Pedro Aguirre Cerda, and the newly formed Popular Alliance candidate, Carlos Ibáñez del Campo. The National Socialist Movement of Chile supported Ibáñez's candidacy, which had been announced on September 4. In order to preempt Ross's victory, the National Socialists mounted a coup d'état that was intended to take down the right-wing government of Arturo Alessandri Palma and place Ibáñez in power. During the period of Radical Party dominance (1932–1952), the state increased its role in the economy. In 1952, voters returned Ibáñez to office for another six years. Jorge Alessandri succeeded Ibáñez in 1958. The 1964 presidential election of Christian Democrat Eduardo Frei Montalva by an absolute majority initiated a period of major reform. Under the slogan "Revolution in Liberty", the Frei administration embarked on far-reaching social and economic programs, particularly in education, housing, and agrarian reform, including rural unionization of agricultural workers. By 1967, however, Frei encountered increasing opposition from leftists, who charged that his reforms were inadequate, and from conservatives, who found them excessive. At the end of his term, Frei had accomplished many noteworthy objectives, but he had not fully achieved his party's ambitious goals.
Popular Unity years
In the 1970 presidential election, Senator Salvador Allende Gossens won a plurality of votes in a three-way contest. He was a Marxist physician and member of Chile's Socialist Party, who headed the "Popular Unity" (UP or "Unidad Popular") coalition of the Socialist, Communist, Radical, and Social-Democratic Parties, along with dissident Christian Democrats, the Popular Unitary Action Movement (MAPU), and the Independent Popular Action.
In the end, Allende received a plurality of the votes cast, getting 36% of the vote against Alessandri's 35% and Tomic's 28%. Despite pressure from the government of the United States, the Chilean Congress, keeping with tradition, conducted a runoff vote between the leading candidates, Allende and former president Jorge Alessandri. This procedure had previously been a near-formality, yet became quite fraught in 1970. After assurances of legality on Allende's part, the murder of the Army Commander-in-Chief, General René Schneider and Frei's refusal to form an alliance with Alessandri to oppose Allende – on the grounds that the Christian Democrats were a workers' party and could not make common cause with the oligarchs – Allende was chosen by a vote of 153 to 35. The Popular Unity platform included the nationalization of U.S. interests in Chile's major copper mines, the advancement of workers' rights, deepening of the Chilean land reform, reorganization of the national economy into socialized, mixed, and private sectors, a foreign policy of "international solidarity" and national independence and a new institutional order (the "people's state" or "poder popular"), including the institution of a unicameral congress. Immediately after the election, the United States expressed its disapproval and raised a number of economic sanctions against Chile. In addition, the CIA's website reports that the agency aided three different Chilean opposition groups during that time period and "sought to instigate a coup to prevent Allende from taking office". The action plans to prevent Allende from coming to power were known as Track I and Track II. In the first year of Allende's term, the short-term economic results of Economics Minister Pedro Vuskovic's expansive monetary policy were unambiguously favorable: 12% industrial growth and an 8.6% increase in GDP, accompanied by major declines in inflation (down from 34.9% to 22.1%) and unemployment (down to 3.8%). Allende adopted measures including price freezes, wage increases, and tax reforms, which had the effect of increasing consumer spending and redistributing income downward. Joint public-private public works projects helped reduce unemployment. Much of the banking sector was nationalized. Many enterprises within the copper, coal, iron, nitrate, and steel industries were expropriated, nationalized, or subjected to state intervention. Industrial output increased sharply and unemployment fell during the administration's first year. However, these results were not sustainable and in 1972 the Chilean escudo had runaway inflation of 140%. An economic depression that had begun in 1967 peaked in 1972, exacerbated by capital flight, plummeting private investment, and withdrawal of bank deposits in response to Allende's socialist program. Production fell and unemployment rose. The combination of inflation and government-mandated price-fixing led to the rise of black markets in rice, beans, sugar, and flour, and a "disappearance" of such basic commodities from supermarket shelves. Recognizing that U.S. intelligence forces were trying to destabilize his presidency through a variety of methods, the KGB offered financial assistance to the first democratically elected Marxist president. However, the reason behind the U.S. covert actions against Allende concerned not the spread of Marxism but fear over losing control of its investments. "By 1968, 20 percent of total U.S. 
foreign investment was tied up in Latin America...Mining companies had invested $1 billion over the previous fifty years in Chile's copper mining industry – the largest in the world – but they had sent $7.2 billion home." Part of the CIA's program involved a propaganda campaign that portrayed Allende as a would-be Soviet dictator. In fact, however, "the U.S.'s own intelligence reports showed that Allende posed no threat to democracy." Nevertheless, the Richard Nixon administration organized and inserted secret operatives in Chile, in order to quickly destabilize Allende's government. In addition, Nixon gave instructions to make the Chilean economy scream, and international financial pressure restricted economic credit to Chile. Simultaneously, the CIA funded opposition media, politicians, and organizations, helping to accelerate a campaign of domestic destabilization. By 1972, the economic progress of Allende's first year had been reversed, and the economy was in crisis. Political polarization increased, and large mobilizations of both pro- and anti-government groups became frequent, often leading to clashes. By 1973, Chilean society had grown highly polarized, between strong opponents and equally strong supporters of Salvador Allende and his government. Military actions and movements, separate from the civilian authority, began to manifest in the countryside. The Tanquetazo was a failed military coup d'état attempted against Allende in June 1973. In its "Agreement", on August 22, 1973, the Chamber of Deputies of Chile asserted that Chilean democracy had broken down and called for "redirecting government activity", to restore constitutional rule. Less than a month later, on September 11, 1973, the Chilean military deposed Allende, who shot himself in the head to avoid capture as the Presidential Palace was surrounded and bombed. Subsequently, rather than restore governmental authority to the civilian legislature, Augusto Pinochet exploited his role as Commander of the Army to seize total power and to establish himself at the head of a junta. CIA involvement in the coup is documented. As early as the Church Committee Report (1975), publicly available documents have indicated that the CIA attempted to prevent Allende from taking office after he was elected in 1970; the CIA itself released documents in 2000 acknowledging this and that Pinochet was one of their favored alternatives to take power. According to the Vasili Mitrokhin and Christopher Andrew, the KGB and the Cuban Intelligence Directorate launched a campaign known as Operation TOUCAN. For instance, in 1976, the New York Times published 66 articles on alleged human rights abuses in Chile and only 4 on Cambodia, where the communist Khmer Rouge killed some 1.5 million people of 7.5 million people in the country. Military dictatorship (1973–1990) By early 1973, inflation had risen 600% under Allende's presidency. The crippled economy was further battered by prolonged and sometimes simultaneous strikes by physicians, teachers, students, truck owners, copper workers, and the small business class. A military coup overthrew Allende on September 11, 1973. As the armed forces bombarded the presidential palace (Palacio de La Moneda), Allende committed suicide. A military government, led by General Augusto Pinochet Ugarte, took over control of the country. The first years of the regime were marked by human rights violations. The junta jailed, tortured, and executed thousands of Chileans. 
In October 1973, at least 72 people were murdered by the Caravan of Death. At least a thousand people were executed during Pinochet's first six months in office, and at least two thousand more were killed during the next sixteen years, as reported by the Rettig Report. At least 29,000 were imprisoned and tortured. According to the Latin American Institute on Mental Health and Human Rights (ILAS), "situations of extreme trauma" affected about 200,000 persons; this figure includes individuals killed, tortured or exiled, and their immediate families. About 30,000 left the country. The four-man junta headed by General Augusto Pinochet abolished civil liberties, dissolved the national congress, banned union activities, prohibited strikes and collective bargaining, and erased the Allende administration's agrarian and economic reforms. The junta embarked on a radical program of liberalization, deregulation and privatization, slashing tariffs as well as government welfare programs and deficits. Economic reforms were drafted by a group of technocrats who became known as the Chicago Boys because many of them had been trained or influenced by University of Chicago professors. Under these new policies, the rate of inflation dropped. A new constitution was approved on September 11, 1980, in a plebiscite characterized by the absence of voter registration lists, and General Pinochet became president of the republic for an 8-year term. In 1982–1983 Chile witnessed a severe economic crisis with a surge in unemployment and a meltdown of the financial sector. Sixteen out of 50 financial institutions faced bankruptcy. In 1982 the two biggest banks were nationalized to prevent an even worse credit crunch. In 1983 another five banks were nationalized and two banks had to be put under government supervision. The central bank took over foreign debts. Critics ridiculed the economic policy of the Chicago Boys as "the Chicago way to socialism". After the economic crisis, Hernán Büchi became Minister of Finance from 1985 to 1989, introducing a more pragmatic economic policy. He allowed the peso to float and reinstated restrictions on the movement of capital in and out of the country. He introduced banking regulations and simplified and reduced the corporate tax. Chile went ahead with privatizations, including public utilities, plus the re-privatization of companies that had returned to the government during the 1982–1983 crisis. From 1984 to 1990, Chile's gross domestic product grew by an annual average of 5.9%, the fastest on the continent. Chile developed a strong export economy, including the export of fruits and vegetables to the northern hemisphere when they were out of season there, which commanded high prices. The military junta began to change during the late 1970s. Due to disagreements with Pinochet, air force commander General Gustavo Leigh was expelled from the junta in 1978 and replaced by General Fernando Matthei. In the late 1980s, the government gradually permitted greater freedom of assembly, speech, and association, including trade union and political activity. Due to the Caso Degollados ("slit throats case"), in which three Communist Party members were assassinated, César Mendoza, a member of the junta since 1973 and representative of the Carabineros, resigned in 1985 and was replaced by Rodolfo Stange. The next year, Carmen Gloria Quintana was burnt alive in what became known as the Caso Quemado ("Burnt Alive case").
Chile's constitution established that in 1988 there would be another plebiscite in which the voters would accept or reject a single candidate proposed by the Military Junta. Pinochet was, as expected, the candidate proposed, but was denied a second 8-year term by 54.5% of the vote. Transition to democracy (1990–) Aylwin, Frei, and Lagos Chileans elected a new president and the majority of members of a two-chamber congress on December 14, 1989. Christian Democrat Patricio Aylwin, the candidate of a coalition of 17 political parties called the Concertación, received an absolute majority of votes (55%). President Aylwin served from 1990 to 1994, in what was considered a transition period. In February 1991 Aylwin created the National Commission for Truth and Reconciliation, which released in February 1991 the Rettig Report on human rights violations committed during the military rule. This report counted 2,279 cases of "disappearances" which could be proved and registered. Of course, the very nature of "disappearances" made such investigations very difficult. The same problem arose, several years later, with the Valech Report, released in 2004 and which counted almost 30,000 victims of torture, among testimonies from 35,000 persons. In December 1993, Christian Democrat Eduardo Frei Ruiz-Tagle, the son of previous president Eduardo Frei Montalva, led the Concertación coalition to victory with an absolute majority of votes (58%). Frei Ruiz-Tagle was succeeded in 2000 by Socialist Ricardo Lagos, who won the presidency in an unprecedented runoff election against Joaquín Lavín of the rightist Alliance for Chile, by a very tight score of less than 200,000 votes (51,32%). In 1998, Pinochet traveled to London for back surgery. But under orders of Spanish judge Baltasar Garzón, he was arrested there, attracting worldwide attention, not only because of the history of Chile and South America, but also because this was one of the first arrests of a former president based on the universal jurisdiction principle. Pinochet tried to defend himself by referring to the State Immunity Act of 1978, an argument rejected by the British justice. However, UK Home Secretary Jack Straw took the responsibility to release him on medical grounds, and refused to extradite him to Spain. Thereafter, Pinochet returned to Chile in March 2000. Upon descending the plane on his wheelchair, he stood up and saluted the cheering crowd of supporters, including an army band playing his favorite military march tunes, which was awaiting him at the airport in Santiago. President Ricardo Lagos later commented that the retired general's televised arrival had damaged the image of Chile, while thousands demonstrated against him. Bachelet and Piñera The Concertación coalition has continued to dominate Chilean politics for last two decades. In January 2006 Chileans elected their first female president, Michelle Bachelet, of the Socialist Party. She was sworn in on March 11, 2006, extending the Concertación coalition governance for another four years. In 2002 Chile signed an association agreement with the European Union (comprising a free trade agreement and political and cultural agreements), in 2003, an extensive free trade agreement with the United States, and in 2004 with South Korea, expecting a boom in import and export of local produce and becoming a regional trade-hub. 
Continuing the coalition's free trade strategy, in August 2006 President Bachelet promulgated a free trade agreement with China (signed under the previous administration of Ricardo Lagos), the first Chinese free trade agreement with a Latin American nation; similar deals with Japan and India were promulgated in August 2007. In October 2006, Bachelet promulgated a multilateral trade deal with New Zealand, Singapore and Brunei, the Trans-Pacific Strategic Economic Partnership (P4), also signed under Lagos' presidency. Regionally, she signed bilateral free trade agreements with Panama, Peru and Colombia. After 20 years, Chile took a new direction with the victory of the center-right Sebastián Piñera in the Chilean presidential election of 2009–2010, in which he defeated former President Eduardo Frei in the runoff. On 27 February 2010, Chile was struck by an 8.8 Mw earthquake, the fifth largest ever recorded at the time. More than 500 people died (most from the ensuing tsunami) and over a million people lost their homes. The earthquake was also followed by multiple aftershocks. Initial damage estimates were in the range of US$15–30 billion, around 10 to 15 percent of Chile's real gross domestic product. Chile achieved global recognition for the successful rescue of 33 trapped miners in 2010. On 5 August 2010, the access tunnel collapsed at the San José copper and gold mine in the Atacama Desert near Copiapó in northern Chile, trapping 33 men below ground. A rescue effort organized by the Chilean government located the miners 17 days later. All 33 men were brought to the surface two months later, on 13 October 2010, over a period of almost 24 hours, an effort carried live on television around the world. Despite good macroeconomic indicators, there was increased social dissatisfaction, focused on demands for better and fairer education, culminating in massive protests demanding more democratic and equitable institutions. Approval of Piñera's administration fell irrevocably. In 2013, Bachelet, a Social Democrat, was elected again as president, seeking to carry out the structural changes demanded by society in recent years: education reform, tax reform, same-sex civil unions, and the definitive end of the binomial electoral system, with the aim of furthering equality and ending what remained of the dictatorship's institutional legacy. In 2015 a series of corruption scandals (most notably the Penta and Caval cases) became public, threatening the credibility of the political and business class. In December 2017, Sebastián Piñera was elected president of Chile for a second term. In the first round he received 36% of the vote, the highest share among the eight candidates. In the runoff on 17 December, Piñera faced Alejandro Guillier, a television news anchor who represented Bachelet's New Majority (Nueva Mayoría) coalition, and won with 54% of the votes.
Estallido Social and Constitutional Referendum
In October 2019 there were violent protests about the cost of living and inequality, resulting in Piñera declaring a state of emergency. On 15 November, most of the political parties represented in the National Congress signed an agreement to call a national referendum in April 2020 regarding the creation of a new Constitution. But the COVID-19 pandemic postponed the date of the vote, while Chile was one of the hardest-hit nations in the Americas as of May 2020. On October 25, 2020, Chileans voted 78.28 per cent in favor of a new constitution, while 21.72 per cent rejected the change. Voter turnout was 51 per cent.
A second vote was held on April 11, 2021, to select 155 Chileans who form the convention which will draft the new constitution. On 19 December 2021, leftist candidate, the 35-year-old former student protest leader, Gabriel Boric, won Chile's presidential election to become the country's youngest ever leader, after the most polarizing election since democracy was restored, defeating right wing pinochetist and leader of the Chilean Republican Party José Antonio Kast. The center-left and center-right political conglomerates alternating power during the last 32 years (ex-Concertación and Chile Vamos) ended up in fourth and fifth place of the Presidential election. Gabriel Boric presidency (2022- ) On 11 March 2022, Gabriel Boric was sworn in as president to succeed outgoing President Sebastian Pinera. Out of 24 members of Gabriel Boric's female-majority Cabinet, 14 are women. On 4 September 2022, voters rejected overwhelmingly the new constitution in the constitutional referendum, which was put forward by the constitutional convention and strongly backed by President Boric. Prior to the dismissal of the proposed constitution the issue of constitutional plurinationalism was noted in polls as particularly divisive in Chile. See also Arauco War Chincha Islands War COVID-19 pandemic in Chile Economic history of Chile List of presidents of Chile Miracle of Chile Occupation of the Araucanía Politics of Chile Timeline of Chilean history U.S. intervention in Chile War of the Confederation War of the Pacific General: History of the Americas History of Latin America History of South America Spanish colonization of the Americas References Further reading In English (See pp. 153–160.) Antezana-Pernet, Corinne. "Peace in the World and Democracy at Home: The Chilean Women's Movement in the 1940s" in Latin America in the 1940s, David Rock, ed. Berkeley and Los Angeles: University of California Press 1994, pp. 166–186. Bergquist, Charles W. Labor in Latin America: Comparative Essays on Chile, Argentina, Venezuela, and Colombia. Stanford: Stanford University Press 1986. Burr, Robert N. By Reason or Force: Chile and the Balancing Power of South America 1830–1905. Berkeley and Los Angeles: University of California Press 1965. Collier, Simon. Ideas and Politics of Chilean Independence, 1808–1833. New York: Cambridge University Press 1967. Drake, Paul. Socialism and Populism in Chile, 1932–1952. Urbana: University of Illinois Press 1978. Drake, Paul. "International Crises and Popular Movements in Latin America: Chile and Peru from the Great Depression to the Cold War," in Latin America in the 1940s, David Rock, ed. Berkeley and Los Angeles: University of California Press 1994, 109–140. Harvey, Robert. "Liberators: Latin America`s Struggle For Independence, 1810–1830". John Murray, London (2000). Klubock, Thomas. La Frontera: Forests and Ecological Conflict in Chile's Frontier Territory. Durham: Duke University Press 2014. Mallon, Florencia. Courage Tastes of Blood: The Mapuche Community of Nicolás Ailío and the Chilean State, 1906–2001. Durham: Duke University Press 2005. Pike, Frederick B. Chile and the United States, 1880–1962: The Emergence of Chile's Social Crisis and challenge to United States Diplomacy. University of Notre Dame Press 1963. Stern, Steve J. Battling for Hearts and Minds: Memory Struggles in Pinochet's Chile, 1973–1988. Durham: Duke University Press 2006. In Spanish Cronología de Chile in the Spanish-language Wikipedia. Díaz, J.; Lüders. R. y Wagner, G. (2016). Chile 1810–2010. 
La República en Cifras. Historical Statistics. (Santiago: Ediciones Universidad Católica de Chile); a compendium of indicators, from macroeconomic aggregates to demographic trends and social policies, focused on economic and social history; more information; Data can be obtained from: online External links History of Chile (book by Chilean historian Luis Galdames)
2,455
5,497
https://en.wikipedia.org/wiki/Chilean%20Armed%20Forces
Chilean Armed Forces
The Chilean Armed Forces () is the unified military organization comprising the Chilean Army, Air Force, and Navy. The President of Chile is the commander-in-chief of the military, and formulates policy through the Minister of Defence. In recent years and after several major reequipment programs, the Chilean Armed Forces have become one of the most technologically advanced and professional armed forces in Latin America. The Chilean Army is mostly supplied with equipment from Germany, the United States, Brazil, Israel, France, and Spain. Structure Army The current commander-in-chief of the Chilean Army is General de Ejército Sr. Javier Iturriaga del Campo. The 46,350-person army is organized under six military administrative regions and six divisional headquarters. The forces include one special forces brigade, four armoured brigades, one armoured detachment, three motorized brigades, two motorized detachments, four mountain detachments and one aviation brigade. The army operates German Leopard 1 and 2 tanks as its main battle tanks, including 170+ Leopard 2A4 and 115 Leopard 1. The army has approximately 40,000 reservists. Navy Admiral Juan Andrés De La Maza Larraín directs the 19,800-person Chilean Navy, including 3,600 Marines. Of the fleet of 66 surface vessels, eight are major combatant ships and they are based in Valparaíso. The navy operates its own aircraft for transport and patrol; there are no fighters or bomber aircraft but they have attack helicopters. The Navy also operates four submarines based in Talcahuano. Air Force General Arturo Merino Nuñez heads 11,050-strong Chilean Air Force. Air assets are distributed among five air brigades headquartered in Iquique, Antofagasta, Santiago, Puerto Montt, and Punta Arenas. The Air Force also operates an airbase on King George Island, Antarctica. See also Chilean Army order of battle Chilean Navy Chilean Air Force Citations References External links Ejército de Chile (Army) Armada de Chile website (Navy) Fuerza Aérea de Chile website (Air Force)
2,460
5,553
https://en.wikipedia.org/wiki/Geography%20of%20Costa%20Rica
Geography of Costa Rica
Costa Rica is located on the Central American Isthmus, surrounding the point 10° north of the equator and 84° west of the prime meridian. It has 212 km of Caribbean Sea coastline and 1,016 km on the North Pacific Ocean. The area is 51,100 km2, of which 40 km2 is water. It is slightly smaller than Bosnia and Herzegovina.
Geology
Costa Rica is located on the Caribbean Plate. It borders the Cocos Plate in the Pacific Ocean, which is being subducted beneath it. This forms the volcanoes of Costa Rica, also known as the Central America Volcanic Arc. The Caribbean Plate began its eastward migration during the Late Cretaceous. During the Late Paleocene, a local sea-level low-stand, assisted by the continental uplift of the western margin of South America, resulted in a land bridge over which several groups of mammals apparently took part in an interchange. Costa Rica has experienced many earthquakes.
Political and human geography
Costa Rica shares a border with Nicaragua to the north, and a 348-km border with Panama to the south. Costa Rica claims an exclusive economic zone and a territorial sea. Land use: Arable land: 4.8%. Permanent crops: 6.66%. Other: 88.54%. Administrative divisions of Costa Rica include 7 provinces, 82 cantons, and 478 districts. There are also 24 indigenous territories.
Physical geography
Islands
Costa Rica has many islands, the most remote being Cocos Island and the largest being Isla Calero.
Mountain ranges
The nation's coastal plains are separated by the Cordillera Central and the Cordillera de Talamanca, which form the spine of the country and the drainage divide between the Pacific and Caribbean slopes. The Cordillera de Guanacaste is in the north near the border with Nicaragua and forms part of the Continental Divide of the Americas. Much of the Cordillera de Talamanca is included in the La Amistad International Park, which is shared between Costa Rica and Panama. It contains the country's highest peaks: the Cerro Chirripó and the Cerro Kamuk. Much of the region is covered by the Talamancan montane forests. The range also includes the Cerros de Escazú, which border the Costa Rican Central Valley to the south.
Hydrology
Irrigated land covers 1,031 km2. The rivers of Costa Rica all drain into either the Caribbean or the Pacific.
Extreme points
Cocos Island is the southwestern extreme of the country. On the mainland, the northernmost point is Peñas Blancas, the southern and eastern extreme is the Panama border, and the westernmost point is the Santa Elena Peninsula. The lowest point is sea level, and the highest is Cerro Chirripó at 3,810 m.
Climate
The climate is tropical and subtropical: a dry season (December to April), a rainy season (May to November), and cooler conditions in the highlands. Because Costa Rica is located between 8 and 12 degrees north of the Equator, the climate is tropical year round. However, the country has many microclimates depending on elevation, rainfall, topography, and the geography of each particular region. Costa Rica's seasons are defined by how much rain falls during a particular period. The year can be split into two periods: the dry season, known to residents as summer (verano), and the rainy season, known locally as winter (invierno). The "summer" or dry season goes from December to April, and the "winter" or rainy season goes from May to November, which almost coincides with the Atlantic hurricane season; during this time it rains constantly in some regions. The location receiving the most rain is the Caribbean slopes of the Cordillera Central mountains.
Humidity is also higher on the Caribbean side than on the Pacific side. Mean annual temperatures are highest on the coastal lowlands, lower in the main populated areas of the Cordillera Central, and lowest on the summits of the highest mountains.
Flora and fauna
Costa Rica is a biodiversity hotspot. While the country has only about 0.03% of the world's landmass, it contains 5% of the world's biodiversity. It is home to about 12,119 species of plants, of which 950 are endemic. There are 117 native trees and more than 1,400 types of orchids; a third of them can be found in the Monteverde Cloud Forest Reserve. Almost half of the country's land is covered by forests, though only 3.5% is covered by primary forests. Deforestation in Costa Rica has been reduced from some of the worst rates in the world between 1973 and 1989 to almost zero by 2005. The diversity of wildlife in Costa Rica is very high; there are 441 species of amphibians and reptiles, 838 species of birds, 232 species of mammals and 181 species of freshwater fish. Costa Rica has high levels of endemism; 81 species of amphibians and reptiles, 17 species of birds and 7 species of mammals are endemic to the country. However, many species are endangered. According to the World Conservation Monitoring Centre, 209 species of birds, mammals, reptiles, amphibians and plants are endangered. Some of the country's most endangered species are the harpy eagle, the giant anteater, the golden toad and the jaguar. The International Union for Conservation of Nature (IUCN) reports the golden toad as extinct. Over 25% of Costa Rica's national territory is protected by the National System of Conservation Areas (SINAC), which oversees all of the country's protected areas. There are 29 national parks and many conservation areas in Costa Rica; together, protected areas comprise over one-fourth of Costa Rican territory. 9.3% of the country is protected under IUCN categories I–V. Around 25% of the country's land area is in protected national parks and protected areas, the largest percentage of protected areas in the world (developing world average 13%, developed world average 8%). Tortuguero National Park is home to monkeys, sloths, birds, and a variety of reptiles. The Monteverde Cloud Forest Reserve is home to about 2,000 plant species, including numerous orchids. Over 400 types of birds and more than 100 species of mammals can be found there. Over 840 species of birds have been identified in Costa Rica. As is the case in much of Central America, the avian species in Costa Rica are a mix of North and South American species. The country's abundant fruit trees, many of which bear fruit year round, are hugely important to the birds, some of which survive on diets that consist of only one or two types of fruit. Some of the country's most notable avian species include the resplendent quetzal, scarlet macaw, three-wattled bellbird, bare-necked umbrellabird, and the keel-billed toucan. The Instituto Nacional de Biodiversidad is allowed to collect royalties on any biological discoveries of medical importance. Costa Rica is a center of biological diversity for reptiles and amphibians, including the world's fastest running lizard, the spiny-tailed iguana (Ctenosaura similis).
Natural resources
Hydropower is produced from Lake Arenal, the largest lake in Costa Rica. Total renewable water resources are 112.4 km3. Freshwater withdrawal is 5.77 km3/year (15%/9%/77%), or 1,582 m3/year per capita.
Agriculture is the largest water user demanding around 53% of total supplies while the sector contributes 6.5% to the Costa Rica GDP. Both total and per capita water usage is very high in comparison to other Central American countries but when measured against available freshwater sources, Costa Rica uses only 5% of its available supply. Increasing urbanization will put pressure on water resources management in Costa Rica. Gallery See also List of earthquakes in Costa Rica List of Faults in Costa Rica Costa Rica is party to the following treaties: Convention on Biological Diversity, Convention on Environmental Modification, United Nations Framework Convention on Climate Change, the Montreal Protocol, Ramsar Convention, International Convention for the Regulation of Whaling, Desertification Convention, Endangered Species Convention, Basel Convention, Convention on the Law of the Sea, Convention on Marine Dumping, and the Comprehensive Nuclear-Test-Ban Treaty. It has signed but not ratified the Convention on Marine Life Conservation and the Kyoto Protocol. References External links Map of the Republic of Costa Rica from 1891 Costa Rica - another historic map
2,471
5,554
https://en.wikipedia.org/wiki/Demographics%20of%20Costa%20Rica
Demographics of Costa Rica
This is a demography of the population of Costa Rica, including population density, ethnicity, education level, health of the populace, economic status, religious affiliations and other aspects of the population. According to the United Nations, in Costa Rica had an estimated population of people. Whites and Mestizos make up 83.4% of the population, 7% are black people (including mixed race), 2.4% Amerindians, 0.2% Chinese and 7% other/none. In 2010, just under 3% of the population was of African descent; these are called Afro-Costa Ricans or West Indians and are English-speaking descendants of 19th-century black Jamaican immigrant workers. Another 1% is composed of those of Chinese origin, and less than 1% are West Asian, mainly of Lebanese descent but also Palestinians. The 2011 Census provided the following data: whites and mestizos make up 83.4% of the population, 7% are black people (including mixed race), 2.4% Amerindians, 0.2% Chinese, and 7% other/none. There is also a community of North American retirees from the United States and Canada, as well as fairly large numbers of European Union expatriates (especially Scandinavians and Germans) and Australians who have come to retire. Immigrants made up 9% of the population in 2012. This included permanent settlers as well as migrants who were hoping to reach the U.S. In 2015, there were some 420,000 immigrants in Costa Rica and the number of asylum seekers (mostly from Honduras, El Salvador, Guatemala and Nicaragua) rose to more than 110,000. An estimated 10% of the Costa Rican population in 2014 was made up of Nicaraguans. The indigenous population today numbers about 60,000 (just over 1% of the population), with some Miskito and Garifuna (a population of mixed African and Carib Amerindian descent) living in the coastal regions. Costa Rica's emigration is the smallest in the Caribbean Basin and is among the smallest in the Americas. By 2015, only about 133,185 (2.77%) of the country's people lived in another country as emigrants. The main destination countries are the United States (85,924), Nicaragua (10,772), Panama (7,760), Canada (5,039), Spain (3,339), Mexico (2,464), Germany (1,891), Italy (1,508), Guatemala (1,162) and Venezuela (1,127).
Population and ancestry
The population is increasing at a rate of 1.5% per year. At current trends the population will increase to 9,158,000 in about 46 years. The population density is 94 people per square km, the third highest in Central America. Approximately 40% live in rural areas and 60% in urban areas. The rate of urbanization estimated for the period 2005–2015 is 2.74% per annum, one of the highest among developing countries. About 75% of the population live in the upper lands (above 500 meters), where temperatures are cooler and milder. The 2011 census counted a population of 4.3 million people distributed among the following groups: 83.6% whites or mestizos, 6.7% black mixed race, 2.4% Native American, 1.1% black or Afro-Caribbean; the census showed 1.1% as Other, 2.9% (141,304 people) as None, and 2.2% (107,196 people) as unspecified. In 2011, there were over 104,000 Native American or indigenous inhabitants, representing 2.4% of the population.
Most of them live in secluded reservations, distributed among eight ethnic groups: Quitirrisí (in the Central Valley), Matambú or Chorotega (Guanacaste), Maleku (northern Alajuela), Bribri (southern Atlantic), Cabécar (Cordillera de Talamanca), Guaymí (southern Costa Rica, along the Panamá border), Boruca (southern Costa Rica) and Térraba (southern Costa Rica). The population includes European Costa Ricans (of European ancestry), primarily of Spanish descent, with significant numbers of Italian, German, English, Dutch, French, Irish, Portuguese, and Polish families, as well as a sizable Jewish community. The majority of the Afro-Costa Ricans are Creole English-speaking descendants of 19th century black Jamaican immigrant workers. The 2011 census classified 83.6% of the population as white or Mestizo; the latter are persons of combined European and Amerindian descent. The Mulatto segment (mix of white and black) represented 6.7%, and indigenous people made up 2.4% of the population. Populations of mixed Native and European descent are far smaller than in other Latin American countries. Exceptions are Guanacaste, where almost half the population is visibly mestizo, a legacy of the more pervasive unions between Spanish colonists and Chorotega Amerindians through several generations, and Limón, where the vast majority of the Afro-Costa Rican community lives.
Education
According to the United Nations, Costa Rica's literacy rate stands at 95.8%, the fifth highest among American countries. Costa Rica's Education Index in 2006 was 0.882, higher than that of richer countries such as Singapore and Mexico. Costa Rica's gross enrollment ratio is 73.0%, smaller than that of the neighboring countries of El Salvador and Honduras. All students must complete primary school and secondary school, between the ages of 6 and 15. Some students drop out because they must work to help support their families. In 2007 there were 536,436 pupils enrolled in 3,771 primary schools, and 377,900 students attended public and private secondary schools. Costa Rica's main universities are the University of Costa Rica, in San Pedro, and the National University of Costa Rica, in Heredia. Costa Rica also has several small private universities.
Emigration
Costa Rica's emigration is among the smallest in the Caribbean Basin. About 3% of the country's people live in another country as immigrants. The main destination countries are the United States, Spain, Mexico and other Central American countries. In 2005, there were 127,061 Costa Ricans living in another country as immigrants. Remittances were $513,000,000 in 2006, and they represented 2.3% of the country's GDP.
Immigration
Costa Rica's immigration is among the largest in the Caribbean Basin. According to the 2011 census, 385,899 residents were born abroad. The vast majority were born in Nicaragua (287,766). Other countries of origin were Colombia (20,514), the United States (16,898), Spain (16,482) and Panama (11,250). Outward remittances were $246,000,000 in 2006.
Migrants
According to the World Bank, about 489,200 migrants lived in the country in 2010, mainly from Nicaragua, Panama, El Salvador, Honduras, Guatemala, and Belize, while 125,306 Costa Ricans live abroad in the United States, Panama, Nicaragua, Spain, Mexico, Canada, Germany, Venezuela, the Dominican Republic, and Ecuador.
The number of migrants declined in later years, but in 2015 there were some 420,000 immigrants in Costa Rica, and the number of asylum seekers (mostly from Honduras, El Salvador, Guatemala and Nicaragua) rose to more than 110,000, a fivefold increase from 2012. In 2016, the country was called a "magnet" for migrants from South and Central America and other countries who were hoping to reach the U.S.

European Costa Ricans
European Costa Ricans are people from Costa Rica whose ancestry lies within the continent of Europe, most notably Spain. According to DNA studies, around 75% of the population have some level of European ancestry. Percentages of the Costa Rican population by race are known because the national census includes a question on ethnicity. As of 2012, 65.80% of Costa Ricans identified themselves as white/castizo and 13.65% as mestizo, putting the combined share at around 80% of the population. This, however, is based on self-identification and not on scientific studies. According to the 2012 PLoS Genetics study Geographic Patterns of Genome Admixture in Latin American Mestizos, Costa Ricans have 68% European, 29% Amerindian and 3% African ancestry. According to the CIA World Factbook, Costa Rica has a white or mestizo population of 83.6%. Christopher Columbus and his crew were the first Europeans to set foot on what is now Costa Rica, on Columbus's last voyage, when he arrived at Uvita Island (in the modern-day Limón province) in 1502. Costa Rica was part of the Spanish Empire and was colonized by Spaniards, mostly Castilians, Basques and Sephardic Jews. After independence, large migrations of wealthy American, German, French and British businessmen came to the country, encouraged by the government and followed by their families and employees (many of them technicians and professionals), creating colonies and mixing with the population, especially the upper and middle classes. Later, more humble waves of Italian, Spanish (mostly Catalan) and Arab (mostly Lebanese and Syrian) migrants came to the country, escaping economic crises in their home countries and settling in large, more closed colonies. Polish migrants, mostly Ashkenazi Jews escaping anti-Semitism and Nazi persecution in Europe, also migrated to the country in large numbers. In 1901, President Ascensión Esquivel Ibarra closed the country to all non-white immigration, forbidding the entry of Black, Chinese, Arab, Turkish or Gypsy migrants. After the beginning of the Spanish Civil War, a large migration of Republican refugees, mostly Castilians, Galicians and Asturians, also settled in the country; later, Chilean, Mexican and Colombian migrants would leave their countries for Costa Rica, escaping from war or dictatorship, as Costa Rica is the longest-running democracy in Latin America.

Ethnic groups
The following listing is taken from a publication of the Costa Rica 2011 Census:
Mestizos and Whites - 3,597,847 = 83.64%
Mulatto - 289,209 = 6.72%
Indigenous - 104,143 = 2.42%
Black/Afro-Caribbean - 45,228 = 1.05%
Chinese - 9,170 = 0.21%
Other - 36,334 = 0.84%
Did not state - 95,140 = 2.21%

Vital statistics
Current vital statistics
Structure of the population (01.07.2017) (estimates; the source of data is the national household survey)
Life expectancy at birth
Source: UN World Population Prospects
Demographic statistics
Demographic statistics according to the World Population Review in 2022.
One birth every 8 minutes One death every 19 minutes One net migrant every 131 minutes Net gain of one person every 12 minutes Demographic statistics according to the CIA World Factbook, unless otherwise indicated. Population 5,204,411 (2022 est.) 4,987,142 (July 2018 est.) 4,872,543 (July 2016 est.) Ethnic groups White or Mestizo 83.6%, Mulatto 6.7%, Indigenous 2.4%, Black or African descent 1.1%, other 1.1%, none 2.9%, unspecified 2.2% (2011 est.) Age structure 0-14 years: 22.08% (male 575,731/female 549,802) 15-24 years: 15.19% (male 395,202/female 379,277) 25-54 years: 43.98% (male 1,130,387/female 1,111,791) 55-64 years: 9.99% (male 247,267/female 261,847) 65 years and over: 8.76% (2020 est.) (male 205,463/female 241,221) 0-14 years: 22.43% (male 572,172 /female 546,464) 15-24 years: 15.94% (male 405,515 /female 389,433) 25-54 years: 44.04% (male 1,105,944 /female 1,090,434) 55-64 years: 9.48% (male 229,928 /female 242,696) 65 years and over: 8.11% (male 186,531 /female 218,025) (2018 est.) Median age total: 32.6 years. Country comparison to the world: 109th male: 32.1 years female: 33.1 years (2020 est.) Total: 31.7 years. Country comparison to the world: 109th Male: 31.2 years Female: 32.2 years (2018 est.) Total: 30.9 years Male: 30.4 years Female: 31.3 years (2016 est.) Birth rate 14.28 births/1,000 population (2022 est.) Country comparison to the world: 121st 15.3 births/1,000 population (2018 est.) Country comparison to the world: 121st Death rate 4.91 deaths/1,000 population (2022 est.) Country comparison to the world: 198th 4.8 deaths/1,000 population (2018 est.) Country comparison to the world: 200th Total fertility rate 1.86 children born/woman (2022 est.) Country comparison to the world: 134th 1.89 children born/woman (2018 est.) Country comparison to the world: 135th Net migration rate 0.77 migrant(s)/1,000 population (2022 est.) Country comparison to the world: 69th 0.8 migrant(s)/1,000 population (2018 est.) Country comparison to the world: 65th Population growth rate 1.01% (2022 est.) Country comparison to the world: 95th 1.13% (2018 est.) Country comparison to the world: 95th Contraceptive prevalence rate 70.9% (2018) Religions Roman Catholic 47.5%, Evangelical and Pentecostal 19.8%, Jehovah's Witness 1.4%, other Protestant 1.2%, other 3.1%, none 27% (2021 est.) Dependency ratios Total dependency ratio: 45.4 (2015 est.) Youth dependency ratio: 32.4 (2015 est.) Elderly dependency ratio: 12.9 (2015 est.) Potential support ratio: 7.7 (2015 est.) Urbanization urban population: 82% of total population (2022) rate of urbanization: 1.5% annual rate of change (2020-25 est.) Infant mortality rate Total: 8.3 deaths/1,000 live births Male: 9 deaths/1,000 live births Female: 7.4 deaths/1,000 live births (2016 est.) Life expectancy at birth total population: 79.64 years. Country comparison to the world: 58th male: 76.99 years female: 82.43 years (2022 est.) Total population: 78.9 years. Country comparison to the world: 55th Male: 76.2 years Female: 81.7 years (2018 est.) Total population: 78.6 years Male: 75.9 years Female: 81.3 years (2016 est.) HIV/AIDS Adult prevalence rate: 0.33% People living with HIV/AIDS: 10,000 Deaths:200 (2015 est.) Education expenditures 6.7% of GDP (2020) Country comparison to the world: 24th Literacy total population: 97.9% male: 97.8% female: 97.9% (2018) School life expectancy (primary to tertiary education) total: 17 years male: 16 years female: 17 years (2019) Unemployment, youth ages 15-24 total: 40.7% male: 34% female: 50.9% (2020 est.) 
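The per-minute figures quoted above are simple derivations from a crude annual rate (events per 1,000 population per year) and the total population. The following illustrative Python sketch, which is not taken from the cited sources, shows the arithmetic using the 2022 CIA World Factbook estimates quoted in this section as inputs; World Population Review derives its published intervals from its own, slightly different population and rate estimates, so the printed values will not match the quoted figures exactly.

# Illustrative sketch (not from the source): deriving "one birth every N minutes"
# figures from a crude annual rate per 1,000 population and the total population.
# Inputs are the 2022 CIA World Factbook estimates quoted in this section; World
# Population Review uses its own inputs, so its published intervals differ slightly.

MINUTES_PER_YEAR = 365 * 24 * 60


def interval_minutes(rate_per_1000: float, population: int) -> float:
    """Average number of minutes between events for a given crude annual rate."""
    events_per_year = rate_per_1000 / 1000 * population
    return MINUTES_PER_YEAR / events_per_year


population = 5_204_411      # 2022 est.
birth_rate = 14.28          # births per 1,000 population per year
death_rate = 4.91           # deaths per 1,000 population per year
migration_rate = 0.77       # net migrants per 1,000 population per year

print(f"one birth every {interval_minutes(birth_rate, population):.0f} minutes")
print(f"one death every {interval_minutes(death_rate, population):.0f} minutes")
print(f"one net migrant every {interval_minutes(migration_rate, population):.0f} minutes")

# Net population gain combines births, deaths and net migration.
net_rate = birth_rate - death_rate + migration_rate
print(f"net gain of one person every {interval_minutes(net_rate, population):.0f} minutes")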
Nationality
Noun: Costa Rican(s) Adjective: Costa Rican
Languages
Spanish (official), English
Sex ratio
At birth: 1.05 male(s)/female 0–14 years: 1.05 male(s)/female 15–24 years: 1.04 male(s)/female 25–54 years: 1.01 male(s)/female 55–64 years: 0.95 male(s)/female 65 years and over: 0.86 male(s)/female Total population: 1.01 male(s)/female (2016 est.)
Major infectious diseases
degree of risk: intermediate (2020) food or waterborne diseases: bacterial diarrhea vectorborne diseases: dengue fever
Languages
Nearly all Costa Ricans speak Spanish, but many also know English. Indigenous Costa Ricans also speak their own languages, as in the case of the Ngobes.
Religions
According to the World Factbook, the main religions are: Roman Catholic, 76.3%; Evangelical, 13.7%; Jehovah's Witnesses, 1.3%; other Protestant, 0.7%; other, 4.8%; none, 3.2%. The most recent nationwide survey of religion in Costa Rica, conducted in 2007 by the University of Costa Rica, found that 70.5 percent of the population identify themselves as Roman Catholics (with 44.9 percent practicing, 25.6 percent nonpracticing), 13.8 percent are Evangelical Protestants, 11.3 percent report that they do not have a religion, and 4.3 percent declare that they belong to another religion. Apart from the dominant Catholic religion, there are several other religious groups in the country. Methodist, Lutheran, Episcopal, Baptist, and other Protestant groups have significant membership. The Church of Jesus Christ of Latter-day Saints (LDS Church) claims more than 35,000 members and has a temple in San José that served as a regional worship center for Costa Rica, Panama, Nicaragua, and Honduras. Although they represent less than 1 percent of the population, Jehovah's Witnesses have a strong presence on the Caribbean coast. Seventh-day Adventists operate a university that attracts students from throughout the Caribbean Basin. The Unification Church maintains its continental headquarters for Latin America in San José. Non-Christian religious groups, including followers of Judaism, Islam, Taoism, Hare Krishna, Paganism, Wicca, Scientology, Tenrikyo, and the Baháʼí Faith, claim membership throughout the country, with the majority of worshipers residing in the Central Valley (the area of the capital). While there is no general correlation between religion and ethnicity, indigenous peoples are more likely to practice animism than other religions. Article 75 of the Costa Rican Constitution states that the "Catholic, Apostolic, and Roman Religion is the official religion of the Republic". That same article provides for freedom of religion, and the Government generally respects this right in practice. The US government found no reports of societal abuses or discrimination based on religious belief or practice in 2007.
See also
Ethnic groups in Central America
References
External links
UNICEF Information about Costa Rica's Demographics
INEC. National Institute of Statistics and Census
2,472
5,558
https://en.wikipedia.org/wiki/Transport%20in%20Costa%20Rica
Transport in Costa Rica
There are many modes of transport in Costa Rica, but the country's infrastructure has suffered from a lack of maintenance and new investment. There is an extensive road system of more than 30,000 kilometers, although much of it is in disrepair; this also applies to ports, railways and water delivery systems. According to a 2016 U.S. government report, investment from China that attempted to improve the infrastructure found the "projects stalled by bureaucratic and legal concerns". Most parts of the country are accessible by road. The main highland cities in the country's Central Valley are connected by paved all-weather roads with the Atlantic and Pacific coasts and by the Pan American Highway with Nicaragua and Panama, the neighboring countries to the north and to the south. Costa Rica's ports are struggling to keep pace with growing trade. They have insufficient capacity, and their equipment is in poor condition. The railroad did not function for several years, until recent government efforts to reactivate it for city transportation. An August 2016 OECD report provided this summary: "The road network is extensive but of poor quality, railways are in disrepair and only slowly being reactivated after having been shut down in the 1990s. Seaports’ quality and capacity are deficient. Internal transportation overly relies on private road vehicles as the public transport system, especially railways, is inadequate."
Railways
total: narrow gauge: of gauge ( electrified)
Road transportation
The road system in Costa Rica is not as developed as might be expected for such a country. However, there are some two-lane trunk roads with restricted access under development. Total: Paved: Unpaved:
National road network
The Ministry of Public Works and Transport (MOPT), along with the National Road Council (Conavi), are the government organizations in charge of national road nomenclature and maintenance. There are three levels in the national road network:
Primary roads: These are trunk roads devised to connect important cities; most of the national roads are connected to the capital city, San José. There are 19 national primary roads, numbered between 1 and 39.
Secondary roads: These are roads that connect different cities, or primary routes, directly. There are 129 national secondary roads, numbered between 100 and 257.
Tertiary roads: These roads connect main cities to villages or residential areas; there are 175 national tertiary roads, numbered between 301 and 935.
Waterways
, seasonally navigable by small craft
Pipelines
refined products
Ports and harbors
In 2016, the government pledged ₡93 million ($166,000) for a new cruise ship terminal for Puerto Limón.
Atlantic Ocean
Port of Moín, operated by JAPDEVA.
Port of Limón, operated by JAPDEVA.
Moín Container Terminal, operated by APM Terminals.
Pacific Ocean
Golfito
Puerto Quepos
Puntarenas (cruise ships only)
Caldera Port
Merchant marine
total: 2 ships ( or over) / ships by type: passenger/cargo ships 2
Airports
Total: 161
Airports - with paved runways
total: 47 : 2 : 2 : 27 under : 16
Airports - with unpaved runways
total: 114 : 18 under : 96
References
2,475
5,573
https://en.wikipedia.org/wiki/Croatia
Croatia
Croatia (, ; , ), officially the Republic of Croatia (, ), is a country at the crossroads of Central and Southeast Europe. Its coast lies entirely on the Adriatic Sea. It borders Slovenia to the northwest, Hungary to the northeast, Serbia to the east, Bosnia and Herzegovina and Montenegro to the southeast, and shares a maritime border with Italy to the west and southwest. Its capital and largest city, Zagreb, forms one of the country's primary subdivisions, with twenty counties. The country spans , and has a population of nearly 3.9 million. The Croats arrived in the late 6th century. By the 7th century, they had organized the territory into two duchies. Croatia was first internationally recognized as independent on 7 June 879 during the reign of Duke Branimir. Tomislav became the first king by 925, elevating Croatia to the status of a kingdom. During the succession crisis after the Trpimirović dynasty ended, Croatia entered a personal union with Hungary in 1102. In 1527, faced with Ottoman conquest, the Croatian Parliament elected Ferdinand I of Austria to the Croatian throne. In October 1918, the State of Slovenes, Croats, and Serbs, independent from Austria-Hungary, was proclaimed in Zagreb, and in December 1918, it merged into the Kingdom of Yugoslavia. Following the Axis invasion of Yugoslavia in April 1941, most of Croatia was incorporated into a Nazi-installed puppet state, the Independent State of Croatia. A resistance movement led to the creation of the Socialist Republic of Croatia, which after the war became a founding member and constituent of the Socialist Federal Republic of Yugoslavia. On 25 June 1991, Croatia declared independence, and the War of Independence was successfully fought over the next four years. Croatia is a republic and a parliamentary liberal democracy. It is a member of the European Union, the Eurozone, the Schengen Area, NATO, the United Nations, the Council of Europe, the OSCE, the World Trade Organization, and a founding member of the Union for the Mediterranean. An active participant in United Nations peacekeeping, Croatia contributed troops to the International Security Assistance Force and filled a nonpermanent seat on the United Nations Security Council for the 2008–2009 term. Since 2000, the Croatian government has invested in infrastructure, especially transport routes and facilities along the Pan-European corridors. Croatia is classified by the World Bank as a high-income economy and ranks high, 40th on the Human Development Index. Service, industrial sectors, and agriculture dominate the economy. Tourism is a significant source of revenue for the country, which is ranked among the 20 most popular tourist destinations. The state controls a part of the economy, with substantial government expenditure. The European Union is Croatia's most important trading partner. Croatia provides social security, universal health care, and tuition-free primary and secondary education while supporting culture through public institutions and corporate investments in media and publishing. Etymology Croatia's name derives from Medieval Latin , itself a derivation of North-West Slavic , by liquid metathesis from Common Slavic period *Xorvat, from proposed Proto-Slavic *Xъrvátъ which possibly comes from the 3rd-century Scytho-Sarmatian form attested in the Tanais Tablets as (, alternate forms comprise and ). The origin is uncertain, but most probably is from Proto-Ossetian / Alanian *xurvæt- or *xurvāt-, in the meaning of "one who guards" ("guardian, protector"). 
The oldest preserved record of the Croatian ethnonym *xъrvatъ is of the variable stem, attested in the Baška tablet in style zvъnъmirъ kralъ xrъvatъskъ ("Zvonimir, Croatian king"), although it was archaeologically confirmed that the ethnonym Croatorum is mentioned in a church inscription found in Bijaći near Trogir dated to the end of the 8th or early 9th century. The presumably oldest stone inscription with fully preserved ethnonym is the 9th-century Branimir inscription found near Benkovac, where Duke Branimir is styled Dux Cruatorvm, likely dated between 879 and 892, during his rule. The Latin term is attributed to a charter of Duke Trpimir I of Croatia, dated to 852 in a 1568 copy of a lost original, but it is not certain if the original was indeed older than the Branimir inscription. History Prehistory The area known as Croatia today was inhabited throughout the prehistoric period. Neanderthal fossils dating to the middle Palaeolithic period were unearthed in northern Croatia, best presented at the Krapina site. Remnants of Neolithic and Chalcolithic cultures were found in all regions. The largest proportion of sites is in the valleys of northern Croatia. The most significant are Baden, Starčevo, and Vučedol cultures. Iron Age hosted the early Illyrian Hallstatt culture and the Celtic La Tène culture. Antiquity Much later, the region was settled by Illyrians and Liburnians, while the first Greek colonies were established on the islands of Hvar, Korčula, and Vis. In 9 AD, the territory of today's Croatia became part of the Roman Empire. Emperor Diocletian was native to the region. He had a large palace built in Split, to which he retired after abdicating in AD 305. During the 5th century, the last de jure Western Roman Emperor Julius Nepos ruled a small realm from the palace after fleeing Italy in 475. The period ends with Avar and Croat invasions in the first half of the 7th century and the destruction of almost all Roman towns. Roman survivors retreated to more favourable sites on the coast, islands, and mountains. The city of Dubrovnik was founded by such survivors from Epidaurum. Middle Ages The ethnogenesis of Croats is uncertain. The most accepted theory, the Slavic theory, proposes migration of White Croats from White Croatia during the Migration Period. Conversely, the Iranian theory proposes Iranian origin, based on Tanais Tablets containing Ancient Greek inscriptions of given names Χορούαθος, Χοροάθος, and Χορόαθος (Khoroúathos, Khoroáthos, and Khoróathos) and their interpretation as anthroponyms of Croatian people. According to the work De Administrando Imperio written by 10th-century Byzantine Emperor Constantine VII, Croats arrived in the Roman province of Dalmatia in the first half of the 7th century after they defeated the Avars. However, that claim is disputed: competing hypotheses date the event between the late 6th-early 7th (mainstream) or the late 8th-early 9th (fringe) centuries, but recent archaeological data has established that the migration and settlement of the Slavs/Croats was in the late 6th and early 7th century. Eventually, a dukedom was formed, Duchy of Croatia, ruled by Borna, as attested by chronicles of Einhard starting in 818. The record represents the first document of Croatian realms, vassal states of Francia at the time. Its neighbor to the North was Principality of Lower Pannonia, at the time ruled by duke Ljudevit who ruled the territories between the Drava and Sava rivers, centred from his fort at Sisak. 
This population and territory throughout history was tightly related and connected to Croats and Croatia. According to Constantine VII the Christianisation of Croats began in the 7th century, but the claim is disputed, and generally, Christianisation is associated with the 9th century. It is assumed that initially encompassed only the elite and related people. The Frankish overlordship ended during the reign of Mislav, or his successor Trpimir I. The native Croatian royal dynasty was founded by duke Trpimir I in the mid 9th century, who defeated the Byzantine and Bulgarian forces. The first native Croatian ruler recognised by the Pope was duke Branimir, who received papal recognition from Pope John VIII on 7 June 879. Tomislav was the first king of Croatia, noted as such in a letter of Pope John X in 925. Tomislav defeated Hungarian and Bulgarian invasions. The medieval Croatian kingdom reached its peak in the 11th century during the reigns of Petar Krešimir IV (1058–1074) and Dmitar Zvonimir (1075–1089). When Stjepan II died in 1091, ending the Trpimirović dynasty, Dmitar Zvonimir's brother-in-law Ladislaus I of Hungary claimed the Croatian crown. This led to a war and personal union with Hungary in 1102 under Coloman. Personal union with Hungary (1102) and Habsburg Monarchy (1527) For the next four centuries, the Kingdom of Croatia was ruled by the Sabor (parliament) and a Ban (viceroy) appointed by the king. This period saw the rise of influential nobility such as the Frankopan and Šubić families to prominence, and ultimately numerous Bans from the two families. An increasing threat of Ottoman conquest and a struggle against the Republic of Venice for control of coastal areas ensued. The Venetians controlled most of Dalmatia by 1428, except the city-state of Dubrovnik, which became independent. Ottoman conquests led to the 1493 Battle of Krbava field and the 1526 Battle of Mohács, both ending in decisive Ottoman victories. King Louis II died at Mohács, and in 1527, the Croatian Parliament met in Cetin and chose Ferdinand I of the House of Habsburg as the new ruler of Croatia, under the condition that he protects Croatia against the Ottoman Empire while respecting its political rights. Following the decisive Ottoman victories, Croatia was split into civilian and military territories in 1538. The military territories became known as the Croatian Military Frontier and were under direct Habsburg control. Ottoman advances in Croatia continued until the 1593 Battle of Sisak, the first decisive Ottoman defeat, when borders stabilised. During the Great Turkish War (1683–1698), Slavonia was regained, but western Bosnia, which had been part of Croatia before the Ottoman conquest, remained outside Croatian control. The present-day border between the two countries is a remnant of this outcome. Dalmatia, the southern part of the border, was similarly defined by the Fifth and the Seventh Ottoman–Venetian Wars. The Ottoman wars drove demographic changes. During the 16th century, Croats from western and northern Bosnia, Lika, Krbava, the area between the rivers of Una and Kupa, and especially from western Slavonia, migrated towards Austria. Present-day Burgenland Croats are direct descendants of these settlers. To replace the fleeing population, the Habsburgs encouraged Bosnians to provide military service in the Military Frontier. The Croatian Parliament supported King Charles III's Pragmatic Sanction and signed their own Pragmatic Sanction in 1712. 
Subsequently, the emperor pledged to respect all privileges and political rights of the Kingdom of Croatia, and Queen Maria Theresa made significant contributions to Croatian affairs, such as introducing compulsory education. Between 1797 and 1809, the First French Empire increasingly occupied the eastern Adriatic coastline and its hinterland, ending the Venetian and the Ragusan republics, establishing the Illyrian Provinces. In response, the Royal Navy blockaded the Adriatic Sea, leading to the Battle of Vis in 1811. The Illyrian provinces were captured by the Austrians in 1813 and absorbed by the Austrian Empire following the Congress of Vienna in 1815. This led to the formation of the Kingdom of Dalmatia and the restoration of the Croatian Littoral to the Kingdom of Croatia under one crown. The 1830s and 1840s featured romantic nationalism that inspired the Croatian National Revival, a political and cultural campaign advocating the unity of South Slavs within the empire. Its primary focus was establishing a standard language as a counterweight to Hungarian while promoting Croatian literature and culture. During the Hungarian Revolution of 1848, Croatia sided with Austria. Ban Josip Jelačić helped defeat the Hungarians in 1849 and ushered in a Germanisation policy. By the 1860s, the failure of the policy became apparent, leading to the Austro-Hungarian Compromise of 1867. The creation of a personal union between the Austrian Empire and the Kingdom of Hungary followed. The treaty left Croatia's status to Hungary, which was resolved by the Croatian–Hungarian Settlement of 1868 when the kingdoms of Croatia and Slavonia were united. The Kingdom of Dalmatia remained under de facto Austrian control, while Rijeka retained the status of corpus separatum introduced in 1779. After Austria-Hungary occupied Bosnia and Herzegovina following the 1878 Treaty of Berlin, the Military Frontier was abolished. The Croatian and Slavonian sectors of the Frontier returned to Croatia in 1881, under provisions of the Croatian–Hungarian Settlement. Renewed efforts to reform Austria-Hungary, entailing federalisation with Croatia as a federal unit, were stopped by World War I. First Yugoslavia (1918–1941) On 29 October 1918 the Croatian Parliament (Sabor) declared independence and decided to join the newly formed State of Slovenes, Croats, and Serbs, which in turn entered into union with the Kingdom of Serbia on 4 December 1918 to form the Kingdom of Serbs, Croats, and Slovenes. The Croatian Parliament never ratified the union with Serbia and Montenegro. The 1921 constitution defining the country as a unitary state and abolition of Croatian Parliament and historical administrative divisions effectively ended Croatian autonomy. The new constitution was opposed by the most widely supported national political party—the Croatian Peasant Party (HSS) led by Stjepan Radić. The political situation deteriorated further as Radić was assassinated in the National Assembly in 1928, leading to King Alexander I to establish a dictatorship in January 1929. The dictatorship formally ended in 1931 when the king imposed a more unitary constitution. The HSS, now led by Vladko Maček, continued to advocate federalisation, resulting in the Cvetković–Maček Agreement of August 1939 and the autonomous Banovina of Croatia. The Yugoslav government retained control of defence, internal security, foreign affairs, trade, and transport while other matters were left to the Croatian Sabor and a crown-appointed Ban. 
World War II In April 1941, Yugoslavia was occupied by Nazi Germany and Fascist Italy. Following the invasion, a German-Italian installed puppet state named the Independent State of Croatia (NDH) was established. Most of Croatia, Bosnia and Herzegovina, and the region of Syrmia were incorporated into this state. Parts of Dalmatia were annexed by Italy, Hungary annexed the northern Croatian regions of Baranja and Međimurje. The NDH regime was led by Ante Pavelić and ultranationalist Ustaše, a fringe movement in pre-war Croatia. With German and Italian military and political support, the regime introduced racial laws and launched a genocide campaign against Serbs, Jews, and Roma. Many were imprisoned in concentration camps; the largest was the Jasenovac complex. Anti-fascist Croats were targeted by the regime as well. Several concentration camps (most notably the Rab, Gonars and Molat camps) were established in Italian-occupied territories, mostly for Slovenes and Croats. At the same time, the Yugoslav Royalist and Serbian nationalist Chetniks pursued a genocidal campaign against Croats and Muslims, aided by Italy. Nazi German forces committed crimes and reprisals against civilians in retaliation for Partisan actions, such as in the villages of Kamešnica and Lipa in 1944. A resistance movement emerged. On 22 June 1941, the 1st Sisak Partisan Detachment was formed near Sisak, the first military unit formed by a resistance movement in occupied Europe. That sparked the beginning of the Yugoslav Partisan movement, a communist, multi-ethnic anti-fascist resistance group led by Josip Broz Tito. In ethnic terms, Croats were the second-largest contributors to the Partisan movement after Serbs. In per capita terms, Croats contributed proportionately to their population within Yugoslavia. By May 1944 (according to Tito), Croats made up 30% of the Partisan's ethnic composition, despite making up 22% of the population. The movement grew fast, and at the Tehran Conference in December 1943, the Partisans gained recognition from the Allies. With Allied support in logistics, equipment, training and airpower, and with the assistance of Soviet troops taking part in the 1944 Belgrade Offensive, the Partisans gained control of Yugoslavia and the border regions of Italy and Austria by May 1945. Members of the NDH armed forces and other Axis troops, as well as civilians, were in retreat towards Austria. Following their surrender, many were killed in the Yugoslav death march of Nazi collaborators. In the following years, ethnic Germans faced persecution in Yugoslavia, and many were interned. The political aspirations of the Partisan movement were reflected in the State Anti-fascist Council for the National Liberation of Croatia, which developed in 1943 as the bearer of Croatian statehood and later transformed into the Parliament in 1945, and AVNOJ—its counterpart at the Yugoslav level. Based on the studies on wartime and post-war casualties by demographer Vladimir Žerjavić and statistician Bogoljub Kočović, a total of 295,000 people from the territory (not including territories ceded from Italy after the war) died, which amounted to 7.3% of the population, among whom were 125–137,000 Serbs, 118–124,000 Croats, 16–17,000 Jews, and 15,000 Roma. In addition, from areas joined to Croatia after the war, a total of 32,000 people died, among whom 16,000 were Italians and 15,000 were Croats. 
Approximately 200,000 Croats from the entirety of Yugoslavia (including Croatia) and abroad were killed in total throughout the war and its immediate aftermath, approximately 5.4% of the population. Second Yugoslavia (1945–1991) After World War II, Croatia became a single-party socialist federal unit of the SFR Yugoslavia, ruled by the Communists, but having a degree of autonomy within the federation. In 1967, Croatian authors and linguists published a Declaration on the Status and Name of the Croatian Standard Language demanding equal treatment for their language. The declaration contributed to a national movement seeking greater civil rights and redistribution of the Yugoslav economy, culminating in the Croatian Spring of 1971, which was suppressed by Yugoslav leadership. Still, the 1974 Yugoslav Constitution gave increased autonomy to federal units, basically fulfilling a goal of the Croatian Spring and providing a legal basis for independence of the federative constituents. Following Tito's death in 1980, the political situation in Yugoslavia deteriorated. National tension was fanned by the 1986 SANU Memorandum and the 1989 coups in Vojvodina, Kosovo, and Montenegro. In January 1990, the Communist Party fragmented along national lines, with the Croatian faction demanding a looser federation. In the same year, the first multi-party elections were held in Croatia, while Franjo Tuđman's win exacerbated nationalist tensions. Some of the Serbs in Croatia left Sabor and declared the autonomy of the unrecognised Republic of Serbian Krajina, intent on achieving independence from Croatia. Croatian War of Independence As tensions rose, Croatia declared independence on 25 June 1991. However, the full implementation of the declaration only came into effect after a three-month moratorium on the decision on 8 October 1991. In the meantime, tensions escalated into overt war when the Yugoslav People's Army (JNA) and various Serb paramilitary groups attacked Croatia. By the end of 1991, a high-intensity conflict fought along a wide front reduced Croatia's control to about two-thirds of its territory. Serb paramilitary groups then began a campaign of killing, terror, and expulsion of the Croats in the rebel territories, killing thousands of Croat civilians and expelling or displacing as many as 400,000 Croats and other non-Serbs from their homes. Serbs living in Croatian towns, especially those near the front lines, were subjected to various forms of discrimination. Croatian Serbs in Eastern and Western Slavonia and parts of the Krajina were forced to flee or were expelled by Croatian forces, though on a restricted scale and in lesser numbers. The Croatian Government publicly deplored these practices and sought to stop them, indicating that they were not a part of the Government's policy. On 15 January 1992, Croatia gained diplomatic recognition by the European Economic Community, followed by the United Nations. The war effectively ended in August 1995 with a decisive victory by Croatia; the event is commemorated each year on 5 August as Victory and Homeland Thanksgiving Day and the Day of Croatian Defenders. Following the Croatian victory, about 200,000 Serbs from the self-proclaimed Republic of Serbian Krajina fled the region and hundreds of mainly elderly Serb civilians were killed in the aftermath of the military operation. Their lands were subsequently settled by Croat refugees from Bosnia and Herzegovina. 
The remaining occupied areas were restored to Croatia following the Erdut Agreement of November 1995, concluding with the UNTAES mission in January 1998. Most sources number the war deaths at around 20,000.

Independent Croatia (1991–present)
After the end of the war, Croatia faced the challenges of post-war reconstruction, the return of refugees, establishing democracy, protecting human rights, and general social and economic development. The main law is the Constitution, as adopted on 22 December 1990. The post-2000 period is characterised by democratisation, economic growth, structural and social reforms, as well as problems such as unemployment, corruption, and the inefficiency of the public administration. In November 2000 and March 2001, the Parliament amended the Constitution, changing its bicameral structure back into its historic unicameral form and reducing presidential powers. Croatia joined the Partnership for Peace on 25 May 2000 and became a member of the World Trade Organization on 30 November 2000. On 29 October 2001, Croatia signed a Stabilisation and Association Agreement with the European Union, submitted a formal application for EU membership in 2003, was given the status of candidate country in 2004, and began accession negotiations in 2005. Croatia completed EU accession negotiations and signed an accession treaty on 9 December 2011. A recurring obstacle to the negotiations was Croatia's ICTY co-operation record and Slovenian blocking of the negotiations because of Croatia–Slovenia border disputes. Although the Croatian economy had enjoyed a significant boom in the early 2000s, the financial crisis in 2008 forced the government to cut spending, thus provoking a public outcry. Croatia served on the United Nations Security Council for the 2008–2009 term, assuming the presidency in December 2008. On 1 April 2009, Croatia joined NATO. A wave of anti-government protests in early 2011 reflected a general dissatisfaction with politics and economics. A majority of Croatian voters opted in favour of EU membership in a 2012 referendum, and Croatia joined the European Union on 1 July 2013. Croatia was affected by the 2015 European migrant crisis when Hungary's closure of borders with Serbia pushed over 700,000 refugees and migrants to pass through Croatia on their way to other countries. On 19 October 2016, Andrej Plenković began serving as Croatian Prime Minister. The most recent presidential elections, on 5 January 2020, elected Zoran Milanović as president.

Geography
Croatia is situated in Central and Southeast Europe, on the coast of the Adriatic Sea. Hungary is to the northeast, Serbia to the east, Bosnia and Herzegovina and Montenegro to the southeast and Slovenia to the northwest. It lies mostly between latitudes 42° and 47° N and longitudes 13° and 20° E. Part of the territory in the extreme south surrounding Dubrovnik is a practical exclave connected to the rest of the mainland by territorial waters, but separated on land by a short coastline strip belonging to Bosnia and Herzegovina around Neum. The Pelješac Bridge connects the exclave with mainland Croatia. The territory covers , consisting of of land and of water. It is the world's 127th largest country.
Elevation ranges from the mountains of the Dinaric Alps with the highest point of the Dinara peak at near the border with Bosnia and Herzegovina in the south to the shore of the Adriatic Sea which makes up its entire southwest border. Insular Croatia consists of over a thousand islands and islets varying in size, 48 of which permanently inhabited. The largest islands are Cres and Krk, each of them having an area of around . The hilly northern parts of Hrvatsko Zagorje and the flat plains of Slavonia in the east which is part of the Pannonian Basin are traversed by major rivers such as Danube, Drava, Kupa, and the Sava. The Danube, Europe's second longest river, runs through the city of Vukovar in the extreme east and forms part of the border with Vojvodina. The central and southern regions near the Adriatic coastline and islands consist of low mountains and forested highlands. Natural resources found in quantities significant enough for production include oil, coal, bauxite, low-grade iron ore, calcium, gypsum, natural asphalt, silica, mica, clays, salt, and hydropower. Karst topography makes up about half of Croatia and is especially prominent in the Dinaric Alps. Croatia hosts deep caves, 49 of which are deeper than , 14 deeper than and three deeper than . Croatia's most famous lakes are the Plitvice lakes, a system of 16 lakes with waterfalls connecting them over dolomite and limestone cascades. The lakes are renowned for their distinctive colours, ranging from turquoise to mint green, grey or blue. Climate Most of Croatia has a moderately warm and rainy continental climate as defined by the Köppen climate classification. Mean monthly temperature ranges between in January and in July. The coldest parts of the country are Lika and Gorski Kotar featuring a snowy, forested climate at elevations above . The warmest areas are at the Adriatic coast and especially in its immediate hinterland characterised by Mediterranean climate, as the sea moderates temperature highs. Consequently, temperature peaks are more pronounced in continental areas. The lowest temperature of was recorded on 3 February 1919 in Čakovec, and the highest temperature of was recorded on 4 August 1981 in Ploče. Mean annual precipitation ranges between and depending on geographic region and climate type. The least precipitation is recorded in the outer islands (Biševo, Lastovo, Svetac, Vis) and the eastern parts of Slavonia. However, in the latter case, rain occurs mostly during the growing season. The maximum precipitation levels are observed on the Dinara mountain range and in Gorski Kotar. Prevailing winds in the interior are light to moderate northeast or southwest, and in the coastal area, prevailing winds are determined by local features. Higher wind velocities are more often recorded in cooler months along the coast, generally as the cool northeasterly bura or less frequently as the warm southerly jugo. The sunniest parts are the outer islands, Hvar and Korčula, where more than 2700 hours of sunshine are recorded per year, followed by the middle and southern Adriatic Sea area in general, and northern Adriatic coast, all with more than 2000 hours of sunshine per year. Biodiversity Croatia can be subdivided into ecoregions based on climate and geomorphology. The country is one of the richest in Europe in terms of biodiversity. 
Croatia has four types of biogeographical regions—the Mediterranean along the coast and in its immediate hinterland, Alpine in most of Lika and Gorski Kotar, Pannonian along the Drava and Danube, and Continental in the remaining areas. The most significant are karst habitats, which include submerged karst, such as the Zrmanja and Krka canyons and tufa barriers, as well as underground habitats. The karst geology harbours approximately 7,000 caves and pits, some of which are the habitat of the only known aquatic cave vertebrate—the olm. Forests cover a significant part of the country, representing 44% of Croatian land area. Other habitat types include wetlands, grasslands, bogs, fens, scrub habitats, and coastal and marine habitats. In terms of phytogeography, Croatia is a part of the Boreal Kingdom, within the Illyrian and Central European provinces of the Circumboreal Region and the Adriatic province of the Mediterranean Region. The World Wide Fund for Nature divides Croatia between three ecoregions—Pannonian mixed forests, Dinaric Mountains mixed forests and Illyrian deciduous forests. Croatia hosts 37,000 known plant and animal species, but their actual number is estimated to be between 50,000 and 100,000. More than a thousand species are endemic, especially in the Velebit and Biokovo mountains, the Adriatic islands and karst rivers. Legislation protects 1,131 species. The most serious threat is habitat loss and degradation. A further problem is presented by invasive alien species, especially Caulerpa taxifolia algae; the invasive algae are regularly monitored and removed to protect the benthic habitat. Croatia had a 2018 Forest Landscape Integrity Index mean score of 4.92/10, ranking it 113th of 172 countries. Indigenous cultivated plant strains and domesticated animal breeds are numerous. They include five breeds of horses, five of cattle, eight of sheep, two of pigs, and one of poultry. Indigenous breeds include nine that are endangered or critically endangered. Croatia has 444 protected areas, encompassing 9% of the country. Those include eight national parks, two strict reserves, and ten nature parks. The most famous protected area and the oldest national park in Croatia is Plitvice Lakes National Park, a UNESCO World Heritage Site. Velebit Nature Park is a part of the UNESCO Man and the Biosphere Programme. The strict and special reserves, as well as the national and nature parks, are managed and protected by the central government, while other protected areas are managed by counties. In 2005, the National Ecological Network was set up as the first step in the preparation for EU accession and joining the Natura 2000 network.
As the executive branch, it is responsible for proposing legislation and a budget, enforcing the laws, and guiding foreign and internal policies. The Government is seated at Banski dvori in Zagreb.

Law and judicial system
A unicameral parliament () holds legislative power. The number of Sabor members can vary from 100 to 160. They are elected by popular vote to serve four-year terms. Legislative sessions take place from 15 January to 15 July, and from 15 September to 15 December annually. The two largest political parties in Croatia are the Croatian Democratic Union and the Social Democratic Party of Croatia. Croatia has a civil law legal system in which law arises primarily from written statutes, with judges serving as implementers and not creators of law. Its development was largely influenced by German and Austrian legal systems. Croatian law is divided into two principal areas—private and public law. Before EU accession negotiations were completed, Croatian legislation had been fully harmonised with the Community acquis. The main national courts are the Constitutional Court, which oversees violations of the Constitution, and the Supreme Court, which is the highest court of appeal. Administrative, Commercial, County, Misdemeanor, and Municipal courts handle cases in their respective domains. Cases falling within judicial jurisdiction are in the first instance decided by a single professional judge, while appeals are deliberated in mixed tribunals of professional judges. Lay magistrates also participate in trials. The State's Attorney Office is the judicial body constituted of public prosecutors empowered to instigate prosecution of perpetrators of offences. Law enforcement agencies are organised under the authority of the Ministry of the Interior and consist primarily of the national police force. Croatia's security service is the Security and Intelligence Agency (SOA).

Foreign relations
Croatia has established diplomatic relations with 194 countries, supporting 57 embassies, 30 consulates and eight permanent diplomatic missions. 56 foreign embassies and 67 consulates operate in the country in addition to offices of international organisations such as the European Bank for Reconstruction and Development (EBRD), International Organization for Migration (IOM), Organization for Security and Co-operation in Europe (OSCE), World Bank, World Health Organization (WHO), International Criminal Tribunal for the former Yugoslavia (ICTY), United Nations Development Programme (UNDP), United Nations High Commissioner for Refugees (UNHCR), and UNICEF. As of 2019, the Croatian Ministry of Foreign Affairs and European Integration employed 1,381 personnel and expended 765.295 million kunas (€101.17 million). Stated aims of Croatian foreign policy include enhancing relations with neighbouring countries, developing international co-operation and promotion of the Croatian economy and Croatia itself. Croatia is a member of the European Union. As of 2021, Croatia had unresolved border issues with Bosnia and Herzegovina, Montenegro, Serbia, and Slovenia. Croatia is a member of NATO. On 1 January 2023, Croatia simultaneously joined both the Schengen Area and the Eurozone, having previously joined the ERM II on 10 July 2020.

Military
The Croatian Armed Forces (CAF) consist of the Air Force, Army, and Navy branches in addition to the Education and Training Command and Support Command. The CAF is headed by the General Staff, which reports to the Defence Minister, who in turn reports to the President.
According to the constitution, the President is the commander-in-chief of the armed forces. In case of immediate threat during wartime, he issues orders directly to the General Staff. Following the 1991–95 war, defence spending and CAF size began a constant decline. , military spending was an estimated 1.68% of the country's GDP, 67th globally. In 2005 the budget fell below the NATO-required 2% of GDP, down from the record high of 11.1% in 1994. Traditionally relying on conscripts, the CAF went through a period of reforms focused on downsizing, restructuring and professionalisation in the years before accession to NATO in April 2009. According to a presidential decree issued in 2006, the CAF employed around 18,100 active duty military personnel, 3,000 civilians and 2,000 voluntary conscripts between 18 and 30 years old in peacetime. Compulsory conscription was abolished in January 2008. Until 2008, military service was obligatory for men at age 18, and conscripts served six-month tours of duty, reduced in 2001 from the earlier scheme of nine months. Conscientious objectors could instead opt for eight months of civilian service. , the Croatian military had 72 members stationed in foreign countries as part of United Nations-led international peacekeeping forces. , 323 troops served the NATO-led ISAF force in Afghanistan. Another 156 served with KFOR in Kosovo. Croatia has a military-industrial sector that exported around 493 million kunas (€65.176 million) worth of military equipment in 2020. Croatian-made weapons and vehicles used by the CAF include the standard sidearm HS2000 manufactured by HS Produkt and the M-84D battle tank designed by the Đuro Đaković factory. Uniforms and helmets worn by CAF soldiers are locally produced and marketed to other countries.

Administrative divisions
Croatia was first divided into counties in the Middle Ages. The divisions changed over time to reflect losses of territory to Ottoman conquest and the subsequent liberation of the same territory, as well as changes in the political status of Dalmatia, Dubrovnik, and Istria. The traditional division of the country into counties was abolished in the 1920s, when the Kingdom of Serbs, Croats and Slovenes and the subsequent Kingdom of Yugoslavia introduced oblasts and banovinas respectively. Communist-ruled Croatia, as a constituent part of post-World War II Yugoslavia, abolished earlier divisions and introduced municipalities, subdividing Croatia into approximately one hundred municipalities. Counties were reintroduced in 1992 legislation, significantly altered in terms of territory relative to the pre-1920s subdivisions. In 1918, the Transleithanian part had been divided into eight counties with their seats in Bjelovar, Gospić, Ogulin, Osijek, Požega, Varaždin, Vukovar, and Zagreb. As of 1992, Croatia is divided into 20 counties and the capital city of Zagreb, the latter having the dual authority and legal status of a county and a city. County borders changed in some instances, last revised in 2006. The counties subdivide into 127 cities and 429 municipalities. Nomenclature of Territorial Units for Statistics (NUTS) division is performed in several tiers. NUTS 1 level considers the entire country in a single unit; three NUTS 2 regions come below that. Those are Northwest Croatia, Central and Eastern (Pannonian) Croatia, and Adriatic Croatia. The latter encompasses the counties along the Adriatic coast.
Northwest Croatia includes Koprivnica-Križevci, Krapina-Zagorje, Međimurje, Varaždin, the city of Zagreb, and Zagreb counties, while Central and Eastern (Pannonian) Croatia includes the remaining areas—Bjelovar-Bilogora, Brod-Posavina, Karlovac, Osijek-Baranja, Požega-Slavonia, Sisak-Moslavina, Virovitica-Podravina, and Vukovar-Syrmia counties. Individual counties and the city of Zagreb also represent NUTS 3 level subdivision units in Croatia. The NUTS local administrative unit divisions are two-tiered. LAU 1 divisions match the counties and the city of Zagreb, in effect making those the same as NUTS 3 units, while LAU 2 subdivisions correspond to cities and municipalities.

Economy
Croatia's economy qualifies as high-income. International Monetary Fund data projected that Croatian nominal GDP reached $67.84 billion, or $17,398 per capita, for 2021, while purchasing power parity GDP was $132.88 billion, or $32,942 per capita. According to Eurostat, Croatian GDP per capita in PPS stood at 65% of the EU average in 2019. Real GDP growth in 2021 was per cent. The average net salary of a Croatian worker in October 2019 was 6,496 HRK per month (roughly 873 EUR), and the average gross salary was 8,813 HRK per month (roughly 1,185 EUR). , the unemployment rate dropped to 7.2% from 9.6% in December 2018. The number of unemployed persons was 106,703. The unemployment rate between 1996 and 2018 averaged 17.38%, reaching an all-time high of 23.60% in January 2002 and a record low of 8.40% in September 2018. In 2017, economic output was dominated by the service sector — accounting for 70.1% of GDP — followed by the industrial sector with 26.2% and agriculture accounting for 3.7%. According to 2017 data, 1.9% of the workforce were employed in agriculture, 27.3% by industry and 70.8% in services. Shipbuilding, food processing, pharmaceuticals, information technology, biochemicals, and the timber industry dominate the industrial sector. In 2018, Croatian exports were valued at 108 billion kunas (€14.61 billion), with 176 billion kunas (€23.82 billion) worth of imports. Croatia's largest trading partner was the rest of the European Union, led by Germany, Italy, and Slovenia. As a result of the war, economic infrastructure sustained massive damage, particularly the tourism industry. From 1989 to 1993, GDP fell 40.5%. The Croatian state still controls significant economic sectors, with government expenditures accounting for 40% of GDP. A particular concern is the backlogged judiciary system, combined with inefficient public administration and corruption, particularly in matters of land ownership. In the 2018 Corruption Perceptions Index, published by Transparency International, the country ranked 60th. At the end of June 2020, the national debt stood at 85.3% of GDP.

Tourism
Tourism dominates the Croatian service sector and accounts for up to 20% of GDP. Tourism income for 2019 was estimated to be €10.5 billion. Its positive effects are felt throughout the economy, increasing retail business and seasonal employment. The industry is counted as an export business because foreign visitor spending significantly reduces the country's trade imbalance. The tourist industry has grown rapidly, recording a fourfold rise in tourist numbers since independence, and attracts more than 11 million visitors each year. Germany, Slovenia, Austria, Italy, Poland and Croatia itself provide the most visitors. Tourist stays averaged 4.7 days in 2019. Much of the tourist industry is concentrated along the coast. Opatija was the first holiday resort.
It first became popular in the middle of the 19th century. By the 1890s, it had become one of the largest European health resorts. Resorts sprang up along the coast and islands, offering services catering to mass tourism and various niche markets. The most significant are nautical tourism, supported by marinas with more than 16 thousand berths, cultural tourism relying on the appeal of medieval coastal cities and cultural events taking place during the summer. Inland areas offer agrotourism, mountain resorts, and spas. Zagreb is a significant destination, rivalling major coastal cities and resorts. Croatia has unpolluted marine areas with nature reserves and 116 Blue Flag beaches. Croatia ranks as the 23rd-most popular tourist destination in the world. About 15% of these visitors, or over one million per year, participate in naturism, for which Croatia is famous. It was the first European country to develop commercial naturist resorts. Infrastructure Transport The motorway network was largely built in the late 1990s and the 2000s (decade). As of December 2020, Croatia had completed of motorways, connecting Zagreb to other regions and following various European routes and four Pan-European corridors. The busiest motorways are the A1, connecting Zagreb to Split and the A3, passing east to west through northwest Croatia and Slavonia. A widespread network of state roads in Croatia acts as motorway feeder roads while connecting major settlements. The high quality and safety levels of the Croatian motorway network were tested and confirmed by EuroTAP and EuroTest programmes. Croatia has an extensive rail network spanning , including of electrified railways and of double track railways. The most significant railways in Croatia are within the Pan-European transport corridors Vb and X connecting Rijeka to Budapest and Ljubljana to Belgrade, both via Zagreb. Croatian Railways operates all rail services. The construction of 2.4-kilometre-long Pelješac Bridge, the biggest infrastructure project in Croatia connects the two halves of Dubrovnik-Neretva County and shortens the route from the West to the Pelješac peninsula and the islands of Korčula and Lastovo by more than 32 km. The construction of the Pelješac Bridge started in July 2018 after Croatian road operator Hrvatske ceste (HC) signed a 2.08 billion kuna deal for the works with a Chinese consortium led by China Road and Bridge Corporation (CRBC). The project is co-financed by the European Union with 357 million euro. The construction was completed in July 2022. There are international airports in Dubrovnik, Osijek, Pula, Rijeka, Split, Zadar, and Zagreb. The largest and busiest is Franjo Tuđman Airport in Zagreb. , Croatia complies with International Civil Aviation Organization aviation safety standards and the Federal Aviation Administration upgraded it to Category 1 rating. Ports The busiest cargo seaport is the Port of Rijeka. The busiest passenger ports are Split and Zadar. Many minor ports serve ferries connecting numerous islands and coastal cities with ferry lines to several cities in Italy. The largest river port is Vukovar, located on the Danube, representing the nation's outlet to the Pan-European transport corridor VII. Energy of crude oil pipelines serve Croatia, connecting the Rijeka oil terminal with refineries in Rijeka and Sisak, and several transhipment terminals. The system has a capacity of 20 million tonnes per year. 
The natural gas transportation system comprises of trunk and regional pipelines, and more than 300 associated structures, connecting production rigs, the Okoli natural gas storage facility, 27 end-users and 37 distribution systems. Croatian energy production covers 85% of nationwide natural gas demand and 19% of oil demand. In 2008, natural gas accounted for 47.7% of Croatia's primary energy production, followed by hydropower (25.4%), crude oil (18.0%), fuelwood (8.4%), and other renewable energy sources (0.5%). In 2009, net total electrical power production reached 12,725 GWh, and Croatia imported 28.5% of its electric power needs. The Krško Nuclear Power Plant in Slovenia, 50% owned by Hrvatska elektroprivreda, supplies a large part of Croatian imports and provides 15% of Croatia's electricity.
Demographics
With an estimated population of 4.13 million in 2019, Croatia ranks 127th by population in the world. Its 2018 population density was 72.9 inhabitants per square kilometre, making Croatia one of the more sparsely populated European countries. The overall life expectancy in Croatia at birth was 76.3 years in 2018. The total fertility rate of 1.41 children per mother is one of the lowest in the world, far below the replacement rate of 2.1 and well below the high of 6.18 children per mother recorded in 1885. Croatia's death rate has continuously exceeded its birth rate since 1991. Croatia consequently has one of the world's oldest populations, with an average age of 43.3 years. The population rose steadily from 2.1 million in 1857 until 1991, when it peaked at 4.7 million, with the exceptions of the censuses taken in 1921 and 1948, i.e. following the world wars. The natural growth rate is negative, with the demographic transition completed in the 1970s. In recent years, the Croatian government has been pressured to increase permit quotas for foreign workers, which reached an all-time high of 68,100 in 2019. In accordance with its immigration policy, Croatia is trying to entice emigrants to return. From 2008 to 2018, Croatia's population dropped by 10%. The population decrease was also a result of the war for independence. The war displaced large numbers of the population and emigration increased. In 1991, in predominantly occupied areas, more than 400,000 Croats were either removed from their homes by Serb forces or fled the violence. During the war's final days, about 150,000–200,000 Serbs fled before the arrival of Croatian forces during Operation Storm. After the war, the number of displaced persons fell to about 250,000. The Croatian government cared for displaced persons via the social security system and the Office of Displaced Persons and Refugees. Most of the territories abandoned during the war were settled by Croat refugees from Bosnia and Herzegovina, mostly from north-western Bosnia, while some displaced people returned to their homes. According to the 2013 United Nations report, 17.6% of Croatia's population were immigrants. According to the 2021 census, the majority of inhabitants are Croats (91.6%), followed by Serbs (3.2%), Bosniaks (0.62%), Roma (0.46%), Albanians (0.36%), Italians (0.36%), Hungarians (0.27%), Czechs (0.20%), Slovenes (0.20%), Slovaks (0.10%), Macedonians (0.09%), Germans (0.09%), Montenegrins (0.08%), and others (1.56%). Approximately 4 million Croats live abroad.
Religion
Croatia has no official religion. Freedom of religion is a constitutional right that protects all religious communities as equal before the law and separate from the state.
According to the 2011 census, 91.36% of Croatians identify as Christian; of these, Catholics make up the largest group, accounting for 86.28% of the population, after which follows Eastern Orthodoxy (4.44%), Protestantism (0.34%), and other Christians (0.30%). The largest religion after Christianity is Islam (1.47%). 4.57% of the population describe themselves as non-religious. In the Eurostat Eurobarometer Poll of 2010, 69% of the population responded that "they believe there is a God". In a 2009 Gallup poll, 70% answered yes to the question "Is religion an important part of your daily life?" However, only 24% of the population attends religious services regularly. Languages Croatian is the official language of Croatia and became the 24th official language of the European Union upon its accession in 2013. Minority languages are in official use in local government units where more than a third of the population consists of national minorities or where local enabling legislation applies. Those languages are Czech, Hungarian, Italian, Serbian, and Slovak. The following minority languages are also recognised: Albanian, Bosnian, Bulgarian, German, Hebrew, Macedonian, Montenegrin, Polish, Romanian, Istro-Romanian, Romani, Russian, Rusyn, Slovene, Turkish, and Ukrainian. According to the 2011 Census, 95.6% of citizens declared Croatian as their native language, 1.2% declared Serbian as their native language, while no other language reaches more than 0.5%. Croatian is a member of the South Slavic languages of Slavic languages group and is written using the Latin alphabet. There are three major dialects spoken on the territory of Croatia, with standard Croatian based on the Shtokavian dialect. The Chakavian and Kajkavian dialects are distinguished from Shtokavian by their lexicon, phonology and syntax. Croatian replaced Latin as the official language of the Croatian government in the 19th century. Following the Vienna Literary Agreement in 1850, the language and its Latin script underwent reforms to create an unified "Croatian or Serbian" or "Serbo-Croatian" standard, which under various names became the official language of Yugoslavia. In SFR Yugoslavia, from 1972 to 1989, the language was constitutionally designated as the "Croatian literary language" and the "Croatian or Serbian language". It was the result of the resistance to "Serbo-Croatian" in the form of a Declaration on the Status and Name of the Croatian Literary Language and Croatian Spring. Croats protect their language from foreign influences and are known for Croatian linguistic purism, as the language was under constant change and threats imposed by previous rulers. Croats reject loanwords in favor of Croatian counterparts. A 2011 survey revealed that 78% of Croats claim knowledge of at least one foreign language. According to a 2005 EC survey, 49% of Croats speak English as the second language, 34% speak German, 14% speak Italian, and 10% speak French. Russian is spoken by 4%, and 2% of Croats speak Spanish. However several large municipalities support minority languages. A majority of Slovenes (59%) have some knowledge of Croatian. The country is a part of various language-based international associations, most notably the European Union Language Association. Education Literacy in Croatia stands at 99.2 per cent. Primary education in Croatia starts at the age of six or seven and consists of eight grades. In 2007 a law was passed to increase free, noncompulsory education until 18 years of age. 
Compulsory education consists of eight grades of elementary school. Secondary education is provided by gymnasiums and vocational schools. As of 2019, there are 2,103 elementary schools and 738 schools providing various forms of secondary education. Primary and secondary education are also available in the languages of recognised minorities in Croatia, where classes are held in Czech, German, Hungarian, Italian, and Serbian. There are 137 elementary and secondary level music and art schools, as well as 120 schools for disabled children and youth and 74 schools for adults. Nationwide leaving exams were introduced for secondary education students in the school year 2009–2010. The exam comprises three compulsory subjects (Croatian language, mathematics, and a foreign language) and optional subjects, and is a prerequisite for university education. Croatia has eight public universities and two private universities. The University of Zadar, the first university in Croatia, was founded in 1396 and remained active until 1807, when other institutions of higher education took over until the foundation of the renewed University of Zadar in 2002. The University of Zagreb, founded in 1669, is the oldest continuously operating university in Southeast Europe. There are also 15 polytechnics, of which two are private, and 30 higher education institutions, of which 27 are private. In total, there are 55 institutions of higher education in Croatia, attended by more than 157 thousand students. There are 205 companies, government or education system institutions and non-profit organisations in Croatia pursuing scientific research and development of technology. Combined, they spent more than 3 billion kuna (€400 million) and employed 10,191 full-time research staff in 2008. Among the scientific institutes operating in Croatia, the largest is the Ruđer Bošković Institute in Zagreb. The Croatian Academy of Sciences and Arts in Zagreb is a learned society promoting language, culture, arts and science from its inception in 1866. Croatia was ranked 42nd in the Global Innovation Index in 2021. The European Investment Bank provided digital infrastructure and equipment to around 150 primary and secondary schools in Croatia. Twenty of these schools received specialised assistance in the form of equipment, software, and services to help them integrate teaching and administrative operations.
Healthcare
Croatia has a universal health care system, whose roots can be traced back to the Hungarian-Croatian Parliament Act of 1891, which provided a form of mandatory insurance for all factory workers and craftsmen. The population is covered by a basic health insurance plan provided by statute and by optional insurance. In 2017, annual healthcare-related expenditures reached 22.0 billion kuna (€3.0 billion). Healthcare expenditures comprise only 0.6% of private health insurance and public spending. In 2017, Croatia spent around 6.6% of its GDP on healthcare. In 2020, Croatia ranked 41st in the world in life expectancy, with 76.0 years for men and 82.0 years for women, and it had a low infant mortality rate of 3.4 per 1,000 live births. There are hundreds of healthcare institutions in Croatia, including 75 hospitals and 13 clinics with 23,049 beds. The hospitals and clinics care for more than 700 thousand patients per year and employ 6,642 medical doctors, including 4,773 specialists. There is a total of 69,841 health workers. There are 119 emergency units in health centres, responding to more than a million calls.
The principal cause of death in 2016 was cardiovascular disease at 39.7% for men and 50.1% for women, followed by tumours, at 32.5% for men and 23.4% for women. In 2016 it was estimated that 37.0% of Croatians are smokers. According to 2016 data, 24.40% of the Croatian adult population is obese. Culture Because of its geographical position, Croatia represents a blend of four different cultural spheres. It has been a crossroads of influences from western culture and the east since the schism between the Western Roman Empire and the Byzantine Empire, and also from Central Europe and Mediterranean culture. The Illyrian movement was the most significant period of national cultural history, as the 19th century proved crucial to the emancipation of Croatians and saw unprecedented developments in all fields of art and culture, giving rise to many historical figures. The Ministry of Culture is tasked with preserving the nation's cultural and natural heritage and overseeing its development. Further activities supporting the development of culture are undertaken at the local government level. The UNESCO's World Heritage List includes ten sites in Croatia. The country is also rich with intangible culture and holds 15 of UNESCO's World's intangible culture masterpieces, ranking fourth in the world. A global cultural contribution from Croatia is the necktie, derived from the cravat originally worn by the 17th-century Croatian mercenaries in France. In 2019, Croatia had 95 professional theatres, 30 professional children's theatres, and 51 amateur theatres visited by more than 2.27 million viewers per year. Professional theatres employ 1,195 artists. There are 42 professional orchestras, ensembles, and choirs, attracting an annual attendance of 297 thousand. There are 75 cinemas with 166 screens and attendance of 5.026 million. Croatia has 222 museums, visited by more than 2.71 million people in 2016. Furthermore, there are 1,768 libraries, containing 26.8 million volumes, and 19 state archives. The book publishing market is dominated by several major publishers and the industry's centrepiece event—Interliber exhibition held annually at Zagreb Fair. Arts, literature, and music Architecture in Croatia reflects influences of bordering nations. Austrian and Hungarian influence is visible in public spaces and buildings in the north and the central regions, architecture found along coasts of Dalmatia and Istria exhibits Venetian influence. Squares named after culture heroes, parks, and pedestrian-only zones, are features of Croatian towns and cities, especially where large scale Baroque urban planning took place, for instance in Osijek (Tvrđa), Varaždin, and Karlovac. The subsequent influence of the Art Nouveau was reflected in contemporary architecture. The architecture is the Mediterranean with a Venetian and Renaissance influence in major coastal urban areas exemplified in works of Giorgio da Sebenico and Nicolas of Florence such as the Cathedral of St. James in Šibenik. The oldest preserved examples of Croatian architecture are the 9th-century churches, with the largest and the most representative among them being Church of St. Donatus in Zadar. Besides the architecture encompassing the oldest artworks, there is a history of artists in Croatia reaching the Middle Ages. In that period the stone portal of the Trogir Cathedral was made by Radovan, representing the most important monument of Romanesque sculpture from Medieval Croatia. 
The Renaissance had the greatest impact on the Adriatic Sea coast since the remainder was embroiled in the Hundred Years' Croatian–Ottoman War. With the waning of the Ottoman Empire, art flourished during the Baroque and Rococo. The 19th and 20th centuries brought affirmation of numerous Croatian artisans, helped by several patrons of the arts such as bishop Josip Juraj Strossmayer. Croatian artists of the period achieving renown were Vlaho Bukovac, Ivan Meštrović, and Ivan Generalić. Croatian music varies from classical operas to modern-day rock. Vatroslav Lisinski created the country's first opera, Love and Malice, in 1846. Ivan Zajc composed more than a thousand pieces of music, including masses and oratorios. Pianist Ivo Pogorelić has performed across the world. The Baška tablet, a stone inscribed with the glagolitic alphabet found on the Krk island and dated to circa 1100, is considered to be the oldest surviving prose in Croatian. The beginning of more vigorous development of Croatian literature is marked by the Renaissance and Marko Marulić. Besides Marulić, Renaissance playwright Marin Držić, Baroque poet Ivan Gundulić, Croatian national revival poet Ivan Mažuranić, novelist, playwright, and poet August Šenoa, children's writer Ivana Brlić-Mažuranić, writer and journalist Marija Jurić Zagorka, poet and writer Antun Gustav Matoš, poet Antun Branko Šimić, expressionist and realist writer Miroslav Krleža, poet Tin Ujević and novelist, and short story writer Ivo Andrić are often cited as the greatest figures in Croatian literature. Media In Croatia, the Constitution guarantees the freedom of the press and the freedom of speech. Croatia ranked 64th in the 2019 Press Freedom Index report compiled by Reporters Without Borders which noted that journalists who investigate corruption, organised crime or war crimes face challenges and that the Government was trying to influence the public broadcaster HRT's editorial policies. In its 2019 Freedom in the World report, the Freedom House classified freedoms of press and speech in Croatia as generally free from political interference and manipulation, noting that journalists still face threats and occasional attacks. The state-owned news agency HINA runs a wire service in Croatian and English on politics, economics, society, and culture. , there are thirteen nationwide free-to-air DVB-T television channels, with Croatian Radiotelevision (HRT) operating four, RTL Televizija three, and Nova TV operating two channels, and the Croatian Olympic Committee, Kapital Net d.o.o., and Author d.o.o. companies operate the remaining three. Also, there are 21 regional or local DVB-T television channels. The HRT is also broadcasting a satellite TV channel. In 2020, there were 155 radio stations and 27 TV stations in Croatia. Cable television and IPTV networks are gaining ground. Cable television already serves 450 thousand people, around 10% of the total population of the country. In 2010, 314 newspapers and 2,678 magazines were published in Croatia. The print media market is dominated by the Croatian-owned Hanza Media and Austrian-owned Styria Media Group who publish their flagship dailies Jutarnji list, Večernji list and 24sata. Other influential newspapers are Novi list and Slobodna Dalmacija. In 2020, 24sata was the most widely circulated daily newspaper, followed by Večernji list and Jutarnji list. 
Croatia's film industry is small and heavily subsidised by the government, mainly through grants approved by the Ministry of Culture, with films often co-produced by HRT. Croatian cinema produces between five and ten feature films per year. The Pula Film Festival, the national film awards event held annually in Pula, is the most prestigious film event, featuring national and international productions. Animafest Zagreb, founded in 1972, is a prestigious annual festival dedicated to animated film. The first major accomplishment by Croatian filmmakers came when Dušan Vukotić won the 1961 Academy Award for Best Animated Short Film for Ersatz. Croatian film producer Branko Lustig won the Academy Award for Best Picture for Schindler's List and Gladiator.
Cuisine
Croatian traditional cuisine varies from one region to another. Dalmatia and Istria show culinary influences of Italian and other Mediterranean cuisines, which prominently feature seafood, cooked vegetables and pasta, and condiments such as olive oil and garlic. Austrian, Hungarian, and Turkish culinary styles influenced continental cuisine; in that area, meats, freshwater fish, and vegetable dishes are predominant. There are two distinct wine-producing regions in Croatia. The continental region in the northeast of the country, especially Slavonia, produces premium wines, particularly whites. Along the north coast, Istrian and Krk wines are similar to those in neighbouring Italy, while further south in Dalmatia, Mediterranean-style red wines are the norm. Annual production of wine exceeds 140 million litres. Croatia was almost exclusively a wine-consuming country until the late 18th century, when large-scale beer production and consumption began. The annual consumption of beer in 2020 was 78.7 litres per capita, which placed Croatia 15th among the world's countries.
Sports
There are more than 400,000 active sportspeople in Croatia. Of that number, 277,000 are members of sports associations and nearly 4,000 are members of chess and contract bridge associations. Association football is the most popular sport. The Croatian Football Federation, with more than 118,000 registered players, is the largest sporting association. The Croatian national football team finished third at the 1998 and 2022 FIFA World Cups and second in 2018. The Prva HNL football league attracts the highest average attendance of any professional sports league in Croatia; in the 2010–11 season, it attracted 458,746 spectators. Croatian athletes competing at international events since Croatian independence in 1991 have won 44 Olympic medals, including 15 gold medals. Croatian athletes have also won 16 gold medals at world championships, including four at the World Championships in Athletics. In tennis, Croatia won the Davis Cup in 2005 and 2018. Croatia's most successful male players, Goran Ivanišević and Marin Čilić, have both won Grand Slam titles and reached the top 3 of the ATP rankings. Iva Majoli became the first Croatian female player to win the French Open when she won it in 1997. Croatia has hosted several major sports competitions, including the 2009 World Men's Handball Championship, the 2007 World Table Tennis Championships, the 2000 World Rowing Championships, the 1987 Summer Universiade, the 1979 Mediterranean Games, and several European Championships.
The governing sports authority is the Croatian Olympic Committee, founded on 10 September 1991 and recognised by the International Olympic Committee since 17 January 1992, in time to permit Croatian athletes to appear at the 1992 Winter Olympics in Albertville, France, representing the newly independent nation at the Olympic Games for the first time.
See also
Outline of Croatia
Index of Croatia-related articles
External links
Key Development Forecasts for Croatia from International Futures
https://en.wikipedia.org/wiki/Economy%20of%20Croatia
Economy of Croatia
The economy of Croatia is a high-income, service-based social market economy, with the tertiary sector accounting for 70% of total gross domestic product (GDP). Croatia has a fully integrated and globalized economy. Croatia's road to globalization started as soon as the country gained independence, with tourism as one of the country's core industries dependent on the global market. Croatia joined the World Trade Organization in 2000 and NATO in 2009, has been a member of the European Union since 1 July 2013, and joined the Eurozone and the Schengen Area on 1 January 2023. Croatia is also negotiating membership of the OECD, which it hopes to join by 2025. Further integration into EU structures will continue in the coming years, including participation in ESA and CERN as well as EEA membership in the next 24 months. With its entry into the Eurozone, Croatia is now classified as a developed country or an advanced economy, a designation given by the IMF to highly developed industrial nations, which includes all members of the Eurozone. Croatia was hit hard by the 2008 global financial crisis, which caused a significant downturn in economic growth and stalled economic reform, resulting in six years of recession and a cumulative decline in GDP of 12.5%. Croatia formally emerged from the recession in the fourth quarter of 2014 and recorded continuous GDP growth until 2020. The Croatian economy reached pre-crisis levels in 2019, but due to the coronavirus pandemic GDP decreased by 8.4% in 2020. Growth rebounded in 2021, when Croatia recorded its largest year-over-year GDP growth since 1991. Croatia's post-pandemic recovery was supported by strong private consumption, better-than-expected performance in the tourism industry, and a boom in merchandise exports. Croatian exports saw rapid growth of nearly 25% in 2021 and 26% in 2022, reaching 143.7 billion kuna in 2021 and a projected 182 billion kuna in 2022. The Croatian economy also saw a continuation of rapid growth based on strong tourism receipts and export numbers, as well as a rapidly expanding ICT sector whose growth and revenue rival Croatian tourism. The ICT sector alone generates €7 billion of service exports and is expected to expand further in 2023 and 2024 at an average of 15% per year. In 2022, the Croatian economy is expected to grow by between 5.9% and 7.8% in real terms and to reach between $72 billion and $73.6 billion according to preliminary estimates by the Croatian Government, surpassing early estimates of 491 billion kuna, or $68.5 billion. Croatia's GDP per capita at purchasing power parity should exceed $40,000 for the first time in 2022; however, because the economy experienced six years of deep recession, catching up will take several more years of high growth. The economic outlook for 2023 is mixed and depends largely on how the big Eurozone economies perform. Croatia's largest trading partners (Italy, Germany, Austria, Slovenia and France) are expected to slow down but avoid recession according to the latest economic projections and estimates, so the Croatian economy could see better-than-expected results in 2023. Early projections of between 1% and 2.6% economic growth in 2023, with inflation at 7%, represent a significant slowdown for the country; however, the country is experiencing a major internal and inward investment cycle unparalleled in recent history.
EU recovery funds to the tune of €8.7 billion, coupled with large EU investments in the recently earthquake-affected areas of Croatia, major investments by local businesses in the renewable energy sector (also EU supported and funded), major investments in transport infrastructure, and the country's rapidly expanding ICT sector mean the Croatian economy could see a continuation of rapid growth in 2023. Tourism is one of the main pillars of the Croatian economy, comprising 19.6% of Croatia's GDP. Croatia is working to become an energy powerhouse with its floating liquefied natural gas (LNG) regasification terminal on the island of Krk and investments in green energy, particularly wind, solar and geothermal energy, having opened a 17 MW geothermal power plant in Ciglena in late 2019, the largest binary-technology power plant in continental Europe, and having started work on a second one in the summer of 2021. The government intends to spend about $1.4 billion on grid modernisation, with a goal of increasing renewable energy source connections by at least 800 MW by 2026 and 2,500 MW by 2030, and predicts that renewable energy sources as a share of total energy consumption will grow to 36.4% in 2030 and to 65.6% in 2050. In 2021 Croatia joined the list of countries with their own automobile industry, when production of Rimac Automobili's Nevera began. The company also took over Bugatti Automobiles in November of the same year and started building its new headquarters in Zagreb, titled the 'Rimac Campus', which will serve as the company's international research and development (R&D) and production base for all future Rimac products, as well as the home of R&D for future Bugatti models. The company also plans to build battery systems for other manufacturers in the automotive industry. Although the campus will host R&D for future Bugatti models under the new joint venture, those vehicles will continue to be built at Bugatti's Molsheim plant in France. On 12 November 2021, Fitch raised Croatia's credit rating by one level, from 'BBB-' to 'BBB', Croatia's highest credit rating in history, with a positive outlook, noting progress in preparations for euro area membership and a strong recovery of the Croatian economy from the pandemic crisis. In late March 2022 the Croatian Bureau of Statistics announced that Croatia's industrial output had risen by 4% in February, growing for the 15th month in a row. Croatia continued to have strong growth during 2022, fuelled by tourism revenue and increased exports. According to a preliminary estimate, Croatia's GDP in Q2 grew by 7.7% from the same period of 2021. The International Monetary Fund (IMF) projected in early September 2022 that Croatia's economy would expand by 5.9% in 2022, whilst the EBRD expected Croatian GDP growth to reach 6.5% by the end of 2022. Pfizer announced the launch of a new production plant in Savski Marof, whilst the Croatian IT industry grew 3.3%, confirming the trend that started with the coronavirus pandemic: Croatia's digital economy increased by 16 percent on average annually from 2019 to 2021, and by 2030 its value could reach 15 percent of GDP, with the ICT sector the main driver of that growth. Croatia joined both the Eurozone and the Schengen Area in January 2023, which helps strengthen the country's integration into the European economy and makes cross-border trade with European countries and other European trade partners easier.
The minimum wage is expected to rise to NET 700 EUR in 2023, further increasing consumer spending and combating the high inflation rate. History Pre-1990 When Croatia was still part of the Dual Monarchy, its economy was largely agricultural. However, modern industrial companies were also located in the vicinity of the larger cities. The Kingdom of Croatia had a high ratio of population working in agriculture. Many industrial branches developed in that time, like forestry and wood industry (stave fabrication, the production of potash, lumber mills, shipbuilding). The most profitable one was stave fabrication, the boom of which started in the 1820s with the clearing of the oak forests around Karlovac and Sisak and again in the 1850s with the marshy oak masses along the Sava and Drava rivers. Shipbuilding in Croatia played a huge role in the 1850s Austrian Empire, especially the long-range sailing boats. Sisak and Vukovar were the centres of river-shipbuilding. Slavonia was also mostly an agricultural land and it was known for its silk production. Agriculture and the breeding of cattle were the most profitable occupations of the inhabitants. It produced corn of all kinds, hemp, flax, tobacco, and great quantities of liquorice. The first steps towards industrialization began in the 1830s and in the following decades the construction of big industrial enterprises took place. During the 2nd half of the 19th and early 20th century there was an upsurge of industry in Croatia, strengthened by the construction of railways and the electric-power production. However, the industrial production was still lower than agricultural production. Regional differences were high. Industrialization was faster in inner Croatia than in other regions, while Dalmatia remained one of the poorest provinces of Austria-Hungary. The slow rate of modernization and rural overpopulation caused extensive emigration, particularly from Dalmatia. According to estimates, roughly 400,000 Croats emigrated from Austria-Hungary between 1880 and 1914. In 1910 8.5% of the population of Croatia-Slavonia lived in urban settlements. In 1918 Croatia became part of the Kingdom of Yugoslavia, which was in the interwar period one of the least developed countries in Europe. Most of its industry was based in Slovenia and Croatia, but further industrial development was modest and centered on textile mills, sawmills, brick yards and food-processing plants. The economy was still traditionally based on agriculture and raising of livestock, with peasants accounting for more than half of Croatia's population. In 1941 the Independent State of Croatia (NDH), a World War II puppet state of Germany and Italy, was established in parts of Axis-occupied Yugoslavia. The economic system of NDH was based on the concept of "Croatian socialism". The main characteristic of the new system was the concept of a planned economy with high levels of state involvement in economic life. The fulfillment of basic economic interests was primarily ensured with measures of repression. All large companies were placed under state control and the property of the regime's national enemies was nationalized. Its currency was the NDH kuna. The Croatian State Bank was the central bank, responsible for issuing currency. As the war progressed the government kept printing more money and its amount in circulation was rapidly increasing, resulting in high inflation rates. 
After World War II, the new Communist Party of Yugoslavia resorted to a command economy on the Soviet model of rapid industrial development. In accordance with the socialist plan, mainly companies in the pharmaceutical industry, the food industry and the consumer goods industry were founded in Croatia. Metal and heavy industry was mainly promoted in Bosnia and Serbia. By 1948 almost all domestic and foreign-owned capital had been nationalized. The industrialization plan relied on high taxation, fixed prices, war reparations, Soviet credits, and export of food and raw materials. Forced collectivization of agriculture was initiated in 1949. At that time 94% of agricultural land was privately owned, and by 1950 96% was under the control of the social sector. A rapid improvement of food production and the standard of living was expected, but due to bad results the program was abandoned three years later. Throughout the 1950s Croatia experienced rapid urbanization. Decentralization came in 1965 and spurred growth of several sectors including the prosperous tourist industry. SR Croatia was, after SR Slovenia, the second most developed republic in Yugoslavia with a ~55% higher GDP per capita than the Yugoslav average, generating 31.5% of Yugoslav GDP or $30.1Bn in 1990. Croatia and Slovenia accounted for nearly half of the total Yugoslav GDP, and this was reflected in the overall standard of living. In the mid-1960s, Yugoslavia lifted emigration restrictions and the number of emigrants increased rapidly. In 1971 224,722 workers from Croatia were employed abroad, mostly in West Germany. Foreign remittances contributed $2 billion annually to the economy by 1990. Profits gained through Croatia's industry were used to develop poor regions in other parts of former Yugoslavia, leading to Croatia contributing much more to the federal Yugoslav economy than it gained in return. This, coupled with austerity programs and hyperinflation in the 1980s, led to discontent in both Croatia and Slovenia which eventually fuelled political movements calling for independence. Transition and war years In the late 1980s and early 1990s, with the collapse of socialism and the beginning of economic transition, Croatia faced considerable economic problems stemming from: the legacy of longtime communist mismanagement of the economy; damage during the internecine fighting to bridges, factories, power lines, buildings, and houses; the large refugee and displaced population, both Croatian and Bosnian; the disruption of economic ties; and mishandled privatization At the time Croatia gained independence, its economy (and the whole Yugoslavian economy) was in the middle of recession. Privatization under the new government had barely begun when war broke out in 1991. As a result of the Croatian War of Independence, infrastructure sustained massive damage in the period 1991–92, especially the revenue-rich tourism industry. Privatization in Croatia and transformation from a planned economy to a market economy was thus slow and unsteady, largely as a result of public mistrust when many state-owned companies were sold to politically well-connected at below-market prices. With the end of the war, Croatia's economy recovered moderately, but corruption, cronyism, and a general lack of transparency stymied economic reforms and foreign investment. The privatization of large government-owned companies was practically halted during the war and in the years immediately following the conclusion of peace. 
As of 2000, roughly 70% of Croatia's major companies were still state-owned, including water, electricity, oil, transportation, telecommunications, and tourism. The early 1990s were characterized by high inflation rates. In 1991 the Croatian dinar was introduced as a transitional currency, but inflation continued to accelerate. The anti-inflationary stabilization steps in 1993 decreased retail price inflation from a monthly rate of 38.7% to 1.4%, and by the end of the year, Croatia experienced deflation. In 1994 Croatia introduced the kuna as its currency. As a result of the macro-stabilization programs, the negative growth of GDP during the early 1990s stopped and turned into a positive trend. Post-war reconstruction activity provided another impetus to growth. Consumer spending and private sector investments, both of which were postponed during the war, contributed to the growth in 1995–1997. Croatia began its independence with a relatively low external debt because the debt of Yugoslavia was not shared among its former republics at the beginning. In March 1995 Croatia agreed with the Paris Club of creditor governments and took 28.5% of Yugoslavia's previously non-allocated debt over 14 years. In July 1996 an agreement was reached with the London Club of commercial creditors, when Croatia took 29.5% of Yugoslavia's debt to commercial banks. In 1997 around 60 percent of Croatia's external debt was inherited from former Yugoslavia. At the beginning of 1998 value-added tax was introduced. The central government budget was in surplus in that year, most of which was used to repay foreign debt. Government debt to GDP had fallen from 27.30% to 26.20% at the end of 1998. However, the consumer boom was disrupted in mid 1998, as a result of the bank crisis when 14 banks went bankrupt. Unemployment increased and GDP growth slowed down to 1.9%. The recession that began at the end of 1998 continued through most of 1999, and after a period of expansion GDP in 1999 had a negative growth of −0.9%. In 1999 the government tightened its fiscal policy and revised the budget with a 7% cut in spending. In 1999 the private sector share in GDP reached 60%, which was significantly lower than in other former socialist countries. After several years of successful macroeconomic stabilization policies, low inflation and a stable currency, economists warned that the lack of fiscal changes and the expanding role of the state in the economy caused the decline in the late 1990s and were preventing sustainable economic growth. Economy since 2000 The new government led by the president of SDP, Ivica Račan, carried out a number of structural reforms after it won the parliamentary elections on 3 January 2000. The country emerged from the recession in the 4th quarter of 1999 and growth picked up in 2000. Due to overall increase in stability, the economic rating of the country improved and interest rates dropped. Economic growth in the 2000s was stimulated by a credit boom led by newly privatized banks, capital investment, especially in road construction, a rebound in tourism and credit-driven consumer spending. Inflation remained tame and the currency, the kuna, stable. In 2000 Croatia generated 5,899 billion kunas in total income from the shipbuilding sector, which employed 13,592 people. Total exports in 2001 amounted to $4,659,286,000, of which 54.7% went to the countries of the EU. Croatia's total imports were $9,043,699,000, 56% of which originated from the EU. 
Unemployment reached its peak in late 2002, but has since been steadily declining. In 2003, the nation's economy officially recovered to the level of GDP it had in 1990. In late 2003 the new government led by HDZ took office. Unemployment continued falling, powered by growing industrial production and rising GDP, rather than only seasonal changes from tourism. Unemployment reached an all-time low in 2008, when the annual average rate was 8.6%; GDP per capita peaked at $16,158, while public debt as a percentage of GDP decreased to 29%. Most economic indicators remained positive in this period except for external debt, as Croatian firms increasingly took loans from foreign sources. Between 2003 and 2007, Croatia's private-sector share of GDP increased from 60% to 70%. The Croatian National Bank had to take steps to curb further growth of the indebtedness of local banks with foreign banks. The dollar debt figure is quite adversely affected by the EUR/USD ratio; over a third of the increase in debt since 2002 is due to currency value changes.
2009–2015
Economic growth was hurt by the global financial crisis. Immediately after the crisis it seemed that Croatia did not suffer serious consequences like some other countries. However, in 2009 the crisis gained momentum, and the decline in GDP growth continued, at a slower pace, during 2010. In 2011 GDP stagnated, as the growth rate was zero. After the global crisis hit the country, the unemployment rate increased steadily, resulting in the loss of more than 100,000 jobs. While unemployment was 9.6% in late 2007, in January 2014 it peaked at 22.4%. In 2010 the Gini coefficient was 0.32. In September 2012, the Fitch ratings agency unexpectedly improved Croatia's economic outlook from negative to stable, reaffirming Croatia's current BBB rating. The slow pace of privatization of state-owned businesses and an over-reliance on tourism have also been a drag on the economy. Croatia joined the European Union on 1 July 2013 as the 28th member state. The Croatian economy is heavily interdependent with the other principal economies of Europe, and any negative trends in these larger EU economies also have a negative impact on Croatia. Italy, Germany and Slovenia are Croatia's most important trade partners. In spite of the rather slow post-recession recovery, in terms of income per capita it is still ahead of some European Union member states such as Bulgaria and Romania. In terms of average monthly wage, Croatia is ahead of 9 EU members (Czech Republic, Estonia, Slovakia, Latvia, Poland, Hungary, Lithuania, Romania, and Bulgaria). The annual average unemployment rate in 2014 was 17.3%, and Croatia had the third-highest unemployment rate in the European Union, after Greece (26.5%) and Spain (24%). Of particular concern is the heavily backlogged judiciary system, combined with inefficient public administration, especially regarding the issues of land ownership and corruption in the public sector. Unemployment is regionally uneven: it is very high in eastern and southern parts of the country, nearing 20% in some areas, while relatively low in the north-west and in larger cities, where it is between 3 and 7%. In 2015 external debt rose by 2.7 billion euros from the end of 2014, to around €49.3 billion.
2016–2020
During 2015 the Croatian economy returned to slow but positive economic growth, which continued during 2016; at the end of the year, seasonally adjusted growth was recorded at 3.5%.
The better-than-expected figures during 2016, together with higher tax receipts, enabled the Croatian Government to repay debt and narrow the current account deficit during Q3 and Q4 of 2016. This growth in economic output, coupled with the reduction of government debt, had a positive impact on the financial markets, with many ratings agencies revising their outlook from negative to stable, the first upgrade of Croatia's credit rating since 2007. Due to consecutive months of economic growth and demand for labour, plus the outflow of residents to other European countries, Croatia recorded its biggest fall in the number of unemployed in November 2016, with the rate dropping from 16.1% to 12.7%.
2020–present
2020
The COVID-19 pandemic caused more than 400,000 workers to file for economic aid of 4,000 HRK per month. In the first quarter of 2020, Croatian GDP rose by 0.2%, but in Q2 the Government of Croatia announced a plunge of −15.1%, the biggest quarterly GDP drop since GDP began to be measured. Economic activity also plunged in Q3 2020, when GDP slid by a further 10.0%. In autumn 2020 the European Commission estimated the total GDP loss in 2020 at 9.6%. Growth was set to pick up in the last month of Q1 2021 and in the second quarter of 2021, at +1.4% and +3.0% respectively, meaning that Croatia was set to reach 2019 levels by 2022.
2021
In July 2021 the projection was improved to 5.4%, due to the strong outturn in the first quarter and positive high-frequency indicators concerning consumption, construction, industry and tourism prospects. In November 2021 Croatia outperformed these projections: real GDP growth for 2021 was calculated at 8.1%, up from the 5.4% projected in July. The recovery was supported by strong private consumption, the better-than-expected performance of tourism and the ongoing resilience of the export sector. Preliminary data point to tourism-related expenditure already exceeding 2019 levels, which has been supportive of both employment and consumption. Exports of goods also continued to perform strongly (up 43% year-on-year in Q2 2021), pointing to resilient competitiveness. Expressed in euros, Croatian merchandise exports in the first nine months of 2021 amounted to 13.3 billion euros, an annual increase of 24.6 per cent. At the same time, imports rose 20.3 per cent to 20.4 billion euros. The coverage of imports by exports for the first nine months was 65.4 per cent. This made 2021 a record year for Croatian exports, as the 2019 figure was exceeded by 2 billion euros. Exports recovered in all major markets, namely all EU and CEFTA countries; within the EU, lower export results were recorded only with Sweden, Belgium and Luxembourg. Italy is again the main market for Croatian products, followed by Germany and Slovenia. Apart from the high contribution of the crude oil that Ina sends to Hungary to the Mol refinery for processing, the export of artificial fertilizers from Petrokemija also contributed significantly to growth. For 2022, the Commission revised downwards its projection for Croatia's economic growth to 5.6%, from the 5.9% previously predicted in July 2021. The Commission again confirmed that the volume of Croatia's GDP should reach its 2019 level during 2022, with GDP growth of 3.4% expected in 2023.
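The expectation that output would regain its 2019 level only in the course of 2022 can be checked with simple index arithmetic. The sketch below is a minimal illustration in Python, chaining the annual rates quoted in this article (the 8.4% contraction in 2020, the 8.1% outturn for 2021 and the Commission's 5.6% projection for 2022); it is only an approximation, since official national-accounts series are chain-linked and subject to revision.

```python
# Index real GDP to 100 in 2019 and chain the annual rates quoted in
# this article (2020 contraction, 2021 outturn, 2022 projection).
# Simplification: official series are chain-linked and revised.
rates = {2020: -0.084, 2021: 0.081, 2022: 0.056}

level = 100.0  # 2019 = 100
for year in sorted(rates):
    level *= 1 + rates[year]
    print(year, round(level, 1))

# Output: 2020 91.6, 2021 99.0, 2022 104.6 -> the 2019 level is regained
# only in the course of 2022, consistent with the projection above.
```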
The Commission warned that the key downside risks stem from Croatia's relatively low vaccination rates, which could lead to stricter containment measures, and from continued delays of the earthquake-related reconstruction. On the upside, Croatia's entry into the Schengen area and euro adoption towards the end of the forecast period could benefit investment and trade. On 12 November 2021, Fitch raised Croatia's credit rating by one level, from 'BBB-' to 'BBB', Croatia's highest credit rating in history, with a positive outlook, noting progress in preparations for Eurozone membership and a strong recovery of the Croatian economy from the pandemic crisis. Euro adoption was further secured when the eurosceptic party Hrvatski Suverenisti failed in its bid to force a referendum to block the adoption of the euro in Croatia. In December 2021 Croatia's industrial production increased for the thirteenth consecutive month, with growth recorded in all five aggregates; for 2021 as a whole, industrial production increased by 6.7 percent.
2022
In late March 2022 the Croatian Bureau of Statistics announced that Croatia's industrial output had risen by 4% in February, growing for the 15th month in a row. Croatia continued to have strong growth during 2022, fuelled by tourism revenue and increased exports. According to a preliminary estimate, Croatia's GDP in Q2 grew by 7.7% from the same period of 2021. The International Monetary Fund (IMF) projected in early September 2022 that Croatia's economy would expand by 5.9% in 2022, whilst the EBRD expected Croatian GDP growth to reach 6.5% by the end of 2022. Pfizer announced the launch of a new production plant in Savski Marof, whilst the Croatian IT industry grew 3.3%, confirming the trend that started with the coronavirus pandemic: Croatia's digital economy increased by 16 percent on average annually from 2019 to 2021. It is estimated that by 2030 its value could reach 15 percent of GDP, with the ICT sector being the main driver of that growth. On 12 July 2022, the Eurogroup approved Croatia becoming the 20th member of the Eurozone, with the formal introduction of the euro currency to take place on 1 January 2023. Croatia was also set to join the Schengen Area in 2023. By 2023, the minimum wage is expected to rise to a net 700 EUR per month, increasing consumer spending.
Sectors
Industry
Tourism
Tourism is a notable source of income during the summer and a major industry in Croatia. It dominates the Croatian service sector and accounts for up to 20% of Croatian GDP. Annual tourist industry income for 2011 was estimated at €6.61 billion. Its positive effects are felt throughout the economy of Croatia in terms of increased business volume observed in retail business, processing industry orders and summer seasonal employment. The industry is considered an export business, because it significantly reduces the country's external trade imbalance. Since the conclusion of the Croatian War of Independence, the tourist industry has grown rapidly, recording a fourfold rise in tourist numbers, with more than 10 million tourists each year. The most numerous tourists come from Germany, Slovenia, Austria and the Czech Republic, as well as from Croatia itself. The length of a tourist stay in Croatia averages 4.9 days. The bulk of the tourist industry is concentrated along the Adriatic Sea coast. Opatija was the first holiday resort, popular since the middle of the 19th century. By the 1890s, it had become one of the most significant European health resorts.
Later a large number of resorts sprang up along the coast and on numerous islands, offering services ranging from mass tourism and catering to various niche markets, the most significant being nautical tourism (there are numerous marinas with more than 16 thousand berths) and cultural tourism relying on the appeal of medieval coastal cities and the numerous cultural events taking place during the summer. Inland areas offer mountain resorts, agrotourism and spas. Zagreb is also a significant tourist destination, rivalling major coastal cities and resorts. Croatia has unpolluted marine areas, reflected in numerous nature reserves, 99 Blue Flag beaches and 28 Blue Flag marinas. Croatia is ranked as the 18th most popular tourist destination in the world. About 15% of these visitors (over one million per year) are involved with naturism, an industry for which Croatia is world-famous. It was also the first European country to develop commercial naturist resorts.
Agriculture
The Croatian agricultural sector relies in part on exports of blue-water fish, which in recent years have experienced a tremendous surge in demand, mainly from Japan and South Korea. Croatia is a notable producer of organic food, much of which is exported to the European Union. Croatian wines, olive oil and lavender are particularly sought after. The value of Croatia's agriculture sector is around 3.1 billion according to preliminary data released by the national statistics office. Croatia has around 1.72 million hectares of agricultural land; however, the total utilized agricultural land in 2020 was around 1.506 million hectares, of which permanent pasture constituted 536,000 hectares, or some 35.5% of the total land available to agriculture. Croatia imports significant quantities of fruit and olive oil, despite having large domestic production of both. In terms of livestock, Croatian agriculture had some 15.2 million poultry, 453,000 cattle, 802,000 sheep, 1,157,000 pigs and 88,000 goats. Croatia also produced 67,000 tons of blue fish, some 9,000 tons of which were tuna, farmed and exported to Japan, South Korea and the United States. In 2022 Croatia produced:
1.66 million tons of maize;
970 thousand tons of wheat;
524 thousand tons of sugar beet (the beet is used to manufacture sugar and ethanol);
319 thousand tons of barley;
196 thousand tons of soybean;
107 thousand tons of potato;
59 thousand tons of rapeseed;
146 thousand tons of grape;
154 thousand tons of sunflower seed;
in addition to smaller productions of other agricultural products, like apple (93 thousand tons), triticale (62 thousand tons) and olive (34 thousand tons).
Infrastructure
Transport
The highlight of Croatia's recent infrastructure developments is its rapidly developed motorway network, largely built in the late 1990s and especially in the 2000s. By January 2022, Croatia had completed more than of motorways, connecting Zagreb to most other regions and following various European routes and four Pan-European corridors. The busiest motorways are the A1, connecting Zagreb to Split, and the A3, passing east–west through northwest Croatia and Slavonia. A widespread network of state roads in Croatia acts as motorway feeder roads while connecting all major settlements in the country. The high quality and safety levels of the Croatian motorway network were tested and confirmed by several EuroTAP and EuroTest programs. Croatia has an extensive rail network spanning , including of electrified railways and of double track railways.
The most significant railways in Croatia are found within the Pan-European transport corridors Vb and X connecting Rijeka to Budapest and Ljubljana to Belgrade, both via Zagreb. All rail services are operated by Croatian Railways. There are international airports in Zagreb, Zadar, Split, Dubrovnik, Rijeka, Osijek and Pula. As of January 2011, Croatia complies with International Civil Aviation Organization aviation safety standards and the Federal Aviation Administration upgraded it to Category 1 rating. The busiest cargo seaport in Croatia is the Port of Rijeka and the busiest passenger ports are Split and Zadar. In addition to those, a large number of minor ports serve an extensive system of ferries connecting numerous islands and coastal cities in addition to ferry lines to several cities in Italy. The largest river port is Vukovar, located on the Danube, representing the nation's outlet to the Pan-European transport corridor VII. Energy There are of crude oil pipelines in Croatia, connecting the Port of Rijeka oil terminal with refineries in Rijeka and Sisak, as well as several transhipment terminals. The system has a capacity of 20 million tonnes per year. The natural gas transportation system comprises of trunk and regional natural gas pipelines, and more than 300 associated structures, connecting production rigs, the Okoli natural gas storage facility, 27 end-users and 37 distribution systems. Croatian production of energy sources covers 85% of nationwide natural gas demand and 19% of oil demand. In 2008, 47.6% of Croatia's primary energy production structure comprised use of natural gas (47.7%), crude oil (18.0%), fuel wood (8.4%), hydro power (25.4%) and other renewable energy sources (0.5%). In 2009, net total electrical power production in Croatia reached 12,725 GWh and Croatia imported 28.5% of its electric power energy needs. The bulk of Croatian imports are supplied by the Krško Nuclear Power Plant in Slovenia, 50% owned by Hrvatska elektroprivreda, providing 16% of Croatia's electricity. 
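The production and import figures above can be tied together with a short back-of-the-envelope calculation. The following minimal Python sketch assumes that the 12,725 GWh of domestic production in 2009 covered the remaining 71.5% of electricity needs, so total demand and the imported volume can be inferred; the official statistics may differ slightly because of transmission losses and exports.

```python
# Back-of-the-envelope check of the 2009 electricity figures quoted above.
# Assumption: domestic production (12,725 GWh) covered the 71.5% of demand
# that was not imported (imports were 28.5% of needs); losses and exports
# are ignored.
production_gwh = 12_725
import_share = 0.285

total_demand_gwh = production_gwh / (1 - import_share)
imports_gwh = total_demand_gwh * import_share

print(round(total_demand_gwh))  # ~17,797 GWh of total needs
print(round(imports_gwh))       # ~5,072 GWh imported
```

Applying the same logic to the 2021 figures listed below (production of 14,728 GWh against consumption of 18,869 GWh) implies a net import share of roughly 22%.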
Electricity:
production: 14,728 GWh (2021)
consumption: 18,869 GWh (2021)
exports: 7,544 GWh (2021)
imports: 11,505 GWh (2021)
Electricity – production by source:
hydro: 26% (2022)
thermal: 24% (2022)
nuclear: 14% (2022)
renewable: 8% (2022)
import: 28% (2022)
Crude oil:
production: 615 thousand tons (2021)
consumption: 2.456 million tons (2021)
exports: 472 thousand tons (2021)
imports: 2.300 million tons (2021)
proved reserves: (2017)
Natural gas:
production: 746 million m³ (2021)
consumption: 2.906 billion m³ (2021)
exports: 126 million m³ (2021)
imports: 2.291 billion m³ (2021)
proved reserves: 21.094 billion m³ (2019)
Stock exchanges
Zagreb Stock Exchange
Banking
Central bank: Croatian National Bank
Major commercial banks:
Zagrebačka banka (owned by UniCredit from Italy)
Privredna banka Zagreb (owned by Intesa Sanpaolo from Italy)
Hrvatska poštanska banka
OTP Banka (owned by OTP Bank from Hungary)
Raiffeisen Bank Austria (owned by Raiffeisen from Austria)
Erste & Steiermärkische Bank (former Riječka banka, owned by Erste Bank from Austria)
Central Budget
Overall Budget:
Revenues: 187.30 billion kuna (€24.83 billion), 2023
Expenditures: 200.92 billion kuna (€26.63 billion), 2023
Expenditure by ministries for 2023:
Labor and Pension System, Family and Social Policy – €8.12 billion
Finance – €6.64 billion
Science and Education – €3.41 billion
Health – €2.72 billion
Economy and Sustainable Development – €1.96 billion
Maritime Affairs, Transport and Infrastructure – €1.41 billion
Agriculture – €1.16 billion
Interior – €1.05 billion
Defence – €1.04 billion
Justice and Public Administration – €0.54 billion
Construction, Physical Planning and State Property – €0.51 billion
Regional Development and EU funds – €0.50 billion
Culture and Media – €0.42 billion
Tourism and Sport – €0.17 billion
Veterans' Affairs – €0.16 billion
Foreign and European Affairs – €0.13 billion
Economic indicators
The following table shows the main economic indicators for the period 2000–2021 according to the Croatian Bureau of Statistics. From the CIA World Factbook 2021.
Real GDP (purchasing power parity): $107.11 billion (2020 est.)
Real GDP growth rate: 2.94% (2019 est.)
Real GDP per capita: $26,500 (2020 est.)
GDP (official exchange rate): $60.687 billion (2019 est.)
Labor force: 1.656 million (2020 est.)
Labor force – by occupation: agriculture 1.9%, industry 27.3%, services 70.8% (2017)
Unemployment rate: 8.07% (2019 est.)
Population below poverty line: 18.3% (2018 est.)
Household income or consumption by percentage share: lowest 10%: 2.7%; highest 10%: 23% (2015 est.)
Distribution of family income – Gini index: 30.4 (2017)
Inflation rate (consumer prices): 0.7% (2019 est.)
Budget: revenues: $25.24 billion (2017 est.); expenditures: $24.83 billion (2017 est.)
Public debt: 77.8% of GDP (2017 est.)
Taxes and revenues: 46.1% (of GDP) (2017 est.)
Agricultural products: maize, wheat, sugar beet, milk, barley, soybeans, potatoes, pork, grapes, sunflower seed
Industries: chemicals and plastics, machine tools, fabricated metal, electronics, pig iron and rolled steel products, aluminum, paper, wood products, construction materials, textiles, shipbuilding, petroleum and petroleum refining, food and beverages, tourism
Industrial production growth rate: 1.2% (2017 est.)
Current account balance: $1.597 billion (2019 est.)
Exports: $23.66 billion (2020 est.)
Exports – commodities: refined petroleum, packaged medicines, cars, medical cultures/vaccines, lumber (2019) Exports – partners: Italy 13%, Germany 13%, Slovenia 10%, Bosnia and Herzegovina 9%, Austria 6%, Serbia 5% (2019) Imports: $27.59 billion (2020 est.) Imports – commodities: crude petroleum, cars, refined petroleum, packaged medicines, electricity (2019) Imports – partners: Germany 14%, Italy 14%, Slovenia 11%, Hungary 7%, Austria 6% (2019) Reserves of foreign exchange and gold: $18.82 billion (31 December 2017 est.) Debt – external: $48.263 billion (2015 est.) Currency: kuna (HRK) Exchange rates: kuna per US$1 – 6.2474 (2020) Gross Domestic Product See also Economy of Europe Areas of Special State Concern (Croatia) Croatia and the euro Croatia and the World Bank Croatian brands Taxation in Croatia References External links Croatian National Bank Croatian Chamber of Economy GDP per inhabitant varied by one to six across the EU27 Member States Tariffs applied by Croatia as provided by ITC's Market Access Map, an online database of customs tariffs and market requirements. Croatia Croatia Croatia
2,489
5,580
https://en.wikipedia.org/wiki/Transport%20in%20Croatia
Transport in Croatia
Transport in Croatia relies on several main modes, including transport by car, train, ship and plane. Road transport incorporates a comprehensive network of state, county and local routes, augmented by a network of highways for long-distance travelling. Water transport can be divided into sea transport, based on the ports of Rijeka, Ploče, Split and Zadar, and river transport, based on the Sava, the Danube and, to a lesser extent, the Drava. Croatia has 9 international airports and several airlines, of which the most notable are Croatia Airlines and Trade Air. The rail network is fairly developed, but for inter-city travel buses tend to be far more common than trains. Air transport Croatia has 9 civil, 13 sport and 3 military airports. There are nine international civil airports: Zagreb Airport, Split Airport, Dubrovnik Airport, Zadar Airport, Pula Airport, Rijeka Airport (on the island of Krk), Osijek Airport, Bol and Mali Lošinj. The two busiest airports in the country are the ones serving Zagreb and Split. Towards the end of 2010, significant investment in the renovation of Croatian airports began. Croatia has been a member of the European Union since mid-2013. New, modern and spacious passenger terminals were opened in 2017 at Zagreb and Dubrovnik Airports and in 2019 at Split Airport. The new passenger terminals at Dubrovnik Airport and Zagreb Airport are the first in Croatia to feature jet bridges. Airports serving cities on the Adriatic coast receive most of their traffic during the summer season, owing to the large number of seasonal flights by foreign (especially low-cost) carriers. Croatia Airlines is the state-owned flag carrier of Croatia. It is headquartered in Zagreb and its main hub is Zagreb Airport. Croatia is connected by air with a large number of foreign (especially European) destinations, while its largest cities are interconnected by a significant number of domestic air routes such as Zagreb-Split-Zagreb, Zagreb-Dubrovnik-Zagreb, Zagreb-Zadar-Zagreb, Osijek-Rijeka-Osijek, Osijek-Split-Osijek, Zadar-Pula-Zadar, etc. These routes are operated by domestic air carriers such as Croatia Airlines or Trade Air. Rail transport Railway corridors The Croatian railway network is classified into three groups: railways of international, regional and local significance. The most important railway lines follow Pan-European corridors V/branch B (Rijeka - Zagreb - Budapest) and X, which connect with each other in Zagreb. International passenger trains connect Croatia directly with most of its neighbours (Slovenia, Hungary, Serbia) and with more distant Central European countries such as the Czech Republic, Austria, Germany and Switzerland. Dubrovnik is the largest and best-known Croatian city without a railway connection, while the city of Pula (together with the rest of westernmost Istria County) can only be reached directly by rail through Slovenia (unless one takes the railway company's organized bus service between Rijeka and Lupoglav). While most of the larger inland towns are connected to the railway (in contrast to the coastal part of the country), many small inland towns, villages and remote areas are served by trains running on regional or local corridors. Types of passenger train lines All nationwide and commuter passenger rail services in Croatia are operated by the country's national railway company, Croatian Railways. 
According to schedules, there are several different ranks of passenger trains operating inside Croatia, as follows. Inter-City trains – Croatian: Inter-City vlakovi – Inter-City trains make up a fairly small share of trains in Croatia. They operate on long routes and usually serve only the largest stations along the way. Inter-City tilting trains – Croatian: Inter City Nagibni vlakovi (ICN) – ICN services connect Zagreb with Split using tilting trains. Thanks to their tilting mechanism they can run faster than conventional trains; they provide the daytime connections between Zagreb and Split and also serve a fair number of larger stations along their route. In contrast to the regular overnight fast trains between Zagreb and Split, which are scheduled at roughly 8 hours in total, tilting trains on the Zagreb–Split route (lines M202 and M604) offer passengers a faster journey with a riding time of about 6 hours. Fast trains – Croatian: Brzi vlakovi - Fast trains operate on medium to long distances, serving only stations in larger settlements along the track. Their purpose is very similar to that of Inter-City trains. Semi-fast trains – Croatian: Ubrzani vlakovi - Semi-fast trains operate on medium to long distances and serve destinations with sufficient passenger demand, skipping certain smaller stations without it. Regional and local trains – Croatian: Putnički vlakovi (lit. "passenger trains") – Regional and local trains cover short, medium and long distances and generally serve all stations along their route, accounting for the largest share of passenger trains at the nationwide level. They are mainly used by local residents travelling between smaller settlements and larger centres/railway hubs, or by those continuing their journey via generally well-timed transfers - in both cases for daily commuting (school, work, hospital, shopping, etc.) or other reasons. These trains usually run at daily frequencies that meet the needs of the local population. Suburban trains – Croatian: Prigradski vlakovi - Suburban trains operate exclusively on the Zagreb Commuter Rail corridor and have the most frequent daily schedules of all types of train lines in Croatia. They are run by light multiple units that can accelerate and stop quickly, and like most regional/local trains they serve every station along their way. On lines operating within the suburban areas of other larger towns, a certain number of regional/local trains play the role of suburban trains. Since a large number of fast, semi-fast, regional and local trains have commuter-oriented schedules, they often allow passengers to commute daily to the large city areas from more distant towns and settlements, and vice versa. This applies, for example, to the railway connections between the wider Central Croatia region and the Zagreb metropolitan area. Infrastructure condition Croatian railways use standard gauge (1,435 mm; 4 ft 8 1⁄2 in). The construction length of the railway network is 2,617 km (1,626 mi), of which 2,341 km (1,455 mi) are single-track corridors and 276 km (171 mi) double-track corridors. 1,013 km (629 mi) of railways are electrified, according to the annual rail network public report of Croatian Railways (2023 issue). The largest part of the country's railway infrastructure dates from before World War II, and more than half of the core routes were, in fact, built during the Habsburg monarchy, i.e. before World War I. 
Moreover, there was also a significant lack of investment and a decline in proper maintenance of Croatian railway infrastructure, roughly from the time of the country's independence (1991) until the late 2000s, which mainly resulted in lower permitted track speeds, longer riding times and a decrease in the overall quality of passenger transport, especially at the Inter-City level. As a result, a fair number of routes lag significantly behind Western European standards in terms of infrastructure condition. However, major infrastructure improvements began in the early 2010s and continued through the 2020s, such as the full-profile reconstruction of some of the country's most important corridors, including M102, R201 (section between Zaprešić and Zabok, including electrification), M101, M601 (Vinkovci - Vukovar, including electrification), M502, M201, M202 (section between Zagreb and Karlovac, including the addition of a second track) and M103 (section between Dugo Selo and Novska, including the addition of a second track). Those improvements, among other things, increased both maximum track speeds and operational safety, shortened travel times and modernized supporting infrastructure (stations, platforms and other equipment). The first newly built railway in Croatia since 1967 (L214) was opened in December 2019. The official rail speed record in Croatia is . Maximum speed reached in regular service is on parts of the Novska–Tovarnik line. Rolling stock The rolling stock of Croatian Railways consists of diesel locomotives, electric locomotives, diesel multiple units, electric multiple units and a significant number of railroad cars. In 2004 Croatian Railways introduced a series of modern tilting trains (HŽ series 7 123) produced by the German branch of Bombardier Transportation. They usually operate on the mountainous route between the two largest Croatian cities, Zagreb and Split, although they are sometimes used on other routes in the country as well. In 2011, the modernization of the fairly outdated Croatian Railways fleet officially began with the delivery of two KONČAR Group-made prototypes of the electric train series 6112 for suburban and regional traffic. Between 2013 and 2025, Croatian Railways received a single diesel multiple unit of the 7 022 series and 12 diesel multiple units of the 7 023 series, manufactured jointly by TŽV Gredelj and KONČAR Group, together with 54 additional electric multiple units of the 6 112 series manufactured by KONČAR Group, which significantly modernized the company's rolling stock. The 6 112, 7 022 and 7 023 units can be found operating on suburban, local, regional and inter-city routes. Road transport Since the time of Napoleon and the building of the Louisiana road, road transport in Croatia has improved significantly, surpassing that of most European countries. Croatian highways are widely regarded as among the most modern and safest in Europe. This is because the largest part of the Croatian motorway and expressway system (autoceste and brze ceste, respectively) has been constructed recently (mainly in the 2000s), and further construction is continuing. The motorways in Croatia connect most major Croatian cities and all major seaports. The two longest routes, the A1 and the A3, span the better part of the country and the motorway network connects most major border crossings. 
Tourism is of major importance for the Croatian economy, and as most tourists come to vacation in Croatia in their own cars, the highways serve to alleviate summer traffic jams. They have also been used as a means of stimulating urgently needed economic growth and the sustainable development of the country. Croatia now has a considerable highway density for a country of its size, helping it cope with the consequences of being a transition economy and having suffered in the Croatian War of Independence. Some of the most impressive parts of the road infrastructure in Croatia include the Sveti Rok and Mala Kapela tunnels on the A1 motorway, and the Pelješac Bridge in the southernmost part of the country. In total, Croatia has of roads. Traffic laws The traffic signs adhere to the Vienna Convention on Road Signs and Signals. The general speed limits are: in inhabited areas 50 km/h outside of inhabited areas 90 km/h on marked expressways 110 km/h on marked motorways 130 km/h Among the more technical safety measures, all new Croatian tunnels have modern safety equipment, and several control centres monitor highway traffic. Motorways Motorways (Croatian: autocesta, plural autoceste) in Croatia are dual carriageway roads with at least two traffic lanes in each driving direction and an emergency lane. Directional road signs on Croatian motorways have a green background with white lettering, similar to those on the German Autobahn. Motorways are designated "A" followed by the motorway number. The Croatian motorway network is long, with additional new motorways under construction. The list of completed motorways is as follows (see individual articles for further construction plans and status): A1, Zagreb - Bosiljevo - Split - Ploče (E71, E65) A2, Zagreb - Krapina - Macelj (E59) A3, Bregana - Zagreb - Lipovac (E70) A4, Goričan - Varaždin/Čakovec - Zagreb (E71) A5, Osijek - Đakovo - Sredanci (E73) A6, Bosiljevo - Rijeka (E65) A7, Rupa - Rijeka bypass (E61) A8, Kanfanar interchange - Matulji (E751) A9, Umag - Pula (E751) A10, A1 Ploče interchange - Metković border crossing A11, Velika Gorica - Lekenik Tolls are charged on most Croatian motorways; exceptions are the A11 motorway, the Zagreb bypass and the Rijeka bypass, as well as sections adjacent to border crossings (except the eastbound A3). Payment is in kuna; all major credit cards and euros are accepted at all toll gates. Most motorways are covered by the closed toll collection system, where a driver receives a ticket at the entrance gates and pays at the exit gates according to the number of sections travelled. Open toll collection is used on some bridges and tunnels and short stretches of tolled highway, where drivers pay the toll immediately upon arrival. Various forms of prepaid electronic toll collection are in place, allowing quicker collection of tolls, usually at a discounted rate, as well as the use of dedicated toll plaza lanes (for the ENC electronic toll collection system). All heavily travelled routes towards Slovenia, Hungary and Serbia are motorway connections, and almost all parts of Croatia are now easy to reach using motorways. Numerous service areas and petrol stations have been constructed along all Croatian motorways. All Croatian motorways are equipped with enclosed service areas with gas stations and parking. Many areas have restaurants and children's playgrounds. 
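The closed and open toll collection arrangements described above lend themselves to a brief illustration. The sketch below is purely illustrative: the interchange names and per-section prices are invented for the example and are not actual Croatian locations or tariffs, and the discount stands in generically for a prepaid electronic (ENC-style) rebate.

INTERCHANGES = ["A", "B", "C", "D"]            # consecutive interchanges along one motorway
SECTION_PRICE = {("A", "B"): 2.0,              # hypothetical price of each section, in euros
                 ("B", "C"): 1.5,
                 ("C", "D"): 1.8}

def closed_system_toll(entry, exit_):
    """Closed system: the entry ticket records where the driver joined, and the toll
    paid at the exit gate is the sum of the prices of all sections travelled."""
    i, j = INTERCHANGES.index(entry), INTERCHANGES.index(exit_)
    lo, hi = min(i, j), max(i, j)
    return sum(SECTION_PRICE[(INTERCHANGES[k], INTERCHANGES[k + 1])] for k in range(lo, hi))

def open_system_toll(flat_rate, electronic_discount=0.0):
    """Open system: a flat charge is paid on the spot at the toll point; a discount
    may apply when a prepaid electronic device is used."""
    return flat_rate * (1.0 - electronic_discount)

# Entering at A and leaving at C covers sections A-B and B-C:
print(closed_system_toll("A", "C"))    # 3.5
print(open_system_toll(4.0, 0.1))      # 3.6, with a 10% electronic-payment discount

The design difference is simply where the price is determined: in the closed system the fee depends on the entry-exit pair recorded on the ticket, while in the open system it is a fixed charge for passing a single toll point.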
Expressways The term brza cesta or expressway refers to limited-access roads specifically designated as such by legislation and marked with appropriate limited-access road traffic signs. The expressways may comprise two or more traffic lanes, while they normally do not have emergency lanes. Polu-autocesta or semi-highway refers to a two-lane, undivided road running on one roadway of a motorway while the other is in construction. By legal definition, all semi-highways are expressways. The expressway routes in Croatia usually correspond to a state road (see below) and are marked a "D" followed by a number. The "E" numbers are designations of European routes. State roads Major roads that aren't part of the motorway system are državne ceste (state routes). They are marked with the letter D and the road's number. The most traveled state routes in Croatia are: D1, connects Zagreb and Split via Lika - passes through Karlovac, Slunj, Plitvice, Korenica, Knin, Sinj. D2, connects Varaždin and Osijek via Podravina - passes through Koprivnica, Virovitica, Slatina, Našice. D8, connects Rijeka and Dubrovnik, widely known as Jadranska magistrala and part of E65 - runs along the coastline and connects many cities on the coast, including Crikvenica, Senj, Zadar, Šibenik, Trogir, Split, Omiš, Makarska and Ploče. Since the construction of A1 motorway beyond Gorski kotar started, D1 and D8 are much less used. These routes are monitored by Croatian roadside assistance because they connect important locations. Like all state routes outside major cities, they are only two-lane arterials and do not support heavy traffic. All state routes are routinely maintained by Croatian road authorities. The road sign for a state route has a blue background and the route's designation in white. State routes have one, two or three-digit numbers. County roads and minor roads Secondary routes are known as county roads. They are marked with signs with yellow background and road number. These roads' designations are rarely used, but usually marked on regional maps if these roads are shown. Formally, their designation is the letter Ž and the number. County roads have four-digit numbers. The least known are the so-called local roads. Their designations are never marked on maps or by roadside signs and as such are virtually unknown to public. Their designations consist of the letter L and a five-digit number. Bus traffic Buses represent the most-accepted, cheapest and widely used means of public transport. National bus traffic is very well developed - from express buses that cover longer distances to bus connections between the smallest villages in the country, therefore it's possible to reach most of the remotest parts of Croatia by bus on a daily basis. Every larger town usually has a bus station with the ticket office(s) and timetable information. Buses that run on national lines in Croatia (owned and run by private companies) are comfortable and modern-equipped vehicles, featuring air-conditioning and offering pleasant traveling comfort. National bus travel is generally divided in inter-city (Međugradski prijevoz), inter-county (Međužupanijski prijevoz) and county (local; Županijski prijevoz) transport. Although there can be bus companies whose primary goal is to serve inter-city lines, a certain bus company can - and most of them usually do - operate all or most of the above-mentioned modes of transport. 
The primary goal of intercity buses is to connect the largest cities in the country with each other in the shortest possible time. Buses on inter-city level usually offer far more frequent daily services and shorter riding time than trains, mostly due to the large amount of competing companies and great quality of the country's freeway network. According to timetables of bus companies, there are several types of inter-city bus lines. Some lines run directly on the highway to connect certain cities by the shortest route. Other lines run on lower-ranked roads (all the way or part of the way) even when there is a highway alternative, to connect settlements along the way, while some lines run on the highway and sometimes (one time or more) temporarily exit it to serve some smaller settlement nearby, thus giving the opportunity to a certain smaller settlement to be connected by express service. Buses on county lines usually run between larger cities or towns in a particular county, connecting towns and smaller villages along the way. These buses are mostly used by local residents - students or workers and occasional passengers, so the timetables and line frequencies of these bus routes are mostly adjusted according to the needs of passenger's daily migrations. Since there is no bus terminal in smaller villages, passengers which board buses from those stations buy a ticket from the driver while boarding the bus, unless they have a monthly student or worker pass, in which case they must validate it each time they board the vehicle. Buses running on inter-county lines usually have the same or very similar purpose, except they cross county borders to transport passengers to the more distanced larger town or area. There are many international bus routes from Croatia to the neighboring countries (Slovenia, Bosnia and Herzegovina, Serbia, Hungary) and to other European countries. International bus services correspond to European standards. Zagreb has the largest and busiest bus terminal in Croatia. It is located near the downtown in Trnje district on the Marin Držić Avenue. The bus terminal is close to the main railway station and it is easy to reach by tram lines and by car. Maritime and river transport Maritime transport Coastal infrastructure Republic of Croatia counts six ports open for public traffic of outstanding (international) economic importance and those are the ports: Rijeka, Zadar, Šibenik, Split, Ploče and Dubrovnik. There are also numerous smaller public ports located along the country's coast. Rijeka is the country's largest cargo port, followed by Ploče which is of great economic importance for the neighboring Bosnia and Herzegovina. The three most common destinations for foreign cruise ships are the ports of Dubrovnik, Split and Zadar. Split is the country's largest passenger port, serving as the public port for domestic ferry, conventional ship and catamaran services as well as for international ferry, cruise or mega cruise services. Zadar has two public transport ports opened for passenger traffic – one located in the town center served by conventional ship and catamaran services and the other located in the suburb of Gaženica, serving ferry and cruise ship services. Republic of Croatia defined the need to relieve the Zadar's passenger port and the historic center of Zadar and move ferry traffic from the city center to the new passenger port in Gaženica. 
Work on the construction of the new port began in 2009, and a new ferry port of approximately 100,000 square meters was opened to traffic in 2015. The advantages of the Port of Gaženica are its short distance from the city center (3.5 kilometers), the proximity of the airport and a good road connection with the A1 Motorway. The Port of Gaženica meets multiple traffic requirements - it serves domestic ferry traffic, international ferry traffic, passenger traffic on mega cruise ships and RO-RO traffic, with all the necessary infrastructure and accompanying upgrades. In 2019, the passenger port of Gaženica was named Port of the Year at the prestigious Seatrade Cruise Awards held in Hamburg. Connection of islands and the mainland Public transport on national conventional ship, catamaran and ferry lines, and on all occasional public maritime lines in Croatia, is supervised by the government-founded Agency for coastal line traffic (Agencija za obalni linijski promet). Croatia has about 50 inhabited islands along its coast (most of which are reached from either Zadar or Split ports), which means that there is a large number of local car ferry, conventional ship and catamaran connections. The vast majority of Croatian islands have a road network and several ports for public transport - usually a single ferry port and one or more additional ports mostly located near the bay settlements, served exclusively by conventional ships and catamarans. According to sailing schedules, or in case of extraordinary conditions, conventional and catamaran ships can also serve ferry ports. There is also a very small number of car-free islands that are accessible only by conventional ship or catamaran services, such as Silba in northern Dalmatia. Among national ferry lines, the leaders in terms of the number of transported passengers and vehicles are the line between Split and Supetar on the island of Brač (central Dalmatia) and the one between Valbiska (island of Krk) and Merag (island of Cres) in the northern Kvarner Gulf. The ferry line between Zadar and Preko on the island of Ugljan (northern Dalmatia) is the most frequent one in Croatia and the rest of the Adriatic - in the summer sailing schedule on this 3 nautical mile long line (5.5 km / 3.45 mi) there are around 20 departures per day in each direction. The longest ferry line in Croatia is Zadar - Ist - Olib - Silba (passenger service only) - Premuda - Mali Lošinj (63.4 nautical miles; 117.4 km / 72.9 mi.), while the shortest one is between Biograd na Moru and Tkon on the island of Pašman (1.4 nautical miles; 2.6 km / 1.6 mi.), both operating in northern Dalmatia. Almost all ferry lines in Croatia are provided by the state-owned shipping company Jadrolinija (included in the list of the world's top 10 passenger shipping companies), except the ferry service between Stinica and Mišnjak on the island of Rab (Kvarner Gulf area), which is operated by the company "Rapska Plovidba d.d.". Catamaran and passenger ship services are operated by Jadrolinija and several other companies such as "Krilo - Kapetan Luka", "G&V Line Iadera", Tankerska plovidba, "Miatours d.o.o.", etc. Jadrolinija alone provides a total of 34 national lines with almost 600 departures per day during the summer tourist season, when the number of ferry, conventional ship and catamaran lines on the most capacity-demanding routes is significantly higher compared to the off-season period. 
International routes With its largest vessels, Jadrolinija connects Croatia with Italy by operating the international cross-Adriatic routes Split - Ancona - Split, Zadar - Ancona - Zadar and Dubrovnik - Bari - Dubrovnik. The ferry line between Split and Ancona is also operated by the Italian operator SNAV. Vessels Most ferries sailing on Croatian national lines are, among other things, equipped with modern navigation and safety equipment and comfortable enclosed air-conditioned lounges for passengers. From 2004 to 2018, a total of 13 new ferries were built in Croatian shipyards, all of which sail on national local lines - 12 are owned by Jadrolinija, and one is owned by Rapska Plovidba. In addition to these ferries, Jadrolinija has a large number of ferries purchased on foreign markets (used and reconstructed/refurbished) or built in Croatian shipyards during the time of Yugoslavia. All ferries are equipped with hydraulic ramps for loading and unloading vehicles. Older ferries are usually smaller, with capacities that vary from 200 to 700 passengers and from 30 to 70 cars per vessel. They sail on lines connecting less populated islands, or on lines serving ports located in shallower bays on parts of the islands that lie very close to the mainland. Ferries built more recently (after 2000, either in Croatia or abroad) have significantly higher capacities, ranging from 600 to 1,200 passengers and from 100 to 200 cars per vessel, and mostly sail on the country's most frequent and/or busiest local lines, such as Valbiska - Merag (Kvarner Gulf area), Zadar - Preko, Zadar - Brbinj (Zadar Archipelago area), Split - Supetar, Split - Stari Grad, Split - Vela Luka - Ubli (Split Archipelago), etc. The three largest Croatian ships sailing on international routes were built abroad and later bought by Jadrolinija as used vessels: Marko Polo (built in 1972 and bought in 1988), Dubrovnik (built in 1979 and bought in 1996) and Zadar (built in 1993 and bought in 2004). Each of these three ships has a capacity of about 1,000 passengers and about 300 vehicles and offers a variety of additional facilities such as self-service restaurants, sleeping cabins, cafes, children's playrooms, etc. Catamarans and passenger ships sailing on Croatian national lines are relatively small vessels, mostly built in foreign shipyards. Capacity varies from vessel to vessel, and generally ranges from 200 to 400 passengers per ship. They are equipped with air-conditioned passenger lounges on one or two levels, cafe service, luggage space, etc. Croatian shipping companies have a wide range of catamarans - from those built in the late 1980s to the most modern, built in the 2010s with the latest advances in navigation technology. Passenger ships were mostly built in the late 1980s or early 1990s (except for a very small number of still-active Jadrolinija passenger ships built in Croatian shipyards during Yugoslavia, i.e. in the late 1950s and early 1960s). All passenger ships also have a covered deck with seats, like the ferry vessels. River transport Croatia is also on the important Danube waterway, which connects Eastern and Central Europe. The major Danube port is Vukovar, and there are also smaller river ports in Osijek, Sisak and Slavonski Brod. 
Navigable rivers: Danube (E 80) - 137.5 km from entering Croatia near Batina to its exit near Ilok; VIc class Sava (E 80–12) - 383.2 km from Sisak until it exits Croatia near Gunja; II-IV class Drava (E 80–08) - 14 km from its mouth on the Danube to Osijek; IV class Total waterway length (2021): 534.7 km Pipelines The projected capacity of the oil pipeline is 34 million tons of oil per year, while the installed capacity is 20 million tons per year. The system was built for the needs of refineries in Croatia, Slovenia, Serbia and Bosnia and Herzegovina, as well as users in Hungary, the Czech Republic and Slovakia. The total capacity of the storage space today is 2,100,000 m3 for crude oil and 242,000 m3 for petroleum products. The pipeline is long and it is fully controlled by JANAF. The system consists of: the reception and dispatch terminal at Omišalj on the island of Krk, with two berths for tankers and storage space for oil and derivatives; receiving and dispatching terminals in Sisak, Virje and Slavonski Brod, with oil storage space at the Sisak and Virje terminals; and the Žitnjak Terminal in Zagreb, for the storage of petroleum products, with railway and truck transfer stations for the delivery, reception and dispatch of derivatives. Natural gas is transported by Plinacro, which operates the transmission system in 19 counties, with more than 450 overhead transmission system facilities, including a compressor station and 156 metering and reduction stations through which gas is delivered to system users. The system houses the Okoli underground storage facility with a working volume of 553 million cubic meters of natural gas. Public transport Public transport within most of the largest cities (and their suburbs/satellite towns) in Croatia is mostly provided by city buses owned and operated by municipal organizations such as Zagrebački električni tramvaj in Zagreb, Promet Split in Split, "Autotrolej d.o.o." in Rijeka, "Liburnija Zadar" in Zadar, "Gradski Prijevoz Putnika d.o.o." in Osijek, etc. In addition to city buses, the cities of Zagreb and Osijek have tram networks. Tram lines in Zagreb are operated by Zagrebački električni tramvaj, while the tram lines in Osijek are operated by "Gradski Prijevoz Putnika d.o.o.". The tram network in the capital city of Zagreb is, however, far more extensive than the one in Osijek. See also Croatian car number plates Transport in Zagreb Hrvatske autoceste Croatian Railways List of E-roads in Croatia References
2,490
5,598
https://en.wikipedia.org/wiki/Economy%20of%20Cyprus
Economy of Cyprus
The economy of Cyprus is a high-income economy as classified by the World Bank, and was included by the International Monetary Fund in its list of advanced economies in 2001. Cyprus adopted the euro as its official currency on 1 January 2008, replacing the Cypriot pound at an irrevocable fixed exchange rate of CYP 0.585274 per €1. The 2012–2013 Cypriot financial crisis, part of the wider European debt crisis, has dominated the country's economic affairs in recent times. In March 2013, the Cypriot government reached an agreement with its eurozone partners to split the country's second biggest bank, the Cyprus Popular Bank (also known as Laiki Bank), into a "bad" bank which would be wound down over time and a "good" bank which would be absorbed by the larger Bank of Cyprus. In return for a €10 billion bailout from the European Commission, the European Central Bank and the International Monetary Fund, the Cypriot government would be required to impose a significant haircut on uninsured deposits. Insured deposits of €100,000 or less would not be affected. After a three-and-a-half-year recession, Cyprus returned to growth in the first quarter of 2015. Cyprus successfully concluded its three-year financial assistance programme at the end of March 2016, having borrowed a total of €6.3 billion from the European Stability Mechanism and €1 billion from the IMF. The remaining €2.7 billion of the ESM bailout was never dispensed, due to the Cypriot government's better than expected finances over the course of the programme. Economy in the government-controlled area Cyprus has an open, free-market, service-based economy with some light manufacturing. Internationally, Cyprus promotes its geographical location as a "bridge" between East and West, along with its educated English-speaking population, moderate local costs, good airline connections, and telecommunications. Since gaining independence from the United Kingdom in 1960, Cyprus has had a record of successful economic performance, reflected in strong growth, full employment conditions and relative stability. The underdeveloped agrarian economy inherited from colonial rule has been transformed into a modern economy, with dynamic services, industrial and agricultural sectors and an advanced physical and social infrastructure. The Cypriots are among the most prosperous people in the Mediterranean region, with GDP per capita in 2022 approaching $30,000 in nominal terms and $50,000 on the basis of purchasing power parity. Their standard of living is reflected in the country's "very high" Human Development Index, and Cyprus is ranked 23rd in the world in terms of the Quality-of-life Index. However, after more than three decades of unbroken growth, the Cypriot economy contracted in 2009. This reflected the exposure of Cyprus to the Great Recession and European debt crisis. In recent times, concerns have been raised about the state of public finances and spiralling borrowing costs. Furthermore, Cyprus was dealt a severe blow by the Evangelos Florakis Naval Base explosion in July 2011, with the cost to the economy estimated at €1–3 billion, or up to 17% of GDP. The economic achievements of Cyprus during the preceding decades have been significant, bearing in mind the severe economic and social dislocation created by the Turkish invasion of 1974 and the continuing occupation of the northern part of the island by Turkey. 
The Turkish invasion inflicted a serious blow to the Cyprus economy and in particular to agriculture, tourism, mining and Quarrying: 70 percent of the island's wealth-producing resources were lost, the tourist industry lost 65 percent of its hotels and tourist accommodation, the industrial sector lost 46 percent, and mining and quarrying lost 56 percent of production. The loss of the port of Famagusta, which handled 83 percent of the general cargo, and the closure of Nicosia International Airport, in the buffer zone, were additional setbacks. The success of Cyprus in the economic sphere has been attributed, inter alia, to the adoption of a market-oriented economic system, the pursuance of sound macroeconomic policies by the government as well as the existence of a dynamic and flexible entrepreneurship and a highly educated labor force. Moreover, the economy benefited from the close cooperation between the public and private sectors. In the past 30 years, the economy has shifted from agriculture to light manufacturing and services. The services sector, including tourism, contributes almost 80% to GDP and employs more than 70% of the labor force. Industry and construction account for approximately one-fifth of GDP and labor, while agriculture is responsible for 2.1% of GDP and 8.5% of the labor force. Potatoes and citrus are the principal export crops. After robust growth rates in the 1980s (average annual growth was 6.1%), economic performance in the 1990s was mixed: real GDP growth was 9.7% in 1992, 1.7% in 1993, 6.0% in 1994, 6.0% in 1995, 1.9% in 1996 and 2.3% in 1997. This pattern underlined the economy's vulnerability to swings in tourist arrivals (i.e., to economic and political conditions in Cyprus, Western Europe, and the Middle East) and the need to diversify the economy. Declining competitiveness in tourism and especially in manufacturing are expected to act as a drag on growth until structural changes are effected. Overvaluation of the Cypriot pound prior to the adoption of the euro in 2008 had kept inflation in check. Trade is vital to the Cypriot economy — the island is not self-sufficient in food and until the recent offshore gas discoveries had few known natural resources – and the trade deficit continues to grow. Cyprus must import fuels, most raw materials, heavy machinery, and transportation equipment. More than 50% of its trade is with the rest of the European Union, especially Greece and the United Kingdom, while the Middle East receives 20% of exports. In 1991, Cyprus introduced a value-added tax (VAT), which is at 19% as of 13 January 2014. Cyprus ratified the new world trade agreement (General Agreement on Tariffs and Trade, GATT) in 1995 and began implementing it fully on 1 January 1996. EU accession negotiations started on 31 March 1998, and concluded when Cyprus joined the organization as a full member in 2004. Investment climate The Cyprus legal system is founded on English law, and is therefore familiar to most international financiers. Cyprus's legislation was aligned with EU norms in the period leading up to EU accession in 2004. Restrictions on foreign direct investment were removed, permitting 100% foreign ownership in many cases. Foreign portfolio investment in the Cyprus Stock Exchange was also liberalized. In 2002 a modern, business-friendly tax system was put in place with a 12.5% corporate tax rate, one of the lowest in the EU. 
Cyprus has concluded treaties on double taxation with more than 40 countries, and, as a member of the Eurozone, has no exchange restrictions. Non-residents and foreign investors may freely repatriate proceeds from investments in Cyprus. Role as a financial hub In the years following the dissolution of the Soviet Union it gained great popularity as a portal for investment from the West into Russia and Eastern Europe, becoming for companies of that origin the most common tax haven. More recently, there have been increasing investment flows from the West through Cyprus into Asia, particularly China and India, South America and the Middle East. In addition, businesses from outside the EU use Cyprus as their entry-point for investment into Europe. The business services sector remains the fastest growing sector of the economy, and had overtaken all other sectors in importance. CIPA has been fundamental towards this trend. As of 2016, CySEC (the Financial Regulator), regulates many of the world's biggest brands in retail forex as they generally see it as an efficient way to get an EU operating license and industry know-how. Agriculture Cyprus produced in 2018: 106 thousand tons of potato; 37 thousand tons of tangerine; 23 thousand tons of grape; 20 thousand tons of orange; 19 thousand tons of grapefruit; 19 thousand tons of olive; 18 thousand tons of wheat; 18 thousand tons of barley; 15 thousand tons of tomato; 13 thousand tons of watermelon; 10 thousand tons of melon; In addition to smaller productions of other agricultural products. Oil and gas Surveys suggest more than 100 trillion cubic feet (2.831 trillion cubic metres) of reserves lie untapped in the eastern Mediterranean basin between Cyprus and Israel – almost equal to the world's total annual consumption of natural gas. In 2011, Noble Energy estimated that a pipeline to Leviathan gas field could be in operation as soon as 2014 or 2015. In January 2012, Noble Energy announced a natural gas field discovery. It attracted Shell, Delek and Avner as partners. Several production sharing contracts for exploration were signed with international companies, including Eni, KOGAS, TotalEnergies, ExxonMobil and QatarEnergy. It is necessary to develop infrastructure for landing the gas in Cyprus and for liquefaction for export. Role as a shipping hub Cyprus constitutes one of the largest ship management centers in the world; around 50 ship management companies and marine-related foreign enterprises are conducting their international activities in the country while the majority of the largest ship management companies in the world have established fully fledged offices on the island. Its geographical position at the crossroads of three continents and its proximity to the Suez Canal has promoted merchant shipping as an important industry for the island nation. Cyprus has the tenth-largest registered fleet in the world, with 1,030 vessels accounting for 31,706,000 dwt as of 1 January 2013. Tourism Tourism is an important factor of the island state's economy, culture, and overall brand development. With over 2 million tourist arrivals per year, it is the 40th most popular destination in the world. However, per capita of local population, it ranks 17th. The industry has been honored with various international awards, spanning from the Sustainable Destinations Global Top 100, VISION on Sustainable Tourism, Totem Tourism and Green Destination titles bestowed to Limassol and Paphos in December 2014. The island beaches have been awarded with 57 Blue Flags. 
Cyprus became a full member of the World Tourism Organization when it was created in 1975. According to the World Economic Forum's 2013 Travel and Tourism Competitiveness Index, Cyprus' tourism industry ranks 29th in the world in terms of overall competitiveness. In terms of Tourism Infrastructure, in relation to the tourism industry Cyprus ranks 1st in the world. The Cyprus Tourism Organization has a status of a semi-governmental organisation charged with overseeing the industry practices and promoting the island worldwide. Trade In 2008 fiscal aggregate value of goods and services exported by Cyprus was in the region of $1.53 billion. It primarily exported goods and services such as citrus fruits, cement, potatoes, clothing and pharmaceuticals. At that same period total financial value of goods and services imported by Cyprus was about $8.689 billion. Prominent goods and services imported by Cyprus in 2008 were consumer goods, machinery, petroleum and other lubricants, transport equipment and intermediate goods. Cypriot trade partners Traditionally Greece has been a major export and import partner of Cyprus. In fiscal 2007, it amounted for 21.1 percent of total exports of Cyprus. At that same period it was responsible for 17.7 percent of goods and services imported by Cyprus. Some other important names in this regard are UK and Italy. Eurozone crisis In 2012, Cyprus became affected by the Eurozone financial and banking crisis. In June 2012, the Cypriot government announced it would need € of foreign aid to support the Cyprus Popular Bank, and this was followed by Fitch down-grading Cyprus's credit rating to junk status. Fitch said Cyprus would need an additional € to support its banks and the downgrade was mainly due to the exposure of Bank of Cyprus, Cyprus Popular Bank and Hellenic Bank (Cyprus's 3 largest banks) to the Greek financial crisis. In June 2012 the Cypriot finance minister, Vassos Shiarly, stated that the European Central Bank, European commission and IMF officials are to carry out an in-depth investigation into Cyprus' economy and banking sector to assess the level of funding it requires. The Ministry of Finance rejected the possibility that Cyprus would be forced to undergo the sweeping austerity measures that have caused turbulence in Greece, but admitted that there would be "some negative repercussion". In November 2012 international lenders negotiating a bailout with the Cypriot government have agreed on a key capital ratio for banks and a system for the sector's supervision. Both commercial banks and cooperatives will be overseen by the Central Bank and the Ministry of Finance. They also set a core Tier 1 ratio – a measure of financial strength – of 9% by the end of 2013 for banks, which could then rise to 10% in 2014. In 2014, Harris Georgiades pointed that exiting the Memorandum with the European troika required a return to the markets. This he said, required "timely, effective and full implementation of the program." The Finance Minister stressed the need to implement the Memorandum of understanding without an additional loan. In 2015, Cyprus was praised by the President of the European Commission for adopting the austerity measures and not hesitating to follow a tough reform program. In 2016, Moody's Investors Service changed its outlook on the Cypriot banking system to positive from stable, reflecting the view that the recovery will restore banks to profitability and improve asset quality. 
The quick economic recovery was driven by tourism, business services and increased consumer spending. Creditor confidence was also strengthened, allowing Bank of Cyprus to reduce its Emergency Liquidity Assistance to €2.0 billion (from €9.4 billion in 2013). Within the same period, Bank of Cyprus chairman Josef Ackermann urged the European Union to pledge financial support for a permanent solution to the Cyprus dispute. Economy of Northern Cyprus The economy of Turkish-occupied northern Cyprus is about one-fifth the size of the economy of the government-controlled area, while GDP per capita is around half. Because the de facto administration is recognized only by Turkey, it has had much difficulty arranging foreign financing, and foreign firms have hesitated to invest there. The economy mainly revolves around the agricultural sector and government service, which together employ about half of the work force. The tourism sector also contributes substantially to the economy. Moreover, the small economy has suffered setbacks because the Turkish lira is legal tender. To compensate for the economy's weakness, Turkey has been known to provide significant financial aid. In both parts of the island, water shortage is a growing problem, and several desalination plants are planned. The economic disparity between the two communities is pronounced. Although the economy operates on a free-market basis, the lack of private and government investment, shortages of skilled labor and experienced managers, and inflation and the devaluation of the Turkish lira continue to plague the economy. Trade with Turkey Turkey is by far the main trading partner of Northern Cyprus, supplying 55% of imports and absorbing 48% of exports. In a landmark case, the European Court of Justice (ECJ) ruled on 5 July 1994 against the British practice of importing produce from Northern Cyprus based on certificates of origin and phytosanitary certificates granted by the de facto authorities. The ECJ decided that only goods bearing certificates of origin from the internationally recognized Republic of Cyprus could be imported by EU member states. The decision resulted in a considerable decrease in Turkish Cypriot exports to the EU: from $36.4 million (or 66.7% of total Turkish Cypriot exports) in 1993 to $24.7 million (or 35% of total exports) in 1996. Even so, the EU continues to be the second-largest trading partner of Northern Cyprus, with a 24.7% share of total imports and 35% share of total exports. The most important exports of Northern Cyprus are citrus and dairy products. These are followed by rakı, scrap and clothing. Assistance from Turkey is the mainstay of the Turkish Cypriot economy. Under the latest economic protocol (signed 3 January 1997), Turkey has undertaken to provide loans totalling $250 million for the purpose of implementing projects included in the protocol related to public finance, tourism, banking, and privatization. Fluctuation in the Turkish lira, which suffered from hyperinflation every year until its replacement by the Turkish new lira in 2005, exerted downward pressure on the Turkish Cypriot standard of living for many years. The de facto authorities have instituted a free market in foreign exchange and permit residents to hold foreign-currency denominated bank accounts. This encourages transfers from Turkish Cypriots living abroad. Happiness Economic factors such as GDP and national income strongly correlate with the happiness of a nation's citizens. 
In a study published in 2005, citizens from a sample of countries were asked to rate how happy or unhappy they were as a whole on a scale of 1 to 7 (Ranking: 1. Completely happy, 2. Very happy, 3. Fairly happy,4. Neither happy nor unhappy, 5. Fairly unhappy, 6. Very unhappy, 7. Completely unhappy.) Cyprus had a score of 5.29. On the question of how satisfied citizens were with their main job, Cyprus scored 5.36 on a scale of 1 to 7 (Ranking: 1. Completely satisfied, 2. Very satisfied, 3. Fairly satisfied, 4. Neither satisfied nor dissatisfied, 5. Fairly dissatisfied, 6. Very dissatisfied, 7. Completely dissatisfied.) In another ranking of happiness, Northern Cyprus ranks 58 and Cyprus ranks 61, according to the 2018 World Happiness Report. The report rates 156 countries based on variables including income, healthy life expectancy, social support, freedom, trust, and generosity. Economic factors play a significant role in the general life satisfaction of Cyprus citizens, especially with women who participate in the labor force at a lower rate, work in lower ranks, and work in more public and service sector jobs than the men. Women of different skill-sets and "differing economic objectives and constraints" participate in the tourism industry. Women participate in this industry through jobs like hotel work to serve and/or bring pride to their family, not necessarily to satisfy their own selves. In this study, women with income higher than the mean household income reported higher levels of satisfaction with their lives while those with lower income reported the opposite. When asked who they compare themselves with (those with lower, same, or higher economic status), results showed that those that compared themselves with people of higher economic statuses than them had the lowest level of life satisfaction. While the correlation of income and happiness is positive, it is significantly low; there is stronger correlation between comparison and happiness. This indicates that not only income level but income level in relation to that of others affects their amount of life satisfaction. Classified as a Mediterranean welfare regime, Cyprus has a weak public Welfare system. This means there is a strong reliance on the family, instead of the state, for both familial and economic support. Another finding is that being a full-time housewife has a stronger negative effect on happiness for women of Northern Cyprus than being unemployed, showing how the combination of gender and the economic factor of participating in the labor force affects life satisfaction. Economic factors also negatively correlate with the happiness levels of those that live in the capital city: citizens living in the capital express lower levels of happiness. As found in this study, citizens of Cyprus that live in its capital, Nicosia, are significantly less happy than others whether or not socio-economic variables are controlled for. Another finding was that the young people in the capital are unhappier than the rest of Cyprus; the old are not. See also Cypriot pound Economy of Europe References Cyprus. The World Factbook. Central Intelligence Agency. External links Cyprus Cyprus
2,501
5,622
https://en.wikipedia.org/wiki/C.%20Northcote%20Parkinson
C. Northcote Parkinson
Cyril Northcote Parkinson (30 July 1909 – 9 March 1993) was a British naval historian and author of some 60 books, the most famous of which was his best-seller Parkinson's Law (1957), in which Parkinson advanced Parkinson's law, stating that "work expands so as to fill the time available for its completion", an insight which led him to be regarded as an important scholar in public administration and management. Early life and education The youngest son of William Edward Parkinson (1871–1927), an art master at North East County School and from 1913 principal of York School of Arts and Crafts, and his wife, Rose Emily Mary Curnow (born 1877), Parkinson attended St. Peter's School, York, where in 1929 he won an Exhibition to study history at Emmanuel College, Cambridge. He received a BA degree in 1932. As an undergraduate, Parkinson developed an interest in naval history, which he pursued when the Pellew family gave him access to family papers at the recently established National Maritime Museum. The papers formed the basis of his first book, Edward Pellew, Viscount Exmouth, Admiral of the Red. In 1934, then a graduate student at King's College London, he wrote his PhD thesis on Trade and War in the Eastern Seas, 1803–1810, which was awarded the Julian Corbett Prize in Naval History for 1935. Academic and military career While a graduate student in 1934, Parkinson was commissioned into the Territorial Army in the 22nd London Regiment (The Queen's), was promoted to lieutenant the same year, and commanded an infantry company at the jubilee of King George V in 1935. In the same year, Emmanuel College, Cambridge elected him a research fellow. While at Cambridge, he commanded an infantry unit of the Cambridge University Officers' Training Corps. He was promoted to captain in 1937. He became senior history master at Blundell's School in Tiverton, Devon in 1938 (and a captain in the school's OTC), then instructor at the Royal Naval College, Dartmouth in 1939. In 1940, he joined the Queen's Royal Regiment as a captain and undertook a range of staff and military teaching positions in Britain. In 1943 he married Ethelwyn Edith Graves (born 1915), a nurse tutor at Middlesex Hospital, with whom he had two children. Demobilized as a major in 1945, he was a lecturer in history at the University of Liverpool from 1946 to 1949. In 1950, he was appointed Raffles Professor of History at the new University of Malaya in Singapore. While there, he initiated an important series of historical monographs on the history of Malaya, publishing the first in 1960. A movement developed in the mid-1950s to establish two campuses, one in Kuala Lumpur and one in Singapore. Parkinson attempted to persuade the authorities to avoid dividing the university by maintaining it in Johor Bahru to serve both Singapore and Malaya. His efforts were unsuccessful and the two campuses were established in 1959. The Singapore campus later became the University of Singapore. Parkinson divorced in 1952 and he married the writer and journalist Ann Fry (1921–1983), with whom he had two sons and a daughter. In 1958, while still in Singapore, he published his most famous work, Parkinson's Law, which expanded upon a humorous article that he had published in the Economist magazine in November 1955, satirising government bureaucracies. The 120-page book of short studies, published in the United States and then in Britain, was illustrated by Osbert Lancaster and became an instant best seller. 
It explained the inevitability of bureaucratic expansion, arguing that 'work expands to fill the time available for its completion'. Typical of his satire and cynical humour, it included a discourse on Parkinson's Law of Triviality (debates about expenses for a nuclear plant, a bicycle shed, and refreshments), a note on why driving on the left side of the road (see road transport) is natural, and suggested that the Royal Navy would eventually have more admirals than ships. After serving as visiting professor at Harvard University in 1958, and at the University of Illinois and the University of California, Berkeley in 1959–60, he resigned his post in Singapore to become an independent writer. To avoid high taxation in Britain, he moved to the Channel Islands and settled at St Martin's, Guernsey, where he purchased Les Caches Hall. In Guernsey, he was a very active member of the community and was committed to the feudal heritage of the island. He even financed a historical re-enactment of the Chevauchée de Saint Michel (Cavalcade) by the Court of Seigneurs and wrote a newspaper article about it. He was an official member of the Royal Court of Chief Pleas in his capacity as Seigneur d'Anneville, having acquired the manorial rights of the Fief d'Anneville. Attendance at the Royal Court of Chief Pleas is considered very important in Guernsey, as it is the island's oldest court and its first historical self-governing body. As a feudal member, he was therefore roughly the equivalent of a temporal lord in Guernsey. As Anneville is in some ways considered the oldest fief of the island, and its possessor "the first in rank after the clergy", he took a keen interest in his fief and its historical possessions. In 1968 he purchased and restored Anneville Manor, the historic manor house of the Seigneurie (or fief) d'Anneville, and in 1971 he restored the Chapel of Thomas d'Anneville pertaining to the same fief. His writings from this period included a series of historical novels featuring a fictional naval officer from Guernsey, Richard Delancey, during the Napoleonic era. In the novels, Richard Delancey is Seigneur of the Fief d'Anneville; Parkinson, who liked to point out that he himself was Seigneur of the fief and who had moved to Anneville Manor (le manoir d'Anneville), in this respect made Delancey something of a mirror image of himself. In 1969 he was invited to deliver the MacMillan Memorial Lecture to the Institution of Engineers and Shipbuilders in Scotland. He chose the subject "The Status of the Engineer". Parkinson and his 'law' Parkinson's law, which provides insight into a primary barrier to efficient time management, states that, "work expands so as to fill the time available for its completion". This articulates a situation and an unexplained force that many have come to take for granted and accept. "In exactly the same way nobody bothered and nobody cared, before Newton's day, why an apple should drop to the ground when it might so easily fly up after leaving the tree," wrote Straits Times editor-in-chief Allington Kennard, who continued, "There is less gravity in Professor Parkinson's Law, but hardly less truth." Parkinson first published his law in a humorous satirical article in the Economist on 19 November 1955, meant as a critique of the efficiency of public administration and civil service bureaucracy, and of the continually rising headcount, and related cost, attached to these. 
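The 1955 article dressed this satire in pseudo-mathematical form, ending in a staff-growth formula. The rendering below follows the way the formula is commonly quoted from the article and should be read as a paraphrase rather than a verbatim reproduction; the readings of the symbols are those usually given in summaries of the piece.

x = \frac{2k^{m} + l}{n}

Here x is the number of new staff to be appointed each year, k the number of officials seeking promotion through the appointment of subordinates, l the difference between the ages of appointment and retirement, m the number of man-hours devoted to answering minutes within the department, and n the number of effective units being administered. On this basis Parkinson put the annual growth in staff at roughly 5 to 7 per cent, regardless of any variation in the amount of work to be done.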
That article noted that, "Politicians and taxpayers have assumed (with occasional phases of doubt) that a rising total in the number of civil servants must reflect a growing volume of work to be done." The law examined two sub-laws, The Law of Multiplication of Subordinates, and The Law of Multiplication of Work, and provided 'scientific proof' of the validity of these, including mathematical formulae. Two years later, the law was revisited when Parkinson's new books, Parkinson's Law And Other Studies in Administration and Parkinson's Law: Or The Pursuit of Progress were published in 1957. In Singapore, where he was teaching at the time, this began a series of talks where he addressed diverse audiences in person, in print and over the airwaves on 'Parkinson's Law'. For example, on 16 October 1957, at 10 a.m., he spoke on this at the International Women's Club programme talk held at the Y.W.C.A. at Raffles Quay. The advent of his new book as well as an interview during his debut talk was covered in an editorial in The Straits Times, shortly after, entitled, "A professor's cocktail party secret: They arrive half an hour late and rotate." Time, which also wrote about the book noted that its theme was "a delightfully unprofessional diagnosis of the widespread 20th century malady — galloping orgmanship." Orgmanship, according to Parkinson, was "the tendency of all administrative departments to increase the number of subordinate staff, 'irrespective of the amount of work (if any) to be done," as noted by the Straits Times. Parkinson, it was reported, wanted to trace the illegibility of signatures, the attempt being made to fix the point in a successful executive career at which the handwriting becomes meaningless, even to the executive himself." Straits Times editor-in-chief Allington Kennard's editorial, "Twice the staff for half the work," in mid-April, 1958 touched on further aspects or sub-laws, like Parkinson's Law of Triviality, and also other interesting, if dangerous areas like, "the problem of the retirement age, how not to pay Singapore income tax when a millionaire, the point of vanishing interest in high finance, how to get rid of the company chairman," etc. The author supported Parkinson's Law of Triviality — which states that, "The time spent on any item of an agenda is in inverse proportion to the sum involved," with a local example where it took the Singapore City Council "six hours to pick a new man for the gasworks and two and a half minutes to approve a $100 million budget." It is possible that the book, humorous though it is, may have touched a raw nerve among the administration at that time. As J. D. Scott, in his review of Parkinson's book two weeks later, notes, "Of course, Parkinson's Law, like all satire, is serious — it wouldn't be so comic if it weren't — and because it is serious there will be some annoyance and even dismay under the smiles." His celebrity did not remain local. Parkinson travelled to England, arriving there aboard the P. & O. Canton, in early June 1958, as reported by Reuters, and made the front page of the Straits Times on the 9th of June. Reporting from London on Saturday 14 June 1958, Hall Romney wrote, "Prof. C. N. Parkinson of the University of Malaya, whose book, Parkinson's Law has sold more than 80,000 copies, has had a good deal of publicity since he arrived in England in the Canton." 
Romney noted that, "a television interview was arranged, a profile of him appeared in a highbrow Sunday newspaper, columnists gave him almost as much space as they gave to Leslie Charteris, and he was honoured by the Institute of Directors, whose reception was attended by many of the most notable men in the commercial life of London." Then satire was answered in earnest when another Reuters release, republished in the Straits Times under the title "Parkinson's Law at work in the UK," reported: "A PARLIAMENTARY committee, whose job is to see that British Government departments do not waste the taxpayer's money, said yesterday it was alarmed at the rate of staff increases in certain sections of the War Office, Admiralty and Air Ministry..." In March 1959, further publicity occurred when the Royal Navy in Singapore took umbrage at a remark Parkinson had made shortly before, during a talk in Manchester about his new book on the wastage of public money. Parkinson is reported to have said, "Britain spent about $500 million building a naval base there [Singapore] and the only fleet which has used it is the Japanese." A navy spokesman, attempting to counter that statement, said that the Royal Navy's Singapore base had only been completed in 1939, and, while it was confirmed that the Japanese had indeed used it during the Second World War, it had been used extensively by the Royal Navy's Far East fleet after the war. Emeritus Professor of Japanese Studies at the University of Oxford, Richard Storry, writing in the Oxford Mail, 16 May 1962, noted, "The fall of Singapore is still viewed with anger and shame in Britain." On Thursday 10 September 1959, at 10 p.m., Radio Singapore listeners got to experience his book, Parkinson's Law, set to music by Nesta Pain. The serialised programme continued until the end of February 1960. Parkinson and his law continued to find their way into Singapore newspapers through the decades.

University of Malaya

Singapore was introduced to him almost immediately upon his arrival there, through exposure in the newspapers and a number of public appearances. Parkinson started teaching at the University of Malaya in Singapore at the beginning of April 1950.

Public lectures

The first lecture of the Raffles Professor of History was a public lecture given at the Oei Tiong Ham Hall, on 19 May. Parkinson, who was speaking on "The Task of the Historian," began by noting that the new Raffles history chair was aptly named because it was Sir Stamford Raffles who had tried to found the university in 1823 and because Raffles himself was a historian. There was a large audience, including Professor Alexander Oppenheim, the university's Dean of the Faculty of Arts. The text of his lecture was then reproduced and published over two issues of The Straits Times a few days later. On 17 April 1953, he addressed the public on "The Historical Aspect of the Coronation," at the Singapore YMCA Hall. Sponsored by the Malayan Historical Society, Parkinson gave a talk on the "Modern history of Taiping" at the residence of the District Officer, Larut and Matang on 12 August 1953. Sponsored by the Singapore branch of the Malayan Historical Society, on 5 February 1954 Parkinson gave a public lecture on "Singapore in the sixties" [1860s] at St. Andrew's Cathedral War Memorial Hall. Sponsored by the Seremban branch of the Historical Society of Malaya, Parkinson spoke on tin mining at the King George V School, Seremban.
He said, in the past, Chinese labourers were imported from China at $32 a head to work the tin fields of Malaya. He said that mining developed steadily after British protection had been established, and that tin from Negri Sembilan in the 1870s came from Sungei Ujong and Rembau and was worked with capital from Malacca. He noted that the Chinese, working side by side with Europeans, did better with their primitive methods and made great profits when they took over mines that Europeans abandoned. Arranged by the Indian University Graduates Association of Singapore, Parkinson gave a talk on "Indian Political Thought," at the USIS theatrette on 16 February 1955. On 10 March 1955, he spoke on "What I think about Colonialism," at the British Council Hall, Stamford Road, Singapore at 6.30 p.m. In his lecture, he argued that nationalism, which was generally believed to be good, and colonialism, which was seen as the reverse, were not necessarily opposite ideas but the same thing seen from different angles. He thought the gifts from Britain that Malaya and Singapore should value most and retain when they became self-governing included debate, literature (not comics), armed forces' tradition (not police state), arts, tolerance and humour (not puritanism) and public spirit.

Public exhibitions

On 18 August 1950, Parkinson opened a week-long exhibition on the "History of English Handwriting," at the British Council centre, Stamford Road, Singapore. On 21 March 1952, he opened an exhibition of photographs from The Times of London which had been shown widely in different parts of the world. The exhibition comprised a selection of photographs spanning 1921 to 1951. 140 photographs were on display for a month at the British Council Hall, Singapore, showing scenes ranging from the German surrender to the opening of the Festival of Britain by the late King. He opened an exhibition of photographs taken by students of the University of Malaya during their tour of India, at the University Arts Theatre in Cluny Road, Singapore, 10 October 1953.

Victor Purcell

Towards the end of August, the Professor of Far Eastern History at Cambridge University, Dr. Victor Purcell, who was also a former Acting Secretary of Chinese Affairs in Singapore, addressed the Kuala Lumpur Rotary Club. The Straits Times, quoting Purcell, noted, "Professor C. N. Parkinson had been appointed to the Chair of History at the University of Malaya and 'we can confidently anticipate that under his direction academic research into Malaya's history will assume a creative aspect which it has not possessed before.'"

Johore Transfer Committee

In October, Parkinson was appointed, by the Senate of the University of Malaya, to head a special committee of experts to consult on technical details regarding the transfer of the University to Johore. Along with him were Professor R. E. Holttum (Botany), and Acting Professors C. G. Webb (Physics) and D. W. Fryer (Geography).

Library and Museum

In November, Parkinson was appointed a member of the Committee for the management of Raffles Library and Museum, replacing Professor G. G. Hough, who had resigned. In March 1952, Parkinson proposed a central public library for Singapore as a memorial to King George VI, commemorating that monarch's reign. He is reported to have said, "Perhaps the day has gone by for public monuments except in a useful form. And if that be so, might not some enterprise of local importance be graced with the late King's name?
One plan he could certainly have warmly approved would be that of building a Central Public Library," he opined. Parkinson noted that the Raffles Library was growing in usefulness and would, in a short time, outgrow the building that then housed it. He said, given the educational work that was producing a large literate population demanding books in English, Malay and Chinese, what was surely needed was a genuinely public library, air-conditioned to preserve the books, and of a design to make those books readily accessible. He suggested that the building, equipment and maintenance of the public library ought to be the responsibility of the Municipality rather than the Government. T. P. F. McNeice, the then President of the Singapore City Council, as well as leading educationists of the time, thought the suggestion "an excellent, first-class suggestion to meet a definite and urgent need." McNeice also agreed that the project ought to be the responsibility of the City Council. Also in favour of the idea were the Director of Education, A. W. Frisby, who thought that there ought to be branches of the library fed by the central library; Raffles Institution Principal P. F. Howitt; Canon R. K. S. Adams (Principal of St. Andrew's School); and Homer Cheng, the President of the Chinese Y.M.C.A. The Principal of the Anglo-Chinese School, H. H. Peterson, suggested the authorities also consider a mobile school library. While Parkinson had originally suggested that this be a Municipal and not a Government undertaking, something changed. A public meeting, convened by the Friends of Singapore (Parkinson was its President) at the British Council Hall on 15 May, decided that Singapore's memorial to King George VI would take the form of a public library, possibly with mobile units and sub-libraries in the out-of-town districts. Parkinson, in addressing the assembly, noted that Raffles Library was not a free library, did not have vernacular sections, and its building could not be air-conditioned. McNeice, the Municipal President, then proposed that a resolution be sent to Government stating that the meeting considered the most appropriate memorial to the late King to be a library (or libraries), and urging Government to set up a committee with sufficient non-Government representation to consider the matter. The Government got involved, and a Government spokesperson spoke to the Straits Times about this on 16 May, saying that the Singapore Government welcomed proposals from the public on the form which a memorial to King George ought to take, whether a public library, as suggested by Parkinson, or some other form. In the middle of 1952, the Singapore Government began setting up a committee to consider the suggestions made on the form Singapore's memorial to King George VI ought to take. G. G. Thomson, the Government's Public Relations Secretary, informed the Straits Times that the committee would have official and non-Government representation and added that, apart from Parkinson's suggestion of a free public library, a polytechnic had also been suggested. W. L. Blythe, the Colonial Secretary, making it clear where his vote lay, pointed out that Singapore, at that time, already had a library, the Raffles Library. From news coverage we learn that yet another committee had been formed, this time to consider what would be necessary to establish an institution along the lines of the London Polytechnic. Blythe stated that the arguments he had heard in favour of a polytechnic were very strong.
The Director of Raffles Library and Museum, W. M. F. Tweedie, was in favour of the King George VI free public library, but up to the end of November nothing had been heard of any developments towards that end. Tweedie suggested the ground beside the British Council as being suitable for such a library, and, if the public library was built, he suggested that all the books at the Raffles Library be moved to the new site, so that the space thus vacated could be used for a public art gallery. Soon after, the Government, which was not supposed to have been involved in the first place (the suggestion made by Parkinson and accepted by City Council President T. P. F. McNeice being that this be a Municipal and not a Government undertaking), approved the proposal to set up a polytechnic as a memorial to King George VI. And Singapore continued with its subscription library and was without a free public library as envisioned by Parkinson. However, his call did not go unheeded. The following year, in August 1953, the Lee Foundation pledged a dollar-for-dollar match up to $375,000 towards the establishment of a national library, provided that it was a free public library, open to men and women of every race, class, creed, and colour. It was not, however, until November 1960 that Parkinson's vision was realised, when the new library, free and open to all, was completed and opened to the public.

Film Censorship Consultative Committee

That same month he was also appointed, by the Singapore Government, Chairman of a committee set up to study film censorship in the Colony and suggest changes, if necessary. Its terms of reference were to enquire into the existing procedure and legislation relating to cinematograph film censorship and to make recommendations with a view to improving the system, including legislation. The committee was also asked to consider whether the Official Film Censor should continue to be the controller of the British film quota, and to consider the memorandum of the film trade submitted to the Governor earlier that year.

Investigating, archiving and writing Malaya's past

At the beginning of December 1950, Parkinson made an appeal, at the Singapore Rotary Club, for old log books, diaries, newspaper files, ledgers or maps accumulated over the years. He asked that these be passed to the Raffles Library or the University of Malaya library, instead of being thrown away, as they might aid research and help those studying the history of the country to set down an account of what had happened in Malaya since 1867. "The time will come when school-children will be taught the history of their own land rather than of Henry VIII or the capture of Quebec," he said. Parkinson told his audience that there was a large volume of documentary evidence about Malaya written in Portuguese and Dutch. He said that the arrival of the Pluto in Singapore, one of the first vessels to pass through the Suez Canal when it opened in 1869, might be described as the moment when British Malaya was born. "I would urge you not to scrap old correspondence just because it clutters up the office. Send it to a library where it may some day be of great value," he said. In September 1951 the magazine British Malaya published a letter from Parkinson calling for the formation of one central Archives Office where all the historical records of Malaya and Singapore could be properly preserved, pointing out that it would be of inestimable value to administrators, historians, economists, social science investigators and students.
In his letter, Parkinson, who was still abroad attending the Anglo-American Conference of Historians in London, said that the formation of an Archives Office was already under discussion, and was urgent in view of a climate in which documents were liable to damage by insects and mildew. He said that many private documents relating to Malaya were kept in the U.K., where they were not appreciated because names like Maxwell, Braddell and Swettenham might mean nothing there. "The establishment of a Malayan Archives Office would do much to encourage the transfer of these documents," he wrote. On 22 May 1953, Parkinson convened a meeting at the British Council, Stamford Road, Singapore, to form the Singapore branch of the Malayan Historical Society. Speaking at the inaugural meeting of the society's Singapore branch, Parkinson, addressing the more than 100 people attending, said the aims of the branch would be to assist in the recording of the history, folklore, tradition and customs of Malaya and its people and to encourage the preservation of objects of historical and cultural interest. Of Malayan history, he said, it "has mostly still to be written. Nor can it even be taught in the schools until that writing has been done." Parkinson had been urging the Singapore and Federation Governments to set up a national archives since 1950. In June 1953 he urged the speedy establishment of a national archives, where, "in air-conditioned rooms, on steel shelves, with proper skilled supervision and proper precaution against fire and theft, the records of Malayan history might be preserved indefinitely and at small expense." He noted that cockroaches had nibbled away at many vital documents and records, shrouding many years of Malaya's past in mystery, aided by moths and silverfish and abetted by negligent officials. A start had, by then, already been made: an air-conditioned room at the Federal Museum had already been set aside for storing important historical documents and preserving them from cockroaches and decay, the work of Peter Williams-Hunt, the Federation Director of Museums and Adviser on Aborigine Affairs, who had died that month. He noted, however, that the problems of supervising archives and collecting old documents had still to be solved. In January 1955 Parkinson formed the University of Malaya's Archaeological Society and became its first President. At its formation, the society had a membership of 53, which was reported to be the largest of its kind in Southeast Asia at the time; one contemporary report was headlined, "Drive to discover the secrets of S.E. Asia. Hundreds of amateurs will delve into mysteries of the past." In April 1956 it was reported that "For the first time, a long-needed Standard History of Malaya is to be published for students." According to the news report, a large-scale project, developing a ten-volume series, the result of ten years of research by University of Malaya staff, was then in progress, detailing events from the Portuguese occupation of 1511 to the then present day. The first volume, written by Parkinson, covered the years 1867 to 1877 and was to be published within the following three months. It was estimated that the last volume would be released after 1960. The report noted that, by that time, Parkinson and his wife had already released two books on history for junior students, entitled "The Heroes" and "Malayan Fables." Three months passed and the book remained unpublished.
It was not till 1960 that British Intervention in Malaya (1867–1877), that first volume, finally found its way onto bookshelves and into libraries. By that time, the press reported, the series had expanded into a twelve-volume set.

Malayan history syllabus

In January 1951 Parkinson was interviewed by the New Zealand film producer and director Wynona "Noni" Hope Wright. He told her of his reorganisation of the Department of History over the previous term to accommodate a new syllabus. The interview took place in Parkinson's sitting room beneath a frieze depicting Malaya's history, painted by Parkinson. Departing from the usual syllabus, Parkinson had decided to leave out European history almost entirely in order to give greater focus to Southeast Asia, particularly Malaya. The course, designed experimentally, took in the study of world history up to 1497 in the first year, the impact of different European nations on Southeast Asia in the second year, and the study of Southeast Asia, particularly Malaya, after the establishment of British influence at the Straits Settlements in the third year. Students who made it through and decided to specialise in history would then have been brought to a point where they could profitably undertake original research in the history of modern Malaya, i.e. the 19th and 20th centuries, an area where, according to Parkinson, little had been done, with hardly any serious research attempted for the period after 'the transfer' in 1867. Parkinson hoped that lecturing on this syllabus would ultimately produce a full-scale history of Malaya. This would include discovering documentation from Portuguese and Dutch sources from the time when those two countries still had a foothold in Malaya. He said that, while the development of the Straits Settlements under the East India Company was well documented (the bulk of these records being archived at the Raffles Museum), local records after 1867 were not as plentiful, and that it would be necessary to reconstruct those records from microfilm copies of documents kept in the United Kingdom. The task for the staff of the History Department was made formidable by their unfamiliarity with the Dutch and Portuguese languages. "I have no doubt that the history of Malaya must finally be written by Malayans, but we can at least do very much to prepare the way," Parkinson told Wright. "Scholars trained at this University in the spirit and technique of historical research, a study divorced from all racial and religious animosities, a study concerned only with finding the truth and explaining it in a lucid and attractive literary form, should be able to make a unique contribution to the mutual understanding of East and West," he said. "History apart, nothing seems to be of more vital importance in our time than the promotion of this understanding. In no field at the present time does the perpetuation of distrust and mutual incomprehension seem more dangerous. If we can, from this university, send forth graduates who can combine learning and ways of thought of the Far East and of the West, they may play a great part in overcoming the barriers of prejudice, insularity and ignorance," he concluded.

Radio Malaya Programs

In March 1951 Parkinson wrote a historical feature, "The China Fleet," for Radio Malaya, offering what was said to be a true account, in dramatic form, of an incident in the annals of the East India Company that had such an influence on Malaya and other parts of Southeast Asia in the early part of the nineteenth century.
On 28 January 1952, at 9.40 p.m., he talked about the founding of Singapore.

Special Constabulary

In the middle of April 1951, Parkinson was sworn in as a special constable by ASP Watson of the Singapore Special Constabulary at the Oei Tiong Ham Hall, together with other members of staff and students, who were then placed under Parkinson's supervision. The members of this special constabulary, the University Corps, were informed of their duties and powers of arrest, issued with batons, and charged with the defence of the University in the event of trouble. The Lecturer in Economics, P. Sherwood, was appointed Parkinson's assistant. These measures were taken to ensure that rioters were dispersed and ejected if they trespassed onto University grounds. Parkinson signed a notice which noted that some of the rioters who took part in the December disorders came from an area near the University buildings in Bukit Timah. These precautions were taken in advance of the Maria Hertogh appeal on Monday 16 April. The case was postponed a number of times, after which it was finally heard at the end of July.

Anglo-American Conference of Historians

Parkinson departed Singapore on Monday 18 June 1951 for London, where he represented the University of Malaya at the Fifth Anglo-American Conference of Historians from 9 to 14 July. He was to return in October at the start of the new academic year.

Resignation

In October 1958, while still on sabbatical in America (he had set off for America in May 1958 with his wife and two young children for study and travel, and was due to return to work in April 1959), Parkinson, through a letter sent from New York, resigned his position at the University of Malaya. K. G. Tregonning was, at that time, Acting Head of the History Department. Parkinson had not been the only one to resign while on leave. Professor E. H. G. Dobby of the Geography Department had also submitted his resignation while away on sabbatical leave. After deliberations, the University Council had decided, before the university's new constitution came into force on 15 January, that no legal action would be taken against Dobby, the majority of the council feeling that there was no case against him as his resignation occurred before new regulations governing sabbatical leave benefits were introduced. In Parkinson's case, however, the council determined that his resignation had been submitted after the regulations came into effect, and decided to write to him asking that he report back to work by a certain date, failing which the council would be free to take any action it thought appropriate. In July 1959, K. G. Tregonning, acting head of the History Department and History Lecturer at the University of Malaya since 1952, was appointed to fill the Raffles History Chair left vacant by Parkinson's resignation. There was nothing in the press about whether the matter between Parkinson and the university had been resolved.

Later life and death

After the death of his second wife in 1984, Parkinson married Iris Hilda Waters (d. 1994) in 1985 and moved to the Isle of Man. After two years there, they moved to Canterbury, Kent, where he died in March 1993, at the age of 83. He was buried in Canterbury, and the law named after him is quoted as his epitaph.
Published works

Richard Delancey series of naval novels
The Devil to Pay (1973) (2)
The Fireship (1975) (3)
Touch and Go (1977) (4)
Dead Reckoning (1978) (6)
So Near, So Far (1981) (5)
The Guernseyman (1982) (1)

Other nautical fiction
Manhunt (1990)

Other fiction
Ponies Plot (1965)

Biographies of fictional characters
The Life and Times of Horatio Hornblower (1970)
Jeeves: A Gentleman's Personal Gentleman (1979)

Naval history
Edward Pellew, Viscount Exmouth (1934)
The Trade Winds, Trade in the French Wars 1793–1815 (1948)
Samuel Walters, Lieut. RN (1949)
War in the Eastern Seas, 1793–1815 (1954)
Trade in the Eastern Seas (1955)
British Intervention in Malaya, 1867–1877 (1960)
Britannia Rules (1977)
Portsmouth Point, The Navy in Fiction, 1793–1815 (1948)

Other non-fiction
The Rise of the Port of Liverpool (1952)
Parkinson's Law (1957)
The Evolution of Political Thought (1958)
The Law and the Profits (1960)
In-Laws and Outlaws (1962)
East and West (1963)
Parkinsanities (1965)
Left Luggage (1967)
Mrs. Parkinson's Law: and Other Studies in Domestic Science (1968)
The Law of Delay (1970)
The fur-lined mousetrap (1972)
The Defenders, Script for a "Son et Lumière" in Guernsey (1975)
Gunpowder, Treason and Plot (1978)
The Law, or Still in Pursuit (1979)

Audio recordings
Discusses Political Science with Julian H. Franklin (10 LPs) (1959)
Explains "Parkinson's Law" (1960)
Cognitive science
Cognitive science is the interdisciplinary, scientific study of the mind and its processes with input from linguistics, psychology, neuroscience, philosophy, computer science/artificial intelligence, and anthropology. It examines the nature, the tasks, and the functions of cognition (in a broad sense). Cognitive scientists study intelligence and behavior, with a focus on how nervous systems represent, process, and transform information. Mental faculties of concern to cognitive scientists include language, perception, memory, attention, reasoning, and emotion; to understand these faculties, cognitive scientists borrow from fields such as linguistics, psychology, artificial intelligence, philosophy, neuroscience, and anthropology. The typical analysis of cognitive science spans many levels of organization, from learning and decision to logic and planning; from neural circuitry to modular brain organization. One of the fundamental concepts of cognitive science is that "thinking can best be understood in terms of representational structures in the mind and computational procedures that operate on those structures." The goal of cognitive science is to understand and formulate the principles of intelligence with the hope that this will lead to a better comprehension of the mind and of learning. The cognitive sciences began as an intellectual movement in the 1950s often referred to as the cognitive revolution. History The cognitive sciences began as an intellectual movement in the 1950s, called the cognitive revolution. Cognitive science has a prehistory traceable back to ancient Greek philosophical texts (see Plato's Meno and Aristotle's De Anima); Modernist philosophers such as Descartes, David Hume, Immanuel Kant, Benedict de Spinoza, Nicolas Malebranche, Pierre Cabanis, Leibniz and John Locke, rejected scholasticism while mostly having never read Aristotle, and they were working with an entirely different set of tools and core concepts than those of the cognitive scientist. The modern culture of cognitive science can be traced back to the early cyberneticists in the 1930s and 1940s, such as Warren McCulloch and Walter Pitts, who sought to understand the organizing principles of the mind. McCulloch and Pitts developed the first variants of what are now known as artificial neural networks, models of computation inspired by the structure of biological neural networks. Another precursor was the early development of the theory of computation and the digital computer in the 1940s and 1950s. Kurt Gödel, Alonzo Church, Alan Turing, and John von Neumann were instrumental in these developments. The modern computer, or Von Neumann machine, would play a central role in cognitive science, both as a metaphor for the mind, and as a tool for investigation. The first instance of cognitive science experiments being done at an academic institution took place at MIT Sloan School of Management, established by J.C.R. Licklider working within the psychology department and conducting experiments using computer memory as models for human cognition. In 1959, Noam Chomsky published a scathing review of B. F. Skinner's book Verbal Behavior. At the time, Skinner's behaviorist paradigm dominated the field of psychology within the United States. Most psychologists focused on functional relations between stimulus and response, without positing internal representations. 
Chomsky argued that in order to explain language, we needed a theory like generative grammar, which not only attributed internal representations but characterized their underlying order. The term cognitive science was coined by Christopher Longuet-Higgins in his 1973 commentary on the Lighthill report, which concerned the then-current state of artificial intelligence research. In the same decade, the journal Cognitive Science and the Cognitive Science Society were founded. The founding meeting of the Cognitive Science Society was held at the University of California, San Diego in 1979, which resulted in cognitive science becoming an internationally visible enterprise. In 1972, Hampshire College started the first undergraduate education program in Cognitive Science, led by Neil Stillings. In 1982, with assistance from Professor Stillings, Vassar College became the first institution in the world to grant an undergraduate degree in Cognitive Science. In 1986, the first Cognitive Science Department in the world was founded at the University of California, San Diego. In the 1970s and early 1980s, as access to computers increased, artificial intelligence research expanded. Researchers such as Marvin Minsky would write computer programs in languages such as LISP to attempt to formally characterize the steps that human beings went through, for instance, in making decisions and solving problems, in the hope of better understanding human thought, and also in the hope of creating artificial minds. This approach is known as "symbolic AI". Eventually the limits of the symbolic AI research program became apparent. For instance, it seemed to be unrealistic to comprehensively list human knowledge in a form usable by a symbolic computer program. The late 80s and 90s saw the rise of neural networks and connectionism as a research paradigm. Under this point of view, often attributed to James McClelland and David Rumelhart, the mind could be characterized as a set of complex associations, represented as a layered network. Critics argue that there are some phenomena which are better captured by symbolic models, and that connectionist models are often so complex as to have little explanatory power. Recently symbolic and connectionist models have been combined, making it possible to take advantage of both forms of explanation. While both connectionism and symbolic approaches have proven useful for testing various hypotheses and exploring approaches to understanding aspects of cognition and lower level brain functions, neither are biologically realistic and therefore, both suffer from a lack of neuroscientific plausibility. Connectionism has proven useful for exploring computationally how cognition emerges in development and occurs in the human brain, and has provided alternatives to strictly domain-specific / domain general approaches. For example, scientists such as Jeff Elman, Liz Bates, and Annette Karmiloff-Smith have posited that networks in the brain emerge from the dynamic interaction between them and environmental input. Principles Levels of analysis A central tenet of cognitive science is that a complete understanding of the mind/brain cannot be attained by studying only a single level. An example would be the problem of remembering a phone number and recalling it later. One approach to understanding this process would be to study behavior through direct observation, or naturalistic observation. 
A person could be presented with a phone number and be asked to recall it after some delay; then the accuracy of the response could be measured. Another approach would be to study the firings of individual neurons while a person is trying to remember the phone number. Neither of these experiments on its own would fully explain how the process of remembering a phone number works. Even if the technology to map out every neuron in the brain in real time were available, and it were known when each neuron fired, it would still be impossible to know how a particular firing of neurons translates into the observed behavior. Thus an understanding of how these two levels relate to each other is imperative. Francisco Varela, in The Embodied Mind: Cognitive Science and Human Experience, argues that "the new sciences of the mind need to enlarge their horizon to encompass both lived human experience and the possibilities for transformation inherent in human experience". On the classic cognitivist view, this can be provided by a functional level account of the process. Studying a particular phenomenon from multiple levels creates a better understanding of the processes that occur in the brain to give rise to a particular behavior. Marr gave a famous description of three levels of analysis:
The computational theory, specifying the goals of the computation;
Representation and algorithms, giving a representation of the inputs and outputs and the algorithms which transform one into the other; and
The hardware implementation, or how algorithm and representation may be physically realized.

Interdisciplinary nature

Cognitive science is an interdisciplinary field with contributors from various fields, including psychology, neuroscience, linguistics, philosophy of mind, computer science, anthropology and biology. Cognitive scientists work collectively in the hope of understanding the mind and its interactions with the surrounding world, much as other sciences do. The field regards itself as compatible with the physical sciences and uses the scientific method as well as simulation or modeling, often comparing the output of models with aspects of human cognition. As in the field of psychology, there is some doubt whether there is a unified cognitive science, which has led some researchers to prefer 'cognitive sciences' in the plural. Many, but not all, who consider themselves cognitive scientists hold a functionalist view of the mind – the view that mental states and processes should be explained by their function, that is, by what they do. According to the multiple realizability account of functionalism, even non-human systems such as robots and computers can be ascribed cognition.

Cognitive science: the term

The term "cognitive" in "cognitive science" is used for "any kind of mental operation or structure that can be studied in precise terms" (Lakoff and Johnson, 1999). This conceptualization is very broad, and should not be confused with how "cognitive" is used in some traditions of analytic philosophy, where "cognitive" has to do only with formal rules and truth-conditional semantics. The earliest entries for the word "cognitive" in the OED take it to mean roughly "pertaining to the action or process of knowing". The first entry, from 1586, shows the word was at one time used in the context of discussions of Platonic theories of knowledge.
Most in cognitive science, however, presumably do not believe their field is the study of anything as certain as the knowledge sought by Plato.

Scope

Cognitive science is a large field, and covers a wide array of topics on cognition. However, it should be recognized that cognitive science has not always been equally concerned with every topic that might bear relevance to the nature and operation of minds. Classical cognitivists have largely de-emphasized or avoided social and cultural factors, embodiment, emotion, consciousness, animal cognition, and comparative and evolutionary psychologies. However, with the decline of behaviorism, internal states such as affects and emotions, as well as awareness and covert attention, became approachable again. For example, situated and embodied cognition theories take into account the current state of the environment as well as the role of the body in cognition. With the newfound emphasis on information processing, observable behavior was no longer the hallmark of psychological theory; the modeling or recording of mental states was. Below are some of the main topics that cognitive science is concerned with. This is not an exhaustive list. See List of cognitive science topics for a list of various aspects of the field.

Artificial intelligence

Artificial intelligence (AI) involves the study of cognitive phenomena in machines. One of the practical goals of AI is to implement aspects of human intelligence in computers. Computers are also widely used as a tool with which to study cognitive phenomena. Computational modeling uses simulations to study how human intelligence may be structured. There is some debate in the field as to whether the mind is best viewed as a huge array of small but individually feeble elements (i.e. neurons), or as a collection of higher-level structures such as symbols, schemes, plans, and rules. The former view uses connectionism to study the mind, whereas the latter emphasizes symbolic artificial intelligence. One way to view the issue is whether it is possible to accurately simulate a human brain on a computer without accurately simulating the neurons that make up the human brain.

Attention

Attention is the selection of important information. The human mind is bombarded with millions of stimuli and it must have a way of deciding which of this information to process. Attention is sometimes seen as a spotlight, meaning one can only shine the light on a particular set of information. Experiments that support this metaphor include the dichotic listening task (Cherry, 1957) and studies of inattentional blindness (Mack and Rock, 1998). In the dichotic listening task, subjects are bombarded with two different messages, one in each ear, and told to focus on only one of the messages. At the end of the experiment, when asked about the content of the unattended message, subjects cannot report it.

Bodily processes related to cognition

Embodied cognition approaches to cognitive science emphasize the role of body and environment in cognition. This includes both neural and extra-neural bodily processes, and factors that range from affective and emotional processes, to posture, motor control, proprioception, and kinaesthesis, to autonomic processes that involve heartbeat and respiration, to the role of the enteric gut microbiome. It also includes accounts of how the body engages with or is coupled to social and physical environments.
4E (embodied, embedded, extended and enactive) cognition includes a broad range of views about brain-body-environment interaction, from causal embeddedness to stronger claims about how the mind extends to include tools and instruments, as well as the role of social interactions, action-oriented processes, and affordances. 4E theories range from those closer to classic cognitivism (so-called "weak" embodied cognition) to stronger extended and enactive versions that are sometimes referred to as radical embodied cognitive science. Knowledge and processing of language The ability to learn and understand language is an extremely complex process. Language is acquired within the first few years of life, and all humans under normal circumstances are able to acquire language proficiently. A major driving force in the theoretical linguistic field is discovering the nature that language must have in the abstract in order to be learned in such a fashion. Some of the driving research questions in studying how the brain itself processes language include: (1) To what extent is linguistic knowledge innate or learned?, (2) Why is it more difficult for adults to acquire a second-language than it is for infants to acquire their first-language?, and (3) How are humans able to understand novel sentences? The study of language processing ranges from the investigation of the sound patterns of speech to the meaning of words and whole sentences. Linguistics often divides language processing into orthography, phonetics, phonology, morphology, syntax, semantics, and pragmatics. Many aspects of language can be studied from each of these components and from their interaction. The study of language processing in cognitive science is closely tied to the field of linguistics. Linguistics was traditionally studied as a part of the humanities, including studies of history, art and literature. In the last fifty years or so, more and more researchers have studied knowledge and use of language as a cognitive phenomenon, the main problems being how knowledge of language can be acquired and used, and what precisely it consists of. Linguists have found that, while humans form sentences in ways apparently governed by very complex systems, they are remarkably unaware of the rules that govern their own speech. Thus linguists must resort to indirect methods to determine what those rules might be, if indeed rules as such exist. In any event, if speech is indeed governed by rules, they appear to be opaque to any conscious consideration. Learning and development Learning and development are the processes by which we acquire knowledge and information over time. Infants are born with little or no knowledge (depending on how knowledge is defined), yet they rapidly acquire the ability to use language, walk, and recognize people and objects. Research in learning and development aims to explain the mechanisms by which these processes might take place. A major question in the study of cognitive development is the extent to which certain abilities are innate or learned. This is often framed in terms of the nature and nurture debate. The nativist view emphasizes that certain features are innate to an organism and are determined by its genetic endowment. The empiricist view, on the other hand, emphasizes that certain abilities are learned from the environment. 
Although clearly both genetic and environmental input is needed for a child to develop normally, considerable debate remains about how genetic information might guide cognitive development. In the area of language acquisition, for example, some (such as Steven Pinker) have argued that specific information containing universal grammatical rules must be contained in the genes, whereas others (such as Jeffrey Elman and colleagues in Rethinking Innateness) have argued that Pinker's claims are biologically unrealistic. They argue that genes determine the architecture of a learning system, but that specific "facts" about how grammar works can only be learned as a result of experience. Memory Memory allows us to store information for later retrieval. Memory is often thought of as consisting of both a long-term and short-term store. Long-term memory allows us to store information over prolonged periods (days, weeks, years). We do not yet know the practical limit of long-term memory capacity. Short-term memory allows us to store information over short time scales (seconds or minutes). Memory is also often grouped into declarative and procedural forms. Declarative memory—grouped into subsets of semantic and episodic forms of memory—refers to our memory for facts and specific knowledge, specific meanings, and specific experiences (e.g. "Are apples food?", or "What did I eat for breakfast four days ago?"). Procedural memory allows us to remember actions and motor sequences (e.g. how to ride a bicycle) and is often dubbed implicit knowledge or memory . Cognitive scientists study memory just as psychologists do, but tend to focus more on how memory bears on cognitive processes, and the interrelationship between cognition and memory. One example of this could be, what mental processes does a person go through to retrieve a long-lost memory? Or, what differentiates between the cognitive process of recognition (seeing hints of something before remembering it, or memory in context) and recall (retrieving a memory, as in "fill-in-the-blank")? Perception and action Perception is the ability to take in information via the senses, and process it in some way. Vision and hearing are two dominant senses that allow us to perceive the environment. Some questions in the study of visual perception, for example, include: (1) How are we able to recognize objects?, (2) Why do we perceive a continuous visual environment, even though we only see small bits of it at any one time? One tool for studying visual perception is by looking at how people process optical illusions. The image on the right of a Necker cube is an example of a bistable percept, that is, the cube can be interpreted as being oriented in two different directions. The study of haptic (tactile), olfactory, and gustatory stimuli also fall into the domain of perception. Action is taken to refer to the output of a system. In humans, this is accomplished through motor responses. Spatial planning and movement, speech production, and complex motor movements are all aspects of action. Consciousness Consciousness is the awareness of experiences within oneself. This helps the mind with having the ability to experience or feel a sense of self. Research methods Many different methodologies are used to study cognitive science. As the field is highly interdisciplinary, research often cuts across multiple areas of study, drawing on research methods from psychology, neuroscience, computer science and systems theory. 
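As a concrete taste of the behavioral methods surveyed in the next subsection, the following minimal sketch (hypothetical numbers, assuming only Python and NumPy; not an analysis from any real study) fits reaction time as a linear function of search-set size, the kind of analysis described under Reaction time below: a clearly positive slope per added item is the classic signature of serial search, while a near-zero slope suggests parallel search.

```python
import numpy as np

# Hypothetical visual-search data: display set sizes and mean reaction times (ms).
set_sizes = np.array([2, 4, 8, 16, 32])
mean_rt_ms = np.array([520, 575, 690, 915, 1380])

# Fit RT = intercept + slope * set_size by ordinary least squares.
slope, intercept = np.polyfit(set_sizes, mean_rt_ms, deg=1)

print(f"intercept ~ {intercept:.0f} ms  (stimulus encoding and response time)")
print(f"slope     ~ {slope:.1f} ms per additional item")

# Here the slope comes out at roughly 29 ms per item, the pattern usually read
# as serial processing; a slope near zero would instead point to parallel search.
```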
Behavioral experiments In order to have a description of what constitutes intelligent behavior, one must study behavior itself. This type of research is closely tied to that in cognitive psychology and psychophysics. By measuring behavioral responses to different stimuli, one can understand something about how those stimuli are processed. Lewandowski & Strohmetz (2009) reviewed a collection of innovative uses of behavioral measurement in psychology including behavioral traces, behavioral observations, and behavioral choice. Behavioral traces are pieces of evidence that indicate behavior occurred, but the actor is not present (e.g., litter in a parking lot or readings on an electric meter). Behavioral observations involve the direct witnessing of the actor engaging in the behavior (e.g., watching how close a person sits next to another person). Behavioral choices are when a person selects between two or more options (e.g., voting behavior, choice of a punishment for another participant). Reaction time. The time between the presentation of a stimulus and an appropriate response can indicate differences between two cognitive processes, and can indicate some things about their nature. For example, if in a search task the reaction times vary proportionally with the number of elements, then it is evident that this cognitive process of searching involves serial instead of parallel processing. Psychophysical responses. Psychophysical experiments are an old psychological technique, which has been adopted by cognitive psychology. They typically involve making judgments of some physical property, e.g. the loudness of a sound. Correlation of subjective scales between individuals can show cognitive or sensory biases as compared to actual physical measurements. Some examples include: sameness judgments for colors, tones, textures, etc. threshold differences for colors, tones, textures, etc. Eye tracking. This methodology is used to study a variety of cognitive processes, most notably visual perception and language processing. The fixation point of the eyes is linked to an individual's focus of attention. Thus, by monitoring eye movements, we can study what information is being processed at a given time. Eye tracking allows us to study cognitive processes on extremely short time scales. Eye movements reflect online decision making during a task, and they provide us with some insight into the ways in which those decisions may be processed. Brain imaging Brain imaging involves analyzing activity within the brain while performing various tasks. This allows us to link behavior and brain function to help understand how information is processed. Different types of imaging techniques vary in their temporal (time-based) and spatial (location-based) resolution. Brain imaging is often used in cognitive neuroscience. Single-photon emission computed tomography and positron emission tomography. SPECT and PET use radioactive isotopes, which are injected into the subject's bloodstream and taken up by the brain. By observing which areas of the brain take up the radioactive isotope, we can see which areas of the brain are more active than other areas. PET has similar spatial resolution to fMRI, but it has extremely poor temporal resolution. Electroencephalography. EEG measures the electrical fields generated by large populations of neurons in the cortex by placing a series of electrodes on the scalp of the subject. This technique has an extremely high temporal resolution, but a relatively poor spatial resolution. 
Functional magnetic resonance imaging. fMRI measures the relative amount of oxygenated blood flowing to different parts of the brain. More oxygenated blood in a particular region is assumed to correlate with an increase in neural activity in that part of the brain. This allows us to localize particular functions within different brain regions. fMRI has moderate spatial and temporal resolution. Optical imaging. This technique uses infrared transmitters and receivers to measure the amount of light reflectance by blood near different areas of the brain. Since oxygenated and deoxygenated blood reflects light by different amounts, we can study which areas are more active (i.e., those that have more oxygenated blood). Optical imaging has moderate temporal resolution, but poor spatial resolution. It also has the advantage that it is extremely safe and can be used to study infants' brains. Magnetoencephalography. MEG measures magnetic fields resulting from cortical activity. It is similar to EEG, except that it has improved spatial resolution since the magnetic fields it measures are not as blurred or attenuated by the scalp, meninges and so forth as the electrical activity measured in EEG is. MEG uses SQUID sensors to detect tiny magnetic fields. Computational modeling Computational models require a mathematically and logically formal representation of a problem. Computer models are used in the simulation and experimental verification of different specific and general properties of intelligence. Computational modeling can help us understand the functional organization of a particular cognitive phenomenon. Approaches to cognitive modeling can be categorized as: (1) symbolic, on abstract mental functions of an intelligent mind by means of symbols; (2) subsymbolic, on the neural and associative properties of the human brain; and (3) across the symbolic–subsymbolic border, including hybrid. Symbolic modeling evolved from the computer science paradigms using the technologies of knowledge-based systems, as well as a philosophical perspective (e.g. "Good Old-Fashioned Artificial Intelligence" (GOFAI)). They were developed by the first cognitive researchers and later used in information engineering for expert systems. Since the early 1990s it was generalized in systemics for the investigation of functional human-like intelligence models, such as personoids, and, in parallel, developed as the SOAR environment. Recently, especially in the context of cognitive decision-making, symbolic cognitive modeling has been extended to the socio-cognitive approach, including social and organizational cognition, interrelated with a sub-symbolic non-conscious layer. Subsymbolic modeling includes connectionist/neural network models. Connectionism relies on the idea that the mind/brain is composed of simple nodes and its problem-solving capacity derives from the connections between them. Neural nets are textbook implementations of this approach. Some critics of this approach feel that while these models approach biological reality as a representation of how the system works, these models lack explanatory powers because, even in systems endowed with simple connection rules, the emerging high complexity makes them less interpretable at the connection-level than they apparently are at the macroscopic level. 
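To make the "simple nodes" idea concrete, the following minimal sketch (illustrative only; plain Python with NumPy, and hand-chosen weights standing in for what a learning rule would normally find) wires three McCulloch-Pitts-style threshold units into a tiny two-layer network that computes XOR, a function no single such unit can compute:

```python
import numpy as np

def threshold_unit(inputs, weights, bias):
    """A McCulloch-Pitts-style node: output 1 if the weighted sum of the
    inputs plus the bias is non-negative, otherwise output 0."""
    return int(np.dot(inputs, weights) + bias >= 0)

def tiny_network(x1, x2):
    """Two hidden threshold units feeding one output unit; computes XOR."""
    x = np.array([x1, x2])
    h_or = threshold_unit(x, np.array([1, 1]), -1)      # fires if x1 OR x2
    h_nand = threshold_unit(x, np.array([-1, -1]), 1)   # fires unless x1 AND x2
    return threshold_unit(np.array([h_or, h_nand]), np.array([1, 1]), -2)  # AND

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", tiny_network(a, b))  # prints the XOR truth table
```

The problem-solving capacity here lives entirely in the pattern of connections rather than in any individual node, which is the intuition behind connectionist models; real connectionist networks differ mainly in that their weights are learned from data rather than set by hand.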
Other approaches gaining in popularity include (1) dynamical systems theory, (2) mapping symbolic models onto connectionist models (neural-symbolic integration or hybrid intelligent systems), and (3) Bayesian models, which are often drawn from machine learning. All of the above approaches tend either to be generalized into integrated computational models of a synthetic/abstract intelligence (i.e. a cognitive architecture), applied to the explanation and improvement of individual and social/organizational decision-making and reasoning, or to focus on single simulative programs (or microtheories/"middle-range" theories) modelling specific cognitive faculties (e.g. vision, language, categorization).
Neurobiological methods
Research methods borrowed directly from neuroscience and neuropsychology can also help us to understand aspects of intelligence. These methods allow us to understand how intelligent behavior is implemented in a physical system. They include:
Single-unit recording
Direct brain stimulation
Animal models
Postmortem studies
Key findings
Cognitive science has given rise to models of human cognitive bias and risk perception, and has been influential in the development of behavioral finance, part of economics. It has also given rise to a new theory of the philosophy of mathematics (related to denotational mathematics), and many theories of artificial intelligence, persuasion and coercion. It has made its presence known in the philosophy of language and epistemology as well as constituting a substantial wing of modern linguistics. Fields of cognitive science have been influential in understanding the brain's particular functional systems (and functional deficits) ranging from speech production to auditory processing and visual perception. It has made progress in understanding how damage to particular areas of the brain affects cognition, and it has helped to uncover the root causes and results of specific dysfunctions, such as dyslexia, anopia, and hemispatial neglect.
Notable researchers
Some of the more recognized names in cognitive science are usually either the most controversial or the most cited. Within philosophy, some familiar names include Daniel Dennett, who writes from a computational systems perspective, John Searle, known for his controversial Chinese room argument, and Jerry Fodor, who advocates functionalism. Others include David Chalmers, who advocates dualism and is also known for articulating the hard problem of consciousness, and Douglas Hofstadter, famous for writing Gödel, Escher, Bach, which questions the nature of words and thought. In the realm of linguistics, Noam Chomsky and George Lakoff have been influential (both have also become notable as political commentators). In artificial intelligence, Marvin Minsky, Herbert A. Simon, and Allen Newell are prominent. Popular names in the discipline of psychology include George A. Miller, James McClelland, Philip Johnson-Laird, Lawrence Barsalou, Vittorio Guidano, Howard Gardner and Steven Pinker. Anthropologists Dan Sperber, Edwin Hutchins, Bradd Shore, James Wertsch and Scott Atran have been involved in collaborative projects with cognitive and social psychologists, political scientists and evolutionary biologists in attempts to develop general theories of culture formation, religion, and political association. Computational theories (with models and simulations) have also been developed by David Rumelhart, James McClelland and Philip Johnson-Laird.
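The Bayesian models mentioned above among the newer modeling approaches can be illustrated with a minimal example of optimal cue combination, a common style of model in perception research. This is an assumption-laden sketch, not material from the text: the Gaussian assumptions, function name, and numerical values are invented for illustration.

```python
# Illustrative sketch (assumed values): Bayesian fusion of two noisy sensory
# estimates of the same quantity under independent Gaussian noise.

def combine_gaussian_cues(mu_a, var_a, mu_b, var_b):
    """Precision-weighted combination of two independent Gaussian estimates."""
    precision_a, precision_b = 1.0 / var_a, 1.0 / var_b
    posterior_var = 1.0 / (precision_a + precision_b)
    posterior_mu = posterior_var * (precision_a * mu_a + precision_b * mu_b)
    return posterior_mu, posterior_var

# A reliable visual cue says 10.0 cm; a noisier haptic cue says 14.0 cm.
mu, var = combine_gaussian_cues(mu_a=10.0, var_a=1.0, mu_b=14.0, var_b=4.0)
print(mu, var)  # 10.8, 0.8 -- pulled toward the more reliable cue, with reduced variance
```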
Epistemics
Epistemics is a term coined in 1969 by the University of Edinburgh with the foundation of its School of Epistemics. Epistemics is to be distinguished from epistemology in that epistemology is the philosophical theory of knowledge, whereas epistemics signifies the scientific study of knowledge. Christopher Longuet-Higgins has defined it as "the construction of formal models of the processes (perceptual, intellectual, and linguistic) by which knowledge and understanding are achieved and communicated." In his 1978 essay "Epistemics: The Regulative Theory of Cognition", Alvin I. Goldman claims to have coined the term "epistemics" to describe a reorientation of epistemology. Goldman maintains that his epistemics is continuous with traditional epistemology and the new term is only to avoid opposition. Epistemics, in Goldman's version, differs only slightly from traditional epistemology in its alliance with the psychology of cognition; epistemics stresses the detailed study of mental processes and information-processing mechanisms that lead to knowledge or beliefs. In the mid-1980s, the School of Epistemics was renamed as The Centre for Cognitive Science (CCS). In 1998, CCS was incorporated into the University of Edinburgh's School of Informatics.
Binding problem in cognitive science
One of the core aims of cognitive science is to achieve an integrated theory of cognition. This requires integrative mechanisms explaining how the information processing that occurs simultaneously in spatially segregated (sub-)cortical areas in the brain is coordinated and bound together to give rise to coherent perceptual and symbolic representations. One approach is to solve this "Binding problem" (that is, the problem of dynamically representing conjunctions of informational elements, from the most basic perceptual representations ("feature binding") to the most complex cognitive representations, like symbol structures ("variable binding")), by means of integrative synchronization mechanisms. In other words, one of the coordinating mechanisms appears to be the temporal (phase) synchronization of neural activity based on dynamical self-organizing processes in neural networks, described by the Binding-by-synchrony (BBS) Hypothesis from neurophysiology. Connectionist cognitive neuroarchitectures have been developed that use integrative synchronization mechanisms to solve this binding problem in perceptual cognition and in language cognition. In perceptual cognition the problem is to explain how elementary object properties and object relations, like the object color or the object form, can be dynamically bound together or can be integrated to a representation of this perceptual object by means of a synchronization mechanism ("feature binding", "feature linking"). In language cognition the problem is to explain how semantic concepts and syntactic roles can be dynamically bound together or can be integrated to complex cognitive representations like systematic and compositional symbol structures and propositions by means of a synchronization mechanism ("variable binding") (see also the "Symbolism vs. connectionism debate" in connectionism).
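The temporal-synchronization idea behind the binding-by-synchrony hypothesis can be illustrated with a toy model of coupled phase oscillators. The sketch below is only a schematic illustration under assumed parameters (Kuramoto-style coupling, arbitrary frequencies and step sizes); it is not one of the neuroarchitectures discussed above.

```python
# Toy illustration (assumed parameters): oscillators coupled within one assembly
# drift into a common phase, the kind of temporal binding invoked by the
# binding-by-synchrony hypothesis.
import math
import random

def simulate(n=6, coupling=2.0, steps=2000, dt=0.01):
    """Return final phases of n Kuramoto-style coupled oscillators."""
    freqs = [1.0 + random.uniform(-0.1, 0.1) for _ in range(n)]  # natural frequencies
    phases = [random.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    for _ in range(steps):
        phases = [
            phases[i] + dt * (freqs[i] + coupling *
                              sum(math.sin(phases[j] - phases[i]) for j in range(n)) / n)
            for i in range(n)
        ]
    return phases

def coherence(phases):
    """Order parameter r in [0, 1]; r near 1 means the phases are locked together."""
    re = sum(math.cos(p) for p in phases) / len(phases)
    im = sum(math.sin(p) for p in phases) / len(phases)
    return math.hypot(re, im)

print(round(coherence(simulate(coupling=2.0)), 2))  # close to 1: a synchronized assembly
print(round(coherence(simulate(coupling=0.0)), 2))  # usually lower: no binding without coupling
```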
See also
Affective science
Cognitive anthropology
Cognitive biology
Cognitive computing
Cognitive ethology
Cognitive linguistics
Cognitive neuropsychology
Cognitive neuroscience
Cognitive psychology
Cognitive science of religion
Computational neuroscience
Computational-representational understanding of mind
Concept mining
Decision field theory
Decision theory
Dynamicism
Educational neuroscience
Educational psychology
Embodied cognition
Embodied cognitive science
Enactivism
Epistemology
Folk psychology
Heterophenomenology
Human Cognome Project
Human–computer interaction
Indiana Archives of Cognitive Science
Informatics (academic field)
List of cognitive scientists
List of psychology awards
Malleable intelligence
Neural Darwinism
Personal information management (PIM)
Qualia
Quantum cognition
Simulated consciousness
Situated cognition
Society of Mind theory
Spatial cognition
Speech–language pathology
Outlines
Outline of human intelligence – topic tree presenting the traits, capacities, models, and research fields of human intelligence, and more.
Outline of thought – topic tree that identifies many types of thoughts, types of thinking, aspects of thought, related fields, and more.
References
External links
"Cognitive Science" on the Stanford Encyclopedia of Philosophy
Cognitive Science Society
Cognitive Science Movie Index: A broad list of movies showcasing themes in the Cognitive Sciences
List of leading thinkers in cognitive science
2,508
5,630
https://en.wikipedia.org/wiki/Copula%20%28linguistics%29
Copula (linguistics)
In linguistics, a copula (plural: copulas or copulae; abbreviated ) is a word or phrase that links the subject of a sentence to a subject complement, such as the word is in the sentence "The sky is blue" or the phrase was not being in the sentence "It was not being co-operative." The word copula derives from the Latin noun for a "link" or "tie" that connects two different things. A copula is often a verb or a verb-like word, though this is not universally the case. A verb that is a copula is sometimes called a copulative or copular verb. In English primary education grammar courses, a copula is often called a linking verb. In other languages, copulas show more resemblances to pronouns, as in Classical Chinese and Guarani, or may take the form of suffixes attached to a noun, as in Korean, Beja, and Inuit languages. Most languages have one main copula (in English, the verb "to be"), although some (like Spanish, Portuguese and Thai) have more than one, while others have none. While the term copula is generally used to refer to such principal verbs, it may also be used for a wider group of verbs with similar potential functions (like become, get, feel and seem in English); alternatively, these might be distinguished as "semi-copulas" or "pseudo-copulas". Grammatical function The principal use of a copula is to link the subject of a clause to a subject complement. A copular verb is often considered to be part of the predicate, the remainder being called a predicative expression. A simple clause containing a copula is illustrated below: The book is on the table. In that sentence, the noun phrase the book is the subject, the verb is serves as the copula, and the prepositional phrase on the table is the predicative expression. The whole expression is on the table may (in some theories of grammar) be called a predicate or a verb phrase. The predicative expression accompanying the copula, also known as the complement of the copula, may take any of several possible forms: it may be a noun or noun phrase, an adjective or adjective phrase, a prepositional phrase (as above) or an adverb or another adverbial phrase expressing time or location. Examples are given below (with the copula in bold and the predicative expression in italics): The three components (subject, copula and predicative expression) do not necessarily appear in that order: their positioning depends on the rules for word order applicable to the language in question. In English (an SVO language), the ordering given above is the normal one, but certain variation is possible: In many questions and other clauses with subject–auxiliary inversion, the copula moves in front of the subject: Are you happy? In inverse copular constructions (see below) the predicative expression precedes the copula, but the subject follows it: In the room were three men. It is also possible, in certain circumstances, for one (or even two) of the three components to be absent: In null-subject (pro-drop) languages, the subject may be omitted, as it may from other types of sentence. In Italian, means ‘I am tired’, literally ‘am tired’. In non-finite clauses in languages like English, the subject is often absent, as in the participial phrase being tired or the infinitive phrase to be tired. The same applies to most imperative sentences like Be good! For cases in which no copula appears, see below. Any of the three components may be omitted as a result of various general types of ellipsis. 
In particular, in English, the predicative expression may be elided in a construction similar to verb phrase ellipsis, as in short sentences like I am; Are they? (where the predicative expression is understood from the previous context). Inverse copular constructions, in which the positions of the predicative expression and the subject are reversed, are found in various languages. They have been the subject of much theoretical analysis, particularly in regard to the difficulty of maintaining, in the case of such sentences, the usual division into a subject noun phrase and a predicate verb phrase. Another issue is verb agreement when both subject and predicative expression are noun phrases (and differ in number or person): in English, the copula typically agrees with the syntactical subject even if it is not logically (i.e. semantically) the subject, as in the cause of the riot is (not are) these pictures of the wall. Compare Italian ; notice the use of the plural to agree with plural "these photos" rather than with singular "the cause". In instances where an English syntactical subject comprises a prepositional object that is pluralized, however, the prepositional object agrees with the predicative expression, e.g. "What kind of birds are those?" The definition and scope of the concept of a copula is not necessarily precise in any language. As noted above, though the concept of the copula in English is most strongly associated with the verb to be, there are many other verbs that can be used in a copular sense as well. The boy became a man. The girl grew more excited as the holiday preparations intensified. The dog felt tired from the activity. And more tenuously The milk turned sour. The food smells good. You seem upset. Meanings Predicates formed using a copula may express identity: that the two noun phrases (subject and complement) have the same referent or express an identical concept: They may also express membership of a class or a subset relationship: Similarly they may express some property, relation or position, permanent or temporary: Other special uses of copular verbs are described in some of the following sections. Essence vs. state Some languages use different copulas, or different syntax, to denote a permanent, essential characteristic of something versus a temporary state. For examples, see the sections on the Romance languages, Slavic languages and Irish. Forms In many languages the principal copula is a verb, like English (to) be, German , Mixtec , Touareg emous, etc. It may inflect for grammatical categories like tense, aspect and mood, like other verbs in the language. Being a very commonly used verb, it is likely that the copula has irregular inflected forms; in English, the verb be has a number of highly irregular (suppletive) forms and has more different inflected forms than any other English verb (am, is, are, was, were, etc.; see English verbs for details). Other copulas show more resemblances to pronouns. That is the case for Classical Chinese and Guarani, for instance. In highly synthetic languages, copulas are often suffixes, attached to a noun, but they may still behave otherwise like ordinary verbs: in Inuit languages. In some other languages, like Beja and Ket, the copula takes the form of suffixes that attach to a noun but are distinct from the person agreement markers used on predicative verbs. 
This phenomenon is known as nonverbal person agreement (or nonverbal subject agreement), and the relevant markers are always established as deriving from cliticized independent pronouns. For cases in which the copula is omitted or takes zero form, see below. Additional uses of copular verbs A copular verb may also have other uses supplementary to or distinct from its uses as a copula. As auxiliary verbs The English verb to be is also used as an auxiliary verb, especially for expressing passive voice (together with the past participle) or expressing progressive aspect (together with the present participle): Other languages' copulas have additional uses as auxiliaries. For example, French can be used to express passive voice similarly to English be; both French and German are used to express the perfect forms of certain verbs (formerly English be was also): The auxiliary functions of these verbs derived from their copular function, and could be interpreted as special cases of the copular function (with the verbal forms it precedes being considered adjectival). Another auxiliary usage in English is (together with the to-infinitive) to denote an obligatory action or expected occurrence: "I am to serve you;" "The manager is to resign." This can be put also into past tense: "We were to leave at 9." For forms like "if I was/were to come," see English conditional sentences. (Note that, by certain criteria, the English copula be may always be considered an auxiliary verb; see Diagnostics for identifying auxiliary verbs in English.) Existential usage The English to be and its equivalents in certain other languages also have a non-copular use as an existential verb, meaning "to exist." This use is illustrated in the following sentences: I want only to be, and that is enough; I think therefore I am; To be or not to be, that is the question. In these cases, the verb itself expresses a predicate (that of existence), rather than linking to a predicative expression as it does when used as a copula. In ontology it is sometimes suggested that the "is" of existence is reducible to the "is" of property attribution or class membership; to be, Aristotle held, is to be something. However, Abelard in his Dialectica made a reductio ad absurdum argument against the idea that the copula can express existence. Similar examples can be found in many other languages; for example, the French and Latin equivalents of I think therefore I am are and , where and are the equivalents of English "am," normally used as copulas. However, other languages prefer a different verb for existential use, as in the Spanish version (where the verb "to exist" is used rather than the copula or ‘to be’). Another type of existential usage is in clauses of the there is… or there are… type. Languages differ in the way they express such meanings; some of them use the copular verb, possibly with an expletive pronoun like the English there, while other languages use different verbs and constructions, like the French (which uses parts of the verb ‘to have,’ not the copula) or the Swedish (the passive voice of the verb for "to find"). For details, see existential clause. Relying on a unified theory of copular sentences, it has been proposed that the English there-sentences are subtypes of inverse copular constructions. Zero copula In some languages, copula omission occurs within a particular grammatical context. 
For example, speakers of Russian, Indonesian, Turkish, Hungarian, Arabic, Hebrew, Geʽez and Quechuan languages consistently drop the copula in present tense:
Russian: , ‘I (am a) person;’
Indonesian: ‘I (am) a human;’
Turkish: ‘s/he (is a) human;’
Hungarian: ‘s/he (is) a human;’
Arabic: أنا إنسان, ‘I (am a) human;’
Hebrew: אני אדם, ʔani ʔadam "I (am a) human;"
Geʽez: አነ ብእሲ/ብእሲ አነ ʔana bəʔəsi / bəʔəsi ʔana "I (am a) man" / "(a) man I (am)";
Southern Quechua: payqa runam "s/he (is) a human."
The usage is known generically as the zero copula. Note that in other tenses (sometimes in forms other than third person singular), the copula usually reappears.
Some languages drop the copula in poetic or aphoristic contexts. Examples in English include The more, the better. Out of many, one. True that. Such poetic copula dropping is more pronounced in some languages other than English, like the Romance languages.
In informal speech of English, the copula may also be dropped in general sentences, as in "She a nurse." It is a feature of African-American Vernacular English, but is also used by a variety of other English speakers in informal contexts. An example is the sentence "I saw twelve men, each a soldier."
Examples in specific languages
In Ancient Greek, when an adjective precedes a noun with an article, the copula is understood: , "the house is large," can be written , "large the house (is)."
In Quechua (Southern Quechua used for the examples), zero copula is restricted to present tense in third person singular (kan): Payqa runam — "(s)he is a human;" but: (paykuna) runakunam kanku "(they) are human."
In Māori, the zero copula can be used in predicative expressions and with continuous verbs (many of which take a copulative verb in many Indo-European languages) — He nui te whare, literally "a big the house," "the house (is) big;" I te tēpu te pukapuka, literally "at (past locative particle) the table the book," "the book (was) on the table;" Nō Ingarangi ia, literally "from England (s)he," "(s)he (is) from England," Kei te kai au, literally "at the (act of) eating I," "I (am) eating." Alternatively, in many cases, the particle ko can be used as a copulative (though not all instances of ko are used thus; like all other Māori particles, ko has multiple purposes): Ko nui te whare "The house is big;" Ko te pukapuka kei te tēpu "It is the book (that is) on the table;" Ko au kei te kai "It is me eating." However, when expressing identity or class membership, ko must be used: Ko tēnei tāku pukapuka "This is my book;" Ko Ōtautahi he tāone i Te Waipounamu "Christchurch is a city in the South Island (of New Zealand);" Ko koe tōku hoa "You are my friend." When expressing identity, ko can be placed on either object in the clause without changing the meaning (ko tēnei tāku pukapuka is the same as ko tāku pukapuka tēnei) but not on both (ko tēnei ko tāku pukapuka would be equivalent to saying "it is this, it is my book" in English).
In Hungarian, zero copula is restricted to present tense in third person singular and plural: Ő ember/Ők emberek — "s/he is a human"/"they are humans;" but: (én) ember vagyok "I am a human," (te) ember vagy "you are a human," mi emberek vagyunk "we are humans," (ti) emberek vagytok "you (all) are humans." The copula also reappears for stating locations: az emberek a házban vannak, "the people are in the house," and for stating time: hat óra van, "it is six o'clock." However, the copula may be omitted in colloquial language: hat óra (van), "it is six o'clock."
Hungarian uses copula lenni for expressing location: Itt van Róbert "Bob is here," but it is omitted in the third person present tense for attribution or identity statements: Róbert öreg "Bob is old;" ők éhesek "They are hungry;" Kati nyelvtudós "Cathy is a linguist" (but Róbert öreg volt "Bob was old," éhesek voltak "They were hungry," Kati nyelvtudós volt "Cathy was a linguist). In Turkish, both the third person singular and the third person plural copulas are omittable. Ali burada and Ali buradadır both mean "Ali is here," and Onlar aç and Onlar açlar both mean "They are hungry." Both of the sentences are acceptable and grammatically correct, but sentences with the copula are more formal. The Turkish first person singular copula suffix is omitted when introducing oneself. Bora ben (I am Bora) is grammatically correct, but "Bora benim" (same sentence with the copula) is not for an introduction (but is grammatically correct in other cases). Further restrictions may apply before omission is permitted. For example, in the Irish language, is, the present tense of the copula, may be omitted when the predicate is a noun. Ba, the past/conditional, cannot be deleted. If the present copula is omitted, the pronoun (e.g., é, í, iad) preceding the noun is omitted as well. Additional copulas Sometimes, the term copula is taken to include not only a language's equivalent(s) to the verb be but also other verbs or forms that serve to link a subject to a predicative expression (while adding semantic content of their own). For example, English verbs like become, get, feel, look, taste, smell, and seem can have this function, as in the following sentences (the predicative expression, the complement of the verb, is in italics): (This usage should be distinguished from the use of some of these verbs as "action" verbs, as in They look at the wall, in which look denotes an action and cannot be replaced by the basic copula are.) Some verbs have rarer, secondary uses as copular verbs, like the verb fall in sentences like The zebra fell victim to the lion. These extra copulas are sometimes called "semi-copulas" or "pseudo-copulas." For a list of common verbs of this type in English, see List of English copulae. In particular languages Indo-European In Indo-European languages, the words meaning to be are sometimes similar to each other. Due to the high frequency of their use, their inflection retains a considerable degree of similarity in some cases. Thus, for example, the English form is is a cognate of German ist, Latin est, Persian ast and Russian jest', even though the Germanic, Italic, Iranian and Slavic language groups split at least 3000 years ago. The origins of the copulas of most Indo-European languages can be traced back to four Proto-Indo-European stems: *es- (*h1es-), *sta- (*steh2-), *wes- and *bhu- (*bʰuH-). English The English copular verb be has eight forms (more than any other English verb): be, am, is, are, being, was, were, been. Additional archaic forms include art, wast, wert, and occasionally beest (as a subjunctive). For more details see English verbs. For the etymology of the various forms, see Indo-European copula. The main uses of the copula in English are described in the above sections. The possibility of copula omission is mentioned under . A particular construction found in English (particularly in speech) is the use of two successive copulas when only one appears necessary, as in My point is, is that.... 
The acceptability of this construction is a disputed matter in English prescriptive grammar. The simple English copula "be" may on occasion be substituted by other verbs with near identical meanings. Persian In Persian, the verb to be can either take the form of ast (cognate to English is) or budan (cognate to be). {| border="0" cellspacing="2" cellpadding="1" |- | Aseman abi ast. |آسمان آبی است | the sky is blue |- | Aseman abi khahad bood. |آسمان آبی خواهد بود | the sky will be blue |- | Aseman abi bood. |آسمان آبی بود | the sky was blue |} Hindustani In Hindustani (Hindi and Urdu), the copula होना ɦonɑ ہونا can be put into four grammatical aspects (simple, habitual, perfective, and progressive) and each of those four aspects can be put into five grammatical moods (indicative, presumptive, subjunctive, contrafactual, and imperative). Some example sentences using the simple aspect are shown below: Besides the verb होना honā (to be), there are three other verbs which can also be used as the copula, they are रहना rêhnā (to stay), जाना jānā (to go), and आना ānā (to come). The following table shows the conjugations of the copula होना honā in the five grammatical moods in the simple aspect. The transliteration scheme used is ISO 15919. Romance Copulas in the Romance languages usually consist of two different verbs that can be translated as "to be," the main one from the Latin esse (via Vulgar Latin essere; esse deriving from *es-), often referenced as sum (another of the Latin verb's principal parts) and a secondary one from stare (from *sta-), often referenced as sto. The resulting distinction in the modern forms is found in all the Iberian Romance languages, and to a lesser extent Italian, but not in French or Romanian. The difference is that the first usually refers to essential characteristics, while the second refers to states and situations, e.g., "Bob is old" versus "Bob is well." A similar division is found in the non-Romance Basque language (viz. egon and izan). (Note that the English words just used, "essential" and "state," are also cognate with the Latin infinitives esse and stare. The word "stay" also comes from Latin stare, through Middle French estai, stem of Old French ester.) In Spanish and Portuguese, the high degree of verbal inflection, plus the existence of two copulas (ser and estar), means that there are 105 (Spanish) and 110 (Portuguese) separate forms to express the copula, compared to eight in English and one in Chinese. In some cases, the verb itself changes the meaning of the adjective/sentence. The following examples are from Portuguese: Slavic Some Slavic languages make a distinction between essence and state (similar to that discussed in the above section on the Romance languages), by putting a predicative expression denoting a state into the instrumental case, and essential characteristics are in the nominative. This can apply with other copula verbs as well: the verbs for "become" are normally used with the instrumental case. As noted above under , Russian and other East Slavic languages generally omit the copula in the present tense. Irish In Irish and Scottish Gaelic, there are two copulas, and the syntax is also changed when one is distinguishing between states or situations and essential characteristics. Describing the subject's state or situation typically uses the normal VSO ordering with the verb bí. The copula is is used to state essential characteristics or equivalences. 
{| border="0" cellspacing="2" cellpadding="1" valign="top" | align=left valign=top| || align=right valign=top | || align=left valign=top | |- |Is fear é Liam.|| "Liam is a man." ||(Lit., "Is man Liam.") |- |Is leabhar é sin.|| "That is a book." ||(Lit., "Is book it that.") |} The word is is the copula (rhymes with the English word "miss"). The pronoun used with the copula is different from the normal pronoun. For a masculine singular noun, é is used (for "he" or "it"), as opposed to the normal pronoun sé; for a feminine singular noun, í is used (for "she" or "it"), as opposed to normal pronoun sí; for plural nouns, iad is used (for "they" or "those"), as opposed to the normal pronoun siad. To describe being in a state, condition, place, or act, the verb "to be" is used: Tá mé ag rith. "I am running." Arabic dialects North Levantine Arabic The North Levantine Arabic dialect, spoken in Syria and Lebanon, has a negative copula formed by and a suffixed pronoun. Bantu languages Chichewa In Chichewa, a Bantu language spoken mainly in Malawi, a very similar distinction exists between permanent and temporary states as in Spanish and Portuguese, but only in the present tense. For a permanent state, in the 3rd person, the copula used in the present tense is ndi (negative sí): iyé ndi mphunzitsi "he is a teacher" iyé sí mphunzitsi "he is not a teacher" For the 1st and 2nd persons the particle ndi is combined with pronouns, e.g. ine "I": ine ndine mphunzitsi "I am a teacher" iwe ndiwe mphunzitsi "you (singular) are a teacher" ine síndine mphunzitsi "I am not a teacher" For temporary states and location, the copula is the appropriate form of the defective verb -li: iyé ali bwino "he is well" iyé sáli bwino "he is not well" iyé ali ku nyumbá "he is in the house" For the 1st and 2nd persons the person is shown, as normally with Chichewa verbs, by the appropriate pronominal prefix: ine ndili bwino "I am well" iwe uli bwino "you (sg.) are well" kunyumbá kuli bwino "at home (everything) is fine" In the past tenses, -li is used for both types of copula: iyé analí bwino "he was well (this morning)" iyé ánaalí mphunzitsi "he was a teacher (at that time)" In the future, subjunctive, or conditional tenses, a form of the verb khala ("sit/dwell") is used as a copula: máwa ákhala bwino "he'll be fine tomorrow" Muylaq' Aymaran Uniquely, the existence of the copulative verbalizer suffix in the Southern Peruvian Aymaran language variety, Muylaq' Aymara, is evident only in the surfacing of a vowel that would otherwise have been deleted because of the presence of a following suffix, lexically prespecified to suppress it. As the copulative verbalizer has no independent phonetic structure, it is represented by the Greek letter ʋ in the examples used in this entry. Accordingly, unlike in most other Aymaran variants, whose copulative verbalizer is expressed with a vowel-lengthening component, -:, the presence of the copulative verbalizer in Muylaq' Aymara is often not apparent on the surface at all and is analyzed as existing only meta-linguistically. However, it is also relevant to note that in a verb phrase like "It is old," the noun thantha meaning "old" does not require the copulative verbalizer, thantha-wa "It is old." It is now pertinent to make some observations about the distribution of the copulative verbalizer. The best place to start is with words in which its presence or absence is obvious. 
When the vowel-suppressing first person simple tense suffix attaches to a verb, the vowel of the immediately preceding suffix is suppressed (in the examples in this subsection, the subscript "c" appears prior to vowel-suppressing suffixes in the interlinear gloss to better distinguish instances of deletion that arise from the presence of a lexically pre-specified suffix from those that arise from other (e.g. phonotactic) motivations). Consider the verb sara- which is inflected for the first person simple tense and so, predictably, loses its final root vowel: sar(a)-ct-wa "I go." However, prior to the suffixation of the first person simple suffix -ct to the same root nominalized with the agentive nominalizer -iri, the word must be verbalized. The fact that the final vowel of -iri below is not suppressed indicates the presence of an intervening segment, the copulative verbalizer: sar(a)-iri-ʋ-t-wa "I usually go." It is worthwhile to compare of the copulative verbalizer in Muylaq' Aymara as compared to La Paz Aymara, a variant which represents this suffix with vowel lengthening. Consider the near-identical sentences below, both translations of "I have a small house" in which the nominal root uta-ni "house-attributive" is verbalized with the copulative verbalizer, but note that the correspondence between the copulative verbalizer in these two variants is not always a strict one-to-one relation. {| border="0" cellspacing="2" cellpadding="1" | align=left | || align=right | || align=left | |- | La Paz Aymara: |ma: jisk'a uta-ni-:-ct(a)-wa |- | Muylaq' Aymara: |ma isk'a uta-ni-ʋ-ct-wa |} Georgian As in English, the verb "to be" (qopna) is irregular in Georgian (a Kartvelian language); different verb roots are employed in different tenses. The roots -ar-, -kn-, -qav-, and -qop- (past participle) are used in the present tense, future tense, past tense and the perfective tenses respectively. Examples: {| border="0" cellspacing="2" cellpadding="1" | align=left | || align=right | || align=left | |- | Masc'avlebeli var. | "I am a teacher." |- | Masc'avlebeli viknebi. | "I will be a teacher." |- | Masc'avlebeli viqavi. | "I was a teacher." |- | Masc'avlebeli vqopilvar. | "I have been a teacher." |- | Masc'avlebeli vqopiliqavi. | "I had been a teacher." |} Note that, in the last two examples (perfective and pluperfect), two roots are used in one verb compound. In the perfective tense, the root qop (which is the expected root for the perfective tense) is followed by the root ar, which is the root for the present tense. In the pluperfective tense, again, the root qop is followed by the past tense root qav. This formation is very similar to German (an Indo-European language), where the perfect and the pluperfect are expressed in the following way: {| border="0" cellspacing="2" cellpadding="1" | align=left | || align=right | || align=left | |- | Ich bin Lehrer gewesen. | "I have been a teacher," literally "I am teacher been." |- | Ich war Lehrer gewesen. | "I had been a teacher," literally "I was teacher been." |} Here, gewesen is the past participle of sein ("to be") in German. In both examples, as in Georgian, this participle is used together with the present and the past forms of the verb in order to conjugate for the perfect and the pluperfect aspects. Haitian Creole Haitian Creole, a French-based creole language, has three forms of the copula: se, ye, and the zero copula, no word at all (the position of which will be indicated with Ø, just for purposes of illustration). 
Although no textual record exists of Haitian-Creole at its earliest stages of development from French, se is derived from French (written c'est), which is the normal French contraction of (that, written ce) and the copula (is, written est) (a form of the verb être). The derivation of ye is less obvious; but we can assume that the French source was ("he/it is," written il est), which, in rapidly spoken French, is very commonly pronounced as (typically written y est). The use of a zero copula is unknown in French, and it is thought to be an innovation from the early days when Haitian-Creole was first developing as a Romance-based pidgin. Latin also sometimes used a zero copula. Which of se / ye / Ø is used in any given copula clause depends on complex syntactic factors that we can superficially summarize in the following four rules: 1. Use Ø (i.e., no word at all) in declarative sentences where the complement is an adjective phrase, prepositional phrase, or adverb phrase: {| border="0" cellspacing="2" cellpadding="1" | align=left | || align=right | || align=left | |- | Li te Ø an Ayiti. | "She was in Haiti." || (Lit., "She past-tense in Haiti.") |- | Liv-la Ø jon. | "The book is yellow." || (Lit., "Book-the yellow.") |- | Timoun-yo Ø lakay. | "The kids are [at] home." || (Lit., "Kids-the home.") |} 2. Use se when the complement is a noun phrase. But note that, whereas other verbs come after any tense/mood/aspect particles (like pa to mark negation, or te to explicitly mark past tense, or ap to mark progressive aspect), se comes before any such particles: {| border="0" cellspacing="2" cellpadding="1" | align=left | || align=right | || align=left | |- | Chal se ekriven. | "Charles is writer." |- | Chal, ki se ekriven, pa vini. | "Charles, who is writer, not come." |} 3. Use se where French and English have a dummy "it" subject: {| border="0" cellspacing="2" cellpadding="1" | align=left | || align=right | || align=left | |- | Se mwen! | "It's me!" French C'est moi! |- | Se pa fasil. | "It's not easy," colloquial French C'est pas facile. |} 4. Finally, use the other copula form ye in situations where the sentence's syntax leaves the copula at the end of a phrase: {| border="0" cellspacing="2" cellpadding="1" | align=left | || align=right | || align=left | |- | Kijan ou ye? | "How you are?" |- | Pou kimoun liv-la te ye? | "Whose book was it?" || (Lit., "Of who book-the past-tense is?) |- | M pa konnen kimoun li ye. | "I don't know who he is." || (Lit., "I not know who he is.") |- | Se yon ekriven Chal ye. | "Charles is a writer!" || (Lit., "It's a writer Charles is;" cf. French C'est un écrivain qu'il est.) |} The above is, however, only a simplified analysis. Japanese The Japanese copula (most often translated into English as an inflected form of "to be") has many forms. E.g., The form da is used predicatively, na – attributively, de – adverbially or as a connector, and desu – predicatively or as a politeness indicator. Examples: {| border="0" cellspacing="2" cellpadding="1" | align=left | || align=right | || align=left | |- | 私は学生だ。 | Watashi wa gakusei da. || "I'm a student." || (lit., I TOPIC student COPULA) |- | これはペンです。 | Kore wa pen desu. || "This is a pen." || (lit., this TOPIC pen COPULA-POLITE) |} Desu is the polite form of the copula. Thus, many sentences like the ones below are almost identical in meaning and differ only in the speaker's politeness to the addressee and in nuance of how assured the person is of their statement. 
{| border="0" cellspacing="2" cellpadding="1" | align=left | || align=right | || align=left | |- | あれはホテルだ。 | Are wa hoteru da.|| "That's a hotel." || (lit., that TOPIC hotel COPULA) |- | あれはホテルです。 | Are wa hoteru desu.|| "That is a hotel." || (lit., that TOPIC hotel COPULA-POLITE) |} A predicate in Japanese is expressed by the predicative form of a verb, the predicative form of an adjective or noun + the predicative form of a copula. {| border="0" cellspacing="2" cellpadding="1" | align=left | || align=right | || align=left | |- | このビールはおいしい。 | Kono bīru wa oishii. || "This beer is delicious." |- | このビールはおいしいです。 | Kono bīru wa oishii desu. || "This beer is delicious." |- | *このビールはおいしいだ。 | *Kono bīru wa oishii da. || colspan=2 | This is grammatically incorrect because da can only be coupled with a noun to form a predicate. |} Other forms of copula: である de aru, であります de arimasu (used in writing and formal speaking) でございます de gozaimasu (used in public announcements, notices, etc.) The copula is subject to dialectal variation throughout Japan, resulting in forms like や ya in Kansai and じゃ ja in Hiroshima (see map above). Japanese also has two verbs corresponding to English "to be": aru and iru. They are not copulas but existential verbs. Aru is used for inanimate objects, including plants, whereas iru is used for animate things like people, animals, and robots, though there are exceptions to this generalization. {| border="0" cellspacing="2" cellpadding="1" | align=left | || align=right | || align=left | |- | 本はテーブルにある。 | Hon wa tēburu ni aru.|| "The book is on a table." |- | 小林さんはここにいる。 | Kobayashi-san wa koko ni iru.|| "Kobayashi is here." |} Japanese speakers, when learning English, often drop the auxiliary verbs "be" and "do," incorrectly believing that "be" is a semantically empty copula equivalent to "desu" and "da." Korean For sentences with predicate nominatives, the copula "이" (i-) is added to the predicate nominative (with no space in between). {| border="0" cellspacing="2" cellpadding="1" | align=left | || align=right | || align=left | |- | 바나나는 과일이다. | Ba-na-na-neun gwa-il-i-da. || "Bananas are a fruit." |} Some adjectives (usually colour adjectives) are nominalized and used with the copula "이"(i-). 1. Without the copula "이"(i-): {| border="0" cellspacing="2" cellpadding="1" | align=left | || align=right | || align=left | |- | 장미는 빨개요. | Jang-mi-neun ppal-gae-yo.|| "Roses are red." |} 2. With the copula "이"(i-): {| border="0" cellspacing="2" cellpadding="1" | align=left | || align=right | || align=left | |- | 장미는 빨간색이다. | Jang-mi-neun ppal-gan-saek-i-da.|| "Roses are red-coloured." |} Some Korean adjectives are derived using the copula. Separating these articles and nominalizing the former part will often result in a sentence with a related, but different meaning. Using the separated sentence in a situation where the un-separated sentence is appropriate is usually acceptable as the listener can decide what the speaker is trying to say using the context. Chinese N.B. The characters used are simplified ones, and the transcriptions given in italics reflect Standard Chinese pronunciation, using the pinyin system. In Chinese, both states and qualities are, in general, expressed with stative verbs (SV) with no need for a copula, e.g., in Chinese, "to be tired" (累 lèi), "to be hungry" (饿 è), "to be located at" (在 zài), "to be stupid" (笨 bèn) and so forth. A sentence can consist simply of a pronoun and such a verb: for example, 我饿 wǒ è ("I am hungry"). 
Usually, however, verbs expressing qualities are qualified by an adverb (meaning "very," "not," "quite," etc.); when not otherwise qualified, they are often preceded by 很 hěn, which in other contexts means "very," but in this use often has no particular meaning. Only sentences with a noun as the complement (e.g., "This is my sister") use the copular verb "to be": . This is used frequently; for example, instead of having a verb meaning "to be Chinese," the usual expression is "to be a Chinese person" (; "I am a Chinese person;" "I am Chinese"). This is sometimes called an equative verb. Another possibility is for the complement to be just a noun modifier (ending in ), the noun being omitted: Before the Han Dynasty, the character 是 served as a demonstrative pronoun meaning "this." (This usage survives in some idioms and proverbs.) Some linguists believe that 是 developed into a copula because it often appeared, as a repetitive subject, after the subject of a sentence (in classical Chinese we can say, for example: "George W. Bush, this president of the United States" meaning "George W. Bush is the president of the United States). The character 是 appears to be formed as a compound of characters with the meanings of "early" and "straight." Another use of 是 in modern Chinese is in combination with the modifier 的 de to mean "yes" or to show agreement. For example: Question: 你的汽车是不是红色的? nǐ de qìchē shì bú shì hóngsè de? "Is your car red or not?"Response: 是的 shì de "Is," meaning "Yes," or 不是 bú shì "Not is," meaning "No." (A more common way of showing that the person asking the question is correct is by simply saying "right" or "correct," 对 duì; the corresponding negative answer is 不对 bú duì, "not right.") Yet another use of 是 is in the shì...(de) construction, which is used to emphasize a particular element of the sentence; see . In Hokkien 是 sī acts as the copula, and 是 is the equivalent in Wu Chinese. Cantonese uses 係 () instead of 是; similarly, Hakka uses 係 he55. Siouan languages In Siouan languages like Lakota, in principle almost all words—according to their structure—are verbs. So not only (transitive, intransitive and so-called "stative") verbs but even nouns often behave like verbs and do not need to have copulas. For example, the word wičháša refers to a man, and the verb "to-be-a-man" is expressed as wimáčhaša/winíčhaša/wičháša (I am/you are/he is a man). Yet there also is a copula héčha (to be a ...) that in most cases is used: wičháša hemáčha/heníčha/héčha (I am/you are/he is a man). In order to express the statement "I am a doctor of profession," one has to say pezuta wičháša hemáčha. But, in order to express that that person is THE doctor (say, that had been phoned to help), one must use another copula iyé (to be the one): pežúta wičháša (kiŋ) miyé yeló (medicine-man DEF ART I-am-the-one MALE ASSERT). In order to refer to space (e.g., Robert is in the house), various verbs are used, e.g., yaŋkÁ (lit., to sit) for humans, or háŋ/hé (to stand upright) for inanimate objects of a certain shape. "Robert is in the house" could be translated as Robert thimáhel yaŋké (yeló), whereas "There's one restaurant next to the gas station" translates as Owótethipi wígli-oínažiŋ kiŋ hél isákhib waŋ hé. Constructed languages The constructed language Lojban has two words that act similar to a copula in natural languages. The clause me ... me'u turns whatever follows it into a predicate that means to be (among) what it follows. For example, me la .bob. 
(me'u) means "to be Bob," and me le ci mensi (me'u) means "to be one of the three sisters." Another one is du, which is itself a predicate that means all its arguments are the same thing (equal). One word which is often confused for a copula in Lojban, but isn't one, is cu. It merely indicates that the word which follows is the main predicate of the sentence. For example, lo pendo be mi cu zgipre means "my friend is a musician," but the word cu does not correspond to English is; instead, the word zgipre, which is a predicate, corresponds to the entire phrase "is a musician". The word cu is used to prevent lo pendo be mi zgipre, which would mean "the friend-of-me type of musician". See also Indo-European copula Nominal sentence Stative verb Subject complement Zero copula Citations General references (See "copular sentences" and "existential sentences and expletive there" in Volume II.) Moro, A. (1997) The Raising of Predicates. Cambridge University Press, Cambridge, England. Tüting, A. W. (December 2003). Essay on Lakota syntax. . Further reading External links Parts of speech Verb types
2,509
5,636
https://en.wikipedia.org/wiki/Chemist
Chemist
A chemist (from Greek chēm(ía) alchemy; replacing chymist from Medieval Latin alchemist) is a scientist trained in the study of chemistry. Chemists study the composition of matter and its properties. Chemists carefully describe the properties they study in terms of quantities, with detail on the level of molecules and their component atoms. Chemists carefully measure substance proportions, chemical reaction rates, and other chemical properties. In Commonwealth English, pharmacists are often called chemists.
Chemists use their knowledge to learn the composition and properties of unfamiliar substances, as well as to reproduce and synthesize large quantities of useful naturally occurring substances and create new artificial substances and useful processes. Chemists may specialize in any number of subdisciplines of chemistry. Materials scientists and metallurgists share much of the same education and skills with chemists. The work of chemists is often related to the work of chemical engineers, who are primarily concerned with the proper design, construction and evaluation of the most cost-effective large-scale chemical plants and who work closely with industrial chemists on the development of new processes and methods for the commercial-scale manufacture of chemicals and related products.
History of chemistry
The roots of chemistry can be traced to the phenomenon of burning. Fire was a mystical force that transformed one substance into another and thus was of primary interest to mankind. It was fire that led to the discovery of iron and glass. After gold was discovered and became a precious metal, many people were interested in finding a method that could convert other substances into gold. This led to the protoscience called alchemy. The word chemist is derived from the New Latin noun chimista, an abbreviation of alchimista (alchemist). Alchemists discovered many chemical processes that led to the development of modern chemistry. Chemistry, as we know it today, was invented by Antoine Lavoisier with his law of conservation of mass in 1783. The discovery of the chemical elements has a long history, culminating in the creation of the periodic table by Dmitri Mendeleev. The Nobel Prize in Chemistry, created in 1901, gives an excellent overview of chemical discovery since the start of the 20th century.
Education
Jobs for chemists generally require at least a bachelor's degree in chemistry, but many positions, especially those in research, require a Master of Science or a Doctor of Philosophy (PhD). Most undergraduate programs emphasize mathematics and physics as well as chemistry, partly because chemistry is also known as "the central science"; thus chemists ought to have a well-rounded knowledge of science. At the master's level and higher, students tend to specialize in a particular field. Fields of specialization include biochemistry, nuclear chemistry, organic chemistry, inorganic chemistry, polymer chemistry, analytical chemistry, physical chemistry, theoretical chemistry, quantum chemistry, environmental chemistry, and thermochemistry. Postdoctoral experience may be required for certain positions.
Workers whose work involves chemistry, but not at a level of complexity requiring a chemistry degree, are commonly referred to as chemical technicians. Such technicians, holding an associate degree, commonly do work such as simpler, routine analyses for quality control or in clinical laboratories.
A chemical technologist has more education or experience than a chemical technician but less than a chemist, often holding a bachelor's degree in a different field of science along with an associate degree in chemistry (or many credits related to chemistry), or having the same education as a chemical technician but more experience. There are also degrees specific to becoming a chemical technologist, which are somewhat distinct from those required when a student is interested in becoming a professional chemist. A chemical technologist is more involved than a chemical technician in the management and operation of the equipment and instrumentation necessary to perform chemical analyses. Chemical technologists are part of the team of a chemical laboratory in which the quality of the raw materials, intermediate products and finished products is analyzed. They also perform functions in the areas of environmental quality control and the operational phase of a chemical plant.
In addition to all the training usually given to chemical technologists in their respective degree (or one given via an associate degree), a chemist is also trained to understand more details related to chemical phenomena, so that the chemist can plan the steps needed to achieve a distinct goal via a chemistry-related endeavor. The higher the competency level achieved in the field of chemistry (as assessed via a combination of education, experience and personal achievements), the higher the responsibility given to that chemist and the more complicated the task might be. Chemistry, as a field, has so many applications that different tasks and objectives can be given to workers or scientists with these different levels of education or experience. The specific title of each job varies from position to position, depending on factors such as the kind of industry, the routine level of the task, the current needs of a particular enterprise, the size of the enterprise or hiring firm, the philosophy and management principles of the hiring firm, the visibility of the competency and individual achievements of the one seeking employment, and economic factors such as recession or depression, among others; this makes it difficult to categorize the exact roles of these chemistry-related workers as standard for a given level of education. Because of these factors affecting exact job titles with distinct responsibilities, some chemists might begin doing technician tasks while other chemists might begin doing more complicated tasks, such as those that also involve formal applied research, management, or supervision included within the responsibilities of the same job title. The level of supervision given to a chemist varies in a similar manner, depending on factors similar to those that affect the tasks demanded of a particular chemist. It is important that those interested in a chemistry degree understand the variety of roles available to them (on average), which vary depending on education and job experience.
Chemists who hold a bachelor's degree are most commonly involved in positions related to research assistance (working under the guidance of senior chemists in a research-oriented activity), or, alternatively, they may work on distinct (chemistry-related) aspects of a business, organization or enterprise, including quality control, quality assurance, manufacturing, production, formulation, inspection, method validation, visits to troubleshoot chemistry-related instruments, regulatory affairs, "on-demand" technical services, and chemical analysis for non-research purposes (e.g., as a legal request, for testing purposes, or for government or non-profit agencies); chemists may also work in environmental evaluation and assessment. Other jobs or roles may include sales and marketing of chemical products and chemistry-related instruments, or technical writing. The more experience obtained, the more independence and leadership or management roles these chemists may take on in those organizations. Some chemists with relatively more experience might change jobs or job positions to become a manager of a chemistry-related enterprise, a supervisor, an entrepreneur or a chemistry consultant. Other chemists choose to combine their education and experience as a chemist with a distinct credential to provide different services (e.g., forensic chemists, chemistry-related software development, patent law specialists, environmental law firm staff, scientific news reporting staff, engineering design staff, etc.).
In comparison, chemists who have obtained a Master of Science (M.S.) in chemistry or in a closely related discipline may find roles that allow them to enjoy more independence, leadership and responsibility earlier in their careers, with fewer years of experience than those whose highest degree is a bachelor's degree. Sometimes, M.S. chemists receive more complex duties than chemists whose highest academic degree is a bachelor's degree and who have the same or nearly the same years of job experience. There are positions that are open only to those who have at least a master's-level degree related to chemistry. Although good chemists without a Ph.D. but with relatively many years of experience may be allowed some applied research positions, the general rule is that Ph.D. chemists are preferred for research positions and are typically the preferred choice for the highest administrative positions in big enterprises involved in chemistry-related duties. Some positions, especially research-oriented ones, are open only to Ph.D. holders. Jobs that involve intensive research and actively seek to lead the discovery of completely new chemical compounds under specifically assigned funds and resources, or jobs that seek to develop new scientific theories, require a Ph.D. more often than not. Chemists whose highest academic degree is a Ph.D. are typically found in the research-and-development department of an enterprise and can also hold university positions as professors. Professors at research universities or at big universities usually have a Ph.D., and some research-oriented institutions might require postdoctoral training. Some smaller colleges (including some smaller four-year colleges or smaller non-research universities for undergraduates) as well as community colleges usually hire chemists with an M.S.
as professors too (and, rarely, so do some large universities that need part-time or temporary instructors or temporary staff), but when positions are scarce and applicants are many, they might prefer Ph.D. holders instead. Employment The three major employers of chemists are academic institutions, industry, especially the chemical industry and the pharmaceutical industry, and government laboratories. Chemistry typically is divided into several major sub-disciplines. There are also several main cross-disciplinary and more specialized fields of chemistry. There is a great deal of overlap between different branches of chemistry, as well as with other scientific fields such as biology, medicine, physics, radiology, and several engineering disciplines. Analytical chemistry is the analysis of material samples to gain an understanding of their chemical composition and structure. Analytical chemistry incorporates standardized experimental methods in chemistry. These methods may be used in all subdisciplines of chemistry, excluding purely theoretical chemistry. Biochemistry is the study of the chemicals, chemical reactions and chemical interactions that take place in living organisms. Biochemistry and organic chemistry are closely related, for example, in medicinal chemistry. Inorganic chemistry is the study of the properties and reactions of inorganic compounds. The distinction between organic and inorganic disciplines is not absolute and there is much overlap, most importantly in the sub-discipline of organometallic chemistry. Inorganic chemistry also encompasses the study of atomic and molecular structure and bonding. Medicinal chemistry is the science involved with designing, synthesizing and developing pharmaceutical drugs. Medicinal chemistry involves the identification, synthesis and development of new chemical entities suitable for therapeutic use. It also includes the study of existing drugs, their biological properties, and their quantitative structure-activity relationships. Organic chemistry is the study of the structure, properties, composition, mechanisms, and chemical reactions of carbon compounds. Physical chemistry is the study of the physical and fundamental basis of chemical systems and processes. In particular, the energetics and dynamics of such systems and processes are of interest to physical chemists. Important areas of study include chemical thermodynamics, chemical kinetics, electrochemistry, quantum chemistry, statistical mechanics, and spectroscopy. Physical chemistry has a large overlap with theoretical chemistry and molecular physics. Physical chemistry involves the use of calculus in deriving equations. Theoretical chemistry is the study of chemistry via theoretical reasoning (usually within mathematics or physics). In particular, the application of quantum mechanics to chemistry is called quantum chemistry. Since the end of the Second World War, the development of computers has allowed a systematic development of computational chemistry, which is the art of developing and applying computer programs for solving chemical problems. Theoretical chemistry has a large overlap with condensed matter physics and molecular physics. See reductionism. All the above major areas of chemistry employ chemists. 
Other fields where chemical degrees are useful include astrochemistry (and cosmochemistry), atmospheric chemistry, chemical engineering, chemo-informatics, electrochemistry, environmental science, forensic science, geochemistry, green chemistry, history of chemistry, materials science, medical science, molecular biology, molecular genetics, nanotechnology, nuclear chemistry, oenology, organometallic chemistry, petrochemistry, pharmacology, photochemistry, phytochemistry, polymer chemistry, supramolecular chemistry and surface chemistry. Professional societies Chemists may belong to professional societies specifically for professionals and researchers within the field of Chemistry, such as the Royal Society of Chemistry in the United Kingdom, or the American Chemical Society (ACS) in the United States. Honors and awards The highest honor awarded to chemists is the Nobel Prize in Chemistry, awarded since 1901, by the Royal Swedish Academy of Sciences. See also Pharmacist List of chemistry topics List of chemists List of Russian chemists List of important publications in chemistry List of scientific journals in chemistry List of compounds List of Chemistry Societies References External links American Chemical Society website Chemical Abstracts Service indexes and abstracts the world's chemistry-related literature and patents Chemists and Materials Scientists from the U.S. Department of Labor's Occupational Outlook Handbook Royal Society of Chemistry website History of Chemistry links for chemists Luminaries of the Chemical Sciences accomplishments, biography, and publications from 44 of the most influential chemists Selected Classic Papers from the History of Chemistry Links for Chemists guide to web sites related to chemistry ChemistryViews.org website Science occupations
2,511
5,638
https://en.wikipedia.org/wiki/Combustion
Combustion
Combustion, or burning, is a high-temperature exothermic redox chemical reaction between a fuel (the reductant) and an oxidant, usually atmospheric oxygen, that produces oxidized, often gaseous products, in a mixture termed smoke. Combustion does not always result in fire, because a flame is only visible when substances undergoing combustion vaporize, but when it does, a flame is a characteristic indicator of the reaction. While the activation energy must be overcome to initiate combustion (e.g., using a lit match to light a fire), the heat from a flame may provide enough energy to make the reaction self-sustaining. Combustion is often a complicated sequence of elementary radical reactions. Solid fuels, such as wood and coal, first undergo endothermic pyrolysis to produce gaseous fuels whose combustion then supplies the heat required to produce more of them. Combustion is often hot enough that incandescent light in the form of either glowing or a flame is produced. A simple example can be seen in the combustion of hydrogen and oxygen into water vapor, a reaction which is commonly used to fuel rocket engines. This reaction releases 242 kJ/mol of heat and reduces the enthalpy accordingly (at constant temperature and pressure): 2H2(g) + O2(g) -> 2H2O(g) Uncatalyzed combustion in air requires relatively high temperatures. Complete combustion is stoichiometric concerning the fuel, where there is no remaining fuel, and ideally, no residual oxidant. Thermodynamically, the chemical equilibrium of combustion in air is overwhelmingly on the side of the products. However, complete combustion is almost impossible to achieve, since the chemical equilibrium is not necessarily reached, or the products may contain unburnt substances such as carbon monoxide, hydrogen and even carbon (soot or ash). Thus, the produced smoke is usually toxic and contains unburned or partially oxidized products. Any combustion at high temperatures in atmospheric air, which is 78 percent nitrogen, will also create small amounts of several nitrogen oxides, commonly referred to as NOx, since the combustion of nitrogen is thermodynamically favored at high, but not low, temperatures. Since burning is rarely clean, flue gas cleaning or catalytic converters may be required by law. Fires occur naturally, ignited by lightning strikes or by volcanic products. Combustion (fire) was the first controlled chemical reaction discovered by humans, in the form of campfires and bonfires, and continues to be the main method to produce energy for humanity. Usually, the fuel is carbon, hydrocarbons, or more complicated mixtures such as wood that contain partially oxidized hydrocarbons. The thermal energy produced from the combustion of either fossil fuels such as coal or oil, or from renewable fuels such as firewood, is harvested for diverse uses such as cooking, production of electricity or industrial or domestic heating. Combustion is also currently the only reaction used to power rockets. Combustion is also used to destroy (incinerate) waste, both nonhazardous and hazardous. Oxidants for combustion have high oxidation potential and include atmospheric or pure oxygen, chlorine, fluorine, chlorine trifluoride, nitrous oxide and nitric acid. For instance, hydrogen burns in chlorine to form hydrogen chloride with the liberation of heat and light characteristic of combustion. Although usually not catalyzed, combustion can be catalyzed by platinum or vanadium, as in the contact process. 
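As a quick check on the 242 kJ/mol figure quoted above, the following minimal sketch (in Python; the molar mass is a standard reference value, and the function name is illustrative rather than from the source) converts the per-mole heat of reaction into heat released per kilogram of hydrogen burned:

# Heat released by burning hydrogen, using the per-mole value quoted above.
# Assumption: 242 kJ released per mole of H2 burned (water leaving as vapor).
MOLAR_MASS_H2 = 2.016     # g/mol, standard value
HEAT_PER_MOL_KJ = 242.0   # kJ per mol of H2

def heat_released_mj(mass_h2_kg: float) -> float:
    """Heat released (MJ) when mass_h2_kg of hydrogen burns completely."""
    moles_h2 = mass_h2_kg * 1000.0 / MOLAR_MASS_H2
    return moles_h2 * HEAT_PER_MOL_KJ / 1000.0   # kJ -> MJ

print(round(heat_released_mj(1.0)))   # ~120 MJ per kilogram of hydrogen

The result, roughly 120 MJ per kilogram, is consistent with the commonly cited lower heating value of hydrogen.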
Types Complete and incomplete Complete In complete combustion, the reactant burns in oxygen and produces a limited number of products. When a hydrocarbon burns in oxygen, the reaction will primarily yield carbon dioxide and water. When elements are burned, the products are primarily the most common oxides. Carbon will yield carbon dioxide, sulfur will yield sulfur dioxide, and iron will yield iron(III) oxide. Nitrogen is not considered to be a combustible substance when oxygen is the oxidant. Still, small amounts of various nitrogen oxides (commonly designated NOx species) form when air is the oxidant. Combustion is not necessarily favorable to the maximum degree of oxidation, and it can be temperature-dependent. For example, sulfur trioxide is not produced quantitatively by the combustion of sulfur. NOx species appear in significant amounts only at high flame temperatures, and more is produced at higher temperatures. The amount of NOx is also a function of oxygen excess. In most industrial applications and in fires, air is the source of oxygen (O2). In air, each mole of oxygen is mixed with approximately 3.7 mol of nitrogen. Nitrogen does not take part in combustion, but at high temperatures, some nitrogen will be converted to NOx (mostly NO, with much smaller amounts of NO2). On the other hand, when there is insufficient oxygen to combust the fuel completely, some fuel carbon is converted to carbon monoxide, and some of the hydrogen remains unreacted. A complete set of equations for the combustion of a hydrocarbon in air, therefore, requires an additional calculation for the distribution of oxygen between the carbon and hydrogen in the fuel. The amount of air required for complete combustion to take place is known as theoretical (stoichiometric) air. However, in practice, the air used is 2-3 times the theoretical air. Incomplete Incomplete combustion will occur when there is not enough oxygen to allow the fuel to react completely to produce carbon dioxide and water. It also happens when the combustion is quenched by a heat sink, such as a solid surface or flame trap. As is the case with complete combustion, water is produced by incomplete combustion; however, carbon and carbon monoxide are produced instead of carbon dioxide. For most fuels, such as diesel oil, coal, or wood, pyrolysis occurs before combustion. In incomplete combustion, products of pyrolysis remain unburnt and contaminate the smoke with noxious particulate matter and gases. Partially oxidized compounds are also a concern; partial oxidation of ethanol can produce harmful acetaldehyde, and carbon can produce toxic carbon monoxide. The design of combustion devices, such as burners and internal combustion engines, can improve the quality of combustion. Further improvements are achievable by catalytic after-burning devices (such as catalytic converters) or by the simple partial return of the exhaust gases into the combustion process. Such devices are required by environmental legislation for cars in most countries. They may be necessary to enable large combustion devices, such as thermal power stations, to reach legal emission standards. The degree of combustion can be measured and analyzed with test equipment. HVAC contractors, firefighters and engineers use combustion analyzers to test the efficiency of a burner during the combustion process. Also, the efficiency of an internal combustion engine can be measured in this way, and some U.S. states and local municipalities use combustion analysis to define and rate the efficiency of vehicles on the road today. 
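Combustion analyzers of the kind mentioned above typically report the oxygen content of the dry flue gas, from which excess air can be estimated. The sketch below (in Python) uses a common field approximation rather than anything stated in the source; the 20.9 % figure, the dry-basis simplification, and the function name are assumptions for illustration only:

# Rough excess-air estimate from the O2 measured in dry flue gas.
# Field approximation: excess air ~ O2 / (20.9 - O2), with O2 in volume percent.
# Ignores fuel composition and any CO present in the flue gas.

def excess_air_fraction(o2_percent_dry: float) -> float:
    """Excess air as a fraction (0.15 means 15% excess air)."""
    if not 0.0 <= o2_percent_dry < 20.9:
        raise ValueError("flue-gas O2 must be between 0 and 20.9 vol% (dry)")
    return o2_percent_dry / (20.9 - o2_percent_dry)

# Example: 3.5 vol% O2 in the dry flue gas corresponds to roughly 20% excess air.
print(round(excess_air_fraction(3.5) * 100, 1))

A more exact figure would require the full fuel analysis and a material balance, as discussed later under combustion management.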
Carbon monoxide is one of the products of incomplete combustion. The formation of carbon monoxide produces less heat than the formation of carbon dioxide, so complete combustion is greatly preferred, especially as carbon monoxide is a poisonous gas. When breathed, carbon monoxide takes the place of oxygen and combines with some of the hemoglobin in the blood, rendering it unable to transport oxygen. Problems associated with incomplete combustion Environmental problems Nitrogen and sulfur oxides combine with water and oxygen in the atmosphere, creating nitric and sulfuric acids, which return to Earth's surface as acid deposition, or "acid rain." Acid deposition harms aquatic organisms and kills trees. Because it makes certain nutrients that plants need, such as calcium and phosphorus, less available, it reduces the productivity of ecosystems and farms. An additional problem associated with nitrogen oxides is that they, along with hydrocarbon pollutants, contribute to the formation of ground-level ozone, a major component of smog. Human health problems Breathing carbon monoxide causes headache, dizziness, vomiting, and nausea. If carbon monoxide levels are high enough, humans become unconscious or die. Exposure to moderate and high levels of carbon monoxide over long periods is positively correlated with the risk of heart disease. People who survive severe carbon monoxide poisoning may suffer long-term health problems. Carbon monoxide from the air is absorbed in the lungs and binds to hemoglobin in the red blood cells, reducing their capacity to carry oxygen throughout the body. Smoldering Smoldering is the slow, low-temperature, flameless form of combustion, sustained by the heat evolved when oxygen directly attacks the surface of a condensed-phase fuel. It is a typically incomplete combustion reaction. Solid materials that can sustain a smoldering reaction include coal, cellulose, wood, cotton, tobacco, peat, duff, humus, synthetic foams, charring polymers (including polyurethane foam) and dust. Common examples of smoldering phenomena are the initiation of residential fires on upholstered furniture by weak heat sources (e.g., a cigarette, a short-circuited wire) and the persistent combustion of biomass behind the flaming fronts of wildfires. Rapid Rapid combustion is a form of combustion, otherwise known as a fire, in which large amounts of heat and light energy are released, which often results in a flame. This is used in machinery such as internal combustion engines and in thermobaric weapons. Such combustion is frequently called an explosion, though for an internal combustion engine this is inaccurate. An internal combustion engine nominally operates on a controlled rapid burn. When the fuel-air mixture in an internal combustion engine explodes, that is known as detonation. Spontaneous Spontaneous combustion is a type of combustion that occurs by self-heating (increase in temperature due to exothermic internal reactions), followed by thermal runaway (self-heating which rapidly accelerates to high temperatures) and finally, ignition. For example, phosphorus self-ignites at room temperature without the application of heat. Organic materials undergoing bacterial composting can generate enough heat to reach the point of combustion. Turbulent Combustion resulting in a turbulent flame is the type most used for industrial applications (e.g. gas turbines, gasoline engines, etc.) 
because the turbulence helps the mixing process between the fuel and oxidizer. Micro-gravity The term 'micro' gravity refers to a gravitational state that is 'low' (i.e., 'micro' in the sense of 'small' and not necessarily a millionth of Earth's normal gravity) such that the influence of buoyancy on physical processes may be considered small relative to other flow processes that would be present at normal gravity. In such an environment, the thermal and flow transport dynamics can behave quite differently than in normal gravity conditions (e.g., a candle's flame takes the shape of a sphere). Microgravity combustion research contributes to the understanding of a wide variety of aspects that are relevant to both the environment of a spacecraft (e.g., fire dynamics relevant to crew safety on the International Space Station) and terrestrial (Earth-based) conditions (e.g., droplet combustion dynamics to assist developing new fuel blends for improved combustion, materials fabrication processes, thermal management of electronic systems, multiphase flow boiling dynamics, and many others). Micro-combustion Combustion processes that happen in very small volumes are considered micro-combustion. The high surface-to-volume ratio increases specific heat loss. Quenching distance plays a vital role in stabilizing the flame in such combustion chambers. Chemical equations Stoichiometric combustion of a hydrocarbon in oxygen Generally, the chemical equation for stoichiometric combustion of a hydrocarbon in oxygen is: CxHy + zO2 -> xCO2 + (y/2)H2O, where z = x + y/4. For example, the stoichiometric burning of propane (the fuel) in oxygen is: C3H8 + 5O2 -> 3CO2 + 4H2O Stoichiometric combustion of a hydrocarbon in air If the stoichiometric combustion takes place using air as the oxygen source, the nitrogen present in the air (Atmosphere of Earth) can be added to the equation (although it does not react) to show the stoichiometric composition of the fuel in air and the composition of the resultant flue gas. Note that treating all non-oxygen components in air as nitrogen gives a 'nitrogen' to oxygen ratio of 3.77, i.e. (100% - O2%) / O2% where O2% is 20.95% vol: CxHy + zO2 + 3.77zN2 -> xCO2 + (y/2)H2O + 3.77zN2, where z = x + y/4. For example, the stoichiometric combustion of propane (C3H8) in air is: C3H8 + 5O2 + 18.87N2 -> 3CO2 + 4H2O + 18.87N2 The stoichiometric composition of propane in air is 1 / (1 + 5 + 18.87) = 4.02% vol. Analogous stoichiometric combustion reactions can be written for fuels that also contain oxygen (CHO), sulfur (CHOS), nitrogen (CHONS), or fluorine (CHOF). Trace combustion products Various other substances begin to appear in significant amounts in combustion products when the flame temperature is sufficiently high. When excess air is used, nitrogen may oxidize to NO and, to a much lesser extent, to NO2. CO forms by disproportionation of CO2, and H2 and OH form by disproportionation of H2O. For example, when one mole of propane is burned with 28.6 moles of air (120% of the stoichiometric amount), the combustion products contain 3.3% O2. At , the equilibrium combustion products contain 0.03% and 0.002% . At , the combustion products contain 0.17% , 0.05% , 0.01% , and 0.004% . 
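The stoichiometric relations above are easy to check numerically. Below is a minimal sketch (in Python; the function and variable names are illustrative assumptions, not taken from the source) that computes the stoichiometric oxygen and air requirements for a CxHy fuel and reproduces the 18.87 mol of 'nitrogen' and the 4.02 vol% figure quoted for propane:

# Stoichiometric combustion of a hydrocarbon CxHy, following
#   CxHy + z O2 -> x CO2 + (y/2) H2O,  with z = x + y/4,
# and treating air as O2 plus (100 - 20.95)/20.95 mol of 'nitrogen' per mol O2.
N2_PER_O2 = (100.0 - 20.95) / 20.95   # ~3.77, as in the text above

def stoichiometric_mix(x: int, y: int):
    """Return (z, n2, fuel_fraction) for one mole of CxHy burned in air."""
    z = x + y / 4.0              # moles of O2 per mole of fuel
    n2 = N2_PER_O2 * z           # accompanying 'nitrogen'
    fuel_fraction = 1.0 / (1.0 + z + n2)
    return z, n2, fuel_fraction

# Propane, C3H8: z = 5, nitrogen ~ 18.87 mol, fuel fraction ~ 4.02 vol%
z, n2, frac = stoichiometric_mix(3, 8)
print(z, round(n2, 2), f"{100.0 * frac:.2f}%")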
Diesel engines are run with an excess of oxygen to combust small particles that tend to form with only a stoichiometric amount of oxygen, necessarily producing nitrogen oxide emissions. Both the United States and European Union enforce limits on vehicle nitrogen oxide emissions, which necessitate the use of special catalytic converters or treatment of the exhaust with urea (see Diesel exhaust fluid). Incomplete combustion of a hydrocarbon in oxygen The incomplete (partial) combustion of a hydrocarbon with oxygen produces a gas mixture containing mainly CO2, CO, H2O, and H2. Such gas mixtures are commonly prepared for use as protective atmospheres for the heat-treatment of metals and for gas carburizing. The general reaction equation for incomplete combustion of one mole of a hydrocarbon in oxygen is: CxHy (fuel) + zO2 -> aCO2 + bCO + cH2O + dH2 When z falls below roughly 50% of the stoichiometric value, CH4 can become an important combustion product; when z falls below roughly 35% of the stoichiometric value, elemental carbon may become stable. The products of incomplete combustion can be calculated with the aid of a material balance, together with the assumption that the combustion products reach equilibrium. For example, in the combustion of one mole of propane (C3H8) with four moles of O2, seven moles of combustion gas are formed, and z is 80% of the stoichiometric value. The three elemental balance equations are: Carbon: a + b = 3; Hydrogen: 2c + 2d = 8; Oxygen: 2a + b + c = 8. These three equations are insufficient in themselves to calculate the combustion gas composition. However, at the equilibrium position, the water-gas shift reaction gives another equation: CO + H2O -> CO2 + H2; at the temperature considered here, the value of the equilibrium constant K is 0.728. Solving, the combustion gas consists of 42.4% H2O, 29.0% CO2, 14.7% H2, and 13.9% CO. Carbon becomes a stable phase at this temperature and pressure when z is less than 30% of the stoichiometric value, at which point the combustion products contain more than 98% CO and H2 and about 0.5% CH4. Substances or materials which undergo combustion are called fuels. The most common examples are natural gas, propane, kerosene, diesel, petrol, charcoal, coal, wood, etc. Liquid fuels Combustion of a liquid fuel in an oxidizing atmosphere actually happens in the gas phase. It is the vapor that burns, not the liquid. Therefore, a liquid will normally catch fire only above a certain temperature: its flash point. The flash point of a liquid fuel is the lowest temperature at which it can form an ignitable mix with air. It is the minimum temperature at which there is enough evaporated fuel in the air to start combustion. Gaseous fuels Combustion of gaseous fuels may occur through one of four distinctive types of burning: diffusion flame, premixed flame, autoignitive reaction front, or as a detonation. The type of burning that actually occurs depends on the degree to which the fuel and oxidizer are mixed prior to heating: for example, a diffusion flame is formed if the fuel and oxidizer are separated initially, whereas a premixed flame is formed otherwise. Similarly, the type of burning also depends on the pressure: a detonation, for example, is an autoignitive reaction front coupled to a strong shock wave giving it its characteristic high-pressure peak and high detonation velocity. Solid fuels The act of combustion consists of three relatively distinct but overlapping phases: Preheating phase, when the unburned fuel is heated up to its flash point and then fire point. Flammable gases start being evolved in a process similar to dry distillation. 
Distillation phase or gaseous phase, when the mix of evolved flammable gases with oxygen is ignited. Energy is produced in the form of heat and light. Flames are often visible. Heat transfer from the combustion to the solid maintains the evolution of flammable vapours. Charcoal phase or solid phase, when the output of flammable gases from the material is too low for the persistent presence of flame and the charred fuel does not burn rapidly but just glows and later only smoulders. Combustion management Efficient process heating requires recovery of the largest possible part of a fuel's heat of combustion into the material being processed. There are many avenues of loss in the operation of a heating process. Typically, the dominant loss is sensible heat leaving with the offgas (i.e., the flue gas). The temperature and quantity of offgas indicate its heat content (enthalpy), so keeping its quantity low minimizes heat loss. In a perfect furnace, the combustion air flow would be matched to the fuel flow to give each fuel molecule the exact amount of oxygen needed to cause complete combustion. However, in the real world, combustion does not proceed in a perfect manner. Unburned fuel (usually CO and H2) discharged from the system represents a heating value loss (as well as a safety hazard). Since combustibles are undesirable in the offgas, while the presence of unreacted oxygen there presents minimal safety and environmental concerns, the first principle of combustion management is to provide more oxygen than is theoretically needed to ensure that all the fuel burns. For methane (CH4) combustion, for example, slightly more than two molecules of oxygen are required. The second principle of combustion management, however, is to not use too much oxygen. The correct amount of oxygen requires three types of measurement: first, active control of air and fuel flow; second, offgas oxygen measurement; and third, measurement of offgas combustibles. For each heating process, there exists an optimum condition of minimal offgas heat loss with acceptable levels of combustibles concentration. Minimizing excess oxygen pays an additional benefit: for a given offgas temperature, the NOx level is lowest when excess oxygen is kept lowest. Adherence to these two principles is furthered by making material and heat balances on the combustion process. The material balance directly relates the air/fuel ratio to the percentage of O2 in the combustion gas. The heat balance relates the heat available for the charge to the overall net heat produced by fuel combustion. Additional material and heat balances can be made to quantify the thermal advantage from preheating the combustion air, or enriching it in oxygen. Reaction mechanism Combustion in oxygen is a chain reaction in which many distinct radical intermediates participate. The high energy required for initiation is explained by the unusual structure of the dioxygen molecule. The lowest-energy configuration of the dioxygen molecule is a stable, relatively unreactive diradical in a triplet spin state. Bonding can be described with three bonding electron pairs and two antibonding electrons, with spins aligned, such that the molecule has nonzero total angular momentum. Most fuels, on the other hand, are in a singlet state, with paired spins and zero total angular momentum. Interaction between the two is quantum mechanically a "forbidden transition", i.e. possible with a very low probability. 
To initiate combustion, energy is required to force dioxygen into a spin-paired state, or singlet oxygen. This intermediate is extremely reactive. The energy is supplied as heat, and the reaction then produces additional heat, which allows it to continue. Combustion of hydrocarbons is thought to be initiated by hydrogen atom abstraction (not proton abstraction) from the fuel to oxygen, to give a hydroperoxide radical (HOO). This reacts further to give hydroperoxides, which break up to give hydroxyl radicals. There is a great variety of these processes that produce fuel radicals and oxidizing radicals. Oxidizing species include singlet oxygen, hydroxyl, monatomic oxygen, and hydroperoxyl. Such intermediates are short-lived and cannot be isolated. However, non-radical intermediates are stable and are produced in incomplete combustion. An example is acetaldehyde produced in the combustion of ethanol. An intermediate in the combustion of carbon and hydrocarbons, carbon monoxide, is of special importance because it is a poisonous gas, but also economically useful for the production of syngas. Solid and heavy liquid fuels also undergo a great number of pyrolysis reactions that give more easily oxidized, gaseous fuels. These reactions are endothermic and require constant energy input from the ongoing combustion reactions. A lack of oxygen or other improperly designed conditions results in these noxious and carcinogenic pyrolysis products being emitted as thick, black smoke. The rate of combustion is the amount of a material that undergoes combustion over a period of time. It can be expressed in grams per second (g/s) or kilograms per second (kg/s). Detailed descriptions of combustion processes, from the chemical kinetics perspective, require the formulation of large and intricate webs of elementary reactions. For instance, the combustion of hydrocarbon fuels typically involves hundreds of chemical species reacting according to thousands of reactions. The inclusion of such mechanisms within computational flow solvers still represents a challenging task, mainly in two respects. First, the number of degrees of freedom (proportional to the number of chemical species) can be dramatically large; second, the source term due to reactions introduces a disparate range of time scales which makes the whole dynamical system stiff. As a result, the direct numerical simulation of turbulent reactive flows with heavy fuels soon becomes intractable even for modern supercomputers. Therefore, a plethora of methodologies have been devised for reducing the complexity of combustion mechanisms without resorting to high detail levels. Examples are provided by: The Relaxation Redistribution Method (RRM) The Intrinsic Low-Dimensional Manifold (ILDM) approach and further developments The invariant-constrained equilibrium edge preimage curve method. A few variational approaches The Computational Singular perturbation (CSP) method and further developments. The Rate Controlled Constrained Equilibrium (RCCE) and Quasi Equilibrium Manifold (QEM) approach. The G-Scheme. The Method of Invariant Grids (MIG). Kinetic modelling Kinetic modelling may be explored for insight into the reaction mechanisms of thermal decomposition in the combustion of different materials, using, for instance, thermogravimetric analysis. Temperature Assuming perfect combustion conditions, such as complete combustion under adiabatic conditions (i.e., no heat loss or gain), the adiabatic combustion temperature can be determined. 
The formula that yields this temperature is based on the first law of thermodynamics and takes note of the fact that the heat of combustion is used entirely for heating the fuel, the combustion air or oxygen, and the combustion product gases (commonly referred to as the flue gas). In the case of fossil fuels burnt in air, the combustion temperature depends on all of the following: the heating value; the stoichiometric air-to-fuel ratio; the specific heat capacity of fuel and air; the air and fuel inlet temperatures. The adiabatic combustion temperature (also known as the adiabatic flame temperature) increases for higher heating values and inlet air and fuel temperatures and for stoichiometric air ratios approaching one. Most commonly, the adiabatic combustion temperatures for coals are around (for inlet air and fuel at ambient temperatures and for ), around for oil and for natural gas. In industrial fired heaters, power station steam generators, and large gas-fired turbines, the more common way of expressing the usage of more than the stoichiometric combustion air is percent excess combustion air. For example, excess combustion air of 15 percent means that 15 percent more than the required stoichiometric air is being used. Instabilities Combustion instabilities are typically violent pressure oscillations in a combustion chamber. These pressure oscillations can be as high as 180 dB, and long-term exposure to these cyclic pressure and thermal loads reduces the life of engine components. In rockets, such as the F-1 engine used in the Saturn V program, instabilities led to massive damage to the combustion chamber and surrounding components. This problem was solved by re-designing the fuel injector. In liquid jet engines, the droplet size and distribution can be used to attenuate the instabilities. Combustion instabilities are a major concern in ground-based gas turbine engines because of NOx emissions. The tendency is to run lean, with an equivalence ratio less than 1, to reduce the combustion temperature and thus reduce the NOx emissions; however, running the combustion lean makes it very susceptible to combustion instability. The Rayleigh Criterion is the basis for analysis of thermoacoustic combustion instability and is evaluated using the Rayleigh Index over one cycle of instability, G(x) = \frac{1}{T}\int_{T} q'(x,t)\, p'(x,t)\, dt, where q' is the heat release rate perturbation and p' is the pressure fluctuation. When the heat release oscillations are in phase with the pressure oscillations, the Rayleigh Index is positive and the magnitude of the thermoacoustic instability is maximised. On the other hand, if the Rayleigh Index is negative, then thermoacoustic damping occurs. The Rayleigh Criterion implies that thermoacoustic instability can be optimally controlled by having heat release oscillations 180 degrees out of phase with pressure oscillations at the same frequency. This minimizes the Rayleigh Index. See also Related concepts Air–fuel ratio Autoignition temperature Chemical looping combustion Deflagration Detonation Explosion Fire Flame Heterogeneous combustion Markstein number Phlogiston theory (historical) Spontaneous combustion Machines and equipment Boiler Bunsen burner External combustion engine Furnace Gas turbine Internal combustion engine Rocket engine Scientific and engineering societies International Flame Research Foundation The Combustion Institute Other List of light sources References Further reading Chemical reactions
2,513
5,645
https://en.wikipedia.org/wiki/Cult%20film
Cult film
A cult film or cult movie, also commonly referred to as a cult classic, is a film that has acquired a cult following. Cult films are known for their dedicated, passionate fanbase which forms an elaborate subculture, members of which engage in repeated viewings, dialogue-quoting, and audience participation. Inclusive definitions allow for major studio productions, especially box-office bombs, while exclusive definitions focus more on obscure, transgressive films shunned by the mainstream. The difficulty in defining the term and subjectivity of what qualifies as a cult film mirror classificatory disputes about art. The term cult film itself was first used in the 1970s to describe the culture that surrounded underground films and midnight movies, though cult was in common use in film analysis for decades prior to that. Cult films trace their origin back to controversial and suppressed films kept alive by dedicated fans. In some cases, reclaimed or rediscovered films have acquired cult followings decades after their original release, occasionally for their camp value. Other cult films have since become well-respected or reassessed as classics; there is debate as to whether these popular and accepted films are still cult films. After failing at the cinema, some cult films have become regular fixtures on cable television or profitable sellers on home video. Others have inspired their own film festivals. Cult films can both appeal to specific subcultures and form their own subcultures. Other media that reference cult films can easily identify which demographics they desire to attract and offer savvy fans an opportunity to demonstrate their knowledge. Cult films frequently break cultural taboos, and many feature excessive displays of violence, gore, sexuality, profanity, or combinations thereof. This can lead to controversy, censorship, and outright bans; less transgressive films may attract similar amounts of controversy when critics call them frivolous or incompetent. Films that fail to attract requisite amounts of controversy may face resistance when labeled as cult films. Mainstream films and big budget blockbusters have attracted cult followings similar to more underground and lesser known films; fans of these films often emphasize the films' niche appeal and reject the more popular aspects. Fans who like the films for the wrong reasons, such as perceived elements that represent mainstream appeal and marketing, will often be ostracized or ridiculed. Likewise, fans who stray from accepted subcultural scripts may experience similar rejection. Since the late 1970s, cult films have become increasingly popular. Films that once would have been limited to obscure cult followings are now capable of breaking into the mainstream, and showings of cult films have proved to be a profitable business venture. Overbroad usage of the term has resulted in controversy, as purists state it has become a meaningless descriptor applied to any film that is the slightest bit weird or unconventional; others accuse Hollywood studios of trying to artificially create cult films or use the term as a marketing tactic. Films are frequently stated to be an "instant cult classic" now, occasionally before they are released. Fickle fans on the Internet have latched on to unreleased films only to abandon them later on release. At the same time, other films have acquired massive, quick cult followings, owing to spreading virally through social media. 
Easy access to cult films via video on demand and peer-to-peer file sharing has led some critics to pronounce the death of cult films. Definition A cult film is any film that has a cult following, although the term is not easily defined and can be applied to a wide variety of films. Some definitions exclude films that have been released by major studios or have big budgets, that try specifically to become cult films, or become accepted by mainstream audiences and critics. Cult films are defined by audience reaction as much as by their content. This may take the form of elaborate and ritualized audience participation, film festivals, or cosplay. Over time, the definition has become more vague and inclusive as it drifts away from earlier, stricter views. Increasing use of the term by mainstream publications has resulted in controversy, as cinephiles argue that the term has become meaningless or "elastic, a catchall for anything slightly maverick or strange". Academic Mark Shiel has criticized the term itself as being a weak concept, reliant on subjectivity; different groups can interpret films in their own terms. According to feminist scholar Joanne Hollows, this subjectivity causes films with large female cult followings to be perceived as too mainstream and not transgressive enough to qualify as a cult film. Academic Mike Chopra‑Gant says that cult films become decontextualized when studied as a group, and Shiel criticizes this recontextualization as cultural commodification. In 2008, Cineaste asked a range of academics for their definition of a cult film. Several people defined cult films primarily in terms of their opposition to mainstream films and conformism, explicitly requiring a transgressive element, though others disputed the transgressive potential, given the demographic appeal to conventional moviegoers and mainstreaming of cult films. Jeffrey Andrew Weinstock instead called them mainstream films with transgressive elements. Most definitions also required a strong community aspect, such as obsessed fans or ritualistic behavior. Citing misuse of the term, Mikel J. Koven took a self-described hard-line stance that rejected definitions that use any other criteria. Matt Hills instead stressed the need for an open-ended definition rooted in structuration, where the film and the audience reaction are interrelated and neither is prioritized. Ernest Mathijs focused on the accidental nature of cult followings, arguing that cult film fans consider themselves too savvy to be marketed to, while Jonathan Rosenbaum rejected the continued existence of cult films and called the term a marketing buzzword. Mathijs suggests that cult films help to understand ambiguity and incompleteness in life given the difficulty in even defining the term. That cult films can have opposing qualities – such as good and bad, failure and success, innovative and retro – helps to illustrate that art is subjective and never self-evident. This ambiguity leads critics of postmodernism to accuse cult films of being beyond criticism, as the emphasis is now on personal interpretation rather than critical analysis or metanarratives. These inherent dichotomies can lead audiences to be split between ironic and earnest fans. Writing in Defining Cult Movies, Jancovich et al. quote academic Jeffrey Sconce, who defines cult films in terms of paracinema, marginal films that exist outside critical and cultural acceptance: everything from exploitation to beach party musicals to softcore pornography. 
However, they reject cult films as having a single unifying feature; instead, they state that cult films are united in their "subcultural ideology" and opposition to mainstream tastes, itself a vague and undefinable term. Cult followings themselves can range from adoration to contempt, and they have little in common except for their celebration of nonconformity – even the bad films ridiculed by fans are artistically nonconformist, albeit unintentionally. At the same time, they state that bourgeois, masculine tastes are frequently reinforced, which makes cult films more of an internal conflict within the bourgeoisie, rather than a rebellion against it. This results in an anti-academic bias despite the use of formal methodologies, such as defamiliarization. This contradiction exists in many subcultures, especially those dependent on defining themselves in terms of opposition to the mainstream. This nonconformity is eventually co-opted by the dominant forces, such as Hollywood, and marketed to the mainstream. Academic Xavier Mendik also defines cult films as opposing the mainstream and further proposes that films can become cult by virtue of their genre or content, especially if it is transgressive. Due to their rejection of mainstream appeal, Mendik says cult films can be more creative and political; times of relative political instability produce more interesting films. General overview Cult films have existed since the early days of cinema. Film critic Harry Alan Potamkin traces them back to 1910s France and the reception of Pearl White, William S. Hart, and Charlie Chaplin, which he described as "a dissent from the popular ritual". Nosferatu (1922) was an unauthorized adaptation of Bram Stoker's Dracula. Stoker's widow sued the production company and drove it to bankruptcy. All known copies of the film were destroyed, and Nosferatu became an early cult film, kept alive by a cult following that circulated illegal bootlegs. Academic Chuck Kleinhans identifies the Marx Brothers as making other early cult films. On their original release, some highly regarded classics from the Golden Age of Hollywood were panned by critics and audiences and relegated to cult status. The Night of the Hunter (1955) was a cult film for years, quoted often and championed by fans, before it was reassessed as an important and influential classic. During this time, American exploitation films and imported European art films were marketed similarly. Although critics Pauline Kael and Arthur Knight argued against arbitrary divisions into high and low culture, American films settled into rigid genres; European art films continued to push the boundaries of simple definitions, and these exploitative art films and artistic exploitation films would go on to influence American cult films. Much like later cult films, these early exploitation films encouraged audience participation, influenced by live theater and vaudeville. Modern cult films grew from 1960s counterculture and underground films, popular among those who rejected mainstream Hollywood films. These underground film festivals led to the creation of midnight movies, which attracted cult followings. The term cult film itself was an outgrowth of this movement and was first used in the 1970s, though cult had been in use for decades in film analysis with both positive and negative connotations. These films were more concerned with cultural significance than the social justice sought by earlier avant-garde films. 
Midnight movies became more popular and mainstream, peaking with the release of The Rocky Horror Picture Show (1975), which finally found its audience several years after its release. Eventually, the rise of home video would marginalize midnight movies once again, after which many directors joined the burgeoning independent film scene or went back underground. Home video would give a second life to box-office flops, as positive word-of-mouth or excessive replay on cable television led these films to develop an appreciative audience, as well as obsessive replay and study. For example, The Beastmaster (1982), despite its failure at the box office, became one of the most played movies on American cable television and developed into a cult film. Home video and television broadcasts of cult films were initially greeted with hostility. Joanne Hollows states that they were seen as turning cult films mainstream – in effect, feminizing them by opening them to distracted, passive audiences. Releases from major studios – such as The Big Lebowski (1998), which was distributed by Universal Studios – can become cult films when they fail at the box office and develop a cult following through reissues, such as midnight movies, festivals, and home video. Hollywood films, due to their nature, are more likely to attract this kind of attention, which leads to a mainstreaming effect of cult culture. With major studios behind them, even financially unsuccessful films can be re-released multiple times, which plays into a trend to capture audiences through repetitious reissues. The constant use of profanity and drugs in otherwise mainstream, Hollywood films, such as The Big Lebowski, can alienate critics and audiences yet lead to a large cult following among more open-minded demographics not often associated with cult films, such as Wall Street bankers and professional soldiers. Thus, even comparatively mainstream films can satisfy the traditional demands of a cult film, perceived by fans as transgressive, niche, and uncommercial. Discussing his reputation for making cult films, Bollywood director Anurag Kashyap said, "I didn't set out to make cult films. I wanted to make box-office hits." Writing in Cult Cinema, academics Ernest Mathijs and Jamie Sexton state that this acceptance of mainstream culture and commercialism is not out of character, as cult audiences have a more complex relationship to these concepts: they are more opposed to mainstream values and excessive commercialism than they are anything else. In a global context, popularity can vary widely by territory, especially with regard to limited releases. Mad Max (1979) was an international hit – except in America where it became an obscure cult favorite, ignored by critics and available for years only in a dubbed version though it earned over $100M internationally. Foreign cinema can put a different spin on popular genres, such as Japanese horror, which was initially a cult favorite in America. Asian imports to the West are often marketed as exotic cult films and of interchangeable national identity, which academic Chi-Yun Shin criticizes as reductive. Foreign influence can affect fan response, especially on genres tied to a national identity; when they become more global in scope, questions of authenticity may arise. Filmmakers and films ignored in their own country can become the objects of cult adoration in another, producing perplexed reactions in their native country. 
Cult films can also establish an early viability for more mainstream films both for filmmakers and national cinema. The early cult horror films of Peter Jackson were so strongly associated with his homeland that they affected the international reputation of New Zealand and its cinema. As more artistic films emerged, New Zealand was perceived as a legitimate competitor to Hollywood, which mirrored Jackson's career trajectory. Heavenly Creatures (1994) acquired its own cult following, became a part of New Zealand's national identity, and paved the way for big-budget, Hollywood-style epics, such as Jackson's The Lord of the Rings trilogy. Mathijs states that cult films and fandom frequently involve nontraditional elements of time and time management. Fans will often watch films obsessively, an activity that is viewed by the mainstream as wasting time yet can be seen as resisting the commodification of leisure time. They may also watch films idiosyncratically: sped up, slowed down, frequently paused, or at odd hours. Cult films themselves subvert traditional views of time – time travel, non-linear narratives, and ambiguous establishments of time are all popular. Mathijs also identifies specific cult film viewing habits, such as viewing horror films on Halloween, sentimental melodrama on Christmas, and romantic films on Valentine's Day. These films are often viewed as marathons where fans can gorge themselves on their favorites. Mathijs states that cult films broadcast on Christmas have a nostalgic factor. These films, ritually watched every season, give a sense of community and shared nostalgia to viewers. New films often have trouble making inroads against the institutions of It's A Wonderful Life (1946) and Miracle on 34th Street (1947). These films provide mild criticism of consumerism while encouraging family values. Halloween, on the other hand, allows flaunting society's taboos and testing one's fears. Horror films have appropriated the holiday, and many horror films debut on Halloween. Mathijs criticizes the over-cultified, commercialized nature of Halloween and horror films, which feed into each other so much that Halloween has turned into an image or product with no real community. Mathijs states that Halloween horror conventions can provide the missing community aspect. Despite their oppositional nature, cult films can produce celebrities. Like cult films themselves, authenticity is an important aspect of their popularity. Actors can become typecast as they become strongly associated with such iconic roles. Tim Curry, despite his acknowledged range as an actor, found casting difficult after he achieved fame in The Rocky Horror Picture Show. Even when discussing unrelated projects, interviewers frequently bring up the role, which causes him to tire of discussing it. Mary Woronov, known for her transgressive roles in cult films, eventually transitioned to mainstream films. She was expected to recreate the transgressive elements of her cult films within the confines of mainstream cinema. Instead of the complex gender deconstructions of her Andy Warhol films, she became typecast as a lesbian or domineering woman. Sylvia Kristel, after starring in Emmanuelle (1974), found herself highly associated with the film and the sexual liberation of the 1970s. Caught between the transgressive elements of her cult film and the mainstream appeal of soft-core pornography, she was unable to work in anything but exploitation films and Emmanuelle sequels. 
Despite her immense popularity and cult following, she would rate only a footnote in most histories of European cinema if she was even mentioned. Similarly, Chloë Sevigny has struggled with her reputation as a cult independent film star famous for her daring roles in transgressive films. Cult films can also trap directors. Leonard Kastle, who directed The Honeymoon Killers (1969), never directed another film again. Despite his cult following, which included François Truffaut, he was unable to find financing for any of his other screenplays. Qualities that bring cult films to prominence – such as an uncompromising, unorthodox vision – caused Alejandro Jodorowsky to languish in obscurity for years. Transgression and censorship Transgressive films as a distinct artistic movement began in the 1970s. Unconcerned with genre distinctions, they drew inspiration equally from the nonconformity of European art cinema and experimental film, the gritty subject matter of Italian neorealism, and the shocking images of 1960s exploitation. Some used hardcore pornography and horror, occasionally at the same time. In the 1980s, filmmaker Nick Zedd identified this movement as the Cinema of Transgression and later wrote a manifesto. Popular in midnight showings, they were mainly limited to large urban areas, which led academic Joan Hawkins to label them as "downtown culture". These films acquired a legendary reputation as they were discussed and debated in alternative weeklies, such as The Village Voice. Home video would finally allow general audiences to see them, which gave many people their first taste of underground film. Ernest Mathijs says that cult films often disrupt viewer expectations, such as giving characters transgressive motivations or focusing attention on elements outside the film. Cult films can also transgress national stereotypes and genre conventions, such as Battle Royale (2000), which broke many rules of teenage slasher films. The reverse – when films based on cult properties lose their transgressive edge – can result in derision and rejection by fans. Audience participation itself can be transgressive, such as breaking long-standing taboos against talking during films and throwing things at the screen. According to Mathijs, critical reception is important to a film's perception as cult, through topicality and controversy. Topicality, which can be regional (such as objection to government funding of the film) or critical (such as philosophical objections to the themes), enables attention and a contextual response. Cultural topics make the film relevant and can lead to controversy, such as a moral panic, which provides opposition. Cultural values transgressed in the film, such as sexual promiscuity, can be attacked by proxy, through attacks on the film. These concerns can vary from culture to culture, and they need not be at all similar. However, Mathijs says the film must invoke metacommentary for it to be more than simply culturally important. While referencing previous arguments, critics may attack its choice of genre or its very right to exist. Taking stances on these varied issues, critics assure their own relevance while helping to elevate the film to cult status. Perceived racist and reductive remarks by critics can rally fans and raise the profile of cult films, an example of which would be Rex Reed's comments about Korean culture in his review of Oldboy (2003). 
Critics can also polarize audiences and lead debates, such as how Joe Bob Briggs and Roger Ebert dueled over I Spit On Your Grave (1978). Briggs would later contribute a commentary track to the DVD release in which he describes it as a feminist film. Films which do not attract enough controversy may be ridiculed and rejected when suggested as cult films. Academic Peter Hutchings, noting the many definitions of a cult film that require transgressive elements, states that cult films are known in part for their excesses. Both subject matter and its depiction are portrayed in extreme ways that break taboos of good taste and aesthetic norms. Violence, gore, sexual perversity, and even the music can be pushed to stylistic excess far beyond that allowed by mainstream cinema. Film censorship can make these films obscure and difficult to find, common criteria used to define cult films. Despite this, these films remain well-known and prized among collectors. Fans will occasionally express frustration with dismissive critics and conventional analysis, which they believe marginalizes and misinterprets paracinema. In marketing these films, young men are predominantly targeted. Horror films in particular can draw fans who seek the most extreme films. Audiences can also ironically latch on to offensive themes, such as misogyny, using these films as catharsis for the things that they hate most in life. Exploitative, transgressive elements can be pushed to excessive extremes for both humor and satire. Frank Henenlotter faced censorship and ridicule, but he found acceptance among audiences receptive to themes that Hollywood was reluctant to touch, such as violence, drug addiction, and misogyny. Lloyd Kaufman sees his films' political statements as more populist and authentic than the hypocrisy of mainstream films and celebrities. Despite featuring an abundance of fake blood, vomit, and diarrhea, Kaufman's films have attracted positive attention from critics and academics. Excess can also exist as camp, such as films that highlight the excesses of 1980s fashion and commercialism. Films that are influenced by unpopular styles or genres can become cult films. Director Jean Rollin worked within cinéma fantastique, an unpopular genre in modern France. Influenced by American films and early French fantasists, he drifted between art, exploitation, and pornography. His films were reviled by critics, but he retained a cult following drawn by the nudity and eroticism. Similarly, Jess Franco chafed under fascist censorship in Spain but became influential in Spain's horror boom of the 1960s. These transgressive films that straddle the line between art and horror may have overlapping cult followings, each with their own interpretation and reasons for appreciating it. The films that followed Jess Franco were unique in their rejection of mainstream art. Popular among fans of European horror for their subversiveness and obscurity, these later Spanish films allowed political dissidents to criticize the fascist regime within the cloak of exploitation and horror. Unlike most exploitation directors, they were not trying to establish a reputation. They were already established in the art-house world and intentionally chose to work within paracinema as a reaction against the New Spanish Cinema, an artistic revival supported by the fascists. As late as the 1980s, critics still cited Pedro Almodóvar's anti-macho iconoclasm as a rebellion against fascist mores, as he grew from countercultural rebel to mainstream respectability. 
Transgressive elements that limit a director's appeal in one country can be celebrated or highlighted in another. Takashi Miike has been marketed in the West as a shocking and avant-garde filmmaker despite his many family-friendly comedies, which have not been imported. The transgressive nature of cult films can lead to their censorship. During the 1970s and early 1980s, a wave of explicit, graphic exploitation films caused controversy. Called "video nasties" within the UK, they ignited calls for censorship and stricter laws on home video releases, which were largely unregulated. Consequently, the British Board of Film Classification banned many popular cult films due to issues of sex, violence, and incitement to crime. Released during the cannibal boom, Cannibal Holocaust (1980) was banned in dozens of countries and caused the director to be briefly jailed over fears that it was a real snuff film. Although opposed to censorship, director Ruggero Deodato would later agree with cuts made by the BBFC which removed unsimulated animal killings, which limited the film's distribution. Frequently banned films may introduce questions of authenticity as fans question whether they have seen a truly uncensored cut. Cult films have been falsely claimed to have been banned to increase their transgressive reputation and explain their lack of mainstream penetration. Marketing campaigns have also used such claims to raise interest among curious audiences. Home video has allowed cult film fans to import rare or banned films, finally giving them a chance to complete their collection with imports and bootlegs. Cult films previously banned are sometimes released with much fanfare and the fans assumed to be already familiar with the controversy. Personal responsibility is often highlighted, and a strong anti-censorship message may be present. Previously lost scenes cut by studios can be re-added and restore a director's original vision, which draws similar fanfare and acclaim from fans. Imports are sometimes censored to remove elements that would be controversial, such as references to Islamic spirituality in Indonesian cult films. Academics have written of how transgressive themes in cult films can be regressive. David Church and Chuck Kleinhans describe an uncritical celebration of transgressive themes in cult films, including misogyny and racism. Church has also criticized gendered descriptions of transgressive content that celebrate masculinity. Joanne Hollows further identifies a gendered component to the celebration of transgressive themes in cult films, where male terms are used to describe films outside the mainstream while female terms are used to describe mainstream, conformist cinema. Jacinda Read's expansion states that cult films, despite their potential for empowerment of the marginalized, are more often used by politically incorrect males. Knowledgeable about feminism and multiculturalism, they seek a refuge from the academic acceptance of these progressive ideals. Their playful and ironic acceptance of regressive lad culture invites, and even dares, condemnation from academics and the uncool. Thus, cult films become a tool to reinforce mainstream values through transgressive content; Rebecca Feasy states that cultural hierarchies can also be reaffirmed through mockery of films perceived to be lacking masculinity. However, the sexploitation films of Doris Wishman took a feminist approach which avoids and subverts the male gaze and traditional goal-oriented methods. 
Wishman's subject matter, though exploitative and transgressive, was always framed in terms of female empowerment and the feminine spectator. Her use of common cult film motifs – female nudity and ambiguous gender – was repurposed to comment on feminist topics. Similarly, the films of Russ Meyer were a complicated combination of transgressive, mainstream, progressive, and regressive elements. They attracted both acclaim and denouncement from critics and progressives. Transgressive films imported from cultures that are recognizably different yet still relatable can be used to progressively examine issues in another culture.

Subcultural appeal and fandom
Cult films can be used to help define or create groups as a form of subcultural capital; knowledge of cult films proves that one is "authentic" or "non-mainstream". They can be used to provoke an outraged response from the mainstream, which further defines the subculture, as only members could possibly tolerate such deviant entertainment. More accessible films have less subcultural capital; among extremists, banned films will have the most. By referencing cult films, media can identify desired demographics, strengthen bonds with specific subcultures, and stand out among those who understand the intertextuality. Popular films from previous eras may be reclaimed by genre fans long after they have been forgotten by the original audiences. This can be done for authenticity, such as horror fans who seek out now-obscure titles from the 1950s instead of the modern, well-known remakes. Authenticity may also drive fans to deny genre categorization to films perceived as too mainstream or accessible. Authenticity in performance and expertise can drive fan acclaim. Authenticity can also drive fans to decry the mainstream in the form of hostile critics and censors. Especially when promoted by enthusiastic and knowledgeable programmers, choice of venue can be an important part of expressing individuality. Besides creating new communities, cult films can link formerly disparate groups, such as fans and critics. As these groups intermix, they can influence each other, though this may be resisted by older fans, unfamiliar with these new references. In extreme cases, cult films can lead to the creation of religions, such as Dudeism. For their avoidance of mainstream culture and audiences, enjoyment of irony, and celebration of obscure subcultures, academic Martin Roberts compares cult film fans to hipsters.

A film can become the object of a cult following within a particular region or culture if it has unusual significance. For example, Norman Wisdom's films, friendly to Marxist interpretation, amassed a cult following in Albania, as they were among the few Western films allowed by the country's Communist rulers. The Wizard of Oz (1939) and its star, Judy Garland, hold special significance to American and British gay culture, although it is a widely viewed and historically important film in greater American culture. Similarly, James Dean and his brief film career have become icons of alienated youth. Cult films can have such niche appeal that they are only popular within certain subcultures, such as Reefer Madness (1936) and Hemp for Victory (1942) among the stoner subculture. Beach party musicals, popular among American surfers, failed to find an equivalent audience when imported to the United Kingdom. When films target subcultures like this, they may seem unintelligible without the proper cultural capital.
Films which appeal to teenagers may offer subcultural identities that are easily recognized and differentiate various subcultural groups. Films which appeal to stereotypical male activities, such as sports, can easily gain strong male cult followings. Sports metaphors are often used in the marketing of cult films to males, such as emphasizing the "extreme" nature of the film, which increases the appeal to youth subcultures fond of extreme sports. Matt Hills' concept of the "cult blockbuster" involves cult followings inside larger, mainstream films. Although these are big budget, mainstream films, they still attract cult followings. The cult fans differentiate themselves from ordinary fans in several ways: longstanding devotion to the film, distinctive interpretations, and fan works. Hills identifies three different cult followings for The Lord of the Rings, each with their own fandom separate from the mainstream. Academic Emma Pett identifies Back to the Future (1985) as another example of a cult blockbuster. Although the film was an instant hit when released, it has also developed a nostalgic cult following over the years. The hammy acting by Christopher Lloyd and quotable dialogue have drawn a cult following, as they mimic traditional cult films. Blockbuster science fiction films that include philosophical subtexts, such as The Matrix, allow cult film fans to enjoy them on a higher level than the mainstream. Star Wars, with its large cult following in geek subculture, has been cited as both a cult blockbuster and a cult film. Although a mainstream epic, Star Wars has provided its fans with a spirituality and culture outside of the mainstream. Fans, in response to the popularity of these blockbusters, will claim elements for themselves while rejecting others. For example, in the Star Wars film series, mainstream criticism of Jar Jar Binks focused on racial stereotyping; although cult film fans will use that to bolster their arguments, he is rejected because he represents mainstream appeal and marketing. Also, instead of valuing textual rarity, fans of cult blockbusters will value repeat viewings. They may also engage in behaviors more traditional for fans of cult television and other serial media, as cult blockbusters are often franchised, preconceived as a film series, or both. To reduce mainstream accessibility, a film series can be self-reflexive and full of in-jokes that only longtime fans can understand. Mainstream critics may ridicule commercially successful directors of cult blockbusters, such as James Cameron, Michael Bay, and Luc Besson, whose films have been called simplistic. This critical backlash may serve to embellish the filmmakers' reception as cult auteurs. In the same way, critics may ridicule fans of cult blockbusters as immature or shallow. Cult films can create their own subculture. Rocky Horror, originally made to exploit the popularity of glam subculture, became what academic Gina Marchetti called a "sub-subculture", a variant that outlived its parent subculture. Although often described as primarily composed of obsessed fans, cult film fandom can include many newer, less experienced members. Familiar with the film's reputation and having watched clips on YouTube, these fans may take the next step and enter the film's fandom. 
If they are the majority, they may alter or ignore long-standing traditions, such as audience participation rituals; rituals which lack perceived authenticity may be criticized, but accepted rituals bring subcultural capital to veteran fans who introduce them to the newer members. Fans who flaunt their knowledge receive negative reactions. Newer fans may cite the film itself as their reason for attending a showing, but longtime fans often cite the community. Organized fandoms may spread and become popular as a way of introducing new people to the film, and theatrical screenings are privileged by the media and the fandom itself. Fandom can also be used as a process of legitimation. Fans of cult films, as in media fandom, are frequently producers instead of mere consumers. Unconcerned with traditional views on intellectual property, these fan works are often unsanctioned, transformative, and ignore fictional canon.

Like cult films themselves, magazines and websites dedicated to cult films revel in their self-conscious offensiveness. They maintain a sense of exclusivity by offending mainstream audiences with misogyny, gore, and racism. Obsessive trivia can be used to bore mainstream audiences while building up subcultural capital. Specialist stores on the fringes of society (or websites which prominently partner with hardcore pornographic sites) can be used to reinforce the outsider nature of cult film fandom, especially when they use erotic or gory imagery. By assuming a preexisting knowledge of trivia, non-fans can be excluded. Previous articles and controversies can also be alluded to without explanation. Casual readers and non-fans will thus be left out of discussions and debates, as they lack enough information to meaningfully contribute. When fans like a cult film for the wrong reasons, such as casting or characters aimed at mainstream appeal, they may be ridiculed. Thus, fandom can keep the mainstream at bay while defining itself in terms of the "Other", a philosophical construct divergent from social norms. Commercial aspects of fandom (such as magazines or books) can also be defined in terms of "otherness" and thus remain valid to consume: consumers purchasing independent or niche publications are discerning consumers, but the mainstream is denigrated. Irony or self-deprecating humor can also be used. In online communities, different subcultures attracted to transgressive films can clash over values and criteria for subcultural capital. Even within subcultures, fans who break subcultural scripts, such as denying the affectivity of a disturbing film, will be ridiculed for their lack of authenticity.

Types

"So bad it's good"
The critic Michael Medved characterized examples of the "so bad it's good" class of low-budget cult film through books such as The Golden Turkey Awards. These include financially fruitless and critically scorned films that have become inadvertent comedies to film buffs, such as Plan 9 from Outer Space (1959), The Room (2003), and the Ugandan action-comedy film Who Killed Captain Alex? (2010). Similarly, Paul Verhoeven's Showgirls (1995) bombed in theaters but developed a cult following on video. Catching on, Metro-Goldwyn-Mayer capitalized on the film's ironic appeal and marketed it as a cult film. Sometimes, fans will impose their own interpretation of films which have attracted derision, such as reinterpreting an earnest melodrama as a comedy.
Jacob deNobel of the Carroll County Times states that films can be perceived as nonsensical or inept when audiences misunderstand avant-garde filmmaking or misinterpret parody. Films such as Rocky Horror can be misinterpreted as "weird for weirdness' sake" by people unfamiliar with the cult films that it parodies. deNobel ultimately rejects the use of the label "so bad it's good" as mean-spirited and often misapplied. Alamo Drafthouse programmer Zack Carlson has further said that any film which succeeds in entertaining an audience is good, regardless of irony. In francophone culture, "so bad it's good" films, known as nanars, have given rise to a subculture with dedicated websites such as Nanarland, film festivals and viewings in theaters, as well as various books analyzing the phenomenon. The rise of the Internet and on-demand films has led critics to question whether "so bad it's good" films have a future now that people have such diverse options in both availability and catalog, though fans eager to experience the worst films ever made can lead to lucrative showings for local theaters and merchandisers.

Camp and guilty pleasures
Chuck Kleinhans states that the difference between a guilty pleasure and a cult film can be as simple as the number of fans; David Church raises the question of how many people it takes to form a cult following, especially now that home video makes fans difficult to count. As these cult films become more popular, they can bring varied responses from fans that depend on different interpretations, such as camp, irony, genuine affection, or combinations thereof. Earnest fans, who recognize and accept the film's faults, can make minor celebrities of the film's cast, though the benefits are not always clear. Cult film stars known for their camp can inject subtle parody or signal when films should not be taken seriously. Campy actors can also provide comic book supervillains for serious, artistic-minded films. This can draw fan acclaim and obsession more readily than subtle, method-inspired acting. Mark Chalon Smith of the Los Angeles Times says technical faults may be forgiven if a film makes up for them in other areas, such as camp or transgressive content. Smith states that the early films of John Waters are amateurish and less influential than claimed, but Waters' outrageous vision cements his place in cult cinema. Films such as Myra Breckinridge (1970) and Beyond the Valley of the Dolls (1970) can experience critical reappraisal later, once their camp excess and avant-garde filmmaking are better accepted, and films that are initially dismissed as frivolous are often reassessed as campy. Films that intentionally try to appeal to fans of camp may end up alienating them, as the films become perceived as trying too hard or not authentic.

Nostalgia
According to academic Brigid Cherry, nostalgia "is a strong element of certain kinds of cult appeal." When Veoh added many cult films to their site, they cited nostalgia as a factor for their popularity. Academic I. Q. Hunter describes cult films as "New Hollywood in extremis" and a form of nostalgia for that period. Ernest Mathijs instead states that cult films use nostalgia as a form of resistance against progress and capitalistic ideas of a time-based economy. By virtue of the time travel plot, Back to the Future permits nostalgia for both the 1950s and 1980s.
Many members of its nostalgic cult following are too young to have been alive during those periods, which Emma Pett interprets as fondness for retro aesthetics, nostalgia for when they saw the film rather than when it was released, and looking to the past to find a better time period. Similarly, films directed by John Hughes have taken hold in midnight movie venues, trading on nostalgia for the 1980s and an ironic appreciation for their optimism. Mathijs and Sexton describe Grease (1978) as a film nostalgic about an imagined past that has acquired a nostalgic cult following. Other cult films, such as Streets of Fire (1984), create a new fictional world based on nostalgic views of the past. Cult films may also subvert nostalgia, such as The Big Lebowski, which introduces many nostalgic elements and then reveals them as fake and hollow. Scott Pilgrim vs. the World is a recent example, containing extensive nostalgia for the music and video gaming culture of the 2000s. Nathan Lee of the New York Sun identifies the retro aesthetic and nostalgic pastiche in films such as Donnie Darko as factors in their popularity among midnight movie crowds.

Midnight movies
Author Tomas Crowder-Taraborrelli describes midnight movies as a reaction against the political and cultural conservatism in America, and Joan Hawkins identifies the movement as running the gamut from anarchist to libertarian, united in their anti-establishment attitude and punk aesthetic. These films are resistant to simple categorization and are defined by the fanaticism and ritualistic behaviors of their audiences. Midnight movies require a night life and an audience willing to invest themselves actively. Hawkins states that these films took a rather bleak point of view due to the living conditions of the artists and the economic prospects of the 1970s. Like the surrealists and dadaists, they not only satirically attacked society but also the very structure of film – a counter-cinema that deconstructs narrative and traditional processes. In the late 1980s and 1990s, midnight movies transitioned from underground showings to home video viewings; eventually, a desire for community brought a resurgence, and The Big Lebowski kick-started a new generation. Demographics shifted, and more hip and mainstream audiences were drawn to them. Although studios expressed skepticism, large audiences were drawn to box-office flops, such as Donnie Darko (2001), The Warriors (1979) and Office Space (1999). Modern midnight movies retain their popularity and have been strongly diverging from mainstream films shown at midnight. Mainstream cinemas, eager to disassociate themselves from negative associations and increase profits, have begun abandoning midnight screenings. Although classic midnight movies have dropped off in popularity, they still bring reliable crowds.

Art and exploitation
Although seemingly at odds with each other, art and exploitation films are frequently treated as equal and interchangeable in cult fandom, listed alongside each other and described in similar terms: their ability to provoke a response. The most exploitative aspects of art films are thus played up and their academic recognition ignored. This flattening of culture follows the popularity of post-structuralism, which rejects a hierarchy of artistic merit and equates exploitation and art.
Mathijs and Sexton state that although cult films are not synonymous with exploitation, as is occasionally assumed, this is a key component; they write that exploitation, which exists on the fringes of the mainstream and deals with taboo subjects, is well-suited for cult followings. Academic David Andrews writes that cult softcore films are "the most masculinized, youth-oriented, populist, and openly pornographic softcore area." The sexploitation films of Russ Meyer were among the first to abandon all hypocritical pretenses of morality and were technically proficient enough to gain a cult following. His persistent vision saw him received as an auteur worthy of academic study; director John Waters attributes this to Meyer's ability to create complicated, sexually charged films without resorting to explicit sex. Myrna Oliver described Doris Wishman's exploitation films as "crass, coarse, and camp ... perfect fodder for a cult following." "Sick films", the most disturbing and graphically transgressive films, have their own distinct cult following; these films transcend their roots in exploitation, horror, and art films. In 1960s and 1970s America, exploitation and art films shared audiences and marketing, especially in New York City's grindhouse cinemas.

B and genre films
Mathijs and Sexton state that genre is an important part of cult films; cult films will often mix, mock, or exaggerate the tropes associated with traditional genres. Science fiction, fantasy, and horror are known for their large and dedicated cult followings; as science fiction films become more popular, fans emphasize non-mainstream and less commercial aspects of the genre. B films, which are often conflated with exploitation, are as important to cult films as exploitation. Teodor Reljic of Malta Today states that cult B films are a realistic goal for Malta's burgeoning film industry. Genre films, B films that strictly adhere to genre limitations, can appeal to cult film fans: given their transgressive excesses, horror films are likely to become cult films; films like Galaxy Quest (1999) highlight the importance of cult followings and fandom to science fiction; and authentic martial arts skills in Hong Kong action films can drive them to become cult favorites. Cult musicals can range from the traditional, such as Singin' in the Rain (1952), which appeal to cult audiences through nostalgia, camp, and spectacle, to the more non-traditional, such as Cry-Baby (1990), which parodies musicals, and Rocky Horror, which uses a rock soundtrack. Romantic fairy tale The Princess Bride (1987) failed to attract audiences in its original release, as the studio did not know how to market it. The freedom and excitement associated with cars can be an important part of drawing cult film fans to genre films, and they can signify action and danger with more ambiguity than a gun. Ad Week writes that cult B films, when released on home video, market themselves and need only enough advertising to raise curiosity or nostalgia.

Animation
Animation can provide wide open vistas for stories. The French film Fantastic Planet (1973) explored ideas beyond the limits of traditional, live-action science fiction films. Ralph Bakshi's career has been marked with controversy: Fritz the Cat (1972), the first animated film to be rated "X" by the MPAA, provoked outrage for its racial caricatures and graphic depictions of sex, and Coonskin (1975) was decried as racist.
Bakshi recalls that older animators had tired of "kid stuff" and desired edgier work, whereas younger animators hated his work for "destroying the Disney images". Eventually, his work would be reassessed and cult followings, which include Quentin Tarantino and Robert Rodriguez, developed around several of his films. Heavy Metal (1981) faced similar denunciations from critics. Donald Liebenson of the Los Angeles Times cites the violence and sexual imagery as alienating critics, who did not know what to make of the film. It would go on to become a popular midnight movie, frequently bootlegged by fans, as licensing issues kept it from being released on video for many years. Phil Hoad of The Guardian identifies Akira (1988) as introducing violent, adult Japanese animation (known as anime) to the West and paving the way for later works. Anime, according to academic Brian Ruh, is not a cult genre, but the lack of individual fandoms inside anime fandom itself allows cult attention to bleed over and can help spread works internationally. Anime, which is frequently presented as a series (with movies either rising from existing series, or spinning off series based on the film), provides its fans with alternative fictional canons and points of view that can drive fan activity. The Ghost in the Shell films, for example, provided Japanese fans with enough bonus material and spinoffs that it encouraged cult tendencies. Markets that did not support the sale of these materials saw less cult activity. The claymation film Gumby: The Movie (1995), which made only $57,100 at the box office against its $2.8 million budget but sold a million copies on VHS alone, was subsequently released on DVD and remastered in high definition for Blu-ray due to its strong cult following. As with many cult films, RiffTrax produced its own humorous audio commentary for Gumby: The Movie in 2021.

Nonfiction
Sensationalistic documentaries called mondo films replicate the most shocking and transgressive elements of exploitation films. They are usually modeled after "sick films" and cover similar subject matter. In The Cult Film Reader, academics Mathijs and Mendik write that these documentaries often present non-Western societies as "stereotypically mysterious, seductive, immoral, deceptive, barbaric or savage". Though they can be interpreted as racist, Mathijs and Mendik state that they also "exhibit a liberal attitude towards the breaking of cultural taboos". Mondo films like Faces of Death mix real and fake footage freely, and they gain their cult following through the outrage and debate over authenticity that results. Like "so bad it's good" cult films, old propaganda and government hygiene films may be enjoyed ironically by more modern audiences for the camp value of the outdated themes and outlandish claims made about perceived social threats, such as drug use. Academic Barry K. Grant states that Frank Capra's Why We Fight World War II propaganda films are explicitly not cult, because they are "slickly made and have proven their ability to persuade an audience." The sponsored film Mr. B Natural became a cult hit when it was broadcast on the satirical television show Mystery Science Theater 3000; cast member Trace Beaulieu cited these educational shorts as his favorite to mock on the show.
Mark Jancovich states that cult audiences are drawn to these films because of the "very banality or incoherence of their political positions", unlike traditional cult films, which achieve popularity through auteurist radicalism.

Mainstream popularity
Mark Shiel explains the rising popularity of cult films as an attempt by cinephiles and scholars to escape the oppressive conformity and mainstream appeal of even independent film, as well as a lack of condescension in both critics and the films; academic Donna de Ville says it is a chance to subvert the dominance of academics and cinephiles. According to Xavier Mendik, "academics have been really interested in cult movies for quite a while now." Mendik has sought to bring together academic interest and fandom through Cine-Excess, a film festival. I. Q. Hunter states that "it's much easier to be a cultist now, but it is also rather more inconsequential." Citing the mainstream availability of Cannibal Holocaust, Jeffrey Sconce rejects definitions of cult films based on controversy and excess, as they've now become meaningless. Cult films have influenced such diverse industries as cosmetics, music videos, and fashion. Cult films have shown up in less expected places; as a sign of his popularity, a bronze statue of Ed Wood has been proposed in his hometown, and L'Osservatore Romano, the official newspaper of the Holy See, has courted controversy for its endorsement of cult films and pop culture. When cities attempt to renovate neighborhoods, fans have called attempts to demolish iconic settings from cult films "cultural vandalism". Cult films can also drive tourism, even when it is unwanted. From Latin America, Alejandro Jodorowsky's film El Topo (1970) has attracted the attention of rock musicians such as John Lennon, Mick Jagger, and Bob Dylan.

As far back as the 1970s, Attack of the Killer Tomatoes (1978) was designed specifically to be a cult film, and The Rocky Horror Picture Show was produced by 20th Century Fox, a major Hollywood studio. Over its decades-long release, Rocky Horror became the seventh-highest-grossing R-rated film when adjusted for inflation; journalist Matt Singer has questioned whether Rocky Horror's popularity invalidates its cult status. Founded in 1974, Troma Entertainment, an independent studio, would become known for both its cult following and cult films. In the 1980s, Danny Peary's Cult Movies (1981) would influence director Edgar Wright and film critic Scott Tobias of The A.V. Club. The rise of home video would have a mainstreaming effect on cult films and cultish behavior, though some collectors would be unlikely to self-identify as cult film fans. Film critic Joe Bob Briggs began reviewing drive-in theater and cult films, though he faced much criticism as an early advocate of exploitation and cult films. Briggs highlights the mainstreaming of cult films by pointing out the respectful obituaries that cult directors have received from formerly hostile publications and acceptance of politically incorrect films at mainstream film festivals. This acceptance is not universal, though, and some critics have resisted this mainstreaming of paracinema. Beginning in the 1990s, director Quentin Tarantino would have the greatest success in turning cult films mainstream. Tarantino later used his fame to champion obscure cult films that had influenced him and set up the short-lived Rolling Thunder Pictures, which distributed several of his favorite cult films.
Tarantino's clout led Phil Hoad of The Guardian to call Tarantino the world's most influential director. As major Hollywood studios and audiences both become savvy to cult films, productions once limited to cult appeal have instead become popular hits, and cult directors have become hot properties known for more mainstream and accessible films. Remarking on the popular trend of remaking cult films, Claude Brodesser-Akner of New York magazine states that Hollywood studios have been superstitiously hoping to recreate past successes rather than trading on nostalgia. Their popularity would bring some critics to proclaim the death of cult films now that they have finally become successful and mainstream, are too slick to attract a proper cult following, lack context, or are too easily found online. In response, David Church says that cult film fans have retreated to more obscure and difficult to find films, often using illegal distribution methods, which preserves the outlaw status of cult films. Virtual spaces, such as online forums and fan sites, replace the traditional fanzines and newsletters. Cult film fans consider themselves collectors, rather than consumers, as they associate consumers with mainstream, Hollywood audiences. This collecting can take the place of fetishization of a single film. Addressing concerns that DVDs have revoked the cult status of films like Rocky Horror, academic Mikel J. Koven states that small scale screenings with friends and family can replace midnight showings. Koven also identifies television shows, such as Twin Peaks, as retaining more traditional cult activities inside popular culture. David Lynch himself has not ruled out another television series, as studios have become reluctant to take chances on non-mainstream ideas. Despite this, the Alamo Drafthouse has capitalized on cult films and the surrounding culture through inspiration drawn from Rocky Horror and retro promotional gimmickry. They sell out their shows regularly and have acquired a cult following of their own. Academic Bob Batchelor, writing in Cult Pop Culture, states that the internet has democratized cult culture and destroyed the line between cult and mainstream. Fans of even the most obscure films can communicate online with each other in vibrant communities. Although known for their big-budget blockbusters, Steven Spielberg and George Lucas have criticized the current Hollywood system of gambling everything on the opening weekend of these productions. Geoffrey Macnab of The Independent instead suggests that Hollywood look to capitalize on cult films, which have exploded in popularity on the internet. The rise of social media has been a boon to cult films. Sites such as Twitter have displaced traditional venues for fandom and courted controversy from cultural critics who are unamused by campy cult films. After a clip from one of his films went viral, director-producer Roger Corman made a distribution deal with YouTube. Found footage which had originally been distributed as cult VHS collections eventually went viral on YouTube, which opened them to new generations of fans. Films such as Birdemic (2008) and The Room (2003) gained quick, massive popularity, as prominent members of social networking sites discussed them. Their rise as "instant cult classics" bypasses the years of obscurity that most cult films labor under. In response, critics have described the use of viral marketing as astroturfing and an attempt to manufacture cult films. I. Q. 
Hunter identifies a prefabricated cult film style which includes "deliberately, insulting bad films", "slick exercises in dysfunction and alienation", and mainstream films "that sell themselves as worth obsessing over". Writing for NPR, Scott Tobias states that Don Coscarelli, whose previous films effortlessly attracted cult followings, has drifted into this realm. Tobias criticizes Coscarelli as trying too hard to appeal to cult audiences and sacrificing internal consistency for calculated quirkiness. Influenced by the successful online hype of The Blair Witch Project (1999), other films have attempted to draw online cult fandom with the use of prefabricated cult appeal. Snakes on a Plane (2006) is an example that attracted massive attention from curious fans. Uniquely, its cult following preceded the film's release and included speculative parodies of what fans imagined the film might be. This reached the point of convergence culture when fan speculation began to impact on the film's production. Although it was proclaimed a cult film and major game-changer before it was released, it failed to win either mainstream audiences or maintain its cult following. In retrospect, critic Spencer Kornhaber would call it a serendipitous novelty and a footnote to a "more naive era of the Internet". However, it became influential in both marketing and titling. This trend of "instant cult classics" which are hailed yet fail to attain a lasting following is described by Matt Singer, who states that the phrase is an oxymoron. Cult films are often approached in terms of auteur theory, which states that the director's creative vision drives a film. This has fallen out of favor in academia, creating a disconnect between cult film fans and critics. Matt Hills states that auteur theory can help to create cult films; fans that see a film as continuing a director's creative vision are likely to accept it as cult. According to academic Greg Taylor, auteur theory also helped to popularize cult films when middlebrow audiences found an accessible way to approach avant-garde film criticism. Auteur theory provided an alternative culture for cult film fans while carrying the weight of scholarship. By requiring repeated viewings and extensive knowledge of details, auteur theory naturally appealed to cult film fans. Taylor further states that this was instrumental in allowing cult films to break through to the mainstream. Academic Joe Tompkins states that this auteurism is often highlighted when mainstream success occurs. This may take the place of – and even ignore – political readings of the director. Cult films and directors may be celebrated for their transgressive content, daring, and independence, but Tompkins argues that mainstream recognition requires they be palatable to corporate interests who stand to gain much from the mainstreaming of cult film culture. While critics may champion revolutionary aspects of filmmaking and political interpretation, Hollywood studios and other corporate interests will instead highlight only the aspects that they wish to legitimize in their own films, such as sensational exploitation. Someone like George Romero, whose films are both transgressive and subversive, will have the transgressive aspects highlighted while the subversive aspects are ignored. 
See also
Cult video game
List of cult films
Sleeper hit
Mark Kermode's Secrets of Cinema: Cult Movies
List of cult television shows
https://en.wikipedia.org/wiki/Consciousness
Consciousness
Consciousness, at its simplest, is sentience and awareness of internal and external existence. However, the lack of an agreed definition has led to millennia of analyses, explanations and debates by philosophers, theologians, linguists, and scientists. Opinions differ about what exactly needs to be studied or even considered consciousness. In some explanations, it is synonymous with the mind, and at other times, an aspect of mind. In the past, it was one's "inner life", the world of introspection, of private thought, imagination and volition. Today, it often includes any kind of cognition, experience, feeling or perception. It may be awareness, awareness of awareness, or self-awareness, either continuously changing or not. The disparate range of research, notions and speculations raises a curiosity about whether the right questions are being asked. Examples of the range of descriptions, definitions or explanations are: simple wakefulness, one's sense of selfhood or soul explored by "looking within"; being a metaphorical "stream" of contents, or being a mental state, mental event or mental process of the brain.

Inter-disciplinary perspectives
Western philosophers since the time of Descartes and Locke have struggled to comprehend the nature of consciousness and how it fits into a larger picture of the world. These questions remain central to both continental and analytic philosophy, in phenomenology and the philosophy of mind, respectively. Consciousness has also become a significant topic of interdisciplinary research in cognitive science, involving fields such as psychology, linguistics, anthropology, neuropsychology and neuroscience. The primary focus is on understanding what it means biologically and psychologically for information to be present in consciousness—that is, on determining the neural and psychological correlates of consciousness. In medicine, consciousness is assessed by observing a patient's arousal and responsiveness, and can be seen as a continuum of states ranging from full alertness and comprehension, through disorientation, delirium, loss of meaningful communication, and finally loss of movement in response to painful stimuli. Issues of practical concern include how the presence of consciousness can be assessed in severely ill, comatose, or anesthetized people, and how to treat conditions in which consciousness is impaired or disrupted. The degree of consciousness is measured by standardized behavior observation scales such as the Glasgow Coma Scale.

Etymology
In the late 20th century, philosophers such as Hamlyn, Rorty, and Wilkes disagreed with Kahn, Hardie and Modrak as to whether Aristotle even had a concept of consciousness. Aristotle does not use any single word or terminology to name the phenomenon; such terminology appears only much later, notably in the work of John Locke. Caston contends that for Aristotle, perceptual awareness was somewhat the same as what modern philosophers call consciousness. The origin of the modern concept of consciousness is often attributed to Locke's Essay Concerning Human Understanding, published in 1690. Locke defined consciousness as "the perception of what passes in a man's own mind". His essay influenced the 18th-century view of consciousness, and his definition appeared in Samuel Johnson's celebrated Dictionary (1755). "Consciousness" (French: conscience) is also defined in the 1753 volume of Diderot and d'Alembert's Encyclopédie, as "the opinion or internal feeling that we ourselves have from what we do".
The earliest English language uses of "conscious" and "consciousness" date back, however, to the 1500s. The English word "conscious" originally derived from the Latin conscius (con- "together" and scio "to know"), but the Latin word did not have the same meaning as the English word—it meant "knowing with", in other words, "having joint or common knowledge with another". There were, however, many occurrences in Latin writings of the phrase conscius sibi, which translates literally as "knowing with oneself", or in other words "sharing knowledge with oneself about something". This phrase had the figurative meaning of "knowing that one knows", as the modern English word "conscious" does. In its earliest uses in the 1500s, the English word "conscious" retained the meaning of the Latin conscius. For example, Thomas Hobbes in Leviathan wrote: "Where two, or more men, know of one and the same fact, they are said to be Conscious of it one to another." The Latin phrase conscius sibi, whose meaning was more closely related to the current concept of consciousness, was rendered in English as "conscious to oneself" or "conscious unto oneself". For example, Archbishop Ussher wrote in 1613 of "being so conscious unto myself of my great weakness". Locke's definition from 1690 illustrates that a gradual shift in meaning had taken place.

A related word was conscientia, which primarily means moral conscience. In the literal sense, "conscientia" means knowledge-with, that is, shared knowledge. The word first appears in Latin juridical texts by writers such as Cicero. Here, conscientia is the knowledge that a witness has of the deed of someone else. René Descartes (1596–1650) is generally taken to be the first philosopher to use conscientia in a way that does not fit this traditional meaning. Descartes used conscientia the way modern speakers would use "conscience". In Search after Truth (Amsterdam, 1701) he says "conscience or internal testimony" (conscientiâ, vel interno testimonio).

The problem of definition
The dictionary definitions of the word consciousness extend through several centuries and reflect a range of seemingly related meanings, with some differences that have been controversial, such as the distinction between 'inward awareness' and 'perception' of the physical world, or the distinction between 'conscious' and 'unconscious', or the notion of a "mental entity" or "mental activity" that is not physical.

The common usage definitions of consciousness in Webster's Third New International Dictionary (1966 edition, Volume 1, page 482) are as follows:
awareness or perception of an inward psychological or spiritual fact; intuitively perceived knowledge of something in one's inner self
inward awareness of an external object, state, or fact
concerned awareness; INTEREST, CONCERN—often used with an attributive noun [e.g. class consciousness]
the state or activity that is characterized by sensation, emotion, volition, or thought; mind in the broadest possible sense; something in nature that is distinguished from the physical
the totality in psychology of sensations, perceptions, ideas, attitudes, and feelings of which an individual or a group is aware at any given time or within a particular time span—compare STREAM OF CONSCIOUSNESS
waking life (as that to which one returns after sleep, trance, fever) wherein all one's mental powers have returned . . .
the part of mental life or psychic content in psychoanalysis that is immediately available to the ego—compare PRECONSCIOUS, UNCONSCIOUS

The Cambridge Dictionary defines consciousness as "the state of understanding and realizing something." The Oxford Living Dictionary defines consciousness as "The state of being aware of and responsive to one's surroundings.", "A person's awareness or perception of something." and "The fact of awareness by the mind of itself and the world." Philosophers have attempted to clarify technical distinctions by using a jargon of their own. The Routledge Encyclopedia of Philosophy in 1998 defines consciousness as follows: Many philosophers and scientists have been unhappy about the difficulty of producing a definition that does not involve circularity or fuzziness. In The Macmillan Dictionary of Psychology (1989 edition), Stuart Sutherland expressed a skeptical attitude more than a definition: A partisan definition such as Sutherland's can hugely affect researchers' assumptions and the direction of their work: Many philosophers have argued that consciousness is a unitary concept that is understood by the majority of people despite the difficulty philosophers have had defining it. Others, though, have argued that the level of disagreement about the meaning of the word indicates that it either means different things to different people (for instance, the objective versus subjective aspects of consciousness), that it encompasses a variety of distinct meanings with no simple element in common, or that we should eliminate this concept from our understanding of the mind, a position known as consciousness semanticism.

Philosophy of mind
Most writers on the philosophy of consciousness have been concerned with defending a particular point of view, and have organized their material accordingly. For surveys, the most common approach is to follow a historical path by associating stances with the philosophers who are most strongly associated with them, for example, Descartes, Locke, Kant, etc. An alternative is to organize philosophical stances according to basic issues.

Coherence of the concept
Philosophers differ from non-philosophers in their intuitions about what consciousness is. While most people have a strong intuition for the existence of what they refer to as consciousness, skeptics argue that this intuition is false, either because the concept of consciousness is intrinsically incoherent, or because our intuitions about it are based in illusions. Gilbert Ryle, for example, argued that traditional understanding of consciousness depends on a Cartesian dualist outlook that improperly distinguishes between mind and body, or between mind and world. He proposed that we speak not of minds, bodies, and the world, but of individuals, or persons, acting in the world. Thus, by speaking of "consciousness" we end up misleading ourselves by thinking that there is any such thing as consciousness separated from behavioral and linguistic understandings.

Types
Ned Block argued that discussions on consciousness often failed to properly distinguish phenomenal (P-consciousness) from access (A-consciousness), though these terms had been used before Block. P-consciousness, according to Block, is raw experience: it is moving, colored forms, sounds, sensations, emotions and feelings with our bodies and responses at the center. These experiences, considered independently of any impact on behavior, are called qualia.
A-consciousness, on the other hand, is the phenomenon whereby information in our minds is accessible for verbal report, reasoning, and the control of behavior. So, when we perceive, information about what we perceive is access conscious; when we introspect, information about our thoughts is access conscious; when we remember, information about the past is access conscious, and so on. Although some philosophers, such as Daniel Dennett, have disputed the validity of this distinction, others have broadly accepted it. David Chalmers has argued that A-consciousness can in principle be understood in mechanistic terms, but that understanding P-consciousness is much more challenging: he calls this the hard problem of consciousness.

Some philosophers believe that Block's two types of consciousness are not the end of the story. William Lycan, for example, argued in his book Consciousness and Experience that at least eight clearly distinct types of consciousness can be identified (organism consciousness; control consciousness; consciousness of; state/event consciousness; reportability; introspective consciousness; subjective consciousness; self-consciousness)—and that even this list omits several more obscure forms. There is also debate over whether A-consciousness and P-consciousness always coexist or whether they can exist separately. Although P-consciousness without A-consciousness is more widely accepted, there have been some hypothetical examples of A without P. Block, for instance, suggests the case of a "zombie" that is computationally identical to a person but without any subjectivity. However, he remains somewhat skeptical, concluding "I don't know whether there are any actual cases of A-consciousness without P-consciousness, but I hope I have illustrated their conceptual possibility."

Mind–body problem
Mental processes (such as consciousness) and physical processes (such as brain events) seem to be correlated; however, the specific nature of the connection is unknown. The first influential philosopher to discuss this question specifically was Descartes, and the answer he gave is known as Cartesian dualism. Descartes proposed that consciousness resides within an immaterial domain he called res cogitans (the realm of thought), in contrast to the domain of material things, which he called res extensa (the realm of extension). He suggested that the interaction between these two domains occurs inside the brain, perhaps in a small midline structure called the pineal gland. Although it is widely accepted that Descartes explained the problem cogently, few later philosophers have been happy with his solution, and his ideas about the pineal gland have especially been ridiculed. However, no alternative solution has gained general acceptance. Proposed solutions can be divided broadly into two categories: dualist solutions that maintain Descartes's rigid distinction between the realm of consciousness and the realm of matter but give different answers for how the two realms relate to each other; and monist solutions that maintain that there is really only one realm of being, of which consciousness and matter are both aspects. Each of these categories itself contains numerous variants. The two main types of dualism are substance dualism (which holds that the mind is formed of a distinct type of substance not governed by the laws of physics) and property dualism (which holds that the laws of physics are universally valid but cannot be used to explain the mind).
The three main types of monism are physicalism (which holds that the mind consists of matter organized in a particular way), idealism (which holds that only thought or experience truly exists, and matter is merely an illusion), and neutral monism (which holds that both mind and matter are aspects of a distinct essence that is itself identical to neither of them). There are also, however, a large number of idiosyncratic theories that cannot cleanly be assigned to any of these schools of thought.

Since the dawn of Newtonian science with its vision of simple mechanical principles governing the entire universe, some philosophers have been tempted by the idea that consciousness could be explained in purely physical terms. The first influential writer to propose such an idea explicitly was Julien Offray de La Mettrie, in his book Man a Machine (L'homme machine). His arguments, however, were very abstract. The most influential modern physical theories of consciousness are based on psychology and neuroscience. Theories proposed by neuroscientists such as Gerald Edelman and Antonio Damasio, and by philosophers such as Daniel Dennett, seek to explain consciousness in terms of neural events occurring within the brain. Many other neuroscientists, such as Christof Koch, have explored the neural basis of consciousness without attempting to frame all-encompassing global theories. At the same time, computer scientists working in the field of artificial intelligence have pursued the goal of creating digital computer programs that can simulate or embody consciousness. A few theoretical physicists have argued that classical physics is intrinsically incapable of explaining the holistic aspects of consciousness, but that quantum theory may provide the missing ingredients. Several theorists have therefore proposed quantum mind (QM) theories of consciousness. Notable theories falling into this category include the holonomic brain theory of Karl Pribram and David Bohm, and the Orch-OR theory formulated by Stuart Hameroff and Roger Penrose. Some of these QM theories offer descriptions of phenomenal consciousness, as well as QM interpretations of access consciousness. None of the quantum mechanical theories have been confirmed by experiment. Recent publications by G. Guerreshi, J. Cia, S. Popescu, and H. Briegel could falsify proposals such as those of Hameroff, which rely on quantum entanglement in proteins. At present, many scientists and philosophers consider the arguments for an important role of quantum phenomena to be unconvincing.

Apart from the general question of the "hard problem" of consciousness (which is, roughly speaking, the question of how mental experience can arise from a physical basis), a more specialized question is how to square the subjective notion that we are in control of our decisions (at least in some small measure) with the customary view of causality that subsequent events are caused by prior events. The topic of free will is the philosophical and scientific examination of this conundrum.

Problem of other minds
Many philosophers consider experience to be the essence of consciousness, and believe that experience can only fully be known from the inside, subjectively. But if consciousness is subjective and not visible from the outside, why do the vast majority of people believe that other people are conscious, but rocks and trees are not? This is called the problem of other minds.
It is particularly acute for people who believe in the possibility of philosophical zombies, that is, people who think it is possible in principle to have an entity that is physically indistinguishable from a human being and behaves like a human being in every way but nevertheless lacks consciousness. Related issues have also been studied extensively by Greg Littmann of the University of Illinois, and by Colin Allen (a professor at the University of Pittsburgh) regarding the literature and research studying artificial intelligence in androids. The most commonly given answer is that we attribute consciousness to other people because we see that they resemble us in appearance and behavior; we reason that if they look like us and act like us, they must be like us in other ways, including having experiences of the sort that we do. There are, however, a variety of problems with that explanation. For one thing, it seems to violate the principle of parsimony, by postulating an invisible entity that is not necessary to explain what we observe. Some philosophers, such as Daniel Dennett in a research paper titled "The Unimagined Preposterousness of Zombies", argue that people who give this explanation do not really understand what they are saying. More broadly, philosophers who do not accept the possibility of zombies generally believe that consciousness is reflected in behavior (including verbal behavior), and that we attribute consciousness on the basis of behavior. A more straightforward way of saying this is that we attribute experiences to people because of what they can do, including the fact that they can tell us about their experiences.

Scientific study
For many decades, consciousness as a research topic was avoided by the majority of mainstream scientists, because of a general feeling that a phenomenon defined in subjective terms could not properly be studied using objective experimental methods. In 1975, George Mandler published an influential psychological study which distinguished between slow, serial, and limited conscious processes and fast, parallel and extensive unconscious ones. The Science and Religion Forum 1984 annual conference, 'From Artificial Intelligence to Human Consciousness', identified the nature of consciousness as a matter for investigation; Donald Michie was a keynote speaker. Starting in the 1980s, an expanding community of neuroscientists and psychologists has associated itself with a field called Consciousness Studies, giving rise to a stream of experimental work published in books, journals such as Consciousness and Cognition, Frontiers in Consciousness Research, Psyche, and the Journal of Consciousness Studies, along with regular conferences organized by groups such as the Association for the Scientific Study of Consciousness and the Society for Consciousness Studies. Modern medical and psychological investigations into consciousness are based on psychological experiments (including, for example, the investigation of priming effects using subliminal stimuli), and on case studies of alterations in consciousness produced by trauma, illness, or drugs. Broadly viewed, scientific approaches are based on two core concepts. The first identifies the content of consciousness with the experiences that are reported by human subjects; the second makes use of the concept of consciousness that has been developed by neurologists and other medical professionals who deal with patients whose behavior is impaired.
In either case, the ultimate goals are to develop techniques for assessing consciousness objectively in humans as well as other animals, and to understand the neural and psychological mechanisms that underlie it. Measurement Experimental research on consciousness presents special difficulties, due to the lack of a universally accepted operational definition. In the majority of experiments that are specifically about consciousness, the subjects are human, and the criterion used is verbal report: in other words, subjects are asked to describe their experiences, and their descriptions are treated as observations of the contents of consciousness. For example, subjects who stare continuously at a Necker cube usually report that they experience it "flipping" between two 3D configurations, even though the stimulus itself remains the same. The objective is to understand the relationship between the conscious awareness of stimuli (as indicated by verbal report) and the effects the stimuli have on brain activity and behavior. In several paradigms, such as the technique of response priming, the behavior of subjects is clearly influenced by stimuli for which they report no awareness, and suitable experimental manipulations can lead to increasing priming effects despite decreasing prime identification (double dissociation). Verbal report is widely considered to be the most reliable indicator of consciousness, but it raises a number of issues. For one thing, if verbal reports are treated as observations, akin to observations in other branches of science, then the possibility arises that they may contain errors—but it is difficult to make sense of the idea that subjects could be wrong about their own experiences, and even more difficult to see how such an error could be detected. Daniel Dennett has argued for an approach he calls heterophenomenology, which means treating verbal reports as stories that may or may not be true, but his ideas about how to do this have not been widely adopted. Another issue with verbal report as a criterion is that it restricts the field of study to humans who have language: this approach cannot be used to study consciousness in other species, pre-linguistic children, or people with types of brain damage that impair language. As a third issue, philosophers who dispute the validity of the Turing test may feel that it is possible, at least in principle, for verbal report to be dissociated from consciousness entirely: a philosophical zombie may give detailed verbal reports of awareness in the absence of any genuine awareness. Although verbal report is in practice the "gold standard" for ascribing consciousness, it is not the only possible criterion. In medicine, consciousness is assessed as a combination of verbal behavior, arousal, brain activity and purposeful movement. The last three of these can be used as indicators of consciousness when verbal behavior is absent. The scientific literature regarding the neural bases of arousal and purposeful movement is very extensive. Their reliability as indicators of consciousness is disputed, however, due to numerous studies showing that alert human subjects can be induced to behave purposefully in a variety of ways in spite of reporting a complete lack of awareness. Studies of the neuroscience of free will have also shown that the experiences that people report when they behave purposefully sometimes do not correspond to their actual behaviors or to the patterns of electrical activity recorded from their brains. 
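To make the double dissociation described above more concrete, the following is a minimal sketch, in Python, of how a response-priming experiment is commonly scored. The reaction times, hit rate and false-alarm rate are invented placeholder numbers, not data from any particular study: the priming effect is the difference in mean reaction time between trials where the masked prime conflicts with the target and trials where it agrees, while prime visibility is estimated separately with the signal-detection measure d'.

from statistics import NormalDist, mean

# Hypothetical per-trial reaction times for one subject (illustrative numbers only).
rt_consistent   = [412, 398, 405, 420, 401]   # ms, prime and target agree
rt_inconsistent = [446, 431, 450, 438, 442]   # ms, prime and target conflict

# Priming effect: slower responses when the unseen prime conflicts with the target.
priming_effect_ms = mean(rt_inconsistent) - mean(rt_consistent)

# Prime visibility from a separate forced-choice identification task,
# scored with signal detection theory: d' = z(hit rate) - z(false-alarm rate).
hit_rate, false_alarm_rate = 0.54, 0.48       # near chance, so the prime is barely visible
z = NormalDist().inv_cdf
d_prime = z(hit_rate) - z(false_alarm_rate)

print(f"priming effect: {priming_effect_ms:.1f} ms, prime visibility d': {d_prime:.2f}")
# A sizeable priming effect together with a d' near zero is the dissociation
# pattern described in the text: the prime influences behavior even though
# subjects cannot reliably report (identify) it.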
Another approach applies specifically to the study of self-awareness, that is, the ability to distinguish oneself from others. In the 1970s Gordon Gallup developed an operational test for self-awareness, known as the mirror test. The test examines whether animals are able to differentiate between seeing themselves in a mirror versus seeing other animals. The classic example involves placing a spot of coloring on the skin or fur near the individual's forehead and seeing if they attempt to remove it or at least touch the spot, thus indicating that they recognize that the individual they are seeing in the mirror is themselves. Humans (older than 18 months) and other great apes, bottlenose dolphins, orcas, pigeons, European magpies and elephants have all been observed to pass this test.

Neural correlates

A major part of the scientific literature on consciousness consists of studies that examine the relationship between the experiences reported by subjects and the activity that simultaneously takes place in their brains—that is, studies of the neural correlates of consciousness. The hope is to find activity in a particular part of the brain, or a particular pattern of global brain activity, that is strongly predictive of conscious awareness. Several brain imaging techniques, such as EEG and fMRI, have been used for physical measures of brain activity in these studies.

Another idea that has drawn attention for several decades is that consciousness is associated with high-frequency (gamma band) oscillations in brain activity. This idea arose from proposals in the 1980s, by Christoph von der Malsburg and Wolf Singer, that gamma oscillations could solve the so-called binding problem, by linking information represented in different parts of the brain into a unified experience. Rodolfo Llinás, for example, proposed that consciousness results from recurrent thalamo-cortical resonance where the specific thalamocortical systems (content) and the non-specific (centromedial thalamus) thalamocortical systems (context) interact in the gamma band frequency via synchronous oscillations.

A number of studies have shown that activity in primary sensory areas of the brain is not sufficient to produce consciousness: it is possible for subjects to report a lack of awareness even when areas such as the primary visual cortex (V1) show clear electrical responses to a stimulus. Higher brain areas are seen as more promising, especially the prefrontal cortex, which is involved in a range of higher cognitive functions collectively known as executive functions. There is substantial evidence that a "top-down" flow of neural activity (i.e., activity propagating from the frontal cortex to sensory areas) is more predictive of conscious awareness than a "bottom-up" flow of activity. The prefrontal cortex is not the only candidate area, however: studies by Nikos Logothetis and his colleagues have shown, for example, that visually responsive neurons in parts of the temporal lobe reflect visual perception when conflicting visual images are presented to different eyes (i.e., bistable percepts during binocular rivalry). Furthermore, top-down feedback from higher to lower visual brain areas may be weaker or absent in the peripheral visual field, as suggested by some experimental data and theoretical arguments; nevertheless humans can perceive visual inputs in the peripheral visual field arising from bottom-up V1 neural activities.
Meanwhile, bottom-up V1 activities for the central visual fields can be vetoed, and thus made invisible to perception, by the top-down feedback, when these bottom-up signals are inconsistent with the brain's internal model of the visual world.

Modulation of neural responses may correlate with phenomenal experiences. In contrast to the raw electrical responses that do not correlate with consciousness, the modulation of these responses by other stimuli correlates surprisingly well with an important aspect of consciousness: namely with the phenomenal experience of stimulus intensity (brightness, contrast). In the research group of Danko Nikolić it has been shown that some of the changes in the subjectively perceived brightness correlated with the modulation of firing rates while others correlated with the modulation of neural synchrony. An fMRI investigation suggested that these findings were strictly limited to the primary visual areas. This indicates that, in the primary visual areas, changes in firing rates and synchrony can be considered as neural correlates of qualia—at least for some types of qualia.

In 2013, the perturbational complexity index (PCI) was proposed, a measure of the algorithmic complexity of the electrophysiological response of the cortex to transcranial magnetic stimulation. This measure was shown to be higher in individuals who are awake, in REM sleep or in a locked-in state than in those who are in deep sleep or in a vegetative state, making it potentially useful as a quantitative assessment of consciousness states.

Assuming that not only humans but even some non-mammalian species are conscious, a number of evolutionary approaches to the problem of neural correlates of consciousness open up. For example, assuming that birds are conscious—a common assumption among neuroscientists and ethologists due to the extensive cognitive repertoire of birds—there are comparative neuroanatomical ways to validate some of the principal, currently competing, mammalian consciousness–brain theories. The rationale for such a comparative study is that the avian brain deviates structurally from the mammalian brain. So how similar are they? What homologues can be identified? The general conclusion from the study by Butler et al. is that some of the major theories for the mammalian brain also appear to be valid for the avian brain. The structures assumed to be critical for consciousness in mammalian brains have homologous counterparts in avian brains. Thus the main portions of the theories of Crick and Koch, Edelman and Tononi, and Cotterill seem to be compatible with the assumption that birds are conscious. Edelman also differentiates between what he calls primary consciousness (which is a trait shared by humans and non-human animals) and higher-order consciousness as it appears in humans alone along with human language capacity. Certain aspects of the three theories, however, seem less easy to apply to the hypothesis of avian consciousness. For instance, the suggestion by Crick and Koch that layer 5 neurons of the mammalian brain have a special role seems difficult to apply to the avian brain, since the avian homologues have a different morphology. Likewise, the theory of Eccles seems incompatible, since a structural homologue/analogue to the dendron has not been found in avian brains. The assumption of an avian consciousness also brings the reptilian brain into focus.
The reason is the structural continuity between avian and reptilian brains, meaning that the phylogenetic origin of consciousness may be earlier than suggested by many leading neuroscientists. Joaquin Fuster of UCLA has advocated the position of the importance of the prefrontal cortex in humans, along with the areas of Wernicke and Broca, as being of particular importance to the development of human language capacities neuro-anatomically necessary for the emergence of higher-order consciousness in humans. A study in 2016 looked at lesions in specific areas of the brainstem that were associated with coma and vegetative states. A small region of the rostral dorsolateral pontine tegmentum in the brainstem was suggested to drive consciousness through functional connectivity with two cortical regions, the left ventral anterior insular cortex, and the pregenual anterior cingulate cortex. These three regions may work together as a triad to maintain consciousness. Models A wide range of empirical theories of consciousness have been proposed. Adrian Doerig and colleagues list 13 notable theories, while Anil Seth and Tim Bayne list 22 notable theories. Integrated information theory (IIT) postulates that consciousness resides in the information being processed and arises once the information reaches a certain level of complexity. Orchestrated objective reduction (Orch OR) postulates that consciousness originates at the quantum level inside neurons. The mechanism is held to be a quantum process called objective reduction that is orchestrated by cellular structures called microtubules. However the details of the mechanism would go beyond current quantum theory. In 2011, Graziano and Kastner proposed the "attention schema" theory of awareness. In that theory, specific cortical areas, notably in the superior temporal sulcus and the temporo-parietal junction, are used to build the construct of awareness and attribute it to other people. The same cortical machinery is also used to attribute awareness to oneself. Damage to these cortical regions can lead to deficits in consciousness such as hemispatial neglect. In the attention schema theory, the value of explaining the feature of awareness and attributing it to a person is to gain a useful predictive model of that person's attentional processing. Attention is a style of information processing in which a brain focuses its resources on a limited set of interrelated signals. Awareness, in this theory, is a useful, simplified schema that represents attentional states. To be aware of X is explained by constructing a model of one's attentional focus on X. The entropic brain is a theory of conscious states informed by neuroimaging research with psychedelic drugs. The theory suggests that the brain in primary states such as rapid eye movement (REM) sleep, early psychosis and under the influence of psychedelic drugs, is in a disordered state; normal waking consciousness constrains some of this freedom and makes possible metacognitive functions such as internal self-administered reality testing and self-awareness. Criticism has included questioning whether the theory has been adequately tested. In 2017, work by David Rudrauf and colleagues, including Karl Friston, applied the active inference paradigm to consciousness, a model of how sensory data is integrated with priors in a process of projective transformation. 
The authors argue that, while their model identifies a key relationship between computation and phenomenology, it does not completely solve the hard problem of consciousness or completely close the explanatory gap. Biological function and evolution Opinions are divided as to where in biological evolution consciousness emerged and about whether or not consciousness has any survival value. Some argue that consciousness is a byproduct of evolution. It has been argued that consciousness emerged (i) exclusively with the first humans, (ii) exclusively with the first mammals, (iii) independently in mammals and birds, or (iv) with the first reptiles. Other authors date the origins of consciousness to the first animals with nervous systems or early vertebrates in the Cambrian over 500 million years ago. Donald Griffin suggests in his book Animal Minds a gradual evolution of consciousness. Each of these scenarios raises the question of the possible survival value of consciousness. Thomas Henry Huxley defends in an essay titled On the Hypothesis that Animals are Automata, and its History an epiphenomenalist theory of consciousness according to which consciousness is a causally inert effect of neural activity—"as the steam-whistle which accompanies the work of a locomotive engine is without influence upon its machinery". To this William James objects in his essay Are We Automata? by stating an evolutionary argument for mind-brain interaction implying that if the preservation and development of consciousness in the biological evolution is a result of natural selection, it is plausible that consciousness has not only been influenced by neural processes, but has had a survival value itself; and it could only have had this if it had been efficacious. Karl Popper develops in the book The Self and Its Brain a similar evolutionary argument. Regarding the primary function of conscious processing, a recurring idea in recent theories is that phenomenal states somehow integrate neural activities and information-processing that would otherwise be independent. This has been called the integration consensus. Another example has been proposed by Gerald Edelman called dynamic core hypothesis which puts emphasis on reentrant connections that reciprocally link areas of the brain in a massively parallel manner. Edelman also stresses the importance of the evolutionary emergence of higher-order consciousness in humans from the historically older trait of primary consciousness which humans share with non-human animals (see Neural correlates section above). These theories of integrative function present solutions to two classic problems associated with consciousness: differentiation and unity. They show how our conscious experience can discriminate between a virtually unlimited number of different possible scenes and details (differentiation) because it integrates those details from our sensory systems, while the integrative nature of consciousness in this view easily explains how our experience can seem unified as one whole despite all of these individual parts. However, it remains unspecified which kinds of information are integrated in a conscious manner and which kinds can be integrated without consciousness. Nor is it explained what specific causal role conscious integration plays, nor why the same functionality cannot be achieved without consciousness. 
Obviously not all kinds of information are capable of being disseminated consciously (e.g., neural activity related to vegetative functions, reflexes, unconscious motor programs, low-level perceptual analyses, etc.) and many kinds of information can be disseminated and combined with other kinds without consciousness, as in intersensory interactions such as the ventriloquism effect. Hence it remains unclear why any of it is conscious. For a review of the differences between conscious and unconscious integrations, see the article of Ezequiel Morsella.

As noted earlier, even among writers who consider consciousness to be well-defined, there is widespread dispute about which animals other than humans can be said to possess it. Edelman has described this distinction as that of humans possessing higher-order consciousness while sharing the trait of primary consciousness with non-human animals (see previous paragraph). Thus, any examination of the evolution of consciousness is faced with great difficulties. Nevertheless, some writers have argued that consciousness can be viewed from the standpoint of evolutionary biology as an adaptation in the sense of a trait that increases fitness. In his article "Evolution of consciousness", John Eccles argued that special anatomical and physical properties of the mammalian cerebral cortex gave rise to consciousness ("[a] psychon ... linked to [a] dendron through quantum physics"). Bernard Baars proposed that once in place, this "recursive" circuitry may have provided a basis for the subsequent development of many of the functions that consciousness facilitates in higher organisms. Peter Carruthers has put forth one such potential adaptive advantage gained by conscious creatures by suggesting that consciousness allows an individual to make distinctions between appearance and reality. This ability would enable a creature to recognize the likelihood that their perceptions are deceiving them (e.g. that water in the distance may be a mirage) and behave accordingly, and it could also facilitate the manipulation of others by recognizing how things appear to them for both cooperative and devious ends.

Other philosophers, however, have suggested that consciousness would not be necessary for any functional advantage in evolutionary processes. No one has given a causal explanation, they argue, of why it would not be possible for a functionally equivalent non-conscious organism (i.e., a philosophical zombie) to achieve the very same survival advantages as a conscious organism. If evolutionary processes are blind to the difference between function F being performed by conscious organism O and non-conscious organism O*, it is unclear what adaptive advantage consciousness could provide. As a result, an exaptive explanation of consciousness has gained favor with some theorists who posit that consciousness did not evolve as an adaptation but was an exaptation arising as a consequence of other developments such as increases in brain size or cortical rearrangement. Consciousness in this sense has been compared to the blind spot in the retina, where it is not an adaptation of the retina but instead just a by-product of the way the retinal axons were wired. Several scholars including Pinker, Chomsky, Edelman, and Luria have indicated the importance of the emergence of human language as an important regulative mechanism of learning and memory in the context of the development of higher-order consciousness (see Neural correlates section above).
Altered states

There are some brain states in which consciousness seems to be absent, including dreamless sleep or coma. There are also a variety of circumstances that can change the relationship between the mind and the world in less drastic ways, producing what are known as altered states of consciousness. Some altered states occur naturally; others can be produced by drugs or brain damage. Altered states can be accompanied by changes in thinking, disturbances in the sense of time, feelings of loss of control, changes in emotional expression, alterations in body image and changes in meaning or significance.

The two most widely accepted altered states are sleep and dreaming. Although dream sleep and non-dream sleep appear very similar to an outside observer, each is associated with a distinct pattern of brain activity, metabolic activity, and eye movement; each is also associated with a distinct pattern of experience and cognition. During ordinary non-dream sleep, people who are awakened report only vague and sketchy thoughts, and their experiences do not cohere into a continuous narrative. During dream sleep, in contrast, people who are awakened report rich and detailed experiences in which events form a continuous progression, which may however be interrupted by bizarre or fantastic intrusions. Thought processes during the dream state frequently show a high level of irrationality. Both dream and non-dream states are associated with severe disruption of memory: it usually disappears in seconds during the non-dream state, and in minutes after awakening from a dream unless actively refreshed.

Research conducted on the effects of partial epileptic seizures on consciousness found that patients who have partial epileptic seizures experience altered states of consciousness. In partial epileptic seizures, consciousness is impaired or lost while some aspects of consciousness, often automated behaviors, remain intact. Studies found that when measuring the qualitative features during partial epileptic seizures, patients exhibited an increase in arousal and became absorbed in the experience of the seizure, followed by difficulty in focusing and shifting attention.

A variety of psychoactive drugs, including alcohol, have notable effects on consciousness. These range from a simple dulling of awareness produced by sedatives, to increases in the intensity of sensory qualities produced by stimulants, cannabis, empathogens–entactogens such as MDMA ("Ecstasy"), or most notably by the class of drugs known as psychedelics. LSD, mescaline, psilocybin, dimethyltryptamine, and others in this group can produce major distortions of perception, including hallucinations; some users even describe their drug-induced experiences as mystical or spiritual in quality. The brain mechanisms underlying these effects are not as well understood as those induced by use of alcohol, but there is substantial evidence that alterations in the brain system that uses the chemical neurotransmitter serotonin play an essential role.

There has been some research into physiological changes in yogis and people who practise various techniques of meditation. Some research with brain waves during meditation has reported differences between those corresponding to ordinary relaxation and those corresponding to meditation. It has been disputed, however, whether there is enough evidence to count these as physiologically distinct states of consciousness.
The most extensive study of the characteristics of altered states of consciousness was made by psychologist Charles Tart in the 1960s and 1970s. Tart analyzed a state of consciousness as made up of a number of component processes, including exteroception (sensing the external world); interoception (sensing the body); input-processing (seeing meaning); emotions; memory; time sense; sense of identity; evaluation and cognitive processing; motor output; and interaction with the environment. Each of these, in his view, could be altered in multiple ways by drugs or other manipulations. The components that Tart identified have not, however, been validated by empirical studies. Research in this area has not yet reached firm conclusions, but a recent questionnaire-based study identified eleven significant factors contributing to drug-induced states of consciousness: experience of unity; spiritual experience; blissful state; insightfulness; disembodiment; impaired control and cognition; anxiety; complex imagery; elementary imagery; audio-visual synesthesia; and changed meaning of percepts. Medical aspects The medical approach to consciousness is scientifically oriented. It derives from a need to treat people whose brain function has been impaired as a result of disease, brain damage, toxins, or drugs. In medicine, conceptual distinctions are considered useful to the degree that they can help to guide treatments. The medical approach focuses mostly on the amount of consciousness a person has: in medicine, consciousness is assessed as a "level" ranging from coma and brain death at the low end, to full alertness and purposeful responsiveness at the high end. Consciousness is of concern to patients and physicians, especially neurologists and anesthesiologists. Patients may have disorders of consciousness or may need to be anesthetized for a surgical procedure. Physicians may perform consciousness-related interventions such as instructing the patient to sleep, administering general anesthesia, or inducing medical coma. Also, bioethicists may be concerned with the ethical implications of consciousness in medical cases of patients such as the Karen Ann Quinlan case, while neuroscientists may study patients with impaired consciousness in hopes of gaining information about how the brain works. Assessment In medicine, consciousness is examined using a set of procedures known as neuropsychological assessment. There are two commonly used methods for assessing the level of consciousness of a patient: a simple procedure that requires minimal training, and a more complex procedure that requires substantial expertise. The simple procedure begins by asking whether the patient is able to move and react to physical stimuli. If so, the next question is whether the patient can respond in a meaningful way to questions and commands. If so, the patient is asked for name, current location, and current day and time. A patient who can answer all of these questions is said to be "alert and oriented times four" (sometimes denoted "A&Ox4" on a medical chart), and is usually considered fully conscious. The more complex procedure is known as a neurological examination, and is usually carried out by a neurologist in a hospital setting. A formal neurological examination runs through a precisely delineated series of tests, beginning with tests for basic sensorimotor reflexes, and culminating with tests for sophisticated use of language. 
The outcome may be summarized using the Glasgow Coma Scale, which yields a number in the range 3–15, with a score of 3 to 8 indicating coma, and 15 indicating full consciousness. The Glasgow Coma Scale has three subscales, measuring the best motor response (ranging from "no motor response" to "obeys commands"), the best eye response (ranging from "no eye opening" to "eyes opening spontaneously") and the best verbal response (ranging from "no verbal response" to "fully oriented"). There is also a simpler pediatric version of the scale, for children too young to be able to use language.

In 2013, an experimental procedure was developed to measure degrees of consciousness; the procedure involves stimulating the brain with a magnetic pulse, measuring the resulting waves of electrical activity, and deriving a consciousness score from the complexity of the brain activity.

Disorders

Medical conditions that inhibit consciousness are considered disorders of consciousness. This category generally includes minimally conscious state and persistent vegetative state, but sometimes also includes the less severe locked-in syndrome and more severe chronic coma. Differential diagnosis of these disorders is an active area of biomedical research. Finally, brain death results in possible irreversible disruption of consciousness. While other conditions may cause a moderate deterioration (e.g., dementia and delirium) or transient interruption (e.g., grand mal and petit mal seizures) of consciousness, they are not included in this category.

Medical experts increasingly view anosognosia as a disorder of consciousness. Anosognosia is a Greek-derived term meaning "unawareness of disease". This is a condition in which patients are disabled in some way, most commonly as a result of a stroke, but either misunderstand the nature of the problem or deny that there is anything wrong with them. The most frequently occurring form is seen in people who have experienced a stroke damaging the parietal lobe in the right hemisphere of the brain, giving rise to a syndrome known as hemispatial neglect, characterized by an inability to direct action or attention toward objects located to the left with respect to their bodies. Patients with hemispatial neglect are often paralyzed on the left side of the body, but sometimes deny being unable to move. When questioned about the obvious problem, the patient may avoid giving a direct answer, or may give an explanation that does not make sense. Patients with hemispatial neglect may also fail to recognize paralyzed parts of their bodies: one frequently mentioned case is of a man who repeatedly tried to throw his own paralyzed right leg out of the bed he was lying in, and when asked what he was doing, complained that somebody had put a dead leg into the bed with him. An even more striking type of anosognosia is Anton–Babinski syndrome, a rarely occurring condition in which patients become blind but claim to be able to see normally, and persist in this claim in spite of all evidence to the contrary.

Outside human adults

In children

Of the eight types of consciousness in the Lycan classification, some are detectable in utero and others develop years after birth. Psychologist and educator William Foulkes studied children's dreams and concluded that prior to the shift in cognitive maturation that humans experience during ages five to seven, children lack the Lockean consciousness that Lycan had labeled "introspective consciousness" and that Foulkes labels "self-reflection."
In a 2020 paper, Katherine Nelson and Robyn Fivush use "autobiographical consciousness" to label essentially the same faculty, and agree with Foulkes on the timing of this faculty's acquisition. Nelson and Fivush contend that "language is the tool by which humans create a new, uniquely human form of consciousness, namely, autobiographical consciousness." Julian Jaynes had staked out these positions decades earlier. Citing the developmental steps that lead the infant to autobiographical consciousness, Nelson and Fivush point to the acquisition of "theory of mind," calling theory of mind "necessary for autobiographical consciousness" and defining it as "understanding differences between one's own mind and others' minds in terms of beliefs, desires, emotions and thoughts." They write, "The hallmark of theory of mind, the understanding of false belief, occurs ... at five to six years of age." In animals The topic of animal consciousness is beset by a number of difficulties. It poses the problem of other minds in an especially severe form, because non-human animals, lacking the ability to express human language, cannot tell humans about their experiences. Also, it is difficult to reason objectively about the question, because a denial that an animal is conscious is often taken to imply that it does not feel, its life has no value, and that harming it is not morally wrong. Descartes, for example, has sometimes been blamed for mistreatment of animals due to the fact that he believed only humans have a non-physical mind. Most people have a strong intuition that some animals, such as cats and dogs, are conscious, while others, such as insects, are not; but the sources of this intuition are not obvious, and are often based on personal interactions with pets and other animals they have observed. Philosophers who consider subjective experience the essence of consciousness also generally believe, as a correlate, that the existence and nature of animal consciousness can never rigorously be known. Thomas Nagel spelled out this point of view in an influential essay titled What Is it Like to Be a Bat?. He said that an organism is conscious "if and only if there is something that it is like to be that organism—something it is like for the organism"; and he argued that no matter how much we know about an animal's brain and behavior, we can never really put ourselves into the mind of the animal and experience its world in the way it does itself. Other thinkers, such as Douglas Hofstadter, dismiss this argument as incoherent. Several psychologists and ethologists have argued for the existence of animal consciousness by describing a range of behaviors that appear to show animals holding beliefs about things they cannot directly perceive—Donald Griffin's 2001 book Animal Minds reviews a substantial portion of the evidence. On July 7, 2012, eminent scientists from different branches of neuroscience gathered at the University of Cambridge to celebrate the Francis Crick Memorial Conference, which deals with consciousness in humans and pre-linguistic consciousness in nonhuman animals. After the conference, they signed in the presence of Stephen Hawking, the 'Cambridge Declaration on Consciousness', which summarizes the most important findings of the survey: "We decided to reach a consensus and make a statement directed to the public that is not scientific. It's obvious to everyone in this room that animals have consciousness, but it is not obvious to the rest of the world. 
It is not obvious to the rest of the Western world or the Far East. It is not obvious to the society." "Convergent evidence indicates that non-human animals ..., including all mammals and birds, and other creatures, ... have the necessary neural substrates of consciousness and the capacity to exhibit intentional behaviors." In artifacts The idea of an artifact made conscious is an ancient theme of mythology, appearing for example in the Greek myth of Pygmalion, who carved a statue that was magically brought to life, and in medieval Jewish stories of the Golem, a magically animated homunculus built of clay. However, the possibility of actually constructing a conscious machine was probably first discussed by Ada Lovelace, in a set of notes written in 1842 about the Analytical Engine invented by Charles Babbage, a precursor (never built) to modern electronic computers. Lovelace was essentially dismissive of the idea that a machine such as the Analytical Engine could think in a humanlike way. She wrote: One of the most influential contributions to this question was an essay written in 1950 by pioneering computer scientist Alan Turing, titled Computing Machinery and Intelligence. Turing disavowed any interest in terminology, saying that even "Can machines think?" is too loaded with spurious connotations to be meaningful; but he proposed to replace all such questions with a specific operational test, which has become known as the Turing test. To pass the test, a computer must be able to imitate a human well enough to fool interrogators. In his essay Turing discussed a variety of possible objections, and presented a counterargument to each of them. The Turing test is commonly cited in discussions of artificial intelligence as a proposed criterion for machine consciousness; it has provoked a great deal of philosophical debate. For example, Daniel Dennett and Douglas Hofstadter argue that anything capable of passing the Turing test is necessarily conscious, while David Chalmers argues that a philosophical zombie could pass the test, yet fail to be conscious. A third group of scholars have argued that with technological growth once machines begin to display any substantial signs of human-like behavior then the dichotomy (of human consciousness compared to human-like consciousness) becomes passé and issues of machine autonomy begin to prevail even as observed in its nascent form within contemporary industry and technology. Jürgen Schmidhuber argues that consciousness is the result of compression. As an agent sees representation of itself recurring in the environment, the compression of this representation can be called consciousness. In a lively exchange over what has come to be referred to as "the Chinese room argument", John Searle sought to refute the claim of proponents of what he calls "strong artificial intelligence (AI)" that a computer program can be conscious, though he does agree with advocates of "weak AI" that computer programs can be formatted to "simulate" conscious states. His own view is that consciousness has subjective, first-person causal powers by being essentially intentional due to the way human brains function biologically; conscious persons can perform computations, but consciousness is not inherently computational the way computer programs are. 
To make a Turing machine that speaks Chinese, Searle imagines a room with one monolingual English speaker (Searle himself, in fact), a book that designates a combination of Chinese symbols to be output paired with Chinese symbol input, and boxes filled with Chinese symbols. In this case, the English speaker is acting as a computer and the rulebook as a program. Searle argues that with such a machine, he would be able to process the inputs to outputs perfectly without having any understanding of Chinese, nor having any idea what the questions and answers could possibly mean. If the experiment were done in English, since Searle knows English, he would be able to take questions and give answers without any algorithms for English questions, and he would be effectively aware of what was being said and the purposes it might serve. Searle would pass the Turing test of answering the questions in both languages, but he is only conscious of what he is doing when he speaks English. Another way of putting the argument is to say that computer programs can pass the Turing test for processing the syntax of a language, but that the syntax cannot lead to semantic meaning in the way strong AI advocates hoped.

In the literature concerning artificial intelligence, Searle's essay has been second only to Turing's in the volume of debate it has generated. Searle himself was vague about what extra ingredients it would take to make a machine conscious: all he proposed was that what was needed was "causal powers" of the sort that the brain has and that computers lack. But other thinkers sympathetic to his basic argument have suggested that the necessary (though perhaps still not sufficient) extra conditions may include the ability to pass not just the verbal version of the Turing test, but the robotic version, which requires grounding the robot's words in the robot's sensorimotor capacity to categorize and interact with the things in the world that its words are about, Turing-indistinguishably from a real person. Turing-scale robotics is an empirical branch of research on embodied cognition and situated cognition.

In 2014, Victor Argonov suggested a non-Turing test for machine consciousness based on a machine's ability to produce philosophical judgments. He argues that a deterministic machine must be regarded as conscious if it is able to produce judgments on all problematic properties of consciousness (such as qualia or binding) having no innate (preloaded) philosophical knowledge on these issues, no philosophical discussions while learning, and no informational models of other creatures in its memory (such models may implicitly or explicitly contain knowledge about these creatures' consciousness). However, this test can be used only to detect, but not to refute, the existence of consciousness. A positive result proves that the machine is conscious but a negative result proves nothing. For example, an absence of philosophical judgments may be caused by a lack of the machine's intellect, not by an absence of consciousness.

Stream of consciousness

William James is usually credited with popularizing the idea that human consciousness flows like a stream, in his Principles of Psychology of 1890. According to James, the "stream of thought" is governed by five characteristics:
Every thought tends to be part of a personal consciousness.
Within each personal consciousness thought is always changing.
Within each personal consciousness thought is sensibly continuous.
It always appears to deal with objects independent of itself.
It is interested in some parts of these objects to the exclusion of others.

A similar concept appears in Buddhist philosophy, expressed by the Sanskrit term Citta-saṃtāna, which is usually translated as mindstream or "mental continuum". Buddhist teachings describe that consciousness manifests moment to moment as sense impressions and mental phenomena that are continuously changing. The teachings list six triggers that can result in the generation of different mental events. These triggers are input from the five senses (seeing, hearing, smelling, tasting or touch sensations), or a thought (relating to the past, present or the future) that happens to arise in the mind. The mental events generated as a result of these triggers are: feelings, perceptions and intentions/behaviour. The moment-by-moment manifestation of the mind-stream is said to happen in every person all the time. It even happens in a scientist who analyses various phenomena in the world, or analyses the material body including the brain itself. The manifestation of the mindstream is also described as being influenced by physical laws, biological laws, psychological laws, volitional laws, and universal laws. The purpose of the Buddhist practice of mindfulness is to understand the inherent nature of the consciousness and its characteristics.

Narrative form

In the West, the primary impact of the idea has been on literature rather than science: "stream of consciousness as a narrative mode" means writing in a way that attempts to portray the moment-to-moment thoughts and experiences of a character. This technique perhaps had its beginnings in the monologues of Shakespeare's plays and reached its fullest development in the novels of James Joyce and Virginia Woolf, although it has also been used by many other noted writers. A well-known example is the closing monologue of Molly Bloom in Joyce's Ulysses.

Spiritual approaches

To most philosophers, the word "consciousness" connotes the relationship between the mind and the world. To writers on spiritual or religious topics, it frequently connotes the relationship between the mind and God, or the relationship between the mind and deeper truths that are thought to be more fundamental than the physical world. The mystical psychiatrist Richard Maurice Bucke, author of the 1901 book Cosmic Consciousness: A Study in the Evolution of the Human Mind, distinguished between three types of consciousness: 'Simple Consciousness', awareness of the body, possessed by many animals; 'Self Consciousness', awareness of being aware, possessed only by humans; and 'Cosmic Consciousness', awareness of the life and order of the universe, possessed only by humans who are enlightened. Many more examples could be given, such as the various levels of spiritual consciousness presented by Prem Saran Satsangi and Stuart Hameroff. Another thorough account of the spiritual approach is Ken Wilber's 1977 book The Spectrum of Consciousness, a comparison of western and eastern ways of thinking about the mind. Wilber described consciousness as a spectrum with ordinary awareness at one end, and more profound types of awareness at higher levels.

See also
Chaitanya (consciousness): Pure consciousness in Hindu philosophy.
Models of consciousness: Ideas for a scientific mechanism underlying consciousness.
Plant perception (paranormal): A pseudoscientific theory.
Sakshi (Witness): Pure awareness in Hindu philosophy.
Vertiginous question: On the uniqueness of a person's consciousness.
Currency
A currency is a standardization of money in any form, in use or circulation as a medium of exchange, for example banknotes and coins. A more general definition is that a currency is a system of money in common use within a specific environment over time, especially for people in a nation state. Under this definition, the British pound sterling (£), the euro (€), the Japanese yen (¥), and the U.S. dollar (US$) are examples of (government-issued) fiat currencies. Currencies may act as stores of value and be traded between nations in foreign exchange markets, which determine the relative values of the different currencies. Currencies in this sense are either chosen by users or decreed by governments, and each type has limited boundaries of acceptance; i.e., legal tender laws may require a particular unit of account for payments to government agencies. Other definitions of the term "currency" appear in the respective synonymous articles: banknote, coin, and money. This article uses the definition which focuses on the currency systems of countries.

One can classify currencies into three monetary systems: fiat money, commodity money, and representative money, depending on what guarantees a currency's value (the economy at large vs. the government's physical metal reserves). Some currencies function as legal tender in certain jurisdictions, or for specific purposes, such as payment to a government (taxes), or government agencies (fees, fines). Others simply get traded for their economic value.

Digital currency has arisen with the popularity of computers and the Internet. Whether government-backed digital notes and coins (such as the digital renminbi in China, for example) will be successfully developed and utilized remains uncertain. Decentralized digital currencies, such as cryptocurrencies, are different because they are not issued by a government monetary authority; specifically, bitcoin, the first cryptocurrency and the leader in terms of market capitalization, has a fixed supply and is therefore ostensibly deflationary. Many warnings issued by various countries note the opportunities that cryptocurrencies create for illegal activities such as scams, ransomware, money laundering and terrorism. In 2014, the United States IRS issued a statement explaining that virtual currency is treated as property for Federal income-tax purposes, and it provides examples of how long-standing tax principles applicable to transactions involving property apply to virtual currency.

History

Early currency

Originally, currency was a form of receipt, representing grain stored in temple granaries in Sumer in ancient Mesopotamia and in Ancient Egypt. In this first stage of currency, metals were used as symbols to represent value stored in the form of commodities. This formed the basis of trade in the Fertile Crescent for over 1500 years. However, the collapse of the Near Eastern trading system pointed to a flaw: in an era where there was no place that was safe to store value, the value of a circulating medium could only be as sound as the forces that defended that store. A trade could only reach as far as the credibility of that military. By the late Bronze Age, however, a series of treaties had established safe passage for merchants around the Eastern Mediterranean, spreading from Minoan Crete and Mycenae in the northwest to Elam and Bahrain in the southeast. It is not known what was used as a currency for these exchanges, but it is thought that oxhide-shaped ingots of copper, produced in Cyprus, may have functioned as a currency.
It is thought that the increase in piracy and raiding associated with the Bronze Age collapse, possibly produced by the Peoples of the Sea, brought the trading system of oxhide ingots to an end. It was only the recovery of Phoenician trade in the 10th and 9th centuries BC that led to a return to prosperity, and the appearance of real coinage, possibly first in Anatolia with Croesus of Lydia and subsequently with the Greeks and Persians. In Africa, many forms of value store have been used, including beads, ingots, ivory, various forms of weapons, livestock, the manilla currency, and ochre and other earth oxides. The manilla rings of West Africa were one of the currencies used from the 15th century onwards to sell slaves. African currency is still notable for its variety, and in many places, various forms of barter still apply. Coinage The prevalence of metal coins possibly led to the metal itself being the store of value: first copper, then both silver and gold, and at one point also bronze. Today other non-precious metals are used for coins. Metals were mined, weighed, and stamped into coins. This was to assure the individual accepting the coin that he was getting a certain known weight of precious metal. Coins could be counterfeited, but the existence of standard coins also created a new unit of account, which helped lead to banking. Archimedes' principle provided the next link: coins could now be easily tested for their fine weight of the metal, and thus the value of a coin could be determined, even if it had been shaved, debased or otherwise tampered with (see Numismatics). Most major economies using coinage had several tiers of coins of different values, made of copper, silver, and gold. Gold coins were the most valuable and were used for large purchases, payment of the military, and backing of state activities. Units of account were often defined as the value of a particular type of gold coin. Silver coins were used for midsized transactions, and sometimes also defined a unit of account, while coins of copper or silver, or some mixture of them (see debasement), might be used for everyday transactions. This system had been used in ancient India since the time of the Mahajanapadas. The exact ratios between the values of the three metals varied greatly between different eras and places; for example, the opening of silver mines in the Harz mountains of central Europe made silver relatively less valuable, as did the flood of New World silver after the Spanish conquests. However, the rarity of gold consistently made it more valuable than silver, and likewise silver was consistently worth more than copper. Paper money In premodern China, the need for lending and for a medium of exchange that was less physically cumbersome than large numbers of copper coins led to the introduction of paper money, i.e. banknotes. Their introduction was a gradual process that lasted from the late Tang dynasty (618–907) into the Song dynasty (960–1279). It began as a means for merchants to exchange heavy coinage for receipts of deposit issued as promissory notes by wholesalers' shops. These notes were valid for temporary use in a small regional territory. In the 10th century, the Song dynasty government began to circulate these notes amongst the traders in its monopolized salt industry. The Song government granted several shops the right to issue banknotes, and in the early 12th century the government finally took over these shops to produce state-issued currency. 
Yet the banknotes issued were still only locally and temporarily valid: it was not until the mid 13th century that a standard and uniform government issue of paper money became an acceptable nationwide currency. The already widespread methods of woodblock printing and then Bi Sheng's movable type printing by the 11th century were the impetus for the mass production of paper money in premodern China. At around the same time in the medieval Islamic world, a vigorous monetary economy was created during the 7th–12th centuries on the basis of the expanding levels of circulation of a stable high-value currency (the dinar). Innovations introduced by Muslim economists, traders and merchants include the earliest uses of credit, cheques, promissory notes, savings accounts, transaction accounts, loaning, trusts, exchange rates, the transfer of credit and debt, and banking institutions for loans and deposits. In Europe, paper currency was first introduced on a regular basis in Sweden in 1661 (although Washington Irving records an earlier emergency use of it, by the Spanish in a siege during the Conquest of Granada). As Sweden was rich in copper, many copper coins were in circulation, but its relatively low value necessitated extraordinarily big coins, often weighing several kilograms. The advantages of paper currency were numerous: it reduced the need to transport gold and silver, which was risky; it facilitated loans of gold or silver at interest, since the underlying specie (money in the form of gold or silver coins rather than notes) never left the possession of the lender until someone else redeemed the note; and it allowed a division of currency into credit- and specie-backed forms. It enabled the sale of stock in joint-stock companies and the redemption of those shares in a paper. But there were also disadvantages. First, since a note has no intrinsic value, there was nothing to stop issuing authorities from printing more notes than they had specie to back them with. Second, because this increased the money supply, it increased inflationary pressures, a fact observed by David Hume in the 18th century. Thus paper money would often lead to an inflationary bubble, which could collapse if people began demanding hard money, causing the demand for paper notes to fall to zero. The printing of paper money was also associated with wars, and financing of wars, and therefore regarded as part of maintaining a standing army. For these reasons, paper currency was held in suspicion and hostility in Europe and America. It was also addictive since the speculative profits of trade and capital creation were quite large. Major nations established mints to print money and mint coins, and branches of their treasury to collect taxes and hold gold and silver stock. At that time, both silver and gold were considered a legal tender and accepted by governments for taxes. However, the instability in the exchange rate between the two grew over the course of the 19th century, with the increases both in the supply of these metals, particularly silver, and in trade. The parallel use of both metals is called bimetallism, and the attempt to create a bimetallic standard where both gold and silver backed currency remained in circulation occupied the efforts of inflationists. Governments at this point could use currency as an instrument of policy, printing paper currency such as the United States greenback, to pay for military expenditures. 
They could also set the terms at which they would redeem notes for specie, by limiting the amount of purchase, or the minimum amount that could be redeemed. By 1900, most of the industrializing nations were on some form of gold standard, with paper notes and silver coins constituting the circulating medium. Private banks and governments across the world followed Gresham's law: keeping the gold and silver they received but paying out in notes. This did not happen all around the world at the same time, but occurred sporadically, generally in times of war or financial crisis, beginning in the early 20th century and continuing across the world until the late 20th century, when the regime of floating fiat currencies came into force. One of the last countries to break away from the gold standard was the United States in 1971, an action which was known as the Nixon shock. No country has an enforceable gold standard or silver standard currency system. Banknote era A banknote or a bill is a type of currency and it is commonly used as legal tender in many jurisdictions. Together with coins, banknotes make up the cash form of a currency. Banknotes were initially mostly paper, but Australia's Commonwealth Scientific and Industrial Research Organisation developed a polymer currency in the 1980s; it went into circulation on the nation's bicentenary in 1988. Polymer banknotes had already been introduced in the Isle of Man in 1983. As of 2016, polymer currency is used in over 20 countries (over 40 if counting commemorative issues), and dramatically increases the life span of banknotes and reduces counterfeiting. Modern currencies The currency used is based on the concept of lex monetae; that a sovereign state decides which currency it shall use. The International Organization for Standardization has introduced a system of three-letter codes (ISO 4217) to denote currency (as opposed to simple names or currency signs), in order to remove the confusion arising because there are dozens of currencies called the dollar and several called the franc. Even the "pound" is used in nearly a dozen different countries; most of these are tied to the pound sterling, while the remainder has varying values. In general, the three-letter code uses the ISO 3166-1 country code for the first two letters and the first letter of the name of the currency (D for dollar, for example) as the third letter. United States currency, for instance, is globally referred to as USD. Currencies such as the pound sterling have different codes, as the first two letters denote not the exact country name but an alternative name also used to describe the country. The pound's code is GBP where GB denotes Great Britain instead of the United Kingdom. The former currencies include the marks that were in circulation in Germany and Finland. The International Monetary Fund uses a different system when referring to national currencies. Alternative currencies Distinct from centrally controlled government-issued currencies, private decentralized trust-reduced networks support alternative currencies such as Cardano (blockchain platform), bitcoin, Ethereum's ether, Litecoin, Monero, Peercoin, or Dogecoin, which are classified as cryptocurrency since the transfer of value is assured through cryptographic signatures validated by all users. 
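Looping back to the ISO 4217 convention described earlier in this section, the short Python sketch below shows how most three-letter codes are formed from the ISO 3166-1 country code plus the first letter of the currency's name. The lookup examples are a tiny illustrative sample, and, as noted above, exceptions such as GBP and supranational codes such as EUR do not follow the pattern.

# Minimal sketch of the usual ISO 4217 naming pattern (illustrative only).
def iso4217_code(country_alpha2: str, currency_name: str) -> str:
    """ISO 3166-1 alpha-2 country code + first letter of the currency name."""
    return (country_alpha2 + currency_name[0]).upper()

examples = [
    ("US", "Dollar"),  # -> USD
    ("JP", "Yen"),     # -> JPY
    ("CH", "Franc"),   # -> CHF (Switzerland)
]

for country, name in examples:
    print(country, name, "->", iso4217_code(country, name))

# Exceptions exist: the pound sterling is GBP, with "GB" standing for Great
# Britain rather than the United Kingdom's full name, and the euro (EUR) is
# not tied to a single country's code at all.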
There are also branded currencies, for example 'obligation' based stores of value, such as quasi-regulated BarterCard, Loyalty Points (Credit Cards, Airlines) or Game-Credits (MMO games) that are based on reputation of commercial products, or highly regulated 'asset-backed' 'alternative currencies' such as mobile-money schemes like MPESA (called E-Money Issuance). The currency may be Internet-based and digital, for instance, bitcoin is not tied to any specific country, or the IMF's SDR that is based on a basket of currencies (and assets held). Possession and sale of alternative forms of currencies is often outlawed by governments in order to preserve the legitimacy of the constitutional currency for the benefit of all citizens. For example, Article I, section 8, clause 5 of the United States Constitution delegates to Congress the power to coin money and to regulate the value thereof. This power was delegated to Congress in order to establish and preserve a uniform standard of value and to insure a singular monetary system for all purchases and debts in the United States, public and private. Along with the power to coin money, the United States Congress has the concurrent power to restrain the circulation of money which is not issued under its own authority in order to protect and preserve the constitutional currency. It is a violation of federal law for individuals, or organizations to create private coin or currency systems to compete with the official coinage and currency of the United States. Control and production In most cases, a central bank has the exclusive power to issue all forms of currency, including coins and banknotes (fiat money), and to restrain the circulation alternative currencies for its own area of circulation (a country or group of countries); it regulates the production of currency by banks (credit) through monetary policy. An exchange rate is a price at which two currencies can be exchanged against each other. This is used for trade between the two currency zones. Exchange rates can be classified as either floating or fixed. In the former, day-to-day movements in exchange rates are determined by the market; in the latter, governments intervene in the market to buy or sell their currency to balance supply and demand at a static exchange rate. In cases where a country has control of its own currency, that control is exercised either by a central bank or by a Ministry of Finance. The institution that has control of monetary policy is referred to as the monetary authority. Monetary authorities have varying degrees of autonomy from the governments that create them. A monetary authority is created and supported by its sponsoring government, so independence can be reduced by the legislative or executive authority that creates it. Several countries can use the same name for their own separate currencies (for example, a dollar in Australia, Canada, and the United States). By contrast, several countries can also use the same currency (for example, the euro or the CFA franc), or one country can declare the currency of another country to be legal tender. For example, Panama and El Salvador have declared US currency to be legal tender, and from 1791 to 1857, Spanish dollars were legal tender in the United States. At various times countries have either re-stamped foreign coins or used currency boards, issuing one note of currency for each note of a foreign government held, as Ecuador currently does. 
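The exchange-rate idea introduced above can be made concrete with a minimal conversion sketch. The rate used here is a made-up illustrative quote, not live market data, and a real system would also account for spreads, fees, and the quoting convention (base versus quote currency).

```python
from decimal import Decimal, ROUND_HALF_EVEN

def convert(amount: Decimal, rate: Decimal, decimals: int = 2) -> Decimal:
    """Convert an amount of the base currency into the quote currency at a
    given exchange rate, rounding to the quote currency's minor unit."""
    quantum = Decimal(1).scaleb(-decimals)
    return (amount * rate).quantize(quantum, rounding=ROUND_HALF_EVEN)

# Hypothetical quote: 1 EUR = 1.0850 USD (illustrative value only).
print(convert(Decimal("250.00"), Decimal("1.0850")))  # 271.25 (USD)
```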
Each currency typically has a main currency unit (the dollar, for example, or the euro) and a fractional unit, often defined as 1/100 of the main unit: 100 cents = 1 dollar, 100 centimes = 1 franc, 100 pence = 1 pound, although units of 1/10 or 1/1000 occasionally also occur. Some currencies do not have any smaller units at all, such as the Icelandic króna and the Japanese yen. Mauritania and Madagascar are the only remaining countries with theoretical fractional units not based on the decimal system; instead, the Mauritanian ouguiya is in theory divided into 5 khoums, while the Malagasy ariary is theoretically divided into 5 iraimbilanja. In these countries, words like dollar or pound "were simply names for given weights of gold". Due to inflation, khoums and iraimbilanja have in practice fallen into disuse. (See non-decimal currencies for other historic currencies with non-decimal divisions.)
Currency convertibility
Subject to variation around the world, local currency can be converted to another currency or vice versa with or without central bank/government intervention. Such conversions take place in the foreign exchange market. Based on these restrictions, or on how freely and readily they can be converted, currencies are classified as:
Fully convertible
When there are no restrictions or limitations on the amount of currency that can be traded on the international market, and the government does not artificially impose a fixed value or minimum value on the currency in international trade. The US dollar is one of the main fully convertible currencies.
Partially convertible
Central banks control international investments flowing into and out of a country. While most domestic transactions are handled without any special requirements, there are significant restrictions on international investing, and special approval is often required in order to convert into other currencies. The Indian rupee and the renminbi are examples of partially convertible currencies.
Nonconvertible
A government neither participates in the international currency market nor allows the conversion of its currency by individuals or companies. These currencies are also known as blocked, e.g. the North Korean won and the Cuban peso.
Across the three aspects of trade in goods and services, capital flows and national policies, the supply-demand relationship of different currencies determines the exchange ratio between currencies.
Trade in goods and services
Through cost transfer, goods and services circulating within a country (such as hotels, tourism, catering, advertising, and household services) indirectly affect the cost of traded goods and services and the price of exports. Services and goods involved in international trade are therefore not the only factors affecting the exchange rate. The large numbers of international tourists and overseas students also move services and goods across borders, so the competitiveness of goods and services worldwide directly affects international exchange rates.
Capital flows
National currencies are traded on international markets for investment purposes. Investment opportunities in each country attract investment from other countries, so that these foreign currencies accumulate as reserves in each country's central bank.
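As a practical illustration of the main-unit/fractional-unit convention described at the start of this section, accounting software commonly stores amounts as integer counts of the minor unit. The sketch below assumes a small, hand-written table of minor-unit exponents; a production system would take these values from the ISO 4217 registry.

```python
from decimal import Decimal

# Minor-unit exponents: 2 means the fractional unit is 1/100 of the main unit
# (cents, centimes, pence); 0 means there is no smaller unit (e.g. yen, króna).
MINOR_UNIT_EXPONENT = {"USD": 2, "EUR": 2, "JPY": 0, "ISK": 0}

def to_minor_units(amount_main: str, currency: str) -> int:
    """Express a decimal amount of the main unit as an integer number of
    minor units, avoiding binary floating-point rounding errors."""
    exponent = MINOR_UNIT_EXPONENT[currency]
    return int(Decimal(amount_main).scaleb(exponent))

print(to_minor_units("19.99", "USD"))  # 1999 (cents)
print(to_minor_units("500", "JPY"))    # 500 (the yen has no fractional unit)
```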
The exchange rate mechanism, in which currencies are quoted continuously between countries, is based on foreign exchange markets in which currencies are invested by individuals and traded or speculated by central banks and investment institutions. In addition, changes in interest rates, capital market fluctuations and changes in investment opportunities will affect the global capital inflows and outflows of countries around the world, and exchange rates will fluctuate accordingly. National policies The country's foreign trade, monetary and fiscal policies affect the exchange rate fluctuations. Foreign trade includes policies such as tariffs and import standards for commodity exports. The impact of monetary policy on the total amount and yield of money directly determines the changes in the international exchange rate. Fiscal policies, such as transfer payments, taxation ratios, and other factors, dominate the profitability of capital and economic development, and the ratio of national debt issuance to deficit determines the repayment capacity and credit rating of the country. Such policies determine the mechanism of linking domestic and foreign currencies and therefore have a significant impact on the generation of exchange rates. Currency convertibility is closely linked to economic development and finance. There are strict conditions for countries to achieve currency convertibility, which is a good way for countries to improve their economies. The currencies of some countries or regions in the world are freely convertible, such as the US dollar, Australian dollar and Japanese yen. The requirements for currency convertibility can be roughly divided into four parts: Sound microeconomic agency With a freely convertible currency, domestic firms will have to compete fiercely with their foreign counterparts. The development of competition among them will affect the implementation effect of currency convertibility. In addition, microeconomics is a prerequisite for macroeconomic conditions. The macroeconomic situation and policies are stable Since currency convertibility is the cross-border flow of goods and capital, it will have an impact on the macro economy. This requires that the national economy be in a normal and orderly state, that is, there is no serious inflation and economic overheating. In addition, the government should use macro policies to make mature adjustments to deal with the impact of currency exchange on the economy. A reasonable and open economy The maintainability of international balance of payments is the main performance of reasonable economic structure. Currency convertibility not only causes difficulties in the sustainability of international balance of payments but also affects the government's direct control over international economic transactions. To eliminate the foreign exchange shortage, the government needs adequate international reserves. Appropriate exchange rate regime and level The level of exchange rate is an important factor in maintaining exchange rate stability, both before and after currency convertibility. The exchange rate of freely convertible currency is too high or too low, which can easily trigger speculation and undermine the stability of macroeconomic and financial markets. Therefore, to maintain the level of exchange rate, a proper exchange rate regime is crucial. Local currency In economics, a local currency is a currency not backed by a national government and intended to trade only in a small area. 
Advocates such as Jane Jacobs argue that this enables an economically depressed region to pull itself up, by giving the people living there a medium of exchange that they can use to exchange services and locally produced goods (in a broader sense, this is the original purpose of all money). Opponents of this concept argue that local currency creates a barrier that can interfere with economies of scale and comparative advantage and that in some cases they can serve as a means of tax evasion. Local currencies can also come into being when there is economic turmoil involving the national currency. An example of this is the Argentinian economic crisis of 2002 in which IOUs issued by local governments quickly took on some of the characteristics of local currencies. One of the best examples of a local currency is the original LETS currency, founded on Vancouver Island in the early 1980s. In 1982, the Canadian Central Bank’s lending rates ran up to 14% which drove chartered bank lending rates as high as 19%. The resulting currency and credit scarcity left island residents with few options other than to create a local currency. List of major world payment currencies The following table are estimates of the 20 most frequently used currencies in world payments in January 2023 by SWIFT. See also Related concepts Counterfeit money Currency band Currency transaction tax Debasement Exchange rate Fiscal localism Foreign exchange market Foreign exchange reserves Functional currency History of banking History of money Mutilated currency Optimum currency area Slang terms for money Virtual currency World currency Accounting units Currency pair Currency symbol Currency strength European Currency Unit Fictional currency Franc Poincaré Local currencies Petrocurrency Special drawing rights Lists ISO 4217 List of alternative names for currency List of currencies List of circulating currencies List of proposed currencies List of historical currencies List of historical exchange rates List of international trade topics List of motifs on banknotes Notes References External links Foreign exchange market
https://en.wikipedia.org/wiki/Chromium
Chromium
Chromium is a chemical element with the symbol Cr and atomic number 24. It is the first element in group 6. It is a steely-grey, lustrous, hard, and brittle transition metal. Chromium metal is valued for its high corrosion resistance and hardness. A major development in steel production was the discovery that steel could be made highly resistant to corrosion and discoloration by adding metallic chromium to form stainless steel. Stainless steel and chrome plating (electroplating with chromium) together comprise 85% of the commercial use. Chromium is also greatly valued as a metal that is able to be highly polished while resisting tarnishing. Polished chromium reflects almost 70% of the visible spectrum, and almost 90% of infrared light. The name of the element is derived from the Greek word χρῶμα, chrōma, meaning color, because many chromium compounds are intensely colored. Industrial production of chromium proceeds from chromite ore (mostly FeCr2O4) to produce ferrochromium, an iron-chromium alloy, by means of aluminothermic or silicothermic reactions. Ferrochromium is then used to produce alloys such as stainless steel. Pure chromium metal is produced by a different process: roasting and leaching of chromite to separate it from iron, followed by reduction with carbon and then aluminium. In the United States, trivalent chromium (Cr(III)) ion is considered an essential nutrient in humans for insulin, sugar, and lipid metabolism. However, in 2014, the European Food Safety Authority, acting for the European Union, concluded that there was insufficient evidence for chromium to be recognized as essential. While chromium metal and Cr(III) ions are considered non-toxic, hexavalent chromium, Cr(VI), is toxic and carcinogenic. According to the European Chemicals Agency (ECHA), chromium trioxide that is used in industrial electroplating processes is a "substance of very high concern" (SVHC). Abandoned chromium production sites often require environmental cleanup. Physical properties Atomic Chromium is the fourth transition metal found on the periodic table, and has an electron configuration of [Ar] 3d5 4s1. It is also the first element in the periodic table whose ground-state electron configuration violates the Aufbau principle. This occurs again later in the periodic table with other elements and their electron configurations, such as copper, niobium, and molybdenum. This occurs because electrons in the same orbital repel each other due to their like charges. In the previous elements, the energetic cost of promoting an electron to the next higher energy level is too great to compensate for that released by lessening inter-electronic repulsion. However, in the 3d transition metals, the energy gap between the 3d and the next-higher 4s subshell is very small, and because the 3d subshell is more compact than the 4s subshell, inter-electron repulsion is smaller between 4s electrons than between 3d electrons. This lowers the energetic cost of promotion and increases the energy released by it, so that the promotion becomes energetically feasible and one or even two electrons are always promoted to the 4s subshell. (Similar promotions happen for every transition metal atom but one, palladium.) Chromium is the first element in the 3d series where the 3d electrons start to sink into the nucleus; they thus contribute less to metallic bonding, and hence the melting and boiling points and the enthalpy of atomisation of chromium are lower than those of the preceding element vanadium. 
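The Aufbau exception described above can be illustrated with a short sketch that fills subshells in the conventional Madelung order and compares the naive prediction for chromium (Z = 24) with the observed ground state. The subshell order and capacities below are standard textbook values; the code is only a toy model and does not capture the inter-electron repulsion argument itself.

```python
# Naive Aufbau filling in Madelung order (sufficient for Z <= 36).
SUBSHELLS = ["1s", "2s", "2p", "3s", "3p", "4s", "3d", "4p"]
CAPACITY = {"s": 2, "p": 6, "d": 10, "f": 14}

def aufbau(z: int) -> dict:
    """Assign z electrons to subshells strictly in the listed filling order."""
    config, remaining = {}, z
    for sub in SUBSHELLS:
        if remaining == 0:
            break
        take = min(CAPACITY[sub[-1]], remaining)
        config[sub] = take
        remaining -= take
    return config

predicted = aufbau(24)
print(predicted)          # ends with '4s': 2, '3d': 4 -> naive [Ar] 4s2 3d4
observed = "[Ar] 3d5 4s1"  # chromium's actual ground-state configuration
```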
Chromium(VI) is a strong oxidising agent in contrast to the molybdenum(VI) and tungsten(VI) oxides. Bulk Chromium is extremely hard, and is the third hardest element behind carbon (diamond) and boron. Its Mohs hardness is 8.5, which means that it can scratch samples of quartz and topaz, but can be scratched by corundum. Chromium is highly resistant to tarnishing, which makes it useful as a metal that preserves its outermost layer from corroding, unlike other metals such as copper, magnesium, and aluminium. Chromium has a melting point of 1907 °C (3465 °F), which is relatively low compared to the majority of transition metals. However, it still has the second highest melting point out of all the Period 4 elements, being topped by vanadium by 3 °C (5 °F) at 1910 °C (3470 °F). The boiling point of 2671 °C (4840 °F), however, is comparatively lower, having the fourth lowest boiling point out of the Period 4 transition metals alone behind copper, manganese and zinc. The electrical resistivity of chromium at 20 °C is 125 nanoohm-meters. Chromium has a high specular reflection in comparison to other transition metals. In infrared, at 425 μm, chromium has a maximum reflectance of about 72%, reducing to a minimum of 62% at 750 μm before rising again to 90% at 4000 μm. When chromium is used in stainless steel alloys and polished, the specular reflection decreases with the inclusion of additional metals, yet is still high in comparison with other alloys. Between 40% and 60% of the visible spectrum is reflected from polished stainless steel. The explanation on why chromium displays such a high turnout of reflected photon waves in general, especially the 90% in infrared, can be attributed to chromium's magnetic properties. Chromium has unique magnetic properties - chromium is the only elemental solid that shows antiferromagnetic ordering at room temperature and below. Above 38 °C, its magnetic ordering becomes paramagnetic. The antiferromagnetic properties, which cause the chromium atoms to temporarily ionize and bond with themselves, are present because the body-centric cubic's magnetic properties are disproportionate to the lattice periodicity. This is due to the magnetic moments at the cube's corners and the unequal, but antiparallel, cube centers. From here, the frequency-dependent relative permittivity of chromium, deriving from Maxwell's equations and chromium's antiferromagnetism, leaves chromium with a high infrared and visible light reflectance. Passivation Chromium metal left standing in air is passivated - it forms a thin, protective, surface layer of oxide. This layer has a spinel structure a few atomic layers thick; it is very dense and inhibits the diffusion of oxygen into the underlying metal. In contrast, iron forms a more porous oxide through which oxygen can migrate, causing continued rusting. Passivation can be enhanced by short contact with oxidizing acids like nitric acid. Passivated chromium is stable against acids. Passivation can be removed with a strong reducing agent that destroys the protective oxide layer on the metal. Chromium metal treated in this way readily dissolves in weak acids. Chromium, unlike iron and nickel, does not suffer from hydrogen embrittlement. However, it does suffer from nitrogen embrittlement, reacting with nitrogen from air and forming brittle nitrides at the high temperatures necessary to work the metal parts. 
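As a small worked example using the resistivity quoted above (125 nΩ·m at 20 °C), the resistance of a wire follows R = ρL/A. The wire dimensions below are arbitrary assumptions chosen only to show the arithmetic.

```python
from math import pi

# Resistance of a hypothetical chromium wire, R = rho * L / A.
rho = 125e-9        # Ω·m, resistivity of chromium at 20 °C (as quoted above)
length = 1.0        # m, assumed wire length
diameter = 1.0e-3   # m, assumed 1 mm wire diameter

area = pi * (diameter / 2) ** 2
resistance = rho * length / area
print(f"{resistance * 1e3:.0f} mΩ")  # ≈ 159 mΩ for a 1 m wire of 1 mm diameter
```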
Isotopes
Naturally occurring chromium is composed of four stable isotopes: 50Cr, 52Cr, 53Cr and 54Cr, with 52Cr being the most abundant (83.789% natural abundance). 50Cr is observationally stable, as it is theoretically capable of decaying to 50Ti via double electron capture with a half-life of no less than 1.3×10^18 years. Twenty-five radioisotopes have been characterized, ranging from 42Cr to 70Cr; the most stable radioisotope is 51Cr with a half-life of 27.7 days. All of the remaining radioactive isotopes have half-lives of less than 24 hours, and the majority have half-lives of less than 1 minute. Chromium also has two metastable nuclear isomers. 53Cr is the radiogenic decay product of 53Mn (half-life 3.74 million years). Chromium isotopes are typically collocated (and compounded) with manganese isotopes. This circumstance is useful in isotope geology. Manganese-chromium isotope ratios reinforce the evidence from 26Al and 107Pd concerning the early history of the Solar System. Variations in 53Cr/52Cr and Mn/Cr ratios from several meteorites indicate an initial 53Mn/55Mn ratio that suggests the Mn-Cr isotopic composition must result from in-situ decay of 53Mn in differentiated planetary bodies. Hence 53Cr provides additional evidence for nucleosynthetic processes immediately before coalescence of the Solar System. The isotopes of chromium range in atomic mass from 43 u (43Cr) to 67 u (67Cr). The primary decay mode before the most abundant stable isotope, 52Cr, is electron capture and the primary mode after is beta decay. 53Cr has been posited as a proxy for atmospheric oxygen concentration.
Chemistry and compounds
Chromium is a member of group 6 of the transition metals. The +3 and +6 states occur most commonly within chromium compounds, followed by +2; charges of +1, +4 and +5 for chromium are rare but do nevertheless occasionally exist.
Common oxidation states
Chromium(0)
Many Cr(0) complexes are known. Bis(benzene)chromium and chromium hexacarbonyl are highlights in organochromium chemistry.
Chromium(II)
Chromium(II) compounds are uncommon, in part because they readily oxidize to chromium(III) derivatives in air. Water-stable chromium(II) chloride can be made by reducing chromium(III) chloride with zinc. The resulting bright blue solution of chromium(II) chloride is stable at neutral pH. Some other notable chromium(II) compounds include chromium(II) oxide (CrO) and chromium(II) sulfate (CrSO4). Many chromium(II) carboxylates are known. The red chromium(II) acetate (Cr2(O2CCH3)4) is somewhat famous. It features a Cr-Cr quadruple bond.
Chromium(III)
A large number of chromium(III) compounds are known, such as chromium(III) nitrate, chromium(III) acetate, and chromium(III) oxide. Chromium(III) can be obtained by dissolving elemental chromium in acids like hydrochloric acid or sulfuric acid, but it can also be formed through the reduction of chromium(VI) by cytochrome c7. The Cr3+ ion has a similar radius (63 pm) to Al3+ (radius 50 pm), and they can replace each other in some compounds, such as in chrome alum and alum. Chromium(III) tends to form octahedral complexes. Commercially available chromium(III) chloride hydrate is the dark green complex [CrCl2(H2O)4]Cl. Closely related compounds are the pale green [CrCl(H2O)5]Cl2 and violet [Cr(H2O)6]Cl3. If anhydrous violet chromium(III) chloride is dissolved in water, the violet solution turns green after some time as the chloride in the inner coordination sphere is replaced by water.
This kind of reaction is also observed with solutions of chrome alum and other water-soluble chromium(III) salts. A tetrahedral coordination of chromium(III) has been reported for the Cr-centered Keggin anion [α-CrW12O40]5−. Chromium(III) hydroxide (Cr(OH)3) is amphoteric, dissolving in acidic solutions to form [Cr(H2O)6]3+, and in basic solutions to form [Cr(OH)6]3−. It is dehydrated by heating to form the green chromium(III) oxide (Cr2O3), a stable oxide with a crystal structure identical to that of corundum.
Chromium(VI)
Chromium(VI) compounds are oxidants at low or neutral pH. Chromate (CrO42−) and dichromate (Cr2O72−) anions are the principal ions at this oxidation state. They exist in an equilibrium determined by pH: 2 CrO42− + 2 H+ ⇌ Cr2O72− + H2O. Chromium(VI) oxyhalides are also known and include chromyl fluoride (CrO2F2) and chromyl chloride (CrO2Cl2). However, despite several erroneous claims, chromium hexafluoride (as well as all higher hexahalides) remains unknown, as of 2020. Sodium chromate is produced industrially by the oxidative roasting of chromite ore with sodium carbonate. The shift in the equilibrium is visible as a change from yellow (chromate) to orange (dichromate), such as when an acid is added to a neutral solution of potassium chromate. At yet lower pH values, further condensation to more complex oxyanions of chromium is possible. Both the chromate and dichromate anions are strong oxidizing reagents at low pH: Cr2O72− + 14 H3O+ + 6 e− → 2 Cr3+ + 21 H2O (ε0 = 1.33 V). They are, however, only moderately oxidizing at high pH: CrO42− + 4 H2O + 3 e− → Cr(OH)3 + 5 OH− (ε0 = −0.13 V). Chromium(VI) compounds in solution can be detected by adding an acidic hydrogen peroxide solution. The unstable dark blue chromium(VI) peroxide (CrO5) is formed, which can be stabilized as an ether adduct. Chromic acid has the hypothetical formula H2CrO4. It is a vaguely described chemical, despite many well-defined chromates and dichromates being known. The dark red chromium(VI) oxide (CrO3), the acid anhydride of chromic acid, is sold industrially as "chromic acid". It can be produced by mixing sulfuric acid with dichromate and is a strong oxidizing agent.
Other oxidation states
Compounds of chromium(V) are rather rare; the oxidation state +5 is realized in only a few compounds, but such species are intermediates in many reactions involving oxidation by chromate. The only binary compound is the volatile chromium(V) fluoride (CrF5). This red solid has a melting point of 30 °C and a boiling point of 117 °C. It can be prepared by treating chromium metal with fluorine at 400 °C and 200 bar pressure. The peroxochromate(V) ion is another example of the +5 oxidation state. Potassium peroxochromate (K3[Cr(O2)4]) is made by reacting potassium chromate with hydrogen peroxide at low temperatures. This red-brown compound is stable at room temperature but decomposes spontaneously at 150–170 °C. Compounds of chromium(IV) are slightly more common than those of chromium(V). The tetrahalides, CrF4, CrCl4, and CrBr4, can be produced by treating the trihalides (CrX3) with the corresponding halogen at elevated temperatures. Such compounds are susceptible to disproportionation reactions and are not stable in water. Organic compounds containing chromium in the +4 state, such as chromium tetra-tert-butoxide, are also known. Most chromium(I) compounds are obtained solely by oxidation of electron-rich, octahedral chromium(0) complexes. Other chromium(I) complexes contain cyclopentadienyl ligands. As verified by X-ray diffraction, a Cr-Cr quintuple bond (length 183.51(4) pm) has also been described.
Extremely bulky monodentate ligands stabilize this compound by shielding the quintuple bond from further reactions. Occurrence Chromium is the 21st most abundant element in Earth's crust with an average concentration of 100 ppm. Chromium compounds are found in the environment from the erosion of chromium-containing rocks, and can be redistributed by volcanic eruptions. Typical background concentrations of chromium in environmental media are: atmosphere <10 ng/m3; soil <500 mg/kg; vegetation <0.5 mg/kg; freshwater <10 μg/L; seawater <1 μg/L; sediment <80 mg/kg. Chromium is mined as chromite (FeCr2O4) ore. About two-fifths of the chromite ores and concentrates in the world are produced in South Africa, about a third in Kazakhstan, while India, Russia, and Turkey are also substantial producers. Untapped chromite deposits are plentiful, but geographically concentrated in Kazakhstan and southern Africa. Although rare, deposits of native chromium exist. The Udachnaya Pipe in Russia produces samples of the native metal. This mine is a kimberlite pipe, rich in diamonds, and the reducing environment helped produce both elemental chromium and diamonds. The relation between Cr(III) and Cr(VI) strongly depends on pH and oxidative properties of the location. In most cases, Cr(III) is the dominating species, but in some areas, the ground water can contain up to 39 µg/L of total chromium, of which 30 µg/L is Cr(VI). History Early applications Chromium minerals as pigments came to the attention of the west in the eighteenth century. On 26 July 1761, Johann Gottlob Lehmann found an orange-red mineral in the Beryozovskoye mines in the Ural Mountains which he named Siberian red lead. Though misidentified as a lead compound with selenium and iron components, the mineral was in fact crocoite with a formula of PbCrO4. In 1770, Peter Simon Pallas visited the same site as Lehmann and found a red lead mineral that was discovered to possess useful properties as a pigment in paints. After Pallas, the use of Siberian red lead as a paint pigment began to develop rapidly throughout the region. Crocoite would be the principal source of chromium in pigments until the discovery of chromite many years later. In 1794, Louis Nicolas Vauquelin received samples of crocoite ore. He produced chromium trioxide (CrO3) by mixing crocoite with hydrochloric acid. In 1797, Vauquelin discovered that he could isolate metallic chromium by heating the oxide in a charcoal oven, for which he is credited as the one who truly discovered the element. Vauquelin was also able to detect traces of chromium in precious gemstones, such as ruby and emerald. During the nineteenth century, chromium was primarily used not only as a component of paints, but in tanning salts as well. For quite some time, the crocoite found in Russia was the main source for such tanning materials. In 1827, a larger chromite deposit was discovered near Baltimore, United States, which quickly met the demand for tanning salts much more adequately than the crocoite that had been used previously. This made the United States the largest producer of chromium products until the year 1848, when larger deposits of chromite were uncovered near the city of Bursa, Turkey. With the development of metallurgy and chemical industries in the Western world, the need for chromium increased. Chromium is also famous for its reflective, metallic luster when polished. 
It is used as a protective and decorative coating on car parts, plumbing fixtures, furniture parts and many other items, usually applied by electroplating. Chromium was used for electroplating as early as 1848, but this use only became widespread with the development of an improved process in 1924. Production Approximately 28.8 million metric tons (Mt) of marketable chromite ore was produced in 2013, and converted into 7.5 Mt of ferrochromium. According to John F. Papp, writing for the USGS, "Ferrochromium is the leading end use of chromite ore, [and] stainless steel is the leading end use of ferrochromium." The largest producers of chromium ore in 2013 have been South Africa (48%), Kazakhstan (13%), Turkey (11%), and India (10%), with several other countries producing the rest of about 18% of the world production. The two main products of chromium ore refining are ferrochromium and metallic chromium. For those products the ore smelter process differs considerably. For the production of ferrochromium, the chromite ore (FeCr2O4) is reduced in large scale in electric arc furnace or in smaller smelters with either aluminium or silicon in an aluminothermic reaction. For the production of pure chromium, the iron must be separated from the chromium in a two step roasting and leaching process. The chromite ore is heated with a mixture of calcium carbonate and sodium carbonate in the presence of air. The chromium is oxidized to the hexavalent form, while the iron forms the stable Fe2O3. The subsequent leaching at higher elevated temperatures dissolves the chromates and leaves the insoluble iron oxide. The chromate is converted by sulfuric acid into the dichromate. 4 FeCr2O4 + 8 Na2CO3 + 7 O2 → 8 Na2CrO4 + 2 Fe2O3 + 8 CO2 2 Na2CrO4 + H2SO4 → Na2Cr2O7 + Na2SO4 + H2O The dichromate is converted to the chromium(III) oxide by reduction with carbon and then reduced in an aluminothermic reaction to chromium. Na2Cr2O7 + 2 C → Cr2O3 + Na2CO3 + CO Cr2O3 + 2 Al → Al2O3 + 2 Cr Applications The creation of metal alloys account for 85% of the available chromium's usage. The remainder of chromium is used in the chemical, refractory, and foundry industries. Metallurgy The strengthening effect of forming stable metal carbides at grain boundaries, and the strong increase in corrosion resistance made chromium an important alloying material for steel. High-speed tool steels contain between 3 and 5% chromium. Stainless steel, the primary corrosion-resistant metal alloy, is formed when chromium is introduced to iron in concentrations above 11%. For stainless steel's formation, ferrochromium is added to the molten iron. Also, nickel-based alloys have increased strength due to the formation of discrete, stable, metal, carbide particles at the grain boundaries. For example, Inconel 718 contains 18.6% chromium. Because of the excellent high-temperature properties of these nickel superalloys, they are used in jet engines and gas turbines in lieu of common structural materials. ASTM B163 relies on Chromium for condenser and heat-exchanger tubes, while castings with high strength at elevated temperatures that contain Chromium are standardised with ASTM A567. AISI type 332 is used where high temperature would normally cause carburization, oxidation or corrosion. Incoloy 800 "is capable of remaining stable and maintaining its austenitic structure even after long time exposures to high temperatures". Nichrome is used as resistance wire for heating elements in things like toasters and space heaters. 
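The chromium content implied by the chromite formula in the production equations above can be estimated with a short stoichiometric sketch. It assumes an idealized, pure FeCr2O4 feed, so the figure is only an upper bound; real ores contain gangue and magnesium/aluminium substitution and give lower yields.

```python
# Idealized chromium yield from stoichiometric chromite, FeCr2O4.
M = {"Fe": 55.85, "Cr": 52.00, "O": 16.00}  # approximate molar masses, g/mol

m_chromite = M["Fe"] + 2 * M["Cr"] + 4 * M["O"]   # ≈ 223.85 g/mol
cr_mass_fraction = 2 * M["Cr"] / m_chromite       # ≈ 0.465

print(f"Cr fraction by mass: {cr_mass_fraction:.1%}")                 # ~46.5%
print(f"Cr per tonne of pure FeCr2O4: {cr_mass_fraction * 1000:.0f} kg")  # ~465 kg
```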
These uses make chromium a strategic material. Consequently, during World War II, U.S. road engineers were instructed to avoid chromium in yellow road paint, as it "may become a critical material during the emergency." The United States likewise considered chromium "essential for the German war industry" and made intense diplomatic efforts to keep it out of the hands of Nazi Germany. The high hardness and corrosion resistance of unalloyed chromium makes it a reliable metal for surface coating; it is still the most popular metal for sheet coating, with its above-average durability, compared to other coating metals. A layer of chromium is deposited on pretreated metallic surfaces by electroplating techniques. There are two deposition methods: thin, and thick. Thin deposition involves a layer of chromium below 1 µm thickness deposited by chrome plating, and is used for decorative surfaces. Thicker chromium layers are deposited if wear-resistant surfaces are needed. Both methods use acidic chromate or dichromate solutions. To prevent the energy-consuming change in oxidation state, the use of chromium(III) sulfate is under development; for most applications of chromium, the previously established process is used. In the chromate conversion coating process, the strong oxidative properties of chromates are used to deposit a protective oxide layer on metals like aluminium, zinc, and cadmium. This passivation and the self-healing properties of the chromate stored in the chromate conversion coating, which is able to migrate to local defects, are the benefits of this coating method. Because of environmental and health regulations on chromates, alternative coating methods are under development. Chromic acid anodizing (or Type I anodizing) of aluminium is another electrochemical process that does not lead to the deposition of chromium, but uses chromic acid as an electrolyte in the solution. During anodization, an oxide layer is formed on the aluminium. The use of chromic acid, instead of the normally used sulfuric acid, leads to a slight difference of these oxide layers. The high toxicity of Cr(VI) compounds, used in the established chromium electroplating process, and the strengthening of safety and environmental regulations demand a search for substitutes for chromium, or at least a change to less toxic chromium(III) compounds. Pigment The mineral crocoite (which is also lead chromate PbCrO4) was used as a yellow pigment shortly after its discovery. After a synthesis method became available starting from the more abundant chromite, chrome yellow was, together with cadmium yellow, one of the most used yellow pigments. The pigment does not photodegrade, but it tends to darken due to the formation of chromium(III) oxide. It has a strong color, and was used for school buses in the United States and for the postal services (for example, the Deutsche Post) in Europe. The use of chrome yellow has since declined due to environmental and safety concerns and was replaced by organic pigments or other alternatives that are free from lead and chromium. Other pigments that are based around chromium are, for example, the deep shade of red pigment chrome red, which is simply lead chromate with lead(II) hydroxide (PbCrO4·Pb(OH)2). A very important chromate pigment, which was used widely in metal primer formulations, was zinc chromate, now replaced by zinc phosphate. A wash primer was formulated to replace the dangerous practice of pre-treating aluminium aircraft bodies with a phosphoric acid solution. 
This used zinc tetroxychromate dispersed in a solution of polyvinyl butyral. An 8% solution of phosphoric acid in solvent was added just before application. It was found that an easily oxidized alcohol was an essential ingredient. A thin layer of about 10–15 µm was applied, which turned from yellow to dark green when it was cured. There is still a question as to the correct mechanism. Chrome green is a mixture of Prussian blue and chrome yellow, while the chrome oxide green is chromium(III) oxide. Chromium oxides are also used as a green pigment in the field of glassmaking and also as a glaze for ceramics. Green chromium oxide is extremely lightfast and as such is used in cladding coatings. It is also the main ingredient in infrared reflecting paints, used by the armed forces to paint vehicles and to give them the same infrared reflectance as green leaves. Other uses Chromium(III) ions present in corundum crystals (aluminium oxide) cause them to be colored red; when corundum appears as such, it is known as a ruby. If the corundum is lacking in chromium(III) ions, it is known as a sapphire. A red-colored artificial ruby may also be achieved by doping chromium(III) into artificial corundum crystals, thus making chromium a requirement for making synthetic rubies. Such a synthetic ruby crystal was the basis for the first laser, produced in 1960, which relied on stimulated emission of light from the chromium atoms in such a crystal. Ruby has a laser transition at 694.3 nanometers, in a deep red color. Because of their toxicity, chromium(VI) salts are used for the preservation of wood. For example, chromated copper arsenate (CCA) is used in timber treatment to protect wood from decay fungi, wood-attacking insects, including termites, and marine borers. The formulations contain chromium based on the oxide CrO3 between 35.3% and 65.5%. In the United States, 65,300 metric tons of CCA solution were used in 1996. Chromium(III) salts, especially chrome alum and chromium(III) sulfate, are used in the tanning of leather. The chromium(III) stabilizes the leather by cross linking the collagen fibers. Chromium tanned leather can contain between 4 and 5% of chromium, which is tightly bound to the proteins. Although the form of chromium used for tanning is not the toxic hexavalent variety, there remains interest in management of chromium in the tanning industry. Recovery and reuse, direct/indirect recycling, and "chrome-less" or "chrome-free" tanning are practiced to better manage chromium usage. The high heat resistivity and high melting point makes chromite and chromium(III) oxide a material for high temperature refractory applications, like blast furnaces, cement kilns, molds for the firing of bricks and as foundry sands for the casting of metals. In these applications, the refractory materials are made from mixtures of chromite and magnesite. The use is declining because of the environmental regulations due to the possibility of the formation of chromium(VI). Several chromium compounds are used as catalysts for processing hydrocarbons. For example, the Phillips catalyst, prepared from chromium oxides, is used for the production of about half the world's polyethylene. Fe-Cr mixed oxides are employed as high-temperature catalysts for the water gas shift reaction. Copper chromite is a useful hydrogenation catalyst. Chromates of metals are used in humistor. Uses of compounds Chromium(IV) oxide (CrO2) is a magnetic compound. 
Its ideal shape anisotropy, which imparts high coercivity and remnant magnetization, made it a compound superior to γ-Fe2O3. Chromium(IV) oxide is used to manufacture magnetic tape used in high-performance audio tape and standard audio cassettes. Chromium(III) oxide (Cr2O3) is a metal polish known as green rouge. Chromic acid is a powerful oxidizing agent and is a useful compound for cleaning laboratory glassware of any trace of organic compounds. It is prepared by dissolving potassium dichromate in concentrated sulfuric acid, which is then used to wash the apparatus. Sodium dichromate is sometimes used because of its higher solubility (50 g/L versus 200 g/L respectively). The use of dichromate cleaning solutions is now phased out due to the high toxicity and environmental concerns. Modern cleaning solutions are highly effective and chromium free. Potassium dichromate is a chemical reagent, used as a titrating agent. Chromates are added to drilling muds to prevent corrosion of steel under wet conditions. Chrome alum is Chromium(III) potassium sulfate and is used as a mordant (i.e., a fixing agent) for dyes in fabric and in tanning. Biological role The biologically beneficial effects of chromium(III) are debated. Chromium is accepted by the U.S. National Institutes of Health as a trace element for its roles in the action of insulin, a hormone that mediates the metabolism and storage of carbohydrate, fat, and protein. The mechanism of its actions in the body, however, have not been defined, leaving in question the essentiality of chromium. In contrast, hexavalent chromium (Cr(VI) or Cr6+) is highly toxic and mutagenic. Ingestion of chromium(VI) in water has been linked to stomach tumors, and it may also cause allergic contact dermatitis (ACD). "Chromium deficiency", involving a lack of Cr(III) in the body, or perhaps some complex of it, such as glucose tolerance factor, is controversial. Some studies suggest that the biologically active form of chromium (III) is transported in the body via an oligopeptide called low-molecular-weight chromium-binding substance (LMWCr), which might play a role in the insulin signaling pathway. The chromium content of common foods is generally low (1-13 micrograms per serving). The chromium content of food varies widely, due to differences in soil mineral content, growing season, plant cultivar, and contamination during processing. Chromium (and nickel) leach into food cooked in stainless steel, with the effect being largest when the cookware is new. Acidic foods that are cooked for many hours also exacerbate this effect. Dietary recommendations There is disagreement on chromium's status as an essential nutrient. Governmental departments from Australia, New Zealand, India, Japan, and the United States consider chromium essential while the European Food Safety Authority (EFSA) of the European Union does not. The U.S. National Academy of Medicine (NAM) updated the Estimated Average Requirements (EARs) and the Recommended Dietary Allowances (RDAs) for chromium in 2001. For chromium, there was insufficient information to set EARs and RDAs, so its needs are described as estimates for Adequate Intakes (AIs). The current AIs of chromium for women ages 14 through 50 is 25 μg/day, and the AIs for women ages 50 and above is 20 μg/day. The AIs for women who are pregnant are 30 μg/day, and for women who are lactating, the set AIs are 45 μg/day. The AIs for men ages 14 through 50 are 35 μg/day, and the AIs for men ages 50 and above are 30 μg/day. 
For children ages 1 through 13, the AIs increase with age from 0.2 μg/day up to 25 μg/day. As for safety, the NAM sets Tolerable Upper Intake Levels (ULs) for vitamins and minerals when the evidence is sufficient. In the case of chromium, there is not yet enough information, hence no UL has been established. Collectively, the EARs, RDAs, AIs, and ULs are the parameters for the nutrition recommendation system known as Dietary Reference Intake (DRI). Australia and New Zealand consider chromium to be an essential nutrient, with an AI of 35 μg/day for men, 25 μg/day for women, 30 μg/day for women who are pregnant, and 45 μg/day for women who are lactating. A UL has not been set due to the lack of sufficient data. India considers chromium to be an essential nutrient, with an adult recommended intake of 33 μg/day. Japan also considers chromium to be an essential nutrient, with an AI of 10 μg/day for adults, including women who are pregnant or lactating. A UL has not been set. The EFSA of the European Union however, does not consider chromium to be an essential nutrient; chromium is the only mineral for which the United States and the European Union disagree. Labeling For U.S. food and dietary supplement labeling purposes, the amount of the substance in a serving is expressed as a percent of the Daily Value (%DV). For chromium labeling purposes, 100% of the Daily Value was 120 μg. As of May 27, 2016, the percentage of daily value was revised to 35 μg to bring the chromium intake into a consensus with the official Recommended Dietary Allowance. A table of the old and new adult daily values is provided at Reference Daily Intake. Food sources Food composition databases such as those maintained by the U.S. Department of Agriculture do not contain information on the chromium content of foods. A wide variety of animal and vegetable foods contain chromium. Content per serving is influenced by the chromium content of the soil in which the plants are grown, by foodstuffs fed to animals, and by processing methods, as chromium is leached into foods if processed or cooked in stainless steel equipment. One diet analysis study conducted in Mexico reported an average daily chromium intake of 30 micrograms. An estimated 31% of adults in the United States consume multi-vitamin/mineral dietary supplements, which often contain 25 to 60 micrograms of chromium. Supplementation Chromium is an ingredient in total parenteral nutrition (TPN), because deficiency can occur after months of intravenous feeding with chromium-free TPN. It is also added to nutritional products for preterm infants. Although the mechanism of action in biological roles for chromium is unclear, in the United States chromium-containing products are sold as non-prescription dietary supplements in amounts ranging from 50 to 1,000 μg. Lower amounts of chromium are also often incorporated into multi-vitamin/mineral supplements consumed by an estimated 31% of adults in the United States. Chemical compounds used in dietary supplements include chromium chloride, chromium citrate, chromium(III) picolinate, chromium(III) polynicotinate, and other chemical compositions. The benefit of supplements has not been proven. Approved and disapproved health claims In 2005, the U.S. 
Food and Drug Administration had approved a qualified health claim for chromium picolinate with a requirement for very specific label wording: "One small study suggests that chromium picolinate may reduce the risk of insulin resistance, and therefore possibly may reduce the risk of type 2 diabetes. FDA concludes, however, that the existence of such a relationship between chromium picolinate and either insulin resistance or type 2 diabetes is highly uncertain." At the same time, in answer to other parts of the petition, the FDA rejected claims for chromium picolinate and cardiovascular disease, retinopathy or kidney disease caused by abnormally high blood sugar levels. In 2010, chromium(III) picolinate was approved by Health Canada to be used in dietary supplements. Approved labeling statements include: a factor in the maintenance of good health, provides support for healthy glucose metabolism, helps the body to metabolize carbohydrates and helps the body to metabolize fats. The European Food Safety Authority (EFSA) approved claims in 2010 that chromium contributed to normal macronutrient metabolism and maintenance of normal blood glucose concentration, but rejected claims for maintenance or achievement of a normal body weight, or reduction of tiredness or fatigue. Given the evidence for chromium deficiency causing problems with glucose management in the context of intravenous nutrition products formulated without chromium, research interest turned to whether chromium supplementation would benefit people who have type 2 diabetes but are not chromium deficient. Looking at the results from four meta-analyses, one reported a statistically significant decrease in fasting plasma glucose levels (FPG) and a non-significant trend in lower hemoglobin A1C. A second reported the same, a third reported significant decreases for both measures, while a fourth reported no benefit for either. A review published in 2016 listed 53 randomized clinical trials that were included in one or more of six meta-analyses. It concluded that whereas there may be modest decreases in FPG and/or HbA1C that achieve statistical significance in some of these meta-analyses, few of the trials achieved decreases large enough to be expected to be relevant to clinical outcome. Two systematic reviews looked at chromium supplements as a mean of managing body weight in overweight and obese people. One, limited to chromium picolinate, a popular supplement ingredient, reported a statistically significant −1.1 kg (2.4 lb) weight loss in trials longer than 12 weeks. The other included all chromium compounds and reported a statistically significant −0.50 kg (1.1 lb) weight change. Change in percent body fat did not reach statistical significance. Authors of both reviews considered the clinical relevance of this modest weight loss as uncertain/unreliable. The European Food Safety Authority reviewed the literature and concluded that there was insufficient evidence to support a claim. Chromium is promoted as a sports performance dietary supplement, based on the theory that it potentiates insulin activity, with anticipated results of increased muscle mass, and faster recovery of glycogen storage during post-exercise recovery. A review of clinical trials reported that chromium supplementation did not improve exercise performance or increase muscle strength. 
The International Olympic Committee reviewed dietary supplements for high-performance athletes in 2018 and concluded there was no need to increase chromium intake for athletes, nor support for claims of losing body fat. Fresh-water fish Chromium is naturally present in the environment in trace amounts, but industrial use in rubber and stainless steel manufacturing, chrome plating, dyes for textiles, tanneries and other uses contaminates aquatic systems. In Bangladesh, rivers in or downstream from industrialized areas exhibit heavy metal contamination. Irrigation water standards for chromium are 0.1 mg/L, but some rivers are more than five times that amount. The standard for fish for human consumption is less than 1 mg/kg, but many tested samples were more than five times that amount. Chromium, especially hexavalent chromium, is highly toxic to fish because it is easily absorbed across the gills, readily enters blood circulation, crosses cell membranes and bioconcentrates up the food chain. In contrast, the toxicity of trivalent chromium is very low, attributed to poor membrane permeability and little biomagnification. Acute and chronic exposure to chromium(VI) affects fish behavior, physiology, reproduction and survival. Hyperactivity and erratic swimming have been reported in contaminated environments. Egg hatching and fingerling survival are affected. In adult fish there are reports of histopathological damage to liver, kidney, muscle, intestines, and gills. Mechanisms include mutagenic gene damage and disruptions of enzyme functions. There is evidence that fish may not require chromium, but benefit from a measured amount in diet. In one study, juvenile fish gained weight on a zero chromium diet, but the addition of 500 μg of chromium in the form of chromium chloride or other supplement types, per kilogram of food (dry weight), increased weight gain. At 2,000 μg/kg the weight gain was no better than with the zero chromium diet, and there were increased DNA strand breaks. Precautions Water-insoluble chromium(III) compounds and chromium metal are not considered a health hazard, while the toxicity and carcinogenic properties of chromium(VI) have been known for a long time. Because of the specific transport mechanisms, only limited amounts of chromium(III) enter the cells. Acute oral toxicity ranges between 50 and 150 mg/kg. A 2008 review suggested that moderate uptake of chromium(III) through dietary supplements poses no genetic-toxic risk. In the US, the Occupational Safety and Health Administration (OSHA) has designated an air permissible exposure limit (PEL) in the workplace as a time-weighted average (TWA) of 1 mg/m3. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 0.5 mg/m3, time-weighted average. The IDLH (immediately dangerous to life and health) value is 250 mg/m3. Chromium(VI) toxicity The acute oral toxicity for chromium(VI) ranges between 1.5 and 3.3 mg/kg. In the body, chromium(VI) is reduced by several mechanisms to chromium(III) already in the blood before it enters the cells. The chromium(III) is excreted from the body, whereas the chromate ion is transferred into the cell by a transport mechanism, by which also sulfate and phosphate ions enter the cell. The acute toxicity of chromium(VI) is due to its strong oxidant properties. After it reaches the blood stream, it damages the kidneys, the liver and blood cells through oxidation reactions. Hemolysis, renal, and liver failure result. 
Aggressive dialysis can be therapeutic. The carcinogenity of chromate dust has been known for a long time, and in 1890 the first publication described the elevated cancer risk of workers in a chromate dye company. Three mechanisms have been proposed to describe the genotoxicity of chromium(VI). The first mechanism includes highly reactive hydroxyl radicals and other reactive radicals which are by products of the reduction of chromium(VI) to chromium(III). The second process includes the direct binding of chromium(V), produced by reduction in the cell, and chromium(IV) compounds to the DNA. The last mechanism attributed the genotoxicity to the binding to the DNA of the end product of the chromium(III) reduction. Chromium salts (chromates) are also the cause of allergic reactions in some people. Chromates are often used to manufacture, amongst other things, leather products, paints, cement, mortar and anti-corrosives. Contact with products containing chromates can lead to allergic contact dermatitis and irritant dermatitis, resulting in ulceration of the skin, sometimes referred to as "chrome ulcers". This condition is often found in workers that have been exposed to strong chromate solutions in electroplating, tanning and chrome-producing manufacturers. Environmental issues Because chromium compounds were used in dyes, paints, and leather tanning compounds, these compounds are often found in soil and groundwater at active and abandoned industrial sites, needing environmental cleanup and remediation. Primer paint containing hexavalent chromium is still widely used for aerospace and automobile refinishing applications. In 2010, the Environmental Working Group studied the drinking water in 35 American cities in the first nationwide study. The study found measurable hexavalent chromium in the tap water of 31 of the cities sampled, with Norman, Oklahoma, at the top of list; 25 cities had levels that exceeded California's proposed limit. The more toxic hexavalent chromium form can be reduced to the less soluble trivalent oxidation state in soils by organic matter, ferrous iron, sulfides, and other reducing agents, with the rates of such reduction being faster under more acidic conditions than under more alkaline ones. In contrast, trivalent chromium can be oxidized to hexavalent chromium in soils by manganese oxides, such as Mn(III) and Mn(IV) compounds. Since the solubility and toxicity of chromium (VI) are greater that those of chromium (III), the oxidation-reduction conversions between the two oxidation states have implications for movement and bioavailability of chromium in soils, groundwater, and plants. Notes References General bibliography External links ATSDR Case Studies in Environmental Medicine: Chromium Toxicity U.S. Department of Health and Human Services IARC Monograph "Chromium and Chromium compounds" It's Elemental – The Element Chromium The Merck Manual – Mineral Deficiency and Toxicity National Institute for Occupational Safety and Health – Chromium Page Chromium at The Periodic Table of Videos (University of Nottingham) Chemical elements Dietary minerals Native element minerals Occupational safety and health Chemical elements with body-centered cubic structure
2,538
5,672
https://en.wikipedia.org/wiki/Cadmium
Cadmium
Cadmium is a chemical element with the symbol Cd and atomic number 48. This soft, silvery-white metal is chemically similar to the two other stable metals in group 12, zinc and mercury. Like zinc, it demonstrates oxidation state +2 in most of its compounds, and like mercury, it has a lower melting point than the transition metals in groups 3 through 11. Cadmium and its congeners in group 12 are often not considered transition metals, in that they do not have partly filled d or f electron shells in the elemental or common oxidation states. The average concentration of cadmium in Earth's crust is between 0.1 and 0.5 parts per million (ppm). It was discovered in 1817 simultaneously by Stromeyer and Hermann, both in Germany, as an impurity in zinc carbonate. Cadmium occurs as a minor component in most zinc ores and is a byproduct of zinc production. Cadmium was used for a long time as a corrosion-resistant plating on steel, and cadmium compounds are used as red, orange, and yellow pigments, to color glass, and to stabilize plastic. Cadmium use is generally decreasing because it is toxic (it is specifically listed in the European Restriction of Hazardous Substances Directive) and nickel-cadmium batteries have been replaced with nickel-metal hydride and lithium-ion batteries. One of its few new uses is in cadmium telluride solar panels. Although cadmium has no known biological function in higher organisms, a cadmium-dependent carbonic anhydrase has been found in marine diatoms. Characteristics Physical properties Cadmium is a soft, malleable, ductile, silvery-white divalent metal. It is similar in many respects to zinc but forms complex compounds. Unlike most other metals, cadmium is resistant to corrosion and is used as a protective plate on other metals. As a bulk metal, cadmium is insoluble in water and is not flammable; however, in its powdered form it may burn and release toxic fumes. Chemical properties Although cadmium usually has an oxidation state of +2, it also exists in the +1 state. Cadmium and its congeners are not always considered transition metals, in that they do not have partly filled d or f electron shells in the elemental or common oxidation states. Cadmium burns in air to form brown amorphous cadmium oxide (CdO); the crystalline form of this compound is a dark red which changes color when heated, similar to zinc oxide. Hydrochloric acid, sulfuric acid, and nitric acid dissolve cadmium by forming cadmium chloride (CdCl2), cadmium sulfate (CdSO4), or cadmium nitrate (Cd(NO3)2). The oxidation state +1 can be produced by dissolving cadmium in a mixture of cadmium chloride and aluminium chloride, forming the Cd22+ cation, which is similar to the Hg22+ cation in mercury(I) chloride. Cd + CdCl2 + 2 AlCl3 → Cd2(AlCl4)2 The structures of many cadmium complexes with nucleobases, amino acids, and vitamins have been determined. Isotopes Naturally occurring cadmium is composed of eight isotopes. Two of them are radioactive, and three are expected to decay but have not done so under laboratory conditions. The two natural radioactive isotopes are 113Cd (beta decay, half-life is ) and 116Cd (two-neutrino double beta decay, half-life is ). The other three are 106Cd, 108Cd (both double electron capture), and 114Cd (double beta decay); only lower limits on these half-lives have been determined. At least three isotopes – 110Cd, 111Cd, and 112Cd – are stable. 
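The half-lives quoted in this and the following paragraph translate into decay behaviour through the usual exponential law, N(t) = N0 · (1/2)^(t / T½). A minimal Python sketch, using the 462.6-day half-life of 109Cd mentioned below as the worked example; the time points chosen are arbitrary.

def remaining_fraction(t_days: float, half_life_days: float) -> float:
    # Fraction of a radioactive sample remaining after t_days.
    return 0.5 ** (t_days / half_life_days)

half_life_cd109 = 462.6  # days, for 109Cd (value quoted in the article)

for years in (1, 5, 10):
    frac = remaining_fraction(years * 365.25, half_life_cd109)
    print(f"After {years:2d} year(s): {frac:.1%} of the original 109Cd remains")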
Among the isotopes that do not occur naturally, the most long-lived are 109Cd with a half-life of 462.6 days, and 115Cd with a half-life of 53.46 hours. All of the remaining radioactive isotopes have half-lives of less than 2.5 hours, and the majority have half-lives of less than 5 minutes. Cadmium has 8 known meta states, with the most stable being 113mCd (t1⁄2 = 14.1 years), 115mCd (t1⁄2 = 44.6 days), and 117mCd (t1⁄2 = 3.36 hours). The known isotopes of cadmium range in atomic mass from 94.950 u (95Cd) to 131.946 u (132Cd). For isotopes lighter than 112 u, the primary decay mode is electron capture and the dominant decay product is element 47 (silver). Heavier isotopes decay mostly through beta emission producing element 49 (indium). One isotope of cadmium, 113Cd, absorbs neutrons with high selectivity: With very high probability, neutrons with energy below the cadmium cut-off will be absorbed; those higher than the cut-off will be transmitted. The cadmium cut-off is about 0.5 eV, and neutrons below that level are deemed slow neutrons, distinct from intermediate and fast neutrons. Cadmium is created via the s-process in low- to medium-mass stars with masses of 0.6 to 10 solar masses, over thousands of years. In that process, a silver atom captures a neutron and then undergoes beta decay. History Cadmium (Latin cadmia, Greek καδμεία meaning "calamine", a cadmium-bearing mixture of minerals that was named after the Greek mythological character Κάδμος, Cadmus, the founder of Thebes) was discovered in contaminated zinc compounds sold in pharmacies in Germany in 1817 by Friedrich Stromeyer. Karl Samuel Leberecht Hermann simultaneously investigated the discoloration in zinc oxide and found an impurity, first suspected to be arsenic, because of the yellow precipitate with hydrogen sulfide. Additionally Stromeyer discovered that one supplier sold zinc carbonate instead of zinc oxide. Stromeyer found the new element as an impurity in zinc carbonate (calamine), and, for 100 years, Germany remained the only important producer of the metal. The metal was named after the Latin word for calamine, because it was found in this zinc ore. Stromeyer noted that some impure samples of calamine changed color when heated but pure calamine did not. He was persistent in studying these results and eventually isolated cadmium metal by roasting and reducing the sulfide. The potential for cadmium yellow as pigment was recognized in the 1840s, but the lack of cadmium limited this application. Even though cadmium and its compounds are toxic in certain forms and concentrations, the British Pharmaceutical Codex from 1907 states that cadmium iodide was used as a medication to treat "enlarged joints, scrofulous glands, and chilblains". In 1907, the International Astronomical Union defined the international ångström in terms of a red cadmium spectral line (1 wavelength = 6438.46963 Å). This was adopted by the 7th General Conference on Weights and Measures in 1927. In 1960, the definitions of both the metre and ångström were changed to use krypton. After the industrial scale production of cadmium started in the 1930s and 1940s, the major application of cadmium was the coating of iron and steel to prevent corrosion; in 1944, 62% and in 1956, 59% of the cadmium in the United States was used for plating. In 1956, 24% of the cadmium in the United States was used for a second application in red, orange and yellow pigments from sulfides and selenides of cadmium. 
The stabilizing effect of cadmium chemicals like the carboxylates cadmium laurate and cadmium stearate on PVC led to an increased use of those compounds in the 1970s and 1980s. The demand for cadmium in pigments, coatings, stabilizers, and alloys declined as a result of environmental and health regulations in the 1980s and 1990s; in 2006, only 7% of total cadmium consumption was used for plating, and only 10% was used for pigments. At the same time, these decreases in consumption were compensated by a growing demand for cadmium for nickel-cadmium batteries, which accounted for 81% of the cadmium consumption in the United States in 2006. Occurrence Cadmium makes up about 0.1 ppm of Earth's crust. It is much rarer than zinc, which makes up about 65 ppm. No significant deposits of cadmium-containing ores are known. The only cadmium mineral of importance, greenockite (CdS), is nearly always associated with sphalerite (ZnS). This association is caused by the geochemical similarity between zinc and cadmium, with no geological process likely to separate them. Thus, cadmium is produced mainly as a byproduct of mining, smelting, and refining sulfidic ores of zinc and, to a lesser degree, lead and copper. Small amounts of cadmium, about 10% of consumption, are produced from secondary sources, mainly from dust generated by recycling iron and steel scrap. Production in the United States began in 1907, but wide use began after World War I. Metallic cadmium can be found in the Vilyuy River basin in Siberia. Rocks mined for phosphate fertilizers contain varying amounts of cadmium, resulting in a cadmium concentration of as much as 300 mg/kg in the fertilizers and a high cadmium content in agricultural soils. Coal can contain significant amounts of cadmium, which ends up mostly in coal fly ash. Cadmium in soil can be absorbed by crops such as rice. In 2002, the Chinese ministry of agriculture found that 28% of the rice it sampled had excess lead and 10% had excess cadmium above the limits defined by law. Some plants, such as willow trees and poplars, have been found to clean both lead and cadmium from soil. Typical background concentrations of cadmium do not exceed 5 ng/m3 in the atmosphere, 2 mg/kg in soil, 1 μg/L in freshwater and 50 ng/L in seawater. Concentrations of cadmium above 10 μg/L may be stable in water having low total solute concentrations and low pH, and can be difficult to remove by conventional water treatment processes. Production Cadmium is a common impurity in zinc ores, and it is most often isolated during the production of zinc. Some zinc concentrates from sulfidic zinc ores contain up to 1.4% cadmium. In the 1970s, the output of cadmium was per ton of zinc. Zinc sulfide ores are roasted in the presence of oxygen, converting the zinc sulfide to the oxide. Zinc metal is produced either by smelting the oxide with carbon or by electrolysis in sulfuric acid. Cadmium is isolated from the zinc metal by vacuum distillation if the zinc is smelted, or cadmium sulfate is precipitated from the electrolysis solution. The British Geological Survey reports that in 2001, China was the top producer of cadmium with almost one-sixth of the world's production, closely followed by South Korea and Japan. Applications Cadmium is a common component of electric batteries, pigments, coatings, and electroplating. Batteries In 2009, 86% of cadmium was used in batteries, predominantly in rechargeable nickel-cadmium batteries. Nickel-cadmium cells have a nominal cell potential of 1.2 V.
The cell consists of a positive nickel hydroxide electrode and a negative cadmium electrode plate separated by an alkaline electrolyte (potassium hydroxide). In 2004, the European Union limited cadmium in electronics to 0.01%, with some exceptions, and in 2006 it reduced the limit on cadmium content to 0.002%. Another type of battery based on cadmium is the silver-cadmium battery. Electroplating Cadmium electroplating, consuming 6% of the global production, is used in the aircraft industry to reduce corrosion of steel components. This coating is passivated by chromate salts. A limitation of cadmium plating is hydrogen embrittlement of high-strength steels from the electroplating process. Therefore, steel parts heat-treated to tensile strengths above 1300 MPa (200 ksi) should be coated by an alternative method (such as special low-embrittlement cadmium electroplating processes or physical vapor deposition). Titanium embrittlement from cadmium-plated tool residues resulted in banishment of those tools (and the implementation of routine tool testing to detect cadmium contamination) in the A-12/SR-71, U-2, and subsequent aircraft programs that use titanium. Nuclear fission Cadmium is used in the control rods of nuclear reactors, acting as a very effective neutron poison to control neutron flux in nuclear fission. When cadmium rods are inserted in the core of a nuclear reactor, cadmium absorbs neutrons, preventing them from creating additional fission events, thus controlling the amount of reactivity. The pressurized water reactor designed by Westinghouse Electric Company uses an alloy consisting of 80% silver, 15% indium, and 5% cadmium. Televisions Some QLED TVs include cadmium in their construction. Some companies are seeking to reduce human exposure to, and environmental pollution from, the material during television production. Anticancer drugs Complexes based on heavy metals have great potential for the treatment of a wide variety of cancers, but their use is often limited by toxic side effects. However, the field is advancing, and promising new cadmium complexes with reduced toxicity have been discovered. Compounds Cadmium oxide was used in black and white television phosphors and in the blue and green phosphors of color television cathode ray tubes. Cadmium sulfide (CdS) is used as a photoconductive surface coating for photocopier drums. Various cadmium salts are used in paint pigments, with CdS as a yellow pigment being the most common. Cadmium selenide is a red pigment, commonly called cadmium red. To painters who work with the pigment, cadmium provides the most brilliant and durable yellows, oranges, and reds – so much so that during production, these colors are significantly toned down before they are ground with oils and binders or blended into watercolors, gouaches, acrylics, and other paint and pigment formulations. Because these pigments are potentially toxic, users should use a barrier cream on the hands to prevent absorption through the skin, even though the amount of cadmium absorbed into the body through the skin is reported to be less than 1%. In PVC, cadmium compounds were used as heat, light, and weathering stabilizers. Currently, cadmium stabilizers have been completely replaced with barium-zinc, calcium-zinc and organo-tin stabilizers. Cadmium is used in many kinds of solder and in bearing alloys because it has a low coefficient of friction and good fatigue resistance.
It is also found in some of the lowest-melting alloys, such as Wood's metal. Semiconductors Cadmium is an element in some semiconductor materials. Cadmium sulfide, cadmium selenide, and cadmium telluride are used in some photodetectors and solar cells. HgCdTe detectors are sensitive to mid-infrared light and are used in some motion detectors. Laboratory uses Helium–cadmium lasers are a common source of blue or ultraviolet laser light. Lasers at wavelengths of 325, 354 and 442 nm are made using this gain medium; some models can switch between these wavelengths. They are notably used in fluorescence microscopy and in various other laboratory applications requiring laser light at these wavelengths. Cadmium selenide quantum dots emit bright luminescence under UV excitation (from a He-Cd laser, for example). The color of this luminescence can be green, yellow or red depending on the particle size. Colloidal solutions of these particles are used for imaging biological tissues and solutions with a fluorescence microscope. In molecular biology, cadmium is used to block voltage-dependent calcium channels from fluxing calcium ions, as well as in hypoxia research to stimulate proteasome-dependent degradation of Hif-1α. Cadmium-selective sensors based on the fluorophore BODIPY have been developed for imaging and sensing of cadmium in cells. One powerful method for monitoring cadmium in aqueous environments involves electrochemistry: by employing a self-assembled monolayer, one can obtain a cadmium-selective electrode with ppt-level sensitivity. Biological role and research Cadmium has no known function in higher organisms and is considered toxic. It is regarded as an environmental pollutant that poses a health hazard to living organisms. Administration of cadmium to cells causes oxidative stress and increases the levels of antioxidants produced by cells to protect against macromolecular damage. However, a cadmium-dependent carbonic anhydrase has been found in some marine diatoms. These diatoms live in environments with very low zinc concentrations, and cadmium performs the function normally carried out by zinc in other anhydrases. This was discovered with X-ray absorption near edge structure (XANES) spectroscopy. Cadmium is preferentially absorbed in the kidneys of humans. Up to about 30 mg of cadmium is commonly inhaled throughout human childhood and adolescence. Cadmium is under research regarding its toxicity in humans, potentially elevating risks of cancer, cardiovascular disease, and osteoporosis. Environment The biogeochemistry of cadmium and its release to the environment have been the subject of review, as has the speciation of cadmium in the environment. Safety The bioinorganic aspects of cadmium toxicity have been reviewed by individuals and organizations. The most dangerous form of occupational exposure to cadmium is inhalation of fine dust and fumes, or ingestion of highly soluble cadmium compounds. Inhalation of cadmium fumes can result initially in metal fume fever, but may progress to chemical pneumonitis, pulmonary edema, and death. Cadmium is also an environmental hazard. Human exposure comes primarily from fossil fuel combustion, phosphate fertilizers, natural sources, iron and steel production, cement production and related activities, nonferrous metals production, and municipal solid waste incineration. Other sources of cadmium include bread, root crops, and vegetables.
There have been a few instances of general population poisoning as the result of long-term exposure to cadmium in contaminated food and water. Research into cadmium's estrogen-mimicking activity, which may induce breast cancer, is ongoing. In the decades leading up to World War II, mining operations contaminated the Jinzū River in Japan with cadmium and traces of other toxic metals. As a consequence, cadmium accumulated in the rice crops along the riverbanks downstream of the mines. Some members of the local agricultural communities consumed the contaminated rice and developed itai-itai disease and renal abnormalities, including proteinuria and glucosuria. The victims of this poisoning were almost exclusively post-menopausal women with low iron and low body stores of other minerals. Similar general population cadmium exposures in other parts of the world have not resulted in the same health problems because those populations maintained sufficient iron and other mineral levels. Thus, although cadmium is a major factor in the itai-itai disease in Japan, most researchers have concluded that it was one of several factors. Cadmium is one of six substances banned by the European Union's Restriction of Hazardous Substances (RoHS) directive, which regulates hazardous substances in electrical and electronic equipment but allows for certain exemptions and exclusions from the scope of the law. The International Agency for Research on Cancer has classified cadmium and cadmium compounds as carcinogenic to humans. Although occupational exposure to cadmium is linked to lung and prostate cancer, there is still uncertainty about the carcinogenicity of cadmium at low environmental exposures. Recent data from epidemiological studies suggest that intake of cadmium through diet is associated with a higher risk of endometrial, breast, and prostate cancer, as well as with osteoporosis, in humans. A recent study has demonstrated that endometrial tissue is characterized by higher levels of cadmium in current and former female smokers. Cadmium exposure is associated with a large number of illnesses, including kidney disease, early atherosclerosis, hypertension, and cardiovascular diseases. Although studies show a significant correlation between cadmium exposure and the occurrence of disease in human populations, a molecular mechanism has not yet been identified. One hypothesis holds that cadmium is an endocrine disruptor, and some experimental studies have shown that it can interact with different hormonal signaling pathways. For example, cadmium can bind to the estrogen receptor alpha and affect signal transduction along the estrogen and MAPK signaling pathways at low doses. The tobacco plant absorbs and accumulates heavy metals such as cadmium from the surrounding soil into its leaves. Following tobacco smoke inhalation, these metals are readily absorbed into the bodies of users. Tobacco smoking is the most important single source of cadmium exposure in the general population. An estimated 10% of the cadmium content of a cigarette is inhaled through smoking. Absorption of cadmium through the lungs is more effective than through the gut, and as much as 50% of the cadmium inhaled in cigarette smoke may be absorbed; a rough estimate of the resulting uptake is sketched after this paragraph. On average, cadmium concentrations in the blood of smokers are 4 to 5 times greater than in non-smokers, and in the kidney 2–3 times greater. Despite the high cadmium content in cigarette smoke, there seems to be little exposure to cadmium from passive smoking. In a non-smoking population, food is the greatest source of exposure.
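The inhalation and absorption fractions quoted above can be combined into a rough estimate of smoking-derived cadmium uptake. In the sketch below, only the 10% inhaled and up-to-50% absorbed figures come from the text; the per-cigarette cadmium content and the consumption rate are assumptions used for illustration.

# Rough estimate of cadmium absorbed from smoking, using the fractions quoted above.
CD_PER_CIGARETTE_UG = 1.5   # assumed cadmium content per cigarette (ug); NOT from the article
FRACTION_INHALED = 0.10     # ~10% of a cigarette's cadmium is inhaled (from the article)
FRACTION_ABSORBED = 0.50    # up to ~50% of inhaled cadmium may be absorbed (from the article)

cigarettes_per_day = 20     # illustrative consumption rate (one pack per day)

absorbed_per_day_ug = (cigarettes_per_day * CD_PER_CIGARETTE_UG
                       * FRACTION_INHALED * FRACTION_ABSORBED)
print(f"Estimated cadmium absorbed: {absorbed_per_day_ug:.2f} ug/day "
      f"({absorbed_per_day_ug * 7:.1f} ug/week)")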
High quantities of cadmium can be found in crustaceans, mollusks, offal, frog legs, cocoa solids, bitter and semi-bitter chocolate, seaweed, fungi and algae products. However, grains, vegetables, and starchy roots and tubers are consumed in much greater quantity in the U.S., and are the source of the greatest dietary exposure there. Most plants bio-accumulate metal toxins such as cadmium and, when composted to form organic fertilizers, yield a product that can often contain high amounts (e.g., over 0.5 mg) of metal toxins for every kilogram of fertilizer. Fertilizers made from animal dung (e.g., cow dung) or urban waste can contain similar amounts of cadmium. The cadmium added to the soil from fertilizers (rock phosphates or organic fertilizers) becomes bio-available and toxic only if the soil pH is low (i.e., in acidic soils). Zinc, copper, calcium, and iron ions, and selenium with vitamin C, are used to treat cadmium intoxication, though it is not easily reversed. Regulations Because of the adverse effects of cadmium on the environment and human health, the supply and use of cadmium is restricted in Europe under the REACH Regulation. The EFSA Panel on Contaminants in the Food Chain specifies 2.5 μg/kg body weight as a tolerable weekly intake for humans. The Joint FAO/WHO Expert Committee on Food Additives has declared 7 μg/kg body weight to be the provisional tolerable weekly intake level. The state of California requires food labels to carry a warning about potential exposure to cadmium on products such as cocoa powder. The U.S. Occupational Safety and Health Administration (OSHA) has set the permissible exposure limit (PEL) for cadmium at a time-weighted average (TWA) of 0.005 ppm. The National Institute for Occupational Safety and Health (NIOSH) has not set a recommended exposure limit (REL) and has designated cadmium as a known human carcinogen. The IDLH (immediately dangerous to life and health) level for cadmium is 9 mg/m3. In addition to mercury, the presence of cadmium in some batteries has led to the requirement of proper disposal (or recycling) of batteries. Product recalls In May 2006, a sale of the seats from Arsenal F.C.'s old stadium, Highbury, in London, England, was cancelled when the seats were discovered to contain trace amounts of cadmium. Reports of high levels of cadmium use in children's jewelry in 2010 led to a US Consumer Product Safety Commission investigation. The U.S. CPSC issued specific recall notices for cadmium content in jewelry sold by Claire's and Wal-Mart stores. In June 2010, McDonald's voluntarily recalled more than 12 million promotional Shrek Forever After 3D Collectible Drinking Glasses because of the cadmium levels in paint pigments on the glassware. The glasses were manufactured by Arc International, of Millville, NJ, USA. See also Red List building materials Toxic heavy metal References Further reading External links Cadmium at The Periodic Table of Videos (University of Nottingham) ATSDR Case Studies in Environmental Medicine: Cadmium Toxicity U.S. Department of Health and Human Services National Institute for Occupational Safety and Health – Cadmium Page NLM Hazardous Substances Databank – Cadmium, Elemental Chemical elements Transition metals Endocrine disruptors IARC Group 1 carcinogens Occupational safety and health Soil contamination Testicular toxicants Native element minerals Chemical elements with hexagonal close-packed structure
2,540
5,675
https://en.wikipedia.org/wiki/Curium
Curium
Curium is a transuranic, radioactive chemical element with the symbol Cm and atomic number 96. This actinide element was named after eminent scientists Marie and Pierre Curie, both known for their research on radioactivity. Curium was first intentionally made by the team of Glenn T. Seaborg, Ralph A. James, and Albert Ghiorso in 1944, using the cyclotron at Berkeley. They bombarded the newly discovered element plutonium (the isotope 239Pu) with alpha particles. This was then sent to the Metallurgical Laboratory at University of Chicago where a tiny sample of curium was eventually separated and identified. The discovery was kept secret until after the end of World War II. The news was released to the public in November 1947. Most curium is produced by bombarding uranium or plutonium with neutrons in nuclear reactors – one tonne of spent nuclear fuel contains ~20 grams of curium. Curium is a hard, dense, silvery metal with a high melting and boiling point for an actinide. It is paramagnetic at ambient conditions, but becomes antiferromagnetic upon cooling, and other magnetic transitions are also seen in many curium compounds. In compounds, curium usually has valence +3 and sometimes +4; the +3 valence is predominant in solutions. Curium readily oxidizes, and its oxides are a dominant form of this element. It forms strongly fluorescent complexes with various organic compounds, but there is no evidence of its incorporation into bacteria and archaea. If it gets into the human body, curium accumulates in bones, lungs, and liver, where it promotes cancer. All known isotopes of curium are radioactive and have small critical mass for a nuclear chain reaction. They mostly emit α-particles; radioisotope thermoelectric generators can use the heat from this process, but this is hindered by the rarity and high cost of curium. Curium is used in making heavier actinides and the 238Pu radionuclide for power sources in artificial cardiac pacemakers and RTGs for spacecraft. It served as the α-source in the alpha particle X-ray spectrometers of several space probes, including the Sojourner, Spirit, Opportunity, and Curiosity Mars rovers and the Philae lander on comet 67P/Churyumov–Gerasimenko, to analyze the composition and structure of the surface. History Though curium had likely been produced in previous nuclear experiments as well as the natural nuclear fission reactor at Oklo, Gabon, it was first intentionally synthesized, isolated and identified in 1944, at University of California, Berkeley, by Glenn T. Seaborg, Ralph A. James, and Albert Ghiorso. In their experiments, they used a cyclotron. Curium was chemically identified at the Metallurgical Laboratory (now Argonne National Laboratory), University of Chicago. It was the third transuranium element to be discovered even though it is the fourth in the series – the lighter element americium was still unknown. The sample was prepared as follows: first plutonium nitrate solution was coated on a platinum foil of ~0.5 cm2 area, the solution was evaporated and the residue was converted into plutonium(IV) oxide (PuO2) by annealing. Following cyclotron irradiation of the oxide, the coating was dissolved with nitric acid and then precipitated as the hydroxide using concentrated aqueous ammonia solution. The residue was dissolved in perchloric acid, and further separation was done by ion exchange to yield a certain isotope of curium. 
The separation of curium and americium was so painstaking that the Berkeley group initially called those elements pandemonium (from Greek for all demons or hell) and delirium (from Latin for madness). Curium-242 was made in July–August 1944 by bombarding 239Pu with α-particles to produce curium with the release of a neutron: ^{239}_{94}Pu + ^{4}_{2}He -> ^{242}_{96}Cm + ^{1}_{0}n Curium-242 was unambiguously identified by the characteristic energy of the α-particles emitted during the decay: ^{242}_{96}Cm -> ^{238}_{94}Pu + ^{4}_{2}He The half-life of this alpha decay was first measured as 150 days and then corrected to 162.8 days. Another isotope 240Cm was produced in a similar reaction in March 1945: ^{239}_{94}Pu + ^{4}_{2}He -> ^{240}_{96}Cm + 3^{1}_{0}n The α-decay half-life of 240Cm was correctly determined as 26.7 days. The discovery of curium and americium in 1944 was closely related to the Manhattan Project, so the results were confidential and declassified only in 1945. Seaborg leaked the synthesis of the elements 95 and 96 on the U.S. radio show for children, the Quiz Kids, five days before the official presentation at an American Chemical Society meeting on November 11, 1945, when one listener asked if any new transuranic element beside plutonium and neptunium had been discovered during the war. The discovery of curium (242Cm and 240Cm), its production, and its compounds was later patented listing only Seaborg as the inventor. The element was named after Marie Curie and her husband Pierre Curie, who are known for discovering radium and for their work in radioactivity. It followed the example of gadolinium, a lanthanide element above curium in the periodic table, which was named after the explorer of rare-earth elements Johan Gadolin: "As the name for the element of atomic number 96 we should like to propose "curium", with symbol Cm. The evidence indicates that element 96 contains seven 5f electrons and is thus analogous to the element gadolinium, with its seven 4f electrons in the regular rare earth series. On this basis element 96 is named after the Curies in a manner analogous to the naming of gadolinium, in which the chemist Gadolin was honored." The first curium samples were barely visible, and were identified by their radioactivity. Louis Werner and Isadore Perlman made the first substantial sample of 30 µg curium-242 hydroxide at University of California, Berkeley in 1947 by bombarding americium-241 with neutrons. Macroscopic amounts of curium(III) fluoride were obtained in 1950 by W. W. T. Crane, J. C. Wallmann and B. B. Cunningham. Its magnetic susceptibility was very close to that of GdF3 providing the first experimental evidence for the +3 valence of curium in its compounds. Curium metal was produced only in 1951 by reduction of CmF3 with barium. Characteristics Physical A synthetic, radioactive element, curium is a hard, dense metal with a silvery-white appearance and physical and chemical properties resembling gadolinium. Its melting point of 1344 °C is significantly higher than that of the previous elements neptunium (637 °C), plutonium (639 °C) and americium (1176 °C). In comparison, gadolinium melts at 1312 °C. Curium boils at 3556 °C. With a density of 13.52 g/cm3, curium is lighter than neptunium (20.45 g/cm3) and plutonium (19.8 g/cm3), but heavier than most other metals. Of two crystalline forms of curium, α-Cm is more stable at ambient conditions. 
It has a hexagonal symmetry, space group P63/mmc, lattice parameters a = 365 pm and c = 1182 pm, and four formula units per unit cell. The crystal consists of double-hexagonal close packing with the layer sequence ABAC and so is isotypic with α-lanthanum. At pressure >23 GPa, at room temperature, α-Cm becomes β-Cm, which has face-centered cubic symmetry, space group Fmm and lattice constant a = 493 pm. On further compression to 43 GPa, curium becomes an orthorhombic γ-Cm structure similar to α-uranium, with no further transitions observed up to 52 GPa. These three curium phases are also called Cm I, II and III. Curium has peculiar magnetic properties. Its neighbor element americium shows no deviation from Curie-Weiss paramagnetism in the entire temperature range, but α-Cm transforms to an antiferromagnetic state upon cooling to 65–52 K, and β-Cm exhibits a ferrimagnetic transition at ~205 K. Curium pnictides show ferromagnetic transitions upon cooling: 244CmN and 244CmAs at 109 K, 248CmP at 73 K and 248CmSb at 162 K. The lanthanide analog of curium, gadolinium, and its pnictides, also show magnetic transitions upon cooling, but the transition character is somewhat different: Gd and GdN become ferromagnetic, and GdP, GdAs and GdSb show antiferromagnetic ordering. In accordance with magnetic data, electrical resistivity of curium increases with temperature – about twice between 4 and 60 K – and then is nearly constant up to room temperature. There is a significant increase in resistivity over time (~) due to self-damage of the crystal lattice by alpha decay. This makes uncertain the true resistivity of curium (~). Curium's resistivity is similar to that of gadolinium, and the actinides plutonium and neptunium, but significantly higher than that of americium, uranium, polonium and thorium. Under ultraviolet illumination, curium(III) ions show strong and stable yellow-orange fluorescence with a maximum in the range of 590–640 nm depending on their environment. The fluorescence originates from the transitions from the first excited state 6D7/2 and the ground state 8S7/2. Analysis of this fluorescence allows monitoring interactions between Cm(III) ions in organic and inorganic complexes. Chemical Curium ion in solution almost always has a +3 oxidation state, the most stable oxidation state for curium. A +4 oxidation state is seen mainly in a few solid phases, such as CmO2 and CmF4. Aqueous curium(IV) is only known in the presence of strong oxidizers such as potassium persulfate, and is easily reduced to curium(III) by radiolysis and even by water itself. Chemical behavior of curium is different from the actinides thorium and uranium, and is similar to americium and many lanthanides. In aqueous solution, the Cm3+ ion is colorless to pale green; Cm4+ ion is pale yellow. The optical absorption of Cm3+ ion contains three sharp peaks at 375.4, 381.2 and 396.5 nm and their strength can be directly converted into the concentration of the ions. The +6 oxidation state has only been reported once in solution in 1978, as the curyl ion (): this was prepared from beta decay of americium-242 in the americium(V) ion . Failure to get Cm(VI) from oxidation of Cm(III) and Cm(IV) may be due to the high Cm4+/Cm3+ ionization potential and the instability of Cm(V). Curium ions are hard Lewis acids and thus form most stable complexes with hard bases. The bonding is mostly ionic, with a small covalent component. 
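The statement above that the strength of the Cm3+ absorption peaks can be converted directly into ion concentration is an application of the Beer–Lambert law, A = ε·c·l. A minimal sketch under assumed values; the molar absorptivity and the measured absorbance below are placeholders for illustration, not figures from the text.

# Beer-Lambert estimate of Cm(III) concentration from absorbance at one of the
# peaks listed above (375.4, 381.2 or 396.5 nm):  A = epsilon * c * l  =>  c = A / (epsilon * l)

epsilon = 55.0      # assumed molar absorptivity, L mol^-1 cm^-1 (placeholder value)
path_length = 1.0   # cuvette path length in cm
absorbance = 0.012  # assumed measured absorbance at 396.5 nm (placeholder value)

concentration = absorbance / (epsilon * path_length)   # mol/L
print(f"Estimated Cm(III) concentration: {concentration:.2e} mol/L")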
Curium in its complexes commonly exhibits a 9-fold coordination environment, with a tricapped trigonal prismatic molecular geometry. Isotopes About 19 radioisotopes and 7 nuclear isomers, 233Cm to 251Cm, are known; none are stable. The longest half-lives are 15.6 million years (247Cm) and 348,000 years (248Cm). Other long-lived ones are 245Cm (8500 years), 250Cm (8300 years) and 246Cm (4760 years). Curium-250 is unusual: it mostly (~86%) decays by spontaneous fission. The most commonly used isotopes are 242Cm and 244Cm with the half-lives 162.8 days and 18.1 years, respectively. All isotopes 242Cm-248Cm, and 250Cm, undergo a self-sustaining nuclear chain reaction and thus in principle can be a nuclear fuel in a reactor. As in most transuranic elements, nuclear fission cross section is especially high for the odd-mass curium isotopes 243Cm, 245Cm and 247Cm. These can be used in thermal-neutron reactors, whereas a mixture of curium isotopes is only suitable for fast breeder reactors since the even-mass isotopes are not fissile in a thermal reactor and accumulate as burn-up increases. The mixed-oxide (MOX) fuel, which is to be used in power reactors, should contain little or no curium because neutron activation of 248Cm will create californium. Californium is a strong neutron emitter, and would pollute the back end of the fuel cycle and increase the dose to reactor personnel. Hence, if minor actinides are to be used as fuel in a thermal neutron reactor, the curium should be excluded from the fuel or placed in special fuel rods where it is the only actinide present. The adjacent table lists the critical masses for curium isotopes for a sphere, without moderator or reflector. With a metal reflector (30 cm of steel), the critical masses of the odd isotopes are about 3–4 kg. When using water (thickness ~20–30 cm) as the reflector, the critical mass can be as small as 59 gram for 245Cm, 155 gram for 243Cm and 1550 gram for 247Cm. There is significant uncertainty in these critical mass values. While it is usually on the order of 20%, the values for 242Cm and 246Cm were listed as large as 371 kg and 70.1 kg, respectively, by some research groups. Curium is not currently used as nuclear fuel due to its low availability and high price. 245Cm and 247Cm have very small critical mass and so could be used in tactical nuclear weapons, but none are known to have been made. Curium-243 is not suitable for such, due to its short half-life and strong α emission, which would cause excessive heat. Curium-247 would be highly suitable due to its long half-life, which is 647 times longer than plutonium-239 (used in many existing nuclear weapons). Occurrence The longest-lived isotope, 247Cm, has half-life 15.6 million years; so any primordial curium, that is, present on Earth when it formed, should have decayed by now. Its past presence as an extinct radionuclide is detectable as an excess of its primordial, long-lived daughter 235U. Traces of curium may occur naturally in uranium minerals due to neutron capture and beta decay, though this has not been confirmed. Traces of 247Cm are also probably brought to Earth in cosmic rays, but again this has not been confirmed. Curium is made artificially in small amounts for research purposes. It also occurs as one of the waste products in spent nuclear fuel. Curium is present in nature in some areas used for nuclear weapons testing. 
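The half-lives quoted above fix the specific activity of each isotope, and together with the roughly 6 MeV alpha energy cited later in the Applications discussion they reproduce the heat outputs of about 120 W/g for 242Cm and 3 W/g for 244Cm mentioned there. A minimal sketch; the alpha energy and unit conversions are the only inputs not taken from this section.

import math

AVOGADRO = 6.022e23
MEV_TO_J = 1.602e-13
ALPHA_ENERGY_MEV = 6.0          # approximate alpha energy quoted in the article
SECONDS_PER_YEAR = 3.156e7

def decay_heat_w_per_g(half_life_s: float, mass_number: int) -> float:
    # Decay heat per gram, assuming one ~6 MeV alpha particle per decay.
    decay_constant = math.log(2) / half_life_s          # decays per second per atom
    atoms_per_gram = AVOGADRO / mass_number
    activity = decay_constant * atoms_per_gram          # decays per second per gram
    return activity * ALPHA_ENERGY_MEV * MEV_TO_J       # watts per gram

print(f"242Cm: {decay_heat_w_per_g(162.8 * 86400, 242):.0f} W/g")             # ~120 W/g
print(f"244Cm: {decay_heat_w_per_g(18.1 * SECONDS_PER_YEAR, 244):.1f} W/g")   # ~3 W/g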
Analysis of the debris at the test site of the United States' first thermonuclear weapon, Ivy Mike, (1 November 1952, Enewetak Atoll), besides einsteinium, fermium, plutonium and americium also revealed isotopes of berkelium, californium and curium, in particular 245Cm, 246Cm and smaller quantities of 247Cm, 248Cm and 249Cm. Atmospheric curium compounds are poorly soluble in common solvents and mostly adhere to soil particles. Soil analysis revealed about 4,000 times higher concentration of curium at the sandy soil particles than in water present in the soil pores. An even higher ratio of about 18,000 was measured in loam soils. The transuranium elements from americium to fermium, including curium, occurred naturally in the natural nuclear fission reactor at Oklo, but no longer do so. Curium, and other non-primordial actinides, have also been detected in the spectrum of Przybylski's Star. Synthesis Isotope preparation Curium is made in small amounts in nuclear reactors, and by now only kilograms of 242Cm and 244Cm have been accumulated, and grams or even milligrams for heavier isotopes. Hence the high price of curium, which has been quoted at 160–185 USD per milligram, with a more recent estimate at US$2,000/g for 242Cm and US$170/g for 244Cm. In nuclear reactors, curium is formed from 238U in a series of nuclear reactions. In the first chain, 238U captures a neutron and converts into 239U, which via β− decay transforms into 239Np and 239Pu. Further neutron capture followed by β−-decay gives americium (241Am) which further becomes 242Cm: For research purposes, curium is obtained by irradiating not uranium but plutonium, which is available in large amounts from spent nuclear fuel. A much higher neutron flux is used for the irradiation that results in a different reaction chain and formation of 244Cm: Curium-244 alpha decays to 240Pu, but it also absorbs neutrons, hence a small amount of heavier curium isotopes. Of those, 247Cm and 248Cm are popular in scientific research due to their long half-lives. But the production rate of 247Cm in thermal neutron reactors is low because it is prone to fission due to thermal neutrons. Synthesis of 250Cm by neutron capture is unlikely due to the short half-life of the intermediate 249Cm (64 min), which β− decays to the berkelium isotope 249Bk. The above cascade of (n,γ) reactions gives a mix of different curium isotopes. Their post-synthesis separation is cumbersome, so a selective synthesis is desired. Curium-248 is favored for research purposes due to its long half-life. The most efficient way to prepare this isotope is by α-decay of the californium isotope 252Cf, which is available in relatively large amounts due to its long half-life (2.65 years). About 35–50 mg of 248Cm is produced thus, per year. The associated reaction produces 248Cm with isotopic purity of 97%. Another isotope, 245Cm, can be obtained for research, from α-decay of 249Cf; the latter isotope is produced in small amounts from β−-decay of 249Bk. Metal preparation Most synthesis routines yield a mix of actinide isotopes as oxides, from which a given isotope of curium needs to be separated. An example procedure could be to dissolve spent reactor fuel (e.g. MOX fuel) in nitric acid, and remove the bulk of the uranium and plutonium using a PUREX (Plutonium – URanium EXtraction) type extraction with tributyl phosphate in a hydrocarbon. 
The lanthanides and the remaining actinides are then separated from the aqueous residue (raffinate) by a diamide-based extraction to give, after stripping, a mixture of trivalent actinides and lanthanides. A curium compound is then selectively extracted using multi-step chromatographic and centrifugation techniques with an appropriate reagent. A bis-triazinyl bipyridine complex has recently been proposed as such a reagent, one that is highly selective for curium. Separation of curium from the very chemically similar americium can also be done by treating a slurry of their hydroxides in aqueous sodium bicarbonate with ozone at elevated temperature. Both americium and curium are present in solutions mostly in the +3 valence state; americium oxidizes to soluble Am(IV) complexes, but curium stays unchanged and so can be isolated by repeated centrifugation. Metallic curium is obtained by reduction of its compounds. Initially, curium(III) fluoride was used for this purpose. The reaction was done in an environment free of water and oxygen, in an apparatus made of tantalum and tungsten, using elemental barium or lithium as reducing agents. Another possibility is reduction of curium(IV) oxide using a magnesium-zinc alloy in a melt of magnesium chloride and magnesium fluoride. Compounds and reactions Oxides Curium readily reacts with oxygen, forming mostly Cm2O3 and CmO2 oxides, but the divalent oxide CmO is also known. Black CmO2 can be obtained by burning curium oxalate, nitrate, or hydroxide in pure oxygen. Upon heating to 600–650 °C in vacuum (about 0.01 Pa), it transforms into the whitish Cm2O3: 4 CmO2 → 2 Cm2O3 + O2. Alternatively, Cm2O3 can be obtained by reducing CmO2 with molecular hydrogen: 2 CmO2 + H2 → Cm2O3 + H2O. Also, a number of ternary oxides of the type M(II)CmO3 are known, where M stands for a divalent metal, such as barium. Thermal oxidation of trace quantities of curium hydride (CmH2–3) has been reported to give a volatile form of CmO2 and the volatile trioxide CmO3, one of two known examples of the very rare +6 state for curium. Another observed species was reported to behave similarly to a supposed plutonium tetroxide and was tentatively characterized as CmO4, with curium in the extremely rare +8 state; but new experiments seem to indicate that CmO4 does not exist, and they have cast doubt on the existence of PuO4 as well. Halides The colorless curium(III) fluoride (CmF3) can be made by adding fluoride ions to curium(III)-containing solutions. The brown tetravalent curium(IV) fluoride (CmF4), on the other hand, is only obtained by reacting curium(III) fluoride with molecular fluorine. A series of ternary fluorides of the form A7Cm6F31 (A = alkali metal) are known. The colorless curium(III) chloride (CmCl3) is made by reacting curium hydroxide (Cm(OH)3) with anhydrous hydrogen chloride gas. It can be further converted into other halides, such as curium(III) bromide (colorless to light green) and curium(III) iodide (colorless), by reacting it with the ammonium salt of the corresponding halide at temperatures of ~400–450 °C. Alternatively, one can heat curium oxide to ~600 °C with the corresponding acid (such as hydrobromic acid for curium bromide). Vapor phase hydrolysis of curium(III) chloride gives curium oxychloride. Chalcogenides and pnictides Sulfides, selenides and tellurides of curium have been obtained by treating curium with gaseous sulfur, selenium or tellurium in vacuum at elevated temperature. Curium pnictides of the type CmX are known for nitrogen, phosphorus, arsenic and antimony.
They can be prepared by reacting either curium(III) hydride (CmH3) or metallic curium with these elements at elevated temperature. Organocurium compounds and biological aspects Organometallic complexes analogous to uranocene are known also for other actinides, such as thorium, protactinium, neptunium, plutonium and americium. Molecular orbital theory predicts a stable "curocene" complex (η8-C8H8)2Cm, but it has not been reported experimentally yet. Formation of BTP-type complexes (BTP = 2,6-di(1,2,4-triazin-3-yl)pyridine) in solutions containing n-C3H7-BTP and Cm3+ ions has been confirmed by EXAFS. Some of these BTP-type complexes selectively interact with curium and thus are useful for separating it from lanthanides and other actinides. Dissolved Cm3+ ions bind with many organic compounds, such as hydroxamic acid, urea, fluorescein and adenosine triphosphate. Many of these compounds are related to the biological activity of various microorganisms. The resulting complexes show strong yellow-orange emission under UV light excitation, which is convenient not only for their detection, but also for studying interactions between the Cm3+ ion and the ligands via changes in the half-life (on the order of ~0.1 ms) and spectrum of the fluorescence. Curium has no biological significance. There are a few reports on biosorption of Cm3+ by bacteria and archaea, but no evidence for incorporation of curium into them. Applications Radionuclides Curium is one of the most radioactive isolable elements. Its two most common isotopes, 242Cm and 244Cm, are strong alpha emitters (energy 6 MeV); they have fairly short half-lives, 162.8 days and 18.1 years, and give as much as 120 W/g and 3 W/g of heat, respectively. Therefore, curium can be used in its common oxide form in radioisotope thermoelectric generators like those in spacecraft. This application has been studied for the 244Cm isotope, while 242Cm was abandoned due to its prohibitive price, around 2000 USD/g. 243Cm, with a ~30-year half-life and a good energy yield of ~1.6 W/g, could be a suitable fuel, but it gives significant amounts of harmful gamma and beta rays from radioactive decay products. As an α-emitter, 244Cm needs much less radiation shielding, but it has a high spontaneous fission rate, and thus a lot of neutron and gamma radiation. Compared to a competing thermoelectric generator isotope such as 238Pu, 244Cm emits 500 times more neutrons, and its higher gamma emission requires a lead shield that, for a 1 kW source, is about 20 times thicker than that needed for 238Pu. Therefore, this use of curium is currently considered impractical. A more promising use of 242Cm is for making 238Pu, a better radioisotope for thermoelectric generators such as those in heart pacemakers. The alternate routes to 238Pu use the (n,γ) reaction of 237Np, or deuteron bombardment of uranium, though both reactions always produce 236Pu as an undesired by-product, since the latter decays to 232U with strong gamma emission. Curium is a common starting material for making higher transuranic and superheavy elements. Thus, bombarding 248Cm with neon (22Ne), magnesium (26Mg), or calcium (48Ca) yields isotopes of seaborgium (265Sg), hassium (269Hs and 270Hs), and livermorium (292Lv, 293Lv, and possibly 294Lv). Californium was discovered when a microgram-sized target of curium-242 was irradiated with 35 MeV alpha particles using the cyclotron at Berkeley: 242Cm + 4He → 245Cf + n. Only about 5,000 atoms of californium were produced in this experiment.
The odd-mass curium isotopes 243Cm, 245Cm, and 247Cm are all highly fissile and can release additional energy in a thermal spectrum nuclear reactor. All curium isotopes are fissionable in fast-neutron reactors. This is one of the motives for minor actinide separation and transmutation in the nuclear fuel cycle, helping to reduce the long-term radiotoxicity of used, or spent nuclear fuel. X-ray spectrometer The most practical application of 244Cm—though rather limited in total volume—is as α-particle source in alpha particle X-ray spectrometers (APXS). These instruments were installed on the Sojourner, Mars, Mars 96, Mars Exploration Rovers and Philae comet lander, as well as the Mars Science Laboratory to analyze the composition and structure of the rocks on the surface of planet Mars. APXS was also used in the Surveyor 5–7 moon probes but with a 242Cm source. An elaborate APXS setup has a sensor head containing six curium sources with a total decay rate of several tens of millicuries (roughly one gigabecquerel). The sources are collimated on a sample, and the energy spectra of the alpha particles and protons scattered from the sample are analyzed (proton analysis is done only in some spectrometers). These spectra contain quantitative information on all major elements in the sample except for hydrogen, helium and lithium. Safety Due to its radioactivity, curium and its compounds must be handled in appropriate labs under special arrangements. While curium itself mostly emits α-particles which are absorbed by thin layers of common materials, some of its decay products emit significant fractions of beta and gamma rays, which require a more elaborate protection. If consumed, curium is excreted within a few days and only 0.05% is absorbed in the blood. From there, ~45% goes to the liver, 45% to the bones, and the remaining 10% is excreted. In bone, curium accumulates on the inside of the interfaces to the bone marrow and does not significantly redistribute with time; its radiation destroys bone marrow and thus stops red blood cell creation. The biological half-life of curium is about 20 years in the liver and 50 years in the bones. Curium is absorbed in the body much more strongly via inhalation, and the allowed total dose of 244Cm in soluble form is 0.3 μCi. Intravenous injection of 242Cm- and 244Cm-containing solutions to rats increased the incidence of bone tumor, and inhalation promoted lung and liver cancer. Curium isotopes are inevitably present in spent nuclear fuel (about 20 g/tonne). The isotopes 245Cm–248Cm have decay times of thousands of years and must be removed to neutralize the fuel for disposal. Such a procedure involves several steps, where curium is first separated and then converted by neutron bombardment in special reactors to short-lived nuclides. This procedure, nuclear transmutation, while well documented for other elements, is still being developed for curium. References Bibliography Holleman, Arnold F. and Wiberg, Nils Lehrbuch der Anorganischen Chemie, 102 Edition, de Gruyter, Berlin 2007, . Penneman, R. A. and Keenan T. K. The radiochemistry of americium and curium, University of California, Los Alamos, California, 1960 External links Curium at The Periodic Table of Videos (University of Nottingham) NLM Hazardous Substances Databank – Curium, Radioactive Chemical elements Chemical elements with double hexagonal close-packed structure Actinides American inventions Synthetic elements Marie Curie Pierre Curie
2,541
5,676
https://en.wikipedia.org/wiki/Californium
Californium
Californium is a radioactive chemical element with the symbol Cf and atomic number 98. The element was first synthesized in 1950 at Lawrence Berkeley National Laboratory (then the University of California Radiation Laboratory), by bombarding curium with alpha particles (helium-4 ions). It is an actinide element, the sixth transuranium element to be synthesized, and has the second-highest atomic mass of all elements that have been produced in amounts large enough to see with the naked eye (after einsteinium). The element was named after the university and the U.S. state of California. Two crystalline forms exist for californium at normal pressure: one above and one below . A third form exists at high pressure. Californium slowly tarnishes in air at room temperature. Californium compounds are dominated by the +3 oxidation state. The most stable of californium's twenty known isotopes is californium-251, with a half-life of 898 years. This short half-life means the element is not found in significant quantities in the Earth's crust. 252Cf, with a half-life of about 2.645 years, is the most common isotope used and is produced at Oak Ridge National Laboratory in the United States and Research Institute of Atomic Reactors in Russia. Californium is one of the few transuranium elements with practical applications. Most of these applications exploit the property of certain isotopes of californium to emit neutrons. For example, californium can be used to help start up nuclear reactors, and it is employed as a source of neutrons when studying materials using neutron diffraction and neutron spectroscopy. Californium can also be used in nuclear synthesis of higher mass elements; oganesson (element 118) was synthesized by bombarding californium-249 atoms with calcium-48 ions. Users of californium must take into account radiological concerns and the element's ability to disrupt the formation of red blood cells by bioaccumulating in skeletal tissue. Characteristics Physical properties Californium is a silvery-white actinide metal with a melting point of and an estimated boiling point of . The pure metal is malleable and is easily cut with a razor blade. Californium metal starts to vaporize above when exposed to a vacuum. Below californium metal is either ferromagnetic or ferrimagnetic (it acts like a magnet), between 48 and 66 K it is antiferromagnetic (an intermediate state), and above it is paramagnetic (external magnetic fields can make it magnetic). It forms alloys with lanthanide metals but little is known about the resulting materials. The element has two crystalline forms at standard atmospheric pressure: a double-hexagonal close-packed form dubbed alpha (α) and a face-centered cubic form designated beta (β). The α form exists below 600–800 °C with a density of 15.10 g/cm3 and the β form exists above 600–800 °C with a density of 8.74 g/cm3. At 48 GPa of pressure the β form changes into an orthorhombic crystal system due to delocalization of the atom's 5f electrons, which frees them to bond. The bulk modulus of a material is a measure of its resistance to uniform pressure. Californium's bulk modulus is , which is similar to trivalent lanthanide metals but smaller than more familiar metals, such as aluminium (70 GPa). Chemical properties and compounds Californium exhibits oxidation states of 4, 3, or 2. It typically forms eight or nine bonds to surrounding atoms or ions. 
Its chemical properties are predicted to be similar to other primarily 3+ valence actinide elements and the element dysprosium, which is the lanthanide above californium in the periodic table. Compounds in the +4 oxidation state are strong oxidizing agents and those in the +2 state are strong reducing agents. The element slowly tarnishes in air at room temperature, with the rate increasing when moisture is added. Californium reacts when heated with hydrogen, nitrogen, or a chalcogen (oxygen family element); reactions with dry hydrogen and aqueous mineral acids are rapid. Californium is only water-soluble as the californium(III) cation. Attempts to reduce or oxidize the +3 ion in solution have failed. The element forms a water-soluble chloride, nitrate, perchlorate, and sulfate and is precipitated as a fluoride, oxalate, or hydroxide. Californium is the heaviest actinide to exhibit covalent properties, as is observed in the californium borate. Isotopes Twenty isotopes of californium are known (mass numbers ranging from 237 to 256); the most stable are 251Cf with a half-life of 898 years, 249Cf with a half-life of 351 years, 250Cf with a half-life of 13.08 years, and 252Cf with a half-life of 2.645 years. All other isotopes have half-lives shorter than a year, and most of these have half-lives of less than 20 minutes. 249Cf is formed from beta decay of berkelium-249, and most other californium isotopes are made by subjecting berkelium to intense neutron radiation in a nuclear reactor. Though californium-251 has the longest half-life, its production yield is only 10% due to its tendency to collect neutrons (high neutron capture) and its tendency to interact with other particles (high neutron cross section). Californium-252 is a very strong neutron emitter, which makes it extremely radioactive and harmful. 252Cf alpha decays to curium-248 96.9% of the time; the other 3.1% of decays are spontaneous fission. One microgram (μg) of 252Cf emits 2.3 million neutrons per second, with an average of 3.7 neutrons per spontaneous fission. Most other isotopes of californium alpha decay to curium (atomic number 96). History Californium was first made at the University of California Radiation Laboratory, Berkeley, by the physics researchers Stanley Gerald Thompson, Kenneth Street Jr., Albert Ghiorso, and Glenn T. Seaborg, around February 9, 1950. It was the sixth transuranium element to be discovered; the team announced its discovery on March 17, 1950. To produce californium, a microgram-size target of curium-242 (242Cm) was bombarded with 35 MeV alpha particles (4He) in the cyclotron at Berkeley, which produced californium-245 (245Cf) plus one free neutron (n): 242Cm + 4He → 245Cf + n. To identify and separate out the element, ion exchange and adsorption methods were undertaken. Only about 5,000 atoms of californium were produced in this experiment, and these atoms had a half-life of 44 minutes. The discoverers named the new element after the university and the state. This was a break from the convention used for elements 95 to 97, which drew inspiration from how the elements directly above them in the periodic table were named. However, the element directly above element 98 in the periodic table, dysprosium, has a name that means "hard to get at", so the researchers decided to set aside the informal naming convention. They added that "the best we can do is to point out [that] ... searchers a century ago found it difficult to get to California".
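The neutron-emission figure quoted in the Isotopes section above (about 2.3 million neutrons per second per microgram of 252Cf) follows from the 2.645-year half-life, the 3.1% spontaneous-fission branch, and the average of 3.7 neutrons per fission. A minimal sketch reproducing that number from those inputs:

import math

AVOGADRO = 6.022e23
SECONDS_PER_YEAR = 3.156e7

half_life_s = 2.645 * SECONDS_PER_YEAR     # 252Cf half-life (from the article)
sf_branch = 0.031                          # 3.1% of decays are spontaneous fission
neutrons_per_fission = 3.7                 # average neutrons per spontaneous fission
mass_g = 1e-6                              # one microgram of 252Cf

atoms = mass_g / 252 * AVOGADRO
activity = math.log(2) / half_life_s * atoms            # total decays per second
neutron_rate = activity * sf_branch * neutrons_per_fission

print(f"Neutrons per second from 1 ug of 252Cf: {neutron_rate:.2e}")   # ~2.3e6, i.e. ~139 million per minute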
Weighable amounts of californium were first produced by the irradiation of plutonium targets at the Materials Testing Reactor at the National Reactor Testing Station in eastern Idaho; these findings were reported in 1954. The high spontaneous fission rate of californium-252 was observed in these samples. The first experiment with californium in concentrated form occurred in 1958. The isotopes 249Cf to 252Cf were isolated that same year from a sample of plutonium-239 that had been irradiated with neutrons in a nuclear reactor for five years. Two years later, in 1960, Burris Cunningham and James Wallman of the Lawrence Radiation Laboratory of the University of California created the first californium compounds—californium trichloride, californium(III) oxychloride, and californium oxide—by treating californium with steam and hydrochloric acid. The High Flux Isotope Reactor (HFIR) at Oak Ridge National Laboratory (ORNL) in Oak Ridge, Tennessee, started producing small batches of californium in the 1960s, and by 1995 it was nominally producing sub-gram quantities of californium annually. Plutonium supplied by the United Kingdom to the United States under the 1958 US–UK Mutual Defence Agreement was used for making californium. The Atomic Energy Commission sold 252Cf to industrial and academic customers in the early 1970s for $10 per microgram, and shipments of 252Cf were made to customers every year from 1970 to 1990. Californium metal was first prepared in 1974 by Haire and Baybarz, who reduced californium(III) oxide with lanthanum metal to obtain microgram amounts of sub-micrometer-thick films. Occurrence Traces of californium can be found near facilities that use the element in mineral prospecting and in medical treatments. The element is fairly insoluble in water, but it adheres well to ordinary soil, and concentrations of it in the soil can be 500 times higher than in the water surrounding the soil particles. Nuclear fallout from atmospheric nuclear weapons testing prior to 1980 contributed a small amount of californium to the environment. Californium isotopes with mass numbers 249, 252, 253, and 254 have been observed in the radioactive dust collected from the air after a nuclear explosion. Californium is not a major radionuclide at United States Department of Energy legacy sites, since it was not produced in large quantities. Californium was once believed to be produced in supernovas, because the fading of supernova light appeared to match the 60-day half-life of 254Cf. However, subsequent studies failed to demonstrate any californium spectra, and supernova light curves are now thought to follow the decay of nickel-56. The transuranium elements from americium to fermium, including californium, occurred naturally in the natural nuclear fission reactor at Oklo, but no longer do so. Spectral lines of californium, along with those of several other non-primordial elements, were detected in Przybylski's Star in 2008. Production Californium is produced in nuclear reactors and particle accelerators. Californium-250 is made by bombarding berkelium-249 (249Bk) with neutrons, forming berkelium-250 (250Bk) via neutron capture (n,γ); this, in turn, quickly beta decays (β−) to californium-250 (250Cf) in the following reaction: 249Bk(n,γ)250Bk → 250Cf + β− Bombardment of californium-250 with neutrons produces californium-251 and californium-252. Prolonged irradiation of americium, curium, and plutonium with neutrons produces milligram amounts of californium-252 and microgram amounts of californium-249. 
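The capture-then-decay step just described can be illustrated with a toy numerical model. The sketch below is a rough illustration only: the neutron flux, the 249Bk capture cross-section, and the 250Bk half-life used here are assumed, order-of-magnitude values chosen for the example rather than figures from the article, and the model ignores further neutron captures on 250Cf (which, as noted above, go on to produce 251Cf and 252Cf) as well as losses to fission and other reactions.

```python
# Toy model (illustrative only) of the production step described above:
# Bk-249 captures a neutron to become Bk-250, which beta-decays to Cf-250.
# The flux, capture cross-section, and Bk-250 half-life are assumed,
# order-of-magnitude values for illustration, not figures from the article.
import math

FLUX = 1e15                      # assumed thermal neutron flux, n / (cm^2 s)
SIGMA_CAPTURE = 700e-24          # assumed Bk-249 capture cross-section, cm^2 (700 barns)
BK250_HALF_LIFE_S = 3.2 * 3600   # assumed Bk-250 half-life (~3.2 hours)

capture_rate = FLUX * SIGMA_CAPTURE            # per-atom capture probability per second
decay_rate = math.log(2) / BK250_HALF_LIFE_S   # Bk-250 decay constant

n_bk249, n_bk250, n_cf250 = 1.0, 0.0, 0.0      # start with a unit amount of Bk-249
dt = 60.0                                      # one-minute time step
for _ in range(int(30 * 24 * 3600 / dt)):      # simulate 30 days of irradiation
    captures = capture_rate * n_bk249 * dt
    decays = decay_rate * n_bk250 * dt
    n_bk249 -= captures
    n_bk250 += captures - decays
    n_cf250 += decays

print(f"fraction still Bk-249:        {n_bk249:.3f}")
print(f"fraction in transit as Bk-250: {n_bk250:.5f}")
print(f"fraction converted to Cf-250:  {n_cf250:.3f}")
```

With these illustrative numbers, most of the 249Bk is converted within a few weeks, and because the intermediate 250Bk decays quickly, almost none of the material sits in that state at any moment. Real production campaigns are far more complicated, but the same capture-and-decay bookkeeping underlies them.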
As of 2006, curium isotopes 244 to 248 are irradiated by neutrons in special reactors to produce primarily californium-252 with lesser amounts of isotopes 249 to 255. Microgram quantities of californium-252 are available for commercial use through the U.S. Nuclear Regulatory Commission. Only two sites produce californium-252: the Oak Ridge National Laboratory in the United States and the Research Institute of Atomic Reactors in Dimitrovgrad, Russia. As of 2003, the two sites produce 0.25 grams and 0.025 grams of californium-252 per year, respectively. Three californium isotopes with significant half-lives are produced, requiring a total of 15 neutron captures by uranium-238 without nuclear fission or alpha decay occurring during the process. Californium-253 is at the end of a production chain that starts with uranium-238, includes several isotopes of plutonium, americium, curium, berkelium, and the californium isotopes 249 to 253 (see diagram). Applications Californium-252 has a number of specialized uses as a strong neutron emitter; it produces 139 million neutrons per microgram per minute. This property makes it useful as a startup neutron source for some nuclear reactors and as a portable (non-reactor based) neutron source for neutron activation analysis to detect trace amounts of elements in samples. Neutrons from californium are used in the treatment of certain cervical and brain cancers where other radiation therapy is ineffective. It has been used in educational applications since 1969, when the Georgia Institute of Technology received a loan of 119 μg of 252Cf from the Savannah River Site. It is also used with online elemental coal analyzers and bulk material analyzers in the coal and cement industries. Neutron penetration into materials makes californium useful in detection instruments such as fuel rod scanners; in neutron radiography of aircraft and weapons components to detect corrosion, bad welds, cracks and trapped moisture; and in portable metal detectors. Neutron moisture gauges use 252Cf to find water and petroleum layers in oil wells; the isotope also serves as a portable neutron source for on-the-spot gold and silver prospecting and for detecting groundwater movement. The main uses of 252Cf in 1982 were reactor start-up (48.3%), fuel rod scanning (25.3%), and activation analysis (19.4%). By 1994, most 252Cf was used in neutron radiography (77.4%), with fuel rod scanning (12.1%) and reactor start-up (6.9%) as important but secondary uses. In 2021, fast neutrons from 252Cf were used for wireless data transmission. 251Cf has a very small calculated critical mass, high lethality, and a relatively short period of toxic environmental irradiation. The low critical mass of californium led to some exaggerated claims about possible uses for the element. In October 2006, researchers announced that three atoms of oganesson (element 118) had been identified at the Joint Institute for Nuclear Research in Dubna, Russia, from bombarding 249Cf with calcium-48, making it the heaviest element ever made. The target contained about 10 mg of 249Cf deposited on a titanium foil of 32 cm2 area. Californium has also been used to produce other transuranium elements; for example, lawrencium was first synthesized in 1961 by bombarding californium with boron nuclei. Precautions Californium that bioaccumulates in skeletal tissue releases radiation that disrupts the body's ability to form red blood cells. 
The element plays no natural biological role in any organism due to its intense radioactivity and low concentration in the environment. Californium can enter the body by ingestion of contaminated food or drink or by breathing air with suspended particles of the element. Once in the body, only 0.05% of the californium will reach the bloodstream. About 65% of that californium will be deposited in the skeleton, 25% in the liver, and the rest in other organs or excreted, mainly in urine. Half of the californium deposited in the skeleton and the liver is gone in 50 and 20 years, respectively. Californium in the skeleton adheres to bone surfaces before slowly migrating throughout the bone. The element is most dangerous if taken into the body. In addition, californium-249 and californium-251 can cause tissue damage externally, through gamma ray emission. Ionizing radiation emitted by californium on bone and in the liver can cause cancer. Notes References Bibliography External links Californium at The Periodic Table of Videos (University of Nottingham) NuclearWeaponArchive.org – Californium Hazardous Substances Databank – Californium, Radioactive Chemical elements Chemical elements with double hexagonal close-packed structure Actinides Synthetic elements Neutron sources Ferromagnetic materials
https://en.wikipedia.org/wiki/Christian%20Social%20Union%20in%20Bavaria
Christian Social Union in Bavaria
The Christian Social Union in Bavaria (German: , CSU) is a Christian-democratic and conservative political party in Germany. Having a regionalist identity, the CSU operates only in Bavaria while its larger counterpart, the Christian Democratic Union (CDU), operates in the other fifteen states of Germany. It differs from the CDU by being somewhat more conservative in social matters, following Catholic social teaching. The CSU is considered the de facto successor of the Weimar-era Catholic Bavarian People's Party. At the federal level, the CSU forms a common faction in the Bundestag with the CDU which is frequently referred to as the Union Faction (die Unionsfraktion) or simply CDU/CSU. The CSU has 45 seats in the Bundestag since the 2021 federal election, making it currently the second smallest of the seven parties represented. The CSU is a member of the European People's Party and the International Democrat Union. Party leader Markus Söder serves as Minister-President of Bavaria, a position that CSU representatives have held from 1946 to 1954 and again since 1957. History Franz Josef Strauß (1915–1988) had left behind the strongest legacy as a leader of the party, having led the party from 1961 until his death in 1988. His political career in the federal cabinet was unique in that he had served four ministerial posts in the years between 1953 and 1969. From 1978 until his death in 1988, Strauß served as the Minister-President of Bavaria. Strauß was the first leader of the CSU to be a candidate for the German chancellery in 1980. In the 1980 federal election, Strauß ran against the incumbent Helmut Schmidt of the Social Democratic Party of Germany (SPD) but lost thereafter as the SPD and the Free Democratic Party (FDP) managed to secure an absolute majority together, forming a social-liberal coalition. The CSU has led the Bavarian state government since it came into existence in 1946, save from 1954 to 1957 when the SPD formed a state government in coalition with the Bavaria Party and the state branches of the GB/BHE and FDP. Initially, the separatist Bavaria Party (BP) successfully competed for the same electorate as the CSU, as both parties saw and presented themselves as successors to the BVP. The CSU was ultimately able to win this power struggle for itself. Among other things, the BP was involved in the "casino affair" under dubious circumstances by the CSU at the end of the 1950s and lost considerable prestige and votes. In the 1966 state election, the BP finally left the state parliament. Before the 2008 elections in Bavaria, the CSU perennially achieved absolute majorities at the state level by itself. This level of dominance is unique among Germany's 16 states. Edmund Stoiber took over the CSU leadership in 1999. He ran for Chancellor of Germany in 2002, but his preferred CDU/CSU–FDP coalition lost against the SPD candidate Gerhard Schröder's SPD–Green alliance. In the 2003 Bavarian state election, the CSU won 60.7% of the vote and 124 of 180 seats in the state parliament. This was the first time any party had won a two-thirds majority in a German state parliament. The Economist later suggested that this exceptional result was due to a backlash against Schröder's government in Berlin. The CSU's popularity declined in subsequent years. Stoiber stepped down from the posts of Minister-President and CSU chairman in September 2007. A year later, the CSU lost its majority in the 2008 Bavarian state election, with its vote share dropping from 60.7% to 43.4%. 
The CSU remained in power by forming a coalition with the FDP. In the 2009 general election, the CSU received only 42.5% of the vote in Bavaria, which at that point was the weakest showing in the party's history. The CSU made gains in the 2013 Bavarian state election and the 2013 federal election, which were held a week apart in September 2013. The CSU regained its majority in the Bavarian Landtag and remained in government in Berlin. It had three ministers in the Fourth Merkel cabinet, namely Horst Seehofer (Minister of the Interior, Building and Community), Andreas Scheuer (Minister of Transport and Digital Infrastructure) and Gerd Müller (Minister for Economic Cooperation and Development). The 2018 Bavarian state election, with Markus Söder as the party's top candidate, yielded the CSU's worst state-election result since 1950: 37.2% of the vote, a decline of more than ten percentage points from its 2013 result. The CSU then had to form a coalition government with the Free Voters of Bavaria as junior partner. The 2021 German federal election saw the worst election result ever for the Union. The CSU also had a weak showing, with 5.2% of the national vote and 31.7% of the vote in Bavaria. Relationship with the CDU The CSU is the sister party of the Christian Democratic Union (CDU). Together, they are called the Union. The CSU operates only within Bavaria, and the CDU operates in all states other than Bavaria. While virtually independent, at the federal level the parties form a common CDU/CSU faction. No Chancellor has ever come from the CSU, although Strauß and Edmund Stoiber were CDU/CSU candidates for Chancellor in the 1980 federal election and the 2002 federal election, respectively, both of which were won by the Social Democratic Party of Germany (SPD). Below the federal level, the parties are entirely independent. Since its formation, the CSU has been more conservative than the CDU. The CSU and the state of Bavaria decided not to sign the Grundgesetz of the Federal Republic of Germany, as they could not agree with the division of Germany into two states after World War II. Although Bavaria, like all German states, has its own police and justice system (distinctive and non-federal), the CSU has actively participated in all political affairs of the German Parliament, the German government, the German Bundesrat, the parliamentary elections of the German President, the European Parliament and meetings with Mikhail Gorbachev in Russia. Like the CDU, the CSU is pro-European, although some Eurosceptic tendencies were shown in the past. Leaders Party chairmen Ministers-president The CSU has contributed eleven of the twelve Ministers-President of Bavaria since 1945, with only Wilhelm Hoegner (1945–1946, 1954–1957) of the SPD also holding the office. Election results Federal parliament (Bundestag) European Parliament Landtag of Bavaria See also List of Christian Social Union of Bavaria politicians Politics of Germany Notes and references Further reading Alf Mintzel (1975). Die CSU. Anatomie einer konservativen Partei 1945–1972. Opladen. 
External links Christlich-Soziale Union – official website (English page) Christian-Social Union (Bavaria, Germany) Christian-Social Union of Bavaria (CSU) 1945 establishments in Germany Bavarian nationalism Catholic political parties Centre-right parties in Europe Christian democratic parties in Germany Conservative parties in Germany International Democrat Union member parties Member parties of the European People's Party Parties represented in the European Parliament Political parties established in 1945 Politics of Bavaria Pro-European political parties in Germany Regional parties in Germany Social conservative parties
https://en.wikipedia.org/wiki/Colin%20Dexter
Colin Dexter
Norman Colin Dexter (29 September 1930 – 21 March 2017) was an English crime writer known for his Inspector Morse series of novels, which were written between 1975 and 1999 and adapted as an ITV television series, Inspector Morse, from 1987 to 2000. His characters have spawned a sequel series, Lewis from 2006 to 2015, and a prequel series, Endeavour from 2012 to 2023. Early life and career Dexter was born in Stamford, Lincolnshire, to Alfred and Dorothy Dexter. He had an elder brother, John, a fellow classicist, who taught Classics at The King's School, Peterborough, and a sister, Avril. Alfred ran a small garage and taxi company from premises in Scotgate, Stamford. Dexter was educated at St John's Infants School and Bluecoat Junior School, from which he gained a scholarship to Stamford School, a boys' grammar school, where a younger contemporary was England cricket captain and England rugby player M. J. K. Smith. After leaving school, Dexter completed his national service with the Royal Corps of Signals and then read Classics at Christ's College, Cambridge, graduating in 1953 and receiving a master's degree in 1958. In 1954, Dexter began his teaching career as assistant Classics master at Wyggeston Grammar School for Boys in Leicester. There he helped the school's Christian Union. However, in 2000 he stated that he shared the same views on politics and religion as Inspector Morse, who was portrayed in the final Morse novel, The Remorseful Day, as an atheist. A post at Loughborough Grammar School followed in 1957, then he took up the position of senior Classics teacher at Corby Grammar School, Northamptonshire, in 1959. In 1966, he was forced by the onset of deafness to retire from teaching and took up the post of senior assistant secretary at the University of Oxford Delegacy of Local Examinations (UODLE) in Oxford, a job he held until his retirement in 1988. In November 2008, Dexter featured prominently in the BBC Four programme "How to Solve a Cryptic Crossword" as part of the Timeshift series, in which he recounted some of the crossword clues solved by Morse. Writing career The initial books written by Dexter were general studies textbooks. He began writing mysteries in 1972 during a family holiday. Last Bus to Woodstock was published in 1975 and introduced the character of Inspector Morse, the irascible detective whose penchants for cryptic crosswords, English literature, cask ale, and music by Wagner reflected Dexter's own enthusiasms. Dexter's plots used false leads and other red herrings, "presenting Morse, and his readers, with fiendishly difficult puzzles to solve". The success of the 33 two-hour episodes of the ITV television series Inspector Morse, produced between 1987 and 2000, brought further attention to Dexter's writings. The show featured Inspector Morse, played by John Thaw, and his assistant Sergeant Robert Lewis, played by Kevin Whately. In the manner of Alfred Hitchcock, Dexter made a cameo appearance in almost all episodes. From 2006 to 2015, Morse's assistant Lewis was featured in a 33-episode ITV series titled Lewis (Inspector Lewis in the United States). Lewis is assisted by DS James Hathaway, played by Laurence Fox. A prequel series, Endeavour, features a young Morse and stars Shaun Evans and Roger Allam. Endeavour was first broadcast on the ITV network in 2012 and the ninth and final series will be broadcast in 2023, taking young Morse's career into 1972. Dexter was a consultant for Lewis and the first few years of Endeavour. 
As with Morse, Dexter occasionally made cameo appearances in both Lewis and Endeavour. Although Dexter's military service was as a Morse code operator in the Royal Corps of Signals, the character was named after his friend Sir Jeremy Morse, a crossword devotee like Dexter. The music for the television series, written by Barrington Pheloung, used a motif based on the Morse code for Morse's name. Awards and honours Dexter received several Crime Writers' Association awards: two Silver Daggers for Service of All the Dead in 1979 and The Dead of Jericho in 1981; two Gold Daggers for The Wench is Dead in 1989 and The Way Through the Woods in 1992; and a Cartier Diamond Dagger for lifetime achievement in 1997. In 1996, Dexter received a Macavity Award for his short story "Evans Tries an O-Level". In 1980, he was elected a member of the by-invitation-only Detection Club. In 2005 Dexter became a Fellow by Special Election of St Cross College, Oxford. In the 2000 Birthday Honours Dexter was appointed an Officer of the Order of the British Empire for services to literature. In 2001 he was awarded the Freedom of the City of Oxford. In September 2011, the University of Lincoln awarded Dexter an honorary Doctor of Letters degree. Personal life In 1956 he married Dorothy Cooper. They had a daughter, Sally, and a son, Jeremy. Death On 21 March 2017 Dexter's publisher, Macmillan, said in a statement "With immense sadness, Macmillan announces the death of Colin Dexter who died peacefully at his home in Oxford this morning." Bibliography Inspector Morse novels Last Bus to Woodstock (1975) Last Seen Wearing (1976) The Silent World of Nicholas Quinn (1977) Service of All the Dead (1979) The Dead of Jericho (1981) The Riddle of the Third Mile (1983) The Secret of Annexe 3 (1986) The Wench is Dead (1989) The Jewel That Was Ours (1991) The Way Through the Woods (1992) The Daughters of Cain (1994) Death Is Now My Neighbour (1996) The Remorseful Day (1999) Novellas and short story collections The Inside Story (1993) Neighbourhood Watch (1993) Morse's Greatest Mystery (1993); also published as As Good as Gold "As Good as Gold" (Morse) "Morse's Greatest Mystery" (Morse) "Evans Tries an O-Level" "Dead as a Dodo" (Morse) "At the Lulu-Bar Motel" "Neighbourhood Watch" (Morse) "A Case of Mis-Identity" (a Sherlock Holmes pastiche) "The Inside Story" (Morse) "Monty's Revolver" "The Carpet-Bagger" "Last Call" (Morse) Uncollected short stories "The Burglar" in You, The Mail on Sunday (1994) "The Double Crossing" in Mysterious Pleasures (2003) "Between the Lines" in The Detection Collection (2005) "The Case of the Curious Quorum" (featuring Inspector Lewis) in The Verdict of Us All (2006) "The Other Half" in The Strand Magazine (February–May 2007) "Morse and the Mystery of the Drunken Driver" in Daily Mail (December 2008) "Clued Up" (4-page story featuring Lewis and Morse solving a crossword) in Cracking Cryptic Crosswords (2009) Other Foreword to Chambers Crossword Manual (2001) Chambers Book of Morse Crosswords (2006) Foreword to Oxford: A Cultural and Literary History (2007) Cracking Cryptic Crosswords: A Guide to Solving Cryptic Crosswords (2010) Foreword to Oxford Through the Lens (2016) See also Diogenes Small References External links 1930 births 2017 deaths People from Stamford, Lincolnshire People educated at Stamford School Alumni of Christ's College, Cambridge Cartier Diamond Dagger winners English crime fiction writers English male novelists English mystery writers British detective fiction writers Fellows of St 
Cross College, Oxford Writers from Oxford Inspector Morse Macavity Award winners Members of the Detection Club Officers of the Order of the British Empire Crossword compilers Royal Corps of Signals soldiers 20th-century British Army personnel
https://en.wikipedia.org/wiki/College
College
A college (Latin: collegium) is an educational institution or a constituent part of one. A college may be a degree-awarding tertiary educational institution, a part of a collegiate or federal university, an institution offering vocational education, or a secondary school. In most of the world, a college may be a high school or secondary school, a college of further education, a training institution that awards trade qualifications, a higher-education provider that does not have university status (often without its own degree-awarding powers), or a constituent part of a university. In the United States, a college may offer undergraduate programs – either as an independent institution or as the undergraduate program of a university – or it may be a residential college of a university or a community college, referring to (primarily public) higher education institutions that aim to provide affordable and accessible education, usually limited to two-year associate degrees. The word is generally also used as a synonym for a university in the US. Colleges in countries such as France, Belgium, and Switzerland provide secondary education. Etymology The word "college" is from the Latin verb lego, legere, legi, lectum, "to collect, gather together, pick", plus the preposition cum, "with", thus meaning "selected together". Thus "colleagues" are literally "persons who have been selected to work together". In ancient Rome a collegium was a "body, guild, corporation united in colleagueship; of magistrates, praetors, tribunes, priests, augurs; a political club or trade guild". Thus a college was a form of corporation or corporate body, an artificial legal person (body/corpus) with its own legal personality, with the capacity to enter into legal contracts, to sue and be sued. In mediaeval England there were colleges of priests, for example in chantry chapels; modern survivals include the Royal College of Surgeons in England (originally the Guild of Surgeons Within the City of London), the College of Arms in London (a body of heralds enforcing heraldic law), an electoral college (to elect representatives), etc., all groups of persons "selected in common" to perform a specified function and appointed by a monarch, founder or other person in authority. As for the modern "college of education", it was a body created for that purpose, for example Eton College was founded in 1440 by letters patent of King Henry VI for the constitution of a college of Fellows, priests, clerks, choristers, poor scholars, and old poor men, with one master or governor, whose duty it shall be to instruct these scholars and any others who may resort thither from any part of England in the knowledge of letters, and especially of grammar, without payment". Overview Higher education Within higher education, the term can be used to refer to: A constituent part of a collegiate university, for example King's College, Cambridge, or of a federal university, for example King's College London. A liberal arts college, an independent institution of higher education focusing on undergraduate education, such as Williams College or Amherst College. A liberal arts division of a university whose undergraduate program does not otherwise follow a liberal arts model, such as the Yuanpei College at Peking University. An institute providing specialised training, such as a college of further education, for example Belfast Metropolitan College, a teacher training college, or an art college. 
In the United States, college is sometimes but rarely a synonym for a research university, such as Dartmouth College, one of the eight universities in the Ivy League. In the United States, the undergraduate college of a university which also confers graduate degrees, such as Yale College, the undergraduate college within Yale University. Further education A sixth form college or college of further education is an educational institution in England, Wales, Northern Ireland, Belize, the Caribbean, Malta, Norway, Brunei, or Southern Africa, among others, where students aged 16 to 19 typically study for advanced school-level qualifications, such as A-levels, BTEC, HND or its equivalent and the International Baccalaureate Diploma, or school-level qualifications such as GCSEs. In Singapore and India, this is known as a junior college. The municipal government of the city of Paris uses the phrase "sixth form college" as the English name for a lycée. Secondary education In some national education systems, secondary schools may be called "colleges" or have "college" as part of their title. In Australia the term "college" is applied to any private or independent (non-government) primary and, especially, secondary school as distinct from a state school. Melbourne Grammar School, Cranbrook School, Sydney and The King's School, Parramatta are considered colleges. There has also been a recent trend to rename or create government secondary schools as "colleges". In the state of Victoria, some state high schools are referred to as secondary colleges, although the pre-eminent government secondary school for boys in Melbourne is still named Melbourne High School. In Western Australia, South Australia and the Northern Territory, "college" is used in the name of all state high schools built since the late 1990s, and also some older ones. In New South Wales, some high schools, especially multi-campus schools resulting from mergers, are known as "secondary colleges". In Queensland some newer schools which accept primary and high school students are styled state college, but state schools offering only secondary education are called "State High School". In Tasmania and the Australian Capital Territory, "college" refers to the final two years of high school (years 11 and 12), and the institutions which provide this. In this context, "college" is a system independent of the other years of high school. Here, the expression is a shorter version of matriculation college. In a number of Canadian cities, many government-run secondary schools are called "collegiates" or "collegiate institutes" (C.I.), a complicated form of the word "college" which avoids the usual "post-secondary" connotation. This is because these secondary schools have traditionally focused on academic, rather than vocational, subjects and ability levels (for example, collegiates offered Latin while vocational schools offered technical courses). Some private secondary schools (such as Upper Canada College, Vancouver College) choose to use the word "college" in their names nevertheless. Some secondary schools elsewhere in the country, particularly ones within the separate school system, may also use the word "college" or "collegiate" in their names. In New Zealand the word "college" normally refers to a secondary school for ages 13 to 17 and "college" appears as part of the name especially of private or integrated schools. "Colleges" most frequently appear in the North Island, whereas "high schools" are more common in the South Island. 
In the Netherlands, "college" is equivalent to HBO (Higher professional education). It is oriented towards professional training with clear occupational outlook, unlike universities which are scientifically oriented. In South Africa, some secondary schools, especially private schools on the English public school model, have "college" in their title. Thus no less than six of South Africa's Elite Seven high schools call themselves "college" and fit this description. A typical example of this category would be St John's College. Private schools that specialize in improving children's marks through intensive focus on examination needs are informally called "cram-colleges". In Sri Lanka the word "college" (known as Vidyalaya in Sinhala) normally refers to a secondary school, which usually signifies above the 5th standard. During the British colonial period a limited number of exclusive secondary schools were established based on English public school model (Royal College Colombo, S. Thomas' College, Mount Lavinia, Trinity College, Kandy) these along with several Catholic schools (St. Joseph's College, Colombo, St Anthony's College) traditionally carry their name as colleges. Following the start of free education in 1931 large group of central colleges were established to educate the rural masses. Since Sri Lanka gained Independence in 1948, many schools that have been established have been named as "college". Other As well as an educational institution, the term, in accordance with its etymology, may also refer to any formal group of colleagues set up under statute or regulation; often under a Royal Charter. Examples include an electoral college, the College of Arms, a college of canons, and the College of Cardinals. Other collegiate bodies include professional associations, particularly in medicine and allied professions. In the UK these include the Royal College of Nursing and the Royal College of Physicians. Examples in the United States include the American College of Physicians, the American College of Surgeons, and the American College of Dentists. An example in Australia is the Royal Australian College of General Practitioners. College by country The different ways in which the term "College" is used to describe educational institutions in various regions of the world is listed below: Americas Canada In Canadian English, the term "college" usually refers to a trades school, applied arts/science/technology/business/health school or community college. These are post-secondary institutions granting certificates, diplomas, associate degrees and (in some cases) bachelor's degrees. The French acronym specific to public institutions within Quebec’s particular system of pre-university and technical education is CEGEP (Collège d'enseignement général et professionnel, "college of general and professional education"). They are collegiate-level institutions that a student typically enrols in if they wish to continue onto university in the Quebec education system, or to learn a trade. In Ontario and Alberta, there are also institutions that are designated university colleges, which only grant undergraduate degrees. This is to differentiate between universities, which have both undergraduate and graduate programs and those that do not. In Canada, there is a strong distinction between "college" and "university". 
In conversation, one specifically would say either "they are going to university" (i.e., studying for a three- or four-year degree at a university) or "they are going to college" (i.e., studying at a technical/career training). Usage in a university setting The term college also applies to distinct entities that formally act as an affiliated institution of the university, formally referred to as federated college, or affiliated colleges. A university may also formally include several constituent colleges, forming a collegiate university. Examples of collegiate universities in Canada include Trent University, and the University of Toronto. These types of institutions act independently, maintaining their own endowments, and properties. However, they remain either affiliated, or federated with the overarching university, with the overarching university being the institution that formally grants the degrees. For example, Trinity College was once an independent institution, but later became federated with the University of Toronto. Several centralized universities in Canada have mimicked the collegiate university model; although constituent colleges in a centralized university remains under the authority of the central administration. Centralized universities that have adopted the collegiate model to a degree includes the University of British Columbia, with Green College and St. John's College; and the Memorial University of Newfoundland, with Sir Wilfred Grenfell College. Occasionally, "college" refers to a subject specific faculty within a university that, while distinct, are neither federated nor affiliated—College of Education, College of Medicine, College of Dentistry, College of Biological Science among others. The Royal Military College of Canada is a military college which trains officers for the Canadian Armed Forces. The institution is a full-fledged university, with the authority to issue graduate degrees, although it continues to word the term college in its name. The institution's sister schools, Royal Military College Saint-Jean also uses the term college in its name, although it academic offering is akin to a CEGEP institution in Quebec. A number of post-secondary art schools in Canada formerly used the word college in their names, despite formally being universities. However, most of these institutions were renamed, or re-branded in the early 21st century, omitting the word college from its name. Usage in secondary education The word college continues to be used in the names public separate secondary schools in Ontario. A number of independent schools across Canada also use the word college in its name. Public secular school boards in Ontario also refer to their secondary schools as collegiate institutes. However, usage of the word collegiate institute varies between school boards. Collegiate institute is the predominant name for secondary schools in Lakehead District School Board, and Toronto District School Board, although most school boards in Ontario use collegiate institute alongside high school, and secondary school in the names of their institutions. Similarly, secondary schools in Regina, and Saskatoon are referred to as Collegiate. Chile In Chile, the term "college" is usually used in the name of some bilingual schools, like Santiago College, Saint George's College etc. 
Since 2009 the Pontifical Catholic University of Chile incorporated college as a bachelor's degree, it has a Bachelor of Natural Sciences and Mathematics, a Bachelor of Social Science and a Bachelor of Arts and Humanities. It has the same system as the American universities, it combines majors and minors. And it let the students continue a higher degree in the same university once finished. United States In the United States, there were 5,916 post-secondary institutions (universities and colleges) having peaked at 7,253 in 2012–13 and fallen every year since. A "college" in the US can refer to a constituent part of a university (which can be a residential college, the sub-division of the university offering undergraduate courses, or a school of the university offering particular specialized courses), an independent institution offering bachelor's-level courses, or an institution offering instruction in a particular professional, technical or vocational field. In popular usage, the word "college" is the generic term for any post-secondary undergraduate education. Americans "go to college" after high school, regardless of whether the specific institution is formally a college or a university. Some students choose to dual-enroll, by taking college classes while still in high school. The word and its derivatives are the standard terms used to describe the institutions and experiences associated with American post-secondary undergraduate education. Students must pay for college before taking classes. Some borrow the money via loans, and some students fund their educations with cash, scholarships, grants, or some combination of these payment methods. In 2011, the state or federal government subsidized $8,000 to $100,000 for each undergraduate degree. For state-owned schools (called "public" universities), the subsidy was given to the college, with the student benefiting from lower tuition. The state subsidized on average 50% of public university tuition. Colleges vary in terms of size, degree, and length of stay. Two-year colleges, also known as junior or community colleges, usually offer an associate degree, and four-year colleges usually offer a bachelor's degree. Often, these are entirely undergraduate institutions, although some have graduate school programs. Four-year institutions in the U.S. that emphasize a liberal arts curriculum are known as liberal arts colleges. Until the 20th century, liberal arts, law, medicine, theology, and divinity were about the only form of higher education available in the United States. These schools have traditionally emphasized instruction at the undergraduate level, although advanced research may still occur at these institutions. While there is no national standard in the United States, the term "university" primarily designates institutions that provide undergraduate and graduate education. A university typically has as its core and its largest internal division an undergraduate college teaching a liberal arts curriculum, also culminating in a bachelor's degree. What often distinguishes a university is having, in addition, one or more graduate schools engaged in both teaching graduate classes and in research. Often these would be called a School of Law or School of Medicine, (but may also be called a college of law, or a faculty of law). An exception is Vincennes University, Indiana, which is styled and chartered as a "university" even though almost all of its academic programs lead only to two-year associate degrees. 
Some institutions, such as Dartmouth College and The College of William & Mary, have retained the term "college" in their names for historical reasons. In one unique case, Boston College and Boston University, the former located in Chestnut Hill, Massachusetts and the latter located in Boston, Massachusetts, are completely separate institutions. Usage of the terms varies among the states. In 1996, for example, Georgia changed all of its four-year institutions previously designated as colleges to universities, and all of its vocational technology schools to technical colleges. The terms "university" and "college" do not exhaust all possible titles for an American institution of higher education. Other options include "institute" (Worcester Polytechnic Institute and Massachusetts Institute of Technology), "academy" (United States Military Academy), "union" (Cooper Union), "conservatory" (New England Conservatory), and "school" (Juilliard School). In colloquial use, they are still referred to as "college" when referring to their undergraduate studies. The term college is also, as in the United Kingdom, used for a constituent semi-autonomous part of a larger university but generally organized on academic rather than residential lines. For example, at many institutions, the undergraduate portion of the university can be briefly referred to as the college (such as The College of the University of Chicago, Harvard College at Harvard, or Columbia College at Columbia) while at others, such as the University of California, Berkeley, "colleges" are collections of academic programs and other units that share some common characteristics, mission, or disciplinary focus (the "college of engineering", the "college of nursing", and so forth). There exist other variants for historical reasons, including some uses that exist because of mergers and acquisitions; for example, Duke University, which was called Trinity College until the 1920s, still calls its main undergraduate subdivision Trinity College of Arts and Sciences. Residential colleges Some American universities, such as Princeton, Rice, and Yale have established residential colleges (sometimes, as at Harvard, the first to establish such a system in the 1930s, known as houses) along the lines of Oxford or Cambridge. Unlike the Oxbridge colleges, but similarly to Durham, these residential colleges are not autonomous legal entities nor are they typically much involved in education itself, being primarily concerned with room, board, and social life. At the University of Michigan, University of California, San Diego and the University of California, Santa Cruz, each residential college teaches its own core writing courses and has its own distinctive set of graduation requirements. Many U.S. universities have placed increased emphasis on their residential colleges in recent years. This is exemplified by the creation of new colleges at Ivy League schools such as Yale University and Princeton University, and efforts to strengthen the contribution of the residential colleges to student education, including through a 2016 taskforce at Princeton on residential colleges. Origin of the U.S. usage The founders of the first institutions of higher education in the United States were graduates of the University of Oxford and the University of Cambridge. The small institutions they founded would not have seemed to them like universities – they were tiny and did not offer the higher degrees in medicine and theology. 
Furthermore, they were not composed of several small colleges. Instead, the new institutions felt like the Oxford and Cambridge colleges they were used to – small communities, housing and feeding their students, with instruction from residential tutors (as in the United Kingdom, described above). When the first students graduated, these "colleges" assumed the right to confer degrees upon them, usually with authority—for example, The College of William & Mary has a Royal Charter from the British monarchy allowing it to confer degrees while Dartmouth College has a charter permitting it to award degrees "as are usually granted in either of the universities, or any other college in our realm of Great Britain." The leaders of Harvard College (which granted America's first degrees in 1642) might have thought of their college as the first of many residential colleges that would grow up into a New Cambridge university. However, over time, few new colleges were founded there, and Harvard grew and added higher faculties. Eventually, it changed its title to university, but the term "college" had stuck and "colleges" have arisen across the United States. In U.S. usage, the word "college" not only embodies a particular type of school, but has historically been used to refer to the general concept of higher education when it is not necessary to specify a school, as in "going to college" or "college savings accounts" offered by banks. In a survey of more than 2,000 college students in 33 states and 156 different campuses, the U.S. Public Interest Research Group found the average student spends as much as $1,200 each year on textbooks and supplies alone. By comparison, the group says that's the equivalent of 39 percent of tuition and fees at a community college, and 14 percent of tuition and fees at a four-year public university. Morrill Land-Grant Act In addition to private colleges and universities, the U.S. also has a system of government funded, public universities. Many were founded under the Morrill Land-Grant Colleges Act of 1862. A movement had arisen to bring a form of more practical higher education to the masses, as "...many politicians and educators wanted to make it possible for all young Americans to receive some sort of advanced education." The Morrill Act "...made it possible for the new western states to establish colleges for the citizens." Its goal was to make higher education more easily accessible to the citizenry of the country, specifically to improve agricultural systems by providing training and scholarship in the production and sales of agricultural products, and to provide formal education in "...agriculture, home economics, mechanical arts, and other professions that seemed practical at the time." The act was eventually extended to allow all states that had remained with the Union during the American Civil War, and eventually all states, to establish such institutions. Most of the colleges established under the Morrill Act have since become full universities, and some are among the elite of the world. Benefits of college Selection of a four-year college as compared to a two-year junior college, even by marginal students such as those with a C+ grade average in high school and SAT scores in the mid 800s, increases the probability of graduation and confers substantial economic and social benefits. Asia Bangladesh In Bangladesh, educational institutions offering higher secondary (11th–12th grade) education are known as colleges. 
Hong Kong In Hong Kong, the term 'college' is used by tertiary institutions as either part of their names or to refer to a constituent part of the university, such as the colleges in the collegiate The Chinese University of Hong Kong; or to a residence hall of a university, such as St. John's College, University of Hong Kong. Many older secondary schools have the term 'college' as part of their names. India The modern system of education was heavily influenced by the British starting in 1835. In India, the term "college" is commonly reserved for institutions that offer high school diplomas at year 12 ("Junior College", similar to American high schools), and those that offer the bachelor's degree; some colleges, however, offer programmes up to PhD level. Generally, colleges are located in different parts of a state and all of them are affiliated to a regional university. The colleges offer programmes leading to degrees of that university. Colleges may be either Autonomous or non-autonomous. Autonomous Colleges are empowered to establish their own syllabus, and conduct and assess their own examinations; in non-autonomous colleges, examinations are conducted by the university, at the same time for all colleges under its affiliation. There are several hundred universities and each university has affiliated colleges, often a large number. The first liberal arts and sciences college in India was "Cottayam College" or the "Syrian College", Kerala in 1815. The First inter linguistic residential education institution in Asia was started at this college. At present it is a Theological seminary which is popularly known as Orthodox Theological Seminary or Old Seminary. After that, CMS College, Kottayam, established in 1817, and the Presidency College, Kolkata, also 1817, initially known as Hindu College. The first college for the study of Christian theology and ecumenical enquiry was Serampore College (1818). The first Missionary institution to impart Western style education in India was the Scottish Church College, Calcutta (1830). The first commerce and economics college in India was Sydenham College, Mumbai (1913). In India a new term has been introduced that is Autonomous Institutes & Colleges. An autonomous Colleges are colleges which need to be affiliated to a certain university. These colleges can conduct their own admission procedure, examination syllabus, fees structure etc. However, at the end of course completion, they cannot issue their own degree or diploma. The final degree or diploma is issued by the affiliated university. Also, some significant changes can pave way under the NEP (New Education Policy 2020) which may affect the present guidelines for universities and colleges. Israel In Israel, any non-university higher-learning facility is called a college. Institutions accredited by the Council for Higher Education in Israel (CHE) to confer a bachelor's degree are called "Academic Colleges" (; plural ). These colleges (at least 4 for 2012) may also offer master's degrees and act as Research facilities. There are also over twenty teacher training colleges or seminaries, most of which may award only a Bachelor of Education (BEd) degree. Academic colleges: Any educational facility that had been approved to offer at least bachelor's degree is entitled by CHE to use the term academic college in its name. Engineering academic college: Any academic facility that offer at least bachelor's degree and most of it faculties are providing an Engineering degree and Engineering license. 
Educational academic college: After an educational facility that had been approved for "Teachers seminar" status is then approved to provide a Bachelor of Education, its name is changed to include "Educational Academic college." Technical college: A "Technical college" () is an educational facility that is approved to allow to provide P.E degree (הנדסאי) (14'th class) or technician (טכנאי) (13'th class) diploma and licenses. Training College: A "Training College" ( or ) is an educational facility that provides basic training allowing a person to receive a working permit in a field such as alternative medicine, cooking, Art, Mechanical, Electrical and other professions. A trainee could receive the right to work in certain professions as apprentice (j. mechanic, j. Electrician etc.). After working in the training field for enough time an apprentice could have a license to operate (Mechanic, Electrician). This educational facility is mostly used to provide basic training for low tech jobs and for job seekers without any training that are provided by the nation's Employment Service (שירות התעסוקה). Macau Following the Portuguese usage, the term "college" (colégio) in Macau has traditionally been used in the names for private (and non-governmental) pre-university educational institutions, which correspond to form one to form six level tiers. Such schools are usually run by the Roman Catholic church or missionaries in Macau. Examples include Chan Sui Ki Perpetual Help College, Yuet Wah College, and Sacred Heart Canossian College. Philippines In the Philippines, colleges usually refer to institutions of learning that grant degrees but whose scholastic fields are not as diverse as that of a university (University of Santo Tomas, University of the Philippines, Ateneo de Manila University, De La Salle University, Far Eastern University, and AMA University), such as the San Beda College which specializes in law, AMA Computer College whose campuses are spread all over the Philippines which specializes in information and computing technologies, and the Mapúa Institute of Technology which specializes in engineering, or to component units within universities that do not grant degrees but rather facilitate the instruction of a particular field, such as a College of Science and College of Engineering, among many other colleges of the University of the Philippines. A state college may not have the word "college" on its name, but may have several component colleges, or departments. Thus, the Eulogio Amang Rodriguez Institute of Science and Technology is a state college by classification. Usually, the term "college" is also thought of as a hierarchical demarcation between the term "university", and quite a number of colleges seek to be recognized as universities as a sign of improvement in academic standards (Colegio de San Juan de Letran, San Beda College), and increase in the diversity of the offered degree programs (called "courses"). For private colleges, this may be done through a survey and evaluation by the Commission on Higher Education and accrediting organizations, as was the case of Urios College which is now the Fr. Saturnino Urios University. For state colleges, it is usually done by a legislation by the Congress or Senate. In common usage, "going to college" simply means attending school for an undergraduate degree, whether it's from an institution recognized as a college or a university. 
When it comes to referring to the level of education, college is the term more used to be synonymous to tertiary or higher education. A student who is or has studied his/her undergraduate degree at either an institution with college or university in its name is considered to be going to or have gone to college. Singapore The term "college" in Singapore is generally only used for pre-university educational institutions called "Junior Colleges", which provide the final two years of secondary education (equivalent to sixth form in British terms or grades 11–12 in the American system). Since 1 January 2005, the term also refers to the three campuses of the Institute of Technical Education with the introduction of the "collegiate system", in which the three institutions are called ITE College East, ITE College Central, and ITE College West respectively. The term "university" is used to describe higher-education institutions offering locally conferred degrees. Institutions offering diplomas are called "polytechnics", while other institutions are often referred to as "institutes" and so forth. Sri Lanka There are several professional and vocational institutions that offer post-secondary education without granting degrees that are referred to as "colleges". This includes the Sri Lanka Law College, the many Technical Colleges and Teaching Colleges. Turkey In Turkey, the term "kolej" (college) refers to a private high school, typically preceded by one year of preparatory language education. Notable Turkish colleges include Robert College, Uskudar American Academy, American Collegiate Institute and Tarsus American College. Africa South Africa Although the term "college" is hardly used in any context at any university in South Africa, some non-university tertiary institutions call themselves colleges. These include teacher training colleges, business colleges and wildlife management colleges. See: List of universities in South Africa#Private colleges and universities; List of post secondary institutions in South Africa. Zimbabwe The term college is mainly used by private or independent secondary schools with Advanced Level (Upper 6th formers) and also Polytechnic Colleges which confer diplomas only. A student can complete secondary education (International General Certificate of Secondary Education, IGCSE) at 16 years and proceed straight to a poly-technical college or they can proceed to Advanced level (16 to 19 years) and obtain a General Certificate of Education (GCE) certificate which enables them to enroll at a university, provided they have good grades. Alternatively, with lower grades, the GCE certificate holders will have an added advantage over their GCSE counterparts if they choose to enroll at a polytechnical college. Some schools in Zimbabwe choose to offer the International Baccalaureate studies as an alternative to the IGCSE and GCE. Europe Greece Kollegio (in Greek Κολλέγιο) refers to the Centers of Post-Lyceum Education (in Greek Κέντρο Μεταλυκειακής Εκπαίδευσης, abbreviated as KEME), which are principally private and belong to the Greek post-secondary education system. Some of them have links to EU or US higher education institutions or accreditation organizations, such as the NEASC. Kollegio (or Kollegia in plural) may also refer to private non-tertiary schools, such as the Athens College. Ireland In Ireland the term "college" is normally used to describe an institution of tertiary education. University students often say they attend "college" rather than "university". 
Until 1989, no university provided teaching or research directly; these were formally offered by a constituent college of the university. There are a number of secondary education institutions that traditionally used the word "college" in their names: these are either older, private schools (such as Belvedere College, Gonzaga College, Castleknock College, and St. Michael's College) or what were formerly a particular kind of secondary school. These secondary schools, formerly known as "technical colleges," were renamed "community colleges," but remain secondary schools. The country's only ancient university is the University of Dublin. Created during the reign of Elizabeth I, it is modelled on the collegiate universities of Cambridge and Oxford. However, only one constituent college was ever founded, hence the curious position of Trinity College Dublin today; although both are usually considered one and the same, the university and college are completely distinct corporate entities with separate and parallel governing structures. Among more modern foundations, the National University of Ireland, founded in 1908, consisted of constituent colleges and recognised colleges until 1997. The former are now referred to as constituent universities – institutions that are essentially universities in their own right. The National University can trace its existence back to 1850 and the creation of the Queen's University of Ireland, and to the creation of the Catholic University of Ireland in 1854. From 1880, the degree awarding roles of these two universities were taken over by the Royal University of Ireland, which remained until the creation of the National University of Ireland and Queen's University Belfast in 1908. The state's two new universities, Dublin City University and University of Limerick, were initially National Institute for Higher Education institutions. These institutions offered university-level academic degrees and research from the start of their existence and were awarded university status in 1989 in recognition of this. Third level technical education in the state has been carried out in the Institutes of Technology, which were established from the 1970s as Regional Technical Colleges. These institutions have delegated authority which entitles them to award degrees and diplomas from Quality and Qualifications Ireland (QQI) in their own names. A number of private colleges exist, such as Dublin Business School, providing undergraduate and postgraduate courses validated by QQI and in some cases by other universities. Other types of college include colleges of education, such as the Church of Ireland College of Education. These are specialist institutions, often linked to a university, which provide both undergraduate and postgraduate academic degrees for people who want to train as teachers. A number of state-funded further education colleges exist, which offer vocational education and training in a range of areas from business studies and information and communications technology to sports injury therapy. These courses are usually one, two or, less often, three years in duration and are validated by QQI at Levels 5 or 6, or for the BTEC Higher National Diploma award, which is a Level 6/7 qualification, validated by Edexcel. There are numerous private colleges (particularly in Dublin and Limerick) which offer both further and higher education qualifications.
These degrees and diplomas are often certified by foreign universities or international awarding bodies and are aligned to the National Framework of Qualifications at Levels 6, 7 and 8. Netherlands In the Netherlands, there are three main educational routes after high school. MBO (middle-level applied education) is the equivalent of junior college, designed to prepare students either for skilled trades, technical occupations and support roles in professions such as engineering, accountancy, business administration, nursing, medicine, architecture, and criminology, or for additional education at another college with more advanced academic material. HBO (higher professional education) is the equivalent of college and has a professional orientation. After HBO (typically 4–6 years), students can enroll in a (professional) master's program (1–2 years) or enter the job market. HBO is taught at vocational universities (hogescholen), of which there are over 40 in the Netherlands, each offering a broad variety of programs, with the exception of some that specialize in arts or agriculture. The hogescholen are not allowed to call themselves universities in Dutch; this restriction also extends to English, and therefore HBO institutions are known as universities of applied sciences. WO (scientific education) is the equivalent of university-level education and has an academic orientation. HBO graduates can be awarded two titles, Baccalaureus (bc.) and Ingenieur (ing.). At a WO institution, many more bachelor's and master's titles can be awarded. Bachelor's degrees: Bachelor of Arts (BA), Bachelor of Science (BSc) and Bachelor of Laws (LLB). Master's degrees: Master of Arts (MA), Master of Laws (LLM) and Master of Science (MSc). The PhD title is a research degree awarded upon completion and defense of a doctoral thesis. Portugal Presently in Portugal, the term colégio (college) is normally used as a generic reference to a private (non-government) school that provides education from basic to secondary level. Many of the private schools include the term colégio in their name. Some special public schools – usually of the boarding school type – also include the term in their name, with a notable example being the Colégio Militar (Military College). The term colégio interno (literally "internal college") is used specifically as a generic reference to a boarding school. Until the 19th century, a colégio was usually a secondary or pre-university school, of public or religious nature, where the students usually lived together. A model for these colleges was the Royal College of Arts and Humanities, founded in Coimbra by King John III of Portugal in 1542. United Kingdom Secondary education and further education Further education (FE) colleges and sixth form colleges are institutions providing further education to students over 16. Some of these also provide higher education courses (see below). In the context of secondary education, 'college' is used in the names of some private schools, e.g. Eton College and Winchester College. Higher education In higher education, a college is normally a provider that does not hold university status, although it can also refer to a constituent part of a collegiate or federal university or a grouping of academic faculties or departments within a university.
Traditionally the distinction between colleges and universities was that colleges did not award degrees while universities did, but this is no longer the case with NCG having gained taught degree awarding powers (the same as some universities) on behalf of its colleges, and many of the colleges of the University of London holding full degree awarding powers and being effectively universities. Most colleges, however, do not hold their own degree awarding powers and continue to offer higher education courses that are validated by universities or other institutions that can award degrees. In England, over 60% of the higher education providers directly funded by HEFCE (208/340) are sixth-form or further education colleges, often termed colleges of further and higher education, along with 17 colleges of the University of London, one university college, 100 universities, and 14 other providers (six of which use 'college' in their name). Overall, this means over two-thirds of state-supported higher education providers in England are colleges of one form or another. Many private providers are also called colleges, e.g. the New College of the Humanities and St Patrick's College, London. Colleges within universities vary immensely in their responsibilities. The large constituent colleges of the University of London are effectively universities in their own right; colleges in some universities, including those of the University of the Arts London and smaller colleges of the University of London, run their own degree courses but do not award degrees; those at the University of Roehampton provide accommodation and pastoral care as well as delivering the teaching on university courses; those at Oxford and Cambridge deliver some teaching on university courses as well as providing accommodation and pastoral care; and those in Durham, Kent, Lancaster and York provide accommodation and pastoral care but do not normally participate in formal teaching. The legal status of these colleges also varies widely, with University of London colleges being independent corporations and recognised bodies, Oxbridge colleges, colleges of the University of the Highlands and Islands (UHI) and some Durham colleges being independent corporations and listed bodies, most Durham colleges being owned by the university but still listed bodies, and those of other collegiate universities not having formal recognition. When applying for undergraduate courses through UCAS, University of London colleges are treated as independent providers, colleges of Oxford, Cambridge, Durham and UHI are treated as locations within the universities that can be selected by specifying a 'campus code' in addition to selecting the university, and colleges of other universities are not recognised. The UHI and the University of Wales Trinity Saint David (UWTSD) both include further education colleges. However, while the UHI colleges integrate FE and HE provision, UWTSD maintains a separation between the university campuses (Lampeter, Carmarthen and Swansea) and the two colleges (Coleg Sir Gâr and Coleg Ceredigion; n.b. coleg is Welsh for college), which although part of the same group are treated as separate institutions rather than colleges within the university. A university college is an independent institution with the power to award taught degrees, but which has not been granted university status.
University College is a protected title that can only be used with permission, although note that University College London, University College, Oxford and University College, Durham are colleges within their respective universities and not university colleges (in the case of UCL, it holds full degree awarding powers that set it above a university college), while University College Birmingham is a university in its own right and also not a university college. Oceania Australia In Australia a college may be an institution of tertiary education that is smaller than a university, run independently or as part of a university. Following a reform in the 1980s, many of the formerly independent colleges now belong to larger universities. Within universities, there are residential colleges, called university colleges, which provide residence for students, both undergraduate and postgraduate. These colleges often provide additional tutorial assistance, and some host theological study. Many colleges have strong traditions and rituals, so they combine dormitory-style accommodation with fraternity or sorority culture. Most technical and further education institutions (TAFEs), which offer certificate and diploma vocational courses, are styled "TAFE colleges" or "Colleges of TAFE". In some places, such as Tasmania, college refers to a type of school for Year 11 and 12 students, e.g. Don College. New Zealand The constituent colleges of the former University of New Zealand (such as Canterbury University College) have become independent universities. Some halls of residence associated with New Zealand universities retain the name of "college", particularly at the University of Otago (which, although brought under the umbrella of the University of New Zealand, already possessed university status and degree awarding powers). The institutions formerly known as "teacher-training colleges" now style themselves "colleges of education". Some universities, such as the University of Canterbury, have divided their university into constituent administrative "Colleges" – the College of Arts containing departments that teach Arts, Humanities and Social Sciences, the College of Science containing Science departments, and so on. This largely follows the Cambridge model, discussed above. As in the United Kingdom, some professional bodies in New Zealand style themselves as "colleges", for example the Royal Australasian College of Surgeons and the Royal Australasian College of Physicians. In some parts of the country, secondary school is often referred to as college and the term is used interchangeably with high school. This sometimes confuses people from other parts of New Zealand. But in all parts of the country many secondary schools have "College" in their name, such as Rangitoto College, New Zealand's largest secondary school. Notes References External links See also Community college Residential college University college Vocational university Madrasa Ashrama (stage) Educational stages Higher education Types of university or college Youth
2,548
5,690
https://en.wikipedia.org/wiki/Chalmers%20University%20of%20Technology
Chalmers University of Technology
Chalmers University of Technology (often shortened to Chalmers) is a Swedish university located in Gothenburg that conducts research and education in technology and natural sciences. The university has approximately 3,100 employees and 10,000 students, and offers education in engineering, science, shipping, architecture and other management areas. Chalmers is highly reputed for education and research worldwide; it is categorized among the leading European technical universities and is consistently ranked among the top universities in the world. Chalmers is coordinating the Graphene Flagship, the European Union's biggest research initiative to bring graphene innovation out of the lab and into commercial applications, and leading the development of a Swedish quantum computer. History The university was founded in 1829 following a donation by William Chalmers, a director of the Swedish East India Company. He donated part of his fortune for the establishment of an "industrial school". The university was run as a private institution until 1937, when it became the second state-owned technical university. In 1994 the government of Sweden reorganised Chalmers into a private company (aktiebolag) owned by a government-controlled foundation. Chalmers is one of only three universities in Sweden which are named after a person, the other two being Karolinska Institutet and Linnaeus University. Departments Chalmers University of Technology has the following 13 departments: Architecture and Civil Engineering Biology and Biological Engineering Chemistry and Chemical Engineering Communication and Learning in Science Computer Science and Engineering Electrical Engineering Industrial and Materials Science Mathematical Sciences Mechanics and Maritime Sciences Microtechnology and Nanoscience Physics Space, Earth and Environment Technology Management and Economics Furthermore, Chalmers is home to six Areas of Advance and six national competence centers in key fields such as materials, mathematical modelling, environmental science, and vehicle safety. Research infrastructure Chalmers University of Technology's research infrastructure includes everything from advanced real or virtual labs to large databases, computer capacity for large-scale calculations and research facilities. Chalmers AI Research Centre, CHAIR Chalmers Centre for Computational Science and Engineering, C3SE Chalmers Mass Spectrometry Infrastructure, CMSI Chalmers Power Central Chalmers Materials Analysis Laboratory Chalmers Simulator Centre Chemical Imaging Infrastructure Facility for Computational Systems Biology HSB Living Lab Nanofabrication Laboratory Onsala Space Observatory Revere – Chalmers Resource for Vehicle Research The National laboratory in terahertz characterisation SAFER - Vehicle and Traffic Safety Centre at Chalmers Rankings and reputation Since 2012, Chalmers has held the highest reputation among Swedish universities in Kantar Sifo's Reputation Index. According to the survey, Chalmers is the best-known university in Sweden and is regarded as a successful and competitive high-class institution with a large contribution to society and credibility in the media. In 2018, a benchmarking report from MIT ranked Chalmers among the top 10 in the world for engineering education.
Additionally, based on the U-Multirank rankings, the European Commission recognized Chalmers as one of Europe's top universities in 2019, while in 2022 Chalmers was characterized as a top-performing university across various indicators (teaching and learning, research, knowledge transfer and international orientation), with the highest number of 'A' (very good) scores at the institutional level in Sweden. Furthermore, in 2020, the World University Research Rankings placed Chalmers 12th in the world based on the evaluation of three key research aspects, namely research multi-disciplinarity, research impact, and research cooperativeness, while the QS World University Rankings placed Chalmers 81st in the world for graduate employability. Additionally, in 2021, the Academic Ranking of World Universities placed Chalmers 51–75 in the world in the field of electrical and electronic engineering, the QS World University Rankings placed Chalmers 79th in the world in the field of engineering and technology, the Times Higher Education World University Rankings ranked Chalmers 68th in the world for engineering and technology, and the U.S. News & World Report Best Global University Ranking placed Chalmers 84th in the world for engineering. In the 2011 International Professional Ranking of Higher Education Institutions, which is established on the basis of the number of alumni holding a post of Chief Executive Officer (CEO) or equivalent in one of the Fortune Global 500 companies, Chalmers ranked 38th in the world, 1st in Sweden and 15th in Europe. Ties and partnerships Chalmers is a member of the IDEA League network, a strategic alliance between five leading European universities of science and technology. The scope of the network is to provide an environment for students, researchers and staff to share knowledge, experience and resources. Chalmers is also a member of the Nordic Five Tech network, a strategic alliance of the five leading technical universities in Denmark, Finland, Norway and Sweden. The Nordic Five Tech universities are amongst the top international technical universities, with the goal of creating synergies within education, research and innovation. Moreover, Chalmers is a partner of UNITECH International, an organization consisting of distinguished technical universities and multinational companies across Europe. UNITECH helps bridge the gap between the industrial and academic worlds, offering exchange programs consisting of studies as well as an integrated internship at one of the corporate partners. Additionally, Chalmers has established formal agreements with three leading materials science centers: the University of California, Santa Barbara, ETH Zurich and Stanford University. Within the framework of the agreements, a yearly bilateral workshop is organized, and exchange of researchers is supported. Chalmers has general exchange agreements with many European and U.S. universities and maintains a special exchange program agreement with National Chiao Tung University (NCTU) in Taiwan, where exchange students from the two universities maintain offices that, among other things, help local students apply and prepare for an exchange year and act as representatives. Furthermore, Chalmers has strong partnerships with major industries such as Ericsson, Volvo, Saab AB and AstraZeneca. Students Approximately 40% of Sweden's graduate engineers and architects are educated at Chalmers.
Each year, around 250 postgraduate degrees are awarded as well as 850 graduate degrees. About 1,000 postgraduate students attend programmes at the university, and many students take Master of Science engineering programmes and the Master of Architecture programme. Since 2007, all master's programmes have been taught in English for both national and international students. This was a result of the adaptation to the Bologna process, which started in 2004 at Chalmers (the first technical university in Sweden to do so). Currently, about 10% of all students at Chalmers come from countries outside Sweden to enrol in a master's or PhD programme. Around 2,700 students also attend Bachelor of Science engineering programmes, merchant marine and other undergraduate courses at Campus Lindholmen. Chalmers also shares some students with Gothenburg University in the joint IT University project. The IT University focuses exclusively on information technology and offers bachelor's and master's programmes with degrees issued from either Chalmers or Gothenburg University, depending on the programme. Chalmers confers honorary doctoral degrees on people outside the university who have shown great merit in their research or in society. Organization Chalmers is an aktiebolag with 100 shares à 1,000 SEK, all of which are owned by the Chalmers University of Technology Foundation, a private foundation, which appoints the university board and the president. The foundation's members are appointed by the Swedish government (4 to 8 seats); the departments appoint one member, the student union appoints one member, and the president automatically holds one seat. Each department is led by a department head, usually a member of the faculty of that department. The faculty senate represents members of the faculty when decisions are taken. Campuses In 1937, the school moved from the city centre to the new Gibraltar Campus, named after the mansion which owned the grounds, where it is now located. The Lindholmen College Campus was created in the early 1990s and is located on the island of Hisingen. Campus Johanneberg and Campus Lindholmen, as they are now called, are connected by bus lines. Student societies and traditions Traditions include the graduation ceremony and the Cortège procession, an annual public event. Chalmers Students' Union Chalmers Aerospace Club – founded in 1981. In Swedish frequently also referred to as Chalmers rymdgrupp (roughly Chalmers Space Group). Members of CAC led the ESA-funded CACTEX (Chalmers Aerospace Club Thermal EXperiment) project, in which the thermal conductivity of alcohol at zero gravity was investigated using a sounding rocket. Chalmers Alternative Sports – student association organizing trips and other activities working to promote alternative sports. Every year the Chalmers Wake arranges a pond wakeboard contest in the fountain outside the architecture building at Chalmers. Chalmersbaletten Chalmers Ballong Corps Chalmers Baroque Ensemble Chalmers Business Society (CBS) CETAC Chalmers Choir ETA (E-sektionens Teletekniska Avdelning) – founded in 1935, it is a student-run amateur radio society that also engages in hobby electronics.
Chalmers Film and Photography Committee (CFFC) Chalmersspexet – Amateur theater group which has produced new plays since 1948 Chalmers International Reception Committee (CIRC) XP – Committee that is responsible for the experimental workshop, a workshop open for students Chalmers Program Committee – PU Chalmers Students for Sustainability (CSS) – promoting sustainable development among the students and runs projects, campaigns and lectures Föreningen Chalmers Skeppsbyggare, Chalmers Naval Architecture Students' Society (FCS) Chalmers Sailing Society RANG – Chalmers Indian Association Caster – Developing and operating a Driver in the Loop (DIL) simulator, which is used in various courses and projects Notable alumni Christopher Ahlberg, computer scientist and entrepreneur, Spotfire and Recorded Future founder Rune Andersson, Swedish Industrialist, owner of Mellby Gård AB and billionaire Abbas Anvari, former chancellor of Sharif University of Technology Linn Berggren, artist and former member of Ace of Base Gustaf Dalén, Nobel Prize in Physics Sigfrid Edström, director ASEA, president IOC Claes-Göran Granqvist, physicist Margit Hall, first female architect in Sweden Harald Hammarström, linguist Krister Holmberg, professor of Surface Chemistry at Chalmers University of Technology. Mats Hillert, metallurgist Ivar Jacobson, computer scientist Erik Johansson, photographic surrealist Jan Johansson, jazz musician Leif Johansson, former CEO Volvo Olav Kallenberg, probability theorist Marianne Kärrholm, chemical engineer and Chalmers professor Hjalmar Kumlien, architect Abraham Langlet, chemist Martin Lorentzon, Spotify and TradeDoubler founder Ingemar Lundström, physicist, chairman of the Nobel Committee for Physics Carl Magnusson, industrial designer and inventor Semir Mahjoub, businessman and entrepreneur Peter Nordin, computer scientist and entrepreneur Åke Öberg, biomedical scientist Leif Östling, CEO Scania AB PewDiePie (Felix Arvid Ulf Kjellberg), YouTuber (no degree) Carl Abraham Pihl, engineer and director of first Norwegian railroad (Hovedbanen) Richard Soderberg, businessman, inventor and professor at Massachusetts Institute of Technology Hans Stråberg, former President and CEO of Electrolux Ludvig Strigeus, computer scientist and entrepreneur Per Håkan Sundell, computer scientist and entrepreneur Jan Wäreby, businessman Gert Wingårdh, architect Vera Sandberg, engineer Anna von Hausswolff, musician Anita Schjøll Brede, entrepreneur Presidents Although the official Swedish title for the head is "rektor", the university now uses "President" as the English translation. See also Chalmers School of Entrepreneurship IT University of Göteborg List of universities in Sweden Marie Rådbo, astronomer The International Science Festival in Gothenburg University of Gothenburg (Göteborg University) References External links Chalmers University of Technology – official site Chalmers Student Union Chalmers Alumni Association Educational institutions established in 1829 Technical universities and colleges in Sweden Higher education in Gothenburg Engineering universities and colleges in Sweden 1829 establishments in Sweden
2,549
5,691
https://en.wikipedia.org/wiki/Codex
Codex
The codex (plural codices) was the historical ancestor of the modern book. Instead of being composed of sheets of paper, it used sheets of vellum, papyrus, or other materials. The term codex is often used for ancient manuscript books, with handwritten contents. A codex, much like the modern book, is bound by stacking the pages and securing one set of edges by a variety of methods over the centuries, yet in a form analogous to modern bookbinding. Modern books are divided into paperback or softback and those bound with stiff boards, called hardbacks. Elaborate historical bindings are called treasure bindings. At least in the Western world, the main alternative to the paged codex format for a long document was the continuous scroll, which was the dominant form of document in the ancient world. Some codices are continuously folded like a concertina, in particular the Maya codices and Aztec codices, which are actually long sheets of paper or animal skin folded into pages. The Ancient Romans developed the form from wax tablets. The gradual replacement of the scroll by the codex has been called the most important advance in book making before the invention of the printing press. The codex transformed the shape of the book itself, and offered a form that has lasted ever since. The spread of the codex is often associated with the rise of Christianity, which early on adopted the format for the Bible. First described in the 1st century of the Common Era, when the Roman poet Martial praised its convenient use, the codex achieved numerical parity with the scroll around 300 CE, and had completely replaced it throughout what was by then a Christianized Greco-Roman world by the 6th century. Etymology and origins The word codex comes from the Latin word caudex, meaning "trunk of a tree", "block of wood" or "book". The codex began to replace the scroll almost as soon as it was invented. In Egypt, by the fifth century, the codex outnumbered the scroll by ten to one based on surviving examples. By the sixth century, the scroll had almost vanished as a medium for literature. The change from rolls to codices roughly coincides with the transition from papyrus to parchment as the preferred writing material, but the two developments are unconnected. In fact, any combination of codices and scrolls with papyrus and parchment is technically feasible and common in the historical record. Technically, even modern paperbacks are codices, but publishers and scholars reserve the term for manuscript (hand-written) books produced from late antiquity until the Middle Ages. The scholarly study of these manuscripts is sometimes called codicology. The study of ancient documents in general is called paleography. The codex provided considerable advantages over other book formats, primarily its compactness, sturdiness, economic use of materials by using both sides (recto and verso), and ease of reference (a codex accommodates random access, as opposed to a scroll, which uses sequential access). History The Romans used precursors made of reusable wax-covered tablets of wood for taking notes and other informal writings. Two ancient polyptychs, a pentaptych and octoptych excavated at Herculaneum, used a unique connecting system that presages later sewing on of thongs or cords. Julius Caesar may have been the first Roman to reduce scrolls to bound pages in the form of a notebook, possibly even as a papyrus codex.
At the turn of the 1st century AD, a kind of folded parchment notebook called pugillares membranei in Latin became commonly used for writing in the Roman Empire. Theodore Cressy Skeat theorized that this form of notebook was invented in Rome and then spread rapidly to the Near East. Codices are described in certain works by the Classical Latin poet, Martial. He wrote a series of five couplets meant to accompany gifts of literature that Romans exchanged during the festival of Saturnalia. Three of these books are specifically described by Martial as being in the form of a codex; the poet praises the compendiousness of the form (as opposed to the scroll), as well as the convenience with which such a book can be read on a journey. In another poem by Martial, the poet advertises a new edition of his works, specifically noting that it is produced as a codex, taking less space than a scroll and being more comfortable to hold in one hand. According to Theodore Cressy Skeat, this might be the first recorded known case of an entire edition of a literary work (not just a single copy) being published in codex form, though it was likely an isolated case and was not a common practice until a much later time. In his discussion of one of the earliest parchment codices to survive from Oxyrhynchus in Egypt, Eric Turner seems to challenge Skeat's notion when stating, "its mere existence is evidence that this book form had a prehistory", and that "early experiments with this book form may well have taken place outside of Egypt." Early codices of parchment or papyrus appear to have been widely used as personal notebooks, for instance in recording copies of letters sent (Cicero Fam. 9.26.1). Early codices weren't always cohesive. They often contained multiple languages, various topics and even multiple authors. "Such codices formed libraries in their own right." The parchment notebook pages were "more durable, and could withstand being folded and stitched to other sheets". Parchments whose writing was no longer needed were commonly washed or scraped for re-use, creating a palimpsest; the erased text, which can often be recovered, is older and usually more interesting than the newer text which replaced it. Consequently, writings in a codex were often considered informal and impermanent. Parchment (animal skin) was expensive, and therefore it was used primarily by the wealthy and powerful, who were also able to pay for textual design and color. "Official documents and deluxe manuscripts [in the late Middle Ages] were written in gold and silver ink on parchment...dyed or painted with costly purple pigments as an expression of imperial power and wealth." As early as the early 2nd century, there is evidence that a codex—usually of papyrus—was the preferred format among Christians. In the library of the Villa of the Papyri, Herculaneum (buried in AD 79), all the texts (of Greek literature) are scrolls (see Herculaneum papyri). However, in the Nag Hammadi library, hidden about AD 390, all texts (Gnostic) are codices. Despite this comparison, a fragment of a non-Christian parchment codex of Demosthenes' De Falsa Legatione from Oxyrhynchus in Egypt demonstrates that the surviving evidence is insufficient to conclude whether Christians played a major or central role in the development of early codices—or if they simply adopted the format to distinguish themselves from Jews. 
The earliest surviving fragments from codices come from Egypt, and are variously dated (always tentatively) towards the end of the 1st century or in the first half of the 2nd. This group includes the Rylands Library Papyrus P52, containing part of St John's Gospel, and perhaps dating from between 125 and 160. In Western culture, the codex gradually replaced the scroll. Between the 4th century, when the codex gained wide acceptance, and the Carolingian Renaissance in the 8th century, many works that were not converted from scroll to codex were lost. The codex improved on the scroll in several ways. It could be opened flat at any page for easier reading, pages could be written on both front and back (recto and verso), and the protection of durable covers made it more compact and easier to transport. The ancients stored codices with spines facing inward, and not always vertically. The spine could be used for the incipit, before the concept of a proper title developed in medieval times. Though most early codices were made of papyrus, papyrus was fragile and supplied from Egypt, the only place where papyrus grew. The more durable parchment and vellum gained favor, despite the cost. The codices of pre-Columbian Mesoamerica (Mexico and Central America) had a similar appearance when closed to the European codex, but were instead made with long folded strips of either fig bark (amatl) or plant fibers, often with a layer of whitewash applied before writing. New World codices were written as late as the 16th century (see Maya codices and Aztec codices). Those written before the Spanish conquests seem all to have been single long sheets folded concertina-style, sometimes written on both sides of the amatl paper. There are significant codices produced in the colonial era, with pictorial and alphabetic texts in Spanish or an indigenous language such as Nahuatl. In East Asia, the scroll remained standard for far longer than in the Mediterranean world. There were intermediate stages, such as scrolls folded concertina-style and pasted together at the back and books that were printed only on one side of the paper. This replaced traditional Chinese writing mediums such as bamboo and wooden slips, as well as silk and paper scrolls. The evolution of the codex in China began with folded-leaf pamphlets in the 9th century, during the late Tang dynasty (618–907), improved by the 'butterfly' bindings of the Song dynasty (960–1279), the wrapped back binding of the Yuan dynasty (1271–1368), the stitched binding of the Ming (1368–1644) and Qing dynasties (1644–1912), and finally the adoption of Western-style bookbinding in the 20th century. The initial phase of this evolution, the accordion-folded palm-leaf-style book, most likely came from India and was introduced to China via Buddhist missionaries and scriptures. Judaism still retains the Torah scroll, at least for ceremonial use. From scrolls to codex Among the experiments of earlier centuries, scrolls were sometimes unrolled horizontally, as a succession of columns. (The Dead Sea Scrolls are a famous example of this format.) This made it possible to fold the scroll as an accordion. The next evolutionary step was to cut the folios and sew and glue them at their centers, making it easier to use the papyrus or vellum recto-verso as with a modern book. 
Traditional bookbinders would call one of these assembled, trimmed and bound folios (that is, the "pages" of the book as a whole, comprising the front matter and contents) a codex in contradistinction to the cover or case, producing the format of book now colloquially known as a hardcover. In the hardcover bookbinding process, the procedure of binding the codex is very different to that of producing and attaching the case. Preparation The first stage in creating a codex is to prepare the animal skin. The skin is washed with water and lime, but not together. The skin is soaked in the lime for a couple of days. The hair is removed, and the skin is dried by attaching it to a frame, called a herse. The parchment maker attaches the skin at points around the circumference. The skin is attached to the herse by cords. To prevent it from being torn, the maker wraps the area of the skin attached to the cord around a pebble called a pippin. After completing that, the maker uses a crescent-shaped knife called a lunarium or lunellum to remove any remaining hairs. Once the skin completely dries, the maker gives it a deep clean and processes it into sheets. The number of sheets from a piece of skin depends on the size of the skin and the final product dimensions. For example, the average calfskin can provide three-and-a-half medium sheets of writing material, which can be doubled when they are folded into two conjoint leaves, also known as a bifolium. Historians have found evidence of manuscripts in which the scribe wrote down the medieval instructions now followed by modern membrane makers. Defects can often be found in the membrane, whether they are from the original animal, human error during the preparation period, or from when the animal was killed. Defects can also appear during the writing process. Unless the manuscript is kept in perfect condition, defects can also appear later in its life. Preparation of pages for writing First, the membrane must be prepared; the first step is to set up the quires. The quire is a group of several sheets put together. Raymond Clemens and Timothy Graham point out, in "Introduction to Manuscript Studies", that "the quire was the scribe's basic writing unit throughout the Middle Ages": Pricking is the process of making holes in a sheet of parchment (or membrane) in preparation for ruling. The lines were then made by ruling between the prick marks. Ruling is the process of entering ruled lines on the page to serve as a guide for entering text. Most manuscripts were ruled with horizontal lines that served as the baselines on which the text was entered and with vertical bounding lines that marked the boundaries of the columns. Forming the quire From the Carolingian period to the end of the Middle Ages, different styles of folding the quire came about. For example, in continental Europe throughout the Middle Ages, the quire was folded so that like sides faced each other: the hair side met the hair side, and the flesh side met the flesh side. This was not the same style used in the British Isles, where the membrane was folded so that it turned out an eight-leaf quire, with single leaves in the third and sixth positions. The next stage was tacking the quire. Tacking is when the scribe holds the leaves of the quire together with thread. Once the leaves were threaded together, the scribe would sew a line of parchment up the "spine" of the manuscript to protect the tacking.
Materials The materials a codex is made of are known as its support, and include papyrus, parchment (sometimes referred to as membrane or vellum), and paper. They are written and drawn on with metals, pigments and ink. The quality, size, and choice of support determine the status of a codex. Papyrus is found only in late antiquity and the early Middle Ages. Codices intended for display were bound with more durable materials than vellum. Parchment varied widely due to animal species and finish, and identification of the animals used to make it has only begun to be studied in the 21st century. How manufacturing influenced the final products, technique, and style is little understood. However, changes in style are underpinned more by variation in technique. Before the 14th and 15th centuries, paper was expensive, and its use may mark out a deluxe copy. Structure The structure of a codex includes its size, its format/ordinatio (its quires or gatherings, consisting of sheets folded a number of times, often twice to form a bifolio), its sewing, and its bookbinding and rebinding. A quire consisted of a number of folded sheets inserted into one another, at least three but most commonly four bifolia, that is, eight leaves and sixteen pages; the Latin quaternio or Greek tetradion became a synonym for quires. Unless an exemplar (text to be copied) was copied exactly, format differed. In preparation for writing codices, ruling patterns were used that determined the layout of each page. Holes were pricked with a spiked lead wheel and a circle. Ruling was then applied separately on each page or once through the top folio. Ownership markings, decorations and illumination are also part of a codex's structure. They are specific to the scriptoria, or any production center, and to the libraries of codices. Pages Watermarks may provide dates, although often only approximate ones, for when the copying occurred. The layout (the size of the margins and the number of lines) is then determined. There may be textual articulations, running heads, openings, chapters and paragraphs. Space was reserved for illustrations and decorated guide letters. The apparatus of books for scholars became more elaborate during the 13th and 14th centuries, when chapter, verse, page numbering, marginalia finding guides, indexes, glossaries and tables of contents were developed. The libraire By a close examination of the physical attributes of a codex, it is sometimes possible to match up long-separated elements originally from the same book. In 13th-century book publishing, due to secularization, stationers or libraires emerged. They would receive commissions for texts, which they would contract out to scribes, illustrators, and binders, to whom they supplied materials. Due to the systematic format used for assembly by the libraire, the structure can be used to reconstruct the original order of a manuscript. However, complications can arise in the study of a codex. Manuscripts were frequently rebound, and this resulted in a particular codex incorporating works of different dates and origins, and thus different internal structures. Additionally, a binder could alter or unify these structures to ensure a better fit for the new binding. Completed quires or books of quires might constitute independent book units (booklets), which could be returned to the stationer or combined with other texts to make anthologies or miscellanies. Exemplars were sometimes divided into quires for simultaneous copying and loaned out to students for study.
To facilitate this, catchwords were used: a word at the end of a page providing the next page's first word. See also Aztec codices Grimoire History of books History of scrolls List of codices List of florilegia and botanical codices List of New Testament papyri List of New Testament uncials Maya codices Traditional Chinese bookbinding Volume (bibliography) Citations General and cited references External links Centre for the History of the Book The Codex and Canon Consciousness – Draft paper by Robert Kraft on the change from scroll to codex The Construction of the Codex In Classic- and Postclassic-Period Maya Civilization Maya Codex and Paper Making Encyclopaedia Romana: "Scroll and codex" K. C. Hanson, Catalogue of New Testament Papyri & Codices 2nd—10th Centuries Medieval and Renaissance manuscripts, including Vulgates, Breviaries, Contracts, and Herbal Texts from the 12th–17th centuries, Center for Digital Initiatives, University of Vermont Libraries 1st-century introductions Books by type Codicology Italian inventions Manuscripts by type
2,550
5,692
https://en.wikipedia.org/wiki/Calf%20%28animal%29
Calf (animal)
A calf (plural: calves) is a young domestic cow or bull. Calves are reared to become adult cattle or are slaughtered for their meat, called veal, and for their hide. The term calf is also used for some other species. See "Other animals" below. Terminology "Calf" is the term used from birth to weaning, when it becomes known as a weaner or weaner calf, though in some areas the term "calf" may be used until the animal is a yearling. The birth of a calf is known as calving. A calf that has lost its mother is an orphan calf, also known as a poddy or poddy-calf in British English. Bobby calves are young calves which are to be slaughtered for human consumption. A vealer is a calf weighing less than about , which is at about eight to nine months of age. A young female calf from birth until she has had a calf of her own is called a heifer. In the American Old West, a motherless or small, runty calf was sometimes referred to as a dodie. Early development Calves may be produced by natural means, or by artificial breeding using artificial insemination or embryo transfer. Calves are born after a gestation of about nine months. They usually stand within a few minutes of calving, and suckle within an hour. However, for the first few days they are not easily able to keep up with the rest of the herd, so young calves are often left hidden by their mothers, who visit them several times a day to suckle them. By a week old the calf is able to follow the mother all the time. Some calves are ear tagged soon after birth, especially those that are stud cattle, in order to correctly identify their dams (mothers), or in areas (such as the EU) where tagging is a legal requirement for cattle. Typically, when the calves are about two months old, they are branded, ear marked, castrated and vaccinated. Calf rearing systems The single suckler system of rearing calves is similar to that occurring naturally in wild cattle, where each calf is suckled by its own mother until it is weaned at about nine months old. This system is commonly used for rearing beef cattle throughout the world. Cows kept on poor forage (as is typical in subsistence farming) produce a limited amount of milk. A calf left with such a mother all the time can easily drink all the milk, leaving none for human consumption. For dairy production under such circumstances, the calf's access to the cow must be limited, for example by penning the calf and bringing the mother to it once a day after partly milking her. The small amount of milk available for the calf under such systems may mean that it takes a longer time to rear, and in subsistence farming it is therefore common for cows to calve only in alternate years. In more intensive dairy farming, cows can easily be bred and fed to produce far more milk than one calf can drink. In the multi-suckler system, several calves are fostered onto one cow in addition to her own, and these calves' mothers can then be used wholly for milk production. More commonly, calves of dairy cows are fed formula milk from soon after birth, usually from a bottle or bucket. Purebred female calves of dairy cows are reared as replacement dairy cows. Most purebred dairy calves are produced by artificial insemination (AI). By this method each bull can serve many cows, so only a very few of the purebred dairy male calves are needed to provide bulls for breeding.
The remainder of the male calves may be reared for beef or veal; however, some extreme dairy breeds carry so little muscle that rearing the purebred male calves may be uneconomic, and in this case they are often killed soon after birth and disposed of. Only a proportion of purebred heifers are needed to provide replacement cows, so often some of the cows in dairy herds are put to a beef bull to produce crossbred calves suitable for rearing as beef. Veal calves may be reared entirely on milk formula and killed at about 18 or 20 weeks as "white" veal, or fed on grain and hay and killed at 22 to 35 weeks to produce red or pink veal. Growth A commercial steer or bull calf is expected to put on about per month. A nine-month-old steer or bull is therefore expected to weigh about . Heifers will weigh at least at eight months of age. Calves are usually weaned at about eight to nine months of age, but depending on the season and condition of the dam, they might be weaned earlier. They may be paddock weaned, often next to their mothers, or weaned in stockyards. The latter system is preferred by some as it accustoms the weaners to the presence of people and trains them to take feed other than grass. Small numbers may also be weaned with their dams with the use of weaning nose rings or nosebands, which result in the mothers rejecting the calves' attempts to suckle. Many calves are also weaned when they are taken to the large weaner auction sales that are conducted in the south-eastern states of Australia. Victoria and New South Wales have yardings of up to 8,000 weaners (calves) for auction sale in one day. The best of these weaners may go to the butchers. Others will be purchased by re-stockers to grow out and fatten on grass or as potential breeders. In the United States these weaners may be known as feeders and would be placed directly into feedlots. At about 12 months old, a beef heifer reaches puberty if she is well grown. Diseases Calves suffer from few congenital abnormalities, but the Akabane virus is widely distributed in temperate to tropical regions of the world. The virus is a teratogenic pathogen which causes abortions, stillbirths, premature births and congenital abnormalities, but occurs only during some years. Uses Calf meat for human consumption is called veal, and is usually produced from the male calves of dairy cattle. Also eaten are calf's brains and calf liver. The hide is used to make calfskin, or tanned into leather and called calf leather, or sometimes in the US "novillo", the Spanish term. The fourth compartment of the stomach of slaughtered milk-fed calves is the source of rennet. The intestine is used to make goldbeater's skin, and is the source of calf intestinal alkaline phosphatase (CIP). Dairy cows can only produce milk after having calved, and they need to produce one calf each year in order to remain in production. Female calves will become replacement dairy cows. Male dairy calves are generally reared for beef or veal; relatively few are kept for breeding purposes. Other animals In English the term "calf" is used by extension for the young of various other large species of mammal. In addition to other bovid species (such as bison, yak and water buffalo), these include the young of camels, dolphins, elephants, giraffes, hippopotamuses, deer (such as moose, elk (wapiti) and red deer), rhinoceroses, porpoises, whales, walruses and larger seals. (Generally, the adult males of these same species are called "bulls" and the adult females "cows".)
However, common domestic species tend to have their own specific names, such as lamb, foal (used for all Equidae), or piglet (used for all Suidae). References External links Weaning-beef-calves Calving on Ropin' the Web, Agriculture and Food, Government of Alberta, Canada Winter Feeding Sites and Calf Scours, Kansas State University Cattle Vertebrate developmental biology Articles containing video clips
2,551
5,697
https://en.wikipedia.org/wiki/Civil%20Rights%20Memorial
Civil Rights Memorial
The Civil Rights Memorial is an American memorial in Montgomery, Alabama, created by Maya Lin. The names of 41 people are inscribed on the granite fountain as martyrs who were killed in the civil rights movement. The memorial is sponsored by the Southern Poverty Law Center. Design The names included in the memorial belong to those who were killed between 1955 and 1968. That period was chosen because it falls between the U.S. Supreme Court's 1954 ruling that racial segregation in schools was unlawful and the assassination of Martin Luther King Jr. in 1968. The monument was created by Maya Lin, who is best known for designing the Vietnam Veterans Memorial in Washington, D.C. The Civil Rights Memorial was dedicated in 1989. The concept of Lin's design is based on the soothing and healing effect of water. It was inspired by a passage from King's "I Have a Dream" speech: "...we will not be satisfied until justice rolls down like waters and righteousness like a mighty stream..." The quotation in the passage, which is inscribed on the memorial, is a paraphrase of Amos 5:24, as translated in the American Standard Version of the Bible. The memorial is a fountain in the form of an asymmetric inverted stone cone. A film of water flows over the base of the cone, on which the 41 names are inscribed. It is possible to touch the smooth film of water and to alter it temporarily; it quickly returns to smoothness. As such, the memorial represents the aspirations of the civil rights movement to end legal racial segregation. Tours and location The memorial is in downtown Montgomery, at 400 Washington Avenue, in an open plaza in front of the Civil Rights Memorial Center, which housed the offices of the Southern Poverty Law Center until it moved across the street into a new building in 2001. The memorial may be visited freely 24 hours a day, 7 days a week. The Civil Rights Memorial Center offers guided group tours, lasting approximately one hour. Tours are available by appointment, Monday to Saturday. The memorial is only a few blocks from other historic sites, including the Dexter Avenue King Memorial Baptist Church, the Alabama State Capitol, the Alabama Department of Archives and History, the corners where Claudette Colvin and Rosa Parks boarded buses in 1955 on which they would later refuse to give up their seats, and the Rosa Parks Library and Museum. Names included "Civil Rights Martyrs" The 41 names included in the Civil Rights Memorial are those of: Louis Allen Willie Brewster Benjamin Brown Johnnie Mae Chappell James Chaney Addie Mae Collins Vernon Dahmer Jonathan Daniels Henry Hezekiah Dee Roman Ducksworth Jr. Willie Edwards Medgar Evers Andrew Goodman Paul Guihard Samuel Hammond Jr. Jimmie Lee Jackson Wharlest Jackson Martin Luther King Jr. Bruce W. Klunder George W. Lee Herbert Lee Viola Liuzzo Denise McNair Delano Herman Middleton Charles Eddie Moore Oneal Moore William Lewis Moore Mack Charles Parker Lemuel Penn James Reeb John Earl Reese Carole Robertson Michael Schwerner Henry Ezekial Smith Lamar Smith Emmett Till Clarence Triggs Virgil Lamar Ware Cynthia Wesley Ben Chester White Sammy Younge Jr. "The Forgotten" "The Forgotten" are 74 people who are identified in a display at the Civil Rights Memorial Center. These names were not inscribed on the Memorial because there was insufficient information about their deaths at the time the Memorial was created. However, it is thought that these people were killed as a result of racially motivated violence between 1952 and 1968.
Andrew Lee Anderson Frank Andrews Isadore Banks Larry Bolden James Brazier Thomas Brewer Hilliard Brooks Charles Brown Jessie Brown Carrie Brumfield Eli Brumfield Silas (Ernest) Caston Clarence Cloninger Willie Countryman Vincent Dahmon Woodrow Wilson Daniels Joseph Hill Dumas Pheld Evans J. E. Evanston Mattie Greene Jasper Greenwood Jimmie Lee Griffith A. C. Hall Rogers Hamilton Collie Hampton Alphonso Harris Izell Henry Arthur James Hill Ernest Hunter Luther Jackson Ernest Jells Joe Franklin Jeter Marshall Johnson John Lee Willie Henry Lee Richard Lillard George Love Robert McNair Maybelle Mahone Sylvester Maxwell Clinton Melton James Andrew Miller Booker T. Mixon Nehemiah Montgomery Frank Morris James Earl Motley Sam O'Quinn Hubert Orsby Larry Payne C. H. Pickett Albert Pitts David Pitts Ernest McPharland Jimmy Powell William Roy Prather Johnny Queen Donald Rasberry Fred Robinson Johnny Robinson Willie Joe Sanford Marshall Scott Jr. Jessie James Shelby W. G. Singleton Ed Smith Eddie James Stewart Isaiah Taylor Freddie Lee Thomas Saleam Triggs Hubert Varner Clifton Walker James Waymers John Wesley Wilder Rodell Williamson Archie Wooden See also Civil rights movement in popular culture History of fountains in the United States Title I of the Civil Rights Act of 1968 References External links Official Site Civil Rights Martyrs 1989 establishments in Alabama 1989 sculptures Buildings and structures in Montgomery, Alabama Fountains in Alabama History of civil rights in the United States History of Montgomery, Alabama Monuments and memorials in Alabama Monuments and memorials of the civil rights movement Southern Poverty Law Center Tourist attractions in Montgomery, Alabama
2,556
5,703
https://en.wikipedia.org/wiki/Cyberpunk
Cyberpunk
Cyberpunk is a subgenre of science fiction in a dystopian futuristic setting that tends to focus on a "combination of lowlife and high tech", featuring futuristic technological and scientific achievements, such as artificial intelligence and cybernetics, juxtaposed with societal collapse, dystopia or decay. Much of cyberpunk is rooted in the New Wave science fiction movement of the 1960s and 1970s, when writers like Philip K. Dick, Michael Moorcock, Roger Zelazny, John Brunner, J. G. Ballard, Philip José Farmer and Harlan Ellison examined the impact of drug culture, technology, and the sexual revolution while avoiding the utopian tendencies of earlier science fiction. Comics exploring cyberpunk themes began appearing as early as Judge Dredd, first published in 1977. Released in 1984, William Gibson's influential debut novel Neuromancer helped solidify cyberpunk as a genre, drawing influence from punk subculture and early hacker culture. Other influential cyberpunk writers included Bruce Sterling and Rudy Rucker. The Japanese cyberpunk subgenre began in 1982 with the debut of Katsuhiro Otomo's manga series Akira, with its 1988 anime film adaptation (also directed by Otomo) later popularizing the subgenre. Early films in the genre include Ridley Scott's 1982 film Blade Runner, one of several of Philip K. Dick's works that have been adapted into films (in this case, Do Androids Dream of Electric Sheep?). The "first cyberpunk television series" was the TV series Max Headroom from 1987, set in a futuristic dystopia ruled by an oligarchy of television networks, where computer hacking played a central role in many story lines. The films Johnny Mnemonic (1995) and New Rose Hotel (1998), both based upon short stories by William Gibson, flopped commercially and critically, while The Matrix trilogy (1999–2003) and Judge Dredd (1995) were some of the most successful cyberpunk films. Newer cyberpunk media includes Blade Runner 2049 (2017), a sequel to the original 1982 film; Dredd (2012), which was not a sequel to the original movie; Upgrade (2018); Alita: Battle Angel (2019), based on the 1990s Japanese manga Battle Angel Alita; the 2018 Netflix TV series Altered Carbon, based on Richard K. Morgan's 2002 novel of the same name; the 2020 remake of the 1997 role-playing video game Final Fantasy VII; and the video game Cyberpunk 2077 (2020), based on R. Talsorian Games's 1988 tabletop role-playing game Cyberpunk. Background Lawrence Person has attempted to define the content and ethos of the cyberpunk literary movement stating: Cyberpunk plots often center on conflict among artificial intelligences, hackers, and megacorporations, and tend to be set in a near-future Earth, rather than in the far-future settings or galactic vistas found in novels such as Isaac Asimov's Foundation or Frank Herbert's Dune. The settings are usually post-industrial dystopias but tend to feature extraordinary cultural ferment and the use of technology in ways never anticipated by its original inventors ("the street finds its own uses for things"). Much of the genre's atmosphere echoes film noir, and written works in the genre often use techniques from detective fiction. Some critics consider that cyberpunk has shifted from a literary movement to a mode of science fiction, owing to the limited number of writers and its transition to a more generalized cultural formation.
History and origins The origins of cyberpunk are rooted in the New Wave science fiction movement of the 1960s and 1970s, where New Worlds, under the editorship of Michael Moorcock, began inviting and encouraging stories that examined new writing styles, techniques, and archetypes. Reacting to conventional storytelling, New Wave authors attempted to present a world where society coped with a constant upheaval of new technology and culture, generally with dystopian outcomes. Writers like Roger Zelazny, J. G. Ballard, Philip José Farmer, Samuel R. Delany, and Harlan Ellison often examined the impact of drug culture, technology, and the sexual revolution with an avant-garde style influenced by the Beat Generation (especially William S. Burroughs' science fiction writing), Dadaism, and their own ideas. Ballard attacked the idea that stories should follow the "archetypes" popular since the time of Ancient Greece, and the assumption that these would somehow be the same ones that would call to modern readers, as Joseph Campbell argued in The Hero with a Thousand Faces. Instead, Ballard wanted to write a new myth for the modern reader, a style with "more psycho-literary ideas, more meta-biological and meta-chemical concepts, private time systems, synthetic psychologies and space-times, more of the sombre half-worlds one glimpses in the paintings of schizophrenics." This had a profound influence on a new generation of writers, some of whom would come to call their movement "cyberpunk". One, Bruce Sterling, later said: Ballard, Zelazny, and the rest of the New Wave were seen by the subsequent generation as delivering more "realism" to science fiction, and that generation attempted to build on this. Samuel R. Delany's 1968 novel Nova is also considered one of the major forerunners of the cyberpunk movement. It prefigures, for instance, cyberpunk's staple trope of human interfacing with computers via implants. Writer William Gibson claimed to be greatly influenced by Delany, and his novel Neuromancer includes allusions to Nova. Similarly influential, and generally cited as proto-cyberpunk, is the Philip K. Dick novel Do Androids Dream of Electric Sheep, first published in 1968. Presenting precisely the kind of dystopian, post-economic-apocalyptic future that Gibson and Sterling would later deliver, it examines the ethical and moral problems of cybernetic artificial intelligence in a way more "realist" than the Isaac Asimov Robot series that laid its philosophical foundation. Dick's protégé and friend K. W. Jeter wrote a novel called Dr. Adder in 1972 that, Dick lamented, might have been more influential in the field had it been able to find a publisher at that time. It was not published until 1984, after which Jeter made it the first book in a trilogy, followed by The Glass Hammer (1985) and Death Arms (1987). Jeter wrote other standalone cyberpunk novels before going on to write three authorized sequels to Do Androids Dream of Electric Sheep, named Blade Runner 2: The Edge of Human (1995), Blade Runner 3: Replicant Night (1996), and Blade Runner 4: Eye and Talon (2000). Do Androids Dream of Electric Sheep was made into the seminal movie Blade Runner, released in 1982. This was one year after William Gibson's story "Johnny Mnemonic" helped move proto-cyberpunk concepts into the mainstream. That story, which also became a film years later in 1995, involves another dystopian future, where human couriers deliver computer data, stored cybernetically in their own minds.
The term cyberpunk first appeared as the title of a short story written by Bruce Bethke, written in 1980 and published in Amazing Stories in 1983. It was picked up by Gardner Dozois, editor of Isaac Asimov's Science Fiction Magazine, and popularized in his editorials. Bethke says he made two lists of words, one for technology, one for troublemakers, and experimented with combining them variously into compound words, consciously attempting to coin a term that encompassed both punk attitudes and high technology. He described the idea thus: Afterward, Dozois began using this term in his own writing, most notably in a Washington Post article where he said "About the closest thing here to a self-willed esthetic 'school' would be the purveyors of bizarre hard-edged, high-tech stuff, who have on occasion been referred to as 'cyberpunks'—Sterling, Gibson, Shiner, Cadigan, Bear." About that time in 1984, William Gibson's novel Neuromancer was published, delivering a glimpse of a future encompassed by what became an archetype of cyberpunk "virtual reality", with the human mind being fed light-based worldscapes through a computer interface. Some, perhaps ironically including Bethke himself, argued at the time that the writers whose style Gibson's books epitomized should be called "Neuromantics", a pun on the name of the novel plus "New Romantics", a term used for a New Wave pop music movement that had just occurred in Britain, but this term did not catch on. Bethke later paraphrased Michael Swanwick's argument for the term: "the movement writers should properly be termed neuromantics, since so much of what they were doing was clearly imitating Neuromancer". Sterling was another writer who played a central role, often consciously, in the cyberpunk genre, variously seen as either keeping it on track, or distorting its natural path into a stagnant formula. In 1986 he edited a volume of cyberpunk stories called Mirrorshades: The Cyberpunk Anthology, an attempt to establish what cyberpunk was, from Sterling's perspective. In the subsequent decade, the motifs of Gibson's Neuromancer became formulaic, climaxing in the satirical extremes of Neal Stephenson's Snow Crash in 1992. Bookending the cyberpunk era, Bethke himself published a novel in 1995 called Headcrash, like Snow Crash a satirical attack on the genre's excesses. Fittingly, it won an honor named after cyberpunk's spiritual founder, the Philip K. Dick Award. It satirized the genre in this way: The impact of cyberpunk, though, has been long-lasting. Elements of both the setting and storytelling have become normal in science fiction in general, and a slew of sub-genres now have -punk tacked onto their names, most obviously steampunk, but also a host of other cyberpunk derivatives. Style and ethos Primary figures in the cyberpunk movement include William Gibson, Neal Stephenson, Bruce Sterling, Bruce Bethke, Pat Cadigan, Rudy Rucker, and John Shirley. Philip K. Dick (author of Do Androids Dream of Electric Sheep?, from which the film Blade Runner was adapted) is also seen by some as prefiguring the movement. Blade Runner can be seen as a quintessential example of the cyberpunk style and theme. Video games, board games, and tabletop role-playing games, such as Cyberpunk 2020 and Shadowrun, often feature storylines that are heavily influenced by cyberpunk writing and movies. Beginning in the early 1990s, some trends in fashion and music were also labeled as cyberpunk. 
Cyberpunk is also featured prominently in anime and manga (Japanese cyberpunk), with Akira, Ghost in the Shell and Cowboy Bebop being among the most notable. Setting Cyberpunk writers tend to use elements from crime fiction—particularly hardboiled detective fiction and film noir—and postmodernist prose to describe an often nihilistic underground side of an electronic society. The genre's vision of a troubled future is often called the antithesis of the generally utopian visions of the future popular in the 1940s and 1950s. Gibson defined cyberpunk's antipathy towards utopian science fiction in his 1981 short story "The Gernsback Continuum," which pokes fun at and, to a certain extent, condemns utopian science fiction. In some cyberpunk writing, much of the action takes place online, in cyberspace, blurring the line between actual and virtual reality. A typical trope in such work is a direct connection between the human brain and computer systems. Cyberpunk settings are dystopias with corruption, computers and internet connectivity. Giant, multinational corporations have for the most part replaced governments as centers of political, economic, and even military power. The economic and technological state of Japan is a regular theme in the cyberpunk literature of the 1980s. Of Japan's influence on the genre, William Gibson said, "Modern Japan simply was cyberpunk." Cyberpunk is often set in urbanized, artificial landscapes, and "city lights, receding" was used by Gibson as one of the genre's first metaphors for cyberspace and virtual reality. The cityscape of Hong Kong has had a major influence on the urban backgrounds, ambiance and settings of many cyberpunk works such as Blade Runner and Shadowrun. Ridley Scott envisioned the landscape of cyberpunk Los Angeles in Blade Runner to be "Hong Kong on a very bad day". The streetscapes of the Ghost in the Shell film were based on Hong Kong. Its director Mamoru Oshii felt that Hong Kong's strange and chaotic streets, where "old and new exist in confusing relationships", fit the theme of the film well. Hong Kong's Kowloon Walled City, particularly notable for its disorganized hyper-urbanization and breakdown of traditional urban planning, has been an inspiration for cyberpunk landscapes. Portrayals of East Asia and Asians in Western cyberpunk have been criticized as Orientalist and promoting racist tropes playing on American and European fears of East Asian dominance; this has been referred to as "techno-Orientalism". Protagonists One of the cyberpunk genre's prototype characters is Case, from Gibson's Neuromancer. Case is a "console cowboy," a brilliant drug-addicted hacker who has betrayed his organized criminal partners. Robbed of his talent through a crippling injury inflicted by the vengeful partners, Case unexpectedly receives a once-in-a-lifetime opportunity to be healed by expert medical care but only if he participates in another criminal enterprise with a new crew. Like Case, many cyberpunk protagonists are manipulated or forced into situations where they have little or no control. They often begin their story with little to no power, starting in roles of subordinates or burnouts. The story usually involves them breaking out of these lowly roles early on. They typically have bittersweet or negative endings, and rarely make great gains by the end of the story. Protagonists often fit into the role of outcasts, criminals, misfits and malcontents, expressing the "punk" component of cyberpunk.
Due to the morally ambiguous nature of the worlds they inhabit, cyberpunk protagonists are usually antiheroes. They often engage with their society's drug subcultures or some other vice. Though they may morally or ethically oppose some of the more bleak aspects of their worlds, they are often too pragmatic or defeated to change them. Society and government Cyberpunk can be intended to disquiet readers and call them to action. It often expresses a sense of rebellion, suggesting that one could describe it as a type of cultural revolution in science fiction. In the words of author and critic David Brin: ...a closer look [at cyberpunk authors] reveals that they nearly always portray future societies in which governments have become wimpy and pathetic ...Popular science fiction tales by Gibson, Williams, Cadigan and others do depict Orwellian accumulations of power in the next century, but nearly always clutched in the secretive hands of a wealthy or corporate elite. Cyberpunk stories have also been seen as fictional forecasts of the evolution of the Internet. The earliest descriptions of a global communications network came long before the World Wide Web entered popular awareness, though not before traditional science-fiction writers such as Arthur C. Clarke and some social commentators such as James Burke began predicting that such networks would eventually form. Some observers cite that cyberpunk tends to marginalize sectors of society such as women and Africans. It is claimed that, for instance, cyberpunk depicts fantasies that ultimately empower masculinity using fragmentary and decentered aesthetic that culminate in a masculine genre populated by male outlaws. Critics also note the absence of any reference to Africa or an African-American character in the quintessential cyberpunk film Blade Runner while other films reinforce stereotypes. Media Literature Minnesota writer Bruce Bethke coined the term in 1983 for his short story "Cyberpunk," which was published in an issue of Amazing Science Fiction Stories. The term was quickly appropriated as a label to be applied to the works of William Gibson, Bruce Sterling, Pat Cadigan and others. Of these, Sterling became the movement's chief ideologue, thanks to his fanzine Cheap Truth. John Shirley wrote articles on Sterling and Rucker's significance. John Brunner's 1975 novel The Shockwave Rider is considered by many to be the first cyberpunk novel with many of the tropes commonly associated with the genre, some five years before the term was popularized by Dozois. William Gibson with his novel Neuromancer (1984) is arguably the most famous writer connected with the term cyberpunk. He emphasized style, a fascination with surfaces, and atmosphere over traditional science-fiction tropes. Regarded as ground-breaking and sometimes as "the archetypal cyberpunk work," Neuromancer was awarded the Hugo, Nebula, and Philip K. Dick Awards. Count Zero (1986) and Mona Lisa Overdrive (1988) followed after Gibson's popular debut novel. According to the Jargon File, "Gibson's near-total ignorance of computers and the present-day hacker culture enabled him to speculate about the role of computers and hackers in the future in ways hackers have since found both irritatingly naïve and tremendously stimulating." Early on, cyberpunk was hailed as a radical departure from science-fiction standards and a new manifestation of vitality. Shortly thereafter, however, some critics arose to challenge its status as a revolutionary movement. 
These critics said that the science fiction New Wave of the 1960s was much more innovative as far as narrative techniques and styles were concerned. Furthermore, while Neuromancer's narrator may have had an unusual "voice" for science fiction, much older examples can be found: Gibson's narrative voice, for example, resembles that of an updated Raymond Chandler, as in his novel The Big Sleep (1939). Others noted that almost all traits claimed to be uniquely cyberpunk could in fact be found in older writers' works—often citing J. G. Ballard, Philip K. Dick, Harlan Ellison, Stanisław Lem, Samuel R. Delany, and even William S. Burroughs. For example, Philip K. Dick's works contain recurring themes of social decay, artificial intelligence, paranoia, and blurred lines between objective and subjective realities. The influential cyberpunk movie Blade Runner (1982) is based on his book, Do Androids Dream of Electric Sheep?. Humans linked to machines are found in Pohl and Kornbluth's Wolfbane (1959) and Roger Zelazny's Creatures of Light and Darkness (1968). In 1994, scholar Brian Stonehill suggested that Thomas Pynchon's 1973 novel Gravity's Rainbow "not only curses but precurses what we now glibly dub cyberspace." Other important predecessors include Alfred Bester's two most celebrated novels, The Demolished Man and The Stars My Destination, as well as Vernor Vinge's novella True Names. Reception and impact Science-fiction writer David Brin describes cyberpunk as "the finest free promotion campaign ever waged on behalf of science fiction." It may not have attracted the "real punks," but it did ensnare many new readers, and it provided the sort of movement that postmodern literary critics found alluring. Cyberpunk made science fiction more attractive to academics, argues Brin; in addition, it made science fiction more profitable to Hollywood and to the visual arts generally. Although the "self-important rhetoric and whines of persecution" on the part of cyberpunk fans were irritating at worst and humorous at best, Brin declares that the "rebels did shake things up. We owe them a debt." Fredric Jameson considers cyberpunk the "supreme literary expression if not of postmodernism, then of late capitalism itself". Cyberpunk further inspired many professional writers who were not among the "original" cyberpunks to incorporate cyberpunk ideas into their own works, such as George Alec Effinger's When Gravity Fails. Wired magazine, created by Louis Rossetto and Jane Metcalfe, mixes new technology, art, literature, and current topics in order to interest today's cyberpunk fans, which Paula Yoo claims "proves that hardcore hackers, multimedia junkies, cyberpunks and cellular freaks are poised to take over the world." Film and television The film Blade Runner (1982)—adapted from Philip K. Dick's Do Androids Dream of Electric Sheep?—is set in 2019 in a dystopian future in which manufactured beings called replicants are slaves used on space colonies and are legal prey on Earth to various bounty hunters who "retire" (kill) them. Although Blade Runner was largely unsuccessful in its first theatrical release, it found a viewership in the home video market and became a cult film. Since the movie omits the religious and mythical elements of Dick's original novel (e.g. empathy boxes and Wilbur Mercer), it falls more strictly within the cyberpunk genre than the novel does. 
William Gibson would later reveal that upon first viewing the film, he was surprised at how the look of this film matched his vision for Neuromancer, a book he was then working on. The film's tone has since been the staple of many cyberpunk movies, such as The Matrix trilogy (1999–2003), which uses a wide variety of cyberpunk elements. The number of films in the genre or at least using a few genre elements has grown steadily since Blade Runner. Several of Philip K. Dick's works have been adapted to the silver screen. The films Johnny Mnemonic and New Rose Hotel, both based upon short stories by William Gibson, flopped commercially and critically. These box office misses significantly slowed the development of cyberpunk as a literary or cultural form, although a sequel to the 1982 film Blade Runner was released in October 2017 with Harrison Ford reprising his role from the original film. A rigorous implementation of all core cyberpunk hallmarks is the TV series Max Headroom from 1987, set in a futuristic dystopia ruled by an oligarchy of television networks, where computer hacking played a central role in many story lines. Max Headroom has been called "the first cyberpunk television series", with "deep roots in the Western philosophical tradition". In addition, "tech-noir" film is a hybrid genre that combines neo-noir with science fiction or cyberpunk. It includes many cyberpunk films such as Blade Runner, Burst City, RoboCop, 12 Monkeys, The Lawnmower Man, Hackers, Hardware, Strange Days, and Total Recall. Anime and manga The Japanese cyberpunk subgenre began in 1982 with the debut of Katsuhiro Otomo's manga series Akira, with its 1988 anime film adaptation, which Otomo directed, later popularizing the subgenre. Akira inspired a wave of Japanese cyberpunk works, including manga and anime series such as Ghost in the Shell, Battle Angel Alita, and Cowboy Bebop. Other early Japanese cyberpunk works include the 1982 film Burst City, the 1985 original video animation Megazone 23, and the 1989 film Tetsuo: The Iron Man. In contrast to Western cyberpunk which has roots in New Wave science fiction literature, Japanese cyberpunk has roots in underground music culture, specifically the Japanese punk subculture that arose from the Japanese punk music scene in the 1970s. The filmmaker Sogo Ishii introduced this subculture to Japanese cinema with the punk film Panic High School (1978) and the punk biker film Crazy Thunder Road (1980), both portraying the rebellion and anarchy associated with punk, and the latter featuring a punk biker gang aesthetic. Ishii's punk films paved the way for Otomo's seminal cyberpunk work Akira. Cyberpunk themes are widely visible in anime and manga. In Japan, where cosplay is popular and not only teenagers display such fashion styles, cyberpunk has been accepted and its influence is widespread. William Gibson's Neuromancer, whose influence dominated the early cyberpunk movement, was also set in Chiba, one of Japan's largest industrial areas, although at the time of writing the novel Gibson did not know the location of Chiba and had no idea how perfectly it fit his vision in some ways. The exposure to cyberpunk ideas and fiction in the 1980s has allowed it to seep into Japanese culture. Cyberpunk anime and manga draw upon a futuristic vision which has elements in common with Western science fiction and therefore have received wide international acceptance outside Japan.
"The conceptualization involved in cyberpunk is more of forging ahead, looking at the new global culture. It is a culture that does not exist right now, so the Japanese concept of a cyberpunk future, seems just as valid as a Western one, especially as Western cyberpunk often incorporates many Japanese elements." William Gibson is now a frequent visitor to Japan, and he came to see that many of his visions of Japan have become a reality: Modern Japan simply was cyberpunk. The Japanese themselves knew it and delighted in it. I remember my first glimpse of Shibuya, when one of the young Tokyo journalists who had taken me there, his face drenched with the light of a thousand media-suns—all that towering, animated crawl of commercial information—said, "You see? You see? It is Blade Runner town." And it was. It so evidently was. Cyberpunk themes have appeared in many anime and manga, including the ground-breaking Appleseed, Ghost in the Shell, Ergo Proxy, Megazone 23, Goku Midnight Eye, Cyber City Oedo 808, Cyberpunk: Edgerunners, Bubblegum Crisis, A.D. Police: Dead End City, Angel Cop, Blame!, Armitage III, Texhnolyze, Psycho-Pass and No Guns Life. Influence Akira (1982 manga) and its 1988 anime film adaptation have influenced numerous works in animation, comics, film, music, television and video games. Akira has been cited as a major influence on Hollywood films such as The Matrix, Chronicle, Looper, Midnight Special, and Inception, as well as cyberpunk-influenced video games such as Hideo Kojima's Snatcher and Metal Gear Solid, Valve's Half-Life series and Dontnod Entertainment's Remember Me. Akira has also influenced the work of musicians such as Kanye West, who paid homage to Akira in the "Stronger" music video, and Lupe Fiasco, whose album Tetsuo & Youth is named after Tetsuo Shima. The popular bike from the film, Kaneda's Motorbike, appears in Steven Spielberg's film Ready Player One and CD Projekt's video game Cyberpunk 2077. Ghost in the Shell (1995) influenced a number of prominent filmmakers, most notably the Wachowskis in The Matrix (1999) and its sequels. The Matrix series took several concepts from the film, including the Matrix digital rain, which was inspired by the opening credits of Ghost in the Shell and by a sushi magazine that the wife of Simon Whiteley, the senior designer of the animation, had in the kitchen at the time, as well as the way characters access the Matrix through holes in the back of their necks. Other parallels have been drawn to James Cameron's Avatar, Steven Spielberg's A.I. Artificial Intelligence, and Jonathan Mostow's Surrogates. James Cameron cited Ghost in the Shell as a source of inspiration, noting its influence on Avatar. The original video animation Megazone 23 (1985) has a number of similarities to The Matrix. Battle Angel Alita (1990) has had a notable influence on filmmaker James Cameron, who had been planning to adapt it into a film since 2000. It was an influence on his TV series Dark Angel, and he is the producer of the 2019 film adaptation Alita: Battle Angel. Comics In 1975, artist Moebius collaborated with writer Dan O'Bannon on a story called The Long Tomorrow, published in the French magazine Métal Hurlant. One of the first works featuring elements now seen as exemplifying cyberpunk, it combined influences from film noir and hardboiled crime fiction with a distant sci-fi environment. Author William Gibson stated that Moebius' artwork for the series, along with other visuals from Métal Hurlant, strongly influenced his 1984 novel Neuromancer.
The series had a far-reaching impact on the cyberpunk genre, being cited as an influence on Ridley Scott's Alien (1979) and Blade Runner. Moebius later expanded upon The Long Tomorrow's aesthetic with The Incal, a graphic novel collaboration with Alejandro Jodorowsky published from 1980 to 1988. The story centers on the exploits of a detective named John Difool in various science fiction settings, and while not confined to the tropes of cyberpunk, it features many elements of the genre. Concurrently with many other foundational cyberpunk works, DC Comics published Frank Miller's six-issue miniseries Rōnin from 1983 to 1984. The series, incorporating aspects of Samurai culture, martial arts films and manga, is set in a dystopian near-future New York. It explores the link between an ancient Japanese warrior and the apocalyptic, crumbling cityscape he finds himself in. The comic also bears several similarities to Akira, with highly powerful telepaths playing central roles, as well as sharing many key visuals. Rōnin would go on to influence many later works, including Samurai Jack and the Teenage Mutant Ninja Turtles, as well as video games such as Cyberpunk 2077. Two years later, Miller himself would incorporate several toned-down elements of Rōnin into his acclaimed 1986 miniseries The Dark Knight Returns, in which a retired Bruce Wayne once again takes up the mantle of Batman in a Gotham that is becoming increasingly dystopian. Paul Pope's Batman: Year 100, published in 2006, also exhibits several traits typical of cyberpunk fiction, such as a rebel protagonist opposing a future authoritarian state, and a distinct retrofuturist aesthetic that makes callbacks to both The Dark Knight Returns and Batman's original appearances in the 1940s. Games There are many cyberpunk video games. Popular series include Final Fantasy VII and its spin-offs and remake, the Megami Tensei series, Kojima's Snatcher and Metal Gear series, the Deus Ex series, the Syndicate series, and System Shock and its sequel. Other games, like Blade Runner, Ghost in the Shell, and the Matrix series, are based upon genre movies, or role-playing games (for instance the various Shadowrun games). Several RPGs called Cyberpunk exist: Cyberpunk, Cyberpunk 2020, Cyberpunk v3.0 and Cyberpunk Red written by Mike Pondsmith and published by R. Talsorian Games, and GURPS Cyberpunk, published by Steve Jackson Games as a module of the GURPS family of RPGs. Cyberpunk 2020 was designed with the settings of William Gibson's writings in mind, and to some extent with his approval, unlike the approach taken by FASA in producing the transgenre Shadowrun game and its various sequels, which mix cyberpunk with fantasy elements such as magic and fantasy races such as orcs and elves. Both are set in the near future, in a world where cybernetics are prominent. In addition, Iron Crown Enterprises released an RPG named Cyberspace, which was out of print for several years until recently being re-released in online PDF form. CD Projekt Red released Cyberpunk 2077, a cyberpunk open world first-person shooter/role-playing video game (RPG) based on the tabletop RPG Cyberpunk 2020, on December 10, 2020. In 1990, in a convergence of cyberpunk art and reality, the United States Secret Service raided Steve Jackson Games's headquarters and confiscated all their computers.
Officials denied that the target had been the GURPS Cyberpunk sourcebook, but Jackson would later write that he and his colleagues "were never able to secure the return of the complete manuscript; [...] The Secret Service at first flatly refused to return anything – then agreed to let us copy files, but when we got to their office, restricted us to one set of out-of-date files – then agreed to make copies for us, but said "tomorrow" every day from March 4 to March 26. On March 26 we received a set of disks which purported to be our files, but the material was late, incomplete and well-nigh useless." Steve Jackson Games won a lawsuit against the Secret Service, aided by the new Electronic Frontier Foundation. This event has achieved a sort of notoriety, which has extended to the book itself as well. All published editions of GURPS Cyberpunk have a tagline on the front cover, which reads "The book that was seized by the U.S. Secret Service!" Inside, the book provides a summary of the raid and its aftermath. Cyberpunk has also inspired several tabletop, miniature and board games such as Necromunda by Games Workshop. Netrunner is a collectible card game introduced in 1996, based on the Cyberpunk 2020 role-playing game. Tokyo NOVA, debuting in 1993, is a cyberpunk role-playing game that uses playing cards instead of dice. Cyberpunk 2077 set a new record for the largest number of simultaneous players in a single-player game, with a record 1,003,262 players just after the December 10th launch, according to Steam Database. That tops the previous Steam record of 472,962 players set by Fallout 4 back in 2015. Music Invariably, the origin of cyberpunk music lies in the synthesizer-heavy scores of cyberpunk films such as Escape from New York (1981) and Blade Runner (1982). Some musicians and acts have been classified as cyberpunk due to their aesthetic style and musical content. Often dealing with dystopian visions of the future or biomechanical themes, some fit more squarely in the category than others. Bands whose music has been classified as cyberpunk include Psydoll, Front Line Assembly, Clock DVA, Angelspit and Sigue Sigue Sputnik. Some musicians not normally associated with cyberpunk have at times been inspired to create concept albums exploring such themes. Albums such as the British musician and songwriter Gary Numan's Replicas, The Pleasure Principle and Telekon were heavily inspired by the works of Philip K. Dick. Kraftwerk's The Man-Machine and Computer World albums both explored the theme of humanity becoming dependent on technology. Nine Inch Nails' concept album Year Zero also fits into this category. Fear Factory's concept albums are heavily based upon future dystopia, cybernetics, the clash between man and machines, and virtual worlds. Billy Idol's Cyberpunk drew heavily from cyberpunk literature and the cyberdelic counterculture in its creation. 1. Outside, a concept album by David Bowie built around a cyberpunk narrative, was warmly met by critics upon its release in 1995. Many musicians have also taken inspiration from specific cyberpunk works or authors, including Sonic Youth, whose albums Sister and Daydream Nation take influence from the works of Philip K. Dick and William Gibson respectively. Madonna's 2001 Drowned World Tour opened with a cyberpunk section, where costumes, aesthetics and stage props were used to accentuate the dystopian nature of the theatrical concert. Lady Gaga used a cyberpunk persona and visual style for her sixth studio album Chromatica (2020).
Vaporwave and synthwave are also influenced by cyberpunk. The former is inspired by one of the messages of cyberpunk and has been interpreted as a dystopian critique of capitalism in that vein, while the latter is more surface-level, inspired only by cyberpunk's aesthetic as a nostalgic, retrofuturistic revival of aspects of the genre's origins. Social impact Art and architecture Writers David Suzuki and Holly Dressel describe the cafes, brand-name stores and video arcades of the Sony Center in the Potsdamer Platz public square of Berlin, Germany, as "a vision of a cyberpunk, corporate urban future". Society and counterculture Several subcultures have been inspired by cyberpunk fiction. These include the cyberdelic counterculture of the late 1980s and early 1990s. Cyberdelic, whose adherents referred to themselves as "cyberpunks", attempted to blend the psychedelic art and drug movement with the technology of cyberculture. Early adherents included Timothy Leary, Mark Frauenfelder and R. U. Sirius. The movement largely faded following the dot-com bubble implosion of 2000. Cybergoth is a fashion and dance subculture which draws its inspiration from cyberpunk fiction, as well as rave and Gothic subcultures. In addition, a distinct cyberpunk fashion of its own has emerged in recent years which rejects the raver and goth influences of cybergoth, and draws inspiration from urban street fashion, "post apocalypse", functional clothing, high tech sports wear, tactical uniforms and multifunctional gear. This fashion goes by names like "tech wear", "goth ninja" or "tech ninja". The Kowloon Walled City in Hong Kong (demolished in 1994) is often referenced as the model cyberpunk/dystopian slum, as its poor living conditions at the time, coupled with the city's political, physical, and economic isolation, have caused many in academia to be fascinated by the ingenuity of its spawning. Related genres As a wider variety of writers began to work with cyberpunk concepts, new subgenres of science fiction emerged, some of which could be considered as playing off the cyberpunk label, others which could be considered as legitimate explorations into newer territory. These focused on technology and its social effects in different ways. One prominent subgenre is "steampunk," which is set in an alternate history Victorian era that combines anachronistic technology with cyberpunk's bleak film noir world view. The term was originally coined around 1987 as a joke to describe some of the novels of Tim Powers, James P. Blaylock, and K.W. Jeter, but by the time Gibson and Sterling entered the subgenre with their collaborative novel The Difference Engine, the term was being used earnestly as well. Another subgenre is "biopunk" (cyberpunk themes dominated by biotechnology) from the early 1990s, a derivative style building on biotechnology rather than informational technology. In these stories, people are changed in some way not by mechanical means, but by genetic manipulation. Cyberpunk works have been described as well situated within postmodern literature. Registered trademark status In the United States, the term "Cyberpunk" is a trademark registered by R. Talsorian Games Inc. for its tabletop role-playing game. Within the European Union, the "Cyberpunk" trademark is owned by two parties: CD Projekt SA for "games and online gaming services" (particularly for the video game adaptation of the former) and Sony Music for use outside games.
See also Corporate warfare Cyborg Digital dystopia Postcyberpunk Posthumanization Steampunk Solarpunk Transhumanism Type 1 civilization Utopian and dystopian fiction References External links Cyberpunk on The Encyclopedia of Science Fiction The Cyberpunk Directory—Comprehensive directory of cyberpunk resources Cyberpunk Media Archive Archive of cyberpunk media The Cyberpunk Project—A project dedicated toward maintaining a cyberpunk database, library, and other information cyberpunks.com A website dedicated to cyberpunk themed news and media Cyberpunk Dystopian fiction Subcultures Postmodernism Postmodern art Science fiction culture 1960s neologisms Government by algorithm in fiction
2,560
5,705
https://en.wikipedia.org/wiki/Continuum%20hypothesis
Continuum hypothesis
In mathematics, the continuum hypothesis (abbreviated CH) is a hypothesis about the possible sizes of infinite sets. It states that there is no set whose cardinality is strictly between that of the integers and that of the real numbers, or equivalently, that every subset of the real numbers is either countable or has the same cardinality as the real numbers. In Zermelo–Fraenkel set theory with the axiom of choice (ZFC), this is equivalent to the following equation in aleph numbers: $2^{\aleph_0} = \aleph_1$, or even shorter with beth numbers: $\beth_1 = \aleph_1$. The continuum hypothesis was advanced by Georg Cantor in 1878, and establishing its truth or falsehood is the first of Hilbert's 23 problems presented in 1900. The answer to this problem is independent of ZFC, so that either the continuum hypothesis or its negation can be added as an axiom to ZFC set theory, with the resulting theory being consistent if and only if ZFC is consistent. This independence was proved in 1963 by Paul Cohen, complementing earlier work by Kurt Gödel in 1940. The name of the hypothesis comes from the term the continuum for the real numbers. The problem is one of the most important problems in mathematics. History Cantor believed the continuum hypothesis to be true and for many years tried in vain to prove it. It became the first on David Hilbert's list of important open questions that was presented at the International Congress of Mathematicians in the year 1900 in Paris. Axiomatic set theory was at that point not yet formulated. Kurt Gödel proved in 1940 that the negation of the continuum hypothesis, i.e., the existence of a set with intermediate cardinality, could not be proved in standard set theory. The second half of the independence of the continuum hypothesis – i.e., unprovability of the nonexistence of an intermediate-sized set – was proved in 1963 by Paul Cohen. Cardinality of infinite sets Two sets are said to have the same cardinality or cardinal number if there exists a bijection (a one-to-one correspondence) between them. Intuitively, for two sets S and T to have the same cardinality means that it is possible to "pair off" elements of S with elements of T in such a fashion that every element of S is paired off with exactly one element of T and vice versa. Hence, the set {banana, apple, pear} has the same cardinality as {yellow, red, green}. With infinite sets such as the set of integers or rational numbers, the existence of a bijection between two sets becomes more difficult to demonstrate. The rational numbers seemingly form a counterexample to the continuum hypothesis: the integers form a proper subset of the rationals, which themselves form a proper subset of the reals, so intuitively, there are more rational numbers than integers and more real numbers than rational numbers. However, this intuitive analysis is flawed; it does not take proper account of the fact that all three sets are infinite. It turns out the rational numbers can actually be placed in one-to-one correspondence with the integers, and therefore the set of rational numbers is the same size (cardinality) as the set of integers: they are both countable sets. Cantor gave two proofs that the cardinality of the set of integers is strictly smaller than that of the set of real numbers (see Cantor's first uncountability proof and Cantor's diagonal argument). His proofs, however, give no indication of the extent to which the cardinality of the integers is less than that of the real numbers. Cantor proposed the continuum hypothesis as a possible solution to this question. The continuum hypothesis states that the set of real numbers has minimal possible cardinality which is greater than the cardinality of the set of integers.
That is, every set, S, of real numbers can either be mapped one-to-one into the integers or the real numbers can be mapped one-to-one into S. As the real numbers are equinumerous with the powerset of the integers ($|\mathbb{R}| = 2^{\aleph_0}$), the continuum hypothesis says that there is no set $S$ for which $\aleph_0 < |S| < 2^{\aleph_0}$. Assuming the axiom of choice, there is a unique smallest cardinal number $\aleph_1$ greater than $\aleph_0$, and the continuum hypothesis is in turn equivalent to the equality $2^{\aleph_0} = \aleph_1$. Independence from ZFC The independence of the continuum hypothesis (CH) from Zermelo–Fraenkel set theory (ZF) follows from combined work of Kurt Gödel and Paul Cohen. Gödel showed that CH cannot be disproved from ZF, even if the axiom of choice (AC) is adopted (making ZFC). Gödel's proof shows that CH and AC both hold in the constructible universe L, an inner model of ZF set theory, assuming only the axioms of ZF. The existence of an inner model of ZF in which additional axioms hold shows that the additional axioms are consistent with ZF, provided ZF itself is consistent. The latter condition cannot be proved in ZF itself, due to Gödel's incompleteness theorems, but is widely believed to be true and can be proved in stronger set theories. Cohen showed that CH cannot be proven from the ZFC axioms, completing the overall independence proof. To prove his result, Cohen developed the method of forcing, which has become a standard tool in set theory. Essentially, this method begins with a model of ZF in which CH holds, and constructs another model which contains more sets than the original, in a way that CH does not hold in the new model. Cohen was awarded the Fields Medal in 1966 for his proof. The independence proof just described shows that CH is independent of ZFC. Further research has shown that CH is independent of all known large cardinal axioms in the context of ZFC. Moreover, it has been shown that the cardinality of the continuum can be any cardinal consistent with König's theorem. A result of Solovay, proved shortly after Cohen's result on the independence of the continuum hypothesis, shows that in any model of ZFC, if $\kappa$ is a cardinal of uncountable cofinality, then there is a forcing extension in which $2^{\aleph_0} = \kappa$. However, per König's theorem, it is not consistent to assume $2^{\aleph_0}$ is $\aleph_\omega$ or $\aleph_{\omega_1+\omega}$ or any other cardinal with cofinality $\omega$. The continuum hypothesis is closely related to many statements in analysis, point set topology and measure theory. As a result of its independence, many substantial conjectures in those fields have subsequently been shown to be independent as well. The independence from ZFC means that proving or disproving CH within ZFC is impossible. However, Gödel and Cohen's negative results are not universally accepted as disposing of all interest in the continuum hypothesis. Hilbert's problem remains an active topic of research; see Woodin and Peter Koellner for an overview of the current research status. The continuum hypothesis was not the first statement shown to be independent of ZFC. An immediate consequence of Gödel's incompleteness theorem, which was published in 1931, is that there is a formal statement (one for each appropriate Gödel numbering scheme) expressing the consistency of ZFC that is independent of ZFC, assuming that ZFC is consistent. The continuum hypothesis and the axiom of choice were among the first mathematical statements shown to be independent of ZF set theory.
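The two halves of the independence result described above can be summarized as relative-consistency statements. The following display is a compact restatement in standard notation (the $\operatorname{Con}(\cdot)$ shorthand for "is consistent" is conventional, but the phrasing of this summary is ours rather than a quotation from Gödel or Cohen):
\[
\operatorname{Con}(\mathrm{ZFC}) \implies \operatorname{Con}(\mathrm{ZFC} + \mathrm{CH}) \quad\text{(Gödel 1940, via the constructible universe } L\text{)}
\]
\[
\operatorname{Con}(\mathrm{ZFC}) \implies \operatorname{Con}(\mathrm{ZFC} + \neg\mathrm{CH}) \quad\text{(Cohen 1963, via forcing)}
\]
Taken together, these mean that neither CH nor its negation can be derived from the ZFC axioms, provided ZFC itself is consistent.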
Arguments for and against the continuum hypothesis Gödel believed that CH is false, and that his proof that CH is consistent with ZFC only shows that the Zermelo–Fraenkel axioms do not adequately characterize the universe of sets. Gödel was a platonist and therefore had no problems with asserting the truth and falsehood of statements independent of their provability. Cohen, though a formalist, also tended towards rejecting CH. Historically, mathematicians who favored a "rich" and "large" universe of sets were against CH, while those favoring a "neat" and "controllable" universe favored CH. Parallel arguments were made for and against the axiom of constructibility, which implies CH. More recently, Matthew Foreman has pointed out that ontological maximalism can actually be used to argue in favor of CH, because among models that have the same reals, models with "more" sets of reals have a better chance of satisfying CH. Another viewpoint is that the conception of set is not specific enough to determine whether CH is true or false. This viewpoint was advanced as early as 1923 by Skolem, even before Gödel's first incompleteness theorem. Skolem argued on the basis of what is now known as Skolem's paradox, and it was later supported by the independence of CH from the axioms of ZFC since these axioms are enough to establish the elementary properties of sets and cardinalities. In order to argue against this viewpoint, it would be sufficient to demonstrate new axioms that are supported by intuition and resolve CH in one direction or another. Although the axiom of constructibility does resolve CH, it is not generally considered to be intuitively true any more than CH is generally considered to be false. At least two other axioms have been proposed that have implications for the continuum hypothesis, although these axioms have not currently found wide acceptance in the mathematical community. In 1986, Chris Freiling presented an argument against CH by showing that the negation of CH is equivalent to Freiling's axiom of symmetry, a statement derived by arguing from particular intuitions about probabilities. Freiling believes this axiom is "intuitively true" but others have disagreed. A difficult argument against CH developed by W. Hugh Woodin has attracted considerable attention since the year 2000. Foreman does not reject Woodin's argument outright but urges caution. Woodin proposed a new hypothesis that he labeled the (*)-axiom, or "Star axiom". The Star axiom would imply that $2^{\aleph_0}$ is $\aleph_2$, thus falsifying CH. The Star axiom was bolstered by an independent May 2021 proof showing the Star axiom can be derived from a variation of Martin's maximum. However, Woodin stated in the 2010s that he now instead believes CH to be true, based on his belief in his new "ultimate L" conjecture. Solomon Feferman has argued that CH is not a definite mathematical problem. He proposes a theory of "definiteness" using a semi-intuitionistic subsystem of ZF that accepts classical logic for bounded quantifiers but uses intuitionistic logic for unbounded ones, and suggests that a proposition $\phi$ is mathematically "definite" if the semi-intuitionistic theory can prove $(\phi \lor \neg\phi)$. He conjectures that CH is not definite according to this notion, and proposes that CH should, therefore, be considered not to have a truth value. Peter Koellner wrote a critical commentary on Feferman's article.
Joel David Hamkins proposes a multiverse approach to set theory and argues that "the continuum hypothesis is settled on the multiverse view by our extensive knowledge about how it behaves in the multiverse, and, as a result, it can no longer be settled in the manner formerly hoped for". In a related vein, Saharon Shelah wrote that he does "not agree with the pure Platonic view that the interesting problems in set theory can be decided, that we just have to discover the additional axiom. My mental picture is that we have many possible set theories, all conforming to ZFC". The generalized continuum hypothesis The generalized continuum hypothesis (GCH) states that if an infinite set's cardinality lies between that of an infinite set S and that of the power set of S, then it has the same cardinality as either S or the power set of S. That is, for any infinite cardinal $\lambda$ there is no cardinal $\kappa$ such that $\lambda < \kappa < 2^{\lambda}$. GCH is equivalent to: $\aleph_{\alpha+1} = 2^{\aleph_\alpha}$ for every ordinal $\alpha$ (occasionally called Cantor's aleph hypothesis). The beth numbers provide an alternate notation for this condition: $\aleph_\alpha = \beth_\alpha$ for every ordinal $\alpha$. The continuum hypothesis is the special case for the ordinal $\alpha = 1$. GCH was first suggested by Philip Jourdain. For the early history of GCH, see Moore. Like CH, GCH is also independent of ZFC, but Sierpiński proved that ZF + GCH implies the axiom of choice (AC) (and therefore the negation of the axiom of determinacy, AD), so choice and GCH are not independent in ZF; there are no models of ZF in which GCH holds and AC fails. To prove this, Sierpiński showed GCH implies that every cardinality n is smaller than some aleph number, and thus can be ordered. This is done by showing that n is smaller than $2^{\aleph_0 + n}$, which is smaller than its own Hartogs number—this uses the equality $2^{\aleph_0 + n} = 2 \cdot 2^{\aleph_0 + n}$; for the full proof, see Gillman. Kurt Gödel showed that GCH is a consequence of ZF + V=L (the axiom that every set is constructible relative to the ordinals), and is therefore consistent with ZFC. As GCH implies CH, Cohen's model in which CH fails is a model in which GCH fails, and thus GCH is not provable from ZFC. W. B. Easton used the method of forcing developed by Cohen to prove Easton's theorem, which shows it is consistent with ZFC for arbitrarily large cardinals $\aleph_\alpha$ to fail to satisfy $2^{\aleph_\alpha} = \aleph_{\alpha+1}$. Much later, Foreman and Woodin proved that (assuming the consistency of very large cardinals) it is consistent that $2^{\kappa} > \kappa^{+}$ holds for every infinite cardinal $\kappa$. Later Woodin extended this by showing the consistency of $2^{\kappa} = \kappa^{++}$ for every $\kappa$. Carmi Merimovich showed that, for each n ≥ 1, it is consistent with ZFC that for each κ, $2^{\kappa}$ is the nth successor of κ. On the other hand, László Patai proved that if γ is an ordinal and for each infinite cardinal κ, $2^{\kappa}$ is the γth successor of κ, then γ is finite. For any infinite sets A and B, if there is an injection from A to B then there is an injection from subsets of A to subsets of B. Thus for any infinite cardinals A and B, $A < B \to 2^{A} \le 2^{B}$. If A and B are finite, the stronger inequality $A < B \to 2^{A} < 2^{B}$ holds. GCH implies that this strict, stronger inequality holds for infinite cardinals as well as finite cardinals. Implications of GCH for cardinal exponentiation Although the generalized continuum hypothesis refers directly only to cardinal exponentiation with 2 as the base, one can deduce from it the values of cardinal exponentiation in all cases. GCH implies that: $\aleph_\alpha^{\aleph_\beta} = \aleph_{\beta+1}$ when α ≤ β+1; $\aleph_\alpha^{\aleph_\beta} = \aleph_\alpha$ when β+1 < α and $\aleph_\beta < \operatorname{cf}(\aleph_\alpha)$, where cf is the cofinality operation; and $\aleph_\alpha^{\aleph_\beta} = \aleph_{\alpha+1}$ when β+1 < α and $\aleph_\beta \ge \operatorname{cf}(\aleph_\alpha)$.
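A few concrete instances may help before turning to the derivations below; the particular cardinals in this illustration are our own choices, not examples taken from the sources cited above. Assuming GCH:
\[
\aleph_2^{\aleph_3} = \aleph_4 \quad\text{(first case: } \alpha = 2 \le \beta + 1 = 4\text{)}
\]
\[
\aleph_5^{\aleph_0} = \aleph_5 \quad\text{(second case: } \beta + 1 = 1 < \alpha = 5 \text{ and } \aleph_0 < \operatorname{cf}(\aleph_5) = \aleph_5\text{)}
\]
\[
\aleph_\omega^{\aleph_0} = \aleph_{\omega+1} \quad\text{(third case: } \beta + 1 = 1 < \alpha = \omega \text{ and } \aleph_0 \ge \operatorname{cf}(\aleph_\omega) = \aleph_0\text{)}
\]
The derivations of the first and third equalities are sketched next.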
The first equality (when α ≤ β+1) follows from: $\aleph_\alpha^{\aleph_\beta} \le \aleph_{\beta+1}^{\aleph_\beta} \le (2^{\aleph_\beta})^{\aleph_\beta} = 2^{\aleph_\beta \cdot \aleph_\beta} = 2^{\aleph_\beta} = \aleph_{\beta+1}$, while: $\aleph_{\beta+1} = 2^{\aleph_\beta} \le \aleph_\alpha^{\aleph_\beta}$; The third equality (when β+1 < α and $\aleph_\beta \ge \operatorname{cf}(\aleph_\alpha)$) follows from: $\aleph_\alpha^{\aleph_\beta} \ge \aleph_\alpha^{\operatorname{cf}(\aleph_\alpha)} > \aleph_\alpha$, by König's theorem, while: $\aleph_\alpha^{\aleph_\beta} \le \aleph_\alpha^{\aleph_\alpha} \le (2^{\aleph_\alpha})^{\aleph_\alpha} = 2^{\aleph_\alpha \cdot \aleph_\alpha} = 2^{\aleph_\alpha} = \aleph_{\alpha+1}$. Where, for every γ, GCH is used for equating $2^{\aleph_\gamma}$ and $\aleph_{\gamma+1}$; the equality $\kappa \cdot \kappa = \kappa$ for infinite cardinals is used as it is equivalent to the axiom of choice. See also Absolute Infinite Beth number Cardinality Ω-logic Wetzel's problem References Sources Further reading Gödel, K.: What is Cantor's Continuum Problem?, reprinted in Benacerraf and Putnam's collection Philosophy of Mathematics, 2nd ed., Cambridge University Press, 1983. An outline of Gödel's arguments against CH. Martin, D. (1976). "Hilbert's first problem: the continuum hypothesis," in Mathematical Developments Arising from Hilbert's Problems, Proceedings of Symposia in Pure Mathematics XXVIII, F. Browder, editor. American Mathematical Society, 1976, pp. 81–92. External links Forcing (mathematics) Independence results Basic concepts in infinite set theory Hilbert's problems Infinity Hypotheses Cardinal numbers
2,562
5,711
https://en.wikipedia.org/wiki/Nepeta
Nepeta
Nepeta is a genus of flowering plants in the family Lamiaceae. The genus name is reportedly in reference to Nepete, an ancient Etruscan city. There are about 250 species. The genus is native to Europe, Asia, and Africa, and has also naturalized in North America. Some members of this group are known as catnip or catmint because of their effect on house cats – the nepetalactone contained in some Nepeta species binds to the olfactory receptors of cats, typically resulting in temporary euphoria. Description Most of the species are herbaceous perennial plants, but some are annuals. They have sturdy stems with opposite heart-shaped, green to gray-green leaves. Nepeta plants are usually aromatic in foliage and flowers. The tubular flowers can be lavender, blue, white, pink, or lilac, and spotted with tiny lavender-purple dots. The flowers are located in verticillasters grouped on spikes; or the verticillasters are arranged in opposite cymes, racemes, or panicles – toward the tip of the stems. The calyx is tubular or campanulate, they are slightly curved or straight, and the limbs are often 2-lipped with five teeth. The lower lip is larger, with 3-lobes, and the middle lobe is the largest. The flowers have 4 hairless stamens that are nearly parallel, and they ascend under the upper lip of the corolla. Two stamen are longer and stamens of pistillate flowers are rudimentary. The style protrudes outside of the mouth of the flowers. The fruits are nutlets, which are oblong-ovoid, ellipsoid, ovoid, or obovoid in shape. The surfaces of the nutlets can be slightly ribbed, smooth or warty. Selected species Some species formerly classified as Nepeta are now in the genera Dracocephalum, Glechoma, and Calamintha . Species include: Nepeta adenophyta Hedge Nepeta agrestis Loisel. Nepeta alaghezi Pojark. Nepeta alatavica Lipsky Nepeta algeriensis Noë Nepeta amicorum Rech.f. Nepeta amoena Stapf Nepeta anamurensis Gemici & Leblebici Nepeta annua Pall. Nepeta apuleji Ucria Nepeta argolica Bory & Chaub. Nepeta assadii Jamzad Nepeta assurgens Hausskn. & Bornm. Nepeta astorensis Shinwari & Chaudhri Nepeta atlantica Ball Nepeta autraniana Bornm. Nepeta azurea R.Br. ex Benth. Nepeta badachschanica Kudrjasch. Nepeta bakhtiarica Rech.f. Nepeta ballotifolia Hochst. ex A.Rich. Nepeta balouchestanica Jamzad & Ingr. Nepeta barfakensis Rech.f. Nepeta baytopii Hedge & Lamond Nepeta bazoftica Jamzad Nepeta bellevii Prain Nepeta betonicifolia C.A.Mey. Nepeta binaloudensis Jamzad Nepeta bodeana Bunge Nepeta × boissieri Willk. Nepeta bokhonica Jamzad Nepeta bombaiensis Dalzell Nepeta bornmuelleri Hausskn. ex Bornm. Nepeta botschantzevii Czern. Nepeta brachyantha Rech.f. & Edelb. Nepeta bracteata Benth. Nepeta brevifolia C.A.Mey. Nepeta bucharica Lipsky Nepeta caerulea Aiton Nepeta caesarea Boiss. Nepeta campestris Benth. Nepeta camphorata Boiss. & Heldr. Nepeta × campylantha Rech.f. Nepeta cataria L. Nepeta cephalotes Boiss. Nepeta chionophila Boiss. & Hausskn. Nepeta ciliaris Benth. Nepeta cilicica Boiss. ex Benth. Nepeta clarkei Hook.f. Nepeta coerulescens Maxim. Nepeta concolor Boiss. & Heldr. ex Benth. Nepeta conferta Hedge & Lamond Nepeta congesta Fisch. & C.A.Mey. Nepeta connata Royle ex Benth. Nepeta consanguinea Pojark. Nepeta crinita Montbret & Aucher ex Benth. Nepeta crispa Willd. Nepeta curviflora Boiss. Nepeta cyanea Steven Nepeta cyrenaica Quézel & Zaffran Nepeta czegemensis Pojark. Nepeta daenensis Boiss. Nepeta deflersiana Schweinf. ex Hedge Nepeta densiflora Kar. & Kir. 
Nepeta dentata C.Y.Wu & S.J.Hsuan Nepeta denudata Benth. Nepeta dirmencii Yild. & Dinç Nepeta discolor Royle ex Benth. Nepeta distans Royle Nepeta duthiei Prain & Mukerjee Nepeta elliptica Royle ex Benth. Nepeta elymaitica Bornm. Nepeta erecta (Royle ex Benth.) Benth. Nepeta eremokosmos Rech.f. Nepeta eremophila Hausskn. & Bornm. Nepeta eriosphaera Rech.f. & Köie Nepeta eriostachya Benth. Nepeta ernesti-mayeri Diklic & V.Nikolic Nepeta everardii S.Moore Nepeta × faassenii Bergmans ex Stearn Nepeta flavida Hub.-Mor. Nepeta floccosa Benth. Nepeta foliosa Moris Nepeta fordii Hemsl. Nepeta formosa Kudrjasch. Nepeta freitagii Rech.f. Nepeta glechomifolia (Dunn) Hedge Nepeta gloeocephala Rech.f. Nepeta glomerata Montbret & Aucher ex Benth. Nepeta glomerulosa Boiss. Nepeta glutinosa Benth. Nepeta gontscharovii Kudrjasch. Nepeta govaniana (Wall. ex Benth.) Benth. Nepeta graciliflora Benth. Nepeta granatensis Boiss. Nepeta grandiflora M.Bieb. Nepeta grata Benth. Nepeta griffithii Hedge Nepeta heliotropifolia Lam. Nepeta hemsleyana Oliv. ex Prain Nepeta henanensis C.S.Zhu Nepeta hindostana (B.Heyne ex Roth) Haines Nepeta hispanica Boiss. & Reut. Nepeta hormozganica Jamzad Nepeta humilis Benth. Nepeta hymenodonta Boiss. Nepeta isaurica Boiss. & Heldr. ex Benth. Nepeta ispahanica Boiss. Nepeta italica L. Nepeta jakupicensis Micevski Nepeta jomdaensis H.W.Li Nepeta juncea Benth. Nepeta knorringiana Pojark. Nepeta koeieana Rech.f. Nepeta kokamirica Regel Nepeta kokanica Regel Nepeta komarovii E.A.Busch Nepeta kotschyi Boiss. Nepeta kurdica Hausskn. & Bornm. Nepeta kurramensis Rech.f. Nepeta ladanolens Lipsky Nepeta laevigata (D.Don) Hand.-Mazz. Nepeta lagopsis Benth. Nepeta lamiifolia Willd. Nepeta lamiopsis Benth. ex Hook.f. Nepeta lasiocephala Benth. Nepeta latifolia DC. Nepeta leucolaena Benth. ex Hook.f. Nepeta linearis Royle ex Benth. Nepeta lipskyi Kudrjasch. Nepeta longibracteata Benth. Nepeta longiflora Vent. Nepeta longituba Pojark. Nepeta ludlow-hewittii Blakelock Nepeta macrosiphon Boiss. Nepeta mahanensis Jamzad & M.Simmonds Nepeta manchuriensis S.Moore Nepeta mariae Regel Nepeta maussarifii Lipsky Nepeta melissifolia Lam. Nepeta membranifolia C.Y.Wu Nepeta menthoides Boiss. & Buhse Nepeta meyeri Benth. Nepeta micrantha Bunge Nepeta minuticephala Jamzad Nepeta mirzayanii Rech.f. & Esfand. Nepeta mollis Benth. Nepeta monocephala Rech.f. Nepeta monticola Kudr. Nepeta multibracteata Desf. Nepeta multicaulis Mukerjee Nepeta multifida L. Nepeta natanzensis Jamzad Nepeta nawarica Rech.f. Nepeta nepalensis Spreng. Nepeta nepetella L. Nepeta nepetellae Forssk. Nepeta nepetoides (Batt. ex Pit.) Harley Nepeta nervosa Royle ex Benth. Nepeta nuda L. Nepeta obtusicrena Boiss. & Kotschy ex Hedge Nepeta odorifera Lipsky Nepeta olgae Regel Nepeta orphanidea Boiss. Nepeta pabotii Mouterde Nepeta paktiana Rech.f. Nepeta pamirensis Franch. Nepeta parnassica Heldr. & Sart. Nepeta paucifolia Mukerjee Nepeta persica Boiss. Nepeta petraea Benth. Nepeta phyllochlamys P.H.Davis Nepeta pilinux P.H.Davis Nepeta podlechii Rech.f. Nepeta podostachys Benth. Nepeta pogonosperma Jamzad & Assadi Nepeta polyodonta Rech.f. Nepeta praetervisa Rech.f. Nepeta prattii H.Lév. Nepeta prostrata Benth. Nepeta pseudokokanica Pojark. Nepeta pubescens Benth. Nepeta pungens (Bunge) Benth. Nepeta racemosa Lam. Nepeta raphanorhiza Benth. Nepeta rechingeri Hedge Nepeta rivularis Bornm. Nepeta roopiana Bordz. Nepeta rtanjensis Diklic & Milojevic Nepeta rubella A.L.Budantzev Nepeta rugosa Benth. 
Nepeta saccharata Bunge Nepeta santoana Popov Nepeta saturejoides Boiss. Nepeta schiraziana Boiss. Nepeta schmidii Rech.f. Nepeta schugnanica Lipsky Nepeta scordotis L. Nepeta septemcrenata Ehrenb. ex Benth. Nepeta sessilis C.Y.Wu & S.J.Hsuan Nepeta shahmirzadensis Assadi & Jamzad Nepeta sheilae Hedge & R.A.King Nepeta sibirica L. Nepeta sorgerae Hedge & Lamond Nepeta sosnovskyi Askerova Nepeta souliei H.Lév. Nepeta spathulifera Benth. Nepeta sphaciotica P.H.Davis Nepeta spruneri Boiss. Nepeta stachyoides Coss. ex Batt. Nepeta staintonii Hedge Nepeta stenantha Kotschy & Boiss. Nepeta stewartiana Diels Nepeta straussii Hausskn. & Bornm. Nepeta stricta (Banks & Sol.) Hedge & Lamond Nepeta suavis Stapf Nepeta subcaespitosa Jehan Nepeta subhastata Regel Nepeta subincisa Benth. Nepeta subintegra Maxim. Nepeta subsessilis Maxim. Nepeta sudanica F.W.Andrews Nepeta sulfuriflora P.H.Davis Nepeta sulphurea C. Koch Nepeta sungpanensis C.Y.Wu Nepeta supina Steven Nepeta taxkorganica Y.F.Chang Nepeta tenuiflora Diels Nepeta tenuifolia Benth. Nepeta teucriifolia Willd. Nepeta teydea Webb & Berthel. Nepeta tibestica Maire Nepeta × tmolea Boiss. Nepeta trachonitica Post Nepeta transiliensis Pojark. Nepeta trautvetteri Boiss. & Buhse Nepeta trichocalyx Greuter & Burdet Nepeta tuberosa L. Nepeta tytthantha Pojark. Nepeta uberrima Rech.f. Nepeta ucranica L. Nepeta veitchii Duthie Nepeta velutina Pojark. Nepeta viscida Boiss. Nepeta vivianii (Coss.) Bég. & Vacc. Nepeta wettsteinii Heinr.Braun Nepeta wilsonii Duthie Nepeta woodiana Hedge Nepeta yanthina Franch. Nepeta yesoensis (Franch. & Sav.) B.D.Jacks. Nepeta zandaensis H.W.Li Nepeta zangezura Grossh. Gallery Uses Cultivation Some Nepeta species are cultivated as ornamental plants. They can be drought tolerant – water conserving, often deer repellent, with long bloom periods from late spring to autumn. Some species also have repellent properties to insect pests, including aphids and squash bugs, when planted in a garden. Nepeta species are used as food plants by the larvae of some Lepidoptera (butterfly and moth) species including Coleophora albitarsella, and as nectar sources for pollinators, such as honey bees and hummingbirds. Selected ornamental species Nepeta cataria (catnip, catswort) – the "true catnip", cultivated as an ornamental plant, has become an invasive species in some habitats. Nepeta grandiflora (giant catmint, Caucasus catmint) – lusher than true catnip and has dark green leaves and dark blue flowers. Nepeta × faassenii (garden catmint) – a hybrid of garden source with gray-green foliage and lavender flowers. It is drought-tolerant and deer-resistant. The cultivar 'Walker's Low' was named Perennial of the Year for 2007 by the Perennial Plant Association. Nepeta racemosa (raceme catnip) – commonly used in landscaping. It is hardy, rated for USDA hardiness zone 5b. References Further reading External links GRIN Species Records of Nepeta [http://www.efloras.org/browse.aspx?flora_id=110&start_taxon_id=122138 Flora of Nepal: Nepeta'] Drugs.com: Catnip "Nepetalactone: What is in catnip anyway?" HowStuffWorks, Inc.: How does catnip work? Sciencedaily.com: "Catnip Repels Mosquitoes More Effectively Than DEET" – reported at the 2001 American Chemical Society meeting''. Lamiaceae genera Perennial plants Cat attractants Drought-tolerant plants Herbs Medicinal plants Garden plants of Africa Garden plants of Asia Garden plants of Europe Taxa named by Carl Linnaeus
2,565
5,724
https://en.wikipedia.org/wiki/Cape%20Breton%20Island
Cape Breton Island
Cape Breton Island is an island on the Atlantic coast of North America and part of the province of Nova Scotia, Canada. The island accounts for 18.7% of Nova Scotia's total area. Although the island is physically separated from the Nova Scotia peninsula by the Strait of Canso, the long Canso Causeway connects it to mainland Nova Scotia. The island is east-northeast of the mainland, with its northern and western coasts fronting on the Gulf of Saint Lawrence and its western coast forming the eastern limits of the Northumberland Strait. The eastern and southern coasts front the Atlantic Ocean, with its eastern coast also forming the western limits of the Cabot Strait. Its landmass slopes upward from south to north, culminating in the highlands of its northern cape. One of the world's larger saltwater lakes, Bras d'Or ("Arm of Gold" in French), dominates the island's centre. The total population at the 2016 census numbered 132,010 Cape Bretoners, which is approximately 15% of the provincial population. Cape Breton Island has experienced a decline in population of approximately 2.9% since the 2011 census. Approximately 75% of the island's population is in the Cape Breton Regional Municipality (CBRM), which includes all of Cape Breton County and is often referred to as Industrial Cape Breton. Toponymy Cape Breton Island takes its name from its easternmost point, Cape Breton. At least two theories for this name have been put forward. The first connects it to the Gascon fishing port of Capbreton. Basque whalers and fishermen traded with the Miꞌkmaq of this island from the early sixteenth century. The second connects it to the Bretons of northwestern France. A Portuguese mappa mundi of 1516–1520 includes the label "terra q(ue) foy descuberta por Bertomes" in the vicinity of the Gulf of St Lawrence, which means "land discovered by Bretons". The name "Cape Breton" first appears on a map of 1516, as C(abo) dos Bretoes, and became the general name for both the island and the cape toward the end of the 16th century. The Breton origin of the name is not universally accepted, however. William Francis Ganong argued that the Portuguese term Bertomes referred to Englishmen or Britons, and that the name should be interpreted as "Cape of the English". History Cape Breton Island's first residents were likely archaic maritime natives, ancestors of the Mi'kmaq people. These peoples and their progeny inhabited the island (known as Unama'ki) for several thousand years and continue to live there to this day. Their traditional lifestyle centred on hunting and fishing because of the unfavourable agricultural conditions of their maritime home. This ocean-centric lifestyle did, however, make them among the first Indigenous peoples to encounter European explorers and sailors fishing in the St Lawrence Estuary. The Italian explorer John Cabot (sailing for the English crown) reportedly visited the island in 1497. However, European histories and maps of the period are of too poor quality to be sure whether Cabot first visited Newfoundland or Cape Breton Island. This discovery is commemorated by Cape Breton's Cabot Trail, and by the Cabot's Landing Historic Site & Provincial Park, near the village of Dingwall. The local Mi'kmaq peoples began trading with European fishermen when the fishermen began landing in their territories as early as the 1520s. In about 1521–22, the Portuguese under João Álvares Fagundes established a fishing colony on the island. 
As many as two hundred settlers lived in a village, the name of which is not known, located according to some historians at what is now Ingonish on the island's northeastern peninsula. These fishermen traded with the local population but did not maintain a permanent settlement. This Portuguese colony's fate is unknown, but it is mentioned as late as 1570. During the Anglo-French War of 1627 to 1629, under King Charles I, the Kirkes took Quebec City; James Stewart, 4th Lord Ochiltree, planted a colony on Unama'ki at Baleine, Nova Scotia; and Alexander's son, William Alexander, 1st Earl of Stirling, established the first incarnation of "New Scotland" at Port Royal. These claims, and the larger ideals of European colonization they represented, marked the first time the island was incorporated as European territory, though it would be several decades before treaties were actually signed. However, no copies of these treaties exist. These Scottish triumphs, which left Cape Sable as the only major French holding in North America, did not last. Charles I's haste to make peace with France on the terms most beneficial to him meant the new North American gains would be bargained away in the Treaty of Saint-Germain-en-Laye, which established which European power had laid claim over the territories. The French quickly defeated the Scots at Baleine and established the first European settlements on Île Royale, at what are now Englishtown (1629) and St. Peter's (1630). These settlements lasted only one generation, until Nicolas Denys left in 1659. The island did not have any European settlers for another fifty years, until those communities, along with Louisbourg, were re-established in 1713, after which point European settlement on the island was permanent. Île Royale Known as Île Royale ("Royal Island") to the French, the island also saw active settlement by France. After the French ceded their claims to Newfoundland and the Acadian mainland to the British by the Treaty of Utrecht in 1713, the French relocated the population of Plaisance, Newfoundland, to Île Royale, and a French garrison was established in the central eastern part at Sainte Anne. As the harbour at Sainte Anne experienced icing problems, it was decided to build a much larger fortification at Louisbourg to improve defences at the entrance to the Gulf of Saint Lawrence and to defend France's fishing fleet on the Grand Banks. The French also built the Louisbourg Lighthouse in 1734, the first lighthouse in Canada and one of the first in North America. In addition to Cape Breton Island, the French colony of Île Royale also included Île Saint-Jean, today called Prince Edward Island, and Les Îles-de-la-Madeleine. Seven Years' War Louisbourg itself was one of the most important commercial and military centres in New France. Louisbourg was captured by New Englanders with British naval assistance in the Siege of Louisbourg (1745) and by British forces in 1758. The French population of Île Royale was deported to France after each siege. While French settlers returned to their homes in Île Royale after the Treaty of Aix-la-Chapelle was signed in 1748, the fortress was demolished after the second siege in 1758. Île Royale remained formally part of New France until it was ceded to Great Britain by the Treaty of Paris in 1763. It was then merged with the adjacent British colony of Nova Scotia (present-day peninsular Nova Scotia and New Brunswick). 
Acadians who had been expelled from Nova Scotia and Île Royale were permitted to settle in Cape Breton beginning in 1764, and established communities in northwestern Cape Breton, near Chéticamp, and southern Cape Breton, on and near Isle Madame. Some of the first British-sanctioned settlers on the island following the Seven Years' War were Irish, although upon settlement they merged with local French communities to form a culture rich in music and tradition. From 1763 to 1784, the island was administratively part of the colony of Nova Scotia and was governed from Halifax. The first permanently settled Scottish community on Cape Breton Island was Judique, settled in 1775 by Michael Mor MacDonald. He spent his first winter using his upside-down boat for shelter, which is reflected in the architecture of the village's Community Centre. He composed a song about the area called "O 's àlainn an t-àite", or "O, Fair is the Place." American Revolution During the American Revolution, on 1 November 1776, John Paul Jones, the father of the American Navy, set sail in command of Alfred to free hundreds of American prisoners working in the area's coal mines. Although winter conditions prevented the freeing of the prisoners, the mission did result in the capture of Mellish, a vessel carrying a vital supply of winter clothing intended for John Burgoyne's troops in Canada. Major Timothy Hierlihy and his regiment on board HMS Hope worked in and protected the coal mines at Sydney Cape Breton from privateer attacks. Sydney, Cape Breton provided a vital supply of coal for Halifax throughout the war. The British began developing the mining site at Sydney Mines in 1777. On 14 May 1778, Major Hierlihy arrived at Cape Breton. While there, Hierlihy reported that he "beat off many piratical attacks, killed some and took other prisoners." A few years into the war, there was also a naval engagement between French ships and a British convoy off Sydney, Nova Scotia, near Spanish River (1781), Cape Breton. French ships, fighting with the Americans, were re-coaling and defeated a British convoy. Six French and 17 British sailors were killed, with many more wounded. Colony of Cape Breton In 1784, Britain split the colony of Nova Scotia into three separate colonies: New Brunswick, Cape Breton Island, and present-day peninsular Nova Scotia, in addition to the adjacent colonies of St. John's Island (renamed Prince Edward Island in 1798) and Newfoundland. The colony of Cape Breton Island had its capital at Sydney on its namesake harbour fronting on Spanish Bay and the Cabot Strait. Its first Lieutenant-Governor was Joseph Frederick Wallet DesBarres (1784–1787) and his successor was William Macarmick (1787). A number of United Empire Loyalists emigrated to the Canadian colonies, including Cape Breton. David Mathews, the former Mayor of New York City during the American Revolution, emigrated with his family to Cape Breton in 1783. He succeeded Macarmick as head of the colony and served from 1795 to 1798. From 1799 to 1807, the military commandant was John Despard, brother of Edward. An order forbidding the granting of land in Cape Breton, issued in 1763, was removed in 1784. The mineral rights to the island were given over to the Duke of York by an order-in-council. The British government had intended that the Crown take over the operation of the mines when Cape Breton was made a colony, but this was never done, probably because of the rehabilitation cost of the mines. 
The mines were in a neglected state, caused by careless operations dating back at least to the time of the final fall of Louisbourg in 1758. Large-scale shipbuilding began in the 1790s, beginning with schooners for local trade, moving in the 1820s to larger brigs and brigatines, mostly built for British ship owners. Shipbuilding peaked in the 1850s, marked in 1851 by the full-rigged ship Lord Clarendon, which was the largest wooden ship ever built in Cape Breton. Merger with Nova Scotia In 1820, the colony of Cape Breton Island was merged for the second time with Nova Scotia. This development is one of the factors which led to large-scale industrial development in the Sydney Coal Field of eastern Cape Breton County. By the late 19th century, as a result of the faster shipping, expanding fishery and industrialization of the island, exchanges of people between the island of Newfoundland and Cape Breton increased, beginning a cultural exchange that continues to this day. The 1920s were some of the most violent times in Cape Breton. They were marked by several severe labour disputes. The famous murder of William Davis by strike breakers, and the seizing of the New Waterford power plant by striking miners led to a major union sentiment that persists to this day in some circles. William Davis Miners' Memorial Day continues to be celebrated in coal mining towns to commemorate the deaths of miners at the hands of the coal companies. 20th century The turn of the 20th century saw Cape Breton Island at the forefront of scientific achievement with the now-famous activities launched by inventors Alexander Graham Bell and Guglielmo Marconi. Following his successful invention of the telephone and being relatively wealthy, Bell acquired land near Baddeck in 1885. He chose the land, which he named Beinn Bgreagh, largely due to its resemblance to his early surroundings in Scotland. He established a summer estate complete with research laboratories, working with deaf people including Helen Keller, and continued to invent. Baddeck would be the site of his experiments with hydrofoil technologies as well as the Aerial Experiment Association, financed by his wife Mabel Gardiner Hubbard. These efforts resulted in the first powered flight in Canada when the AEA Silver Dart took off from the ice-covered waters of Bras d'Or Lake. Bell also built the forerunner to the iron lung and experimented with breeding sheep. Marconi's contributions to Cape Breton Island were also quite significant, as he used the island's geography to his advantage in transmitting the first North American trans-Atlantic radio message from a station constructed at Table Head in Glace Bay to a receiving station at Poldhu in Cornwall, England. Marconi's pioneering work in Cape Breton marked the beginning of modern radio technology. Marconi's station at Marconi Towers, on the outskirts of Glace Bay, became the chief communication centre for the Royal Canadian Navy in World War I through to the early years of World War II. Promotions for tourism beginning in the 1950s recognized the importance of the Scottish culture to the province, as the provincial government started encouraging the use of Gaelic once again. The establishment of funding for the Gaelic College of Celtic Arts and Crafts and formal Gaelic language courses in public schools are intended to address the near-loss of this culture to assimilation into Anglophone Canadian culture. 
In the 1960s, the Fortress of Louisbourg was partially reconstructed by Parks Canada, using the labour of unemployed coal miners. Since 2009, this National Historic Site of Canada has attracted an average of 90,000 visitors per year. Geography The irregularly shaped island is roughly rectangular, about 100 km wide and 150 km long, for a total of in area. It lies in the southeastern extremity of the Gulf of St. Lawrence. Cape Breton is separated from the Nova Scotia peninsula by the very deep Strait of Canso. The island is joined to the mainland by the Canso Causeway. Cape Breton Island is composed of rocky shores, rolling farmland, glacial valleys, barren headlands, highlands, woods and plateaus. Geology The island is characterized by a number of elevations of ancient crystalline and metamorphic rock rising up from the south to the north, contrasted with eroded lowlands. The bedrock is made up of blocks that developed in different places around the globe, at different times, and that were then fused together by tectonics. Cape Breton is formed from three terranes. These are fragments of the earth's crust formed on a tectonic plate and attached by accretion or suture to crust lying on another plate. Each of these has its own distinctive geologic history, which is different from that of the surrounding areas. The southern half of the island formed from the Avalon terrane, which was once a microcontinent in the Paleozoic era. It is made up of volcanic rock that formed near what is now called Africa. Most of the northern half of the island is on the Bras d'Or terrane (part of the Ganderia terrane). It contains volcanic and sedimentary rock formed off the coast of what is now South America. The third terrane is the relatively small Blair River inlier on the far northwestern tip. It contains the oldest rock in the Maritimes, formed up to 1.6 billion years ago. These rocks, which can be seen in the Polletts Cove - Aspy Fault Wilderness Area north of Pleasant Bay, are likely part of the Canadian Shield, a large area of Precambrian igneous and metamorphic rock that forms the core of the North American continent. The Avalon and Bras d'Or terranes were pushed together about 500 million years ago when the supercontinent Gondwana was formed. The Blair River inlier was sandwiched in between the two when Laurussia was formed 450–360 million years ago, at which time the land lay in the tropics. This collision also formed the Appalachian Mountains. Associated rifting and faulting is now visible as the canyons of the Cape Breton Highlands. Then, during the Carboniferous period, the area was flooded, which created sedimentary rock layers such as sandstone, shale, gypsum, and conglomerate. Later, most of the island was covered by tropical forest, which eventually formed coal deposits. Much later, the land was shaped by repeated ice ages, which left striations and till, formed U-shaped valleys, and carved the Bras d'Or Lake from the bedrock. Examples of U-shaped valleys are those of the Chéticamp, Grande Anse, and Clyburn River valleys. Other valleys have been eroded by water, forming V-shaped valleys and canyons. Cape Breton has many fault lines but few earthquakes. Since the North American continent is moving westward, earthquakes tend to occur on the western edge of the continent. Climate The warm-summer humid continental climate is moderated by the proximity of the cold, oftentimes polar Labrador Current and its warmer counterpart the Gulf Stream, both being dominant currents in the North Atlantic Ocean. 
Ecology Lowlands There are lowland areas along the western shore, around Lake Ainslie, the Bras d'Or watershed, Boularderie Island, and the Sydney coalfield. They include salt marshes, coastal beaches, and freshwater wetlands. Starting in the 1800s, many areas were cleared for farming or timber. Many farms were abandoned from the 1920s to the 1950s, with fields being reclaimed by white spruce, red maple, white birch, and balsam fir. Higher slopes are dominated by yellow birch and sugar maple. In sheltered areas with sun and drainage, Acadian forest is found. Wetter areas have tamarack and black spruce. The weather station at Ingonish records more rain than anywhere else in Nova Scotia. Behind barrier beaches and dunes at Aspy Bay are salt marshes. The Aspy, Clyburn, and Ingonish rivers have all created floodplains which support populations of black ash, fiddlehead fern, swamp loosestrife, swamp milkweed, southern twayblade, and bloodroot. Red sandstone and white gypsum cliffs can be observed throughout this area. Bedrock is Carboniferous sedimentary rock with limestone, shale, and sandstone. Many fluvial remains from glaciation are found here. Mining has been ongoing for centuries, and more than 500 mine openings can be found, mainly in the east. Karst topography is found in Dingwall, South Harbour, Plaster Provincial Park, along the Margaree and Middle Rivers, and along the north shore of Lake Ainslie. The presence of gypsum and limestone increases soil pH and produces some rich wetlands which support giant spear, tufted fen, and other mosses, as well as vascular plants like sedges. Cape Breton Hills This ecosystem is spread throughout Cape Breton and is defined as hills and slopes 150–300 m above sea level, typically covered with Acadian forest. It includes North Mountain, Kellys Mountain, and East Bay Hills. Forests in this area were cleared for timber and agriculture and are now a mosaic of habitats depending on the local terrain, soils and microclimate. Typical species include ironwood, white ash, beech, sugar maple, red maple, and yellow birch. The understory can include striped maple, beaked hazelnut, fly honeysuckle, club mosses and ferns. Ephemerals, such as Dutchman's breeches and spring beauty, are visible in the spring. In ravines, shade-tolerant trees such as hemlock, white pine, and red spruce are found. Less well-drained areas are forested with balsam fir and black spruce. Highlands and the Northern Plateau The Highlands comprise a tableland in the northern portions of Inverness and Victoria counties. An extension of the Appalachian mountain chain, the plateau averages 350 metres in elevation at its edges and rises to more than 500 metres at the centre. The area has broad, gently rolling hills bisected by deep valleys and steep-walled canyons. A majority of the land is a taiga of balsam fir, with some white birch, white spruce, mountain ash, and heart-leaf birch. The northern and western edges of the plateau, particularly at high elevations, resemble arctic tundra. Trees 30–90 high, overgrown with reindeer lichens, can be 150 years old. At very high elevations some areas are exposed bedrock without any vegetation apart from Cladonia lichens. There are many barrens, or heaths, dominated by bushy species of the Ericaceae family. Spruce, killed by spruce budworm in the late 1970s, has reestablished at lower elevations, but not at higher elevations due to moose browsing. Decomposition is slow, leaving thick layers of plant litter. 
Ground cover includes wood aster, twinflower, liverworts, wood sorrel, bluebead lily, goldthread, various ferns, and lily-of-the-valley, with bryophytes and large-leaved goldenrod at higher elevations. The understory can include striped maple, mountain ash, ferns, and mountain maple. Near water, bog birch, alder, and mountain-ash are found. There are many open wetlands populated with stunted tamarack and black spruce. Poor drainage has led to the formation of peatlands which can support tufted clubrush, Bartram's serviceberry, coastal sedge, and bakeapple. Cape Breton coastal The eastern shore is unique in that, while not at a high elevation, it has a cool climate with much rain and fog, strong winds, and low summer temperatures. It is dominated by a boreal forest of black spruce and balsam fir. Sheltered areas support tolerant hardwoods such as white birch and red maple. Many salt marshes, fens, and bogs are found there. There are many beaches on the highly crenellated coastline. Unlike elsewhere on the island, these are rocky and support plants unlike those of sandy beaches. The coast provides habitat for common coastal bird species such as the common eider, black-legged kittiwake, black guillemot, whimbrel, and great cormorant. Hydrology Land is drained into the Gulf of Saint Lawrence via the rivers Aspy, Sydney, Mira, Framboise, Margaree, and Chéticamp. The largest freshwater lake is Lake Ainslie. Government Local government on the island is provided by the Cape Breton Regional Municipality, the Municipality of the County of Inverness, the Municipality of the County of Richmond, and the Municipality of the County of Victoria, along with the Town of Port Hawkesbury. The island has five Miꞌkmaq Indian reserves: Eskasoni (the largest in population and land area), Membertou, Wagmatcook, Waycobah, and Potlotek. Demographics The island's residents can be grouped into five main cultures: Scottish, Mi'kmaq, Acadian, Irish, and English, with respective languages Scottish Gaelic, Mi'kmaq, French, and English. English is now the primary language, including a locally distinctive Cape Breton accent, while Mi'kmaq, Scottish Gaelic and Acadian French are still spoken in some communities. Later migrations of Black Loyalists, Italians, and Eastern Europeans mostly settled in the island's eastern part around the industrial Cape Breton region. Cape Breton Island's population has been in decline for two decades, with an increasing exodus in recent years due to economic conditions. Population trend Religious groups Statistics Canada in 2001 reported a "religion" total of 145,525 for Cape Breton, including 5,245 with "no religious affiliation." Major categories included: Roman Catholic: 96,260 (includes Eastern Catholic, Polish National Catholic Church, Old Catholic) Protestant: 42,390 Christian, not included elsewhere: 580 Orthodox: 395 Jewish: 250 Muslim: 145 Economy Much of the recent economic history of Cape Breton Island can be tied to the coal industry. The island has two major coal deposits: the Sydney Coal Field, in the southeastern part of the island along the Atlantic Ocean, drove the Industrial Cape Breton economy throughout the 19th and 20th centuries—until after World War II, its industries were the largest private employers in Canada; the Inverness Coal Field, in the western part of the island along the Gulf of St. Lawrence, is significantly smaller but hosted several mines. Sydney has traditionally been the main port, with facilities in a large, sheltered, natural harbour. 
It is the island's largest commercial centre and home to the Cape Breton Post daily newspaper, as well as one television station, CJCB-TV (CTV), and several radio stations. The Marine Atlantic terminal at North Sydney is the terminal for large ferries traveling to Channel-Port aux Basques and seasonally to Argentia, both on the island of Newfoundland. Point Edward on the west side of Sydney Harbour is the location of Sydport, a former navy base () now converted to commercial use. The Canadian Coast Guard College is nearby at Westmount. Petroleum, bulk coal, and cruise ship facilities are also in Sydney Harbour. Glace Bay, the second largest urban community in population, was the island's main coal mining centre until its last mine closed in the 1980s. Glace Bay was the hub of the Sydney & Louisburg Railway and a major fishing port. At one time, Glace Bay was known as the largest town in Nova Scotia, based on population. Port Hawkesbury has risen to prominence since the completion of the Canso Causeway and Canso Canal created an artificial deep-water port, allowing extensive petrochemical, pulp and paper, and gypsum handling facilities to be established. The Strait of Canso is completely navigable to Seawaymax vessels, and Port Hawkesbury is open to the deepest-draught vessels on the world's oceans. Large marine vessels may also enter Bras d'Or Lake through the Great Bras d'Or channel, and small craft can use the Little Bras d'Or channel or St. Peters Canal. While commercial shipping no longer uses the St. Peters Canal, it remains an important waterway for recreational vessels. The industrial Cape Breton area faced several challenges with the closure of the Cape Breton Development Corporation's (DEVCO) coal mines and the Sydney Steel Corporation's (SYSCO) steel mill. In recent years, the Island's residents have tried to diversify the area economy by investing in tourism developments, call centres, and small businesses, as well as manufacturing ventures in fields such as auto parts, pharmaceuticals, and window glazings. While the Cape Breton Regional Municipality is in transition from an industrial to a service-based economy, the rest of Cape Breton Island outside the industrial area surrounding Sydney-Glace Bay has been more stable, with a mixture of fishing, forestry, small-scale agriculture, and tourism. Tourism in particular has grown throughout the post-Second World War era, especially the growth in vehicle-based touring, which was furthered by the creation of the Cabot Trail scenic drive. The scenery of the island is rivalled in northeastern North America by only Newfoundland; and Cape Breton Island tourism marketing places a heavy emphasis on its Scottish Gaelic heritage through events such as the Celtic Colours Festival, held each October, as well as promotions through the Gaelic College of Celtic Arts and Crafts. Whale-watching is a popular attraction for tourists. Whale-watching cruises are operated by vendors from Baddeck to Chéticamp. The most popular species of whale found in Cape Breton's waters is the pilot whale. The Cabot Trail is a scenic road circuit around and over the Cape Breton Highlands with spectacular coastal vistas; over 400,000 visitors drive the Cabot Trail each summer and fall. Coupled with the Fortress of Louisbourg, it has driven the growth of the tourism industry on the island in recent decades. The Condé Nast travel guide has rated Cape Breton Island as one of the world's best island destinations. 
Transport The island's primary east–west road is Highway 105, the Trans-Canada Highway, although Trunk 4 is also heavily used. Highway 125 is an important arterial route around Sydney Harbour in the Cape Breton Regional Municipality. The Cabot Trail, circling the Cape Breton Highlands, and Trunk 19, along the island's western coast, are important secondary roads. The Cape Breton and Central Nova Scotia Railway maintains railway connections between the port of Sydney and the Canadian National Railway in Truro. Cape Breton Island is served by several airports, the largest being the JA Douglas McCurdy Sydney Airport, situated on Trunk 4 between the communities of Sydney and Glace Bay, as well as smaller airports at Port Hawkesbury, Margaree, and Baddeck. Culture Language Gaelic speakers in Cape Breton, as elsewhere in Nova Scotia, constituted a large proportion of the local population from the 18th century on. They brought with them a common culture of poetry, traditional songs and tales, music and dance, and used this to develop distinctive local traditions. Most Gaelic settlement in Nova Scotia happened between 1770 and 1840, with probably over 50,000 Gaelic speakers emigrating from the Scottish Highlands and the Hebrides to Nova Scotia and Prince Edward Island. Such emigration was facilitated by changes in Gaelic society and the economy, with sharp increases in rents, confiscation of land and disruption of local customs and rights. In Nova Scotia, poetry and song in Gaelic flourished. George Emmerson argues that an "ancient and rich" tradition of storytelling, song, and Gaelic poetry emerged during the 18th century and was transplanted from the Highlands of Scotland to Nova Scotia, where the language similarly took root. The majority of those settling in Nova Scotia from the end of the 18th century through to the middle of the next were from the Scottish Highlands, rather than the Lowlands, making the Highland tradition's impact on the region more profound. Gaelic settlement in Cape Breton began in earnest in the early nineteenth century. The Gaelic language became dominant from Colchester County in the west of Nova Scotia into Cape Breton County in the east. It was reinforced in Cape Breton in the first half of the 19th century with an influx of Highland Scots numbering approximately 50,000 as a result of the Highland Clearances. From 1892 to 1904, Jonathon MacKinnon published a Scottish Gaelic-language biweekly newspaper in Sydney, Nova Scotia. During the 1920s, several Scottish Gaelic-language newspapers were printed in Sydney for distribution primarily on Cape Breton, including one that carried Gaelic-language lessons, a paper affiliated with the United Church, and a later endeavour of MacKinnon's. Gaelic speakers, however, tended to be poor; they were largely illiterate and had little access to education. This situation persisted into the early days of the twentieth century. In 1921 Gaelic was approved as an optional subject in the curriculum of Nova Scotia, but few teachers could be found and children were discouraged from using the language in schools. By 1931 the number of Gaelic speakers in Nova Scotia had fallen to approximately 25,000, mostly in discrete pockets. In Cape Breton it was still a majority language, but the proportion was falling. Children were no longer being raised with Gaelic. From 1939 on, attempts were made to strengthen its position in the public school system in Nova Scotia, but funding, official commitment and the availability of teachers continued to be a problem. 
By the 1950s the number of speakers was less than 7,000. The advent of multiculturalism in Canada in the 1960s meant that new educational opportunities became available, with a gradual strengthening of the language at secondary and tertiary level. At present several schools in Cape Breton offer Gaelic Studies and Gaelic language programs, and the language is taught at Cape Breton University. The 2016 Canadian Census shows that there are only 40 reported speakers of Gaelic as a mother tongue in Cape Breton. On the other hand, there are families and individuals who have recommenced intergenerational transmission. They include fluent speakers from Gaelic-speaking areas of Scotland and speakers who became fluent in Nova Scotia and who in some cases studied in Scotland. Other revitalization activities include adult education, community cultural events and publishing. Traditional music Cape Breton is well known for its traditional fiddle music, which was brought to North America by Scottish immigrants during the Highland Clearances. The traditional style has been well preserved in Cape Breton, and cèilidhs have become a popular attraction for tourists. Inverness County in particular has a heavy concentration of musical activity, with regular performances in communities such as Mabou and Judique. Judique is recognized as "" () or the 'Home of Celtic Music', featuring the Celtic Music Interpretive Centre. The traditional fiddle music of Cape Breton is studied by musicians around the world, where its global recognition continues to rise. Local performers who have received significant recognition outside of Cape Breton include Angus Chisholm; Buddy MacMaster; Joseph Cormier, the first Cape Breton fiddler to record an album made available in Europe (1974); Lee Cremo; Bruce Guthro; Natalie MacMaster; Ashley MacIsaac; The Rankin Family; Aselin Debison; Gordie Sampson; John Allan Cameron; and the Barra MacNeils. The Men of the Deeps are a male choral group of current and former miners from the industrial Cape Breton area. Film and television My Bloody Valentine: 1981 slasher film shot on location in Sydney Mines. The Bay Boy: 1984 semi-autobiographical drama film about growing up in Glace Bay. Margaret's Museum 1995 drama film which tells the story of a young girl living in a coal mining town where the death of men from accidents in "the pit" (the mines) has become almost routine. Pit Pony 1999 TV series about small-town life in Glace Bay in 1904. The plot line revolves around the lives of the families of the men and boys who work in the coal mines. Photo gallery See also Canadian Gaelic Cape Breton accent Cape Breton Labour Party Cape Breton Regional Municipality Provinces and territories of Canada Province of Cape Breton Island Sydney Tar Ponds Cape Breton Highlands National Park List of people from Cape Breton Notes References Further reading External links Cape Breton Island Official Travel Guide British North America Canadian Gaelic Former British colonies and protectorates in the Americas Geographic regions of Nova Scotia Islands of Nova Scotia
2,575
5,738
https://en.wikipedia.org/wiki/Cervix
Cervix
The cervix or cervix uteri (Latin, 'neck of the uterus') is the lower part of the uterus (womb) in the human female reproductive system. The cervix is usually 2 to 3 cm long (~1 inch) and roughly cylindrical in shape, which changes during pregnancy. The narrow, central cervical canal runs along its entire length, connecting the uterine cavity and the lumen of the vagina. The opening into the uterus is called the internal os, and the opening into the vagina is called the external os. The lower part of the cervix, known as the vaginal portion of the cervix (or ectocervix), bulges into the top of the vagina. The cervix has been documented anatomically since at least the time of Hippocrates, over 2,000 years ago. The cervical canal is a passage through which sperm must travel to fertilize an egg cell after sexual intercourse. Several methods of contraception, including cervical caps and cervical diaphragms, aim to block or prevent the passage of sperm through the cervical canal. Cervical mucus is used in several methods of fertility awareness, such as the Creighton model and Billings method, due to its changes in consistency throughout the menstrual period. During vaginal childbirth, the cervix must flatten and dilate to allow the fetus to progress along the birth canal. Midwives and doctors use the extent of the dilation of the cervix to assist decision-making during childbirth. The cervical canal is lined with a single layer of column-shaped cells, while the ectocervix is covered with multiple layers of cells topped with flat cells. The two types of epithelia meet at the squamocolumnar junction. Infection with the human papillomavirus (HPV) can cause changes in the epithelium, which can lead to cancer of the cervix. Cervical cytology tests can often detect cervical cancer and its precursors, and enable early successful treatment. Ways to avoid HPV include avoiding sex, using condoms, and HPV vaccination. HPV vaccines, developed in the early 21st century, reduce the risk of cervical cancer by preventing infections from the main cancer-causing strains of HPV. Structure The cervix is part of the female reproductive system. Around in length, it is the lower narrower part of the uterus continuous above with the broader upper part—or body—of the uterus. The lower end of the cervix bulges through the anterior wall of the vagina, and is referred to as the vaginal portion of cervix (or ectocervix) while the rest of the cervix above the vagina is called the supravaginal portion of cervix. A central canal, known as the cervical canal, runs along its length and connects the cavity of the body of the uterus with the lumen of the vagina. The openings are known as the internal os and external orifice of the uterus (or external os), respectively. The mucosa lining the cervical canal is known as the endocervix, and the mucosa covering the ectocervix is known as the exocervix. The cervix has an inner mucosal layer, a thick layer of smooth muscle, and posteriorly the supravaginal portion has a serosal covering consisting of connective tissue and overlying peritoneum. In front of the upper part of the cervix lies the bladder, separated from it by cellular connective tissue known as parametrium, which also extends over the sides of the cervix. To the rear, the supravaginal cervix is covered by peritoneum, which runs onto the back of the vaginal wall and then turns upwards and onto the rectum, forming the recto-uterine pouch. 
The cervix is more tightly connected to surrounding structures than the rest of the uterus. The cervical canal varies greatly in length and width between women and over the course of a woman's life, and it can measure 8 mm (0.3 inch) at its widest diameter in premenopausal adults. It is wider in the middle and narrower at each end. The anterior and posterior walls of the canal each have a vertical fold, from which ridges run diagonally upwards and laterally. These are known as palmate folds, due to their resemblance to a palm leaf. The anterior and posterior ridges are arranged in such a way that they interlock with each other and close the canal. They are often effaced after pregnancy. The ectocervix (also known as the vaginal portion of the cervix) has a convex, elliptical shape and projects into the vagina between the anterior and posterior vaginal fornices. On the rounded part of the ectocervix is a small, depressed external opening, connecting the cervix with the vagina. The size and shape of the ectocervix and the external opening (external os) can vary according to age, hormonal state, and whether vaginal childbirth has taken place. In women who have not had a vaginal delivery, the external opening is small and circular, and in women who have had a vaginal delivery, it is slit-like. On average, the ectocervix is long and wide. Blood is supplied to the cervix by the descending branch of the uterine artery and drains into the uterine vein. The pelvic splanchnic nerves, emerging as S2–S3, transmit the sensation of pain from the cervix to the brain. These nerves travel along the uterosacral ligaments, which pass from the uterus to the anterior sacrum. Three channels facilitate lymphatic drainage from the cervix. The anterior and lateral cervix drains to nodes along the uterine arteries, travelling along the cardinal ligaments at the base of the broad ligament to the external iliac lymph nodes and ultimately the paraaortic lymph nodes. The posterior and lateral cervix drains along the uterine arteries to the internal iliac lymph nodes and ultimately the paraaortic lymph nodes, and the posterior section of the cervix drains to the obturator and presacral lymph nodes. However, there are variations, as lymphatic drainage from the cervix travels to different sets of pelvic nodes in some people. This has implications in scanning nodes for involvement in cervical cancer. After menstruation and directly under the influence of estrogen, the cervix undergoes a series of changes in position and texture. During most of the menstrual cycle, the cervix remains firm, and is positioned low and closed. However, as ovulation approaches, the cervix becomes softer and rises and opens in response to the higher levels of estrogen present. These changes are also accompanied by changes in cervical mucus, described below. Development As a component of the female reproductive system, the cervix is derived from the two paramesonephric ducts (also called Müllerian ducts), which develop around the sixth week of embryogenesis. During development, the outer parts of the two ducts fuse, forming a single urogenital canal that will become the vagina, cervix and uterus. The cervix grows at a slower rate than the body of the uterus, so its relative size decreases over time: it is much larger than the body of the uterus in fetal life and about twice as large during childhood, but smaller than the uterus at its adult size, reached after puberty. 
Previously it was thought that during fetal development, the original squamous epithelium of the cervix is derived from the urogenital sinus and the original columnar epithelium is derived from the paramesonephric duct. The point at which these two original epithelia meet is called the original squamocolumnar junction. New studies show, however, that all the cervical as well as large part of the vaginal epithelium are derived from Müllerian duct tissue and that phenotypic differences might be due to other causes. Histology The endocervical mucosa is about thick and lined with a single layer of columnar mucous cells. It contains numerous tubular mucous glands, which empty viscous alkaline mucus into the lumen. In contrast, the ectocervix is covered with nonkeratinized stratified squamous epithelium, which resembles the squamous epithelium lining the vagina. The junction between these two types of epithelia is called the squamocolumnar junction. Underlying both types of epithelium is a tough layer of collagen. The mucosa of the endocervix is not shed during menstruation. The cervix has more fibrous tissue, including collagen and elastin, than the rest of the uterus. In prepubertal girls, the functional squamocolumnar junction is present just within the cervical canal. Upon entering puberty, due to hormonal influence, and during pregnancy, the columnar epithelium extends outward over the ectocervix as the cervix everts. Hence, this also causes the squamocolumnar junction to move outwards onto the vaginal portion of the cervix, where it is exposed to the acidic vaginal environment. The exposed columnar epithelium can undergo physiological metaplasia and change to tougher metaplastic squamous epithelium in days or weeks, which is very similar to the original squamous epithelium when mature. The new squamocolumnar junction is therefore internal to the original squamocolumnar junction, and the zone of unstable epithelium between the two junctions is called the transformation zone of the cervix. Histologically, the transformation zone is generally defined as surface squamous epithelium with surface columnar epithelium or stromal glands/crypts, or both. After menopause, the uterine structures involute and the functional squamocolumnar junction moves into the cervical canal. Nabothian cysts (or Nabothian follicles) form in the transformation zone where the lining of metaplastic epithelium has replaced mucous epithelium and caused a strangulation of the outlet of some of the mucous glands. A buildup of mucus in the glands forms Nabothian cysts, usually less than about in diameter, which are considered physiological rather than pathological. Both gland openings and Nabothian cysts are helpful to identify the transformation zone. Function Fertility The cervical canal is a pathway through which sperm enter the uterus after being induced by estradiol after sexual intercourse, and some forms of artificial insemination. Some sperm remains in cervical crypts, infoldings of the endocervix, which act as a reservoir, releasing sperm over several hours and maximising the chances of fertilisation. A theory states the cervical and uterine contractions during orgasm draw semen into the uterus. Although the "upsuck theory" has been generally accepted for some years, it has been disputed due to lack of evidence, small sample size, and methodological errors. 
Some methods of fertility awareness, such as the Creighton model and the Billings method involve estimating a woman's periods of fertility and infertility by observing physiological changes in her body. Among these changes are several involving the quality of her cervical mucus: the sensation it causes at the vulva, its elasticity (Spinnbarkeit), its transparency, and the presence of ferning. Cervical mucus Several hundred glands in the endocervix produce 20–60 mg of cervical mucus a day, increasing to 600 mg around the time of ovulation. It is viscous because it contains large proteins known as mucins. The viscosity and water content varies during the menstrual cycle; mucus is composed of around 93% water, reaching 98% at midcycle. These changes allow it to function either as a barrier or a transport medium to spermatozoa. It contains electrolytes such as calcium, sodium, and potassium; organic components such as glucose, amino acids, and soluble proteins; trace elements including zinc, copper, iron, manganese, and selenium; free fatty acids; enzymes such as amylase; and prostaglandins. Its consistency is determined by the influence of the hormones estrogen and progesterone. At midcycle around the time of ovulation—a period of high estrogen levels— the mucus is thin and serous to allow sperm to enter the uterus and is more alkaline and hence more hospitable to sperm. It is also higher in electrolytes, which results in the "ferning" pattern that can be observed in drying mucus under low magnification; as the mucus dries, the salts crystallize, resembling the leaves of a fern. The mucus has a stretchy character described as Spinnbarkeit most prominent around the time of ovulation. At other times in the cycle, the mucus is thick and more acidic due to the effects of progesterone. This "infertile" mucus acts as a barrier to keep sperm from entering the uterus. Women taking an oral contraceptive pill also have thick mucus from the effects of progesterone. Thick mucus also prevents pathogens from interfering with a nascent pregnancy. A cervical mucus plug, called the operculum, forms inside the cervical canal during pregnancy. This provides a protective seal for the uterus against the entry of pathogens and against leakage of uterine fluids. The mucus plug is also known to have antibacterial properties. This plug is released as the cervix dilates, either during the first stage of childbirth or shortly before. It is visible as a blood-tinged mucous discharge. Childbirth The cervix plays a major role in childbirth. As the fetus descends within the uterus in preparation for birth, the presenting part, usually the head, rests on and is supported by the cervix. As labour progresses, the cervix becomes softer and shorter, begins to dilate, and withdraws to face the anterior of the body. The support the cervix provides to the fetal head starts to give way when the uterus begins its contractions. During childbirth, the cervix must dilate to a diameter of more than to accommodate the head of the fetus as it descends from the uterus to the vagina. In becoming wider, the cervix also becomes shorter, a phenomenon known as effacement. Along with other factors, midwives and doctors use the extent of cervical dilation to assist decision making during childbirth. Generally, the active first stage of labour, when the uterine contractions become strong and regular, begins when the cervical dilation is more than . 
The second stage of labour begins when the cervix has dilated to , which is regarded as its fullest dilation, and is when active pushing and contractions move the baby along the birth canal, leading to the birth of the baby. The number of past vaginal deliveries strongly influences how rapidly the cervix is able to dilate in labour. The time taken for the cervix to dilate and efface is one factor used in reporting systems such as the Bishop score, used to recommend whether interventions such as a forceps delivery, induction, or Caesarean section should be used in childbirth. Cervical incompetence is a condition in which the cervix shortens, through dilation and thinning, before the pregnancy reaches term. Short cervical length is the strongest predictor of preterm birth. Contraception Several methods of contraception involve the cervix. Cervical diaphragms are reusable, firm-rimmed plastic devices inserted by a woman prior to intercourse that cover the cervix. Pressure against the walls of the vagina maintains the position of the diaphragm, and it acts as a physical barrier to prevent the entry of sperm into the uterus, preventing fertilisation. Cervical caps are a similar method, although they are smaller and adhere to the cervix by suction. Diaphragms and caps are often used in conjunction with spermicides. In one year, 12% of women using the diaphragm will undergo an unintended pregnancy, and with optimal use this falls to 6%. Efficacy rates are lower for the cap, with 18% of women undergoing an unintended pregnancy, and 10–13% with optimal use. Most types of progestogen-only pills are effective as a contraceptive because they thicken cervical mucus, making it difficult for sperm to pass along the cervical canal. In addition, they may also sometimes prevent ovulation. In contrast, contraceptive pills that contain both oestrogen and progesterone, the combined oral contraceptive pills, work mainly by preventing ovulation. They also thicken cervical mucus and thin the lining of the uterus, enhancing their effectiveness. Clinical significance Cancer In 2008, cervical cancer was the third-most common cancer in women worldwide, with rates varying geographically from less than one to more than 50 cases per 100,000 women. It is a leading cause of cancer-related death in poor countries, where delayed diagnosis leading to poor outcomes is common. The introduction of routine screening has resulted in fewer cases of (and deaths from) cervical cancer; however, this has mainly taken place in developed countries. Most developing countries have limited or no screening, and 85% of the global burden occurs there. Cervical cancer nearly always involves human papillomavirus (HPV) infection. HPV is a virus with numerous strains, several of which predispose to precancerous changes in the cervical epithelium, particularly in the transformation zone, which is the most common area for cervical cancer to start. HPV vaccines, such as Gardasil and Cervarix, reduce the incidence of cervical cancer by inoculating against the viral strains involved in cancer development. Potentially precancerous changes in the cervix can be detected by cervical screening, using methods including a Pap smear (also called a cervical smear), in which epithelial cells are scraped from the surface of the cervix and examined under a microscope. The colposcope, an instrument used to see a magnified view of the cervix, was invented in 1925. The Pap smear was developed by Georgios Papanikolaou in 1928. 
A precursor of modern cervical screening was developed by Aurel Babeș in 1927, in which cells collected from the cervix with a platinum loop were examined for malignant change. In some parts of the developed world, including the UK, the Pap test has been superseded by liquid-based cytology. A cheap, cost-effective and practical alternative in poorer countries is visual inspection with acetic acid (VIA). Instituting and sustaining cytology-based programs in these regions can be difficult, due to the need for trained personnel, equipment, and facilities, and difficulties in follow-up. With VIA, results and treatment can be available on the same day. As a screening test, VIA is comparable to cervical cytology in accurately identifying precancerous lesions. A finding of dysplasia is usually investigated further, such as by taking a cone biopsy, which may also remove the lesion. Cervical intraepithelial neoplasia is a possible result of the biopsy and represents dysplastic changes that may eventually progress to invasive cancer. Most cases of cervical cancer are detected in this way, without having caused any symptoms. When symptoms occur, they may include vaginal bleeding, discharge, or discomfort. Inflammation Inflammation of the cervix is referred to as cervicitis. This inflammation may be of the endocervix or ectocervix. When it involves the endocervix, it is associated with a mucous vaginal discharge and sexually transmitted infections such as chlamydia and gonorrhoea. As many as half of pregnant women with a gonorrhoeal infection of the cervix are asymptomatic. Other causes include overgrowth of the commensal flora of the vagina. When associated with the ectocervix, inflammation may be caused by the herpes simplex virus. Inflammation is often investigated by directly visualising the cervix using a speculum (the cervix may appear whitish due to exudate) and by taking a Pap smear and examining for causal bacteria. Special tests may be used to identify particular bacteria. If the inflammation is due to a bacterium, then antibiotics may be given as treatment. Anatomical abnormalities Cervical stenosis is an abnormally narrow cervical canal, typically associated with trauma caused by removal of tissue for investigation or treatment of cancer, or cervical cancer itself. Diethylstilbestrol, used from 1938 to 1971 to prevent preterm labour and miscarriage, is also strongly associated with the development of cervical stenosis and other abnormalities in the daughters of the exposed women. Other abnormalities include: vaginal adenosis, in which the squamous epithelium of the ectocervix becomes columnar; cancers such as clear cell adenocarcinomas; cervical ridges and hoods; and development of a cockscomb cervix, a condition in which, as the name suggests, the cervix is shaped like a cockscomb. About one third of women born to diethylstilbestrol-treated mothers (i.e. with in-utero exposure) develop a cockscomb cervix. Enlarged folds or ridges of cervical stroma (fibrous tissue) and epithelium constitute a cockscomb cervix. Similarly, cockscomb polyps lining the cervix are usually grouped under the same overarching description. A cockscomb cervix is in and of itself considered a benign abnormality; its presence, however, is usually indicative of DES exposure, and as such women with these abnormalities should be aware of their increased risk of associated pathologies. 
Cervical agenesis is a rare congenital condition in which the cervix completely fails to develop, often associated with the concurrent failure of the vagina to develop. Other congenital cervical abnormalities exist, often associated with abnormalities of the vagina and uterus. The cervix may be duplicated in situations such as bicornuate uterus and uterine didelphys. Cervical polyps, benign overgrowths of endocervical tissue arising within the cervical canal, may cause bleeding if present. Cervical ectropion refers to the horizontal overgrowth of the endocervical columnar lining in a one-cell-thick layer over the ectocervix. Other mammals Female marsupials have paired uteri and cervices. Most eutherian (placental) mammal species have a single cervix and a single, bipartite, or bicornuate uterus. Lagomorphs, rodents, aardvarks and hyraxes have a duplex uterus and two cervices. Lagomorphs and rodents share many morphological characteristics and are grouped together in the clade Glires. Anteaters of the family Myrmecophagidae are unusual in that they lack a defined cervix; they are thought to have lost the characteristic, rather than the cervix having evolved independently in more than one mammalian lineage. In domestic pigs, the cervix contains a series of five interdigitating pads that hold the boar's corkscrew-shaped penis during copulation. Etymology and pronunciation The word cervix came to English from Latin, where it means "neck", and like its Germanic counterpart, it can refer not only to the neck [of the body] but also to an analogous narrowed part of an object. The cervix uteri (neck of the uterus) is thus the uterine cervix, but in English the word cervix used alone usually refers to it. Thus the adjective cervical may refer either to the neck (as in cervical vertebrae or cervical lymph nodes) or to the uterine cervix (as in cervical cap or cervical cancer). Latin cervix came from the Proto-Indo-European root ker-, referring to a "structure that projects". Thus, the word cervix is linguistically related to the English word "horn", the Persian word for "head" (sar), the Greek word for "head" (koruphe), and the Welsh (carw) and Romanian (cerb) words for "deer". The cervix was documented in anatomical literature from at least the time of Hippocrates; cervical cancer was first described more than 2,000 years ago, with descriptions provided by both Hippocrates and Aretaeus. However, there was some variation in word sense among early writers, who used the term to refer to both the cervix and the internal uterine orifice. The first attested use of the word to refer to the cervix of the uterus was in 1702. References Citations Cited texts External links Human female reproductive system Women's health
2,582
5,750
https://en.wikipedia.org/wiki/Cognitive%20behavioral%20therapy
Cognitive behavioral therapy
Cognitive behavioral therapy (CBT) is a psycho-social intervention that aims to reduce symptoms of various mental health conditions, primarily depression and anxiety disorders. CBT focuses on challenging and changing cognitive distortions (such as thoughts, beliefs, and attitudes) and their associated behaviors to improve emotional regulation and develop personal coping strategies that target solving current problems. Though it was originally designed to treat depression, its uses have been expanded to include the treatment of many mental health conditions, including anxiety, substance use disorders, marital problems, and eating disorders. CBT includes a number of cognitive or behavioral psychotherapies that treat defined psychopathologies using evidence-based techniques and strategies. CBT is a common form of talk therapy based on the combination of the basic principles from behavioral and cognitive psychology. It is different from historical approaches to psychotherapy, such as the psychoanalytic approach where the therapist looks for the unconscious meaning behind the behaviors, and then formulates a diagnosis. Instead, CBT is a "problem-focused" and "action-oriented" form of therapy, meaning it is used to treat specific problems related to a diagnosed mental disorder. The therapist's role is to assist the client in finding and practicing effective strategies to address the identified goals and to alleviate symptoms of the disorder. CBT is based on the belief that thought distortions and maladaptive behaviors play a role in the development and maintenance of many psychological disorders and that symptoms and associated distress can be reduced by teaching new information-processing skills and coping mechanisms. When compared to psychoactive medications, review studies have found CBT alone to be as effective for treating less severe forms of depression, anxiety, post-traumatic stress disorder (PTSD), tics, substance use disorders, eating disorders, and borderline personality disorder. Some research suggests that CBT is most effective when combined with medication for treating mental disorders, such as major depressive disorder. CBT is recommended as the first line of treatment for the majority of psychological disorders in children and adolescents, including aggression and conduct disorder. Researchers have found that other bona fide therapeutic interventions were equally effective for treating certain conditions in adults. Along with interpersonal psychotherapy (IPT), CBT is recommended in treatment guidelines as a psychosocial treatment of choice. History Early roots Precursors of certain fundamental aspects of CBT have been identified in various ancient philosophical traditions, particularly Stoicism. Stoic philosophers, particularly Epictetus, believed logic could be used to identify and discard false beliefs that lead to destructive emotions, which has influenced the way modern cognitive-behavioral therapists identify cognitive distortions that contribute to depression and anxiety. Aaron T. Beck's original treatment manual for depression states, "The philosophical origins of cognitive therapy can be traced back to the Stoic philosophers". Another example of Stoic influence on cognitive theorists is Epictetus on Albert Ellis. A key philosophical figure who influenced the development of CBT was John Stuart Mill. 
The modern roots of CBT can be traced to the development of behavior therapy in the early 20th century, the development of cognitive therapy in the 1960s, and the subsequent merging of the two. First wave: behavior therapy roots The groundbreaking work of behaviorism began with John B. Watson and Rosalie Rayner's studies of conditioning in 1920. Behaviorally centered therapeutic approaches appeared as early as 1924 with Mary Cover Jones' work dedicated to the unlearning of fears in children. These were the antecedents of the development of Joseph Wolpe's behavioral therapy in the 1950s. It was the work of Wolpe and Watson, which was based on Ivan Pavlov's work on learning and conditioning, that influenced Hans Eysenck and Arnold Lazarus to develop new behavioral therapy techniques based on classical conditioning. During the 1950s and 1960s, behavioral therapy became widely used by researchers in the United States, the United Kingdom, and South Africa, who were inspired by the behaviorist learning theory of Ivan Pavlov, John B. Watson, and Clark L. Hull. In Britain, Joseph Wolpe, who applied the findings of animal experiments to his method of systematic desensitization, brought behavioral research to bear on the treatment of neurotic disorders. Wolpe's therapeutic efforts were precursors to today's fear reduction techniques. British psychologist Hans Eysenck presented behavior therapy as a constructive alternative to psychoanalysis. At the same time as Eysenck's work, B. F. Skinner and his associates were beginning to have an impact with their work on operant conditioning. Skinner's work was referred to as radical behaviorism and avoided anything related to cognition. However, Julian Rotter in 1954 and Albert Bandura in 1969 contributed to behavior therapy with their respective work on social learning theory, demonstrating the effects of cognition on learning and behavior modification. The work of the Australian Claire Weekes dealing with anxiety disorders in the 1960s is also seen as a prototype of behavior therapy. The emphasis on behavioral factors constituted the "first wave" of CBT. Second wave: cognitive therapy roots One of the first therapists to address cognition in psychotherapy was Alfred Adler (1870–1937), notably with his idea of basic mistakes and how they contributed to the creation of unhealthy or useless behavioral and life goals. Abraham Low (1891–1954) believed that someone's thoughts were best changed by changing their actions. Adler and Low influenced the work of Albert Ellis, who developed the earliest cognitive-based psychotherapy, called rational emotive therapy (known today as rational emotive behavior therapy, or REBT). The first version was announced to the public in 1956. In the late 1950s, Aaron T. Beck was conducting free association sessions in his psychoanalytic practice. During these sessions, Beck noticed that thoughts were not as unconscious as Freud had previously theorized, and that certain types of thinking may be the culprits of emotional distress. It was from this hypothesis that Beck developed cognitive therapy, and called these thoughts "automatic thoughts". He first published his new methodology in 1967, and his first treatment manual in 1979. Beck has been referred to as "the father of cognitive behavioral therapy". It was these two therapies, rational emotive therapy and cognitive therapy, that started the "second wave" of CBT, with its emphasis on cognitive factors. 
Third wave: behavior and cognitive therapies merge Although the early behavioral approaches were successful in many of the neurotic disorders, they had little success in treating depression. Behaviorism was also losing in popularity due to the cognitive revolution. The therapeutic approaches of Albert Ellis and Aaron T. Beck gained popularity among behavior therapists, despite the earlier behaviorist rejection of mentalistic concepts like thoughts and cognitions. Both of these systems included behavioral elements and interventions, with the primary focus being on problems in the present. In initial studies, cognitive therapy was often contrasted with behavioral treatments to see which was most effective. During the 1980s and 1990s, cognitive and behavioral techniques were merged into cognitive behavioral therapy. Pivotal to this merging was the successful development of treatments for panic disorder by David M. Clark in the UK and David H. Barlow in the US. Over time, cognitive behavior therapy came to be known not only as a therapy, but as an umbrella term for all cognitive-based psychotherapies. These therapies include, but are not limited to, rational emotive behavior therapy (REBT), cognitive therapy, acceptance and commitment therapy, dialectical behavior therapy, metacognitive therapy, metacognitive training, reality therapy/choice theory, cognitive processing therapy, EMDR, and multimodal therapy. All of these therapies are a blending of cognitive- and behavior-based elements. This blending of theoretical and technical foundations from both behavior and cognitive therapies constituted the "third wave" of CBT. The most prominent therapies of this third wave are dialectical behavior therapy and acceptance and commitment therapy. Despite the increasing popularity of third-wave treatment approaches, reviews of studies reveal there may be no difference in the effectiveness compared with non-third wave CBT for the treatment of depression. Description Mainstream cognitive behavioral therapy assumes that changing maladaptive thinking leads to change in behavior and affect, but recent variants emphasize changes in one's relationship to maladaptive thinking rather than changes in thinking itself. The goal of cognitive behavioral therapy is not to diagnose a person with a particular disease, but to look at the person as a whole and decide what can be altered. Cognitive distortions Therapists or computer-based programs use CBT techniques to help people challenge their patterns and beliefs and replace errors in thinking, known as cognitive distortions, such as "overgeneralizing, magnifying negatives, minimizing positives and catastrophizing" with "more realistic and effective thoughts, thus decreasing emotional distress and self-defeating behavior". Cognitive distortions can be either a pseudo-discrimination belief or an overgeneralization of something. CBT techniques may also be used to help individuals take a more open, mindful, and aware posture toward cognitive distortions so as to diminish their impact. Skills Mainstream CBT helps individuals replace "maladaptive ... coping skills, cognitions, emotions and behaviors with more adaptive ones", by challenging an individual's way of thinking and the way that they react to certain habits or behaviors, but there is still controversy about the degree to which these traditional cognitive elements account for the effects seen with CBT over and above the earlier behavioral elements such as exposure and skills training. 
Phases in therapy CBT can be seen as having six phases: (1) assessment or psychological assessment; (2) reconceptualization; (3) skills acquisition; (4) skills consolidation and application training; (5) generalization and maintenance; and (6) post-treatment assessment follow-up. These steps are based on a system created by Kanfer and Saslow. After the behaviors that need changing have been identified, whether they are in excess or deficit, and treatment has occurred, the psychologist must determine whether or not the intervention succeeded. For example, "If the goal was to decrease the behavior, then there should be a decrease relative to the baseline. If the critical behavior remains at or above the baseline, then the intervention has failed." The steps in the assessment phase include: (1) identify critical behaviors; (2) determine whether the critical behaviors are excesses or deficits; (3) evaluate the critical behaviors for frequency, duration, or intensity (obtain a baseline); and (4) if an excess, attempt to decrease the frequency, duration, or intensity of the behaviors; if a deficit, attempt to increase them. The re-conceptualization phase makes up much of the "cognitive" portion of CBT. A summary of modern CBT approaches is given by Hofmann. Delivery protocols There are different protocols for delivering cognitive behavioral therapy, with important similarities among them. Use of the term CBT may refer to different interventions, including "self-instructions (e.g. distraction, imagery, motivational self-talk), relaxation and/or biofeedback, development of adaptive coping strategies (e.g. minimizing negative or self-defeating thoughts), changing maladaptive beliefs about pain, and goal setting". Treatment is sometimes manualized, with brief, direct, and time-limited treatments for individual psychological disorders that are specific and technique-driven. CBT is used in both individual and group settings, and the techniques are often adapted for self-help applications. Some clinicians and researchers are cognitively oriented (e.g. cognitive restructuring), while others are more behaviorally oriented (e.g. in vivo exposure therapy). Interventions such as imaginal exposure therapy combine both approaches. Related techniques CBT may be delivered in conjunction with a variety of diverse but related techniques such as exposure therapy, stress inoculation, cognitive processing therapy, cognitive therapy, metacognitive therapy, metacognitive training, relaxation training, dialectical behavior therapy, and acceptance and commitment therapy. Some practitioners promote a form of mindful cognitive therapy which includes a greater emphasis on self-awareness as part of the therapeutic process. Medical uses In adults, CBT has been shown to be an effective part of treatment plans for anxiety disorders, body dysmorphic disorder, depression, eating disorders, chronic low back pain, personality disorders, psychosis, schizophrenia, substance use disorders, and bipolar disorder. It is also effective as part of treatment plans in the adjustment, depression, and anxiety associated with fibromyalgia, and with post-spinal cord injuries. In children or adolescents, CBT is an effective part of treatment plans for anxiety disorders, body dysmorphic disorder, depression and suicidality, eating disorders and obesity, obsessive–compulsive disorder (OCD), and posttraumatic stress disorder (PTSD), as well as tic disorders, trichotillomania, and other repetitive behavior disorders. 
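Referring back to the assessment-phase rule quoted earlier in this section (success is judged by movement relative to the baseline in the intended direction), the decision logic can be written out compactly. The following Python sketch is only an illustration of that rule, with hypothetical names and numbers; it is not a clinical instrument:

# Sketch of the Kanfer-and-Saslow-style success check described above.
# Variable names and the example figures are hypothetical.
def intervention_succeeded(baseline: float, post: float, goal: str) -> bool:
    """goal is 'decrease' for behavioral excesses, 'increase' for deficits."""
    if goal == "decrease":
        return post < baseline   # an excess behavior should fall below its baseline
    if goal == "increase":
        return post > baseline   # a deficit behavior should rise above its baseline
    raise ValueError("goal must be 'decrease' or 'increase'")

# Example: a critical behavior observed 14 times per week at baseline, 9 after treatment.
print(intervention_succeeded(baseline=14, post=9, goal="decrease"))  # True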
CBT has also been applied to a variety of childhood disorders, including depressive disorders and various anxiety disorders. CBT has been shown to be the most effective intervention for people exposed to adverse childhood experiences in the form of abuse or neglect. Criticism of CBT sometimes focuses on implementations (such as the UK IAPT) which may result initially in low-quality therapy being offered by poorly trained practitioners. However, evidence supports the effectiveness of CBT for anxiety and depression. Evidence suggests that the addition of hypnotherapy as an adjunct to CBT improves treatment efficacy for a variety of clinical issues. The United Kingdom's National Institute for Health and Care Excellence (NICE) recommends CBT in the treatment plans for a number of mental health difficulties, including PTSD, OCD, bulimia nervosa, and clinical depression. Patient age CBT is used to help people of all ages, but the therapy should be adjusted based on the age of the patient with whom the therapist is dealing. Older individuals in particular have certain characteristics that need to be acknowledged, and the therapy altered to account for these age-related differences. The small number of studies examining CBT for the management of depression in older people does not currently provide strong support. Depression and anxiety disorders Cognitive behavioral therapy has been shown to be an effective treatment for clinical depression. The American Psychiatric Association Practice Guidelines (April 2000) indicated that, among psychotherapeutic approaches, cognitive behavioral therapy and interpersonal psychotherapy had the best-documented efficacy for treatment of major depressive disorder. A 2001 meta-analysis comparing CBT and psychodynamic psychotherapy suggested the approaches were equally effective in the short term for depression. In contrast, a 2013 meta-analysis suggested that CBT, interpersonal therapy, and problem-solving therapy outperformed psychodynamic psychotherapy and behavioral activation in the treatment of depression. According to a 2004 review by INSERM of three methods, cognitive behavioral therapy was either proven or presumed to be an effective therapy for several mental disorders. These included depression, panic disorder, post-traumatic stress, and other anxiety disorders. CBT has been shown to be effective in the treatment of adults with anxiety disorders. Results from a 2018 systematic review found a high strength of evidence that CBT-exposure therapy can reduce PTSD symptoms and lead to the loss of a PTSD diagnosis. CBT has also been shown to be effective for posttraumatic stress disorder in very young children (3 to 6 years of age). A Cochrane review found low-quality evidence that CBT may be more effective than other psychotherapies in reducing symptoms of posttraumatic stress disorder in children and adolescents. A systematic review of CBT in depression and anxiety disorders concluded that "CBT delivered in primary care, especially including computer- or Internet-based self-help programs, is potentially more effective than usual care and could be delivered effectively by primary care therapists." Some meta-analyses find CBT more effective than psychodynamic therapy and equal to other therapies in treating anxiety and depression. Theoretical approaches One etiological theory of depression is Aaron T. Beck's cognitive theory of depression. His theory states that depressed people think the way they do because their thinking is biased towards negative interpretations. 
According to this theory, depressed people acquire a negative schema of the world in childhood and adolescence as an effect of stressful life events, and the negative schema is activated later in life when the person encounters similar situations. Beck also described a negative cognitive triad. The cognitive triad is made up of the depressed individual's negative evaluations of themselves, the world, and the future. Beck suggested that these negative evaluations derive from the negative schemata and cognitive biases of the person. According to this theory, depressed people have views such as "I never do a good job", "It is impossible to have a good day", and "things will never get better". A negative schema helps give rise to the cognitive bias, and the cognitive bias helps fuel the negative schema. Beck further proposed that depressed people often have the following cognitive biases: arbitrary inference, selective abstraction, overgeneralization, magnification, and minimization. These cognitive biases lead to quick, negative, generalized, and personal inferences about the self, thus fueling the negative schema. A basic concept in some CBT treatments used in anxiety disorders is in vivo exposure. CBT-exposure therapy refers to the direct confrontation of feared objects, activities, or situations by a patient. For example, a woman with PTSD who fears the location where she was assaulted may be assisted by her therapist in going to that location and directly confronting those fears. Likewise, a person with a social anxiety disorder who fears public speaking may be instructed to directly confront those fears by giving a speech. The underlying "two-factor" model, in which fears are acquired through conditioning and then maintained through avoidance, is often credited to O. Hobart Mowrer. Through exposure to the stimulus, this harmful conditioning can be "unlearned" (referred to as extinction and habituation). Specialised forms of CBT CBT-SP, an adaptation of CBT for suicide prevention (SP), was specifically designed for treating youths who are severely depressed and who have attempted suicide within the past 90 days, and was found to be effective, feasible, and acceptable. Acceptance and commitment therapy (ACT) is a specialist branch of CBT (sometimes referred to as contextual CBT). ACT uses mindfulness and acceptance interventions and has been found to have a greater longevity in therapeutic outcomes. In a study of anxiety, CBT and ACT improved similarly across all outcomes from pre- to post-treatment. However, during a 12-month follow-up, ACT proved to be more effective, showing that it is a highly viable lasting treatment model for anxiety disorders. Computerized CBT (CCBT) has been proven to be effective by randomized controlled and other trials in treating depression and anxiety disorders, including in children. Some research has found similar effectiveness to an intervention of informational websites and weekly telephone calls. CCBT was found to be as effective as face-to-face CBT in adolescent anxiety. Combined with other treatments Studies in both animals and humans have provided evidence that glucocorticoids may lead to more successful extinction learning during exposure therapy for anxiety disorders. For instance, glucocorticoids can prevent aversive learning episodes from being retrieved and can heighten the reinforcement of memory traces, creating a non-fearful reaction in feared situations. A combination of glucocorticoids and exposure therapy may therefore be an improved treatment for people with anxiety disorders. 
Prevention For anxiety disorders, use of CBT with people at risk has significantly reduced the number of episodes of generalized anxiety disorder and other anxiety symptoms, and also given significant improvements in explanatory style, hopelessness, and dysfunctional attitudes. In another study, 3% of the group receiving the CBT intervention developed generalized anxiety disorder by 12 months postintervention compared with 14% in the control group. Individuals with subthreshold levels of panic disorder significantly benefitted from use of CBT. Use of CBT was found to significantly reduce social anxiety prevalence. For depressive disorders, a stepped-care intervention (watchful waiting, CBT and medication if appropriate) achieved a 50% lower incidence rate in a patient group aged 75 or older. Another depression study found a neutral effect compared to personal, social, and health education, and usual school provision, and included a comment on potential for increased depression scores from people who have received CBT due to greater self recognition and acknowledgement of existing symptoms of depression and negative thinking styles. A further study also saw a neutral result. A meta-study of the Coping with Depression course, a cognitive behavioral intervention delivered by a psychoeducational method, saw a 38% reduction in risk of major depression. Bipolar disorder Many studies show CBT, combined with pharmacotherapy, is effective in improving depressive symptoms, mania severity and psychosocial functioning with mild to moderate effects, and that it is better than medication alone. INSERM's 2004 review found that CBT is an effective therapy for several mental disorders, including bipolar disorder. This included schizophrenia, depression, bipolar disorder, panic disorder, post-traumatic stress, anxiety disorders, bulimia, anorexia, personality disorders and alcohol dependency. Psychosis In long-term psychoses, CBT is used to complement medication and is adapted to meet individual needs. Interventions particularly related to these conditions include exploring reality testing, changing delusions and hallucinations, examining factors which precipitate relapse, and managing relapses. Meta-analyses confirm the effectiveness of metacognitive training (MCT) for the improvement of positive symptoms (e.g., delusions). For people at risk of psychosis, in 2014 the UK National Institute for Health and Care Excellence (NICE) recommended preventive CBT. Schizophrenia INSERM's 2004 review found that CBT is an effective therapy for several mental disorders, including schizophrenia. A Cochrane review reported CBT had "no effect on long‐term risk of relapse" and no additional effect above standard care. A 2015 systematic review investigated the effects of CBT compared with other psychosocial therapies for people with schizophrenia and determined that there is no clear advantage over other, often less expensive, interventions but acknowledged that better quality evidence is needed before firm conclusions can be drawn. Addiction and substance use disorders Pathological and problem gambling CBT is also used for pathological and problem gambling. The percentage of people who problem gamble is 1–3% around the world. Cognitive behavioral therapy develops skills for relapse prevention and someone can learn to control their mind and manage high-risk cases. 
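From the prevention figures quoted above (generalized anxiety disorder in 3% of the CBT group versus 14% of the control group at 12 months), the absolute risk reduction and the implied number needed to treat follow from simple arithmetic. The short Python sketch below is a back-of-the-envelope illustration only, not a result reported by the cited study:

# Back-of-the-envelope calculation from the figures quoted above (14% vs 3%).
control_risk = 0.14   # outcome rate in the control group at 12 months
treated_risk = 0.03   # outcome rate in the CBT group at 12 months
arr = control_risk - treated_risk   # absolute risk reduction
nnt = 1 / arr                       # approximate number needed to treat
print(f"ARR = {arr:.0%}, NNT = {nnt:.0f}")   # ARR = 11%, NNT = 9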
There is evidence that CBT is efficacious for treating pathological and problem gambling at immediate follow-up; however, its longer-term efficacy is currently unknown. Smoking cessation CBT looks at the habit of smoking cigarettes as a learned behavior, which later evolves into a coping strategy to handle daily stressors. Since smoking is often easily accessible and quickly allows the user to feel good, it can take precedence over other coping strategies, and eventually work its way into everyday life during non-stressful events as well. CBT aims to target the function of the behavior, as it can vary between individuals, and works to put other coping mechanisms in place of smoking. CBT also aims to support individuals with strong cravings, which are a major reported reason for relapse during treatment. A 2008 controlled study from Stanford University School of Medicine suggested that CBT may be an effective tool to help maintain abstinence. The results of 304 randomly assigned adult participants were tracked over the course of one year. During this program, some participants were provided medication, CBT, 24-hour phone support, or some combination of the three methods. At 20 weeks, the participants who received CBT had a 45% abstinence rate, versus non-CBT participants, who had a 29% abstinence rate. Overall, the study concluded that emphasizing cognitive and behavioral strategies to support smoking cessation can help individuals build tools for long-term smoking abstinence. Mental health history can affect the outcomes of treatment. Individuals with a history of depressive disorders had a lower rate of success when using CBT alone to combat smoking addiction. A Cochrane review was unable to find evidence of any difference between CBT and hypnosis for smoking cessation. While this may be evidence of no effect, further research may uncover an effect of CBT for smoking cessation. Substance use disorders Studies have shown CBT to be an effective treatment for substance use disorders. For individuals with substance use disorders, CBT aims to replace maladaptive thoughts, such as denial, minimizing, and catastrophizing thought patterns, with healthier narratives. Specific techniques include identifying potential triggers and developing coping mechanisms to manage high-risk situations. Research has shown CBT to be particularly effective when combined with other therapy-based treatments or medication. INSERM's 2004 review found that CBT is an effective therapy for several mental disorders, including alcohol dependency. Internet addiction Research has identified Internet addiction as a new clinical disorder that causes relational, occupational, and social problems. Cognitive behavioral therapy (CBT) has been suggested as the treatment of choice for Internet addiction, and addiction recovery in general has used CBT as part of treatment planning. Eating disorders Though many forms of treatment can support individuals with eating disorders, CBT is proven to be a more effective treatment than medications and interpersonal psychotherapy alone. CBT aims to combat major causes of distress such as negative cognitions surrounding body weight, shape and size. CBT therapists also work with individuals to regulate strong emotions and thoughts that lead to dangerous compensatory behaviors. CBT is the first line of treatment for bulimia nervosa and for eating disorder not otherwise specified (EDNOS). 
While there is evidence to support the efficacy of CBT for bulimia nervosa and binging, the evidence is somewhat variable and limited by small study sizes. INSERM's 2004 review found that CBT is an effective therapy for several mental disorders, including bulimia and anorexia nervosa. With autistic adults Emerging evidence for cognitive behavioral interventions aimed at reducing symptoms of depression, anxiety, and obsessive-compulsive disorder in autistic adults without intellectual disability has been identified through a systematic review. While the research was focused on adults, cognitive behavioral interventions have also been beneficial to autistic children. Dementia and mild cognitive impairment A Cochrane review in 2022 found that adults with dementia and mild cognitive impairment (MCI) who experience symptoms of depression may benefit from CBT, whereas other counselling or supportive interventions might not improve symptoms significantly. Across 5 different psychometric scales, where higher scores indicate severity of depression, adults receiving CBT reported somewhat lower mood scores than those receiving usual care for dementia and MCI overall. In this review, a sub-group analysis found clinically significant benefits only among those diagnosed with dementia, rather than MCI. The likelihood of remission from depression also appeared to be 84% higher following CBT, though the evidence for this was less certain. Anxiety, cognition and other neuropsychiatric symptoms were not significantly improved following CBT, however this review did find moderate evidence of improved quality of life and daily living activity scores in those with dementia and MCI. Other uses Evidence suggests a possible role for CBT in the treatment of attention deficit hyperactivity disorder (ADHD), hypochondriasis, and bipolar disorder, but more study is needed and results should be interpreted with caution. CBT has been studied as an aid in the treatment of anxiety associated with stuttering. Initial studies have shown CBT to be effective in reducing social anxiety in adults who stutter, but not in reducing stuttering frequency. There is some evidence that CBT is superior in the long-term to benzodiazepines and the nonbenzodiazepines in the treatment and management of insomnia. Computerized CBT (CCBT) has been proven to be effective by randomized controlled and other trials in treating insomnia. Some research has found similar effectiveness to an intervention of informational websites and weekly telephone calls. CCBT was found to be equally effective as face-to-face CBT in insomnia. A Cochrane review of interventions aimed at preventing psychological stress in healthcare workers found that CBT was more effective than no intervention but no more effective than alternative stress-reduction interventions. Cochrane Reviews have found no convincing evidence that CBT training helps foster care providers manage difficult behaviors in the youths under their care, nor was it helpful in treating people who abuse their intimate partners. CBT has been applied in both clinical and non-clinical environments to treat disorders such as personality disorders and behavioral problems. INSERM's 2004 review found that CBT is an effective therapy for personality disorders. Individuals with medical conditions In the case of people with metastatic breast cancer, data is limited but CBT and other psychosocial interventions might help with psychological outcomes and pain management. 
A 2015 Cochrane review also found that CBT for symptomatic management of non-specific chest pain is probably effective in the short term. However, the findings were limited by small trials and the evidence was considered of questionable quality. Cochrane reviews have found no evidence that CBT is effective for tinnitus, although there appears to be an effect on management of associated depression and quality of life in this condition. CBT combined with hypnosis and distraction reduces self-reported pain in children. There is limited evidence to support its use in coping with the impact of multiple sclerosis, sleep disturbances related to aging, and dysmenorrhea, but more study is needed and results should be interpreted with caution. CBT was previously considered moderately effective for treating chronic fatigue syndrome; however, a National Institutes of Health Pathways to Prevention Workshop stated that, with respect to improving treatment options for ME/CFS, the modest benefit from cognitive behavioral therapy should be studied as an adjunct to other methods. The Centers for Disease Control and Prevention's advice on the treatment of ME/CFS makes no reference to CBT, while the National Institute for Health and Care Excellence states that "cognitive behavioral therapy (CBT) has sometimes been assumed to be a cure for ME/CFS"; however, it should only be offered to support people who live with ME/CFS to manage their symptoms, improve their functioning and reduce the distress associated with having a chronic illness. Methods of access Therapist A typical CBT programme would consist of face-to-face sessions between patient and therapist, made up of 6–18 sessions of around an hour each with a gap of 1–3 weeks between sessions. This initial programme might be followed by some booster sessions, for instance after one month and three months. CBT has also been found to be effective if patient and therapist type in real time to each other over computer links. Cognitive-behavioral therapy is most closely allied with the scientist–practitioner model, in which clinical practice and research are informed by a scientific perspective, clear operationalization of the problem, and an emphasis on measurement, including measuring changes in cognition and behavior and the attainment of goals. These are often met through "homework" assignments in which the patient and the therapist work together to craft an assignment to complete before the next session. The completion of these assignments – which can be as simple as a person with depression attending some kind of social event – indicates a dedication to treatment compliance and a desire to change. The therapist can then logically gauge the next step of treatment based on how thoroughly the patient completes the assignment. Effective cognitive behavioral therapy is dependent on a therapeutic alliance between the healthcare practitioner and the person seeking assistance. Unlike many other forms of psychotherapy, the patient is very involved in CBT. For example, an anxious patient may be asked to talk to a stranger as a homework assignment, but if that is too difficult, he or she can work out an easier assignment first. The therapist needs to be flexible and willing to listen to the patient rather than acting as an authority figure. 
Computerized or Internet-delivered (CCBT) Computerized cognitive behavioral therapy (CCBT) has been described by NICE as a "generic term for delivering CBT via an interactive computer interface delivered by a personal computer, internet, or interactive voice response system", instead of face-to-face with a human therapist. It is also known as internet-delivered cognitive behavioral therapy or ICBT. CCBT has potential to improve access to evidence-based therapies, and to overcome the prohibitive costs and lack of availability sometimes associated with retaining a human therapist. In this context, it is important not to confuse CBT with 'computer-based training', which nowadays is more commonly referred to as e-Learning. Although improvements in both research quality and treatment adherence are required before advocating for the global dissemination of CCBT, it has been found in meta-studies to be cost-effective and often cheaper than usual care, including for anxiety and PTSD. Studies have shown that individuals with social anxiety and depression experienced improvement with online CBT-based methods. A study assessing an online version of CBT for people with mild-to-moderate PTSD found that the online approach was as effective as, and cheaper than, the same therapy given face-to-face. A review of current CCBT research in the treatment of OCD in children found this interface to hold great potential for future treatment of OCD in youths and adolescent populations. Additionally, most internet interventions for posttraumatic stress disorder use CCBT. CCBT is also well suited to treating mood disorders amongst non-heterosexual populations, who may avoid face-to-face therapy for fear of stigma. At present, however, CCBT programs seldom cater to these populations. In February 2006 NICE recommended that CCBT be made available for use within the NHS across England and Wales for patients presenting with mild-to-moderate depression, rather than immediately opting for antidepressant medication, and CCBT is made available by some health systems. The 2009 NICE guideline recognized that there are likely to be a number of computerized CBT products that are useful to patients, but removed endorsement of any specific product. Smartphone app-delivered Another new method of access is the use of mobile or smartphone applications to deliver self-help or guided CBT. Technology companies are developing mobile-based artificial intelligence chatbot applications to deliver CBT as an early intervention to support mental health, to build psychological resilience, and to promote emotional well-being. Artificial intelligence (AI) text-based conversational applications delivered securely and privately over smartphone devices have the ability to scale globally and offer contextual, always-available support. Active research is underway, including real-world data studies that measure the effectiveness and engagement of text-based smartphone chatbot apps for delivering CBT using a conversational interface. Reading self-help materials Enabling patients to read self-help CBT guides has been shown to be effective by some studies. However, one study found a negative effect in patients who tended to ruminate, and another meta-analysis found that the benefit was only significant when the self-help was guided (e.g. by a medical professional). Group educational course Patient participation in group courses has been shown to be effective. 
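As a purely hypothetical illustration of the kind of scripted, text-based flow such self-help and chatbot tools might use, the Python sketch below walks a user through a classic CBT thought record (situation, automatic thought, evidence for and against, balanced alternative). It is not based on any product or study mentioned above, and the prompts and function names are invented for illustration:

# Hypothetical sketch of a scripted CBT-style prompt flow; not any specific product's design.
THOUGHT_RECORD_PROMPTS = [
    "What situation triggered the distress?",
    "What automatic thought went through your mind?",
    "What evidence supports that thought?",
    "What evidence goes against it?",
    "What would be a more balanced alternative thought?",
]

def run_thought_record(ask=input):
    """Step through the prompts in order and return the answers keyed by prompt."""
    return {prompt: ask(prompt + " ") for prompt in THOUGHT_RECORD_PROMPTS}

if __name__ == "__main__":
    print(run_thought_record())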
In a meta-analysis reviewing evidence-based treatment of OCD in children, individual CBT was found to be more efficacious than group CBT. Types Brief cognitive behavioral therapy Brief cognitive behavioral therapy (BCBT) is a form of CBT which has been developed for situations in which there are time constraints on the therapy sessions and specifically for those struggling with suicidal ideation and/or making suicide attempts. BCBT was based on Rudd's proposed "suicidal mode", an elaboration of Beck's modal theory. BCBT takes place over a couple of sessions that can last up to 12 accumulated hours by design. This technique was first implemented and developed with soldiers on active duty by Dr. M. David Rudd to prevent suicide. Breakdown of treatment: (1) orientation: commitment to treatment, crisis response and safety planning, means restriction, survival kit, reasons-for-living card, model of suicidality, treatment journal, lessons learned; (2) skill focus: skill development worksheets, coping cards, demonstration, practice, skill refinement; (3) relapse prevention: skill generalization, skill refinement. Cognitive emotional behavioral therapy Cognitive emotional behavioral therapy (CEBT) is a form of CBT developed initially for individuals with eating disorders but now used with a range of problems including anxiety, depression, obsessive compulsive disorder (OCD), post-traumatic stress disorder (PTSD) and anger problems. It combines aspects of CBT and dialectical behavioral therapy and aims to improve understanding and tolerance of emotions in order to facilitate the therapeutic process. It is frequently used as a "pretreatment" to prepare and better equip individuals for longer-term therapy. Structured cognitive behavioral training Structured cognitive-behavioral training (SCBT) is a cognitive-based process with core philosophies that draw heavily from CBT. Like CBT, SCBT asserts that behavior is inextricably related to beliefs, thoughts, and emotions. SCBT also builds on core CBT philosophy by incorporating other well-known modalities in the fields of behavioral health and psychology: most notably, Albert Ellis's rational emotive behavior therapy. SCBT differs from CBT in two distinct ways. First, SCBT is delivered in a highly regimented format. Second, SCBT is a predetermined and finite training process that becomes personalized by the input of the participant. SCBT is designed to bring a participant to a specific result in a specific period of time. SCBT has been used to challenge addictive behavior, particularly with substances such as tobacco, alcohol and food, and to manage diabetes and subdue stress and anxiety. SCBT has also been used in the field of criminal psychology in the effort to reduce recidivism. Moral reconation therapy Moral reconation therapy, a type of CBT used to help felons overcome antisocial personality disorder (ASPD), slightly decreases the risk of further offending. It is generally implemented in a group format because of the risk that one-on-one therapy with offenders with ASPD may reinforce narcissistic behavioral characteristics, and it can be used in correctional or outpatient settings. Groups usually meet weekly for two to six months. Stress inoculation training This type of therapy uses a blend of cognitive, behavioral, and certain humanistic training techniques to target the stressors of the client. It is usually used to help clients better cope with their stress or anxiety after stressful events. 
This is a three-phase process that trains the client to use skills that they already have to better adapt to their current stressors. The first phase is an interview phase that includes psychological testing, client self-monitoring, and a variety of reading materials. This allows the therapist to individually tailor the training process to the client. Clients learn how to categorize problems as emotion-focused or problem-focused so that they can better treat their negative situations. This phase ultimately prepares the client to eventually confront and reflect upon their current reactions to stressors, before looking at ways to change their reactions and emotions in relation to those stressors. The focus is conceptualization. The second phase emphasizes the aspect of skills acquisition and rehearsal that continues from the earlier phase of conceptualization. The client is taught skills that help them cope with their stressors. These skills are then practised in the space of therapy. These skills involve self-regulation, problem-solving, interpersonal communication skills, etc. The third and final phase is the application and following through of the skills learned in the training process. This gives the client opportunities to apply their learned skills to a wide range of stressors. Activities include role-playing, imagery, modeling, etc. In the end, the client will have been trained on a preventive basis to inoculate themselves against personal, chronic, and future stressors by breaking down their stressors into problems they will address in long-term, short-term, and intermediate coping goals. Activity-guided CBT: Group-knitting A newly developed group therapy model based on CBT integrates knitting into the therapeutic process and has been proven to yield reliable and promising results. The foundation for this novel approach to CBT is the frequently emphasized notion that therapy success depends on the embeddedness of the therapy method in the patients' natural routine. Similar to standard group-based CBT, patients meet once a week in a group of 10 to 15 patients and knit together under the instruction of a trained psychologist or mental health professional. Central to the therapy is the patient's imaginative ability to assign each part of the wool to a certain thought. During the therapy, the wool is carefully knitted, creating a knitted piece of any form. This therapeutic process teaches the patient to meaningfully align thoughts by (physically) creating a coherent knitted piece. Moreover, since CBT emphasizes behavior as a result of cognition, the knitting illustrates how thoughts (imaginatively tied to the wool) materialize into the reality surrounding us. Mindfulness-based cognitive behavioral hypnotherapy Mindfulness-based cognitive behavioral hypnotherapy (MCBH) is a form of CBT that focuses on awareness through a reflective approach and addresses subconscious tendencies. It is a process-oriented approach consisting of three basic phases used to achieve desired goals. Unified Protocol The Unified Protocol for Transdiagnostic Treatment of Emotional Disorders (UP) is a form of CBT, developed by David H. Barlow and researchers at Boston University, that can be applied to a range of depressive and anxiety disorders. The rationale is that anxiety and depression disorders often occur together due to common underlying causes and can efficiently be treated together. 
The UP includes a common set of components: psycho-education; cognitive reappraisal; emotion regulation; and changing behaviour. The UP has been shown to produce results equivalent to single-diagnosis protocols for specific disorders, such as OCD and social anxiety disorder. Several studies have shown that the UP is easier to disseminate as compared to single-diagnosis protocols. Criticisms Relative effectiveness The research conducted for CBT has been a topic of sustained controversy. While some researchers write that CBT is more effective than other treatments, many other researchers and practitioners have questioned the validity of such claims. For example, one study determined CBT to be superior to other treatments in treating anxiety and depression. However, researchers responding directly to that study conducted a re-analysis and found no evidence of CBT being superior to other bona fide treatments, and conducted an analysis of thirteen other CBT clinical trials and determined that they failed to provide evidence of CBT superiority. In cases where CBT has been reported to be statistically better than other psychological interventions in terms of primary outcome measures, effect sizes were small and suggested that those differences were clinically meaningless and insignificant. Moreover, on secondary outcomes (i.e., measures of general functioning), no significant differences have typically been found between CBT and other treatments. A major criticism has been that clinical studies of CBT efficacy (or of any psychotherapy) are not double-blind (i.e., neither the subjects nor the therapists in psychotherapy studies are blind to the type of treatment). They may be single-blinded, i.e. the rater may not know the treatment the patient received, but neither the patients nor the therapists are blinded to the type of therapy given (two out of three of the persons involved in the trial, i.e., all of the persons involved in the treatment, are unblinded). The patient is an active participant in correcting negative distorted thoughts, and is thus quite aware of the treatment group they are in. The importance of double-blinding was shown in a meta-analysis that examined the effectiveness of CBT when placebo control and blinding were factored in. Pooled data from published trials of CBT in schizophrenia, major depressive disorder (MDD), and bipolar disorder that used controls for non-specific effects of intervention were analyzed. This study concluded that CBT is no better than non-specific control interventions in the treatment of schizophrenia and does not reduce relapse rates; treatment effects are small in treatment studies of MDD, and it is not an effective treatment strategy for prevention of relapse in bipolar disorder. For MDD, the authors note that the pooled effect size was very low. Declining effectiveness Additionally, a 2015 meta-analysis revealed that the positive effects of CBT on depression have been declining since 1977. The overall results showed two different declines in effect sizes: 1) an overall decline between 1977 and 2014, and 2) a steeper decline between 1995 and 2014. An additional sub-analysis revealed that CBT studies in which therapists in the test group were instructed to adhere to the Beck CBT manual had a steeper decline in effect sizes since 1977 than studies in which therapists in the test group were instructed to use CBT without a manual. 
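The effect sizes discussed above are typically standardized mean differences such as Cohen's d, for which values around 0.2 are conventionally read as small, 0.5 as medium, and 0.8 as large. The Python sketch below shows how such a figure is computed from two group means and a pooled standard deviation; the numbers are hypothetical and are not drawn from any study cited here:

from math import sqrt

# Cohen's d: standardized difference between two group means (hypothetical numbers).
def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    pooled_sd = sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# e.g. post-treatment depression scores: CBT group (lower is better) vs control group.
d = cohens_d(mean1=18.0, sd1=6.0, n1=50, mean2=20.0, sd2=6.0, n2=50)
print(round(d, 2))  # -0.33, i.e. a small-to-moderate standardized difference favouring CBT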
The authors reported that they were unsure why the effects were declining but did list inadequate therapist training, failure to adhere to a manual, lack of therapist experience, and patients' hope and faith in its efficacy waning as potential reasons. The authors did mention that the current study was limited to depressive disorders only. High drop-out rates Furthermore, other researchers write that CBT studies have high drop-out rates compared to other treatments. One meta-analysis found that CBT drop-out rates were 17% higher than those of other therapies. This high drop-out rate is also evident in the treatment of several disorders, particularly the eating disorder anorexia nervosa, which is commonly treated with CBT. Those treated with CBT have a high chance of dropping out of therapy before completion and reverting to their anorexia behaviors. Other researchers analyzing treatments for youths who self-injure found similar drop-out rates in CBT and DBT groups. In this study, the researchers analyzed several clinical trials that measured the efficacy of CBT administered to youths who self-injure. The researchers concluded that none of them were found to be efficacious. Philosophical concerns with CBT methods The methods employed in CBT research have not been the only criticisms; some individuals have called its theory and therapy into question. Slife and Williams write that one of the hidden assumptions in CBT is that of determinism, or the absence of free will. They argue that CBT holds that external stimuli from the environment enter the mind, causing different thoughts that cause emotional states: nowhere in CBT theory is agency, or free will, accounted for. Another criticism of CBT theory, especially as applied to major depressive disorder (MDD), is that it confounds the symptoms of the disorder with its causes. Side effects CBT is generally regarded as having very few if any side effects. Calls have been made by some for more appraisal of possible side effects of CBT. Many randomized trials of psychological interventions like CBT do not monitor potential harms to the patient. In contrast, randomized trials of pharmacological interventions are much more likely to take adverse effects into consideration. A 2017 meta-analysis revealed that adverse events are not common in children receiving CBT and, furthermore, that CBT is associated with fewer dropouts than either placebo or medications. Nevertheless, CBT therapists do sometimes report 'unwanted events' and side effects in their outpatients with "negative wellbeing/distress" being the most frequent. Socio-political concerns The writer and group analyst Farhad Dalal questions the socio-political assumptions behind the introduction of CBT. According to one reviewer, Dalal connects the rise of CBT with "the parallel rise of neoliberalism, with its focus on marketization, efficiency, quantification and managerialism", and he questions the scientific basis of CBT, suggesting that "the 'science' of psychological treatment is often less a scientific than a political contest". In his book, Dalal also questions the ethical basis of CBT. Society and culture The UK's National Health Service announced in 2008 that more therapists would be trained to provide CBT at government expense as part of an initiative called Improving Access to Psychological Therapies (IAPT). The NICE said that CBT would become the mainstay of treatment for non-severe depression, with medication used only in cases where CBT had failed. 
Therapists complained that the data does not fully support the attention and funding CBT receives. Psychotherapist and professor Andrew Samuels stated that this constitutes "a coup, a power play by a community that has suddenly found itself on the brink of corralling an enormous amount of money ... Everyone has been seduced by CBT's apparent cheapness." The UK Council for Psychotherapy issued a press release in 2012 saying that the IAPT's policies were undermining traditional psychotherapy and criticized proposals that would limit some approved therapies to CBT, claiming that they restricted patients to "a watered down version of cognitive behavioural therapy (CBT), often delivered by very lightly trained staff". The NICE also recommends offering CBT to people with schizophrenia, as well as those at risk of a psychotic episode. References Further reading External links Association for Behavioral and Cognitive Therapies (ABCT) British Association for Behavioural and Cognitive Psychotherapies National Association of Cognitive-Behavioral Therapists International Association of Cognitive Psychotherapy Information on Research-based CBT Treatments Associated Counsellors & Psychologists CBT Therapists Addiction Addiction medicine Treatment of obsessive–compulsive disorder
2,587
5,751
https://en.wikipedia.org/wiki/Chinese%20language
Chinese language
Chinese (, especially when referring to written Chinese) is a group of languages spoken natively by the ethnic Han Chinese majority and many minority ethnic groups in Greater China. About 1.3 billion people (or approximately 16% of the world's population) speak a variety of Chinese as their first language. Chinese languages form the Sinitic branch of the Sino-Tibetan languages family. The spoken varieties of Chinese are usually considered by native speakers to be dialects of a single language. However, their lack of mutual intelligibility means they are sometimes considered to be separate languages in a family. Investigation of the historical relationships among the varieties of Chinese is ongoing. Currently, most classifications posit 7 to 13 main regional groups based on phonetic developments from Middle Chinese, of which the most spoken by far is Mandarin (with about 800 million speakers, or 66%), followed by Min (75 million, e.g. Southern Min), Wu (74 million, e.g. Shanghainese), and Yue (68 million, e.g. Cantonese). These branches are unintelligible to each other, and many of their subgroups are unintelligible with the other varieties within the same branch (e.g. Southern Min). There are, however, transitional areas where varieties from different branches share enough features for some limited intelligibility, including New Xiang with Southwest Mandarin, Xuanzhou Wu with Lower Yangtze Mandarin, Jin with Central Plains Mandarin and certain divergent dialects of Hakka with Gan (though these are unintelligible with mainstream Hakka). All varieties of Chinese are tonal to at least some degree, and are largely analytic. The earliest Chinese written records are Shang dynasty-era oracle bone inscriptions, which can be dated to 1250 BCE. The phonetic categories of Old Chinese can be reconstructed from the rhymes of ancient poetry. During the Northern and Southern dynasties period, Middle Chinese went through several sound changes and split into several varieties following prolonged geographic and political separation. Qieyun, a rime dictionary, recorded a compromise between the pronunciations of different regions. The royal courts of the Ming and early Qing dynasties operated using a koiné language (Guanhua) based on Nanjing dialect of Lower Yangtze Mandarin. Standard Chinese (Standard Mandarin), based on the Beijing dialect of Mandarin, was adopted in the 1930s and is now an official language of both the People's Republic of China and the Republic of China (Taiwan), one of the four official languages of Singapore, and one of the six official languages of the United Nations. The written form, using the logograms known as Chinese characters, is shared by literate speakers of mutually unintelligible dialects. Since the 1950s, simplified Chinese characters have been promoted for use by the government of the People's Republic of China, while Singapore officially adopted simplified characters in 1976. Traditional characters remain in use in Taiwan, Hong Kong, Macau, and other countries with significant overseas Chinese speaking communities such as Malaysia (which although adopted simplified characters as the de facto standard in the 1980s, traditional characters still remain in widespread use). Classification Linguists classify all varieties of Chinese as part of the Sino-Tibetan language family, together with Burmese, Tibetan and many other languages spoken in the Himalayas and the Southeast Asian Massif. 
Although the relationship was first proposed in the early 19th century and is now broadly accepted, reconstruction of Sino-Tibetan is much less developed than that of families such as Indo-European or Austroasiatic. Difficulties have included the great diversity of the languages, the lack of inflection in many of them, and the effects of language contact. In addition, many of the smaller languages are spoken in mountainous areas that are difficult to reach and are often also sensitive border zones. Without a secure reconstruction of proto-Sino-Tibetan, the higher-level structure of the family remains unclear. A top-level branching into Chinese and Tibeto-Burman languages is often assumed, but has not been convincingly demonstrated. History The first written records appeared over 3,000 years ago during the Shang dynasty. As the language evolved over this period, the various local varieties became mutually unintelligible. In reaction, central governments have repeatedly sought to promulgate a unified standard. Old and Middle Chinese The earliest examples of Chinese (Old Chinese) are divinatory inscriptions on oracle bones from around 1250 BCE in the late Shang dynasty. The next attested stage came from inscriptions on bronze artifacts of the Western Zhou period (1046–771 BCE), the Classic of Poetry and portions of the Book of Documents and I Ching. Scholars have attempted to reconstruct the phonology of Old Chinese by comparing later varieties of Chinese with the rhyming practice of the Classic of Poetry and the phonetic elements found in the majority of Chinese characters. Although many of the finer details remain unclear, most scholars agree that Old Chinese differs from Middle Chinese in lacking retroflex and palatal obstruents but having initial consonant clusters of some sort, and in having voiceless nasals and liquids. Most recent reconstructions also describe an atonal language with consonant clusters at the end of the syllable, developing into tone distinctions in Middle Chinese. Several derivational affixes have also been identified, but the language lacks inflection, and indicated grammatical relationships using word order and grammatical particles. Middle Chinese was the language used during Northern and Southern dynasties and the Sui, Tang, and Song dynasties (6th through 10th centuries CE). It can be divided into an early period, reflected by the Qieyun rime book (601 CE), and a late period in the 10th century, reflected by rhyme tables such as the Yunjing constructed by ancient Chinese philologists as a guide to the Qieyun system. These works define phonological categories, but with little hint of what sounds they represent. Linguists have identified these sounds by comparing the categories with pronunciations in modern varieties of Chinese, borrowed Chinese words in Japanese, Vietnamese, and Korean, and transcription evidence. The resulting system is very complex, with a large number of consonants and vowels, but they are probably not all distinguished in any single dialect. Most linguists now believe it represents a diasystem encompassing 6th-century northern and southern standards for reading the classics. Classical and literary forms The relationship between spoken and written Chinese is rather complex ("diglossia"). Its spoken varieties have evolved at different rates, while written Chinese itself has changed much less. Classical Chinese literature began in the Spring and Autumn period. 
Rise of northern dialects After the fall of the Northern Song dynasty and subsequent reign of the Jin (Jurchen) and Yuan (Mongol) dynasties in northern China, a common speech (now called Old Mandarin) developed based on the dialects of the North China Plain around the capital. The Zhongyuan Yinyun (1324) was a dictionary that codified the rhyming conventions of new sanqu verse form in this language. Together with the slightly later Menggu Ziyun, this dictionary describes a language with many of the features characteristic of modern Mandarin dialects. Up to the early 20th century, most Chinese people only spoke their local variety. Thus, as a practical measure, officials of the Ming and Qing dynasties carried out the administration of the empire using a common language based on Mandarin varieties, known as Guānhuà (/, literally "language of officials"). For most of this period, this language was a koiné based on dialects spoken in the Nanjing area, though not identical to any single dialect. By the middle of the 19th century, the Beijing dialect had become dominant and was essential for any business with the imperial court. In the 1930s, a standard national language, Guóyǔ (/ ; "national language") was adopted. After much dispute between proponents of northern and southern dialects and an abortive attempt at an artificial pronunciation, the National Language Unification Commission finally settled on the Beijing dialect in 1932. The People's Republic founded in 1949 retained this standard but renamed it pǔtōnghuà (/; "common speech"). The national language is now used in education, the media, and formal situations in both Mainland China and Taiwan. Because of their colonial and linguistic history, the language used in education, the media, formal speech, and everyday life in Hong Kong and Macau is the local Cantonese, although the standard language, Mandarin, has become very influential and is being taught in schools. Influence Historically, the Chinese language has spread to its neighbors through a variety of means. Northern Vietnam was incorporated into the Han empire in 111 BCE, marking the beginning of a period of Chinese control that ran almost continuously for a millennium. The Four Commanderies were established in northern Korea in the first century BCE, but disintegrated in the following centuries. Chinese Buddhism spread over East Asia between the 2nd and 5th centuries CE, and with it the study of scriptures and literature in Literary Chinese. Later Korea, Japan, and Vietnam developed strong central governments modeled on Chinese institutions, with Literary Chinese as the language of administration and scholarship, a position it would retain until the late 19th century in Korea and (to a lesser extent) Japan, and the early 20th century in Vietnam. Scholars from different lands could communicate, albeit only in writing, using Literary Chinese. Although they used Chinese solely for written communication, each country had its own tradition of reading texts aloud, the so-called Sino-Xenic pronunciations. Chinese words with these pronunciations were also extensively imported into the Korean, Japanese and Vietnamese languages, and today comprise over half of their vocabularies. This massive influx led to changes in the phonological structure of the languages, contributing to the development of moraic structure in Japanese and the disruption of vowel harmony in Korean. 
Borrowed Chinese morphemes have been used extensively in all these languages to coin compound words for new concepts, in a similar way to the use of Latin and Ancient Greek roots in European languages. Many new compounds, or new meanings for old phrases, were created in the late 19th and early 20th centuries to name Western concepts and artifacts. These coinages, written in shared Chinese characters, have then been borrowed freely between languages. They have even been accepted into Chinese, a language usually resistant to loanwords, because their foreign origin was hidden by their written form. Often different compounds for the same concept were in circulation for some time before a winner emerged, and sometimes the final choice differed between countries. The proportion of vocabulary of Chinese origin thus tends to be greater in technical, abstract, or formal language. For example, in Japan, Sino-Japanese words account for about 35% of the words in entertainment magazines, over half the words in newspapers, and 60% of the words in science magazines. Vietnam, Korea, and Japan each developed writing systems for their own languages, initially based on Chinese characters, but later replaced with the hangul alphabet for Korean and supplemented with kana syllabaries for Japanese, while Vietnamese continued to be written with the complex chữ nôm script. However, these were limited to popular literature until the late 19th century. Today Japanese is written with a composite script using both Chinese characters (kanji) and kana. Korean is written exclusively with hangul in North Korea (although knowledge of the supplementary Chinese characters - hanja - is still required), and hanja are increasingly rarely used in South Korea. As a result of former French colonization, Vietnamese switched to a Latin-based alphabet. Examples of loan words in English include "tea", from Hokkien (Min Nan) (), "dim sum", from Cantonese dim2 sam1 () and "kumquat", from Cantonese gam1gwat1 (). Varieties Jerry Norman estimated that there are hundreds of mutually unintelligible varieties of Chinese. These varieties form a dialect continuum, in which differences in speech generally become more pronounced as distances increase, though the rate of change varies immensely. Generally, mountainous South China exhibits more linguistic diversity than the North China Plain. In parts of South China, a major city's dialect may only be marginally intelligible to close neighbors. For instance, Wuzhou is about upstream from Guangzhou, but the Yue variety spoken there is more like that of Guangzhou than is that of Taishan, southwest of Guangzhou and separated from it by several rivers. In parts of Fujian the speech of neighboring counties or even villages may be mutually unintelligible. Until the late 20th century, Chinese emigrants to Southeast Asia and North America came from southeast coastal areas, where Min, Hakka, and Yue dialects are spoken. The vast majority of Chinese immigrants to North America up to the mid-20th century spoke the Taishan dialect, from a small coastal area southwest of Guangzhou. 
Grouping Local varieties of Chinese are conventionally classified into seven dialect groups, largely on the basis of the different evolution of Middle Chinese voiced initials: Mandarin, including Standard Chinese, Pekingese, Sichuanese, and also the Dungan language spoken in Central Asia Wu, including Shanghainese, Suzhounese, and Wenzhounese Gan Xiang Min, including Fuzhounese, Hainanese, Hokkien and Teochew Hakka Yue, including Cantonese and Taishanese The classification of Li Rong, which is used in the Language Atlas of China (1987), distinguishes three further groups: Jin, previously included in Mandarin. Huizhou, previously included in Wu. Pinghua, previously included in Yue. Some varieties remain unclassified, including Danzhou dialect (spoken in Danzhou, on Hainan Island), Waxianghua (spoken in western Hunan) and Shaozhou Tuhua (spoken in northern Guangdong). Standard Chinese Standard Chinese, often called Mandarin, is the official standard language of China, the de facto official language of Taiwan, and one of the four official languages of Singapore (where it is called "Huáyŭ" / or Chinese). Standard Chinese is based on the Beijing dialect, the dialect of Mandarin as spoken in Beijing. The governments of both China and Taiwan intend for speakers of all Chinese speech varieties to use it as a common language of communication. Therefore, it is used in government agencies, in the media, and as a language of instruction in schools. In China and Taiwan, diglossia has been a common feature. For example, in addition to Standard Chinese, a resident of Shanghai might speak Shanghainese; and, if they grew up elsewhere, then they are also likely to be fluent in the particular dialect of that local area. A native of Guangzhou may speak both Cantonese and Standard Chinese. In addition to Mandarin, most Taiwanese also speak Taiwanese Hokkien (commonly "Taiwanese" ), Hakka, or an Austronesian language. A Taiwanese may commonly mix pronunciations, phrases, and words from Mandarin and other Taiwanese languages, and this mixture is considered normal in daily or informal speech. Due to their traditional cultural ties to Guangdong province and colonial histories, Cantonese is used as the standard variant of Chinese in Hong Kong and Macau instead. Nomenclature The official Chinese designation for the major branches of Chinese is fāngyán (, literally "regional speech"), whereas the more closely related varieties within these are called dìdiǎn fāngyán (/ "local speech"). Conventional English-language usage in Chinese linguistics is to use dialect for the speech of a particular place (regardless of status) and dialect group for a regional grouping such as Mandarin or Wu. Because varieties from different groups are not mutually intelligible, some scholars prefer to describe Wu and others as separate languages. Jerry Norman called this practice misleading, pointing out that Wu, which itself contains many mutually unintelligible varieties, could not be properly called a single language under the same criterion, and that the same is true for each of the other groups. Mutual intelligibility is considered by some linguists to be the main criterion for determining whether varieties are separate languages or dialects of a single language, although others do not regard it as decisive, particularly when cultural factors interfere as they do with Chinese. As explains, linguists often ignore mutual intelligibility when varieties share intelligibility with a central variety (i.e. 
prestige variety, such as Standard Mandarin), as the issue requires some careful handling when mutual intelligibility is inconsistent with language identity. John DeFrancis argues that it is inappropriate to refer to Mandarin, Wu and so on as "dialects" because the mutual unintelligibility between them is too great. On the other hand, he also objects to considering them as separate languages, as it incorrectly implies a set of disruptive "religious, economic, political, and other differences" between speakers that exist, for example, between French Catholics and English Protestants in Canada, but not between speakers of Cantonese and Mandarin in China, owing to China's near-uninterrupted history of centralized government. Because of the difficulties involved in determining the difference between language and dialect, other terms have been proposed. These include vernacular, lect, regionalect, topolect, and variety. Most Chinese people consider the spoken varieties as one single language because speakers share a common culture and history, as well as a shared national identity and a common written form. Phonology Syllables in the Chinese languages have some unique characteristics. They are tightly related to the morphology and also to the characters of the writing system; and phonologically they are structured according to fixed rules. The structure of each syllable consists of a nucleus that has a vowel (which can be a monophthong, diphthong, or even a triphthong in certain varieties), preceded by an onset (a single consonant, or consonant+glide; zero onset is also possible), and followed (optionally) by a coda consonant; a syllable also carries a tone. There are some instances where a vowel is not used as a nucleus. An example of this is in Cantonese, where the nasal sonorant consonants and can stand alone as their own syllable. In Mandarin much more than in other spoken varieties, most syllables tend to be open syllables, meaning they have no coda (assuming that a final glide is not analyzed as a coda), but syllables that do have codas are restricted to nasals , , , the retroflex approximant , and voiceless stops , , , or . Some varieties allow most of these codas, whereas others, such as Standard Chinese, are limited to only , , and . The number of sounds in the different spoken dialects varies, but in general there has been a tendency to a reduction in sounds from Middle Chinese. The Mandarin dialects in particular have experienced a dramatic decrease in sounds and so have far more multisyllabic words than most other spoken varieties. The total number of syllables in some varieties is therefore only about a thousand, including tonal variation, which is only about an eighth as many as English. Tones All varieties of spoken Chinese use tones to distinguish words. A few dialects of north China may have as few as three tones, while some dialects in south China have up to 6 or 12 tones, depending on how one counts. One exception from this is Shanghainese which has reduced the set of tones to a two-toned pitch accent system much like modern Japanese. A very common example used to illustrate the use of tones in Chinese is the application of the four tones of Standard Chinese (along with the neutral tone) to the syllable ma. The tones are exemplified by the following five Chinese words: Standard Cantonese, in contrast, has six tones. Historically, finals that end in a stop consonant were considered to be "checked tones" and thus counted separately for a total of nine tones. 
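The table of the five ma words referred to above does not appear in this text; as a minimal illustrative sketch (using the standard textbook glosses rather than anything reproduced from the article itself), the Mandarin example can be summarized as follows:

```python
# Illustrative sketch (standard textbook glosses; the article's own table is
# not reproduced here): the syllable "ma" with each of the four Mandarin
# tones, plus the neutral tone, yields five different words.
MA_EXAMPLE = {
    "mā": "mother",             # 1st tone: high level
    "má": "hemp",               # 2nd tone: rising
    "mǎ": "horse",              # 3rd tone: low dipping
    "mà": "to scold",           # 4th tone: falling
    "ma": "question particle",  # neutral tone
}

for syllable, gloss in MA_EXAMPLE.items():
    print(f"{syllable}: {gloss}")
```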
However, they are considered to be duplicates in modern linguistics and are no longer counted as such: Grammar Chinese is often described as a "monosyllabic" language. However, this is only partially correct. It is largely accurate when describing Classical Chinese and Middle Chinese; in Classical Chinese, for example, perhaps 90% of words correspond to a single syllable and a single character. In the modern varieties, it is usually the case that a morpheme (unit of meaning) is a single syllable; in contrast, English has many multi-syllable morphemes, both bound and free, such as "seven", "elephant", "para-" and "-able". Some of the conservative southern varieties of modern Chinese have largely monosyllabic words, especially among the more basic vocabulary. In modern Mandarin, however, most nouns, adjectives and verbs are largely disyllabic. A significant cause of this is phonological attrition. Sound change over time has steadily reduced the number of possible syllables. In modern Mandarin, there are now only about 1,200 possible syllables, including tonal distinctions, compared with about 5,000 in Vietnamese (still largely monosyllabic) and over 8,000 in English. This phonological collapse has led to a corresponding increase in the number of homophones. As an example, the small Langenscheidt Pocket Chinese Dictionary lists six words that are commonly pronounced as shí (tone 2): 'ten'; / 'real, actual'; / 'know (a person), recognize'; 'stone'; / 'time'; 'food, eat'. These were all pronounced differently in Early Middle Chinese; in William H. Baxter's transcription they were , , , , and respectively. They are still pronounced differently in today's Cantonese; in Jyutping they are sap9, sat9, sik7, sek9, si4, sik9. In modern spoken Mandarin, however, tremendous ambiguity would result if all of these words could be used as-is; Yuen Ren Chao's modern poem Lion-Eating Poet in the Stone Den exploits this, consisting of 92 characters all pronounced shi. As such, most of these words have been replaced (in speech, if not in writing) with a longer, less-ambiguous compound. Only the first one, 'ten', normally appears as such when spoken; the rest are normally replaced with, respectively, shíjì / (lit. 'actual-connection'); rènshi / (lit. 'recognize-know'); shítou / (lit. 'stone-head'); shíjiān / (lit. 'time-interval'); shíwù (lit. 'foodstuff'). In each case, the homophone was disambiguated by adding another morpheme, typically either a synonym or a generic word of some sort (for example, 'head', 'thing'), the purpose of which is to indicate which of the possible meanings of the other, homophonic syllable should be selected. However, when one of the above words forms part of a compound, the disambiguating syllable is generally dropped and the resulting word is still disyllabic. For example, shí alone, not shítou /, appears in compounds meaning 'stone-', for example, shígāo 'plaster' (lit. 'stone cream'), shíhuī 'lime' (lit. 'stone dust'), shíkū 'grotto' (lit. 'stone cave'), shíyīng 'quartz' (lit. 'stone flower'), shíyóu 'petroleum' (lit. 'stone oil'). Most modern varieties of Chinese have the tendency to form new words through disyllabic, trisyllabic and tetra-character compounds. In some cases, monosyllabic words have become disyllabic without compounding, as in kūlong from kǒng 孔; this is especially common in Jin. Chinese morphology is strictly bound to a set number of syllables with a fairly rigid construction. 
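As a compact recap of the shí example just given, the following minimal Python sketch (not part of the original article) collects the disambiguating compounds cited in the text; only the pinyin forms are shown, since the characters themselves are not reproduced here:

```python
# Recap of the shí example above: in spoken Mandarin, each homophonous
# monosyllable is usually replaced by a less ambiguous disyllabic compound.
# Glosses and compounds are those given in the surrounding text.
SHI_COMPOUNDS = {
    "ten":             "shí",      # normally stands alone in speech
    "real, actual":    "shíjì",    # lit. "actual-connection"
    "know, recognize": "rènshi",   # lit. "recognize-know"
    "stone":           "shítou",   # lit. "stone-head"
    "time":            "shíjiān",  # lit. "time-interval"
    "food, eat":       "shíwù",    # lit. "foodstuff"
}

for gloss, spoken_form in SHI_COMPOUNDS.items():
    print(f"{gloss!r} is usually spoken as {spoken_form}")
```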
Although many of these single-syllable morphemes (zì, ) can stand alone as individual words, they more often than not form multi-syllabic compounds, known as cí (/), which more closely resembles the traditional Western notion of a word. A Chinese cí ('word') can consist of more than one character-morpheme, usually two, but there can be three or more. For example: / 'cloud' , /, / 'hamburger' 'I, me' / 'goalkeeper' 'people, human, mankind' 'The Earth' / 'lightning' / 'dream' All varieties of modern Chinese are analytic languages, in that they depend on syntax (word order and sentence structure) rather than morphology—i.e., changes in form of a word—to indicate the word's function in a sentence. In other words, Chinese has very few grammatical inflections—it possesses no tenses, no voices, no numbers (singular, plural; though there are plural markers, for example for personal pronouns), and only a few articles (i.e., equivalents to "the, a, an" in English). They make heavy use of grammatical particles to indicate aspect and mood. In Mandarin Chinese, this involves the use of particles like le (perfective), hái / ('still'), yǐjīng / ('already'), and so on. Chinese has a subject–verb–object word order, and like many other languages of East Asia, makes frequent use of the topic–comment construction to form sentences. Chinese also has an extensive system of classifiers and measure words, another trait shared with neighboring languages like Japanese and Korean. Other notable grammatical features common to all the spoken varieties of Chinese include the use of serial verb construction, pronoun dropping and the related subject dropping. Although the grammars of the spoken varieties share many traits, they do possess differences. Vocabulary The entire Chinese character corpus since antiquity comprises well over 50,000 characters, of which only roughly 10,000 are in use and only about 3,000 are frequently used in Chinese media and newspapers. However Chinese characters should not be confused with Chinese words. Because most Chinese words are made up of two or more characters, there are many more Chinese words than characters. A more accurate equivalent for a Chinese character is the morpheme, as characters represent the smallest grammatical units with individual meanings in the Chinese language. Estimates of the total number of Chinese words and lexicalized phrases vary greatly. The Hanyu Da Zidian, a compendium of Chinese characters, includes 54,678 head entries for characters, including bone oracle versions. The Zhonghua Zihai (1994) contains 85,568 head entries for character definitions, and is the largest reference work based purely on character and its literary variants. The CC-CEDICT project (2010) contains 97,404 contemporary entries including idioms, technology terms and names of political figures, businesses and products. The 2009 version of the Webster's Digital Chinese Dictionary (WDCD), based on CC-CEDICT, contains over 84,000 entries. The most comprehensive pure linguistic Chinese-language dictionary, the 12-volume Hanyu Da Cidian, records more than 23,000 head Chinese characters and gives over 370,000 definitions. The 1999 revised Cihai, a multi-volume encyclopedic dictionary reference work, gives 122,836 vocabulary entry definitions under 19,485 Chinese characters, including proper names, phrases and common zoological, geographical, sociological, scientific and technical terms. 
The 7th (2016) edition of Xiandai Hanyu Cidian, an authoritative one-volume dictionary on modern standard Chinese language as used in mainland China, has 13,000 head characters and defines 70,000 words. Loanwords Like any other language, Chinese has absorbed a sizable number of loanwords from other cultures. Most Chinese words are formed out of native Chinese morphemes, including words describing imported objects and ideas. However, direct phonetic borrowing of foreign words has gone on since ancient times. Some early Indo-European loanwords in Chinese have been proposed, notably mì "honey", / shī "lion," and perhaps also / mǎ "horse", / zhū "pig", quǎn "dog", and / é "goose". Ancient words borrowed from along the Silk Road since Old Chinese include pútáo "grape", shíliu/shíliú "pomegranate" and / shīzi "lion". Some words were borrowed from Buddhist scriptures, including Fó "Buddha" and / Púsà "bodhisattva." Other words came from nomadic peoples to the north, such as hútòng "hutong". Words borrowed from the peoples along the Silk Road, such as "grape," generally have Persian etymologies. Buddhist terminology is generally derived from Sanskrit or Pāli, the liturgical languages of North India. Words borrowed from the nomadic tribes of the Gobi, Mongolian or northeast regions generally have Altaic etymologies, such as pípá, the Chinese lute, or lào/luò "cheese" or "yogurt", but from exactly which source is not always clear. Modern borrowings Modern neologisms are primarily translated into Chinese in one of three ways: free translation (calque, or by meaning), phonetic translation (by sound), or a combination of the two. Today, it is much more common to use existing Chinese morphemes to coin new words to represent imported concepts, such as technical expressions and international scientific vocabulary. Any Latin or Greek etymologies are dropped and converted into the corresponding Chinese characters (for example, anti- typically becomes "", literally opposite), making them more comprehensible for Chinese but introducing more difficulties in understanding foreign texts. For example, the word telephone was initially loaned phonetically as / (Shanghainese: télífon , Mandarin: délǜfēng) during the 1920s and widely used in Shanghai, but later / diànhuà (lit. "electric speech"), built out of native Chinese morphemes, became prevalent ( is in fact from the Japanese denwa; see below for more Japanese loans). Other examples include / diànshì (lit. "electric vision") for television, / diànnǎo (lit. "electric brain") for computer; / shǒujī (lit. "hand machine") for mobile phone, / lányá (lit. "blue tooth") for Bluetooth, and / wǎngzhì (lit. "internet logbook") for blog in Hong Kong and Macau Cantonese. Occasionally half-transliteration, half-translation compromises are accepted, such as / hànbǎobāo ( hànbǎo "Hamburg" + bāo "bun") for "hamburger". Sometimes translations are designed so that they sound like the original while incorporating Chinese morphemes (phono-semantic matching), such as / Mǎlì'ào for the video game character Mario. This is often done for commercial purposes, for example / bēnténg (lit. "dashing-leaping") for Pentium and / Sàibǎiwèi (lit. "better-than hundred tastes") for Subway restaurants. Foreign words, mainly proper nouns, continue to enter the Chinese language by transcription according to their pronunciations. This is done by employing Chinese characters with similar pronunciations. For example, "Israel" becomes Yǐsèliè, "Paris" becomes Bālí. 
A rather small number of direct transliterations have survived as common words, including / shāfā "sofa", / mǎdá "motor", yōumò "humor", / luóji/luójí "logic", / shímáo "smart, fashionable", and xiēsīdǐlǐ "hysterics". The bulk of these words were originally coined in the Shanghai dialect during the early 20th century and were later loaned into Mandarin, hence their pronunciations in Mandarin may be quite off from the English. For example, / "sofa" and / "motor" in Shanghainese sound more like their English counterparts. Cantonese differs from Mandarin with some transliterations, such as so1 faa3*2 "sofa" and mo1 daa2 "motor". Western foreign words representing Western concepts have influenced Chinese since the 20th century through transcription. From French came bālěi "ballet" and / xiāngbīn, "champagne"; from Italian, kāfēi "caffè". English influence is particularly pronounced. From early 20th century Shanghainese, many English words are borrowed, such as / gāoěrfū "golf" and the above-mentioned / shāfā "sofa". Later, the United States soft influences gave rise to dísikē/dísīkē "disco", / kělè "cola", and mínǐ "mini [skirt]". Contemporary colloquial Cantonese has distinct loanwords from English, such as kaa1 tung1 "cartoon", gei1 lou2 "gay people", dik1 si6*2 "taxi", and baa1 si6*2 "bus". With the rising popularity of the Internet, there is a current vogue in China for coining English transliterations, for example, / fěnsī "fans", hēikè "hacker" (lit. "black guest"), and bókè "blog". In Taiwan, some of these transliterations are different, such as hàikè for "hacker" and bùluògé for "blog" (lit. "interconnected tribes"). Another result of the English influence on Chinese is the appearance in Modern Chinese texts of so-called / zìmǔcí (lit. "lettered words") spelled with letters from the English alphabet. This has appeared in magazines, newspapers, on web sites, and on TV: / "3rd generation cell phones" ( sān "three" + G "generation" + / shǒujī "mobile phones"), "IT circles" (IT "information technology" + jiè "industry"), HSK (Hànyǔ Shuǐpíng Kǎoshì, /), GB (Guóbiāo, /), / (CIF "Cost, Insurance, Freight" + / jià "price"), "e-home" (e "electronic" + jiātíng "home"), / "wireless era" (W "wireless" + / shídài "era"), "TV watchers" (TV "television" + zú "social group; clan"), / "post-PC era" (/ hòu "after/post-" + PC "personal computer" + /), and so on. Since the 20th century, another source of words has been Japanese using existing kanji (Chinese characters used in Japanese). Japanese re-molded European concepts and inventions into , and many of these words have been re-loaned into modern Chinese. Other terms were coined by the Japanese by giving new senses to existing Chinese terms or by referring to expressions used in classical Chinese literature. For example, jīngjì (/; keizai in Japanese), which in the original Chinese meant "the workings of the state", was narrowed to "economy" in Japanese; this narrowed definition was then reimported into Chinese. As a result, these terms are virtually indistinguishable from native Chinese words: indeed, there is some dispute over some of these terms as to whether the Japanese or Chinese coined them first. As a result of this loaning, Chinese, Korean, Japanese, and Vietnamese share a corpus of linguistic terms describing modern terminology, paralleling the similar corpus of terms built from Greco-Latin and shared among European languages. 
Writing system The Chinese orthography centers on Chinese characters, which are written within imaginary square blocks, traditionally arranged in vertical columns, read from top to bottom down a column, and right to left across columns, despite alternative arrangement with rows of characters from left to right within a row and from top to bottom across rows (like English and other Western writing systems) having become more popular since the 20th century. Chinese characters denote morphemes independent of phonetic variation in different languages. Thus the character ("one") is uttered in Standard Chinese, in Cantonese and it in Hokkien (a form of Min). Most written Chinese documents in the modern time, especially the more formal ones, are created using the grammar and syntax of the Standard Mandarin Chinese variants, regardless of dialectical background of the author or targeted audience. This replaced the old writing language standard of Literary Chinese before the 20th century. However, vocabularies from different Chinese-speaking areas have diverged, and the divergence can be observed in written Chinese. Meanwhile, colloquial forms of various Chinese language variants have also been written down by their users, especially in less formal settings. The most prominent example of this is the written colloquial form of Cantonese, which has become quite popular in tabloids, instant messaging applications, and on the internet amongst Hong-Kongers and Cantonese-speakers elsewhere. Because some Chinese variants have diverged and developed a number of unique morphemes that are not found in Standard Mandarin (despite all other common morphemes), unique characters rarely used in Standard Chinese have also been created or inherited from archaic literary standard to represent these unique morphemes. For example, characters like and for Cantonese and Hakka, are actively used in both languages while being considered archaic or unused in standard written Chinese. The Chinese had no uniform phonetic transcription system for most of its speakers until the mid-20th century, although enunciation patterns were recorded in early rime books and dictionaries. Early Indian translators, working in Sanskrit and Pali, were the first to attempt to describe the sounds and enunciation patterns of Chinese in a foreign language. After the 15th century, the efforts of Jesuits and Western court missionaries resulted in some Latin character transcription/writing systems, based on various variants of Chinese languages. Some of these Latin character based systems are still being used to write various Chinese variants in the modern era. In Hunan, women in certain areas write their local Chinese language variant in Nü Shu, a syllabary derived from Chinese characters. The Dungan language, considered by many a dialect of Mandarin, is nowadays written in Cyrillic, and was previously written in the Arabic script. The Dungan people are primarily Muslim and live mainly in Kazakhstan, Kyrgyzstan, and Russia; some of the related Hui people also speak the language and live mainly in China. Chinese characters Each Chinese character represents a monosyllabic Chinese word or morpheme. In 100 CE, the famed Han dynasty scholar Xu Shen classified characters into six categories, namely pictographs, simple ideographs, compound ideographs, phonetic loans, phonetic compounds and derivative characters. 
Of these, only 4% were categorized as pictographs, including many of the simplest characters, such as rén (human), rì (sun), shān (mountain; hill), shuǐ (water). Between 80% and 90% were classified as phonetic compounds such as chōng (pour), combining a phonetic component zhōng (middle) with a semantic radical (water). Almost all characters created since have been made using this format. The 18th-century Kangxi Dictionary recognized 214 radicals. Modern characters are styled after the regular script. Various other written styles are also used in Chinese calligraphy, including seal script, cursive script and clerical script. Calligraphy artists can write in traditional and simplified characters, but they tend to use traditional characters for traditional art. There are currently two systems for Chinese characters. The traditional system, used in Hong Kong, Taiwan, Macau and Chinese speaking communities (except Singapore and Malaysia) outside mainland China, takes its form from standardized character forms dating back to the late Han dynasty. The Simplified Chinese character system, introduced by the People's Republic of China in 1954 to promote mass literacy, simplifies most complex traditional glyphs to fewer strokes, many to common cursive shorthand variants. Singapore, which has a large Chinese community, was the second nation to officially adopt simplified characters, although it has also become the de facto standard for younger ethnic Chinese in Malaysia. The Internet provides the platform to practice reading these alternative systems, be it traditional or simplified. Most Chinese users in the modern era are capable of, although not necessarily comfortable with, reading (but not writing) the alternative system, through experience and guesswork. A well-educated Chinese reader today recognizes approximately 4,000 to 6,000 characters; approximately 3,000 characters are required to read a Mainland newspaper. The PRC government defines literacy amongst workers as a knowledge of 2,000 characters, though this would be only functional literacy. School-children typically learn around 2,000 characters whereas scholars may memorize up to 10,000. A large unabridged dictionary, like the Kangxi Dictionary, contains over 40,000 characters, including obscure, variant, rare, and archaic characters; fewer than a quarter of these characters are now commonly used. Romanization Romanization is the process of transcribing a language into the Latin script. There are many systems of romanization for the Chinese varieties, due to the lack of a native phonetic transcription until modern times. Chinese is first known to have been written in Latin characters by Western Christian missionaries in the 16th century. Today the most common romanization standard for Standard Mandarin is Hanyu Pinyin, introduced in 1956 by the People's Republic of China, and later adopted by Singapore and Taiwan. Pinyin is almost universally employed now for teaching standard spoken Chinese in schools and universities across the Americas, Australia, and Europe. Chinese parents also use Pinyin to teach their children the sounds and tones of new words. In school books that teach Chinese, the Pinyin romanization is often shown below a picture of the thing the word represents, with the Chinese character alongside. The second-most common romanization system, the Wade–Giles, was invented by Thomas Wade in 1859 and modified by Herbert Giles in 1892. 
As this system approximates the phonology of Mandarin Chinese into English consonants and vowels, i.e. it is largely an Anglicization, it may be particularly helpful for beginner Chinese speakers of an English-speaking background. Wade–Giles was found in academic use in the United States, particularly before the 1980s, and until 2009 was widely used in Taiwan. When used within European texts, the tone transcriptions in both pinyin and Wade–Giles are often left out for simplicity; Wade–Giles' extensive use of apostrophes is also usually omitted. Thus, most Western readers will be much more familiar with Beijing than they will be with Běijīng (pinyin), and with Taipei than T'ai²-pei³ (Wade–Giles). This simplification presents syllables as homophones which really are none, and therefore exaggerates the number of homophones almost by a factor of four. Here are a few examples of Hanyu Pinyin and Wade–Giles, for comparison: Other systems of romanization for Chinese include Gwoyeu Romatzyh, the French EFEO, the Yale system (invented during WWII for U.S. troops), as well as separate systems for Cantonese, Min Nan, Hakka, and other Chinese varieties. Other phonetic transcriptions Chinese varieties have been phonetically transcribed into many other writing systems over the centuries. The 'Phags-pa script, for example, has been very helpful in reconstructing the pronunciations of premodern forms of Chinese. Zhuyin (colloquially bopomofo), a semi-syllabary is still widely used in Taiwan's elementary schools to aid standard pronunciation. Although zhuyin characters are reminiscent of katakana script, there is no source to substantiate the claim that Katakana was the basis for the zhuyin system. A comparison table of zhuyin to pinyin exists in the zhuyin article. Syllables based on pinyin and zhuyin can also be compared by looking at the following articles: Pinyin table Zhuyin table There are also at least two systems of cyrillization for Chinese. The most widespread is the Palladius system. As a foreign language With the growing importance and influence of China's economy globally, Mandarin instruction has been gaining popularity in schools throughout East Asia, Southeast Asia, and the Western world. Besides Mandarin, Cantonese is the only other Chinese language that is widely taught as a foreign language, largely due to the economic and cultural influence of Hong Kong and its widespread usage among significant Overseas Chinese communities. In 1991 there were 2,000 foreign learners taking China's official Chinese Proficiency Test (also known as HSK, comparable to the English Cambridge Certificate), but by 2005 the number of candidates had risen sharply to 117,660 and in 2010 to 750,000. See also Chinese exclamative particles Chinese honorifics Chinese numerals Chinese punctuation Classical Chinese grammar Four-character idiom Han unification Languages of China North American Conference on Chinese Linguistics Protection of the Varieties of Chinese Notes References Citations Sources Further reading R. L. G. "Language borrowing Why so little Chinese in English?" The Economist. 6 June 2013. External links Classical Chinese texts – Chinese Text Project Marjorie Chan's ChinaLinks at the Ohio State University with hundreds of links to Chinese related web pages Analytic languages Isolating languages Language Languages of China Languages of Hong Kong Languages of Macau Languages of Singapore Languages of Taiwan Languages with own distinct writing systems Lingua francas
2,588
5,763
https://en.wikipedia.org/wiki/Cantonese%20%28disambiguation%29
Cantonese (disambiguation)
Cantonese is a language originating in Canton, Guangdong. Cantonese may also refer to:
Yue Chinese, Chinese languages that include Cantonese
Cantonese cuisine, the cuisine of Guangdong Province
Cantonese people, the native people of Guangdong and Guangxi
Lingnan culture, the regional culture often referred to as Cantonese culture
See also
Cantonese Braille, a Cantonese-language version of Braille in Hong Kong
Cantopop, Cantonese pop music
2,592
5,766
https://en.wikipedia.org/wiki/Clement%20Attlee
Clement Attlee
Clement Richard Attlee, 1st Earl Attlee, (3 January 18838 October 1967) was a British statesman and politician who served as Prime Minister of the United Kingdom from 1945 to 1951 and Leader of the Labour Party from 1935 to 1955. He was Deputy Prime Minister during the wartime coalition government under Winston Churchill, and served twice as Leader of the Opposition from 1935 to 1940 and from 1951 to 1955. Attlee remains the longest serving Labour leader. Attlee was born into an upper-middle-class family, the son of a wealthy London solicitor. After attending the public school Haileybury College and the University of Oxford, he practised as a barrister. The volunteer work he carried out in London's East End exposed him to poverty, and his political views shifted leftwards thereafter. He joined the Independent Labour Party, gave up his legal career, and began lecturing at the London School of Economics. His work was interrupted by service as an officer in the First World War. In 1919, he became mayor of Stepney and in 1922 was elected Member of Parliament for Limehouse. Attlee served in the first Labour minority government led by Ramsay MacDonald in 1924, and then joined the Cabinet during MacDonald's second minority (1929–1931). After retaining his seat in Labour's landslide defeat of 1931, he became the party's Deputy Leader. Elected Leader of the Labour Party in 1935, and at first advocating pacificism and opposing re-armament, he became a critic of Neville Chamberlain's appeasement of Adolf Hitler and Benito Mussolini in the lead-up to the Second World War. Attlee took Labour into the wartime coalition government in 1940 and served under Winston Churchill, initially as Lord Privy Seal and then as Deputy Prime Minister from 1942. As the European front of WWII reached its conclusion, the war cabinet headed by Churchill was dissolved and elections were scheduled to be held. The Labour Party, led by Attlee, won a landslide victory in the 1945 general election, on their post-war recovery platform. Following the election, Attlee led the construction of the first Labour majority government. His government's Keynesian approach to economic management aimed to maintain full employment, a mixed economy and a greatly enlarged system of social services provided by the state. To this end, it undertook the nationalisation of public utilities and major industries, and implemented wide-ranging social reforms, including the passing of the National Insurance Act 1946 and National Assistance Act, the formation of the National Health Service (NHS) in 1948, and the enlargement of public subsidies for council house building. His government also reformed trade union legislation, working practices and children's services; it created the National Parks system, passed the New Towns Act 1946 and established the town and country planning system. Attlee's foreign policy focused on decolonization efforts which he delegated to Ernest Bevin, but personally oversaw the partition of India (1947), the independence of Burma and Ceylon, and the dissolution of the British mandates of Palestine and Transjordan. He and Bevin encouraged the United States to take a vigorous role in the Cold War; unable to afford military intervention in Greece during its civil war, he called on Washington to counter the communists there. The strategy of containment was formalized between the two nations through the Truman Doctrine. 
He supported the Marshall Plan to rebuild Western Europe with American money and, in 1949, promoted the NATO military alliance against the Soviet bloc. After leading Labour to a narrow victory at the 1950 general election, he sent British troops to fight alongside South Korea in the Korean War. Attlee had inherited a country close to bankruptcy following the Second World War and beset by food, housing and resource shortages; despite his social reforms and economic programme, these problems persisted throughout his premiership, alongside recurrent currency crises and dependence on US aid. His party was narrowly defeated by the Conservatives in the 1951 general election, despite winning the most votes. He continued as Labour leader but retired after losing the 1955 election and was elevated to the House of Lords, where he served until his death in 1967. In public, he was modest and unassuming, but behind the scenes his depth of knowledge, quiet demeanour, objectivity and pragmatism proved decisive. He is often ranked as one of the greatest British prime ministers. Attlee's reputation among scholars has grown, thanks to his role in the Second World War, creation of the modern welfare state, and the establishment of the NHS. He is also commended for continuing the special relationship with the US and active involvement in NATO. Early life Attlee was born on 3 January 1883 in Putney, Surrey (now part of London), into an upper middle-class family, the seventh of eight children. His father was Henry Attlee (1841–1908), a solicitor, and his mother was Ellen Bravery Watson (1847–1920), daughter of Thomas Simons Watson, secretary for the Art Union of London. His parents were "committed Anglicans" who read prayers and psalms each morning at breakfast. Attlee grew up in a two-storey villa with a large garden and tennis court, staffed by three servants and a gardener. His father, a political Liberal, had inherited family interests in milling and brewing, and became a senior partner in the law firm of Druces, also serving a term as president of the Law Society of England and Wales. In 1898 he purchased a estate in Thorpe-le-Soken, Essex. At the age of nine, Attlee was sent to board at Northaw Place, a boys' preparatory school in Hertfordshire. In 1896 he followed his brothers to Haileybury College, where he was a middling student. He was influenced by the Darwinist views of his housemaster Frederick Webb Headley, and in 1899 he published an attack on striking London cab-drivers in the school magazine, predicting they would soon have to "beg for their fares". In 1901, Attlee went up to University College, Oxford, reading modern history. He and his brother Tom "were given a generous stipend by their father and embraced the university lifestyle—rowing, reading and socializing". He was later described by a tutor as "level-headed, industrious, dependable man with no brilliance of style ... but with excellent sound judgement". At university he had little interest in politics or economics, later describing his views at this time as "good old fashioned imperialist conservative". He graduated Bachelor of Arts in 1904 with second-class honours. Attlee then trained as a barrister at the Inner Temple and was called to the bar in March 1906. He worked for a time at his father's law firm Druces and Attlee but did not enjoy the work, and had no particular ambition to succeed in the legal profession. He also played football for non-League club Fleet. 
Attlee's father died in 1908, leaving an estate valued for probate at £75,394 (equivalent to £ in ). Early career In 1906, he became a volunteer at Haileybury House, a charitable club for working-class boys in Stepney in the East End of London run by his old school, and from 1907 to 1909 he served as the club's manager. Until then, his political views had been more conservative. However, after his shock at the poverty and deprivation he saw while working with the slum children, he came to the view that private charity would never be sufficient to alleviate poverty and that only direct action and income redistribution by the state would have any serious effect. This sparked a process that caused him to convert to socialism. He subsequently joined the Independent Labour Party (ILP) in 1908 and became active in local politics. In 1909, he stood unsuccessfully at his first election, as an ILP candidate for Stepney Borough Council. He also worked briefly as a secretary for Beatrice Webb in 1909, before becoming a secretary for Toynbee Hall. He worked for Webb's campaign of popularisation of the Minority Report as he was very active in Fabian Society circles, in which he would go round visiting many political societies—Liberal, Conservative and socialist—to explain and popularise the ideas, as well as recruiting lecturers deemed suitable to work on the campaign. In 1911, he was employed by the Government as an "official explainer"—touring the country to explain Chancellor of the Exchequer David Lloyd George's National Insurance Act. He spent the summer of that year touring Essex and Somerset on a bicycle, explaining the Act at public meetings. A year later, he became a lecturer at the London School of Economics, teaching Social science and Public administration. Military service Following the outbreak of the First World War in August 1914, Attlee applied to join the British Army. Initially his application was turned down, as at the age of 31 he was seen as being too old; however, he was eventually commissioned as a temporary lieutenant in the 6th (Service) Battalion, South Lancashire Regiment, on 30 September 1914. On 9 February 1915 he was promoted to captain, and on 14 March was appointed battalion adjutant. The 6th South Lancashires were part of the 38th Brigade of the 13th (Western) Division, which served in the Gallipoli campaign in Turkey. Attlee's decision to fight caused a rift between him and his older brother Tom, who, as a conscientious objector, spent much of the war in prison. After a period spent fighting in Gallipoli, Attlee collapsed after falling ill with dysentery and was put on a ship bound for England to recover. When he woke up he wanted to get back to action as soon as possible, and asked to be let off the ship in Malta, where he stayed in hospital in order to recover. His hospitalisation coincided with the Battle of Sari Bair, which saw a large number of his comrades killed. Upon returning to action, he was informed that his company had been chosen to hold the final lines during the evacuation of Suvla. As such, he was the penultimate man to be evacuated from Suvla Bay, the last being General Stanley Maude. The Gallipoli Campaign had been engineered by the First Lord of the Admiralty, Winston Churchill. Although it was unsuccessful, Attlee believed that it was a bold strategy which could have been successful if it had been better implemented on the ground. 
This led to an admiration for Churchill as a military strategist, something which would make their working relationship in later years productive. He later served in the Mesopotamian campaign in what is now Iraq, where in April 1916 he was badly wounded, being hit in the leg by shrapnel from friendly fire while storming an enemy trench during the Battle of Hanna. The battle was an unsuccessful attempt to relieve the Siege of Kut, and many of Attlee's fellow soldiers were also wounded or killed. He was sent firstly to India, and then back to the UK to recover. On 18 December 1916 he was transferred to the Heavy Section of the Machine Gun Corps, and 1 March 1917 he was promoted to the temporary rank of major, leading him to be known as "Major Attlee" for much of the inter-war period. He would spend most of 1917 training soldiers at various locations in England. From 2 to 9 July 1917, he was the temporary commanding officer (CO) of the newly formed L (later 10th) Battalion, the Tank Corps at Bovington Camp, Dorset. From 9 July, he assumed command of 30th Company of the same battalion; however, he did not deploy to France with it in December 1917, as he was transferred back to the South Lancashire Regiment on 28 November. After fully recovering from his injuries, he was sent to France in June 1918 to serve on the Western Front for the final months of the war. After being discharged from the Army in January 1919, he returned to Stepney, and returned to his old job lecturing part-time at the London School of Economics. Early political career Local politics Attlee returned to local politics in the immediate post-war period, becoming mayor of the Metropolitan Borough of Stepney, one of London's most deprived inner-city boroughs, in 1919. During his time as mayor, the council undertook action to tackle slum landlords who charged high rents but refused to spend money on keeping their property in habitable condition. The council served and enforced legal orders on homeowners to repair their property. It also appointed health visitors and sanitary inspectors, reducing the infant mortality rate, and took action to find work for returning unemployed ex-servicemen. In 1920, while mayor, he wrote his first book, The Social Worker, which set out many of the principles that informed his political philosophy and that were to underpin the actions of his government in later years. The book attacked the idea that looking after the poor could be left to voluntary action. He wrote on page 30:In a civilised community, although it may be composed of self-reliant individuals, there will be some persons who will be unable at some period of their lives to look after themselves, and the question of what is to happen to them may be solved in three ways – they may be neglected, they may be cared for by the organised community as of right, or they may be left to the goodwill of individuals in the community. and went on to say at page 75:Charity is only possible without loss of dignity between equals. A right established by law, such as that to an old age pension, is less galling than an allowance made by a rich man to a poor one, dependent on his view of the recipient's character, and terminable at his caprice. In 1921, George Lansbury, the Labour mayor of the neighbouring borough of Poplar, and future Labour Party leader, launched the Poplar Rates Rebellion; a campaign of disobedience seeking to equalise the poor relief burden across all the London boroughs. 
Attlee, who was a personal friend of Lansbury, strongly supported this. However, Herbert Morrison, the Labour mayor of nearby Hackney, and one of the main figures in the London Labour Party, strongly denounced Lansbury and the rebellion. During this period, Attlee developed a lifelong dislike of Morrison. Member of Parliament At the 1922 general election, Attlee became the Member of Parliament (MP) for the constituency of Limehouse in Stepney. At the time, he admired Ramsay MacDonald and helped him get elected as Labour Party leader at the 1922 leadership election. He served as MacDonald's Parliamentary Private Secretary for the brief 1922 parliament. His first taste of ministerial office came in 1924, when he served as Under-Secretary of State for War in the short-lived first Labour government, led by MacDonald. Attlee opposed the 1926 General Strike, believing that strike action should not be used as a political weapon. However, when it happened, he did not attempt to undermine it. At the time of the strike, he was chairman of the Stepney Borough Electricity Committee. He negotiated a deal with the Electrical Trade Union so that they would continue to supply power to hospitals, but would end supplies to factories. One firm, Scammell and Nephew Ltd, took a civil action against Attlee and the other Labour members of the committee (although not against the Conservative members who had also supported this). The court found against Attlee and his fellow councillors and they were ordered to pay £300 damages. The decision was later reversed on appeal, but the financial problems caused by the episode almost forced Attlee out of politics. In 1927, he was appointed a member of the multi-party Simon Commission, a royal commission set up to examine the possibility of granting self-rule to India. Due to the time he needed to devote to the commission, and contrary to a promise MacDonald made to Attlee to induce him to serve on the commission, he was not initially offered a ministerial post in the Second Labour Government, which entered office after the 1929 general election. Attlee's service on the Commission equipped him with a thorough exposure to India and many of its political leaders. By 1933 he argued that British rule was alien to India and was unable to make the social and economic reforms necessary for India's progress. He became the British leader most sympathetic to Indian independence (as a dominion), preparing him for his role in deciding on independence in 1947. In May 1930, Labour MP Oswald Mosley left the party after its rejection of his proposals for solving the unemployment problem, and Attlee was given Mosley's post of Chancellor of the Duchy of Lancaster. In March 1931, he became Postmaster General, a post he held for five months until August, when the Labour government fell, after failing to agree on how to tackle the financial crisis of the Great Depression. That month MacDonald and a few of his allies formed a National Government with the Conservatives and Liberals, leading them to be expelled from Labour. MacDonald offered Attlee a job in the National Government, but he turned down the offer and opted to stay loyal to the main Labour party. After Ramsay MacDonald formed the National Government, Labour was deeply divided. Attlee had long been close to MacDonald and now felt betrayed—as did most Labour politicians. 
During the course of the second Labour government, Attlee had become increasingly disillusioned with MacDonald, whom he came to regard as vain and incompetent, and of whom he later wrote scathingly in his autobiography: In the old days I had looked up to MacDonald as a great leader. He had a fine presence and great oratorical power. The unpopular line which he took during the First World War seemed to mark him as a man of character. Despite his mishandling of the Red Letter episode, I had not appreciated his defects until he took office a second time. I then realised his reluctance to take positive action and noted with dismay his increasing vanity and snobbery, while his habit of telling me, a junior Minister, the poor opinion he had of all his Cabinet colleagues made an unpleasant impression. I had not, however, expected that he would perpetrate the greatest betrayal in the political history of this country ... The shock to the Party was very great, especially to the loyal workers of the rank-and-file who had made great sacrifices for these men.

Deputy Leader

The 1931 general election held later that year was a disaster for the Labour Party, which lost over 200 seats, returning only 52 MPs to Parliament. The vast majority of the party's senior figures, including the Leader, Arthur Henderson, lost their seats. Attlee, however, narrowly retained his Limehouse seat, with his majority slashed from 7,288 to just 551. He was one of only three Labour MPs with experience of government to retain their seats, along with George Lansbury and Stafford Cripps. Accordingly, Lansbury was elected Leader unopposed, with Attlee as his deputy. Most of the remaining Labour MPs after 1931 were elderly trade union officials who could not contribute much to debates; Lansbury was in his 70s, and Stafford Cripps, another main figure of the Labour front bench, had only entered Parliament in 1931 and was inexperienced. As one of the most capable and experienced of the remaining Labour MPs, Attlee therefore shouldered much of the burden of providing an opposition to the National Government in the years 1931–35. During this time he had to extend his knowledge of subjects he had not previously studied in any depth, such as finance and foreign affairs, in order to provide an effective opposition to the government. Attlee effectively served as acting leader for nine months from December 1933, after Lansbury fractured his thigh in an accident, which raised Attlee's public profile considerably. It was during this period, however, that personal financial problems almost forced Attlee to quit politics altogether. His wife had become ill, and at that time there was no separate salary for the Leader of the Opposition. On the verge of resigning from Parliament, he was persuaded to stay by Stafford Cripps, a wealthy socialist, who agreed to make a donation to party funds to pay him an additional salary until Lansbury could take over again. During 1932–33 Attlee flirted with, and then drew back from, radicalism, influenced by Stafford Cripps, who was then on the radical wing of the party. He was briefly a member of the Socialist League, which had been formed by former Independent Labour Party (ILP) members who opposed the ILP's disaffiliation from the main Labour Party in 1932.
At one point he agreed with the proposition put forward by Cripps that gradual reform was inadequate and that a socialist government would have to pass an emergency powers act, allowing it to rule by decree to overcome any opposition by vested interests until it was safe to restore democracy. He admired Oliver Cromwell's strong-armed rule and use of major generals to control England. After looking more closely at Hitler, Mussolini, Stalin, and even his former colleague Oswald Mosley, leader of the new blackshirt fascist movement in Britain, Attlee retreated from his radicalism, and distanced himself from the League, and argued instead that the Labour Party must adhere to constitutional methods and stand forthright for democracy and against totalitarianism of either the left or right. He always supported the crown, and as Prime Minister was close to King George VI. Leader of the Opposition George Lansbury, a committed pacifist, resigned as the Leader of the Labour Party at the 1935 Party Conference on 8 October, after delegates voted in favour of sanctions against Italy for its aggression against Abyssinia. Lansbury had strongly opposed the policy, and felt unable to continue leading the party. Taking advantage of the disarray in the Labour Party, the Prime Minister Stanley Baldwin announced on 19 October that a general election would be held on 14 November. With no time for a leadership contest, the party agreed that Attlee should serve as interim leader, on the understanding that a leadership election would be held after the general election. Attlee therefore led Labour through the 1935 election, which saw the party stage a partial comeback from its disastrous 1931 performance, winning 38 per cent of the vote, the highest share Labour had won up to that point, and gaining over one hundred seats. Attlee stood in the subsequent leadership election, held soon afterward, where he was opposed by Herbert Morrison, who had just re-entered parliament in the recent election, and Arthur Greenwood: Morrison was seen as the favourite, but was distrusted by many sections of the party, especially the left wing. Arthur Greenwood meanwhile was a popular figure in the party; however, his leadership bid was severely hampered by his alcohol problem. Attlee was able to come across as a competent and unifying figure, particularly having already led the party through a general election. He went on to come first in both the first and second ballots, formally being elected Leader of the Labour Party on 3 December 1935. Throughout the 1920s and most of the 1930s, the Labour Party's official policy had been to oppose rearmament, instead supporting internationalism and collective security under the League of Nations. At the 1934 Labour Party Conference, Attlee declared that, "We have absolutely abandoned any idea of nationalist loyalty. We are deliberately putting a world order before our loyalty to our own country. We say we want to see put on the statute book something which will make our people citizens of the world before they are citizens of this country". During a debate on defence in Commons a year later, Attlee said "We are told (in the White Paper) that there is danger against which we have to guard ourselves. We do not think you can do it by national defence. We think you can only do it by moving forward to a new world. A world of law, the abolition of national armaments with a world force and a world economic system. I shall be told that that is quite impossible". 
Shortly after those comments, Adolf Hitler proclaimed that German rearmament offered no threat to world peace. Attlee responded the next day, noting that Hitler's speech, although containing unfavourable references to the Soviet Union, created "A chance to call a halt in the armaments race ... We do not think that our answer to Herr Hitler should be just rearmament. We are in an age of rearmaments, but we on this side cannot accept that position". Attlee played little part in the events that led up to the abdication of Edward VIII; despite Baldwin's threat to step down if Edward attempted to remain on the throne after marrying Wallis Simpson, Labour was widely accepted not to be a viable alternative government, owing to the National Government's overwhelming majority in the Commons. Attlee, along with Liberal leader Archibald Sinclair, was eventually consulted by Baldwin on 24 November 1936, and agreed with both Baldwin and Sinclair that Edward could not remain on the throne, firmly eliminating any prospect of an alternative government forming were Baldwin to resign. In April 1936, the Chancellor of the Exchequer, Neville Chamberlain, introduced a Budget which increased the amount spent on the armed forces; Attlee made a radio broadcast in opposition to it. In June 1936, the Conservative MP Duff Cooper called for an Anglo-French alliance against possible German aggression and called for all parties to support one. Attlee condemned this: "We say that any suggestion of an alliance of this kind—an alliance in which one country is bound to another, right or wrong, by some overwhelming necessity—is contrary to the spirit of the League of Nations, is contrary to the Covenant, is contrary to Locarno, is contrary to the obligations which this country has undertaken, and is contrary to the professed policy of this Government". At the Labour Party conference at Edinburgh in October, Attlee reiterated that "There can be no question of our supporting the Government in its rearmament policy". However, with the rising threat from Nazi Germany, and the ineffectiveness of the League of Nations, this policy eventually lost credibility. By 1937, Labour had jettisoned its pacifist position and came to support rearmament and oppose Neville Chamberlain's policy of appeasement. At the end of 1937, Attlee and a party of three Labour MPs visited Spain, where they met the British Battalion of the International Brigades fighting in the Spanish Civil War. One of the companies was named the "Major Attlee Company" in his honour. Attlee was supportive of the Republican government, and at the 1937 Labour conference moved the wider Labour Party towards opposing what he considered the "farce" of the Non-Intervention Committee organised by the British and French governments. In the House of Commons, Attlee stated: "I cannot understand the delusion that if Franco wins with Italian and German aid, he will immediately become independent. I think it is a ridiculous proposition." Dalton, the Labour Party's spokesman on foreign policy, also thought that Franco would ally with Germany and Italy. However, Franco's subsequent behaviour proved it was not such a ridiculous proposition. As Dalton later acknowledged, Franco skilfully maintained Spanish neutrality, whereas Hitler would have occupied Spain if Franco had lost the Civil War.
In 1938, Attlee opposed the Munich Agreement, in which Chamberlain negotiated with Hitler to give Germany the German-speaking parts of Czechoslovakia, the Sudetenland: We all feel relief that war has not come this time. Every one of us has been passing through days of anxiety; we cannot, however, feel that peace has been established, but that we have nothing but an armistice in a state of war. We have been unable to go in for care-free rejoicing. We have felt that we are in the midst of a tragedy. We have felt humiliation. This has not been a victory for reason and humanity. It has been a victory for brute force. At every stage of the proceedings there have been time limits laid down by the owner and ruler of armed force. The terms have not been terms negotiated; they have been terms laid down as ultimata. We have seen to-day a gallant, civilised and democratic people betrayed and handed over to a ruthless despotism. We have seen something more. We have seen the cause of democracy, which is, in our view, the cause of civilisation and humanity, receive a terrible defeat. ... The events of these last few days constitute one of the greatest diplomatic defeats that this country and France have ever sustained. There can be no doubt that it is a tremendous victory for Herr Hitler. Without firing a shot, by the mere display of military force, he has achieved a dominating position in Europe which Germany failed to win after four years of war. He has overturned the balance of power in Europe. He has destroyed the last fortress of democracy in Eastern Europe which stood in the way of his ambition. He has opened his way to the food, the oil and the resources which he requires in order to consolidate his military power, and he has successfully defeated and reduced to impotence the forces that might have stood against the rule of violence. and: The cause [of the crisis which we have undergone] was not the existence of minorities in Czechoslovakia; it was not that the position of the Sudeten Germans had become intolerable. It was not the wonderful principle of self-determination. It was because Herr Hitler had decided that the time was ripe for another step forward in his design to dominate Europe. ... The minorities question is no new one. It existed before the [First World] War and it existed after the War, because the problem of Germans in Czechoslovakia succeeded that of the Czechs in German Austria, just as the problem of Germans in the Tyrol succeeded that of the Italians in Trieste, and short of a drastic and entire reshuffling of these populations there is no possible solution to the problem of minorities in Europe except toleration. However, the new Czechoslovakian state did not provide equal rights to the Slovaks and Sudeten Germans, with the historian Arnold J. Toynbee already having noted that "for the Germans, Magyars and Poles, who account between them for more than one quarter of the whole population, the present regime in Czechoslovakia is not essentially different from the regimes in the surrounding countries". Anthony Eden in the Munich debate acknowledged that there had been "discrimination, even severe discrimination" against the Sudeten Germans. In 1937, Attlee wrote a book entitled The Labour Party in Perspective that sold fairly well in which he set out some of his views. He argued that there was no point in Labour compromising on its socialist principles in the belief that this would achieve electoral success. 
He wrote: "I find that the proposition often reduces itself to this – that if the Labour Party would drop its socialism and adopt a Liberal platform, many Liberals would be pleased to support it. I have heard it said more than once that if Labour would only drop its policy of nationalisation everyone would be pleased, and it would soon obtain a majority. I am convinced it would be fatal for the Labour Party." He also wrote that there was no point in "watering down Labour's socialist creed in order to attract new adherents who cannot accept the full socialist faith. On the contrary, I believe that it is only a clear and bold policy that will attract this support". In the late 1930s, Attlee sponsored a Jewish mother and her two children, enabling them to leave Germany in 1939 and move to the UK. On arriving in Britain, Attlee invited one of the children into his home in Stanmore, north-west London, where he stayed for several months. Deputy Prime Minister Attlee remained as Leader of the Opposition when the Second World War broke out in September 1939. The ensuing disastrous Norwegian campaign would result in a motion of no confidence in Neville Chamberlain. Although Chamberlain survived this, the reputation of his administration was so badly and publicly damaged that it became clear a coalition government would be necessary. Even if Attlee had personally been prepared to serve under Chamberlain in an emergency coalition government, he would never have been able to carry Labour with him. Consequently, Chamberlain tendered his resignation, and Labour and the Conservatives entered a coalition government led by Winston Churchill on 10 May 1940, with Attlee joining the Cabinet as Lord Privy Seal on 12 May. Attlee and Churchill quickly agreed that the War Cabinet would consist of three Conservatives (initially Churchill, Chamberlain and Lord Halifax) and two Labour members (initially himself and Arthur Greenwood) and that Labour should have slightly more than one third of the posts in the coalition government. Attlee and Greenwood played a vital role in supporting Churchill during a series of War Cabinet debates over whether or not to negotiate peace terms with Hitler following the Fall of France in May 1940; both supported Churchill and gave him the majority he needed in the War Cabinet to continue Britain's resistance. Only Attlee and Churchill remained in the War Cabinet from the formation of the Government of National Unity in May 1940 through to the election in May 1945. Attlee was initially the Lord Privy Seal, before becoming Britain's first ever Deputy Prime Minister in 1942, as well as becoming the Dominions Secretary and Lord President of the Council on 28 September 1943. Attlee himself played a generally low key but vital role in the wartime government, working behind the scenes and in committees to ensure the smooth operation of government. In the coalition government, three inter-connected committees effectively ran the country. Churchill chaired the first two, the War cabinet and the Defence Committee, with Attlee deputising for him in these, and answering for the government in Parliament when Churchill was absent. Attlee himself instituted, and later chaired the third body, the Lord President's Committee, which was responsible for overseeing domestic affairs. As Churchill was most concerned with overseeing the war effort, this arrangement suited both men. 
Attlee himself had largely been responsible for creating these arrangements with Churchill's backing, streamlining the machinery of government and abolishing many committees. He also acted as a conciliator in the government, smoothing over tensions which frequently arose between Labour and Conservative Ministers. Many Labour activists were baffled by the top leadership role for a man they regarded as having little charisma; Beatrice Webb wrote in her diary in early 1940: He looked and spoke like an insignificant elderly clerk, without distinction in the voice, manner or substance of his discourse. To realise that this little nonentity is the Parliamentary Leader of the Labour Party ... and presumably the future P.M. [Prime Minister] is pitiable". 1945 election Following the defeat of Nazi Germany and the end of the War in Europe in May 1945, Attlee and Churchill favoured the coalition government remaining in place until Japan had been defeated. However, Herbert Morrison made it clear that the Labour Party would not be willing to accept this, and Churchill was forced to tender his resignation as Prime Minister and call an immediate election. The war had set in motion profound social changes within Britain, and had ultimately led to a widespread popular desire for social reform. This mood was epitomised in the Beveridge Report of 1942, by the Liberal economist William Beveridge. The Report assumed that the maintenance of full employment would be the aim of post-war governments, and that this would provide the basis for the welfare state. Immediately upon its release, it sold hundreds of thousands of copies. All major parties committed themselves to fulfilling this aim, but most historians say that Attlee's Labour Party was seen by the electorate as the party most likely to follow it through. Labour campaigned on the theme of "Let Us Face the Future", positioning themselves as the party best placed to rebuild Britain following the war, and were widely viewed as having run a strong and positive campaign, while the Conservative campaign centred entirely around Churchill. Despite opinion polls indicating a strong Labour lead, opinion polls were then viewed as a novelty which had not proven their worth, and most commentators expected that Churchill's prestige and status as a "war hero" would ensure a comfortable Conservative victory. Before polling day, The Manchester Guardian surmised that "the chances of Labour sweeping the country and obtaining a clear majority ... are pretty remote". The News of the World predicted a working Conservative majority, while in Glasgow a pundit forecast the result as Conservatives 360, Labour 220, Others 60. Churchill, however, made some costly errors during the campaign. In particular, his suggestion during one radio broadcast that a future Labour Government would require "some form of a gestapo" to implement their policies was widely regarded as being in very bad taste, and massively backfired. When the results of the election were announced on 26 July, they came as a surprise to most, including Attlee himself. Labour had won power by a huge landslide, winning 47.7 per cent of the vote to the Conservatives' 36 per cent. This gave them 393 seats in the House of Commons, a working majority of 146. This was the first time in history that the Labour Party had won a majority in Parliament. 
When Attlee went to see King George VI at Buckingham Palace to be appointed Prime Minister, the notoriously laconic Attlee and the famously tongue-tied King stood in silence; Attlee finally volunteered the remark, "I've won the election". The King replied, "I know. I heard it on the Six O'Clock News".

Prime Minister

As Prime Minister, Attlee appointed Hugh Dalton as Chancellor of the Exchequer, Ernest Bevin as Foreign Secretary, and Herbert Morrison as Deputy Prime Minister, with overall responsibility for nationalisation. Additionally, Stafford Cripps was made President of the Board of Trade, Aneurin Bevan became Minister of Health, and Ellen Wilkinson, the only woman to serve in Attlee's cabinet, was appointed Minister of Education. The Attlee government proved itself to be a radical, reforming government. From 1945 to 1948, over 200 public Acts of Parliament were passed, with eight major pieces of legislation placed on the statute book in 1946 alone.

Domestic policy

Francis (1995) argues there was consensus, both in Labour's national executive committee and at party conferences, on a definition of socialism that stressed moral as well as material improvement. The Attlee government was committed to rebuilding British society as an ethical commonwealth, using public ownership and controls to abolish extremes of wealth and poverty. Labour's ideology contrasted sharply with the contemporary Conservative Party's defence of individualism, inherited privileges, and income inequality. On 5 July 1948, Clement Attlee replied to a letter dated 22 June from James Murray and ten other MPs who raised concerns about the West Indians who had arrived on board the Empire Windrush. As for the prime minister himself, he was not much focused on economic policy, letting others handle the issues.

Health

Attlee's Health Minister, Aneurin Bevan, fought hard against the general disapproval of the medical establishment, including the British Medical Association, by creating the National Health Service (NHS) in 1948. This was a publicly funded healthcare system which offered treatment free of charge for all at the point of use. Reflecting pent-up demand that had long existed for medical services, the NHS treated some 8.5 million dental patients and dispensed more than 5 million pairs of spectacles during its first year of operation.

Welfare

The government set about implementing the wartime plans of the Liberal William Beveridge for the creation of a "cradle to grave" welfare state, and set in place an entirely new system of social security. Among the most important pieces of legislation was the National Insurance Act 1946, under which people in work were required to pay a flat rate of National Insurance. In return, they (and the wives of male contributors) were eligible for a wide range of benefits, including pensions, sickness benefit, unemployment benefit, and funeral benefit. Various other pieces of legislation provided for child benefit and support for people with no other source of income. In 1949, unemployment, sickness and maternity benefits were exempted from tax.

Housing

The New Towns Act 1946 set up development corporations to construct new towns, while the Town and Country Planning Act 1947 instructed county councils to prepare development plans and also provided compulsory purchase powers. The Attlee government also extended the powers of local authorities to requisition houses and parts of houses, and made the acquisition of land less difficult than before.
The Housing (Scotland) Act 1949 provided grants of 75 per cent (87.5 per cent in the Highlands and Islands) towards modernisation costs payable by Treasury to local authorities. In 1949, local authorities were empowered to provide people suffering from poor health with public housing at subsidised rents. To assist home ownership, the limit on the amount of money that people could borrow from their local authority to purchase or build a home was raised from £800 to £1,500 in 1945, and to £5,000 in 1949. Under the National Assistance Act 1948 local authorities had a duty "to provide emergency temporary accommodation for families which become homeless through no fault of their own". A large housebuilding programme was carried out with the intention of providing millions of people with high-quality homes. The Housing (Financial and Miscellaneous Provisions) Act 1946 increased Treasury subsidies for the construction of local authority housing in England and Wales. Four out of five houses constructed under Labour were council properties built to more generous specifications than before the Second World War, and subsidies kept down council rents. Altogether, these policies provided public-sector housing with its biggest-ever boost up until that point, while low-wage earners particularly benefited from these developments. Although the Attlee government failed to meet its targets, primarily due to economic constraints, over a million new homes were built between 1945 and 1951 (a significant achievement under the circumstances) which ensured that decent, affordable housing was available to many low-income families for the first time ever. Women and children A number of reforms were embarked upon to improve conditions for women and children. In 1946, universal family allowances were introduced to provide financial support to households for raising children. These benefits had been legislated for the previous year by Churchill's Family Allowances Act 1945, and was the first measure pushed through parliament by Attlee's government. Conservatives would later criticise Labour for having been "too hasty" in introducing family allowances. A Married Women (Restraint Upon Anticipation) Act was passed in 1949 "to equalise, to render inoperative any restrictions upon anticipation or alienation attached to the enjoyment of property by a woman", while the Married Women (Maintenance) Act 1949 was enacted with the intention of improving the adequacy and duration of financial benefits for married women. The Criminal Law (Amendment) Act 1950 amended the Criminal Law Amendment Act 1885 to bring prostitutes within the law and safeguard them from abduction and abuse. The Criminal Justice Act 1948 restricted imprisonment for juveniles and brought improvements to the probation and remand centres systems, while the passage of the Justices of the Peace Act 1949 led to extensive reforms of magistrates' courts. The Attlee government also abolished the marriage bar in the Civil Service, thereby enabling married women to work in that institution. In 1946 the government set up a National Institute of Houseworkers as a means of providing a social democratic variety of domestic service. By late 1946, agreed standards of training were established, which was followed by the opening of a training headquarters and the opening of an additional nine training centres in Wales, Scotland, and then throughout Great Britain. 
The National Health Service Act 1946 indicated that domestic help should be provided for households where that help is required "owing to the presence of any person who is ill, lying-in, an expectant mother, mentally defective, aged or a child not over compulsory school age". 'Home help' therefore included the provision of home-helps for nursing and expectant mothers and for mothers with children under the age of five, and by 1952 some 20,000 women were engaged in this service. Planning and development Development rights were nationalised while the government attempted to take all development profits for the State. Strong planning authorities were set up to control land use, and issued manuals of guidance which stressed the importance of safeguarding agricultural land. A chain of regional offices was set up within its planning ministry to provide a strong lead in regional development policies. Comprehensive Development Areas (CDAs), a designation under the Town and Country Planning Act 1947, allowed local authorities to acquire property in the designated areas using powers of compulsory purchase in order to re-plan and develop urban areas suffering from urban blight or war damage. Workers' rights Various measures were carried out to improve conditions in the workplace. Entitlement to sick leave was greatly extended, and sick pay schemes were introduced for local authority administrative, professional and technical workers in 1946 and for various categories of manual workers in 1948. Worker's compensation was also significantly improved. The Fair Wages Resolution of 1946 required any contractor working on a public project to at least match the pay rates and other employment conditions set in the appropriate collective agreement. In 1946, Purchase Tax was removed completely from kitchen fittings and crockery, while the rate was reduced on various gardening items. The Fire Services Act 1947 introduced a new pension scheme for fire-fighters, while the Electricity Act 1947 introduced better retirement benefits for workers in that industry. A Workers' Compensation (Supplementation) Act was passed in 1948 that introduced benefits for workers with certain asbestos-related diseases which had occurred before 1948. The Merchant Shipping Act 1948 and the Merchant Shipping (Safety Convention) Act 1949 were passed to improve conditions for seamen. The Shops Act 1950 consolidated previous legislation which provided that no one could be employed in a shop for more than six hours without having a break for at least 20 minutes. The legislation also required a lunch break of at least 45 minutes for anyone who worked between 11:30 am and 2:30 pm and a half-hour tea break for anyone working between 4 pm and 7 pm. The government also strengthened a Fair Wages Resolution, with a clause that required all employers getting government contracts to recognise the rights of their workers to join trade unions. The Trade Disputes and Trade Unions Act 1927 was repealed, and the Dock Labour Scheme was introduced in 1947 to put an end to the casual system of hiring labour in the docks. This scheme gave registered dockers the legal right to minimum work and decent conditions. Through the National Dock Labour Board (on which trade unions and employers had equal representation) the unions acquired control over recruitment and dismissal. Registered dockers laid off by employers within the Scheme had the right either to be taken on by another, or to generous compensation. 
All dockers were registered under the Dock Labour Scheme, giving them a legal right to minimum work, holidays and sick pay. Wages for members of the police force were significantly increased. The introduction of a Miner's Charter in 1946 instituted a five-day work week for miners and a standardised day wage structure, and in 1948 a Colliery Workers Supplementary Scheme was approved, providing supplementary allowances to disabled coal-workers and their dependants. In 1948, a pension scheme was set up to provide pension benefits for employees of the new NHS, as well as their dependants. Under the Coal Industry Nationalisation (Superannuation) Regulations 1950, a pension scheme for mineworkers was established. Improvements were also made in farmworkers' wages, and the Agricultural Wages Board in 1948 not only safeguarded wage levels, but also ensured that workers were provided with accommodation. A number of regulations aimed at safeguarding the health and safety of people at work were also introduced during Attlee's time in office. Regulations issued in February 1946 applied to factories involved with "manufacturing briquettes or blocks of fuel consisting of coal, coal dust, coke or slurry with pitch as a binding substance", and concerned "dust and ventilation, washing facilities and clothing accommodation, medical supervision and examination, skin and eye protection and messrooms". Nationalisation Attlee's government also carried out their manifesto commitment for nationalisation of basic industries and public utilities. The Bank of England and civil aviation were nationalised in 1946. Coal mining, the railways, road haulage, canals and Cable and Wireless were nationalised in 1947, and electricity and gas followed in 1948. The steel industry was nationalised in 1951. By 1951 about 20 per cent of the British economy had been taken into public ownership. Nationalisation failed to provide workers with a greater say in the running of the industries in which they worked. It did, however, bring about significant material gains for workers in the form of higher wages, reduced working hours, and improvements in working conditions, especially in regards to safety. As historian Eric Shaw noted of the years following nationalisation, the electricity and gas supply companies became "impressive models of public enterprise" in terms of efficiency, and the National Coal Board was not only profitable, but working conditions for miners had significantly improved as well. Within a few years of nationalisation, a number of progressive measures had been carried out which did much to improve conditions in the mines, including better pay, a five-day working week, a national safety scheme (with proper standards at all the collieries), a ban on boys under the age of 16 going underground, the introduction of training for newcomers before going down to the coalface, and the making of pithead baths into a standard facility. The newly established National Coal Board offered sick pay and holiday pay to miners. As noted by Martin Francis: Union leaders saw nationalisation as a means to pursue a more advantageous position within a framework of continued conflict, rather than as an opportunity to replace the old adversarial form of industrial relations. 
Moreover, most workers in nationalised industries exhibited an essentially instrumentalist attitude, favouring public ownership because it secured job security and improved wages rather than because it promised the creation of a new set of socialist relationships in the workplace.

Agriculture

The Attlee government placed strong emphasis on improving the quality of life in rural areas, benefiting both farmers and other consumers. Security of tenure for farmers was introduced, while consumers were protected by food subsidies and the redistributive effects of deficiency payments. Between 1945 and 1951, the quality of rural life was improved by better gas, electricity, and water services, as well as by improved leisure and public amenities. In addition, the 1947 Transport Act improved provision of rural bus services, while the Agriculture Act 1947 established a more generous subsidy system for farmers. Legislation was also passed in 1947 and 1948 which established a permanent Agricultural Wages Board to fix minimum wages for agricultural workers. Attlee's government made it possible for farm workers to borrow up to 90 per cent of the cost of building their own houses, and to receive a subsidy of £15 a year for 40 years towards that cost. Grants were also made to meet up to half the cost of supplying water to farm buildings and fields; the government met half the cost of bracken eradication and lime spreading; and grants were paid for bringing hill farming land into use that had previously been considered unfit for farming purposes. In 1946, the National Agricultural Advisory Service was set up to supply agricultural advice and information. The Hill Farming Act 1946 introduced for upland areas a system of grants for buildings, land improvement, and infrastructural improvements such as roads and electrification. The act also continued a system of headage payments for hill sheep and cattle that had been introduced during the war. The Agricultural Holdings Act 1948 enabled (in effect) tenant farmers to have lifelong tenancies and made provision for compensation in the event of cessations of tenancies. In addition, the Livestock Rearing Act 1951 extended the provisions of the 1946 Hill Farming Act to the upland store cattle and sheep sector. At a time of world food shortages, it was vital that farmers produced the maximum possible quantities. The government encouraged farmers via subsidies for modernisation, while the National Agricultural Advisory Service provided expertise and price guarantees. As a result of the Attlee government's initiatives in agriculture, there was a 20 per cent increase in output between 1947 and 1952, while Britain developed one of the most mechanised and efficient farming industries in the world.

Education

The Attlee government ensured that the provisions of the Education Act 1944 were fully implemented, with free secondary education becoming a right for the first time. Fees in state grammar schools were eliminated, while new, modern secondary schools were constructed. The school leaving age was raised to 15 in 1947, an accomplishment brought to fruition with the help of initiatives such as the HORSA ("Huts Operation for Raising the School-leaving Age") scheme and the S.F.O.R.S.A. (furniture) scheme. University scholarships were introduced to ensure that no one who was qualified "should be deprived of a university education for financial reasons", while a large school building programme was organised.
A rapid increase in the number of trained teachers took place, and the number of new school places was increased. Increased Treasury funds were made available for education, particularly for upgrading school buildings suffering from years of neglect and war damage. Prefabricated classrooms were built and 928 new primary schools were constructed between 1945 and 1950. The provision of free school meals was expanded, and opportunities for university entrants were increased. State scholarships to universities were increased, and the government adopted a policy of supplementing university scholarships awards to a level sufficient to cover fees plus maintenance. Many thousands of ex-servicemen were assisted to go through college who could never have contemplated it before the war. Free milk was also made available to all schoolchildren for the first time. In addition, spending on technical education rose, and the number of nursery schools was increased. Salaries for teachers were also improved, and funds were allocated towards improving existing schools. In 1947 the Arts Council of Great Britain was set up to encourage the arts. The Ministry of Education was established under the 1944 Act, and free County Colleges were set up for the compulsory part-time instruction of teenagers between the ages of 15 and 18 who were not in full-time education. An Emergency Training Scheme was also introduced which turned out an extra 25,000 teachers in 1945–1951. In 1947, Regional Advisory Councils were set up to bring together industry and education to find out the needs of young workers "and advise on the provision required, and to secure reasonable economy of provision". That same year, thirteen Area Training Organisations were set up in England and one in Wales to coordinate teacher training. Attlee's government, however, failed to introduce the comprehensive education for which many socialists had hoped. This reform was eventually carried out by Harold Wilson's government. During its time in office, the Attlee government increased spending on education by over 50 per cent, from £6.5 billion to £10 billion. Economy The most significant problem facing Attlee and his ministers remained the economy, as the war effort had left Britain nearly bankrupt. The war had cost Britain about a quarter of her national wealth. Overseas investments had been used up to pay for the war. The transition to a peacetime economy, and the maintaining of strategic military commitments abroad led to continuous and severe problems with the balance of trade. This resulted in strict rationing of food and other essential goods continuing in the post war period to force a reduction in consumption in an effort to limit imports, boost exports, and stabilise the Pound Sterling so that Britain could trade its way out of its financial state. The abrupt end of the American Lend-Lease programme in August 1945 almost caused a crisis. Some relief was provided by the Anglo-American loan, negotiated in December 1945. The conditions attached to the loan included making the pound fully convertible to the US dollar. When this was introduced in July 1947, it led to a currency crisis and convertibility had to be suspended after just five weeks. The UK benefited from the American Marshall Aid program in 1948, and the economic situation improved significantly. Another balance of payments crisis in 1949 forced Chancellor of the Exchequer, Stafford Cripps, into devaluation of the pound. 
Despite these problems, one of the main achievements of Attlee's government was the maintenance of near full employment. The government maintained most of the wartime controls over the economy, including control over the allocation of materials and manpower, and unemployment rarely rose above 500,000, or 3 per cent of the total workforce. Labour shortages proved a more frequent problem. The inflation rate was also kept low during his term. The rate of unemployment rarely rose above 2 per cent during Attlee's time in office, whilst there was no hard-core of long-term unemployed. Both production and productivity rose as a result of new equipment, while the average working week was shortened. The government was less successful in housing, which was the responsibility of Aneurin Bevan. The government had a target to build 400,000 new houses a year to replace those which had been destroyed in the war, but shortages of materials and manpower meant that less than half this number were built. Nevertheless, millions of people were rehoused as a result of the Attlee government's housing policies. Between August 1945 and December 1951, 1,016,349 new homes were completed in England, Scotland, and Wales. When the Attlee government was voted out of office in 1951, the economy had been improved compared to 1945. The period from 1946 to 1951 saw continuous full employment and steadily rising living standards, which increased by about 10 per cent each year. During that same period, the economy grew by 3 per cent a year, and by 1951 the UK had "the best economic performance in Europe, while output per person was increasing faster than in the United States". Careful planning after 1945 also ensured that demobilisation was carried out without having a negative impact upon economic recovery, and that unemployment stayed at very low levels. In addition, the number of motor cars on the roads rose from 3 million to 5 million from 1945 to 1951, and seaside holidays were taken by far more people than ever before. A Monopolies and Restrictive Practices (Inquiry and Control) Act was passed in 1948, which allowed for investigations of restrictive practices and monopolies. Energy 1947 proved a particularly difficult year for the government; an exceptionally cold winter that year caused coal mines to freeze and cease production, creating widespread power cuts and food shortages. The Minister of Fuel and Power, Emanuel Shinwell was widely blamed for failing to ensure adequate coal stocks, and soon resigned from his post. The Conservatives capitalised on the crisis with the slogan 'Starve with Strachey and shiver with Shinwell' (referring to the Minister of Food John Strachey). The crisis led to an unsuccessful plot by Hugh Dalton to replace Attlee as Prime Minister with Ernest Bevin. Later that year Stafford Cripps tried to persuade Attlee to stand aside for Bevin. These plots petered out after Bevin refused to cooperate. Later that year, Dalton resigned as Chancellor after inadvertently leaking details of the budget to a journalist. He was replaced by Cripps. Foreign policy Europe and the Cold War In foreign affairs, the Attlee government was concerned with four main issues; post-war Europe, the onset of the Cold War, the establishment of the United Nations, and decolonisation. The first two were closely related, and Attlee was assisted by Foreign Secretary Ernest Bevin. Attlee also attended the later stages of the Potsdam Conference, where he negotiated with President Harry S. Truman and Joseph Stalin. 
In the immediate aftermath of the war, the Government faced the challenge of managing relations with Britain's former war-time ally, Stalin and the Soviet Union. Ernest Bevin was a passionate anti-communist, based largely on his experience of fighting communist influence in the trade union movement. Bevin's initial approach to the USSR as Foreign Secretary was "wary and suspicious, but not automatically hostile". Attlee himself sought warm relations with Stalin. He put his trust in the United Nations, rejected notions that the Soviet Union was bent on world conquest, and warned that treating Moscow as an enemy would turn it into one. This put Attlee at sword's point with his foreign minister, the Foreign Office, and the military who all saw the Soviets as a growing threat to Britain's role in the Middle East. Suddenly in January 1947, Attlee reversed his position and agreed with Bevin on a hard-line anti-Soviet policy. In an early "good-will" gesture that was later heavily criticised, the Attlee government allowed the Soviets to purchase, under the terms of a 1946 UK-USSR Trade agreement, a total of 25 Rolls-Royce Nene jet engines in September 1947 and March 1948. The agreement included an agreement not to use them for military purposes. The price was fixed under a commercial contract; a total of 55 jet engines were sold to the USSR in 1947. However, the Cold War intensified during this period and the Soviets, who at the time were well behind the West in jet technology, reverse-engineered the Nene and installed their own version in the MiG-15 interceptor. This was used to good effect against US-UK forces in the subsequent Korean War, as well as in several later MiG models. After Stalin took political control of most of Eastern Europe, and began to subvert other governments in the Balkans, Attlee's and Bevin's worst fears of Soviet intentions were realised. The Attlee government then became instrumental in the creation of the successful NATO defence alliance to protect Western Europe against any Soviet expansion. In a crucial contribution to the economic stability of post-war Europe, Attlee's Cabinet was instrumental in promoting the American Marshall Plan for the economic recovery of Europe. He called it one of the "most bold, enlightened and good-natured acts in the history of nations". A group of Labour MPs, organised under the banner of "Keep Left", urged the government to steer a middle way between the two emerging superpowers, and advocated the creation of a "third force" of European powers to stand between the US and USSR. However, deteriorating relations between Britain and the USSR, as well as Britain's economic reliance on America following the Marshall Plan, steered policy towards supporting the US. In January 1947, fear of both Soviet and American nuclear intentions led to a secret meeting of the Cabinet, where the decision was made to press ahead with the development of Britain's independent nuclear deterrent, an issue which later caused a split in the Labour Party. Britain's first successful nuclear test, however, did not occur until 1952, one year after Attlee had left office. The London dock strike of July 1949, led by Communists, was suppressed when the Attlee government sent in 13,000 Army troops and passed special legislation to promptly end the strike. 
His response reveals Attlee's growing concern that Soviet expansionism, supported by the British Communist Party, was a genuine threat to national security, and that the docks were highly vulnerable to sabotage ordered by Moscow. He noted that the strike had been called not over local grievances but to help communist unions that were on strike in Canada. Attlee agreed with MI5 that he faced "a very present menace".

Decolonisation

Decolonisation was never a major election issue, but Attlee gave the matter a great deal of attention and was the chief leader in beginning the process of decolonisation of the British Empire.

China and Hong Kong

In August 1948, the Chinese Communists' victories caused Attlee to begin preparing for a Communist takeover of China. His government kept open consulates in Communist-controlled areas and rejected the Chinese Nationalists' requests that British citizens assist in the defence of Shanghai. By December, the government concluded that although British property in China would likely be nationalised, British traders would benefit in the long run from a stable, industrialising Communist China. Retaining Hong Kong was especially important to him; although the Chinese Communists promised not to interfere with its rule, Britain reinforced the Hong Kong Garrison during 1949. When the victorious Chinese Communist government declared on 1 October 1949 that it would exchange diplomats with any country that ended relations with the Chinese Nationalists, Britain became the first western country to formally recognise the People's Republic of China in January 1950. In 1954, a Labour Party delegation including Attlee visited China at the invitation of then Foreign Minister Zhou Enlai. Attlee became the first high-ranking western politician to meet Mao Zedong.

India and Pakistan

Attlee orchestrated the granting of independence to India and Pakistan in 1947. Attlee had been a member of the Indian Statutory Commission (otherwise known as the Simon Commission) in 1928–1934. He became the Labour Party expert on India and by 1934 was committed to granting India the same independent dominion status that Canada, Australia, New Zealand and South Africa had recently been given. He faced strong resistance from the die-hard Conservative imperialists, led by Churchill, who opposed both independence and efforts led by Prime Minister Stanley Baldwin to set up a system of limited local control by Indians themselves. Attlee and the Labour leadership were sympathetic to the Congress movement led by Mahatma Gandhi and Jawaharlal Nehru, and to the Pakistan movement led by Muhammad Ali Jinnah. During the Second World War, Attlee was in charge of Indian affairs. He set up the Cripps Mission in 1942, which tried and failed to bring the factions together. When the Congress called for passive resistance in the "Quit India" movement of 1942–1945, it was Attlee who ordered the arrest and internment of tens of thousands of Congress leaders for the duration, crushing the revolt. Labour's election manifesto in 1945 called for "the advancement of India to responsible self-government". In 1942 the British Raj had tried to enlist all major political parties in support of the war effort. Congress, led by Nehru and Gandhi, demanded immediate independence and full control by Congress of all of India. That demand was rejected by the British, and Congress opposed the war effort with its "Quit India" campaign. The Raj immediately responded in 1942 by imprisoning the major national, regional and local Congress leaders for the duration.
Attlee did not object. By contrast, the Muslim League led by Muhammad Ali Jinnah, and also the Sikh community, strongly supported the war effort. They greatly enlarged their membership and won favour from London for their decision. Attlee retained a fondness for Congress and, until 1946, accepted their thesis that they were a non-religious party that accepted Hindus, Muslims, Sikhs, and everyone else. The Muslim League insisted that it was the only true representative of all of the Muslims of India, and by 1946 Attlee had come to agree with them. With violence escalating in India after the war, but with British financial power at a low ebb, large-scale military involvement was impossible. Viceroy Wavell said he needed a further seven army divisions to prevent communal violence if independence negotiations failed. No divisions were available; independence was the only option. Given the demands of the Muslim League, independence implied a partition that split heavily Muslim Pakistan off from the main portion of India. After becoming Prime Minister in 1945, Attlee originally planned to give India Dominion status in 1948. He suggested in his memoirs that "traditional" colonial rule in Asia was no longer viable. He said that he expected it to meet renewed opposition after the war, both from local national movements and from the United States. The prime minister's biographer John Bew says that Attlee hoped for a transition to a multilateral world order and a Commonwealth, and that the old British empire "should not be supported beyond its natural lifespan" and instead be ended "on the right note." His Chancellor of the Exchequer, Hugh Dalton, meanwhile feared that post-war Britain could no longer afford to garrison its empire. Ultimately the Labour government gave full independence to India and Pakistan in 1947. Historian Andrew Roberts says the independence of India was a "national humiliation" but it was necessitated by urgent financial, administrative, strategic and political needs. Churchill in 1940–1945 had tightened the hold on India and imprisoned the Congress leadership, with Attlee's approval. Labour had looked forward to making it a fully independent dominion like Canada or Australia. Many of the Congress leaders in India had studied in England, and were highly regarded as fellow idealistic socialists by Labour leaders. Attlee was the Labour expert on India and took special charge of decolonisation. Attlee found that Churchill's viceroy, Field Marshal Wavell, was too imperialistic, too keen on military solutions, and too neglectful of Indian political alignments. The new Viceroy was Lord Mountbatten, the dashing war hero and a cousin of the King. The drawing of the boundary between the newly created states of Pakistan and India involved the widespread resettlement of millions of Muslims and Hindus (and many Sikhs). Extreme violence ensued when Punjab and Bengal provinces were split. Historian Yasmin Khan estimates that between a half-million and a million men, women and children were killed. Gandhi himself was assassinated by a Hindutva activist in January 1948. Attlee described Gandhi as the "greatest citizen" of India. Attlee also sponsored the peaceful transition to independence in 1948 of Burma (Myanmar) and Ceylon (Sri Lanka). Palestine One of the most urgent problems facing Attlee concerned the future of the British mandate in Palestine, which had become too troublesome and expensive to handle. 
British policies in Palestine were perceived by the Zionist movement and the Truman administration to be pro-Arab and anti-Jewish, and Britain soon found itself unable to maintain public order in the face of a Jewish insurgency and a civil war. During this period, 70,000 Holocaust survivors attempted to reach Palestine as part of the Aliyah Bet refugee movement. Attlee's government tried several tactics to prevent the migration. Five ships were bombed by the Secret Intelligence Service (though with no casualties) with a fake Palestinian group created to take responsibility. The navy apprehended over 50,000 refugees en route, interning them in detention camps in Cyprus. Conditions in the camps were harsh and faced global criticism. Later, the refugee ship Exodus 1947 would be sent back to mainland Europe, instead of being taken to Cyprus. In response to the increasingly unpopular mandate, Attlee ordered the evacuation of all British military personnel and handed over the issue to the United Nations, a decision which was widely supported by the general public in Britain. With the establishment of the state of Israel in 1948, the camps in Cyprus were eventually closed, with their former occupants finally completing their journey to the new country. African colonies The government's policies with regard to the other colonies, particularly those in Africa, focused on keeping them as strategic Cold War assets while modernising their economies. The Labour Party had long attracted aspiring leaders from Africa and had developed elaborate plans before the war. Implementing them overnight with an empty treasury proved too challenging. A major military base was built in Kenya, and the African colonies came under an unprecedented degree of direct control from London. Development schemes were implemented to help solve Britain's post-war balance of payments crisis and raise African living standards. This "new colonialism" worked slowly, and had failures such as the Tanganyika groundnut scheme. 1950 election The 1950 election gave Labour a massively reduced majority of five seats compared to the triple-digit majority of 1945. Although re-elected, the result was seen by Attlee as very disappointing, and was widely attributed to the effects of post-war austerity denting Labour's appeal to middle-class voters. With such a small majority leaving him dependent on a small number of MPs to govern, Attlee's second term was much tamer than his first. Some major reforms were nevertheless passed, particularly regarding industry in urban areas and regulations to limit air and water pollution. 1951 election By 1951, the Attlee government was exhausted, with several of its most senior ministers ailing or ageing, and with a lack of new ideas. Attlee's record for settling internal differences in the Labour Party fell in April 1951, when there was a damaging split over an austerity Budget brought in by the Chancellor, Hugh Gaitskell, to pay for the cost of Britain's participation in the Korean War. Aneurin Bevan resigned to protest against the new charges for "teeth and spectacles" in the National Health Service introduced by that Budget, and was joined in this action by several senior ministers, including the future Prime Minister Harold Wilson, then the President of the Board of Trade. Thus escalated a battle between the left and right wings of the Party that continues today. 
Finding it increasingly impossible to govern, Attlee called a snap election in October 1951, in the hope of achieving a more workable majority and regaining authority. The gamble failed: Labour narrowly lost to the Conservative Party, despite winning considerably more votes (achieving the largest Labour vote in electoral history). Attlee tendered his resignation as Prime Minister the following day, after six years and three months in office. Return to opposition Following the defeat in 1951, Attlee continued to lead the party as Leader of the Opposition. His last four years as leader were, however, widely seen as one of the Labour Party's weaker periods. The period was dominated by infighting between the Labour Party's right wing, led by Hugh Gaitskell, and its left, led by Aneurin Bevan. Many Labour MPs felt that Attlee should have retired following the 1951 election and allowed a younger man to lead the party. Bevan openly called for him to stand down in the summer of 1954. One of his main reasons for staying on as leader was to frustrate the leadership ambitions of Herbert Morrison, whom Attlee disliked for both political and personal reasons. At one time, Attlee had favoured Aneurin Bevan to succeed him as leader, but this became problematic after Bevan almost irrevocably split the party. In an interview with the News Chronicle columnist Percy Cudlipp in mid-September 1955, Attlee made clear his own thinking together with his preference for the leadership succession. Attlee, now aged 72, contested the 1955 general election against Anthony Eden, which saw Labour lose 18 seats and the Conservatives increase their majority. He retired as Leader of the Labour Party on 7 December 1955, having led the party for twenty years, and on 14 December Hugh Gaitskell was elected as his successor. Retirement He subsequently retired from the House of Commons and was elevated to the peerage as Earl Attlee and Viscount Prestwood on 16 December 1955, taking his seat in the House of Lords on 25 January. He believed Eden had been forced into taking a strong stand on the Suez Crisis by his backbenchers. In 1958, Attlee, along with numerous notables, established the Homosexual Law Reform Society, which campaigned for the decriminalisation of homosexual acts in private by consenting adults, a reform that was voted through Parliament nine years later. In May 1961, he travelled to Washington, D.C., to meet with President Kennedy. In 1962, he spoke twice in the House of Lords against the British government's application for the UK to join the European Communities ("Common Market"). In his second speech, delivered in November, Attlee claimed that Britain had a separate parliamentary tradition from the Continental European countries that comprised the EC. He also claimed that if Britain became a member, EC rules would prevent the British government from planning the economy and that Britain's traditional policy had been outward-looking rather than Continental. He attended Winston Churchill's funeral in January 1965. He was frail by that time, and had to remain seated in the freezing cold as the coffin was carried, having tired himself out by standing at the rehearsal the previous day. He lived to see the Labour Party return to power under Harold Wilson in 1964, and also to see his old constituency of Walthamstow West fall to the Conservatives in a by-election in September 1967. Death Attlee died peacefully in his sleep of pneumonia, at the age of 84, at Westminster Hospital on 8 October 1967. 
Two thousand people attended his funeral in November, including the then-Prime Minister Harold Wilson and the Duke of Kent, representing the Queen. He was cremated and his ashes were buried at Westminster Abbey. Upon his death, the title passed to his son Martin Richard Attlee, 2nd Earl Attlee (1927–1991), who defected from Labour to the SDP in 1981. It is now held by Clement Attlee's grandson John Richard Attlee, 3rd Earl Attlee. The third earl (a member of the Conservative Party) retained his seat in the Lords as one of the hereditary peers to remain under an amendment to Labour's House of Lords Act 1999. Attlee's estate was sworn for probate purposes at a value of £7,295, a relatively modest sum for so prominent a figure, and only a fraction of the £75,394 in his father's estate when he died in 1908. Legacy The quotation about Attlee, "A modest man, but then he has so much to be modest about", is commonly ascribed to Churchill—though Churchill denied saying it, and respected Attlee's service in the War cabinet. Attlee's modesty and quiet manner hid a great deal that has only come to light with historical reappraisal. Attlee himself is said to have responded to critics with a limerick: "There were few who thought him a starter, Many who thought themselves smarter. But he ended PM, CH and OM, an Earl and a Knight of the Garter". The journalist and broadcaster Anthony Howard called him "the greatest Prime Minister of the 20th century". His leadership style of consensual government, acting as a chairman rather than a president, won him much praise from historians and politicians alike. Christopher Soames, the British Ambassador to France during the Conservative government of Edward Heath and cabinet minister under Margaret Thatcher, remarked that "Mrs Thatcher was not really running a team. Every time you have a Prime Minister who wants to make all the decisions, it mainly leads to bad results. Attlee didn't. That's why he was so damn good". Thatcher herself wrote in her 1995 memoirs, which charted her life from her beginnings in Grantham to her victory at the 1979 general election, that she admired Attlee, writing: "Of Clement Attlee, however, I was an admirer. He was a serious man and a patriot. Quite contrary to the general tendency of politicians in the 1990s, he was all substance and no show". Attlee's government presided over the successful transition from a wartime economy to peacetime, tackling problems of demobilisation, shortages of foreign currency, and adverse deficits in trade balances and government expenditure. Further domestic policies that he brought about included the creation of the National Health Service and the post-war Welfare state, which became key to the reconstruction of post-war Britain. Attlee and his ministers did much to transform the UK into a more prosperous and egalitarian society during their time in office with reductions in poverty and a rise in the general economic security of the population. In foreign affairs, he did much to assist with the post-war economic recovery of Europe. He proved a loyal ally of the US at the onset of the Cold War. Due to his style of leadership, it was not he, but Ernest Bevin who masterminded foreign policy. It was Attlee's government that decided Britain should have an independent nuclear weapons programme, and work on it began in 1947. Bevin, Attlee's Foreign Secretary, famously stated that "We've got to have it [nuclear weapons] and it's got to have a bloody Union Jack on it". 
The first operational British nuclear bomb was not detonated until October 1952, about one year after Attlee had left office. Independent British atomic research was prompted partly by the US McMahon Act, which nullified wartime expectations of postwar US–UK collaboration in nuclear research, and prohibited Americans from communicating nuclear technology even to allied countries. British atomic bomb research was kept secret even from some members of Attlee's own cabinet, whose loyalty or discretion seemed uncertain. Although a socialist, Attlee still believed in the British Empire of his youth. He thought of it as an institution that was a power for good in the world. Nevertheless, he saw that a large part of it needed to be self-governing. Using the Dominions of Canada, Australia, and New Zealand as a model, he continued the transformation of the empire into the modern-day British Commonwealth. His greatest achievement, surpassing many of these, was perhaps the establishment of a political and economic consensus about the governance of Britain that all three major parties subscribed to for three decades, fixing the arena of political discourse until the late-1970s. In 2004, he was voted the most successful British Prime Minister of the 20th century by a poll of 139 academics organised by Ipsos MORI. A blue plaque unveiled in 1979 commemorates Attlee at 17 Monkhams Avenue, in Woodford Green in the London borough of Redbridge. Attlee was elected a Fellow of the Royal Society in 1947. Attlee was awarded an Honorary Fellowship of Queen Mary College on 15 December 1948. In the 1960s a new suburb near Curepipe in British Mauritius was given the name Cité Atlee in his honour. Statues of Clement Attlee On 30 November 1988, a bronze statue of Clement Attlee was unveiled by Harold Wilson (the next Labour Prime Minister after Attlee) outside Limehouse Library in Attlee's former constituency. By then Wilson was the last surviving member of Attlee's cabinet, and the unveiling of the statue would be one of the last public appearances by Wilson, who was by that point in the early stages of Alzheimer's disease; he died at the age of 79 in May 1995. Limehouse Library was closed in 2003, after which the statue was vandalised. The council surrounded it with protective hoarding for four years, before eventually removing it for repair and recasting in 2009. The restored statue was unveiled by Peter Mandelson in April 2011, in its new position less than a mile away at the Queen Mary University of London's Mile End campus. There is also a statue of Clement Attlee in the Houses of Parliament that was erected, instead of a bust, by parliamentary vote in 1979. The sculptor was Ivor Roberts-Jones. Honours Personal life Attlee met Violet Millar while on a long trip with friends to Italy in 1921. They fell in love and were soon engaged, marrying at Christ Church, Hampstead, on 10 January 1922. It would come to be a devoted marriage, with Attlee providing protection and Violet providing a home that was an escape for Attlee from political turmoil. She died in 1964. They had four children: Lady Janet Helen (1923–2019), she married the scientist Harold Shipton (1920–2007) at Ellesborough Parish Church in 1947. Lady Felicity Ann (1925–2007), married the business executive John Keith Harwood (1926–1989) at Little Hampden in 1955 Martin Richard, Viscount Prestwood, later 2nd Earl Attlee (1927–1991) Lady Alison Elizabeth (1930–2016), married Richard Davis at Great Missenden in 1952. 
Arms Religious views Although one of his brothers became a clergyman and one of his sisters a missionary, Attlee himself is usually regarded as an agnostic. In an interview he described himself as "incapable of religious feeling", saying that he believed in "the ethics of Christianity" but not "the mumbo-jumbo". When asked whether he was an agnostic, Attlee replied "I don't know". Cultural depictions Major legislation enacted during the Attlee government Housing (Financial and Miscellaneous Provisions) Act 1946 Coal Industry Nationalisation Act 1946 Furnished Houses (Rent Control) Act 1946 National Health Service Act 1946 National Insurance Act 1946 National Insurance (Industrial Injuries) Act 1946 New Towns Act 1946 Trade Disputes and Trade Unions Act 1946 Hill Farming Act 1946 Agriculture Act 1947 Pensions (Increase) Act 1947 Electricity Act 1947 Town and Country Planning Act 1947 Transport Act 1947 National Assistance Act 1948 Children Act 1948 Factories Act 1948 Education (Miscellaneous Provisions) Act 1948 Agricultural Holdings Act 1948 British Nationality Act 1948 Employment and Training Act 1948 Nurseries and Child-Minders Regulation Act 1948 Law Reform (Personal Injuries) Act 1948 Local Government Act 1948 Representation of the People Act 1948 Housing Act 1949 Superannuation Act 1949 House of Commons (Redistribution of Seats) Act 1949 Landlord and Tenant (Rent Control) Act 1949 Lands Tribunal Act 1949 Legal Aid and Advice Act 1949 Adoption of Children Act 1949 Marriage Act 1949 National Parks and Access to the Countryside Act 1949 Parliament Act 1949 Representation of the People Act 1949 Distribution of Industry Act 1950 Coal-Mining (Subsidence) Act 1950 Allotments Act 1950 Workmen's Compensation (Supplementation) Act 1951 See also Ethical socialism Information Research Department Malayan Emergency Briggs Plan New village Notes References Bibliography Further reading Biographical Beckett, Francis. Clem Attlee (1998) – updated and revised and expanded edition, Clem Attlee: Labour's Great Reformer (2015) Bew, John. Citizen Clem: A Biography of Attlee, (London: 2016, British edition); Clement Attlee: The Man Who Made Modern Britain (New York: Oxford University Press, 2017, U.S. edition) Burridge, Trevor. Clement Attlee: A Political Biography (1985), scholarly Cohen, David. Churchill & Attlee: The Unlikely Allies who Won the War (Biteback Publishing, 2018), popular Crowcroft, Robert. Attlee's War: World War II and the Making of a Labour Leader (IB Tauris, 2011) Harris, Kenneth. Attlee (1982), scholarly authorised biography Howell, David. Attlee (2006) Jago, Michael. Clement Attlee: The Inevitable Prime Minister (2014) Pearce, Robert. Attlee (1997), 206pp Thomas-Symonds, Nicklaus. Attlee: A Life in Politics (IB Tauris, 2010). Whiting, R. C. "Attlee, Clement Richard, first Earl Attlee (1883–1967)", Oxford Dictionary of National Biography, 2004; online edn, Jan 2011 accessed 12 June 2013 doi:10.1093/ref:odnb/30498 Biographies of his cabinet and associates Rosen, Greg. ed. Dictionary of Labour Biography. (Politicos Publishing, 2002); Morgan, Kenneth O. Labour people: Leaders and Lieutenants, Hardie to Kinnock (1987) Scholarly studies Addison, Paul. No Turning Back: The Peaceful Revolutions of Post-War Britain (2011) excerpt and text search , detailed coverage of nationalisation, welfare state and planning. Crowcroft, Robert, and Kevin Theakston. "The Fall of the Attlee Government, 1951", in Timothy Heppell and Kevin Theakston, eds. How Labour Governments Fall (Palgrave Macmillan UK, 2013). 
PP 61–82. Francis, Martin. Ideas and policies under Labour, 1945–1951: building a new Britain (Manchester University Press, 1997). Golant, W. "The Emergence of CR Attlee as Leader of the Parliamentary Labour Party in 1935", Historical Journal, 13#2 (1970): 318–332. in JSTOR Jackson, Ben. "Citizen and Subject: Clement Attlee's Socialism", History Workshop Journal (2018). Vol. 86 pp 291–298. online. Jeffreys, Kevin. "The Attlee Years, 1935–1955", in Brivati, Brian, and Heffernan, Richard, eds., The Labour Party: A Centenary History, Palgrave Macmillan UK, 2000. 68–86. Kynaston, David. Austerity Britain, 1945–1951 (2008). Mioni, Michele. "The Attlee government and welfare state reforms in post-war Italian Socialism (1945–51): Between universalism and class policies", Labor History, 57#2 (2016): 277–297. DOI:10.1080/0023656X.2015.1116811 Morgan, Kenneth O. Labour in Power 1945–1951 (1984), 564 pp. Ovendale, R. ed., The foreign policy of the British Labour governments, 1945–51 (1984) · Pugh, Martin. Speak for Britain!: A New History of the Labour Party (2011) excerpt and text search Swift, John. Labour in Crisis: Clement Attlee & the Labour Party in Opposition, 1931–1940 (2001) Tomlinson, Jim. Democratic Socialism and Economic Policy: The Attlee Years, 1945–1951 (2002) Excerpt and text search Weiler, Peter. "British Labour and the cold war: the foreign policy of the Labour governments, 1945–1951", Journal of British Studies, 26#1 (1987): 54–82. in JSTOR Works Clement Attlee published his memoirs, As it Happened, in 1954. Francis Williams' A Prime Minister Remembers, based on interviews with Attlee, was published in 1961. Attlee's other publications The Social Worker (1920) Metropolitan Borough Councils Their Constitution, Powers and Duties – Fabian Tract No 190 (1920) The Town Councillor (1925) The Will and the Way to Socialism (1935) The Labour Party in Perspective (1937) Collective Security Under the United Nations (1958) Empire into Commonwealth (1961) External links Clement Attlee – Thanksgiving Speech 1950 – UK Parliament Living Heritage More about Clement Attlee on the Downing Street website. Annotated bibliography for Clement Attlee from the Alsos Digital Library for Nuclear Issues Drawing of Clement Attlee in the UK Parliamentary Collections 1883 births 1967 deaths 20th-century English politicians 20th-century prime ministers of the United Kingdom Academics of Ruskin College Academics of the London School of Economics Alumni of University College, Oxford Alumni of the Inns of Court School of Law Association footballers not categorized by position British Army personnel of World War I British Empire in World War II British Secretaries of State for Dominion Affairs British Secretaries of State British socialists British Zionists British people of World War II British social democrats Burials at Westminster Abbey Chancellors of the Duchy of Lancaster Deaths from pneumonia in England Deputy Prime Ministers of the United Kingdom Clement English agnostics English footballers Fellows of the Royal Society Fleet Town F.C. 
players Foreign Office personnel of World War II Knights of the Garter Labour Party (UK) MPs for English constituencies Labour Party (UK) hereditary peers Labour Party prime ministers of the United Kingdom Leaders of the Labour Party (UK) Leaders of the Opposition (United Kingdom) Lord Presidents of the Council Lords Privy Seal Mayors of places in Greater London Members of Stepney Metropolitan Borough Council Members of the Fabian Society Members of the Order of Merit Members of the Order of the Companions of Honour Members of the Privy Council of the United Kingdom Ministers in the Attlee governments, 1945–1951 Ministers in the Churchill wartime government, 1940–1945 National Council for Civil Liberties people People educated at Haileybury and Imperial Service College People from Putney People of the Cold War Prime Ministers of the United Kingdom South Lancashire Regiment officers UK MPs 1922–1923 UK MPs 1923–1924 UK MPs 1924–1929 UK MPs 1929–1931 UK MPs 1931–1935 UK MPs 1935–1945 UK MPs 1945–1950 UK MPs 1950–1951 UK MPs 1951–1955 UK MPs 1955–1959 UK MPs who were granted peerages United Kingdom Postmasters General United Kingdom home front during World War II World War II political leaders Earls created by Elizabeth II Members of the Inner Temple British Eurosceptics Labour Party (UK) mayors
2,594
5,769
https://en.wikipedia.org/wiki/C.%20S.%20Forester
C. S. Forester
Cecil Louis Troughton Smith (27 August 1899 – 2 April 1966), known by his pen name Cecil Scott "C. S." Forester, was an English novelist known for writing tales of naval warfare, such as the 12-book Horatio Hornblower series depicting a Royal Navy officer during the Napoleonic wars. The Hornblower novels A Ship of the Line and Flying Colours were jointly awarded the James Tait Black Memorial Prize for fiction in 1938. His other works include The African Queen (1935; turned into a 1951 film by John Huston) and The Good Shepherd (1955; turned into a 2020 film, Greyhound, adapted by and starring Tom Hanks). Early years Forester was born in Cairo. After the family broke up when he was still at an early age his mother took him with her to London, where he was educated at Alleyn's School and Dulwich College. He began to study medicine at Guy's Hospital, but left without completing his degree. He was of good height and somewhat athletic, but wore glasses and had a slender physique, so he failed his Army physical and was told that there was no chance that he would be accepted. He began writing seriously, using his pen name, in around 1921. Second World War During the Second World War Forester moved to the United States, where he worked for the British Ministry of Information and wrote propaganda to encourage the U.S. to join the Allies. He eventually settled in Berkeley, California. In 1942, while he was living in Washington, D.C., he met Roald Dahl and encouraged him to write about his experiences in the RAF. According to Dahl's autobiography, Lucky Break, Forester asked him about his experiences as a fighter pilot, and this prompted Dahl to write his first story, "A Piece of Cake". Literary career Forester wrote many novels, but he is best known for the 12-book Horatio Hornblower series about an officer in the Royal Navy during the Napoleonic Wars. He began the series with Hornblower fairly high in rank in the first novel, which was published in 1937, but demand for more stories led him to fill in Hornblower's life story, and he wrote novels detailing his rise from the rank of midshipman. The last completed novel was published in 1962. Hornblower's fictional adventures were based on real events, but Forester wrote the body of the works carefully to avoid entanglements with real world history, so that Hornblower is always off on another mission when a great naval battle occurs during the Napoleonic Wars. Forester's other novels include The African Queen (1935) and The General (1936); two novels about the Peninsular War, Death to the French (published in the United States as Rifleman Dodd) and The Gun (filmed as The Pride and the Passion in 1957); and seafaring stories that do not involve Hornblower, such as Brown on Resolution (1929), The Captain from Connecticut (1941), The Ship (1943), and Hunting the Bismarck (1959), which was used as the basis of the screenplay for the film Sink the Bismarck! (1960). Several of his novels have been filmed, including The African Queen (1951), directed by John Huston. Forester is also credited as story writer on several films not based on his published novels, including Commandos Strike at Dawn (1942). Forester also wrote several volumes of short stories set during the Second World War. Those in The Nightmare (1954) were based on events in Nazi Germany, ending at the Nuremberg trials. The stories in The Man in the Yellow Raft (1969) follow the career of the destroyer USS Boon, while many of the stories in Gold from Crete (1971) follow the destroyer HMS Apache. 
The last of the stories in Gold from Crete is If Hitler Had Invaded England, which offers an imagined sequence of events starting with Hitler's attempt to implement Operation Sea Lion and culminating in the early military defeat of Nazi Germany in the summer of 1941. His non-fiction works about seafaring include The Age of Fighting Sail (1956), an account of the sea battles between Great Britain and the United States in the War of 1812. Forester also published the crime novels Payment Deferred (1926) and Plain Murder (1930), as well as two children's books. Poo-Poo and the Dragons (1942) was created as a series of stories told to his son George to encourage him to finish his meals. George had mild food allergies and needed encouragement to eat. The Barbary Pirates (1953) is a children's history of early 19th-century pirates. Forester appeared as a contestant on the television quiz programme You Bet Your Life, hosted by Groucho Marx, in an episode broadcast on 1 November 1956. A previously unknown novel of Forester's, The Pursued, was discovered in 2003 and published by Penguin Classics on 3 November 2011. Personal life Forester married Kathleen Belcher in 1926. They had two sons, John, born in 1929, and George, born in 1933. The couple divorced in 1945. In 1947 he married Dorothy Foster. Forester died in Fullerton, California on 2 April 1966. John Forester wrote a two-volume biography of his father, including many elements of Forester's life which became clear to his son only after his father's death. Bibliography Horatio Hornblower 1950 Mr Midshipman Hornblower. Michael Joseph. 1941 "The Hand of Destiny".Collier's 1950 "Hornblower and the Widow McCool" ("Hornblower’s Temptation" ""Hornblower and the Big Decision"). The Saturday Evening Post 1952 Lieutenant Hornblower. Michael Joseph. 1962 Hornblower and the Hotspur. Michael Joseph. 1967 Hornblower and the Crisis, an unfinished novel. Michael Joseph. Published in the US as Hornblower During the Crisis (posthumous) 1953 Hornblower and the Atropos. Michael Joseph. 1937 The Happy Return. Michael Joseph. Published in the US as Beat to Quarters 1938 A Ship of the Line. Michael Joseph. 1941 "Hornblower's Charitable Offering". Argosy 1938 Flying Colours. Michael Joseph. 1941 "Hornblower and His Majesty". Collier's 1945 The Commodore. Michael Joseph. Published in the US as Commodore Hornblower 1946 Lord Hornblower. Michael Joseph. 1958 Hornblower in the West Indies. Michael Joseph. Published in the US as Admiral Hornblower in the West Indies 1967 "The Last Encounter". Sunday Mirror, 8 May 1966 (posthumous). 1964 The Hornblower Companion. Michael Joseph. (Supplementary book comprising another short story, "The Point and the Edge" only as an outline, "The Hornblower Atlas" and "Some Personal Notes") Omnibus 1964 The Young Hornblower. (a compilation of books 1, 2 & 3). Michael Joseph. 1965 Captain Hornblower (a compilation of books 5, 6 & 7). Michael Joseph. 1968 Admiral Hornblower (a compilation of books 8, 9, 10 & 11). Michael Joseph. 2011 Hornblower Addendum – Five Short Stories (originally published in magazines) Other novels 1924 A Pawn among Kings. Methuen. 1924 The Paid Piper. Methuen. 1926 Payment Deferred. Methuen. 1927 Love Lies Dreaming. John Lane. 1927 The Wonderful Week. John Lane. 1928 The Daughter of the Hawk. John Lane. 1929 Brown on Resolution. John Lane. 1930 Plain Murder. John Lane. 1931 Two-and-Twenty. John Lane. 1932 Death to the French. John Lane. Published in the U.S. as Rifleman Dodd. Little Brown. 1933 The Gun. John Lane. 
1934 The Peacemaker. Heinemann. 1935 The African Queen. Heinemann. 1935 The Pursued (a lost novel rediscovered in 1999 and published by Penguin Classics in 2011) 1936 The General. Michael Joseph. First published as a serial in the News Chronicle 14–18 January 1935 1940 The Earthly Paradise. Michael Joseph. Published in the U.S. as To the Indies. 1941 The Captain from Connecticut. Michael Joseph. 1942 Poo-Poo and the Dragons. Michael Joseph. 1943 The Ship. Michael Joseph. 1948 The Sky and the Forest. Michael Joseph. 1951 Randall and the River of Time. Michael Joseph. 1955 The Good Shepherd. Michael Joseph. Short stories "The Wandering Gentile", Liverpool Echo, 1955 Posthumous 1967 Long before Forty (autobiographical). Michael Joseph. 1971 Gold from Crete (short stories). Michael Joseph. 2011 The Pursued (novel). Penguin. Collections 1944 The Bedchamber Mystery; to which is added the story of The Eleven Deckchairs and Modernity and Maternity. S. J. Reginald Saunders. Published in the US as Three Matronly Mysteries. eNet Press 1954 The Nightmare. Michael Joseph 1969 The Man in the Yellow Raft. Michael Joseph (posthumous) Plays in three acts; John Lane 1931 U 97 1933 Nurse Cavell. (with C. E. Bechhofer Roberts) Non-fiction 1922 Victor Emmanuel II. Methuen (?) 1927 Victor Emmanuel II and the Union of Italy. Methuen. 1924 Napoleon and his Court. Methuen. 1925 Josephine, Napoleon’s Empress. Methuen. 1928 Louis XIV, King of France and Navarre. Methuen. 1929 Lord Nelson. John Lane. 1929 The Voyage of the Annie Marble. John Lane. 1930 The Annie Marble in Germany. John Lane. 1936 Marionettes at Home. Michael Joseph Ltd. 1953 The Adventures of John Wetherell. Doubleday & Company, Inc. 1953 The Barbary Pirates. Landmark Books, Random House. Published in the UK in 1956 by Macdonald & Co. 1957 The Naval War of 1812. Michael Joseph. Published in the US as The Age of Fighting Sail 1959 Hunting the Bismarck. Michael Joseph. Published in the US as The Last Nine Days of the Bismark and Sink the Bismarck Non-fiction short pieces "Calmness under Air Raids in Franco Territory". Western Mail, 28 April 1937 "Who Is Financing Franco?". Aberdeen Press & Journal, 5 May 1937 ”Sabotage”. Sunday Graphic, 11 September 1938 "Saga of the Submarines". Falkirk Herald, 1 August 1945 "Hollywood Coincidence". Leicester Chronicle, 3 September 1955 Film adaptations In addition to providing the source material for numerous adaptations (not all of which are listed below), Forester was also credited as "adapted for the screen by" for Captain Horatio Hornblower. Payment Deferred (1932), based on a 1931 play which was in turn based on Forester's novel of the same name Brown on Resolution (1935), based on the novel of the same name Eagle Squadron (1942), story Commandos Strike at Dawn (1942), short story "The Commandos" Forever and a Day (1943), story Captain Horatio Hornblower (1951), based on the novels The Happy Return, A Ship of the Line and Flying Colours The African Queen (1951), the novel of the same name Sailor of the King (1953), the novel Brown on Resolution The Pride and the Passion (1957), the novel The Gun Sink the Bismarck! (1960), the novel The Last Nine Days of the Bismarck Hornblower (1998–2003 series of made-for-television movies), based on the novels Mr. 
Midshipman Hornblower, Lieutenant Hornblower and Hornblower and the Hotspur Greyhound (2020), the novel The Good Shepherd See also Honor Harrington – a fictional space captain and admiral in the Honorverse novels by David Weber, inspired by Horatio Hornblower (see dedication in On Basilisk Station) Patrick O'Brian – author of the Aubrey–Maturin series Dudley Pope – author of the Ramage series Richard Woodman - author of the Nathaniel Drinkwater series Douglas Reeman (writing as Alexander Kent) - The Bolitho novels References Further reading Sternlicht, Sanford V., C.S. Forester and the Hornblower saga (Syracuse University Press, 1999) Van der Kiste, John, C.S. Forester's Crime Noir: A view of the murder stories (KDP, 2018) External links C. S. Forester Collection at the Harry Ransom Center C. S. Forester Society, which publishes the e-journal Reflections C. S. Forester on You Bet Your Life in 1956 1899 births 1966 deaths 20th-century English novelists 20th-century English male writers 20th-century pseudonymous writers Alumni of King's College London English historical novelists English male novelists James Tait Black Memorial Prize recipients Nautical historical novelists People educated at Alleyn's School People educated at Dulwich College Writers about the Age of Sail Writers from London Writers of historical fiction set in the modern age
2,596
5,778
https://en.wikipedia.org/wiki/Cave
Cave
A cave or cavern is a natural void in the ground, specifically a space large enough for a human to enter. Caves often form by the weathering of rock and often extend deep underground. The word cave can refer to smaller openings such as sea caves, rock shelters, and grottos, that extend a relatively short distance into the rock and they are called exogene caves. Caves which extend further underground than the opening is wide are called endogene caves. Speleology is the science of exploration and study of all aspects of caves and the cave environment. Visiting or exploring caves for recreation may be called caving, potholing, or spelunking. Formation types The formation and development of caves is known as speleogenesis; it can occur over the course of millions of years. Caves can range widely in size, and are formed by various geological processes. These may involve a combination of chemical processes, erosion by water, tectonic forces, microorganisms, pressure, and atmospheric influences. Isotopic dating techniques can be applied to cave sediments, to determine the timescale of the geological events which formed and shaped present-day caves. It is estimated that a cave cannot be more than vertically beneath the surface due to the pressure of overlying rocks. This does not, however, impose a maximum depth for a cave which is measured from its highest entrance to its lowest point, as the amount of rock above the lowest point is dependent on the topography of the landscape above it. For karst caves the maximum depth is determined on the basis of the lower limit of karst forming processes, coinciding with the base of the soluble carbonate rocks. Most caves are formed in limestone by dissolution. Caves can be classified in various other ways as well, including a contrast between active and relict: active caves have water flowing through them; relict caves do not, though water may be retained in them. Types of active caves include inflow caves ("into which a stream sinks"), outflow caves ("from which a stream emerges"), and through caves ("traversed by a stream"). Solutional Solutional caves or karst caves are the most frequently occurring caves. Such caves form in rock that is soluble; most occur in limestone, but they can also form in other rocks including chalk, dolomite, marble, salt, and gypsum. Rock is dissolved by natural acid in groundwater that seeps through bedding planes, faults, joints, and comparable features. Over time cracks enlarge to become caves and cave systems. The largest and most abundant solutional caves are located in limestone. Limestone dissolves under the action of rainwater and groundwater charged with H2CO3 (carbonic acid) and naturally occurring organic acids. The dissolution process produces a distinctive landform known as karst, characterized by sinkholes and underground drainage. Limestone caves are often adorned with calcium carbonate formations produced through slow precipitation. These include flowstones, stalactites, stalagmites, helictites, soda straws and columns. These secondary mineral deposits in caves are called speleothems. The portions of a solutional cave that are below the water table or the local level of the groundwater will be flooded. Lechuguilla Cave in New Mexico and nearby Carlsbad Cavern are now believed to be examples of another type of solutional cave. They were formed by H2S (hydrogen sulfide) gas rising from below, where reservoirs of oil give off sulfurous fumes. This gas mixes with groundwater and forms H2SO4 (sulfuric acid). 
The acid then dissolves the limestone from below, rather than from above, by acidic water percolating from the surface. Primary Caves formed at the same time as the surrounding rock are called primary caves. Lava tubes are formed through volcanic activity and are the most common primary caves. As lava flows downhill, its surface cools and solidifies. Hot liquid lava continues to flow under that crust, and if most of it flows out, a hollow tube remains. Such caves can be found in the Canary Islands, Jeju-do, the basaltic plains of Eastern Idaho, and in other places. Kazumura Cave near Hilo, Hawaii is a remarkably long and deep lava tube; it is . Lava caves include but are not limited to lava tubes. Other caves formed through volcanic activity include rifts, lava molds, open vertical conduits, inflationary, blisters, among others. Sea or littoral Sea caves are found along coasts around the world. A special case is littoral caves, which are formed by wave action in zones of weakness in sea cliffs. Often these weaknesses are faults, but they may also be dykes or bedding-plane contacts. Some wave-cut caves are now above sea level because of later uplift. Elsewhere, in places such as Thailand's Phang Nga Bay, solutional caves have been flooded by the sea and are now subject to littoral erosion. Sea caves are generally around in length, but may exceed . Corrasional or erosional Corrasional or erosional caves are those that form entirely by erosion by flowing streams carrying rocks and other sediments. These can form in any type of rock, including hard rocks such as granite. Generally there must be some zone of weakness to guide the water, such as a fault or joint. A subtype of the erosional cave is the wind or aeolian cave, carved by wind-born sediments. Many caves formed initially by solutional processes often undergo a subsequent phase of erosional or vadose enlargement where active streams or rivers pass through them. Glacier Glacier caves are formed by melting ice and flowing water within and under glaciers. The cavities are influenced by the very slow flow of the ice, which tends to collapse the caves again. Glacier caves are sometimes misidentified as "ice caves", though this latter term is properly reserved for bedrock caves that contain year-round ice formations. Fracture Fracture caves are formed when layers of more soluble minerals, such as gypsum, dissolve out from between layers of less soluble rock. These rocks fracture and collapse in blocks of stone. Talus Talus caves are formed by the openings among large boulders that have fallen down into a random heap, often at the bases of cliffs. These unstable deposits are called talus or scree, and may be subject to frequent rockfalls and landslides. Anchialine Anchialine caves are caves, usually coastal, containing a mixture of freshwater and saline water (usually sea water). They occur in many parts of the world, and often contain highly specialized and endemic fauna. Physical patterns Branchwork caves resemble surface dendritic stream patterns; they are made up of passages that join downstream as tributaries. Branchwork caves are the most common of cave patterns and are formed near sinkholes where groundwater recharge occurs. Each passage or branch is fed by a separate recharge source and converges into other higher order branches downstream. Angular network caves form from intersecting fissures of carbonate rock that have had fractures widened by chemical erosion. 
These fractures form high, narrow, straight passages that persist in widespread closed loops. Anastomotic caves largely resemble surface braided streams with their passages separating and then meeting further down drainage. They usually form along one bed or structure, and only rarely cross into upper or lower beds. Spongework caves are formed when solution cavities are joined by mixing of chemically diverse water. The cavities form a pattern that is three-dimensional and random, resembling a sponge. Ramiform caves form as irregular large rooms, galleries, and passages. These randomized three-dimensional rooms form from a rising water table that erodes the carbonate rock with hydrogen-sulfide enriched water. Pit caves (vertical caves, potholes, or simply "pits") consist of a vertical shaft rather than a horizontal cave passage. They may or may not be associated with one of the above structural patterns. Geographic distribution Caves are found throughout the world, although the distribution of documented cave system is heavily skewed towards those countries where caving has been popular for many years (such as France, Italy, Australia, the UK, the United States, etc.). As a result, explored caves are found widely in Europe, Asia, North America and Oceania, but are sparse in South America, Africa, and Antarctica. This is a rough generalization, as large expanses of North America and Asia contain no documented caves, whereas areas such as the Madagascar dry deciduous forests and parts of Brazil contain many documented caves. As the world's expanses of soluble bedrock are researched by cavers, the distribution of documented caves is likely to shift. For example, China, despite containing around half the world's exposed limestone—more than —has relatively few documented caves. Records and superlatives The cave system with the greatest total length of surveyed passage is Mammoth Cave in Kentucky, US, at . The longest surveyed underwater cave, and second longest overall, is Sistema Sac Actun in Yucatán, Mexico at . The deepest known cave — measured from its highest entrance to its lowest point — is Veryovkina Cave in Abkhazia, Georgia, with a depth of . This was the first cave to be explored to a depth of more than . (The first cave to be descended below was Gouffre Berger in France.) The Sarma and Illyuzia-Mezhonnogo-Snezhnaya caves in Georgia, (, and respectively) are the current second- and third-deepest caves. The deepest outside Georgia is Lamprechtsofen Vogelschacht Weg Schacht in Austria, which is deep. The deepest vertical shaft in a cave is in Vrtoglavica Cave in Slovenia. The second deepest is Ghar-e-Ghala at in the Parau massif near Kermanshah in Iran. The deepest underwater cave bottomed by a remotely operated underwater vehicle at , is the Hranice Abyss in the Czech Republic. The largest known room is Sarawak Chamber, in the Gunung Mulu National Park (Miri, Sarawak, Borneo, Malaysia), a sloping, boulder strewn chamber with an area of approximately and a height of . The nearby Clearwater Cave System is believed to be the world's largest cave by volume, with a calculated volume of . The largest room in a show cave is the salle de La Verna in the French Pyrenees. The largest passage ever discovered is in the Son Doong Cave in Phong Nha-Kẻ Bàng National Park in Quảng Bình Province, Vietnam. It is in length, high and wide over most of its length, but over high and wide for part of its length. 
Five longest surveyed Mammoth Cave, Kentucky, US Sistema Sac Actun/Sistema Dos Ojos, Mexico Jewel Cave, South Dakota, US Sistema Ox Bel Ha, Mexico Shuanghedong Cave Network, China Ecology Cave-inhabiting animals are often categorized as troglobites (cave-limited species), troglophiles (species that can live their entire lives in caves, but also occur in other environments), trogloxenes (species that use caves, but cannot complete their life cycle fully in caves) and accidentals (animals not in one of the previous categories). Some authors use separate terminology for aquatic forms (for example, stygobites, stygophiles, and stygoxenes). Of these animals, the troglobites are perhaps the most unusual organisms. Troglobitic species often show a number of characteristics, termed troglomorphic, associated with their adaptation to subterranean life. These characteristics may include a loss of pigment (often resulting in a pale or white coloration), a loss of eyes (or at least of optical functionality), an elongation of appendages, and an enhancement of other senses (such as the ability to sense vibrations in water). Aquatic troglobites (or stygobites), such as the endangered Alabama cave shrimp, live in bodies of water found in caves and get nutrients from detritus washed into their caves and from the feces of bats and other cave inhabitants. Other aquatic troglobites include cave fish, and cave salamanders such as the olm and the Texas blind salamander. Cave insects such as Oligaphorura (formerly Archaphorura) schoetti are troglophiles, reaching in length. They have extensive distribution and have been studied fairly widely. Most specimens are female, but a male specimen was collected from St Cuthberts Swallet in 1969. Bats, such as the gray bat and Mexican free-tailed bat, are trogloxenes and are often found in caves; they forage outside of the caves. Some species of cave crickets are classified as trogloxenes, because they roost in caves by day and forage above ground at night. Because of the fragility of cave ecosystems, and the fact that cave regions tend to be isolated from one another, caves harbor a number of endangered species, such as the Tooth cave spider, liphistius trapdoor spider, and the gray bat. Caves are visited by many surface-living animals, including humans. These are usually relatively short-lived incursions, due to the lack of light and sustenance. Cave entrances often have typical florae. For instance, in the eastern temperate United States, cave entrances are most frequently (and often densely) populated by the bulblet fern, Cystopteris bulbifera. Archaeological and cultural importance Throughout history, primitive peoples have made use of caves. The earliest human fossils found in caves come from a series of caves near Krugersdorp and Mokopane in South Africa. The cave sites of Sterkfontein, Swartkrans, Kromdraai B, Drimolen, Malapa, Cooper's D, Gladysvale, Gondolin and Makapansgat have yielded a range of early human species dating back to between three and one million years ago, including Australopithecus africanus, Australopithecus sediba and Paranthropus robustus. However, it is not generally thought that these early humans were living in the caves, but that they were brought into the caves by carnivores that had killed them. The first early hominid ever found in Africa, the Taung Child in 1924, was also thought for many years to come from a cave, where it had been deposited after being predated on by an eagle. However, this is now debated (Hopley et al., 2013; Am. J. 
Phys. Anthrop.). Caves do form in the dolomite of the Ghaap Plateau, including the Early, Middle and Later Stone Age site of Wonderwerk Cave; however, the caves that form along the escarpment's edge, like that hypothesised for the Taung Child, are formed within a secondary limestone deposit called tufa. There is extensive evidence for other early human species inhabiting caves from at least one million years ago in different parts of the world, including Homo erectus in China at Zhoukoudian, Homo rhodesiensis in South Africa at the Cave of Hearths (Makapansgat), Homo neanderthalensis and Homo heidelbergensis in Europe at the Archaeological Site of Atapuerca, Homo floresiensis in Indonesia, and the Denisovans in southern Siberia. In southern Africa, early modern humans regularly used sea caves as shelter starting about 180,000 years ago when they learned to exploit the sea for the first time. The oldest known site is PP13B at Pinnacle Point. This may have allowed rapid expansion of humans out of Africa and colonization of areas of the world such as Australia by 60–50,000 years ago. Throughout southern Africa, Australia, and Europe, early modern humans used caves and rock shelters as sites for rock art, such as those at Giant's Castle. Caves such as the yaodong in China were used for shelter; other caves were used for burials (such as rock-cut tombs), or as religious sites (such as Buddhist caves). Among the known sacred caves are China's Cave of a Thousand Buddhas and the sacred caves of Crete. Caves and acoustics The importance of sound in caves predates a modern understanding of acoustics. Archaeologists have uncovered relationships between paintings of dots and lines, in specific areas of resonance, within the caves of Spain and France, as well as instruments depicting paleolithic motifs, indicators of musical events and rituals. Clusters of paintings were often found in areas with notable acoustics, sometimes even replicating the sounds of the animals depicted on the walls. The human voice is also theorized to have been used as an echolocation device to navigate darker areas of the caves where torches were less useful. Dots of red ochre are often found in spaces with the highest resonance, where the production of paintings was too difficult. Caves continue to be used by modern-day explorers of acoustics. Today Cumberland Caverns provides one of the best examples of modern musical uses of caves. Caves are utilized not only for their reverberations, but also for the dampening qualities of their irregular surfaces. The irregularities in the walls of the Cumberland Caverns diffuse sounds bouncing off the walls and give the space an almost recording-studio-like quality. During the 20th century musicians began to explore the possibility of using caves as locations for clubs and concert halls, including the likes of Dinah Shore, Roy Acuff, and Benny Goodman. Unlike today, these early performances were typically held in the mouths of the caves, as the lack of technology made the depths of the interior inaccessible to musical equipment. In Luray Caverns, Virginia, a functioning organ has been developed that generates sound by mallets striking stalactites, each with a different pitch. See also References Erosion landforms Fluvial landforms
2,601
5,781
https://en.wikipedia.org/wiki/Chinese%20numerals
Chinese numerals
Chinese numerals are words and characters used to denote numbers in Chinese. Today, speakers of Chinese languages use three written numeral systems: the system of Arabic numerals used worldwide, and two indigenous systems. The more familiar indigenous system is based on Chinese characters that correspond to numerals in the spoken language. These may be shared with other languages of the Chinese cultural sphere such as Korean, Japanese, and Vietnamese. Most people and institutions in China primarily use the Arabic or mixed Arabic-Chinese systems for convenience, with traditional Chinese numerals used in finance, mainly for writing amounts on cheques and banknotes, on some ceremonial occasions, on some packaging, and in some advertisements. The other indigenous system consists of the Suzhou numerals, or huama, a positional system, the only surviving form of the rod numerals. These were once used by Chinese mathematicians, and later by merchants in Chinese markets, such as those in Hong Kong until the 1990s, but were gradually supplanted by Arabic numerals. Characters used to represent numbers The Chinese character numeral system consists of the Chinese characters used by the Chinese written language to write spoken numerals. Similar to spelling out numbers in English (e.g., "one thousand nine hundred forty-five"), it is not an independent system per se. Since it reflects spoken language, it does not use the positional system as in Arabic numerals, in the same way that spelling out numbers in English does not. Standard numbers There are characters representing the numbers zero through nine, and other characters representing larger numbers such as tens, hundreds, thousands, ten thousands and hundred millions. There are two sets of characters for Chinese numerals: one for everyday writing, known as xiǎoxiě (小寫), and one for use in commercial, accounting or financial contexts, known as dàxiě (大寫). The latter arose because the characters used for writing numerals are geometrically simple, so simply using those numerals cannot prevent forgeries in the same way spelling numbers out in English would. A forger could easily change the everyday characters 三十 (30) to 五千 (5000) just by adding a few strokes. That would not be possible when writing using the financial characters 參拾 (30) and 伍仟 (5000). They are also referred to as "banker's numerals", "anti-fraud numerals", or "banker's anti-fraud numerals". For the same reason, rod numerals were never used in commercial records. T denotes Traditional Chinese characters, while S denotes Simplified Chinese characters. Characters with regional usage Large numbers For numbers larger than 10,000, similarly to the long and short scales in the West, there have been four systems in ancient and modern usage. The original one, with unique names for all powers of ten up to the 14th, is ascribed to the Yellow Emperor in the 6th-century book by Zhen Luan, Wujing suanshu (Arithmetic in Five Classics). In modern Chinese only the second system is used, in which the same ancient names are used, but each represents a number 10,000 (myriad, 萬 wàn) times the previous: In practice, this situation does not lead to ambiguity, with the exception of 兆 (zhào), which means 10^12 according to the system in common usage throughout the Chinese communities as well as in Japan and Korea, but has also been used for 10^6 in recent years (especially in mainland China for megabyte). 
To avoid problems arising from this ambiguity, the PRC government never uses this character in official documents, but uses 万亿 (wànyì) or 太 (tài, as the translation for tera) instead. Partly due to this, combinations of 万 and 亿 are often used instead of the larger units of the traditional system as well, for example 亿亿 (yìyì) instead of 京. The ROC government in Taiwan uses 兆 (zhào) to mean 10^12 in official documents. Large numbers from Buddhism Numerals beyond 載 zǎi come from Buddhist texts in Sanskrit, but are mostly found in ancient texts. Some of the following words are still being used today, but may have transferred meanings. Small numbers The following are characters used to denote small orders of magnitude in Chinese historically. With the introduction of SI units, some of them have been incorporated as SI prefixes, while the rest have fallen into disuse. Small numbers from Buddhism SI prefixes In the People's Republic of China, the early translations for the SI prefixes adopted in 1981 were different from those used today. The larger (兆, 京, 垓, 秭, 穰) and smaller Chinese numerals (微, 纖, 沙, 塵, 渺) were defined as translations of the SI prefixes mega, giga, tera, peta, exa and micro, nano, pico, femto, atto, resulting in the creation of yet more values for each numeral. The Republic of China (Taiwan) defined 百萬 as the translation for mega and 兆 as the translation for tera. This translation is widely used in official documents, academic communities, informational industries, etc. However, the civil broadcasting industries sometimes use 兆赫 to represent "megahertz". Today, the governments of both China and Taiwan use phonetic transliterations for the SI prefixes. However, the governments have each chosen different Chinese characters for certain prefixes. The following table lists the two different standards together with the early translation. Reading and transcribing numbers Whole numbers Multiple-digit numbers are constructed using a multiplicative principle; first the digit itself (from 1 to 9), then the place (such as 10 or 100); then the next digit. In Mandarin, the multiplier (liǎng) is often used rather than (èr) for all numbers 200 and greater with the "2" numeral (although as noted earlier this varies from dialect to dialect and person to person). Either 兩 (liǎng) or 二 (èr) is acceptable for the number 200. When writing in the Cantonese dialect, 二 (yi6) is used to represent the "2" numeral for all numbers. In the southern Min dialect of Chaozhou (Teochew), 兩 (no6) is used to represent the "2" numeral in all numbers from 200 onwards. Thus: For the numbers 11 through 19, the leading "one" () is usually omitted. In some dialects, like Shanghainese, when there are only two significant digits in the number, the leading "one" and the trailing zeroes are omitted. Sometimes, the one before "ten" in the middle of a number, such as 213, is omitted. Thus: Notes: Nothing is ever omitted in large and more complicated numbers such as this. In certain older texts like the Protestant Bible or in poetic usage, numbers such as 114 may be written as [100] [10] [4] (). Outside of Taiwan, digits are sometimes grouped by myriads instead of thousands. Hence it is more convenient to think of numbers here as in groups of four, thus 1,234,567,890 is regrouped here as 12,3456,7890. Larger than a myriad, each number is therefore four zeroes longer than the one before it, thus 10000 × () = (). If one of the numbers is between 10 and 19, the leading "one" is omitted as per the above point.
Hence (numbers in parentheses indicate that the number has been written as one number rather than expanded): In Taiwan, pure Arabic numerals are officially always and only grouped by thousands. Unofficially, they are often not grouped, particularly for numbers below 100,000. Mixed Arabic-Chinese numerals are often used in order to denote myriads. This is used both officially and unofficially, and comes in a variety of styles: Interior zeroes before the unit position (as in 1002) must be spelt explicitly. The reason for this is that trailing zeroes (as in 1200) are often omitted as shorthand, so ambiguity occurs. One zero is sufficient to resolve the ambiguity. Where the zero is before a digit other than the units digit, the explicit zero is not ambiguous and is therefore optional, but preferred. Thus: Fractional values To construct a fraction, the denominator is written first, followed by , then the literary possessive particle , and lastly the numerator. This is the opposite of how fractions are read in English, which is numerator first. Each half of the fraction is written the same as a whole number. For example, to express "two thirds", the structure "three parts of-this two" is used. Mixed numbers are written with the whole-number part first, followed by , then the fractional part. Percentages are constructed similarly, using as the denominator. (The number 100 is typically expressed as , like the English "one hundred". However, for percentages, is used on its own.) Because percentages and other fractions are formulated the same, Chinese are more likely than not to express 10%, 20% etc. as "parts of 10" (or 1/10, 2/10, etc. i.e. ; shí fēnzhī yī, ; shí fēnzhī èr, etc.) rather than "parts of 100" (or 10/100, 20/100, etc. i.e. ; bǎi fēnzhī shí, ; bǎi fēnzhī èrshí, etc.) In Taiwan, the most common formation of percentages in the spoken language is the number per hundred followed by the word ; pā, a contraction of the Japanese ; pāsento, itself taken from the English "percent". Thus 25% is ; èrshíwǔ pā. Decimal numbers are constructed by first writing the whole number part, then inserting a point (), and finally the fractional part. The fractional part is expressed using only the numbers for 0 to 9, similarly to English. functions as a number and therefore requires a measure word. For example: . Ordinal numbers Ordinal numbers are formed by adding ("sequence") before the number. The Heavenly Stems are a traditional Chinese ordinal system. Negative numbers Negative numbers are formed by adding fù () before the number. Usage Chinese grammar requires the use of classifiers (measure words) when a numeral is used together with a noun to express a quantity. For example, "three people" is expressed as , "three ( particle) person", where / is a classifier. There exist many different classifiers, for use with different sets of nouns, although / is the most common, and may be used informally in place of other classifiers. Chinese uses cardinal numbers in certain situations in which English would use ordinals. For example, (literally "three story/storey") means "third floor" ("second floor" in British English). Likewise, (literally "twenty-one century") is used for "21st century". Numbers of years are commonly spoken as a sequence of digits, as in ("two zero zero one") for the year 2001. Names of months and days (in the Western system) are also expressed using numbers: ("one month") for January, etc.; and ("week one") for Monday, etc.
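The construction rules above lend themselves to a mechanical sketch. The following Python snippet is only an illustration (the function names are ours): it uses the simplified everyday characters, keeps the leading 一 in 11–19 and uses 二 throughout rather than 兩 for simplicity, and handles whole numbers below 10^8 with myriad grouping and explicit interior zeros, plus fractions built as denominator + 分之 + numerator.

```python
# Minimal sketch of the whole-number and fraction construction rules described
# above, using the everyday (xiaoxie) characters, for 0 <= n < 10**8.
DIGITS = "零一二三四五六七八九"
UNITS = ["", "十", "百", "千"]          # place characters within one myriad group

def four_digits(n: int) -> str:
    """Convert 0 < n < 10000 to characters, spelling interior zeros as 零."""
    out, pending_zero = "", False
    for power in (3, 2, 1, 0):
        d = n // 10**power % 10
        if d == 0:
            pending_zero = bool(out)    # remember a skipped interior zero
        else:
            if pending_zero:
                out += DIGITS[0]        # one 零 is enough, however many zeros were skipped
                pending_zero = False
            out += DIGITS[d] + UNITS[power]
    return out

def to_chinese(n: int) -> str:
    """Convert 0 <= n < 10**8 using myriad (万) grouping."""
    if n == 0:
        return DIGITS[0]
    high, low = divmod(n, 10_000)
    text = ""
    if high:
        text += four_digits(high) + "万"
    if low:
        if high and low < 1000:         # e.g. 10,005 needs an explicit 零
            text += DIGITS[0]
        text += four_digits(low)
    return text

def fraction(numerator: int, denominator: int) -> str:
    """Denominator first, then 分之, then the numerator, e.g. 2/3 -> 三分之二."""
    return to_chinese(denominator) + "分之" + to_chinese(numerator)

print(to_chinese(1002))    # 一千零二
print(to_chinese(20005))   # 二万零五
print(fraction(2, 3))      # 三分之二
```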
There is only one exception to this numbering of the days: Sunday is , or informally , both literally "week day". When meaning "week", "" and "" are interchangeable. "" or "" means "day of worship". Chinese Catholics call Sunday "" , "Lord's day". Full dates are usually written in the format 2001年1月20日 for January 20, 2001 (using "year", "month", and "day") – all the numbers are read as cardinals, not ordinals, with no leading zeroes, and the year is read as a sequence of digits. For brevity the , and may be dropped to give a date composed of just numbers. For example "6-4" in Chinese is "six-four", short for "month six, day four" i.e. June Fourth, a common Chinese shorthand for the 1989 Tiananmen Square protests (because of the violence that occurred on June 4). For another example, 67 in Chinese is "sixty-seven", short for "year nineteen sixty-seven", a common Chinese shorthand for the Hong Kong 1967 leftist riots. Counting rod and Suzhou numerals In the same way that Roman numerals were standard in ancient and medieval Europe for mathematics and commerce, the Chinese formerly used the rod numerals, which are a positional system. The Suzhou numerals () system is a variation of the Southern Song rod numerals. Nowadays, the huāmǎ system is only used for displaying prices in Chinese markets or on traditional handwritten invoices. Hand gestures There is a common method of using one hand to signify the numbers one to ten. While the five digits on one hand can easily express the numbers one to five, six to ten have special signs that can be used in commerce or day-to-day communication. Historical use of numerals in China Most Chinese numerals of later periods were descendants of the Shang dynasty oracle numerals of the 14th century BC. The oracle bone script numerals were found on tortoise shell and animal bones. In early civilizations, the Shang were able to express any numbers, however large, with only nine symbols and a counting board, though the system was still not positional. Some of the bronze script numerals such as 1, 2, 3, 4, 10, 11, 12, and 13 became part of the system of rod numerals. In this system, horizontal rod numbers are used for the tens, thousands, hundred thousands etc. It is written in the Sunzi Suanjing that "one is vertical, ten is horizontal". The counting rod numerals system has place value and decimal numerals for computation, and was used widely by Chinese merchants, mathematicians and astronomers from the Han dynasty to the 16th century. In 690 AD, Empress Wǔ promulgated Zetian characters, one of which was "〇". The word is now used as a synonym for the number zero. Alexander Wylie, a Christian missionary to China, had already refuted in 1853 the notion that "the Chinese numbers were written in words at length", and stated that in ancient China, calculation was carried out by means of counting rods, and "the written character is evidently a rude presentation of these". After being introduced to the rod numerals, he said "Having thus obtained a simple but effective system of figures, we find the Chinese in actual use of a method of notation depending on the theory of local value [i.e. place-value], several centuries before such theory was understood in Europe, and while yet the science of numbers had scarcely dawned among the Arabs." During the Ming and Qing dynasties (after Arabic numerals were introduced into China), some Chinese mathematicians used Chinese numeral characters as positional system digits.
After the Qing period, both the Chinese numeral characters and the Suzhou numerals were replaced by Arabic numerals in mathematical writings. Cultural influences Traditional Chinese numeric characters are also used in Japan and Korea and were used in Vietnam before the 20th century. In vertical text (that is, read top to bottom), using characters for numbers is the norm, while in horizontal text, Arabic numerals are most common. Chinese numeric characters are also used in much the same formal or decorative fashion that Roman numerals are in Western cultures. Chinese numerals may appear together with Arabic numbers on the same sign or document. See also Chinese number gestures Numbers in Chinese culture Chinese units of measurement Chinese classifier Chinese grammar Japanese numerals Korean numerals Vietnamese numerals Celestial stem List of numbers in Sinitic languages Notes References Numerals Chinese language Chinese mathematics
2,602
5,797
https://en.wikipedia.org/wiki/Cluster%20sampling
Cluster sampling
In statistics, cluster sampling is a sampling plan used when mutually homogeneous yet internally heterogeneous groupings are evident in a statistical population. It is often used in marketing research. In this sampling plan, the total population is divided into these groups (known as clusters) and a simple random sample of the groups is selected. The elements in each cluster are then sampled. If all elements in each sampled cluster are sampled, then this is referred to as a "one-stage" cluster sampling plan. If a simple random subsample of elements is selected within each of these groups, this is referred to as a "two-stage" cluster sampling plan. A common motivation for cluster sampling is to reduce the total number of interviews and costs given the desired accuracy. For a fixed sample size, the expected random error is smaller when most of the variation in the population is present internally within the groups, and not between the groups. Cluster elements The population within a cluster should ideally be as heterogeneous as possible, but there should be homogeneity between clusters. Each cluster should be a small-scale representation of the total population. The clusters should be mutually exclusive and collectively exhaustive. A random sampling technique is then used on any relevant clusters to choose which clusters to include in the study. In single-stage cluster sampling, all the elements from each of the selected clusters are sampled. In two-stage cluster sampling, a random sampling technique is applied to the elements from each of the selected clusters. The main difference between cluster sampling and stratified sampling is that in cluster sampling the cluster is treated as the sampling unit so sampling is done on a population of clusters (at least in the first stage). In stratified sampling, the sampling is done on elements within each stratum. In stratified sampling, a random sample is drawn from each of the strata, whereas in cluster sampling only the selected clusters are sampled. A common motivation for cluster sampling is to reduce costs by increasing sampling efficiency. This contrasts with stratified sampling where the motivation is to increase precision. There is also multistage cluster sampling, where at least two stages are taken in selecting elements from clusters. When clusters are of different sizes Without modifying the estimated parameter, cluster sampling is unbiased when the clusters are approximately the same size. In this case, the parameter is computed by combining all the selected clusters. When the clusters are of different sizes there are several options: One method is to sample clusters and then survey all elements in that cluster. Another method is a two-stage method of sampling a fixed proportion of units (be it 5% or 50%, or another number, depending on cost considerations) from within each of the selected clusters. Relying on the sample drawn from these options will yield an unbiased estimator. However, the sample size is no longer fixed upfront. This leads to a more complicated formula for the standard error of the estimator, as well as issues with the optics of the study plan (since the power analysis and the cost estimations often relate to a specific sample size). A third possible solution is to use probability proportionate to size sampling. In this sampling plan, the probability of selecting a cluster is proportional to its size, so a large cluster has a greater probability of selection than a small cluster. 
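As a rough sketch of the probability-proportionate-to-size idea, the snippet below uses invented cluster names and sizes and numpy's generic weighted draw rather than a specialised survey-sampling routine, so a sequential draw without replacement only approximates a strict PPS design.

```python
import numpy as np

# Hypothetical sampling frame: cluster identifiers and their sizes.
cluster_sizes = {"A": 1200, "B": 300, "C": 450, "D": 2050, "E": 500}

rng = np.random.default_rng(seed=42)
names = list(cluster_sizes)
sizes = np.array([cluster_sizes[c] for c in names], dtype=float)

# Selection probability proportional to cluster size:
# a large cluster has a greater chance of being drawn than a small one.
probs = sizes / sizes.sum()

# Draw two clusters, weighting by size.
selected = rng.choice(names, size=2, replace=False, p=probs)
print(selected)
```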
The advantage of selecting clusters with probability proportionate to size is that the same number of interviews can then be carried out in each sampled cluster, so that each unit sampled has the same probability of selection. Applications of cluster sampling An example of cluster sampling is area sampling or geographical cluster sampling. Each cluster is a geographical area. Because a geographically dispersed population can be expensive to survey, greater economy than simple random sampling can be achieved by grouping several respondents within a local area into a cluster. It is usually necessary to increase the total sample size to achieve equivalent precision in the estimators, but cost savings may make such an increase in sample size feasible. Cluster sampling is used to estimate mortality in cases such as wars, famines and natural disasters. Advantages Can be cheaper than other sampling plans – e.g. fewer travel expenses and administration costs. Feasibility: This sampling plan takes large populations into account. Since these groups are so large, deploying any other sampling plan would be very costly. Economy: The two major concerns of expenditure, i.e., traveling and listing, are greatly reduced in this method. For example: compiling research information about every household in a city would be very costly, whereas compiling information about various blocks of the city will be more economical. Here, traveling as well as listing efforts will be greatly reduced. Reduced variability: in the rare case of a negative intraclass correlation between subjects within a cluster, the estimators produced by cluster sampling will yield more accurate estimates than data obtained from a simple random sample (i.e. the design effect will be smaller than 1). This is not a commonplace scenario. Major use: when the sampling frame of all elements is not available we can resort only to cluster sampling. Disadvantages Higher sampling error, which can be expressed by the design effect: the ratio between the variance of an estimator made from the samples of the cluster study and the variance of an estimator obtained from a sample of subjects in an equally reliable, randomly sampled unclustered study. The larger the intraclass correlation between subjects within a cluster, the worse the design effect becomes (i.e., the further it rises above 1, indicating a larger expected increase in the variance of the estimator). In other words, the more heterogeneity there is between clusters and the more homogeneity between subjects within a cluster, the less accurate our estimators become. This is because in such cases we are better off sampling as many clusters as we can and making do with a small sample of subjects from within each cluster (i.e. two-stage cluster sampling). Complexity: Cluster sampling is more sophisticated and requires more attention to how the study is planned and analyzed (e.g., taking into account the weights of subjects when estimating parameters, confidence intervals, etc.). More on cluster sampling Two-stage cluster sampling Two-stage cluster sampling, a simple case of multistage sampling, is obtained by selecting cluster samples in the first stage and then selecting a sample of elements from every sampled cluster. Consider a population of N clusters in total. In the first stage, n clusters are selected using the ordinary cluster sampling method. In the second stage, simple random sampling is usually used.
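A minimal sketch of such a two-stage plan follows; the population, function names and parameter values are invented for illustration, and the design-effect helper uses the standard equal-cluster-size approximation deff ≈ 1 + (m − 1)ρ rather than a full variance estimator.

```python
import random

# Hypothetical population: 50 clusters, each containing 80 labelled elements.
population = {c: [f"c{c}_e{i}" for i in range(80)] for c in range(50)}

def two_stage_sample(clusters, n_clusters, m_per_cluster, seed=0):
    """Stage 1: simple random sample of clusters.
    Stage 2: simple random subsample of m elements within each selected cluster."""
    rnd = random.Random(seed)
    chosen = rnd.sample(list(clusters), n_clusters)
    return {c: rnd.sample(clusters[c], m_per_cluster) for c in chosen}

sample = two_stage_sample(population, n_clusters=10, m_per_cluster=15)

def design_effect(m, rho):
    """Equal-cluster-size approximation: deff = 1 + (m - 1) * rho, where m is the
    number of elements sampled per cluster and rho is the intraclass correlation."""
    return 1 + (m - 1) * rho

print(design_effect(m=15, rho=0.05))  # 1.7: variance ~70% larger than a simple random sample
```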
Simple random sampling is applied separately in every cluster, and the numbers of elements selected from different clusters are not necessarily equal. The total number of clusters N, the number of clusters selected n, and the numbers of elements from selected clusters need to be pre-determined by the survey designer. Two-stage cluster sampling aims at minimizing survey costs and at the same time controlling the uncertainty related to estimates of interest. This method can be used in health and social sciences. For instance, researchers used two-stage cluster sampling to generate a representative sample of the Iraqi population to conduct mortality surveys. Sampling in this method can be quicker and more reliable than other methods, which is why this method is now used frequently. Inference when the number of clusters is small Cluster sampling methods can lead to significant bias when working with a small number of clusters. For instance, it can be necessary to cluster at the state or city level, units that may be small and fixed in number. Microeconometric methods for panel data often use short panels, which is analogous to having few observations per cluster and many clusters. The small cluster problem can be viewed as an incidental parameter problem. While the point estimates can be estimated reasonably precisely when the number of observations per cluster is sufficiently high, we need a large number of clusters for the asymptotics to kick in. If the number of clusters is low the estimated covariance matrix can be downward biased. Small numbers of clusters are a risk when there is serial correlation or when there is intraclass correlation as in the Moulton context. When having few clusters, we tend to underestimate serial correlation across observations when a random shock occurs, or the intraclass correlation in a Moulton setting. Several studies have highlighted the consequences of serial correlation and of the small-cluster problem. In the framework of the Moulton factor, an intuitive explanation of the small cluster problem can be derived from the formula for the Moulton factor. Assume for simplicity that the number of observations per cluster is fixed at n. Below, V_clustered stands for the covariance matrix adjusted for clustering, V_unclustered stands for the covariance matrix not adjusted for clustering, and ρ stands for the intraclass correlation; under these assumptions the standard expression for the Moulton factor is V_clustered / V_unclustered = 1 + (n − 1)ρ. The ratio on the left-hand side indicates how much the unadjusted scenario overestimates the precision. Therefore, a high number means a strong downward bias of the estimated covariance matrix. A small cluster problem can be interpreted as a large n: when the total amount of data is fixed and the number of clusters is low, the number of observations within a cluster can be high. It follows that inference, when the number of clusters is small, will not have the correct coverage. Several solutions for the small cluster problem have been proposed. One can use a bias-corrected cluster-robust variance matrix, make T-distribution adjustments, or use bootstrap methods with asymptotic refinements, such as the percentile-t or wild bootstrap, which can lead to improved finite-sample inference. Cameron, Gelbach and Miller (2008) provide microsimulations for different methods and find that the wild bootstrap performs well in the face of a small number of clusters. See also Multistage sampling Sampling (statistics) Simple random sampling Stratified sampling References Sampling techniques Market research
2,610
5,804
https://en.wikipedia.org/wiki/Charles%20Baudelaire
Charles Baudelaire
Charles Pierre Baudelaire (, ; ; 9 April 1821 – 31 August 1867) was a French poet who also produced notable work as an essayist, art critic and translator. His poems exhibit mastery of rhyme and rhythm, contain an exoticism inherited from Romantics, and are based on observations of real life. His most famous work, a book of lyric poetry titled Les Fleurs du mal (The Flowers of Evil), expresses the changing nature of beauty in the rapidly industrializing Paris during the mid-19th century. Baudelaire's original style of prose-poetry influenced a generation of poets including Paul Verlaine, Arthur Rimbaud and Stéphane Mallarmé, among many others. He coined the term modernity (modernité) to designate the fleeting experience of life in an urban metropolis, and the responsibility of artistic expression to capture that experience. Marshall Berman has credited Baudelaire as being the first Modernist. Early life Baudelaire was born in Paris, France, on 9 April 1821, and baptized two months later at Saint-Sulpice Roman Catholic Church. His father, Joseph-François Baudelaire (1759–1827), a senior civil servant and amateur artist, was 34 years older than Baudelaire's mother, Caroline (née Dufaÿs) (1794–1871). Joseph-François died during Baudelaire's childhood, at rue Hautefeuille, Paris, on 10 February 1827. The following year, Caroline married Lieutenant Colonel , who later became a French ambassador to various noble courts. Baudelaire's biographers have often seen this as a crucial moment, considering that finding himself no longer the sole focus of his mother's affection left him with a trauma, which goes some way to explaining the excesses later apparent in his life. He stated in a letter to her that, "There was in my childhood a period of passionate love for you." Baudelaire regularly begged his mother for money throughout his career, often promising that a lucrative publishing contract or journalistic commission was just around the corner. Baudelaire was educated in Lyon, where he boarded. At 14, he was described by a classmate as "much more refined and distinguished than any of our fellow pupils...we are bound to one another...by shared tastes and sympathies, the precocious love of fine works of literature." Baudelaire was erratic in his studies, at times diligent, at other times prone to "idleness". Later, he attended the Lycée Louis-le-Grand in Paris, studying law, a popular course for those not yet decided on any particular career. He began to frequent prostitutes and may have contracted gonorrhea and syphilis during this period. He also began to run up debts, mostly for clothes. Upon gaining his degree in 1839, he told his brother "I don't feel I have a vocation for anything." His stepfather had in mind a career in law or diplomacy, but instead Baudelaire decided to embark upon a literary career. His mother later recalled: "Oh, what grief! If Charles had let himself be guided by his stepfather, his career would have been very different...He would not have left a name in literature, it is true, but we should have been happier, all three of us." His stepfather sent him on a voyage to Calcutta, India in 1841 in the hope of ending his dissolute habits. The trip provided strong impressions of the sea, sailing, and exotic ports, that he later employed in his poetry. (Baudelaire later exaggerated his aborted trip to create a legend about his youthful travels and experiences, including "riding on elephants".) 
On returning to the taverns of Paris, he began to compose some of the poems of "Les Fleurs du Mal". At 21, he received a sizable inheritance but squandered much of it within a few years. His family obtained a decree to place his property in trust, which he resented bitterly, at one point arguing that allowing him to fail financially would have been the one sure way of teaching him to keep his finances in order. Baudelaire became known in artistic circles as a dandy and free-spender, going through much of his inheritance and allowance in a short period of time. During this time, Jeanne Duval became his mistress. She was rejected by his family. His mother thought Duval a "Black Venus" who "tortured him in every way" and drained him of money at every opportunity. Baudelaire made a suicide attempt during this period. He took part in the Revolutions of 1848 and wrote for a revolutionary newspaper. However, his interest in politics was passing, as he was later to note in his journals. In the early 1850s, Baudelaire struggled with poor health, pressing debts, and irregular literary output. He often moved from one lodging to another to escape creditors. He undertook many projects that he was unable to complete, though he did finish translations of stories by Edgar Allan Poe. Upon the death of his stepfather in 1857, Baudelaire received no mention in the will but he was heartened nonetheless that the division with his mother might now be mended. At 36, he wrote to her: "believe that I belong to you absolutely, and that I belong only to you." His mother died on 16 August 1871, outliving her son by almost four years. Publishing career His first published work, under the pseudonym Baudelaire Dufaÿs, was his art review "Salon of 1845", which attracted immediate attention for its boldness. Many of his critical opinions were novel in their time, including his championing of Delacroix, and some of his views seem remarkably in tune with the future theories of the Impressionist painters. In 1846, Baudelaire wrote his second Salon review, gaining additional credibility as an advocate and critic of Romanticism. His continued support of Delacroix as the foremost Romantic artist gained widespread notice. The following year Baudelaire's novella La Fanfarlo was published. The Flowers of Evil Baudelaire was a slow and very attentive worker. However, he often was sidetracked by indolence, emotional distress and illness, and it was not until 1857 that he published Les Fleurs du mal (The Flowers of Evil), his first and most famous volume of poems. Some of these poems had already appeared in the Revue des deux mondes (Review of Two Worlds) in 1855, when they were published by Baudelaire's friend Auguste Poulet-Malassis. Some of the poems had appeared as "fugitive verse" in various French magazines during the previous decade. The poems found a small, yet appreciative audience. However, greater public attention was given to their subject matter. The effect on fellow artists was, as Théodore de Banville stated, "immense, prodigious, unexpected, mingled with admiration and with some indefinable anxious fear". Gustave Flaubert, recently attacked in a similar fashion for Madame Bovary (and acquitted), was impressed and wrote to Baudelaire: "You have found a way to rejuvenate Romanticism...You are as unyielding as marble, and as penetrating as an English mist." The principal themes of sex and death were considered scandalous for the period. 
He also touched on lesbianism, sacred and profane love, metamorphosis, melancholy, the corruption of the city, lost innocence, the oppressiveness of living, and wine. Notable in some poems is Baudelaire's use of imagery of the sense of smell and of fragrances, which is used to evoke feelings of nostalgia and past intimacy. The book, however, quickly became a byword for unwholesomeness among mainstream critics of the day. Some critics called a few of the poems "masterpieces of passion, art and poetry," but other poems were deemed to merit no less than legal action to suppress them. J. Habas led the charge against Baudelaire, writing in Le Figaro: "Everything in it which is not hideous is incomprehensible, everything one understands is putrid." Baudelaire responded to the outcry in a prophetic letter to his mother: "You know that I have always considered that literature and the arts pursue an aim independent of morality. Beauty of conception and style is enough for me. But this book, whose title (Fleurs du mal) says everything, is clad, as you will see, in a cold and sinister beauty. It was created with rage and patience. Besides, the proof of its positive worth is in all the ill that they speak of it. The book enrages people. Moreover, since I was terrified myself of the horror that I should inspire, I cut out a third from the proofs. They deny me everything, the spirit of invention and even the knowledge of the French language. I don't care a rap about all these imbeciles, and I know that this book, with its virtues and its faults, will make its way in the memory of the lettered public, beside the best poems of V. Hugo, Th. Gautier and even Byron." Baudelaire, his publisher and the printer were successfully prosecuted for creating an offense against public morals. They were fined, but Baudelaire was not imprisoned. Six of the poems were suppressed, but printed later as Les Épaves (The Wrecks) (Brussels, 1866). Another edition of Les Fleurs du mal, without these poems, but with considerable additions, appeared in 1861. Many notables rallied behind Baudelaire and condemned the sentence. Victor Hugo wrote to him: "Your fleurs du mal shine and dazzle like stars...I applaud your vigorous spirit with all my might." Baudelaire did not appeal the judgment, but his fine was reduced. Nearly 100 years later, on 11 May 1949, Baudelaire was vindicated, the judgment officially reversed, and the six banned poems reinstated in France. In the poem "Au lecteur" ("To the Reader") that prefaces Les Fleurs du mal, Baudelaire accuses his readers of hypocrisy and of being as guilty of sins and lies as the poet: ... If rape or arson, poison or the knife Has wove no pleasing patterns in the stuff Of this drab canvas we accept as life— It is because we are not bold enough! (Roy Campbell's translation) Final years Baudelaire next worked on a translation and adaptation of Thomas De Quincey's Confessions of an English Opium-Eater. 
Other works in the years that followed included Petits Poèmes en prose (Small Prose poems); a series of art reviews published in the Pays, Exposition universelle (Country, World Fair); studies on Gustave Flaubert (in L'Artiste, 18 October 1857); on Théophile Gautier (Revue contemporaine, September 1858); various articles contributed to Eugène Crépet's Poètes français; Les Paradis artificiels: opium et haschisch (French poets; Artificial Paradises: opium and hashish) (1860); and Un Dernier Chapitre de l'histoire des oeuvres de Balzac (A Final Chapter of the history of works of Balzac) (1880), originally an article "Comment on paye ses dettes quand on a du génie" ("How one pays one's debts when one has genius"), in which his criticism turns against his friends Honoré de Balzac, Théophile Gautier, and Gérard de Nerval. By 1859, his illnesses, his long-term use of laudanum, his life of stress, and his poverty had taken a toll and Baudelaire had aged noticeably. But at last, his mother relented and agreed to let him live with her for a while at Honfleur. Baudelaire was productive and at peace in the seaside town, his poem Le Voyage being one example of his efforts during that time. In 1860, he became an ardent supporter of Richard Wagner. His financial difficulties increased again, however, particularly after his publisher Poulet Malassis went bankrupt in 1861. In 1864, he left Paris for Belgium, partly in the hope of selling the rights to his works and to give lectures. His long-standing relationship with Jeanne Duval continued on-and-off, and he helped her to the end of his life. Baudelaire's relationships with actress Marie Daubrun and with courtesan Apollonie Sabatier, though the source of much inspiration, never produced any lasting satisfaction. He smoked opium, and in Brussels he began to drink to excess. Baudelaire suffered a massive stroke in 1866 and paralysis followed. After more than a year of aphasia, he received the last rites of the Catholic Church. The last two years of his life were spent in a semi-paralyzed state in various "maisons de santé" in Brussels and in Paris, where he died on 31 August 1867. Baudelaire is buried in the Cimetière du Montparnasse, Paris. Many of Baudelaire's works were published posthumously. After his death, his mother paid off his substantial debts, and she found some comfort in Baudelaire's emerging fame. "I see that my son, for all his faults, has his place in literature." She lived another four years. Poetry "Who among us has not dreamt, in moments of ambition, of the miracle of a poetic prose, musical without rhythm and rhyme, supple and staccato enough to adapt to the lyrical stirrings of the soul, the undulations of dreams, and sudden leaps of consciousness. This obsessive idea is above all a child of giant cities, of the intersecting of their myriad relations." (Dedication of Le Spleen de Paris) Baudelaire is one of the major innovators in French literature. His poetry is influenced by the French romantic poets of the earlier 19th century, although its attention to the formal features of verse connects it more closely to the work of the contemporary "Parnassians".
As for theme and tone, in his works we see the rejection of the belief in the supremacy of nature and the fundamental goodness of man as typically espoused by the romantics and expressed by them in rhetorical, effusive and public voice in favor of a new urban sensibility, an awareness of individual moral complexity, an interest in vice (linked with decadence) and refined sensual and aesthetic pleasures, and the use of urban subject matter, such as the city, the crowd, individual passers-by, all expressed in highly ordered verse, sometimes through a cynical and ironic voice. Formally, the use of sound to create atmosphere, and of "symbols" (images that take on an expanded function within the poem), betray a move towards considering the poem as a self-referential object, an idea further developed by the Symbolists Verlaine and Mallarmé, who acknowledge Baudelaire as a pioneer in this regard. Beyond his innovations in versification and the theories of symbolism and "correspondences", an awareness of which is essential to any appreciation of the literary value of his work, aspects of his work that regularly receive much critical discussion include the role of women, the theological direction of his work and his alleged advocacy of "satanism", his experience of drug-induced states of mind, the figure of the dandy, his stance regarding democracy and its implications for the individual, his response to the spiritual uncertainties of the time, his criticisms of the bourgeois, and his advocacy of modern music and painting (e.g., Wagner, Delacroix). He made Paris the subject of modern poetry. He brought the city's details to life in the eyes and hearts of his readers. Critiques Baudelaire was an active participant in the artistic life of his times. As critic and essayist, he wrote extensively and perceptively about the luminaries and themes of French culture. He was frank with friends and enemies, rarely took the diplomatic approach and sometimes responded violently verbally, which often undermined his cause. His associations were numerous, including Gustave Courbet, Honoré Daumier, Félicien Rops, Franz Liszt, Champfleury, Victor Hugo, Gustave Flaubert, and Balzac. Edgar Allan Poe In 1847, Baudelaire became acquainted with the works of Poe, in which he found tales and poems that had, he claimed, long existed in his own brain but never taken shape. Baudelaire saw in Poe a precursor and tried to be his French contemporary counterpart. From this time until 1865, he was largely occupied with translating Poe's works; his translations were widely praised. Baudelaire was not the first French translator of Poe, but his "scrupulous translations" were considered among the best. These were published as Histoires extraordinaires (Extraordinary stories) (1856), Nouvelles histoires extraordinaires (New extraordinary stories) (1857), Aventures d'Arthur Gordon Pym, Eureka, and Histoires grotesques et sérieuses (Grotesque and serious stories) (1865). Two essays on Poe are to be found in his Œuvres complètes (Complete works) (vols. v. and vi.). Eugène Delacroix A strong supporter of the Romantic painter Delacroix, Baudelaire called him "a poet in painting". Baudelaire also absorbed much of Delacroix's aesthetic ideas as expressed in his journals. As Baudelaire elaborated in his "Salon of 1846", "As one contemplates his series of pictures, one seems to be attending the celebration of some grievous mystery...This grave and lofty melancholy shines with a dull light.. plaintive and profound like a melody by Weber." 
Delacroix, though appreciative, kept his distance from Baudelaire, particularly after the scandal of Les Fleurs du mal. In private correspondence, Delacroix stated that Baudelaire "really gets on my nerves" and he expressed his unhappiness with Baudelaire's persistent comments about "melancholy" and "feverishness". Richard Wagner Baudelaire had no formal musical training, and knew little of composers beyond Beethoven and Weber. Weber was in some ways Wagner's precursor, using the leitmotif and conceiving the idea of the "total art work" ("Gesamtkunstwerk"), both of which gained Baudelaire's admiration. Before even hearing Wagner's music, Baudelaire studied reviews and essays about him, and formulated his impressions. Later, Baudelaire put them into his non-technical analysis of Wagner, which was highly regarded, particularly his essay "Richard Wagner et Tannhäuser à Paris". Baudelaire's reaction to music was passionate and psychological. "Music engulfs (possesses) me like the sea." After attending three Wagner concerts in Paris in 1860, Baudelaire wrote to the composer: "I had a feeling of pride and joy in understanding, in being possessed, in being overwhelmed, a truly sensual pleasure like that of rising in the air." Baudelaire's writings contributed to the elevation of Wagner and to the cult of Wagnerism that swept Europe in the following decades. Théophile Gautier Gautier, writer and poet, earned Baudelaire's respect for his perfection of form and his mastery of language, though Baudelaire thought he lacked deeper emotion and spirituality. Both strove to express the artist's inner vision, which Heinrich Heine earlier stated: "In artistic matters, I am a supernaturalist. I believe that the artist can not find all his forms in nature, but that the most remarkable are revealed to him in his soul." Gautier's frequent meditations on death and the horror of life are themes which influenced Baudelaire's writings. In gratitude for their friendship and commonality of vision, Baudelaire dedicated Les Fleurs du mal to Gautier. Édouard Manet Manet and Baudelaire became constant companions from around 1855. In the early 1860s, Baudelaire accompanied Manet on daily sketching trips and often met him socially. Manet also lent Baudelaire money and looked after his affairs, particularly when Baudelaire went to Belgium. Baudelaire encouraged Manet to strike out on his own path and not succumb to criticism. "Manet has great talent, a talent which will stand the test of time. But he has a weak character. He seems to me crushed and stunned by shock." In his painting Music in the Tuileries, Manet includes portraits of his friends Théophile Gautier, Jacques Offenbach, and Baudelaire. While it's difficult to differentiate who influenced whom, both Manet and Baudelaire discussed and expressed some common themes through their respective arts. Baudelaire praised the modernity of Manet's subject matter: "almost all our originality comes from the stamp that 'time' imprints upon our feelings." When Manet's famous Olympia (1865), a portrait of a nude prostitute, provoked a scandal for its blatant realism mixed with an imitation of Renaissance motifs, Baudelaire worked privately to support his friend, though he offered no public defense (he was, however, ill at the time). When Baudelaire returned from Belgium after his stroke, Manet and his wife were frequent visitors at the nursing home and she played passages from Wagner for Baudelaire on the piano. 
Nadar Nadar (Félix Tournachon) was a noted caricaturist, scientist and important early photographer. Baudelaire admired Nadar, one of his close friends, and wrote: "Nadar is the most amazing manifestation of vitality." They moved in similar circles and Baudelaire made many social connections through him. Nadar's ex-mistress Jeanne Duval became Baudelaire's mistress around 1842. Baudelaire became interested in photography in the 1850s, and denouncing it as an art form, advocated its return to "its real purpose, which is that of being the servant to the sciences and arts". Photography should not, according to Baudelaire, encroach upon "the domain of the impalpable and the imaginary". Nadar remained a stalwart friend right to Baudelaire's last days and wrote his obituary notice in Le Figaro. Philosophy Many of Baudelaire's philosophical proclamations were considered scandalous and intentionally provocative in his time. He wrote on a wide range of subjects, drawing criticism and outrage from many quarters. Along with Poe, Baudelaire named the arch-reactionary Joseph de Maistre as his maître à penser and adopted increasingly aristocratic views. In his journals, he wrote "There is no form of rational and assured government save an aristocracy. […] There are but three beings worthy of respect: the priest, the warrior and the poet. To know, to kill and to create. The rest of mankind may be taxed and drudged, they are born for the stable, that is to say, to practise what they call professions." Influence and legacy Baudelaire's influence on the direction of modern French (and English) language literature was considerable. The most significant French writers to come after him were generous with tributes; four years after his death, Arthur Rimbaud praised him in a letter as "the king of poets, a true God". In 1895, Stéphane Mallarmé published "Le Tombeau de Charles Baudelaire", a sonnet in Baudelaire's memory. Marcel Proust, in an essay published in 1922, stated that, along with Alfred de Vigny, Baudelaire was "the greatest poet of the nineteenth century". In the English-speaking world, Edmund Wilson credited Baudelaire as providing an initial impetus for the Symbolist movement by virtue of his translations of Poe. In 1930, T. S. Eliot, while asserting that Baudelaire had not yet received a "just appreciation" even in France, claimed that the poet had "great genius" and asserted that his "technical mastery which can hardly be overpraised...has made his verse an inexhaustible study for later poets, not only in his own language". In a lecture delivered in French on "Edgar Allan Poe and France" (Edgar Poe et la France) in Aix-en-Provence in April 1948, Eliot stated that "I am an English poet of American origin who learnt his art under the aegis of Baudelaire and the Baudelairian lineage of poets." Eliot also alluded to Baudelaire's poetry directly in his own poetry. For example, he quoted the last line of Baudelaire's "Au Lecteur" in the last line of Section I of The Waste Land. At the same time that Eliot was affirming Baudelaire's importance from a broadly conservative and explicitly Christian viewpoint, left-wing critics such as Wilson and Walter Benjamin were able to do so from a dramatically different perspective. Benjamin translated Baudelaire's Tableaux Parisiens into German and published a major essay on translation as the foreword. 
In the late 1930s, Benjamin used Baudelaire as a starting point and focus for Das Passagenwerk, his monumental attempt at a materialist assessment of 19th-century culture. For Benjamin, Baudelaire's importance lay in his anatomies of the crowd, of the city and of modernity. He says that, in Les Fleurs du mal, "the specific devaluation of the world of things, as manifested in the commodity, is the foundation of Baudelaire's allegorical intention." François Porche published a poetry collection called Charles Baudelaire: Poetry Collection in memory of Baudelaire. The novel A Singular Conspiracy (1974) by Barry Perowne is a fictional treatment of the unaccounted period in Edgar Allan Poe's life from January to May 1844, in which (among other things) Poe becomes involved with a young Baudelaire in a plot to expose Baudelaire's stepfather to blackmail, in order to free up Baudelaire's patrimony. Vanderbilt University has "assembled one of the world's most comprehensive research collections on...Baudelaire". Les Fleurs du mal has a number of scholarly references. Works Salon de 1845, 1845 Salon de 1846, 1846 La Fanfarlo, 1847 Les Fleurs du mal, 1857 Les paradis artificiels, 1860 Réflexions sur Quelques-uns de mes Contemporains, 1861 Le Peintre de la Vie Moderne, 1863 Curiosités Esthétiques, 1868 L'art romantique, 1868 Le Spleen de Paris, 1869. Paris Spleen (Contra Mundum Press: 2021) Translations from Charles Baudelaire, 1869 (Early English translation of several of Baudelaire's poems, by Richard Herne Shepherd) Oeuvres Posthumes et Correspondance Générale, 1887–1907 Fusées, 1897 Mon Coeur Mis à Nu, 1897. My Heart Laid Bare & Other Texts (Contra Mundum Press: 2017; 2020) Oeuvres Complètes, 1922–53 (19 vols.) Mirror of Art, 1955 The Essence of Laughter, 1956 Curiosités Esthétiques, 1962 The Painter of Modern Life and Other Essays, 1964 Baudelaire as a Literary Critic, 1964 Arts in Paris 1845–1862, 1965 Selected Writings on Art and Artists, 1972 Selected Letters of Charles Baudelaire, 1986 Twenty Prose Poems, 1988 Critique d'art; Critique musicale, 1992 Belgium Stripped Bare (Contra Mundum Press: 2019) Musical adaptations French composer Claude Debussy set five of Baudelaire's poems to music in 1890: Cinq poèmes de Charles Baudelaire (Le Balcon, Harmonie du soir, Le Jet d'eau, Recueillement and La Mort des amants). French composer Henri Duparc set two of Baudelaire's poems to music: "L'Invitation au voyage" in 1870, and "La vie antérieure" in 1884. English composer Mark-Anthony Turnage composed settings of two of Baudelaire's poems, "Harmonie du soir" and "L'Invitation au voyage", for soprano and seven instruments. American electronic musician Ruth White recorded some of Baudelaire's poems in Les Fleurs du Mal as chants over electronic music in a 1969 recording, Flowers of Evil. French singer-songwriter Léo Ferré devoted himself to setting Baudelaire's poetry to music in three albums: Les Fleurs du mal in 1957 (12 poems), Léo Ferré chante Baudelaire in 1967 (24 poems, including one from Le Spleen de Paris), and the posthumous Les Fleurs du mal (suite et fin) (21 poems), recorded in 1977 but released in 2008. Soviet/Russian composer David Tukhmanov has set Baudelaire's poem to music (cult album On a Wave of My Memory, 1975).
American avant-garde composer, vocalist and performer Diamanda Galás made an interpretation in original French of Les Litanies de Satan from Les Fleurs du mal, in her debut album titled The Litanies of Satan, which consists of tape and electronics effects with layers of her voice. French singer David TMX recorded the poems "Lesbos" and "Une Charogne" from The Flowers of Evil. French metal/shoegaze groups Alcest and Amesoeurs used his poetry for the lyrics of the tracks "Élévation" (on Le Secret) and "Recueillement" (on Amesoeurs), respectively. Celtic Frost used his poem Tristesses de la lune as a lyrics for song on album Into the Pandemonium. French Black Metal bands Mortifera and Peste Noire used Baudelaire's poems as lyrics for the songs "Le revenant" and "Ciel brouillé" (on Vastiia Tenebrd Mortifera by Mortifera) and "Le mort joyeux" and "Spleen" (on La Sanie des siècles – Panégyrique de la dégénérescence by Peste Noire) Israeli singer Maor Cohen's 2005 album, the Hebrew name of which translates to French as "Les Fleurs du Mal", is a compilation of songs from Baudelaire's book of the same name. The texts were translated to Hebrew by Israeli poet Dori Manor, and the music was composed by Cohen. Italian singer Franco Battiato set Invitation au voyage to music as Invito Al Viaggio on his 1999 album Fleurs (Esempi Affini Di Scritture E Simili). American composer Gérard Pape set Tristesses de la lune/Sorrows of the Moon from Fleurs du Mal for voice and electronic tape. French band Marc Seberg wrote an adaptation of Recueillement for their 1985 album Le Chant Des Terres. Dutch composer Marjo Tal set several of Baudelaire’s poems to music. Russian heavy metal band Black Obelisk used Russian translations of several Baudelaire poems as lyrics for their songs. French singer Mylène Farmer performed "L'Horloge" to music by Laurent Boutonnat on the album Ainsi soit je and the opening number of her 1989 concert tour. On her latest album "Désobéissance" (2018) she recorded Baudelaire's preface to "Les Fleurs du Mal", "Au lecteur". The French journalist Hugues Royer mentioned several allusions and interpretations of Baudelaire's poems and quotations used by Farmer in various songs in his book "Mylène" (published in 2008). In 2009 the Italian rock band C.F.F. e il Nomade Venerabile released Un jour noir, a song inspired by Spleen, contained in the album Lucidinervi (Otium Records / Compagnia Nuove Indye). The video clip is available on YouTube. German aggrotech band C-Drone-Defect used the translation of "Le Rebelle" by Roy Campbell as lyrics for the song "Rebellis" on their 2009 album Dystopia. English rock band The Cure used the translation of "Les yeux des pauvres" as lyrics for the song "How Beautiful You Are". French singer-songwriter and musician Serge Gainsbourg has set Baudelaire's poem "Dancing Snake" (Le serpent qui danse) to music in his 1964 song "Baudelaire". Greek black metal band Rotting Christ adapted Baudelaire's poem "Les Litanies De Satan" from Fleurs du Mal for a song of the same name in their 2016 album Rituals. Belgian female-fronted band Exsangue released the debut video for the single "A une Malabaraise", and the lyrics are based on Baudelaire's same-named sonnet in 2016. Belgian electronic music band Modern Cubism has released two albums where poems of Baudelaire are used as lyrics, Les Plaintes d’un Icare in 2008, and live album Live Complaints in 2010. American rapper Tyler, the Creator released his album Call Me If You Get Lost in 2021. 
Throughout the album, Tyler, the Creator refers to himself as "Tyler Baudelaire". Canadian singer-songwriter Pierre Lapointe set Baudelaire's poem "Le serpent qui danse" to music on his 2022 album L'heure mauve. See also Épater la bourgeoisie References Notes Sources External links Charles Baudelaire's Cats The Baudelaire Song Project – site of The Baudelaire Song Project, a UK-based AHRC-funded academic project examining song settings of Baudelaire's poetry Twilight to Dawn: Charles Baudelaire – Cordite Poetry Review www.baudelaire.cz – largest Internet site dedicated to Charles Baudelaire. Poems and prose are available in English, French and Czech. Charles Baudelaire – site dedicated to Baudelaire's poems and prose, containing Fleurs du mal, Petit poemes et prose, Fanfarlo and more in French Charles Baudelaire International Association Nikolas Kompridis on Baudelaire's poetry, art, and the "memory of loss" (Flash/HTML5) baudelaireetbengale.blogspot.com – the influence of Baudelaire on Bengali poetry Harmonie du soir – Tina Noiret Online texts Charles Baudelaire – largest site dedicated to Baudelaire's poems and prose, containing Fleurs du mal, Petit poemes et prose, Fanfarlo and more in French Poems by Charles Baudelaire – selected works at Poetry Archive Baudelaire's poems at Poems Found in Translation Baudelaire – Eighteen Poems "baudelaire in english", Onedit.net – Sean Bonney's experimental translations of Baudelaire (humor) Works by Charles Baudelaire Baudelaire par ses Amis Single works FleursDuMal.org – Definitive online presentation of Fleurs du mal'', featuring the original French alongside multiple English translations An illustrated version (8 Mb) of Les Fleurs du Mal, 1861 edition (Charles Baudelaire / une édition illustrée par inkwatercolor.com) "The Rebel" – poem by Baudelaire Les Foules (The Crowds) – English translation 1821 births 1867 deaths 19th-century French journalists French male journalists 19th-century French poets 19th-century French translators English–French translators Translators of Edgar Allan Poe Burials at Montparnasse Cemetery Deaths from syphilis Decadent literature French art critics Lycée Louis-le-Grand alumni Obscenity controversies in literature Poètes maudits French psychedelic drug advocates Sonneteers Symbolist poets Writers from Paris 19th-century male writers Philosophical pessimists
2,611
5,810
https://en.wikipedia.org/wiki/Classical%20guitar
Classical guitar
The classical guitar (also known as the nylon-string guitar or Spanish guitar) is a member of the guitar family used in classical music and other styles. An acoustic wooden string instrument with strings made of gut or nylon, it is a precursor of the modern acoustic and electric guitars, both of which use metal strings. Classical guitars derive from the Spanish vihuela and gittern of the fifteenth and sixteenth century. Those instruments evolved into the seventeenth and eighteenth-century baroque guitar—and by the mid-nineteenth century, early forms of the modern classical guitar. For a right-handed player, the traditional classical guitar has twelve frets clear of the body and is properly held up by the left leg, so that the hand that plucks or strums the strings does so near the back of the sound hole (this is called the classical position). However, the right-hand may move closer to the fretboard to achieve different tonal qualities. The player typically holds the left leg higher by the use of a foot rest. The modern steel string guitar, on the other hand, usually has fourteen frets clear of the body (see Dreadnought) and is commonly held with a strap around the neck and shoulder. The phrase "classical guitar" may refer to either of two concepts other than the instrument itself: The instrumental finger technique common to classical guitar—individual strings plucked with the fingernails or, less frequently, fingertips The instrument's classical music repertoire The term modern classical guitar sometimes distinguishes the classical guitar from older forms of guitar, which are in their broadest sense also called classical, or more specifically, early guitars. Examples of early guitars include the six-string early romantic guitar (c. 1790–1880), and the earlier baroque guitars with five courses. The materials and the methods of classical guitar construction may vary, but the typical shape is either modern classical guitar or that historic classical guitar similar to the early romantic guitars of France and Italy. Classical guitar strings once made of gut are now made of materials such as nylon or fluoropolymers, typically with silver-plated copper fine wire wound about the acoustically lower (d-A-E in standard tuning) strings. A guitar family tree may be identified. The flamenco guitar derives from the modern classical, but has differences in material, construction and sound. Today's modern classical guitar was established by the late designs of the 19th-century Spanish luthier, Antonio Torres Jurado. Contexts The classical guitar has a long history and one is able to distinguish various: instruments repertoire (composers and their compositions, arrangements, improvisations) Both instrument and repertoire can be viewed from a combination of various perspectives: Historical (chronological period of time) Baroque guitar – 1600 to 1750 Early romantic guitars – 1750 to 1850 (for music from the Classical and Romantic periods) Modern classical guitars Geographical Spanish guitars (Torres) and French guitars (René Lacôte, ...), etc. Cultural Baroque court music, nineteenth-century opera and its influences, nineteenth-century folk songs, Latin American music Historical perspective Early guitars While "classical guitar" is today mainly associated with the modern classical guitar design, there is an increasing interest in early guitars; and understanding the link between historical repertoire and the particular period guitar that was originally used to perform this repertoire. 
The musicologist and author Graham Wade writes: Nowadays it is customary to play this repertoire on reproductions of instruments authentically modelled on concepts of musicological research with appropriate adjustments to techniques and overall interpretation. Thus over recent decades we have become accustomed to specialist artists with expertise in the art of vihuela (a 16th-century type of guitar popular in Spain), lute, Baroque guitar, 19th-century guitar, etc. Different types of guitars have different sound aesthetics, e.g. different colour-spectrum characteristics (the way the sound energy is spread in the fundamental frequency and the overtones), different response, etc. These differences are due to differences in construction; for example, modern classical guitars usually use a different bracing (fan-bracing) from that used in earlier guitars (they had ladder-bracing); and a different voicing was used by the luthier. There is a historical parallel between musical styles (baroque, classical, romantic, flamenco, jazz) and the style of "sound aesthetic" of the musical instruments used, for example: Robert de Visée played a baroque guitar with a very different sound aesthetic from the guitars used by Mauro Giuliani and Luigi Legnani – they used 19th-century guitars. These guitars in turn sound different from the Torres models used by Segovia that are suited for interpretations of romantic-modern works such as Moreno Torroba. When considering the guitar from a historical perspective, the musical instrument used is as important as the musical language and style of the particular period. As an example: It is impossible to play a historically informed de Visee or Corbetta (baroque guitarist-composers) on a modern classical guitar. The reason is that the baroque guitar used courses, which are two strings close together (in unison), that are plucked together. This gives baroque guitars an unmistakable sound characteristic and tonal texture that is an integral part of an interpretation. Additionally, the sound aesthetic of the baroque guitar (with its strong overtone presence) is very different from modern classical type guitars, as is shown below. Today's use of Torres and post-Torres type guitars for repertoire of all periods is sometimes critically viewed: Torres and post-Torres style modern guitars (with their fan-bracing and design) have a thick and strong tone, very suitable for modern-era repertoire. However, they are considered to emphasize the fundamental too heavily (at the expense of overtone partials) for earlier repertoire (Classical/Romantic: Carulli, Sor, Giuliani, Mertz, ...; Baroque: de Visee, ...; etc.). "Andrés Segovia presented the Spanish guitar as a versatile model for all playing styles" to the extent, that still today, "many guitarists have tunnel-vision of the world of the guitar, coming from the modern Segovia tradition". While fan-braced modern classical Torres and post-Torres style instruments coexisted with traditional ladder-braced guitars at the beginning of the 20th century, the older forms eventually fell away. Some attribute this to the popularity of Segovia, considering him "the catalyst for change toward the Spanish design and the so-called 'modern' school in the 1920s and beyond." The styles of music performed on ladder-braced guitars were becoming unfashionable—and, e.g., in Germany, more musicians were turning towards folk music (Schrammel-music and the Contraguitar). This was localized in Germany and Austria and became unfashionable again. 
On the other hand, Segovia was playing concerts around the world, popularizing modern classical guitar—and, in the 1920s, Spanish romantic-modern style with guitar works by Moreno Torroba, de Falla, etc. The 19th-century classical guitarist Francisco Tárrega first popularized the Torres design as a classical solo instrument. However, some maintain that Segovia's influence led to its domination over other designs. Factories around the world began producing them in large numbers. Characteristics Vihuela, renaissance guitars and baroque guitars have a bright sound, rich in overtones, and their courses (double strings) give the sound a very particular texture. Early guitars of the classical and romantic period (early romantic guitars) have single strings, but their design and voicing are still such that they have their tonal energy more in the overtones (but without starved fundamental), giving a bright intimate tone. Later in Spain a style of music emerged that favoured a stronger fundamental:"With the change of music a stronger fundamental was demanded and the fan bracing system was approached. ... the guitar tone has been changed from a transparent tone, rich in higher partials to a more 'broad' tone with a strong fundamental." Thus modern guitars with fan bracing (fan strutting) have a design and voicing that gives them a thick, heavy sound, with far more tonal energy found in the fundamental. Style periods Renaissance Composers of the Renaissance period who wrote for four-course guitar include Alonso Mudarra, Miguel de Fuenllana, Adrian Le Roy, , Guillaume de Morlaye, and . Instrument Four-course guitar Baroque Some well known composers of the Baroque guitar were Gaspar Sanz, Robert de Visée, Francesco Corbetta and Santiago de Murcia. Examples of instruments Baroque guitar by Nicolas Alexandre Voboam II: This French instrument has the typical design of the period with five courses of double-strings and a flat back. Baroque guitar attributed to Matteo Sellas : This Italian instrument has five courses and a rounded back. Classical and romantic From approximately 1780 to 1850, the guitar had numerous composers and performers including: Filippo Gragnani (1767–1820) Antoine de Lhoyer (1768–1852) Ferdinando Carulli (1770–1841) Wenzel Thomas Matiegka (1773–1830) Francesco Molino (1774–1847) Fernando Sor (1778–1839) (c. 1780–1850) Mauro Giuliani (1781–1829) Niccolò Paganini (1782–1840) Dionisio Aguado (1784–1849) Luigi Legnani (1790–1877) Matteo Carcassi (1792–1853) Napoléon Coste (1805–1883) Johann Kaspar Mertz (1806–1856) Giulio Regondi (1822–1872) Hector Berlioz studied the guitar as a teenager; Franz Schubert owned at least two and wrote for the instrument; and Ludwig van Beethoven, after hearing Giuliani play, commented the instrument was "a miniature orchestra in itself". Niccolò Paganini was also a guitar virtuoso and composer. He once wrote: "I love the guitar for its harmony; it is my constant companion in all my travels". He also said, on another occasion: "I do not like this instrument, but regard it simply as a way of helping me to think." Francisco Tárrega The guitarist and composer Francisco Tárrega (November 21, 1852 – December 15, 1909) was one of the great guitar virtuosos and teachers and is considered the father of modern classical guitar playing. As a professor of guitar at the conservatories of Madrid and Barcelona, he defined many elements of the modern classical technique and elevated the importance of the guitar in the classical music tradition. 
Modern period At the beginning of the 1920s, Andrés Segovia popularized the guitar with tours and early phonograph recordings. Segovia collaborated with the composers Federico Moreno Torroba and Joaquín Turina with the aim of extending the guitar repertoire with new music. Segovia's tour of South America revitalized public interest in the guitar and helped the guitar music of Manuel Ponce and Heitor Villa-Lobos reach a wider audience. The composers Alexandre Tansman and Mario Castelnuovo-Tedesco were commissioned by Segovia to write new pieces for the guitar. Luiz Bonfá popularized Brazilian musical styles such as the newly created Bossa Nova, which was well-received by audiences in the USA. "New music" – avant-garde The classical guitar repertoire also includes modern contemporary works – sometimes termed "New Music" – such as Elliott Carter's Changes, Cristóbal Halffter's Codex I, Luciano Berio's Sequenza XI, Maurizio Pisati's Sette Studi, Maurice Ohana's Si Le Jour Paraît, Sylvano Bussotti's Rara (eco sierologico), Ernst Krenek's Suite für Guitarre allein, Op. 164, Franco Donatoni's Algo: Due pezzi per chitarra, Paolo Coggiola's Variazioni Notturne, etc. Performers who are known for including modern repertoire include Jürgen Ruck, Elena Càsoli, Leo Brouwer (when he was still performing), John Schneider, Reinbert Evers, Maria Kämmerling, Siegfried Behrend, David Starobin, Mats Scheidegger, Magnus Andersson, etc. This type of repertoire is usually performed by guitarists who have particularly chosen to focus on the avant-garde in their performances. Within the contemporary music scene itself, there are also works which are generally regarded as extreme. These include works such as Brian Ferneyhough's Kurze Schatten II, Sven-David Sandström's away from and Rolf Riehm's Toccata Orpheus etc. which are notorious for their extreme difficulty. There are also a variety of databases documenting modern guitar works such as Sheer Pluck and others. Background The evolution of the classical guitar and its repertoire spans more than four centuries. It has a history that was shaped by contributions from earlier instruments, such as the lute, the vihuela, and the baroque guitar. History Overview of the classical guitar's history The origins of the modern guitar are not known with certainty. Some believe it is indigenous to Europe, while others think it is an imported instrument. Guitar-like instruments appear in ancient carvings and statues recovered from Egyptian, Sumerian, and Babylonian civilizations. This means that contemporary Iranian instruments such as the tanbur and setar are distantly related to the European guitar, as they all derive ultimately from the same ancient origins, but by very different historical routes and influences. During the late Middle Ages, gitterns called "guitars" were in use, but their construction and tuning were different from modern guitars. The guitarra latina in Spain had curved sides and a single hole. The guitarra morisca, which appears to have had Moorish influences, had an oval soundbox and many sound holes on its soundboard. By the 15th century, a four-course double-string instrument called the vihuela de mano, which was tuned like the later modern guitar except on one string and similar construction, first appeared in Spain and spread to France and Italy. In the 16th century, a fifth double-string was added. During this time, composers wrote mostly in tablature notation. 
In the middle of the 16th century, influences from the vihuela and the Renaissance guitar were combined and the baroque five-string guitar appeared in Spain. The baroque guitar quickly superseded the vihuela in popularity in Spain, France and Italy and Italian players and composers became prominent. In the late 18th century the six-string guitar quickly became popular at the expense of the five-string guitars. During the 19th century, the Spanish luthier and player Antonio de Torres gave the modern classical guitar its definitive form, with a broadened body, increased waist curve, thinned belly, and improved internal bracing. The modern classical guitar replaced an older form for the accompaniment of song and dance called flamenco, and a modified version, known as the flamenco guitar, was created. Renaissance guitar Alonso de Mudarra's book Tres Libros de Música, published in Spain in 1546, contains the earliest known written pieces for a four-course guitarra. This four-course "guitar" was popular in France, Spain, and Italy. In France this instrument gained popularity among aristocrats. A considerable volume of music was published in Paris from the 1550s to the 1570s: Simon Gorlier's Le Troysième Livre... mis en tablature de Guiterne was published in 1551. In 1551 Adrian Le Roy also published his Premier Livre de Tablature de Guiterne, and in the same year he also published Briefve et facile instruction pour apprendre la tablature a bien accorder, conduire, et disposer la main sur la Guiterne. Robert Ballard, Grégoire Brayssing from Augsburg, and Guillaume Morlaye (c. 1510 – c. 1558) significantly contributed to its repertoire. Morlaye's Le Premier Livre de Chansons, Gaillardes, Pavannes, Bransles, Almandes, Fantasies – which has a four-course instrument illustrated on its title page – was published in partnership with Michel Fedenzat, and among other music, they published six books of tablature by lutenist Albert de Rippe (who was very likely Guillaume's teacher). Vihuela The written history of the classical guitar can be traced back to the early 16th century with the development of the vihuela in Spain. While the lute was then becoming popular in other parts of Europe, the Spaniards did not take to it well because of its association with the Moors. Instead, the lute-like vihuela appeared with two more strings that gave it more range and complexity. In its most developed form, the vihuela was a guitar-like instrument with six double strings made of gut, tuned like a modern classical guitar with the exception of the third string, which was tuned half a step lower. It has a high sound and is rather large to hold. Few have survived and most of what is known today come from diagrams and paintings. Baroque guitar "Early romantic guitar" or "Guitar during the Classical music era" The earliest extant six-string guitar is believed to have been built in 1779 by Gaetano Vinaccia (1759 – after 1831) in Naples, Italy; however, the date on the label is a little ambiguous. The Vinaccia family of luthiers is known for developing the mandolin. This guitar has been examined and does not show tell-tale signs of modifications from a double-course guitar. The authenticity of guitars allegedly produced before the 1790s is often in question. This also corresponds to when Moretti's 6-string method appeared, in 1792. 
Modern classical guitar The modern classical guitar (also known as the "Spanish guitar"), the immediate forerunner of today's guitars, was developed in the 19th century by Antonio de Torres Jurado, Ignacio Fleta, Hermann Hauser Sr., and Robert Bouchet. Technique The fingerstyle is used fervently on the modern classical guitar. The thumb traditionally plucks the bass – or root note – whereas the fingers ring the melody and its accompanying parts. Often classical guitar technique involves the use of the nails of the right hand to pluck the notes. Noted players were: Francisco Tárrega, Emilio Pujol, Andrés Segovia, Julian Bream, Agustín Barrios, and John Williams (guitarist). Performance The modern classical guitar is usually played in a seated position, with the instrument resting on the left lap – and the left foot placed on a footstool. Alternatively – if a footstool is not used – a guitar support can be placed between the guitar and the left lap (the support usually attaches to the instrument's side with suction cups). (There are of course exceptions, with some performers choosing to hold the instrument another way.) Right-handed players use the fingers of the right hand to pluck the strings, with the thumb plucking from the top of a string downwards (downstroke) and the other fingers plucking from the bottom of the string upwards (upstroke). The little finger in classical technique as it evolved in the 20th century is used only to ride along with the ring finger without striking the strings and to thus physiologically facilitate the ring finger's motion. In contrast, Flamenco technique, and classical compositions evoking Flamenco, employ the little finger semi-independently in the Flamenco four-finger rasgueado, that rapid strumming of the string by the fingers in reverse order employing the back of the fingernail—a familiar characteristic of Flamenco. Flamenco technique, in the performance of the rasgueado also uses the upstroke of the four fingers and the downstroke of the thumb: the string is hit not only with the inner, fleshy side of the fingertip but also with the outer, fingernail side. This was also used in a technique of the vihuela called dedillo which has recently begun to be introduced on the classical guitar. Some modern guitarists, such as Štěpán Rak and Kazuhito Yamashita, use the little finger independently, compensating for the little finger's shortness by maintaining an extremely long fingernail. Rak and Yamashita have also generalized the use of the upstroke of the four fingers and the downstroke of the thumb (the same technique as in the rasgueado of the Flamenco: as explained above the string is hit not only with the inner, fleshy side of the fingertip but also with the outer, fingernail side) both as a free stroke and as a rest stroke. Direct contact with strings As with other plucked instruments (such as the lute), the musician directly touches the strings (usually plucking) to produce the sound. This has important consequences: Different tone/timbre (of a single note) can be produced by plucking the string in different manners (apoyando or tirando) and in different positions (such as closer and further away from the guitar bridge). For example, plucking an open string will sound brighter than playing the same note(s) on a fretted position (which would have a warmer tone). The instrument's versatility means it can create a variety of tones, but this finger-picking style also makes the instrument harder to learn than a standard acoustic guitar's strumming technique. 
Fingering notation In guitar scores the five fingers of the right hand (which pluck the strings) are designated by the first letter of their Spanish names, namely p = thumb (pulgar), i = index finger (índice), m = middle finger (mayor), a = ring finger (anular), c = little finger or pinky (meñique/chiquito). The four fingers of the left hand (which fret the strings) are designated 1 = index, 2 = middle, 3 = ring finger, 4 = little finger. 0 designates an open string—a string not stopped by a finger and whose full length thus vibrates when plucked. It is rare to use the left-hand thumb in performance, the neck of a classical guitar being too wide for comfort, and normal technique keeps the thumb behind the neck. However, Johann Kaspar Mertz, for example, is notable for specifying the thumb to fret bass notes on the sixth string, notated with an up arrowhead (⌃). Scores (unlike tablatures) do not systematically indicate the string to pluck (though the choice is usually obvious). When indicating the string is useful, the score uses the numbers 1 to 6 inside circles (highest-pitched string to lowest). Scores don't systematically indicate fretboard positions (where to put the first finger of the fretting hand), but when helpful (mostly with barré chords) the score indicates positions with Roman numerals, from the first position I (index finger on the 1st fret: F-B flat-E flat-A flat-C-F) to the twelfth position XII (index finger on the 12th fret: E-A-D-G-B-E; the 12th fret is where the body begins) or even higher, up to position XIX (the classical guitar most often having 19 frets, with the 19th fret being most often split and not usable to fret the 3rd and 4th strings). Alternation To achieve tremolo effects and rapid, fluent scale passages, the player must practice alternation, that is, never plucking a string with the same finger twice in a row. Using p to indicate the thumb, i the index finger, m the middle finger and a the ring finger, common alternation patterns include: i-m-i-m : Basic melody line on the treble strings. Has the appearance of "walking along the strings". This is often used for playing scale passages. p-i-m-a-i-m-a : Arpeggio pattern example. However, there are many arpeggio patterns incorporated into the classical guitar repertoire. p-a-m-i-p-a-m-i : Classical guitar tremolo pattern. p-m-p-m : A way of playing a melody line on the lower strings. Repertoire Music written specifically for the classical guitar dates from the addition of the sixth string (the baroque guitar normally had five pairs of strings) in the late 18th century. A guitar recital may include a variety of works, e.g., works written originally for the lute or vihuela by composers such as John Dowland (b. England 1563) and Luis de Narváez (b. Spain c. 1500), and also music written for the harpsichord by Domenico Scarlatti (b. Italy 1685), for the baroque lute by Sylvius Leopold Weiss (b. Germany 1687), for the baroque guitar by Robert de Visée (b. France c. 1650) or even Spanish-flavored music written for the piano by Isaac Albéniz (b. Spain 1860) and Enrique Granados (b. Spain 1867). The most important composer who did not write for the guitar but whose music is often played on it is Johann Sebastian Bach (b. Germany 1685), whose baroque lute works have proved highly adaptable to the instrument. Of music written originally for guitar, the earliest important composers are from the classical period and include Fernando Sor (b. Spain 1778) and Mauro Giuliani (b. 
Italy 1781), both of whom wrote in a style strongly influenced by Viennese classicism. In the 19th-century guitar composers such as Johann Kaspar Mertz (b. Slovakia, Austria 1806) were strongly influenced by the dominance of the piano. Not until the end of the nineteenth century did the guitar begin to establish its own unique identity. Francisco Tárrega (b. Spain 1852) was central to this, sometimes incorporating stylized aspects of flamenco's Moorish influences into his romantic miniatures. This was part of late 19th century mainstream European musical nationalism. Albéniz and Granados were central to this movement; their evocation of the guitar was so successful that their compositions have been absorbed into the standard guitar repertoire. The steel-string and electric guitars characteristic to the rise of rock and roll in the post-WWII era became more widely played in North America and the English-speaking world. Agustín Barrios Mangoré of Paraguay composed many works and brought into the mainstream the characteristics of Latin American music, as did the Brazilian composer Heitor Villa-Lobos. Andrés Segovia commissioned works from Spanish composers such as Federico Moreno Torroba and Joaquín Rodrigo, Italians such as Mario Castelnuovo-Tedesco and Latin American composers such as Manuel Ponce of Mexico. Other prominent Latin American composers are Leo Brouwer of Cuba, Antonio Lauro of Venezuela and Enrique Solares of Guatemala. Julian Bream of Britain managed to get nearly every British composer from William Walton and Benjamin Britten to Peter Maxwell Davies to write significant works for guitar. Bream's collaborations with tenor Peter Pears also resulted in song cycles by Britten, Lennox Berkeley and others. There are significant works by composers such as Hans Werner Henze of Germany, Gilbert Biberian of England and Roland Chadwick of Australia. The classical guitar also became widely used in popular music and rock & roll in the 1960s after guitarist Mason Williams popularized the instrument in his instrumental hit Classical Gas. Guitarist Christopher Parkening is quoted in the book Classical Gas: The Music of Mason Williams as saying that it is the most requested guitar piece besides Malagueña and perhaps the best-known instrumental guitar piece today. In the field of New Flamenco, the works and performances of Spanish composer and player Paco de Lucía are known worldwide. Not many classical guitar concertos were written through history. Nevertheless, some guitar concertos are nowadays widely known and popular, especially Joaquín Rodrigo's Concierto de Aranjuez (with the famous theme from 2nd movement) and Fantasía para un gentilhombre. Composers, who also wrote famous guitar concertos are: Antonio Vivaldi (originally for mandolin or lute), Mauro Giuliani, Heitor Villa-Lobos, Mario Castelnuovo-Tedesco, Manuel Ponce, Leo Brouwer, Lennox Berkeley and Malcolm Arnold. Nowadays, more and more contemporary composers decide to write a guitar concerto, among them Bosco Sacro by Federico Biscione, for guitar and string orchestra, is one of the most inspired. Physical characteristics The classical guitar is distinguished by a number of characteristics: It is an acoustic instrument. The sound of the plucked string is amplified by the soundboard and resonant cavity of the guitar. It has six strings, though some classical guitars have seven or more strings. All six strings are made from nylon, or nylon wrapped with metal, as opposed to the metal strings found on other acoustic guitars. 
Nylon strings also have a much lower tension than steel strings, as do the predecessors to nylon strings, gut strings (made from ox or sheep gut). The lower three strings ('bass strings') are wound with metal, commonly silver-plated copper. Because of the low string tension, the neck can be made entirely of wood without a steel truss rod, and the interior bracing can be lighter. Typical modern six-string classical guitars are 48–54 mm wide at the nut, compared to around 42 mm for electric guitars. Classical fingerboards are normally flat and without inlaid fret markers, or just have dot inlays on the side of the neck—steel-string fingerboards usually have a slight radius and inlays. Classical guitarists use their right hand to pluck the strings. Players shape their fingernails for ideal tone and feel against the strings. Strumming is a less common technique in classical guitar, and is often referred to by the Spanish term "rasgueo", or for strumming patterns "rasgueado", and uses the backs of the fingernails. Rasgueado is integral to flamenco guitar. Machine heads at the headstock of a classical guitar point backwards—in contrast to most steel-string guitars, which have machine heads that point outward. The overall design of a classical guitar is very similar to that of the slightly lighter and smaller flamenco guitar. Parts Parts of typical classical guitars include: headstock, nut, machine heads (or pegheads, tuning keys, tuning machines, tuners), frets, neck, heel, body, bridge, bottom deck, soundboard, body sides, sound hole (with rosette inlay), strings, saddle (bridge nut), and fretboard. Fretboard The fretboard (also called the fingerboard) is a piece of wood embedded with metal frets that constitutes the top of the neck. It is flat or slightly curved. The curvature of the fretboard is measured by the fretboard radius, which is the radius of a hypothetical circle of which the fretboard's surface constitutes a segment. The smaller the fretboard radius, the more noticeably curved the fretboard is. Fretboards are most commonly made of ebony, but may also be made of rosewood, some other hardwood, or of phenolic composite ("micarta"). Frets Frets are the metal strips (usually nickel alloy or stainless steel) embedded along the fingerboard and placed at points that divide the length of the string mathematically. The strings' vibrating length is determined when the strings are pressed down behind the frets. Each fret produces a different pitch, and adjacent frets are spaced a half step apart on the 12-tone scale. Fret spacing follows the twelfth root of two (2^(1/12), numerically about 1.059463): the vibrating string length at each successive fret is the length at the previous fret divided by this ratio. The twelfth fret divides the string into two exact halves, and the 24th fret (if present) divides the upper half in half again. Every twelve frets represents one octave. This arrangement of frets results in equal tempered tuning. Neck A classical guitar's frets, fretboard, tuners, and headstock, all attached to a long wooden extension, collectively constitute its neck. The wood for the fretboard usually differs from the wood in the rest of the neck. The bending stress on the neck is considerable, particularly when heavier-gauge strings are used. The most common scale length for classical guitar is 650 mm (calculated by measuring the distance between the end of the nut and the center of the 12th fret, then doubling that measurement). However, scale lengths may vary from 635 to 664 mm or more. 
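Those two figures, the 2^(1/12) fret ratio and the 650 mm scale length, are enough to lay out a fingerboard. The following is a minimal illustrative sketch, not a luthier's specification; the 650 mm scale and 19-fret count are simply the typical values quoted in this section, and the distance of fret n from the nut follows the standard equal-temperament formula d(n) = L × (1 − 2^(−n/12)).

```python
# Illustrative sketch only: equal-tempered fret layout for an assumed classical guitar.
# Assumptions (typical figures from this section, not any particular maker's spec):
#   scale length L = 650 mm, 19 frets.
# Distance of fret n from the nut: d(n) = L * (1 - 2 ** (-n / 12))

SCALE_MM = 650.0
NUM_FRETS = 19

for n in range(1, NUM_FRETS + 1):
    from_nut = SCALE_MM * (1 - 2 ** (-n / 12))
    vibrating = SCALE_MM - from_nut  # string length left between fret n and the bridge
    print(f"fret {n:2d}: {from_nut:6.1f} mm from nut, {vibrating:6.1f} mm vibrating length")
```

Running this places the 12th fret exactly 325 mm from the nut, half the scale length, which is why the measuring convention above (nut-to-12th-fret distance, doubled) recovers the scale length, and why fretting there sounds the octave.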
Neck joint or 'heel' This is the point where the neck meets the body. In the traditional Spanish neck joint, the neck and block are one piece with the sides inserted into slots cut in the block. Other necks are built separately and joined to the body with either a dovetail joint, a mortise, or a flush joint. These joints are usually glued and can be reinforced with mechanical fasteners. Recently, many manufacturers use bolt-on fasteners. Bolt-on neck joints were once associated only with less expensive instruments, but now some top manufacturers and hand builders are using variations of this method. Some people believed that the Spanish-style one-piece neck/block and glued dovetail necks have better sustain, but testing has failed to confirm this. While most traditional Spanish-style builders use the one-piece neck/heel block, Fleta, a prominent Spanish builder, used a dovetail joint due to the influence of his early training in violin making. One reason for the introduction of mechanical joints was to make it easier to repair necks. This is more of a problem with steel-string guitars than with nylon-string guitars, which have about half the string tension. This is why nylon-string guitars often don't include a truss rod either. Body The body of the instrument is a major determinant of the overall sound variety for acoustic guitars. The guitar top, or soundboard, is a finely crafted and engineered element often made of spruce or red cedar. Considered the most prominent factor in determining the sound quality of a guitar, this thin (often 2 or 3 mm thick) piece of wood has a uniform thickness and is strengthened by different types of internal bracing. The back is made of rosewood, and Brazilian rosewood is especially coveted, but mahogany or other decorative woods are sometimes used. The majority of the sound is caused by the vibration of the guitar top as the energy of the vibrating strings is transferred to it. Different patterns of wood bracing have been used through the years by luthiers (Torres, Hauser, Ramírez, Fleta, and C.F. Martin being among the most influential designers of their times), not only to strengthen the top against collapsing under the tremendous stress exerted by the tensioned strings, but also to affect the resonance of the top. Some contemporary guitar makers have introduced new construction concepts such as the "double top", consisting of two extra-thin wooden plates separated by Nomex, or carbon-fiber-reinforced lattice-pattern bracing. The back and sides are made out of a variety of woods such as mahogany, maple, cypress, Indian rosewood, and the highly regarded Brazilian rosewood (Dalbergia nigra). Each one is chosen for its aesthetic effect and structural strength, and such a choice can also play a role in determining the instrument's timbre. These are also strengthened with internal bracing, and decorated with inlays and purfling. Antonio de Torres Jurado proved that it was the top, and not the back and sides of the guitar, that gave the instrument its sound: in 1862 he built a guitar with back and sides of papier-mâché. (This guitar resides in the Museu de la Música in Barcelona, and before the year 2000 it was restored to playable condition by the brothers Yagüe of Barcelona.) The body of a classical guitar is a resonating chamber that projects the vibrations of the body through a sound hole, allowing the acoustic guitar to be heard without amplification. The sound hole is normally a single round hole in the top of the guitar (under the strings), though some have different placement, shapes, or numbers of holes. 
How much air an instrument can move determines its maximum volume. Binding, purfling and kerfing The top, back and sides of a classical guitar body are very thin, so a flexible piece of wood called kerfing (because it is often scored, or kerfed so it bends with the shape of the rim) is glued into the corners where the rim meets the top and back. This interior reinforcement provides 5 to 20 mm of solid gluing area for these corner joints. During final construction, a small section of the outside corners is carved or routed out and filled with binding material on the outside corners and decorative strips of material next to the binding, which are called purfling. This binding serves to seal off the endgrain of the top and back. Binding and purfling materials are generally made of either wood or high-quality plastic materials. Bridge The main purpose of the bridge on a classical guitar is to transfer the vibration from the strings to the soundboard, which vibrates the air inside of the guitar, thereby amplifying the sound produced by the strings. The bridge holds the strings in place on the body. Also, the position of the saddle, usually a strip of bone or plastic that supports the strings off the bridge, determines the distance to the nut (at the top of the fingerboard). Sizes The modern full-size classical guitar has a scale length of around , with an overall instrument length of . The scale length has remained quite consistent since it was chosen by the originator of the instrument, Antonio de Torres. This length may have been chosen because it's twice the length of a violin string. As the guitar is tuned to one octave below that of the violin, the same size gut could be used for the first strings of both instruments. Smaller-scale instruments are produced to assist children in learning the instrument as the smaller scale leads to the frets being closer together, making it easier for smaller hands. The scale-size for the smaller guitars is usually in the range , with an instrument length of . Full-size instruments are sometimes referred to as 4/4, while the smaller sizes are 3/4, 1/2 or 1/4. Tuning A variety of different tunings are used. The most common by far, which one could call the "standard tuning" is: eI – b – g – d – A – E The above order is the tuning from the 1st string (highest-pitched string e'—spatially the bottom string in playing position) to the 6th string – lowest-pitched string E—spatially the upper string in playing position, and hence comfortable to pluck with the thumb. The explanation for this "asymmetrical" tuning (in the sense that the maj 3rd is not between the two middle strings, as in the tuning of the viola da gamba) is probably that the guitar originated as a 4-string instrument (actually an instrument with 4 double courses of strings, see above) with a maj 3rd between the 2nd and 3rd strings, and it only became a 6-string instrument by gradual addition of a 5th string and then a 6th string tuned a 4th apart: "The development of the modern tuning can be traced in stages. One of the tunings from the 16th century is C-F-A-D. This is equivalent to the top four strings of the modern guitar tuned a tone lower. However, the absolute pitch for these notes is not equivalent to modern "concert pitch". The tuning of the four-course guitar was moved up by a tone and toward the end of the 16th century, five-course instruments were in use with an added lower string tuned to A. This produced A-D-G-B-E, one of a wide number of variant tunings of the period. 
The low E string was added during the 18th century." This tuning is such that neighboring strings are at most 5 semitones apart. There are also a variety of commonly used alternate tunings. The most common is known as Drop D tuning which has the 6th string tuned down from an E to a D. Bibliography The Guitar and its Music (From the Renaissance to the Classical Era) (2007) by James Tyler, Paul Sparks. Cambridge Studies in Performance Practice (No. 6): Performance on Lute, Guitar, and Vihuela (2005) edited by Victor Anand Coelho. The Guitar: From the Renaissance to the Present Day by Harvey Turnbull; published by Bold Strummer, 1991. The Guitar; by Sinier de Ridder; published by Edizioni Il Salabue; La Chitarra, Quattro secoli di Capolavori (The Guitar: Four centuries of Masterpieces) by Giovanni Accornero, Ivan Epicoco, Eraldo Guerci; published by Edizioni Il Salabue Rosa sonora – Esposizione di chitarre XVII – XX secolo by Giovanni Accornero; published by Edizioni Il Salabue Lyre-guitar. Étoile charmante, between the 18th and 19th century by Eleonora Vulpiani Summerfield, Maurice, The Classical Guitar: Its Evolution, Players and Personalities since 1800 – 5th Edition, Blaydon : Ashley Mark Publishing Company, 2002. Various, Classical Guitar Magazine, Blaydon : Ashley Mark Publishing Company, monthly publication first published in 1982. Wade, Graham, Traditions of the Classical Guitar, London : Calder, 1980. Antoni Pizà: Francesc Guerau i el seu temps (Palma de Mallorca: Govern de les Illes Balears, Conselleria d'Educació i Cultura, Direcció General de Cultura, Institut d'Estudis Baleàrics, 2000) See also Classical guitar strings Classical guitar pedagogy Early classical guitar recordings International classical guitar competitions Guitar Foundation of America Guitar Chordophones Typaldos D. children's choir, a Greek children's choir with classical guitars Related instruments Brahms guitar Extended-range classical guitar Harp guitar Lyre-guitar Six-string alto guitar Lists Bibliography of classical guitar List of classical guitarists List of composers for the classical guitar List of composers for the classical guitar (nationality) References External links Thematic essay: The guitar Jayson Kerr Dobney, Wendy Powers (The Metropolitan Museum of Art) Classical & Fingerstyle Guitar Classical Guitar Library A vibrant library of guitar sheet music, which can serve in accomplishing diverse teaching and research needs. Acoustic guitars String instruments Articles containing video clips Spanish inventions
https://en.wikipedia.org/wiki/Cenozoic
Cenozoic
The Cenozoic is Earth's current geological era, representing the last 66 million years of Earth's history. It is characterised by the dominance of mammals, birds and flowering plants, a cooling and drying climate, and the current configuration of continents. It is the latest of three geological eras since complex life evolved, preceded by the Mesozoic and Paleozoic. It started with the Cretaceous–Paleogene extinction event, when many species, including the non-avian dinosaurs, became extinct in an event attributed by most experts to the impact of a large asteroid or other celestial body, the Chicxulub impactor. The Cenozoic is also known as the Age of Mammals because the terrestrial animals that dominated both hemispheres were mammals: the eutherians (placentals) in the northern hemisphere and the metatherians (marsupials, now mainly restricted to Australia and to some extent South America) in the southern hemisphere. The extinction of many groups allowed mammals and birds to greatly diversify so that large mammals and birds dominated life on Earth. The continents also moved into their current positions during this era. The climate during the early Cenozoic was warmer than today, particularly during the Paleocene–Eocene Thermal Maximum. However, the Eocene to Oligocene transition and the Quaternary glaciation dried and cooled Earth. Nomenclature Cenozoic derives from the Greek words kainós ('new') and zōḗ ('life'). The name was proposed in 1840 by the British geologist John Phillips (1800–1874), who originally spelled it Kainozoic. The era is also known as the Cænozoic, Caenozoic, or Cainozoic. In name, the Cenozoic ('new life') is comparable to the preceding Mesozoic ('middle life') and Paleozoic ('old life') Eras, as well as to the Proterozoic ('earlier life') Eon. Divisions The Cenozoic is divided into three periods: the Paleogene, Neogene, and Quaternary; and seven epochs: the Paleocene, Eocene, Oligocene, Miocene, Pliocene, Pleistocene, and Holocene. The Quaternary Period was officially recognised by the International Commission on Stratigraphy in June 2009. In 2004, the Tertiary Period was officially replaced by the Paleogene and Neogene Periods. The common use of epochs during the Cenozoic helps palaeontologists better organise and group the many significant events that occurred during this comparatively short interval of time. Knowledge of this era is more detailed than that of any other era because of the relatively young, well-preserved rocks associated with it. Paleogene The Paleogene spans from the extinction of non-avian dinosaurs, 66 million years ago, to the dawn of the Neogene, 23.03 million years ago. It features three epochs: the Paleocene, Eocene and Oligocene. The Paleocene Epoch lasted from 66 million to 56 million years ago. Modern placental mammals originated during this time. The devastation of the K–Pg extinction event included the extinction of large herbivores, which permitted the spread of dense but usually species-poor forests. The Early Paleocene saw the recovery of Earth. The continents began to take their modern shape, but all the continents and the subcontinent of India were separated from each other. Afro-Eurasia was separated by the Tethys Sea, and the Americas were separated by the strait of Panama, as the isthmus had not yet formed. This epoch featured a general warming trend, with jungles eventually reaching the poles. The oceans were dominated by sharks, as the large reptiles that had once predominated were extinct. 
Archaic mammals filled the world, such as creodonts (extinct carnivores, unrelated to existing Carnivora). The Eocene Epoch ranged from 56 million years to 33.9 million years ago. In the Early Eocene, species living in dense forest were unable to evolve into larger forms, as in the Paleocene. All known mammals were under 10 kilograms. Among them were early primates, whales and horses along with many other early forms of mammals. At the top of the food chains were huge birds, such as Paracrax. Carbon dioxide levels were approximately 1,400 ppm. The temperature was 30 degrees Celsius, with little temperature gradient from pole to pole. In the Mid-Eocene, the Circumpolar-Antarctic current between Australia and Antarctica formed. This disrupted ocean currents worldwide and as a result caused a global cooling effect, shrinking the jungles. This allowed mammals to grow to mammoth proportions, such as whales, which, by that time, had become almost fully aquatic. Mammals like Andrewsarchus were at the top of the food chain. The Late Eocene saw the rebirth of seasons, which caused the expansion of savanna-like areas, along with the evolution of grasses. The end of the Eocene was marked by the Eocene-Oligocene extinction event, the European face of which is known as the Grande Coupure. The Oligocene Epoch spans from 33.9 million to 23.03 million years ago. The Oligocene featured the expansion of grasslands, which allowed many new species to evolve, including the first elephants, cats, dogs, marsupials and many other species still prevalent today. Many other species of plants evolved in this period too. A cooling period featuring seasonal rains was still in effect. Mammals still continued to grow larger and larger. Neogene The Neogene spans from 23.03 million to 2.58 million years ago. It features two epochs: the Miocene and the Pliocene. The Miocene Epoch spans from 23.03 to 5.333 million years ago and is a period in which grasses spread further, dominating a large portion of the world, at the expense of forests. Kelp forests evolved, encouraging the evolution of new species, such as sea otters. During this time, perissodactyls thrived and evolved into many different varieties. Apes evolved into 30 species. The Tethys Sea finally closed with the creation of the Arabian Peninsula, leaving only remnants as the Black, Red, Mediterranean and Caspian Seas. This increased aridity. Many new plants evolved: 95% of modern seed plants evolved in the mid-Miocene. The Pliocene Epoch lasted from 5.333 to 2.58 million years ago. The Pliocene featured dramatic climatic changes, which ultimately led to modern species of flora and fauna. The Mediterranean Sea dried up for several million years (because the ice ages reduced sea levels, disconnecting the Atlantic from the Mediterranean, and evaporation rates exceeded inflow from rivers). Australopithecus evolved in Africa, beginning the human branch. The Isthmus of Panama formed, and animals migrated between North and South America during the great American interchange, wreaking havoc on local ecologies. Climatic changes brought: savannas that are still continuing to spread across the world; Indian monsoons; deserts in central Asia; and the beginnings of the Sahara desert. The world map has not changed much since, save for changes brought about by the glaciations of the Quaternary, such as the Great Lakes, Hudson Bay, and the Baltic sea. 
Quaternary The Quaternary spans from 2.58 million years ago to present day, and is the shortest geological period in the Phanerozoic Eon. It features modern animals, and dramatic changes in the climate. It is divided into two epochs: the Pleistocene and the Holocene. The Pleistocene lasted from 2.58 million to 11,700 years ago. This epoch was marked by ice ages as a result of the cooling trend that started in the Mid-Eocene. There were at least four separate glaciation periods marked by the advance of ice caps as far south as 40° N in mountainous areas. Meanwhile, Africa experienced a trend of desiccation which resulted in the creation of the Sahara, Namib, and Kalahari deserts. Many animals evolved including mammoths, giant ground sloths, dire wolves, sabre-toothed cats, and most famously Homo sapiens. 100,000 years ago marked the end of one of the worst droughts in Africa, and led to the expansion of primitive humans. As the Pleistocene drew to a close, a major extinction wiped out much of the world's megafauna, including some of the hominid species, such as Neanderthals. All the continents were affected, but Africa to a lesser extent. It still retains many large animals, such as hippos. The Holocene began 11,700 years ago and lasts to the present day. All recorded history and "the Human history" lies within the boundaries of the Holocene Epoch. Human activity is blamed for a mass extinction that began roughly 10,000 years ago, though the species becoming extinct have only been recorded since the Industrial Revolution. This is sometimes referred to as the "Sixth Extinction". It is often cited that over 322 recorded species have become extinct due to human activity since the Industrial Revolution, but the rate may be as high as 500 vertebrate species alone, the majority of which have occurred after 1900. Tectonics Geologically, the Cenozoic is the era when the continents moved into their current positions. Australia-New Guinea, having split from Pangea during the early Cretaceous, drifted north and, eventually, collided with South-east Asia; Antarctica moved into its current position over the South Pole; the Atlantic Ocean widened and, later in the era (2.8 million years ago), South America became attached to North America with the isthmus of Panama. India collided with Asia creating the Himalayas; Arabia collided with Eurasia, closing the Tethys Ocean and creating the Zagros Mountains, around . The break-up of Gondwana in Late Cretaceous and Cenozoic times led to a shift in the river courses of various large African rivers including the Congo, Niger, Nile, Orange, Limpopo and Zambezi. Climate In the Cretaceous, the climate was hot and humid with lush forests at the poles, there was no permanent ice and sea levels were around 300 metres higher than today. This continued for the first 10 million years of the Paleocene, culminating in the Paleocene–Eocene Thermal Maximum about . Around Earth entered a period of long term cooling. This was mainly due to the collision of India with Eurasia, which caused the rise of the Himalayas: the upraised rocks eroded and reacted with in the air, causing a long-term reduction in the proportion of this greenhouse gas in the atmosphere. Around permanent ice began to build up on Antarctica. The cooling trend continued in the Miocene, with relatively short warmer periods. 
When South America became attached to North America creating the Isthmus of Panama around , the Arctic region cooled due to the strengthening of the Humboldt and Gulf Stream currents, eventually leading to the glaciations of the Quaternary ice age, the current interglacial of which is the Holocene Epoch. Recent analysis of the geomagnetic reversal frequency, oxygen isotope record, and tectonic plate subduction rate, which are indicators of the changes in the heat flux at the core mantle boundary, climate and plate tectonic activity, shows that all these changes indicate similar rhythms on million years' timescale in the Cenozoic Era occurring with the common fundamental periodicity of ~13 Myr during most of the time. Life Early in the Cenozoic, following the K-Pg event, the planet was dominated by relatively small fauna, including small mammals, birds, reptiles, and amphibians. From a geological perspective, it did not take long for mammals and birds to greatly diversify in the absence of the dinosaurs that had dominated during the Mesozoic. Some flightless birds grew larger than humans. These species are sometimes referred to as "terror birds", and were formidable predators. Mammals came to occupy almost every available niche (both marine and terrestrial), and some also grew very large, attaining sizes not seen in most of today's terrestrial mammals. During the Cenozoic, mammals proliferated from a few small, simple, generalised forms into a diverse collection of terrestrial, marine, and flying animals, giving this period its other name, the Age of Mammals. The Cenozoic is just as much the age of savannas, the age of co-dependent flowering plants and insects, and the age of birds. Grasses also played a very important role in this era, shaping the evolution of the birds and mammals that fed on them. One group that diversified significantly in the Cenozoic as well were the snakes. Evolving in the Cenozoic, the variety of snakes increased tremendously, resulting in many colubrids, following the evolution of their current primary prey source, the rodents. In the earlier part of the Cenozoic, the world was dominated by the gastornithid birds, terrestrial crocodiles like Pristichampsus, and a handful of primitive large mammal groups like uintatheres, mesonychids, and pantodonts. But as the forests began to recede and the climate began to cool, other mammals took over. The Cenozoic is full of mammals both strange and familiar, including chalicotheres, creodonts, whales, primates, entelodonts, sabre-toothed cats, mastodons and mammoths, three-toed horses, giant rhinoceros like Paraceratherium, the rhinoceros-like brontotheres, various bizarre groups of mammals from South America, such as the vaguely elephant-like pyrotheres and the dog-like marsupial relatives called borhyaenids and the monotremes and marsupials of Australia. See also Cretaceous–Paleogene boundary (K–T boundary) Geologic time scale Late Cenozoic Ice Age References Further reading External links Western Australian Museum – The Age of the Mammals Cenozoic (chronostratigraphy scale) Geological eras
https://en.wikipedia.org/wiki/Cryptozoology
Cryptozoology
Cryptozoology is a pseudoscience and subculture that searches for and studies unknown, legendary, or extinct animals whose present existence is disputed or unsubstantiated, particularly those popular in folklore, such as Bigfoot, the Loch Ness Monster, Yeti, the chupacabra, the Jersey Devil, or the Mokele-mbembe. Cryptozoologists refer to these entities as cryptids, a term coined by the subculture. Because it does not follow the scientific method, cryptozoology is considered a pseudoscience by mainstream science: it is neither a branch of zoology nor of folklore studies. It was originally founded in the 1950s by zoologists Bernard Heuvelmans and Ivan T. Sanderson. Scholars have noted that the subculture rejected mainstream approaches from an early date, and that adherents often express hostility to mainstream science. Scholars have studied cryptozoologists and their influence (including cryptozoology's association with Young Earth creationism), noted parallels in cryptozoology and other pseudosciences such as ghost hunting and ufology, and highlighted uncritical media propagation of cryptozoologist claims. Terminology, history, and approach As a field, cryptozoology originates from the works of Bernard Heuvelmans, a Belgian zoologist, and Ivan T. Sanderson, a Scottish zoologist. Notably, Heuvelmans published On the Track of Unknown Animals (French Sur la Piste des Bêtes Ignorées) in 1955, a landmark work among cryptozoologists that was followed by numerous other like works. Similarly, Sanderson published a series of books that contributed to the developing hallmarks of cryptozoology, including Abominable Snowmen: Legend Come to Life (1961). The term cryptozoology dates from 1959 or before—Heuvelmans attributes the coinage of the term cryptozoology 'the study of hidden animals' (from Ancient Greek: κρυπτός, kryptós "hidden, secret"; Ancient Greek ζῷον, zōion "animal", and λόγος, logos, i.e. "knowledge, study") to Sanderson. Patterned after cryptozoology, the term cryptid was coined in 1983 by cryptozoologist J. E. Wall in the summer issue of the International Society of Cryptozoology newsletter. According to Wall "[It has been] suggested that new terms be coined to replace sensational and often misleading terms like 'monster'. My suggestion is 'cryptid', meaning a living thing having the quality of being hidden or unknown ... describing those creatures which are (or may be) subjects of cryptozoological investigation." The Oxford English Dictionary defines the noun cryptid as "an animal whose existence or survival to the present day is disputed or unsubstantiated; any animal of interest to a cryptozoologist". While used by most cryptozoologists, the term cryptid is not used by academic zoologists. In a textbook aimed at undergraduates, academics Caleb W. Lack and Jacques Rousseau note that the subculture's focus on what it deems to be "cryptids" is a pseudoscientic extension of older belief in monsters and other similar entities from the folkloric record, yet with a "new, more scientific-sounding name: cryptids". While biologists regularly identify new species, cryptozoologists often focus on creatures from the folkloric record. Most famously, these include the Loch Ness Monster, Bigfoot, the chupacabra, as well as other "imposing beasts that could be labeled as monsters". In their search for these entities, cryptozoologists may employ devices such as motion-sensitive cameras, night-vision equipment, and audio-recording equipment. 
While there have been attempts to codify cryptozoological approaches, unlike biologists, zoologists, botanists, and other academic disciplines, however, "there are no accepted, uniform, or successful methods for pursuing cryptids". Some scholars have identified precursors to modern cryptozoology in certain medieval approaches to the folkloric record, and the psychology behind the cryptozoology approach has been the subject of academic study. Few cryptozoologists have a formal science education, and fewer still have a science background directly relevant to cryptozoology. Adherents often misrepresent the academic backgrounds of cryptozoologists. According to writer Daniel Loxton and paleontologist Donald Prothero, "Cryptozoologists have often promoted 'Professor Roy Mackal, PhD.' as one of their leading figures and one of the few with a legitimate doctorate in biology. What is rarely mentioned, however, is that he had no training that would qualify him to undertake competent research on exotic animals. This raises the specter of 'credential mongering', by which an individual or organization feints a person's graduate degree as proof of expertise, even though his or her training is not specifically relevant to the field under consideration." Besides Heuvalmans, Sanderson, and Mackal, other notable cryptozoologists with academic backgrounds include Grover Krantz, Karl Shuker, and Richard Greenwell. Historically, notable cryptozoologists have often identified instances featuring "irrefutable evidence" (such as Sanderson and Krantz), only for the evidence to be revealed as the product of a hoax. This may occur during a closer examination by experts or upon confession of the hoaxer. Young Earth creationism A subset of cryptozoology promotes the pseudoscience of Young Earth creationism, rejecting conventional science in favor of a Biblical interpretation and promoting concepts such as "living dinosaurs". Science writer Sharon A. Hill observes that the Young Earth creationist segment of cryptozoology is "well-funded and able to conduct expeditions with a goal of finding a living dinosaur that they think would invalidate evolution." Anthropologist Jeb J. Card says that "Creationists have embraced cryptozoology and some cryptozoological expeditions are funded by and conducted by creationists hoping to disprove evolution." In a 2013 interview, paleontologist Donald Prothero notes an uptick in creationist cryptozoologists. He observes that "[p]eople who actively search for Loch Ness monsters or Mokele Mbembe do it entirely as creationist ministers. They think that if they found a dinosaur in the Congo it would overturn all of evolution. It wouldn't. It would just be a late-occurring dinosaur, but that's their mistaken notion of evolution." Citing a 2013 exhibit at the Petersburg, Kentucky-based Creation Museum, which claimed that dragons were once biological creatures who walked the earth alongside humanity and is broadly dedicated to Young Earth creationism, religious studies academic Justin Mullis notes that "Cryptozoology has a long and curious history with Young Earth Creationism, with this new exhibit being just one of the most recent examples". Academic Paul Thomas analyzes the influence and connections between cryptozoology in his 2020 study of the Creation Museum and the creationist theme park Ark Encounter. 
Thomas comments that, "while the Creation Museum and the Ark Encounter are flirting with pseudoarchaeology, coquettishly whispering pseudoarchaeological rhetoric, they are each fully in bed with cryptozoology" and observes that "Young-earth creationists and cryptozoologists make natural bed fellows. As with pseudoarchaeology, both young-earth creationists and cryptozoologists bristle at the rejection of mainstream secular science and lament a seeming conspiracy to prevent serious consideration of their claims." Lack of critical media coverage Media outlets have often uncritically disseminated information from cryptozoologist sources, including newspapers that repeat false claims made by cryptozoologists or television shows that feature cryptozoologists as monster hunters (such as the popular and purportedly nonfiction American television show MonsterQuest, which aired from 2007 to 2010). Media coverage of purported "cryptids" often fails to provide more likely explanations, further propagating claims made by cryptozoologists. Reception and pseudoscience There is a broad consensus among academics that cryptozoology is a pseudoscience. The subculture is regularly criticized for reliance on anecdotal information and because in the course of investigating animals that most scientists believe are unlikely to have existed, cryptozoologists do not follow the scientific method. No academic course of study nor university degree program grants the status of cryptozoologist and the subculture is primarily the domain of individuals without training in the natural sciences. Anthropologist Jeb J. Card summarizes cryptozoology in a survey of pseudoscience and pseudoarchaeology: Cryptozoology purports to be the study of previously unidentified animal species. At first glance, this would seem to differ little from zoology. New species are discovered by field and museum zoologists every year. Cryptozoologists cite these discoveries as justification of their search but often minimize or omit the fact that the discoverers do not identify as cryptozoologists and are academically trained zoologists working in an ecological paradigm rather than organizing expeditions to seek out supposed examples of unusual and large creatures. Card notes that "cryptozoologists often show their disdain and even hatred for professional scientists, including those who enthusiastically participated in cryptozoology", which he traces back to Heuvelmans's early "rage against critics of cryptozoology". He finds parallels with cryptozoology and other pseudosciences, such as ghost hunting and ufology, and compares the approach of cryptozoologists to colonial big-game hunters, and to aspects of European imperialism. According to Card, "Most cryptids are framed as the subject of indigenous legends typically collected in the heyday of comparative folklore, though such legends may be heavily modified or worse. Cryptozoology's complicated mix of sympathy, interest, and appropriation of indigenous culture (or non-indigenous construction of it) is also found in New Age circles and dubious "Indian burial grounds" and other legends...invoked in hauntings such as the "Amityville" hoax ...". In a 2011 foreword for The American Biology Teacher, then National Association of Biology Teachers president Dan Ward uses cryptozoology as an example of "technological pseudoscience" that may confuse students about the scientific method. Ward says that "Cryptozoology ... is not valid science or even science at all. It is monster hunting." 
Historian of science Brian Regal includes an entry for cryptozoology in his Pseudoscience: A Critical Encyclopedia (2009). Regal says that "as an intellectual endeavor, cryptozoology has been studied as much as cryptozoologists have sought hidden animals". In a 1992 issue of Folklore, folklorist Véronique Campion-Vincent says: Unexplained appearances of mystery animals are reported all over the world today. Beliefs in the existence of fabulous and supernatural animals are ubiquitous and timeless. In the continents discovered by Europe indigenous beliefs and tales have strongly influenced the perceptions of the conquered confronted by a new natural environment. In parallel with the growing importance of the scientific approach, these traditional mythical tales have been endowed with sometimes highly artificial precision and have given birth to contemporary legends solidly entrenched in their territories. The belief self-perpetuates today through multiple observations enhanced by the media and encouraged (largely with the aim of gain for touristic promotion) by the local population, often genuinely convinced of the reality of this profitable phenomenon." Campion-Vincent says that "four currents can be distinguished in the study of mysterious animal appearances": "Forteans" ("compiler[s] of anomalies" such as via publications like the Fortean Times), "occultists" (which she describes as related to "Forteans"), "folklorists", and "cryptozoologists". Regarding cryptozoologists, Campion-Vincent says that "this movement seems to deserve the appellation of parascience, like parapsychology: the same corpus is reviewed; many scientists participate, but for those who have an official status of university professor or researcher, the participation is a private hobby". In her Encyclopedia of American Folklore, academic Linda Watts says that "folklore concerning unreal animals or beings, sometimes called monsters, is a popular field of inquiry" and describes cryptozoology as an example of "American narrative traditions" that "feature many monsters". In his analysis of cryptozoology, folklorist Peter Dendle says that "cryptozoology devotees consciously position themselves in defiance of mainstream science" and that: The psychological significance of cryptozoology in the modern world...serves to channel guilt over the decimation of species and destruction of the natural habitat; to recapture a sense of mysticism and danger in a world now perceived as fully charted and over-explored; and to articulate resentment of and defiance against a scientific community perceived as monopolising the pool of culturally acceptable beliefs. In a paper published in 2013, Dendle refers to cryptozoologists as "contemporary monster hunters" that "keep alive a sense of wonder in a world that has been very thoroughly charted, mapped, and tracked, and that is largely available for close scrutiny on Google Earth and satellite imaging" and that "on the whole the devotion of substantial resources for this pursuit betrays a lack of awareness of the basis for scholarly consensus (largely ignoring, for instance, evidence of evolutionary biology and the fossil record)." 
According to historian Mike Dash, few scientists doubt there are thousands of unknown animals, particularly invertebrates, awaiting discovery; however, cryptozoologists are largely uninterested in researching and cataloging newly discovered species of ants or beetles, instead focusing their efforts on "more elusive" creatures that have often defied decades of work aimed at confirming their existence. Paleontologist George Gaylord Simpson (1984) lists cryptozoology among examples of human gullibility, along with creationism: Humans are the most inventive, deceptive, and gullible of all animals. Only those characteristics can explain the belief of some humans in creationism, in the arrival of UFOs with extraterrestrial beings, or in some aspects of cryptozoology. ...In several respects the discussion and practice of cryptozoology sometimes, although not invariably, has demonstrated both deception and gullibility. An example seems to merit the old Latin saying 'I believe because it is incredible,' although Tertullian, its author, applied it in a way more applicable to the present day creationists. Paleontologist Donald Prothero (2007) cites cryptozoology as an example of pseudoscience and categorizes it, along with Holocaust denial and UFO abduction claims, as aspects of American culture that are "clearly baloney". In Scientifical Americans: The Culture of Amateur Paranormal Researchers (2017), Hill surveys the field and discusses aspects of the subculture, noting internal attempts at creating more scientific approaches, the involvement of Young Earth creationists, and a prevalence of hoaxes. She concludes that many cryptozoologists are "passionate and sincere in their belief that mystery animals exist. As such, they give deference to every report of a sighting, often without critical questioning. As with the ghost seekers, cryptozoologists are convinced that they will be the ones to solve the mystery and make history. With the lure of mystery and money undermining diligent and ethical research, the field of cryptozoology has serious credibility problems." Organizations There have been several organizations, of varying types, dedicated or related to cryptozoology. These include: International Fortean Organization – a network of professional Fortean researchers and writers based in the United States International Society of Cryptozoology – an American organisation that existed from 1982 to 1998 Kosmopoisk – a Russian organisation whose interests include cryptozoology and ufology Museums and exhibitions The zoological and cryptozoological collection and archive of Bernard Heuvelmans is held at the Musée Cantonal de Zoologie in Lausanne and consists of around "1,000 books, 25,000 files, 25,000 photographs, correspondence, and artifacts". In 2006, the Bates College Museum of Art held the "Cryptozoology: Out of Time Place Scale" exhibition, which compared cryptozoological creatures with recently extinct animals like the thylacine and extant taxa like the coelacanth, once thought long extinct (living fossils). The following year, the American Museum of Natural History put on a mixed exhibition of imaginary and extinct animals, including the elephant bird Aepyornis maximus and the great ape Gigantopithecus blacki, under the name "Mythic Creatures: Dragons, Unicorns and Mermaids". 
See also Ethnozoology Fearsome critters, fabulous beasts that were said to inhabit the timberlands of North America Folk belief List of cryptids, a list of cryptids notable within cryptozoology List of cryptozoologists, a list of notable cryptozoologists Scientific skepticism Notes and citations References Bartholomew, Robert E. 2012. The Untold Story of Champ: A Social History of America's Loch Ness Monster. State University of New York Press. Campion-Vincent, Véronique. 1992. "Appearances of Beasts and Mystery-cats in France". Folklore 103.2 (1992): 160–183. Card, Jeb J. 2016. "Steampunk Inquiry: A Comparative Vivisection of Discovery Pseudoscience" in Card, Jeb J. and Anderson, David S. Lost City, Found Pyramid: Understanding Alternative Archaeologies and Pseudoscientific Practices, pp. 24–25. University of Alabama Press. Church, Jill M. 2009. "Cryptozoology" in H. James Birx (ed.). Encyclopedia of Time: Science, Philosophy, Theology & Culture, Volume 1, pp. 251–252. SAGE Publications. Dash, Mike. 2000. Borderlands: The Ultimate Exploration of the Unknown. Overlook Press. Dendle, Peter. 2006. "Cryptozoology in the Medieval and Modern Worlds". Folklore, Vol. 117, No. 2 (Aug. 2006), pp. 190–206. Taylor & Francis. Dendle, Peter. 2013. "Monsters and the Twenty-First Century" in The Ashgate Research Companion to Monsters and the Monstrous. Ashgate Publishing. Hill, Sharon A. 2017. Scientifical Americans: The Culture of Amateur Paranormal Researchers. McFarland. Lack, Caleb W. and Jacques Rousseau. 2016. Critical Thinking, Science, and Pseudoscience: Why We Can't Trust Our Brains. Springer. Lee, Jeffrey A. 2000. The Scientific Endeavor: A Primer on Scientific Principles and Practice. Benjamin Cummings. Loxton, Daniel and Donald Prothero. 2013. Abominable Science: Origins of the Yeti, Nessie, and other Famous Cryptids. Columbia University Press. Mullis, Justin. 2019. "Cryptofiction! Science Fiction and the Rise of Cryptozoology" in Caterine, Darryl and John W. Morehead (eds.). The Paranormal and Popular Culture: A Postmodern Religious Landscape, pp. 240–252. Routledge. Mullis, Justin. 2021. "Thomas Jefferson: The First Cryptozoologist?" in Joseph P. Laycock and Natasha L. Mikles (eds.). Religion, Culture, and the Monstrous: Of Gods and Monsters, pp. 185–197. Lexington Books. Paxton, C. G. M. 2011. "Putting the 'ology' into cryptozoology." Biofortean Notes. Vol. 7, pp. 7–20, 310. Prothero, Donald R. 2007. Evolution: What the Fossils Say and Why It Matters. Columbia University Press. Radford, Benjamin. 2014. "Bigfoot at 50: Evaluating a Half-Century of Bigfoot Evidence" in Farha, Bryan (ed.). Pseudoscience and Deception: The Smoke and Mirrors of Paranormal Claims. University Press of America. Regal, Brian. 2009. Pseudoscience: A Critical Encyclopedia. ABC-CLIO. Regal, Brian. 2011a. "Cryptozoology" in McCormick, Charlie T. and Kim Kennedy (eds.). Folklore: An Encyclopedia of Beliefs, Customs, Tales, Music, and Art, pp. 326–329. 2nd edition. ABC-CLIO. Regal, Brian. 2011b. Sasquatch: Crackpots, Eggheads, and Cryptozoology. Springer. Roesch, Ben S. and John L. Moore. 2002. "Cryptozoology" in Michael Shermer (ed.). The Skeptic Encyclopedia of Pseudoscience: Volume One, pp. 71–78. ABC-CLIO. Shea, Rachel Hartigan. 2013. "The Science Behind Bigfoot and Other Monsters". National Geographic, September 9, 2013. Online. Shermer, Michael. 2003. "Show Me the Body" in Scientific American, issue 288 (5), p. 27. Online. Simpson, George Gaylord. 1984. "Mammals and Cryptozoology". 
Proceedings of the American Philosophical Society. Vol. 128, No. 1 (Mar. 30, 1984), pp. 1–19. American Philosophical Society. Thomas, Paul. 2020. Storytelling the Bible at the Creation Museum, Ark Encounter, and Museum of the Bible. Bloomsbury Publishing. Uscinski, Joseph. 2020. Conspiracy Theories: A Primer. Rowman & Littlefield Publishers. Wall, J. E. 1983. The ISC Newsletter, vol. 2, issue 10, p. 10. International Society of Cryptozoology. Ward, Daniel. 2011. "From the President". The American Biology Teacher, 73.8 (2011): 440. Watts, Linda S. 2007. Encyclopedia of American Folklore. Facts on File.
https://en.wikipedia.org/wiki/County%20Mayo
County Mayo
County Mayo (meaning "Plain of the yew trees") is a county in Ireland. Located in the West of Ireland, in the province of Connacht, it is named after the village of Mayo, now generally known as Mayo Abbey. Mayo County Council is the local authority. The population was 137,231 at the 2022 census. The boundaries of the county, which was formed in 1585, reflect the Mac William Íochtar lordship at that time. Geography It is bounded on the north and west by the Atlantic Ocean; on the south by County Galway; on the east by County Roscommon; and on the northeast by County Sligo. Mayo is the third-largest of Ireland's 32 counties in area and 18th largest in terms of population. It is the second-largest of Connacht's five counties in both size and population. Mayo's coastline amounts to approximately 21% of the total coastline of the State, and it is one of three counties, alongside Cork and Donegal, which claim to have the longest coastline in Ireland. There is a distinct geological difference between the west and the east of the county. The west consists largely of poor subsoils and is covered with large areas of extensive Atlantic blanket bog, whereas the east is largely a limestone landscape. Agricultural land is therefore more productive in the east than in the west. The highest point in Mayo (and Connacht) is Mweelrea. The River Moy in the northeast of the county is renowned for its salmon fishing. Ireland's largest island, Achill Island, lies off Mayo's west coast. Mayo has Ireland's highest cliffs at Croaghaun, Achill Island, while the Benwee Head cliffs in Kilcommon, Erris drop almost perpendicularly into the Atlantic Ocean. The northwest areas of County Mayo have some of the best renewable energy resources in Europe, if not the world, in terms of wind, ocean wave, tidal and hydroelectric resources. There are nine historic baronies, four in the northern area and five in the south of the county: North Mayo Erris (north-west, containing Belmullet, Gweesalia, Bangor Erris, Kilcommon, Ballycroy etc.) Burrishoole (west, containing Achill, Mulranny and Newport, County Mayo) Gallen (east, containing Bonniconlon, Foxford) Tyrawley (north-east, containing Ballina, Ballycastle, Killala, Moygownagh) South Mayo Clanmorris (south-east, containing Claremorris and Balla) Costello (east-south-east, containing Kilkelly, Ballyhaunis etc.) Murrisk (south-west, containing Westport, Louisburgh, Croagh Patrick etc.) Kilmaine (south, containing Ballinrobe, Cong etc.) Carra (south, containing Castlebar, Partry etc.) Largest towns by population According to the 2016 census: Castlebar 12,068 Ballina 10,171 Westport 6,198 Claremorris 3,687 Ballinrobe 2,786 Ballyhaunis 2,366 Swinford 1,394 Foxford 1,315 Kiltimagh 1,069 Crossmolina 1,044 Flora and fauna A survey of the terrestrial and freshwater algae of Clare Island was made between 1990 and 2005 and published in 2007. A record of Gunnera tinctoria is also noted. Consultants working for the Corrib gas project carried out extensive surveys of wildlife, flora and fauna in Kilcommon Parish, Erris between 2002 and 2009. This information is published in the Corrib Gas Proposal Environmental Impact Statements of 2009 and 2010. History Prehistory There is evidence of human occupation of what is now County Mayo going far back into prehistory. At Belderrig on the north Mayo coast, there is evidence for Mesolithic (Middle Stone Age) communities around 4500 BC, while throughout the county there is a wealth of archaeological remains from the Neolithic (New Stone Age) period (ca. 
4,000 BC to 2,500 BC), particularly in terms of megalithic tombs and ritual stone circles. The first people who came to Ireland – mainly to coastal areas as the interior was heavily forested – arrived during the Middle Stone Age, as far back as eleven thousand years ago. Artefacts of hunter/gatherers are sometimes found in middens, rubbish pits around hearths where people would have rested and cooked over large open fires. Once cliffs erode, midden remains become exposed as blackened areas containing charred stones, bones, and shells. They are usually found a metre below the surface. Mesolithic people did not have major rituals associated with burial, unlike those of the Neolithic (New Stone Age) period. The Neolithic period followed the Mesolithic around 6,000 years ago. People began to farm the land, domesticate animals for food and milk, and settle in one place for longer periods. These people had skills such as making pottery, building houses from wood, weaving, and knapping (stone tool working). The first farmers cleared forestry to graze livestock and grow crops. In North Mayo, where the ground cover was fragile, thin soils washed away and blanket bog covered the land farmed by the Neolithic people. Extensive pre-bog field systems have been discovered under the blanket bog, particularly along the North Mayo coastline in Erris and north Tyrawley at sites such as the Céide Fields, centred on the northeast coast. The Neolithic people developed rituals associated with burying their dead; this is why they built huge, elaborate, galleried stone tombs for their dead leaders, known nowadays as megalithic tombs. There are over 160 recorded megaliths in County Mayo, such as Faulagh. Megalithic tombs There are four distinct types of Irish megalithic tombs—court tombs, portal tombs, passage tombs and wedge tombs—examples of all of which can be found in County Mayo. Areas particularly rich in megalithic tombs include Achill, Kilcommon, Ballyhaunis, Moygownagh, Killala and the Behy/Glenurla area around the Céide Fields. Bronze Age (ca. 2,500 BC to 500 BC) Megalithic tomb building continued into the Bronze Age when metal began to be used for tools alongside the stone tools. The Bronze Age lasted approximately from 4,500 years ago to 2,500 years ago (2,500 BC to 500 BC). Archaeological remains from this period include stone alignments, stone circles and fulachta fiadh (early cooking sites). They continued to bury their chieftains in megalithic tombs which changed design during this period, more being of the wedge tomb type and cist burials. Iron Age (ca. 500 BC to AD 325) Around 2,500 years ago the Iron Age took over from the Bronze Age as more and more metalworking took place. This is thought to have coincided with the arrival of Celtic speaking peoples and the introduction of the ancestor of the Irish language. Towards the end of this period, the Roman Empire was at its height in Britain but it is not thought that the Roman Empire extended into Ireland. Remains from this period, which lasted until the Early Christian period began about AD 325 (with the arrival of Saint Patrick into Ireland, as a slave) include crannógs (Lake dwellings), promontory forts, ringforts and souterrains of which there are numerous examples across the county. The Iron Age was a time of tribal warfare and kingships, each fighting neighbouring kings, vying for control of territories and taking slaves. Territories were marked by tall stone markers, Ogham stones, using the first written down words using the Ogham alphabet. 
The Iron Age is the time period in which the mythological tales of the Ulster Cycle and sagas took place, as well as that of the Táin Bó Flidhais, whose narrative is set in mainly in Erris. Early Christian period (ca. AD 325 to AD 800) Christianity came to Ireland around the start of the 5th century. It brought many changes including the introduction of the Latin alphabet. The tribal 'tuatha' and new Christian religious settlements existed side by side. Sometimes it suited the chieftains to become part of the early Churches, other times they remained as separate entities. St. Patrick (4th century) may have spent time in County Mayo and it is believed that he spent forty days and forty nights on Croagh Patrick praying for the people of Ireland. From the middle of the 6th-century hundreds of small monastic settlements were established around the county. Some examples of well-known early monastic sites in Mayo include Mayo Abbey, Aughagower, Ballintubber, Errew Abbey, Cong Abbey, Killala, Turlough on the outskirts of Castlebar, and island settlements off the Mullet Peninsula like the Inishkea Islands, Inishglora and Duvillaun. In 795 the first of the Viking raids took place. The Vikings came from Scandinavia to raid the monasteries as they were places of wealth with precious metal working taking place in them. Some of the larger ecclesiastical settlements erected round towers to prevent their precious items from being plundered and also to show their status and strength against these pagan raiders from the north. There are round towers at Aughagower, Balla, Killala, Turlough and Meelick. The Vikings established settlements that later developed into towns (Dublin, Cork, Wexford, Waterford etc.) but none were in County Mayo. Between the reigns of Kings of Connacht Cathal mac Conchobar mac Taidg (973–1010) and Tairrdelbach Ua Conchobair (1106–1156), various tribal territories were incorporated into the kingdom of Connacht and ruled by the Siol Muirdaig dynasty, based initially at Rathcroghan in County Roscommon, and from 1050 at Tuam. The families of O'Malley and O'Dowd of Mayo served as admirals of the fleet of Connacht, while families such as O'Lachtnan, Mac Fhirbhisigh, and O'Cleary were ecclesiastical and bardic clans. Anglo-Normans (12th to 16th centuries) In AD 1169 when one of the warring kings in the east of Ireland, Dermot MacMurrough, appealed to the King of England for help in his fight with a neighbouring king, the response resulted in the Anglo-Norman colonisation of Ireland. County Mayo came under Norman control in AD 1235. Norman control meant the eclipse of many Gaelic lords and chieftains, chiefly the O'Connors of Connacht. During the 1230s, the Anglo-Normans and Welsh under Richard Mór de Burgh (c. 1194 – 1242) invaded and settled in the county, introducing new families such as Burke, Gibbons, Staunton, Prendergast, Morris, Joyce, Walsh, Barrett, Lynott, Costello, Padden and Price, Norman names are still common in County Mayo. Following the collapse of the lordship in the 1330s, all these families became estranged from the Anglo-Irish administration based in Dublin and assimilated with the Gaelic-Irish, adopting their language, religion, dress, laws, customs and culture and marrying into Irish families. They became "more Irish than the Irish themselves". 
The most powerful clan to emerge during this era were the Mac William Burkes, also known as the Mac William Iochtar (see Burke Civil War 1333–1338), descended from Sir William Liath de Burgh, who defeated the Gaelic-Irish at the Second Battle of Athenry in August 1316. They were frequently at war with their cousins, Clanricarde of Galway, and in alliance with or against various factions of the O'Conor's of Siol Muiredaig and O'Kelly's of Uí Maine. The O'Donnell's of Tyrconnell regularly invaded in an attempt to secure their right to rule. The Anglo-Normans encouraged and established many religious orders from continental Europe to settle in Ireland. Mendicant orders—Augustinians, Carmelites, Dominicans and Franciscans began new settlements across Ireland and built large churches, many under the patronage of prominent Gaelic families. Some of these sites include Cong, Strade, Ballintubber, Errew Abbey, Burrishoole Abbey and Mayo Abbey. During the 15th and 16th centuries, despite regular conflicts between them as England chopped and changed between religious beliefs, the Irish usually regarded the King of England as their King. When Elizabeth I came to the throne in the mid-16th century, the English people, as was customary at that time, followed the religious practices of the reigning monarch and became Protestant. Many Irish people such as Grace O'Malley, the famous pirate queen, had close relationships with the English monarchy, and the English kings and queens were welcome visitors to Irish shores. The Irish however, generally held onto their Catholic religious practices and beliefs. The early plantations of settlers in Ireland began during the reign of Queen Mary in the mid-16th century and continued throughout the long reign of Queen Elizabeth I until 1603. By then the term County Mayo had come into use. In the summer of 1588, the galleons of the Spanish Armada were wrecked by storms along the west coast of Ireland. Some of the hapless Spaniards came ashore in Mayo, only to be robbed and imprisoned, and in many cases slaughtered. Almost all the religious foundations set up by the Anglo-Normans were suppressed in the wake of the Reformation in the 16th century. Protestant settlers from Scotland, England, and elsewhere in Ireland, settled in the County in the early 17th century. Many would be killed or forced to flee because of the 1641 Rebellion, during which a number of massacres were committed by the Catholic Gaelic Irish, most notably at Shrule in 1642. A third of the overall population was reported to have perished due to warfare, famine and plague between 1641 and 1653, with several areas remaining disturbed and frequented by Reparees into the 1670s. 17th and 18th centuries Pirate Queen Grace O'Malley is probably the best-known person from County Mayo between the mid-16th century and the turn of the 17th century. In the 1640s, when Oliver Cromwell overthrew the English monarchy and set up a parliamentarian government, Ireland suffered severely. With a stern regime in absolute control needing to pay its armies and allies, the need to pay them with grants of land in Ireland led to the 'to hell or to Connaught' policies. Displaced native Irish families from other (eastern and southern mostly) parts of the country were either forced to leave the country or were awarded grants of land 'west of the Shannon' and put off their own lands in the east. 
The land in the west was divided and sub-divided between more and more people as huge estates were granted on the best land in the east to those who best pleased the English. Mayo does not seem to have been affected much during the Williamite War in Ireland, though many natives were outlawed and exiled. For the vast majority of people in County Mayo the 18th century was a period of unrelieved misery. Because of the penal laws, Catholics had no hope of social advancement while they remained in their native land. Some left the country altogether: William Brown (1777–1857) left Foxford with his family at the age of nine and, thirty years later, was an admiral in the fledgling Argentine Navy. Today he is a national hero in that country. The general unrest in Ireland was felt just as keenly across Mayo, and as the 19th century approached and news reached Ireland about the American War of Independence and the French Revolution, the downtrodden Irish, constantly suppressed by Government policies and decisions from Dublin and London, began to rally themselves for their own stand against British rule in their country. 1798 saw Mayo become a central part of the United Irishmen Rebellion when General Humbert from France landed in Killala with over 1,000 soldiers to support the main uprising. They marched across the county towards the administrative centre of Castlebar, leading to the Battle of Castlebar. Taking the garrison by surprise, Humbert's army was victorious. He established a 'Republic of Connacht' with John Moore of the Moore family from Moore Hall near Partry as its head. Humbert's army marched on towards Sligo, Leitrim and Longford, where they were suddenly faced with a massive British army and were forced to surrender in less than half an hour. The French soldiers were treated honourably, but for the Irish the surrender meant slaughter. Many died on the scaffold in towns like Castlebar and Claremorris, where the high sheriff for County Mayo, the Honourable Denis Browne, M.P., brother of Lord Altamont, wreaked a terrible vengeance – thus earning for himself the nickname which has survived in folk memory to the present day, 'Donnchadh an Rópa' (Denis of the Rope). In the 18th century and early 19th century, sectarian tensions arose as evangelical Protestant missionaries sought to 'redeem the Irish poor from the errors of Popery'. One of the best known was the Rev. Edward Nangle's mission at Dugort in Achill. These too were the years of the campaign for Catholic Emancipation and, later, for the abolition of the tithes, which a predominantly Catholic population was forced to pay for the upkeep of the clergy of the Established (Protestant) Church. 19th and 20th centuries During the early years of the 19th century, famine was a common occurrence, particularly where population pressure was a problem. The population of Ireland grew to over eight million people prior to the Irish Famine (or Great Famine) of 1845–47. The Irish people depended on the potato crop for their sustenance. Disaster struck in August 1845, when a killer fungus (later diagnosed as Phytophthora infestans) started to destroy the potato crop. When widespread famine struck, about a million people died and a further million left the country. People died in the fields of starvation and disease. The catastrophe was particularly bad in County Mayo, where nearly ninety per cent of the population depended on the potato as their staple food. 
By 1848, Mayo was a county of total misery and despair, with any attempts at alleviating measures in complete disarray. There are numerous reminders of the Great Famine to be seen on the Mayo landscape: workhouse sites, famine graves, sites of soup kitchens, deserted homes and villages and even traces of undug 'lazy-beds' in fields on the sides of hills. Many roads and lanes were built as famine relief measures. There were nine workhouses in the county: Ballina, Ballinrobe, Belmullet, Castlebar, Claremorris, Killala, Newport, Swinford and Westport. A small poverty-stricken place called Knock, County Mayo, made headlines when it was announced that an apparition of the Blessed Virgin Mary, St. Joseph and St. John had taken place there on 21 August 1879, witnessed by fifteen local people. A national movement was initiated in County Mayo during 1879 by Michael Davitt, James Daly, and others, which brought about a major social change in Ireland. Michael Davitt, a labourer whose family had moved to England, joined forces with Charles Stewart Parnell to win back the land for the people from the landlords and stop evictions for non-payment of rents. The organisation became known as the Irish National Land League, and its struggle to win rights for poor farmers in Ireland was known as the Land War. It was in this era of agrarian unrest that a new verb was introduced to the English language by Mayo - "to boycott". Charles Boycott was an English landlord deeply unpopular with his tenants. When Charles Stewart Parnell made a speech in Ennis, County Clare, urging nonviolent resistance against landlords, his tactics were enthusiastically taken up in Mayo against Boycott. The entire Catholic community around Lough Mask in South Mayo, where Boycott had his estate, mounted a campaign of total social ostracisation against him, a tactic that would one day come to bear his name. The campaign against Boycott became a cause célèbre in the British press after he wrote a letter to The Times. The British elite rallied to his cause, and fifty Orangemen from County Cavan and County Monaghan travelled to his estate to harvest the crops, while a regiment of the 19th Royal Hussars and more than 1,000 men of the Royal Irish Constabulary were deployed to protect the harvesters. However, the cost of doing this was completely uneconomic: it cost the British government somewhere in the region of £10,000 simply to harvest £500 worth of crops. Boycott sold off the estate, and the British government's resolve to try to break boycotts in this way completely dissolved, resulting in victory for the tenants. The "Land Question" was gradually resolved by a series of state-aided land purchase schemes. The tenants became the owners of their lands under the newly set-up Land Commission. A Mayo nun, Mother Agnes Morrogh-Bernard, set up the Foxford Woollen Mill in 1892. She made Foxford synonymous throughout the world with high-quality tweeds, rugs and blankets. Mayo, like all parts of what became the Irish Free State, was affected by the events of the Irish revolutionary period, including the Irish War of Independence and the subsequent Irish Civil War. Major John MacBride of Westport was amongst those who took part in the 1916 Easter Rising and was subsequently executed by the British for his participation. His death served as a rallying call for Republicans in Mayo and led Mayo men such as P. J. Ruttledge, Ernie O'Malley, Michael Kilroy and Thomas Derrig to rise up during the War of Independence. 
In the ensuing Civil War, many of these leading figures chose the Anti-treaty side and fought in bitter battles such as those in Ballina, which changed hands between pro and anti-treaty forces a number of times. In the aftermath of the Civil War, there was a consolidation of many of those with anti-treaty feelings into the new political party Fianna Fáil. PJ Ruttledge and Thomas Derrig would become founding members of the party and served in Éamon de Valera's first-ever Fianna Fáil government as ministers. Mayo politicians would continue to contribute to the national political scene over the decades. In 1990 Mary Robinson, from County Mayo, became the first-ever female President of Ireland, and is widely credited with revitalising the position with importance and focus it had never possessed before. During her tenure she unveiled Ireland's National Famine Memorial which is situated in the village of Murrisk, County Mayo. In 2011 Enda Kenny became the first politician from a Mayo constituency and the second Mayo native to serve as Taoiseach, the head of government of Ireland. Kenny went on to become the longest-serving Fine Gael Taoiseach in Irish history. Clans and families In the early historic period, what is now County Mayo consisted of a number of large kingdoms, minor lordships and tribes of obscure origins. They included: Calraige – pre-historic tribe found in the parishes of Attymass, Kilgarvan, Crossmolina and the River Moy Ciarraige – settlers from Munster found in south-east Mayo around Kiltimagh and west County Roscommon Conmaicne – a people located in the barony of Kilmaine, alleged descendants of Fergus mac Róich Fir Domnann – branch of the Laigin, originally from Britain, located in Erris Gamanraige – pre-historic kings of Connacht, famous for battle with Medb & Ailill of Cruachan in Táin Bó Flidhais. Based in Erris, Carrowmore Lake, Killala Bay, Lough Conn Gailenga – kingdom extending east from Castlebar to adjoining parts of Mayo Uí Fiachrach Muidhe – a sept of the Connachta, based around Ballina, some of whom were kings of Connacht Partraige – apparently a pre-Gaelic people of Lough Mask and Lough Carra, namesakes of Partry Umaill – kingdom surrounding Clew Bay, east towards Castlebar, its rulers adopted the surname O'Malley Politics Local government and political subdivisions Mayo County Council is the authority responsible for local government. As a county council, it is governed by the Local Government Act 2001. The county is divided into four municipal districts of Ballina, Castlebar, Claremorris and Westport–Belmullet, each with a population of roughly 32,000 to 34,000 people. The council is responsible for housing and community, roads and transportation, urban planning and development, amenity and culture, and environment. County Mayo is divided into six local electoral areas (LEAs). Councillors are elected for a five-year term. The county town is at Áras an Contae in Castlebar, the main population centre located in the centre of the county. National politics Since 2016, Mayo has been represented on a national political level by four TDs who represent the constituency of Mayo in Dáil Eireann. Previous to 2016 the constituency had five TDs but this was reduced based on the county's current population in line with proportional representation. The electoral divisions of Cong, Dalgan, Houndswood, Kilmaine, Neale, Shrule, in the former Rural District of Ballinrobe, are in Galway West. 
Voting patterns and political history Historically, Mayo has tended to vote Fianna Fáil, as Fianna Fáil managed to position itself in the 20th century as the party best placed to represent farmers with small holdings, who were plentiful in Mayo. With so many of Mayo's electorate being small farmers, the county became a base for the emergence of Clann na Talmhan, an agrarian party, in the 1940s and 1950s. Clann na Talmhan's second leader, Joseph Blowick, came from South Mayo, where he held his seat. The party did not last in the long run, though, as it was unable to hold together its voting bloc of both small farmers in the west of Ireland and large farmers in the east. Towards the start of the 21st century, the balance of power in Mayo began to shift towards Fine Gael, thanks in part to the emergence of Enda Kenny and Michael Ring. Kenny, who became Taoiseach in 2011, led Fine Gael to a historic victory in the 2011 Irish general election, which included securing four out of five available seats for his party in Mayo. In 2020, Rose Conway-Walsh came within 200 votes of topping the poll and became the first Sinn Féin TD for Mayo since 1927, riding a nationwide surge for Sinn Féin that year. Despite being historically the third-largest party in Ireland, Labour has struggled to make inroads into Mayo. The party has only ever had one TD for Mayo, former party leader Thomas J. O'Connell, who represented South Mayo between 1927 and 1932. While Labour has not proven itself electorally successful in Mayo, the county has provided important members to the Labour Party. Mary Robinson from Ballina became the first-ever female President of Ireland as a Labour candidate, while Pat Rabbitte, originally from Claremorris, served as leader of the Labour Party from 2002 to 2007. Serving alongside Rabbitte was Emmet Stagg, one of the longest-standing Labour TDs of the modern era, himself from Hollymount, not far from Claremorris. Demographics The county has experienced perhaps the highest emigration of any county in Ireland. In the 1840s–1880s, waves of emigrants left the rural townlands of the county. Initially triggered by the Great Famine and later driven by the search for work in the newly industrialising United Kingdom and the United States, this emigration caused the population to plummet from 388,887 in 1841 to 199,166 in 1901. It reached a low of 109,525 in 1971. Emigration slowed dramatically as the Irish economy began to expand in the 1990s and early 2000s, and the population of Mayo increased from 110,713 in 1991 to 130,638 in 2011. Religion In the 2006 National Census, the religious demographic breakdown for County Mayo was 114,215 Roman Catholics, 2,476 Church of Ireland, 733 Muslims, 409 other Christians, 280 Presbyterians, 250 Orthodox Christians, 204 Methodists, 853 other stated religions, 3,267 no religion and 1,152 no stated religion. Irish language 9% of the population of County Mayo live in the Gaeltacht. The Gaeltacht Irish-speaking region in County Mayo is the third-largest in Ireland, with 10,886 inhabitants. Tourmakeady is the largest village in this area. All schools in the area use Irish as the language of instruction. Mayo has four gaelscoileanna in its four major towns, providing primary education to students through Irish. Transport Rail Westport railway station is the terminus of the Dublin to Westport rail service. Railway stations are also located at Ballyhaunis, Claremorris, Castlebar, Manulla, Ballina and Foxford. 
All railway stations are located on the same railway line, with the exception of Ballina and Foxford which requires passengers to change at Manulla Junction. There are currently four services each way every day on the line. There are also proposals to reopen the currently disused Western Railway Corridor connecting Limerick with Sligo. Road There are a number of national primary roads in the county including the N5 road connecting Westport with Dublin, the N17 road connecting the county with Galway and Sligo and the N26 road connecting Ballina with Dublin via the N5. There are a number of national secondary roads in the county also including the N58 road, N59 road, N60 road, N83 road & N84 road. As of 2021, a new road running from northwest of Westport to east of Castlebar is under construction. The road is a dual carriageway with junctions at the N59, N84 and N60 and will open in late 2022. Air Ireland West Airport Knock is an international airport located in the county. The name is derived from the nearby village of Knock. Recent years have seen the airport's passenger numbers grow to over 650,000 yearly with a number of UK and European destinations. August 2014 saw the airport have its busiest month on record with 102,774 passengers using the airport. Places of interest Media Newspapers in County Mayo include The Mayo News, the Connaught Telegraph, the Connacht Tribune, Western People, and Mayo Advertiser, which is Mayo's only free newspaper. Mayo Now is a monthly entertainment and culture magazine for the towns of Ballina, Foxford, Killala, Crossmolina and surrounding areas – this is out on the first Friday of each month. Mayo has its own online TV channel Mayo TV which was launched in 2011. It covers news and events from around the county and regularly broadcasts live to a worldwide audience. Local radio stations include Erris FM, Community Radio Castlebar, Westport Community Radio, BCR FM (Ballina Community Radio) and M.W.R. (Mid West Radio). The documentary Pipe Down, which won best feature documentary at the 2009 Waterford Film Festival, was made in Mayo. Energy Energy controversy There is local resistance to Shell's decision to process raw gas from the Corrib gas field at an onshore terminal. In 2005, five local men were jailed for contempt of court after refusing to follow an Irish court order. Subsequent protests against the project led to the Shell to Sea and related campaigns. Energy audit The Mayo Energy Audit 2009–2020 is an investigation into the implications of peak oil and subsequent fossil fuel depletion for a rural county in west of Ireland. The study draws together many different strands to examine current energy supply and demand within the area of study, and assesses these demands in the face of the challenges posed by the declining production of fossil fuels and expected disruptions to supply chains, and by long-term economic recession. Sport The Mayo GAA senior team last won the Sam Maguire Cup in 1951, when the team was captained by Seán Flanagan. The team's third title followed victories in 1936 and the previous year, 1950. Since 1951, the team have made numerous All-Ireland Final appearances (in 1989, twice in 1996, 1997, 2004, 2006, 2012, 2013, twice again in 2016 against Dublin, with their latest appearance coming in 2017 against Dublin, again), though the team have failed on all occasions to achieve victory over their opponents. The team's unofficial supporters club are Mayo Club '51, named after the last team who won the Sam Maguire. 
The county colours of Mayo GAA are traditionally green and red. The county's most popular association football teams are Westport United and Castlebar Celtic. Although Gaelic football and association football are the most popular sports in the county, other sports such as rugby, basketball, hurling, swimming, tennis, badminton, athletics, handball and racquetball are also played. Notable people Richard Bourke, 6th Earl of Mayo (1822–1872) – Viceroy of India (1869–1872) Patrick Browne (1720–1790) – physician and botanist Michael Davitt (1846–1906) – Irish republican, agrarian campaigner, labour leader, Home Rule politician and Member of Parliament (MP) who founded the Irish National Land League Grace O'Malley (circa 1530 – circa 1603) – Lord of the O'Malley dynasty in the 16th century Admiral William Brown (1777–1857) – founder of the Argentine Navy Charles Haughey (1925–2006) – Taoiseach of Ireland (1979–1982; 1987–1992) Enda Kenny (born 1951) – politician, leader of Fine Gael (2002–2017), and Taoiseach of Ireland (2011–2017) John MacBride (1868–1916) – republican and military leader, executed by the British for his participation in the 1916 Easter Rising William O'Dwyer (1890–1964) – 100th mayor of New York City (1946–1950) Mary Robinson (born 1944) – first female President of Ireland (1990–1997) and United Nations High Commissioner for Human Rights Sally Rooney (born 1991) – author (Conversations with Friends, Normal People) and screenwriter Martin Sheridan – Olympic Games gold medalist representing the United States Louis Walsh (born 1952) – entertainment manager and judge on The X Factor (UK) and Ireland's Got Talent See also High Sheriff of Mayo List of abbeys and priories in the Republic of Ireland (County Mayo) List of loughs of County Mayo List of Mayo people List of mountains and hills of County Mayo List of rivers of County Mayo List of roads of County Mayo Lord Lieutenant of Mayo Mayo County Council River Robe Táin Bó Flidhais References External links Connaught Telegraph County Mayo: An Outline History Family History in North County Mayo Historical Ballinrobe Irish language in Mayo Wild Atlantic Way Mayo Route Map and Guide Mayo.ie Mayo County Council's website Mayo News The Mayo Peace Park and Garden of Remembrance Western People
https://en.wikipedia.org/wiki/County%20Fermanagh
County Fermanagh
County Fermanagh is one of the thirty-two counties of Ireland, one of the nine counties of Ulster and one of the six counties of Northern Ireland. The county covers an area of 1,691 km2 (653 sq mi) and has a population of 61,805 as of 2011. Enniskillen is the county town and the largest settlement in both size and population. Fermanagh is one of four counties of Northern Ireland to have a majority of its population from a Catholic background, according to the 2011 census. Geography Fermanagh is situated in the southwest corner of Northern Ireland. It spans an area of 1,851 km2 (715 sq mi), accounting for 13.2% of the landmass of Northern Ireland. Nearly a third of the county is covered by lakes and waterways, including Upper and Lower Lough Erne and the River Erne. Forests cover 14% of the landmass (42,000 hectares). It is the only county in Northern Ireland that does not border Lough Neagh. The county has three prominent upland areas: the expansive West Fermanagh Scarplands to the southwest of Lough Erne, which rise to about 350m; the Sliabh Beagh hills, situated to the east on the Monaghan border; and the Cuilcagh mountain range, located along Fermanagh's southern border, which contains Cuilcagh, the county's highest point, at 665m. The county borders: County Tyrone to the north-east, County Monaghan to the south-east, County Cavan to the south-west, County Leitrim to the west, and County Donegal to the north-west. Fermanagh is by far the least populous of Northern Ireland's six counties, with just over one-third the population of Armagh, the next least populous county. It is approximately from Belfast and from Dublin. The county town, Enniskillen, is the largest settlement in Fermanagh, situated in the middle of the county. According to the Köppen climate classification, the county has a temperate oceanic climate (Cfb), with cool winters, mild humid summers, and a lack of temperature extremes. The National Trust for Places of Historic Interest or Natural Beauty manages three sites of historic and natural beauty in the county: Crom Estate, Florence Court, and Castle Coole. Geology The oldest sediments in the county are found north of Lough Erne. These so-called red beds were formed approximately 550 million years ago. Extensive sandstone can be found in the eastern part of the county, laid down during the Devonian, 400 million years ago. Much of the rest of the county's sediments are shale and limestone dating from the Carboniferous, 354 to 298 million years ago. These softer sediments have produced extensive cave systems such as the Shannon Cave, the Marble Arch Caves and the Caves of the Tullybrack and Belmore hills. The Carboniferous shale exists in several counties of northwest Ireland, an area known colloquially as the Lough Allen basin. The basin is estimated to contain 9.4 trillion cubic feet of natural gas, equivalent to 1.5 billion barrels of oil. The county is situated over a sequence of prominent faults, primarily the Killadeas – Seskinore Fault, the Tempo – Sixmilecross Fault, the Belcoo Fault and the Clogher Valley Fault, which cross-cuts Lough Erne. History The Menapii are the only known Celtic tribe specifically named on Ptolemy's 150 AD map of Ireland, where they located their first colony, Menapia, on the Leinster coast circa 216 BC. They later settled around Lough Erne, becoming known as the Fir Manach, and giving their name to Fermanagh and Monaghan. Mongán mac Fiachnai, a 7th-century King of Ulster, is the protagonist of several legends linking him with Manannán mac Lir. 
They spread across Ireland, evolving into historic Irish (also Scottish and Manx) clans. The Annals of Ulster, which cover medieval Ireland from AD 431 to AD 1540, were written at Belle Isle on Lough Erne, near Lisbellaw. Fermanagh was a stronghold of the Maguire clan, and Donn Carrach Maguire (died 1302) was the first of the chiefs of the Maguire dynasty. However, on the confiscation of lands relating to Hugh Maguire, Fermanagh was divided, in a similar manner to the other five escheated counties, among Scottish and English undertakers and native Irish. The baronies of Knockninny and Magheraboy were allotted to Scottish undertakers, those of Clankelly, Magherastephana and Lurg to English undertakers, and those of Clanawley, Coole, and Tyrkennedy to servitors and natives. The chief families to benefit under the new settlement were the families of Cole, Blennerhasset, Butler, Hume, and Dunbar. Fermanagh was made into a county by a statute of Elizabeth I, but it was not until the time of the Plantation of Ulster that it was finally brought under civil government. The closure of all the lines of the Great Northern Railway (Ireland) within County Fermanagh in 1957 left the county as the first non-island county in the UK without a railway service. Administration The county was administered by Fermanagh County Council from 1899 until the abolition of county councils in Northern Ireland in 1973. With the creation of Northern Ireland's district councils, Fermanagh District Council became the only one of the 26 that contained all of the county from which it derived its name. After the re-organisation of local government in 2015, Fermanagh was still the only county wholly within one council area, namely Fermanagh and Omagh District Council, albeit that it constituted only a part of that entity. For the purposes of elections to the UK Parliament, the territory of Fermanagh is part of the Fermanagh and South Tyrone Parliamentary Constituency. This constituency elected Provisional IRA hunger-striker Bobby Sands as a member of parliament in the April 1981 Fermanagh and South Tyrone by-election, shortly before his death. Demography On Census Day, 27 March 2011, the usually resident population of Fermanagh Local Government District (whose borders were very similar to those of the traditional County Fermanagh) was 61,805. Of these: 0.93% were from an ethnic minority population and the remaining 99.07% were white (including Irish Traveller) 59.16% belong to or were brought up in the Catholic religion and 37.78% belong to or were brought up in a 'Protestant and Other Christian (including Christian related)' religion 37.20% indicated that they had a British national identity, 36.08% had an Irish national identity and 29.53% had a Northern Irish national identity Industry and tourism Agriculture and tourism are two of the most important industries in Fermanagh. The main types of farming in the area are beef, dairy, sheep, pigs and some poultry. Most of the agricultural land is used as grassland for grazing and silage or hay rather than for other crops. The waterways are extensively used by cabin cruisers, other small pleasure craft and anglers. The main town of Fermanagh is Enniskillen ('Ceithleann's island'). The island town hosts a range of attractions including the Castle Coole Estate and Enniskillen Castle, which is home to the museum of The Royal Inniskilling Fusiliers and the 5th Royal Inniskilling Dragoon Guards. Fermanagh is also home to The Boatyard Distillery, a distillery producing gin. 
Attractions outside Enniskillen include: Belleek Pottery Castle Archdale Crom Estate Cuilcagh Boardwalk Trail Devenish Island Florence Court Marble Arch Caves Tempo Manor Settlements Large towns (population of 18,000 or more and under 75,000 at 2001 Census) none Medium towns (population of 10,000 or more and under 18,000 at 2001 Census) Enniskillen Small towns (population of 4,500 or more and under 10,000 at 2001 Census) none Intermediate settlements (population of 2,250 or more and under 4,500 at 2011 Census) Irvinestown Lisnaskea Villages (population of 1,000 or more and under 2,250 at 2001 Census) Ballinamallard Lisbellaw Small villages or hamlets (population of less than 1,000 at 2001 Census) Ballycassidy Belcoo Bellanaleck Belleek Boho Brookeborough Clabby Derrygonnelly Derrylin Ederney Florencecourt Garrison Kesh Maguiresbridge Newtownbutler Rosslea Teemore Tempo Wattlebridge Subdivisions Baronies Clanawley Clankelly Coole Knockninny Lurg Magheraboy Magherastephana Tirkennedy Parishes Townlands Media Newspapers The Fermanagh Herald The Impartial Reporter Sport Fermanagh GAA has never won a Senior Provincial or an All-Ireland title in any Gaelic games. Only Ballinamallard United F.C. take part in the Northern Ireland football league system. All other Fermanagh clubs play in the Fermanagh & Western FA league systems. Fermanagh Mallards F.C. played in the Women's Premier League until 2013. Enniskillen RFC was founded in 1925 and is still going. There is also a rugby league team, the Fermanagh Redskins. Famous football players from Fermanagh include Sandy Fulton, Jim Cleary, Roy Carroll, Harry Chatton, Barry Owens and Kyle Lafferty. Notable people Famous people born, raised in or living in Fermanagh include: John Armstrong (1717–1795), born in Fermanagh, Major General in the Continental Army and delegate in the Continental Congress Samuel Beckett (1906–1989), author and playwright from Foxrock in Dublin, educated at Portora Royal School The 1st Viscount Brookeborough, Prime Minister of Northern Ireland, 1943–1963 Denis Parsons Burkitt (1911–1993), doctor, discoverer of Burkitt's lymphoma Roy Carroll (born 1977), association footballer Edward Cooney (1867–1960), evangelist and early leader of the Cooneyite and Go-Preachers Brian D'Arcy (born 1945), C.P., Passionist priest and media personality Brendan Dolan (born 1973), professional darts player for the PDC Adrian Dunbar (born 1958), actor Arlene Foster, Baroness Foster of Aghadrumsee (born 1970), politician Neil Hannon (born 1970), musician Robert Kerr (1882–1963), athlete and Olympic gold medalist Kyle Lafferty (born 1987), Northern Ireland international association footballer Charles Lawson (born 1959), actor (plays Jim McDonald in Coronation Street) Francis Little (1822–1890), born in Fermanagh, Wisconsin State Senator Terence MacManus (c. 
1823–1861), leader in the Young Irelander Rebellion of 1848 Michael Magner (1840–97), recipient of the Victoria Cross Peter McGinnity, Gaelic footballer, Fermanagh's first winner of an All-Star Award Martin McGrath, Gaelic footballer, All-Star winner Ciarán McMenamin (born 1975), actor Gilla Mochua Ó Caiside (12th century), poet Aurora Mulligan, director Barry Owens, Gaelic footballer, two-time All-Star winner Sean Quinn (born 1947), entrepreneur Michael Sleavon (1826–1902), recipient of the Victoria Cross Patrick Treacy, author and one-time physician to Michael Jackson Joan Trimble (1915–2000), pianist and composer Oscar Wilde (1854–1900), author and playwright, educated at Portora Royal School Gordon Wilson (1927–1995), peace campaigner and Irish senator Surnames The most common surnames in County Fermanagh at the time of the United Kingdom Census of 1901 were: Maguire McManus Johnston Armstrong Gallagher Elliott Murphy Reilly Cassidy Wilson Railways The railway lines in County Fermanagh connected Enniskillen railway station with Derry from 1854, Dundalk from 1861, Bundoran from 1868 and Sligo from 1882. The railway companies that served the county, prior to the formation of the Great Northern Railway (Ireland), were the Londonderry and Enniskillen Railway, the Enniskillen and Bundoran Railway and the Dundalk and Enniskillen Railway, which was later renamed the Irish North Western Railway; the merger of these companies formed the Great Northern Railway (Ireland). By 1883 the Great Northern Railway (Ireland) had absorbed all the lines except the Sligo, Leitrim and Northern Counties Railway, which remained independent throughout its existence. In October 1957 the Government of Northern Ireland closed the GNR line, which made it impossible for the SL&NCR to continue and forced it also to close. The nearest railway station to Enniskillen is Sligo station, which is served by trains to Dublin Connolly and is operated by Iarnród Éireann. The Dublin–Sligo railway line has a two-hourly service run by Iarnród Éireann. The connecting bus from Sligo via Manorhamilton to Enniskillen is route 66, operated by Bus Éireann. See also Abbeys and priories in Northern Ireland (County Fermanagh) Castles in County Fermanagh Extreme points of the United Kingdom High Sheriff of Fermanagh List of parishes of County Fermanagh List of places in County Fermanagh List of townlands in County Fermanagh Lord Lieutenant of Fermanagh People from County Fermanagh Notes References Clogher Record "Fermanagh" A Dictionary of British Place-Names. A. D. Mills. Oxford University Press, 2003. Oxford Reference Online. Oxford University Press. Northern Ireland Public Libraries. 25 July 2007 "Fermanagh" Encyclopædia Britannica. 2007. Encyclopædia Britannica Online Library Edition. 25 July 2007 <Britannica Library>. Fermanagh: its special landscapes: a study of the Fermanagh countryside and its heritage / Department of the Environment for Northern Ireland. – Belfast: HMSO, 1991 Livingstone, Peadar. – The Fermanagh story: a documented history of the County Fermanagh from the earliest times to the present day – Enniskillen: Cumann Seanchais Chlochair, 1969. Lowe, Henry N. – County Fermanagh 100 years ago: a guide and directory 1880. – Belfast: Friar's Bush Press, 1990. Parke, William K. – A Fermanagh Childhood. Derrygonnelly, Co Fermanagh: Friar's Bush Press, 1988. Impartial Reporter Fermanagh Herald External links Fermanagh on the interactive map of the counties of Great Britain and Ireland – Wikishire A folk history of Fermanagh
2,624
5,835
https://en.wikipedia.org/wiki/Christian%20%28disambiguation%29
Christian (disambiguation)
Christian most often refers to: Christians, people who follow or adhere to Christianity pertaining to Christianity Christian or The Christian may also refer to: Arts and entertainment Film Christian (1939 film), a Czech comedy film Christian (1989 film), a Danish drama film The Christian (1911 film), an Australian silent film The Christian (1914 film), an American silent film directed by Frederick A. Thomson The Christian (1915 film), a British silent film directed by George Loane Tucker The Christian (1923 film), an American silent film drama directed by Maurice Tourneur Music "Christian" (song), a 1982 song by China Crisis Christian the Christian, a 2004 album by Lackthereof The Christians (band), a UK band from Liverpool, formed 1985 Other uses in arts and entertainment The Christian, an 1897 novel and play by Hall Caine, adapted for Broadway The Christian (magazine), the title of several magazines Christian, the protagonist in John Bunyan's novel The Pilgrim's Progress People Christian (given name), including a list of people and fictional characters with the given name Christian (surname), including a list of people with the surname Christian of Clogher (d. 1138), saint and Irish bishop Christian of Oliva, a 13th-century Cistercian monk Christian (bishop of Aarhus), fl. c. 1060 to c. 1102 Christian (footballer, born 1995) (Christian Savio Machado) Christian (footballer, born 2000) (Christian Roberto Alves Cardoso) Christian (singer) (Gaetano Cristiano Rossi, born 1949) Christian, ring name of professional wrestler Christian Cage (William Jason Reso, born November 30, 1973) Prince Christian (disambiguation) Christian I (disambiguation) Christian II (disambiguation) Christian III (disambiguation) Other uses Christian the lion (born 1969) Christian, West Virginia, a place in the U.S. See also The Christians (disambiguation) St. Jude storm of 2013, also called Cyclone Christian Christian Doctrine in United States federal law, arising from G. L. Christian and Associates v. United States
2,625
5,841
https://en.wikipedia.org/wiki/Transport%20in%20Colombia
Transport in Colombia
Transport in Colombia is regulated by the Ministry of Transport. Road travel is the main means of transport; 69 percent of cargo is transported by road, as compared with 27 percent by railroad, 3 percent by internal waterways, and 1 percent by air. History Indigenous peoples influence The indigenous peoples in Colombia used, and some continue to use, the waterways for transportation, travelling by raft and canoe. Spanish influence With the arrival of the Europeans, the Spaniards brought horses (which developed into the Paso Fino), mules and donkeys, which they later used for ranching duties during the Spanish colonization of the Americas. Horses contributed greatly to the transport of the Spanish conquerors and colonizers. They also introduced the wheel, and brought wooden carts and carriages to facilitate their transport. The Spaniards also developed the first roads, which were rudimentary and mostly located in the Caribbean region. Due to Colombia's rough terrain, communication between regions was difficult, which reduced the effectiveness of the central government and left some regions isolated. Maritime navigation developed locally after Spain lifted its restrictions on ports within the Spanish Empire, fostering mercantilism. The Spanish also transported African slaves and forcibly relocated many indigenous tribes throughout Colombia. Post-independence With independence and the influence of the European Industrial Revolution, the main mode of transport in Colombia became river navigation, mainly on the Magdalena River, which connected Honda, in inland Colombia, with Barranquilla on the Caribbean Sea for trade with the United States and Europe. This also brought a large wave of immigrants from European and Middle Eastern countries. The industrialization process and transportation in Colombia were affected by the internal civil wars that erupted after independence from Spain and that continued throughout the 19th and 20th centuries. Standardization During the late 19th century, European and American companies introduced railways to carry locally produced raw materials to the ports for export and to move imports arriving from Europe. Steamships began carrying Colombians, immigrants and goods from Europe and the United States over the Magdalena River. The Ministry of Transport was created in 1905, during the presidency of Rafael Reyes, under the name Ministerio de Obras Públicas y Transporte (Ministry of Public Works and Transport), with the main function of taking care of national assets, including mines, oil (fuel), patents and trademarks, railways, roads, bridges, national buildings and land without landowners. In the early 20th century, regulations for road and highway maintenance and construction were established. Rivers were cleaned, dredged and channeled, and the navigational industry was organized. Public works districts were created, as well as the Ferrocarriles Nacionales de Colombia (National Railways of Colombia). Among other major projects developed were the aqueduct of Bogotá, La Regadera Dam and the Vitelma Water Treatment Plant. The Ministry also created the National Institute of Transit (Instituto Nacional de Tránsito, INTRA) under the Transport and Tariffs Directorate, which was in charge of designing the first national roads plan with the support of many foreign multinational construction companies. 
Aviation was born in Barranquilla with the creation of SCADTA in 1919 a joint venture between Colombians and Germans that delivered mail to the main cities of Colombia which later merged with SACO to form Avianca. Infrastructure Railways Colombia has of rail lines, of which are gauge and of which are gauge. However, only of lines are still in use. Rail transport in Colombia remains underdeveloped. The national railroad system, once the country's main mode of transport for freight, has been neglected in favor of road development and now accounts for only about a quarter of freight transport. Passenger-rail use was suspended in 1992 resumed at the end of the 1990s, and as of 2017 it is considered abandoned (at least for long distances). Fewer than 165,000 passenger journeys were made in 1999, as compared with more than 5 million in 1972, and the figure was only 160,130 in 2005. The two still-functioning passenger trains are: one between Puerto Berrío and García Cadena, and another one between Bogotá and Zipaquirá. Short sections of railroad, mainly the Bogotá-Atlantic rim, are used to haul goods, mostly coal, to the Caribbean and Pacific ports. In 2005 a total of 27.5 million metric tons of cargo were transported by rail. Although the nation's rail network links seven of the country's 10 major cities, very little of it has been used regularly because of security concerns, lack of maintenance, and the power of the road transport union. During 2004–6, approximately 2,000 kilometers of the country's rail lines underwent refurbishment. This upgrade involved two main projects: the 1,484-kilometer line linking Bogotá to the Caribbean Coast and the 499-kilometer Pacific coastal network that links the industrial city of Cali and the surrounding coffee-growing region to the port of Buenaventura. Roads The three main north–south highways are the Caribbean, Eastern, and Central Trunk Highways (troncales). Estimates of the length of Colombia's road system in 2004 ranged from 115,000 kilometers to 145,000 kilometers, of which fewer than 15 percent were paved. However, according to 2005 data reported by the Colombian government, the road network totaled 163,000 kilometers, 68 percent of which were paved and in good condition. The increase may reflect some newly built roads. President Uribe has vowed to pave more than 2,500 kilometers of roads during his administration, and about 5,000 kilometers of new secondary roads were being built in the 2003–6 period. Despite serious terrain obstacles, almost three-quarters of all cross-border dry cargo is now transported by road, 105,251 metric tons in 2005. Highways are managed by the Colombian Ministry of Transport through the National Roads Institute. The security of the highways in Colombia is managed by the Highway Police unit of the Colombian National Police. Colombia is crossed by the Panamerican Highway. Ports, waterways, and merchant marine Seaports handle around 80 percent of international cargo. In 2005 a total of 105,251 metric tons of cargo were transported by water. Colombia's most important ocean terminals are Barranquilla, Cartagena, and Santa Marta on the Caribbean Coast and Buenaventura and Tumaco on the Pacific Coast. Exports mostly pass through the Caribbean ports of Cartagena and Santa Marta, while 65 percent of imports arrive at the port of Buenaventura. Other important ports and harbors are Bahía de Portete, Leticia, Puerto Bolívar, San Andrés, Santa Marta, and Turbo. 
Since privatization was implemented in 1993, the efficiency of port handling has increased greatly. Privatization, however, has had negative impacts as well. In Buenaventura, for example, privatization of the harbor has increased unemployment and social issues. There are plans to construct a deep-water port at Bahía Solano. The main inland waterways total about 18,200 kilometers, 11,000 kilometers of which are navigable by riverboats. A well-developed and important form of transport for both cargo and passengers, inland waterways transport approximately 3.8 million metric tons of freight and more than 5.5 million passengers annually. Main inland waterways are the Magdalena–Cauca River system, which is navigable for 1,500 kilometers; the Atrato, which is navigable for 687 kilometers; the Orinoco system of more than five navigable rivers, which total more than 4,000 kilometers of potential navigation (mainly through Venezuela); and the Amazonas system, which has four main rivers totaling 3,000 navigable kilometers (mainly through Brazil). The government is planning an ambitious program to more fully utilize the main rivers for transport. In addition, the navy's riverine brigade has been patrolling waterways more aggressively in order to establish safer river transport in the more remote areas in the south and east of the country. The merchant marine totals 17 ships (1,000 gross registered tons or more), including four bulk, 13 cargo, one container, one liquefied gas, and three petroleum tanker ships. Colombia also has seven ships registered in other countries (Antigua and Barbuda, two; Panama, five). Civil Aviation The Special Administrative Unit of Civil Aeronautics is responsible of regulating and controlling the use of air space by civil aviation. The customs/immigration issues are controlled by the Departamento Administrativo de Seguridad (DAS). Colombia has well-developed air routes and an estimated total of 984 airports, 100 of which have paved runways, plus two heliports. Of the 74 main airports, 20 can accommodate jet aircraft. Two airports are more than 3,047 meters in length, nine are 2,438–3,047 meters, 39 are 1,524–2,437 meters, 38 are 914–1,523 meters, 12 are shorter than 914 meters, and 880 have unpaved runways. The government has been selling its stake in local airports in order to allow their privatization. The country has 40 regional airports, and the cities of Bogotá, Medellín, Cali, Barranquilla, Bucaramanga, Cartagena, Cúcuta, Leticia, Pereira, Armenia, San Andrés, and Santa Marta have international airports. Bogotá's El Dorado International Airport handles 550 million metric tons of cargo and 22 million passengers a year, making it the largest airport in Latin America in terms of cargo and the third largest in passenger numbers. Urban transport Urban transport systems have been developed in Bogotá, Medellín, Cali and Barranquilla. Traffic congestion in Bogotá has been greatly exacerbated by the lack of rail transport. However, this problem has been alleviated somewhat by the development of one of the world's largest and highest capacity bus rapid transit (BRT) systems, known as the TransMilenio (opened 2000), and the restriction of vehicles through a daily, rotating ban on private cars depending on plate numbers. Bogotá's system consists of bus and minibus services managed by both private- and public-sector enterprises. 
Since 1995 Medellín has had a modern urban railway referred to as the Metro de Medellín, which also connects with the cities of Itagüí, Envigado, and Bello. An elevated cable car system, Metrocable, was added in 2004 to link some of Medellín's poorer mountainous neighborhoods with the Metro de Medellín. A BRT line called Transmetro began operating in 2011, with a second line added in 2013. Other cities have also installed BRT systems such as Cali with a six line system (opened 2008), Barranquilla with two lines (opened 2010), Bucaramanga with one line (opened 2010), Cartagena with one line (opened 2015) and Pereira with three lines (opened 2006). A light rail line in Barranquilla is planned. Pipelines Colombia has 4,350 kilometers of gas pipelines, 6,134 kilometers of oil pipelines, and 3,140 kilometers of refined-products pipelines. The country has five major oil pipelines, four of which connect with the Caribbean export terminal at Puerto Coveñas. Until at least September 2005, the United States funded efforts to help protect a major pipeline, the 769-kilometer-long Caño Limón–Puerto Coveñas pipeline, which carries about 20 percent of Colombia's oil production to Puerto Coveñas from the guerrilla-infested Arauca region in the eastern Andean foothills and Amazonian jungle. The number of attacks against pipelines began declining substantially in 2002. In 2004 there were only 17 attacks against the Caño Limón–Puerto Coveñas pipeline, down from 170 in 2001. However, a bombing in February 2005 shut the pipeline for several weeks, and attacks against the electrical gird system that provides energy to the Caño Limón oilfield have continued. New oil pipeline projects with Brazil and Venezuela are underway. In addition, the already strong cross-border trade links between Colombia and Venezuela were solidified in July 2004 with an agreement to build a US$320 million natural gas pipeline between the two countries, to be completed in 2008. See also Megabús Railway stations in Colombia References External links Colombian Ministry of Transport Invias - Colombian National Institute of Highways Colombian Maritime and Fluvial Port Authority Colombian Civil Aerospace Authority
2,630
5,844
https://en.wikipedia.org/wiki/History%20of%20Colombia
History of Colombia
The history of Colombia includes the settlements and societies of indigenous peoples, most notably the Muisca Confederation, Quimbaya Civilization, and Tairona Chiefdoms; the Spanish arrived in 1499 and initiated a period of annexation and colonization, most notably the Spanish conquest, ultimately creating the Viceroyalty of New Granada, with its capital at Bogotá. Independence from Spain was won in 1819, but by 1830 the "Gran Colombia" Federation was dissolved. What is now Colombia and Panama emerged as the Republic of New Granada. The new nation experimented with federalism as the Granadine Confederation (1858), and then the United States of Colombia (1863), before the Republic of Colombia was finally declared in 1886, a period also marked by constant political violence in the country. Panama seceded in 1903. Since the 1960s, the country has suffered from an asymmetric low-intensity armed conflict, which escalated in the 1990s, but then decreased from 2005 onward. The legacy of Colombia's history has resulted in a rich cultural heritage, while the country's varied geography and imposing landscape have resulted in the development of very strong regional identities. Pre-Columbian period From approximately 12,000 years BP onwards, hunter-gatherer societies existed near present-day Bogotá (at El Abra and Tequendama), and they traded with one another and with cultures living in the Magdalena River valley. Due to its location, the present territory of Colombia was a corridor of early human migration from Mesoamerica and the Caribbean to the Andes and the Amazon basin. The oldest archaeological finds are from the Pubenza archaeological site and El Totumo archaeological site in the Magdalena Valley southwest of Bogotá. These sites date from the Paleoindian period (18,000–8000 BCE). At Puerto Hormiga archaeological site and other sites, traces from the Archaic period in South America (~8000–2000 BCE) have been found. Vestiges indicate that there was also early occupation in the regions of El Abra, Tibitó and Tequendama in Cundinamarca. The oldest pottery discovered in the Americas, found at San Jacinto archaeological site, dates to 5000–4000 BCE. Indigenous people inhabited the territory that is now Colombia by 10,500 BCE. Nomadic hunter-gatherer tribes at the El Abra and Tequendama sites near present-day Bogotá traded with one another and with other cultures from the Magdalena River Valley. Serranía La Lindosa, a mountainous region of Guaviare Department, is known for an extensive prehistoric rock art site which stretches for nearly eight miles. The site, near the Guayabero River, was discovered in 2019, but was not revealed to the public until 2020. There are tens of thousands of paintings of animals and humans created up to 12,500 BP. Images of now-extinct ice age animals, such as the mastodon, helped date the site. Other ice-age animals depicted include the palaeolama, giant sloths and ice age horses. The site had gone undiscovered because of the conflict between the government and the FARC. The remote site is a two-hour drive from San José del Guaviare, followed by a four-hour trek. The site was discovered by a team from the National University of Colombia, the University of Antioquia and the University of Exeter as part of a project funded by the European Research Council as part of the Horizon 2020 Framework Programmes for Research and Technological Development. The site is to be featured in episode 2 of the Channel 4 series Jungle Mystery: Lost Kingdoms of the Amazon, on 12 December 2020. 
Between 5000 and 1000 BCE, hunter-gatherer tribes transitioned to agrarian societies; fixed settlements were established, and pottery appeared. Beginning in the 1st millennium BCE, groups of Amerindians including the Muisca, Quimbaya, Tairona, Calima, Zenú, Tierradentro, San Agustín, Tolima, and Urabá became skilled in farming, mining, and metalcraft; and some developed the political system of cacicazgos with a pyramidal structure of power headed by caciques. The Muisca inhabited mainly the area of what is now the Departments of Boyacá and Cundinamarca high plateau (Altiplano Cundiboyacense) where they formed the Muisca Confederation. The Muisca had one of the most developed political systems (Muisca Confederation) in South America, surpassed only by the Incas. They farmed maize, potato, quinoa and cotton, and traded gold, emeralds, blankets, ceramic handicrafts, coca and especially salt with neighboring nations. The Tairona inhabited northern Colombia in the isolated Andes mountain range of Sierra Nevada de Santa Marta. The Quimbaya inhabited regions of the Cauca River Valley between the Western and Central Ranges. The Incas expanded their empire on the southwest part of the country. Spanish annexation Pre-Columbian history Europeans first visited the territory that became Colombia in 1499 when the first expedition of Alonso de Ojeda arrived at the Cabo de la Vela. The Spanish made several attempts to settle along the north coast of today's Colombia in the early 16th century, but their first permanent settlement, at Santa Marta, dates from 1525. The Spanish commander Pedro de Heredia founded Cartagena on June 1, 1533 in the former location of the indigenous Caribbean Calamarí village. Cartagena grew rapidly, fueled first by the gold in the tombs of the Sinú Culture, and later by trade. The thirst for gold and land lured Spanish explorers to visit Chibchan-speaking areas; resulting in the Spanish conquest of the Chibchan Nations - the conquest by the Spanish monarchy of the Chibcha language-speaking nations, mainly the Muisca and Tairona who inhabited present-day Colombia, beginning the Spanish colonization of the Americas. The Spanish advance inland from the Caribbean coast began independently from three different directions, under Jimenéz de Quesáda, Sebastián de Benalcázar (known in Colombia as Belalcázar) and Nikolaus Federmann. Although all three were drawn by the Indian treasures, none intended to reach Muisca territory, where they finally met. In August 1538, Quesáda founded Santa Fe de Bogotá on the site of Muisca village of Bacatá. In 1549, the institution of the Spanish Royal Audiencia in Bogotá gave that city the status of capital of New Granada, which comprised in large part what is now the territory of Colombia. As early as the 1500s however, secret anti-Spanish discontentment was already brewing for Colombians since Spain prohibited direct trade between the Viceroyalty of Peru, which included Colombia, and the Viceroyalty of New Spain, which included the Philippines the source of Asian products like silk and porcelain which was in demand in the Americas. Illegal trade between Peruvians, Filipinos, and Mexicans continued in secret, as smuggled Asian goods ended up in Córdoba, Colombia, the distribution center for illegal Asian imports, due to the collusion between these peoples against the authorities in Spain. They settled and traded with each other while disobeying the forced Spanish monopoly in more expensive silks and porcelain made in homeland Spain. 
In 1717, the Viceroyalty of New Granada was originally created, and then it was temporarily removed, to finally be reestablished in 1739. The Viceroyalty had Santa Fé de Bogotá as its capital. This Viceroyalty included some other provinces of northwestern South America which had previously been under the jurisdiction of the Viceroyalties of New Spain or Peru and correspond mainly to today's Venezuela, Ecuador and Panama. So, Bogotá became one of the principal administrative centers of the Spanish possessions in the New World, along with Lima and Mexico City. Gran Colombia: independence re-claimed From then on, the long independence struggle was led mainly by Bolívar and Francisco de Paula Santander in neighboring Venezuela. Bolívar returned to New Granada only in 1819 after establishing himself as leader of the pro-independence forces in the Venezuelan llanos. From there he led an army over the Andes and captured New Granada after a quick campaign that ended at the Battle of Boyacá, on August 7, 1819. (For more information, see Military career of Simón Bolívar.) That year, the Congress of Angostura established the Republic of Gran Colombia, which included all territories under the jurisdiction of the former Viceroyalty of New Granada. Bolívar was elected the first president of Gran Colombia and Santander, vice president. As the Federation of Gran Colombia was dissolved in 1830, the Department of Cundinamarca (as established in Angostura) became a new country, the Republic of New Granada. The Republic: Liberal and Conservative conflict In 1863 the name of the Republic was changed officially to "United States of Colombia", and in 1886 the country adopted its present name: "Republic of Colombia". Two political parties grew out of conflicts between the followers of Bolívar and Santander and their political visions—the Conservatives and the Liberals – and have since dominated Colombian politics. Bolívar's supporters, who later formed the nucleus of the Conservative Party, sought strong centralized government, alliance with the Roman Catholic Church, and a limited franchise. Santander's followers, forerunners of the Liberals, wanted a decentralized government, state rather than church control over education and other civil matters, and a broadened suffrage. Throughout the 19th and early 20th centuries, each party held the presidency for roughly equal periods of time. Colombia maintained a tradition of civilian government and regular, free elections. The military has seized power three times in Colombia's history: in 1830, after the dissolution of Great Colombia; again in 1854 (by General José María Melo); and from 1953 to 1957 (under General Gustavo Rojas Pinilla). Civilian rule was restored within one year in the first two instances. Notwithstanding the country's commitment to democratic institutions, Colombia's history has also been characterized by widespread, violent conflict. Two civil wars resulted from bitter rivalry between the Conservative and Liberal parties. The Thousand Days' War (1899–1902) cost an estimated 100,000 lives, and up to 300,000 people died during "La Violencia" of the late 1940s and 1950s, a bipartisan confrontation which erupted after the assassination of Liberal popular candidate Jorge Eliécer Gaitán. United States activity to influence the area (especially the Panama Canal construction and control) led to a military uprising in the Isthmus Department in 1903, which resulted in the separation and independence of Panama. 
A military coup in 1953 toppled the right-wing government of Conservative Laureano Gómez and brought General Gustavo Rojas Pinilla to power. Initially, Rojas enjoyed considerable popular support, due largely to his success in reducing "La Violencia". When he did not restore democratic rule and occasionally engaged in open repression, however, he was overthrown by the military in 1957 with the backing of both political parties, and a provisional government was installed. The National Front regime (1958–1974) In July 1957, former Conservative President Laureano Gómez (1950–1953) and former Liberal President Alberto Lleras (1945–1946, 1958–1962) issued the "Declaration of Sitges," in which they proposed a "National Front," whereby the Liberal and Conservative parties would govern jointly. The presidency would be determined by an alternating conservative and liberal president every 4 years for 16 years; the two parties would have parity in all other elective offices. The National Front ended "La Violencia", and National Front administrations attempted to institute far-reaching social and economic reforms in cooperation with the Alliance for Progress. In particular, the Liberal president Alberto Lleras Camargo (1958–1962) created the Colombian Institute for Agrarian Reform (INCORA), and Carlos Lleras Restrepo (1966–1970) further developed land entitlement. In 1968 and 1969 alone, the INCORA issued more than 60,000 land titles to farmers and workers. In the end, the contradictions between each successive Liberal and Conservative administration made the results decidedly mixed. Despite the progress in certain sectors, many social and political injustices continued. The National Front system itself eventually began to be seen as a form of political repression by dissidents and even many mainstream voters, and many protesters were victimized during this period. Especially after what was later confirmed as the fraudulent election of Conservative candidate Misael Pastrana in 1970, which resulted in the defeat of the relatively populist candidate and former president (dictator) Gustavo Rojas Pinilla. The M-19 guerrilla movement, "Movimiento 19 de Abril" (19 April Movement), would eventually be founded in part as a response to this particular event. The FARC was formed in 1964 by Manuel Marulanda Vélez and other Marxist–Leninist supporters, after a military attack on the community of Marquetalia. Although the system established by the Sitges agreement was phased out by 1974, the 1886 Colombian constitution — in effect until 1991—required that the losing political party be given adequate and equitable participation in the government which, according to many observers and later analysis, eventually resulted in some increase in corruption and legal relaxation. The current 1991 constitution does not have that requirement, but subsequent administrations have tended to include members of opposition parties. Post-National Front From 1974 until 1982, different presidential administrations chose to focus on ending the persistent insurgencies that sought to undermine Colombia's traditional political system. Both groups claimed to represent the poor and weak against the rich and powerful classes of the country, demanding the completion of true land and political reform, from an openly Communist perspective. 
By 1974, another challenge to the state's authority and legitimacy had come from 19th of April Movement (M-19), a mostly urban guerrilla group founded in response to an alleged electoral fraud during the final National Front election of Misael Pastrana Borrero (1970–1974) and the defeat of former dictator Gustavo Rojas Pinilla. Initially, the M-19 attracted a degree of attention and sympathy from mainstream Colombians that the FARC and National Liberation Army (ELN) had found largely elusive earlier due to extravagant and daring operations, such as stealing a sword that had belonged to Colombia's Independence hero Simon Bolívar. At the same time, its larger profile soon made it the focus of the state's counterinsurgency efforts. The ELN guerrilla had been seriously crippled by military operations in the region of Anorí by 1974, but it managed to reconstitute itself and escape destruction, in part due to the administration of Alfonso López Michelsen (1974–1978) allowing it to escape encirclement, hoping to initiate a peace process with the group. By 1982, the perceived passivity of the FARC, together with the relative success of the government's efforts against the M-19 and ELN, enabled the administration of the Liberal Party's Julio César Turbay (1978–1982) to lift a state-of-siege decree that had been in effect, on and off, for most of the previous 30 years. Under the latest such decree, president Turbay had implemented security policies that, though of some military value against the M-19 in particular, were considered highly questionable both inside and outside Colombian circles due to numerous accusations of military human rights abuses against suspects and captured guerrillas. Citizen exhaustion due to the conflict's newfound intensity led to the election of president Belisario Betancur (1982–1986), a Conservative who won 47% of the popular vote, directed peace feelers at all the insurgents, and negotiated a 1984 cease-fire with the FARC and M-19 after a 1982 release of many guerrillas imprisoned during the previous effort to overpower them. The ELN rejected entering any negotiation and continued to recover itself through the use of extortions and threats, in particular against foreign oil companies of European and U.S. origin. As these events were developing, the growing illegal drug trade and its consequences were also increasingly becoming a matter of widespread importance to all participants in the Colombian conflict. Guerrillas and newly wealthy drug lords had mutually uneven relations and thus numerous incidents occurred between them. Eventually, the kidnapping of drug cartel family members by guerrillas led to the creation of the 1981 Muerte a Secuestradores (MAS) death squad ("Death to Kidnappers"). Pressure from the U.S. government and critical sectors of Colombian society was met with further violence, as the Medellín Cartel and its hitmen, bribed or murdered numerous public officials, politicians and others who stood in its way by supporting the implementation of extradition of Colombian nationals to the U.S. Victims of cartel violence included Justice Minister Rodrigo Lara, whose assassination in 1984 made the Betancur administration begin to directly oppose the drug lords. 
The first negotiated cease-fire with the M-19 ended when the guerrillas resumed fighting in 1985, claiming that the cease-fire had not been fully respected by official security forces, saying that several of its members had suffered threats and assaults, and also questioning the government's real willingness to implement any accords. The Betancur administration, in turn, questioned the M-19's actions and its commitment to the peace process, as it continued to advance high-profile negotiations with the FARC, which led to the creation of the Patriotic Union (Colombia) (UP), a legal and non-clandestine political organization. On November 6, 1985, the M-19 stormed the Colombian Palace of Justice and held the Supreme Court magistrates hostage, intending to put president Betancur on trial. In the ensuing crossfire that followed the military's reaction, scores of people lost their lives, as did most of the guerrillas, including several high-ranking operatives. Both sides blamed each other for the outcome. Meanwhile, individual FARC members initially joined the UP leadership in representation of the guerrilla command, though most of the guerrilla's chiefs and militiamen did not demobilize nor disarm, as that was not a requirement of the process at that point in time. Tension soon significantly increased, as both sides began to accuse each other of not respecting the cease-fire. Political violence against FARC and UP members (including presidential candidate Jaime Pardo) was blamed on drug lords and also on members of the security forces (to a much lesser degree on the argued inaction of Betancur administration). Members of the government and security authorities increasingly accused the FARC of continuing to recruit guerrillas, as well as kidnapping, extorting and politically intimidating voters even as the UP was already participating in politics. The Virgilio Barco (1986–1990) administration, in addition to continuing to handle the difficulties of the complex negotiations with the guerrillas, also inherited a particularly chaotic confrontation against the drug lords, who were engaged in a campaign of terrorism and murder in response to government moves in favor of their extradition overseas. The UP also suffered an increasing number of losses during this term (including the assassination of presidential candidate Bernardo Jaramillo), which stemmed both from private proto-paramilitary organizations, increasingly powerful drug lords and a number of would-be paramilitary-sympathizers within the armed forces. Post-1990 Following administrations had to contend with the guerrillas, paramilitaries, narcotics traffickers and the violence and corruption that they all perpetuated, both through force and negotiation. Narcoterrorists assassinated three presidential candidates before César Gaviria was elected in 1990. Since the death of Medellín cartel leader Pablo Escobar in a police shootout during December 1993, indiscriminate acts of violence associated with that organization have abated as the "cartels" have broken up into multiple, smaller and often-competing trafficking organizations. Nevertheless, violence continues as these drug organizations resort to violence as part of their operations but also to protest government policies, including extradition. 
The M-19 and several smaller guerrilla groups were successfully incorporated into a peace process as the 1980s ended and the 1990s began, which culminated in the elections for a Constituent Assembly of Colombia that would write a new constitution, which took effect in 1991. The new Constitution brought about a considerable number of institutional and legal reforms based on principles that the delegates considered more modern, humanist, democratic and politically open than those in the 1886 constitution. Practical results were mixed and controversies emerged (such as the debate surrounding the constitutional prohibition of extradition, which later was reversed), but together with the reincorporation of some of the guerrilla groups into the legal political framework, the new Constitution inaugurated an era that was both a continuation and a gradual, but significant, departure from what had come before. Contacts with the FARC, which had irregularly continued despite the generalized de facto interruptions of the ceasefire and the official 1987 break from negotiations, were temporarily cut off in 1990 under the presidency of César Gaviria (1990–1994). The Colombian Army's assault on the FARC's Casa Verde sanctuary at La Uribe, Meta, followed by a FARC offensive that sought to undermine the deliberations of the Constitutional Assembly, began to highlight a significant break in the uneven negotiations carried over from the previous decade. President Ernesto Samper assumed office in August 1994. However, a political crisis relating to large-scale contributions from drug traffickers to Samper's presidential campaign diverted attention from governance programs, thus slowing, and in many cases halting, progress on the nation's domestic reform agenda. The military also suffered several setbacks in its fight against the guerrillas, when several of its rural bases began to be overrun and a record number of soldiers and officers were taken prisoner by the FARC (which since 1982 had been attempting to implement a more "conventional" style of warfare, seeking to eventually defeat the military in the field). On August 7, 1998, Andrés Pastrana was sworn in as the President of Colombia. A member of the Conservative Party, Pastrana defeated Liberal Party candidate Horacio Serpa in a run-off election marked by high voter turnout and little political unrest. The new president's program was based on a commitment to bring about a peaceful resolution of Colombia's longstanding civil conflict and to cooperate fully with the United States to combat the trafficking of illegal drugs. While early initiatives in the Colombian peace process gave reason for optimism, the Pastrana administration also had to combat high unemployment and other economic problems, such as the fiscal deficit and the impact of global financial instability on Colombia. During his administration, unemployment rose to over 20%. Additionally, the growing severity of countrywide guerrilla attacks by the FARC and ELN and smaller movements, as well as the growth of drug production, corruption and the spread of even more violent paramilitary groups such as the United Self-Defense Forces of Colombia (AUC), made it difficult to solve the country's problems. Although the FARC and ELN accepted participation in the peace process, they did not make explicit commitments to end the conflict. 
The FARC suspended talks in November 2000 to protest what it called "paramilitary terrorism", but returned to the negotiating table in February 2001, following two days of meetings between President Pastrana and FARC leader Manuel Marulanda. The Colombian Government and ELN in early 2001 continued discussions aimed at opening a formal peace process. From 2004 onward By 2004, the security situation in Colombia had shown some measure of improvement, and the economy, while still fragile, had also shown some positive signs. On the other hand, relatively little had been accomplished in structurally solving most of the country's other grave problems, in part due to legislative and political conflicts between the administration and the Colombian Congress (including those over the controversial 2006 project to give President Álvaro Uribe the right to be re-elected), and a relative lack of freely allocated funds and credits. In 2006, Uribe was re-elected by a landslide. Some critical observers consider in retrospect that Uribe's policies, while admittedly reducing crime and guerrilla activity, were too slanted in favor of a military solution to Colombia's internal war, neglecting grave social and human rights concerns to a certain extent. They hoped that Uribe's government would make serious efforts towards improving the human rights situation inside the country, protecting civilians and reducing any abuses committed by the armed forces. Uribe's supporters in turn believed that increased military action was a necessary prelude to any serious negotiation attempt with the guerrillas and that the improved security situation would help the government, in the long term, to focus more actively on reducing most wide-scale abuses and human rights violations on the part of both the armed groups and any rogue security forces that might have links to the paramilitaries. In short, these supporters maintained that the security situation needed to be stabilized in favor of the government before any other social concerns could take precedence. In February 2010, the constitutional court blocked President Álvaro Uribe from seeking another re-election. Uribe left the presidency in 2010. In 2010 Juan Manuel Santos was elected president; he was supported by ex-president Uribe, and, in fact, he owed his election mainly to having won over former Uribe supporters. But two years after winning the presidential election, Santos (to widespread surprise) began peace talks with the FARC, which took place in Havana. Re-elected in 2014, Santos revived an important infrastructure program, which in fact had been planned during the Uribe administration. Focused mainly on the provision of national highways, the program was led by former vice-president Germán Vargas Lleras. In 2015, Colombia's Congress limited the presidency to a single term, preventing the president from seeking re-election. Talks between the government and the guerrillas resulted in the announcement of a peace agreement. However, a referendum to ratify the deal was unsuccessful. Afterward, the Colombian government and the FARC signed a revised peace deal in November 2016, which the Colombian Congress approved. In 2016, President Santos was awarded the Nobel Peace Prize. The government began a process of comprehensive care and reparation for victims of the conflict. Colombia under President Santos showed some progress in the struggle to defend human rights, as noted by HRW. 
A Special Jurisdiction of Peace was created to investigate, clarify, prosecute and punish serious human rights violations and grave breaches of international humanitarian law which occurred during the armed conflict and to satisfy victims' right to justice. During his visit to Colombia, Pope Francis paid tribute to the victims of the conflict. In May 2018, Ivan Duque, the candidate of the conservative Centro Democrático (Democratic Centre), won the presidential election. On 7 August 2018, he was sworn in as the new President of Colombia. Colombia's relations with Venezuela have fluctuated due to the ideological differences between both governments. Colombia has offered humanitarian support with food and medicines to mitigate the shortage of supplies in Venezuela. Colombia's Foreign Ministry said that all efforts to resolve Venezuela's crisis should be peaceful. Colombia proposed the idea of the Sustainable Development Goals and a final document was adopted by the United Nations. In February 2019, Venezuelan president Nicolás Maduro cut diplomatic relations with Colombia after Colombian President Ivan Duque helped Venezuelan opposition politicians deliver humanitarian aid to their country. Colombia recognized Venezuelan opposition leader Juan Guaido as the country's legitimate president. In January 2020, Colombia rejected Maduro's proposal that the two countries restore diplomatic relations. The 19 June 2022 election run-off vote ended in a win for former guerrilla, Gustavo Petro, taking 50.47% of the vote compared to 47.27% of right-wing Rodolfo Hernández. The single-term limit for the country's presidency prevented president Iván Duque from seeking re-election. Petro became the country’s first leftist president-elect. On 7 August 2022, he was sworn in. See also Colombia during World War II Economic history of Colombia History of the Americas History of Latin America History of South America List of presidents of Colombia Politics of Colombia Spanish colonization of the Americas References Bibliography Further reading Alesina, Alberto, ed. Institutional reforms: The case of Colombia (MIT press, 2005). Earle, Rebecca. Spain and the Independence of Colombia, 1810–1825. Exeter: University of Exeter Press, 2000. Echavarría, Juan José, María Angélica Arbeláez, and Alejandro Gaviria. "Recent economic history of Colombia." in Institutional Reforms: The Case of Colombia (2005): 33-72. Echeverry, Juan Carlos, et al. "Oil in Colombia: history, regulation and macroeconomic impact." Documento CEDE 2008-10 (2008). online Etter, Andrés, Clive McAlpine, and Hugh Possingham. "Historical patterns and drivers of landscape change in Colombia since 1500: a regionalized spatial approach." Annals of the Association of American Geographers 98.1 (2008): 2-23. Farnsworth-Alvear, Ann. Dulcinea in the Factory: Myths, Morals, Men, and Women in Colombia's Industrial Experiment, 1905–1960. Duke University Press 2000. Fisher, J.R. Allan J. Kuethe, and Anthony McFarlane. Reform and Insurrection in Bourbon New Granada and Peru. Baton Rouge: Louisiana State University Press 1990. Flores, Thomas Edward. "Vertical inequality, land reform, and insurgency in Colombia." Peace Economics, Peace Science and Public Policy 20.1 (2014): 5-31. online Harvey, Robert. "Liberators: Latin America's Struggle for Independence, 1810–1830". John Murray, London (2000). Kuethe, Allan J. Military Reform and Society in New Granada, 1773–1808. Gainesville: University of Florida Press 1978. LeGrand, Catherine. 
Frontier Expansion and Peasant Protest in Colombia, 1850–1936. Albuquerque: University of New Mexico Press 1986. López-Pedreros, A. Ricardo. Makers of democracy: a transnational history of the middle classes in Colombia (Duke University Press, 2019). McFarlane, Anthony. Colombia Before Independence: Economy, Society, and Politics under Bourbon Rule. Cambridge: Cambridge University Press, 1993. Martz, John D. The politics of clientelism in Colombia: Democracy and the state (Routledge, 2017). Murillo, Mario A., and Jesus Rey Avirama. Colombia and the United States: war, unrest, and destabilization (Seven Stories Press, 2004). Phelan, John Leddy. The People and the King: The Comunero Revolt in Colombia, 1781. Madison: University of Wisconsin Press 1978. Racine, Karen. "Simón Bolívar and friends: Recent biographies of independence figures in Colombia and Venezuela" History Compass 18#3 (Feb 2020) https://doi.org/10.1111/hic3.12608 Roldán, Mary. Blood and Fire: La Violencia in Antioquia, Colombia 1946–1953. Durham: Duke University Press 2002. Safford, Frank. Colombia: Fragmented Land, Divided Society. New York: Oxford University Press 2002. Sharp, William Frederick. Slavery on the Spanish Frontier: The Colombia Chocó, 1680–1810. Norman: University of Oklahoma Press 1976. Thorp, Rosemary, and Francisco Durand. "8. A Historical View of Business-State Relations: Colombia, Peru, and Venezuela Compared." in Business and the state in developing countries. (Cornell University Press, 2018) pp. 216–236. Twinam, Ann. Miners, Merchants, and Farmers in Colonial Colombia. Austin: University of Texas Press 1983. West, Robert C. Colonial Placer Mining in Colombia. Baton Rouge: Louisiana State University Press 1952. In Spanish Arciniegas, Germán. Los comuneros. Caracas: Bibliotecta Ayacucho 1992. Colmenares, Germán. Historia económica y social de Colombia, 1537–1719. Cali 1973. González, Margarita. El resguardo en el Nuevo Reino de Granada. 3rd edition. Bogotá: El Ancora 1992. External links U.S. State Department Background Note: Colombia
2,632
5,850
https://en.wikipedia.org/wiki/Telecommunications%20in%20the%20Czech%20Republic
Telecommunications in the Czech Republic
Telephones - main lines in use: 2.888 million (2006) Telephones - mobile cellular: 13.075 million (2007) Telephone system: domestic: 86% of exchanges now digital; existing copper subscriber systems now being improved with Asymmetric Digital Subscriber Line (ADSL) equipment to accommodate Internet and other digital signals; trunk systems include fibre-optic cable and microwave radio relay international: satellite earth stations (Indian Ocean regions), 1 Intelsat, 1 Eutelsat, 1 Inmarsat, 1 Globalstar Radio broadcast stations: AM 31, FM 304, shortwave 17 (2000) Radios: 3,159,134 (December 2000) Television broadcast stations: 150 (plus 1,434 repeaters) (2000) Televisions: 3,405,834 (December 2000) Internet Service Providers (ISPs): more than 300 (2000) Internet users: 4.4 million (2007) Country code: CZ See also: Czech Republic
2,637
5,905
https://en.wikipedia.org/wiki/Chalcogen
Chalcogen
The chalcogens (ore forming) are the chemical elements in group 16 of the periodic table. This group is also known as the oxygen family. Group 16 consists of the elements oxygen (O), sulfur (S), selenium (Se), tellurium (Te), and the radioactive elements polonium (Po) and livermorium (Lv). Often, oxygen is treated separately from the other chalcogens, sometimes even excluded from the scope of the term "chalcogen" altogether, due to its very different chemical behavior from sulfur, selenium, tellurium, and polonium. The word "chalcogen" is derived from a combination of the Greek word khalkós, principally meaning copper (the term was also used for bronze/brass, any metal in the poetic sense, ore or coin), and the Latinized Greek word genes, meaning born or produced. Sulfur has been known since antiquity, and oxygen was recognized as an element in the 18th century. Selenium, tellurium and polonium were discovered in the 19th century, and livermorium in 2000. All of the chalcogens have six valence electrons, leaving them two electrons short of a full outer shell. Their most common oxidation states are −2, +2, +4, and +6. They have relatively low atomic radii, especially the lighter ones. Lighter chalcogens are typically nontoxic in their elemental form, and are often critical to life, while the heavier chalcogens are typically toxic. All of the naturally occurring chalcogens have some role in biological functions, either as a nutrient or a toxin. Selenium is an important nutrient (among other roles, as a building block of selenocysteine) but is also commonly toxic. Tellurium often has unpleasant effects (although some organisms can use it), and polonium (especially the isotope polonium-210) is always harmful as a result of its radioactivity. Sulfur has more than 20 allotropes, oxygen has nine, selenium has at least eight, polonium has two, and only one crystal structure of tellurium has so far been discovered. There are numerous organic chalcogen compounds. Not counting oxygen, organic sulfur compounds are generally the most common, followed by organic selenium compounds and organic tellurium compounds. This trend also occurs with chalcogen pnictides and compounds containing chalcogens and carbon group elements. Oxygen is generally obtained by separation of air into nitrogen and oxygen. Sulfur is extracted from oil and natural gas. Selenium and tellurium are produced as byproducts of copper refining. Polonium is most available in naturally occurring actinide-containing materials. Livermorium has been synthesized in particle accelerators. The primary use of elemental oxygen is in steelmaking. Sulfur is mostly converted into sulfuric acid, which is heavily used in the chemical industry. Selenium's most common application is glassmaking. Tellurium compounds are mostly used in optical disks, electronic devices, and solar cells. Some of polonium's applications are due to its radioactivity. Properties Atomic and physical Chalcogens show similar patterns in electron configuration, especially in the outermost shells, where they all have the same number of valence electrons, resulting in similar trends in chemical behavior: All chalcogens have six valence electrons. All of the solid, stable chalcogens are soft and do not conduct heat well. Electronegativity decreases towards the chalcogens with higher atomic numbers. Density, melting and boiling points, and atomic and ionic radii tend to increase towards the chalcogens with higher atomic numbers. 
Isotopes Out of the six known chalcogens, one (oxygen) has an atomic number equal to a nuclear magic number, which means that their atomic nuclei tend to have increased stability towards radioactive decay. Oxygen has three stable isotopes, and 14 unstable ones. Sulfur has four stable isotopes, 20 radioactive ones, and one isomer. Selenium has six observationally stable or nearly stable isotopes, 26 radioactive isotopes, and 9 isomers. Tellurium has eight stable or nearly stable isotopes, 31 unstable ones, and 17 isomers. Polonium has 42 isotopes, none of which are stable. It has an additional 28 isomers. In addition to the stable isotopes, some radioactive chalcogen isotopes occur in nature, either because they are decay products, such as 210Po, because they are primordial, such as 82Se, because of cosmic ray spallation, or via nuclear fission of uranium. Livermorium isotopes 290Lv through 293Lv have been discovered; the most stable livermorium isotope is 293Lv, which has a half-life of 0.061 seconds. Among the lighter chalcogens (oxygen and sulfur), the most neutron-poor isotopes undergo proton emission, the moderately neutron-poor isotopes undergo electron capture or β+ decay, the moderately neutron-rich isotopes undergo β− decay, and the most neutron rich isotopes undergo neutron emission. The middle chalcogens (selenium and tellurium) have similar decay tendencies as the lighter chalcogens, but their isotopes do not undergo proton emission and some of the most neutron-deficient isotopes of tellurium undergo alpha decay. Polonium's isotopes tend to decay with alpha or beta decay. Isotopes with nuclear spins are more common among the chalcogens selenium and tellurium than they are with sulfur. Allotropes Oxygen's most common allotrope is diatomic oxygen, or O2, a reactive paramagnetic molecule that is ubiquitous to aerobic organisms and has a blue color in its liquid state. Another allotrope is O3, or ozone, which is three oxygen atoms bonded together in a bent formation. There is also an allotrope called tetraoxygen, or O4, and six allotropes of solid oxygen including "red oxygen", which has the formula O8. Sulfur has over 20 known allotropes, which is more than any other element except carbon. The most common allotropes are in the form of eight-atom rings, but other molecular allotropes that contain as few as two atoms or as many as 20 are known. Other notable sulfur allotropes include rhombic sulfur and monoclinic sulfur. Rhombic sulfur is the more stable of the two allotropes. Monoclinic sulfur takes the form of long needles and is formed when liquid sulfur is cooled to slightly below its melting point. The atoms in liquid sulfur are generally in the form of long chains, but above 190 °C, the chains begin to break down. If liquid sulfur above 190 °C is frozen very rapidly, the resulting sulfur is amorphous or "plastic" sulfur. Gaseous sulfur is a mixture of diatomic sulfur (S2) and 8-atom rings. Selenium has at least eight distinct allotropes. The gray allotrope, commonly referred to as the "metallic" allotrope, despite not being a metal, is stable and has a hexagonal crystal structure. The gray allotrope of selenium is soft, with a Mohs hardness of 2, and brittle. Four other allotropes of selenium are metastable. These include two monoclinic red allotropes and two amorphous allotropes, one of which is red and one of which is black. The red allotrope converts to the black allotrope in the presence of heat. 
The gray allotrope of selenium is made from spirals on selenium atoms, while one of the red allotropes is made of stacks of selenium rings (Se8). Tellurium is not known to have any allotropes, although its typical form is hexagonal. Polonium has two allotropes, which are known as α-polonium and β-polonium. α-polonium has a cubic crystal structure and converts to the rhombohedral β-polonium at 36 °C. The chalcogens have varying crystal structures. Oxygen's crystal structure is monoclinic, sulfur's is orthorhombic, selenium and tellurium have the hexagonal crystal structure, while polonium has a cubic crystal structure. Chemical Oxygen, sulfur, and selenium are nonmetals, and tellurium is a metalloid, meaning that its chemical properties are between those of a metal and those of a nonmetal. It is not certain whether polonium is a metal or a metalloid. Some sources refer to polonium as a metalloid, although it has some metallic properties. Also, some allotropes of selenium display characteristics of a metalloid, even though selenium is usually considered a nonmetal. Even though oxygen is a chalcogen, its chemical properties are different from those of other chalcogens. One reason for this is that the heavier chalcogens have vacant d-orbitals. Oxygen's electronegativity is also much higher than those of the other chalcogens. This makes oxygen's electric polarizability several times lower than those of the other chalcogens. For covalent bonding a chalcogen may accept two electrons according to the octet rule, leaving two lone pairs. When an atom forms two single bonds, they form an angle between 90° and 120°. In 1+ cations, such as , a chalcogen forms three molecular orbitals arranged in a trigonal pyramidal fashion and one lone pair. Double bonds are also common in chalcogen compounds, for example in chalcogenates (see below). The oxidation number of the most common chalcogen compounds with positive metals is −2. However the tendency for chalcogens to form compounds in the −2 state decreases towards the heavier chalcogens. Other oxidation numbers, such as −1 in pyrite and peroxide, do occur. The highest formal oxidation number is +6. This oxidation number is found in sulfates, selenates, tellurates, polonates, and their corresponding acids, such as sulfuric acid. Oxygen is the most electronegative element except for fluorine, and forms compounds with almost all of the chemical elements, including some of the noble gases. It commonly bonds with many metals and metalloids to form oxides, including iron oxide, titanium oxide, and silicon oxide. Oxygen's most common oxidation state is −2, and the oxidation state −1 is also relatively common. With hydrogen it forms water and hydrogen peroxide. Organic oxygen compounds are ubiquitous in organic chemistry. Sulfur's oxidation states are −2, +2, +4, and +6. Sulfur-containing analogs of oxygen compounds often have the prefix thio-. Sulfur's chemistry is similar to oxygen's, in many ways. One difference is that sulfur-sulfur double bonds are far weaker than oxygen-oxygen double bonds, but sulfur-sulfur single bonds are stronger than oxygen-oxygen single bonds. Organic sulfur compounds such as thiols have a strong specific smell, and a few are utilized by some organisms. Selenium's oxidation states are −2, +4, and +6. Selenium, like most chalcogens, bonds with oxygen. There are some organic selenium compounds, such as selenoproteins. Tellurium's oxidation states are −2, +2, +4, and +6. 
Tellurium forms the oxides tellurium monoxide, tellurium dioxide, and tellurium trioxide. Polonium's oxidation states are +2 and +4. There are many acids containing chalcogens, including sulfuric acid, sulfurous acid, selenic acid, and telluric acid. All hydrogen chalcogenides are toxic except for water. Oxygen ions often come in the forms of oxide ions (O2−), peroxide ions (O22−), and hydroxide ions (OH−). Sulfur ions generally come in the form of sulfides (S2−), sulfites (SO32−), sulfates (SO42−), and thiosulfates (S2O32−). Selenium ions usually come in the form of selenides (Se2−) and selenates (SeO42−). Tellurium ions often come in the form of tellurates (TeO42−). Molecules containing metal bonded to chalcogens are common as minerals. For example, pyrite (FeS2) is an iron ore, and the rare mineral calaverite is the gold ditelluride AuTe2. Although all group 16 elements of the periodic table, including oxygen, can be defined as chalcogens, oxygen and oxides are usually distinguished from chalcogens and chalcogenides. The term chalcogenide is more commonly reserved for sulfides, selenides, and tellurides, rather than for oxides. Except for polonium, the chalcogens are all fairly similar to each other chemically. They all form X2− ions when reacting with electropositive metals. Sulfide minerals and analogous compounds produce gases upon reaction with oxygen. Compounds With halogens Chalcogens also form compounds with halogens known as chalcohalides, or chalcogen halides. The majority of simple chalcogen halides are well-known and widely used as chemical reagents. However, more complicated chalcogen halides, such as sulfenyl, sulfonyl, and sulfuryl halides, are less well known to science. Out of the compounds consisting purely of chalcogens and halogens, there are a total of 13 chalcogen fluorides, nine chalcogen chlorides, eight chalcogen bromides, and six chalcogen iodides that are known. The heavier chalcogen halides often have significant molecular interactions. Sulfur fluorides with low valences are fairly unstable and little is known about their properties. However, sulfur fluorides with high valences, such as sulfur hexafluoride, are stable and well-known. Sulfur tetrafluoride is also a well-known sulfur fluoride. Certain selenium fluorides, such as selenium difluoride, have been produced in small amounts. The crystal structures of both selenium tetrafluoride and tellurium tetrafluoride are known. Chalcogen chlorides and bromides have also been explored. In particular, selenium dichloride and sulfur dichloride can react to form organic selenium compounds. Dichalcogen dihalides, such as Se2Cl2, are also known to exist. There are also mixed chalcogen-halogen compounds. These include SeSX, with X being chlorine or bromine. Such compounds can form in mixtures of sulfur dichloride and selenium halides. These compounds were structurally characterized fairly recently, as of 2008. In general, diselenium and disulfur chlorides and bromides are useful chemical reagents. Chalcogen halides with attached metal atoms are soluble in organic solutions. Unlike selenium chlorides and bromides, selenium iodides have not been isolated, as of 2008, although it is likely that they occur in solution. Diselenium diiodide, however, does occur in equilibrium with selenium atoms and iodine molecules. Some tellurium halides with low valences form polymers when in the solid state. 
These tellurium halides can be synthesized by the reduction of pure tellurium with superhydride and reacting the resulting product with tellurium tetrahalides. Ditellurium dihalides tend to get less stable as the halides become lower in atomic number and atomic mass. Tellurium also forms iodides with even fewer iodine atoms than diiodies. These include TeI and Te2I. These compounds have extended structures in the solid state. Halogens and chalcogens can also form halochalcogenate anions. Organic Alcohols, phenols and other similar compounds contain oxygen. However, in thiols, selenols and tellurols; sulfur, selenium, and tellurium replace oxygen. Thiols are better known than selenols or tellurols. Thiols are the most stable chalcogenols and tellurols are the least stable, being unstable in heat or light. Other organic chalcogen compounds include thioethers, selenoethers and telluroethers. Some of these, such as dimethyl sulfide, diethyl sulfide, and dipropyl sulfide are commercially available. Selenoethers are in the form of R2Se or RSeR. Telluroethers such as dimethyl telluride are typically prepared in the same way as thioethers and selenoethers. Organic chalcogen compounds, especially organic sulfur compounds, have the tendency to smell unpleasant. Dimethyl telluride also smells unpleasant, and selenophenol is renowned for its "metaphysical stench". There are also thioketones, selenoketones, and telluroketones. Out of these, thioketones are the most well-studied with 80% of chalcogenoketones papers being about them. Selenoketones make up 16% of such papers and telluroketones make up 4% of them. Thioketones have well-studied non-linear electric and photophysic properties. Selenoketones are less stable than thioketones and telluroketones are less stable than selenoketones. Telluroketones have the highest level of polarity of chalcogenoketones. With metals There is a very large number of metal chalcogenides. There are also ternary compounds containing alkali metals and transition metals. Highly metal-rich metal chalcogenides, such as Lu7Te and Lu8Te have domains of the metal's crystal lattice containing chalcogen atoms. While these compounds do exist, analogous chemicals that contain lanthanum, praseodymium, gadolinium, holmium, terbium, or ytterbium have not been discovered, as of 2008. The boron group metals aluminum, gallium, and indium also form bonds to chalcogens. The Ti3+ ion forms chalcogenide dimers such as TiTl5Se8. Metal chalcogenide dimers also occur as lower tellurides, such as Zr5Te6. Elemental chalcogens react with certain lanthanide compounds to form lanthanide clusters rich in chalcogens. Uranium(IV) chalcogenol compounds also exist. There are also transition metal chalcogenols which have potential to serve as catalysts and stabilize nanoparticles. With pnictogens Compounds with chalcogen-phosphorus bonds have been explored for more than 200 years. These compounds include unsophisticated phosphorus chalcogenides as well as large molecules with biological roles and phosphorus-chalcogen compounds with metal clusters. These compounds have numerous applications, including organo-phosphate insecticides, strike-anywhere matches and quantum dots. A total of 130,000 compounds with at least one phosphorus-sulfur bond, 6000 compounds with at least one phosphorus-selenium bond, and 350 compounds with at least one phosphorus-tellurium bond have been discovered. 
The decrease in the number of chalcogen-phosphorus compounds further down the periodic table is due to diminishing bond strength. Such compounds tend to have at least one phosphorus atom in the center, surrounded by four chalcogens and side chains. However, some phosphorus-chalcogen compounds also contain hydrogen (such as secondary phosphine chalcogenides) or nitrogen (such as dichalcogenoimidodiphosphates). Phosphorus selenides are typically harder to handle that phosphorus sulfides, and compounds in the form PxTey have not been discovered. Chalcogens also bond with other pnictogens, such as arsenic, antimony, and bismuth. Heavier chalcogen pnictides tend to form ribbon-like polymers instead of individual molecules. Chemical formulas of these compounds include Bi2S3 and Sb2Se3. Ternary chalcogen pnictides are also known. Examples of these include P4O6Se and P3SbS3. salts containing chalcogens and pnictogens also exist. Almost all chalcogen pnictide salts are typically in the form of [PnxE4x]3−, where Pn is a pnictogen and E is a chalcogen. Tertiary phosphines can react with chalcogens to form compounds in the form of R3PE, where E is a chalcogen. When E is sulfur, these compounds are relatively stable, but they are less so when E is selenium or tellurium. Similarly, secondary phosphines can react with chalcogens to form secondary phosphine chalcogenides. However, these compounds are in a state of equilibrium with chalcogenophosphinous acid. Secondary phosphine chalcogenides are weak acids. Binary compounds consisting of antimony or arsenic and a chalcogen. These compounds tend to be colorful and can be created by a reaction of the constituent elements at temperatures of . Other Chalcogens form single bonds and double bonds with other carbon group elements than carbon, such as silicon, germanium, and tin. Such compounds typically form from a reaction of carbon group halides and chalcogenol salts or chalcogenol bases. Cyclic compounds with chalcogens, carbon group elements, and boron atoms exist, and occur from the reaction of boron dichalcogenates and carbon group metal halides. Compounds in the form of M-E, where M is silicon, germanium, or tin, and E is sulfur, selenium or tellurium have been discovered. These form when carbon group hydrides react or when heavier versions of carbenes react. Sulfur and tellurium can bond with organic compounds containing both silicon and phosphorus. All of the chalcogens form hydrides. In some cases this occurs with chalcogens bonding with two hydrogen atoms. However tellurium hydride and polonium hydride are both volatile and highly labile. Also, oxygen can bond to hydrogen in a 1:1 ratio as in hydrogen peroxide, but this compound is unstable. Chalcogen compounds form a number of interchalcogens. For instance, sulfur forms the toxic sulfur dioxide and sulfur trioxide. Tellurium also forms oxides. There are some chalcogen sulfides as well. These include selenium sulfide, an ingredient in some shampoos. Since 1990, a number of borides with chalcogens bonded to them have been detected. The chalcogens in these compounds are mostly sulfur, although some do contain selenium instead. One such chalcogen boride consists of two molecules of dimethyl sulfide attached to a boron-hydrogen molecule. Other important boron-chalcogen compounds include macropolyhedral systems. Such compounds tend to feature sulfur as the chalcogen. There are also chalcogen borides with two, three, or four chalcogens. 
Many of these contain sulfur but some, such as Na2B2Se7 contain selenium instead. History Early discoveries Sulfur has been known since ancient times and is mentioned in the Bible fifteen times. It was known to the ancient Greeks and commonly mined by the ancient Romans. It was also historically used as a component of Greek fire. In the Middle Ages, it was a key part of alchemical experiments. In the 1700s and 1800s, scientists Joseph Louis Gay-Lussac and Louis-Jacques Thénard proved sulfur to be a chemical element. Early attempts to separate oxygen from air were hampered by the fact that air was thought of as a single element up to the 17th and 18th centuries. Robert Hooke, Mikhail Lomonosov, Ole Borch, and Pierre Bayden all successfully created oxygen, but did not realize it at the time. Oxygen was discovered by Joseph Priestley in 1774 when he focused sunlight on a sample of mercuric oxide and collected the resulting gas. Carl Wilhelm Scheele had also created oxygen in 1771 by the same method, but Scheele did not publish his results until 1777. Tellurium was first discovered in 1783 by Franz Joseph Müller von Reichenstein. He discovered tellurium in a sample of what is now known as calaverite. Müller assumed at first that the sample was pure antimony, but tests he ran on the sample did not agree with this. Muller then guessed that the sample was bismuth sulfide, but tests confirmed that the sample was not that. For some years, Muller pondered the problem. Eventually he realized that the sample was gold bonded with an unknown element. In 1796, Müller sent part of the sample to the German chemist Martin Klaproth, who purified the undiscovered element. Klaproth decided to call the element tellurium after the Latin word for earth. Selenium was discovered in 1817 by Jöns Jacob Berzelius. Berzelius noticed a reddish-brown sediment at a sulfuric acid manufacturing plant. The sample was thought to contain arsenic. Berzelius initially thought that the sediment contained tellurium, but came to realize that it also contained a new element, which he named selenium after the Greek moon goddess Selene. Periodic table placing Three of the chalcogens (sulfur, selenium, and tellurium) were part of the discovery of periodicity, as they are among a series of triads of elements in the same group that were noted by Johann Wolfgang Döbereiner as having similar properties. Around 1865 John Newlands produced a series of papers where he listed the elements in order of increasing atomic weight and similar physical and chemical properties that recurred at intervals of eight; he likened such periodicity to the octaves of music. His version included a "group b" consisting of oxygen, sulfur, selenium, tellurium, and osmium. After 1869, Dmitri Mendeleev proposed his periodic table placing oxygen at the top of "group VI" above sulfur, selenium, and tellurium. Chromium, molybdenum, tungsten, and uranium were sometimes included in this group, but they would be later rearranged as part of group VIB; uranium would later be moved to the actinide series. Oxygen, along with sulfur, selenium, tellurium, and later polonium would be grouped in group VIA, until the group's name was changed to group 16 in 1988. Modern discoveries In the late 19th century, Marie Curie and Pierre Curie discovered that a sample of pitchblende was emitting four times as much radioactivity as could be explained by the presence of uranium alone. 
The Curies gathered several tons of pitchblende and refined it for several months until they had a pure sample of polonium. The discovery officially took place in 1898. Prior to the invention of particle accelerators, the only way to produce polonium was to extract it over several months from uranium ore. The first attempt at creating livermorium was from 1976 to 1977 at the LBNL, who bombarded curium-248 with calcium-48, but were not successful. After several failed attempts in 1977, 1998, and 1999 by research groups in Russia, Germany, and the US, livermorium was created successfully in 2000 at the Joint Institute for Nuclear Research by bombarding curium-248 atoms with calcium-48 atoms. The element was known as ununhexium until it was officially named livermorium in 2012. Names and etymology In the 19th century, Jons Jacob Berzelius suggested calling the elements in group 16 "amphigens", as the elements in the group formed amphid salts (salts of oxyacids. Formerly regarded as composed of two oxides, an acid and a basic oxide) The term received some use in the early 1800s but is now obsolete. The name chalcogen comes from the Greek words χαλκος (, literally "copper"), and γενές (, born, gender, kindle). It was first used in 1932 by Wilhelm Biltz's group at Leibniz University Hannover, where it was proposed by Werner Fischer. The word "chalcogen" gained popularity in Germany during the 1930s because the term was analogous to "halogen". Although the literal meanings of the modern Greek words imply that chalcogen means "copper-former", this is misleading because the chalcogens have nothing to do with copper in particular. "Ore-former" has been suggested as a better translation, as the vast majority of metal ores are chalcogenides and the word χαλκος in ancient Greek was associated with metals and metal-bearing rock in general; copper, and its alloy bronze, was one of the first metals to be used by humans. Oxygen's name comes from the Greek words oxy genes, meaning "acid-forming". Sulfur's name comes from either the Latin word sulfurium or the Sanskrit word sulvere; both of those terms are ancient words for sulfur. Selenium is named after the Greek goddess of the moon, Selene, to match the previously-discovered element tellurium, whose name comes from the Latin word telus, meaning earth. Polonium is named after Marie Curie's country of birth, Poland. Livermorium is named for the Lawrence Livermore National Laboratory. Occurrence The four lightest chalcogens (oxygen, sulfur, selenium, and tellurium) are all primordial elements on Earth. Sulfur and oxygen occur as constituent copper ores and selenium and tellurium occur in small traces in such ores. Polonium forms naturally from the decay of other elements, even though it is not primordial. Livermorium does not occur naturally at all. Oxygen makes up 21% of the atmosphere by weight, 89% of water by weight, 46% of the earth's crust by weight, and 65% of the human body. Oxygen also occurs in many minerals, being found in all oxide minerals and hydroxide minerals, and in numerous other mineral groups. Stars of at least eight times the mass of the sun also produce oxygen in their cores via nuclear fusion. Oxygen is the third-most abundant element in the universe, making up 1% of the universe by weight. Sulfur makes up 0.035% of the earth's crust by weight, making it the 17th most abundant element there and makes up 0.25% of the human body. It is a major component of soil. 
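The abundance figures in this and the following paragraphs switch between weight percent, parts per million, and parts per billion; these units are related by simple powers of ten. A quick worked conversion, using the crustal sulfur figure quoted above (the script itself is only an illustrative sketch):

# Convert a mass fraction given in percent to ppm and to grams per tonne.
def percent_to_ppm(percent):
    return percent * 1e4          # 1% by weight = 10,000 ppm by weight

sulfur_crust_percent = 0.035      # crustal sulfur abundance quoted above
ppm = percent_to_ppm(sulfur_crust_percent)
grams_per_tonne = ppm             # 1 ppm by weight equals 1 g per metric tonne
print(f"{ppm:.0f} ppm = {grams_per_tonne:.0f} g of sulfur per tonne of crust")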
Sulfur makes up 870 parts per million of seawater and about 1 part per billion of the atmosphere. Sulfur can be found in elemental form or in the form of sulfide minerals, sulfate minerals, or sulfosalt minerals. Stars of at least 12 times the mass of the sun produce sulfur in their cores via nuclear fusion. Sulfur is the tenth most abundant element in the universe, making up 500 parts per million of the universe by weight. Selenium makes up 0.05 parts per million of the earth's crust by weight. This makes it the 67th most abundant element in the earth's crust. Selenium makes up on average 5 parts per million of the soils. Seawater contains around 200 parts per trillion of selenium. The atmosphere contains 1 nanogram of selenium per cubic meter. There are mineral groups known as selenates and selenites, but there are not many minerals in these groups. Selenium is not produced directly by nuclear fusion. Selenium makes up 30 parts per billion of the universe by weight. There are only 5 parts per billion of tellurium in the earth's crust and 15 parts per billion of tellurium in seawater. Tellurium is one of the eight or nine least abundant elements in the earth's crust. There are a few dozen tellurate minerals and telluride minerals, and tellurium occurs in some minerals with gold, such as sylvanite and calaverite. Tellurium makes up 9 parts per billion of the universe by weight. Polonium only occurs in trace amounts on earth, via radioactive decay of uranium and thorium. It is present in uranium ores in concentrations of 100 micrograms per metric ton. Very minute amounts of polonium exist in the soil and thus in most food, and thus in the human body. The earth's crust contains less than 1 part per billion of polonium, making it one of the ten rarest metals on earth. Livermorium is always produced artificially in particle accelerators. Even when it is produced, only a small number of atoms are synthesized at a time. Chalcophile elements Chalcophile elements are those that remain on or close to the surface because they combine readily with chalcogens other than oxygen, forming compounds which do not sink into the core. Chalcophile ("chalcogen-loving") elements in this context are those metals and heavier nonmetals that have a low affinity for oxygen and prefer to bond with the heavier chalcogen sulfur as sulfides. Because sulfide minerals are much denser than the silicate minerals formed by lithophile elements, chalcophile elements separated below the lithophiles at the time of the first crystallisation of the Earth's crust. This has led to their depletion in the Earth's crust relative to their solar abundances, though this depletion has not reached the levels found with siderophile elements. Production Approximately 100 million metric tons of oxygen are produced yearly. Oxygen is most commonly produced by fractional distillation, in which air is cooled to a liquid, then warmed, allowing all the components of air except for oxygen to turn to gases and escape. Fractionally distilling air several times can produce 99.5% pure oxygen. Another method with which oxygen is produced is to send a stream of dry, clean air through a bed of molecular sieves made of zeolite, which absorbs the nitrogen in the air, leaving 90 to 93% pure oxygen. Sulfur can be mined in its elemental form, although this method is no longer as popular as it used to be. In 1865 a large deposit of elemental sulfur was discovered in the U.S. states of Louisiana and Texas, but it was difficult to extract at the time. 
In the 1890s, Herman Frasch came up with the solution of liquefying the sulfur with superheated steam and pumping the sulfur up to the surface. These days sulfur is instead more often extracted from oil, natural gas, and tar. The world production of selenium is around 1500 metric tons per year, out of which roughly 10% is recycled. Japan is the largest producer, producing 800 metric tons of selenium per year. Other large producers include Belgium (300 metric tons per year), the United States (over 200 metric tons per year), Sweden (130 metric tons per year), and Russia (100 metric tons per year). Selenium can be extracted from the waste from the process of electrolytically refining copper. Another method of producing selenium is to farm selenium-gathering plants such as milk vetch. This method could produce three kilograms of selenium per acre, but is not commonly practiced. Tellurium is mostly produced as a by-product of the processing of copper. Tellurium can also be refined by electrolytic reduction of sodium telluride. The world production of tellurium is between 150 and 200 metric tons per year. The United States is one of the largest producers of tellurium, producing around 50 metric tons per year. Peru, Japan, and Canada are also large producers of tellurium. Until the creation of nuclear reactors, all polonium had to be extracted from uranium ore. In modern times, most isotopes of polonium are produced by bombarding bismuth with neutrons. Polonium can also be produced by high neutron fluxes in nuclear reactors. Approximately 100 grams of polonium are produced yearly. All the polonium produced for commercial purposes is made in the Ozersk nuclear reactor in Russia. From there, it is taken to Samara, Russia for purification, and from there to St. Petersburg for distribution. The United States is the largest consumer of polonium. All livermorium is produced artificially in particle accelerators. The first successful production of livermorium was achieved by bombarding curium-248 atoms with calcium-48 atoms. As of 2011, roughly 25 atoms of livermorium had been synthesized. Applications Metabolism is the most important source and use of oxygen. Minor industrial uses include Steelmaking (55% of all purified oxygen produced), the chemical industry (25% of all purified oxygen), medical use, water treatment (as oxygen kills some types of bacteria), rocket fuel (in liquid form), and metal cutting. Most sulfur produced is transformed into sulfur dioxide, which is further transformed into sulfuric acid, a very common industrial chemical. Other common uses include being a key ingredient of gunpowder and Greek fire, and being used to change soil pH. Sulfur is also mixed into rubber to vulcanize it. Sulfur is used in some types of concrete and fireworks. 60% of all sulfuric acid produced is used to generate phosphoric acid. Sulfur is used as a pesticide (specifically as an acaricide and fungicide) on "orchard, ornamental, vegetable, grain, and other crops." Around 40% of all selenium produced goes to glassmaking. 30% of all selenium produced goes to metallurgy, including manganese production. 15% of all selenium produced goes to agriculture. Electronics such as photovoltaic materials claim 10% of all selenium produced. Pigments account for 5% of all selenium produced. Historically, machines such as photocopiers and light meters used one-third of all selenium produced, but this application is in steady decline. 
Tellurium suboxide, a mixture of tellurium and tellurium dioxide, is used in the rewritable data layer of some CD-RW disks and DVD-RW disks. Bismuth telluride is also used in many microelectronic devices, such as photoreceptors. Tellurium is sometimes used as an alternative to sulfur in vulcanized rubber. Cadmium telluride is used as a high-efficiency material in solar panels. Some of polonium's applications relate to the element's radioactivity. For instance, polonium is used as an alpha-particle generator for research. Polonium alloyed with beryllium provides an efficient neutron source. Polonium is also used in nuclear batteries. Most polonium is used in antistatic devices. Livermorium does not have any uses whatsoever due to its extreme rarity and short half-life. Organochalcogen compounds are involved in the semiconductor process. These compounds also feature into ligand chemistry and biochemistry. One application of chalcogens themselves is to manipulate redox couples in supramolar chemistry (chemistry involving non-covalent bond interactions). This application leads on to such applications as crystal packing, assembly of large molecules, and biological recognition of patterns. The secondary bonding interactions of the larger chalcogens, selenium and tellurium, can create organic solvent-holding acetylene nanotubes. Chalcogen interactions are useful for conformational analysis and stereoelectronic effects, among other things. Chalcogenides with through bonds also have applications. For instance, divalent sulfur can stabilize carbanions, cationic centers, and radical. Chalcogens can confer upon ligands (such as DCTO) properties such as being able to transform Cu(II) to Cu(I). Studying chalcogen interactions gives access to radical cations, which are used in mainstream synthetic chemistry. Metallic redox centers of biological importance are tunable by interactions of ligands containing chalcogens, such as methionine and selenocysteine. Also, chalcogen through-bonds can provide insight about the process of electron transfer. Biological role Oxygen is needed by almost all organisms for the purpose of generating ATP. It is also a key component of most other biological compounds, such as water, amino acids and DNA. Human blood contains a large amount of oxygen. Human bones contain 28% oxygen. Human tissue contains 16% oxygen. A typical 70-kilogram human contains 43 kilograms of oxygen, mostly in the form of water. All animals need significant amounts of sulfur. Some amino acids, such as cysteine and methionine contain sulfur. Plant roots take up sulfate ions from the soil and reduce it to sulfide ions. Metalloproteins also use sulfur to attach to useful metal atoms in the body and sulfur similarly attaches itself to poisonous metal atoms like cadmium to haul them to the safety of the liver. On average, humans consume 900 milligrams of sulfur each day. Sulfur compounds, such as those found in skunk spray often have strong odors. All animals and some plants need trace amounts of selenium, but only for some specialized enzymes. Humans consume on average between 6 and 200 micrograms of selenium per day. Mushrooms and brazil nuts are especially noted for their high selenium content. Selenium in foods is most commonly found in the form of amino acids such as selenocysteine and selenomethionine. Selenium can protect against heavy metal poisoning. Tellurium is not known to be needed for animal life, although a few fungi can incorporate it in compounds in place of selenium. 
Microorganisms also absorb tellurium and emit dimethyl telluride. Most tellurium in the blood stream is excreted slowly in urine, but some is converted to dimethyl telluride and released through the lungs. On average, humans ingest about 600 micrograms of tellurium daily. Plants can take up some tellurium from the soil. Onions and garlic have been found to contain as much as 300 parts per million of tellurium in dry weight. Polonium has no biological role, and is highly toxic on account of being radioactive. Toxicity Oxygen is generally nontoxic, but oxygen toxicity has been reported when it is used in high concentrations. In both elemental gaseous form and as a component of water, it is vital to almost all life on earth. Despite this, liquid oxygen is highly dangerous. Even gaseous oxygen is dangerous in excess. For instance, sports divers have occasionally drowned from convulsions caused by breathing pure oxygen at a depth of more than underwater. Oxygen is also toxic to some bacteria. Ozone, an allotrope of oxygen, is toxic to most life. It can cause lesions in the respiratory tract. Sulfur is generally nontoxic and is even a vital nutrient for humans. However, in its elemental form it can cause redness in the eyes and skin, a burning sensation and a cough if inhaled, a burning sensation and diarrhoea and/or catharsis if ingested, and can irritate the mucous membranes. An excess of sulfur can be toxic for cows because microbes in the rumens of cows produce toxic hydrogen sulfide upon reaction with sulfur. Many sulfur compounds, such as hydrogen sulfide (H2S) and sulfur dioxide (SO2) are highly toxic. Selenium is a trace nutrient required by humans on the order of tens or hundreds of micrograms per day. A dose of over 450 micrograms can be toxic, resulting in bad breath and body odor. Extended, low-level exposure, which can occur at some industries, results in weight loss, anemia, and dermatitis. In many cases of selenium poisoning, selenous acid is formed in the body. Hydrogen selenide (H2Se) is highly toxic. Exposure to tellurium can produce unpleasant side effects. As little as 10 micrograms of tellurium per cubic meter of air can cause notoriously unpleasant breath, described as smelling like rotten garlic. Acute tellurium poisoning can cause vomiting, gut inflammation, internal bleeding, and respiratory failure. Extended, low-level exposure to tellurium causes tiredness and indigestion. Sodium tellurite (Na2TeO3) is lethal in amounts of around 2 grams. Polonium is dangerous as an alpha particle emitter. If ingested, polonium-210 is a million times as toxic as hydrogen cyanide by weight; it has been used as a murder weapon in the past, most famously to kill Alexander Litvinenko. Polonium poisoning can cause nausea, vomiting, anorexia, and lymphopenia. It can also damage hair follicles and white blood cells. Polonium-210 is only dangerous if ingested or inhaled because its alpha particle emissions cannot penetrate human skin. Polonium-209 is also toxic, and can cause leukemia. Amphid salts Amphid salts was a name given by Jons Jacob Berzelius in the 19th century for chemical salts derived from the 16th group of the periodic table which included oxygen, sulfur, selenium, and tellurium. The term received some use in the early 1800s but is now obsolete. The current term in use for the 16th group is chalcogens. See also Chalcogenide Gold chalcogenides Halogen Interchalcogen Pnictogen References External links Periodic table Groups (periodic table)
https://en.wikipedia.org/wiki/Carbon%20dioxide
Carbon dioxide
Carbon dioxide (chemical formula ) is a chemical compound made up of molecules that each have one carbon atom covalently double bonded to two oxygen atoms. It is found in the gas state at room temperature. In the air, carbon dioxide is transparent to visible light but absorbs infrared radiation, acting as a greenhouse gas. It is a trace gas in Earth's atmosphere at 421 parts per million (ppm), or about 0.04% by volume (as of May 2022), having risen from pre-industrial levels of 280 ppm. Burning fossil fuels is the primary cause of these increased CO2 concentrations and also the primary cause of climate change. Carbon dioxide is soluble in water and is found in groundwater, lakes, ice caps, and seawater. When carbon dioxide dissolves in water, it forms carbonate and mainly bicarbonate (), which causes ocean acidification as atmospheric CO2 levels increase. As the source of available carbon in the carbon cycle, atmospheric CO2 is the primary carbon source for life on Earth. Its concentration in Earth's pre-industrial atmosphere since late in the Precambrian has been regulated by organisms and geological phenomena. Plants, algae and cyanobacteria use energy from sunlight to synthesize carbohydrates from carbon dioxide and water in a process called photosynthesis, which produces oxygen as a waste product. In turn, oxygen is consumed and CO2 is released as waste by all aerobic organisms when they metabolize organic compounds to produce energy by respiration. CO2 is released from organic materials when they decay or combust, such as in forest fires. Since plants require CO2 for photosynthesis, and humans and animals depend on plants for food, CO2 is necessary for the survival of life on earth. Carbon dioxide is 53% more dense than dry air, but is long lived and thoroughly mixes in the atmosphere. About half of excess CO2 emissions to the atmosphere are absorbed by land and ocean carbon sinks. These sinks can become saturated and are volatile, as decay and wildfires result in the CO2 being released back into the atmosphere. CO2 is eventually sequestered (stored for the long term) in rocks and organic deposits like coal, petroleum and natural gas. Sequestered CO2 is released into the atmosphere through burning fossil fuels or naturally by volcanoes, hot springs, geysers, and when carbonate rocks dissolve in water or react with acids. CO2 is a versatile industrial material, used, for example, as an inert gas in welding and fire extinguishers, as a pressurizing gas in air guns and oil recovery, and as a supercritical fluid solvent in decaffeination of coffee and supercritical drying. It is a byproduct of fermentation of sugars in bread, beer and wine making, and is added to carbonated beverages like seltzer and beer for effervescence. It has a sharp and acidic odor and generates the taste of soda water in the mouth, but at normally encountered concentrations it is odorless. Chemical and physical properties Structure, bonding and molecular vibrations The symmetry of a carbon dioxide molecule is linear and centrosymmetric at its equilibrium geometry. The length of the carbon-oxygen bond in carbon dioxide is 116.3 pm, noticeably shorter than the roughly 140-pm length of a typical single C–O bond, and shorter than most other C–O multiply-bonded functional groups such as carbonyls. Since it is centrosymmetric, the molecule has no electric dipole moment. As a linear triatomic molecule, has four vibrational modes as shown in the diagram. 
In the symmetric and the antisymmetric stretching modes, the atoms move along the axis of the molecule. There are two bending modes, which are degenerate, meaning that they have the same frequency and same energy, because of the symmetry of the molecule. When a molecule touches a surface or touches another molecule, the two bending modes can differ in frequency because the interaction is different for the two modes. Some of the vibrational modes are observed in the infrared (IR) spectrum: the antisymmetric stretching mode at wavenumber 2349 cm−1 (wavelength 4.25 μm) and the degenerate pair of bending modes at 667 cm−1 (wavelength 15 μm). The symmetric stretching mode does not create an electric dipole so is not observed in IR spectroscopy, but it is detected by Raman spectroscopy at 1388 cm−1 (wavelength 7.2 μm). In the gas phase, carbon dioxide molecules undergo significant vibrational motions and do not keep a fixed structure. However, in a Coulomb explosion imaging experiment, an instantaneous image of the molecular structure can be deduced. Such an experiment has been performed for carbon dioxide. The result of this experiment, and the conclusion of theoretical calculations based on an ab initio potential energy surface of the molecule, is that none of the molecules in the gas phase are ever exactly linear. This counter-intuitive result is trivially due to the fact that the nuclear motion volume element vanishes for linear geometries. This is so for all molecules (except diatomics!). In aqueous solution Carbon dioxide is soluble in water, in which it reversibly forms H2CO3 (carbonic acid), which is a weak acid since its ionization in water is incomplete. CO2 + H2O <=> H2CO3 The hydration equilibrium constant, Kh = [H2CO3]/[CO2(aq)], is approximately 1.7×10−3 at 25 °C. Hence, the majority of the carbon dioxide is not converted into carbonic acid, but remains as CO2 molecules, not affecting the pH. The relative concentrations of CO2, H2CO3, and the deprotonated forms HCO3− (bicarbonate) and CO32− (carbonate) depend on the pH. As shown in a Bjerrum plot, in neutral or slightly alkaline water (pH > 6.5), the bicarbonate form predominates (>50%), becoming the most prevalent (>95%) at the pH of seawater. In very alkaline water (pH > 10.4), the predominant (>50%) form is carbonate; a short calculation reproducing this speciation is given below. The oceans, being mildly alkaline with typical pH = 8.2–8.5, contain about 120 mg of bicarbonate per liter. Being diprotic, carbonic acid has two acid dissociation constants, the first one for the dissociation into the bicarbonate (also called hydrogen carbonate) ion (HCO3−): H2CO3 <=> HCO3- + H+ Ka1 = 2.5×10−4; pKa1 = 3.6 at 25 °C. This is the true first acid dissociation constant, defined as Ka1 = [HCO3−][H+]/[H2CO3], where the denominator includes only covalently bound H2CO3 and does not include hydrated CO2(aq). The much smaller and often-quoted value near 4.5×10−7 (pKa1 ≈ 6.35) is an apparent value calculated on the (incorrect) assumption that all dissolved CO2 is present as carbonic acid, so that Ka1(apparent) = [HCO3−][H+]/([H2CO3] + [CO2(aq)]). Since most of the dissolved CO2 remains as CO2 molecules, Ka1(apparent) has a much larger denominator and a much smaller value than the true Ka1. The bicarbonate ion is an amphoteric species that can act as an acid or as a base, depending on the pH of the solution. At high pH, it dissociates significantly into the carbonate ion (CO32−): HCO3- <=> CO3^2- + H+ Ka2 = 4.69×10−11; pKa2 = 10.329 In organisms, carbonic acid production is catalysed by the enzyme carbonic anhydrase. Chemical reactions of CO2 CO2 is a potent electrophile having an electrophilic reactivity that is comparable to benzaldehyde or strong α,β-unsaturated carbonyl compounds. 
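The pH-dependent speciation described in the aqueous-solution discussion above can be reproduced with a few lines of code. The sketch below is an idealized calculation that assumes the apparent constants quoted there (pKa1 ≈ 6.35 and pKa2 ≈ 10.33) and ignores ionic strength, temperature and pressure corrections.

# Fraction of dissolved inorganic carbon present as CO2(aq)+H2CO3, HCO3- and CO3^2-
# as a function of pH, using apparent dissociation constants (idealized, ~25 °C).
pKa1, pKa2 = 6.35, 10.33          # apparent constants quoted in the text
Ka1, Ka2 = 10**-pKa1, 10**-pKa2

def speciation(pH):
    h = 10**-pH
    denom = h*h + h*Ka1 + Ka1*Ka2
    co2  = h*h / denom            # CO2(aq) plus H2CO3
    hco3 = h*Ka1 / denom          # bicarbonate
    co3  = Ka1*Ka2 / denom        # carbonate
    return co2, hco3, co3

for pH in (6.5, 8.2, 10.4):
    co2, hco3, co3 = speciation(pH)
    print(f"pH {pH}: CO2 {co2:.0%}, HCO3- {hco3:.0%}, CO3^2- {co3:.0%}")
# At seawater-like pH (~8.2) bicarbonate exceeds 95%, matching the Bjerrum-plot description.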
However, unlike electrophiles of similar reactivity, the reactions of nucleophiles with CO2 are thermodynamically less favored and are often found to be highly reversible. Only very strong nucleophiles, like the carbanions provided by Grignard reagents and organolithium compounds, react with CO2 to give carboxylates: MR + CO2 -> RCO2M where M = Li or MgBr and R = alkyl or aryl. In metal carbon dioxide complexes, CO2 serves as a ligand, which can facilitate the conversion of CO2 to other chemicals. The reduction of CO2 to CO is ordinarily a difficult and slow reaction: CO2 + 2 e- + 2 H+ -> CO + H2O Photoautotrophs (i.e. plants and cyanobacteria) use the energy contained in sunlight to photosynthesize simple sugars from CO2 absorbed from the air and water: n CO2 + n H2O -> (CH2O)n + n O2 The redox potential for this reaction near pH 7 is about −0.53 V versus the standard hydrogen electrode. The nickel-containing enzyme carbon monoxide dehydrogenase catalyses this process. Physical properties Carbon dioxide is colorless. At low concentrations the gas is odorless; however, at sufficiently high concentrations, it has a sharp, acidic odor. At standard temperature and pressure, the density of carbon dioxide is around 1.98 kg/m3, about 1.53 times that of air. Carbon dioxide has no liquid state at pressures below 0.518 MPa (5.11 atm). At a pressure of 1 atm (101.325 kPa), the gas deposits directly to a solid at temperatures below −78.5 °C (−109.3 °F) and the solid sublimes directly to a gas above this temperature. In its solid state, carbon dioxide is commonly called dry ice. Liquid carbon dioxide forms only at pressures above 0.518 MPa (5.11 atm); the triple point of carbon dioxide is 0.518 MPa (5.11 atm) at −56.6 °C (−69.9 °F) (see phase diagram). The critical point is 7.38 MPa (72.8 atm) at 31.1 °C (88.0 °F). Another form of solid carbon dioxide observed at high pressure is an amorphous glass-like solid. This form of glass, called carbonia, is produced by supercooling heated CO2 at extreme pressures (40–48 GPa, or about 400,000 atmospheres) in a diamond anvil. This discovery confirmed the theory that carbon dioxide could exist in a glass state similar to other members of its elemental family, like silicon dioxide (silica glass) and germanium dioxide. Unlike silica and germania glasses, however, carbonia glass is not stable at normal pressures and reverts to gas when pressure is released. At temperatures and pressures above the critical point, carbon dioxide behaves as a supercritical fluid known as supercritical carbon dioxide. Table of thermal and physical properties of saturated liquid carbon dioxide: Table of thermal and physical properties of gaseous carbon dioxide at atmospheric pressure: Biological role Carbon dioxide is an end product of cellular respiration in organisms that obtain energy by breaking down sugars, fats and amino acids with oxygen as part of their metabolism. This includes all plants, algae and animals and aerobic fungi and bacteria. In vertebrates, the carbon dioxide travels in the blood from the body's tissues to the skin (e.g., amphibians) or the gills (e.g., fish), from where it dissolves in the water, or to the lungs from where it is exhaled. During active photosynthesis, plants can absorb more carbon dioxide from the atmosphere than they release in respiration. Photosynthesis and carbon fixation Carbon fixation is a biochemical process by which atmospheric carbon dioxide is incorporated by plants, algae and cyanobacteria into energy-rich organic molecules such as glucose, thus creating their own food by photosynthesis. 
Photosynthesis uses carbon dioxide and water to produce sugars from which other organic compounds can be constructed, and oxygen is produced as a by-product. Ribulose-1,5-bisphosphate carboxylase oxygenase, commonly abbreviated to RuBisCO, is the enzyme involved in the first major step of carbon fixation, the production of two molecules of 3-phosphoglycerate from CO2 and ribulose bisphosphate, as shown in the diagram at left. RuBisCO is thought to be the single most abundant protein on Earth. Phototrophs use the products of their photosynthesis as internal food sources and as raw material for the biosynthesis of more complex organic molecules, such as polysaccharides, nucleic acids and proteins. These are used for their own growth, and also as the basis of the food chains and webs that feed other organisms, including animals such as ourselves. Some important phototrophs, the coccolithophores synthesise hard calcium carbonate scales. A globally significant species of coccolithophore is Emiliania huxleyi whose calcite scales have formed the basis of many sedimentary rocks such as limestone, where what was previously atmospheric carbon can remain fixed for geological timescales. Plants can grow as much as 50 percent faster in concentrations of 1,000 ppm CO2 when compared with ambient conditions, though this assumes no change in climate and no limitation on other nutrients. Elevated CO2 levels cause increased growth reflected in the harvestable yield of crops, with wheat, rice and soybean all showing increases in yield of 12–14% under elevated CO2 in FACE experiments. Increased atmospheric CO2 concentrations result in fewer stomata developing on plants which leads to reduced water usage and increased water-use efficiency. Studies using FACE have shown that CO2 enrichment leads to decreased concentrations of micronutrients in crop plants. This may have knock-on effects on other parts of ecosystems as herbivores will need to eat more food to gain the same amount of protein. The concentration of secondary metabolites such as phenylpropanoids and flavonoids can also be altered in plants exposed to high concentrations of CO2. Plants also emit CO2 during respiration, and so the majority of plants and algae, which use C3 photosynthesis, are only net absorbers during the day. Though a growing forest will absorb many tons of CO2 each year, a mature forest will produce as much CO2 from respiration and decomposition of dead specimens (e.g., fallen branches) as is used in photosynthesis in growing plants. Contrary to the long-standing view that they are carbon neutral, mature forests can continue to accumulate carbon and remain valuable carbon sinks, helping to maintain the carbon balance of Earth's atmosphere. Additionally, and crucially to life on earth, photosynthesis by phytoplankton consumes dissolved CO2 in the upper ocean and thereby promotes the absorption of CO2 from the atmosphere. Toxicity Carbon dioxide content in fresh air (averaged between sea-level and 10 kPa level, i.e., about altitude) varies between 0.036% (360 ppm) and 0.041% (412 ppm), depending on the location. CO2 is an asphyxiant gas and not classified as toxic or harmful in accordance with Globally Harmonized System of Classification and Labelling of Chemicals standards of United Nations Economic Commission for Europe by using the OECD Guidelines for the Testing of Chemicals. In concentrations up to 1% (10,000 ppm), it will make some people feel drowsy and give the lungs a stuffy feeling. 
Concentrations of 7% to 10% (70,000 to 100,000 ppm) may cause suffocation, even in the presence of sufficient oxygen, manifesting as dizziness, headache, visual and hearing dysfunction, and unconsciousness within a few minutes to an hour. The physiological effects of acute carbon dioxide exposure are grouped together under the term hypercapnia, a subset of asphyxiation. Because it is heavier than air, in locations where the gas seeps from the ground (due to sub-surface volcanic or geothermal activity) in relatively high concentrations, without the dispersing effects of wind, it can collect in sheltered/pocketed locations below average ground level, causing animals located therein to be suffocated. Carrion feeders attracted to the carcasses are then also killed. Children have been killed in the same way near the city of Goma by CO2 emissions from the nearby volcano Mount Nyiragongo. The Swahili term for this phenomenon is . Adaptation to increased concentrations of CO2 occurs in humans, including modified breathing and kidney bicarbonate production, in order to balance the effects of blood acidification (acidosis). Several studies suggested that 2.0 percent inspired concentrations could be used for closed air spaces (e.g. a submarine) since the adaptation is physiological and reversible, as deterioration in performance or in normal physical activity does not happen at this level of exposure for five days. Yet, other studies show a decrease in cognitive function even at much lower levels. Also, with ongoing respiratory acidosis, adaptation or compensatory mechanisms will be unable to reverse such condition. Below 1% There are few studies of the health effects of long-term continuous CO2 exposure on humans and animals at levels below 1%. Occupational CO2 exposure limits have been set in the United States at 0.5% (5000 ppm) for an eight-hour period. At this CO2 concentration, International Space Station crew experienced headaches, lethargy, mental slowness, emotional irritation, and sleep disruption. Studies in animals at 0.5% CO2 have demonstrated kidney calcification and bone loss after eight weeks of exposure. A study of humans exposed in 2.5 hour sessions demonstrated significant negative effects on cognitive abilities at concentrations as low as 0.1% (1000ppm) CO2 likely due to CO2 induced increases in cerebral blood flow. Another study observed a decline in basic activity level and information usage at 1000 ppm, when compared to 500 ppm. However a review of the literature found that most studies on the phenomenon of carbon dioxide induced cognitive impairment to have a small effect on high-level decision making and most of the studies were confounded by inadequate study designs, environmental comfort, uncertainties in exposure doses and differing cognitive assessments used. Similarly a study on the effects of the concentration of CO2 in motorcycle helmets has been criticized for having dubious methodology in not noting the self-reports of motorcycle riders and taking measurements using mannequins. Further when normal motorcycle conditions were achieved (such as highway or city speeds) or the visor was raised the concentration of CO2 declined to safe levels (0.2%). Ventilation Poor ventilation is one of the main causes of excessive CO2 concentrations in closed spaces, leading to poor indoor air quality. 
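As a rough illustration of why unventilated rooms accumulate CO2 quickly, the sketch below estimates the concentration rise in a sealed space. The room volume, occupancy and per-person emission rate (roughly 1 kg of CO2 per person per day, a figure given under Human physiology below) are illustrative assumptions, and the calculation ignores leakage, mixing and absorption.

# Rough estimate of CO2 build-up (ppm by volume) in an unventilated room.
room_volume_m3 = 50.0             # small office or bedroom (assumed)
occupants = 2
co2_kg_per_person_day = 1.0       # approximate resting emission, see text below
molar_mass_co2 = 44.0             # g/mol
molar_volume_l = 24.5             # litres per mole of ideal gas near 25 °C, 1 atm

litres_per_hour = (occupants * co2_kg_per_person_day * 1000 / 24
                   / molar_mass_co2 * molar_volume_l)
ppm_rise_per_hour = litres_per_hour / (room_volume_m3 * 1000) * 1e6
print(f"{ppm_rise_per_hour:.0f} ppm per hour")   # about 930 ppm/h for these assumptions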
The carbon dioxide differential above outdoor concentrations at steady-state conditions (when the occupancy and ventilation system operation are sufficiently long that the CO2 concentration has stabilized) is sometimes used to estimate ventilation rates per person. Higher CO2 concentrations are associated with degradation of occupant health, comfort and performance. ASHRAE Standard 62.1–2007 ventilation rates may result in indoor concentrations up to 2,100 ppm above ambient outdoor conditions. Thus if the outdoor concentration is 400 ppm, indoor concentrations may reach 2,500 ppm with ventilation rates that meet this industry consensus standard. Concentrations in poorly ventilated spaces can be found even higher than this (range of 3,000 or 4,000 ppm). Miners, who are particularly vulnerable to gas exposure due to insufficient ventilation, referred to mixtures of carbon dioxide and nitrogen as "blackdamp", "choke damp" or "stythe". Before more effective technologies were developed, miners would frequently monitor for dangerous levels of blackdamp and other gases in mine shafts by bringing a caged canary with them as they worked. The canary is more sensitive to asphyxiant gases than humans, and as it became unconscious would stop singing and fall off its perch. The Davy lamp could also detect high levels of blackdamp (which sinks, and collects near the floor) by burning less brightly, while methane, another suffocating gas and explosion risk, would make the lamp burn more brightly. In February 2020, three people died from suffocation at a party in Moscow when dry ice (frozen CO2) was added to a swimming pool to cool it down. A similar accident occurred in 2018 when a woman died from CO2 fumes emanating from the large amount of dry ice she was transporting in her car. Outdoor areas with elevated concentrations Local concentrations of carbon dioxide can reach high values near strong sources, especially those that are isolated by surrounding terrain. At the Bossoleto hot spring near Rapolano Terme in Tuscany, Italy, situated in a bowl-shaped depression, concentrations of CO2 rise to above 75% overnight, sufficient to kill insects and small animals. After sunrise the gas is dispersed by convection. High concentrations of CO2 produced by disturbance of deep lake water saturated with CO2 are thought to have caused 37 fatalities at Lake Monoun, Cameroon in 1984 and 1700 casualties at Lake Nyos, Cameroon in 1986. Human physiology Content The body produces approximately 1.0 kg of carbon dioxide per day per person, containing about 0.29 kg of carbon. In humans, this carbon dioxide is carried through the venous system and is breathed out through the lungs, resulting in lower concentrations in the arteries. The carbon dioxide content of the blood is often given as the partial pressure, which is the pressure which carbon dioxide would have had if it alone occupied the volume. In humans, the blood carbon dioxide content is shown in the adjacent table. Transport in the blood CO2 is carried in blood in three different ways. (Exact percentages vary between arterial and venous blood). The majority (about 70% to 80%) is converted to bicarbonate ions by the enzyme carbonic anhydrase in the red blood cells, by the reaction CO2 + H2O -> H2CO3 -> H+ + HCO3-. 5–10% is dissolved in blood plasma, and 5–10% is bound to hemoglobin as carbamino compounds. Hemoglobin, the main oxygen-carrying molecule in red blood cells, carries both oxygen and carbon dioxide. However, the CO2 bound to hemoglobin does not bind to the same site as oxygen. 
Instead, it combines with the N-terminal groups on the four globin chains. However, because of allosteric effects on the hemoglobin molecule, the binding of CO2 decreases the amount of oxygen that is bound for a given partial pressure of oxygen. This is known as the Haldane effect, and is important in the transport of carbon dioxide from the tissues to the lungs. Conversely, a rise in the partial pressure of CO2 or a lower pH will cause offloading of oxygen from hemoglobin, which is known as the Bohr effect. Regulation of respiration Carbon dioxide is one of the mediators of local autoregulation of blood supply. If its concentration is high, the capillaries expand to allow a greater blood flow to that tissue. Bicarbonate ions are crucial for regulating blood pH. A person's breathing rate influences the level of CO2 in their blood. Breathing that is too slow or shallow causes respiratory acidosis, while breathing that is too rapid leads to hyperventilation, which can cause respiratory alkalosis. Although the body requires oxygen for metabolism, low oxygen levels normally do not stimulate breathing. Rather, breathing is stimulated by higher carbon dioxide levels. As a result, breathing low-pressure air or a gas mixture with no oxygen at all (such as pure nitrogen) can lead to loss of consciousness without ever experiencing air hunger. This is especially perilous for high-altitude fighter pilots. It is also why flight attendants instruct passengers, in case of loss of cabin pressure, to apply the oxygen mask to themselves first before helping others; otherwise, one risks losing consciousness. The respiratory centers try to maintain an arterial CO2 pressure of 40 mm Hg. With intentional hyperventilation, the CO2 content of arterial blood may be lowered to 10–20 mm Hg (the oxygen content of the blood is little affected), and the respiratory drive is diminished. This is why one can hold one's breath longer after hyperventilating than without hyperventilating. This carries the risk that unconsciousness may result before the need to breathe becomes overwhelming, which is why hyperventilation is particularly dangerous before free diving. Concentrations and role in the environment Atmosphere Oceans Ocean acidification Carbon dioxide dissolves in the ocean to form carbonic acid (H2CO3), bicarbonate (HCO3−) and carbonate (CO32−). There is about fifty times as much carbon dioxide dissolved in the oceans as exists in the atmosphere. The oceans act as an enormous carbon sink, and have taken up about a third of CO2 emitted by human activity. Hydrothermal vents Carbon dioxide is also introduced into the oceans through hydrothermal vents. The Champagne hydrothermal vent, found at the Northwest Eifuku volcano in the Mariana Trench, produces almost pure liquid carbon dioxide, one of only two known sites in the world as of 2004, the other being in the Okinawa Trough. The finding of a submarine lake of liquid carbon dioxide in the Okinawa Trough was reported in 2006. Production Biological processes Carbon dioxide is a by-product of the fermentation of sugar in the brewing of beer, whisky and other alcoholic beverages and in the production of bioethanol. Yeast metabolizes sugar to produce CO2 and ethanol, also known as alcohol, as follows: C6H12O6 -> 2 CO2 + 2 C2H5OH (a simple mass balance for this reaction is sketched below). All aerobic organisms produce CO2 when they oxidize carbohydrates, fatty acids, and proteins. The many reactions involved are exceedingly complex and not described easily. Refer to cellular respiration, anaerobic respiration and photosynthesis. 
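The fermentation equation above fixes the mass split of the products. A minimal stoichiometry sketch using standard molar masses shows how much CO2 is released per kilogram of glucose fermented; it is a textbook idealization that neglects the sugar diverted to yeast growth and minor by-products.

# Mass balance for C6H12O6 -> 2 CO2 + 2 C2H5OH using standard molar masses (g/mol).
m_glucose, m_co2, m_ethanol = 180.16, 44.01, 46.07

glucose_kg = 1.0
moles = glucose_kg * 1000 / m_glucose
co2_kg = moles * 2 * m_co2 / 1000
ethanol_kg = moles * 2 * m_ethanol / 1000
print(f"{co2_kg:.2f} kg CO2 and {ethanol_kg:.2f} kg ethanol per kg of glucose")
# prints roughly 0.49 kg CO2 and 0.51 kg ethanol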
The equation for the respiration of glucose and other monosaccharides is: C6H12O6 + 6 O2 -> 6 CO2 + 6 H2O Anaerobic organisms decompose organic material, producing methane and carbon dioxide together with traces of other compounds. Regardless of the type of organic material, the production of gases follows a well-defined kinetic pattern. Carbon dioxide comprises about 40–45% of the gas that emanates from decomposition in landfills (termed "landfill gas"). Most of the remaining 50–55% is methane. Industrial processes Carbon dioxide can be obtained by distillation from air, but the method is inefficient. Industrially, carbon dioxide is predominantly an unrecovered waste product, produced by several methods which may be practiced at various scales. Combustion The combustion of all carbon-based fuels, such as methane (natural gas), petroleum distillates (gasoline, diesel, kerosene, propane), coal, wood and generic organic matter produces carbon dioxide and, except in the case of pure carbon, water. As an example, the chemical reaction between methane and oxygen: CH4 + 2 O2 -> CO2 + 2 H2O Iron is reduced from its oxides with coke in a blast furnace, producing pig iron and carbon dioxide: Fe2O3 + 3 CO -> 3 CO2 + 2 Fe By-product from hydrogen production Carbon dioxide is a byproduct of the industrial production of hydrogen by steam reforming and the water gas shift reaction in ammonia production. These processes begin with the reaction of water and natural gas (mainly methane). This is a major source of food-grade carbon dioxide for use in carbonation of beer and soft drinks, and is also used for stunning animals such as poultry. In the summer of 2018, a shortage of carbon dioxide for these purposes arose in Europe due to the temporary shut-down of several ammonia plants for maintenance. Thermal decomposition of limestone Carbon dioxide is also produced by the thermal decomposition of limestone, by heating (calcining) it at high temperature in the manufacture of quicklime (calcium oxide, CaO), a compound that has many industrial uses: CaCO3 -> CaO + CO2 Acids liberate CO2 from most metal carbonates. Consequently, it may be obtained directly from natural carbon dioxide springs, where it is produced by the action of acidified water on limestone or dolomite. The reaction between hydrochloric acid and calcium carbonate (limestone or chalk) is shown below: CaCO3 + 2 HCl -> CaCl2 + H2CO3 The carbonic acid (H2CO3) then decomposes to water and CO2: H2CO3 -> CO2 + H2O Such reactions are accompanied by foaming or bubbling, or both, as the gas is released. They have widespread uses in industry because they can be used to neutralize waste acid streams. Commercial uses Carbon dioxide is used by the food industry, the oil industry, and the chemical industry. The compound has varied commercial uses, but one of its greatest uses as a chemical is in the production of carbonated beverages; it provides the sparkle in carbonated beverages such as soda water, beer and sparkling wine. Precursor to chemicals In the chemical industry, carbon dioxide is mainly consumed as an ingredient in the production of urea, with a smaller fraction being used to produce methanol and a range of other products. Some carboxylic acid derivatives such as sodium salicylate are prepared using CO2 by the Kolbe–Schmitt reaction. In addition to conventional processes using CO2 for chemical production, electrochemical methods are also being explored at a research level.
In particular, the use of renewable energy for production of fuels from CO2 (such as methanol) is attractive as this could result in fuels that could be easily transported and used within conventional combustion technologies but have no net CO2 emissions. Agriculture Plants require carbon dioxide to conduct photosynthesis. The atmospheres of greenhouses may (if of large size, must) be enriched with additional CO2 to sustain and increase the rate of plant growth. At very high concentrations (100 times atmospheric concentration, or greater), carbon dioxide can be toxic to animal life, so raising the concentration to 10,000 ppm (1%) or higher for several hours will eliminate pests such as whiteflies and spider mites in a greenhouse. Foods Carbon dioxide is a food additive used as a propellant and acidity regulator in the food industry. It is approved for usage in the EU (listed as E number E290), US and Australia and New Zealand (listed by its INS number 290). A candy called Pop Rocks is pressurized with carbon dioxide gas at about . When placed in the mouth, it dissolves (just like other hard candy) and releases the gas bubbles with an audible pop. Leavening agents cause dough to rise by producing carbon dioxide. Baker's yeast produces carbon dioxide by fermentation of sugars within the dough, while chemical leaveners such as baking powder and baking soda release carbon dioxide when heated or if exposed to acids. Beverages Carbon dioxide is used to produce carbonated soft drinks and soda water. Traditionally, the carbonation of beer and sparkling wine came about through natural fermentation, but many manufacturers carbonate these drinks with carbon dioxide recovered from the fermentation process. In the case of bottled and kegged beer, the most common method used is carbonation with recycled carbon dioxide. With the exception of British real ale, draught beer is usually transferred from kegs in a cold room or cellar to dispensing taps on the bar using pressurized carbon dioxide, sometimes mixed with nitrogen. The taste of soda water (and related taste sensations in other carbonated beverages) is an effect of the dissolved carbon dioxide rather than the bursting bubbles of the gas. Carbonic anhydrase 4 converts to carbonic acid leading to a sour taste, and also the dissolved carbon dioxide induces a somatosensory response. Winemaking Carbon dioxide in the form of dry ice is often used during the cold soak phase in winemaking to cool clusters of grapes quickly after picking to help prevent spontaneous fermentation by wild yeast. The main advantage of using dry ice over water ice is that it cools the grapes without adding any additional water that might decrease the sugar concentration in the grape must, and thus the alcohol concentration in the finished wine. Carbon dioxide is also used to create a hypoxic environment for carbonic maceration, the process used to produce Beaujolais wine. Carbon dioxide is sometimes used to top up wine bottles or other storage vessels such as barrels to prevent oxidation, though it has the problem that it can dissolve into the wine, making a previously still wine slightly fizzy. For this reason, other gases such as nitrogen or argon are preferred for this process by professional wine makers. Stunning animals Carbon dioxide is often used to "stun" animals before slaughter. "Stunning" may be a misnomer, as the animals are not knocked out immediately and may suffer distress. 
Inert gas Carbon dioxide is one of the most commonly used compressed gases for pneumatic (pressurized gas) systems in portable pressure tools. Carbon dioxide is also used as an atmosphere for welding, although in the welding arc, it reacts to oxidize most metals. Use in the automotive industry is common despite significant evidence that welds made in carbon dioxide are more brittle than those made in more inert atmospheres. When used for MIG welding, CO2 use is sometimes referred to as MAG welding, for Metal Active Gas, as CO2 can react at these high temperatures. It tends to produce a hotter puddle than truly inert atmospheres, improving the flow characteristics. Although, this may be due to atmospheric reactions occurring at the puddle site. This is usually the opposite of the desired effect when welding, as it tends to embrittle the site, but may not be a problem for general mild steel welding, where ultimate ductility is not a major concern. Carbon dioxide is used in many consumer products that require pressurized gas because it is inexpensive and nonflammable, and because it undergoes a phase transition from gas to liquid at room temperature at an attainable pressure of approximately , allowing far more carbon dioxide to fit in a given container than otherwise would. Life jackets often contain canisters of pressured carbon dioxide for quick inflation. Aluminium capsules of CO2 are also sold as supplies of compressed gas for air guns, paintball markers/guns, inflating bicycle tires, and for making carbonated water. High concentrations of carbon dioxide can also be used to kill pests. Liquid carbon dioxide is used in supercritical drying of some food products and technological materials, in the preparation of specimens for scanning electron microscopy and in the decaffeination of coffee beans. Fire extinguisher Carbon dioxide can be used to extinguish flames by flooding the environment around the flame with the gas. It does not itself react to extinguish the flame, but starves the flame of oxygen by displacing it. Some fire extinguishers, especially those designed for electrical fires, contain liquid carbon dioxide under pressure. Carbon dioxide extinguishers work well on small flammable liquid and electrical fires, but not on ordinary combustible fires, because they do not cool the burning substances significantly, and when the carbon dioxide disperses, they can catch fire upon exposure to atmospheric oxygen. They are mainly used in server rooms. Carbon dioxide has also been widely used as an extinguishing agent in fixed fire-protection systems for local application of specific hazards and total flooding of a protected space. International Maritime Organization standards recognize carbon-dioxide systems for fire protection of ship holds and engine rooms. Carbon-dioxide-based fire-protection systems have been linked to several deaths, because it can cause suffocation in sufficiently high concentrations. A review of CO2 systems identified 51 incidents between 1975 and the date of the report (2000), causing 72 deaths and 145 injuries. Supercritical CO2 as solvent Liquid carbon dioxide is a good solvent for many lipophilic organic compounds and is used to remove caffeine from coffee. Carbon dioxide has attracted attention in the pharmaceutical and other chemical processing industries as a less toxic alternative to more traditional solvents such as organochlorides. It is also used by some dry cleaners for this reason. 
It is used in the preparation of some aerogels because of the properties of supercritical carbon dioxide. Medical and pharmacological uses In medicine, up to 5% carbon dioxide (130 times atmospheric concentration) is added to oxygen for stimulation of breathing after apnea and to stabilize the O2/CO2 balance in blood. Carbon dioxide can be mixed with up to 50% oxygen, forming an inhalable gas; this is known as Carbogen and has a variety of medical and research uses. Another medical use is the mofette, a dry spa that uses carbon dioxide from post-volcanic discharge for therapeutic purposes. Energy Supercritical CO2 is used as the working fluid in the Allam power cycle engine. Fossil fuel recovery Carbon dioxide is used in enhanced oil recovery, where it is injected into or adjacent to producing oil wells, usually under supercritical conditions, when it becomes miscible with the oil. This approach can increase original oil recovery by reducing residual oil saturation by 7–23% beyond primary extraction. It acts as both a pressurizing agent and, when dissolved into the underground crude oil, a solvent that significantly reduces the oil's viscosity and changes its surface chemistry, enabling the oil to flow more rapidly through the reservoir to the removal well. In mature oil fields, extensive pipe networks are used to carry the carbon dioxide to the injection points. In enhanced coal bed methane recovery, carbon dioxide would be pumped into the coal seam to displace methane, as opposed to current methods which primarily rely on the removal of water (to reduce pressure) to make the coal seam release its trapped methane. Bio transformation into fuel It has been proposed that CO2 from power generation be bubbled into ponds to stimulate growth of algae that could then be converted into biodiesel fuel. A strain of the cyanobacterium Synechococcus elongatus has been genetically engineered to produce the fuels isobutyraldehyde and isobutanol from CO2 using photosynthesis. Researchers have also developed electrolysis-based processes that use enzymes isolated from bacteria to power the chemical reactions which convert CO2 into fuels. Refrigerant Liquid and solid carbon dioxide are important refrigerants, especially in the food industry, where they are employed during the transportation and storage of ice cream and other frozen foods. Solid carbon dioxide is called "dry ice" and is used for small shipments where refrigeration equipment is not practical. At regular atmospheric pressure, solid carbon dioxide is always below about −78.5 °C, regardless of the air temperature. Liquid carbon dioxide (industry nomenclature R744 or R-744) was used as a refrigerant prior to the use of dichlorodifluoromethane (R12, a chlorofluorocarbon (CFC) compound). CO2 might enjoy a renaissance because one of the main substitutes for CFCs, 1,1,1,2-tetrafluoroethane (R134a, a hydrofluorocarbon (HFC) compound), contributes to climate change more than CO2 does. The physical properties of CO2 are highly favorable for cooling, refrigeration, and heating purposes, as it has a high volumetric cooling capacity. Due to the need to operate at high pressures, CO2 systems require highly mechanically resistant reservoirs and components, which have already been developed for mass production in many sectors. In automobile air conditioning, in more than 90% of all driving conditions for latitudes higher than 50°, CO2 (R744) operates more efficiently than systems using HFCs (e.g., R134a).
Its environmental advantages (GWP of 1, non-ozone depleting, non-toxic, non-flammable) could make it the future working fluid to replace current HFCs in cars, supermarkets, and heat pump water heaters, among others. Coca-Cola has fielded CO2-based beverage coolers, and the U.S. Army is interested in CO2 refrigeration and heating technology. Minor uses Carbon dioxide is the lasing medium in a carbon-dioxide laser, one of the earliest types of laser. Carbon dioxide can be used as a means of controlling the pH of swimming pools by continuously adding gas to the water, thus keeping the pH from rising. Among the advantages of this is the avoidance of handling (more hazardous) acids. Similarly, it is also used in maintaining reef aquaria, where it is commonly used in calcium reactors to temporarily lower the pH of water being passed over calcium carbonate in order to allow the calcium carbonate to dissolve into the water more freely, where it is used by some corals to build their skeletons. Carbon dioxide is used as the primary coolant in the British advanced gas-cooled reactor for nuclear power generation. Carbon dioxide induction is commonly used for the euthanasia of laboratory research animals. Methods to administer CO2 include placing animals directly into a closed, prefilled chamber containing CO2, or exposure to a gradually increasing concentration of CO2. The American Veterinary Medical Association's 2020 guidelines for carbon dioxide induction state that a displacement rate of 30–70% of the chamber or cage volume per minute is optimal for the humane euthanasia of small rodents. Percentages of CO2 vary for different species, based on identified optimal percentages to minimize distress. Carbon dioxide is also used in several related cleaning and surface-preparation techniques. History of discovery Carbon dioxide was the first gas to be described as a discrete substance. In about 1640, the Flemish chemist Jan Baptist van Helmont observed that when he burned charcoal in a closed vessel, the mass of the resulting ash was much less than that of the original charcoal. His interpretation was that the rest of the charcoal had been transmuted into an invisible substance he termed a "gas" or "wild spirit" (spiritus sylvestris). The properties of carbon dioxide were further studied in the 1750s by the Scottish physician Joseph Black. He found that limestone (calcium carbonate) could be heated or treated with acids to yield a gas he called "fixed air". He observed that the fixed air was denser than air and supported neither flame nor animal life. Black also found that when fixed air was bubbled through limewater (a saturated aqueous solution of calcium hydroxide), it would precipitate calcium carbonate. He used this phenomenon to illustrate that carbon dioxide is produced by animal respiration and microbial fermentation. In 1772, English chemist Joseph Priestley published a paper entitled Impregnating Water with Fixed Air in which he described a process of dripping sulfuric acid (or oil of vitriol, as Priestley knew it) on chalk in order to produce carbon dioxide, and forcing the gas to dissolve by agitating a bowl of water in contact with the gas. Carbon dioxide was first liquefied (at elevated pressures) in 1823 by Humphry Davy and Michael Faraday.
The earliest description of solid carbon dioxide (dry ice) was given by the French inventor Adrien-Jean-Pierre Thilorier, who in 1835 opened a pressurized container of liquid carbon dioxide, only to find that the cooling produced by the rapid evaporation of the liquid yielded a "snow" of solid CO2. See also List of least carbon efficient power stations List of countries by carbon dioxide emissions References External links Current global map of carbon dioxide concentration CDC – NIOSH Pocket Guide to Chemical Hazards – Carbon Dioxide Trends in Atmospheric Carbon Dioxide (NOAA) Acid anhydrides Acidic oxides Coolants Fire suppression agents Greenhouse gases Household chemicals Inorganic solvents Laser gain media Nuclear reactor coolants Oxocarbons Propellants Refrigerants Gaseous signaling molecules E-number additives Triatomic molecules
2,663
5,920
https://en.wikipedia.org/wiki/Celtic%20languages
Celtic languages
The Celtic languages (usually , but sometimes ) are a group of related languages descended from Proto-Celtic. They form a branch of the Indo-European language family. The term "Celtic" was first used to describe this language group by Edward Lhuyd in 1707, following Paul-Yves Pezron, who made the explicit link between the Celts described by classical writers and the Welsh and Breton languages. During the 1st millennium BC, Celtic languages were spoken across much of Europe and central Anatolia. Today, they are restricted to the northwestern fringe of Europe and a few diaspora communities. There are six living languages: the four continuously living languages Breton, Irish, Scottish Gaelic and Welsh, and the two revived languages Cornish and Manx. All are minority languages in their respective countries, though there are continuing efforts at revitalisation. Welsh is an official language in Wales and Irish is an official language of Ireland and of the European Union. Welsh is the only Celtic language not classified as endangered by UNESCO. The Cornish and Manx languages went extinct in modern times. They have been the object of revivals and now each has several hundred second-language speakers. Irish, Manx and Scottish Gaelic form the Goidelic languages, while Welsh, Cornish and Breton are Brittonic. All of these are Insular Celtic languages, since Breton, the only living Celtic language spoken in continental Europe, is descended from the language of settlers from Britain. There are a number of extinct but attested continental Celtic languages, such as Celtiberian, Galatian and Gaulish. Beyond that there is no agreement on the subdivisions of the Celtic language family. They may be divided into P-Celtic and Q-Celtic. The Celtic languages have a rich literary tradition. The earliest specimens of written Celtic are Lepontic inscriptions from the 6th century BC in the Alps. Early Continental inscriptions used Italic and Paleohispanic scripts. Between the 4th and 8th centuries, Irish and Pictish were occasionally written in an original script, Ogham, but Latin script came to be used for all Celtic languages. Welsh has had a continuous literary tradition from the 6th century AD. Living languages SIL Ethnologue lists six living Celtic languages, of which four have retained a substantial number of native speakers. These are the Goidelic languages (Irish and Scottish Gaelic, both descended from Middle Irish) and the Brittonic languages (Welsh and Breton, descended from Common Brittonic). The other two, Cornish (Brittonic) and Manx (Goidelic), died out in modern times with their presumed last native speakers in 1777 and 1974 respectively. For both these languages, however, revitalisation movements have led to the adoption of these languages by adults and children and produced some native speakers. Taken together, there were roughly one million native speakers of Celtic languages as of the 2000s. In 2010, there were more than 1.4 million speakers of Celtic languages. Demographics Mixed languages Beurla Reagaird, Highland travellers' language Shelta, based largely on Irish with influence from an undocumented source (some 86,000 speakers in 2009). Classification Celtic is divided into various branches: Lepontic, the oldest attested Celtic language (from the 6th century BC). Anciently spoken in Switzerland and in Northern-Central Italy. Coins with Lepontic inscriptions have been found in Noricum and Gallia Narbonensis. 
Celtiberian, also called Eastern or Northeastern Hispano-Celtic, spoken in the ancient Iberian Peninsula, in the eastern part of Old Castile and south of Aragon. Modern provinces: Segovia, Burgos, Soria, Guadalajara, Cuenca, Zaragoza and Teruel. The relationship of Celtiberian with Gallaecian, in northwest Iberia, is uncertain. Gallaecian, also known as Western or Northwestern Hispano-Celtic, anciently spoken in the northwest of the peninsula (modern Northern Portugal, Galicia, Asturias and Cantabria). Gaulish languages, including Galatian and possibly Noric. These were once spoken in a wide arc from Belgium to Turkey. They are now all extinct. Brittonic, spoken in Great Britain and Brittany. Including the living languages Breton, Cornish, and Welsh, and the lost Cumbric and Pictish, though Pictish may be a sister language rather than a daughter of Common Brittonic. Before the arrival of Scotti on the Isle of Man in the 9th century, there may have been a Brittonic language there. The theory of a Brittonic Ivernic language predating Goidelic speech in Ireland has been suggested, but is not widely accepted. Goidelic, including the extant Irish, Manx, and Scottish Gaelic. Continental/Insular Celtic and P/Q-Celtic hypotheses Scholarly handling of Celtic languages has been contentious owing to scarceness of primary source data. Some scholars (such as Cowgill 1975; McCone 1991, 1992; and Schrijver 1995) posit that the primary distinction is between Continental Celtic and Insular Celtic, arguing that the differences between the Goidelic and Brittonic languages arose after these split off from the Continental Celtic languages. Other scholars (such as Schmidt 1988) make the primary distinction that between P-Celtic and Q-Celtic languages based on their preferential use of these respective consonants in similar words. Most of the Gallic and Brittonic languages are P-Celtic, while the Goidelic and Celtiberian languages are Q-Celtic. The P-Celtic languages (also called Gallo-Brittonic) are sometimes seen (for example by Koch 1992) as a central innovating area as opposed to the more conservative peripheral Q-Celtic languages. The Breton language is Brittonic, not Gaulish, though there may be some input from the latter, having been introduced from Southwestern regions of Britain in the post-Roman era and having evolved into Breton. In the P/Q classification schema, the first language to split off from Proto-Celtic was Gaelic. It has characteristics that some scholars see as archaic, but others see as also being in the Brittonic languages (see Schmidt). In the Insular/Continental classification schema, the split of the former into Gaelic and Brittonic is seen as being late. The distinction of Celtic into these four sub-families most likely occurred about 900 BC according to Gray and Atkinson but, because of estimation uncertainty, it could be any time between 1200 and 800 BC. However, they only considered Gaelic and Brythonic. The controversial paper by Forster and Toth included Gaulish and put the break-up much earlier at 3200 BC ± 1500 years. They support the Insular Celtic hypothesis. The early Celts were commonly associated with the archaeological Urnfield culture, the Hallstatt culture, and the La Tène culture, though the earlier assumption of association between language and culture is now considered to be less strong. There are legitimate scholarly arguments for both the Insular Celtic hypothesis and the P-/Q-Celtic hypothesis. 
Proponents of each schema dispute the accuracy and usefulness of the other's categories. However, since the 1970s the division into Insular and Continental Celtic has become the more widely held view (Cowgill 1975; McCone 1991, 1992; Schrijver 1995), but in the middle of the 1980s, the P-/Q-Celtic theory found new supporters (Lambert 1994), because of the inscription on the Larzac piece of lead (1983), the analysis of which reveals another common phonetical innovation -nm- > -nu (Gaelic ainm / Gaulish anuana, Old Welsh enuein "names"), that is less accidental than only one. The discovery of a third common innovation would allow the specialists to come to the conclusion of a Gallo-Brittonic dialect (Schmidt 1986; Fleuriot 1986). The interpretation of this and further evidence is still quite contested, and the main argument for Insular Celtic is connected with the development of verbal morphology and the syntax in Irish and British Celtic, which Schumacher regards as convincing, while he considers the P-Celtic/Q-Celtic division unimportant and treats Gallo-Brittonic as an outdated theory. Stifter affirms that the Gallo-Brittonic view is "out of favour" in the scholarly community as of 2008 and the Insular Celtic hypothesis "widely accepted". When referring only to the modern Celtic languages, since no Continental Celtic language has living descendants, "Q-Celtic" is equivalent to "Goidelic" and "P-Celtic" is equivalent to "Brittonic". How the family tree of the Celtic languages is ordered depends on which hypothesis is used: "Insular Celtic hypothesis" Proto-Celtic Continental Celtic † Celtiberian † Gallaecian † Gaulish † Insular Celtic Brittonic Goidelic "P/Q-Celtic hypothesis" Proto-Celtic Q-Celtic Celtiberian † Gallaecian † Goidelic P-Celtic Gaulish † Brittonic Eska (2010) Eska evaluates the evidence as supporting the following tree, based on shared innovations, though it is not always clear that the innovations are not areal features. It seems likely that Celtiberian split off before Cisalpine Celtic, but the evidence for this is not robust. On the other hand, the unity of Gaulish, Goidelic, and Brittonic is reasonably secure. Schumacher (2004, p. 86) had already cautiously considered this grouping to be likely genetic, based, among others, on the shared reformation of the sentence-initial, fully inflecting relative pronoun *i̯os, *i̯ā, *i̯od into an uninflected enclitic particle. Eska sees Cisalpine Gaulish as more akin to Lepontic than to Transalpine Gaulish. Celtic Celtiberian Gallaecian Nuclear Celtic? Cisalpine Celtic: Lepontic → Cisalpine Gaulish† Transalpine–Goidelic–Brittonic (secure) Transalpine Gaulish† ("Transalpine Celtic") Insular Celtic Goidelic Brittonic Eska considers a division of Transalpine–Goidelic–Brittonic into Transalpine and Insular Celtic to be most probable because of the greater number of innovations in Insular Celtic than in P-Celtic, and because the Insular Celtic languages were probably not in great enough contact for those innovations to spread as part of a sprachbund. However, if they have another explanation (such as an SOV substratum language), then it is possible that P-Celtic is a valid clade, and the top branching would be: Transalpine–Goidelic–Brittonic (P-Celtic hypothesis) Goidelic Gallo-Brittonic Transalpine Gaulish ("Transalpine Celtic") Brittonic Italo-Celtic Within the Indo-European family, the Celtic languages have sometimes been placed with the Italic languages in a common Italo-Celtic subfamily. 
This hypothesis fell somewhat out of favour after reexamination by American linguist Calvert Watkins in 1966. Irrespectively, some scholars such as Ringe, Warnow and Taylor have argued in favour of an Italo-Celtic grouping in 21st century theses. Characteristics Although there are many differences between the individual Celtic languages, they do show many family resemblances. consonant mutations (Insular Celtic only) inflected prepositions (Insular Celtic only) two grammatical genders (modern Insular Celtic only; Old Irish and the Continental languages had three genders, although Gaulish may have merged the neuter and masculine in its later forms) a vigesimal number system (counting by twenties) Cornish "fifty-six" (literally "sixteen and two twenty") verb–subject–object (VSO) word order (probably Insular Celtic only) an interplay between the subjunctive, future, imperfect, and habitual, to the point that some tenses and moods have ousted others an impersonal or autonomous verb form serving as a passive or intransitive Welsh "I teach" vs. "is taught, one teaches" Irish "I teach" vs. "is taught, one teaches" no infinitives, replaced by a quasi-nominal verb form called the verbal noun or verbnoun frequent use of vowel mutation as a morphological device, e.g. formation of plurals, verbal stems, etc. use of preverbal particles to signal either subordination or illocutionary force of the following clause mutation-distinguished subordinators/relativisers particles for negation, interrogation, and occasionally for affirmative declarations pronouns positioned between particles and verbs lack of simple verb for the imperfective "have" process, with possession conveyed by a composite structure, usually BE + preposition Cornish "I have a cat", literally "there is a cat to me" Welsh "I have a cat", literally "a cat is with me" Irish "I have a cat", literally "there is a cat at me" use of periphrastic constructions to express verbal tense, voice, or aspectual distinctions distinction by function of the two versions of BE verbs traditionally labelled substantive (or existential) and copula bifurcated demonstrative structure suffixed pronominal supplements, called confirming or supplementary pronouns use of singulars or special forms of counted nouns, and use of a singulative suffix to make singular forms from plurals, where older singulars have disappeared Examples: (Literal translation) Do not bother with son the beggar's and not will-bother son the beggar's with-you. is the genitive of . The the result of affection; the is the lenited form of . is the second person singular inflected form of the preposition . The order is verb–subject–object (VSO) in the second half. Compare this to English or French (and possibly Continental Celtic) which are normally subject–verb–object in word order. (Literally) four on fifteen and four twenties is a mutated form of , which is ("five") plus ("ten"). Likewise, is a mutated form of . The multiples of ten are . Comparison table The lexical similarity between the different Celtic languages is apparent in their core vocabulary, especially in terms of actual pronunciation. Moreover, the phonetic differences between languages are often the product of regular sound change (i.e. lenition of /b/ into /v/ or Ø). The table below has words in the modern languages that were inherited directly from Proto-Celtic, as well as a few old borrowings from Latin that made their way into all the daughter languages. 
There is often a closer match between Welsh, Breton, and Cornish on the one hand, and Irish, Scottish Gaelic and Manx on the other. For a fuller list of comparisons, see the Swadesh list for Celtic. † Borrowings from Latin. Examples Article 1 of the Universal Declaration of Human Rights: All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood. Possible members of the family Several poorly-documented languages may have been Celtic. Ancient Belgian Camunic is an extinct language spoken in the first millennium BC in the Val Camonica and Valtellina valleys of the Central Alps. It has recently been proposed to be a Celtic language. Ivernic Ligurian, in the Northern Mediterranean Coast straddling the southeast French and northwest Italian coasts, including parts of Tuscany, Elba island and Corsica. Xavier Delamarre argues that Ligurian was a Celtic language, similar to Gaulish. The Ligurian-Celtic question is also discussed by Barruol (1999). Ancient Ligurian is either listed as Celtic (epigraphic), or Para-Celtic (onomastic). Lusitanian, spoken in the area between the Douro and Tagus rivers of western Iberia (a region straddling the present border of Portugal and Spain). Known from only five inscriptions and various place names. It is an Indo-European language and some scholars have proposed that it may be a para-Celtic language, which evolved alongside Celtic or formed a dialect continuum or sprachbund with Tartessian and Gallaecian. This is tied to a theory of an Iberian origin for the Celtic languages. It is also possible that the Q-Celtic languages alone, including Goidelic, originated in western Iberia (a theory that was first put forward by Edward Lhuyd in 1707) or shared a common linguistic ancestor with Lusitanian. Secondary evidence for this hypothesis has been found in research by biological scientists, who have identified (1) deep-rooted similarities in human DNA found precisely in both the former Lusitania and Ireland, and; (2) the so-called "Lusitanian distribution" of animals and plants unique to western Iberia and Ireland. Both phenomena are now generally thought to have resulted from human emigration from Iberia to Ireland, in the late Paleolithic or early Mesolithic eras. Other scholars see greater linguistic affinities between Lusitanian, proto-Gallo-Italic (particularly with Ligurian) and Old European. Prominent modern linguists such as Ellis Evans, believe Gallaecian-Lusitanian was in fact one same language (not separate languages) of the "P" Celtic variant. Rhaetic, spoken in central Switzerland, Tyrol in Austria, and the Alpine regions of northeast Italy. Documented by a limited number of short inscriptions (found through Northern Italy and Western Austria) in two variants of the Etruscan alphabet. Its linguistic categorization is not clearly established, and it presents a confusing mixture of what appear to be Etruscan, Indo-European, and uncertain other elements. Howard Hayes Scullard argues that Rhaetian was also a Celtic language. Tartessian, spoken in the southwest of the Iberia Peninsula (mainly southern Portugal and southwest Spain). Tartessian is known by 95 inscriptions, with the longest having 82 readable signs. John T. Koch argues that Tartessian was also a Celtic language. 
See also Ogham Celts Celts (modern) A Swadesh list of the modern Celtic languages Celtic Congress Celtic League Continental Celtic languages Italo-Celtic Language family Notes References Ball, Martin J. & James Fife (ed.) (1993). The Celtic Languages. London: Routledge. . Borsley, Robert D. & Ian Roberts (ed.) (1996). The Syntax of the Celtic Languages: A Comparative Perspective. Cambridge: Cambridge University Press. . Celtic Linguistics, 1700–1850 (2000). London; New York: Routledge. 8 vols comprising 15 texts originally published between 1706 and 1844. Lewis, Henry & Holger Pedersen (1989). A Concise Comparative Celtic Grammar. Göttingen: Vandenhoeck & Ruprecht. . Further reading . . External links Aberdeen University Celtic Department "Labara: An Introduction to the Celtic Languages", by Meredith Richard Celts and Celtic Languages (PDF)
2,672
5,926
https://en.wikipedia.org/wiki/Computation
Computation
Computation is any type of arithmetic or non-arithmetic calculation that follows a well-defined model (e.g., an algorithm). Mechanical or electronic devices (or, historically, people) that perform computations are known as computers. An especially well-known discipline of the study of computation is computer science. Physical process of computation Computation can be seen as a purely physical process occurring inside a closed physical system called a computer. Examples of such physical systems are digital computers, mechanical computers, quantum computers, DNA computers, molecular computers, microfluidics-based computers, analog computers, and wetware computers. This point of view has been adopted by the physics of computation, a branch of theoretical physics, as well as the field of natural computing. An even more radical point of view, pancomputationalism, is the postulate of digital physics that argues that the evolution of the universe is itself a computation. The mapping account The classic account of computation is found throughout the works of Hilary Putnam and others. Peter Godfrey-Smith has dubbed this the "simple mapping account." Gualtiero Piccinini's summary of this account states that a physical system can be said to perform a specific computation when there is a mapping between the state of that system and the computation such that the "microphysical states [of the system] mirror the state transitions between the computational states." The semantic account Philosophers such as Jerry Fodor have suggested various accounts of computation with the restriction that semantic content be a necessary condition for computation (that is, what differentiates an arbitrary physical system from a computing system is that the operands of the computation represent something). This notion attempts to prevent the logical abstraction of the mapping account of pancomputationalism, the idea that everything can be said to be computing everything. The mechanistic account Gualtiero Piccinini proposes an account of computation based on mechanical philosophy. It states that physical computing systems are types of mechanisms that, by design, perform physical computation, or the manipulation (by a functional mechanism) of a "medium-independent" vehicle according to a rule. "Medium-independence" requires that the property can be instantiated by multiple realizers and multiple mechanisms, and that the inputs and outputs of the mechanism also be multiply realizable. In short, medium-independence allows for the use of physical variables with properties other than voltage (as in typical digital computers); this is imperative in considering other types of computation, such as that which occurs in the brain or in a quantum computer. A rule, in this sense, provides a mapping among inputs, outputs, and internal states of the physical computing system. Mathematical models In the theory of computation, a diversity of mathematical models of computation has been developed. Typical mathematical models of computers are the following: State models including Turing machine, pushdown automaton, finite state automaton, and PRAM Functional models including lambda calculus Logical models including logic programming Concurrent models including actor model and process calculi Giunti calls the models studied by computation theory computational systems, and he argues that all of them are mathematical dynamical systems with discrete time and discrete state space. 
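To make the idea of a discrete-time, discrete-state dynamical system concrete, the sketch below implements a small deterministic finite automaton, one of the state models listed above, as a transition rule iterated over an input string. The particular states and alphabet are arbitrary illustrative choices, not drawn from any specific source.

```python
# A minimal deterministic finite automaton viewed as a discrete dynamical
# system: a finite state space and a transition rule applied once per input
# symbol. The states and alphabet here are illustrative choices only.
START = "even"
ACCEPTING = {"even"}

def step(state: str, symbol: str) -> str:
    """Transition rule: reading '1' toggles parity, '0' leaves it unchanged."""
    if symbol == "1":
        return "odd" if state == "even" else "even"
    return state

def accepts(word: str) -> bool:
    """Iterate the transition rule over the word and test the final state."""
    state = START
    for symbol in word:
        state = step(state, symbol)
    return state in ACCEPTING

print(accepts("1001"))  # True: the word contains an even number of 1s
print(accepts("1101"))  # False: the word contains an odd number of 1s
```

Iterating the rule one symbol at a time is exactly the discrete-time evolution described above; richer state models such as Turing machines differ mainly in adding an unbounded storage component.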
He maintains that a computational system is a complex object which consists of three parts: first, a mathematical dynamical system with discrete time and discrete state space; second, a computational setup, which is made up of a theoretical part and a real part; third, an interpretation, which links the dynamical system with the setup. See also Computationalism Real computation Reversible computation Hypercomputation Lateral computing Computational problem Multiple realizability Limits of computation References Theoretical computer science Computability theory
2,674
5,928
https://en.wikipedia.org/wiki/Clown
Clown
A clown is a person who performs comedy and arts in a state of open-mindedness using physical comedy, typically while wearing distinct makeup or costuming and reversing folkway norms. History The most ancient clowns have been found in the Fifth Dynasty of Egypt, around 2400 BC. Unlike court jesters, clowns have traditionally served a socio-religious and psychological role, and traditionally the roles of priest and clown have been held by the same persons. Peter Berger writes, "It seems plausible that folly and fools, like religion and magic, meet some deeply rooted needs in human society." For this reason, clowning is often considered an important part of training as a physical performance discipline, partly because tricky subject matter can be dealt with, but also because it requires a high level of risk and play in the performer. In anthropology, the term clown has been extended to comparable jester or fool characters in non-Western cultures. A society in which such clowns have an important position is termed a clown society, and a clown character involved in a religious or ritual capacity is known as a ritual clown. A Heyoka is an individual in Lakota and Dakota cultures who lives outside the constraints of normal cultural roles, playing the role of a backwards clown by doing everything in reverse. The Heyoka role is sometimes best filled by a Winkte. Many native tribes have a history of clowning. The Canadian clowning method developed by Richard Pochinko and furthered by his former apprentice, Sue Morrison, combines European and Native American clowning techniques. In this tradition, masks are made of clay while the creator's eyes are closed. A mask is made for each direction of the medicine wheel. During this process, the clown creates a personal mythology that explores their personal experiences. Modern clowns are strongly associated with the tradition of the circus clown, which developed out of earlier comedic roles in theatre or Varieté shows during the 19th to mid 20th centuries. This recognizable character features outlandish costumes, distinctive makeup, colorful wigs, exaggerated footwear, and colorful clothing, with the style generally being designed to entertain large audiences. The first mainstream clown role was portrayed by Joseph Grimaldi (who also created the traditional whiteface make-up design). In the early 1800s, he expanded the role of Clown in the harlequinade that formed part of British pantomimes, notably at the Theatre Royal, Drury Lane and the Sadler's Wells and Covent Garden theatres. He became so dominant on the London comic stage that harlequinade Clowns became known as "Joey", and both the nickname and Grimaldi's whiteface make-up design are still used by other clowns. The comedy that clowns perform is usually in the role of a fool whose everyday actions and tasks become extraordinary, and for whom the ridiculous, for a short while, becomes ordinary. This style of comedy has a long history in many countries and cultures across the world. Some writers have argued that, due to the widespread use of such comedy and its long history, it is a need that is part of the human condition. Origin The clown character developed out of the zanni rustic fool characters of the early modern commedia dell'arte, which were themselves directly based on the rustic fool characters of ancient Greek and Roman theatre.
Rustic buffoon characters in Classical Greek theater were known as sklêro-paiktês (from paizein: to play (like a child)) or deikeliktas, besides other generic terms for rustic or peasant. In Roman theater, a term for clown was fossor, literally digger; labourer. The English word clown was first recorded c. 1560 (as clowne, cloyne) in the generic meaning rustic, boor, peasant. The origin of the word is uncertain, perhaps from a Scandinavian word cognate with clumsy. It is in this sense that Clown is used as the name of fool characters in Shakespeare's Othello and The Winter's Tale. The sense of clown as referring to a professional or habitual fool or jester developed soon after 1600, based on Elizabethan rustic fool characters such as Shakespeare's. The harlequinade developed in England in the 17th century, inspired by Arlecchino and the commedia dell'arte. It was here that Clown came into use as the given name of a stock character. Originally a foil for Harlequin's slyness and adroit nature, Clown was a buffoon or bumpkin fool who resembled less a jester than a comical idiot. He was a lower class character dressed in tattered servants' garb. The now-classical features of the clown character were developed in the early 1800s by Joseph Grimaldi, who played Clown in Charles Dibdin's 1800 pantomime Peter Wilkins: or Harlequin in the Flying World at Sadler's Wells Theatre, where Grimaldi built the character up into the central figure of the harlequinade. Modern circuses The circus clown developed in the 19th century. The modern circus derives from Philip Astley's London riding school, which opened in 1768. Astley added a clown to his shows to amuse the spectators between equestrian sequences. American comedian George L. Fox became known for his clown role, directly inspired by Grimaldi, in the 1860s. Tom Belling senior (1843–1900) developed the red clown or Auguste (Dummer August) character c. 1870, acting as a foil for the more sophisticated white clown. Belling worked for Circus Renz in Vienna. Belling's costume became the template for the modern stock character of circus or children's clown, based on a lower class or hobo character, with red nose, white makeup around the eyes and mouth, and oversized clothes and shoes. The clown character as developed by the late 19th century is reflected in Ruggero Leoncavallo's 1892 opera Pagliacci (Clowns). Belling's Auguste character was further popularized by Nicolai Poliakoff's Coco in the 1920s to 1930s. The English word clown was borrowed, along with the circus clown act, by many other languages, such as French clown, Russian (and other Slavic languages) кло́ун, Greek κλόουν, Danish/Norwegian klovn, Romanian clovn etc. Italian retains Pagliaccio, a Commedia dell'arte zanni character, and derivations of the Italian term are found in other Romance languages, such as French Paillasse, Spanish payaso, Catalan/Galician pallasso, Portuguese palhaço, Greek παλιάτσος, Turkish palyaço, German Pajass (via French) Yiddish פּאַיאַץ (payats), Russian пая́ц, Romanian paiață. 20th-century North America In the early 20th century, with the disappearance of the rustic simpleton or village idiot character of everyday experience, North American circuses developed characters such as the tramp or hobo. Examples include Marceline Orbes, who performed at the Hippodrome Theater (1905), Charlie Chaplin's The Tramp (1914), and Emmett Kelly's Weary Willie based on hobos of the Depression era. 
Another influential tramp character was played by Otto Griebling during the 1930s to 1950s. Red Skelton's Dodo the Clown in The Clown (1953), depicts the circus clown as a tragicomic stock character, "a funny man with a drinking problem". In the United States, Bozo the Clown was an influential Auguste character since the late 1950s. The Bozo Show premiered in 1960 and appeared nationally on cable television in 1978. McDonald's derived its mascot clown, Ronald McDonald, from the Bozo character in the 1960s. Willard Scott, who had played Bozo during 1959–1962, performed as the mascot in 1963 television spots. The McDonald's trademark application for the character dates to 1967. Based on the Bozo template, the US custom of birthday clown, private contractors who offer to perform as clowns at children's parties, developed in the 1960s to 1970s. The strong association of the (Bozo-derived) clown character with children's entertainment as it has developed since the 1960s also gave rise to Clown Care or hospital clowning in children's hospitals by the mid-1980s. Clowns of America International (established 1984) and World Clown Association (established 1987) are associations of semi-professionals and professional performers. The shift of the Auguste or red clown character from his role as a foil for the white in circus or pantomime shows to a Bozo-derived standalone character in children's entertainment by the 1980s also gave rise to the evil clown character, with the attraction of clowns for small children being based in their fundamentally threatening or frightening nature. The fear of clowns, particularly circus clowns, has become known by the term "coulrophobia." Types There are different types of clowns portrayed around the world. They include Auguste Blackface Buffoon Harlequin Jester Mime artist Pierrot Pueblo Rodeo clown Tramp Whiteface Circus Pierrot and Harlequin The classical pairing of the White Clown with Auguste in modern tradition has a precedent in the pairing of Pierrot and Harlequin in the Commedia dell'arte. Originally, Harlequin's role was that of a light-hearted, nimble and astute servant, paired with the sterner and melancholic Pierrot. In the 18th-century English Harlequinade, Harlequin was now paired with Clown. As developed by Joseph Grimaldi around 1800, Clown became the mischievous and brutish foil for the more sophisticated Harlequin, who became more of a romantic character. The most influential such pair in Victorian England were the Payne Brothers, active during the 1860s and 1870s. White and Auguste The white clown, or clown blanc in French, is a sophisticated character, as opposed to the clumsy Auguste. The two types are also distinguished as the sad clown (blanc) and happy clown (Auguste). The Auguste face base makeup color is a variation of pink, red, or tan rather than white. Features are exaggerated in size, and are typically red and black in color. The mouth is thickly outlined with white (called the muzzle) as are the eyes. Appropriate to the character, the Auguste can be dressed in either well-fitted garb or a costume that does not fit – oversize or too small, either is appropriate. Bold colors, large prints or patterns, and suspenders often characterize Auguste costumes. The Auguste character-type is often an anarchist, a joker, or a fool. He is clever and has much lower status than the whiteface. Classically the whiteface character instructs the Auguste character to perform his bidding. 
The Auguste has a hard time performing a given task, which leads to funny situations. Sometimes the Auguste plays the role of an anarchist and purposefully has trouble following the whiteface's directions. Sometimes the Auguste is confused or is foolish and makes errors less deliberately. The contra-auguste plays the role of the mediator between the white clown and the Auguste character. He has a lower status than the white clown but a higher status than the Auguste. He aspires to be more like the white clown and often mimics everything the white clown does to try to gain approval. If there is a contra-auguste character, he often is instructed by the whiteface to correct the Auguste when he is doing something wrong. There are two major types of clowns with whiteface makeup: The classic white clown is derived from the Pierrot character. His makeup is white, usually with facial features such as eyebrows emphasized in black. He is the more intelligent and sophisticated clown, contrasting with the rude or grotesque Auguste types. Francesco Caroli and Glenn "Frosty" Little are examples of this type. The second type of whiteface is the buffoonish clown of the Bozo type, known as Comedy or Grotesque Whiteface. This type has grotesquely emphasized features, especially a red nose and red mouth, often with partial (mostly red) hair. In the comedic partnership of Abbott and Costello, Bud Abbot would have been the classic whiteface and Lou Costello the comedy whiteface or Auguste. Traditionally, the whiteface clown uses clown white makeup to cover the entire face and neck, leaving none of the underlying natural skin visible. In the European whiteface makeup, the ears are painted red. Whiteface makeup was originally designed by Joseph Grimaldi in 1801. He began by painting a white base over his face, neck and chest before adding red triangles on the cheeks, thick eyebrows and large red lips set in a mischievous grin. Grimaldi's design is used by many modern clowns. According to Grimaldi's biographer Andrew McConnell Stott, it was one of the most important theatrical designs of the 1800s. America's first great whiteface clown was stage star George "G.L." Fox. Inspired by Grimaldi, Fox popularised the Humpty Dumpty stories throughout the U.S. in the 1860s. Scary and Evil The scary clown, also known as the evil clown or killer clown, is a subversion of the traditional comic clown character, in which the playful trope is instead depicted in a more disturbing nature through the use of horror elements and dark humor. The character can be seen as playing on the sense of unease felt by those with coulrophobia, the fear of clowns. The modern archetype of the evil clown was popularized by DC Comics character the Joker starting in 1940 and again by Pennywise in Stephen King's novel IT, which introduced the fear of an evil clown to a modern audience. In the novel, the eponymous character is a pan-dimensional monster which feeds mainly on children by luring them in the form of a clown, named "Pennywise", and then assuming the shape of whatever the victim fears the most. Character The character clown adopts an eccentric character of some type, such as a butcher, a baker, a policeman, a housewife or hobo. Prime examples of this type of clown are the circus tramps Otto Griebling and Emmett Kelly. Red Skelton, Harold Lloyd, Buster Keaton, Charlie Chaplin, Rowan Atkinson and Sacha Baron Cohen would all fit the definition of a character clown. The character clown makeup is a comic slant on the standard human face. 
Their makeup starts with a flesh tone base and may make use of anything from glasses, mustaches and beards to freckles, warts, big ears or strange haircuts. The most prevalent character clown in the American circus is the hobo, tramp or bum clown. There are subtle differences in the American character clown types. The primary differences among these clown types is attitude. According to American circus expert Hovey Burgess, they are: The Hobo: Migratory and finds work where he travels. Down on his luck but maintains a positive attitude. The Tramp: Migratory and does not work where he travels. Down on his luck and depressed about his situation. The Bum: Non-migratory and non-working. Organizations The World Clown Association is a worldwide organization for clowns, jugglers, magicians, and face painters. It holds an annual convention, mainly in the United States. Clowns of America International is a Minnesota-based non-profit clown arts membership organization which aims "to share, educate, and act as a gathering place for serious minded amateurs, semiprofessionals, and professional clowns". Clowns International is a British clowning organisation dating back to the 1940s. It is responsible for the Clown Egg Register. Terminology Roles and skills In the circus, a clown might perform other circus roles or skills. Clowns may perform such skills as tightrope, juggling, unicycling, Master of Ceremonies, or ride an animal. Clowns may also "sit in" with the orchestra. Other circus performers may also temporarily stand in for a clown and perform their skills in clown costume. Frameworks Frameworks are the general outline of an act that clowns use to help them build out an act. Frameworks can be loose, including only a general beginning and ending to the act, leaving it up to the clown's creativity to fill in the rest, or at the other extreme a fully developed script that allows very little room for creativity. Shows are the overall production that a clown is a part of, it may or may not include elements other than clowning, such as in a circus show. In a circus context, clown shows are typically made up of some combination of entrées, side dishes, clown stops, track gags, gags and bits. Gags, bits and business Business – the individual motions the clown uses, often used to express the clown's character. Gag – very short piece of clown comedy that, when repeated within a bit or routine, may become a running gag. Gags are, loosely, the jokes clowns play on each other. A gag may have a beginning, a middle, and an end – or may not. Gags can also refer to the prop stunts/tricks or the stunts that clowns use, such as a squirting flower. Bit – the clown's sketch or routine, made up of one or more gags either worked out and timed before going on stage, or impromptu bits composed of familiar improvisational material Menu Entrée — clowning acts lasting 5–10 minutes. Typically made up of various gags and bits, usually within a clowning framework. Entrées almost always end with a blow-off — the comedic ending of a show segment, bit, gag, stunt, or routine. Side dish — shorter feature act. Side dishes are essentially shorter versions of the entrée, typically lasting 1–3 minutes. Typically made up of various gags and bits, side dishes are usually within a clowning framework. Side dishes almost always end with a blow-off. Interludes Clown Stops or interludes are the brief appearances of clowns in a circus while the props and rigging are changed. These are typically made up of a few gags or several bits. 
Clown stops will always have a beginning, a middle, and an end to them, invariably culminating in a blow-off. These are also called reprises or run-ins by many, and in today's circus they are an art form in themselves. Originally they were bits of business usually parodying the preceding act. If, for instance, there had been a tightrope walker, the reprise would involve two chairs with a piece of rope stretched between them and the clown imitating the artiste by trying to walk across it, with the resulting falls and cascades bringing laughter from the audience. Today, interludes are far more complex, and in many modern shows the clowning is a thread that links the whole show together.
Prop stunts
Among the more well-known clown stunts are the squirting flower; the too-many-clowns-coming-out-of-a-tiny-car stunt; doing just about anything with a rubber chicken; tripping over one's own feet (or an air pocket or imaginary blemish in the floor); and riding any number of ridiculous vehicles or clown bicycles. Individual prop stunts are generally considered individual bits.
See also
List of clowns
Bouffon
Clown car
External links
Quotes by and about Clowns
Collection: "Clowns" from the University of Michigan Museum of Art
2,675
5,933
https://en.wikipedia.org/wiki/CSS%20Virginia
CSS Virginia
CSS Virginia was the first steam-powered ironclad warship built by the Confederate States Navy during the first year of the American Civil War; she was constructed as a casemate ironclad using the razéed (cut down) original lower hull and engines of the scuttled steam frigate USS Merrimack. Virginia was one of the participants in the Battle of Hampton Roads, opposing the Union's USS Monitor in March 1862. The battle is chiefly significant in naval history as the first battle between ironclads.
USS Merrimack becomes CSS Virginia
When the Commonwealth of Virginia seceded from the Union in 1861, one of the important federal military bases threatened was Gosport Navy Yard (now Norfolk Naval Shipyard) in Portsmouth, Virginia. Accordingly, orders were sent to destroy the base rather than allow it to fall into Confederate hands. On the afternoon of 17 April, the day Virginia seceded, Engineer in Chief B. F. Isherwood managed to get the frigate's engines lit. However, the previous night secessionists had sunk light boats between Craney Island and Sewell's Point, blocking the channel. On 20 April, before evacuating the Navy Yard, the U. S. Navy burned Merrimack to the waterline and sank her to preclude capture. When the Confederate government took possession of the fully provisioned yard, the base's new commander, Flag Officer French Forrest, contracted on May 18 to salvage the wreck of the frigate. This was completed by May 30, and she was towed into the shipyard's only dry dock (today known as Drydock Number One), where the burned structures were removed. The wreck was surveyed and her lower hull and machinery were discovered to be undamaged. Stephen Mallory, the Confederate Secretary of the Navy, decided to convert Merrimack into an ironclad, since she was the only large ship with intact engines available in the Chesapeake Bay area. Preliminary sketch designs were submitted by Lieutenants John Mercer Brooke and John L. Porter, each of whom envisaged the ship as a casemate ironclad. Brooke's general design showed the bow and stern portions submerged, and his design was the one finally selected. The detailed design work would be completed by Porter, who was a trained naval constructor. Porter had overall responsibility for the conversion, but Brooke was responsible for her iron plate and heavy ordnance, while William P. Williamson, Chief Engineer of the Navy, was responsible for the ship's machinery.
Reconstruction as an ironclad
The hull's burned timbers were cut down past the vessel's original waterline, leaving just enough clearance to accommodate her large, twin-bladed screw propeller. A new fantail and armored casemate were built atop a new main deck, and a V-shaped bulwark was added to her bow, attached to the armored casemate. The forward and aft main deck and fantail were designed to stay submerged and were covered in iron plate, built up in two layers. The casemate was built of oak and pine in several layers, topped with two layers of iron plating oriented perpendicular to each other, and angled at 36 degrees from horizontal to deflect fired enemy shells. From reports in Northern newspapers, Virginia's designers were aware of the Union plans to build an ironclad and assumed their similar ordnance would be unable to do much serious damage to such a ship. It was decided to equip their ironclad with a ram, an anachronism on a 19th-century warship. Merrimack's steam engines, now part of Virginia, were in poor working order; they had been slated for replacement when the decision was made to abandon the Norfolk naval yard.
The salty Elizabeth River water and the addition of tons of iron armor and pig iron ballast, added to the hull's unused spaces for needed stability after her initial refloat, and to submerge her unarmored lower levels, only added to her engines' propulsion issues. As completed, Virginia had a turning radius of about and required 45 minutes to complete a full circle, which would later prove to be a major handicap in battle with the far more nimble Monitor. The ironclad's casemate had 14 gun ports, three each in the bow and stern, one firing directly along the ship's centerline, the two others angled at 45° from the center line; these six bow and stern gun ports had exterior iron shutters installed to protect their cannon. There were four gun ports on each broadside; their protective iron shutters remained uninstalled during both days of the Battle of Hampton Roads. Virginia's battery consisted of four muzzle-loading single-banded Brooke rifles and six smoothbore Dahlgren guns salvaged from the old Merrimack. Two of the rifles, the bow and stern pivot guns, were caliber and weighed each. They fired a shell. The other two were cannon of about , one on each broadside. The 9-inch Dahlgrens were mounted three to a side; each weighed approximately and could fire a shell up to a range of (or 1.9 miles) at an elevation of 15°. Both amidship Dahlgrens nearest the boiler furnaces were fitted out to fire heated shot. On her upper casemate deck were positioned two anti-boarding/personnel 12-pounder howitzers. Virginia's commanding officer, Flag Officer Franklin Buchanan, arrived to take command only a few days before her first sortie; the ironclad was placed in commission and equipped by her executive officer, Lieutenant Catesby ap Roger Jones.
Battle of Hampton Roads
The Battle of Hampton Roads began on March 8, 1862, when Virginia engaged the blockading Union fleet. Despite an all-out effort to complete her, the new ironclad still had workmen on board when she sailed into Hampton Roads with her flotilla of five CSN support ships: (serving as Virginia's tender) and , , , and . The first Union ship to be engaged by Virginia was the all-wood, sail-powered USS Cumberland, which was first crippled during a furious cannon exchange, and then rammed in her forward starboard bow by Virginia. As Cumberland began to sink, the port side half of Virginia's iron ram was broken off, causing a bow leak in the ironclad. Seeing what had happened to Cumberland, the captain of USS Congress ordered his frigate into shallower water, where she soon grounded. Congress and Virginia traded cannon fire for an hour, after which the badly damaged Congress finally surrendered. While the surviving crewmen of Congress were being ferried off the ship, a Union battery on the north shore opened fire on Virginia. Outraged at such a breach of war protocol, Virginia's now angry captain, Commodore Franklin Buchanan, gave the order in retaliation to open fire with hot shot on the surrendered Congress as he rushed to Virginia's exposed upper casemate deck, where he was injured by enemy rifle fire. Congress, now set ablaze by the retaliatory shelling, burned for many hours into the night, a symbol of Confederate naval power and a costly wake-up call for the all-wood Union blockading squadron. Virginia did not emerge from the battle unscathed, however.
Her hanging port side anchor was lost after ramming Cumberland; the bow was leaking from the loss of the ram's port side half; shot from Cumberland, Congress, and the shore-based Union batteries had riddled her smokestack, reducing her boilers' draft and already slow speed; two of her broadside cannon (without shutters) were put out of commission by shell hits; a number of her armor plates had been loosened; both of Virginia's cutters had been shot away, as had both 12-pounder anti-boarding/anti-personnel howitzers, most of the deck stanchions, railings, and both flagstaffs. Even so, the now-injured Buchanan ordered an attack on , which had run aground on a sandbar trying to escape Virginia. However, because of the ironclad's draft (fully loaded), she was unable to get close enough to do any significant damage. It being late in the day, Virginia retired from the conflict with the expectation of returning the next day and completing the destruction of the remaining Union blockaders. Later that night, USS Monitor arrived at Union-held Fort Monroe. She had been rushed to Hampton Roads, still not quite complete, all the way from the Brooklyn Navy Yard, in hopes of defending the force of wooden ships and preventing "the rebel monster" from further threatening the Union's blockading fleet and nearby cities, like Washington, D.C. While under tow, she nearly foundered twice during heavy storms on her voyage south, arriving in Hampton Roads by the bright firelight from the still-burning triumph of Virginia's first day of handiwork. The next day, on March 9, 1862, the world's first battle between ironclads took place. The smaller, nimbler, and faster Monitor was able to outmaneuver the larger, slower Virginia, but neither ship proved able to do any severe damage to the other, despite numerous shell hits by both combatants, many fired at virtually point-blank range. Monitor had a much lower freeboard and only its single, rotating, two-cannon gun turret and forward pilothouse sitting above her deck, and thus was much harder to hit with Virginia's heavy cannon. After hours of shell exchanges, Monitor finally retreated into shallower water after a direct shell hit to her armored pilothouse forced her away from the conflict to assess the damage. The captain of the Monitor, Lieutenant John L. Worden, had taken a direct gunpowder explosion to his face and eyes, blinding him, while looking through the pilothouse's narrow, horizontal viewing slits. Monitor remained in the shallows, but as it was late in the day, Virginia steamed for her home port, the battle ending without a clear victor. The captain of Virginia that day, Lieutenant Catesby ap Roger Jones, received advice from his pilots to depart over the sandbar toward Norfolk until the next day. Lieutenant Jones wanted to continue the fight, but the pilots emphasized that the Virginia had "nearly three miles to run to the bar" and that she could not remain and "take the ground on a falling tide." To prevent running aground, Lieutenant Jones reluctantly moved the ironclad back toward port. Virginia retired to the Gosport Naval Yard at Portsmouth, Virginia, and remained in drydock for repairs until April 4, 1862. In the following month, the crew of Virginia were unsuccessful in their attempts to break the Union blockade. The blockade had been bolstered by the hastily ram-fitted paddle steamer , and SS Illinois as well as the and , which had been repaired. Virginia made several sorties back over to Hampton Roads hoping to draw Monitor into battle.
Monitor, however, was under strict orders not to re-engage; the two combatants would never battle again. On April 11, the Confederate Navy sent Lieutenant Joseph Nicholson Barney, in command of the paddle side-wheeler , along with Virginia and five other ships in full view of the Union squadron, enticing them to fight. When it became clear that Union Navy ships were unwilling to fight, the CS Navy squadron moved in and captured three merchant ships, the brigs Marcus and Sabout and the schooner Catherine T. Dix. Their ensigns were then hoisted "Union-side down" to further taunt the Union Navy into a fight, as they were towed back to Norfolk, with the help of . By late April, the new Union ironclads USRC E. A. Stevens and had also joined the blockade. On May 8, 1862, Virginia and the James River Squadron ventured out when the Union ships began shelling the Confederate fortifications near Norfolk, but the Union ships retired under the shore batteries on the north side of the James River and on Rip Raps island.
Destruction of CSS Virginia
On May 10, 1862, advancing Union troops occupied Norfolk. Since Virginia was now a steam-powered heavy battery and no longer an ocean-going cruiser, her pilots judged her not seaworthy enough to enter the Atlantic, even if she were able to pass the Union blockade. Virginia was also unable to retreat further up the James River due to her deep draft (fully loaded). In an attempt to reduce it, supplies and coal were dumped overboard, even though this exposed the ironclad's unarmored lower hull; this was still not enough to make a difference. Without a home port and no place to go, Virginia's new captain, Flag Officer Josiah Tattnall III, reluctantly ordered her destruction in order to keep the ironclad from being captured. This task fell to Lieutenant Jones, the last man to leave Virginia after her cannons had been safely removed and carried to the Confederate Marine Corps base and fortifications at Drewry's Bluff. Early on the morning of May 11, 1862, off Craney Island, fire and powder trails reached the ironclad's magazine and she was destroyed by a great explosion. What remained of the ship settled to the bottom of the harbor. Only a few remnants of Virginia have been recovered for preservation in museums; reports from the era indicate that her wreck was heavily salvaged following the war. Monitor was lost on December 31 of the same year, when the vessel was swamped by high waves in a violent storm while under tow by the tug USS Rhode Island off Cape Hatteras, North Carolina. Sixteen of her 62-member crew were either lost overboard or went down with the ironclad, while many others were saved by lifeboats sent from Rhode Island. Subsequently, in August 1973, the wreckage was located on the floor of the Atlantic Ocean about southeast of Cape Hatteras. Her upside-down turret was raised from beneath her deep, capsized wreck years later with the remains of two of her crew still aboard; they were later buried with full military honors on March 8, 2013, at Arlington National Cemetery in Arlington, Virginia.
Historical names: Merrimack, Virginia, Merrimac
Although the Confederacy renamed the ship, she is still frequently referred to by her Union name. When she was first commissioned into the United States Navy in 1856, her name was Merrimack, with the K; the name was derived from the Merrimack River near where she was built. She was the second ship of the U. S.
Navy to be named for the Merrimack River, which is formed by the confluence of the Pemigewasset and Winnipesaukee rivers at Franklin, New Hampshire. The Merrimack flows south across New Hampshire, then eastward across northeastern Massachusetts before finally emptying into the Atlantic at Newburyport, Massachusetts. After she was raised, restored, and outfitted as an ironclad warship, the Confederacy bestowed on her the name Virginia. Nonetheless, the Union continued to refer to the Confederate ironclad by either its original name, Merrimack, or by the nickname "The Rebel Monster". In the aftermath of the Battle of Hampton Roads, the names Virginia and Merrimack were used interchangeably by both sides, as attested to by various newspapers and correspondence of the day. Navy reports and pre-1900 historians frequently misspelled the name as "Merrimac", which was actually an unrelated ship, hence "the Battle of the Monitor and the Merrimac". Both spellings are still in use in the Hampton Roads area.
Memorial, heritage
A large exhibit at the Jamestown Exposition held in 1907 at Sewell's Point was the "Battle of the Merrimac and Monitor," a large diorama that was housed in a special building. A small community in Montgomery County, Virginia, near where the coal burned by the Confederate ironclad was mined, is now known as Merrimac. The October 8, 1867, issue of the Norfolk Virginian newspaper carried a prominent classified advertisement in the paper's "Private Sales" section for the salvaged iron ram of CSS Virginia. The ad states: A RELIC OF WAR FOR SALE: The undersigned has had several offers for the IRON PROW! of the first iron-clad ever built, the celebrated Ram and Iron Clad Virginia, formerly the Merrimac. This immense RELIC WEIGHS 1,340 POUNDS, wrought iron, and as a sovereign of the war, and an object of interest as a revolution in naval warefare, would suit a Museum, State Institute, or some great public resort. Those desiring to purchase will please address D. A. UNDERDOWN, Wrecker, care of Virginian Office, Norfolk, Va. It is unclear from the above whether this was the first iron ram that broke off and lodged in the starboard bow of the sinking USS Cumberland, during the first day of the Battle of Hampton Roads, or was the second iron ram affixed to Virginia's bow at the time she was run aground and destroyed to avoid capture by Union forces; no further mention has been found concerning the final disposition of this historic artifact. Other pieces of Virginia did survive and are on display at the Mariners' Museum in Newport News and the American Civil War Museum in Richmond, where one of her anchors resides on its front lawn. In 1907, an armor plate from the ship was melted down and used in the casting of the Pokahuntas Bell for the Jamestown Exposition. Starting around 1883, numerous souvenirs, made from recently salvaged iron and wood raised from Virginia's sunken hulk, found a ready and willing market among eastern seaboard residents who remembered the historic first battle between ironclads. Various tokens, medals, medalets, sectional watch fobs, and other similar metal keepsakes are known to have been struck by private mints in limited quantities. Known examples still exist today, being held in both public and private collections, rarely coming up for public auction. Nine examples made from Virginia's iron and copper can be found cataloged in great detail, with front and back photos, in David Schenkman's 1979 numismatic booklet listed in the References section (below).
The name of the Monitor-Merrimac Memorial Bridge-Tunnel, built in Hampton Roads in the general vicinity of the famous engagement, with both Virginia and federal funds, also reflects the more recent "Merrimac" spelling.
See also
Bibliography of American Civil War naval history
References
Nelson, James L. (2004). The Reign of Iron: The Story of the First Battling Ironclads, the Monitor and the Merrimack, HarperCollins Publishers, New York.
Park, Carl D. (2007). Ironclad Down, USS Merrimack-CSS Virginia, From Construction to Destruction, Annapolis, Maryland: Naval Institute Press.
Quarstein, John V. (2000). C.S.S. Virginia, Mistress of Hampton Roads, self-published for the Virginia Civil War Battles and Leaders Series by H. E. Howard, Inc.
Schenkman, David (1979). Tokens & Medals Commemorating the Battle Between the Monitor and Merrimac (sic), Hampton, Virginia, 28-page booklet (the second in a series of Special Articles on the Numismatics of The Commonwealth of Virginia), Virginia Numismatic Association. No ISSN or ISBN.
Smith, Gene A. (1998). Iron and Heavy Guns, Duel Between the Monitor and Merrimac (sic), Abilene, Texas, McWhiney Foundation Press.
Further reading
Baxter, James Phinney (1968). The Introduction of the Ironclad Warship, Archon Books, p. 398.
Besse, Sumner B. (1978). C. S. Ironclad Virginia and U. S. Ironclad Monitor, Newport News, Virginia, The Mariner's Museum.
DeKay, James (1997). Monitor, Ballantine Books, New York, NY.
External links
Library of Virginia
Virginia Historical Society
Museum of the Confederacy in Richmond, Virginia
Website devoted to the CSS Virginia
Hampton Roads Visitor Guide
USS Monitor Center and Exhibit, Newport News, Virginia
Mariner's Museum, Newport News, Virginia
Hampton Roads Naval Museum
Civil War Naval History
Fort Wool History
Roads to the Future – I-664 Monitor-Merrimac Memorial Bridge Tunnel
2,679
5,935
https://en.wikipedia.org/wiki/The%20Church%20of%20Jesus%20Christ%20of%20Latter-day%20Saints
The Church of Jesus Christ of Latter-day Saints
The Church of Jesus Christ of Latter-day Saints, informally known as the LDS Church or Mormon Church, is a nontrinitarian Christian church that considers itself to be the restoration of the original church founded by Jesus Christ. The church is headquartered in the United States in Salt Lake City, Utah, and has established congregations and built temples worldwide. According to the church, it has over 16.8 million members and 54,539 full-time volunteer missionaries. Based on these numbers, the church is the fourth-largest Christian denomination in the United States as of 2012, and reported over 6.7 million US members. It is the largest denomination in the Latter Day Saint movement founded by Joseph Smith during the early 19th-century period of religious revival known as the Second Great Awakening. Church theology includes the Christian doctrine of salvation only through Jesus Christ, and his substitutionary atonement on behalf of mankind. The church has an open canon of four scriptural texts: the Bible, the Book of Mormon, the Doctrine and Covenants, and the Pearl of Great Price. Other than the Bible, the majority of the church canon consists of material the church's members believe to have been revealed by God to Joseph Smith, including commentary and exegesis about the Bible, texts described as lost parts of the Bible, and other works believed to be written by ancient prophets, including the Book of Mormon. Because of doctrinal differences, Catholic, Orthodox and many Protestant churches consider the church to be distinct and separate from mainstream Christianity. Latter-day Saints believe that the church president is a modern-day "prophet, seer, and revelator" and that Jesus Christ, under the direction of God the Father, leads the church by revealing his will and delegating his priesthood keys to its president. The president heads a hierarchical structure descending from areas to stakes and wards. Bishops, drawn from the laity, lead the wards. Male members may be ordained to the priesthood, provided they are living the standards of the church. Women are not ordained to the priesthood, but occupy leadership roles in some church organizations. Both men and women may serve as missionaries. The church maintains a large missionary program that proselytizes and conducts humanitarian services worldwide. The LDS Church also funds and participates in humanitarian projects independent of its missionary efforts. Faithful members adhere to church laws of sexual purity, health, fasting, and Sabbath observance, and contribute ten percent of their income to the church in tithing. The church teaches sacred ordinances through which adherents make covenants with God, including baptism, confirmation, the sacrament, priesthood ordination, endowment and celestial marriage. The church has been criticized throughout its history. Modern criticisms include disputed factual claims, treatment of minorities, and financial controversies. The church's practice of polygamy (plural marriage) was controversial until it was officially rescinded in 1890.
History
The history of the church is typically divided into three broad time periods: (1) the early history during the lifetime of Joseph Smith, which is in common with all churches associated with the Latter Day Saint movement, (2) a pioneer era under the leadership of Brigham Young and his 19th-century successors, and (3) a modern era beginning around the turn of the 20th century as Utah achieved statehood.
Beginnings
Joseph Smith formally organized the church as the Church of Christ on April 6, 1830, in western New York. Smith later changed the name to the Church of Jesus Christ of Latter Day Saints after he stated he had received a revelation to do so. Initial converts were drawn to the church in part because of the newly published Book of Mormon, a self-described chronicle of indigenous American prophets that Smith said he had translated from golden plates. Smith intended to establish the New Jerusalem in North America, called Zion. In 1831, the church moved to Kirtland, Ohio, and began establishing an outpost in Jackson County, Missouri, where Smith planned to eventually move the church headquarters. However, in 1833, Missouri settlers violently expelled the Latter Day Saints from Jackson County. The church attempted to recover the land through a paramilitary expedition, but did not succeed. Nevertheless, the church flourished in Kirtland as Smith published new revelations and the church built the Kirtland Temple, culminating in a dedication of the building similar to the day of Pentecost. The Kirtland era ended in 1838, after a financial scandal rocked the church and caused widespread defections. Smith regrouped with the remaining church in Far West, Missouri, but tensions soon escalated into violent conflicts with the old Missouri settlers. Believing the Latter Day Saints to be in insurrection, the Missouri governor ordered that they be "exterminated or driven from the State". In 1839, the Saints converted a swampland on the banks of the Mississippi River into Nauvoo, Illinois, which became the church's new headquarters. Nauvoo grew rapidly as missionaries sent to Europe and elsewhere gained new converts who then flooded into Nauvoo. Meanwhile, Smith introduced polygamy to his closest associates. He also established ceremonies, which he stated the Lord had revealed to him, to allow righteous people to become gods in the afterlife, and a secular institution to govern the Millennial kingdom. He also introduced the church to a full accounting of his First Vision, in which two heavenly "personages" appeared to him at age 14. This vision would come to be regarded by the LDS Church as the most important event in human history since the resurrection of Jesus. Members believe Joseph Smith is the first modern-day prophet. On June 27, 1844, Smith and his brother, Hyrum, were killed by a mob in Carthage, Illinois, while being held on charges of treason. Because Hyrum was Joseph's designated successor, their deaths caused a succession crisis, and Brigham Young assumed leadership over a majority of the church's membership. Young had been a close associate of Smith's and was the senior apostle of the Quorum of the Twelve. Other splinter groups followed other leaders around this time. These groups have no affiliation with the LDS Church; however, they share a common heritage in their early church history. Collectively, they are called the Latter Day Saint movement. The largest of these smaller groups is the Community of Christ, based in Independence, Missouri, followed by the Church of Jesus Christ, based in Monongahela, Pennsylvania. Like the LDS Church, these faiths believe in Joseph Smith as a prophet and founder of their religion. They also accept the Book of Mormon, and most, but not all, accept at least some version of the Doctrine and Covenants. However, they tend to disagree to varying degrees with the LDS Church concerning doctrine and church leadership.
Pioneer era For two years after Smith's death, conflicts escalated between Mormons and other Illinois residents. Brigham Young led his followers, later called the Mormon pioneers, westward to Nebraska and then in 1847 on to what later became the Utah Territory, which at the time had been part of the indigenous lands of the Ute, Goshute, and Shoshone nations, and claimed by Mexico until 1848. Over the course of many years, over 60,000 settlers arrived, who then branched out and colonized a large region now known as the Mormon Corridor. Young incorporated the LDS Church as a legal entity, and initially governed both the church and the state as a theocratic leader. He also publicized the practice of plural marriage in 1852. Modern research suggests that around 20 percent of Mormon families may have participated in the practice. By 1857, tensions had again escalated between Mormons and other Americans, largely as a result of accusations involving polygamy and the theocratic rule of the Utah Territory by Young. The Utah Mormon War ensued from 1857 to 1858, which resulted in the relatively peaceful invasion of Utah by the United States Army. The most notable instance of violence during this conflict was the Mountain Meadows massacre, in which leaders of a local Mormon militia ordered the massacre of a civilian emigrant party who was traveling through Utah during the escalating military tensions. After the massacre was discovered, the church became the target of significant media criticism for it. After the Army withdrew, Young agreed to step down from power and be replaced by a non-Mormon territorial governor, Alfred Cumming. Nevertheless, the LDS Church still wielded significant political power in the Utah Territory. Coterminously, tensions between Mormon settlers and indigenous tribes continued to escalate as settlers began colonizing a growing area of tribal lands. While Mormons and indigenous peoples made attempts at peaceful coexistence, skirmishes ensued from about 1849 to 1873 culminating in the armed conflicts of Walkara's War, the Bear River Massacre, and the Black Hawk War. After Young's death in 1877, he was followed by other church presidents, who resisted efforts by the United States Congress to outlaw Mormon polygamous marriages. In 1878, the United States Supreme Court, in Reynolds v. United States, decreed that "religious duty" to engage in plural marriage was not a valid defense to prosecutions for violating state laws against polygamy. Conflict between Mormons and the U.S. government escalated to the point that, in 1890, Congress disincorporated the LDS Church and seized most of its assets. Soon thereafter, church president Wilford Woodruff issued a manifesto that officially suspended the performance of new polygamous marriages in the United States. Relations with the United States markedly improved after 1890, such that Utah was admitted as a U.S. state in 1896. Relations further improved after 1904, when church president Joseph F. Smith again disavowed polygamy before the United States Congress and issued a "Second Manifesto", calling for all plural marriages in the church to cease. Eventually, the church adopted a policy of excommunicating its members found practicing polygamy. Some fundamentalist groups with relatively small memberships have broken off and continue to practice polygamy, but the Church distances itself from them. Modern times During the 20th century, the church grew substantially and became an international organization. 
In 2000, the church reported 60,784 missionaries and global church membership stood at just over 11 million. Nominal worldwide membership surpassed 16 million in 2018. Slightly under half of church membership lives within the United States. The church has become a strong proponent of the nuclear family and at times played a prominent role in political matters, including opposition to MX Peacekeeper missile bases in Utah and Nevada, the Equal Rights Amendment, legalized gambling, same-sex marriage, and physician-assisted death. Apart from issues that it considers to be ones of morality, however, the church maintains a position of political neutrality. Despite this it encourages its members to be politically active, to participate in elections, and to be knowledgeable about current political and social issues within their communities, states, and countries. A number of official changes have taken place to the organization during the modern era. In 1978, the church reversed its previous policy of excluding black men of African descent from the priesthood, which had been in place since 1852; members of all races can now be ordained to the priesthood. Also, since the early 1900s, the church has instituted a Priesthood Correlation Program to centralize church operations and bring them under a hierarchy of priesthood leaders. During the Great Depression, the church also began operating a church welfare system, and it has conducted humanitarian efforts in cooperation with other religious organizations including Catholic Relief Services and Muslim Aid, as well as secular organizations such as the American Red Cross. During the second half of the 20th century and beginnings of the 21st, the church has responded to various challenges to its doctrine and authority. Challenges have included rising secularization, challenges to the correctness of the translation of the Book of Abraham, and primary documents forged by Mark Hofmann purporting to contradict important aspects of official early church history. The church's positions regarding homosexuality, women, and black people have all been publicly debated during this timeframe. For over 100 years, the church was a major sponsor of Scouting programs for boys, particularly in the United States. The LDS Church was the largest chartered organization in the Boy Scouts of America, having joined the Boy Scouts of America as its first charter organization in 1913. In 2020, the church ended its relationship with the BSA and began an alternate, religion-centered youth program, which replaced all other youth programs. Prior to leaving the Scouting program, LDS Scouts made up nearly 20 percent of all enrolled Boy Scouts, more than any other church. Beliefs Nature of God LDS Church theology includes the belief in a Godhead composed of God the Father, his son, Jesus Christ, and the Holy Ghost as three separate Persons who share a unity of purpose or will; however, they are viewed as three distinct Beings making one Godhead. This is in contrast with the predominant Christian view, which holds that God is a Trinity of three distinct persons in one essence. The Latter-day Saint conception of the Godhead is similar to what contemporary Christian theologians call social trinitarianism. The beliefs of the church also include the belief that God the Father and his son, Jesus Christ, are separate beings with bodies of flesh and bone, while the Holy Ghost lacks such a physical body. 
According to statements by church leaders, God sits at the head of the human family and is married to a Heavenly Mother, who is the mother of human spirits. However, church leaders have also categorically discouraged prayers to her and counseled against "speculation" regarding her.
Jesus Christ
Church members believe in Jesus Christ as the literal Son of God and Messiah, in his crucifixion as the conclusion of a sin offering, and in his subsequent resurrection. However, Latter-day Saints reject the ecumenical creeds and the definition of the Trinity. Jesus is also seen as the elder brother of all who live in this world. The church teaches that Jesus performed a substitutionary atonement; in contrast with other Christian denominations, the church teaches this atonement began in the garden of Gethsemane and continued to his crucifixion (rather than the orthodox belief that the crucifixion alone was the physical atonement). The church also teaches that Christ appeared to other peoples in the period of time between his death and resurrection, including spirits of the dead in the spirit world, and his followers in the Americas. The church also teaches that Jesus is the true founder and leader of the church itself. The physical establishment of the church by Smith in 1830 is seen as simply the reestablishment of the same primitive church that existed under Jesus and his Apostles. Similarly, the church teaches that Jesus leads the church presently by means of continual and direct revelation to its leaders, especially its current president.
Comparison with Nicene Christianity
The LDS Church shares various teachings with other branches of Christianity. These include a belief in the Bible, the divinity of Jesus, and his atonement and resurrection. LDS theology also includes belief in the doctrine of salvation through Jesus alone, restorationism, millennialism, continuationism, conditional substitutionary atonement or penal substitution, and a form of apostolic succession. Nevertheless, the LDS Church differs from other churches within contemporary Christianity in other ways. Differences between the LDS Church and most of traditional Christianity include disagreement about the nature of God, belief in a theory of human salvation that includes three heavens, a doctrine of exaltation which includes the ability of humans to become gods and goddesses in the afterlife, a belief in continuing revelation and an open scriptural canon, and unique ceremonies performed privately in temples, such as the endowment and sealing ceremonies. A number of major Christian denominations view the LDS Church as standing apart from creedal Christianity. However, church members self-identify as Christians. The faith itself views other modern Christian faiths as having departed from true Christianity by way of a general apostasy and maintains that it is a restoration of 1st-century Christianity and the only true and authorized Christian church. Church leaders assert it is the only true church and that other churches do not have the authority to act in Jesus' name.
Cosmology and plan of salvation
The church's cosmology and plan of salvation include the doctrines of a pre-mortal life, an earthly mortal existence, three degrees of heaven and exaltation.
According to these doctrines, every human spirit is a spiritual child of a Heavenly Father and each has the potential to continue to learn, grow, and progress in the eternities, eventually achieving eternal life, which is to become one with God in the same way that Jesus Christ is one with the Father, thus allowing the children of God to become divine beings – that is, gods – themselves. This view on the doctrine of theosis is also referred to as becoming a "joint-heir with Christ". The process by which this is accomplished is called exaltation, a doctrine which includes the reunification of the mortal family after the resurrection and the ability to have spirit children in the afterlife and inherit a portion of God's kingdom. To obtain this state of godhood, the church teaches that one must have faith in Jesus Christ, repent of his or her sins, strive to keep the commandments faithfully, and participate in ordinances. According to LDS Church theology, men and women may be sealed to one another so that their marital bond continues into the eternities. Children may also be sealed to their biological or adoptive parents to form permanent familial bonds, thus allowing all immediate and extended family relations to endure past death. The most significant LDS ordinances may be performed via proxy in behalf of those who have died, such as baptism for the dead. The church teaches that all will have the opportunity to hear and accept or reject the gospel of Jesus Christ, either in this life or the next. Within church cosmology, the fall of Adam and Eve is seen positively. The church teaches that it was essential to allow humankind to experience separation from God, to exercise full agency in making decisions for their own happiness.
Restorationism
The LDS Church teaches that, subsequent to the death of Jesus and his original apostles, his church, along with the authority to act in Jesus Christ's name and the church's attendant spiritual gifts, was lost, due to a combination of external persecutions and internal heresies. The restoration – as represented by the church begun by Joseph Smith – refers to a return of the authentic priesthood power, spiritual gifts, ordinances, living prophets and revelation of the primitive Church of Christ. This restoration is associated with a number of events which are understood to have been necessary to re-establish the early Christian church found in the New Testament, and to prepare the earth for the Second Coming of Jesus. In particular, Latter-day Saints believe that angels appeared to Joseph Smith and a limited number of his associates, and bestowed various priesthood authorities on them.
Prophetic leadership
The church is led by a president, who is considered a "prophet, seer, and revelator." He is considered the only person who is authorized to receive revelation from God on behalf of the whole world or entire church. As such, the church teaches that he is essentially infallible when speaking on behalf of God – although the exact circumstances when his pronouncements should be considered authoritative are debated within the church. In any case, modern declarations with broad doctrinal implications are often issued by joint statement of the First Presidency; they may be joined by the Quorum of the Twelve Apostles as well. Latter-day Saints believe that Jesus leads the church through revelation and has chosen a single man as his spokesman on the earth called "the Prophet" or the "President of the Church."
Normally, he and two counselors are ordained apostles and form the First Presidency, the presiding body of the church; twelve other apostles form the Quorum of the Twelve Apostles. When a president dies, his successor is chosen from the remaining apostles, and is invariably the longest-tenured of the group. Following the death of church president Thomas S. Monson on January 2, 2018, senior apostle Russell M. Nelson was announced as president on January 16. Home and family The church and its members consider marriage and family highly important, with emphasis placed on large, nuclear families. In 1995, the church's First Presidency and Quorum of the Twelve issued "The Family: A Proclamation to the World", which stresses the importance of the family. The proclamation defined marriage as a union between one man and one woman and stated that the family unit is "central to the Creator's plan for the eternal destiny of His children." The document further says that "gender is an essential characteristic of individual premortal, mortal, and eternal identity and purpose," that the father and mother have differing but equal roles in raising children, and that successful marriages and families, founded upon the teachings of Jesus Christ, can last eternally. The proclamation also promotes specific roles essential to maintaining the strength of the family unit – the roles of a husband and father as the family's breadwinner and spiritual leader and those of a wife and mother as a nurturing caregiver. Both parents are charged with the duties of childrearing. The proclamation was issued, in part, due to concerns in the United States about the eroding of family values and the growing social movement promoting same-sex marriages. LDS Church members are encouraged to set aside one evening each week, typically Monday, to spend together in "Family Home Evening." Family Home Evenings typically consist of gathering as a family to study the faith's gospel principles, and other family activities. Daily family prayer is also encouraged. Sources of doctrine The theology of the LDS Church consists of a combination of biblical doctrines with modern revelations and other commentary by LDS leaders, particularly Joseph Smith. The most authoritative sources of theology are the faith's canon of four religious texts, called the "standard works". Included in the standard works are the Bible, the Book of Mormon, the Doctrine and Covenants and the Pearl of Great Price. The Book of Mormon is a foundational sacred book for the church; the terms "Mormon" and "Mormonism" come from the book itself. The LDS Church teaches that the Angel Moroni told Smith about golden plates containing the record, guided him to find them buried in the Hill Cumorah, and provided him the means of translating them from Reformed Egyptian. It claims to give a history of the inhabitants from a now-extinct society living on the American continent and their distinct Judeo-Christian teachings. The Book of Mormon is very important to modern Latter-day Saints, who consider it the world's most perfect text. The Bible, also part of the church's canon, is believed to be the word of God – subject to an acknowledgment that its translation may be incorrect, or that authoritative sections may have been lost over the centuries. Most often, the church uses the Authorized King James Version. Two extended portions of the Joseph Smith Translation of the Bible have been canonized and are thus considered authoritative. 
Additionally, over 600 of the more doctrinally significant verses from the translation are included as excerpts in the current LDS Church edition of the Bible. Other revelations from Smith are found in the Doctrine and Covenants, and in the Pearl of Great Price. Another source of authoritative doctrine is the pronouncements of the current Apostles and members of the First Presidency. The church teaches that the First Presidency and the Quorum of Twelve Apostles are prophets and that their teachings are generally given under inspiration from God through the Holy Spirit. In addition to doctrine given by the church as a whole, individual members of the church believe that they can also receive personal revelation from God in conducting their lives, and in revealing truth to them, especially about spiritual matters. Generally, this occurs through thoughts and feelings from the Holy Ghost, in response to prayer. Similarly, the church teaches its members may receive individual guidance and counsel from God through blessings from priesthood holders. In particular, patriarchal blessings are considered special blessings that are received only once in the recipient's life, which are recorded, transcribed, and archived. Practices Ordinances In the church, an ordinance is a sacred rite or ceremony that has spiritual and symbolic meanings, and acts as a means of conveying divine grace. Ordinances are physical acts which signify or symbolize an underlying spiritual act; for some ordinances, the spiritual act is the finalization of a covenant between the ordinance recipient and God. Ordinances are generally performed under priesthood authority. The ordinance of baptism is believed to bind its participant to Jesus Christ, who saves them in their imperfection if they continually keep their promises to him. Baptism is performed by immersion, and is typically administered to children starting at age eight. Church members believe that through the ordinances of temple sealing and temple endowment, anyone can be eternally connected with their families beyond this life and can be perfected in Jesus Christ to eventually become like their Heavenly Parents—in essence gods. Other ordinances performed in the church include confirmation, the sacrament (equivalent to the Eucharist or holy communion), and priesthood ordination. Word of Wisdom The LDS Church asks its members to adhere to a dietary code called the Word of Wisdom, in which they abstain from the consumption of alcohol, coffee, tea, tobacco, and illicit or harmful substances. The Word of Wisdom also encourages the consumption of herbs and grains along with the moderate consumption of meat. When Joseph Smith published the Word of Wisdom in 1833, it was considered only advice; violation did not restrict church membership. During the 1890s, though, church leaders started emphasizing the Word of Wisdom more. In 1921, church president Heber J. Grant made obeying the Word of Wisdom a requirement to engage in worship inside of the faith's temples. From that time, church leadership has emphasized the forbidding of coffee, tea, tobacco, and alcohol, but not the other guidelines concerning meat, grains, and herbs. Law of chastity Church members are expected to follow a moral code called the law of chastity, which prohibits adultery, homosexual behavior, and sexual relations before or outside of marriage. As part of the law of chastity, the church strongly opposes pornography, and considers masturbation an immoral act. 
Tithing and other donations Church members are expected to donate one-tenth of their income to support the operations of the church, including construction of temples, meetinghouses, and other buildings, and other church uses. Members are also encouraged to fast (abstain from food and drink for two meals) on the first Sunday of each month for at least two consecutive meals. They donate at least the cost of the two skipped meals as a "fast offering", which the church uses to assist the poor and needy and expand its humanitarian efforts. Local leadership is not remunerated financially, and is expected to tithe as well. Missionaries, however, are not expected to pay tithing directly as their living expenses are paid from church funds. Missionary service All able LDS young men are expected to serve a two-year, full-time proselytizing mission. Missionaries do not choose where they serve or the language in which they will proselytize, and are expected to fund their missions themselves or with the aid of their families. Prospective male missionaries must be at least 18 years old and no older than 25, not yet married, have completed secondary school, and meet certain criteria for physical fitness and spiritual worthiness. Missionary service is not compulsory, nor is it required for young men to retain their church membership. Unmarried women 19 years and older may also serve as missionaries, generally for a term of 18 months. However, the LDS Church emphasizes that women are not under the same expectation to serve as male members are, and may serve solely as a personal decision. There is no maximum age for missionary service for women. Retired couples are also encouraged to serve missions, and may serve 6-, 12-, 18-, or 23-month terms. Unlike younger missionaries, these senior missionaries may serve in non-proselytizing capacities such as humanitarian aid workers or family history specialists. Other men and women who desire to serve a mission, but may not be able to perform full-time service in another state or country due to health issues, may serve in a non-proselyting mission. They might assist at Temple Square in Salt Lake City or aid in the seminary system in schools. All proselyting missionaries are organized geographically into administrative areas called missions. The efforts in each mission are directed by an older adult male mission president. As of July 2020, there were 407 missions of the church. Sabbath day observance Church members are expected to set aside Sundays as a day of "rest and worship." Typically, weekly worship meetings occur solely on Sundays. Shopping and recreation are discouraged on Sundays as well. Worship and meetings Weekly meetings Meetings for worship and study are held at meetinghouses, which are typically utilitarian in character. The main focus of Sunday worship is the Sacrament meeting, where the sacrament is passed to church members; sacrament meetings also include prayers, the singing of hymns by the congregation or choir, and impromptu or planned sermons by church laity. Also included in weekly meetings are times for Sunday School, or separate instructional meetings based on age and gender, including the Relief Society for women. Church congregations are organized geographically. Members are generally expected to attend the congregation with their assigned geographical area; however, some geographical areas also provide separate congregations for young single adults, older single adults, or for speakers of alternate languages. 
For Sunday services, the church is grouped into either larger congregations known as wards, or smaller congregations known as branches. Regional church organizations, encompassing multiple congregations, include stakes, missions, districts and areas. The church's Young Men and Young Women organizations meet at the meetinghouse once a week, on a day other than Sunday, where the youth participate in activities. Temple worship In LDS theology, a temple is considered to be a holy building, dedicated as a "House of the Lord" and held as more sacred than a typical meetinghouse or chapel. In temples, church members participate in ceremonies that are considered the most sacred in the church, including marriage, and an endowment ceremony that includes a washing and anointing, receiving a temple garment, and making covenants with God. Baptisms for the dead - as well as other temple ordinances on behalf of the dead - are performed in the temples as well. Temples are considered by church members to be the most sacred structures on earth, and as such, operating temples are not open to the public. Permission to enter is reserved only for church members who pass periodic interviews with ecclesiastical leaders and receive a special recommendation card, called a temple recommend, that they present upon entry. Church members are instructed not to share details about temple ordinances with non-members or even converse about them outside the temple itself. As of November 2022, there are 175 operating temples worldwide. In order to perform ordinances in temples on behalf of deceased family members, the church emphasizes genealogical research, and encourages its lay members to participate in genealogy. It operates FamilySearch, the largest genealogical organization in the world. Conferences Twice each year, general authorities address the worldwide church through general conference. General conference sessions are translated into as many as 80 languages and are broadcast from the 21,000-seat Conference Center in Salt Lake City. During this conference, church members formally acknowledge, or "sustain", the First Presidency and Quorum of the Twelve Apostles as prophets, seers, and revelators. Individual stakes also hold formal conferences within their own boundaries biannually; wards hold conferences annually. Organization and structure Name and legal entities The church teaches that it is a continuation of the Church of Christ established in 1830 by Joseph Smith. This original church underwent several name changes during the 1830s, being called the Church of Jesus Christ, the Church of God, and then in 1834, the name was officially changed to the Church of the Latter Day Saints. In April 1838, the name was officially changed to "The Church of Jesus Christ of Latter Day Saints". After Smith died, Brigham Young and the largest body of Smith's followers incorporated the LDS Church in 1851 by legislation of the State of Deseret under the name "The Church of Jesus Christ of Latter-day Saints", which included a hyphenated "Latter-day" and a British-style lower-case d. Common informal names for the church include the LDS Church, the Latter-day Saints, and the Mormons. The term Mormon Church is in common use. The church requests that the official name be used when possible or, if necessary, shortened to "the Church", "the Church of Jesus Christ", or "Latter-day Saints". In August 2018, church president Russell M. 
Nelson asked members of the church and others to cease using the terms "LDS", "Mormon" and "Mormonism" to refer to the church, its membership, or its belief system and instead to call the church by its full and official name. Subsequent to this announcement, the church's premier vocal ensemble, the Mormon Tabernacle Choir, was officially renamed and became the "Tabernacle Choir at Temple Square". Reaction to the name change policy has been mixed. In 1887, the LDS Church was legally dissolved in the United States by the Edmunds–Tucker Act because of the church's practice of polygamy. For more than the next hundred years, the church as a whole operated as an unincorporated entity. During that time, tax-exempt corporations of the LDS Church included the Corporation of the Presiding Bishop of The Church of Jesus Christ of Latter-day Saints, which managed non-ecclesiastical real estate and other holdings; and the Corporation of the President of The Church of Jesus Christ of Latter-day Saints, which governed temples, other sacred buildings, and the church's employees. By 2021, the two had been merged into one corporate entity, legally named "The Church of Jesus Christ of Latter-day Saints." The church currently functions as a corporation sole, incorporated in Utah. Intellectual Reserve is a nonprofit corporation wholly owned by the church, which holds the church's intellectual property, such as copyrights, trademarks, and other media. Priesthood hierarchy The LDS Church is organized in a hierarchical priesthood structure administered by its male members. Members of the church-wide leadership are called general authorities. They exercise both ecclesiastical and administrative leadership over the church and direct the efforts of regional leaders down to the local level. General authorities and mission presidents work full-time for the church, and typically receive stipends from church funds or investments. As well as speaking in general conference, general authorities speak to church members in local congregations throughout the world; they also speak to youth and young adults in broadcasts and at the Church Educational System (CES) schools, such as Brigham Young University (BYU). Local congregations are typically led by bishops, who perform similar functions to pastors in the Protestant tradition, or a parish priest in the Roman Catholic church. Each active church member is expected to receive one or more callings, or positions of assigned responsibility within the church. Individual members are expected to neither ask for specific callings, nor decline callings that are extended to them by their leaders. Leadership positions in the church's various congregations are filled through the calling system, and the vast majority of callings are filled on a volunteer basis. Members volunteer general custodial work for local church facilities. All males who are living the standards of the church are generally considered for the priesthood and are ordained to the priesthood as early as age 11. Ordination occurs by a ceremony where hands are laid on the head of the one ordained. The priesthood is divided into an order for young men aged 11 years and older (called the Aaronic priesthood) and an order for men 18 years of age and older (called the Melchizedek priesthood). Some church leaders and scholars have spoken of women holding or exercising priesthood power. 
However, women are not formally ordained to the priesthood, and they do not participate in public functions administered by the priesthood - such as passing the Sacrament, giving priesthood blessings, or holding leadership positions over congregations as a whole. From 2013 to about 2014, the Ordain Women organization actively sought formal priesthood ordination for women. Programs and organizations Under the leadership of the priesthood hierarchy are five organizations that fill various roles in the church: Relief Society, the Young Men and Young Women organizations, Primary, and Sunday School. Women serve as presidents and counselors in the presidencies of the Relief Society, Young Women, and Primary, while men serve as presidents and counselors of the Young Men and Sunday School. The church also operates several programs and organizations in the fields of proselytizing, education, and church welfare such as LDS Humanitarian Services. Many of these organizations and programs are coordinated by the Priesthood Correlation Program, which is designed to provide a systematic approach to maintain worldwide consistency, orthodoxy, and control of the church's ordinances, doctrines, organizations, meetings, materials, and other programs and activities. The church operates CES, which includes BYU, BYU–Idaho, BYU–Hawaii, and Ensign College. The church also operates Institutes of Religion near the campuses of many colleges and universities. For high-school aged youth, the church operates a four-year Seminary program, which provides religious classes for students to supplement their secular education. The church also sponsors a low-interest educational loan program known as the Perpetual Education Fund, which provides educational opportunities to students from developing nations. The church's welfare system, initiated in 1930 during the Great Depression, provides aid to the poor. Leaders ask members to fast once a month and donate the money they would have spent on those meals to help the needy, in what is called a fast offering. Money from the program is used to operate Bishop's storehouses, which package and store food at low cost. Distribution of funds and food is administered by local bishops. The church also distributes money through its Philanthropies division to disaster victims worldwide. Other church programs and departments include Family Services, which provides assistance with adoption, marital and family counseling, psychotherapy, and addiction counseling; the LDS Church History Department, which collects church history and records; and the Family History Department, which administers the church's large family history efforts, including FamilySearch, the world's largest family history library and organization. Other facilities owned and operated by the church include Temple Square, the Church Office Building, the Church Administration Building, the Church History Library and the Granite Mountain Records Vault. Finances Since 1941, the church has been classified by the IRS as a 501(c)(3) organization and is therefore tax-exempt. Donations are tax-deductible in the United States. The church has not released church-wide financial statements since 1959. In the absence of official statements, people interested in knowing the church's financial status and behavior, including both members of the church and people outside the church, have attempted to estimate or guess. In 1997, Time magazine called the LDS Church one of the world's wealthiest churches per capita. 
Its for-profit, non-profit, and educational subsidiary entities are audited by an independent accounting firm. In addition, the church employs an independent audit department that provides its certification at each annual general conference that church contributions are collected and spent in accordance with church policy. The church receives significant funds from tithes and fast offerings. According to the church, tithing and fast offering money is devoted to ecclesiastical purposes and not used in for-profit ventures. It has been estimated that the LDS Church received $33 billion in donations from its members in 2010, and that during the 2010s its net worth increased by about $15 billion per year. According to estimates by Bloomberg Businessweek, the LDS Church's net worth was $40 billion as of 2012. The church's assets are held in a variety of holding companies, subsidiary corporations, and for-profit companies including: Bonneville International, KSL, Deseret Book Company, and holding companies for cattle ranches and farms in at least 12 U.S. states, Canada, New Zealand, and Argentina. Also included are banks and insurance companies, hotels and restaurants, real estate development, forestry and mining operations, and transportation and railway companies. Investigative journalism from the Truth & Transparency Foundation in 2022 suggests the church may be the owner of the most valuable real estate portfolio in the United States, with a minimum market value of $15.7 billion. The church has also invested in for-profit business and real estate ventures such as City Creek Center. The church-owned investment firm Ensign Peak Advisors publicly reports management of approximately $37.8 billion of financial securities, as of 2020. Culture Due to the differences in lifestyle promoted by church doctrine and history, members of the church have developed a distinct culture. Some scholars have even argued that church members form a distinctive ethnic group. This culture is primarily concentrated in the Intermountain West. Many of the church's more distinctive practices follow from members' adherence to the Word of Wisdom – which includes abstinence from tobacco, alcohol, coffee, and tea – and their observance of Sabbath-day restrictions on recreation and shopping. Media and arts LDS-themed media includes cinema, fiction, websites, and graphical art such as photography and paintings. The church owns a chain of bookstores called Deseret Book, which provides a channel through which publications are sold; church leaders have authored books and sold them through the publishing arm of the bookstore. BYU TV, the church-sponsored television station, also airs on several networks. The church also produces several pageants annually depicting various events of the primitive and modern-day church. Its Easter pageant Jesus the Christ has been identified as the "largest annual outdoor Easter pageant in the world". The church encourages entertainment without violence, sexual content, or vulgar language; many church members specifically avoid rated-R movies. The church's official choir, the Tabernacle Choir at Temple Square, was formed in the mid-19th century and performs in the Salt Lake Tabernacle. The choir has travelled to more than 28 countries and is considered one of the most famous choirs in the world. It has received a Grammy Award, three Emmy Awards, two Peabody Awards, and the National Medal of Arts. 
Notable members of the church in the media and arts include: Donny Osmond, an American singer, dancer, and actor; Orson Scott Card, author of Ender's Game; Stephenie Meyer, author of the Twilight series; and Glenn Beck, a conservative radio host, television producer, and author. Notable productions related to the church include Murder Among the Mormons, a 2021 Netflix documentary; and The Book of Mormon, a big-budget musical about Mormon missionaries that received nine Tony Awards. Political involvement The LDS Church states it generally takes no partisan role in politics, but encourages its members to play an active role as responsible citizens in their communities, including becoming informed about issues and voting. The church maintains that the faith's values can be found among many political parties. A 2012 Pew Center on Religion and Public Life survey indicates that 74 percent of U.S. members lean towards the Republican Party. Some liberal members say they feel that they have to defend their worthiness due to political differences. Democrats and those who lean Democrat made up 18% of church members surveyed in the 2014 Pew Research Center's Religious Landscape Survey. The church's stated policy of staying out of politics does not extend to instances of what church leaders deem to be moral issues, or to issues the church "believes ... directly affect [its] interests." It has previously opposed same-sex marriage in California Prop 8, supported a gay rights bill in Salt Lake City which bans discrimination against homosexual persons in housing and employment, opposed gambling, opposed storage of nuclear waste in Utah, and supported an approach to U.S. immigration policy as outlined in the Utah Compact. It also opposed a ballot initiative legalizing medicinal marijuana in Utah, but supported a possible alternative to it. In 2019 and 2021, the church stated its opposition to the Equality Act, which would prohibit discrimination in the United States on the basis of sexual orientation and gender identity, but supports alternate legislation that it says would protect both LGBTQ rights and religious freedom. In 2022, the church stated its support for the Respect for Marriage Act - which would codify same-sex marriage as legal in the United States - due to the "protections for religious freedom" it includes. In the 117th United States Congress, there are nine LDS Church members, including all six members of Utah's congressional delegation, all of whom are Republicans. Utah's current governor, Spencer Cox, is also a church member, as are supermajorities in both houses of the Utah State Legislature. Church member and current U.S. Senator Mitt Romney was the Republican Party's nominee in the 2012 U.S. presidential election. Demographics The church reports a worldwide membership of 16 million. The church's definition of "membership" includes all persons who were ever baptized, or whose parents were members while the person was under the age of eight (called "members of record"), and who have neither been excommunicated nor asked to have their names removed from church records. As of December 2011, approximately 8.3 million members resided outside the United States. According to its statistics, the church is the fourth-largest religious body in the United States. Although the church does not publish attendance figures, researchers estimate that attendance at weekly LDS worship services globally is around 4 million. Members living in the U.S. 
and Canada constitute 46 percent of membership, Latin America 38 percent, and members in the rest of the world 16 percent. The 2012 Pew Forum on Religion & Public Life survey, conducted by Princeton Survey Research Associates International, found that approximately 2 percent of the U.S. adult population self-identified as Mormon. Membership is concentrated geographically in the Intermountain West, in a specific region sometimes known as the Mormon corridor. Church members and some others from the United States colonized this region in the mid-to-late 1800s, dispossessing several indigenous tribes in the process. The church experienced rapid numerical growth in the 20th century, especially in the 1980s and 1990s. In the 21st century, however, church membership growth has slowed. Still, in the last decade, the church has more than doubled in size in Africa; the largest regional increases by raw numbers occurred in the United States, South America, and Africa. In the United States, church members tend to be more highly educated than the general population: 54 percent of LDS men and 44 percent of women have post-secondary education, compared with 37 percent and 28 percent, respectively, in the general American population. The racial and ethnic composition of membership in the United States is one of the least diverse in the country. Church membership is predominantly white; the proportion of Black members is significantly lower than in the general U.S. population. The LDS Church does not release official statistics on church activity, but it is likely that only approximately 40 percent of its recorded membership in the United States and 30 percent worldwide regularly attend weekly Sunday worship services. A statistical analysis of the 2014 Pew Religious Landscape Survey assessed that "about one-third of those with a Latter-day Saint background... left the Church", identifying as disaffiliated. Activity rates vary with age, and disengagement occurs most frequently between the ages of 16 and 25. Young single adults are more likely to become inactive than their married counterparts, and overall, women tend to be more active than men. Humanitarian services The LDS Church provides worldwide humanitarian service and is widely known for it. The church's welfare and humanitarian efforts are coordinated by Philanthropies, a church department under the direction of the Presiding Bishopric. Welfare efforts, originally initiated during the Great Depression, provide aid for the poor, financed by donations from church members. Philanthropies is also responsible for philanthropic (that is, not tithing or fast offering) donations to the LDS Church and other affiliated charities, such as the Church History Library, the Church Educational System (and its subsidiary organizations), the Tabernacle Choir at Temple Square and funds for LDS missionaries. Donations are also used to operate bishop's storehouses, which package and store food for the poor at low cost, and provide other local services. In 2016, the church reported that it had spent a total of $1.2 billion on humanitarian aid over the previous 30 years. Church humanitarian aid includes organizing food security, clean water, mobility, and healthcare initiatives, operating thrift stores, maintaining a service project website, and directly funding or partnering with other organizations. The church reports that the value of all charitable donations in 2021 was $906 million. 
Independent reporting has found that the Church's charity organization, LDS Charities, gave a total of $177 million from 2008 to 2020. The church also distributes money and aid to disaster victims worldwide. In 2005, the church partnered with Catholic Relief Services to provide aid to Niger. In 2010, it partnered with Islamic Relief to help victims of flooding in Pakistan. Latter-day Saint Charities (a branch of the church's welfare department) increased food production during the COVID-19 pandemic and donated healthcare supplies to 16 countries affected by the crisis. The church has donated $4 million to aid refugees fleeing from the 2022 Russian invasion of Ukraine. In 2022, the church gave $32 million to the United Nations World Food Programme, in its largest one-time donation to a humanitarian organization to that point. Discrimination and persecution The LDS Church and other churches within Mormonism have been the subject of discrimination and sometimes violent persecution. The most vocal and strident opposition occurred during the 19th century, particularly the forceful expulsion from Missouri and Illinois in the 1830s and 1840s, during the Utah War of the 1850s, and in the second half of the century. Violent persecution against the LDS Church occurred in the early 1830s in Missouri. Mormons quickly earned long-lasting enmity in the frontier communities, due to discordant cultural attitudes (including opposition to slavery in a slave state) and their practice of voting as a bloc (thus gaining significant political power). This enmity culminated in the Missouri Mormon War and Governor Boggs’ “extermination order.” Modern-day opposition to the church and Mormonism can broadly be divided into two separate strains of thought: the secular, and the religious. Secular criticism focuses primarily on refuting the church’s truth claims; religious opposition, led mostly by Evangelicals, argues that Mormonism is heretical or diabolical. In recent years, an increasing number of meetinghouses and other church facilities have been the targets of vandalism or arson. In 2022, the Orem Utah Temple was damaged by arson while under construction. Criticism and controversy The LDS Church has been subject to criticism and the subject of controversy since its early years in New York and Pennsylvania. Modern criticism of the church includes disputed factual claims, allegations of historical revisionism by the church, child sexual abuse, anti-gay teachings, racism, and sexism. Notable 20th-century critics include Jerald and Sandra Tanner and historian Fawn Brodie. Child sexual abuse The church has been criticized for a number of alleged abuses perpetrated by local church leadership. In other cases, church leaders have been criticized for allegedly failing to properly report abuse to law enforcement. Scriptures In the late 1820s, criticism centered on the claim by Joseph Smith to have been led to a set of gold plates from which the Book of Mormon was reputedly translated. Mainstream academic scholarship does not conclude the Book of Mormon is of an ancient origin and considers the book to be a 19th-century composition. Scholars have pointed out a number of anachronisms within the text. They argue that no evidence of a reformed Egyptian language has ever been discovered. Also, general archaeological and genetic evidence has not supported the book's statements about any known indigenous peoples of the Americas. 
Since its publication in 1842, the Book of Abraham (currently published as part of the canonical Pearl of Great Price) has also been a major source of controversy. Numerous non-Mormon Egyptologists, beginning in the late 19th century, have disagreed with Joseph Smith's explanations of the book's facsimiles. Translations of the original papyri - by both Mormon and non-Mormon Egyptologists - do not match the text of the Book of Abraham as purportedly translated by Joseph Smith. Indeed, the transliterated text from the recovered papyri and facsimiles published in the Book of Abraham contain no direct references to Abraham. Scholars have also asserted that damaged portions of the papyri have been reconstructed incorrectly by Smith or his associates. Polygamy Polygamy (called plural marriage within the church) was practiced by church leaders for more than half of the 19th century, and practiced publicly from 1852 to 1890 by between 20 and 30 percent of Latter-day Saint families. It was instituted privately in the 1830s by founder Joseph Smith and announced publicly in 1852 at the direction of Brigham Young. For over 60 years, the church and the United States were at odds over the issue: at one point, the Republican platform referenced "the twin relics of barbarism—polygamy and slavery." The church defended the practice as a matter of religious freedom, while the federal government aggressively sought to eradicate it; in 1862, the United States Congress passed the Morrill Anti-Bigamy Act, which prohibited plural marriage in the territories. In 1890, church president Wilford Woodruff issued a Manifesto that officially terminated the practice, although it did not dissolve existing plural marriages. Some church members continued to enter into polygamous marriages, but these eventually stopped in 1904 when church president Joseph F. Smith disavowed polygamy before Congress and issued a "Second Manifesto," calling for all plural marriages in the church to cease. Several small fundamentalist groups, seeking to continue the practice, split from the LDS Church, but the mainline church now excommunicates members found practicing polygamy and distances itself from those fundamentalist groups. Ethnic minorities Black people From the administration of Brigham Young until 1978, the church did not allow black people to receive the priesthood - and therefore to hold many leadership positions - or to enter the temple. The priesthood ban was removed when the church leadership claimed they received divine inspiration. Public pressure during the United States' civil rights movement had preceded the priesthood ban being rescinded. Native American people Church leadership and publications have previously taught that Native Americans are descendants of Lamanites, a dark-skinned and cursed people from the Book of Mormon. More recently, claims by Mormon researchers and publications generally favor a smaller geographic footprint of Lamanite descendants. Mainstream science and archaeology fail to provide any evidence for the existence of populations of Lamanites. Current church publications state that the exact extent and identity of Lamanite descendants is unknown. The church ran an Indian Placement Program between the 1950s and the 1990s, wherein indigenous children were adopted by white church members. Criticism resulted during and after the program, including claims of improper assimilation and even abuse. 
However, many of the involved students and families praised the program, and positive outcomes were reported for many participants. Jewish people Some Jewish groups criticized the LDS Church in 1995 after discovering that vicarious baptisms for the dead for victims of the Holocaust had been performed by members of the church. After that criticism, church leaders put a policy in place to stop the practice, with an exception for baptisms specifically requested or approved by victims' relatives. Jewish organizations again criticized the church in 2002, 2004, 2008, and 2012, stating that the church failed to honor the 1995 agreement. The LDS Church says it has put institutional safeguards in place to avoid the submission of the names of Holocaust victims not related to Mormon members, but that the sheer number of names submitted makes policing the database of names impractical. LGBT people The church's policies and treatment of sexual minorities and gender minorities have long been the subject of external criticism, as well as internal controversy and disaffection by members. Because of its ban on same-sex sexual activity and same-sex marriage, the LDS Church taught for decades that its adherents who are attracted to the same sex can and should change their attractions through righteous striving and sexual orientation change efforts, and it provided therapy and programs with that goal. Current teachings and policies leave homosexual members with the option of potentially harmful attempts to change their sexual orientation, entering a mixed-orientation opposite-sex marriage, or lifelong celibacy. Some individuals and organizations have argued that church teachings against homosexuality and the treatment of LGBT Mormons by other members and leaders have contributed to their elevated rates of PTSD and depression, as well as suicide and teen homelessness. The church's decades-long political involvement opposing US same-sex marriage laws has further garnered criticism and protests. Criticism of Joseph Smith In the 1830s, the church was criticized for Smith's handling of a banking failure in Kirtland, Ohio. After the Mormons migrated west, there was fear and suspicion about the LDS Church's political and military power in Missouri, culminating in the 1838 Mormon War and the Mormon Extermination Order (Missouri Executive Order 44) by Governor Lilburn Boggs. In the 1840s, criticism of the church included its theocratic aspirations in Nauvoo, Illinois. Criticism of the practice of plural marriage and other doctrines taught by Smith was published in the Nauvoo Expositor. Opposition led to a series of events culminating in the killing of Smith and his brother while they were jailed in 1844. Financial allegations The church's failure to make its finances public has drawn criticism from commentators who consider its practices too secretive. In December 2019, a whistleblower alleged the church held over $100 billion in investment funds through its investment management company, Ensign Peak Advisors (EP); that it failed to use the funds for charitable purposes and instead used them in for-profit ventures; and that it misled contributors and the public about the usage and extent of those funds. 
In response, the church's First Presidency stated that "the Church complies with all applicable law governing our donations, investments, taxes, and reserves," and that "a portion" of funds received by the church are "methodically safeguarded through wise financial management and the building of a prudent reserve for the future". The church has not directly addressed the fund's size to the public, but third parties have treated the disclosures as legitimate. The disclosure of Ensign Peak has led to criticism that the church's wealth may be excessive. Critical commentators have asserted that the church uses its corporate structure to "optimize its asset and capital management by moving money and assets between [its] tax-exempt and regular businesses as loans, donations or investments." The church has been accused of "significant tax evasion" in Australia. According to an investigation by Australian newspapers, The Daily Age and The Sun Herald, the church's corporation LDS Charities Australia was the recipient of nearly $70 million in donations annually (which is tax exempt under Australian law, as opposed to religious donations, which are not) but appeared to actually spend very little of it on charity. According to the investigation, tithing and other religious donations were routed through the corporation to ensure they would be tax exempt. The investigation does not reference any internal church documents to confirm their findings. The church has previously fought to keep its internal financial information out of the public record. In Canada, a total of more than 1 billion dollars collected through tithing has been transferred tax-free to church universities over a 15-year period. In October 2022, The Sydney Morning Herald announced the results of an investigation it conducted together with multiple other media organizations - that while the church publicly claimed to have donated US$1.35 billion to charity between 2008 and 2020, its private financial reports showed that it actually donated only US$0.177 billion to charity in that period. In February 2023, the U.S. Securities and Exchange Commission (SEC) issued a $5 million penalty to the church and its investment company, EP. The SEC alleged that the church concealed its investments and their management in multiple shell companies from 1997 to 2019; the SEC believes these shell companies were approved by senior church leadership to avoid public transparency. The church released a statement stating that in 2000 EP "received and relied upon legal counsel regarding how to comply with its reporting obligations while attempting to maintain the privacy of the portfolio." After initial SEC concern in June 2019, the church stated that EP "adjusted its approach and began filing a single aggregated report." Responses Mormon apologetics organizations, such as FAIR and the Maxwell Institute, seek to counter criticisms of the church and its leaders. Most of the apologetic work focuses on providing and discussing evidence supporting the claims of Smith and the Book of Mormon. Scholars and authors such as Hugh Nibley, Daniel C. Peterson, and others are well-known apologists within the church. 
See also Christianity in the United States Index of articles related to the Church of Jesus Christ of Latter-day Saints List of missions of the Church of Jesus Christ of Latter-day Saints List of temples of the Church of Jesus Christ of Latter-day Saints Mormon (word) Mormonism and Islam Mormonism and Judaism List of new religious movements Notes References External links Official church websites ChurchofJesusChrist.org – Official church website ComeUntoChrist.org – Official church website, with information about basic beliefs. (formerly Mormon.org)
2,681
5,938
https://en.wikipedia.org/wiki/Standard%20works
Standard works
The Standard Works of the Church of Jesus Christ of Latter-day Saints (LDS Church, the largest in the Latter Day Saint movement) are the four books that currently constitute its open scriptural canon. The four books of the standard works are: The Authorized King James Version as the official scriptural text of the Bible (other versions of the Bible are used in non-English-speaking countries) The Book of Mormon, subtitled since 1981 "Another Testament of Jesus Christ" The Doctrine and Covenants (D&C) The Pearl of Great Price (containing the Book of Moses, the Book of Abraham, Joseph Smith–Matthew, Joseph Smith–History, and the Articles of Faith) The Standard Works are printed and distributed by the LDS Church both in a single binding called a quadruple combination and as a set of two books, with the Bible in one binding, and the other three books in a second binding called a triple combination. Current editions of the Standard Works include a number of non-canonical study aids, including a Bible dictionary, photographs, maps and gazetteer, topical guide, index, footnotes, cross references, and excerpts from the Joseph Smith Translation of the Bible. The scriptural canon is "open" due to the Latter-day Saint belief in continuous revelation. Additions can be made to the scriptural canon with the "common consent" of the church's membership. Other branches of the Latter Day Saint movement reject some of the Standard Works or add other scriptures, such as the Book of the Law of the Lord and The Word of the Lord Brought to Mankind by an Angel. Differences in canonicity across sects Canons of various Latter Day Saint denominations reject some of the Standard Works canonized by the LDS Church or have included additional works. For instance, the Bickertonite sect does not consider the Pearl of Great Price or Doctrine and Covenants to be scriptural. Rather, they believe that the New Testament scriptures contain a true description of the church as established by Jesus Christ, and that both the King James Version of the Bible and the Book of Mormon are the inspired word of God. Some Latter Day Saint denominations accept earlier versions of the Standard Works or work to develop corrected translations. Others have purportedly received additional revelations. The Community of Christ points to Jesus Christ as the living Word of God, and it affirms the Bible, along with the Book of Mormon, as well as its own regularly appended version of Doctrines and Covenants as scripture for the church. While it publishes a version of the Joseph Smith Translation of the Bible—which includes material from the Book of Moses—the Community of Christ also accepts the use of other English translations of the Bible, such as the standard King James Version and the New Revised Standard Version. Like the aforementioned Bickertonites, the Church of Christ (Temple Lot) rejects the Doctrine and Covenants and the Pearl of Great Price, as well as the Joseph Smith Translation of the Bible, preferring to use only the King James Bible and the Book of Mormon as doctrinal standards. The Book of Commandments is accepted as being superior to the Doctrine and Covenants as a compendium of Joseph Smith's early revelations, but is not accorded the same status as the Bible or the Book of Mormon. The Word of the Lord and The Word of the Lord Brought to Mankind by an Angel are two related books considered to be scriptural by Fettingite factions that separated from the Temple Lot church. 
Both books contain revelations allegedly given to former Church of Christ (Temple Lot) Apostle Otto Fetting by an angelic being who claimed to be John the Baptist. The latter title (120 messages) contains the entirety of the former's material (30 messages) with additional revelations (90 messages) purportedly given to William A. Draves by this same being, after Fetting's death. Neither is accepted by the larger Temple Lot body of believers. The Church of Jesus Christ of Latter Day Saints (Strangite) considers the Bible (when correctly translated), the Book of Mormon, and editions of the Doctrine and Covenants published prior to Joseph Smith's death (which contained the Lectures on Faith) to be inspired scripture. They also hold the Joseph Smith Translation of the Bible to be inspired, but do not believe modern publications of the text are accurate. Other portions of The Pearl of Great Price, however, are not considered to be scriptural, though they are not necessarily fully rejected either. The Book of Jasher was consistently used by both Joseph Smith and James Strang, but as with other Latter Day Saint denominations and sects, there is no official stance on its authenticity, and it is not considered canonical. An additional work called The Book of the Law of the Lord is also accepted as inspired scripture by the Strangites. They likewise hold as scriptural several prophecies, visions, revelations, and translations printed by James Strang, and published in the Revelations of James J. Strang. Among other things, this text contains his purported "Letter of Appointment" from Joseph Smith and his translation of the Voree plates. The Church of Jesus Christ (Cutlerite) accepts the following as scripture: the Inspired Version of the Bible (including the Book of Moses and Joseph Smith–Matthew), the Book of Mormon, and the 1844 edition of the Doctrine and Covenants (including the Lectures on Faith). However, the revelation on tithing (section 107 in the 1844 edition; 119 in modern LDS editions) is emphatically rejected by members of this church, as it is not believed to have been given by Joseph Smith. The Book of Abraham is rejected as scripture, as are the other portions of the Pearl of Great Price that do not appear in the Inspired Version of the Bible. Many Latter Day Saint denominations have also either adopted the Articles of Faith or at least view them as a statement of basic theology. (They are considered scriptural by the larger LDS church and are included in The Pearl of Great Price.) At times, the Articles have been adapted to fit the respective belief systems of various faith communities. Continuing revelation Under the LDS Church's doctrine of continuing revelation, Latter-day Saints believe in the principle of revelation from God to his children. Individual members are entitled to divine revelation for confirmation of truths, gaining knowledge or wisdom, meeting personal challenges, and so forth. Parents are entitled to revelation for raising their families. Church members believe that divine revelation for the direction of the entire church comes from God to the President of the Church, whom they consider to be a prophet in the same sense as Noah, Abraham, Moses, Peter, and other biblical leaders. 
When other members of the First Presidency or Quorum of the Twelve speak as "moved upon by the Holy Ghost", it "shall be scripture, shall be the will of the Lord, shall be the mind of the Lord, shall be the word of the Lord, shall be the voice of the Lord, and the power of God unto salvation." Members are encouraged to ponder these revelations and pray to determine for themselves the truthfulness of doctrine. Adding to the canon of scripture The D&C teaches that "all things must be done in order, and by common consent in the church" (). This applies to adding new scripture. LDS Church president Harold B. Lee taught "The only one authorized to bring forth any new doctrine is the President of the Church, who, when he does, will declare it as revelation from God, and it will be so accepted by the Council of the Twelve and sustained by the body of the Church." There are several instances of this happening in the LDS Church: June 9, 1830: First conference of the church, The Articles and Covenants of the Church of Christ, now known as D&C 20. If the Bible and Book of Mormon were not sustained on April 6 then they were by default when the Articles and Covenants were sustained. (see D&C 20:8-11) August 17, 1835: Select revelations from Joseph Smith were unanimously accepted as scripture. These were later printed in the D&C. October 10, 1880: The Pearl of Great Price was unanimously accepted as scripture. Also at that time, other revelations in the Doctrine and Covenants – which had not been accepted as scripture in 1835 because they were received after that date – were unanimously accepted as scripture. October 6, 1890: Official Declaration 1 was accepted unanimously as scripture. It later began to be published in the Doctrine and Covenants. April 3, 1976: Two visions (one received by Joseph Smith and the other by Joseph F. Smith) were accepted as scripture and added to the Pearl of Great Price. (The two visions were later moved to the D&C as sections 137 and 138.) September 30, 1978: Official Declaration 2 was accepted unanimously as scripture. It immediately was added to the Doctrine and Covenants. When a doctrine undergoes this procedure, the LDS Church treats it as the word of God, and it is used as a standard to compare other doctrines. Lee taught: It is not to be thought that every word spoken by the General Authorities is inspired, or that they are moved upon by the Holy Ghost in everything they speak and write. Now you keep that in mind. I don't care what his position is, if he writes something or speaks something that goes beyond anything that you can find in the standard works, unless that one be the prophet, seer, and revelator—please note that one exception—you may immediately say, "Well, that is his own idea!" And if he says something that contradicts what is found in the standard works (I think that is why we call them "standard"—it is the standard measure of all that men teach), you may know by that same token that it is false; regardless of the position of the man who says it. The Bible English-speaking Latter-day Saints typically study a custom edition of the King James Version of the Bible (KJV), which includes custom chapter headings, footnotes referencing books in the Standard Works, and select passages from the Joseph Smith Translation of the Bible. Though the KJV was always commonly used, it was officially adopted in the 1950s when J. 
Reuben Clark, of the church's First Presidency, argued extensively that newer translations, such as Revised Standard Version of 1952, were of lower quality and less compatible with LDS tradition. After publishing its own KJV edition in 1979, the First Presidency announced in 1992 that the KJV was the church's official English Bible, stating "[w]hile other Bible versions may be easier to read than the King James Version, in doctrinal matters latter-day revelation supports the King James Version in preference to other English translations." In 2010 this was written into the church's Handbook, which directs official church policy and programs. A Spanish version, with a similar format and using a slightly revised version of the 1909 Reina-Valera translation, was published in 2009. Latter-day Saints in other non-English speaking areas may use other versions of the Bible. Though the Bible is part of the LDS canon and members believe it to be the word of God, they believe that omissions and mistranslations are present in even the earliest known manuscripts. They claim that the errors in the Bible have led to incorrect interpretations of certain passages. Thus, as church founder Joseph Smith explained, the church believes the Bible to be the word of God "as far as it is translated correctly". The church teaches that "[t]he most reliable way to measure the accuracy of any biblical passage is not by comparing different texts, but by comparison with the Book of Mormon and modern-day revelations". The manuscripts of the Joseph Smith Translation of the Bible state that "the Songs of Solomon are not inspired scripture," and therefore it is not included in LDS canon and rarely studied by members of the LDS Church. However, it is still printed in every version of the King James Bible published by the church. The Apocrypha Although the Apocrypha was part of the 1611 edition of the KJV, the LDS Church does not currently use the Apocrypha as part of its canon. Joseph Smith taught that while the contemporary edition of the Apocrypha was not to be relied on for doctrine, it was potentially useful when read with a spirit of discernment. Joseph Smith Translation of the Bible Joseph Smith translated selected verses of the Bible, working by subject. His complete work is known as the Joseph Smith Translation of the Bible, or the Inspired Version. Although this selected translation is not generally quoted by church members, the English Bible issued by the church and commonly used by Latter-day Saints contains cross-references to the Joseph Smith Translation (JST), as well as an appendix containing longer excerpts from it. Excerpts that were too long to include in the Bible appendix are included in the Pearl of Great Price as the Book of Moses (for Genesis 1:1-6:13) and Joseph Smith-Matthew (for Matthew 23:39-24:51 and Mark 13). The Book of Mormon Latter-day Saints consider The Book of Mormon a volume of holy scripture comparable to the Bible. It contains a record of God's dealings with the prophets and ancient inhabitants of the Americas. The introduction to the book asserts that it "contains, as does the Bible, the fullness of the everlasting gospel. The book was written by many ancient prophets by the spirit of prophecy and revelation. Their words, written on gold plates, were quoted and abridged by a prophet-historian named Mormon." Segments of the Book of Mormon provide an account of the culture, religious teachings, and civilizations of some of the groups who immigrated to the New World. 
One came from Jerusalem in 600 B.C., and afterward separated into two nations, identified in the book as the Nephites and the Lamanites. Some years after their arrival, the Nephites met with a similar group, the Mulekites who left the Middle East during the same period. An older group arrived in America much earlier, when the Lord confounded the tongues at the Tower of Babel. This group is known as the Jaredites and their story is condensed in the Book of Ether. The crowning event recorded in the Book of Mormon is the personal ministry of Jesus Christ among Nephites soon after his resurrection. This account presents the doctrines of the gospel, outlines the plan of salvation, and offers men peace in this life and eternal salvation in the life to come. The latter segments of the Book of Mormon detail the destruction of these civilizations, as all were destroyed except the Lamanites. The book asserts that the Lamanites are among the ancestors of the indigenous peoples of the Americas. According to his record, Joseph Smith translated the Book of Mormon by gift and power of God through a set of interpreters later referred to as the Urim and Thummim. Eleven witnesses signed testimonies of its authenticity, which are now included in the preface to the Book of Mormon. The Three Witnesses testified to have seen an angel present the gold plates and to have heard God bear witness to its truth. Eight others stated that Joseph Smith showed them the plates and that they handled and examined them. The Doctrine and Covenants The church's D&C is a collection of revelations, policies, letters, and statements given to the modern church by past church presidents. This record contains points of church doctrine and direction on church government. The book has existed in numerous forms, with varying content, throughout the history of the church and has also been published in differing formats by the various Latter Day Saint denominations. When the church chooses to canonize new material, it is typically added to the Doctrine and Covenants; the most recent changes were made in 1981. The Pearl of Great Price The Pearl of Great Price is a selection of material produced by Joseph Smith and deals with many significant aspects of the faith and doctrine of the church. Many of these materials were initially published in church periodicals in the early days of the church. The Pearl of Great Price contains five sections: Selections from the Book of Moses: portions of the Book of Genesis from the Joseph Smith Translation of the Bible. The Book of Abraham: a translation from papyri acquired by Smith in 1835, dealing with Abraham's journeys in Egypt. The work contains many distinctive Mormon doctrines such as exaltation. Joseph Smith–Matthew: portions of the Gospel of Matthew and Gospel of Mark from the Joseph Smith Translation of the Bible. Joseph Smith–History: a first-person narrative of Smith's life before the founding of the church. The material is taken from Documentary History of the Church and is based on a history written by Smith in 1838. The Articles of Faith: concise listing of thirteen fundamental doctrines of Mormonism composed by Smith in 1842. 
Church instruction Historically, in the church's Sunday School and Church Educational System (CES) classes, the standard works have been studied and taught in a four-year rotation: Year One: Old Testament (also includes some coverage of related topics in the Book of Moses and Book of Abraham from the Pearl of Great Price) Year Two: New Testament Year Three: Book of Mormon Year Four: Doctrine and Covenants and Church History However, church leaders have emphasized that members should not restrict their study of the standard works to the particular book being currently studied in Sunday School or other religious courses. Specifically, church president Ezra Taft Benson taught: At present, the Book of Mormon is studied in our Sunday School and seminary classes every fourth year. This four-year pattern, however, must not be followed by Church members in their personal and family study. We need to read daily from the pages of [that] book .... In November 2014, the church announced changes in the curriculum to be used within CES, including the church's four institutions of higher education, such as Brigham Young University. The church's seminary program will retain the current four-year rotation of study. Beginning in the fall of 2015, incoming institute of religion and CES higher education students will be required to take four new cornerstone courses: Jesus Christ and the Everlasting Gospel Foundations of the Restoration The Teachings and Doctrine of the Book of Mormon The Eternal Family The church's intent is to further integrate the teachings found in the Standard Works with those of church leaders and other current sources. See also Book of Joseph, untranslated scripture from Joseph Smith Papyri Kinderhook plates, incomplete non-canonized translation made by Joseph Smith Lectures on Faith, decanonized in 1921 List of non-canonical revelations in The Church of Jesus Christ of Latter-day Saints References External links Quadruple Combination: Official Edition of the Standard Works (King James Bible, the Book of Mormon, Doctrine and Covenants, and Pearl of Great Price) in PDF format, including footnotes, chapter headings and supplemental material. Official Edition of the LDS standard works with cross references and study helps
2,683
5,950
https://en.wikipedia.org/wiki/Columbus%2C%20Ohio
Columbus, Ohio
Columbus is the state capital and the most populous city in the U.S. state of Ohio. With a 2020 census population of 905,748, it is the 14th-most populous city in the U.S., the second-most populous city in the Midwest, after Chicago, and the third-most populous state capital. Columbus is the county seat of Franklin County; it also extends into Delaware and Fairfield counties. It is the core city of the Columbus metropolitan area, which encompasses 10 counties in central Ohio. The metropolitan area had a population of 2,138,926 in 2020, making it the largest metropolitan area entirely in Ohio and the 32nd-largest in the U.S. Columbus originated as numerous Native American settlements on the banks of the Scioto River. Franklinton, now a city neighborhood, was the first European settlement, laid out in 1797. The city was founded in 1812 at the confluence of the Scioto and Olentangy rivers, and laid out to become the state capital. The city was named for Italian explorer Christopher Columbus. The city assumed the function of state capital in 1816 and county seat in 1824. Amid steady years of growth and industrialization, the city has experienced numerous floods and recessions. Beginning in the 1950s, Columbus began to experience significant growth; it became the largest city in Ohio in land and population by the early 1990s. The 1990s and 2000s saw redevelopment in numerous city neighborhoods, including Downtown. The city has a diverse economy based on education, government, insurance, banking, defense, aviation, food, clothing, logistics, steel, energy, medical research, health care, hospitality, retail and technology. The metropolitan area is home to the Battelle Memorial Institute, the world's largest private research and development foundation; Chemical Abstracts Service, the world's largest clearinghouse of chemical information; and the Ohio State University, one of the largest universities in the United States. The Greater Columbus area is home to the headquarters of six corporations in the U.S. Fortune 500: Cardinal Health, American Electric Power, L Brands, Nationwide, Bread Financial and Huntington Bancshares. Name The city of Columbus was named after 15th-century Italian explorer Christopher Columbus at the city's founding in 1812. It is the largest city in the world named for the explorer, who sailed to and settled parts of the Americas on behalf of Isabella I of Castile and Spain. Although no reliable history exists as to why Columbus, who had no connection to the city or state of Ohio before the city's founding, was chosen as the name for the city, the book Columbus: The Story of a City indicates a state lawmaker and local resident admired the explorer enough to persuade other lawmakers to name the settlement Columbus. Since the late 20th century, historians have criticized Columbus for initiating the European conquest of America and for abuse, enslavement, and subjugation of natives. Efforts to remove symbols related to the explorer in the city date to the 1990s. Amid the George Floyd protests in 2020, several petitions pushed for the city to be renamed. Nicknames for the city have included "the Discovery City," "Arch City," "Cap City," "Cowtown," "The Biggest Small Town in America" and "Cbus." History Ancient and early history Between 1000 B.C. and 1700 A.D., the Columbus metropolitan area was a center of indigenous cultures known as the Mound Builders, including the Adena, Hopewell and Fort Ancient peoples. Remaining physical evidence of these cultures consists of their burial mounds and what they contained. 
Most of Central Ohio's remaining mounds are located outside of Columbus city boundaries, though the Shrum Mound is maintained, now as part of a public park and historic site. The city's Mound Street derives its name from a mound that existed by the intersection of Mound and High Streets. The mound's clay was used in bricks for most of the city's initial brick buildings; many were subsequently used in the Ohio Statehouse. The city's Ohio History Center maintains a collection of artifacts from these cultures. 18th century: Ohio Country The area including modern-day Columbus once comprised the Ohio Country, under the nominal control of the French colonial empire through the Viceroyalty of New France from 1663 until 1763. In the 18th century, European traders flocked to the area, attracted by the fur trade. The area was often caught between warring factions, including American Indian and European interests. In the 1740s, Pennsylvania traders overran the territory until the French forcibly evicted them. Fighting for control of the territory in the French and Indian War (1754-1763) became part of the international Seven Years' War (1756-1763). During this period, the region routinely suffered turmoil, massacres and battles. The 1763 Treaty of Paris ceded the Ohio Country to the British Empire. Until just before the American Revolution, Central Ohio had continuously been the home of numerous indigenous villages. A Mingo village was located at the forks of the Scioto and Olentangy rivers, with Shawnee villages to the south and Wyandot and Delaware villages to the north. Colonial militiamen burned down the Mingo village in 1774 during a raid. Virginia Military District After the American Revolution, the Virginia Military District became part of the Ohio Country as a territory of Virginia. Colonists from the East Coast moved in, but rather than finding an empty frontier, they encountered people of the Miami, Delaware, Wyandot, Shawnee and Mingo nations, as well as European traders. The tribes resisted expansion by the fledgling United States, leading to years of bitter conflict. The decisive Battle of Fallen Timbers resulted in the Treaty of Greenville in 1795, which finally opened the way for new settlements. By 1797, a young surveyor from Virginia named Lucas Sullivant had founded a permanent settlement on the west bank of the forks of the Scioto and Olentangy rivers. An admirer of Benjamin Franklin, Sullivant chose to name his frontier village "Franklinton." The location was desirable for its proximity to the navigable rivers – but Sullivant was initially foiled when, in 1798, a large flood wiped out the new settlement. He persevered, and the village was rebuilt, though somewhat more inland. After the Revolution, land comprising parts of Franklin and adjacent counties was set aside by the United States Congress for settlement by Canadians and Nova Scotians who were sympathetic to the colonial cause and had their land and possessions seized by the British government. The Refugee Tract, consisting of , was long and wide, and was claimed by 67 eligible men. The Ohio Statehouse sits on land once contained in the Refugee Tract. 19th century: state capital, city establishment, and development After Ohio achieved statehood in 1803, political infighting among prominent Ohio leaders led to the state capital moving from Chillicothe to Zanesville and back again. 
Desiring to settle on a location, the state legislature considered Franklinton, Dublin, Worthington and Delaware before compromising on a plan to build a new city in the state's center, near major transportation routes, primarily rivers. As well, Franklinton landowners had donated two plots in an effort to convince the state to move its capital there. The two spaces were set to become Capitol Square (for the Ohio Statehouse) and the Ohio Penitentiary. Named in honor of Christopher Columbus, the city was founded on February 14, 1812, on the "High Banks opposite Franklinton at the Forks of the Scioto most known as Wolf's Ridge." At the time, this area was a dense forestland, used only as a hunting ground. The city was incorporated as a borough on February 10, 1816. Nine people were elected to fill the municipality's various positions of mayor, treasurer and several others. Between 1816 and 1817, Jarvis W. Pike served as the first appointed mayor. Although the recent War of 1812 had brought prosperity to the area, the subsequent recession and conflicting claims to the land threatened the new town's success. Early conditions were abysmal, with frequent bouts of fevers – attributed to malaria from the flooding rivers – and an outbreak of cholera in 1833. It led Columbus to create the Board of Health, now part of the Columbus Public Health department. The outbreak, which remained in the city from July to September 1833, killed 100 people. Columbus was without direct river or trail connections to other Ohio cities, leading to slow initial growth. The National Road reached Columbus from Baltimore in 1831, which complemented the city's new link to the Ohio and Erie Canal, both of which facilitated a population boom. A wave of European immigrants led to the creation of two ethnic enclaves on the city's outskirts. A large Irish population settled in the north along Naghten Street (presently Nationwide Boulevard), while the Germans took advantage of the cheap land to the south, creating a community that came to be known as the Das Alte Südende (The Old South End). Columbus's German population constructed numerous breweries, Trinity Lutheran Seminary and Capital University. With a population of 3,500, Columbus was officially chartered as a city on March 3, 1834. On that day, the legislature carried out a special act, which granted legislative authority to the city council and judicial authority to the mayor. Elections were held in April of that year, with voters choosing John Brooks as the first popularly elected mayor. Columbus annexed the then-separate city of Franklinton in 1837. In 1850, the Columbus and Xenia Railroad became the first railroad into the city, followed by the Cleveland, Columbus and Cincinnati Railroad in 1851. The two railroads built a joint Union Station on the east side of High Street just north of Naghten (then called North Public Lane). Rail traffic into Columbus increased: by 1875, eight railroads served Columbus, and the rail companies built a new, more elaborate station. Another cholera outbreak hit Columbus in 1849, prompting the opening of the city's Green Lawn Cemetery. On January 7, 1857, the Ohio Statehouse finally opened after 18 years of construction. Site construction continued until 1861. Before the abolition of slavery in the Southern United States in 1863, the Underground Railroad was active in Columbus and was led, in part, by James Preston Poindexter. 
Poindexter arrived in Columbus in the 1830s and became a Baptist preacher and leader in the city's African-American community until the turn of the century. During the Civil War, Columbus was a major base for the volunteer Union Army. It housed 26,000 troops and held up to 9,000 Confederate prisoners of war at Camp Chase, at what is now the Hilltop neighborhood of west Columbus. Over 2,000 Confederate soldiers remain buried at the site, making it one of the North's largest Confederate cemeteries. North of Columbus, along the Delaware Road, the Regular Army established Camp Thomas, where the 18th U.S. Infantry organized and trained. By virtue of the Morrill Act of 1862, the Ohio Agricultural and Mechanical College – which eventually became the Ohio State University – was founded in 1870 on the former estate of William and Hannah Neil. By the end of the 19th century, Columbus was home to several major manufacturing businesses. The city became known as the "Buggy Capital of the World," thanks to the two dozen buggy factories – notably the Columbus Buggy Company, founded in 1875 by C.D. Firestone. The Columbus Consolidated Brewing Company also rose to prominence during this time and might have achieved even greater success were it not for the Anti-Saloon League in neighboring Westerville. In the steel industry, a forward-thinking man named Samuel P. Bush presided over the Buckeye Steel Castings Company. Columbus was also a popular location for labor organizations. In 1886, Samuel Gompers founded the American Federation of Labor in Druid's Hall on South Fourth Street, and in 1890, the United Mine Workers of America was founded at the old City Hall. In 1894, James Thurber, who would go on to an illustrious literary career in Paris and New York City, was born in the city. Today, Ohio State's theater department has a performance center named in his honor, and his childhood home, the Thurber House, is located in the Discovery District and is on the National Register of Historic Places. 20th century Columbus earned one of its nicknames, "The Arch City," because of the dozens of wooden arches that spanned High Street at the turn of the 20th century. The arches illuminated the thoroughfare and eventually became the means by which electric power was provided to the new streetcars. The city tore down the arches and replaced them with cluster lights in 1914 but reconstructed them from metal in the Short North neighborhood in 2002 for their unique historical interest. On March 25, 1913, the Great Flood of 1913 devastated the neighborhood of Franklinton, leaving over 90 people dead and thousands of West Side residents homeless. To prevent flooding, the Army Corps of Engineers recommended widening the Scioto River through downtown, constructing new bridges and building a retaining wall along its banks. With the strength of the post-World War I economy, a construction boom occurred in the 1920s, resulting in a new civic center, the Ohio Theatre, the American Insurance Union Citadel and to the north, a massive new Ohio Stadium. Although the American Professional Football Association was founded in Canton in 1920, its head offices moved to Columbus in 1921 to the New Hayden Building and remained in the city until 1941. In 1922, the association's name was changed to the National Football League. Nearly a decade later, in 1931, at a convention in the city, the Jehovah's Witnesses took that name by which they are known today. 
The effects of the Great Depression were less severe in Columbus, as the city's diversified economy helped it fare better than its Rust Belt neighbors. World War II brought many new jobs and another population surge. This time, most new arrivals were migrants from the "extraordinarily depressed rural areas" of Appalachia, who would soon account for more than a third of Columbus's growing population. In 1948, the Town and Country Shopping Center opened in suburban Whitehall, and it is now regarded as one of the first modern shopping centers in the United States. The construction of the Interstate Highway System signaled the arrival of rapid suburb development in central Ohio. To protect the city's tax base from this suburbanization, Columbus adopted a policy of linking sewer and water hookups to annexation to the city. By the early 1990s, Columbus had grown to become Ohio's largest city in land area and in population. Efforts to revitalize downtown Columbus have had some success in recent decades, though like most major American cities, some architectural heritage was lost in the process. In the 1970s, landmarks such as Union Station and the Neil House hotel were razed to construct high-rise offices and big retail space. The PNC Bank building was constructed in 1977, as well as the Nationwide Plaza buildings and other towers that sprouted during this period. The construction of the Greater Columbus Convention Center has brought major conventions and trade shows to the city. 21st century The Scioto Mile began development along the riverfront, an area that already had the Miranova Corporate Center and The Condominiums at North Bank Park. The 2010 United States foreclosure crisis forced the city to purchase numerous foreclosed, vacant properties to renovate or demolish them – at a cost of tens of millions of dollars. In February 2011, Columbus had 6,117 vacant properties, according to city officials. Since 2010, Columbus has been growing in population and economy; from 2010 to 2017, the city added 164,000 jobs, which ranked second in the United States. The city is focused on downtown revitalization, with recent projects being the Columbus Commons park, parks along the Scioto Mile developed along with a reshaped riverfront, and developments in the Arena District and Franklinton. In February and March 2020, Columbus reported its first official cases of COVID-19 and declared a state of emergency, with all nonessential businesses closed statewide. There were 69,244 cases of the disease across the city, . Later in 2020, protests over the murder of George Floyd took place in the city from May 28 into August. Geography The confluence of the Scioto and Olentangy rivers is just northwest of Downtown Columbus. Several smaller tributaries course through the Columbus metropolitan area, including Alum Creek, Big Walnut Creek and Darby Creek. Columbus is considered to have relatively flat topography thanks to a large glacier that covered most of Ohio during the Wisconsin Ice Age. However, there are sizable differences in elevation through the area, with the high point of Franklin County being above sea level near New Albany, and the low point being where the Scioto River leaves the county near Lockbourne. Numerous ravines near the rivers and creeks also add variety to the landscape. Tributaries to Alum Creek and the Olentangy River cut through shale, while tributaries to the Scioto River cut through limestone. 
The numerous rivers and streams that run beside low-lying areas in Central Ohio contribute to a history of flooding in the region; the most significant was the Great Flood of 1913. The city has a total area of , of which is land and is water. Columbus currently has the largest land area of any Ohio city; this is due to Jim Rhodes's tactic of annexing suburbs while serving as mayor. As surrounding communities grew or were constructed, they came to require access to waterlines, which the municipal water system solely controlled. Rhodes told these communities that if they wanted water, they would have to submit to assimilation into Columbus. Neighborhoods Columbus has a wide diversity of neighborhoods with different characters, and is thus sometimes known as a "city of neighborhoods." Some of the most prominent neighborhoods include the Arena District, the Brewery District, Clintonville, Franklinton, German Village, The Short North and Victorian Village. Climate The city's climate is humid continental (Köppen climate classification Dfa), transitional with the humid subtropical climate to the south, and is characterized by warm, muggy summers and cold, dry winters. Columbus lies within USDA hardiness zone 6b, bordering on 7a, under the 1991 to 2020 climate normals (formerly zone 6a). Winter snowfall is relatively light, since the city is not in the typical path of strong winter lows, such as the Nor'easters that strike cities farther east. It is also too far south and west for lake-effect snow from Lake Erie to have much effect, although the lakes to the north contribute to long stretches of cloudy spells in winter. The highest temperature recorded in Columbus is , which occurred twice during the Dust Bowl of the 1930s: once on July 21, 1934, and again on July 14, 1936. The lowest recorded temperature was , occurring on January 19, 1994. Columbus is subject to severe weather typical of the Midwestern United States. Severe thunderstorms can bring lightning, large hail and, on rare occasions, tornadoes, especially during the spring and sometimes through fall. A tornado that occurred on October 11, 2006, caused F2 damage. Floods, blizzards and ice storms can also occur from time to time. Demographics 2020 census In the 2020 United States census, there were 905,748 people and 362,626 households residing in the city. The racial makeup of the city was 57.4% White, 29.2% Black or African American, 0.2% Native American or Alaska Native, and 5.9% Asian. Hispanic or Latino of any race made up 6.3% of the population. 2010 census In the 2010 United States census, there were 787,033 people, 331,602 households and 176,037 families residing in the city. The population density was . There were 370,965 housing units at an average density of . The census tallied 815,985 racial identifications, as some residents identified with more than one race. The racial makeup was 61.9% White, 29.1% Black or African American, 1% Native American or Alaska Native, 4.6% Asian, 0.2% Native Hawaiian or Pacific Islander, and 3.2% from other races. Hispanic or Latino of any race were 5.9% of the population. Of the 331,602 households, 29.1% had children under the age of 18, 32% were married couples living together, 15.9% had a female householder with no husband present, 5.1% had a male householder with no wife present and 46.9% were non-families. 35.1% of all households were made up of individuals, and 7.2% had someone living alone who was 65 years of age or older. The average household size was 2.31, and the average family size was 3.04.
The median age in the city was 31.2 years. The age makeup of the city was 23.2% of residents under the age of 18, 14% between the ages of 18 and 24, 32.3% between 25 and 44, 21.8% between 45 and 64, and 8.6% 65 years of age or older. The gender makeup of the city was 48.8% male and 51.2% female. Population makeup Columbus historically had a significant population of white people. In 1900, whites made up 93.4% of the population. Although European immigration has declined, the Columbus metropolitan area has recently experienced increases in African, Asian and Latin American immigration, including groups from Mexico, India, Nepal, Bhutan, Somalia and China. While the Asian population is diverse, the city's Hispanic community is mainly made up of Mexican Americans, although there is a notable Puerto Rican population. Many other countries of origin are represented in lesser numbers, largely due to the international draw of Ohio State University. 2008 estimates indicated that roughly 116,000 of the city's residents were foreign-born, accounting for 82% of the new residents between 2000 and 2006 at a rate of 105 per week. 40% of the immigrants came from Asia, 23% from Africa, 22% from Latin America and 13% from Europe. The city had the second-largest Somali and Somali American population in the country, as of 2004, as well as the largest expatriate Bhutanese-Nepali population in the world, as of 2018. Due to its demographics, which include a mix of races and a wide range of incomes, as well as urban, suburban and nearby rural areas, Columbus is considered a "typical" American city, leading retail and restaurant chains to use it as a test market for new products. Columbus has maintained steady population growth since its establishment. Its slowest growth, from 1850 to 1860, is primarily attributed to the city's cholera epidemic in the 1850s. According to the 2017 Japanese Direct Investment Survey by the Consulate-General of Japan, Detroit, 838 Japanese nationals lived in Columbus, making it the municipality with the state's second-largest Japanese national population, after Dublin. Columbus is home to a sizable LGBT community, with an estimated 34,952 gay, lesbian or bisexual residents. The 2018 American Community Survey (ACS) reported an estimated 366,034 households, 32,276 of which were held by unmarried partners. 1,395 of these were female householder and female-partner households, and 1,456 were male householder and male-partner households. Columbus has been rated as one of the best cities in the country for gays and lesbians to live in, and also as the most underrated gay city in the country. In July 2012, three years prior to legal same-sex marriage in the United States, the Columbus City Council unanimously passed a domestic partnership registry. Italian-American community and symbols Columbus has numerous Italian Americans, with groups including the Columbus Italian Club, Columbus Piave Club and the Abruzzi Club. Italian Village, a neighborhood near Downtown Columbus, has had a prominent Italian American community since the 1890s. The community has helped promote the influence Christopher Columbus had in drawing European attention to the Americas. The Italian explorer, erroneously credited with the lands' discovery, has been posthumously criticized by historians for initiating colonization and for abuse, enslavement and subjugation of natives. In addition to the city being named for the explorer, its seal and flag depict a ship he used for his first voyage to the Americas, the Santa María.
A similar-size replica of the ship, the Santa Maria Ship & Museum, was displayed downtown from 1991 to 2014. The city's Discovery District and Discovery Bridge are named in reference to Columbus's "discovery" of the Americas; the bridge includes artistic bronze medallions featuring symbols of the explorer. Genoa Park, downtown, is named after Genoa, the birthplace of Christopher Columbus and one of Columbus's sister cities. The Christopher Columbus Quincentennial Jubilee, celebrating the 500th anniversary of Columbus's first voyage, was held in the city in 1992. Its organizers spent $95 million on it, creating the horticultural exhibition AmeriFlora '92. The organizers also planned to create a replica Native American village, among other attractions. Local and national native leaders protested the event with a day of mourning, followed by protests and fasts at City Hall. The protests prevented the native village from being exhibited, and annual fasts continued until 1997. A protest also took place during the dedication of the Santa Maria replica, an event held in late 1991 on the day before Columbus Day and in time for the jubilee. The city has three outdoor statues of the explorer; the statue at City Hall was acquired, delivered and dedicated with the assistance of the Italian American community. Protests in 2017 aimed for this statue to be removed, followed by the city in 2018 ceasing to recognize Columbus Day as a city holiday. During the 2020 George Floyd protests, petitions were created to remove all three statues and rename the city of Columbus. Two of the statues – one at City Hall and the other at Columbus State Community College – were removed, while the city is also looking into changing its flag and seal to remove the reference to Christopher Columbus. The future of the third statue, at the Ohio Statehouse, will be discussed in a meeting on July 16. The city was the first of eight cities to be offered the Birth of the New World statue, in 1993. The statue, also of Christopher Columbus, was completed in Puerto Rico in 2016, and is the tallest in the United States – taller than the Statue of Liberty, including its pedestal. At least six U.S. cities rejected it, including Columbus, based on its height and design. Religion According to the 2019 American Values Atlas, 26% of Columbus metropolitan area residents are unaffiliated with a religious tradition. 17% of area residents identify as White evangelical Protestants, 14% as White mainline Protestants, 11% as Black Protestants, 11% as White Catholics, 5% as Hispanic Catholics, 3% as other nonwhite Catholics, 2% as other nonwhite Protestants and 2% as Mormons. Hindus, Buddhists, Jews and Latino Protestants each made up 1% of the population, while Jehovah's Witnesses, Orthodox Christians, Muslims, Unitarians, and members of New Age or other religions each made up under 0.5% of the population. Places of worship include Baptist, Evangelical, Greek Orthodox, Latter-day Saints, Lutheran, Presbyterian, Quaker, Roman Catholic, and Unitarian Universalist churches. Columbus also hosts several Islamic mosques, Jewish synagogues, Buddhist centers, Hindu temples and a branch of the International Society for Krishna Consciousness. Religious teaching institutions include the Pontifical College Josephinum and several private schools led by Christian organizations. 
Economy Columbus has a generally strong and diverse economy based on education, insurance, banking, fashion, defense, aviation, food, logistics, steel, energy, medical research, health care, hospitality, retail and technology. In 2010, it was one of the 10 best big cities in the country, according to Relocate America, a real estate research firm. According to the Federal Reserve Bank of St Louis, the GDP of Columbus in 2019 was $134 billion. During the Great Recession between 2007 and 2009, Columbus's economy was not impacted as much as the rest of the country, due to decades of diversification work by long-time corporate residents, business leaders and political leaders. The administration of former mayor Michael B. Coleman continued this work, although the city faced financial turmoil and had to increase taxes, allegedly due in part to fiscal mismanagement. Because Columbus is the state capital, there is a large government presence in the city. Including city, county, state and federal employers, government jobs provide the largest single source of employment within Columbus. In 2019, the city had six corporations named to the U.S. Fortune 500 list: Alliance Data, Nationwide Mutual Insurance Company, American Electric Power, L Brands, Huntington Bancshares and Cardinal Health in suburban Dublin. Other major employers include schools (e.g., the Ohio State University) and hospitals (among others, Wexner Medical Center and Nationwide Children's Hospital, which are among the teaching hospitals of the Ohio State University College of Medicine), high-tech research and development such as the Battelle Memorial Institute, information/library companies such as OCLC and Chemical Abstracts Service, steel processing and pressure cylinder manufacturer Worthington Industries, financial institutions such as JPMorgan Chase and Huntington Bancshares, as well as Owens Corning. Fast-food chains Wendy's and White Castle are also headquartered in the Columbus area. Major foreign corporations operating or with divisions in the city include Germany-based Siemens and Roxane Laboratories, Finland-based Vaisala, Tomasco Mulciber Inc., A Y Manufacturing, as well as Switzerland-based ABB and Mettler Toledo. The city also has a significant fashion and retail presence, home to companies such as Big Lots, L Brands, Abercrombie & Fitch, DSW and Express. Food and beverage industry North Market, a public market and food hall, is located downtown near the Short North. It is the only remaining public market of Columbus's original four marketplaces. Numerous restaurant chains are based in the Columbus area, including Charleys Philly Steaks, Bibibop Asian Grill, Steak Escape, White Castle, Cameron Mitchell Restaurants, Bob Evans Restaurants, Max & Erma's, Damon's Grill, Donatos Pizza and Wendy's. Wendy's, the world's third-largest hamburger fast-food chain, operated its first store downtown as both a museum and a restaurant until March 2007, when the establishment was closed due to low revenue. The company is presently headquartered outside the city in nearby Dublin. Budweiser has a major brewery located on the north side, just south of I-270 and Worthington. Columbus is also home to many local micro breweries and pubs. Asian frozen food manufacturer Kahiki Foods was located on the east side of Columbus, created during the operation of the Kahiki Supper Club restaurant in Columbus. The food company now operates in the suburb of Gahanna and has been owned by the South Korean-based company CJ CheilJedang since 2018. 
Wasserstrom Company, a major supplier of equipment and supplies for restaurants, is located on the north side. Arts and culture Landmarks Columbus has over 170 notable buildings listed on the National Register of Historic Places; it also maintains its own register, the Columbus Register of Historic Properties, with 82 entries. The city also maintains four historic districts not listed on its register: German Village, Italian Village, Victorian Village, and the Brewery District. Construction of the Ohio Statehouse began in 1839 on a plot of land donated by four prominent Columbus landowners. This plot formed Capitol Square, which was not part of the city's original layout. Built of Columbus limestone from the Marble Cliff Quarry Co., the Statehouse stands on foundations deep that were laid by prison labor gangs rumored to have been composed largely of masons jailed for minor infractions. It features a central recessed porch with a colonnade of a forthright and primitive Greek Doric mode. A broad and low central pediment supports the windowed astylar drum under an invisibly low saucer dome that lights the interior rotunda. There are several artworks within and outside the building, including the William McKinley Monument dedicated in 1907. Unlike many U.S. state capitol buildings, the Ohio State Capitol owes little to the architecture of the national Capitol. During the Statehouse's 22-year construction, seven architects were employed. The Statehouse was opened to the legislature and the public in 1857 and completed in 1861, and is located at the intersection of Broad and High streets in downtown Columbus. Within the Driving Park heritage district lies the original home of Eddie Rickenbacker, a World War I fighter pilot ace. Built in 1895, the house was designated a National Historic Landmark in 1976. Demolitions and redevelopment Demolition has been a common trend in Columbus for a long period of time, and continues into the present day. Preservationists and the public have sometimes run into conflict with developers hoping to revitalize an area, and historically with the city and state government, which led programs of urban renewal in the 20th century. Museums and public art Columbus has a wide variety of museums and galleries. Its primary art museum is the Columbus Museum of Art, which operates its main location as well as the Pizzuti Collection, featuring contemporary art. The museum, founded in 1878, focuses on European and American art up to early modernism that includes extraordinary examples of Impressionism, German Expressionism and Cubism. Another prominent art museum in the city is the Wexner Center for the Arts, a contemporary art gallery and research facility operated by the Ohio State University. The Ohio History Connection is headquartered in Columbus, with its flagship museum, the Ohio History Center, north of downtown. Adjacent to the museum is Ohio Village, a replica of a village around the time of the American Civil War. The Columbus Historical Society also features historical exhibits, which focus more closely on life in Columbus. COSI is a large science and children's museum in downtown Columbus. The present building, the former Central High School, was completed in November 1999, opposite downtown on the west bank of the Scioto River. In 2009, Parents magazine named COSI one of the 10 best science centers for families in the country. 
Other science museums include the Orton Geological Museum and the Museum of Biological Diversity, which are both part of the Ohio State University. The Franklin Park Conservatory is the city's botanical garden, which opened in 1895. It features over 400 species of plants in a large Victorian-style glass greenhouse building that includes rain forest, desert and Himalayan mountain biomes. The conservatory is located just east of Downtown in Franklin Park. Biographical museums include the Thurber House (documenting the life of cartoonist James Thurber), the Jack Nicklaus Museum (documenting the golfer's career, located on the OSU campus) and the Kelton House Museum and Garden, the last of which is a historic house museum memorializing three generations of the Kelton family, the house's use as a documented station on the Underground Railroad, and overall Victorian life. The National Veterans Memorial and Museum, which opened in 2018, focuses on the personal stories of military veterans throughout U.S. history. The museum replaced the Franklin County Veterans Memorial, which opened in 1955. Other notable museums in the city include the Central Ohio Fire Museum, Billy Ireland Cartoon Library & Museum and the Ohio Craft Museum. Performing arts Columbus is the home of many performing arts institutions, including the Columbus Symphony Orchestra, Opera Columbus, BalletMet Columbus, the ProMusica Chamber Orchestra, CATCO, Columbus Children's Theatre, Shadowbox Live, and the Columbus Jazz Orchestra. Throughout the summer, the Actors' Theatre of Columbus offers free performances of Shakespearean plays in an open-air amphitheater in Schiller Park in historic German Village. The Columbus Youth Ballet Academy was founded in the 1980s by ballerina and artistic director Shir Lee Wu, a discovery of Martha Graham. Wu is now the artistic director of the Columbus City Ballet School. Columbus has several large concert venues, including the Nationwide Arena, Value City Arena, Express Live!, Mershon Auditorium and the Newport Music Hall. In May 2009, the Lincoln Theatre, formerly a center for Black culture in Columbus, reopened after an extensive restoration. Not far from the Lincoln Theatre is the King Arts Complex, which hosts a variety of cultural events. The city also has several theaters downtown, including the historic Palace Theatre, the Ohio Theatre and the Southern Theatre. Broadway Across America often presents touring Broadway musicals in these larger venues. The Vern Riffe Center for Government and the Arts houses the Capitol Theatre and three smaller studio theaters, providing a home for resident performing arts companies. Film Movies filmed in the Columbus metropolitan area include Teachers in 1984, Tango & Cash in 1989, Little Man Tate in 1991, Air Force One in 1997, Traffic in 2000, Speak in 2004, Bubble in 2005, Liberal Arts in 2012, Parker in 2013, I Am Wrath in 2016, Aftermath in 2017, They/Them/Us in 2021, and Bones and All in 2022. The 2018 film Ready Player One is set in Columbus, though not filmed in the city. Sports Professional teams Columbus hosts two major league professional sports teams: the Columbus Blue Jackets of the National Hockey League (NHL), which play at Nationwide Arena, and the Columbus Crew of Major League Soccer (MLS), which play at Lower.com Field. The Crew previously played at Historic Crew Stadium, the first soccer-specific stadium built in the United States for a Major League Soccer team.
The Crew were one of the original members of MLS and won their first MLS Cup in 2008, with a second title in 2020. The Columbus Crew moved into Lower.com Field in the summer of 2021, which will also feature a mixed-use development site named Confluence Village. The Columbus Clippers, the International League affiliate of the Cleveland Guardians, play in Huntington Park, which opened in 2009. The city was home to the Panhandles/Tigers football team from 1901 to 1926; they are credited with playing in the first NFL game against another NFL opponent. In the late 1990s, the Columbus Quest won the only two championships during American Basketball League's two-and-a-half season existence. The Ohio Aviators were based in Obetz, Ohio, and began play in the only PRO Rugby season before the league folded. Ohio State Buckeyes Columbus is home to one of the nation's most competitive intercollegiate programs, the Ohio State Buckeyes of Ohio State University. The program has placed in the top 10 final standings of the Director's Cup five times since 2000–2001, including No. 3 for the 2002–2003 season and No. 4 for the 2003–2004 season. The university funds 36 varsity teams, consisting of 17 male, 16 female and three co-educational teams. In 2007–2008 and 2008–2009, the program generated the second-most revenue for college programs behind the Texas Longhorns of The University of Texas at Austin. The Ohio State Buckeyes are a member of the NCAA's Big Ten Conference, and their football team plays home games at Ohio Stadium. The Ohio State–Michigan football game (known colloquially as "The Game") is the final game of the regular season and is played in November each year, alternating between Columbus and Ann Arbor, Michigan. In 2000, ESPN ranked the Ohio State–Michigan game as the greatest rivalry in North American sports. Moreover, "Buckeye fever" permeates Columbus culture year-round and forms a major part of Columbus's cultural identity. Former New York Yankees owner George Steinbrenner, an Ohio native who received a master's degree from Ohio State and coached in Columbus, was an Ohio State football fan and major donor to the university who contributed to the construction of the band facility at the renovated Ohio Stadium, which bears his family's name. During the winter months, the Buckeyes basketball and hockey teams are also major sporting attractions. Other sports Columbus has a long history in motorsports, hosting the world's first 24-hour car race at the Columbus Driving Park in 1905, which was organized by the Columbus Auto Club. The Columbus Motor Speedway was built in 1945 and held its first motorcycle race in 1946. In 2010, the Ohio State University student-built Buckeye Bullet 2, a fuel-cell vehicle, set an FIA world speed record for electric vehicles in reaching 303.025 mph, eclipsing the previous record of 302.877 mph. The annual All American Quarter Horse Congress, the world's largest single-breed horse show, attracts approximately 500,000 visitors to the Ohio Expo Center each October. Columbus hosts the annual Arnold Sports Festival. Hosted by Arnold Schwarzenegger, the event has grown to eight Olympic sports and 22,000 athletes competing in 80 events. In conjunction with the Arnold Classic, the city hosted three consecutive Ultimate Fighting Championship events between 2007 and 2009, as well as other mixed martial arts events. Westside Barbell, a world-renowned powerlifting gym, is located in Columbus. 
Its founder, Louie Simmons, is known for popularizing the "Conjugate Method" and is also credited with inventing training machines for reverse hyper-extensions and belt squats. Westside Barbell is known for producing multiple world record holders in powerlifting. The Columbus Bullies were two-time champions of the American Football League (1940–1941). The Columbus Thunderbolts were formed in 1991 for the Arena Football League, and then relocated to Cleveland as the Cleveland Thunderbolts; the Columbus Destroyers were the next team of the AFL, playing from 2004 until the league's demise in 2008 and returning for a single season in 2019 before the league folded a second time. Ohio Roller Derby (formerly Ohio Roller Girls) was founded in Columbus in 2005 and still competes internationally in Women's Flat Track Derby Association play. The team is regularly ranked in the top 60 internationally. Parks and attractions Columbus's Recreation and Parks Department oversees about 370 city parks. Also in the area are 19 regional parks and the Metro Parks, which are part of the Columbus and Franklin County Metropolitan Park District. These parks include Clintonville's Whetstone Park and the Columbus Park of Roses, a rose garden. The Chadwick Arboretum on Ohio State's campus features a large and varied collection of plants, while its Olentangy River Wetland Research Park is an experimental wetland open to the public. Downtown, the painting A Sunday Afternoon on the Island of La Grande Jatte is represented in topiary at Columbus's Topiary Park. Also near downtown, the Scioto Audubon Metro Park on the Whittier Peninsula opened in 2009 and includes a large Audubon nature center focused on the birdwatching the area is known for. The Columbus Zoo and Aquarium's collections include lowland gorillas, polar bears, manatees, Siberian tigers, cheetahs and kangaroos. Also in the zoo complex is the Zoombezi Bay water park and amusement park. Fairs and festivals Annual festivities in Columbus include the Ohio State Fair – one of the largest state fairs in the country – as well as the Columbus Arts Festival and the Jazz & Rib Fest, both of which occur on the downtown riverfront. In mid-May from 2007 to 2018, Columbus was home to Rock on the Range, which was held at Historic Crew Stadium and marketed as America's biggest rock festival. The festival, which took place over a Friday, Saturday and Sunday, hosted Metallica, Red Hot Chili Peppers, Slipknot and other notable bands. In May 2019, it was officially replaced by the Sonic Temple Art & Music Festival. During the first weekend in June, the bars of Columbus's North Market District host the Park Street Festival, which attracts thousands of visitors to a massive party in bars and on the street. June's second-to-last weekend sees one of the Midwest's largest gay pride parades, Columbus Pride, reflecting the city's sizable gay population. During the last weekend of June, Goodale Park hosts ComFest (short for "Community Festival"), an immense three-day music festival marketed as the largest non-commercial festival in the U.S., with art vendors, live music on multiple stages, hundreds of local social and political organizations, body painting and beer. The Greek Festival is held in August or September at the Greek Orthodox Church downtown. The Hot Times Community Arts & Music Festival, a celebration of music, arts, food and diversity, is held annually in the Olde Towne East neighborhood.
The city's largest dining event, Restaurant Week Columbus, is held twice a year in mid-January and mid-July. In 2010, more than 40,000 diners went to 40 participating restaurants, and $5,000 was donated to the Mid-Ohio Foodbank on behalf of sponsors and participating restaurants. The Juneteenth Ohio Festival is held each year at Franklin Park on Father's Day weekend. Started by Mustafaa Shabazz, Juneteenth Ohio is one of the largest African American festivals in the United States, including three full days of music, food, dance and entertainment by local and national recording artists. The festival holds a Father's Day celebration, honoring local fathers. Around the Fourth of July, Columbus hosts Red, White & Boom! on the Scioto riverfront downtown, attracting crowds of over 500,000 people and featuring the largest fireworks display in Ohio. The Doo Dah Parade is also held at this time. During Memorial Day Weekend, the Asian Festival is held in Franklin Park. Hundreds of restaurants, vendors and companies open up booths, and traditional music is played, martial arts are performed and cultural exhibits are set up. The Jazz & Rib Fest is a free downtown event held each July, featuring jazz artists like Randy Weston, D. Bohannon Clark and Wayne Shorter, along with rib vendors from around the country. The Short North is host to the monthly Gallery Hop, which attracts hundreds to the neighborhood's art galleries (which all open their doors to the public until late at night) and street musicians. The Hilltop Bean Dinner is an annual event held on Columbus's West Side that celebrates the city's Civil War heritage near the historic Camp Chase Cemetery. At the end of September, German Village throws an annual Oktoberfest celebration that features German food, beer, music and crafts. The Short North also hosts HighBall Halloween and Masquerade on High, a fashion show and street parade that closes down High Street. In 2011, its fourth year, HighBall Halloween gained recognition when it accepted its first Expy award. HighBall Halloween offers much for those interested in fashion and the performing and visual arts, as well as for those who want to celebrate Halloween with food and drinks from around the city. Each year, the event is put on with a different theme. Columbus also hosts many conventions in the Greater Columbus Convention Center, a large convention center on the north edge of downtown. Completed in 1993, the convention center was designed by architect Peter Eisenman, who also designed the Wexner Center. Shopping Both of the metropolitan area's major shopping centers are located in Columbus: Easton Town Center and Polaris Fashion Place. Developer Richard E. Jacobs built the area's first three major shopping malls in the 1960s: Westland, Northland and Eastland. Of these, only Eastland remains in operation. Near the former Northland Mall is The Continent, an open-air mall in the Northland area that is now mostly vacant and pending redevelopment. Columbus City Center was built downtown in 1988, alongside the first location of Lazarus; this mall closed in 2009 and was demolished in 2011. Easton Town Center was built in 1999 and Polaris Fashion Place in 2001. Environment The City of Columbus has focused on reducing its environmental impact and carbon footprint. In 2020, a citywide ballot measure was approved, giving Columbus an electricity aggregation plan intended to supply it with 100% renewable energy by the start of 2023.
Its vendor, AEP Energy, plans to construct new wind and solar farms in Ohio to help supply the electricity. The largest sources of pollution in the county, as of 2019, are the Ohio State University's McCracken Power Plant, the landfill operated by the Solid Waste Authority of Central Ohio (SWACO) and the Anheuser-Busch Columbus Brewery. Anheuser-Busch has a company-wide goal of reducing emissions by 25% by 2025. Ohio State plans to construct a new heat and power plant, also powered by fossil fuels, but set to reduce emissions by about 30%. SWACO manages to capture 75% of its methane emissions to use in producing energy, and is looking to reduce emissions further. Government Mayor and city council The city is administered by a mayor and a seven-member unicameral council elected in two classes every two years to four-year terms at large. Columbus is the largest city in the United States that elects its city council at large as opposed to districts. The mayor appoints the director of safety and the director of public service. The people elect the auditor, municipal court clerk, municipal court judges and city attorney. A charter commission, elected in 1913, submitted a new charter in May 1914, offering a modified federal form, with a number of progressive features, such as nonpartisan ballot, preferential voting, recall of elected officials, the referendum and a small council elected at large. The charter was adopted, effective January 1, 1916. Andrew Ginther has been the mayor of Columbus since 2016. Government offices As Ohio's capital and the county seat, Columbus hosts numerous federal, state, county and city government offices and courts. Federal offices include the Joseph P. Kinneary U.S. Courthouse, one of several courts for the District Court for the Southern District of Ohio, after moving from 121 E. State St. in 1934. Another federal office, the John W. Bricker Federal Building, has offices for U.S. Senator Sherrod Brown, as well as for the Internal Revenue Service, the Social Security Administration and the Departments of Housing & Urban Development and Agriculture. The State of Ohio's capitol building, the Ohio Statehouse, is located in the center of downtown on Capitol Square. It houses the Ohio House of Representatives and Ohio Senate. It also contains the ceremonial offices of the governor, lieutenant governor, state treasurer and state auditor. The Supreme Court, Court of Claims and Judicial Conference are located in the Thomas J. Moyer Ohio Judicial Center downtown by the Scioto River. The building, built in 1933 to house 10 state agencies along with the State Library of Ohio, became the Supreme Court after extensive renovations from 2001 to 2004. Franklin County operates the Franklin County Government Center, a complex at the southern end of downtown Columbus. The center includes the county's municipal court, common pleas court, correctional center, juvenile detention center and sheriff's office. Near City Hall, the Michael B. Coleman Government Center holds offices for the departments of building and zoning services, public service, development and public utilities. Also nearby is 77 North Front Street, which holds Columbus's city attorney office, income-tax division, public safety, human resources, civil service and purchasing departments. The structure, built in 1929, was the police headquarters until 1991, and was then dormant until it was given a $34 million renovation from 2011 to 2013. 
Emergency services and homeland security Municipal police duties are performed by the Columbus Division of Police, while emergency medical services (EMS) and fire protection are through the Columbus Division of Fire. Ohio Homeland Security operates the Strategic Analysis and Information Center (SAIC) fusion center in Columbus's Hilltop neighborhood. The facility is the state's primary public intelligence hub and one of the few in the country that uses state, local, federal and private resources. Social services and homelessness Columbus has a history of governmental and nonprofit support for low-income residents and the homeless. Nevertheless, the homelessness rate has steadily risen since at least 2007. Poverty and differences in quality of life have grown, as well; Columbus was noted as the second-most economically segregated large metropolitan area in 2015, in a study by the University of Toronto. It also ranked 45th of the 50 largest metropolitan areas in terms of social mobility, according to a 2015 Harvard University study. Education Colleges and universities Columbus is the home of two public colleges: the Ohio State University, one of the largest college campuses in the United States, and Columbus State Community College. In 2009, Ohio State University was ranked No. 19 in the country by U.S. News & World Report on its list of best public universities, and No. 56 overall, scoring in the first tier of schools nationally. Some of Ohio State's graduate school programs placed in the top 5, including No. 5 for both best veterinary programs and best pharmacy programs. The specialty graduate programs of social psychology was ranked No. 2, dispute resolution was No. 5, vocational education was No. 2, and elementary education, secondary teacher education, administration/supervision was No. 5. Private institutions in Columbus include Capital University Law School, the Columbus College of Art and Design, Fortis College, DeVry University, Ohio Business College, Miami-Jacobs Career College, Ohio Institute of Health Careers, Bradford School and Franklin University, as well as the religious schools Bexley Hall Episcopal Seminary, Mount Carmel College of Nursing, Ohio Dominican University, Pontifical College Josephinum and Trinity Lutheran Seminary. Three major suburban schools also have an influence on Columbus's educational landscape: Bexley's Capital University, Westerville's Otterbein University and Delaware's Ohio Wesleyan University. Primary and secondary schools Columbus City Schools (CCS) is the largest district in Ohio, with 55,000 pupils. CCS operates 142 elementary, middle and high schools, including a number of magnet schools (which are referred to as alternative schools within the school system). The suburbs operate their own districts, typically serving students in one or more townships, with districts sometimes crossing municipal boundaries. The Roman Catholic Diocese of Columbus also operates several parochial elementary and high schools. The area's second-largest school district is South-Western City Schools, which encompasses southwestern Franklin County, including a slice of Columbus itself. Other portions of Columbus are zoned to the Dublin, New Albany-Plain, Westerville and Worthington school districts. There are also several private schools in the area, such as St. Paul's Lutheran School, a K-8 Christian school of the Wisconsin Evangelical Lutheran Synod in Columbus. 
Some sources hold that the first kindergarten in the United States was established here by Louisa Frankenberg, a former student of Friedrich Fröbel. Frankenberg immigrated to the city in 1838 and opened her kindergarten in the German Village neighborhood that year. The school was not successful, and she returned to Germany in 1840. In 1858, Frankenberg returned to Columbus and established another early kindergarten in the city. Frankenberg is often overlooked, with Margarethe Schurz instead given credit for the "First Kindergarten," which she operated for two years. In addition, Indianola Junior High School (now the Graham Elementary and Middle School) became the nation's first junior high school in 1909, helping to bridge the difficult transition from elementary to high school at a time when only 48% of students continued their education after the ninth grade. Libraries The Columbus Metropolitan Library (CML) has served central Ohio residents since 1873. The system has 23 locations throughout Central Ohio, with a total collection of 3 million items. It is one of the country's most-used library systems and is consistently among the top-ranked large city libraries according to Hennen's American Public Library Ratings. CML was rated the No. 1 library system in the nation in 1999, 2005 and 2008. It has been in the top four every year since 1999, when the rankings were first published in American Libraries magazine, often challenging its northern neighbor, the Cuyahoga County Public Library, for the top spot. Weekend education The classes of the Columbus Japanese Language School, a weekend Japanese school, are held in a facility of the school district in Marysville, while the school office is in Worthington. Previously, it held classes at facilities in the city of Columbus. Media Several weekly and daily newspapers serve Columbus and Central Ohio. The major daily newspaper in Columbus is The Columbus Dispatch. There are also neighborhood- or suburb-specific papers, such as the Dispatch Printing Company's ThisWeek Community News, the Columbus Messenger, the Clintonville Spotlight and the Short North Gazette. The Lantern and 1870 serve the Ohio State University community. Alternative arts, culture or politics-oriented papers include ALIVE (formerly the independent Columbus Alive and now owned by the Columbus Dispatch), Columbus Free Press and Columbus Underground (digital-only). The Columbus Magazine, CityScene, 614 Magazine and Columbus Monthly are the city's magazines. Columbus is the base for 12 television stations and is the 32nd-largest television market as of September 24, 2016. Columbus is also home to the 36th-largest radio market. Infrastructure Healthcare Numerous medical systems operate in Columbus and Central Ohio. These include OhioHealth, which has three hospitals in the city proper: Grant Medical Center, Riverside Methodist Hospital, and Doctors Hospital; Mount Carmel Health System, which has one hospital among other facilities; the Ohio State University Wexner Medical Center, which has a primary hospital complex and an east campus in Columbus; and Nationwide Children's Hospital, which is an independently operated hospital for pediatric health care. Hospitals in Central Ohio are ranked favorably by U.S. News & World Report, which ranks numerous area hospitals among the best in particular fields in the United States. Nationwide Children's is regarded as among the top 10 children's hospitals in the country, according to the report.
Utilities Numerous utility companies operate in Central Ohio. Within Columbus, power is sourced from Columbus Southern Power, an American Electric Power subsidiary. Natural gas is provided by Columbia Gas of Ohio, while water is sourced from the City of Columbus Division of Water. Transportation Local roads, grid and address system The city's two main corridors since its founding are Broad and High Streets. Both extend beyond the city limits; High Street is the longest in Columbus, running (23.4 across the county), while Broad Street is longer across the county, at . The city's street plan originates downtown and extends into the old-growth neighborhoods, following a grid pattern with the intersection of High Street (running north–south) and Broad Street (running east–west) at its center. North–south streets run 12 degrees west of due north, parallel to High Street; the avenues (viz. Fifth Avenue, Sixth Avenue, Seventh Avenue, and so on) run 12 degrees off from east–west. The address system begins its numbering at the intersection of Broad and High, with numbers increasing in magnitude with distance from Broad or High, as well as cardinal directions used alongside street names. Numbered avenues begin with First Avenue, about north of Broad Street, and increase in number as one progresses northward. Numbered streets begin with Second Street, which is two blocks west of High Street, and Third Street, which is a block east of High Street, then progress eastward from there. Even-numbered addresses are on the north and east sides of streets, putting odd addresses on the south and west sides of streets. A difference of 700 house numbers corresponds to a distance of about one mile along the same street. For example, 351 W. Fifth Ave. is approximately half a mile west of High Street, on the south side of Fifth Avenue. Buildings along north–south streets are numbered in a similar manner: the building number indicates the approximate distance from Broad Street, the prefixes "N" and "S" indicate whether that distance is to be measured to the north or south of Broad Street and the street number itself indicates how far the street is from the center of the city at the intersection of Broad and High. This street numbering system does not hold true over a large area. The area served by numbered avenues runs from about Marble Cliff to South Linden to the Airport, and the area served by numbered streets covers Downtown and nearby neighborhoods to the east and south, with only a few exceptions. There are relatively few intersections between numbered streets and numbered avenues. Furthermore, named streets and avenues can have any orientation. For example, while all of the numbered avenues run east–west, perpendicular to High Street, many named, non-numbered avenues run north–south, parallel to High. The same is true of many named streets: while the numbered streets in the city run north–south, perpendicular to Broad Street, many named, non-numbered streets run east–west, perpendicular to High Street. The addressing system, however, covers nearly all of Franklin County, with only a few older suburbs retaining self-centered address systems. The address scale of 700 per mile results in addresses approaching, but not usually reaching, 10,000 at the county's borders (a short worked sketch of this arithmetic appears below). Other major local roads in Columbus include Main Street, Morse Road, Dublin-Granville Road (SR-161), Cleveland Avenue/Westerville Road (SR-3), Olentangy River Road, Riverside Drive, Sunbury Road, Fifth Avenue and Livingston Avenue.
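The address arithmetic described above, roughly 700 house numbers per mile counted outward from the Broad and High intersection, can be restated as a simple rule of thumb. The following Python snippet is a minimal illustrative sketch, not an official formula: the function name, constant and sample values are assumptions for demonstration, and actual distances vary with block layout and the grid's irregular edges.

```python
# Illustrative sketch of the Columbus address scale described above:
# house numbers increase by roughly 700 per mile from the Broad/High origin.
# The name and constant below are assumptions for demonstration only.

HOUSE_NUMBERS_PER_MILE = 700  # address scale stated for the Franklin County grid

def approx_miles_from_origin(house_number: int) -> float:
    """Rough distance in miles from Broad Street (for north-south streets)
    or High Street (for east-west avenues), ignoring local irregularities."""
    return house_number / HOUSE_NUMBERS_PER_MILE

if __name__ == "__main__":
    print(round(approx_miles_from_origin(351), 2))     # ~0.5, as for 351 W. Fifth Ave.
    print(round(approx_miles_from_origin(10_000), 1))  # ~14.3, addresses nearing the county line
```

Dividing 351 by 700 reproduces the roughly half-mile figure given for 351 W. Fifth Ave., and addresses approaching 10,000 fall near the county's borders, consistent with the scale described in the text.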
Highways Columbus is bisected by two major Interstate Highways: Interstate 70 running east–west and Interstate 71 running north to roughly southwest. They combine downtown for about in an area locally known as "The Split," which is a major traffic congestion point, especially during rush hour. U.S. Route 40, originally known as the National Road, runs east–west through Columbus, comprising Main Street to the east of downtown and Broad Street to the west. U.S. Route 23 runs roughly north–south, while U.S. Route 33 runs northwest-to-southeast. The Interstate 270 Outerbelt encircles most of the city, while the newly redesigned Innerbelt consists of the Interstate 670 spur on the north side (which continues to the east past the Airport and to the west where it merges with I-70), State Route 315 on the west side, the I-70/71 split on the south side and I-71 on the east. Due to its central location within Ohio and abundance of outbound roadways, nearly all of the state's destinations are within a two- or three-hour drive of Columbus. Bridges The Columbus riverfront hosts several bridges. The Discovery Bridge connects downtown to Franklinton across Broad Street. The bridge opened in 1992, replacing a 1921 concrete arch bridge; the first bridge at the site was built in 1816. The Main Street Bridge opened on July 30, 2010. The bridge has three lanes for vehicular traffic (one westbound and two eastbound) and another separated lane for pedestrians and bikes. The Rich Street Bridge opened in July 2012 adjacent to the Main Street Bridge, connecting Rich Street on the east side of the river with Town Street on the west. The Lane Avenue Bridge is a cable-stayed bridge that opened on November 14, 2003, in the University District. The bridge spans the Olentangy River with three lanes of traffic each way. Airports The city's primary airport, John Glenn Columbus International Airport, is on the city's east side. Formerly known as Port Columbus, John Glenn provides service to Toronto, Ontario, Canada, and Cancun, Mexico (on a seasonal basis), as well as to most domestic destinations, including all the major hubs along with San Francisco, Salt Lake City and Seattle. The airport was a hub for discount carrier Skybus Airlines and continues to be home to NetJets, the world's largest fractional ownership air carrier. According to a 2005 market survey, John Glenn Columbus International Airport attracts about 50% of its passengers from outside of its radius primary service region. It is the 52nd-busiest airport in the United States by total passenger boardings. Rickenbacker International Airport, in southern Franklin County, is a major cargo facility that is used by the Ohio Air National Guard. Allegiant Air offers nonstop service from Rickenbacker to Florida destinations. Ohio State University Don Scott Airport and Bolton Field are other large general-aviation facilities in the Columbus area. Aviation history In 1907, 14-year-old Cromwell Dixon built the SkyCycle, a pedal-powered blimp, which he flew at Driving Park. Three years later, one of the Wright brothers' exhibition pilots, Phillip Parmalee, conducted the world's first commercial cargo flight when he flew two packages containing 88 kilograms of silk from Dayton to Columbus in a Wright Model B. Military aviators from Columbus distinguished themselves during World War I. Six Columbus pilots, led by top ace Eddie Rickenbacker, achieved 42 "kills" – a full 10% of all US aerial victories in the war, and more than the aviators of any other American city. 
After the war, Port Columbus Airport (now known as John Glenn Columbus International Airport) became the axis of a coordinated rail-to-air transcontinental system that moved passengers from the East Coast to the West. TAT, which later became TWA, provided commercial service, following Charles Lindbergh's promotion of Columbus to the nation for such a hub. Following the failure of a bond levy in 1927 to build the airport, Lindbergh campaigned in the city in 1928, and the next bond levy passed that year. On July 8, 1929, the airport opened for business with the inaugural TAT westbound flight from Columbus to Waynoka, Oklahoma. Among the 19 passengers on that flight was Amelia Earhart, with Henry Ford and Harvey Firestone attending the opening ceremonies. In 1964, Ohio native Geraldine Fredritz Mock became the first woman to fly solo around the world, leaving from Columbus and piloting the Spirit of Columbus. Her flight lasted nearly a month and set a record for speed for planes under . Public transit Columbus maintains a widespread municipal bus service called the Central Ohio Transit Authority (COTA). The service operates 41 routes with a fleet of 440 buses, serving approximately 19 million passengers per year. COTA operates 23 regular fixed-service routes, 14 express services, a bus rapid transit route, a free downtown circulator, night service, an airport connector and other services. LinkUS, an initiative between COTA, the city, and the Mid-Ohio Regional Planning Commission, is planning to add more rapid transit to Columbus, with three proposed corridors operating by 2030, and potentially a total of five by 2050. Intercity bus service is provided at the Columbus Bus Station by Greyhound, Barons Bus Lines, Miller Transportation, GoBus and other carriers. Columbus does not have passenger rail service. The city's major train station, Union Station, was a stop along Amtrak's National Limited train service until 1977 and was razed in 1979, and the Greater Columbus Convention Center now stands in its place. Until Amtrak's founding in 1971, the Penn Central ran the Cincinnati Limited to Cincinnati to the southwest (in prior years the train continued to New York City to the east); the Ohio State Limited between Cincinnati and Cleveland, with Union Station serving as a major intermediate stop (the train going unnamed between 1967 and 1971); and the Spirit of St. Louis, which ran between St. Louis and New York City until 1971. The station was also a stop along the Pennsylvania Railroad, the New York Central Railroad, the Chesapeake and Ohio Railway, the Baltimore and Ohio Railroad, the Norfolk and Western Railway, the Cleveland, Columbus and Cincinnati Railroad, and the Pittsburgh, Cincinnati, Chicago and St. Louis Railroad. As the city lacks local, commuter or intercity trains, Columbus is now the largest city and metropolitan area in the U.S. without any passenger rail service. Numerous proposals to return rail service have been introduced; currently Amtrak plans to restore service to Columbus by 2035. Cycling network Cycling as transportation is steadily increasing in Columbus with its relatively flat terrain, intact urban neighborhoods, large student population and off-road bike paths. The city has put forth the 2012 Bicentennial Bikeways Plan, as well as a move toward a Complete Streets policy. 
Grassroots efforts such as Bike to Work Week, Consider Biking, Yay Bikes, Third Hand Bicycle Co-op, Franklinton Cycleworks and Cranksters, a local radio program focused on urban cycling, have contributed to cycling as transportation. Columbus also hosts urban cycling offshoots, with messenger-style "alleycat" races as well as unorganized group rides, a monthly Critical Mass ride, bicycle polo, art showings, movie nights and a variety of bicycle-friendly businesses and events throughout the year. All this activity occurs despite Columbus's frequently inclement weather. The Main Street Bridge, opened in 2010, features a dedicated bike and pedestrian lane separated from traffic. The city has its own public bicycle system, CoGo Bike Share, with a network of about 600 bicycles and 80 docking stations; PBSC Urban Solutions, a company based in Canada, supplies the technology and equipment. Bird electric scooters have also been introduced. 
Modal share 
The city of Columbus has a higher-than-average percentage of households without a car. In 2015, 9.8% of Columbus households lacked a car, a figure that fell slightly to 9.4% in 2016; the national average was 8.7% in 2016. Columbus averaged 1.55 cars per household in 2016, compared to a national average of 1.8. 
Sister cities 
Columbus has 10 sister cities as designated by Sister Cities International. Columbus established its first sister city relationship in 1955 with Genoa, Italy. To commemorate this relationship, Columbus received, as a gift from the people of Genoa, a bronze statue of Christopher Columbus. The statue overlooked Broad Street in front of Columbus City Hall from 1955 to 2020; it was removed during the George Floyd protests. 
See also 
Racism in Columbus, Ohio
2,689
5,954
https://en.wikipedia.org/wiki/Callisto
Callisto
Callisto most commonly refers to: 
Callisto (mythology), a nymph 
Callisto (moon), a moon of Jupiter 
Callisto may also refer to: 
Art and entertainment 
Callisto series, a sequence of novels by Lin Carter 
Callisto, a novel by Torsten Krol 
Callisto (comics), a fictional mutant in X-Men 
Callisto (Xena), a character on Xena: Warrior Princess 
"Callisto" (Xena: Warrior Princess episode) 
Callisto family, a fictional family in the Miles from Tomorrowland TV series 
Callisto, a toy in the Mattel Major Matt Mason series 
Callisto (band), a band from Turku, Finland 
People with the name 
Callisto Cosulich (1922–2015), Italian film critic, author, journalist and screenwriter 
Callisto Pasuwa, Zimbabwean soccer coach 
Callisto Piazza (1500–1561), Italian painter 
Other uses 
Callisto (moth), a genus of moths in the family Gracillariidae 
CALLISTO, a reusable test rocket 
Callisto Corporation, a software development company 
Callisto, a release of version 3.2 of Eclipse 
Callisto, an AMD Phenom II processor core 
Callisto (organization), a non-profit organization 
See also 
Calisto (disambiguation) 
Kallisto (disambiguation) 
Callista (disambiguation) 
Callistus (disambiguation) 
Castillo (disambiguation)
2,691
5,958
https://en.wikipedia.org/wiki/CPR%20%28disambiguation%29
CPR (disambiguation)
Cardiopulmonary resuscitation (CPR) is an emergency procedure to assist someone who has suffered cardiac arrest. 
CPR may also refer to: 
Science and technology 
Classification of Pharmaco-Therapeutic Referrals, a taxonomy to define situations requiring a referral from pharmacists to physicians 
Continuous Plankton Recorder, a marine biological monitoring program 
Cubase Project Files, work files used in Steinberg Cubase 
Cytochrome P450 reductase, an enzyme 
Cursor Position Report, an ANSI X3.64 escape sequence 
Candidate phyla radiation, a large group of candidate bacterial phyla 
Competent Persons Report, in oil and gas; see Lancaster oilfield 
Organizations 
American Bar Association Model Code of Professional Responsibility 
Center for Performance Research 
Centre for Policy Research, a think tank in New Delhi, India 
Chicago Project Room, a former art gallery in Chicago and Los Angeles 
Communist Party of Réunion, in the French département of Réunion 
Communist Party of Russia (disambiguation), various meanings 
Congress for the Republic, a Tunisian political party 
Conservatives for Patients' Rights, a pressure group founded and funded by Rick Scott that argues for private insurance methods to pay for healthcare 
Det Centrale Personregister (Civil Registration System), Denmark's nationwide civil registry 
Transportation 
Canadian Pacific Railway, serving major cities in Canada and the northeastern US 
Car plate recognition, or automatic number plate recognition 
Casper–Natrona County International Airport (IATA code), in Casper, Wyoming, US 
Cornelius Pass Road, in Oregon, US 
Compact Position Reporting, a method of encoding an aircraft's latitude and longitude in ADS-B position messages 
Entertainment and music 
Chicago Public Radio, former name of WBEZ 
Club Penguin Rewritten, a 2017 fangame 
Colorado Public Radio 
CPR (band) or Crosby, Pevar & Raymond, a former rock/jazz band 
CPR (album) 
Corporate Punishment Records, a record label 
CPR (EP), a 2003 EP by Dolour 
"CPR", a song by CupcakKe from the album Queen Elizabitch 
Other uses 
Calendar of the Patent Rolls, a book series translating and summarising the medieval Patent Rolls documents 
Chinese People's Republic, an alternative official name for China (UNDP country code CPR) 
Civil Procedure Rules, the rules of civil court procedure for England and Wales 
Common-pool resource, a type of good, including a resource system 
Common property regime 
Concrete Pavement Restoration, a method used by the International Grooving & Grinding Association 
Conditional Prepayment Rate, a measure of loan prepayment 
Condominium Property Regime, a type of condominium conversion common in Hawai'i 
Construction Products Regulation, Regulation (EU) No. 305/2011 
Critique of Pure Reason, a 1781 philosophical work by Immanuel Kant 
See also 
CPR-1000, a Generation II+ pressurized water reactor 
Central Pacific Railroad (CPRR), between California and Utah, US
2,694
5,962
https://en.wikipedia.org/wiki/Comet
Comet
A comet is an icy, small Solar System body that warms and begins to release gases when passing close to the Sun, a process called outgassing. This produces an extended, gravitationally unbound atmosphere or coma surrounding the nucleus, and sometimes a tail of gas and dust blown out from the coma. These phenomena are due to the effects of solar radiation and the outstreaming solar wind plasma acting upon the nucleus of the comet. Comet nuclei range from a few hundred meters to tens of kilometers across and are composed of loose collections of ice, dust, and small rocky particles. The coma may be up to 15 times Earth's diameter, while the tail may stretch beyond one astronomical unit. If sufficiently close and bright, a comet may be seen from Earth without the aid of a telescope and can subtend an arc of up to 30° (60 Moons) across the sky. Comets have been observed and recorded since ancient times by many cultures and religions. 
Comets usually have highly eccentric elliptical orbits and a wide range of orbital periods, from several years to potentially several millions of years. Short-period comets originate in the Kuiper belt or its associated scattered disc, which lie beyond the orbit of Neptune. Long-period comets are thought to originate in the Oort cloud, a spherical cloud of icy bodies extending from outside the Kuiper belt to halfway to the nearest star. Long-period comets are set in motion towards the Sun by gravitational perturbations from passing stars and the galactic tide. Hyperbolic comets may pass once through the inner Solar System before being flung to interstellar space. The appearance of a comet is called an apparition. 
Extinct comets that have passed close to the Sun many times have lost nearly all of their volatile ices and dust and may come to resemble small asteroids. Asteroids are thought to have a different origin from comets, having formed inside the orbit of Jupiter rather than in the outer Solar System. However, the discovery of main-belt comets and active centaur minor planets has blurred the distinction between asteroids and comets. In the early 21st century, some minor bodies with long-period comet orbits but the characteristics of inner Solar System asteroids were discovered and named Manx comets. They are still classified as comets, such as C/2014 S3 (PANSTARRS). Twenty-seven Manx comets were found from 2013 to 2017. There are 4,584 known comets. However, this represents a very small fraction of the total potential comet population, as the reservoir of comet-like bodies in the outer Solar System (in the Oort cloud) may number about one trillion. Roughly one comet per year is visible to the naked eye, though many of those are faint and unspectacular. Particularly bright examples are called "great comets". Comets have been visited by unmanned probes such as the European Space Agency's Rosetta, which became the first to land a robotic spacecraft on a comet, and NASA's Deep Impact, which blasted a crater on Comet Tempel 1 to study its interior. 
Etymology 
The word comet derives from the Old English cometa, from the Latin comēta or comētēs. That, in turn, is a romanization of the Greek κομήτης (komētēs) 'wearing long hair', and the Oxford English Dictionary notes that the term already meant 'long-haired star, comet' in Greek. Komētēs was derived from κομᾶν (koman) 'to wear the hair long', which was itself derived from κόμη (komē) 'the hair of the head' and was used to mean 'the tail of a comet'.
The astronomical symbol for comets (represented in Unicode) is , consisting of a small disc with three hairlike extensions. Physical characteristics Nucleus The solid, core structure of a comet is known as the nucleus. Cometary nuclei are composed of an amalgamation of rock, dust, water ice, and frozen carbon dioxide, carbon monoxide, methane, and ammonia. As such, they are popularly described as "dirty snowballs" after Fred Whipple's model. Comets with a higher dust content have been called "icy dirtballs". The term "icy dirtballs" arose after observation of Comet 9P/Tempel 1 collision with an "impactor" probe sent by NASA Deep Impact mission in July 2005. Research conducted in 2014 suggests that comets are like "deep fried ice cream", in that their surfaces are formed of dense crystalline ice mixed with organic compounds, while the interior ice is colder and less dense. The surface of the nucleus is generally dry, dusty or rocky, suggesting that the ices are hidden beneath a surface crust several metres thick. The nuclei contains a variety of organic compounds, which may include methanol, hydrogen cyanide, formaldehyde, ethanol, ethane, and perhaps more complex molecules such as long-chain hydrocarbons and amino acids. In 2009, it was confirmed that the amino acid glycine had been found in the comet dust recovered by NASA's Stardust mission. In August 2011, a report, based on NASA studies of meteorites found on Earth, was published suggesting DNA and RNA components (adenine, guanine, and related organic molecules) may have been formed on asteroids and comets. The outer surfaces of cometary nuclei have a very low albedo, making them among the least reflective objects found in the Solar System. The Giotto space probe found that the nucleus of Halley's Comet (1P/Halley) reflects about four percent of the light that falls on it, and Deep Space 1 discovered that Comet Borrelly's surface reflects less than 3.0%; by comparison, asphalt reflects seven percent. The dark surface material of the nucleus may consist of complex organic compounds. Solar heating drives off lighter volatile compounds, leaving behind larger organic compounds that tend to be very dark, like tar or crude oil. The low reflectivity of cometary surfaces causes them to absorb the heat that drives their outgassing processes. Comet nuclei with radii of up to have been observed, but ascertaining their exact size is difficult. The nucleus of 322P/SOHO is probably only in diameter. A lack of smaller comets being detected despite the increased sensitivity of instruments has led some to suggest that there is a real lack of comets smaller than across. Known comets have been estimated to have an average density of . Because of their low mass, comet nuclei do not become spherical under their own gravity and therefore have irregular shapes. Roughly six percent of the near-Earth asteroids are thought to be the extinct nuclei of comets that no longer experience outgassing, including 14827 Hypnos and 3552 Don Quixote. Results from the Rosetta and Philae spacecraft show that the nucleus of 67P/Churyumov–Gerasimenko has no magnetic field, which suggests that magnetism may not have played a role in the early formation of planetesimals. 
Further, the ALICE spectrograph on Rosetta determined that electrons (within above the comet nucleus) produced from photoionization of water molecules by solar radiation, and not photons from the Sun as thought earlier, are responsible for the degradation of water and carbon dioxide molecules released from the comet nucleus into its coma. Instruments on the Philae lander found at least sixteen organic compounds at the comet's surface, four of which (acetamide, acetone, methyl isocyanate and propionaldehyde) have been detected for the first time on a comet. Coma The streams of dust and gas thus released form a huge and extremely thin atmosphere around the comet called the "coma". The force exerted on the coma by the Sun's radiation pressure and solar wind cause an enormous "tail" to form pointing away from the Sun. The coma is generally made of water and dust, with water making up to 90% of the volatiles that outflow from the nucleus when the comet is within 3 to 4 astronomical units (450,000,000 to 600,000,000 km; 280,000,000 to 370,000,000 mi) of the Sun. The parent molecule is destroyed primarily through photodissociation and to a much smaller extent photoionization, with the solar wind playing a minor role in the destruction of water compared to photochemistry. Larger dust particles are left along the comet's orbital path whereas smaller particles are pushed away from the Sun into the comet's tail by light pressure. Although the solid nucleus of comets is generally less than across, the coma may be thousands or millions of kilometers across, sometimes becoming larger than the Sun. For example, about a month after an outburst in October 2007, comet 17P/Holmes briefly had a tenuous dust atmosphere larger than the Sun. The Great Comet of 1811 had a coma roughly the diameter of the Sun. Even though the coma can become quite large, its size can decrease about the time it crosses the orbit of Mars around from the Sun. At this distance the solar wind becomes strong enough to blow the gas and dust away from the coma, and in doing so enlarging the tail. Ion tails have been observed to extend one astronomical unit (150 million km) or more. Both the coma and tail are illuminated by the Sun and may become visible when a comet passes through the inner Solar System, the dust reflects sunlight directly while the gases glow from ionisation. Most comets are too faint to be visible without the aid of a telescope, but a few each decade become bright enough to be visible to the naked eye. Occasionally a comet may experience a huge and sudden outburst of gas and dust, during which the size of the coma greatly increases for a period of time. This happened in 2007 to Comet Holmes. In 1996, comets were found to emit X-rays. This greatly surprised astronomers because X-ray emission is usually associated with very high-temperature bodies. The X-rays are generated by the interaction between comets and the solar wind: when highly charged solar wind ions fly through a cometary atmosphere, they collide with cometary atoms and molecules, "stealing" one or more electrons from the atom in a process called "charge exchange". This exchange or transfer of an electron to the solar wind ion is followed by its de-excitation into the ground state of the ion by the emission of X-rays and far ultraviolet photons. Bow shock Bow shocks form as a result of the interaction between the solar wind and the cometary ionosphere, which is created by the ionization of gases in the coma. 
As the comet approaches the Sun, increasing outgassing rates cause the coma to expand, and the sunlight ionizes gases in the coma. When the solar wind passes through this ion coma, the bow shock appears. The first observations were made in the 1980s and 1990s as several spacecraft flew by comets 21P/Giacobini–Zinner, 1P/Halley, and 26P/Grigg–Skjellerup. It was then found that the bow shocks at comets are wider and more gradual than the sharp planetary bow shocks seen at, for example, Earth. These observations were all made near perihelion when the bow shocks already were fully developed. The Rosetta spacecraft observed the bow shock at comet 67P/Churyumov–Gerasimenko at an early stage of bow shock development when the outgassing increased during the comet's journey toward the Sun. This young bow shock was called the "infant bow shock". The infant bow shock is asymmetric and, relative to the distance to the nucleus, wider than fully developed bow shocks. Tails In the outer Solar System, comets remain frozen and inactive and are extremely difficult or impossible to detect from Earth due to their small size. Statistical detections of inactive comet nuclei in the Kuiper belt have been reported from observations by the Hubble Space Telescope but these detections have been questioned. As a comet approaches the inner Solar System, solar radiation causes the volatile materials within the comet to vaporize and stream out of the nucleus, carrying dust away with them. The streams of dust and gas each form their own distinct tail, pointing in slightly different directions. The tail of dust is left behind in the comet's orbit in such a manner that it often forms a curved tail called the type II or dust tail. At the same time, the ion or type I tail, made of gases, always points directly away from the Sun because this gas is more strongly affected by the solar wind than is dust, following magnetic field lines rather than an orbital trajectory. On occasions—such as when Earth passes through a comet's orbital plane, the antitail, pointing in the opposite direction to the ion and dust tails, may be seen. The observation of antitails contributed significantly to the discovery of solar wind. The ion tail is formed as a result of the ionization by solar ultra-violet radiation of particles in the coma. Once the particles have been ionized, they attain a net positive electrical charge, which in turn gives rise to an "induced magnetosphere" around the comet. The comet and its induced magnetic field form an obstacle to outward flowing solar wind particles. Because the relative orbital speed of the comet and the solar wind is supersonic, a bow shock is formed upstream of the comet in the flow direction of the solar wind. In this bow shock, large concentrations of cometary ions (called "pick-up ions") congregate and act to "load" the solar magnetic field with plasma, such that the field lines "drape" around the comet forming the ion tail. If the ion tail loading is sufficient, the magnetic field lines are squeezed together to the point where, at some distance along the ion tail, magnetic reconnection occurs. This leads to a "tail disconnection event". This has been observed on a number of occasions, one notable event being recorded on 20 April 2007, when the ion tail of Encke's Comet was completely severed while the comet passed through a coronal mass ejection. This event was observed by the STEREO space probe. 
In 2013, ESA scientists reported that the ionosphere of the planet Venus streams outwards in a manner similar to the ion tail seen streaming from a comet under similar conditions. 
Jets 
Uneven heating can cause newly generated gases to break out of a weak spot on the surface of a comet's nucleus, like a geyser. These streams of gas and dust can cause the nucleus to spin, and even split apart. In 2010, it was revealed that dry ice (frozen carbon dioxide) can power jets of material flowing out of a comet nucleus. Infrared imaging of Hartley 2 shows such jets exiting and carrying dust grains with them into the coma. 
Orbital characteristics 
Most comets are small Solar System bodies with elongated elliptical orbits that take them close to the Sun for a part of their orbit and then out into the further reaches of the Solar System for the remainder. Comets are often classified according to the length of their orbital periods: the longer the period, the more elongated the ellipse. 
Short period 
Periodic comets or short-period comets are generally defined as those having orbital periods of less than 200 years. They usually orbit more or less in the ecliptic plane in the same direction as the planets. Their orbits typically take them out to the region of the outer planets (Jupiter and beyond) at aphelion; for example, the aphelion of Halley's Comet is a little beyond the orbit of Neptune. Comets whose aphelia are near a major planet's orbit are called its "family". Such families are thought to arise from the planet capturing formerly long-period comets into shorter orbits. At the shorter orbital period extreme, Encke's Comet has an orbit that does not reach the orbit of Jupiter, and is known as an Encke-type comet. Short-period comets with orbital periods less than 20 years and low inclinations (up to 30 degrees) to the ecliptic are called traditional Jupiter-family comets (JFCs). Those like Halley, with orbital periods of between 20 and 200 years and inclinations extending from zero to more than 90 degrees, are called Halley-type comets (HTCs). , 70 Encke-type comets, 100 HTCs, and 755 JFCs have been reported. Recently discovered main-belt comets form a distinct class, orbiting in more circular orbits within the asteroid belt. 
Because their elliptical orbits frequently take them close to the giant planets, comets are subject to further gravitational perturbations. Short-period comets have a tendency for their aphelia to coincide with a giant planet's semi-major axis, with the JFCs being the largest group. It is clear that comets coming in from the Oort cloud often have their orbits strongly influenced by the gravity of giant planets as a result of a close encounter. Jupiter is the source of the greatest perturbations, being more than twice as massive as all the other planets combined. These perturbations can deflect long-period comets into shorter orbital periods. Based on their orbital characteristics, short-period comets are thought to originate from the centaurs and the Kuiper belt/scattered disc, a disk of objects in the trans-Neptunian region, whereas the source of long-period comets is thought to be the far more distant spherical Oort cloud (named after the Dutch astronomer Jan Hendrik Oort, who hypothesized its existence). Vast swarms of comet-like bodies are thought to orbit the Sun in these distant regions in roughly circular orbits.
Occasionally the gravitational influence of the outer planets (in the case of Kuiper belt objects) or nearby stars (in the case of Oort cloud objects) may throw one of these bodies into an elliptical orbit that takes it inwards toward the Sun to form a visible comet. Unlike the return of periodic comets, whose orbits have been established by previous observations, the appearance of new comets by this mechanism is unpredictable. When flung into the orbit of the sun, and being continuously dragged towards it, tons of matter are stripped from the comets which greatly influence their lifetime; the more stripped, the shorter they live and vice versa. Long period Long-period comets have highly eccentric orbits and periods ranging from 200 years to thousands or even millions of years. An eccentricity greater than 1 when near perihelion does not necessarily mean that a comet will leave the Solar System. For example, Comet McNaught had a heliocentric osculating eccentricity of 1.000019 near its perihelion passage epoch in January 2007 but is bound to the Sun with roughly a 92,600-year orbit because the eccentricity drops below 1 as it moves farther from the Sun. The future orbit of a long-period comet is properly obtained when the osculating orbit is computed at an epoch after leaving the planetary region and is calculated with respect to the center of mass of the Solar System. By definition long-period comets remain gravitationally bound to the Sun; those comets that are ejected from the Solar System due to close passes by major planets are no longer properly considered as having "periods". The orbits of long-period comets take them far beyond the outer planets at aphelia, and the plane of their orbits need not lie near the ecliptic. Long-period comets such as C/1999 F1 and C/2017 T2 (PANSTARRS) can have aphelion distances of nearly with orbital periods estimated around 6 million years. Single-apparition or non-periodic comets are similar to long-period comets because they have parabolic or slightly hyperbolic trajectories when near perihelion in the inner Solar System. However, gravitational perturbations from giant planets cause their orbits to change. Single-apparition comets have a hyperbolic or parabolic osculating orbit which allows them to permanently exit the Solar System after a single pass of the Sun. The Sun's Hill sphere has an unstable maximum boundary of . Only a few hundred comets have been seen to reach a hyperbolic orbit (e > 1) when near perihelion that using a heliocentric unperturbed two-body best-fit suggests they may escape the Solar System. , only two objects have been discovered with an eccentricity significantly greater than one: 1I/ʻOumuamua and 2I/Borisov, indicating an origin outside the Solar System. While ʻOumuamua, with an eccentricity of about 1.2, showed no optical signs of cometary activity during its passage through the inner Solar System in October 2017, changes to its trajectory—which suggests outgassing—indicate that it is probably a comet. On the other hand, 2I/Borisov, with an estimated eccentricity of about 3.36, has been observed to have the coma feature of comets, and is considered the first detected interstellar comet. Comet C/1980 E1 had an orbital period of roughly 7.1 million years before the 1982 perihelion passage, but a 1980 encounter with Jupiter accelerated the comet giving it the largest eccentricity (1.057) of any known solar comet with a reasonable observation arc. 
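As an illustrative aside (not part of the article text), the period and eccentricity figures quoted above are connected by two standard relations: the semi-major axis a of a bound orbit follows from the perihelion distance q and the eccentricity e, and Kepler's third law gives the period P of a body of negligible mass orbiting the Sun. A minimal LaTeX sketch, using only the roughly 92,600-year period quoted for Comet McNaught: 

\[
a = \frac{q}{1 - e}, \qquad P\,[\mathrm{yr}] = \bigl(a\,[\mathrm{AU}]\bigr)^{3/2}
\]
% Worked example: P ≈ 92,600 yr gives a ≈ 92,600^{2/3} ≈ 2,000 AU, so for any perihelion
% distance of order 0.1–1 AU the bound eccentricity e = 1 - q/a lies only a few parts in
% ten thousand below 1, consistent with the near-parabolic osculating value of 1.000019
% quoted above.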
Comets not expected to return to the inner Solar System include C/1980 E1, C/2000 U5, C/2001 Q4 (NEAT), C/2009 R1, C/1956 R1, and C/2007 F1 (LONEOS). Some authorities use the term "periodic comet" to refer to any comet with a periodic orbit (that is, all short-period comets plus all long-period comets), whereas others use it to mean exclusively short-period comets. Similarly, although the literal meaning of "non-periodic comet" is the same as "single-apparition comet", some use it to mean all comets that are not "periodic" in the second sense (that is, to include all comets with a period greater than 200 years). Early observations have revealed a few genuinely hyperbolic (i.e. non-periodic) trajectories, but no more than could be accounted for by perturbations from Jupiter. Comets from interstellar space are moving with velocities of the same order as the relative velocities of stars near the Sun (a few tens of km per second). When such objects enter the Solar System, they have a positive specific orbital energy resulting in a positive velocity at infinity () and have notably hyperbolic trajectories. A rough calculation shows that there might be four hyperbolic comets per century within Jupiter's orbit, give or take one and perhaps two orders of magnitude. Oort cloud and Hills cloud The Oort cloud is thought to occupy a vast space starting from between to as far as from the Sun. This cloud encases the celestial bodies that start at the middle of the Solar System—the Sun, all the way to outer limits of the Kuiper Belt. The Oort cloud consists of viable materials necessary for the creation of celestial bodies. The Solar System's planets exist only because of the planetesimals (chunks of leftover space that assisted in the creation of planets) that were condensed and formed by the gravity of the Sun. The eccentric made from these trapped planetesimals is why the Oort Cloud even exists. Some estimates place the outer edge at between . The region can be subdivided into a spherical outer Oort cloud of , and a doughnut-shaped inner cloud, the Hills cloud, of . The outer cloud is only weakly bound to the Sun and supplies the long-period (and possibly Halley-type) comets that fall to inside the orbit of Neptune. The inner Oort cloud is also known as the Hills cloud, named after J. G. Hills, who proposed its existence in 1981. Models predict that the inner cloud should have tens or hundreds of times as many cometary nuclei as the outer halo; it is seen as a possible source of new comets that resupply the relatively tenuous outer cloud as the latter's numbers are gradually depleted. The Hills cloud explains the continued existence of the Oort cloud after billions of years. Exocomets Exocomets beyond the Solar System have been detected and may be common in the Milky Way. The first exocomet system detected was around Beta Pictoris, a very young A-type main-sequence star, in 1987. A total of 11 such exocomet systems have been identified , using the absorption spectrum caused by the large clouds of gas emitted by comets when passing close to their star. For ten years the Kepler space telescope was responsible for searching for planets and other forms outside of the solar system. The first transiting exocomets were found in February 2018 by a group consisting of professional astronomers and citizen scientists in light curves recorded by the Kepler Space Telescope. After Kepler Space Telescope retired in October 2018, a new telescope called TESS Telescope has taken over Kepler's mission. 
Since the launch of TESS, astronomers have discovered the transits of comets around the star Beta Pictoris using a light curve from TESS. Since TESS has taken over, astronomers have since been able to better distinguish exocomets with the spectroscopic method. New planets are detected by the white light curve method which is viewed as a symmetrical dip in the charts readings when a planet overshadows its parent star. However, after further evaluation of these light curves, it has been discovered that the asymmetrical patterns of the dips presented are caused by the tail of a comet or of hundreds of comets. Effects of comets Connection to meteor showers As a comet is heated during close passes to the Sun, outgassing of its icy components releases solid debris too large to be swept away by radiation pressure and the solar wind. If Earth's orbit sends it through that trail of debris, which is composed mostly of fine grains of rocky material, there is likely to be a meteor shower as Earth passes through. Denser trails of debris produce quick but intense meteor showers and less dense trails create longer but less intense showers. Typically, the density of the debris trail is related to how long ago the parent comet released the material. The Perseid meteor shower, for example, occurs every year between 9 and 13 August, when Earth passes through the orbit of Comet Swift–Tuttle. Halley's Comet is the source of the Orionid shower in October. Comets and impact on life Many comets and asteroids collided with Earth in its early stages. Many scientists think that comets bombarding the young Earth about 4 billion years ago brought the vast quantities of water that now fill Earth's oceans, or at least a significant portion of it. Others have cast doubt on this idea. The detection of organic molecules, including polycyclic aromatic hydrocarbons, in significant quantities in comets has led to speculation that comets or meteorites may have brought the precursors of life—or even life itself—to Earth. In 2013 it was suggested that impacts between rocky and icy surfaces, such as comets, had the potential to create the amino acids that make up proteins through shock synthesis. The speed at which the comets entered the atmosphere, combined with the magnitude of energy created after initial contact, allowed smaller molecules to condense into the larger macro-molecules that served as the foundation for life. In 2015, scientists found significant amounts of molecular oxygen in the outgassings of comet 67P, suggesting that the molecule may occur more often than had been thought, and thus less an indicator of life as has been supposed. It is suspected that comet impacts have, over long timescales, delivered significant quantities of water to Earth's Moon, some of which may have survived as lunar ice. Comet and meteoroid impacts are thought to be responsible for the existence of tektites and australites. Fear of comets Fear of comets as acts of God and signs of impending doom was highest in Europe from AD 1200 to 1650. The year after the Great Comet of 1618, for example, Gotthard Arthusius published a pamphlet stating that it was a sign that the Day of Judgment was near. He listed ten pages of comet-related disasters, including "earthquakes, floods, changes in river courses, hail storms, hot and dry weather, poor harvests, epidemics, war and treason and high prices". By 1700 most scholars concluded that such events occurred whether a comet was seen or not. 
Using Edmond Halley's records of comet sightings, however, William Whiston in 1711 wrote that the Great Comet of 1680 had a periodicity of 574 years and was responsible for the worldwide flood in the Book of Genesis, by pouring water on Earth. His announcement revived for another century fear of comets, now as direct threats to the world instead of signs of disasters. Spectroscopic analysis in 1910 found the toxic gas cyanogen in the tail of Halley's Comet, causing panicked buying of gas masks and quack "anti-comet pills" and "anti-comet umbrellas" by the public. Fate of comets Departure (ejection) from Solar System If a comet is traveling fast enough, it may leave the Solar System. Such comets follow the open path of a hyperbola, and as such, they are called hyperbolic comets. Solar comets are only known to be ejected by interacting with another object in the Solar System, such as Jupiter. An example of this is Comet C/1980 E1, which was shifted from an orbit of 7.1 million years around the Sun, to a hyperbolic trajectory, after a 1980 close pass by the planet Jupiter. Interstellar comets such as 1I/ʻOumuamua and 2I/Borisov never orbited the Sun and therefore do not require a 3rd-body interaction to be ejected from the Solar System. Volatiles exhausted Jupiter-family comets and long-period comets appear to follow very different fading laws. The JFCs are active over a lifetime of about 10,000 years or ~1,000 orbits whereas long-period comets fade much faster. Only 10% of the long-period comets survive more than 50 passages to small perihelion and only 1% of them survive more than 2,000 passages. Eventually most of the volatile material contained in a comet nucleus evaporates, and the comet becomes a small, dark, inert lump of rock or rubble that can resemble an asteroid. Some asteroids in elliptical orbits are now identified as extinct comets. Roughly six percent of the near-Earth asteroids are thought to be extinct comet nuclei. Breakup and collisions The nucleus of some comets may be fragile, a conclusion supported by the observation of comets splitting apart. A significant cometary disruption was that of Comet Shoemaker–Levy 9, which was discovered in 1993. A close encounter in July 1992 had broken it into pieces, and over a period of six days in July 1994, these pieces fell into Jupiter's atmosphere—the first time astronomers had observed a collision between two objects in the Solar System. Other splitting comets include 3D/Biela in 1846 and 73P/Schwassmann–Wachmann from 1995 to 2006. Greek historian Ephorus reported that a comet split apart as far back as the winter of 372–373 BC. Comets are suspected of splitting due to thermal stress, internal gas pressure, or impact. Comets 42P/Neujmin and 53P/Van Biesbroeck appear to be fragments of a parent comet. Numerical integrations have shown that both comets had a rather close approach to Jupiter in January 1850, and that, before 1850, the two orbits were nearly identical. Some comets have been observed to break up during their perihelion passage, including great comets West and Ikeya–Seki. Biela's Comet was one significant example when it broke into two pieces during its passage through the perihelion in 1846. These two comets were seen separately in 1852, but never again afterward. Instead, spectacular meteor showers were seen in 1872 and 1885 when the comet should have been visible. A minor meteor shower, the Andromedids, occurs annually in November, and it is caused when Earth crosses the orbit of Biela's Comet. 
Some comets meet a more spectacular end – either falling into the Sun or smashing into a planet or other body. Collisions between comets and planets or moons were common in the early Solar System: some of the many craters on the Moon, for example, may have been caused by comets. A recent collision of a comet with a planet occurred in July 1994 when Comet Shoemaker–Levy 9 broke up into pieces and collided with Jupiter. Nomenclature The names given to comets have followed several different conventions over the past two centuries. Prior to the early 20th century, most comets were referred to by the year when they appeared, sometimes with additional adjectives for particularly bright comets; thus, the "Great Comet of 1680", the "Great Comet of 1882", and the "Great January Comet of 1910". After Edmond Halley demonstrated that the comets of 1531, 1607, and 1682 were the same body and successfully predicted its return in 1759 by calculating its orbit, that comet became known as Halley's Comet. Similarly, the second and third known periodic comets, Encke's Comet and Biela's Comet, were named after the astronomers who calculated their orbits rather than their original discoverers. Later, periodic comets were usually named after their discoverers, but comets that had appeared only once continued to be referred to by the year of their appearance. In the early 20th century, the convention of naming comets after their discoverers became common, and this remains so today. A comet can be named after its discoverers or an instrument or program that helped to find it. For example, in 2019, astronomer Gennadiy Borisov observed a comet that appeared to have originated outside of the solar system; the comet was named 2I/Borisov after him. History of study Early observations and thought From ancient sources, such as Chinese oracle bones, it is known that comets have been noticed by humans for millennia. Until the sixteenth century, comets were usually considered bad omens of deaths of kings or noble men, or coming catastrophes, or even interpreted as attacks by heavenly beings against terrestrial inhabitants. Aristotle (384–322 BC) was the first known scientist to use various theories and observational facts to employ a consistent, structured cosmological theory of comets. He believed that comets were atmospheric phenomena, due to the fact that they could appear outside of the zodiac and vary in brightness over the course of a few days. Aristotle's cometary theory arose from his observations and cosmological theory that everything in the cosmos is arranged in a distinct configuration. Part of this configuration was a clear separation between the celestial and terrestrial, believing comets to be strictly associated with the latter. According to Aristotle, comets must be within the sphere of the moon and clearly separated from the heavens. Also in the 4th century BC, Apollonius of Myndus supported the idea that comets moved like the planets. Aristotelian theory on comets continued to be widely accepted throughout the Middle Ages, despite several discoveries from various individuals challenging aspects of it. In the 1st century AD, Seneca the Younger questioned Aristotle's logic concerning comets. Because of their regular movement and imperviousness to wind, they cannot be atmospheric, and are more permanent than suggested by their brief flashes across the sky. He pointed out that only the tails are transparent and thus cloudlike, and argued that there is no reason to confine their orbits to the zodiac. 
In criticizing Apollonius of Myndus, Seneca argues, "A comet cuts through the upper regions of the universe and then finally becomes visible when it reaches the lowest point of its orbit." While Seneca did not author a substantial theory of his own, his arguments would spark much debate among Aristotle's critics in the 16th and 17th centuries. In the 1st century, Pliny the Elder believed that comets were connected with political unrest and death. Pliny observed comets as "human like", often describing their tails with "long hair" or "long beard". His system for classifying comets according to their color and shape was used for centuries. In India, by the 6th century astronomers believed that comets were celestial bodies that re-appeared periodically. This was the view expressed in the 6th century by the astronomers Varāhamihira and Bhadrabahu, and the 10th-century astronomer Bhaṭṭotpala listed the names and estimated periods of certain comets, but it is not known how these figures were calculated or how accurate they were. In 1301, the Italian painter Giotto was the first person to accurately and anatomically portray a comet. In his work Adoration of the Magi, Giotto's depiction of Halley's Comet in the place of the Star of Bethlehem would go unmatched in accuracy until the 19th century and be bested only with the invention of photography. Astrological interpretations of comets proceeded to take precedence clear into the 15th century, despite the presence of modern scientific astronomy beginning to take root. Comets continued to forewarn of disaster, as seen in the Luzerner Schilling chronicles and in the warnings of Pope Callixtus III. In 1578, German Lutheran bishop Andreas Celichius defined comets as "the thick smoke of human sins ... kindled by the hot and fiery anger of the Supreme Heavenly Judge". The next year, Andreas Dudith stated that "If comets were caused by the sins of mortals, they would never be absent from the sky." Scientific approach Crude attempts at a parallax measurement of Halley's Comet were made in 1456, but were erroneous. Regiomontanus was the first to attempt to calculate diurnal parallax by observing the great comet of 1472. His predictions were not very accurate, but they were conducted in the hopes of estimating the distance of a comet from Earth. In the 16th century, Tycho Brahe and Michael Maestlin demonstrated that comets must exist outside of Earth's atmosphere by measuring the parallax of the Great Comet of 1577. Within the precision of the measurements, this implied the comet must be at least four times more distant than from Earth to the Moon. Based on observations in 1664, Giovanni Borelli recorded the longitudes and latitudes of comets that he observed, and suggested that cometary orbits may be parabolic. Despite being a skilled astronomer, in his 1623 book The Assayer, Galileo Galilei rejected Brahe's theories on the parallax of comets and claimed that they may be a mere optical illusion, despite little personal observation. In 1625, Maestlin's student Johannes Kepler upheld that Brahe's view of cometary parallax was correct. Additionally, mathematician Jacob Bernoulli published a treatise on comets in 1682. During the early modern period comets were studied for their astrological significance in medical disciplines. Many healers of this time considered medicine and astronomy to be inter-disciplinary and employed their knowledge of comets and other astrological signs for diagnosing and treating patients. 
Isaac Newton, in his Principia Mathematica of 1687, proved that an object moving under the influence of gravity by an inverse square law must trace out an orbit shaped like one of the conic sections, and he demonstrated how to fit a comet's path through the sky to a parabolic orbit, using the comet of 1680 as an example. He describes comets as compact and durable solid bodies moving in oblique orbit and their tails as thin streams of vapor emitted by their nuclei, ignited or heated by the Sun. He suspected that comets were the origin of the life-supporting component of air. He pointed out that comets usually appear near the Sun, and therefore most likely orbit it. On their luminosity, he stated, "The comets shine by the Sun's light, which they reflect," with their tails illuminated by "the Sun's light reflected by a smoke arising from [the coma]". In 1705, Edmond Halley (1656–1742) applied Newton's method to 23 cometary apparitions that had occurred between 1337 and 1698. He noted that three of these, the comets of 1531, 1607, and 1682, had very similar orbital elements, and he was further able to account for the slight differences in their orbits in terms of gravitational perturbation caused by Jupiter and Saturn. Confident that these three apparitions had been three appearances of the same comet, he predicted that it would appear again in 1758–59. Halley's predicted return date was later refined by a team of three French mathematicians: Alexis Clairaut, Joseph Lalande, and Nicole-Reine Lepaute, who predicted the date of the comet's 1759 perihelion to within one month's accuracy. When the comet returned as predicted, it became known as Halley's Comet. As early as the 18th century, some scientists had made correct hypotheses as to comets' physical composition. In 1755, Immanuel Kant hypothesized in his Universal Natural History that comets were condensed from "primitive matter" beyond the known planets, which is "feebly moved" by gravity, then orbit at arbitrary inclinations, and are partially vaporized by the Sun's heat as they near perihelion. In 1836, the German mathematician Friedrich Wilhelm Bessel, after observing streams of vapor during the appearance of Halley's Comet in 1835, proposed that the jet forces of evaporating material could be great enough to significantly alter a comet's orbit, and he argued that the non-gravitational movements of Encke's Comet resulted from this phenomenon. In the 19th century, the Astronomical Observatory of Padova was an epicenter in the observational study of comets. Led by Giovanni Santini (1787–1877) and followed by Giuseppe Lorenzoni (1843–1914), this observatory was devoted to classical astronomy, mainly to the new comets and planets orbit calculation, with the goal of compiling a catalog of almost ten thousand stars. Situated in the Northern portion of Italy, observations from this observatory were key in establishing important geodetic, geographic, and astronomical calculations, such as the difference of longitude between Milan and Padua as well as Padua to Fiume.Correspondence within the observatory, particularly between Santini and another astronomer Giuseppe Toaldo, mentioned the importance of comet and planetary orbital observations. In 1950, Fred Lawrence Whipple proposed that rather than being rocky objects containing some ice, comets were icy objects containing some dust and rock. 
This "dirty snowball" model soon became accepted and appeared to be supported by the observations of an armada of spacecraft (including the European Space Agency's Giotto probe and the Soviet Union's Vega 1 and Vega 2) that flew through the coma of Halley's Comet in 1986, photographed the nucleus, and observed jets of evaporating material. On 22 January 2014, ESA scientists reported the detection, for the first definitive time, of water vapor on the dwarf planet Ceres, the largest object in the asteroid belt. The detection was made by using the far-infrared abilities of the Herschel Space Observatory. The finding is unexpected because comets, not asteroids, are typically considered to "sprout jets and plumes". According to one of the scientists, "The lines are becoming more and more blurred between comets and asteroids." On 11 August 2014, astronomers released studies, using the Atacama Large Millimeter/Submillimeter Array (ALMA) for the first time, that detailed the distribution of HCN, HNC, , and dust inside the comae of comets C/2012 F6 (Lemmon) and C/2012 S1 (ISON). Spacecraft missions The Halley Armada describes the collection of spacecraft missions that visited and/or made observations of Halley's Comet 1980s perihelion. The space shuttle Challenger was intended to do a study of Halley's Comet in 1986, but exploded shortly after being launched. Deep Impact. Debate continues about how much ice is in a comet. In 2001, the Deep Space 1 spacecraft obtained high-resolution images of the surface of Comet Borrelly. It was found that the surface of comet Borrelly is hot and dry, with a temperature of between , and extremely dark, suggesting that the ice has been removed by solar heating and maturation, or is hidden by the soot-like material that covers Borrelly. In July 2005, the Deep Impact probe blasted a crater on Comet Tempel 1 to study its interior. The mission yielded results suggesting that the majority of a comet's water ice is below the surface and that these reservoirs feed the jets of vaporized water that form the coma of Tempel 1. Renamed EPOXI, it made a flyby of Comet Hartley 2 on 4 November 2010. Ulysses. In 2007, the Ulysses probe unexpectedly passed through the tail of the comet C/2006 P1 (McNaught) which was discovered in 2006. Ulysses was launched in 1990 and the intended mission was for Ulysses to orbit around the sun for further study at all latitudes. Stardust. Data from the Stardust mission show that materials retrieved from the tail of Wild 2 were crystalline and could only have been "born in fire", at extremely high temperatures of over . Although comets formed in the outer Solar System, radial mixing of material during the early formation of the Solar System is thought to have redistributed material throughout the proto-planetary disk. As a result, comets contain crystalline grains that formed in the early, hot inner Solar System. This is seen in comet spectra as well as in sample return missions. More recent still, the materials retrieved demonstrate that the "comet dust resembles asteroid materials". These new results have forced scientists to rethink the nature of comets and their distinction from asteroids. Rosetta. The Rosetta probe orbited Comet Churyumov–Gerasimenko. On 12 November 2014, its lander Philae successfully landed on the comet's surface, the first time a spacecraft has ever landed on such an object in history. 
Classification Great comets Approximately once a decade, a comet becomes bright enough to be noticed by a casual observer, leading such comets to be designated as great comets. Predicting whether a comet will become a great comet is notoriously difficult, as many factors may cause a comet's brightness to depart drastically from predictions. Broadly speaking, if a comet has a large and active nucleus, will pass close to the Sun, and is not obscured by the Sun as seen from Earth when at its brightest, it has a chance of becoming a great comet. However, Comet Kohoutek in 1973 fulfilled all the criteria and was expected to become spectacular but failed to do so. Comet West, which appeared three years later, had much lower expectations but became an extremely impressive comet. The Great Comet of 1577 is a well-known example of a great comet. It passed near Earth as a non-periodic comet and was seen by many, including well-known astronomers Tycho Brahe and Taqi ad-Din. Observations of this comet led to several significant findings regarding cometary science, especially for Brahe. The late 20th century saw a lengthy gap without the appearance of any great comets, followed by the arrival of two in quick succession—Comet Hyakutake in 1996, followed by Hale–Bopp, which reached maximum brightness in 1997 having been discovered two years earlier. The first great comet of the 21st century was C/2006 P1 (McNaught), which became visible to naked eye observers in January 2007. It was the brightest in over 40 years. Sungrazing comets A sungrazing comet is a comet that passes extremely close to the Sun at perihelion, generally within a few million kilometers. Although small sungrazers can be completely evaporated during such a close approach to the Sun, larger sungrazers can survive many perihelion passages. However, the strong tidal forces they experience often lead to their fragmentation. About 90% of the sungrazers observed with SOHO are members of the Kreutz group, which all originate from one giant comet that broke up into many smaller comets during its first passage through the inner Solar System. The remainder contains some sporadic sungrazers, but four other related groups of comets have been identified among them: the Kracht, Kracht 2a, Marsden, and Meyer groups. The Marsden and Kracht groups both appear to be related to Comet 96P/Machholz, which is the parent of two meteor streams, the Quadrantids and the Arietids. Unusual comets Of the thousands of known comets, some exhibit unusual properties. Comet Encke (2P/Encke) orbits from outside the asteroid belt to just inside the orbit of the planet Mercury whereas the Comet 29P/Schwassmann–Wachmann currently travels in a nearly circular orbit entirely between the orbits of Jupiter and Saturn. 2060 Chiron, whose unstable orbit is between Saturn and Uranus, was originally classified as an asteroid until a faint coma was noticed. Similarly, Comet Shoemaker–Levy 2 was originally designated asteroid . Largest The largest known periodic comet is 95P/Chiron at 200 km in diameter that comes to perihelion every 50 years just inside of Saturn's orbit at 8 AU. The largest known Oort cloud comet is suspected of being Comet Bernardinelli-Bernstein at ≈150 km that will not come to perihelion until January 2031 just outside of Saturn's orbit at 11 AU. The Comet of 1729 is estimated to have been ≈100 km in diameter and came to perihelion inside of Jupiter's orbit at 4 AU. Centaurs Centaurs typically behave with characteristics of both asteroids and comets. 
Centaurs can be classified as comets such as 60558 Echeclus, and 166P/NEAT. 166P/NEAT was discovered while it exhibited a coma, and so is classified as a comet despite its orbit, and 60558 Echeclus was discovered without a coma but later became active, and was then classified as both a comet and an asteroid (174P/Echeclus). One plan for Cassini involved sending it to a centaur, but NASA decided to destroy it instead. Observation A comet may be discovered photographically using a wide-field telescope or visually with binoculars. However, even without access to optical equipment, it is still possible for the amateur astronomer to discover a sungrazing comet online by downloading images accumulated by some satellite observatories such as SOHO. SOHO's 2000th comet was discovered by Polish amateur astronomer Michał Kusiak on 26 December 2010 and both discoverers of Hale–Bopp used amateur equipment (although Hale was not an amateur). Lost A number of periodic comets discovered in earlier decades or previous centuries are now lost comets. Their orbits were never known well enough to predict future appearances or the comets have disintegrated. However, occasionally a "new" comet is discovered, and calculation of its orbit shows it to be an old "lost" comet. An example is Comet 11P/Tempel–Swift–LINEAR, discovered in 1869 but unobservable after 1908 because of perturbations by Jupiter. It was not found again until accidentally rediscovered by LINEAR in 2001. There are at least 18 comets that fit this category. In popular culture The depiction of comets in popular culture is firmly rooted in the long Western tradition of seeing comets as harbingers of doom and as omens of world-altering change. Halley's Comet alone has caused a slew of sensationalist publications of all sorts at each of its reappearances. It was especially noted that the birth and death of some notable persons coincided with separate appearances of the comet, such as with writers Mark Twain (who correctly speculated that he'd "go out with the comet" in 1910) and Eudora Welty, to whose life Mary Chapin Carpenter dedicated the song "Halley Came to Jackson". In times past, bright comets often inspired panic and hysteria in the general population, being thought of as bad omens. More recently, during the passage of Halley's Comet in 1910, Earth passed through the comet's tail, and erroneous newspaper reports inspired a fear that cyanogen in the tail might poison millions, whereas the appearance of Comet Hale–Bopp in 1997 triggered the mass suicide of the Heaven's Gate cult. In science fiction, the impact of comets has been depicted as a threat overcome by technology and heroism (as in the 1998 films Deep Impact and Armageddon), or as a trigger of global apocalypse (Lucifer's Hammer, 1979) or zombies (Night of the Comet, 1984). In Jules Verne's Off on a Comet a group of people are stranded on a comet orbiting the Sun, while a large crewed space expedition visits Halley's Comet in Sir Arthur C. Clarke's novel 2061: Odyssey Three. In Literature The long-period comet first recorded by Pons in Florence on 15 July 1825 inspired Lydia Sigourney's humorous poem in which all the celestial bodies argue over the comet's appearance and purpose. 
See also
The Big Splash
Comet vintages
List of impact craters on Earth
List of possible impact structures on Earth
Lists of comets
External links
Comets at NASA's Solar System Exploration
International Comet Quarterly by Harvard University
Catalogue of the Solar System Small Bodies Orbital Evolution
Science Demos: Make a Comet by the National High Magnetic Field Laboratory
Comets: from myths to reality, exhibition on Paris Observatory digital library
https://en.wikipedia.org/wiki/Coal
Coal
Coal is a combustible black or brownish-black sedimentary rock, formed as rock strata called coal seams. Coal is mostly carbon with variable amounts of other elements, chiefly hydrogen, sulfur, oxygen, and nitrogen. Coal is a type of fossil fuel, formed when dead plant matter decays into peat and is converted into coal by the heat and pressure of deep burial over millions of years. Vast deposits of coal originate in former wetlands called coal forests that covered much of the Earth's tropical land areas during the late Carboniferous (Pennsylvanian) and Permian times. Many significant coal deposits are younger than this and originate from the Mesozoic and Cenozoic eras. Coal is used primarily as a fuel. While coal has been known and used for thousands of years, its usage was limited until the Industrial Revolution. With the invention of the steam engine, coal consumption increased. In 2020, coal supplied about a quarter of the world's primary energy and over a third of its electricity. Some iron and steel-making and other industrial processes burn coal. The extraction and use of coal causes premature death and illness. The use of coal damages the environment, and it is the largest anthropogenic source of carbon dioxide contributing to climate change. Fourteen billion tonnes of carbon dioxide was emitted by burning coal in 2020, which is 40% of the total fossil fuel emissions and over 25% of total global greenhouse gas emissions. As part of worldwide energy transition, many countries have reduced or eliminated their use of coal power. The United Nations Secretary General asked governments to stop building new coal plants by 2020. Global coal use peaked in 2013. To meet the Paris Agreement target of keeping global warming below coal use needs to halve from 2020 to 2030, and phasing out coal was agreed upon in the Glasgow Climate Pact. The largest consumer and importer of coal in 2020 was China, which accounts for almost half the world's annual coal production, followed by India with about a tenth. Indonesia and Australia export the most, followed by Russia. Etymology The word originally took the form col in Old English, from Proto-Germanic *kula(n), which in turn is hypothesized to come from the Proto-Indo-European root *g(e)u-lo- "live coal". Germanic cognates include the Old Frisian kole, Middle Dutch cole, Dutch kool, Old High German chol, German Kohle and Old Norse kol, and the Irish word gual is also a cognate via the Indo-European root. Geology Coal is composed of macerals, minerals and water. Fossils and amber may be found in coal. Formation The conversion of dead vegetation into coal is called coalification. At various times in the geologic past, the Earth had dense forests in low-lying wetland areas. In these wetlands, the process of coalification began when dead plant matter was protected from biodegradation and oxidation, usually by mud or acidic water, and was converted into peat. This trapped the carbon in immense peat bogs that were eventually deeply buried by sediments. Then, over millions of years, the heat and pressure of deep burial caused the loss of water, methane and carbon dioxide and increased the proportion of carbon. The grade of coal produced depended on the maximum pressure and temperature reached, with lignite (also called "brown coal") produced under relatively mild conditions, and sub-bituminous coal, bituminous coal, or anthracite coal (also called "hard coal" or "black coal") produced in turn with increasing temperature and pressure. 
Of the factors involved in coalification, temperature is much more important than either pressure or time of burial. Subbituminous coal can form at temperatures as low as while anthracite requires a temperature of at least . Although coal is known from most geologic periods, 90% of all coal beds were deposited in the Carboniferous and Permian periods, which represent just 2% of the Earth's geologic history. Paradoxically, this was during the Late Paleozoic icehouse, a time of global glaciation. However, the drop in global sea level accompanying the glaciation exposed continental shelfs that had previously been submerged, and to these were added wide river deltas produced by increased erosion due to the drop in base level. These widespread areas of wetlands provided ideal conditions for coal formation. The rapid formation of coal ended with the coal gap in the Permian–Triassic extinction event, where coal is rare. Favorable geography alone does not explain the extensive Carboniferous coal beds. Other factors contributing to rapid coal deposition were high oxygen levels, above 30%, that promoted intense wildfires and formation of charcoal that was all but indigestible by decomposing organisms; high carbon dioxide levels that promoted plant growth; and the nature of Carboniferous forests, which included lycophyte trees whose determinate growth meant that carbon was not tied up in heartwood of living trees for long periods. One theory suggested that about 360 million years ago, some plants evolved the ability to produce lignin, a complex polymer that made their cellulose stems much harder and more woody. The ability to produce lignin led to the evolution of the first trees. But bacteria and fungi did not immediately evolve the ability to decompose lignin, so the wood did not fully decay but became buried under sediment, eventually turning into coal. About 300 million years ago, mushrooms and other fungi developed this ability, ending the main coal-formation period of earth's history. Although some authors pointed at some evidence of lignin degradation during the Carboniferous, and suggested that climatic and tectonic factors were a more plausible explanation, reconstruction of ancestral enzymes by phylogenetic analysis corroborated a hypothesis that lignin degrading enzymes appeared in fungi approximately 200 MYa. One likely tectonic factor was the Central Pangean Mountains, an enormous range running along the equator that reached its greatest elevation near this time. Climate modeling suggests that the Central Pangean Mountains contributed to the deposition of vast quantities of coal in the late Carboniferous. The mountains created an area of year-round heavy precipitation, with no dry season typical of a monsoon climate. This is necessary for the preservation of peat in coal swamps. Coal is known from Precambrian strata, which predate land plants. This coal is presumed to have originated from residues of algae. Sometimes coal seams (also known as coal beds) are interbedded with other sediments in a cyclothem. Cyclothems are thought to have their origin in glacial cycles that produced fluctuations in sea level, which alternately exposed and then flooded large areas of continental shelf. Chemistry of coalification The woody tissue of plants is composed mainly of cellulose, hemicellulose, and lignin. Modern peat is mostly lignin, with a content of cellulose and hemicellulose ranging from 5% to 40%. 
Various other organic compounds, such as waxes and nitrogen- and sulfur-containing compounds, are also present. Lignin has a weight composition of about 54% carbon, 6% hydrogen, and 30% oxygen, while cellulose has a weight composition of about 44% carbon, 6% hydrogen, and 49% oxygen. Bituminous coal has a composition of about 84.4% carbon, 5.4% hydrogen, 6.7% oxygen, 1.7% nitrogen, and 1.8% sulfur, on a weight basis. This implies that chemical processes during coalification must remove most of the oxygen and much of the hydrogen, leaving carbon, a process called carbonization.
Carbonization proceeds primarily by dehydration, decarboxylation, and demethanation. Dehydration removes water molecules from the maturing coal via reactions such as
2 R-OH → R-O-R + H2O
2 R-CH2-O-CH2-R → R-CH=CH-R + H2O
Decarboxylation removes carbon dioxide from the maturing coal and proceeds by reactions such as
R-COOH → R-H + CO2
while demethanation proceeds by reactions such as
2 R-CH3 → R-CH2-R + CH4
R-CH2-CH2-CH2-R → R-CH=CH-R + CH4
In each of these formulas, R represents the remainder of a cellulose or lignin molecule to which the reacting groups are attached.
Dehydration and decarboxylation take place early in coalification, while demethanation begins only after the coal has already reached bituminous rank. The effect of decarboxylation is to reduce the percentage of oxygen, while demethanation reduces the percentage of hydrogen. Dehydration does both, and (together with demethanation) reduces the saturation of the carbon backbone (increasing the number of double bonds between carbon atoms).
As carbonization proceeds, aliphatic compounds (carbon compounds characterized by chains of carbon atoms) are replaced by aromatic compounds (carbon compounds characterized by rings of carbon atoms), and aromatic rings begin to fuse into polyaromatic compounds (linked rings of carbon atoms). The structure increasingly resembles graphene, the structural element of graphite.
Chemical changes are accompanied by physical changes, such as a decrease in average pore size. The macerals (organic particles) of lignite are composed of huminite, which is earthy in appearance. As the coal matures to sub-bituminous coal, huminite begins to be replaced by vitreous (shiny) vitrinite. Maturation of bituminous coal is characterized by bitumenization, in which part of the coal is converted to bitumen, a hydrocarbon-rich gel. Maturation to anthracite is characterized by debitumenization (from demethanation) and the increasing tendency of the anthracite to break with a conchoidal fracture, similar to the way thick glass breaks.
Types
As geological processes apply pressure to dead biotic material over time, under suitable conditions, its metamorphic grade or rank increases successively into:
Peat, a precursor of coal
Lignite, or brown coal, the lowest rank of coal, most harmful to health when burned, used almost exclusively as fuel for electric power generation
Jet, a compact form of lignite, sometimes polished; used as an ornamental stone since the Upper Palaeolithic
Sub-bituminous coal, whose properties range between those of lignite and those of bituminous coal, used primarily as fuel for steam-electric power generation
Bituminous coal, a dense sedimentary rock, usually black, but sometimes dark brown, often with well-defined bands of bright and dull material. It is used primarily as fuel in steam-electric power generation and to make coke.
Known as steam coal in the UK, and historically used to raise steam in steam locomotives and ships Anthracite coal, the highest rank of coal, is a harder, glossy black coal used primarily for residential and commercial space heating. Graphite is difficult to ignite and not commonly used as fuel; it is most used in pencils, or powdered for lubrication. Cannel coal (sometimes called "candle coal") is a variety of fine-grained, high-rank coal with significant hydrogen content, which consists primarily of liptinite. There are several international standards for coal. The classification of coal is generally based on the content of volatiles. However the most important distinction is between thermal coal (also known as steam coal), which is burnt to generate electricity via steam; and metallurgical coal (also known as coking coal), which is burnt at high temperature to make steel. Hilt's law is a geological observation that (within a small area) the deeper the coal is found, the higher its rank (or grade). It applies if the thermal gradient is entirely vertical; however, metamorphism may cause lateral changes of rank, irrespective of depth. For example, some of the coal seams of the Madrid, New Mexico coal field were partially converted to anthracite by contact metamorphism from an igneous sill while the remainder of the seams remained as bituminous coal. History The earliest recognized use is from the Shenyang area of China where by 4000 BC Neolithic inhabitants had begun carving ornaments from black lignite. Coal from the Fushun mine in northeastern China was used to smelt copper as early as 1000 BC. Marco Polo, the Italian who traveled to China in the 13th century, described coal as "black stones ... which burn like logs", and said coal was so plentiful, people could take three hot baths a week. In Europe, the earliest reference to the use of coal as fuel is from the geological treatise On Stones (Lap. 16) by the Greek scientist Theophrastus (c. 371–287 BC): Outcrop coal was used in Britain during the Bronze Age (3000–2000 BC), where it formed part of funeral pyres. In Roman Britain, with the exception of two modern fields, "the Romans were exploiting coals in all the major coalfields in England and Wales by the end of the second century AD". Evidence of trade in coal, dated to about AD 200, has been found at the Roman settlement at Heronbridge, near Chester; and in the Fenlands of East Anglia, where coal from the Midlands was transported via the Car Dyke for use in drying grain. Coal cinders have been found in the hearths of villas and Roman forts, particularly in Northumberland, dated to around AD 400. In the west of England, contemporary writers described the wonder of a permanent brazier of coal on the altar of Minerva at Aquae Sulis (modern day Bath), although in fact easily accessible surface coal from what became the Somerset coalfield was in common use in quite lowly dwellings locally. Evidence of coal's use for iron-working in the city during the Roman period has been found. In Eschweiler, Rhineland, deposits of bituminous coal were used by the Romans for the smelting of iron ore. No evidence exists of coal being of great importance in Britain before about AD 1000, the High Middle Ages. Coal came to be referred to as "seacoal" in the 13th century; the wharf where the material arrived in London was known as Seacoal Lane, so identified in a charter of King Henry III granted in 1253. 
Initially, the name was given because much coal was found on the shore, having fallen from the exposed coal seams on cliffs above or washed out of underwater coal outcrops, but by the time of Henry VIII, it was understood to derive from the way it was carried to London by sea. In 1257–1259, coal from Newcastle upon Tyne was shipped to London for the smiths and lime-burners building Westminster Abbey. Seacoal Lane and Newcastle Lane, where coal was unloaded at wharves along the River Fleet, still exist. These easily accessible sources had largely become exhausted (or could not meet the growing demand) by the 13th century, when underground extraction by shaft mining or adits was developed. The alternative name was "pitcoal", because it came from mines. Cooking and home heating with coal (in addition to firewood or instead of it) has been done in various times and places throughout human history, especially in times and places where ground-surface coal was available and firewood was scarce, but a widespread reliance on coal for home hearths probably never existed until such a switch in fuels happened in London in the late sixteenth and early seventeenth centuries. Historian Ruth Goodman has traced the socioeconomic effects of that switch and its later spread throughout Britain and suggested that its importance in shaping the industrial adoption of coal has been previously underappreciated. The development of the Industrial Revolution led to the large-scale use of coal, as the steam engine took over from the water wheel. In 1700, five-sixths of the world's coal was mined in Britain. Britain would have run out of suitable sites for watermills by the 1830s if coal had not been available as a source of energy. In 1947 there were some 750,000 miners in Britain but the last deep coal mine in the UK closed in 2015. A grade between bituminous coal and anthracite was once known as "steam coal" as it was widely used as a fuel for steam locomotives. In this specialized use, it is sometimes known as "sea coal" in the United States. Small "steam coal", also called dry small steam nuts (DSSN), was used as a fuel for domestic water heating. Coal played an important role in industry in the 19th and 20th century. The predecessor of the European Union, the European Coal and Steel Community, was based on the trading of this commodity. Coal continues to arrive on beaches around the world from both natural erosion of exposed coal seams and windswept spills from cargo ships. Many homes in such areas gather this coal as a significant, and sometimes primary, source of home heating fuel. Chemistry Composition The composition of coal is reported either as a proximate analysis (moisture, volatile matter, fixed carbon, and ash) or an ultimate analysis (ash, carbon, hydrogen, nitrogen, oxygen, and sulfur). The "volatile matter" does not exist by itself (except for some adsorbed methane) but designates the volatile compounds that are produced and driven off by heating the coal. A typical bituminous coal may have an ultimate analysis on a dry, ash-free basis of 84.4% carbon, 5.4% hydrogen, 6.7% oxygen, 1.7% nitrogen, and 1.8% sulfur, on a weight basis. The composition of ash, given in terms of oxides, varies: Other minor components include: Coking coal and use of coke to smelt iron Coke is a solid carbonaceous residue derived from coking coal (a low-ash, low-sulfur bituminous coal, also known as metallurgical coal), which is used in manufacturing steel and other iron products. 
Coke is made from coking coal by baking in an oven without oxygen at temperatures as high as 1,000 °C, driving off the volatile constituents and fusing together the fixed carbon and residual ash. Metallurgical coke is used as a fuel and as a reducing agent in smelting iron ore in a blast furnace. The carbon monoxide produced by its combustion reduces hematite (an iron oxide) to iron. Waste carbon dioxide is also produced together with pig iron, which is too rich in dissolved carbon so must be treated further to make steel. Coking coal should be low in ash, sulfur, and phosphorus, so that these do not migrate to the metal. The coke must be strong enough to resist the weight of overburden in the blast furnace, which is why coking coal is so important in making steel using the conventional route. Coke from coal is grey, hard, and porous and has a heating value of 29.6 MJ/kg. Some cokemaking processes produce byproducts, including coal tar, ammonia, light oils, and coal gas. Petroleum coke (petcoke) is the solid residue obtained in oil refining, which resembles coke but contains too many impurities to be useful in metallurgical applications. Use in foundry components Finely ground bituminous coal, known in this application as sea coal, is a constituent of foundry sand. While the molten metal is in the mould, the coal burns slowly, releasing reducing gases at pressure, and so preventing the metal from penetrating the pores of the sand. It is also contained in 'mould wash', a paste or liquid with the same function applied to the mould before casting. Sea coal can be mixed with the clay lining (the "bod") used for the bottom of a cupola furnace. When heated, the coal decomposes and the bod becomes slightly friable, easing the process of breaking open holes for tapping the molten metal. Alternatives to coke Scrap steel can be recycled in an electric arc furnace; and an alternative to making iron by smelting is direct reduced iron, where any carbonaceous fuel can be used to make sponge or pelletised iron. To lessen carbon dioxide emissions hydrogen can be used as the reducing agent and biomass or waste as the source of carbon. Historically, charcoal has been used as an alternative to coke in a blast furnace, with the resultant iron being known as charcoal iron. Gasification Coal gasification, as part of an integrated gasification combined cycle (IGCC) coal-fired power station, is used to produce syngas, a mixture of carbon monoxide (CO) and hydrogen (H2) gas to fire gas turbines to produce electricity. Syngas can also be converted into transportation fuels, such as gasoline and diesel, through the Fischer–Tropsch process; alternatively, syngas can be converted into methanol, which can be blended into fuel directly or converted to gasoline via the methanol to gasoline process. Gasification combined with Fischer–Tropsch technology was used by the Sasol chemical company of South Africa to make chemicals and motor vehicle fuels from coal. During gasification, the coal is mixed with oxygen and steam while also being heated and pressurized. During the reaction, oxygen and water molecules oxidize the coal into carbon monoxide (CO), while also releasing hydrogen gas (H2). This used to be done in underground coal mines, and also to make town gas, which was piped to customers to burn for illumination, heating, and cooking. 3C (as Coal) + O2 + H2O → H2 + 3CO If the refiner wants to produce gasoline, the syngas is routed into a Fischer–Tropsch reaction. This is known as indirect coal liquefaction. 
If hydrogen is the desired end-product, however, the syngas is fed into the water gas shift reaction, where more hydrogen is liberated: CO + H2O → CO2 + H2 Liquefaction Coal can be converted directly into synthetic fuels equivalent to gasoline or diesel by hydrogenation or carbonization. Coal liquefaction emits more carbon dioxide than liquid fuel production from crude oil. Mixing in biomass and using CCS would emit slightly less than the oil process but at a high cost. State owned China Energy Investment runs a coal liquefaction plant and plans to build 2 more. Coal liquefaction may also refer to the cargo hazard when shipping coal. Production of chemicals Chemicals have been produced from coal since the 1950s. Coal can be used as a feedstock in the production of a wide range of chemical fertilizers and other chemical products. The main route to these products was coal gasification to produce syngas. Primary chemicals that are produced directly from the syngas include methanol, hydrogen and carbon monoxide, which are the chemical building blocks from which a whole spectrum of derivative chemicals are manufactured, including olefins, acetic acid, formaldehyde, ammonia, urea and others. The versatility of syngas as a precursor to primary chemicals and high-value derivative products provides the option of using coal to produce a wide range of commodities. In the 21st century, however, the use of coal bed methane is becoming more important. Because the slate of chemical products that can be made via coal gasification can in general also use feedstocks derived from natural gas and petroleum, the chemical industry tends to use whatever feedstocks are most cost-effective. Therefore, interest in using coal tended to increase for higher oil and natural gas prices and during periods of high global economic growth that might have strained oil and gas production. Coal to chemical processes require substantial quantities of water. Much coal to chemical production is in China where coal dependent provinces such as Shanxi are struggling to control its pollution. Electricity generation Energy density The energy density of coal is roughly 24 megajoules per kilogram (approximately 6.7 kilowatt-hours per kg). For a coal power plant with a 40% efficiency, it takes an estimated of coal to power a 100 W lightbulb for one year. 27.6% of world energy was supplied by coal in 2017 and Asia used almost three-quarters of it. Precombustion treatment Refined coal is the product of a coal-upgrading technology that removes moisture and certain pollutants from lower-rank coals such as sub-bituminous and lignite (brown) coals. It is one form of several precombustion treatments and processes for coal that alter coal's characteristics before it is burned. Thermal efficiency improvements are achievable by improved pre-drying (especially relevant with high-moisture fuel such as lignite or biomass). The goals of precombustion coal technologies are to increase efficiency and reduce emissions when the coal is burned. Precombustion technology can sometimes be used as a supplement to postcombustion technologies to control emissions from coal-fueled boilers. Power plant combustion Coal burnt as a solid fuel in coal power stations to generate electricity is called thermal coal. Coal is also used to produce very high temperatures through combustion. Early deaths due to air pollution have been estimated at 200 per GW-year, however they may be higher around power plants where scrubbers are not used or lower if they are far from cities. 
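Returning to the energy-density figures quoted at the start of this subsection, they can be sanity-checked with a short calculation. The sketch below (Python, illustrative only) uses the article's own round numbers, roughly 24 MJ/kg of coal, a 40% plant efficiency and a 100 W load, together with standard unit conversions; it recomputes the order of magnitude rather than reproducing any official figure.

```python
# Back-of-the-envelope check of the energy-density figures quoted above.
# Assumed inputs (from the text): ~24 MJ/kg coal, ~40% plant efficiency, 100 W load.

MJ_PER_KWH = 3.6                  # 1 kWh = 3.6 MJ (exact conversion)
coal_energy_mj_per_kg = 24.0      # energy density of coal
plant_efficiency = 0.40           # thermal-to-electric conversion efficiency
load_w = 100.0                    # lightbulb power draw
hours_per_year = 365 * 24         # 8,760 hours

electricity_kwh = load_w / 1000.0 * hours_per_year           # ~876 kWh delivered
thermal_kwh = electricity_kwh / plant_efficiency              # ~2,190 kWh of heat needed
coal_kg = thermal_kwh * MJ_PER_KWH / coal_energy_mj_per_kg    # mass of coal burned

print(f"Energy density: {coal_energy_mj_per_kg / MJ_PER_KWH:.1f} kWh/kg")   # ~6.7 kWh/kg
print(f"Coal required:  {coal_kg:.0f} kg per lightbulb-year")                # ~330 kg
```

The result, a few hundred kilograms of coal per lightbulb-year, is consistent with the roughly 6.7 kWh/kg quoted above.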
Efforts around the world to reduce the use of coal have led some regions to switch to natural gas and electricity from lower carbon sources. When coal is used for electricity generation, it is usually pulverized and then burned in a furnace with a boiler (see also Pulverized coal-fired boiler). The furnace heat converts boiler water to steam, which is then used to spin turbines which turn generators and create electricity. The thermodynamic efficiency of this process varies between about 25% and 50% depending on the pre-combustion treatment, turbine technology (e.g. supercritical steam generator) and the age of the plant. A few integrated gasification combined cycle (IGCC) power plants have been built, which burn coal more efficiently. Instead of pulverizing the coal and burning it directly as fuel in the steam-generating boiler, the coal is gasified to create syngas, which is burned in a gas turbine to produce electricity (just like natural gas is burned in a turbine). Hot exhaust gases from the turbine are used to raise steam in a heat recovery steam generator which powers a supplemental steam turbine. The overall plant efficiency when used to provide combined heat and power can reach as much as 94%. IGCC power plants emit less local pollution than conventional pulverized coal-fueled plants; however the technology for carbon capture and storage (CCS) after gasification and before burning has so far proved to be too expensive to use with coal. Other ways to use coal are as coal-water slurry fuel (CWS), which was developed in the Soviet Union, or in an MHD topping cycle. However these are not widely used due to lack of profit. In 2017 38% of the world's electricity came from coal, the same percentage as 30 years previously. In 2018 global installed capacity was 2TW (of which 1TW is in China) which was 30% of total electricity generation capacity. The most dependent major country is South Africa, with over 80% of its electricity generated by coal; but China alone generates more than half of the world's coal-generated electricity. Maximum use of coal was reached in 2013. In 2018 coal-fired power station capacity factor averaged 51%, that is they operated for about half their available operating hours. Coal industry Mining About 8000 Mt of coal are produced annually, about 90% of which is hard coal and 10% lignite. just over half is from underground mines. More accidents occur during underground mining than surface mining. Not all countries publish mining accident statistics so worldwide figures are uncertain, but it is thought that most deaths occur in coal mining accidents in China: in 2017 there were 375 coal mining related deaths in China. Most coal mined is thermal coal (also called steam coal as it is used to make steam to generate electricity) but metallurgical coal (also called "metcoal" or "coking coal" as it is used to make coke to make iron) accounts for 10% to 15% of global coal use. As a traded commodity China mines almost half the world's coal, followed by India with about a tenth. Australia accounts for about a third of world coal exports, followed by Indonesia and Russia, while the largest importers are Japan and India. The price of metallurgical coal is volatile and much higher than the price of thermal coal because metallurgical coal must be lower in sulfur and requires more cleaning. Coal futures contracts provide coal producers and the electric power industry an important tool for hedging and risk management. 
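The capacity-factor figure quoted earlier (an average of about 51% in 2018) relates installed capacity to the electricity actually produced. A minimal sketch of that relationship is shown below; the 2 TW of installed capacity and the 51% figure are the article's round numbers, and the resulting annual generation is only an order-of-magnitude cross-check, not an official statistic.

```python
# Capacity factor = electricity actually generated / (installed capacity x hours).
# Assumed round figures from the text: ~2 TW installed, ~51% average capacity factor.

installed_capacity_gw = 2000.0    # about 2 TW of coal-fired capacity
capacity_factor = 0.51            # average utilisation of that capacity
hours_per_year = 8760

# Annual generation implied if the whole fleet ran at the average capacity factor.
generation_twh = installed_capacity_gw * capacity_factor * hours_per_year / 1000.0
print(f"Implied coal generation: ~{generation_twh:,.0f} TWh per year")
```

The implied figure, on the order of 9,000 TWh a year, is broadly consistent with the scale of global coal-fired generation discussed later in the article.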
In some countries new onshore wind or solar generation already costs less than coal power from existing plants. However, for China this is forecast for the early 2020s and for southeast Asia not until the late 2020s. In India building new plants is uneconomic and, despite being subsidized, existing plants are losing market share to renewables. Market trends Of the countries which produce coal China mines by far the most, almost half the world's coal, followed by less than 10% by India. China is also by far the largest consumer. Therefore, market trends depend on Chinese energy policy. Although the effort to reduce pollution means that the global long-term trend is to burn less coal, the short and medium term trends may differ, in part due to Chinese financing of new coal-fired power plants in other countries. Major producers Countries with annual production higher than 300 million tonnes are shown. Major consumers Countries with annual consumption higher than 500 million tonnes are shown. Shares are based on data expressed in tonnes oil equivalent. Major exporters Exporters are at risk of a reduction in import demand from India and China. Major importers Damage to human health The use of coal as fuel causes ill health and deaths. Mining and processing of coal causes air and water pollution. Coal-powered plants emit nitrogen oxides, sulfur dioxide, particulate pollution and heavy metals, which adversely affect human health. Coal bed methane extraction is important to avoid mining accidents. The deadly London smog was caused primarily by the heavy use of coal. Globally coal is estimated to cause 800,000 premature deaths every year, mostly in India and China. Burning coal is a major emitter of sulfur dioxide, which creates PM2.5 particulates, the most dangerous form of air pollution. Coal smokestack emissions cause asthma, strokes, reduced intelligence, artery blockages, heart attacks, congestive heart failure, cardiac arrhythmias, mercury poisoning, arterial occlusion, and lung cancer. Annual health costs in Europe from use of coal to generate electricity are estimated at up to €43 billion. In China, improvements to air quality and human health would increase with more stringent climate policies, mainly because the country's energy is so heavily reliant on coal. And there would be a net economic benefit. A 2017 study in the Economic Journal found that for Britain during the period 1851–1860, "a one standard deviation increase in coal use raised infant mortality by 6–8% and that industrial coal use explains roughly one-third of the urban mortality penalty observed during this period." Breathing in coal dust causes coalworker's pneumoconiosis or "black lung", so called because the coal dust literally turns the lungs black from their usual pink color. In the US alone, it is estimated that 1,500 former employees of the coal industry die every year from the effects of breathing in coal mine dust. Huge amounts of coal ash and other waste is produced annually. Use of coal generates hundreds of millions of tons of ash and other waste products every year. These include fly ash, bottom ash, and flue-gas desulfurization sludge, that contain mercury, uranium, thorium, arsenic, and other heavy metals, along with non-metals such as selenium. Around 10% of coal is ash: coal ash is hazardous and toxic to human beings and some other living things. Coal ash contains the radioactive elements uranium and thorium. 
Coal ash and other solid combustion byproducts are stored locally and escape in various ways that expose those living near coal plants to radiation and environmental toxics. Damage to the environment Coal mining, coal combustion wastes and flue gas are causing major environmental damage. Water systems are affected by coal mining. For example, mining affects groundwater and water table levels and acidity. Spills of fly ash, such as the Kingston Fossil Plant coal fly ash slurry spill, can also contaminate land and waterways, and destroy homes. Power stations that burn coal also consume large quantities of water. This can affect the flows of rivers, and has consequential impacts on other land uses. In areas of water scarcity, such as the Thar Desert in Pakistan, coal mining and coal power plants would use significant quantities of water. One of the earliest known impacts of coal on the water cycle was acid rain. In 2014 approximately 100 Tg/S of sulfur dioxide (SO2) was released, over half of which was from burning coal. After release, the sulfur dioxide is oxidized to H2SO4 which scatters solar radiation, hence its increase in the atmosphere exerts a cooling effect on climate. This beneficially masks some of the warming caused by increased greenhouse gases. However, the sulfur is precipitated out of the atmosphere as acid rain in a matter of weeks, whereas carbon dioxide remains in the atmosphere for hundreds of years. Release of SO2 also contributes to the widespread acidification of ecosystems. Disused coal mines can also cause issues. Subsidence can occur above tunnels, causing damage to infrastructure or cropland. Coal mining can also cause long lasting fires, and it has been estimated that thousands of coal seam fires are burning at any given time. For example, Brennender Berg has been burning since 1668 and is still burning in the 21st century. The production of coke from coal produces ammonia, coal tar, and gaseous compounds as byproducts which if discharged to land, air or waterways can pollute the environment. The Whyalla steelworks is one example of a coke producing facility where liquid ammonia was discharged to the marine environment. Emission intensity Emission intensity is the greenhouse gas emitted over the life of a generator per unit of electricity generated. The emission intensity of coal power stations is high, as they emit around 1000 g of CO2eq for each kWh generated, while natural gas is medium-emission intensity at around 500 g CO2eq per kWh. The emission intensity of coal varies with type and generator technology and exceeds 1200 g per kWh in some countries. Underground fires Thousands of coal fires are burning around the world. Those burning underground can be difficult to locate and many cannot be extinguished. Fires can cause the ground above to subside, their combustion gases are dangerous to life, and breaking out to the surface can initiate surface wildfires. Coal seams can be set on fire by spontaneous combustion or contact with a mine fire or surface fire. Lightning strikes are an important source of ignition. The coal continues to burn slowly back into the seam until oxygen (air) can no longer reach the flame front. A grass fire in a coal area can set dozens of coal seams on fire. Coal fires in China burn an estimated 120 million tons of coal a year, emitting 360 million metric tons of CO2, amounting to 2–3% of the annual worldwide production of CO2 from fossil fuels. 
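The carbon arithmetic behind the coal-fire and emission-intensity figures above can be reproduced from the molar masses of carbon and CO2. The sketch below is a rough consistency check rather than a life-cycle emission calculation: the 84.4% carbon content is the dry, ash-free bituminous analysis quoted earlier in the article, and as-burned coal (with moisture and ash) carries somewhat less carbon per kilogram, which is one reason real-world intensities cluster nearer 1,000 g CO2 per kWh.

```python
# Rough carbon accounting for burning coal: C + O2 -> CO2, so each kilogram of
# carbon yields about 44.01/12.01, roughly 3.66 kg, of CO2.

CO2_PER_C = 44.01 / 12.01         # kg of CO2 per kg of carbon burned
carbon_fraction = 0.844           # typical bituminous coal, dry ash-free (from the text)

co2_per_kg_coal = carbon_fraction * CO2_PER_C                 # about 3.1 kg CO2 per kg coal
print(f"CO2 per tonne of coal burned: ~{co2_per_kg_coal:.1f} t")

# Cross-check against the coal-fire figures above (120 Mt coal -> ~360 Mt CO2).
print(f"120 Mt of coal -> ~{120 * co2_per_kg_coal:.0f} Mt CO2")

# Combustion-only emission intensity if the same coal is burned in a power plant,
# using the ~24 MJ/kg energy density and ~40% efficiency quoted earlier.
kwh_electric_per_kg = 24.0 * 0.40 / 3.6                       # about 2.7 kWh per kg of coal
print(f"Emission intensity: ~{1000 * co2_per_kg_coal / kwh_electric_per_kg:.0f} g CO2/kWh")
```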
In Centralia, Pennsylvania (a borough located in the Coal Region of the U.S.), an exposed vein of anthracite ignited in 1962 due to a trash fire in the borough landfill, located in an abandoned anthracite strip mine pit. Attempts to extinguish the fire were unsuccessful, and it continues to burn underground to this day. The Australian Burning Mountain was originally believed to be a volcano, but the smoke and ash come from a coal fire that has been burning for some 6,000 years. At Kuh i Malik in Yagnob Valley, Tajikistan, coal deposits have been burning for thousands of years, creating vast underground labyrinths full of unique minerals, some of them very beautiful. The reddish siltstone rock that caps many ridges and buttes in the Powder River Basin in Wyoming and in western North Dakota is called porcelanite, which resembles the coal burning waste "clinker" or volcanic "scoria". Clinker is rock that has been fused by the natural burning of coal. In the Powder River Basin approximately 27 to 54 billion tons of coal burned within the past three million years. Wild coal fires in the area were reported by the Lewis and Clark Expedition as well as explorers and settlers in the area. Climate change The largest and most long-term effect of coal use is the release of carbon dioxide, a greenhouse gas that causes climate change. Coal-fired power plants were the single largest contributor to the growth in global CO2 emissions in 2018, 40% of the total fossil fuel emissions, and more than a quarter of total emissions. Coal mining can emit methane, another greenhouse gas. In 2016 world gross carbon dioxide emissions from coal usage were 14.5 gigatonnes. For every megawatt-hour generated, coal-fired electric power generation emits around a tonne of carbon dioxide, which is double the approximately 500 kg of carbon dioxide released by a natural gas-fired electric plant. In 2013, the head of the UN climate agency advised that most of the world's coal reserves should be left in the ground to avoid catastrophic global warming. To keep global warming below 1.5 °C or 2 °C hundreds, or possibly thousands, of coal-fired power plants will need to be retired early. Pollution mitigation Standards Local pollution standards include GB13223-2011 (China), India, the Industrial Emissions Directive (EU) and the Clean Air Act (United States). Satellite monitoring Satellite monitoring is now used to crosscheck national data, for example Sentinel-5 Precursor has shown that Chinese control of SO2 has only been partially successful. It has also revealed that low use of technology such as SCR has resulted in high NO2 emissions in South Africa and India. Combined cycle power plants A few Integrated gasification combined cycle (IGCC) coal-fired power plants have been built with coal gasification. Although they burn coal more efficiently and therefore emit less pollution, the technology has not generally proved economically viable for coal, except possibly in Japan although this is controversial. Carbon capture and storage Although still being intensively researched and considered economically viable for some uses other than with coal; carbon capture and storage has been tested at the Petra Nova and Boundary Dam coal-fired power plants and has been found to be technically feasible but not economically viable for use with coal, due to reductions in the cost of solar PV technology. Economics In 2018 was invested in coal supply but almost all for sustaining production levels rather than opening new mines. 
In the long term coal and oil could cost the world trillions of dollars per year. Coal alone may cost Australia billions, whereas costs to some smaller companies or cities could be on the scale of millions of dollars. The economies most damaged by coal (via climate change) may be India and the US as they are the countries with the highest social cost of carbon. Bank loans to finance coal are a risk to the Indian economy. China is the largest producer of coal in the world. It is the world's largest energy consumer, and coal in China supplies 60% of its primary energy. However two fifths of China's coal power stations are estimated to be loss-making. Air pollution from coal storage and handling costs the US almost 200 dollars for every extra ton stored, due to PM2.5. Coal pollution costs the each year. Measures to cut air pollution benefit individuals financially and the economies of countries such as China. Subsidies Subsidies for coal in 2021 have been estimated at , not including electricity subsidies, and are expected to rise in 2022. G20 countries provide at least of government support per year for the production of coal, including coal-fired power: many subsidies are impossible to quantify but they include in domestic and international public finance, in fiscal support, and in state-owned enterprise (SOE) investments per year. In the EU state aid to new coal-fired plants is banned from 2020, and to existing coal-fired plants from 2025. As of 2018, government funding for new coal power plants was supplied by Exim Bank of China, the Japan Bank for International Cooperation and Indian public sector banks. Coal in Kazakhstan was the main recipient of coal consumption subsidies totalling US$2 billion in 2017. Coal in Turkey benefited from substantial subsidies in 2021. Stranded assets Some coal-fired power stations could become stranded assets, for example China Energy Investment, the world's largest power company, risks losing half its capital. However, state-owned electricity utilities such as Eskom in South Africa, Perusahaan Listrik Negara in Indonesia, Sarawak Energy in Malaysia, Taipower in Taiwan, EGAT in Thailand, Vietnam Electricity and EÜAŞ in Turkey are building or planning new plants. As of 2021 this may be helping to cause a carbon bubble which could cause financial instability if it bursts. Politics Countries building or financing new coal-fired power stations, such as China, India, Indonesia, Vietnam, Turkey and Bangladesh, face mounting international criticism for obstructing the aims of the Paris Agreement. In 2019, the Pacific Island nations (in particular Vanuatu and Fiji) criticized Australia for failing to cut their emissions at a faster rate than they were, citing concerns about coastal inundation and erosion. In May 2021, the G7 members agreed to end new direct government support for international coal power generation. Opposition to coal Opposition to coal pollution was one of the main reasons the modern environmental movement started in the 19th century. Transition away from coal In order to meet global climate goals and provide power to those that do not currently have it coal power must be reduced from nearly 10,000 TWh to less than 2,000 TWh by 2040. Phasing out coal has short-term health and environmental benefits which exceed the costs, but some countries still favor coal, and there is much disagreement about how quickly it should be phased out. 
However, many countries, such as members of the Powering Past Coal Alliance, have already transitioned away from coal or are in the process of doing so; the largest such transition announced so far is Germany's, which is due to shut down its last coal-fired power station between 2035 and 2038. Some countries use the ideas of a "Just Transition", for example to use some of the benefits of transition to provide early pensions for coal miners. However, low-lying Pacific Islands are concerned the transition is not fast enough and that they will be inundated by sea level rise, so they have called for OECD countries to completely phase out coal by 2030 and other countries by 2040. In 2020, although China built some plants, globally more coal power was retired than built: the UN Secretary General has also said that OECD countries should stop generating electricity from coal by 2030 and the rest of the world by 2040. Phasing down coal was agreed at COP26 in the Glasgow Climate Pact.
Peak coal
Switch to cleaner fuels and lower carbon electricity generation
Coal-fired generation puts out about twice as much carbon dioxide (around a tonne for every megawatt hour generated) as electricity generated by burning natural gas, at around 500 kg of greenhouse gas per megawatt hour. In addition to generating electricity, natural gas is also popular in some countries for heating and as an automotive fuel. The use of coal in the United Kingdom declined as a result of the development of North Sea oil and the subsequent dash for gas during the 1990s. In Canada some coal power plants, such as the Hearn Generating Station, switched from coal to natural gas. In 2017, coal power in the US provided 30% of the electricity, down from approximately 49% in 2008, due to plentiful supplies of low-cost natural gas obtained by hydraulic fracturing of tight shale formations.
Coal regions in transition
Some coal-mining regions are highly dependent on coal.
Employment
Some coal miners are concerned their jobs may be lost in the transition. A just transition from coal is supported by the European Bank for Reconstruction and Development.
Bioremediation
The white rot fungus Trametes versicolor can grow on and metabolize naturally occurring coal. The bacterium Diplococcus has been found to degrade coal, raising its temperature.
Cultural usage
Coal is the official state mineral of Kentucky and the official state rock of Utah; both US states have a historic link to coal mining. Some cultures hold that children who misbehave will receive only a lump of coal from Santa Claus for Christmas in their stockings instead of presents. It is also customary, and considered lucky, in Scotland and the North of England to give coal as a gift on New Year's Day. This occurs as part of first-footing and represents warmth for the year to come.
External links
Coal Transitions
World Coal Association
Coal – International Energy Agency
Coal Online – International Energy Agency
CoalExit
European Association for Coal and Lignite
Coal news and industry magazine
Global Coal Plant Tracker
Centre for Research on Energy and Clean Air
https://en.wikipedia.org/wiki/Climate
Climate
Climate is the long-term weather pattern in a region, typically averaged over 30 years. More rigorously, it is the mean and variability of meteorological variables over a time spanning from months to millions of years. Some of the meteorological variables that are commonly measured are temperature, humidity, atmospheric pressure, wind, and precipitation. In a broader sense, climate is the state of the components of the climate system, including the atmosphere, hydrosphere, cryosphere, lithosphere and biosphere and the interactions between them. The climate of a location is affected by its latitude, longitude, terrain, altitude, land use and nearby water bodies and their currents. Climates can be classified according to the average and typical variables, most commonly temperature and precipitation. The most widely used classification scheme was the Köppen climate classification. The Thornthwaite system, in use since 1948, incorporates evapotranspiration along with temperature and precipitation information and is used in studying biological diversity and how climate change affects it. Finally, the Bergeron and Spatial Synoptic Classification systems focus on the origin of air masses that define the climate of a region. Paleoclimatology is the study of ancient climates. Paleoclimatologists seek to explain climate variations for all parts of the Earth during any given geologic period, beginning with the time of the Earth's formation. Since very few direct observations of climate were available before the 19th century, paleoclimates are inferred from proxy variables. They include non-biotic evidence—such as sediments found in lake beds and ice cores—and biotic evidence—such as tree rings and coral. Climate models are mathematical models of past, present, and future climates. Climate change may occur over long and short timescales from various factors. Recent warming is discussed in global warming, which results in redistributions. For example, "a 3 °C [5 °F] change in mean annual temperature corresponds to a shift in isotherms of approximately in latitude (in the temperate zone) or in elevation. Therefore, species are expected to move upwards in elevation or towards the poles in latitude in response to shifting climate zones." Definition Climate () is commonly defined as the weather averaged over a long period. The standard averaging period is 30 years, but other periods may be used depending on the purpose. Climate also includes statistics other than the average, such as the magnitudes of day-to-day or year-to-year variations. The Intergovernmental Panel on Climate Change (IPCC) 2001 glossary definition is as follows: The World Meteorological Organization (WMO) describes "climate normals" as "reference points used by climatologists to compare current climatological trends to that of the past or what is considered typical. A climate normal is defined as the arithmetic average of a climate element (e.g. temperature) over a 30-year period. A 30-year period is used as it is long enough to filter out any interannual variation or anomalies such as El Niño–Southern Oscillation, but also short enough to be able to show longer climatic trends." The WMO originated from the International Meteorological Organization which set up a technical commission for climatology in 1929. At its 1934 Wiesbaden meeting, the technical commission designated the thirty-year period from 1901 to 1930 as the reference time frame for climatological standard normals. 
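In computational terms, a climate normal as defined above is simply an arithmetic mean over a 30-year reference window, and current conditions are then reported as anomalies against it. The sketch below illustrates that bookkeeping; the station temperatures in it are made-up placeholder values, not observations from any real record.

```python
# A climate normal is the arithmetic average of a climate element over a 30-year
# reference period; an anomaly is a new observation minus that normal.
# All temperature values below are illustrative placeholders, not real data.

def climate_normal(annual_values):
    """Arithmetic mean of the annual values (ideally 30 of them) for one element."""
    return sum(annual_values) / len(annual_values)

# Hypothetical mean annual temperatures (deg C) for a 1961-1990 reference period.
reference_years = [14.1 + 0.01 * i for i in range(30)]   # placeholder 30-year series

normal = climate_normal(reference_years)
observation = 15.0                                        # placeholder recent-year value

print(f"1961-1990 normal: {normal:.2f} C")
print(f"Anomaly of the new observation: {observation - normal:+.2f} C")
```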
In 1982, the WMO agreed to update climate normals, and these were subsequently completed on the basis of climate data from 1 January 1961 to 31 December 1990. The 1961–1990 climate normals serve as the baseline reference period. The next set of climate normals to be published by the WMO covers the period from 1991 to 2020. Aside from the most common atmospheric variables (air temperature, pressure, precipitation and wind), other variables such as humidity, visibility, cloud amount, solar radiation, soil temperature, pan evaporation rate, days with thunder and days with hail are also collected to measure change in climate conditions.
The difference between climate and weather is usefully summarized by the popular phrase "Climate is what you expect, weather is what you get." Over historical time spans, there are a number of nearly constant variables that determine climate, including latitude, altitude, proportion of land to water, and proximity to oceans and mountains. All of these variables change only over periods of millions of years due to processes such as plate tectonics. Other climate determinants are more dynamic: the thermohaline circulation of the ocean leads to a warming of the northern Atlantic Ocean compared to other ocean basins. Other ocean currents redistribute heat between land and water on a more regional scale. The density and type of vegetation coverage affect solar heat absorption, water retention, and rainfall on a regional level. Alterations in the quantity of atmospheric greenhouse gases determine the amount of solar energy retained by the planet, leading to global warming or global cooling. The variables which determine climate are numerous and the interactions complex, but there is general agreement that the broad outlines are understood, at least insofar as the determinants of historical climate change are concerned.
Climate classification
Climate classifications are systems that categorize the world's climates. A climate classification may correlate closely with a biome classification, as climate is a major influence on life in a region. One of the most widely used is the Köppen climate classification scheme, first developed in 1899.
There are several ways to classify climates into similar regimes. Originally, climes were defined in Ancient Greece to describe the weather depending upon a location's latitude. Modern climate classification methods can be broadly divided into genetic methods, which focus on the causes of climate, and empiric methods, which focus on the effects of climate. Examples of genetic classification include methods based on the relative frequency of different air mass types or locations within synoptic weather disturbances. Examples of empiric classifications include climate zones defined by plant hardiness, evapotranspiration, or more generally the Köppen climate classification, which was originally designed to identify the climates associated with certain biomes. A common shortcoming of these classification schemes is that they produce distinct boundaries between the zones they define, rather than the gradual transition of climate properties more common in nature.
Record
Paleoclimatology
Paleoclimatology is the study of past climate over a great period of the Earth's history. It uses evidence with different time scales (from decades to millennia) from ice sheets, tree rings, sediments, pollen, coral, and rocks to determine the past state of the climate.
It demonstrates periods of stability and periods of change and can indicate whether changes follow patterns such as regular cycles. Modern Details of the modern climate record are known through the taking of measurements from such weather instruments as thermometers, barometers, and anemometers during the past few centuries. The instruments used to study weather over the modern time scale, their observation frequency, their known error, their immediate environment, and their exposure have changed over the years, which must be considered when studying the climate of centuries past. Long-term modern climate records skew towards population centres and affluent countries. Since the 1960s, the launch of satellites allow records to be gathered on a global scale, including areas with little to no human presence, such as the Arctic region and oceans. Climate variability Climate variability is the term to describe variations in the mean state and other characteristics of climate (such as chances or possibility of extreme weather, etc.) "on all spatial and temporal scales beyond that of individual weather events." Some of the variability does not appear to be caused systematically and occurs at random times. Such variability is called random variability or noise. On the other hand, periodic variability occurs relatively regularly and in distinct modes of variability or climate patterns. There are close correlations between Earth's climate oscillations and astronomical factors (barycenter changes, solar variation, cosmic ray flux, cloud albedo feedback, Milankovic cycles), and modes of heat distribution between the ocean-atmosphere climate system. In some cases, current, historical and paleoclimatological natural oscillations may be masked by significant volcanic eruptions, impact events, irregularities in climate proxy data, positive feedback processes or anthropogenic emissions of substances such as greenhouse gases. Over the years, the definitions of climate variability and the related term climate change have shifted. While the term climate change now implies change that is both long-term and of human causation, in the 1960s the word climate change was used for what we now describe as climate variability, that is, climatic inconsistencies and anomalies. Climate change Climate change is the variation in global or regional climates over time. It reflects changes in the variability or average state of the atmosphere over time scales ranging from decades to millions of years. These changes can be caused by processes internal to the Earth, external forces (e.g. variations in sunlight intensity) or, more recently, human activities. In recent usage, especially in the context of environmental policy, the term "climate change" often refers only to changes in modern climate, including the rise in average surface temperature known as global warming. In some cases, the term is also used with a presumption of human causation, as in the United Nations Framework Convention on Climate Change (UNFCCC). The UNFCCC uses "climate variability" for non-human caused variations. Earth has undergone periodic climate shifts in the past, including four major ice ages. These consisting of glacial periods where conditions are colder than normal, separated by interglacial periods. The accumulation of snow and ice during a glacial period increases the surface albedo, reflecting more of the Sun's energy into space and maintaining a lower atmospheric temperature. 
Increases in greenhouse gases, such as by volcanic activity, can increase the global temperature and produce an interglacial period. Suggested causes of ice age periods include the positions of the continents, variations in the Earth's orbit, changes in the solar output, and volcanism. However, these naturally-caused changes in climate occur on a much slower time scale than the present rate of change which is caused by the emission of greenhouse gases by human activities. Climate models Climate models use quantitative methods to simulate the interactions and transfer of radiative energy between the atmosphere, oceans, land surface and ice through a series of physics equations. They are used for a variety of purposes; from the study of the dynamics of the weather and climate system, to projections of future climate. All climate models balance, or very nearly balance, incoming energy as short wave (including visible) electromagnetic radiation to the Earth with outgoing energy as long wave (infrared) electromagnetic radiation from the earth. Any imbalance results in a change in the average temperature of the earth. Climate models are available on different resolutions ranging from >100 km to 1 km. High resolutions in global climate models are computational very demanding and only few global datasets exists. Global climate models can be dynamically or statistically downscaled to regional climate models to analyze impacts of climate change on a local scale. Examples are ICON or mechanistically downscaled data such as CHELSA (Climatologies at high resolution for the earth's land surface areas). The most talked-about applications of these models in recent years have been their use to infer the consequences of increasing greenhouse gases in the atmosphere, primarily carbon dioxide (see greenhouse gas). These models predict an upward trend in the global mean surface temperature, with the most rapid increase in temperature being projected for the higher latitudes of the Northern Hemisphere. Models can range from relatively simple to quite complex: Simple radiant heat transfer model that treats the earth as a single point and averages outgoing energy this can be expanded vertically (radiative-convective models), or horizontally finally, (coupled) atmosphere–ocean–sea ice global climate models discretise and solve the full equations for mass and energy transfer and radiant exchange. See also Climate inertia Climate Prediction Center Climatic map Climograph Ecosystem Effect of Sun angle on climate Greenhouse effect List of climate scientists List of weather records Microclimate National Climatic Data Center Outline of meteorology Tectonic–climatic interaction References Sources . AR5 Climate Change 2013: The Physical Science Basis — IPCC Further reading The Study of Climate on Alien Worlds; Characterizing atmospheres beyond our Solar System is now within our reach Kevin Heng July–August 2012 American Scientist Reumert, Johannes: "Vahls climatic divisions. An explanation" (Geografisk Tidsskrift, Band 48; 1946) External links NOAA Climate Services Portal NOAA State of the Climate NASA's Climate change and global warming portal Climate Prediction Project Climate index and mode information – Arctic Climate: Data and charts for world and US locations IPCC Data Distribution Centre – Climate data and guidance on use. HistoricalClimatology.com – Past, present and future climates – 2013. Globalclimatemonitor – Contains climatic information from 1901. 
ClimateCharts – Web application to generate climate charts for recent and historical data. International Disaster Database Paris Climate Conference
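The simplest class of model described in the climate models section above, a radiant heat transfer model that treats the Earth as a single point, can be sketched numerically. The following is a minimal zero-dimensional illustration rather than any published model; the solar constant, albedo and effective emissivity values, and the function name equilibrium_temperature, are assumptions chosen only for this sketch.

    # Minimal zero-dimensional (single-point) energy balance sketch.
    # Absorbed short-wave:  (S0 / 4) * (1 - albedo)
    # Emitted long-wave:    emissivity * sigma * T**4   (Stefan-Boltzmann law)
    # The equilibrium temperature is where the two terms balance.
    SIGMA = 5.670e-8       # Stefan-Boltzmann constant, W m^-2 K^-4
    S0 = 1361.0            # assumed solar constant, W m^-2
    ALBEDO = 0.30          # assumed planetary albedo
    EMISSIVITY = 0.612     # assumed effective emissivity, crudely standing in for the greenhouse effect

    def equilibrium_temperature(s0=S0, albedo=ALBEDO, emissivity=EMISSIVITY):
        """Temperature (K) at which emitted long-wave equals absorbed short-wave."""
        absorbed = (s0 / 4.0) * (1.0 - albedo)
        return (absorbed / (emissivity * SIGMA)) ** 0.25

    t = equilibrium_temperature()
    print(f"Equilibrium temperature: {t:.1f} K ({t - 273.15:.1f} deg C)")

As noted above, any imbalance between the absorbed and emitted terms changes the average temperature until a new equilibrium is reached; more complex models expand this balance vertically and horizontally.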
https://en.wikipedia.org/wiki/History%20of%20the%20Comoros
History of the Comoros
The history of the Comoros extends to about 800–1000 AD when the archipelago was first inhabited. The Comoros have been inhabited by various groups throughout this time. France colonised the islands in the 19th century, and they became independent in 1975. Early inhabitants There is uncertainty about the early population of Comoros. According to one study of early crops, the islands may have been settled first by South East Asian sailors the same way Madagascar was. This influx of Austronesian sailors, who had earlier settled nearby Madagascar, arrived in the 8th to 13 centuries CE. They are the source for the earliest archeological evidence of farming in the islands. Crops from archeological sites in Sima are predominantly rice strains of both indica and japonica varieties from Southeast Asia, as well as various other Asian crops like mung bean and cotton. Only a minority of the examined crops were African-derived, like finger millet, African sorghum, and cowpea. The Comoros are believed to be the first site of contact and subsequent admixture between African and Asian populations (earlier than Madagascar). Comorians today still display at most 20% Austronesian admixture. From around the 15th century AD, Shirazi slave traders established trading ports and brought in slaves from the mainland. In the 16th century, social changes on the East African coast probably linked to the arrival of the Portuguese saw the arrival of a number of Arabs of Hadrami who established alliances with the Shirazis and founded several royal clans. Over the centuries, the Comoros have been settled by a succession of diverse groups from the coast of Africa, the Persian Gulf, Southeast Asia and Madagascar. Europeans Portuguese explorers first visited the archipelago in 1505. Apart from a visit by the French Parmentier brothers in 1529, for much of the 16th century the only Europeans to visit the islands were Portuguese. British and Dutch ships began arriving around the start of the 17th century and the island of Ndzwani soon became a major supply point on the route to the East Indies. Ndzwani was generally ruled by a single sultan, who occasionally attempted to extend his authority to Mayotte and Mwali; Ngazidja was more fragmented, on occasion being divided into as many as 12 small kingdoms. Sir James Lancaster's voyage to the Indian Ocean in 1591 was the first attempt by the English to break into the spice trade, which was dominated by the Portuguese. Only one of his four ships made it back from the Indies on that voyage, and that one with a decimated crew of 5 men and a boy. Lancaster himself was marooned by a cyclone on the Comoros. Many of his crew were speared to death by angry islanders although Lancaster found his way home in 1594. (Dalrymple W. 2019; Bloomsbury Publishing ). Both the British and the French turned their attention to the Comoros islands in the middle of the 19th century. The French finally acquired the islands through a cunning mixture of strategies, including the policy of "divide and conquer", chequebook politics and a serendipitous affair between a sultana and a French trader that was put to good use by the French, who kept control of the islands, quelling unrest and the occasional uprising. William Sunley, a planter and British Consul from 1848 to 1866, was an influence on Anjouan. French Comoros France's presence in the western Indian Ocean dates to the early seventeenth century. 
The French established a settlement in southern Madagascar in 1634 and occupied the islands of Reunion and Rodrigues; in 1715 France claimed Mauritius (Ile de France), and in 1756 Seychelles. When France ceded Mauritius, Rodrigues, and Seychelles to Britain in 1814, it lost its Indian Ocean ports; Reunion, which remained French, did not offer a suitable natural harbor. In 1840 France acquired the island of Nosy-Be off the northwestern coast of Madagascar, but its potential as a port was limited. In 1841 the governor of Reunion, Admiral de Hell, negotiated with Andrian Souli, the Malagasy ruler of Mayotte, to cede Mayotte to France. Mahore offered a suitable site for port facilities, and its acquisition was justified by de Hell on the grounds that if France did not act, Britain would occupy the island. Although France had established a foothold in Comoros, the acquisition of the other islands proceeded fitfully. At times the French were spurred on by the threat of British intervention, especially on Nzwani, and at other times, by the constant anarchy resulting from the sultans' wars upon each other. In the 1880s, Germany's growing influence on the East African coast added to the concerns of the French. Not until 1908, however, did the four Comoro Islands become part of France's colony of Madagascar and not until 1912 did the last sultan abdicate. Then, a colonial administration took over the islands and established a capital at Dzaoudzi on Mahore. Treaties of protectorate status marked a transition point between independence and annexation; such treaties were signed with the rulers of Njazidja, Nzwani, and Mwali in 1886. The effects of French colonialism were mixed, at best. Colonial rule brought an end to the institution of slavery, but economic and social differences between former slaves and free persons and their descendants persisted. Health standards improved with the introduction of modern medicine, and the population increased about 50 percent between 1900 and 1960. France continued to dominate the economy. Food crop cultivation was neglected as French societes (companies) established cash crop plantations in the coastal regions. The result was an economy dependent on the exporting of vanilla, ylang-ylang, cloves, cocoa, copra, and other tropical crops. Most profits obtained from exports were diverted to France rather than invested in the infrastructure of the islands. Development was further limited by the colonial government's practice of concentrating public services on Madagascar. One consequence of this policy was the migration of large numbers of Comorans to Madagascar, where their presence would be a long-term source of tension between Comoros and its giant island neighbor. The Shirazi elite continued to play a prominent role as large landowners and civil servants. On the eve of independence, Comoros remained poor and undeveloped, having only one secondary school and practically nothing in the way of national media. Isolated from important trade routes by the opening of the Suez Canal in 1869, having few natural resources, and largely neglected by France, the islands were poorly equipped for independence. On September 25 1942, British forces landed in the Comoros, occupying them until October 13 1946. In 1946 the Comoro Islands became an overseas department of France with representation in the French National Assembly. The following year, the islands' administrative ties to Madagascar were severed; Comoros established its own customs regime in 1952. 
A Governing Council was elected in August 1957 on the four islands in conformity with the loi-cadre (enabling law) of June 23, 1956. A constitution providing for internal self-government was promulgated in 1961, following a 1958 referendum in which Comorans voted overwhelmingly to remain a part of France. This government consisted of a territorial assembly having, in 1975, thirty-nine members, and a Governing Council of six to nine ministers responsible to it. Agreement was reached with France in 1973 for the Comoros to become independent in 1978. On July 6, 1975, however, the Comorian parliament passed a resolution declaring unilateral independence. The deputies of Mayotte abstained. In 1961 the Comoros was granted autonomous rule and, in 1975, it broke all ties with France and established itself as an independent republic. From the very beginning Mayotte refused to join the new republic and aligned itself even more firmly to the French Republic, but the other islands remained committed to independence. The first president of the Comoros, Ahmed Abdallah Abderemane, did not last long before being ousted in a coup d'état by Ali Soilih, an atheist with an Islamic background. Soilih began with a set of solid socialist ideals designed to modernize the country. However, the regime faced problems. A French mercenary by the name of Bob Denard, arrived in the Comoros at dawn on 13 May 1978, and removed Soilih from power. Solih was shot and killed during the coup. The mercenaries returned Abdallah to power and the mercenaries were given key positions in government. In two referendums, in December 1974 and February 1976, the population of Mayotte voted against independence from France (by 63.8% and 99.4% respectively). Mayotte thus remains under French administration, and the Comorian Government has effective control over only Grande Comore, Anjouan, and Mohéli. Later, French settlers, French-owned companies, and Arab merchants established a plantation-based economy that now uses about one-third of the land for export crops. Abdallah regime In 1978, president Ali Soilih, who had a firm anti-French line, was killed and Ahmed Abdallah came to power. Under the reign of Abdallah, Denard was commander of the Presidential Guard (PG) and de facto ruler of the country. He was trained, supported and funded by the white regimes in South Africa (SA) and Rhodesia (now Zimbabwe) in return for permission to set up a secret listening post on the islands. South-African agents kept an ear on the important ANC bases in Lusaka and Dar es Salaam and watched the war in Mozambique, in which SA played an active role. The Comoros were also used for the evasion of arms sanctions. When in 1981 François Mitterrand was elected president Denard lost the support of the French intelligence service, but he managed to strengthen the link between SA and the Comoros. Besides the military, Denard established his own company SOGECOM, for both the security and construction, and seemed to profit by the arrangement. Between 1985 and 1987 the relationship of the PG with the local Comorians became worse. At the end of the 1980s the South Africans did not wish to continue to support the mercenary regime and France was in agreement. Also President Abdallah wanted the mercenaries to leave. Their response was a (third) coup resulting in the death of President Abdallah, in which Denard and his men were probably involved. South Africa and the French government subsequently forced Denard and his mercenaries to leave the islands in 1989. 
1989–1996 Said Mohamed Djohar became president. His time in office was turbulent, including an impeachment attempt in 1991 and a coup attempt in 1992. On September 28, 1995 Bob Denard and a group of mercenaries took over the Comoros islands in a coup (named operation Kaskari by the mercenaries) against President Djohar. France immediately and severely denounced the coup, and backed by the 1978 defense agreement with the Comoros, President Jacques Chirac ordered his special forces to retake the island. Bob Denard began to take measures to stop the coming invasion. A new presidential guard was created. Strong points armed with heavy machine guns were set up around the island, particularly around the island's two airports. On October 3, 1995, 11 p.m., the French deployed 600 men against a force of 33 mercenaries and a 300-man dissident force. Denard however ordered his mercenaries not to fight. Within 7 hours the airports at Iconi and Hahaya and the French Embassy in Moroni were secured. By 3:00 p.m. the next day Bob Denard and his mercenaries had surrendered. This (response) operation, codenamed Azalée, was remarkable, because there were no casualties, and just in seven days, plans were drawn up and soldiers were deployed. Denard was taken to France and jailed. Prime minister Caambi El-Yachourtu became acting president until Djohar returned from exile in January, 1996. In March 1996, following presidential elections, Mohamed Taki Abdoulkarim, a member of the civilian government that Denard had tried to set up in October 1995, became president. On 23 November 1996, Ethiopian Airlines Flight 961 crashed near a beach on the island after it was hijacked and ran out of fuel killing 125 people and leaving 50 survivors. Secession of Anjouan and Mohéli In 1997, the islands of Anjouan and Mohéli declared their independence from the Comoros. A subsequent attempt by the government to re-establish control over the rebellious islands by force failed, and presently the African Union is brokering negotiations to effect a reconciliation. This process is largely complete, at least in theory. According to some sources, Mohéli did return to government control in 1998. In 1999, Anjouan had internal conflicts and on August 1 of that year, the 80-year-old first president Foundi Abdallah Ibrahim resigned, transferring power to a national coordinator, Said Abeid. The government was overthrown in a coup by army and navy officers on August 9, 2001. Mohamed Bacar soon rose to leadership of the junta that took over and by the end of the month he was the leader of the country. Despite two coup attempts in the following three months, including one by Abeid, Bacar's government remained in power, and was apparently more willing to negotiate with the Comoros. Presidential elections were held for all of the Comoros in 2002, and presidents have been chosen for all three islands as well, which have become a confederation. Most notably, Mohammed Bacar was elected for a 5-year term as president of Anjouan. Grande Comore had experienced troubles of its own in the late 1990s, when President Taki died on November 6, 1998. Colonel Azali Assoumani became president following a military coup in 1999. There have been several coup attempts since, but he gained firm control of the country after stepping down temporarily and winning a presidential election in 2002. In May 2006, Ahmed Abdallah Sambi was elected from the island of Anjouan to be the president of the Union of the Comoros. 
He is a Sunni cleric who studied in the Sudan, Iran and Saudi Arabia. He is nicknamed "Ayatollah" due to his time in Iran and his penchant for turbans. 2007–2008 Anjouan crisis Azali Assoumani in power since 2016 Azali Assoumani, a former army officer, first came to power in a coup in 1999. He then won the presidency in the 2002 election, holding power until 2006. After ten years, he was elected again in the 2016 election. In March 2019, he was re-elected in elections that the opposition claimed were full of irregularities. Before the 2019 presidential election, President Azali Assoumani had arranged a constitutional referendum in 2018 that approved extending the presidential mandate from one five-year term to two. The opposition had boycotted the referendum. In January 2020, his party, the Convention for the Renewal of the Comoros (CRC), won 20 out of 24 parliamentary seats in the parliamentary election. See also List of heads of state of the Comoros List of heads of government of the Comoros History of Africa History of Southern Africa Politics of the Comoros Footnotes References Attribution: Further reading Walker, Iain. Islands in a Cosmopolitan Sea: A History of the Comoros (Oxford University Press, 2019), online review. Wright, Henry T., et al. "Early seafarers of the Comoro Islands: The Dembeni phase of the IXth–Xth centuries AD." AZANIA: Journal of the British Institute in Eastern Africa 19.1 (1984): 13–59. External links
https://en.wikipedia.org/wiki/Comorian%20Armed%20Forces
Comorian Armed Forces
The Comorian Armed Forces (; ) are the national military of the Comoros. The armed forces consist of a small standing army and a 500-member police force, as well as a 500-member defense force. A defense treaty with France provides naval resources for protection of territorial waters, training of Comorian military personnel, and air surveillance. France maintains a small troop presence in the Comoros at government request. France maintains a small Navy base and a Foreign Legion Detachment (DLEM) in Mayotte. Equipment inventory FN FAL battle rifle AK-47 assault rifle Type 81 assault rifle NSV HMG RPG-7 anti-tank weapon Mitsubishi L200 pickup truck Aircraft Note: The last comprehensive aircraft inventory list was from Aviation Week & Space Technology in 2007. References Government of the Comoros Comoros
https://en.wikipedia.org/wiki/Cathode-ray%20tube
Cathode-ray tube
A cathode-ray tube (CRT) is a vacuum tube containing one or more electron guns, which emit electron beams that are manipulated to display images on a phosphorescent screen. The images may represent electrical waveforms (oscilloscope), pictures (television set, computer monitor), radar targets, or other phenomena. A CRT on a television set is commonly called a picture tube. CRTs have also been used as memory devices, in which case the screen is not intended to be visible to an observer. The term cathode ray was used to describe electron beams when they were first discovered, before it was understood that what was emitted from the cathode was a beam of electrons. In CRT television sets and computer monitors, the entire front area of the tube is scanned repeatedly and systematically in a fixed pattern called a raster. In color devices, an image is produced by controlling the intensity of each of three electron beams, one for each additive primary color (red, green, and blue) with a video signal as a reference. In modern CRT monitors and televisions the beams are bent by magnetic deflection, using a deflection yoke. Electrostatic deflection is commonly used in oscilloscopes. A CRT is a glass envelope which is deep (i.e., long from front screen face to rear end), heavy, and fragile. The interior is evacuated to to or less, to facilitate the free flight of electrons from the gun(s) to the tube's face without scattering due to collisions with air molecules. As such, handling a CRT carries the risk of violent implosion that can hurl glass at great velocity. The face is typically made of thick lead glass or special barium-strontium glass to be shatter-resistant and to block most X-ray emissions. CRTs make up most of the weight of CRT TVs and computer monitors. Since the mid-late 2000's, CRTs have been superseded by flat-panel display technologies such as LCD, plasma display, and OLED displays which are cheaper to manufacture and run, as well as significantly lighter and less bulky. Flat-panel displays can also be made in very large sizes whereas to was about the largest size of a CRT. A CRT works by electrically heating a tungsten coil which in turn heats a cathode in the rear of the CRT, causing it to emit electrons which are modulated and focused by electrodes. The electrons are steered by deflection coils or plates, and an anode accelerates them towards the phosphor-coated screen, which generates light when hit by the electrons. History Discoveries Cathode rays were discovered by Julius Plücker and Johann Wilhelm Hittorf. Hittorf observed that some unknown rays were emitted from the cathode (negative electrode) which could cast shadows on the glowing wall of the tube, indicating the rays were traveling in straight lines. In 1890, Arthur Schuster demonstrated cathode rays could be deflected by electric fields, and William Crookes showed they could be deflected by magnetic fields. In 1897, J. J. Thomson succeeded in measuring the charge-mass-ratio of cathode rays, showing that they consisted of negatively charged particles smaller than atoms, the first "subatomic particles", which had already been named electrons by Irish physicist George Johnstone Stoney in 1891. The earliest version of the CRT was known as the "Braun tube", invented by the German physicist Ferdinand Braun in 1897. It was a cold-cathode diode, a modification of the Crookes tube with a phosphor-coated screen. Braun was the first to conceive the use of a CRT as a display device. 
In 1908, Alan Archibald Campbell-Swinton, fellow of the Royal Society (UK), published a letter in the scientific journal Nature, in which he described how "distant electric vision" could be achieved by using a cathode-ray tube (or "Braun" tube) as both a transmitting and receiving device. He expanded on his vision in a speech given in London in 1911 and reported in The Times and the Journal of the Röntgen Society. The first cathode-ray tube to use a hot cathode was developed by John Bertrand Johnson (who gave his name to the term Johnson noise) and Harry Weiner Weinhart of Western Electric, and became a commercial product in 1922. The introduction of hot cathodes allowed for lower acceleration anode voltages and higher electron beam currents, since the anode now only accelerated the electrons emitted by the hot cathode, and no longer had to have a very high voltage to induce electron emission from the cold cathode. Development In 1926, Kenjiro Takayanagi demonstrated a CRT television receiver with a mechanical video camera that received images with a 40-line resolution. By 1927, he improved the resolution to 100 lines, which was unrivaled until 1931. By 1928, he was the first to transmit human faces in half-tones on a CRT display. In 1927, Philo Farnsworth created a television prototype. The CRT was named in 1929 by inventor Vladimir K. Zworykin. RCA was granted a trademark for the term (for its cathode-ray tube) in 1932; it voluntarily released the term to the public domain in 1950. In the 1930s, Allen B. DuMont made the first CRTs to last 1,000 hours of use, which was one of the factors that led to the widespread adoption of television. The first commercially made electronic television sets with cathode-ray tubes were manufactured by Telefunken in Germany in 1934. In 1947, the cathode-ray tube amusement device, the earliest known interactive electronic game as well as the first to incorporate a cathode-ray tube screen, was created. From 1949 to the early 1960s, there was a shift from circular CRTs to rectangular CRTs, although the first rectangular CRTs were made in 1938 by Telefunken. While circular CRTs were the norm, European TV sets often blocked portions of the screen to make it appear somewhat rectangular while American sets often left the entire front of the CRT exposed or only blocked the upper and lower portions of the CRT. In 1954, RCA produced some of the first color CRTs, the 15GP22 CRTs used in the CT-100, the first color TV set to be mass-produced. The first rectangular color CRTs were also made in 1954. However, the first rectangular color CRTs to be offered to the public were made in 1963. One of the challenges that had to be solved to produce the rectangular color CRT was convergence at the corners of the CRT. In 1965, brighter rare earth phosphors began replacing dimmer and cadmium-containing red and green phosphors. Eventually blue phosphors were replaced as well. The size of CRTs increased over time, from 20 inches in 1938, to 21 inches in 1955, 35 inches by 1985, and 43 inches by 1989. However, experimental 31 inch CRTs were made as far back as 1938. In 1960, the Aiken tube was invented. It was a CRT in a flat-panel display format with a single electron gun. Deflection was electrostatic and magnetic, but due to patent problems, it was never put into production. It was also envisioned as a head-up display in aircraft. By the time patent issues were solved, RCA had already invested heavily in conventional CRTs. 
1968 marked the release of the Sony Trinitron brand with the model KV-1310, which was based on Aperture Grille technology. It was acclaimed for its improved output brightness. The Trinitron screen was identifiable by its upright cylindrical shape, a result of its unique triple-cathode, single-gun construction. In 1987, flat-screen CRTs were developed by Zenith for computer monitors, reducing reflections and helping increase image contrast and brightness. Such CRTs were expensive, which limited their use to computer monitors. Attempts were made to produce flat-screen CRTs using inexpensive and widely available float glass. In 1990, the first CRTs with HD resolution were released to the market by Sony. In the mid-1990s, some 160 million CRTs were made per year. In the mid-2000s, Canon and Sony presented the surface-conduction electron-emitter display and field-emission displays, respectively. They both were flat-panel displays that had one (SED) or several (FED) electron emitters per subpixel in place of electron guns. The electron emitters were placed on a sheet of glass and the electrons were accelerated to a nearby sheet of glass with phosphors using an anode voltage. The electrons were not focused, making each subpixel essentially a flood-beam CRT. They were never put into mass production as LCD technology was significantly cheaper, eliminating the market for such displays. The last large-scale manufacturer of (in this case, recycled) CRTs, Videocon, ceased production in 2015. CRT TVs stopped being made around the same time. In 2015, several CRT manufacturers were convicted in the US for price fixing. The same occurred in Canada in 2018. Worldwide sales of CRT computer monitors peaked in 2000, at 90 million units, while those of CRT TVs peaked in 2005 at 130 million units. Decline Beginning in the late 1990s and early 2000s, CRTs began to be replaced with LCDs, starting first with computer monitors smaller than 15 inches in size, largely because of their lower bulk. Among the first manufacturers to stop CRT production was Hitachi in 2001, followed by Sony in Japan in 2004. Flat-panel displays dropped in price and started significantly displacing cathode-ray tubes in the 2000s. LCD monitor sales began exceeding those of CRTs in 2003–2004 and LCD TV sales started exceeding those of CRTs in some markets in 2005. Despite being a mainstay of display technology for decades, CRT-based computer monitors and televisions are now virtually a dead technology. Demand for CRT screens dropped in the late 2000s. Despite efforts from Samsung and LG to make CRTs competitive with their LCD and plasma counterparts, offering slimmer and cheaper models to compete with similarly sized and more expensive LCDs, CRTs eventually became obsolete and were relegated to developing markets once LCDs fell in price, with their lower bulk, weight and ability to be wall mounted coming as pluses. Some industries still use CRTs because it is either too much effort, downtime, and/or cost to replace them, or there is no substitute available; a notable example is the airline industry. Planes such as the Boeing 747-400 and the Airbus A320 used CRT instruments in their glass cockpits instead of mechanical instruments. Airlines such as Lufthansa still use CRT technology in aircraft that also rely on floppy disks for navigation updates. They are also used in some military equipment for similar reasons. At least one company still manufactures new CRTs for these markets. A popular consumer usage of CRTs is for retrogaming.
Some games are impossible to play without CRT display hardware, and some games play better. Reasons for this include: CRTs refresh faster than LCDs, because they use interlaced lines. CRTs are able to correctly display certain oddball resolutions, such as the 256x224 resolution of the Nintendo Entertainment System (NES). Light guns only work on CRTs because they depend on the progressive timing properties of CRTs. Construction Body The body of a CRT is usually made up of three parts: A screen/faceplate/panel, a cone/funnel, and a neck. The joined screen, funnel and neck are known as the bulb or envelope. The neck is made from a glass tube while the funnel and screen are made by pouring and then pressing glass into a mold. The glass, known as CRT glass or TV glass, needs special properties to shield against x-rays while providing adequate light transmission in the screen or being very electrically insulating in the funnel and neck. The formulation that gives the glass its properties is also known as the melt. The glass is of very high quality, being almost contaminant and defect free. Most of the costs associated with glass production come from the energy used to melt the raw materials into glass. Glass furnaces for CRT glass production have several taps to allow molds to be replaced without stopping the furnace, to allow production of CRTs of several sizes. Only the glass used on the screen needs to have precise optical properties. The optical properties of the glass used on the screen affects color reproduction and purity in Color CRTs. Transmittance, or how transparent the glass is, may be adjusted to be more transparent to certain colors (wavelengths) of light. Transmittance is measured at the center of the screen with a 546 nm wavelength light, and a 10.16mm thick screen. Transmittance goes down with increasing thickness. Standard transmittances for Color CRT screens are 86%, 73%, 57%, 46%, 42% and 30%. Lower transmittances are used to improve image contrast but they put more stress on the electron gun, requiring more power on the electron gun for a higher electron beam power to light the phosphors more brightly to compensate for the reduced transmittance. The transmittance must be uniform across the screen to ensure color purity. The radius (curvature) of screens has increased (grown less curved) over time, from 30 to 68 inches, ultimately evolving into completely flat screens, reducing reflections. The thickness of both curved and flat screens gradually increases from the center outwards, and with it, transmittance is gradually reduced. This means that flat-screen CRTs may not be completely flat on the inside. The glass used in CRTs arrives from the glass factory to the CRT factory as either separate screens and funnels with fused necks, for Color CRTs, or as bulbs made up of a fused screen, funnel and neck. There were several glass formulations for different types of CRTs, that were classified using codes specific to each glass manufacturer. The compositions of the melts were also specific to each manufacturer. Those optimized for high color purity and contrast were doped with Neodymium, while those for monochrome CRTs were tinted to differing levels, depending on the formulation used and had transmittances of 42% or 30%. Purity is ensuring that the correct colors are activated (for example, ensuring that red is displayed uniformly across the screen) while convergence ensures that images are not distorted. Convergence may be modified using a cross hatch pattern. 
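Since transmittance goes down with increasing thickness, the thicker outer region of a screen passes less light than its centre. A rough way to quantify this, assuming simple exponential (Bouguer–Lambert-style) attenuation in the bulk glass and ignoring surface reflections, is sketched below; the 10.16 mm reference thickness and the 46% grade are the figures quoted above, while the exponential model, the extra edge thickness and the function name transmittance are assumptions for illustration only.

    import math

    # Assumed exponential attenuation in the glass: T(t) = exp(-mu * t).
    REF_THICKNESS_MM = 10.16    # thickness at which standard transmittances are specified
    REF_TRANSMITTANCE = 0.46    # one of the standard transmittance grades quoted above

    # Effective attenuation coefficient implied by that reference point.
    MU_PER_MM = -math.log(REF_TRANSMITTANCE) / REF_THICKNESS_MM

    def transmittance(thickness_mm):
        """Estimated transmittance of the same glass at another thickness."""
        return math.exp(-MU_PER_MM * thickness_mm)

    print(f"centre, 10.16 mm: {transmittance(10.16):.2f}")   # 0.46 by construction
    print(f"edge,   14.00 mm: {transmittance(14.00):.2f}")   # noticeably darker

This is consistent with the point made above that screens thicken from the centre outwards and that transmittance is therefore specified at the centre of the screen.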
CRT glass used to be made by dedicated companies such as AGC Inc., O-I Glass, Samsung Corning Precision Materials, Corning Inc., and Nippon Electric Glass; others such as Videocon, Sony for the US market and Thomson made their own glass. The funnel and the neck are made of leaded potash-soda glass or lead silicate glass formulation to shield against x-rays generated by high voltage electrons as they decelerate after striking a target, such as the phosphor screen or shadow mask of a color CRT. The velocity of the electrons depends on the anode voltage of the CRT; the higher the voltage, the higher the speed. The amount of x-rays emitted by a CRT can also lowered by reducing the brightness of the image. Leaded glass is used because it is inexpensive, while also shielding heavily against x-rays, although some funnels may also contain barium. The screen is usually instead made out of a special lead-free silicate glass formulation with barium and strontium to shield against x-rays. Another glass formulation uses 2-3% of lead on the screen. Monochrome CRTs may have a tinted barium-lead glass formulation in both the screen and funnel, with a potash-soda lead glass in the neck; the potash-soda and barium-lead formulations have different thermal expansion coefficients. The glass used in the neck must be an excellent electrical insulator to contain the voltages used in the electron optics of the electron gun, such as focusing lenses. The lead in the glass causes it to brown (darken) with use due to x-rays, usually the CRT cathode wears out due to cathode poisoning before browning becomes apparent. The glass formulation determines the highest possible anode voltage and hence the maximum possible CRT screen size. For color, maximum voltages are often 24 to 32 kV, while for monochrome it is usually 21 or 24.5 kV, limiting the size of monochrome CRTs to 21 inches, or approx. 1 kV per inch. The voltage needed depends on the size and type of CRT. Since the formulations are different, they must be compatible with one another, having similar thermal expansion coefficients. The screen may also have an anti-glare or anti-reflective coating, or be ground to prevent reflections. CRTs may also have an anti-static coating. The leaded glass in the funnels of CRTs may contain 21 to 25% of lead oxide (PbO), The neck may contain 30 to 40% of lead oxide, and the screen may contain 12% of barium oxide, and 12% of strontium oxide. A typical CRT contains several kilograms of lead as lead oxide in the glass depending on its size; 12 inch CRTs contain 0.5 kg of lead in total while 32 inch CRTs contain up to 3 kg. Strontium oxide began being used in CRTs, its major application, in the 1970s. Some early CRTs used a metal funnel insulated with polyethylene instead of glass with conductive material. Others had ceramic or blown pyrex instead of pressed glass funnels. Early CRTs did not have a dedicated anode cap connection; the funnel was the anode connection, so it was live during operation. The funnel is coated on the inside and outside with a conductive coating, making the funnel a capacitor, helping stabilize and filter the anode voltage of the CRT, and significantly reducing the amount of time needed to turn on a CRT. The stability provided by the coating solved problems inherent to early power supply designs, as they used vacuum tubes. Because the funnel is used as a capacitor, the glass used in the funnel must be an excellent electrical insulator (dielectric). 
The inner coating has a positive voltage (the anode voltage that can be several kV) while the outer coating is connected to ground. CRTs powered by more modern power supplies do not need to be connected to ground, due to the more robust design of modern power supplies. The value of the capacitor formed by the funnel is .005-.01uF, although at the voltage the anode is normally supplied with. The capacitor formed by the funnel can also suffer from dielectric absorption, similarly to other types of capacitors. Because of this CRTs have to be discharged before handling to prevent injury. The depth of a CRT is related to its screen size. Usual deflection angles were 90° for computer monitor CRTs and small CRTs and 110° which was the standard in larger TV CRTs, with 120 or 125° being used in slim CRTs made since 2001–2005 in an attempt to compete with LCD TVs. Over time, deflection angles increased as they became practical, from 50° in 1938 to 110° in 1959, and 125° in the 2000s. 140° deflection CRTs were researched but never commercialized, as convergence problems were never resolved. Size and weight The size of the screen of a CRT is measured in two ways: the size of the screen or the face diagonal, and the viewable image size/area or viewable screen diagonal, which is the part of the screen with phosphor. The size of the screen is the viewable image size plus its black edges which are not coated with phosphor. The viewable image may be perfectly square or rectangular while the edges of the CRT are black and have a curvature (such as in black stripe CRTs) or the edges may be black and truly flat (such as in Flatron CRTs), or the edges of the image may follow the curvature of the edges of the CRT, which may be the case in CRTs without and with black edges and curved edges. Black stripe CRTs were first made by Toshiba in 1972. Small CRTs below 3 inches were made for handheld televisions such as the MTV-1 and viewfinders in camcorders. In these, there may be no black edges, that are however truly flat. Most of the weight of a CRT comes from the thick glass screen, which comprises 65% of the total weight of a CRT. The funnel and neck glass comprise the remaining 30% and 5% respectively. The glass in the funnel is thinner than on the screen. Chemically or thermally tempered glass may be used to reduce the weight of the CRT glass. Anode The outer conductive coating is connected to ground while the inner conductive coating is connected using the anode button/cap through a series of capacitors and diodes (a Cockcroft–Walton generator) to the high voltage flyback transformer; the inner coating is the anode of the CRT, which, together with an electrode in the electron gun, is also known as the final anode. The inner coating is connected to the electrode using springs. The electrode forms part of a bipotential lens. The capacitors and diodes serve as a voltage multiplier for the current delivered by the flyback. For the inner funnel coating, monochrome CRTs use aluminum while color CRTs use aquadag; Some CRTs may use iron oxide on the inside. On the outside, most CRTs (but not all) use aquadag. Aquadag is an electrically conductive graphite-based paint. In color CRTs, the aquadag is sprayed onto the interior of the funnel whereas historically aquadag was painted into the interior of monochrome CRTs. The anode is used to accelerate the electrons towards the screen and also collects the secondary electrons that are emitted by the phosphor particles in the vacuum of the CRT. 
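A rough worked example shows why the funnel capacitor described above makes discharging necessary before handling. Taking the quoted capacitance range and an assumed anode voltage of 25 kV (within the 24 to 32 kV range given earlier for colour tubes), the stored energy E = (1/2)·C·V² is about 0.5 × 0.01 µF × (25,000 V)² ≈ 3.1 J at the upper capacitance figure, and still roughly 1.6 J at 0.005 µF. Delivered at tens of kilovolts, this is enough for a painful and potentially dangerous shock, and the dielectric absorption mentioned above means that a partial charge can reappear even after the tube has been discharged once.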
The anode cap connection in modern CRTs must be able to handle up to 55–60 kV depending on the size and brightness of the CRT. Higher voltages allow for larger CRTs, higher image brightness, or a tradeoff between the two. It consists of a metal clip that expands on the inside of an anode button that is embedded on the funnel glass of the CRT. The connection is insulated by a silicone suction cup, possibly also using silicone grease to prevent corona discharge. The anode button must be specially shaped to establish a hermetic seal between the button and funnel. X-rays may leak through the anode button, although that may not be the case in newer CRTs starting from the late 1970s to early 1980s, thanks to a new button and clip design. The button may consist of a set of 3 nested cups, with the outermost cup being made of a Nickel–Chromium–Iron alloy containing 40 to 49% of Nickel and 3 to 6% of Chromium to make the button easy to fuse to the funnel glass, with a first inner cup made of thick inexpensive iron to shield against x-rays, and with the second innermost cup also being made of iron or any other electrically conductive metal to connect to the clip. The cups must be heat resistant enough and have similar thermal expansion coefficients similar to that of the funnel glass to withstand being fused to the funnel glass. The inner side of the button is connected to the inner conductive coating of the CRT. The anode button may be attached to the funnel while its being pressed into shape in a mold. Alternatively, the x-ray shielding may instead be built into the clip. The flyback transformer is also known as an IHVT (Integrated High Voltage Transformer) if it includes a voltage multiplier. The flyback uses a ceramic or powdered iron core to enable efficient operation at high frequencies. The flyback contains one primary and many secondary windings that provide several different voltages. The main secondary winding supplies the voltage multiplier with voltage pulses to ultimately supply the CRT with the high anode voltage it uses, while the remaining windings supply the CRT's filament voltage, keying pulses, focus voltage and voltages derived from the scan raster. When the transformer is turned off, the flyback's magnetic field quickly collapses which induces high voltage in its windings. The speed at which the magnetic field collapses determines the voltage that is induced, so the voltage increases alongside its speed. A capacitor (Retrace Timing Capacitor) or series of capacitors (to provide redundancy) is used to slow the collapse of the magnetic field. The design of the high voltage power supply in a product using a CRT has an influence in the amount of x-rays emitted by the CRT. The amount of emitted x-rays increases with both higher voltages and currents. If the product such as a TV set uses an unregulated high voltage power supply, meaning that anode and focus voltage go down with increasing electron current when displaying a bright image, the amount of emitted x-rays is as its highest when the CRT is displaying a moderately bright images, since when displaying dark or bright images, the higher anode voltage counteracts the lower electron beam current and vice versa respectively. The high voltage regulator and rectifier vacuum tubes in some old CRT TV sets may also emit x-rays. Electron gun The electron gun emits the electrons that ultimately hit the phosphors on the screen of the CRT. 
The electron gun contains a heater, which heats a cathode, which generates electrons that, using grids, are focused and ultimately accelerated into the screen of the CRT. The acceleration occurs in conjunction with the inner aluminum or aquadag coating of the CRT. The electron gun is positioned so that it aims at the center of the screen. It is inside the neck of the CRT, and it is held together and mounted to the neck using glass beads or glass support rods, which are the glass strips on the electron gun. The electron gun is made separately and then placed inside the neck through a process called "winding", or sealing. The electron gun has a glass wafer that is fused to the neck of the CRT. The connections to the electron gun penetrate the glass wafer. Once the electron gun is inside the neck, its metal parts (grids) are arced between each other using high voltage to smooth any rough edges in a process called spot knocking, to prevent the rough edges in the grids from generating secondary electrons. Construction and method of operation It has a hot cathode that is heated by a tungsten filament heating element; the heater may draw 0.5 to 2 A of current depending on the CRT. The voltage applied to the heater can affect the life of the CRT. Heating the cathode energizes the electrons in it, aiding electron emission, while at the same time current is supplied to the cathode; typically anywhere from 140 mA at 1.5 V to 600 mA at 6.3 V. The cathode creates an electron cloud (emits electrons) whose electrons are extracted, accelerated and focused into an electron beam. Color CRTs have three cathodes: one for red, green and blue. The heater sits inside the cathode but doesn't touch it; the cathode has its own separate electrical connection. The cathode is coated onto a piece of nickel which provides the electrical connection and structural support; the heater sits inside this piece without touching it. There are several shortcircuits that can occur in a CRT electron gun. One is a heater-to-cathode short, that causes the cathode to permanently emit electrons which may cause an image with a bright red, green or blue tint with retrace lines, depending on the cathode (s) affected. Alternatively, the cathode may short to the control grid, possibly causing similar effects, or, the control grid and screen grid (G2) can short causing a very dark image or no image at all. The cathode may be surrounded by a shield to prevent sputtering. The cathode is a layer of barium oxide which is coated on a piece of nickel for electrical and mechanical support. The barium oxide must be activated by heating to enable it to release electrons. Activation is necessary because barium oxide is not stable in air, so it is applied to the cathode as barium carbonate, which cannot emit electrons. Activation heats the barium carbonate to decompose it into barium oxide and carbon dioxide while forming a thin layer of metallic barium on the cathode. Activation occurs during evacuation of (at the same time a vacuum is formed in) the CRT. After activation the oxide can become damaged by several common gases such as water vapor, carbon dioxide, and oxygen. Alternatively, barium strontium calcium carbonate may be used instead of barium carbonate, yielding barium, strontium and calcium oxides after activation. During operation, the barium oxide is heated to 800-1000°C, at which point it starts shedding electrons. 
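The activation step described above is a simple thermal decomposition, and the quoted heater figures correspond to only a few watts; both can be written out explicitly (the power values are just the products of the voltages and currents given above):

    BaCO3  --(heated in vacuum)-->  BaO + CO2

    Heater power P = V × I:
        1.5 V × 0.14 A ≈ 0.2 W   (low end of the quoted range)
        6.3 V × 0.60 A ≈ 3.8 W   (high end of the quoted range)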
Since it is a hot cathode, it is prone to cathode poisoning, which is the formation of a positive ion layer that prevents the cathode from emitting electrons, reducing image brightness significantly or completely and causing focus and intensity to be affected by the frequency of the video signal preventing detailed images from being displayed by the CRT. The positive ions come from leftover air molecules inside the CRT or from the cathode itself that react over time with the surface of the hot cathode. Reducing metals such as manganese, zirconium, magnesium, aluminum or titanium may be added to the piece of nickel to lengthen the life of the cathode, as during activation, the reducing metals diffuse into the barium oxide, improving its lifespan, especially at high electron beam currents. In color CRTs with red, green and blue cathodes, one or more cathodes may be affected independently of the others, causing total or partial loss of one or more colors. CRTs can wear or burn out due to cathode poisoning. Cathode poisoning is accelerated by increased cathode current (overdriving). In color CRTs, since there are three cathodes, one for red, green and blue, a single or more poisoned cathode may cause the partial or complete loss of one or more colors, tinting the image. The layer may also act as a capacitor in series with the cathode, inducing thermal lag. The cathode may instead be made of scandium oxide or incorporate it as a dopant, to delay cathode poisoning, extending the life of the cathode by up to 15%. The amount of electrons generated by the cathodes is related to their surface area. A cathode with more surface area creates more electrons, in a larger electron cloud, which makes focusing the electron cloud into an electron beam more difficult. Normally, only a part of the cathode emits electrons unless the CRT displays images with parts that are at full image brightness; only the parts at full brightness cause all of the cathode to emit electrons. The area of the cathode that emits electrons grows from the center outwards as brightness increases, so cathode wear may be uneven. When only the center of the cathode is worn, the CRT may light brightly those parts of images that have full image brightness but not show darker parts of images at all, in such a case the CRT displays a poor gamma characteristic. The second (screen) grid of the gun (G2) accelerates the electrons towards the screen using several hundred DC volts. A negative current is applied to the first (control) grid (G1) to converge the electron beam. G1 in practice is a Wehnelt cylinder. The brightness of the screen is not controlled by varying the anode voltage nor the electron beam current (they are never varied) despite them having an influence on image brightness, rather image brightness is controlled by varying the difference in voltage between the cathode and the G1 control grid. A third grid (G3) electrostatically focuses the electron beam before it is deflected and accelerated by the anode voltage onto the screen. Electrostatic focusing of the electron beam may be accomplished using an Einzel lens energized at up to 600 volts. Before electrostatic focusing, focusing the electron beam required a large, heavy and complex mechanical focusing system placed outside the electron gun. 
However, electrostatic focusing cannot be accomplished near the final anode of the CRT due to its high voltage in the dozens of Kilovolts, so a high voltage (≈600 to 8000 volt) electrode, together with an electrode at the final anode voltage of the CRT, may be used for focusing instead. Such an arrangement is called a bipotential lens, which also offers higher performance than an Einzel lens, or, focusing may be accomplished using a magnetic focusing coil together with a high anode voltage of dozens of kilovolts. However, magnetic focusing is expensive to implement, so it is rarely used in practice. Some CRTs may use two grids and lenses to focus the electron beam. The focus voltage is generated in the flyback using a subset of the flyback's high voltage winding in conjunction with a resistive voltage divider. The focus electrode is connected alongside the other connections that are in the neck of the CRT. There is a voltage called cutoff voltage which is the voltage that creates black on the screen since it causes the image on the screen created by the electron beam to disappear, the voltage is applied to G1. In a color CRT with three guns, the guns have different cutoff voltages. Many CRTs share grid G1 and G2 across all three guns, increasing image brightness and simplifying adjustment since on such CRTs there is a single cutoff voltage for all three guns (since G1 is shared across all guns). but placing additional stress on the video amplifier used to feed video into the electron gun's cathodes, since the cutoff voltage becomes higher. Monochrome CRTs do not suffer from this problem. In monochrome CRTs video is fed to the gun by varying the voltage on the first control grid. During retracing of the electron beam, the preamplifier that feeds the video amplifier is disabled and the video amplifier is biased to a voltage higher than the cutoff voltage to prevent retrace lines from showing, or G1 can have a large negative voltage applied to it to prevent electrons from getting out of the cathode. This is known as blanking. (see Vertical blanking interval and Horizontal blanking interval.) Incorrect biasing can lead to visible retrace lines on one or more colors, creating retrace lines that are tinted or white (for example, tinted red if the red color is affected, tinted magenta if the red and blue colors are affected, and white if all colors are affected). Alternatively, the amplifier may be driven by a video processor that also introduces an OSD (On Screen Display) into the video stream that is fed into the amplifier, using a fast blanking signal. TV sets and computer monitors that incorporate CRTs need a DC restoration circuit to provide a video signal to the CRT with a DC component, restoring the original brightness of different parts of the image. The electron beam may be affected by the earth's magnetic field, causing it to normally enter the focusing lens off-center; this can be corrected using astigmation controls. Astigmation controls are both magnetic and electronic (dynamic); magnetic does most of the work while electronic is used for fine adjustments. One of the ends of the electron gun has a glass disk, the edges of which are fused with the edge of the neck of the CRT, possibly using frit; the metal leads that connect the electron gun to the outside pass through the disk. 
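To give a rough sense of scale for the blanking intervals described above (using approximate figures for 525-line, 60-field scanning, quoted only for orientation):

    Line period:          1 / 15,734 Hz ≈ 63.6 µs
    Horizontal blanking:  ≈ 10.9 µs  (beam flies back to the left edge)
    Active line time:     ≈ 52.7 µs  (picture is actually drawn)
    Vertical blanking:    ≈ 20 of the 262.5 lines per field

During these intervals the video is blanked, as described above, so the retracing beam does not draw visible lines.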
Some electron guns have a quadrupole lens with dynamic focus to alter the shape and adjust the focus of the electron beam, varying the focus voltage depending on the position of the electron beam to maintain image sharpness across the entire screen, specially at the corners. They may also have a bleeder resistor to derive voltages for the grids from the final anode voltage. After the CRTs were manufactured, they were aged to allow cathode emission to stabilize. The electron guns in color CRTs are driven by a video amplifier which takes a signal per color channel and amplifies it to 40-170v per channel, to be fed into the electron gun's cathodes; each electron gun has its own channel (one per color) and all channels may be driven by the same amplifier, which internally has three separate channels. The amplifier's capabilities limit the resolution, refresh rate and contrast ratio of the CRT, as the amplifier needs to provide high bandwidth and voltage variations at the same time; higher resolutions and refresh rates need higher bandwidths (speed at which voltage can be varied and thus switching between black and white) and higher contrast ratios need higher voltage variations or amplitude for lower black and higher white levels. 30Mhz of bandwidth can usually provide 720p or 1080i resolution, while 20Mhz usually provides around 600 (horizontal, from top to bottom) lines of resolution, for example. The difference in voltage between the cathode and the control grid is what modulates the electron beam, modulating its current and thus the brightness of the image. The phosphors used in color CRTs produce different amounts of light for a given amount of energy, so to produce white on a color CRT, all three guns must output differing amounts of energy. The gun that outputs the most energy is the red gun since the red phosphor emits the least amount of light. Gamma CRTs have a pronounced triode characteristic, which results in significant gamma (a nonlinear relationship in an electron gun between applied video voltage and beam intensity). Deflection There are two types of deflection: magnetic and electrostatic. Magnetic is usually used in TVs and monitors as it allows for higher deflection angles (and hence shallower CRTs) and deflection power (which allows for higher electron beam current and hence brighter images) while avoiding the need for high voltages for deflection of up to 2000 volts, while oscilloscopes often use electrostatic deflection since the raw waveforms captured by the oscilloscope can be applied directly (after amplification) to the vertical electrostatic deflection plates inside the CRT. Magnetic deflection Those that use magnetic deflection may use a yoke that has two pairs of deflection coils; one pair for vertical, and another for horizontal deflection. The yoke can be bonded (be integral) or removable. Those that were bonded used glue or a plastic to bond the yoke to the area between the neck and the funnel of the CRT while those with removable yokes are clamped. The yoke generates heat whose removal is essential since the conductivity of glass goes up with increasing temperature, the glass needs to be insulating for the CRT to remain usable as a capacitor. The temperature of the glass below the yoke is thus checked during the design of a new yoke. 
The yoke contains the deflection and convergence coils with a ferrite core to reduce loss of magnetic force as well as the magnetized rings used to align or adjust the electron beams in color CRTs (The color purity and convergence rings, for example) and monochrome CRTs. The yoke may be connected using a connector, the order in which the deflection coils of the yoke are connected determines the orientation of the image displayed by the CRT. The deflection coils may be held in place using polyurethane glue. The deflection coils are driven by sawtooth signals that may be delivered through VGA as horizontal and vertical sync signals. A CRT needs two deflection circuits: a horizontal and a vertical circuit, which are similar except that the horizontal circuit runs at a much higher frequency (a Horizontal scan rate) of 15 to 240 kHz depending on the refresh rate of the CRT and the number of horizontal lines to be drawn (the vertical resolution of the CRT). The higher frequency makes it more susceptible to interference, so an automatic frequency control (AFC) circuit may be used to lock the phase of the horizontal deflection signal to that of a sync signal, to prevent the image from becoming distorted diagonally. The vertical frequency varies according to the refresh rate of the CRT. So a CRT with a 60 Hz refresh rate has a vertical deflection circuit running at 60 Hz. The horizontal and vertical deflection signals may be generated using two circuits that work differently; the horizontal deflection signal may be generated using a voltage controlled oscillator (VCO) while the vertical signal may be generated using a triggered relaxation oscillator. In many TVs, the frequencies at which the deflection coils run is in part determined by the inductance value of the coils. CRTs had differing deflection angles; the higher the deflection angle, the shallower the CRT for a given screen size, but at the cost of more deflection power and lower optical performance. Higher deflection power means more current is sent to the deflection coils to bend the electron beam at a higher angle, which in turn may generate more heat or require electronics that can handle the increased power. Heat is generated due to resistive and core losses. The deflection power is measured in mA per inch. The vertical deflection coils may require approximately 24 volts while the horizontal deflection coils require approx. 120 volts to operate. The deflection coils are driven by deflection amplifiers. The horizontal deflection coils may also be driven in part by the horizontal output stage of a TV set. The stage contains a capacitor that is in series with the horizontal deflection coils that performs several functions, among them are: shaping the sawtooth deflection signal to match the curvature of the CRT and centering the image by preventing a DC bias from developing on the coil. At the beginning of retrace, the magnetic field of the coil collapses, causing the electron beam to return to the center of the screen, while at the same time the coil returns energy into capacitors, the energy of which is then used to force the electron beam to go to the left of the screen. Due to the high frequency at which the horizontal deflection coils operate, the energy in the deflection coils must be recycled to reduce heat dissipation. Recycling is done by transferring the energy in the deflection coils' magnetic field to a set of capacitors. 
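The horizontal scan rate these circuits must support follows directly from the refresh rate and the total number of lines per frame (visible lines plus those spent in vertical blanking). A small sketch of that relationship, using illustrative figures rather than values from any particular video standard, is shown below.

    def horizontal_scan_rate_hz(refresh_hz, visible_lines, blanking_lines):
        """Lines swept per second = refresh rate x total lines per frame."""
        return refresh_hz * (visible_lines + blanking_lines)

    # Illustrative (assumed) figures: a 600-line progressive image at 75 Hz
    # with about 25 blanked lines per frame.
    print(horizontal_scan_rate_hz(75, 600, 25))   # 46875 lines/s, i.e. about 46.9 kHz

This sits comfortably inside the 15 to 240 kHz range quoted above; doubling either the refresh rate or the line count roughly doubles the required horizontal frequency.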
The voltage on the horizontal deflection coils is negative when the electron beam is on the left side of the screen and positive when the electron beam is on the right side of the screen. The energy required for deflection is dependent on the energy of the electrons. Higher energy (voltage and/or current) electron beams need more energy to be deflected, and are used to achieve higher image brightness.
Electrostatic deflection
Electrostatic deflection is mostly used in oscilloscopes. Deflection is carried out by applying a voltage across two pairs of plates, one pair for horizontal, and the other for vertical deflection. The electron beam is steered by varying the voltage difference across the plates in a pair; for example, applying a voltage to the upper plate of the vertical deflection pair, while keeping the voltage on the bottom plate at 0 volts, will cause the electron beam to be deflected towards the upper part of the screen; increasing the voltage on the upper plate while keeping the bottom plate at 0 will cause the electron beam to be deflected to a higher point on the screen (will cause the beam to be deflected at a higher deflection angle). The same applies to the horizontal deflection plates. Increasing the length of the plates in a pair, or bringing them closer together, can also increase the deflection angle.
Burn-in
Burn-in is when images are physically "burned" into the screen of the CRT; this occurs due to degradation of the phosphors caused by prolonged electron bombardment, and happens when a fixed image or logo is left on the screen for too long, causing it to appear as a "ghost" image or, in severe cases, to remain visible even when the CRT is off. To counter this, screensavers were used in computers to minimize burn-in. Burn-in is not exclusive to CRTs, as it also happens to plasma displays and OLED displays.
Evacuation
CRTs are evacuated or exhausted (a vacuum is formed) inside an oven at approximately 375–475 °C, in a process called baking or bake-out. The evacuation process also outgasses any materials inside the CRT, while decomposing others such as the polyvinyl alcohol used to apply the phosphors. The heating and cooling are done gradually to avoid inducing stress, stiffening and possibly cracking the glass; the oven heats the gases inside the CRT, increasing the speed of the gas molecules, which increases the chances of them getting drawn out by the vacuum pump. The temperature of the CRT is kept below that of the oven, and the oven starts to cool just after the CRT reaches 400 °C; alternatively, the CRT may be kept at a temperature higher than 400 °C for up to 15–55 minutes. The CRT is heated during or after evacuation, and the heat may be used simultaneously to melt the frit in the CRT, joining the screen and funnel. The pump used is a turbomolecular pump or a diffusion pump. Formerly mercury vacuum pumps were also used. After baking, the CRT is disconnected ("sealed" or "tipped off") from the vacuum pump. The getter is then fired using an RF (induction) coil. The getter is usually in the funnel or in the neck of the CRT. The getter material, which is often barium-based, catches any remaining gas particles as it evaporates due to heating induced by the RF coil (which may be combined with exothermic heating within the material); the vapor fills the CRT, trapping any gas molecules that it encounters, and condenses on the inside of the CRT, forming a layer that contains the trapped gas molecules. Hydrogen may be present in the material to help distribute the barium vapor.
The material is heated to temperatures above 1000 °C, causing it to evaporate. Partial loss of vacuum in a CRT can result in a hazy image, blue glowing in the neck of the CRT, flashovers, loss of cathode emission or focusing problems. The vacuum inside of a CRT causes atmospheric pressure to exert a considerable total force on the envelope of a large CRT such as a 27-inch tube.
Rebuilding
CRTs used to be rebuilt, that is, repaired or refurbished. The rebuilding process included the disassembly of the CRT, the disassembly and repair or replacement of the electron gun(s), the removal and redeposition of phosphors and aquadag, etc. Rebuilding was popular until the 1960s because CRTs were expensive and wore out quickly, making repair worth it. The last CRT rebuilder in the US closed in 2010, and the last in Europe, RACS, which was located in France, closed in 2013.
Reactivation
Also known as rejuvenation, the goal of reactivation is to temporarily restore the brightness of a worn CRT. This is often done by carefully increasing the voltage on the cathode heater and the current and voltage on the control grids of the electron gun manually. Some rejuvenators can also fix heater-to-cathode shorts by running a capacitive discharge through the short.
Phosphors
Phosphors in CRTs emit secondary electrons because they are inside the vacuum of the CRT. The secondary electrons are collected by the anode of the CRT. Secondary electrons generated by phosphors need to be collected to prevent charges from developing on the screen, which would lead to reduced image brightness since the charge would repel the electron beam.
The phosphors used in CRTs often contain rare earth metals, replacing earlier dimmer phosphors. Early red and green phosphors contained cadmium, and some black and white CRT phosphors also contained beryllium in the form of zinc beryllium silicate, although white phosphors containing cadmium, zinc and magnesium with silver, copper or manganese as dopants were also used. The rare earth phosphors used in CRTs are more efficient (produce more light) than earlier phosphors. The phosphors adhere to the screen because of van der Waals and electrostatic forces. Phosphors composed of smaller particles adhere more strongly to the screen. The phosphors, together with the carbon used to prevent light bleeding (in color CRTs), can be easily removed by scratching.
Several dozen types of phosphors were available for CRTs. Phosphors were classified according to color, persistence, luminance rise and fall curves, color depending on anode voltage (for phosphors used in penetration CRTs), intended use, chemical composition, safety, sensitivity to burn-in, and secondary emission properties. Examples of rare earth phosphors are yttrium oxide for red and yttrium silicide for blue, while an example of an earlier phosphor is copper cadmium sulfide for red.
SMPTE-C phosphors have properties defined by the SMPTE-C standard, which defines a color space of the same name. The standard prioritizes accurate color reproduction, which was made difficult by the different phosphors and color spaces used in the NTSC and PAL color systems. PAL TV sets have subjectively better color reproduction due to the use of saturated green phosphors, which have relatively long decay times that are tolerated in PAL since there is more time in PAL for phosphors to decay, due to its lower frame rate. SMPTE-C phosphors were used in professional video monitors.
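The trade-off noted above between phosphor decay time and field rate can be illustrated with a very simple exponential-decay model. The decay constants below are made-up illustrative values, not measured data for any real phosphor, and real decay curves are generally not purely exponential.

```python
# Toy model: fraction of a phosphor's light output remaining when the next
# field arrives, assuming simple exponential decay. The time constants are
# illustrative assumptions only.
import math

def remaining_fraction(decay_ms, field_rate_hz):
    field_period_ms = 1000.0 / field_rate_hz
    return math.exp(-field_period_ms / decay_ms)

for decay_ms in (1.0, 5.0, 20.0):      # hypothetical decay time constants
    for rate_hz in (50, 60):           # PAL and NTSC field rates
        left = remaining_fraction(decay_ms, rate_hz) * 100
        print(f"tau = {decay_ms} ms at {rate_hz} Hz: "
              f"{left:5.1f} % of the light remains at the next field")
```

A slower phosphor carries more of its light into the next field, which reduces flicker but increases smearing on moving images; the longer 20 ms field period of a 50 Hz system also leaves slightly more time for a given phosphor to decay than the 16.7 ms period of a 60 Hz system.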
The phosphor coating on monochrome and color CRTs may have an aluminum coating on its rear side used to reflect light forward, provide protection against ions to prevent ion burn by negative ions on the phosphor, manage heat generated by electrons colliding against the phosphor, prevent static build-up that could repel electrons from the screen, form part of the anode and collect the secondary electrons generated by the phosphors in the screen after being hit by the electron beam, providing the electrons with a return path. The electron beam passes through the aluminum coating before hitting the phosphors on the screen; the aluminum attenuates the electron beam voltage by about 1 kV. A film or lacquer may be applied to the phosphors to reduce the surface roughness of the phosphor layer, allowing the aluminum coating to have a uniform surface and preventing it from touching the glass of the screen. This is known as filming. The lacquer contains solvents that are later evaporated; the lacquer may be chemically roughened so that the aluminum coating is formed with holes that allow the solvents to escape.
Phosphor persistence
Various phosphors are available depending upon the needs of the measurement or display application. The brightness, color, and persistence of the illumination depend upon the type of phosphor used on the CRT screen. Phosphors are available with persistences ranging from less than one microsecond to several seconds. For visual observation of brief transient events, a long persistence phosphor may be desirable. For events which are fast and repetitive, or high frequency, a short-persistence phosphor is generally preferable. The phosphor persistence must be low enough to avoid smearing or ghosting artifacts at high refresh rates.
Limitations and workarounds
Blooming
Variations in anode voltage can lead to variations in brightness in parts or all of the image, in addition to blooming, shrinkage or the image getting zoomed in or out. Lower voltages lead to blooming and zooming in, while higher voltages do the opposite. Some blooming is unavoidable, which can be seen as bright areas of an image that expand, distorting or pushing aside surrounding darker areas of the same image. Blooming occurs because bright areas have a higher electron beam current from the electron gun, making the beam wider and harder to focus. Poor voltage regulation causes focus and anode voltage to go down with increasing electron beam current.
Doming
Doming is a phenomenon found on some CRT televisions in which parts of the shadow mask become heated. In televisions that exhibit this behavior, it tends to occur in high-contrast scenes in which there is a largely dark scene with one or more localized bright spots. As the electron beam hits the shadow mask in these areas, it heats unevenly. The shadow mask warps due to the heat differences, which causes the electron beams to hit the wrong colored phosphors and incorrect colors to be displayed in the affected area. Thermal expansion causes the shadow mask to expand by around 100 microns. During normal operation, the shadow mask is heated to around 80–90 °C. Bright areas of images heat the shadow mask more than dark areas, leading to uneven heating of the shadow mask and warping (blooming) due to thermal expansion caused by the heating from increased electron beam current.
The shadow mask is usually made of steel but it can be made of Invar (a low-thermal-expansion nickel–iron alloy), as Invar withstands two to three times more current than conventional masks without noticeable warping, while making higher resolution CRTs easier to achieve. Coatings that dissipate heat may be applied on the shadow mask to limit blooming in a process called blackening.
Bimetal springs may be used in CRTs used in TVs to compensate for warping that occurs as the electron beam heats the shadow mask, causing thermal expansion. The shadow mask is attached to the screen using metal pieces or a rail or frame that is fused to the funnel or the screen glass respectively, holding the shadow mask in tension to minimize warping (if the mask is flat, as used in flat-screen CRT computer monitors) and allowing for higher image brightness and contrast.
Aperture grille screens are brighter since they allow more electrons through, but they require support wires. They are also more resistant to warping. Color CRTs need higher anode voltages than monochrome CRTs to achieve the same brightness since the shadow mask blocks most of the electron beam. Slot masks and especially aperture grilles don't block as many electrons, resulting in a brighter image for a given anode voltage, but aperture grille CRTs are heavier. Shadow masks block 80–85% of the electron beam while aperture grilles allow more electrons to pass through.
High voltage
Image brightness is related to the anode voltage and to the CRT's size, so higher voltages are needed for both larger screens and higher image brightness. Image brightness is also controlled by the current of the electron beam. Higher anode voltages and electron beam currents also mean higher amounts of x-rays and heat generation since the electrons have a higher speed and energy. Leaded glass and special barium-strontium glass are used to block most x-ray emissions.
Size
Size is limited by anode voltage: a larger screen needs a higher anode voltage to maintain image brightness, which in turn requires higher dielectric strength to prevent arcing (corona discharge) and the electrical losses and ozone generation it causes. The weight of the CRT, which originates from the thick glass needed to safely sustain a vacuum, imposes a practical limit on the size of a CRT. The 43-inch Sony PVM-4300 CRT monitor weighs . Smaller CRTs weigh significantly less, as an example, 32-inch CRTs weigh up to and 19-inch CRTs weigh up to . For comparison, a 32-inch flat panel TV only weighs approx. and a 19-inch flat panel TV weighs . Shadow masks become more difficult to make with increasing resolution and size.
Limits imposed by deflection
At high deflection angles, resolutions and refresh rates (since higher resolutions and refresh rates require significantly higher frequencies to be applied to the horizontal deflection coils), the deflection yoke starts to produce large amounts of heat, due to the need to move the electron beam at a higher angle, which in turn requires exponentially larger amounts of power. As an example, to increase the deflection angle from 90 to 120°, power consumption of the yoke must also go up from 40 watts to 80 watts, and to increase it further from 120 to 150°, deflection power must again go up from 80 watts to 160 watts.
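Taking the example figures above at face value, yoke power consumption roughly doubles for every additional 30° of deflection angle. The sketch below is only a crude fit to those two data points, not an actual yoke design equation, and the intermediate values it produces are purely illustrative.

```python
# Crude fit to the example above: deflection power roughly doubles for every
# additional 30 degrees of deflection angle. Illustrative only, not a design formula.

def yoke_power_watts(angle_deg, base_angle_deg=90, base_watts=40):
    return base_watts * 2 ** ((angle_deg - base_angle_deg) / 30)

for angle in (90, 100, 110, 120, 150):
    print(f"{angle:>3} degrees -> ~{yoke_power_watts(angle):.0f} W of deflection power")
```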
This normally makes CRTs that go beyond certain deflection angles, resolutions and refresh rates impractical, since the coils would generate too much heat, due to resistance caused by the skin effect and to surface and eddy current losses, possibly causing the glass underneath the coil to become conductive (as the electrical conductivity of glass increases with increasing temperature). Some deflection yokes are designed to dissipate the heat that comes from their operation. Higher deflection angles in color CRTs directly affect convergence at the corners of the screen, which requires additional compensation circuitry to handle electron beam power and shape, leading to higher costs and power consumption. Higher deflection angles allow a CRT of a given size to be slimmer; however, they also impose more stress on the CRT envelope, especially on the panel, the seal between the panel and funnel, and on the funnel. The funnel needs to be long enough to minimize stress, as a longer funnel can be better shaped to have lower stress.
Comparison with other technologies
LCD advantages over CRT: Lower bulk, power consumption and heat generation, higher refresh rates (up to 360 Hz), higher contrast ratios
CRT advantages over LCD: Better color reproduction, no motion blur, multisyncing available in many monitors, no input lag
OLED advantages over CRT: Lower bulk, similar color reproduction, higher contrast ratios, similar refresh rates (over 60 Hz, up to 120 Hz) except for computer monitors
On CRTs, refresh rate depends on resolution, both of which are ultimately limited by the maximum horizontal scanning frequency of the CRT. Motion blur also depends on the decay time of the phosphors. Phosphors that decay too slowly for a given refresh rate may cause smearing or motion blur on the image. In practice, CRTs are limited to a refresh rate of 160 Hz. LCDs that can compete with OLED (dual layer and mini-LED LCDs) are not available in high refresh rates, although quantum dot LCDs (QLEDs) are available in high refresh rates (up to 144 Hz) and are competitive in color reproduction with OLEDs.
CRT monitors can still outperform LCD and OLED monitors in input lag, as there is no signal processing between the CRT and the display connector of the monitor, since CRT monitors often use VGA, which provides an analog signal that can be fed to a CRT directly. Video cards designed for use with CRTs may have a RAMDAC to generate the analog signals needed by the CRT. Also, CRT monitors are often capable of displaying sharp images at several resolutions, an ability known as multisyncing. For these reasons, CRTs are sometimes preferred by PC gamers in spite of their bulk, weight and heat generation. CRTs tend to be more durable than their flat panel counterparts, though specialised LCDs that have similar durability also exist.
Types
CRTs were produced in two major categories, picture tubes and display tubes. Picture tubes were used in TVs while display tubes were used in computer monitors. Display tubes had no overscan and were of higher resolution. Picture tube CRTs have overscan, meaning the actual edges of the image are not shown; this is deliberate to allow for adjustment variations between CRT TVs, preventing the ragged edges (due to blooming) of the image from being shown on screen. The shadow mask may have grooves that reflect away the electrons that do not hit the screen due to overscan. Color picture tubes used in TVs were also known as CPTs. CRTs are also sometimes called Braun tubes.
Monochrome CRTs
If the CRT is a black and white (B&W or monochrome) CRT, there is a single electron gun in the neck and the funnel is coated on the inside with aluminum that has been applied by evaporation; the aluminum is evaporated in a vacuum and allowed to condense on the inside of the CRT. Aluminum eliminates the need for ion traps, necessary to prevent ion burn on the phosphor, while also reflecting light generated by the phosphor towards the screen, managing heat and absorbing electrons, providing a return path for them; previously funnels were coated on the inside with aquadag, used because it can be applied like paint; the phosphors were left uncoated. Aluminum started being applied to CRTs in the 1950s, coating the inside of the CRT including the phosphors, which also increased image brightness since the aluminum reflected light (that would otherwise be lost inside the CRT) towards the outside of the CRT. In aluminized monochrome CRTs, aquadag is used on the outside. There is a single aluminum coating covering the funnel and the screen. The screen, funnel and neck are fused together into a single envelope, possibly using lead enamel seals; a hole is made in the funnel onto which the anode cap is installed, and the phosphor, aquadag and aluminum are applied afterwards. Previously monochrome CRTs used ion traps that required magnets; the magnet was used to deflect the electrons away from the more difficult-to-deflect ions, letting the electrons through while letting the ions collide into a sheet of metal inside the electron gun. Ion burn results in premature wear of the phosphor. Since ions are harder to deflect than electrons, ion burn leaves a black dot in the center of the screen. The interior aquadag or aluminum coating was the anode and served to accelerate the electrons towards the screen and to collect them after they hit the screen, while serving as a capacitor together with the outer aquadag coating. The screen has a single uniform phosphor coating and no shadow mask, technically having no resolution limit. Monochrome CRTs may use ring magnets to adjust the centering of the electron beam and magnets around the deflection yoke to adjust the geometry of the image.
Color CRTs
Color CRTs use three different phosphors which emit red, green, and blue light respectively. They are packed together in stripes (as in aperture grille designs) or clusters called "triads" (as in shadow mask CRTs). Color CRTs have three electron guns, one for each primary color (red, green and blue), arranged either in a straight line (in-line) or in an equilateral triangular configuration (the guns are usually constructed as a single unit). (The triangular configuration is often called "delta-gun", based on its relation to the shape of the Greek letter delta Δ.) The arrangement of the phosphors is the same as that of the electron guns. A grille or mask absorbs the electrons that would otherwise hit the wrong phosphor.
A shadow mask tube uses a metal plate with tiny holes, typically in a delta configuration, placed so that the electron beam only illuminates the correct phosphors on the face of the tube, blocking all other electrons. Shadow masks that use slots instead of holes are known as slot masks. The holes or slots are tapered so that the electrons that strike the inside of any hole will be reflected back, if they are not absorbed (e.g. due to local charge accumulation), instead of bouncing through the hole to strike a random (wrong) spot on the screen.
Another type of color CRT (Trinitron) uses an aperture grille of tensioned vertical wires to achieve the same result. The shadow mask has a single hole for each triad. The shadow mask is usually about 1/2 inch behind the screen. Trinitron CRTs were different from other color CRTs in that they had a single electron gun with three cathodes, an aperture grille which lets more electrons through, increasing image brightness (since the aperture grille does not block as many electrons), and a vertically cylindrical screen, rather than a curved screen.
The three electron guns are in the neck (except for Trinitrons), and the red, green and blue phosphors on the screen may be separated by a black grid or matrix (called black stripe by Toshiba). The funnel is coated with aquadag on both sides while the screen has a separate aluminum coating applied in a vacuum. The aluminum coating protects the phosphor from ions, absorbs secondary electrons, providing them with a return path and preventing them from electrostatically charging the screen, which would then repel electrons and reduce image brightness, reflects the light from the phosphors forwards, and helps manage heat. It also serves as the anode of the CRT together with the inner aquadag coating. The inner coating is electrically connected to an electrode of the electron gun using springs, forming the final anode. The outer aquadag coating is connected to ground, possibly using a series of springs or a harness that makes contact with the aquadag.
Shadow mask
The shadow mask absorbs or reflects electrons that would otherwise strike the wrong phosphor dots, causing color purity issues (discoloration of images); in other words, when set up correctly, the shadow mask helps ensure color purity. When the electrons strike the shadow mask, they release their energy as heat and x-rays. If the electrons have too much energy, due to an anode voltage that is too high for example, the shadow mask can warp due to the heat, which can also happen during the Lehr baking, at approx. 435 °C, of the frit seal between the faceplate and the funnel of the CRT.
Shadow masks were replaced in TVs by slot masks in the 1970s, since slot masks let more electrons through, increasing image brightness. Shadow masks may be connected electrically to the anode of the CRT. Trinitron used a single electron gun with three cathodes instead of three complete guns. CRT PC monitors usually use shadow masks, except for Sony's Trinitron, Mitsubishi's Diamondtron and NEC's Cromaclear; Trinitron and Diamondtron use aperture grilles while Cromaclear uses a slot mask. Some shadow mask CRTs have color phosphors that are smaller in diameter than the electron beams used to light them, the intention being that the beam covers the entire phosphor dot, increasing image brightness. Shadow masks may be pressed into a curved shape.
Screen manufacture
Early color CRTs did not have a black matrix, which was introduced by Zenith in 1969, and Panasonic in 1970. The black matrix eliminates light leaking from one phosphor to another since the black matrix isolates the phosphor dots from one another, so part of the electron beam touches the black matrix. This is also made necessary by warping of the shadow mask. Light bleeding may still occur due to stray electrons striking the wrong phosphor dots. At high resolutions and refresh rates, phosphors only receive a very small amount of energy, limiting image brightness. Several methods were used to create the black matrix.
One method coated the screen in photoresist, such as dichromate-sensitized polyvinyl alcohol photoresist, which was then dried and exposed; the unexposed areas were removed and the entire screen was coated in colloidal graphite to create a carbon film, and then hydrogen peroxide was used to remove the remaining photoresist alongside the carbon that was on top of it, creating holes that in turn created the black matrix. The photoresist had to be of the correct thickness to ensure sufficient adhesion to the screen, while the exposure step had to be controlled to avoid holes that were too small or large with ragged edges caused by light diffraction, ultimately limiting the maximum resolution of large color CRTs. The holes were then filled with phosphor using the method described above. Another method used phosphors suspended in an aromatic diazonium salt that adhered to the screen when exposed to light; the phosphors were applied, then exposed to cause them to adhere to the screen, repeating the process once for each color. Then carbon was applied to the remaining areas of the screen while exposing the entire screen to light to create the black matrix, and a fixing process using an aqueous polymer solution was applied to the screen to make the phosphors and black matrix resistant to water. Black chromium may be used instead of carbon in the black matrix. Other methods were also used.
The phosphors are applied using photolithography. The inner side of the screen is coated with phosphor particles suspended in PVA photoresist slurry, which is then dried using infrared light, exposed, and developed. The exposure is done using a "lighthouse" that uses an ultraviolet light source with a corrector lens to allow the CRT to achieve color purity. Removable shadow masks with spring-loaded clips are used as photomasks. The process is repeated with all colors. Usually the green phosphor is the first to be applied. After phosphor application, the screen is baked to eliminate any organic chemicals (such as the PVA that was used to deposit the phosphor) that may remain on the screen. Alternatively, the phosphors may be applied in a vacuum chamber by evaporating them and allowing them to condense on the screen, creating a very uniform coating. Early color CRTs had their phosphors deposited using silkscreen printing. Phosphors may have color filters over them (facing the viewer), contain pigment of the color emitted by the phosphor, or be encapsulated in color filters to improve color purity and reproduction while reducing glare. Poor exposure due to insufficient light leads to poor phosphor adhesion to the screen, which limits the maximum resolution of a CRT, as the smaller phosphor dots required for higher resolutions cannot receive as much light due to their smaller size.
After the screen is coated with phosphor and aluminum and the shadow mask is installed onto it, the screen is bonded to the funnel using a glass frit that may contain 65 to 88% lead oxide by weight. The lead oxide is necessary for the glass frit to have a low melting temperature. Boron(III) oxide may also be present to stabilize the frit, with alumina powder as a filler to control the thermal expansion of the frit. The frit may be applied as a paste consisting of frit particles suspended in amyl acetate or in a polymer with an alkyl methacrylate monomer together with an organic solvent to dissolve the polymer and monomer.
The CRT is then baked in an oven in what is called a Lehr bake, to cure the frit, sealing the funnel and screen together. The frit contains a large quantity of lead, causing color CRTs to contain more lead than their monochrome counterparts. Monochrome CRTs, on the other hand, do not require frit; the funnel can be fused directly to the screen glass by melting and joining the edges of the funnel and screen using gas flames. Frit is used in color CRTs to prevent deformation of the shadow mask and screen during the fusing process; in color CRTs, the edges of the screen and funnel are never melted. A primer may be applied on the edges of the funnel and screen before the frit paste is applied to improve adhesion. The Lehr bake consists of several successive steps that heat and then cool the CRT gradually until it reaches a temperature of 435 to 475 °C (other sources may state different temperatures, such as 440 °C). After the Lehr bake, the CRT is flushed with air or nitrogen to remove contaminants, the electron gun is inserted and sealed into the neck of the CRT, and a vacuum is formed inside the CRT.
Convergence and purity in color CRTs
Due to limitations in the dimensional precision with which CRTs can be manufactured economically, it has not been practically possible to build color CRTs in which three electron beams could be aligned to hit phosphors of respective color in acceptable coordination, solely on the basis of the geometric configuration of the electron gun axes and gun aperture positions, shadow mask apertures, etc. The shadow mask ensures that one beam will only hit spots of certain colors of phosphors, but minute variations in physical alignment of the internal parts among individual CRTs will cause variations in the exact alignment of the beams through the shadow mask, allowing some electrons from, for example, the red beam to hit, say, blue phosphors, unless some individual compensation is made for the variance among individual tubes.
Color convergence and color purity are two aspects of this single problem. Firstly, for correct color rendering it is necessary that regardless of where the beams are deflected on the screen, all three hit the same spot (and nominally pass through the same hole or slot) on the shadow mask. This is called convergence. More specifically, the convergence at the center of the screen (with no deflection field applied by the yoke) is called static convergence, and the convergence over the rest of the screen area (especially at the edges and corners) is called dynamic convergence. The beams may converge at the center of the screen and yet stray from each other as they are deflected toward the edges; such a CRT would be said to have good static convergence but poor dynamic convergence. Secondly, each beam must only strike the phosphors of the color it is intended to strike and no others. This is called purity. Like convergence, there is static purity and dynamic purity, with the same meanings of "static" and "dynamic" as for convergence. Convergence and purity are distinct parameters; a CRT could have good purity but poor convergence, or vice versa. Poor convergence causes color "shadows" or "ghosts" along displayed edges and contours, as if the image on the screen were intaglio printed with poor registration. Poor purity causes objects on the screen to appear off-color while their edges remain sharp.
Purity and convergence problems can occur at the same time, in the same or different areas of the screen or both over the whole screen, and either uniformly or to greater or lesser degrees over different parts of the screen. The solution to the static convergence and purity problems is a set of color alignment ring magnets installed around the neck of the CRT. These movable weak permanent magnets are usually mounted on the back end of the deflection yoke assembly and are set at the factory to compensate for any static purity and convergence errors that are intrinsic to the unadjusted tube. Typically there are two or three pairs of two magnets in the form of rings made of plastic impregnated with a magnetic material, with their magnetic fields parallel to the planes of the magnets, which are perpendicular to the electron gun axes. Often, one ring has two poles, another has 4, and the remaining ring has 6 poles. Each pair of magnetic rings forms a single effective magnet whose field vector can be fully and freely adjusted (in both direction and magnitude). By rotating a pair of magnets relative to each other, their relative field alignment can be varied, adjusting the effective field strength of the pair. (As they rotate relative to each other, each magnet's field can be considered to have two opposing components at right angles, and these four components [two each for two magnets] form two pairs, one pair reinforcing each other and the other pair opposing and canceling each other. Rotating away from alignment, the magnets' mutually reinforcing field components decrease as they are traded for increasing opposed, mutually cancelling components.) By rotating a pair of magnets together, preserving the relative angle between them, the direction of their collective magnetic field can be varied. Overall, adjusting all of the convergence/purity magnets allows a finely tuned slight electron beam deflection or lateral offset to be applied, which compensates for minor static convergence and purity errors intrinsic to the uncalibrated tube. Once set, these magnets are usually glued in place, but normally they can be freed and readjusted in the field (e.g. by a TV repair shop) if necessary. On some CRTs, additional fixed adjustable magnets are added for dynamic convergence or dynamic purity at specific points on the screen, typically near the corners or edges. Further adjustment of dynamic convergence and purity typically cannot be done passively, but requires active compensation circuits, one to correct convergence horizontally and another to correct it vertically. The deflection yoke contains convergence coils, a set of two per color, wound on the same core, to which the convergence signals are applied. That means 6 convergence coils in groups of 3, with 2 coils per group, with one coil for horizontal convergence correction and another for vertical convergence correction, with each group sharing a core. The groups are separated 120° from one another. Dynamic convergence is necessary because the front of the CRT and the shadow mask aren't spherical, compensating for electron beam defocusing and astigmatism. The fact that the CRT screen isn't spherical leads to geometry problems which may be corrected using a circuit. The signals used for convergence are parabolic waveforms derived from three signals coming from a vertical output circuit. 
The parabolic signal is fed into the convergence coils, while the other two are sawtooth signals that, when mixed with the parabolic signals, create the necessary signal for convergence. A resistor and diode are used to lock the convergence signal to the center of the screen to prevent it from being affected by the static convergence. The horizontal and vertical convergence circuits are similar. Each circuit has two resonators, one usually tuned to 15,625 Hz and the other to 31,250 Hz, which set the frequency of the signal sent to the convergence coils. Dynamic convergence may be accomplished using electrostatic quadrupole fields in the electron gun. Dynamic convergence means that the electron beam does not travel in a perfectly straight line between the deflection coils and the screen, since the convergence coils cause it to become curved to conform to the screen.
The convergence signal may instead be a sawtooth signal with a slight sine wave appearance; the sine wave part is created using a capacitor in series with each deflection coil. In this case, the convergence signal is used to drive the deflection coils. The sine wave part of the signal causes the electron beam to move more slowly near the edges of the screen. The capacitors used to create the convergence signal are known as the s-capacitors. This type of convergence is necessary due to the high deflection angles and flat screens of many CRT computer monitors. The value of the s-capacitors must be chosen based on the scan rate of the CRT, so multisyncing monitors must have different sets of s-capacitors, one for each refresh rate.
Dynamic convergence may instead be accomplished in some CRTs using only the ring magnets, magnets glued to the CRT, and by varying the position of the deflection yoke, whose position may be maintained using set screws, a clamp and rubber wedges. 90° deflection angle CRTs may use "self-convergence" without dynamic convergence, which, together with the in-line triad arrangement, eliminates the need for separate convergence coils and related circuitry, reducing costs, complexity and CRT depth by 10 millimeters. Self-convergence works by means of "nonuniform" magnetic fields. Dynamic convergence is necessary in 110° deflection angle CRTs, and quadrupole windings on the deflection yoke at a certain frequency may also be used for dynamic convergence.
Dynamic color convergence and purity are one of the main reasons why, until late in their history, CRTs were long-necked (deep) and had biaxially curved faces; these geometric design characteristics are necessary for intrinsic passive dynamic color convergence and purity. Only starting around the 1990s did sophisticated active dynamic convergence compensation circuits become available that made short-necked and flat-faced CRTs workable. These active compensation circuits use the deflection yoke to finely adjust beam deflection according to the beam target location. The same techniques (and major circuit components) also make possible the adjustment of display image rotation, skew, and other complex raster geometry parameters through electronics under user control.
The guns are aligned with one another (converged) using convergence rings placed right outside the neck; there is one ring per gun. The rings have north and south poles. There are 4 sets of rings, one to adjust RGB convergence, a second to adjust red and blue convergence, a third to adjust vertical raster shift, and a fourth to adjust purity.
The vertical raster shift adjusts the straightness of the scan line. CRTs may also employ dynamic convergence circuits, which ensure correct convergence at the edges of the CRT. Permalloy magnets may also be used to correct the convergence at the edges. Convergence is carried out with the help of a crosshatch (grid) pattern. Other CRTs may instead use magnets that are pushed in and out instead of rings. In early color CRTs, the holes in the shadow mask became progressively smaller as they extended outwards from the center of the screen, to aid in convergence.
Magnetic shielding and degaussing
If the shadow mask or aperture grille becomes magnetized, its magnetic field alters the paths of the electron beams. This causes errors of "color purity" as the electrons no longer follow only their intended paths, and some will hit some phosphors of colors other than the one intended. For example, some electrons from the red beam may hit blue or green phosphors, imposing a magenta or yellow tint to parts of the image that are supposed to be pure red. (This effect is localized to a specific area of the screen if the magnetization is localized.) Therefore, it is important that the shadow mask or aperture grille not be magnetized. The earth's magnetic field may have an effect on the color purity of the CRT. Because of this, some CRTs have external magnetic shields over their funnels. The magnetic shield may be made of soft iron or mild steel and contain a degaussing coil. The magnetic shield and shadow mask may be permanently magnetized by the earth's magnetic field, adversely affecting color purity when the CRT is moved. This problem is solved with a built-in degaussing coil, found in many TVs and computer monitors. Degaussing may be automatic, occurring whenever the CRT is turned on. The magnetic shield may also be internal, being on the inside of the funnel of the CRT.
Color CRT displays in television sets and computer monitors often have a built-in degaussing (demagnetizing) coil mounted around the perimeter of the CRT face. Upon power-up of the CRT display, the degaussing circuit produces a brief, alternating current through the coil which fades to zero over a few seconds, producing a decaying alternating magnetic field from the coil. This degaussing field is strong enough to remove shadow mask magnetization in most cases, maintaining color purity. In unusual cases of strong magnetization where the internal degaussing field is not sufficient, the shadow mask may be degaussed externally with a stronger portable degausser or demagnetizer. However, an excessively strong magnetic field, whether alternating or constant, may mechanically deform (bend) the shadow mask, causing a permanent color distortion on the display which looks very similar to a magnetization effect.
Resolution
Dot pitch defines the maximum resolution of the display, assuming delta-gun CRTs. In these, as the scanned resolution approaches the dot pitch resolution, moiré appears, as the detail being displayed is finer than what the shadow mask can render. Aperture grille monitors do not suffer from vertical moiré, however, because their phosphor stripes have no vertical detail. In smaller CRTs, these strips maintain position by themselves, but larger aperture-grille CRTs require one or two crosswise (horizontal) support strips; one for smaller CRTs, and two for larger ones. The support wires block electrons, causing the wires to be visible. In aperture grille CRTs, dot pitch is replaced by stripe pitch.
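As a rough illustration of how dot pitch bounds resolution, the calculation below assumes a hypothetical monitor with a viewable width of about 320 mm (roughly a 17-inch class tube) and a few common pitch values; both the width and the pitch figures are assumptions made for the example, not specifications of any particular monitor.

```python
# Rough upper bound on horizontal detail set by the dot (or stripe) pitch.
# The viewable width and pitch values are illustrative assumptions.

def max_horizontal_triads(viewable_width_mm, pitch_mm):
    return int(viewable_width_mm / pitch_mm)

viewable_width_mm = 320.0   # hypothetical viewable width of a 17-inch class CRT
for pitch_mm in (0.21, 0.25, 0.28):
    triads = max_horizontal_triads(viewable_width_mm, pitch_mm)
    print(f"{pitch_mm} mm pitch -> roughly {triads} phosphor triads across the screen")
```

Scanning a raster much finer than the resulting triad count cannot add real detail and instead tends to produce the moiré described above.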
Hitachi developed the Enhanced Dot Pitch (EDP) shadow mask, which uses oval holes instead of circular ones, with correspondingly oval phosphor dots. Moiré is reduced in shadow mask CRTs by arranging the holes in the shadow mask in a honeycomb-like pattern.
Projection CRTs
Projection CRTs were used in CRT projectors and CRT rear-projection televisions, and are usually small (being 7 to 9 inches across); have a phosphor that generates either red, green or blue light, thus making them monochrome CRTs; and are similar in construction to other monochrome CRTs. Larger projection CRTs in general lasted longer, and were able to provide higher brightness levels and resolution, but were also more expensive. Projection CRTs have an unusually high anode voltage for their size (such as 27 or 25 kV for a 5 or 7-inch projection CRT respectively), and a specially made tungsten/barium cathode (instead of the pure barium oxide normally used) that consists of barium atoms embedded in 20% porous tungsten, or of barium and calcium aluminates, or of barium, calcium and aluminum oxides coated on porous tungsten; the barium diffuses through the tungsten to emit electrons. The special cathode can deliver 2 mA of current instead of the 0.3 mA of normal cathodes, which makes them bright enough to be used as light sources for projection.
The high anode voltage and the specially made cathode increase the voltage and current, respectively, of the electron beam, which increases the light emitted by the phosphors, and also the amount of heat generated during operation; this means that projector CRTs need cooling. The screen is usually cooled using a container (the screen forms part of the container) with glycol; the glycol may itself be dyed, or colorless glycol may be used inside a container which may be colored (forming a lens known as a c-element). Colored lenses or glycol are used for improving color reproduction at the cost of brightness, and are only used on red and green CRTs. Each CRT has its own glycol, which has access to an air bubble to allow the glycol to shrink and expand as it cools and warms. Projector CRTs may have adjustment rings just like color CRTs to adjust astigmatism, which is flaring of the electron beam (stray light similar to shadows). They have three adjustment rings: one with two poles, one with four poles, and another with six poles. When correctly adjusted, the projector can display perfectly round dots without flaring. The screens used in projection CRTs were more transparent than usual, with 90% transmittance. The first projection CRTs were made in 1933.
Projector CRTs were available with electrostatic and electromagnetic focusing, the latter being more expensive. Electrostatic focusing used electronics to focus the electron beam, together with focusing magnets around the neck of the CRT for fine focusing adjustments. This type of focusing degraded over time. Electromagnetic focusing was introduced in the early 1990s and included an electromagnetic focusing coil in addition to the already existing focusing magnets. Electromagnetic focusing was much more stable over the lifetime of the CRT, retaining 95% of its sharpness by the end of life of the CRT.
Beam-index tube
The beam-index tube, also known as Uniray, Apple CRT or Indextron, was an attempt in the 1950s by Philco to create a color CRT without a shadow mask, eliminating convergence and purity problems, and allowing for shallower CRTs with higher deflection angles.
It also required a lower voltage power supply for the final anode since it didn't use a shadow mask, which normally blocks around 80% of the electrons generated by the electron gun. The lack of a shadow mask also made it immune to the earth's magnetic field, while also making degaussing unnecessary and increasing image brightness. It was constructed similarly to a monochrome CRT, with an aquadag outer coating, an aluminum inner coating, and a single electron gun, but with a screen that has an alternating pattern of red, green, blue and UV (index) phosphor stripes (similar to a Trinitron) and with a side-mounted photomultiplier tube or photodiode pointed towards the rear of the screen and mounted on the funnel of the CRT, used to track the electron beam so that the phosphors could be activated separately from one another using the same electron beam. Only the index phosphor stripe was used for tracking, and it was the only phosphor that wasn't covered by an aluminum layer. It was shelved because of the precision required to produce it. It was revived by Sony in the 1980s as the Indextron, but its adoption was limited, at least in part due to the development of LCD displays. Beam-index CRTs also suffered from poor contrast ratios of only around 50:1, since some light emission by the phosphors was required at all times by the photodiodes to track the electron beam.
The beam-index design allowed for single-CRT color projectors due to the lack of a shadow mask; normally CRT projectors use three CRTs, one for each color, since a lot of heat is generated due to the high anode voltage and beam current, making a shadow mask impractical and inefficient since it would warp under the heat produced (shadow masks absorb most of the electron beam, and, hence, most of the energy carried by the relativistic electrons); the three CRTs meant that an involved calibration and adjustment procedure had to be carried out during installation of the projector, and moving the projector would require it to be recalibrated. A single CRT meant the need for calibration was eliminated, but brightness was decreased since the CRT screen had to be used for three colors instead of each color having its own CRT screen. A stripe pattern also imposes a horizontal resolution limit; in contrast, three-screen CRT projectors have no theoretical resolution limit, due to them having single, uniform phosphor coatings.
Flat CRTs
Flat CRTs are those with a flat screen. Despite having a flat screen, they may not be completely flat, especially on the inside, instead having a greatly increased curvature. A notable exception is the LG Flatron (made by LG.Philips Displays, later LP Displays), which is truly flat on the outside and inside, but has a bonded glass pane on the screen with a tensioned rim band to provide implosion protection. Such completely flat CRTs were first introduced by Zenith in 1986, and used flat tensioned shadow masks, where the shadow mask is held under tension, providing increased resistance to blooming. Flat CRTs have a number of challenges, like deflection. Vertical deflection boosters are required to increase the amount of current that is sent to the vertical deflection coils to compensate for the reduced curvature. The CRTs used in the Sinclair TV80, and in many Sony Watchmans, were flat in that they were not deep and their front screens were flat, but their electron guns were placed to one side of the screen. The TV80 used electrostatic deflection while the Watchman used magnetic deflection with a phosphor screen that was curved inwards.
Similar CRTs were used in video doorbells.
Radar CRTs
Radar CRTs such as the 7JP4 had a circular screen and scanned the beam from the center outwards. The screen often had two colors: a bright short-persistence color that appeared only as the beam scanned the display, and a long-persistence phosphor afterglow. When the beam struck the phosphor, the phosphor illuminated brightly; when the beam left, the dimmer long-persistence afterglow remained lit where the beam had struck the phosphor, alongside the radar targets "written" by the beam, until the beam struck the phosphor again. The deflection yoke rotated, causing the beam to rotate in a circular fashion.
Oscilloscope CRTs
In oscilloscope CRTs, electrostatic deflection is used, rather than the magnetic deflection commonly used with television and other large CRTs. The beam is deflected horizontally by applying an electric field between a pair of plates to its left and right, and vertically by applying an electric field to plates above and below. Televisions use magnetic rather than electrostatic deflection because the deflection plates obstruct the beam when the deflection angle is as large as is required for tubes that are relatively short for their size. Some oscilloscope CRTs incorporate post-deflection anodes (PDAs) that are spiral-shaped to ensure even anode potential across the CRT and operate at up to 15,000 volts. In PDA CRTs the electron beam is deflected before it is accelerated, improving sensitivity and legibility, especially when analyzing voltage pulses with short duty cycles.
Microchannel plate
When displaying fast one-shot events, the electron beam must deflect very quickly, with few electrons impinging on the screen, leading to a faint or invisible image on the display. Oscilloscope CRTs designed for very fast signals can give a brighter display by passing the electron beam through a micro-channel plate just before it reaches the screen. Through the phenomenon of secondary emission, this plate multiplies the number of electrons reaching the phosphor screen, giving a significant improvement in writing rate (brightness) and improved sensitivity and spot size as well.
Graticules
Most oscilloscopes have a graticule as part of the visual display, to facilitate measurements. The graticule may be permanently marked inside the face of the CRT, or it may be a transparent external plate made of glass or acrylic plastic. An internal graticule eliminates parallax error, but cannot be changed to accommodate different types of measurements. Oscilloscopes commonly provide a means for the graticule to be illuminated from the side, which improves its visibility.
Image storage tubes
These are found in analog phosphor storage oscilloscopes. These are distinct from digital storage oscilloscopes which rely on solid state digital memory to store the image. Where a single brief event is monitored by an oscilloscope, such an event will be displayed by a conventional tube only while it actually occurs. The use of a long persistence phosphor may allow the image to be observed after the event, but only for a few seconds at best. This limitation can be overcome by the use of a direct view storage cathode-ray tube (storage tube). A storage tube will continue to display the event after it has occurred until such time as it is erased. A storage tube is similar to a conventional tube except that it is equipped with a metal grid coated with a dielectric layer located immediately behind the phosphor screen.
An externally applied voltage to the mesh initially ensures that the whole mesh is at a constant potential. This mesh is constantly exposed to a low velocity electron beam from a 'flood gun' which operates independently of the main gun. This flood gun is not deflected like the main gun but constantly 'illuminates' the whole of the storage mesh. The initial charge on the storage mesh is such as to repel the electrons from the flood gun, which are prevented from striking the phosphor screen. When the main electron gun writes an image to the screen, the energy in the main beam is sufficient to create a 'potential relief' on the storage mesh. The areas where this relief is created no longer repel the electrons from the flood gun, which now pass through the mesh and illuminate the phosphor screen. Consequently, the image that was briefly traced out by the main gun continues to be displayed after it has occurred. The image can be 'erased' by resupplying the external voltage to the mesh, restoring its constant potential. The time for which the image can be displayed is limited because, in practice, the flood gun slowly neutralises the charge on the storage mesh. One way of allowing the image to be retained for longer is temporarily to turn off the flood gun. It is then possible for the image to be retained for several days. The majority of storage tubes allow for a lower voltage to be applied to the storage mesh which slowly restores the initial charge state. By varying this voltage a variable persistence is obtained. Turning off the flood gun and the voltage supply to the storage mesh allows such a tube to operate as a conventional oscilloscope tube.
Vector monitors
Vector monitors were used in early computer aided design systems and in some late-1970s to mid-1980s arcade games such as Asteroids. They draw graphics point-to-point, rather than scanning a raster. Either monochrome or color CRTs can be used in vector displays, and the essential principles of CRT design and operation are the same for either type of display; the main difference is in the beam deflection patterns and circuits.
Data storage tubes
The Williams tube or Williams-Kilburn tube was a cathode-ray tube used to electronically store binary data. It was used in computers of the 1940s as a random-access digital storage device. In contrast to other CRTs in this article, the Williams tube was not a display device, and in fact could not be viewed since a metal plate covered its screen.
Cat's eye
In some vacuum tube radio sets, a "Magic Eye" or "Tuning Eye" tube was provided to assist in tuning the receiver. Tuning would be adjusted until the width of a radial shadow was minimized. This was used instead of a more expensive electromechanical meter, which later came to be used on higher-end tuners when transistor sets lacked the high voltage required to drive the device. The same type of device was used with tape recorders as a recording level meter, and for various other applications including electrical test equipment.
Charactrons
Some displays for early computers (those that needed to display more text than was practical using vectors, or that required high speed for photographic output) used Charactron CRTs. These incorporate a perforated metal character mask (stencil), which shapes a wide electron beam to form a character on the screen.
The system selects a character on the mask using one set of deflection circuits, but that causes the extruded beam to be aimed off-axis, so a second set of deflection plates has to re-aim the beam so it is headed toward the center of the screen. A third set of plates places the character wherever required. The beam is unblanked (turned on) briefly to draw the character at that position. Graphics could be drawn by selecting the position on the mask corresponding to the code for a space (in practice, they were simply not drawn), which had a small round hole in the center; this effectively disabled the character mask, and the system reverted to regular vector behavior. Charactrons had exceptionally long necks, because of the need for three deflection systems.
Nimo
Nimo was the trademark of a family of small specialised CRTs manufactured by Industrial Electronic Engineers. These had 10 electron guns which produced electron beams in the form of digits in a manner similar to that of the charactron. The tubes were either simple single-digit displays or more complex 4- or 6-digit displays produced by means of a suitable magnetic deflection system. Having little of the complexity of a standard CRT, the tube required a relatively simple driving circuit, and as the image was projected on the glass face, it provided a much wider viewing angle than competitive types (e.g., nixie tubes). However, their requirement for several voltages and their high voltage made them uncommon.
Flood-beam CRT
Flood-beam CRTs are small tubes that are arranged as pixels for large video walls like Jumbotrons. The first screen using this technology (called Diamond Vision by Mitsubishi Electric) was introduced by Mitsubishi Electric for the 1980 Major League Baseball All-Star Game. It differs from a normal CRT in that the electron gun within does not produce a focused controllable beam. Instead, electrons are sprayed in a wide cone across the entire front of the phosphor screen, basically making each unit act as a single light bulb. Each one is coated with a red, green or blue phosphor, to make up the color sub-pixels. This technology has largely been replaced with light-emitting diode displays. Unfocused and undeflected CRTs have been used as grid-controlled stroboscope lamps since 1958. Electron-stimulated luminescence (ESL) lamps, which use the same operating principle, were released in 2011.
Print-head CRT
CRTs with an unphosphored front glass but with fine wires embedded in it were used as electrostatic print heads in the 1960s. The wires would pass the electron beam current through the glass onto a sheet of paper where the desired content was therefore deposited as an electrical charge pattern. The paper was then passed near a pool of liquid ink with the opposite charge. The charged areas of the paper attract the ink and thus form the image.
Zeus – thin CRT display
In the late 1990s and early 2000s Philips Research Laboratories experimented with a type of thin CRT known as the Zeus display, which contained CRT-like functionality in a flat-panel display. The devices were demonstrated but never marketed.
Slimmer CRT
Some CRT manufacturers, both LG.Philips Displays (later LP Displays) and Samsung SDI, innovated CRT technology by creating a slimmer tube. Slimmer CRTs had the trade names Superslim, Ultraslim, Vixlim (by Samsung) and Cybertube and Cybertube+ (both by LG.Philips Displays). A flat CRT has a depth. The depth of Superslim was and Ultraslim was .
Health concerns

Ionizing radiation
CRTs can emit a small amount of X-ray radiation; this is a result of the electron beam's bombardment of the shadow mask/aperture grille and phosphors, which produces bremsstrahlung (braking radiation) as the high-energy electrons are decelerated. The amount of radiation escaping the front of the monitor is widely considered not to be harmful. Food and Drug Administration regulations strictly limit these emissions; television receivers, for instance, are limited to 0.5 milliroentgens per hour at a specified distance from any external surface, and since 2007 most CRTs have had emissions that fall well below this limit. The roentgen is an outdated unit of exposure that does not account for dose absorption; the conversion is about 0.877 rem of absorbed dose per roentgen. Assuming that the viewer absorbed the entire dose (which is unlikely) and watched TV for 2 hours a day, a 0.5 milliroentgen hourly exposure would increase the viewer's yearly dose by about 320 millirem (0.5 mR/h × 2 h/day × 365 days ≈ 365 mR, and 365 mR × 0.877 rem/R ≈ 320 mrem). For comparison, the average background radiation in the United States is 310 millirem a year. Negative effects of chronic radiation are not generally noticeable until doses exceed 20,000 millirem. The density of the X-rays generated by a CRT is low because the raster scan of a typical CRT distributes the energy of the electron beam across the entire screen. Voltages above 15,000 volts are enough to generate "soft" X-rays. However, since CRTs may stay on for several hours at a time, the amount of X-rays generated may become significant, hence the importance of using materials to shield against them, such as the thick leaded glass and barium-strontium glass used in CRTs. Concerns about X-rays emitted by CRTs began in 1967 when it was found that TV sets made by General Electric were emitting "X-radiation in excess of desirable levels". It was later found that TV sets from all manufacturers were also emitting radiation. This caused television industry representatives to be brought before a U.S. congressional committee, which later proposed a federal radiation regulation bill that became the 1968 Radiation Control for Health and Safety Act. TV set owners were advised to always stay at least 6 feet from the screen, and to avoid "prolonged exposure" at the sides, rear or underneath a set; it was discovered that most of the radiation was directed downwards. Owners were also told not to modify their set's internals, to avoid exposure to radiation. Headlines about "radioactive" TV sets continued until the end of the 1960s. There was once a proposal by two New York congressmen that would have forced TV set manufacturers to "go into homes to test all of the nation's 15 million color sets and to install radiation devices in them". The FDA eventually began regulating radiation emissions from all electronic products in the US.

Toxicity
Older color and monochrome CRTs may have been manufactured with toxic substances, such as cadmium, in the phosphors. The rear glass tube of modern CRTs may be made from leaded glass, which represents an environmental hazard if disposed of improperly. Since 1970, glass in the front panel (the viewable portion of the CRT) has used strontium oxide rather than lead, though the rear of the CRT was still produced from leaded glass. Monochrome CRTs typically do not contain enough leaded glass to fail EPA TCLP tests.
While the TCLP process grinds the glass into fine particles in order to expose them to weak acids to test for leachate, intact CRT glass does not leach (the lead is vitrified, contained inside the glass itself, similar to leaded crystal glassware).

Flicker
At low refresh rates (60 Hz and below), the periodic scanning of the display may produce a flicker that some people perceive more easily than others, especially when viewed with peripheral vision. Flicker is commonly associated with CRTs, as most televisions run at 50 Hz (PAL) or 60 Hz (NTSC), although there are some 100 Hz PAL televisions that are flicker-free. Typically only low-end monitors run at such low frequencies, with most computer monitors supporting at least 75 Hz and high-end monitors capable of 100 Hz or more to eliminate any perception of flicker. The 100 Hz rate of these PAL televisions was generally achieved internally, by displaying each 50 Hz field twice rather than by receiving a faster signal. Non-computer CRTs, such as those used for sonar or radar, may have a long-persistence phosphor and are thus flicker-free. If the persistence is too long on a video display, moving images will be blurred.

High-frequency audible noise
50 Hz/60 Hz CRTs used for television operate with horizontal scanning frequencies of 15,750 and 15,734.25 Hz (for NTSC systems) or 15,625 Hz (for PAL systems); these figures follow from the line counts and frame rates of the respective standards (525 lines × 30 frames per second = 15,750 Hz for the original monochrome standard, 525 × 29.97 ≈ 15,734 Hz for color NTSC, and 625 × 25 = 15,625 Hz for PAL). These frequencies are at the upper range of human hearing and are inaudible to many people; however, some people (especially children) will perceive a high-pitched tone near an operating CRT television. The sound is due to magnetostriction in the magnetic core and periodic movement of the windings of the flyback transformer, but it can also be created by movement of the deflection coils, yoke or ferrite beads. This problem does not occur on 100/120 Hz TVs or on non-CGA (Color Graphics Adapter) computer displays, because they use much higher horizontal scanning frequencies that produce sound which is inaudible to humans (22 kHz to over 100 kHz).

Implosion
High vacuum inside glass-walled cathode-ray tubes permits electron beams to fly freely—without colliding into molecules of air or other gas. If the glass is damaged, atmospheric pressure can collapse the vacuum tube into dangerous fragments which accelerate inward and then spray at high speed in all directions. Although modern cathode-ray tubes used in televisions and computer displays have epoxy-bonded face-plates or other measures to prevent shattering of the envelope, CRTs must be handled carefully to avoid personal injury.

Implosion protection
Early CRTs had a glass plate over the screen that was bonded to it using glue, creating a laminated glass screen: initially the glue was polyvinyl acetate (PVA), while later versions such as the LG Flatron used a resin, perhaps a UV-curable resin. The PVA degrades over time, creating a "cataract", a ring of degraded glue around the edges of the CRT that does not allow light from the screen to pass through. Later CRTs instead use a tensioned metal rim band mounted around the perimeter that also provides mounting points for attaching the CRT to a housing; in a 19-inch CRT, the tensile stress in the rim band is 70 kg/cm². Older CRTs were instead mounted to the TV set using a frame. The band is tensioned by heating it, then mounting it on the CRT; the band cools afterwards, shrinking in size and putting the glass under compression, which strengthens the glass and reduces the necessary thickness (and hence weight) of the glass.
This makes the band an integral component that should never be removed from an intact CRT that still has a vacuum; attempting to remove it may cause the CRT to implode. The rim band prevents the CRT from imploding should the screen be broken. The rim band may be glued to the perimeter of the CRT using epoxy, preventing cracks from spreading beyond the screen and into the funnel.

Electric shock
To accelerate the electrons from the cathode to the screen with enough energy to achieve sufficient image brightness, a very high voltage (EHT or extra-high tension) is required, from a few thousand volts for a small oscilloscope CRT to tens of thousands for a larger screen color TV. This is many times greater than household power supply voltage. Even after the power supply is turned off, some associated capacitors and the CRT itself may retain a charge for some time and can discharge suddenly through a path to ground, such as an inattentive person handling a capacitor discharge lead. An average monochrome CRT may use 1 to 1.5 kV of anode voltage per inch.

Security concerns
Under some circumstances, the signal radiated from the electron guns, scanning circuitry, and associated wiring of a CRT can be captured remotely and used to reconstruct what is shown on the CRT using a process called Van Eck phreaking. Special TEMPEST shielding can mitigate this effect. Such radiation of a potentially exploitable signal, however, also occurs with other display technologies and with electronics in general.

Recycling
Due to the toxins contained in CRT monitors, the United States Environmental Protection Agency created rules (in October 2001) stating that CRTs must be brought to special e-waste recycling facilities. In November 2002, the EPA began fining companies that disposed of CRTs through landfills or incineration. Regulatory agencies, local and statewide, monitor the disposal of CRTs and other computer equipment. As electronic waste, CRTs are considered one of the hardest types to recycle. CRTs have a relatively high concentration of lead and phosphors, both of which are necessary for the display. There are several companies in the United States that charge a small fee to collect CRTs, then subsidize their labor by selling the harvested copper, wire, and printed circuit boards. The United States Environmental Protection Agency (EPA) includes discarded CRT monitors in its category of "hazardous household waste" but considers CRTs that have been set aside for testing to be commodities if they are not discarded, speculatively accumulated, or left unprotected from weather and other damage. Various states participate in the recycling of CRTs, each with its own reporting requirements for collectors and recycling facilities. For example, in California the recycling of CRTs is governed by CalRecycle, the California Department of Resources Recycling and Recovery, through its Payment System. Recycling facilities that accept CRT devices from the business and residential sectors must obtain contact information, such as an address and phone number, to verify that the CRTs come from a California source in order to participate in the CRT Recycling Payment System. In Europe, disposal of CRT televisions and monitors is covered by the WEEE Directive. Multiple methods have been proposed for the recycling of CRT glass. The methods involve thermal, mechanical and chemical processes. All proposed methods remove the lead oxide content from the glass. Some companies operated furnaces to separate the lead from the glass.
A coalition called the Recytube project was once formed by several European companies to devise a method to recycle CRTs. The phosphors used in CRTs often contain rare-earth metals. A CRT contains about 7 g of phosphor. The funnel can be separated from the screen of the CRT using laser cutting, diamond saws or wires, or a resistively heated nichrome wire. Leaded CRT glass was sold to be remelted into other CRTs, or even broken down and used in road construction or in tiles, concrete, concrete and cement bricks, fiberglass insulation, or as flux in metals smelting. A considerable portion of CRT glass is landfilled, where it can pollute the surrounding environment. It is more common for CRT glass to be disposed of than recycled.

See also
Cathodoluminescence
Crookes tube
Scintillation (physics)

Applying CRTs for different display purposes:
Analog television
Image displaying
Comparison of CRT, LCD, plasma, and OLED
Overscan
Raster scan
Scan line

Historical aspects:
Direct-view bistable storage tube
Flat-panel display
Geer tube
History of display technology
Image dissector
LCD television, LED-backlit LCD, LED display
Penetron
Surface-conduction electron-emitter display
Trinitron

Safety and precautions:
Monitor filter
Photosensitive epilepsy
TCO Certification

References

Selected patents
Zworykin Television System

External links

Consumer electronics
Display technology
Television technology
Vacuum tube displays
Audiovisual introductions in 1897
Telecommunications-related introductions in 1897
Articles containing video clips
https://en.wikipedia.org/wiki/Calvinism
Calvinism
Calvinism (also called the Reformed tradition, Reformed Protestantism, Reformed Christianity, or simply Reformed) is a major branch of Protestantism that follows the theological tradition and forms of Christian practice set down by John Calvin and other Reformation-era theologians. It emphasizes the sovereignty of God and the authority of the Bible. Calvinists broke from the Roman Catholic Church in the 16th century. Calvinists differ from Lutherans (another major branch of the Reformation) on the spiritual real presence of Christ in the Lord's Supper, theories of worship, the purpose and meaning of baptism, and the use of God's law for believers, among other points. The label Calvinism can be misleading, because the religious tradition it denotes has always been diverse, with a wide range of influences rather than a single founder; however, almost all of its early leaders drew heavily from the writings of Augustine of Hippo, who wrote roughly eleven hundred years before the Reformation. The movement's namesake, French reformer John Calvin, embraced Protestant beliefs in the late 1520s or early 1530s, by which time the earliest notions of the later Reformed tradition had already been espoused by Huldrych Zwingli. The movement was first called Calvinism in the early 1550s by Lutherans who opposed it. Many in the tradition find it either a nondescript or inappropriate term and prefer the term Reformed. The most important Reformed theologians include Calvin, Zwingli, Martin Bucer, William Farel, Heinrich Bullinger, Thomas Cranmer, Nicholas Ridley, Peter Martyr Vermigli, Theodore Beza, and John Knox. In the twentieth century, Abraham Kuyper, Herman Bavinck, B. B. Warfield, J. Gresham Machen, Louis Berkhof, Karl Barth, Martyn Lloyd-Jones, Cornelius Van Til, R. C. Sproul, and J. I. Packer were influential. Contemporary Reformed theologians include Albert Mohler, John MacArthur, Tim Keller, John Piper, Joel Beeke, and Michael Horton. The Reformed tradition is largely represented by the Continental Reformed, Presbyterian, Evangelical Anglican, Congregationalist, and Reformed Baptist denominations. Reformed churches practice several forms of ecclesiastical polity, including presbyterian, congregationalist, and some episcopal forms. The largest Reformed association is the World Communion of Reformed Churches, with more than 100 million members in 211 member denominations around the world. More conservative Reformed federations include the World Reformed Fellowship and the International Conference of Reformed Churches.

Etymology
Calvinism is named after John Calvin and was first used as a label by a Lutheran theologian in 1552. Although it was a common practice of the Roman Catholic Church to name what it viewed as heresy after its founder, the term originated in Lutheran circles, and Calvin himself denounced the designation. Despite its negative connotation, the designation became increasingly popular in order to distinguish Calvinists from Lutherans and from other Protestant branches that emerged later. The vast majority of churches that trace their history back to Calvin (including Presbyterians, Congregationalists, and other Calvinist churches) do not use it themselves, because the designation "Reformed" is more generally accepted and preferred, especially in the English-speaking world. These churches claim to be—in accordance with John Calvin's own words—"renewed accordingly with the true order of gospel".
Since the Arminian controversy, the Reformed tradition—as a branch of Protestantism distinguished from Lutheranism—has been divided into two groups: Arminians and Calvinists. However, it is now rare to call Arminians a part of the Reformed tradition, with the majority of Arminians today being members of the Methodist Churches, General Baptist Churches or Pentecostal churches. While the Reformed theological tradition addresses all of the traditional topics of Christian theology, the word Calvinism is sometimes used to refer to particular Calvinist views on soteriology and predestination, which are summarized in part by the Five Points of Calvinism. Some have also argued that Calvinism as a whole stresses the sovereignty or rule of God in all things, including salvation.

History
The first wave of reformist theologians included Huldrych Zwingli (1484–1531), Martin Bucer (1491–1551), Wolfgang Capito (1478–1541), John Oecolampadius (1482–1531), and Guillaume Farel (1489–1565). Though they came from diverse academic backgrounds, their work already contained key themes within Reformed theology, especially the priority of scripture as a source of authority. Scripture was also viewed as a unified whole, which led to a covenantal theology of the sacraments of baptism and the Lord's Supper as visible signs of the covenant of grace. Another shared perspective was their denial of the Real presence of Christ in the Eucharist. Each understood salvation to be by grace alone and affirmed a doctrine of unconditional election, the teaching that some people are chosen by God to be saved. Martin Luther and his successor, Philipp Melanchthon, were significant influences on these theologians and, to an even larger extent, on those who followed. The doctrine of justification by faith alone, also known as sola fide, was a direct inheritance from Luther. The second generation featured John Calvin (1509–1564), Heinrich Bullinger (1504–1575), Wolfgang Musculus (1497–1563), Peter Martyr Vermigli (1500–1562), and Andreas Hyperius (1511–1564). Written between 1536 and 1539, Calvin's Institutes of the Christian Religion was one of the most influential works of the era. Toward the middle of the 16th century, these beliefs were formed into one consistent creed, which would shape the future definition of the Reformed faith. The 1549 Consensus Tigurinus unified Zwingli and Bullinger's memorialist theology of the Eucharist, which taught that it was simply a reminder of Christ's death, with Calvin's view of it as a means of grace with Christ actually present, though spiritually rather than bodily as in Catholic doctrine. The document demonstrates the diversity as well as the unity of early Reformed theology, giving it a stability that enabled it to spread rapidly throughout Europe. This stands in marked contrast to the bitter controversy experienced by Lutherans prior to the 1577 Formula of Concord. Due to Calvin's missionary work in France, his program of reform eventually reached the French-speaking provinces of the Netherlands. Calvinism was adopted in the Electorate of the Palatinate under Frederick III, which led to the formulation of the Heidelberg Catechism in 1563. This and the Belgic Confession were adopted as confessional standards in the first synod of the Dutch Reformed Church in 1571. In 1573, William the Silent joined the Calvinist Church. Calvinism was declared the official religion of the Kingdom of Navarre by the queen regnant Jeanne d'Albret after her conversion in 1560.
Leading divines, either Calvinist or sympathetic to Calvinism, settled in England, including Martin Bucer, Peter Martyr, and Jan Łaski, as did John Knox in Scotland. During the First English Civil War, English and Scots Presbyterians produced the Westminster Confession, which became the confessional standard for Presbyterians in the English-speaking world. Having established itself in Europe, the movement continued to spread to areas including North America, South Africa and Korea. Calvin did not live to see the foundation of his work grow into an international movement, but after his death his ideas spread far beyond their city of origin, crossed borders, and established their own distinct character.

Spread
Although much of Calvin's work was in Geneva, his publications spread his ideas of a correctly Reformed church to many parts of Europe. In Switzerland, some cantons are still Reformed, and some are Catholic. Calvinism became the dominant doctrine within the Church of Scotland, the Dutch Republic, some communities in Flanders, and parts of Germany, especially those adjacent to the Netherlands in the Palatinate, Kassel and Lippe, spread by Olevianus and Zacharias Ursinus among others. Protected by the local nobility, Calvinism became a significant religion in Eastern Hungary and Hungarian-speaking areas of Transylvania. Today there are about 3.5 million Hungarian Reformed people worldwide. It was influential in France, Lithuania and Poland before being mostly erased during the Counter-Reformation. In Poland, a faction called the Polish Brethren broke away from Calvinism on January 22, 1556, when Piotr of Goniądz, a Polish student, spoke out against the doctrine of the Trinity during the general synod of the Reformed churches of Poland held in the village of Secemin. Calvinism gained some popularity in Scandinavia, especially Sweden, but was rejected in favor of Lutheranism after the Synod of Uppsala in 1593. Many 17th-century European settlers in North America were Calvinist in doctrine; some, like the Pilgrim Fathers, emigrated because of arguments over church structure, while others, such as the French Huguenots, were forced into exile. Dutch and French Calvinist settlers, who became known as Boers or Afrikaners, were also among the first European colonizers of South Africa, beginning in the 17th century. Sierra Leone was largely colonized by Calvinist settlers from Nova Scotia, many of whom were Black Loyalists who had fought for the British Empire during the American War of Independence. John Marrant had organized a congregation there under the auspices of the Huntingdon Connection. Some of the largest Calvinist communions were started by 19th- and 20th-century missionaries; especially large are those in Indonesia, Korea and Nigeria. In South Korea there are 20,000 Presbyterian congregations with about 9–10 million church members, scattered across more than 100 Presbyterian denominations; Presbyterianism is the largest Christian denomination in South Korea. A 2011 report of the Pew Forum on Religion and Public Life estimated that members of Presbyterian or Reformed churches make up 7% of the estimated 801 million Protestants globally, or approximately 56 million people. The broadly defined Reformed faith is much larger, however, as it also includes Congregationalists (0.5%), most of the United and uniting churches (unions of different denominations) (7.2%), and most likely some of the other Protestant denominations (38.2%).
All three are distinct categories from Presbyterian or Reformed (7%) in this report. The Reformed family of churches is one of the largest Christian denominational families. According to adherents.com, the Reformed/Presbyterian/Congregational/United churches represent 75 million believers worldwide. The World Communion of Reformed Churches, which includes some United Churches (most of these are primarily Reformed; see Uniting and united churches for details), has 80 million believers. The WCRC is the third largest Christian communion in the world, after the Roman Catholic Church and the Eastern Orthodox Churches. Many conservative Reformed churches that are strongly Calvinistic formed the World Reformed Fellowship, which has about 70 member denominations. Most are not part of the World Communion of Reformed Churches because of its ecumenical character. The International Conference of Reformed Churches is another conservative association. The Church of Tuvalu is an officially established state church in the Calvinist tradition.

Theology

Revelation and scripture
Reformed theologians believe that God communicates knowledge of himself to people through the Word of God. People are not able to know anything about God except through this self-revelation. (An exception is the general revelation of God: "His invisible attributes, His eternal power and divine nature, have been clearly seen, being understood through what has been made, so that they are without excuse" (Romans 1:20).) Speculation about anything which God has not revealed through his Word is not warranted. The knowledge people have of God is different from that which they have of anything else, because God is infinite and finite people are incapable of comprehending an infinite being. While the knowledge revealed by God to people is never incorrect, it is also never comprehensive. According to Reformed theologians, God's self-revelation is always through his Son, Jesus Christ, because Christ is the only mediator between God and people. Revelation of God through Christ comes through two basic channels. The first is creation and providence, which is God's creating and continuing to work in the world. This action of God gives everyone knowledge about God, but this knowledge is only sufficient to make people culpable for their sin; it does not include knowledge of the gospel. The second channel through which God reveals himself is redemption, which is the gospel of salvation from condemnation, the punishment for sin. In Reformed theology, the Word of God takes several forms. Jesus Christ himself is the Word Incarnate. The prophecies about him said to be found in the Old Testament and the ministry of the apostles who saw him and communicated his message are also the Word of God. Further, the preaching of ministers about God is the very Word of God, because God is considered to be speaking through them. God also speaks through human writers in the Bible, which is composed of texts set apart by God for self-revelation. Reformed theologians emphasize the Bible as a uniquely important means by which God communicates with people. People gain knowledge of God from the Bible which cannot be gained in any other way. Reformed theologians affirm that the Bible is true, but differences emerge among them over the meaning and extent of its truthfulness. Conservative followers of the Princeton theologians take the view that the Bible is true and inerrant, or incapable of error or falsehood, in every place.
This view is very similar to that of Catholic orthodoxy as well as modern Evangelicalism. Another view, influenced by the teaching of Karl Barth and neo-orthodoxy, is found in the Presbyterian Church (U.S.A.)'s Confession of 1967. Those who take this view believe the Bible to be the primary source of our knowledge of God, but also that some parts of the Bible may be false, not witnesses to Christ, and not normative for today's church. In this view, Christ is the revelation of God, and the scriptures witness to this revelation rather than being the revelation itself. Covenant theology Reformed theologians use the concept of covenant to describe the way God enters into fellowship with people in history. The concept of covenant is so prominent in Reformed theology that Reformed theology as a whole is sometimes called "covenant theology". However, sixteenth- and seventeenth-century theologians developed a particular theological system called "covenant theology" or "federal theology" which many conservative Reformed churches continue to affirm today. This framework orders God's life with people primarily in two covenants: the covenant of works and the covenant of grace. The covenant of works is made with Adam and Eve in the Garden of Eden. The terms of the covenant are that God provides a blessed life in the garden on condition that Adam and Eve obey God's law perfectly. Because Adam and Eve broke the covenant by eating the forbidden fruit, they became subject to death and were banished from the garden. This sin was passed down to all mankind because all people are said to be in Adam as a covenantal or "federal" head. Federal theologians usually infer that Adam and Eve would have gained immortality had they obeyed perfectly. A second covenant, called the covenant of grace, is said to have been made immediately following Adam and Eve's sin. In it, God graciously offers salvation from death on condition of faith in God. This covenant is administered in different ways throughout the Old and New Testaments, but retains the substance of being free of a requirement of perfect obedience. Through the influence of Karl Barth, many contemporary Reformed theologians have discarded the covenant of works, along with other concepts of federal theology. Barth saw the covenant of works as disconnected from Christ and the gospel, and rejected the idea that God works with people in this way. Instead, Barth argued that God always interacts with people under the covenant of grace, and that the covenant of grace is free of all conditions whatsoever. Barth's theology and that which follows him has been called "monocovenantal" as opposed to the "bi-covenantal" scheme of classical federal theology. Conservative contemporary Reformed theologians, such as John Murray, have also rejected the idea of covenants based on law rather than grace. Michael Horton, however, has defended the covenant of works as combining principles of law and love. God For the most part, the Reformed tradition did not modify the medieval consensus on the doctrine of God. God's character is described primarily using three adjectives: eternal, infinite, and unchangeable. Reformed theologians such as Shirley Guthrie have proposed that rather than conceiving of God in terms of his attributes and freedom to do as he pleases, the doctrine of God is to be based on God's work in history and his freedom to live with and empower people. 
Traditionally, Reformed theologians have also followed the medieval tradition going back to before the early church councils of Nicaea and Chalcedon on the doctrine of the Trinity. God is affirmed to be one God in three persons: Father, Son, and Holy Spirit. The Son (Christ) is held to be eternally begotten by the Father and the Holy Spirit eternally proceeding from the Father and Son. However, contemporary theologians have been critical of aspects of Western views here as well. Drawing on the Eastern tradition, these Reformed theologians have proposed a "social trinitarianism" where the persons of the Trinity only exist in their life together as persons-in-relationship. Contemporary Reformed confessions such as the Barmen Confession and Brief Statement of Faith of the Presbyterian Church (USA) have avoided language about the attributes of God and have emphasized his work of reconciliation and empowerment of people. Feminist theologian Letty Russell used the image of partnership for the persons of the Trinity. According to Russell, thinking this way encourages Christians to interact in terms of fellowship rather than reciprocity. Conservative Reformed theologian Michael Horton, however, has argued that social trinitarianism is untenable because it abandons the essential unity of God in favor of a community of separate beings. Christ and atonement Reformed theologians affirm the historic Christian belief that Christ is eternally one person with a divine and a human nature. Reformed Christians have especially emphasized that Christ truly became human so that people could be saved. Christ's human nature has been a point of contention between Reformed and Lutheran Christology. In accord with the belief that finite humans cannot comprehend infinite divinity, Reformed theologians hold that Christ's human body cannot be in multiple locations at the same time. Because Lutherans believe that Christ is bodily present in the Eucharist, they hold that Christ is bodily present in many locations simultaneously. For Reformed Christians, such a belief denies that Christ actually became human. Some contemporary Reformed theologians have moved away from the traditional language of one person in two natures, viewing it as unintelligible to contemporary people. Instead, theologians tend to emphasize Jesus' context and particularity as a first-century Jew. John Calvin and many Reformed theologians who followed him describe Christ's work of redemption in terms of three offices: prophet, priest, and king. Christ is said to be a prophet in that he teaches perfect doctrine, a priest in that he intercedes to the Father on believers' behalf and offered himself as a sacrifice for sin, and a king in that he rules the church and fights on believers' behalf. The threefold office links the work of Christ to God's work in ancient Israel. Many, but not all, Reformed theologians continue to make use of the threefold office as a framework because of its emphasis on the connection of Christ's work to Israel. They have, however, often reinterpreted the meaning of each of the offices. For example, Karl Barth interpreted Christ's prophetic office in terms of political engagement on behalf of the poor. Christians believe Jesus' death and resurrection makes it possible for believers to receive forgiveness for sin and reconciliation with God through the atonement. 
Reformed Protestants generally subscribe to a particular view of the atonement called penal substitutionary atonement, which explains Christ's death as a sacrificial payment for sin. Christ is believed to have died in place of the believer, who is accounted righteous as a result of this sacrificial payment.

Sin
In Christian theology, people are created good and in the image of God but have become corrupted by sin, which causes them to be imperfect and overly self-interested. Reformed Christians, following the tradition of Augustine of Hippo, believe that this corruption of human nature was brought on by Adam and Eve's first sin, a doctrine called original sin. Although earlier Christian authors taught the elements of physical death, moral weakness, and a sin propensity within original sin, Augustine was the first Christian to add the concept of inherited guilt (reatus) from Adam, whereby every infant is born eternally damned and humans lack any residual ability to respond to God. Reformed theologians emphasize that this sinfulness affects all of a person's nature, including their will. This view, that sin so dominates people that they are unable to avoid sin, has been called total depravity. As a consequence of Adam and Eve's sin, every one of their descendants is held to have inherited a stain of corruption and depravity; this condition, innate to all humans, is what Christian theology calls original sin. Calvin thought original sin was "a hereditary corruption and depravity of our nature, extending to all the parts of the soul." Calvin asserted people were so warped by original sin that "everything which our mind conceives, meditates, plans, and resolves, is always evil." The depraved condition of every human being is not the result of sins people commit during their lives. Instead, before we are born, while we are in our mother's womb, "we are in God's sight defiled and polluted." Calvin thought people were justly condemned to hell because their corrupted state is "naturally hateful to God." In colloquial English, the term "total depravity" can easily be misunderstood to mean that people are devoid of any goodness or unable to do any good. However, the Reformed teaching is actually that while people continue to bear God's image and may do things that appear outwardly good, their sinful intentions affect all of their nature and actions so that they are not pleasing to God. On a strongly deterministic reading of Calvinism, even a person's sins fall under God's eternal decree, and whether a person finally goes to Heaven or Hell rests on that decree rather than on anything the person does; nor can repentance arise from the sinner's own resources, since the sinner's own actions, thoughts, and words are themselves corrupted. Some contemporary theologians in the Reformed tradition, such as those associated with the Presbyterian Church (USA)'s Confession of 1967, have emphasized the social character of human sinfulness. These theologians have sought to bring attention to issues of environmental, economic, and political justice as areas of human life that have been affected by sin.

Salvation
Reformed theologians, along with other Protestants, believe salvation from punishment for sin is to be given to all those who have faith in Christ. Faith is not purely intellectual, but involves trust in God's promise to save. Protestants do not hold there to be any other requirement for salvation, but that faith alone is sufficient. Justification is the part of salvation where God pardons the sin of those who believe in Christ.
It is historically held by Protestants to be the most important article of Christian faith, though more recently it is sometimes given less importance out of ecumenical concerns. People are not on their own able to fully repent of their sin or prepare themselves to repent, because of their sinfulness. Therefore, justification is held to arise solely from God's free and gracious act. Sanctification is the part of salvation in which God makes the believer holy, by enabling them to exercise greater love for God and for other people. The good works accomplished by believers as they are sanctified are considered to be the necessary outworking of the believer's salvation, though they do not cause the believer to be saved. Sanctification, like justification, is by faith, because doing good works is simply living as the child of God one has become.

Predestination
Reformed theologians teach that sin so affects human nature that people are unable even to exercise faith in Christ by their own will. While people are said to retain will, in that they willfully sin, they are unable not to sin because of the corruption of their nature due to original sin. Reformed Christians believe that God predestined some people to be saved and others to eternal damnation. This choice by God to save some is held to be unconditional and not based on any characteristic or action on the part of the person chosen. This view is opposed to the Arminian view that God's choice of whom to save is conditional or based on his foreknowledge of who would respond positively to God. Karl Barth reinterpreted the Reformed doctrine of predestination to apply only to Christ; individual people are said to be elected only through their being in Christ. Reformed theologians who followed Barth, including Jürgen Moltmann, David Migliore, and Shirley Guthrie, have argued that the traditional Reformed concept of predestination is speculative and have proposed alternative models. These theologians claim that a properly trinitarian doctrine emphasizes God's freedom to love all people, rather than choosing some for salvation and others for damnation. God's justice towards and condemnation of sinful people is spoken of by these theologians as flowing from his love for them and a desire to reconcile them to himself.

Five Points of Calvinism
Much attention surrounding Calvinism focuses on the "Five Points of Calvinism" (also called the doctrines of grace). The five points have been summarized under the acrostic TULIP. The five points are popularly said to summarize the Canons of Dort; however, there is no historical relationship between them, and some scholars argue that their language distorts the meaning of the Canons, Calvin's theology, and the theology of 17th-century Calvinistic orthodoxy, particularly in the language of total depravity and limited atonement. The five points were more recently popularized in the 1963 booklet The Five Points of Calvinism Defined, Defended, Documented by David N. Steele and Curtis C. Thomas. The origins of the five points and the acrostic are uncertain, but they appear to be outlined in the Counter Remonstrance of 1611, a lesser-known Reformed reply to the Arminians, which was written prior to the Canons of Dort. The acrostic was used by Cleland Boyd McAfee as early as circa 1905. An early printed appearance of the acrostic can be found in Loraine Boettner's 1932 book, The Reformed Doctrine of Predestination.
The central assertion of TULIP is that God saves every person upon whom he has mercy, and that his efforts are not frustrated by the unrighteousness or inability of humans. Total depravity (also called radical corruption or pervasive depravity) asserts that as a consequence of the fall of man into sin, every person is enslaved to sin. People are not by nature inclined to love God, but rather to serve their own interests and to reject the rule of God. Thus, all people by their own faculties are morally unable to choose to trust God for their salvation and be saved (the term "total" in this context refers to sin affecting every part of a person, not that every person is as evil as they could be). This doctrine is derived from Calvin's interpretation of Augustine's explanation about Original Sin. While the phrases "totally depraved" and "utterly perverse" were used by Calvin, what was meant was the inability to save oneself from sin rather than being absent of goodness. Phrases like "total depravity" cannot be found in the Canons of Dort, and the Canons as well as later Reformed orthodox theologians arguably offer a more moderate view of the nature of fallen humanity than Calvin. Unconditional election (also called sovereign election or unconditional grace) asserts that God has chosen from eternity those whom he will bring to himself not based on foreseen virtue, merit, or faith in those people; rather, his choice is unconditionally grounded in his mercy alone. God has chosen from eternity to extend mercy to those he has chosen and to withhold mercy from those not chosen. Those chosen receive salvation through Christ alone. Those not chosen receive the just wrath that is warranted for their sins against God. Limited atonement (also called definite atonement or particular redemption) asserts that Jesus's substitutionary atonement was definite and certain in its purpose and in what it accomplished. This implies that only the sins of the elect were atoned for by Jesus's death. Calvinists do not believe, however, that the atonement is limited in its value or power, but rather that the atonement is limited in the sense that it is intended for some and not all. Some Calvinists have summarized this as "The atonement is sufficient for all and efficient for the elect." Irresistible grace (also called effectual grace, effectual calling, or efficacious grace) asserts that the saving grace of God is effectually applied to those whom he has determined to save (that is, the elect) and overcomes their resistance to obeying the call of the gospel, bringing them to a saving faith. This means that when God sovereignly purposes to save someone, that individual certainly will be saved. The doctrine holds that this purposeful influence of God's Holy Spirit cannot be resisted, but that the Holy Spirit, "graciously causes the elect sinner to cooperate, to believe, to repent, to come freely and willingly to Christ." This is not to deny the fact that the Spirit's outward call (through the proclamation of the Gospel) can be, and often is, rejected by sinners; rather, it is that inward call which cannot be rejected. Perseverance of the saints (also called preservation of the saints; the "saints" being those whom God has predestined to salvation) asserts that since God is sovereign and his will cannot be frustrated by humans or anything else, those whom God has called into communion with himself will continue in faith until the end. 
Those who apparently fall away either never had true faith to begin with (1 John 2:19), or, if they are saved but not presently walking in the Spirit, they will be divinely chastened (Hebrews 12:5–11) and will repent (1 John 3:6–9).

Church
Reformed Christians see the Christian Church as the community with which God has made the covenant of grace, a promise of eternal life and relationship with God. This covenant extends to those under the "old covenant" whom God chose, beginning with Abraham and Sarah. The church is conceived of as both invisible and visible. The invisible church is the body of all believers, known only to God. The visible church is the institutional body which contains both members of the invisible church and those who appear to have faith in Christ but are not truly part of God's elect. In order to identify the visible church, Reformed theologians have spoken of certain marks of the Church. For some, the only mark is the pure preaching of the gospel of Christ. Others, including John Calvin, also include the right administration of the sacraments. Others, such as those following the Scots Confession, include a third mark of rightly administered church discipline, or exercise of censure against unrepentant sinners. These marks allowed the Reformed to identify the church based on its conformity to the Bible rather than the Magisterium or church tradition.

Worship

Regulative principle of worship
The regulative principle of worship is a teaching shared by some Calvinists and Anabaptists on how the Bible orders public worship. The substance of the doctrine regarding worship is that God institutes in the Scriptures everything he requires for worship in the Church and that everything else is prohibited. As the regulative principle is reflected in Calvin's own thought, it is driven by his evident antipathy toward the Roman Catholic Church and its worship practices, and it associates musical instruments with icons, which he considered violations of the Ten Commandments' prohibition of graven images. On this basis, many early Calvinists also eschewed musical instruments and advocated a cappella exclusive psalmody in worship, though Calvin himself allowed other scriptural songs as well as psalms, and this practice typified Presbyterian worship and the worship of other Reformed churches for some time. The original Lord's Day service designed by John Calvin was a highly liturgical service with the Creed, Alms, Confession and Absolution, the Lord's Supper, Doxologies, prayers, Psalms being sung, the Lord's Prayer being sung, and Benedictions. Since the 19th century, however, some of the Reformed churches have modified their understanding of the regulative principle and make use of musical instruments, believing that Calvin and his early followers went beyond the biblical requirements and that such things are circumstances of worship requiring biblically rooted wisdom, rather than an explicit command. Despite the protestations of those who hold to a strict view of the regulative principle, today hymns and musical instruments are in common use, as are contemporary worship music styles with elements such as worship bands.

Sacraments
The Westminster Confession of Faith limits the sacraments to baptism and the Lord's Supper. Sacraments are denoted "signs and seals of the covenant of grace."
Westminster speaks of "a sacramental relation, or a sacramental union, between the sign and the thing signified; whence it comes to pass that the names and effects of the one are attributed to the other." Baptism is for infant children of believers as well as believers, as it is for all the Reformed except Baptists and some Congregationalists. Baptism admits the baptized into the visible church, and in it all the benefits of Christ are offered to the baptized. On the Lord's supper, Westminster takes a position between Lutheran sacramental union and Zwinglian memorialism: "the Lord's supper really and indeed, yet not carnally and corporally, but spiritually, receive and feed upon Christ crucified, and all benefits of his death: the body and blood of Christ being then not corporally or carnally in, with, or under the bread and wine; yet, as really, but spiritually, present to the faith of believers in that ordinance as the elements themselves are to their outward senses." The 1689 London Baptist Confession of Faith does not use the term sacrament, but describes baptism and the Lord's supper as ordinances, as do most Baptists, Calvinist or otherwise. Baptism is only for those who "actually profess repentance towards God", and not for the children of believers. Baptists also insist on immersion or dipping, in contradistinction to other Reformed Christians. The Baptist Confession describes the Lord's supper as "the body and blood of Christ being then not corporally or carnally, but spiritually present to the faith of believers in that ordinance", similarly to the Westminster Confession. There is significant latitude in Baptist congregations regarding the Lord's supper, and many hold the Zwinglian view. Logical order of God's decree There are two schools of thought regarding the logical order of God's decree to ordain the fall of man: supralapsarianism (from the Latin: supra, "above", here meaning "before" + lapsus, "fall") and infralapsarianism (from the Latin: infra, "beneath", here meaning "after" + lapsus, "fall"). The former view, sometimes called "high Calvinism", argues that the Fall occurred partly to facilitate God's purpose to choose some individuals for salvation and some for damnation. Infralapsarianism, sometimes called "low Calvinism", is the position that, while the Fall was indeed planned, it was not planned with reference to who would be saved. Supralapsarians believe that God chose which individuals to save logically prior to the decision to allow the race to fall and that the Fall serves as the means of realization of that prior decision to send some individuals to hell and others to heaven (that is, it provides the grounds of condemnation in the reprobate and the need for salvation in the elect). In contrast, infralapsarians hold that God planned the race to fall logically prior to the decision to save or damn any individuals because, it is argued, in order to be "saved", one must first need to be saved from something and therefore the decree of the Fall must precede predestination to salvation or damnation. These two views vied with each other at the Synod of Dort, an international body representing Calvinist Christian churches from around Europe, and the judgments that came out of that council sided with infralapsarianism (Canons of Dort, First Point of Doctrine, Article 7). The Westminster Confession of Faith also teaches (in Hodge's words "clearly impl[ies]") the infralapsarian view, but is sensitive to those holding to supralapsarianism. 
The Lapsarian controversy has a few vocal proponents on each side today, but overall it does not receive much attention among modern Calvinists.

Reformed churches
The Reformed tradition is largely represented by the Continental Reformed, Presbyterian, Evangelical Anglican, Congregationalist, and Reformed Baptist denominational families.

Continental Reformed churches
Considered to be the oldest and most orthodox bearers of the Reformed faith, the continental Reformed churches uphold the Helvetic Confessions and the Heidelberg Catechism, which were adopted in Zurich and Heidelberg, respectively. In the United States, immigrants belonging to the continental Reformed churches joined the Dutch Reformed Church there, as well as the Anglican Church.

Congregational churches
The Congregational churches are a part of the Reformed tradition founded under the influence of New England Puritanism. The Savoy Declaration is the confession of faith held by the Congregationalist churches. An example of a Christian denomination belonging to the Congregationalist tradition is the Conservative Congregational Christian Conference.

Presbyterian churches
The Presbyterian churches are part of the Reformed tradition and were influenced by John Knox's teachings in the Church of Scotland. Presbyterianism upholds the Westminster Confession of Faith.

Evangelical Anglicanism
Historic Anglicanism is a part of the wider Reformed tradition, as "the founding documents of the Anglican church—the Book of Homilies, the Book of Common Prayer, and the Thirty-Nine Articles of Religion—expresses a theology in keeping with the Reformed theology of the Swiss and South German Reformation." The Most Rev. Peter Robinson, presiding bishop of the United Episcopal Church of North America, has written in a similar vein.

Reformed Baptist churches
Reformed Baptist churches are Baptists (a Christian denominational family that teaches credobaptism rather than infant baptism) who adhere to Reformed theology as explicated in the 1689 Baptist Confession of Faith.

Variants in Reformed theology

Amyraldism
Amyraldism (or sometimes Amyraldianism, also known as the School of Saumur, hypothetical universalism, post redemptionism, moderate Calvinism, or four-point Calvinism) is the belief that God, prior to his decree of election, decreed Christ's atonement for all alike if they believe, but seeing that none would believe on their own, he then elected those whom he would bring to faith in Christ, thereby preserving the Calvinist doctrine of unconditional election. The efficacy of the atonement remains limited to those who believe. Named after its formulator Moses Amyraut, this doctrine is still viewed as a variety of Calvinism in that it maintains the particularity of sovereign grace in the application of the atonement. However, detractors like B. B. Warfield have termed it "an inconsistent and therefore unstable form of Calvinism."

Hyper-Calvinism
Hyper-Calvinism first referred to a view that appeared among the early English Particular Baptists in the 18th century. Their system denied that the call of the gospel to "repent and believe" is directed to every single person and that it is the duty of every person to trust in Christ for salvation. The term also occasionally appears in both theological and secular controversial contexts, where it usually connotes a negative opinion about some variety of theological determinism, predestination, or a version of Evangelical Christianity or Calvinism that is deemed by the critic to be unenlightened, harsh, or extreme.
The Westminster Confession of Faith says that the gospel is to be freely offered to sinners, and the Larger Catechism makes clear that the gospel is offered to the non-elect.

Neo-Calvinism
Neo-Calvinism, a form of Dutch Calvinism, is the movement initiated by the theologian and former Dutch prime minister Abraham Kuyper. James Bratt has identified a number of different types of Dutch Calvinism: the Seceders, split into the Reformed Church "West" and the Confessionalists; and the Neo-Calvinists, split into the Positives and the Antithetical Calvinists. The Seceders were largely infralapsarian and the Neo-Calvinists usually supralapsarian. Kuyper wanted to awaken the church from what he viewed as its pietistic slumber. He declared: "No single piece of our mental world is to be sealed off from the rest and there is not a square inch in the whole domain of human existence over which Christ, who is sovereign over all, does not cry: 'Mine!'" This refrain has become something of a rallying call for Neo-Calvinists.

Christian Reconstructionism
Christian Reconstructionism is a fundamentalist Calvinist theonomic movement that has remained rather obscure. Founded by R. J. Rushdoony, the movement has had an important influence on the Christian Right in the United States. The movement peaked in the 1990s. However, it lives on in small denominations such as the Reformed Presbyterian Church in the United States and as a minority position in other denominations. Christian Reconstructionists are usually postmillennialists and followers of the presuppositional apologetics of Cornelius Van Til. They tend to support a decentralized political order resulting in laissez-faire capitalism.

New Calvinism
New Calvinism is a growing perspective within conservative Evangelicalism that embraces the fundamentals of 16th-century Calvinism while also trying to be relevant in the present-day world. In March 2009, Time magazine described the New Calvinism as one of the "10 ideas changing the world". Some of the major figures who have been associated with the New Calvinism are John Piper, Mark Driscoll, Al Mohler, Mark Dever, C. J. Mahaney, and Tim Keller. New Calvinists have been criticized for blending Calvinist soteriology with popular Evangelical positions on the sacraments and continuationism, and for rejecting tenets seen as crucial to the Reformed faith, such as confessionalism and covenant theology.

Social and economic influences
Calvin expressed himself on usury in a 1545 letter to a friend, Claude de Sachin, in which he criticized the use of certain passages of scripture invoked by people opposed to the charging of interest. He reinterpreted some of these passages, and suggested that others of them had been rendered irrelevant by changed conditions. He also dismissed the argument (based upon the writings of Aristotle) that it is wrong to charge interest for money because money itself is barren. He said that the walls and the roof of a house are barren, too, but it is permissible to charge someone for allowing him to use them. In the same way, money can be made fruitful. He qualified his view, however, by saying that money should be lent to people in dire need without hope of interest, while a modest interest rate of 5% should be permitted in relation to other borrowers.
In The Protestant Ethic and the Spirit of Capitalism, Max Weber wrote that capitalism in Northern Europe evolved when the Protestant (particularly Calvinist) ethic influenced large numbers of people to engage in work in the secular world, developing their own enterprises and engaging in trade and the accumulation of wealth for investment. In other words, the Protestant work ethic was an important force behind the unplanned and uncoordinated emergence of modern capitalism. Expert researchers and authors have referred to the United States as a "Protestant nation" or "founded on Protestant principles," specifically emphasizing its Calvinist heritage.

Politics and society
Calvin's concepts of God and man led to ideas which were gradually put into practice after his death, in particular in the fields of politics and society. After their fight for independence from Spain (1579), the Netherlands, under Calvinist leadership, granted asylum to religious minorities, e.g. French Huguenots, English Independents (Congregationalists), and Jews from Spain and Portugal. The ancestors of the philosopher Baruch Spinoza were Portuguese Jews. Aware of the trial against Galileo, René Descartes lived in the Netherlands, out of reach of the Inquisition, from 1628 to 1649. Pierre Bayle, a Reformed Frenchman, also felt safer in the Netherlands than in his home country. He was the first prominent philosopher who demanded tolerance for atheists. Hugo Grotius (1583–1645) was able to publish a rather liberal interpretation of the Bible and his ideas about natural law in the Netherlands. Moreover, the Calvinist Dutch authorities allowed the printing of books that could not be published elsewhere, such as Galileo's Discorsi (1638). Alongside the liberal development of the Netherlands came the rise of modern democracy in England and North America. In the Middle Ages, state and church had been closely connected. Martin Luther's doctrine of the two kingdoms separated state and church in principle. His doctrine of the priesthood of all believers raised the laity to the same level as the clergy. Going one step further, Calvin included elected laymen (church elders, presbyters) in his concept of church government. The Huguenots added synods whose members were also elected by the congregations. The other Reformed churches took over this system of church self-government, which was essentially a representative democracy. Baptists, Quakers, and Methodists are organized in a similar way. These denominations and the Anglican Church were influenced by Calvin's theology to varying degrees. Another factor in the rise of democracy in the Anglo-American world was that Calvin favored a mixture of democracy and aristocracy as the best form of government (mixed government). He appreciated the advantages of democracy. His political thought aimed to safeguard the rights and freedoms of ordinary men and women. In order to minimize the misuse of political power he suggested dividing it among several institutions in a system of checks and balances (separation of powers). Finally, Calvin taught that if worldly rulers rise up against God they should be put down. In this way, he and his followers stood in the vanguard of resistance to political absolutism and furthered the cause of democracy. The Congregationalists who founded Plymouth Colony (1620) and Massachusetts Bay Colony (1628) were convinced that the democratic form of government was the will of God. Enjoying self-rule, they practiced separation of powers.
Rhode Island, Connecticut, and Pennsylvania, founded by Roger Williams, Thomas Hooker, and William Penn, respectively, combined democratic government with a limited freedom of religion that did not extend to Catholics (Congregationalism being the established, tax-supported religion in Connecticut. These colonies became safe havens for persecuted religious minorities, including Jews.) In England, Baptists Thomas Helwys ( 1575 – 1616), and John Smyth ( 1554–) influenced the liberal political thought of the Presbyterian poet and politician John Milton (1608–1674) and of the philosopher John Locke (1632–1704), who in turn had both a strong impact on the political development in their home country (English Civil War of 1642–1651, Glorious Revolution of 1688) as well as in North America. The ideological basis of the American Revolution was largely provided by the radical Whigs, who had been inspired by Milton, Locke, James Harrington (1611–1677), Algernon Sidney (1623–1683), and other thinkers. The Whigs' "perceptions of politics attracted widespread support in America because they revived the traditional concerns of a Protestantism that had always verged on Puritanism". The United States Declaration of Independence, the United States Constitution and (American) Bill of Rights initiated a tradition of human and civil rights that continued in the French Declaration of the Rights of Man and of the Citizen and the constitutions of numerous countries around the world, e. g. Latin America, Japan, India, Germany, and other European countries. It is also echoed in the United Nations Charter and the Universal Declaration of Human Rights. In the 19th century, churches based on or influenced by Calvin's theology became deeply involved in social reforms, e.g. the abolition of slavery (William Wilberforce, Harriet Beecher Stowe, Abraham Lincoln, and others), women suffrage, and prison reforms. Members of these churches formed co-operatives to help the impoverished masses. The founders of the Red Cross Movement, including Henry Dunant, were Reformed Christians. Their movement also initiated the Geneva Conventions. Others view Calvinist influence as not always being solely positive. The Boers and Afrikaner Calvinists combined ideas from Calvinism and Kuyperian theology to justify apartheid in South Africa. As late as 1974 the majority of the Dutch Reformed Church in South Africa was convinced that their theological stances (including the story of the Tower of Babel) could justify apartheid. In 1990 the Dutch Reformed Church document Church and Society maintained that although they were changing their stance on apartheid, they believed that within apartheid and under God's sovereign guidance, "...everything was not without significance, but was of service to the Kingdom of God." These views were not universal and were condemned by many Calvinists outside South Africa. Pressure from both outside and inside the Dutch Reformed Calvinist church helped reverse apartheid in South Africa. Throughout the world, the Reformed churches operate hospitals, homes for handicapped or elderly people, and educational institutions on all levels. For example, American Congregationalists founded Harvard (1636), Yale (1701), and about a dozen other colleges. A particular stream of influence of Calvinism concerns art. Visual art cemented society in the first modern nation state, the Netherlands, and also Neo-Calvinism put much weight on this aspect of life. Hans Rookmaaker is the most prolific example. 
In literature one can think of Marilynne Robinson. In her non-fiction she powerfully demonstrates the modernity of Calvin's thinking, calling him a humanist scholar (pg 174, The Death of Adam). See also List of Calvinist educational institutions in North America List of Reformed denominations Synod of Jerusalem (1672): Eastern Orthodox council rejecting Calvinist beliefs Criticism of Protestantism The Protestant Ethic and the Spirit of Capitalism (1905) – Max Weber's analysis of Calvinism's influence on society and economics Doctrine Common grace Reformed confessions of faith Related Boer Calvinists: Boere-Afrikaners that hold to Reformed theology Huguenots: followers of Calvinism in France, originating in the 16th and 17th century Pilgrims: English Separatists who left Europe for America in search of religious toleration, eventually settling in New England Presbyterians: Calvinists in countries worldwide Puritans: English Protestants who wanted to purify the Church of England Continental Reformed church: Calvinist churches originating in continental Europe Waldensians: Italian Protestants, preceded Calvinism but today identify with Reformed theology Opposing views Amyraldism Arminianism Catholicism Augustinianism Christian universalism Eastern Orthodoxy Palamism Free Grace theology Open theism Lutheranism Molinism Socinianism Notes References Further reading Bratt, James D. (1984) Dutch Calvinism in Modern America: A History of a Conservative Subculture Hart, D.G. (2013). Calvinism: A History. New Haven, CT: Yale University Press. External links "Five Points of Calvinism" by Robert Lewis Dabney (PDF). Calvinist theology Trinitarianism
2,730
6,034
https://en.wikipedia.org/wiki/Cahn%E2%80%93Ingold%E2%80%93Prelog%20priority%20rules
Cahn–Ingold–Prelog priority rules
In organic chemistry, the Cahn–Ingold–Prelog (CIP) sequence rules (also the CIP priority convention; named for R.S. Cahn, C.K. Ingold, and Vladimir Prelog) are a standard process to completely and unequivocally name a stereoisomer of a molecule. The purpose of the CIP system is to assign an R or S descriptor to each stereocenter and an E or Z descriptor to each double bond so that the configuration of the entire molecule can be specified uniquely by including the descriptors in its systematic name. A molecule may contain any number of stereocenters and any number of double bonds, and each usually gives rise to two possible isomers. A molecule with an integer n describing the number of stereocenters will usually have 2^n stereoisomers, and 2^(n−1) diastereomers, each having an associated pair of enantiomers. The CIP sequence rules contribute to the precise naming of every stereoisomer of every organic molecule with all atoms of ligancy of four or fewer (but including ligancy of 6 as well, this term referring to the "number of neighboring atoms" bonded to a center). The key article setting out the CIP sequence rules was published in 1966, and was followed by further refinements, before it was incorporated into the rules of the International Union of Pure and Applied Chemistry (IUPAC), the official body that defines organic nomenclature, in 1974. The rules have since been revised, most recently in 2013, as part of the IUPAC book Nomenclature of Organic Chemistry. The IUPAC presentation of the rules constitutes the official, formal standard for their use, and it notes that "the method has been developed to cover all compounds with ligancy up to 4... and… [extended to the case of] ligancy 6… [as well as] for all configurations and conformations of such compounds." Nevertheless, though the IUPAC documentation presents a thorough introduction, it includes the caution that "it is essential to study the original papers, especially the 1966 paper, before using the sequence rule for other than fairly simple cases." A recent paper argues for changes to some of the rules (sequence rules 1b and 2) to address certain molecules for which the correct descriptors were unclear. However, a different problem remains: in rare cases, two different stereoisomers of the same molecule can have the same CIP descriptors, so the CIP system may not be able to unambiguously name a stereoisomer, and other systems may be preferable. Steps for naming The steps for naming molecules using the CIP system are often presented as: Identification of stereocenters and double bonds; Assignment of priorities to the groups attached to each stereocenter or double-bonded atom; and Assignment of R/S and E/Z descriptors. Assignment of priorities R/S and E/Z descriptors are assigned by using a system for ranking priority of the groups attached to each stereocenter. This procedure, often known as the sequence rules, is the heart of the CIP system. The overview in this section omits some rules that are needed only in rare cases. Compare the atomic number (Z) of the atoms directly attached to the stereocenter; the group having the atom of higher atomic number Z receives higher priority (i.e. number 1). If there is a tie, the atoms at distance 2 from the stereocenter have to be considered: a list is made for each group of further atoms bonded to the one directly attached to the stereocenter. Each list is arranged in order of decreasing atomic number Z. 
Then the lists are compared atom by atom; at the earliest difference, the group containing the atom of higher atomic number Z receives higher priority. If there is still a tie, each atom in each of the two lists is replaced with a sublist of the other atoms bonded to it (at distance 3 from the stereocenter), the sublists are arranged in decreasing order of atomic number Z, and the entire structure is again compared atom by atom. This process is repeated recursively, each time with atoms one bond farther from the stereocenter, until the tie is broken. Isotopes If two groups differ only in isotopes, then the larger atomic mass is used to set the priority. Double and triple bonds If an atom, A, is double-bonded to another atom, B, then atom A should be treated as though it were connected to B twice: once to B itself and once to a duplicate, "phantom", atom of B. An atom that is double-bonded has a higher priority than an atom that is single-bonded. When ranking groups that contain double bonds, the same atom may therefore be encountered twice along the exploration path. When B is replaced with a list of attached atoms, A itself, but not its "phantom", is excluded in accordance with the general principle of not doubling back along a bond that has just been followed. A triple bond is handled the same way except that A and B are each connected to two phantom atoms of the other. Geometrical isomers If two substituents on an atom are geometric isomers of each other, the Z-isomer has higher priority than the E-isomer. A stereoisomer that contains the two higher-priority groups on the same face of the double bond (cis) is classified as "Z"; the stereoisomer with the two higher-priority groups on opposite sides of a carbon–carbon double bond (trans) is classified as "E". Cyclic molecules To handle a molecule containing one or more cycles, one must first expand it into a tree (called a hierarchical digraph) by traversing bonds in all possible paths starting at the stereocenter. When the traversal encounters an atom through which the current path has already passed, a phantom atom is generated in order to keep the tree finite. A single atom of the original molecule may appear in many places (some as phantoms, some not) in the tree. Assigning descriptors Stereocenters: R/S A chiral sp3-hybridized stereocenter carries four different substituents. All four substituents are assigned priorities based on their atomic numbers, as described above. After the substituents of a stereocenter have been assigned their priorities, the molecule is oriented in space so that the group with the lowest priority is pointed away from the observer. If the substituents are numbered from 1 (highest priority) to 4 (lowest priority), then the sense of rotation of a curve passing through 1, 2 and 3 distinguishes the stereoisomers. In a drawn structure, the lowest-priority group (most often hydrogen) is positioned behind the plane, on the hashed bond pointing away from the viewer. An arc is then drawn from the highest-priority group through the second-priority group, finishing at the group of third priority. An arc drawn clockwise corresponds to the rectus (R) assignment; an arc drawn counterclockwise corresponds to the sinister (S) assignment. When naming an organic isomer, the abbreviation for the rectus or sinister assignment is placed in front of the name in parentheses. For example, 3-methyl-1-pentene with a rectus assignment is written (R)-3-methyl-1-pentene. The names are derived from the Latin for 'right' and 'left', respectively. 
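The ranking and assignment steps described above can be summarized in a short program. The following is a minimal, illustrative sketch only, not part of the CIP standard: it applies just the first sequence rule (higher atomic number wins, compared sphere by sphere moving away from the stereocenter), handles only acyclic molecules, ignores isotopes, phantom atoms for multiple bonds and the later sequence rules, and assumes an idealized tetrahedral stereocenter with known 3D coordinates. The molecule encoding, function names and coordinates are invented for this example.

<syntaxhighlight lang="python">
# Illustrative sketch only, not a full CIP implementation (see assumptions above).
ATOMIC_NUMBER = {"H": 1, "C": 6, "N": 7, "O": 8, "F": 9, "Cl": 17, "Br": 35, "I": 53}

def spheres(mol, center, first):
    """Atomic numbers reachable from substituent `first`, collected sphere by
    sphere, never doubling back along the bond that was just followed.
    `mol` maps atom id -> (element symbol, [neighbour ids])."""
    result, frontier = [], [(center, first)]
    while frontier:
        result.append(tuple(sorted((ATOMIC_NUMBER[mol[a][0]] for _, a in frontier),
                                   reverse=True)))      # decreasing Z within a sphere
        frontier = [(a, nb) for prev, a in frontier
                    for nb in mol[a][1] if nb != prev]
    return result

def rank_substituents(mol, center):
    """Substituents of `center`, ordered from CIP priority 1 (highest) to 4."""
    return sorted(mol[center][1], key=lambda s: spheres(mol, center, s), reverse=True)

def assign_rs(coords, ranked):
    """Return 'R' or 'S' from the 3D coordinates of the four ranked substituents."""
    p1, p2, p3, p4 = (coords[a] for a in ranked)
    u = [b - a for a, b in zip(p1, p2)]   # vector from priority 1 to priority 2
    v = [b - a for a, b in zip(p1, p3)]   # vector from priority 1 to priority 3
    w = [b - a for a, b in zip(p1, p4)]   # vector from priority 1 to priority 4
    det = (u[0] * (v[1] * w[2] - v[2] * w[1])
           - u[1] * (v[0] * w[2] - v[2] * w[0])
           + u[2] * (v[0] * w[1] - v[1] * w[0]))
    # det > 0 means the path 1 -> 2 -> 3 turns clockwise when priority 4
    # points away from the viewer, i.e. the R configuration.
    return "R" if det > 0 else "S"

# Bromochlorofluoroiodomethane, with the halogens placed so that the result is (R).
mol = {0: ("C", [1, 2, 3, 4]), 1: ("I", [0]), 2: ("Br", [0]),
       3: ("Cl", [0]), 4: ("F", [0])}
coords = {1: (0.0, 1.0, 0.33), 2: (0.87, -0.5, 0.33),
          3: (-0.87, -0.5, 0.33), 4: (0.0, 0.0, -1.0)}
order = rank_substituents(mol, 0)       # I > Br > Cl > F
print(order, assign_rs(coords, order))  # prints: [1, 2, 3, 4] R
</syntaxhighlight>

For the tetrahedral arrangement chosen here, with fluorine (lowest priority) pointing away from the viewer, the printed result matches the (R)-bromochlorofluoroiodomethane example discussed below. 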
A practical method of determining whether an enantiomer is R or S is by using the right-hand rule: one wraps the fingers of the right hand around the molecule in the direction 1 → 2 → 3. If the thumb points in the direction of the fourth substituent, the enantiomer is R; otherwise, it is S. It is possible in rare cases that two substituents on an atom differ only in their absolute configuration (R or S). If the relative priorities of these substituents need to be established, R takes priority over S. When this happens, the descriptor of the stereocenter is a lowercase letter (r or s) instead of the uppercase letter normally used. Double bonds: E/Z For double-bonded molecules, the Cahn–Ingold–Prelog priority rules (CIP rules) are followed to determine the priority of the substituents of the double bond. If both of the high-priority groups are on the same side of the double bond (cis configuration), then the stereoisomer is assigned the configuration Z (zusammen, German word meaning "together"). If the high-priority groups are on opposite sides of the double bond (trans configuration), then the stereoisomer is assigned the configuration E (entgegen, German word meaning "opposed"). Coordination Compounds In some coordination compounds, stereogenic centers are formed by non-covalent interactions, and the resulting configuration must be specified; without the non-covalent interaction, the compound is achiral. Some researchers have proposed a new rule to account for this, under which non-covalent interactions are given a fictitious atomic number between 0 and 1 when assigning priority. Spiro Compounds Spiro structures can be chiral molecules with no asymmetric center. The rings of a spiro structure lie at right angles to each other, and the mirror images of such structures are non-superimposable, making them enantiomers. Optical Isomerism Optical isomers are compounds that have four different substituents attached to a central carbon. Optical isomers play a significant role in biological activity. Optical isomers have the ability to rotate plane-polarized light clockwise (dextrorotatory) or counterclockwise (levorotatory); the direction of rotation is a physical property and is not determined by the R or S descriptor. When a compound forms two enantiomers, one will rotate the plane of polarization clockwise while the other rotates it counterclockwise by the same amount. A racemic mixture of the two isomers, however, will not rotate plane-polarized light. The two enantiomers are chemically identical toward achiral reagents, but are distinguishable by this optical behaviour. Examples The following are examples of application of the nomenclature. {| align="center" class="wikitable" width=800px |- !colspan="2"|R/S assignments for several compounds |- |bgcolor="#FFFFFF"| |bgcolor="#FFFFFF" valign=top| The hypothetical molecule bromochlorofluoroiodomethane shown in its (R)-configuration would be a very simple chiral compound. The priorities are assigned based on atomic number (Z): iodine (Z = 53) > bromine (Z = 35) > chlorine (Z = 17) > fluorine (Z = 9). Allowing fluorine (lowest priority, number 4) to point away from the viewer, the rotation is clockwise, hence the R assignment. |- |bgcolor="#FFFFFF"| | bgcolor="#FFFFFF" valign="top" | In the assignment of L-serine highest priority (i.e. number 1) is given to the nitrogen atom (Z = 7) in the amino group (NH2). Both the hydroxymethyl group (CH2OH) and the carboxylic acid group (COOH) have carbon atoms (Z = 6) but priority is given to the latter because the carbon atom in the COOH group is connected to a second oxygen (Z = 8) whereas in the CH2OH group carbon is connected to a hydrogen atom (Z = 1). Lowest priority (i.e. 
number 4) is given to the hydrogen atom and as this atom points away from the viewer, the counterclockwise decrease in priority over the three remaining substituents completes the assignment as S. |- |bgcolor="#FFFFFF"| |bgcolor="#FFFFFF" valign=top| The stereocenter in (S)-carvone is connected to one hydrogen atom (not shown, priority 4) and three carbon atoms. The isopropenyl group has priority 1 (carbon atoms only), and for the two remaining carbon atoms, priority is decided with the carbon atoms two bonds removed from the stereocenter, one part of the keto group (O, O, C, priority number 2) and one part of an alkene (C, C, H, priority number 3). The resulting counterclockwise rotation results in S. |} Describing multiple centers If a compound has more than one chiral stereocenter, each center is denoted by either R or S. For example, ephedrine exists in (1R,2S) and (1S,2R) stereoisomers, which are distinct mirror-image forms of each other, making them enantiomers. This compound also exists as the two enantiomers written (1R,2R) and (1S,2S), which are named pseudoephedrine rather than ephedrine. All four of these isomers are named 2-methylamino-1-phenyl-1-propanol in systematic nomenclature. However, ephedrine and pseudoephedrine are diastereomers, or stereoisomers that are not enantiomers because they are not related as mirror-image copies. Pseudoephedrine and ephedrine are given different names because, as diastereomers, they have different chemical properties, even for racemic mixtures of each. More generally, for any pair of enantiomers, all of the descriptors are opposite: (R,R) and (S,S) are enantiomers, as are (R,S) and (S,R). Diastereomers have at least one descriptor in common; for example (R,S) and (R,R) are diastereomers, as are (S,R) and (S,S). This holds true also for compounds having more than two stereocenters: if two stereoisomers have at least one descriptor in common, they are diastereomers. If all the descriptors are opposite, they are enantiomers. A meso compound is an achiral molecule, despite having two or more stereogenic centers. A meso compound is "superimposable" on its mirror image, therefore it reduces the number of stereoisomers predicted by the 2n rule. This occurs because the molecule obtains a plane of symmetry that causes the molecule to rotate around the central carbon–carbon bond. One example is meso-tartaric acid, in which (R,S) is the same as the (S,R) form. In meso compounds the R and S stereocenters occur in symmetrically positioned pairs. Relative configuration The relative configuration of two stereoisomers may be denoted by the descriptors R and S with an asterisk (*). (R*,R*) means two centers having identical configurations, (R,R) or (S,S); (R*,S*) means two centers having opposite configurations, (R,S) or (S,R). To begin, the lowest-numbered (according to IUPAC systematic numbering) stereogenic center is given the R* descriptor. To designate two anomers the relative stereodescriptors alpha (α) and beta (β) are used. In the α anomer the anomeric carbon atom and the reference atom do have opposite configurations (R,S) or (S,R), whereas in the β anomer they are the same (R,R) or (S,S). Faces Stereochemistry also plays a role assigning faces to trigonal molecules such as ketones. A nucleophile in a nucleophilic addition can approach the carbonyl group from two opposite sides or faces. When an achiral nucleophile attacks acetone, both faces are identical and there is only one reaction product. 
When the nucleophile attacks butanone, the faces are not identical (enantiotopic) and a racemic product results. When the nucleophile is a chiral molecule diastereoisomers are formed. When one face of a molecule is shielded by substituents or geometric constraints compared to the other face the faces are called diastereotopic. The same rules that determine the stereochemistry of a stereocenter (R or S) also apply when assigning the face of a molecular group. The faces are then called the Re-face and Si-face. In the example displayed on the right, the compound acetophenone is viewed from the Re-face. Hydride addition as in a reduction process from this side will form the (S)-enantiomer and attack from the opposite Si-face will give the (R)-enantiomer. However, one should note that adding a chemical group to the prochiral center from the Re-face will not always lead to an (S)-stereocenter, as the priority of the chemical group has to be taken into account. That is, the absolute stereochemistry of the product is determined on its own and not by considering which face it was attacked from. In the above-mentioned example, if chloride (Z = 17) were added to the prochiral center from the Re-face, this would result in an (R)-enantiomer. See also Chirality (chemistry) Descriptor (chemistry) E–Z notation Isomer Stereochemistry References Chemical nomenclature Stereochemistry
2,732
6,045
https://en.wikipedia.org/wiki/Clodius
Clodius
Clodius is an alternate form of the Roman nomen Claudius, a patrician gens that was traditionally regarded as Sabine in origin. The alternation of o and au is characteristic of the Sabine dialect. The feminine form is Clodia. Republican era Publius Clodius Pulcher During the Late Republic, the spelling Clodius is most prominently associated with Publius Clodius Pulcher, a popularis politician who gave up his patrician status through adoption by a plebeian family in order to qualify for the office of tribune of the plebs. Clodius positioned himself as a champion of the urban plebs, supporting free grain for the poor and the right of association in guilds (collegia); because of his populist politics, Clodius has often been taken as a more "plebeian" spelling and a gesture of political solidarity. Clodius's two elder brothers, the Appius Claudius Pulcher who was consul in 54 BC and the C. Claudius Pulcher who was praetor in 56 BC, conducted more conventional political careers and are referred to in contemporary sources with the traditional spelling. The view that Clodius represents a plebeian or politicized form has been questioned by Clodius's chief modern-era biographer. In The Patrician Tribune, W. Jeffrey Tatum points out that the spelling is also associated with Clodius's sisters and that "the political explanation … is almost certainly wrong." A plebeian branch of the gens, the Claudii Marcelli, retained the supposedly patrician spelling, while there is some inscriptional evidence that the -o- form may also have been used on occasion by close male relatives of the "patrician tribune" Clodius. Tatum argues that the use of -o- by the "chic" Clodia was a fashionable affectation, and that Clodius, whose perhaps inordinately loving relationship with his sister was the subject of much gossip and insinuation, was imitating his stylish sibling. The linguistic variation of o for au was characteristic of the Umbrian language, of which Sabine was a branch. Forms using o were considered archaic or rustic in the 50s BC, and the use of Clodius would have been either a whimsical gesture of pastoral fantasy, or a trendy assertion of antiquarian authenticity. Other Clodii of the Republic In addition to Clodius, Clodii from the Republican era include: Gnaeus Cornelius Lentulus Clodianus, presumably a "Clodius" before his adoption Clodius Aesopus, a tragic actor in the 50s BC who may have been a freedman of one of the Clodii Pulchri. Claudia, daughter of Clodius Pulcher and Fulvia, the first wife of emperor Augustus. Clodia, sister of Publius Clodius Pulcher, sometimes identified in Catullus' poems as "Lesbia". Women of the Claudii Marcelli branch were often called "Clodia" in the late Republic. Imperial era People using the name Clodius during the period of the Roman Empire include: Gaius Clodius Licinus, consul suffectus in AD 4. Gaius Clodius Vestalis, possible builder of the Via Clodia Publius Clodius Thrasea Paetus, senator and philosopher during the reign of Nero Lucius Clodius Macer, a legatus who revolted against Nero Publius Clodius Quirinalis, from Arelate in Gaul, teacher of rhetoric in time of Nero Decimus Clodius Septimius Albinus, commonly known as Clodius Albinus, rival emperor 196–197 Marcus Clodius Pupienus Maximus, known as Pupienus, co-emperor 238 Titus Clodius Pupienus Pulcher Maximus, son of emperor Pupienus and suffect consul c. 
235 Clodii Celsini The Clodii Celsini continued to practice the traditional religions of antiquity in the face of Christian hegemony through at least the 4th century, when Clodius Celsinus Adelphius (see below) converted. Members of this branch include: Quintus Fabius Clodius Agrippianus Celsinus, proconsul of Caria in 249 and the son of Clodius Celsinus (b. ca. 185); see for other members of the family. Clodius Celsinus Adelphius, praefectus urbi in 351. Quintus Clodius Hermogenianus Olybrius, consul 379 See also Clodio the Longhair, a chieftain of the Salian Franks, sometimes called "Clodius I" Leges Clodiae, legislation sponsored by Clodius Pulcher as tribune References Selected bibliography Tatum, W. Jeffrey. The Patrician Tribune: P. Clodius Pulcher. Studies in the History of Greece and Rome series. University of North Carolina Press, 1999. Limited preview online. Hardcover . Further reading Fezzi, L. Il tribuno Clodio. Roma-Bari, Laterza, 2008. . Ancient Roman prosopographical lists Ancient Roman names
2,738
6,082
https://en.wikipedia.org/wiki/Cortex
Cortex
Cortex or cortical may refer to: Biology Cortex (anatomy), the outermost layer of an organ Cerebral cortex, the outer layer of the vertebrate cerebrum, part of the forebrain Motor cortex, the regions of the cerebral cortex involved in voluntary motor functions Prefrontal cortex, the anterior part of the frontal lobes of the brain Visual cortex, regions of the cerebral cortex involved in visual functions Cerebellar cortex, the outer layer of the vertebrate cerebellum Renal cortex, the outer portion of the kidney Adrenal cortex, a portion of the adrenal gland Cell cortex, the region of a cell directly underneath the membrane Cortex (hair), the middle layer of a strand of hair Cortex (botany), the outer portion of the stem or root of a plant Entertainment Cortex (film), a 2008 French film directed by Nicolas Boukhrief Cortex (podcast), a 2015 podcast Cortex Command, a 2012 video game Doctor Neo Cortex, a fictional character in the Crash Bandicoot video game series Nina Cortex, the niece of Neo Cortex Cortex, a French jazz funk band featuring Alain Mion Cortex, a Swedish post-punk alternative band featuring Freddie Wadling Other uses Cortex (archaeology), the outer layer of rock formed on the exterior of raw materials by chemical and mechanical weathering processes Cortex (journal), a cognitive science journal published by Elsevier Cortex, a family of processor cores in the ARM architecture Cortex, a division of Gemini Sound Products Cortex, a digital lending platform by Think Finance Cortex Pharmaceuticals, a company of New Jersey, United States Cortex Innovation Community, a district in St. Louis, Missouri, United States See also Cordtex, a type of detonating cord used in mining Corex (disambiguation)
2,754
6,095
https://en.wikipedia.org/wiki/Chechnya
Chechnya
Chechnya (, ; ), officially the Chechen Republic, is a republic of Russia. It is situated in the North Caucasus of Eastern Europe, close to the Caspian Sea. The republic forms a part of the North Caucasian Federal District, and shares land borders with the country of Georgia to its south; with the Russian republics of Dagestan, Ingushetia, and North Ossetia-Alania to its east, north, and west; and with Stavropol Krai to its northwest. After the dissolution of the Soviet Union in 1991, the Checheno-Ingush ASSR split into two parts: the Republic of Ingushetia and the Chechen Republic. The latter proclaimed the Chechen Republic of Ichkeria, which sought independence. Following the First Chechen War of 1994–1996 with Russia, Chechnya gained de facto independence as the Chechen Republic of Ichkeria, although de jure it remained a part of Russia. Russian federal control was restored in the Second Chechen War of 1999–2009, with Chechen politics being dominated by Akhmad Kadyrov, and later his son Ramzan Kadyrov. The republic covers an area of , with a population of over 1.5 million residents . It is home to the indigenous Chechens, part of the Nakh peoples, and of primarily Muslim faith. Grozny is the capital and largest city. History Origin of Chechnya's population According to Leonti Mroveli, the 11th-century Georgian chronicler, the word Caucasian is derived from the Nakh ancestor Kavkas. According to George Anchabadze of Ilia State University: American linguist Johanna Nichols "has used language to connect the modern people of the Caucasus region to the ancient farmers of the Fertile Crescent" and her research suggests that "farmers of the region were proto-Nakh-Daghestanians." Nichols stated: "The Nakh–Dagestanian languages are the closest thing we have to a direct continuation of the cultural and linguistic community that gave rise to Western civilisation." Prehistory Traces of human settlement dating back to 40,000 BC were found near Lake Kezanoi. Cave paintings, artifacts, and other archaeological evidence indicate continuous habitation for some 8,000 years. People living in these settlements used tools, fire, and clothing made of animal skins. The Caucasian Epipaleolithic and early Caucasian Neolithic era saw the introduction of agriculture, irrigation, and the domestication of animals in the region. Settlements near Ali-Yurt and Magas, discovered in modern times, revealed tools made out of stone: stone axes, polished stones, stone knives, stones with holes drilled in them, clay dishes etc. Settlements made out of clay bricks were discovered in the plains. In the mountains there were settlements made from stone and surrounded by walls; some of them dated back to 8000 BC. This period also saw the appearance of the wheel (3000 BC), horseback riding, metal works (copper, gold, silver, iron), dishes, armor, daggers, knives and arrow tips in the region. The artifacts were found near Nasare-Cort, Muzhichi, Ja-E-Bortz (alternatively known as Surkha-khi), Abbey-Gove (also known as Nazran or Nasare). Pre-imperial era The German scientist Peter Simon Pallas believed that the Vainakh people (Chechens and Ingush) were the direct descendants from Alania. In 1239, the Alania capital of Maghas and the Alan confederacy of the Northern Caucasian highlanders, nations, and tribes was destroyed by Batu Khan (a Mongol leader and a grandson of Genghis Khan). 
According to the missionary Pian de Carpine, a part of the Alans had successfully resisted a Mongol siege on a mountain for 12 years: This twelve year old siege is not found in any other report, however the Russian historian A.I. Krasnov connected this battle with two Chechen folktales he recorded in 1967 that spoke of an old hunter named Idig who with his companions defended the Dakuoh mountain for 12 years against Tatar-Mongols. He also reported to have found several arrowheads and spears from the 13th century near the very mountain at which the battle took place: In the 14th and 15th centuries, there was frequent warfare between the Chechens, Tamerlan and Tokhtamysh, culminating in the Battle of the Terek River. The Chechen tribes built fortresses, castles, and defensive walls, protecting the mountains from the invaders. Part of the lowland tribes were occupied by Mongols. However, during the mid-14th century a strong Chechen Princedom called Simsim emerged under Khour II, a Chechen king that led the Chechen politics and wars. He was in charge of an army of Chechens against the rogue warlord Mamai and defeated him in the Battle of Tatar-tup in 1362. The kingdom of Simsim was almost destroyed during the Timurid invasion of the Caucasus, when Khour II allied himself with the Golden Horde Khan Tokhtamysh in the Battle of the Terek River. Timur sought to punish the highlanders for their allegiance to Tokhtamysh and as a consequence invaded Simsim in 1395. The 16th century saw the first Russian involvement in the Caucasus. In 1558, Temryuk of Kabarda sent his emissaries to Moscow requesting help from Ivan the Terrible against the Vainakh tribes. Ivan the Terrible married Temryuk's daughter Maria Temryukovna. An alliance was formed to gain the ground in the central Caucasus for the expanding Tsardom of Russia against stubborn Vainakh defenders. Chechnya was a nation in the Northern Caucasus that fought against foreign rule continually since the 15th century. Several Chechen leaders such as the 17th century Mehk-Da Aldaman Gheza led the Chechen politics and fought off encroachments of foreign powers. He defended the borders of Chechnya from invasions of Kabardinians and Avars during the Battle of Khachara in 1667. The Chechens converted over the next few centuries to Sunni Islam, as Islam was associated with resistance to Russian encroachment. Imperial rule Peter the Great first sought to increase Russia's political influence in the Caucasus and the Caspian Sea at the expense of Safavid Persia when he launched the Russo-Persian War of 1722–1723. Notable in Chechen history, this particular Russo-Persian War marked the first military encounter between Imperial Russia and the Vainakh. Russian forces succeeded in taking much of the Caucasian territories from Iran for several years. As the Russians took control of the Caspian corridor and moved into Persian-ruled Dagestan, Peter's forces ran into mountain tribes. Peter sent a cavalry force to subdue them, but the Chechens routed them. In 1732, after Russia already ceded back most of the Caucasus to Persia, now led by Nader Shah, following the Treaty of Resht, Russian troops clashed again with Chechens in a village called Chechen-aul along the Argun River. The Russians were defeated again and withdrew, but this battle is responsible for the apocryphal story about how the Nokchi came to be known as "Chechens"-the people ostensibly named for the place the battle had taken place. The name Chechen was however already used since as early as 1692. 
Under intermittent Persian rule since 1555, in 1783 the eastern Georgians of Kartl-Kakheti led by Erekle II and Russia signed the Treaty of Georgievsk. According to this treaty, Kartl-Kakheti received protection from Russia, and Georgia abjured any dependence on Iran. In order to increase its influence in the Caucasus and to secure communications with Kartli and other Christian regions of the Transcaucasia which it considered useful in its wars against Persia and Turkey, the Russian Empire began conquering the Northern Caucasus mountains. The Russian Empire used Christianity to justify its conquests, allowing Islam to spread widely because it positioned itself as the religion of liberation from tsardom, which viewed Nakh tribes as "bandits". The rebellion was led by Mansur Ushurma, a Chechen Naqshbandi (Sufi) sheikh—with wavering military support from other North Caucasian tribes. Mansur hoped to establish a Transcaucasus Islamic state under sharia law. He was unable to fully achieve this because in the course of the war he was betrayed by the Ottomans, handed over to Russians, and executed in 1794. Following the forced ceding of the current territories of Dagestan, most of Azerbaijan, and Georgia by Persia to Russia, following the Russo-Persian War of 1804–1813 and its resultant Treaty of Gulistan, Russia significantly widened its foothold in the Caucasus at the expense of Persia. Another successful Caucasus war against Persia several years later, starting in 1826 and ending in 1828 with the Treaty of Turkmenchay, and a successful war against Ottoman Turkey in 1828 and 1829, enabled Russia to use a much larger portion of its army in subduing the natives of the North Caucasus. The resistance of the Nakh tribes never ended and was a fertile ground for a new Muslim-Avar commander, Imam Shamil, who fought against the Russians from 1834 to 1859 (see Murid War). In 1859, Shamil was captured by Russians at aul Gunib. Shamil left Baysangur of Benoa, a Chechen with one arm, one eye, and one leg, in charge of command at Gunib. Baysangur broke through the siege and continued to fight Russia for another two years until he was captured and killed by Russians. The Russian tsar hoped that by sparing the life of Shamil, the resistance in the North Caucasus would stop, but it did not. Russia began to use a colonization tactic by destroying Nakh settlements and building Cossack defense lines in the lowlands. The Cossacks suffered defeat after defeat and were constantly attacked by mountaineers, who were robbing them of food and weaponry. The tsarists' regime used a different approach at the end of the 1860s. They offered Chechens and Ingush to leave the Caucasus for the Ottoman Empire (see Muhajir (Caucasus)). It is estimated that about 80% of Chechens and Ingush left the Caucasus during the deportation. It weakened the resistance which went from open warfare to insurgent warfare. One of the notable Chechen resistance fighters at the end of the 19th century was a Chechen abrek Zelimkhan Gushmazukaev and his comrade-in-arms Ingush abrek Sulom-Beck Sagopshinski. Together they built up small units which constantly harassed Russian military convoys, government mints, and government post-service, mainly in Ingushetia and Chechnya. Ingush aul Kek was completely burned when the Ingush refused to hand over Zelimkhan. Zelimkhan was killed at the beginning of the 20th century. 
The war between Nakh tribes and Russia resurfaced during the times of the Russian Revolution, which saw the Nakh struggle against Anton Denikin and later against the Soviet Union. On 21 December 1917, Ingushetia, Chechnya, and Dagestan declared independence from Russia and formed a single state: "United Mountain Dwellers of the North Caucasus" (also known as the Mountainous Republic of the Northern Caucasus) which was recognized by major world powers. The capital of the new state was moved to Temir-Khan-Shura (Dagestan). Tapa Tchermoeff, a prominent Chechen statesman, was elected the first prime minister of the state. The second prime minister elected was Vassan-Girey Dzhabagiev, an Ingush statesman, who also was the author of the constitution of the republic in 1917, and in 1920 he was re-elected for the third term. In 1921 the Russians attacked and occupied the country and forcefully absorbed it into the Soviet state. The Caucasian war for independence restarted, and the government went into exile. Soviet rule During Soviet rule, Chechnya and Ingushetia were combined to form the Checheno-Ingush Autonomous Soviet Socialist Republic. In the 1930s, Chechnya was flooded with many Ukrainians fleeing from a famine. As a result, many of the Ukrainians settled in Chechen-Ingush ASSR permanently and survived the famine. Although over 50,000 Chechens and over 12,000 Ingush were fighting against Nazi Germany on the front line (including Heroes of the USSR: Abukhadzhi Idrisov, Khanpasha Nuradilov, Movlid Visaitov), and although Nazi German troops advanced as far as the Ossetian ASSR city of Ordzhonikidze and the Chechen-Ingush ASSR city of Malgobek after capturing half of the Caucasus in less than a month, Chechens and Ingush were falsely accused as Nazi supporters and entire nations were deported during Operation Lentil to the Kazakh SSR (later Kazakhstan) in 1944 near the end of World War II where over 60% of Chechen and Ingush populations perished. American historian Norman Naimark writes: The deportation was justified by the materials prepared by NKVD officer Bogdan Kobulov accusing Chechens and Ingush in a mass conspiracy preparing rebellion and providing assistance to the German forces. Many of the materials were later proven to be fabricated. Even distinguished Red Army officers who fought bravely against Germans (e.g. the commander of 255th Separate Chechen-Ingush regiment Movlid Visaitov, the first to contact American forces at Elbe river) were deported. There is a theory that the real reason why Chechens and Ingush were deported was the desire of Russia to attack Turkey, an anti-communist country, as Chechens and Ingush could impede such plans. In 2004, the European Parliament recognized the deportation of Chechens and Ingush as an act of genocide. The territory of the Chechen-Ingush Autonomous Soviet Socialist Republic was divided between Stavropol Krai (where Grozny Okrug was formed), the Dagestan ASSR, the North Ossetian ASSR, and the Georgian SSR. The Chechens and Ingush were allowed to return to their land after 1956 during de-Stalinisation under Nikita Khrushchev when the Chechen-Ingush ASSR was restored but with both the boundaries and ethnic composition of the territory significantly changed. There were many (predominantly Russian) migrants from other parts of the Soviet Union, who often settled in the abandoned family homes of Chechens and Ingushes. 
The republic lost its Prigorodny District which transferred to North Ossetian ASSR but gained predominantly Russian Naursky District and Shelkovskoy District that is considered the homeland for Terek Cossacks. The Russification policies towards Chechens continued after 1956, with Russian language proficiency required in many aspects of life to provide Chechens better opportunities for advancement in the Soviet system. On 26 November 1990, the Supreme Council of Chechen-Ingush ASSR adopted the "Declaration of State Sovereignty of the Chechen-Ingush Republic". This declaration was part of the reorganisation of the Soviet Union. This new treaty was to be signed 22 August 1991, which would have transformed 15 republic states into more than 80. The 19–21 August 1991 Soviet coup d'état attempt led to the abandonment of this reorganisation. With the impending dissolution of the Soviet Union in 1991, an independence movement, the Chechen National Congress, was formed, led by ex-Soviet Air Force general and new Chechen President Dzhokhar Dudayev. It campaigned for the recognition of Chechnya as a separate nation. This movement was opposed by Boris Yeltsin's Russian Federation, which argued that Chechnya had not been an independent entity within the Soviet Union—as the Baltic, Central Asian, and other Caucasian States had—but was part of the Russian Soviet Federative Socialist Republic and hence did not have a right under the Soviet constitution to secede. It also argued that other republics of Russia, such as Tatarstan, would consider seceding from the Russian Federation if Chechnya were granted that right. Finally, it argued that Chechnya was a major hub in the oil infrastructure of Russia and hence its secession would hurt the country's economy and energy access. In the ensuing decade, the territory was locked in an ongoing struggle between various factions, usually fighting unconventionally. Chechen Wars The First Chechen War took place from 1994 to 1996, when Russian forces attempted to regain control over Chechnya, which had declared independence in November 1991. Despite overwhelming numerical superiority in men, weaponry, and air support, the Russian forces were unable to establish effective permanent control over the mountainous area due to numerous successful full-scale battles and insurgency raids. The Budyonnovsk hospital hostage crisis in 1995 shocked the Russian public. In April 1996 the first democratically elected president of Chechnya, Dzhokhar Dudayev, was killed by Russian forces using a booby trap bomb and a missile fired from a warplane after he was located by triangulating the position of a satellite phone he was using. The widespread demoralisation of the Russian forces in the area and a successful offensive to re-take Grozny by Chechen rebel forces led by Aslan Maskhadov prompted Russian President Boris Yeltsin to declare a ceasefire in 1996, and sign a peace treaty a year later that saw a withdrawal of Russian forces. After the war, parliamentary and presidential elections took place in January 1997 in Chechnya and brought to power new President Aslan Maskhadov, chief of staff and prime minister in the Chechen coalition government, for a five-year term. Maskhadov sought to maintain Chechen sovereignty while pressing the Russian government to help rebuild the republic, whose formal economy and infrastructure were virtually destroyed. Russia continued to send money for the rehabilitation of the republic; it also provided pensions and funds for schools and hospitals. 
Nearly half a million people (40% of Chechnya's prewar population) had been internally displaced and lived in refugee camps or overcrowded villages. There was an economic downturn. Two Russian brigades were permanently stationed in Chechnya. In lieu of the devastated economic structure, kidnapping emerged as the principal source of income countrywide, procuring over US$200 million during the three-year independence of the chaotic fledgling state, although victims were rarely killed. In 1998, 176 people were kidnapped, 90 of whom were released, according to official accounts. President Maskhadov started a major campaign against hostage-takers, and on 25 October 1998, Shadid Bargishev, Chechnya's top anti-kidnapping official, was killed in a remote-controlled car bombing. Bargishev's colleagues then insisted they would not be intimidated by the attack and would go ahead with their offensive. Political violence and religious extremism, blamed on "Wahhabism", was rife. In 1998, Grozny authorities declared a state of emergency. Tensions led to open clashes between the Chechen National Guard and Islamist militants, such as the July 1998 confrontation in Gudermes. The War of Dagestan began on 7 August 1999, during which the Islamic International Peacekeeping Brigade (IIPB) began an unsuccessful incursion into the neighboring Russian republic of Dagestan in favor of the Shura of Dagestan which sought independence from Russia. In September, a series of apartment bombs that killed around 300 people in several Russian cities, including Moscow, were blamed on the Chechen separatists. Some journalists contested the official explanation, instead blaming the Russian Secret Service for blowing up the buildings to initiate a new military campaign against Chechnya. In response to the bombings, a prolonged air campaign of retaliatory strikes against the Ichkerian regime and a ground offensive that began in October 1999 marked the beginning of the Second Chechen War. Much better organized and planned than in the first Chechen War, the Russian armed forces took control of most regions. The Russian forces used brutal force, killing 60 Chechen civilians during a mop-up operation in Aldy, Chechnya on 5 February 2000. After the re-capture of Grozny in February 2000, the Ichkerian regime fell apart. Post-war reconstruction and insurgency Chechen rebels continued to fight Russian troops and conduct terrorist attacks. In October 2002, 40–50 Chechen rebels seized a Moscow theater and took about 900 civilians hostage. The crisis ended with 117 hostages and up to 50 rebels dead, mostly due to an unknown aerosol pumped into the building by Russian special forces to incapacitate the people inside. In response to the increasing terrorism, Russia tightened its grip on Chechnya and expanded its anti-terrorist operations throughout the region. Russia installed a pro-Russian Chechen regime. In 2003, a referendum was held on a constitution that reintegrated Chechnya within Russia but provided limited autonomy. According to the Chechen government, the referendum passed with 95.5% of the votes and almost 80% turnout. The Economist was skeptical of the results, arguing that "few outside the Kremlin regard the referendum as fair". In September 2004, separatist rebels occupied a school in the town of Beslan, North Ossetia, demanding recognition of the independence of Chechnya and a Russian withdrawal. 1,100 people (including 777 children) were taken hostage. 
The attack lasted three days, resulting in the deaths of over 331 people, including 186 children. After the 2004 school siege, Russian president Vladimir Putin announced sweeping security and political reforms, sealing borders in the Caucasus region and revealing plans to give the central government more power. He also vowed to take tougher action against domestic terrorism, including preemptive strikes against Chechen separatists. In 2005 and 2006, separatist leaders Aslan Maskhadov and Shamil Basayev were killed. Since 2007, Chechnya has been governed by Ramzan Kadyrov. Kadyrov's rule has been characterized by high-level corruption, a poor human rights record, widespread use of torture, and a growing cult of personality. Allegations of anti-gay purges in Chechnya were initially reported on 1 April 2017. In April 2009, Russia ended its counter-terrorism operation and pulled out the bulk of its army. The insurgency in the North Caucasus continued even after this date. The Caucasus Emirate had fully adopted the tenets of a Salafist jihadist group through its strict adherence to the Sunni Hanbali obedience to the literal interpretation of the Quran and the Sunnah. In June 2022, the US State Department advised citizens not to travel to Chechnya, due to terrorism, kidnapping, and risk of civil unrest. Geography Situated in the eastern part of the North Caucasus in Eastern Europe, Chechnya is surrounded on nearly all sides by Russian Federal territory. In the west, it borders North Ossetia and Ingushetia, in the north, Stavropol Krai, in the east, Dagestan, and to the south, Georgia. Its capital is Grozny. Chechnya is well known for being mountainous, but it is in fact split between the more flat areas north of the Terek, and the highlands south of the Terek. Area: 17,300 km2 (6680 sq mi) Borders: Internal: Dagestan (NE) Ingushetia (W) North Ossetia–Alania (W) Stavropol Krai (NW) Foreign: Georgia (Kakheti and Mtskheta-Mtianeti) (S) Rivers: Terek Sunzha Argun Climate Despite a relatively small territory, Chechnya is characterized by a significant variety of climate conditions. The average temperature in Grozny is 11.2 °C (52.1 °F). Cities and towns with over 20,000 people Grozny (capital) Shali Urus-Martan Gudermes Argun Administrative divisions The Chechen Republic is divided into 15 districts and 3 cities of republican significance. Informal divisions There are no true districts of Chechnya, but the different dialects of the Chechen language informally define different districts. The main dialects are: Grozny, also known as the Dzhokhar dialect, is the dialect of people who live in and in some towns around Grozny. Naskhish, a dialect spoken to the northeast of Chechnya. The most notable difference in this dialect is the addition of the letters "ȯ", "ј" and "є" Day, pronounced like the word 'die' is spoken in a small section of the south, around and in the town of Day. There are other dialects which are believed to define districts, but because these areas are so isolated, not much research has been done on these areas. Demographics According to the 2021 Census, the population of the republic is 1,510,824, up from 1,268,989 in the 2010 Census. As of the 2021 Census, Chechens at 1,456,792 make up 96.4% of the republic's population. Other groups include Russians (18,225, or 1.2%), Kumyks (12,184, or 0.8%) and a host of other small groups, each accounting for less than 0.5% of the total population. 
The Armenian community, which used to number around 15,000 in Grozny alone, has dwindled to a few families. The Armenian church of Grozny was demolished in 1930. The birth rate was 25.41 in 2004 (25.7 in Achkhoi-Martan, 19.8 in Grozny, 17.5 in Kurchaloi, 28.3 in Urus-Martan and 11.1 in Vedeno). At the end of the Soviet era, ethnic Russians (including Cossacks) comprised about 23% of the population (269,000 in 1989), but now Russians number only about 16,400 people (about 1.2% of the population) and some emigration is still occurring. The languages used in the Republic are Chechen and Russian. Chechen belongs to the Vaynakh or North-central Caucasian language family, which also includes Ingush and Batsbi. Some scholars place it in a wider North Caucasian language family. Life expectancy Despite its difficult past, Chechnya has a high life expectancy, one of the highest in Russia. But the pattern of life expectancy is unusual, and according to numerous statistics, Chechnya stands out from the overall picture. In 2020 Chechnya had the deepest fall in life expectancy, but in 2021 it had the biggest rise. Chechnya also has the largest excess of life expectancy in rural areas over that in cities. Settlements Vital statistics Source: Fedstat (total fertility rate) Ethnic groups (in the territory of modern Chechnya) Religion Islam Islam is the predominant religion in Chechnya, practiced by 95% of those polled in Grozny in 2010. Most of the population follows either the Shafi'i or the Hanafi schools of fiqh (Islamic jurisprudence). The Shafi'i school of jurisprudence has a long tradition among the Chechens, and thus it remains the most practiced. Many Chechens are also Sufis, of either the Qadiri or Naqshbandi orders. Following the end of the Soviet Union, there has been an Islamic revival in Chechnya, and in 2011 it was estimated that there were 465 mosques, including the Akhmad Kadyrov Mosque in Grozny accommodating 10,000 worshippers, as well as 31 madrasas, including an Islamic university named Kunta-haji and a Center of Islamic Medicine in Grozny which is the largest such institution in Europe. Christianity From the 11th to 13th centuries (i.e. before Mongol invasions of Durdzuketia), there was a mission of Georgian Orthodox missionaries to the Nakh peoples. Their success was limited, though a couple of highland teips did convert (conversion was largely by teip). However, during the Mongol invasions, these Christianized teips gradually reverted to paganism, perhaps due to the loss of trans-Caucasian contacts as the Georgians fought the Mongols and briefly fell under their dominion. The once-strong Russian minority in Chechnya, mostly Terek Cossacks and estimated as numbering approximately 25,000 in 2012, is predominantly Russian Orthodox, although currently only one church exists in Grozny. In August 2011, Archbishop Zosima of Vladikavkaz and Makhachkala performed the first mass baptism ceremony in the history of the Chechen Republic in the Terek River of Naursky District in which 35 citizens of Naursky and Shelkovsky districts were converted to Orthodoxy. As of 2020, there are eight Orthodox churches in Chechnya, the largest of which is the Church of the Archangel Michael in Grozny. Politics Since 1990, the Chechen Republic has had many legal, military, and civil conflicts involving separatist movements and pro-Russian authorities. Today, Chechnya is a relatively stable federal republic, although there is still some separatist movement activity. 
Its regional constitution entered into effect on 2 April 2003, after an all-Chechen referendum was held on 23 March 2003. Some Chechens were controlled by regional teips, or clans, despite the existence of pro- and anti-Russian political structures. Regional government The former separatist religious leader (mufti) Akhmad Kadyrov, looked upon as a traitor by many separatists, was elected president with 83% of the vote in an internationally monitored election on 5 October 2003. Incidents of ballot stuffing and voter intimidation by Russian soldiers and the exclusion of separatist parties from the polls were subsequently reported by Organization for Security and Co-operation in Europe (OSCE) monitors. On 9 May 2004, Kadyrov was assassinated in Grozny football stadium by a landmine explosion that was planted beneath a VIP stage and detonated during a parade, and Sergey Abramov was appointed acting prime minister after the incident. However, since 2005 Ramzan Kadyrov (son of Akhmad Kadyrov) has been the caretaker prime minister, and in 2007 was appointed as the new president. Many allege he is the wealthiest and most powerful man in the republic, with control over a large private militia (the Kadyrovites). The militia, which began as his father's security force, has been accused of killings and kidnappings by human rights organisations such as Human Rights Watch. Separatist government Ichkeria was a member of the Unrepresented Nations and Peoples Organisation between 1991 and 2010. Former president of Georgia Zviad Gamsakhurdia deposed in a military coup of 1991 and a participant of the Georgian Civil War, recognized the independence of the Chechen Republic of Ichkeria in 1993. Diplomatic relations with Ichkeria were also established by the partially recognised Islamic Emirate of Afghanistan under the Taliban government on 16 January 2000. This recognition ceased with the fall of the Taliban in 2001. However, despite Taliban recognition, there were no friendly relations between the Taliban and Ichkeria—Maskhadov rejected their recognition, stating that the Taliban were illegitimate. Ichkeria also received vocal support from the Baltic countries, a group of Ukrainian nationalists, and Poland; Estonia once voted to recognize, but the act never was followed through due to pressure applied by both Russia and the EU. The president of this government was Aslan Maskhadov, the foreign minister was Ilyas Akhmadov, who was the spokesman for Maskhadov. Aslan Maskhadov had been elected in an internationally monitored election in 1997 for four years, which took place after signing a peace agreement with Russia. In 2001 he issued a decree prolonging his office for one additional year; he was unable to participate in the 2003 presidential election since separatist parties were barred by the Russian government, and Maskhadov faced accusations of terrorist offenses in Russia. Maskhadov left Grozny and moved to the separatist-controlled areas of the south at the onset of the Second Chechen War. Maskhadov was unable to influence a number of warlords who retain effective control over Chechen territory, and his power was diminished as a result. Russian forces killed Maskhadov on 8 March 2005, and the assassination of Maskhadov was widely criticized since it left no legitimate Chechen separatist leader with whom to conduct peace talks. Akhmed Zakayev, deputy prime minister and a foreign minister under Maskhadov, was appointed shortly after the 1997 election and is currently living under asylum in England. 
He and others chose Abdul Khalim Saidullayev, a relatively unknown Islamic judge who was previously the host of an Islamic program on Chechen television, to replace Maskhadov following his death. On 17 June 2006, it was reported that Russian special forces killed Abdul Khalim Saidullayev in a raid in the Chechen town of Argun. Saidullayev was in turn succeeded by Doku Umarov. On 31 October 2007, Umarov abolished the Chechen Republic of Ichkeria and its presidency and in its place proclaimed the Caucasus Emirate with himself as its Emir. This change of status has been rejected by many Chechen politicians and military leaders who continue to support the existence of the republic. During the 2022 Russian invasion of Ukraine, the Ukrainian parliament voted to recognize the "Chechen republic of Ichkeria as territory temporarily occupied by the Russian Federation". Human rights The Internal Displacement Monitoring Center reports that after hundreds of thousands of ethnic Russians and Chechens fled their homes following inter-ethnic and separatist conflicts in Chechnya in 1994 and 1999, more than 150,000 people still remain displaced in Russia today. Human rights groups criticized the conduct of the 2005 parliamentary elections as unfairly influenced by the central Russian government and military. In 2006 Human Rights Watch reported that pro-Russian Chechen forces under the command of Ramzan Kadyrov, as well as federal police personnel, used torture to get information about separatist forces. "If you are detained in Chechnya, you face a real and immediate risk of torture. And there is little chance that your torturer will be held accountable", said Holly Cartner, Director of the Europe and Central Asia division of Human Rights Watch. In 2009, the US government-financed American organization Freedom House included Chechnya in the "Worst of the Worst" list of most repressive societies in the world, together with Burma, North Korea, Tibet, and others. On 1 February 2009, The New York Times released extensive evidence to support allegations of consistent torture and executions under the Kadyrov government. The accusations were sparked by the assassination in Austria of a former Chechen rebel who had gained access to Kadyrov's inner circle, 27-year-old Umar Israilov. On 1 July 2009, Amnesty International released a detailed report covering the human rights violations committed by the Russian Federation against Chechen citizens. Among the most prominent findings was that those abused had no means of redress against assaults ranging from kidnapping to torture, while those responsible were never held accountable. This led to the conclusion that Chechnya was being ruled without law and was being driven into further devastating destabilization. On 10 March 2011, Human Rights Watch reported that since Chechenization, the government has pushed for an enforced Islamic dress code. President Ramzan Kadyrov is quoted as saying "I have the right to criticize my wife. She doesn't [have the right to criticize me]. With us [in Chechen society], a wife is a housewife. A woman should know her place. A woman should give her love to us [men]... She would be [man's] property. And the man is the owner. Here, if a woman does not behave properly, her husband, father, and brother are responsible. According to our tradition, if a woman fools around, her family members kill her... That's how it happens, a brother kills his sister or a husband kills his wife... As a president, I cannot allow for them to kill. 
So, let women not wear shorts...". He has also openly defended honor killings on several occasions. On 9 July 2017, Russian newspaper Novaya Gazeta reported that a number of people were subjected to extrajudicial execution on the night of 26 January 2017. It published 27 names of people known to be dead, but stressed that the list is "not all [of those killed]"; the newspaper asserted that 50 people may have been killed in the executions. Some of the dead were gay, but not all; the killings appeared to have been triggered by the death of a policeman, and according to the author of the report, Elena Milashina, the victims were executed for alleged terrorism. In December 2021, up to 50 family members of critics of the Kadyrov government were abducted in a wave of mass kidnappings beginning on 22 December. LGBT rights Under a Criminal Code reportedly implemented in the Chechen Republic of Ichkeria on 1 September 1997, Article 148 punishes "anal sexual intercourse between a man and a woman or a man and a man". For first- and second-time offenders, the punishment is caning. A third conviction leads to the death penalty, which can be carried out in a number of ways including stoning or beheading. In 2017, it was reported by Novaya Gazeta and human rights groups that Chechen authorities had set up concentration camps, one of which is in Argun, where gay men are interrogated and subjected to physical violence. On 27 June 2018, the Parliamentary Assembly of the Council of Europe noted "cases of abduction, arbitrary detention and torture ... with the direct involvement of Chechen law enforcement officials and on the orders of top-level Chechen authorities" and expressed dismay "at the statements of Chechen and Russian public officials denying the existence of LGBTI people in the Chechen Republic". Kadyrov's spokesman Alvi Karimov told Interfax that gay people "simply do not exist in the republic" and made an approving reference to honor killings by family members "if there were such people in Chechnya". In a 2021 Council of Europe report into anti-LGBTI hate crimes, rapporteur Foura ben Chikha described the "state-sponsored attacks carried out against LGBTI people in Chechnya in 2017" as "the single most egregious example of violence against LGBTI people in Europe that has occurred in decades". On 11 January 2019, it was reported that another 'gay purge' had begun in the country in December 2018, with several gay men and women being detained. The Russian LGBT Network believes that around 40 people were detained and two killed. Economy During the war, the Chechen economy fell apart. In 1994, the separatists planned to introduce a new currency, but the change did not occur due to the re-taking of Chechnya by Russian troops in the Second Chechen War. The economic situation in Chechnya has improved considerably since 2000. According to The New York Times, major efforts to rebuild Grozny have been made, and improvements in the political situation have led some officials to consider setting up a tourism industry, though there are claims that construction workers are being irregularly paid and that poor people have been displaced. Chechnya's unemployment rate was 67% in 2006 and fell to 21.5% in 2014. Total revenue of the budget of Chechnya for 2017 was 59.2 billion rubles. Of this, 48.5 billion rubles were grants from the federal budget of the Russian Federation. 
In the late 1970s, Chechnya produced up to 20 million tons of oil annually. Production declined sharply to approximately 3 million tons in the late 1980s, and to below 2 million tons before 1994. The first (1994–1996) and second (1999) Russian invasions of Chechnya inflicted material damage on the oil-sector infrastructure, and oil production decreased to 750,000 tons in 2001, only to increase to 2 million tons in 2006; by 2012 production was 1 million tons. References Notes Sources Further reading Anderson, Scott. The Man Who Tried to Save the World. Babchenko, Arkady. One Soldier's War in Chechnya. Portobello, London Baiev, Khassan. The Oath: A Surgeon Under Fire. Bennigsen-Broxup, Marie. The North Caucasus Barrier: The Russian Advance Towards the Muslim World. Bird, Chris. To Catch a Tartar: Notes from the Caucasus. Bornstein, Yvonne and Ribowsky, Mark. "Eleven Days of Hell: My True Story of Kidnapping, Terror, Torture And Historic FBI & KGB Rescue" AuthorHouse, 2004. Conrad, Roy. Grozny. A few days... Dunlop, John B. Russia Confronts Chechnya: Roots of a Separatist Conflict Evangelista, Matthew. The Chechen Wars: Will Russia Go the Way of the Soviet Union? Gall, Carlotta & de Waal, Thomas. Chechnya: A Small Victorious War. Gall, Carlotta, and de Waal, Thomas. Chechnya: Calamity in the Caucasus Goltz, Thomas. Chechnya Diary: A War Correspondent's Story of Surviving the War in Chechnya. M E Sharpe (2003). Hasanov, Zaur. The Man of the Mountains. (facts-based novel on the growing influence of radical Islam during the 1st and 2nd Chechnya wars) Khan, Ali. The Chechen Terror: The Play within the Play Khlebnikov, Paul. Razgovor s varvarom (Interview with a barbarian). Lieven, Anatol. Chechnya: Tombstone of Russian Power Mironov, Vyacheslav. Ya byl na etoy voyne. (I was in this war) Biblion – Russkaya Kniga, 2001. Partial translation available online. Mironov, Vyacheslav. Assault on Grozny Downtown Mironov, Vyacheslav. I was in that war. Oliker, Olga. Russia's Chechen Wars 1994–2000: Lessons from Urban Combat. (A strategic and tactical analysis of the Chechen Wars.) Pelton, Robert Young. Hunter Hammer and Heaven, Journeys to Three Worlds Gone Mad Politkovskaya, Anna. A Small Corner of Hell: Dispatches from Chechnya Rasizade, Alec. Chechnya: the Achilles heel of Russia. Contemporary Review (Oxford) in three parts: 1) April 2005 issue, volume 286, number 1671, pages 193–197; 2) May 2005 issue, volume 286, number 1672, pages 277–284; 3) June 2005 issue, volume 286, number 1673, pages 327–332. Seierstad, Åsne. The Angel of Grozny. Wood, Tony. Chechnya: The Case For Independence Book review in The Independent, 2007 External links of the Republic of Chechnya (video) Islamist Extremism in Chechnya: A Threat to U.S. Homeland?: Joint Hearing before the Subcommittee on Europe, Eurasia, and Emerging Threats and the Subcommittee on Terrorism, Nonproliferation, and Trade of the Committee on Foreign Affairs, House of Representatives, One Hundred Thirteenth Congress, First Session, April 26, 2013 Chechnya Guide 1993 establishments in Russia Chechen-speaking countries and territories North Caucasian Federal District North Caucasus Regions of Europe with multiple official languages States and territories established in 1993
2,760
6,099
https://en.wikipedia.org/wiki/Carboxylic%20acid
Carboxylic acid
In organic chemistry, a carboxylic acid is an organic acid that contains a carboxyl group (−C(=O)OH) attached to an R-group. The general formula of a carboxylic acid is R−COOH or R−CO2H, with R referring to the alkyl, alkenyl, aryl, or other group. Carboxylic acids occur widely. Important examples include the amino acids and fatty acids. Deprotonation of a carboxylic acid gives a carboxylate anion. Examples and nomenclature Carboxylic acids are commonly identified by their trivial names. They often have the suffix -ic acid. IUPAC-recommended names also exist; in this system, carboxylic acids have an -oic acid suffix. For example, butyric acid (C3H7CO2H) is butanoic acid by IUPAC guidelines. For nomenclature of complex molecules containing a carboxylic acid, the carboxyl can be considered position one of the parent chain even if there are other substituents, such as 3-chloropropanoic acid. Alternately, it can be named as a "carboxy" or "carboxylic acid" substituent on another parent structure, such as 2-carboxyfuran. The carboxylate anion (R–COO− or RCO2−) of a carboxylic acid is usually named with the suffix -ate, in keeping with the general pattern of -ic acid and -ate for a conjugate acid and its conjugate base, respectively. For example, the conjugate base of acetic acid is acetate. Carbonic acid, which occurs in bicarbonate buffer systems in nature, is not generally classed as one of the carboxylic acids, even though it has a moiety that looks like a COOH group. Physical properties Solubility Carboxylic acids are polar. Because they are both hydrogen-bond acceptors (the carbonyl –C=O) and hydrogen-bond donors (the hydroxyl –OH), they also participate in hydrogen bonding. Together, the hydroxyl and carbonyl group form the functional group carboxyl. Carboxylic acids usually exist as dimers in nonpolar media due to their tendency to "self-associate". Smaller carboxylic acids (1 to 5 carbons) are soluble in water, whereas larger carboxylic acids have limited solubility due to the increasing hydrophobic nature of the alkyl chain. These longer-chain acids tend to be soluble in less-polar solvents such as ethers and alcohols. Aqueous sodium hydroxide and carboxylic acids, even hydrophobic ones, react to yield water-soluble sodium salts. For example, enanthic acid has a low solubility in water (0.2 g/L), but its sodium salt is very soluble in water. Boiling points Carboxylic acids tend to have higher boiling points than water, because of their greater surface areas and their tendency to form stabilised dimers through hydrogen bonds. For boiling to occur, either the dimer bonds must be broken or the entire dimer arrangement must be vaporised, increasing the enthalpy of vaporization requirements significantly. Acidity Carboxylic acids are Brønsted–Lowry acids because they are proton (H+) donors. They are the most common type of organic acid. Carboxylic acids are typically weak acids, meaning that they only partially dissociate into H3O+ cations and RCOO− anions in neutral aqueous solution. For example, at room temperature, in a 1-molar solution of acetic acid, only about 0.4% of the acid molecules are dissociated (roughly 0.004 mol out of 1 mol). Electron-withdrawing substituents, such as the –CF3 group, give stronger acids (the pKa of formic acid is 3.75 whereas trifluoroacetic acid, with a trifluoromethyl substituent, has a pKa of 0.23). 
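These pKa differences translate directly into how much of a given acid actually ionizes. A minimal sketch, assuming ideal behaviour and neglecting the autoionization of water (the helper function below is purely illustrative and not part of any standard library), solves the weak-acid equilibrium Ka = x²/(C − x) for a 1-molar solution:

```python
# Illustrative sketch: fraction of a monoprotic acid RCOOH dissociated at
# equilibrium, assuming ideal behaviour and neglecting water autoionization.
import math

def fraction_dissociated(pka: float, conc: float) -> float:
    """Solve Ka = x^2 / (C - x) for x = [H3O+] = [RCOO-]; return x / C."""
    ka = 10.0 ** (-pka)
    # Positive root of x^2 + Ka*x - Ka*C = 0
    x = (-ka + math.sqrt(ka * ka + 4.0 * ka * conc)) / 2.0
    return x / conc

for name, pka in [("acetic acid", 4.76), ("trifluoroacetic acid", 0.23)]:
    print(f"{name}: {fraction_dissociated(pka, 1.0):.1%} dissociated at 1 mol/L")
# acetic acid: ~0.4%; trifluoroacetic acid: ~53%
```

Under these simplifying assumptions, trifluoroacetic acid is more than half ionized in the same 1-molar solution in which acetic acid is ionized to only a fraction of a percent, mirroring the difference in pKa values.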
Electron-donating substituents give weaker acids (the pKa of formic acid is 3.75 whereas acetic acid, with a methyl substituent, has a pKa of 4.76) Deprotonation of carboxylic acids gives carboxylate anions; these are resonance stabilized, because the negative charge is delocalized over the two oxygen atoms, increasing the stability of the anion. Each of the carbon–oxygen bonds in the carboxylate anion has a partial double-bond character. The carbonyl carbon's partial positive charge is also weakened by the -1/2 negative charges on the 2 oxygen atoms. Odour Carboxylic acids often have strong sour odours. Esters of carboxylic acids tend to have fruity, pleasant odours, and many are used in perfume. Characterization Carboxylic acids are readily identified as such by infrared spectroscopy. They exhibit a sharp band associated with vibration of the C=O carbonyl bond (νC=O) between 1680 and 1725 cm−1. A characteristic νO–H band appears as a broad peak in the 2500 to 3000 cm−1 region. By 1H NMR spectrometry, the hydroxyl hydrogen appears in the 10–13 ppm region, although it is often either broadened or not observed owing to exchange with traces of water. Occurrence and applications Many carboxylic acids are produced industrially on a large scale. They are also frequently found in nature. Esters of fatty acids are the main components of lipids and polyamides of aminocarboxylic acids are the main components of proteins. Carboxylic acids are used in the production of polymers, pharmaceuticals, solvents, and food additives. Industrially important carboxylic acids include acetic acid (component of vinegar, precursor to solvents and coatings), acrylic and methacrylic acids (precursors to polymers, adhesives), adipic acid (polymers), citric acid (a flavor and preservative in food and beverages), ethylenediaminetetraacetic acid (chelating agent), fatty acids (coatings), maleic acid (polymers), propionic acid (food preservative), terephthalic acid (polymers). Important carboxylate salts are soaps. Synthesis Industrial routes In general, industrial routes to carboxylic acids differ from those used on a smaller scale because they require specialized equipment. Carbonylation of alcohols as illustrated by the Cativa process for the production of acetic acid. Formic acid is prepared by a different carbonylation pathway, also starting from methanol. Oxidation of aldehydes with air using cobalt and manganese catalysts. The required aldehydes are readily obtained from alkenes by hydroformylation. Oxidation of hydrocarbons using air. For simple alkanes, this method is inexpensive but not selective enough to be useful. Allylic and benzylic compounds undergo more selective oxidations. Alkyl groups on a benzene ring are oxidized to the carboxylic acid, regardless of its chain length. Benzoic acid from toluene, terephthalic acid from para-xylene, and phthalic acid from ortho-xylene are illustrative large-scale conversions. Acrylic acid is generated from propene. Oxidation of ethene using silicotungstic acid catalyst. Base-catalyzed dehydrogenation of alcohols. Carbonylation coupled to the addition of water. This method is effective and versatile for alkenes that generate secondary and tertiary carbocations, e.g. isobutylene to pivalic acid. In the Koch reaction, the addition of water and carbon monoxide to alkenes is catalyzed by strong acids. Hydrocarboxylations involve the simultaneous addition of water and CO. Such reactions are sometimes called "Reppe chemistry." 
HCCH + CO + H2O → CH2=CHCO2H Hydrolysis of triglycerides obtained from plant or animal oils. These methods of synthesizing some long-chain carboxylic acids are related to soap making. Fermentation of ethanol. This method is used in the production of vinegar. The Kolbe–Schmitt reaction provides a route to salicylic acid, precursor to aspirin. Laboratory methods Preparative methods for small scale reactions for research or for production of fine chemicals often employ expensive consumable reagents. Oxidation of primary alcohols or aldehydes with strong oxidants such as potassium dichromate, Jones reagent, potassium permanganate, or sodium chlorite. The method is more suitable for laboratory conditions than the industrial use of air, which is "greener" because it yields less inorganic side products such as chromium or manganese oxides. Oxidative cleavage of olefins by ozonolysis, potassium permanganate, or potassium dichromate. Hydrolysis of nitriles, esters, or amides, usually with acid- or base-catalysis. Carbonation of a Grignard reagent and organolithium reagents: RLi + CO2 → RCO2Li RCO2Li + HCl → RCO2H + LiCl Halogenation followed by hydrolysis of methyl ketones in the haloform reaction Base-catalyzed cleavage of non-enolizable ketones, especially aryl ketones: RC(O)Ar + H2O → RCO2H + ArH Less-common reactions Many reactions produce carboxylic acids but are used only in specific cases or are mainly of academic interest. Disproportionation of an aldehyde in the Cannizzaro reaction Rearrangement of diketones in the benzilic acid rearrangement Involving the generation of benzoic acids are the von Richter reaction from nitrobenzenes and the Kolbe–Schmitt reaction from phenols. Reactions The most widely practiced reactions convert carboxylic acids into esters, amides, carboxylate salts, acid chlorides, and alcohols. Carboxylic acids react with bases to form carboxylate salts, in which the hydrogen of the hydroxyl (–OH) group is replaced with a metal cation. For example, acetic acid found in vinegar reacts with sodium bicarbonate (baking soda) to form sodium acetate, carbon dioxide, and water: CH3COOH + NaHCO3 → CH3COO−Na+ + CO2 + H2O Carboxylic acids also react with alcohols to give esters. This process is widely used, e.g. in the production of polyesters. Likewise, carboxylic acids are converted into amides, but this conversion typically does not occur by direct reaction of the carboxylic acid and the amine. Instead esters are typical precursors to amides. The conversion of amino acids into peptides is a significant biochemical process that requires ATP. The hydroxyl group on carboxylic acids may be replaced with a chlorine atom using thionyl chloride to give acyl chlorides. In nature, carboxylic acids are converted to thioesters. Reduction Like esters, most carboxylic acids can be reduced to alcohols by hydrogenation, or using hydride transferring agents such as lithium aluminium hydride. Strong alkyl transferring agents, such as organolithium compounds but not Grignard reagents, will reduce carboxylic acids to ketones along with transfer of the alkyl group. N,N-Dimethyl(chloromethylene)ammonium chloride (ClHC=N+(CH3)2Cl−) is a highly chemoselective agent for carboxylic acid reduction. It selectively activates the carboxylic acid to give the carboxymethyleneammonium salt, which can be reduced by a mild reductant like lithium tris(t-butoxy)aluminum hydride to afford an aldehyde in a one pot procedure. 
This procedure is known to tolerate reactive carbonyl functionalities such as ketone as well as moderately reactive ester, olefin, nitrile, and halide moieties. Specialized reactions As with all carbonyl compounds, the protons on the α-carbon are labile due to keto–enol tautomerization. Thus, the α-carbon is easily halogenated in the Hell–Volhard–Zelinsky halogenation. The Schmidt reaction converts carboxylic acids to amines. Carboxylic acids are decarboxylated in the Hunsdiecker reaction. The Dakin–West reaction converts an amino acid to the corresponding amino ketone. In the Barbier–Wieland degradation, a carboxylic acid on an aliphatic chain having a simple methylene bridge at the alpha position can have the chain shortened by one carbon. The inverse procedure is the Arndt–Eistert synthesis, where an acid is converted into acyl halide, which is then reacted with diazomethane to give one additional methylene in the aliphatic chain. Many acids undergo oxidative decarboxylation. Enzymes that catalyze these reactions are known as carboxylases (EC 6.4.1) and decarboxylases (EC 4.1.1). Carboxylic acids are reduced to aldehydes via the ester and DIBAL, via the acid chloride in the Rosenmund reduction and via the thioester in the Fukuyama reduction. In ketonic decarboxylation carboxylic acids are converted to ketones. Organolithium reagents (>2 equiv) react with carboxylic acids to give a dilithium 1,1-diolate, a stable tetrahedral intermediate which decomposes to give a ketone upon acidic workup. The Kolbe electrolysis is an electrolytic, decarboxylative dimerization reaction. It gets rid of the carboxyl groups of two acid molecules, and joins the remaining fragments together. Carboxyl radical The carboxyl radical, •COOH, only exists briefly. The acid dissociation constant of •COOH has been measured using electron paramagnetic resonance spectroscopy. The carboxyl group tends to dimerise to form oxalic acid. See also Acid anhydride Acid chloride Amide Amino acid Ester List of carboxylic acids Dicarboxylic acid Polyhydroxy carboxylic acid (PHC). Pseudoacid Thiocarboxy References External links Carboxylic acids pH and titration – freeware for calculations, data analysis, simulation, and distribution diagram generation PHC. Functional groups
2,762
6,105
https://en.wikipedia.org/wiki/Conventional%20insulin%20therapy
Conventional insulin therapy
Conventional insulin therapy is a therapeutic regimen for treatment of diabetes mellitus which contrasts with the newer intensive insulin therapy. This older method (dating from before the development of home blood glucose monitoring) is still in use in a proportion of cases. Conventional insulin therapy is characterized by: Insulin injections of a mixture of regular (or rapid-acting) and intermediate-acting insulin are performed twice a day, or, to improve overnight glucose, mixed in the morning to cover breakfast and lunch, but with regular (or rapid-acting) insulin alone for dinner and intermediate-acting insulin at bedtime (instead of being mixed in at dinner). Meals are scheduled to match the anticipated peaks in the insulin profiles. The target range for blood glucose levels is higher than is desired in the intensive regimen. Frequent measurements of blood glucose levels were not used. The downside of this method is that it is difficult to achieve glycemic control as good as that obtained with intensive insulin therapy. The advantage is that, for diabetics with a regular lifestyle, the regimen is less intrusive than intensive therapy. References Insulin therapies
2,765
6,111
https://en.wikipedia.org/wiki/Chemical%20vapor%20deposition
Chemical vapor deposition
Chemical vapor deposition (CVD) is a vacuum deposition method used to produce high-quality, high-performance solid materials. The process is often used in the semiconductor industry to produce thin films. In typical CVD, the wafer (substrate) is exposed to one or more volatile precursors, which react and/or decompose on the substrate surface to produce the desired deposit. Frequently, volatile by-products are also produced, which are removed by gas flow through the reaction chamber. Microfabrication processes widely use CVD to deposit materials in various forms, including: monocrystalline, polycrystalline, amorphous, and epitaxial. These materials include: silicon (dioxide, carbide, nitride, oxynitride), carbon (fiber, nanofibers, nanotubes, diamond and graphene), fluorocarbons, filaments, tungsten, titanium nitride and various high-κ dielectrics. The term chemical vapour deposition was coined in 1960 by John M. Blocher, Jr., who intended to differentiate chemical from physical vapour deposition (PVD). Types CVD is practiced in a variety of formats. These processes generally differ in the means by which chemical reactions are initiated. Classified by operating conditions: Atmospheric pressure CVD (APCVD) – CVD at atmospheric pressure. Low-pressure CVD (LPCVD) – CVD at sub-atmospheric pressures. Reduced pressures tend to reduce unwanted gas-phase reactions and improve film uniformity across the wafer. Ultrahigh vacuum CVD (UHVCVD) – CVD at very low pressure, typically below 10−6 Pa (≈ 10−8 torr). Note that in other fields, a lower division between high and ultra-high vacuum is common, often 10−7 Pa. Sub-atmospheric CVD (SACVD) – CVD at sub-atmospheric pressures. Uses tetraethyl orthosilicate (TEOS) and ozone to fill high aspect ratio Si structures with silicon dioxide (SiO2). Most modern CVD is either LPCVD or UHVCVD. Classified by physical characteristics of vapor: Aerosol assisted CVD (AACVD) – CVD in which the precursors are transported to the substrate by means of a liquid/gas aerosol, which can be generated ultrasonically. This technique is suitable for use with non-volatile precursors. Direct liquid injection CVD (DLICVD) – CVD in which the precursors are in liquid form (liquid or solid dissolved in a convenient solvent). Liquid solutions are injected into a vaporization chamber via injectors (typically car injectors). The precursor vapors are then transported to the substrate as in classical CVD. This technique is suitable for use on liquid or solid precursors. High growth rates can be reached using this technique. Classified by type of substrate heating: Hot wall CVD – CVD in which the chamber is heated by an external power source and the substrate is heated by radiation from the heated chamber walls. Cold wall CVD – CVD in which only the substrate is directly heated either by induction or by passing current through the substrate itself or a heater in contact with the substrate. The chamber walls are at room temperature. Plasma methods (see also Plasma processing): Microwave plasma-assisted CVD (MPCVD) Plasma-enhanced CVD (PECVD) – CVD that utilizes plasma to enhance chemical reaction rates of the precursors. PECVD processing allows deposition at lower temperatures, which is often critical in the manufacture of semiconductors. The lower temperatures also allow for the deposition of organic coatings, such as plasma polymers, that have been used for nanoparticle surface functionalization. 
Remote plasma-enhanced CVD (RPECVD) – Similar to PECVD except that the wafer substrate is not directly in the plasma discharge region. Removing the wafer from the plasma region allows processing temperatures down to room temperature. Low-energy plasma-enhanced chemical vapor deposition (LEPECVD) - CVD employing a high density, low energy plasma to obtain epitaxial deposition of semiconductor materials at high rates and low temperatures. Atomic-layer CVD (ALCVD) – Deposits successive layers of different substances to produce layered, crystalline films. See Atomic layer epitaxy. Combustion chemical vapor deposition (CCVD) – Combustion Chemical Vapor Deposition or flame pyrolysis is an open-atmosphere, flame-based technique for depositing high-quality thin films and nanomaterials. Hot filament CVD (HFCVD) – also known as catalytic CVD (Cat-CVD) or more commonly, initiated CVD, this process uses a hot filament to chemically decompose the source gases. The filament temperature and substrate temperature thus are independently controlled, allowing colder temperatures for better absorption rates at the substrate and higher temperatures necessary for decomposition of precursors to free radicals at the filament. Hybrid physical-chemical vapor deposition (HPCVD) – This process involves both chemical decomposition of precursor gas and vaporization of a solid source. Metalorganic chemical vapor deposition (MOCVD) – This CVD process is based on metalorganic precursors. Rapid thermal CVD (RTCVD) – This CVD process uses heating lamps or other methods to rapidly heat the wafer substrate. Heating only the substrate rather than the gas or chamber walls helps reduce unwanted gas-phase reactions that can lead to particle formation. Vapor-phase epitaxy (VPE) Photo-initiated CVD (PICVD) – This process uses UV light to stimulate chemical reactions. It is similar to plasma processing, given that plasmas are strong emitters of UV radiation. Under certain conditions, PICVD can be operated at or near atmospheric pressure. Laser chemical vapor deposition (LCVD) - This CVD process uses lasers to heat spots or lines on a substrate in semiconductor applications. In MEMS and in fiber production the lasers are used rapidly to break down the precursor gas—process temperature can exceed 2000 °C—to build up a solid structure in much the same way as laser sintering based 3-D printers build up solids from powders. Uses CVD is commonly used to deposit conformal films and augment substrate surfaces in ways that more traditional surface modification techniques are not capable of. CVD is extremely useful in the process of atomic layer deposition at depositing extremely thin layers of material. A variety of applications for such films exist. Gallium arsenide is used in some integrated circuits (ICs) and photovoltaic devices. Amorphous polysilicon is used in photovoltaic devices. Certain carbides and nitrides confer wear-resistance. Polymerization by CVD, perhaps the most versatile of all applications, allows for super-thin coatings which possess some very desirable qualities, such as lubricity, hydrophobicity and weather-resistance to name a few. The CVD of metal-organic frameworks, a class of crystalline nanoporous materials, has recently been demonstrated. Recently scaled up as an integrated cleanroom process depositing large-area substrates, the applications for these films are anticipated in gas sensing and low-κ dielectrics. 
CVD techniques are advantageous for membrane coatings as well, such as those in desalination or water treatment, as these coatings can be sufficiently uniform (conformal) and thin that they do not clog membrane pores. Commercially important materials prepared by CVD Polysilicon Polycrystalline silicon is deposited from trichlorosilane (SiHCl3) or silane (SiH4), using the following reactions: SiHCl3 → Si + Cl2 + HCl SiH4 → Si + 2 H2 This reaction is usually performed in LPCVD systems, with either pure silane feedstock, or a solution of silane with 70–80% nitrogen. Temperatures between 600 and 650 °C and pressures between 25 and 150 Pa yield a growth rate between 10 and 20 nm per minute. An alternative process uses a hydrogen-based solution. The hydrogen reduces the growth rate, but the temperature is raised to 850 or even 1050 °C to compensate. Polysilicon may be grown directly with doping, if gases such as phosphine, arsine or diborane are added to the CVD chamber. Diborane increases the growth rate, but arsine and phosphine decrease it. Silicon dioxide Silicon dioxide (usually called simply "oxide" in the semiconductor industry) may be deposited by several different processes. Common source gases include silane and oxygen, dichlorosilane (SiCl2H2) and nitrous oxide (N2O), or tetraethylorthosilicate (TEOS; Si(OC2H5)4). The reactions are as follows: SiH4 + O2 → SiO2 + 2 H2 SiCl2H2 + 2 N2O → SiO2 + 2 N2 + 2 HCl Si(OC2H5)4 → SiO2 + byproducts The choice of source gas depends on the thermal stability of the substrate; for instance, aluminium is sensitive to high temperature. Silane deposits between 300 and 500 °C, dichlorosilane at around 900 °C, and TEOS between 650 and 750 °C, resulting in a layer of low- temperature oxide (LTO). However, silane produces a lower-quality oxide than the other methods (lower dielectric strength, for instance), and it deposits nonconformally. Any of these reactions may be used in LPCVD, but the silane reaction is also done in APCVD. CVD oxide invariably has lower quality than thermal oxide, but thermal oxidation can only be used in the earliest stages of IC manufacturing. Oxide may also be grown with impurities (alloying or "doping"). This may have two purposes. During further process steps that occur at high temperature, the impurities may diffuse from the oxide into adjacent layers (most notably silicon) and dope them. Oxides containing 5–15% impurities by mass are often used for this purpose. In addition, silicon dioxide alloyed with phosphorus pentoxide ("P-glass") can be used to smooth out uneven surfaces. P-glass softens and reflows at temperatures above 1000 °C. This process requires a phosphorus concentration of at least 6%, but concentrations above 8% can corrode aluminium. Phosphorus is deposited from phosphine gas and oxygen: 4 PH3 + 5 O2 → 2 P2O5 + 6 H2 Glasses containing both boron and phosphorus (borophosphosilicate glass, BPSG) undergo viscous flow at lower temperatures; around 850 °C is achievable with glasses containing around 5 weight % of both constituents, but stability in air can be difficult to achieve. Phosphorus oxide in high concentrations interacts with ambient moisture to produce phosphoric acid. Crystals of BPO4 can also precipitate from the flowing glass on cooling; these crystals are not readily etched in the standard reactive plasmas used to pattern oxides, and will result in circuit defects in integrated circuit manufacturing. Besides these intentional impurities, CVD oxide may contain byproducts of the deposition. 
TEOS produces a relatively pure oxide, whereas silane introduces hydrogen impurities, and dichlorosilane introduces chlorine. Lower temperature deposition of silicon dioxide and doped glasses from TEOS using ozone rather than oxygen has also been explored (350 to 500 °C). Ozone glasses have excellent conformality but tend to be hygroscopic – that is, they absorb water from the air due to the incorporation of silanol (Si-OH) in the glass. Infrared spectroscopy and mechanical strain as a function of temperature are valuable tools for diagnosing such problems. Silicon nitride Silicon nitride is often used as an insulator and chemical barrier in manufacturing ICs. The following two reactions deposit silicon nitride from the gas phase: 3 SiH4 + 4 NH3 → Si3N4 + 12 H2 3 SiCl2H2 + 4 NH3 → Si3N4 + 6 HCl + 6 H2 Silicon nitride deposited by LPCVD contains up to 8% hydrogen. It also experiences strong tensile stress, which may crack films thicker than 200 nm. However, it has higher resistivity and dielectric strength than most insulators commonly available in microfabrication (10¹⁶ Ω·cm and 10 MV/cm, respectively). Another two reactions may be used in plasma to deposit SiNH: 2 SiH4 + N2 → 2 SiNH + 3 H2 SiH4 + NH3 → SiNH + 3 H2 These films have much less tensile stress, but worse electrical properties (resistivity 10⁶ to 10¹⁵ Ω·cm, and dielectric strength 1 to 5 MV/cm). Metals Tungsten CVD, used for forming conductive contacts, vias, and plugs on a semiconductor device, is achieved from tungsten hexafluoride (WF6), which may be deposited in two ways: WF6 → W + 3 F2 WF6 + 3 H2 → W + 6 HF Other metals, notably aluminium and copper, can be deposited by CVD. Commercially cost-effective CVD for copper did not exist, although volatile sources exist, such as Cu(hfac)2. Copper is typically deposited by electroplating. Aluminium can be deposited from triisobutylaluminium (TIBAL) and related organoaluminium compounds. CVD of molybdenum, tantalum, titanium, and nickel is widely used. These metals can form useful silicides when deposited onto silicon. Mo, Ta and Ti are deposited by LPCVD, from their pentachlorides. Nickel, molybdenum, and tungsten can be deposited at low temperatures from their carbonyl precursors. In general, for an arbitrary metal M, the chloride deposition reaction is as follows: 2 MCl5 + 5 H2 → 2 M + 10 HCl whereas the carbonyl decomposition reaction can happen spontaneously under thermal treatment or acoustic cavitation and is as follows: M(CO)n → M + n CO The decomposition of metal carbonyls is often violently triggered by moisture or air, where oxygen reacts with the metal precursor to form metal or metal oxide along with carbon dioxide. Niobium(V) oxide layers can be produced by the thermal decomposition of niobium(V) ethoxide with the loss of diethyl ether according to the equation: 2 Nb(OC2H5)5 → Nb2O5 + 5 C2H5OC2H5 Graphene Many variations of CVD can be utilized to synthesize graphene. Although many advancements have been made, the processes listed below are not commercially viable yet. Carbon source The most popular carbon source that is used to produce graphene is methane gas. One of the less popular choices is petroleum asphalt, notable for being inexpensive but more difficult to work with. Although methane is the most popular carbon source, hydrogen is required during the preparation process to promote carbon deposition on the substrate. If the flow ratio of methane to hydrogen is not appropriate, it will cause undesirable results. 
During the growth of graphene, the role of methane is to provide a carbon source, while the role of hydrogen is to provide H atoms that corrode amorphous carbon and improve the quality of the graphene. But excessive H atoms can also corrode graphene. As a result, the integrity of the crystal lattice is destroyed, and the quality of the graphene deteriorates. Therefore, by optimizing the flow rate of methane and hydrogen gases in the growth process, the quality of graphene can be improved. Use of catalyst The use of a catalyst can alter the physical process of graphene production. Notable examples include iron nanoparticles, nickel foam, and gallium vapor. These catalysts can either be used in situ during graphene buildup, or situated at some distance from the deposition area. Some catalysts require another step to remove them from the sample material. The direct growth of high-quality, large single-crystalline domains of graphene on a dielectric substrate is of vital importance for applications in electronics and optoelectronics. Combining the advantages of both catalytic CVD and the ultra-flat dielectric substrate, gaseous catalyst-assisted CVD paves the way for synthesizing high-quality graphene for device applications while avoiding the transfer process. Physical conditions Physical conditions such as surrounding pressure, temperature, carrier gas, and chamber material play a large role in the production of graphene. Most systems use LPCVD with pressures ranging from 1 to 1500 Pa. However, some still use APCVD. Low pressures are used more commonly as they help prevent unwanted reactions and produce a more uniform deposition thickness on the substrate. The temperatures used range from 800 to 1050 °C. Higher temperatures translate to a higher reaction rate. Caution has to be exercised, as high temperatures pose greater safety risks in addition to greater energy costs. Carrier gas Hydrogen gas and inert gases such as argon are flowed into the system. These gases act as a carrier, enhancing the surface reaction and improving the reaction rate, thereby increasing deposition of graphene onto the substrate. Chamber material Standard quartz tubing and chambers are used in CVD of graphene. Quartz is chosen because it has a very high melting point and is chemically inert. In other words, quartz does not interfere with any physical or chemical reactions regardless of the conditions. Methods of analysis of results Raman spectroscopy, X-ray spectroscopy, transmission electron microscopy (TEM), and scanning electron microscopy (SEM) are used to examine and characterize the graphene samples. Raman spectroscopy is used to characterize and identify the graphene particles; X-ray spectroscopy is used to characterize chemical states; TEM is used to provide fine details regarding the internal composition of graphene; SEM is used to examine the surface and topography. Sometimes, atomic force microscopy (AFM) is used to measure local properties such as friction and magnetism. The cold wall CVD technique can be used to study the underlying surface science involved in graphene nucleation and growth, as it allows unprecedented control of process parameters like gas flow rates, temperature, and pressure, as demonstrated in a recent study. The study was carried out in a home-built vertical cold wall system utilizing resistive heating by passing direct current through the substrate. 
It provided conclusive insight into a typical surface-mediated nucleation and growth mechanism involved in two-dimensional materials grown using catalytic CVD under conditions sought out in the semiconductor industry. Graphene nanoribbon In spite of graphene's exciting electronic and thermal properties, it is unsuitable as a transistor for future digital devices, due to the absence of a bandgap between the conduction and valence bands. This makes it impossible to switch between on and off states with respect to electron flow. Scaling things down, graphene nanoribbons of less than 10 nm in width do exhibit electronic bandgaps and are therefore potential candidates for digital devices. Precise control over their dimensions, and hence electronic properties, however, represents a challenging goal, and the ribbons typically possess rough edges that are detrimental to their performance. Diamond CVD can be used to produce a synthetic diamond by creating the circumstances necessary for carbon atoms in a gas to settle on a substrate in crystalline form. CVD of diamonds has received much attention in the materials sciences because it allows many new applications that had previously been considered too expensive. CVD diamond growth typically occurs under low pressure (1–27 kPa; 0.145–3.926 psi; 7.5–203 Torr) and involves feeding varying amounts of gases into a chamber, energizing them and providing conditions for diamond growth on the substrate. The gases always include a carbon source, and typically include hydrogen as well, though the amounts used vary greatly depending on the type of diamond being grown. Energy sources include hot filament, microwave power, and arc discharges, among others. The energy source is intended to generate a plasma in which the gases are broken down and more complex chemistries occur. The actual chemical process for diamond growth is still under study and is complicated by the very wide variety of diamond growth processes used. Using CVD, films of diamond can be grown over large areas of substrate with control over the properties of the diamond produced. In the past, when high pressure high temperature (HPHT) techniques were used to produce a diamond, the result was typically very small free-standing diamonds of varying sizes. With CVD diamond, growth areas of greater than fifteen centimeters (six inches) in diameter have been achieved, and much larger areas are likely to be successfully coated with diamond in the future. Improving this process is key to enabling several important applications. The growth of diamond directly on a substrate allows the addition of many of diamond's important qualities to other materials. Since diamond has the highest thermal conductivity of any bulk material, layering diamond onto high heat-producing electronics (such as optics and transistors) allows the diamond to be used as a heat sink. Diamond films are being grown on valve rings, cutting tools, and other objects that benefit from diamond's hardness and exceedingly low wear rate. In each case the diamond growth must be carefully done to achieve the necessary adhesion onto the substrate. Diamond's very high scratch resistance and thermal conductivity, combined with a lower coefficient of thermal expansion than Pyrex glass, a coefficient of friction close to that of Teflon (polytetrafluoroethylene) and strong lipophilicity would make it a nearly ideal non-stick coating for cookware if large substrate areas could be coated economically. 
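The diamond growth conditions described above (chamber pressure, gas mixture, substrate temperature, and the choice of energy source) are in practice tracked together as a process recipe. The following sketch is purely illustrative (the class, its field names, and the example values are assumptions rather than the interface of any real reactor) and shows one way such a recipe could be recorded for reproducibility:

```python
# Illustrative sketch only: recording a diamond-CVD process recipe.
# The class, field names, and values are assumptions, not a real reactor API.
from dataclasses import dataclass

@dataclass
class CvdRecipe:
    pressure_kpa: float        # chamber pressure; diamond CVD is typically ~1-27 kPa
    substrate_temp_c: float    # substrate temperature in degrees Celsius
    gas_flows_sccm: dict       # gas name -> flow rate (standard cm^3 per minute)
    energy_source: str         # e.g. "hot filament", "microwave plasma", "arc discharge"

    def carbon_source_fraction(self) -> float:
        """Fraction of the total gas flow supplied by methane, the carbon source here."""
        total = sum(self.gas_flows_sccm.values())
        return self.gas_flows_sccm.get("CH4", 0.0) / total if total else 0.0

# A hypothetical microwave-plasma recipe using a dilute methane/hydrogen mixture:
recipe = CvdRecipe(
    pressure_kpa=13.0,
    substrate_temp_c=850.0,
    gas_flows_sccm={"CH4": 4.0, "H2": 96.0},
    energy_source="microwave plasma",
)
print(f"carbon source fraction: {recipe.carbon_source_fraction():.1%}")  # 4.0%
```

Keeping such parameters explicit makes it straightforward to vary them one at a time, which is the kind of control over the deposited material that the following paragraph describes.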
CVD growth allows one to control the properties of the diamond produced. In the area of diamond growth, the word "diamond" is used as a description of any material primarily made up of sp³-bonded carbon, and there are many different types of diamond included in this. By regulating the processing parameters—especially the gases introduced, but also including the pressure the system is operated under, the temperature of the diamond, and the method of generating plasma—many different materials that can be considered diamond can be made. Single-crystal diamond can be made containing various dopants. Polycrystalline diamond consisting of grain sizes from several nanometers to several micrometers can be grown. Some polycrystalline diamond grains are surrounded by thin, non-diamond carbon, while others are not. These different factors affect the diamond's hardness, smoothness, conductivity, optical properties and more. Chalcogenides Commercially, mercury cadmium telluride is of continuing interest for detection of infrared radiation. Consisting of an alloy of CdTe and HgTe, this material can be prepared from the dimethyl derivatives of the respective elements. See also Apollo Diamond Bubbler cylinder Carbonyl metallurgy Electrostatic spray assisted vapour deposition Element Six Ion plating Metalorganic vapour phase epitaxy Virtual metrology Lisa McElwee-White List of metal-organic chemical vapour deposition precursors List of synthetic diamond manufacturers References Further reading Okada K. (2007). "Plasma-enhanced chemical vapor deposition of nanocrystalline diamond". Sci. Technol. Adv. Mater. 8, 624. Liu T., Raabe D. and Zaefferer S. (2008). "A 3D tomographic EBSD analysis of a CVD diamond thin film". Sci. Technol. Adv. Mater. 9 (2008) 035013. Wild, Christoph (2008). "CVD Diamond Properties and Useful Formula". CVD Diamond Booklet. Hess, Dennis W. (1988). Chemical vapor deposition of dielectric and metal films. In Electronic Materials and Processing: Proceedings of the First Electronic Materials and Processing Congress held in conjunction with the 1988 World Materials Congress Chicago, Illinois, USA, 24–30 September 1988, Edited by Prabjit Singh (Sponsored by the Electronic Materials and Processing Division of ASM International). Chemical processes Coatings Glass coating and surface modification Industrial processes Plasma processing Semiconductor device fabrication Synthetic diamond Thin film deposition Vacuum Forming processes
2,767
6,112
https://en.wikipedia.org/wiki/CN%20Tower
CN Tower
The CN Tower () is a concrete communications and observation tower in downtown Toronto, Ontario, Canada. Built on the former Railway Lands, it was completed in 1976. Its name "CN" referred to Canadian National, the railway company that built the tower. Following the railway's decision to divest non-core freight railway assets prior to the company's privatization in 1995, it transferred the tower to the Canada Lands Company, a federal Crown corporation responsible for real estate development. The CN Tower held the record for the world's tallest free-standing structure for 32 years, from 1975 until 2007, when it was surpassed by the Burj Khalifa, and was the world's tallest tower until 2009 when it was surpassed by the Canton Tower. It is currently the tenth-tallest free-standing structure in the world and remains the tallest free-standing structure on land in the Western Hemisphere. In 1995, the CN Tower was declared one of the modern Seven Wonders of the World by the American Society of Civil Engineers. It also belongs to the World Federation of Great Towers. It is a signature icon of Toronto's skyline and attracts more than two million international visitors annually. It houses several observation decks, a revolving restaurant at some , and an entertainment complex. History The original concept of the CN Tower was first conceived in 1968 when the Canadian National Railway wanted to build a large television and radio communication platform to serve the Toronto area, and to demonstrate the strength of Canadian industry and CN in particular. These plans evolved over the next few years, and the project became official in 1972. The tower would have been part of Metro Centre (see CityPlace), a large development south of Front Street on the Railway Lands, a large railway switching yard that was being made redundant after the opening of the MacMillan Yard north of the city in 1965 (then known as Toronto Yard). Key project team members were NCK Engineering as structural engineer; John Andrews Architects; Webb, Zerafa, Menkes, Housden Architects; Foundation Building Construction; and Canron (Eastern Structural Division). As Toronto grew rapidly during the late 1960s and early 1970s, multiple skyscrapers were constructed in the downtown core, most notably First Canadian Place. The reflective nature of the new buildings reduced the quality of broadcast signals, requiring new, higher antennas that were at least tall. The radio wire is estimated to be 102 m long in 44 pieces, the heaviest of which weighs around 8 t. At the time, most data communications took place over point-to-point microwave links, whose dish antennae covered the roofs of large buildings. As each new skyscraper was added to the downtown, former line-of-sight links were no longer possible. CN intended to rent "hub" space for microwave links, visible from almost any building in the Toronto area. The original plan for the tower envisioned a tripod consisting of three independent cylindrical "pillars" linked at various heights by structural bridges. Had it been built, this design would have been considerably shorter, with the metal antenna located roughly where the concrete section between the main level and the SkyPod lies today. As the design effort continued, it evolved into the current design with a single continuous hexagonal core to the SkyPod, with three support legs blended into the hexagon below the main level, forming a large Y-shape structure at the ground level. 
The idea for the main level in its current form evolved around this time, but the Space Deck (later renamed SkyPod) was not part of the plans until later. One engineer in particular felt that visitors would feel the higher observation deck would be worth paying extra for, and the costs in terms of construction were not prohibitive. Also around this time, it was realized that the tower could become the world's tallest free-standing structure to improve signal quality and attract tourists, and plans were changed to incorporate subtle modifications throughout the structure to this end. Construction The CN Tower was built by Canada Cement Company (also known as the Cement Foundation Company of Canada at the time), a subsidiary of Sweden's Skanska, a global project-development and construction group. Construction began on February 6, 1973, with massive excavations at the tower base for the foundation. By the time the foundation was complete, of earth and shale were removed to a depth of in the centre, and a base incorporating of concrete with of rebar and of steel cable had been built to a thickness of . This portion of the construction was fairly rapid, with only four months needed between the start and the foundation being ready for construction on top. To create the main support pillar, workers constructed a hydraulically raised slipform at the base. This was a fairly unprecedented engineering feat on its own, consisting of a large metal platform that raised itself on jacks at about per day as the concrete below set. Concrete was poured Monday to Friday (not continuously) by a small team of people until February 22, 1974, at which time it had already become the tallest structure in Canada, surpassing the recently built Inco Superstack in Sudbury, built using similar methods. The tower contains of concrete, all of which was mixed on-site in order to ensure batch consistency. Through the pour, the vertical accuracy of the tower was maintained by comparing the slip form's location to massive plumb bobs hanging from it, observed by small telescopes from the ground. Over the height of the tower, it varies from true vertical accuracy by only . In August 1974, construction of the main level commenced. Using 45 hydraulic jacks attached to cables strung from a temporary steel crown anchored to the top of the tower, twelve giant steel and wooden bracket forms were slowly raised, ultimately taking about a week to crawl up to their final position. These forms were used to create the brackets that support the main level, as well as a base for the construction of the main level itself. The Space Deck (currently named SkyPod) was built of concrete poured into a wooden frame attached to rebar at the lower level deck, and then reinforced with a large steel compression band around the outside. While still under construction, the CN Tower officially became the world's tallest free-standing structure on March 31, 1975. The antenna was originally to be raised by crane as well, but, during construction, the Sikorsky S-64 Skycrane helicopter became available when the United States Army sold one to civilian operators. The helicopter, named "Olga", was first used to remove the crane, and then flew the antenna up in 36 sections. The flights of the antenna pieces were a minor tourist attraction of their own, and the schedule was printed in local newspapers. Use of the helicopter saved months of construction time, with this phase taking only three and a half weeks instead of the planned six months. 
The tower was topped-off on April 2, 1975, after 26 months of construction, officially capturing the height record from Moscow's Ostankino Tower, and bringing the total mass to . Two years into the construction, plans for Metro Centre were scrapped, leaving the tower isolated on the Railway Lands in what was then a largely abandoned light-industrial space. This caused serious problems for tourists to access the tower. Ned Baldwin, project architect with John Andrews, wrote at the time that "All of the logic which dictated the design of the lower accommodation has been upset," and that "Under such ludicrous circumstances Canadian National would hardly have chosen this location to build." Phases of construction Opening The CN Tower opened on June 26, 1976. The construction costs of approximately ($ in dollars) were repaid in fifteen years. From the mid-1970s to the mid-1980s, the CN Tower was practically the only development along Front Street West; it was still possible to see Lake Ontario from the foot of the CN Tower due to the expansive parking lots and lack of development in the area at the time. As the area around the tower was developed, particularly with the completion of the Metro Toronto Convention Centre (north building) in 1984 and SkyDome in 1989 (renamed Rogers Centre in 2005), the former Railway Lands were redeveloped and the tower became the centre of a newly developing entertainment area. Access was greatly improved with the construction of the SkyWalk in 1989, which connected the tower and SkyDome to the nearby Union Station railway and subway station, and, in turn, to the city's Path underground pedestrian system. By the mid-1990s, it was the centre of a thriving tourist district. The entire area continues to be an area of intense building, notably a boom in condominium construction in the first quarter of the 21st century, as well as the 2013 opening of the Ripley's Aquarium by the base of the tower. Early years When the CN Tower opened in 1976, there were three public observation points: the SkyPod (then known as the Space Deck) that stands at , the Indoor Observation Level (later named Indoor Lookout Level) at , and the Outdoor Observation Terrace (at the same level as the Glass Floor) at . One floor above the Indoor Observation Level was the Top of Toronto Restaurant, which completed a revolution once every 72 minutes. The tower would garner worldwide media attention when stuntman Dar Robinson jumped off of the CN Tower on two occasions in 1979 and 1980. The first was for a scene from the movie Highpoint, in which Robinson received ($ in dollars) for the stunt. The second was for a personal documentary. Both stunts used a wire decelerator attached to his back as a safety measure. On June 26, 1986, the tenth anniversary of the tower's opening, high-rise firefighting and rescue advocate Dan Goodwin, in a sponsored publicity event, used his hands and feet to climb the outside of the tower, a feat he performed twice on the same day. Following both ascents, he used multiple rappels to descend to the ground. The 1990s and 2000s A glass floor at an elevation of was installed in 1994. Canadian National Railway sold the tower to Canada Lands Company prior to privatizing the company in 1995, when it divested all operations not directly related to its core freight shipping businesses. 
The tower's name and wordmark were adjusted to remove the CN railways logo, and the tower was renamed Canada's National Tower (from Canadian National Tower), though the tower is commonly called the CN Tower. Further changes were made from 1997 to January 2004: TrizecHahn Corporation managed the tower and instituted several expansion projects including an entertainment expansion, the 1997 addition of two new elevators (to a total of six) and the consequential relocation of the staircase from the north side leg to inside the core of the building, a conversion that also added nine stairs to the climb. TrizecHahn also owned the Willis Tower (Sears Tower at the time) in Chicago at approximately the same time. In 2007, light-emitting diode (LED) lights replaced the incandescent lights that lit the CN Tower at night, the reason cited being that LED lights are more cost- and energy-efficient than incandescent lights. The colour of the LED lights can change, compared to the constant white colour of the incandescent lights. On September 12, 2007, Burj Khalifa, then under construction and known as Burj Dubai, surpassed the CN Tower as the world's tallest free-standing structure. In 2008, glass panels were installed in one of the CN Tower elevators, setting a world record (346 m) for the highest glass-floor-panelled elevator. 2010s: EdgeWalk On August 1, 2011, the CN Tower opened the EdgeWalk, an amusement in which thrill-seekers can walk on and around the roof of the main pod of the tower at , which is directly above the 360 Restaurant. It is the world's highest full-circle, hands-free walk. Visitors are tethered to an overhead rail system and walk around the edge of the CN Tower's main pod above the 360 Restaurant on a metal floor. The attraction is closed throughout the winter and during periods of electrical storms and high winds. One of the notable guests who visited EdgeWalk was Canadian comedian Rick Mercer, whose visit was featured in the first episode of the ninth season of his CBC Television news satire show, Rick Mercer Report. There, he was accompanied by Canadian pop singer Jann Arden. The episode first aired on April 10, 2013. Pan Am Games The tower and surrounding areas were prominent in the 2015 Pan American Games production. In the opening ceremony a pre-recorded segment featured track-and-field athlete Bruny Surin passing the flame to sprinter Donovan Bailey on the EdgeWalk and parachuting into Rogers Centre. A fireworks display off the tower concluded both the opening and closing ceremonies. Canada 150 On July 1, 2017, as part of the nationwide celebrations for Canada 150, which celebrated the 150th anniversary of Canadian Confederation, fireworks were once again shot from the tower in a five-minute display coordinated with the tower lights and music broadcast on a local radio station. Closures The CN Tower was closed during the G20 summit on June 26–27, 2010, for security reasons, given its proximity to the Metro Toronto Convention Centre and ongoing citywide protests and riots. The CN Tower was closed from 2020 to 2021 due to COVID-19 pandemic restrictions in Ontario. The CN Tower was closed on December 16, 2021, due to glass falling off in heavy winds. Structure The CN Tower consists of several substructures. The main portion of the tower is a hollow concrete hexagonal pillar containing the stairwells and power and plumbing connections. The tower's six elevators are located in the three inverted angles created by the Tower's hexagonal shape (two elevators per angle). 
Each of the three elevator shafts is lined with glass, allowing for views of the city as the glass-windowed elevators make their way through the tower. The stairwell was originally located in one of these angles (the one facing north), but was moved into the central hollow of the tower; the tower's new fifth and sixth elevators were placed in the hexagonal angle that once contained the stairwell. On top of the main concrete portion of the tower is a tall metal broadcast antenna, carrying television and radio signals. There are three visitor areas: the Glass Floor and Outdoor Observation Terrace, which are both located at an elevation of , the Indoor Lookout Level (formerly known as "Indoor Observation Level") located at , and the higher SkyPod (formerly known as "Space Deck") at , just below the metal antenna. The hexagonal shape is visible between the two highest areas; however, below the main deck, three large supporting legs give the tower the appearance of a large tripod. The main deck level has seven storeys, some of which are open to the public. Below the public areas—at —is a large white donut-shaped radome containing the structure's UHF transmitters. The glass floor and outdoor observation deck are at . The glass floor has an area of and can withstand a pressure of . The floor's thermal glass units are thick, consisting of a pane of laminated glass, airspace and a pane of laminated glass. In 2008, one elevator was upgraded to add a glass floor panel, believed to have the highest vertical rise of any elevator equipped with this feature. The Horizons Cafe and the lookout level are at . The 360 Restaurant, a revolving restaurant that completes a full rotation once every 72 minutes, is at . When the tower first opened, it also featured a disco named Sparkles (at the Indoor Observation Level), billed as the highest disco and dance floor in the world. The SkyPod was once the highest public observation deck in the world until it was surpassed by the Shanghai World Financial Center in 2008. A metal staircase reaches the main deck level after 1,776 steps, and the SkyPod above after 2,579 steps; it is the tallest metal staircase on Earth. These stairs are intended for emergency use only except for charity stair-climb events two times during the year. The average climber takes approximately 30 minutes to climb to the base of the radome, but the fastest climb on record is 7 minutes and 52 seconds in 1989 by Brendan Keenoy, an Ontario Provincial Police officer. In 2002, Canadian Olympian and Paralympic champion Jeff Adams climbed the stairs of the tower in a specially designed wheelchair. The stairs were originally on one of the three sides of the tower (facing north), with a glass view, but these were later replaced with the third elevator pair and the stairs were moved to the inside of the core. Top climbs on the new, windowless stairwell used since around 2003 have generally been over ten minutes. Architects WZMH Architects John Hamilton Andrews Webb Zerafa Menkes Housden with the help of Edward R. Baldwin Falling ice danger A freezing rain storm on March 2, 2007, resulted in a layer of ice several centimetres thick forming on the side of the tower and other downtown buildings. The sun thawed the ice, then winds of up to blew some of it away from the structure. There were fears that cars and windows of nearby buildings would be smashed by large chunks of ice. In response, police closed some streets surrounding the tower. 
During morning rush hour on March 5 of the same year, police expanded the area of closed streets to include the Gardiner Expressway away from the tower as increased winds blew the ice farther, as far north as King Street West, away, where a taxicab window was shattered. Subsequently, on March 6, 2007, the Gardiner Expressway reopened after winds abated. On April 16, 2018, falling ice from the CN Tower punctured the roof of the nearby Rogers Centre stadium, causing the Toronto Blue Jays to postpone the game that day to the following day as a doubleheader; this was the third doubleheader held at the Rogers Centre. On April 20 of the same year, the CN Tower reopened. Safety features In August 2000, a fire broke out at the Ostankino Tower in Moscow killing three people and causing extensive damage. The fire was blamed on poor maintenance and outdated equipment. The failure of the fire-suppression systems and the lack of proper equipment for firefighters allowed the fire to destroy most of the interior and spark fears the tower might even collapse. The Ostankino Tower was completed nine years before the CN Tower and is only shorter. The parallels between the towers led to some concern that the CN Tower could be at risk of a similar tragedy. However, Canadian officials subsequently stated that it is "highly unlikely" that a similar disaster could occur at the CN Tower, as it has important safeguards that were not present in the Ostankino Tower. Specifically, officials cited: the fireproof building materials used in the tower's construction, frequent and stringent safety inspections, an extensive sprinkler system, a 24-hour emergency monitoring operation, two 68,160-litre (15,000-imperial gallon; 18,006-US gallon) water reservoirs at the top, which are automatically replenished, a fire hose at the base of the structure capable of sending to any location in the tower, a ban on natural gas appliances anywhere in the tower (including the restaurant in the main pod), an elevator that can be used during a fire as it runs up the outside of the building and can be powered by three emergency generators at the base of the structure (unlike the elevator at the Ostankino Tower, which malfunctioned). Officials also noted that the CN Tower has an excellent safety record, although there was an electrical fire in the antennae on August 16, 2017 — the tower's first fire. Moreover, other supertall structures built between 1967 and 1976 — such as the Willis Tower (formerly the Sears Tower), the World Trade Center (until its destruction on September 11, 2001), the Fernsehturm Berlin, the Aon Center, 875 North Michigan Avenue (formerly the John Hancock Center), and First Canadian Place — also have excellent safety records, which suggests that the Ostankino Tower accident was a rare safety failure, and that the likelihood of similar events occurring at other supertall structures is extremely low. Lighting The CN Tower was originally lit at night with incandescent lights, which were removed in 1997 because they were inefficient and expensive to repair. In June 2007, the tower was outfitted with 1,330 super-bright LED lights inside the elevator shafts, shooting over the main pod and upward to the top of the tower's mast to light the tower from dusk until 2 a.m. The official opening ceremony took place on June 28, 2007, before the Canada Day holiday weekend. The tower changes its lighting scheme on holidays and to commemorate major events. 
After the 95th Grey Cup in Toronto, the tower was lit in green and white to represent the colours of the Grey Cup champion Saskatchewan Roughriders. From sundown on August 27, 2011, to sunrise the following day, the tower was lit in orange, the official colour of the New Democratic Party (NDP), to commemorate the death of federal NDP leader and leader of the official opposition Jack Layton. When former South African president Nelson Mandela died, the tower was lit in the colours of the South African flag. When former federal finance minister under Stephen Harper's Conservatives Jim Flaherty died, the tower was lit in green to reflect his Irish Canadian heritage. On the night of the attacks on Paris on November 13, 2015, the tower displayed the colours of the French flag. On June 8, 2021, the tower displayed the colours of the Toronto Maple Leafs' archrivals Montreal Canadiens after they advanced to the semifinals of 2021 Stanley Cup playoffs. The CN Tower was lit in the colours of the Ukrainian flag during the beginning of the 2022 Russian invasion of Ukraine in late February 2022. Programmed remotely from a desktop computer with a wireless network interface controller in Burlington, Ontario, the LEDs use less energy to light than the previous incandescent lights (10% less energy than the dimly lit version and 60% less than the brightly lit version). The estimated cost to use the LEDs is $1,000 per month. During the spring and autumn bird migration seasons, the lights are turned off to comply with the voluntary Fatal Light Awareness Program, which "encourages buildings to dim unnecessary exterior lighting to mitigate bird mortality during spring and summer migration." Height comparisons The CN Tower is the tallest freestanding structure in the Western Hemisphere. As of 2013, there were two other freestanding structures in the Western Hemisphere exceeding in height: the Willis Tower in Chicago, which stands at when measured to its pinnacle, and One World Trade Center in New York City, which has a pinnacle height of , or approximately shorter than the CN Tower. Due to the symbolism of the number 1776 (the year of the signing of the United States Declaration of Independence), the height of One World Trade Center is unlikely to be increased. The proposed Chicago Spire was expected to exceed the height of the CN Tower, but its construction was halted early due to financial difficulties amid the Great Recession, and was eventually cancelled in 2010. Height distinction debate "World's Tallest Tower" title Guinness World Records has called the CN Tower "the world's tallest self-supporting tower" and "the world's tallest free-standing tower". Although Guinness did list this description of the CN Tower under the heading "tallest building" at least once, it has also listed it under "tallest tower", omitting it from its list of "tallest buildings." In 1996, Guinness changed the tower's classification to "World's Tallest Building and Freestanding Structure". Emporis and the Council on Tall Buildings and Urban Habitat both listed the CN Tower as the world's tallest free-standing structure on land, and specifically state that the CN Tower is not a true building, thereby awarding the title of world's tallest building to Taipei 101, which is shorter than the CN Tower. The issue of what was tallest became moot when Burj Khalifa, then under construction, exceeded the height of the CN Tower in 2007 (see below). 
Although the CN Tower contains a restaurant, a gift shop and multiple observation levels, it does not have floors continuously from the ground, and therefore it is not considered a building by the Council on Tall Buildings and Urban Habitat (CTBUH) or Emporis. CTBUH defines a building as "a structure that is designed for residential, business, or manufacturing purposes. An essential characteristic of a building is that it has floors." The CN Tower and other similar structures—such as the Ostankino Tower in Moscow, Russia; the Oriental Pearl Tower in Shanghai, China; The Strat in Las Vegas, Nevada, United States; and the Eiffel Tower in Paris, France—are categorized as "towers", which are free-standing structures that may have observation decks and a few other habitable levels, but do not have floors from the ground up. The CN Tower was the tallest tower by this definition until 2010 (see below). Taller than the CN Tower are numerous radio masts and towers, which are held in place by guy-wires, the tallest being the KVLY-TV mast in Blanchard, North Dakota, in the United States at tall, leading to a distinction between these and "free-standing" structures. Additionally, the Petronius Platform stands above its base on the bottom of the Gulf of Mexico, but only the top of this oil and natural gas platform are above water, and the structure is thus partially supported by its buoyancy. Like the CN Tower, none of these taller structures are commonly considered buildings. On September 12, 2007, Burj Khalifa, which is a hotel, residential and commercial building in Dubai, United Arab Emirates (formerly known as Burj Dubai before opening), passed the CN Tower's 553.33-m height. The CN Tower held the record of tallest freestanding structure on land for over 30 years. After Burj Khalifa had been formally recognized by the Guinness World Records as the world's tallest freestanding structure, Guinness re-certified CN Tower as the world's tallest freestanding tower. The tower definition used by Guinness was defined by the Council on Tall Buildings and Urban Habitat as 'a building in which less than 50% of the construction is usable floor space'. Guinness World Records editor-in-chief Craig Glenday announced that Burj Khalifa was not classified as a tower because it has too much usable floor space to be considered to be a tower. CN Tower still held world records for highest above ground wine cellar (in 360 Restaurant) at 351 m, highest above ground restaurant at 346 m (Horizons Restaurant), and tallest free-standing concrete tower during Guinness's recertification. The CN Tower was surpassed in 2009 by the Canton Tower in Guangzhou, China, which stands at tall, as the world's tallest tower; which in turn was surpassed by the Tokyo Skytree in 2011, which currently is the tallest tower at in height. The CN Tower, as of 2022, stands as the tenth-tallest free-standing structure on land, remains the tallest free-standing structure in the Western Hemisphere, and is the third-tallest tower. Height records Since its construction, the tower has gained the following world height records: Use The CN Tower has been and continues to be used as a communications tower for a number of different media and by numerous companies. Television broadcasters Radio There is no AM broadcasting from the CN Tower. The FM transmitters are situated in a metal broadcast antenna, on top of the main concrete portion of the tower at an elevation above . 
Communications Bell Canada Toronto Transit Commission Amateur radio repeaters "2-Tango" (VHF) and "4-Tango" (440/70 cm UHF)—owned and operated by the Toronto FM Communications Society, under callsign VE3TWR In popular culture The CN Tower has been featured in numerous films, television shows, music recording covers, and video games. The tower also has its own official mascot, which resembles the tower itself. Highpoint is a Canadian 1982 action film starring Richard Harris, Christopher Plummer and Beverly D'Angelo. It features a shot of stuntman Dar Robinson jumping off of the CN Tower in 1979. Views is a 2016 studio album released on April 29, 2016 by Canadian rapper Drake. The cover artwork features Drake sitting atop the CN Tower in Toronto. Drake appeared significantly larger than life-size on the cover, and the CN Tower's Twitter account later confirmed it to be photo edited. See also Architecture of Toronto List of tallest buildings in Toronto List of tallest structures in Canada List of tallest freestanding structures List of tallest towers List of tallest buildings and structures List of tallest structures References External links CBC Archives – CN Tower opens to the public. (Multimedia) Official CN Tower Website Edgewalk The Design, Engineering and Construction of the CN Tower – 1972 through to 1976 A visual construction history of the CN Tower – at 40th year anniversaries How the CN Tower was Built - Art Of Engineering (YouTube documentary) Towers completed in 1976 Buildings and structures in Toronto Towers with revolving restaurants Canadian National Railway facilities Communication towers in Canada Observation towers in Canada Towers in Ontario Modernist architecture in Canada Stairways Transmitter sites in Canada Tourist attractions in Toronto WZMH Architects buildings Railway Lands Articles containing video clips 1976 establishments in Ontario
2,768
6,136
https://en.wikipedia.org/wiki/Carbon%20monoxide
Carbon monoxide
Carbon monoxide (chemical formula CO) is a poisonous, flammable gas that is colorless, odorless, tasteless, and slightly less dense than air. Carbon monoxide consists of one carbon atom and one oxygen atom connected by a triple bond. It is the simplest carbon oxide. In coordination complexes the carbon monoxide ligand is called carbonyl. It is a key ingredient in many processes in industrial chemistry. The most common source of carbon monoxide is the partial combustion of carbon-containing compounds, when insufficient oxygen or heat is present to produce carbon dioxide. There are also numerous environmental and biological sources that generate and emit a significant amount of carbon monoxide. It is important in the production of many compounds, including drugs, fragrances, and fuels. Upon emission into the atmosphere, carbon monoxide affects several processes that contribute to climate change. Carbon monoxide has important biological roles across phylogenetic kingdoms. It is produced by many organisms, including humans. In mammalian physiology, carbon monoxide is a classical example of hormesis, where low concentrations serve as an endogenous neurotransmitter (gasotransmitter) and high concentrations are toxic, resulting in carbon monoxide poisoning. It is isoelectronic with the cyanide anion CN−. History Prehistory Humans have maintained a complex relationship with carbon monoxide since first learning to control fire circa 800,000 BC. Early humans probably discovered the toxicity of carbon monoxide poisoning upon introducing fire into their dwellings. The early development of metallurgy and smelting technologies emerging circa 6,000 BC through the Bronze Age likewise plagued humankind with carbon monoxide exposure. Apart from the toxicity of carbon monoxide, indigenous Native Americans may have experienced the neuroactive properties of carbon monoxide through shamanistic fireside rituals. Ancient history Early civilizations developed mythological tales to explain the origin of fire, such as Prometheus from Greek mythology who shared fire with humans. Aristotle (384–322 BC) first recorded that burning coals produced toxic fumes. Greek physician Galen (129–199 AD) speculated that there was a change in the composition of the air that caused harm when inhaled, and many others of the era developed a basis of knowledge about carbon monoxide in the context of coal fume toxicity. Cleopatra may have died from carbon monoxide poisoning. Pre-Industrial Revolution Georg Ernst Stahl mentioned carbonarii halitus in 1697 in reference to toxic vapors thought to be carbon monoxide. Friedrich Hoffmann conducted the first modern scientific investigation into carbon monoxide poisoning from coal in 1716. Herman Boerhaave conducted the first scientific experiments on the effect of carbon monoxide (coal fumes) on animals in the 1730s. Joseph Priestley is considered to have first synthesized carbon monoxide in 1772. Carl Wilhelm Scheele similarly isolated carbon monoxide from charcoal in 1773 and thought it could be the carbonic entity making fumes toxic. Torbern Bergman isolated carbon monoxide from oxalic acid in 1775. Later, in 1776, the French chemist de Lassone produced CO by heating zinc oxide with coke, but mistakenly concluded that the gaseous product was hydrogen, as it burned with a blue flame. In the presence of oxygen, including atmospheric concentrations, carbon monoxide burns with a blue flame, producing carbon dioxide. Antoine Lavoisier conducted similar inconclusive experiments to Lassone in 1777. 
The gas was identified as a compound containing carbon and oxygen by William Cruickshank in 1800. In 1793, Thomas Beddoes and James Watt recognized that carbon monoxide (as hydrocarbonate) brightens venous blood. Watt suggested coal fumes could act as an antidote to the oxygen in blood, and in 1796 Beddoes and Watt likewise suggested that hydrocarbonate has a greater affinity for animal fiber than oxygen. In 1854, Adrien Chenot similarly suggested that carbon monoxide removes the oxygen from blood and is then oxidized by the body to carbon dioxide. The mechanism for carbon monoxide poisoning is widely credited to Claude Bernard, whose memoirs, begun in 1846 and published in 1857, phrased it as "prevents arterial blood from becoming venous". Felix Hoppe-Seyler independently published similar conclusions in the following year. Advent of industrial chemistry Carbon monoxide gained recognition as an invaluable reagent in the 1900s. Three industrial processes illustrate its evolution in industry. In the Fischer–Tropsch process, coal and related carbon-rich feedstocks are converted into liquid fuels via the intermediacy of CO. Originally developed as part of the German war effort to compensate for their lack of domestic petroleum, this technology continues today. Also in Germany, a mixture of CO and hydrogen was found to combine with olefins to give aldehydes. This process, called hydroformylation, is used to produce many large scale chemicals such as surfactants as well as specialty compounds that are popular fragrances and drugs. For example, CO is used in the production of vitamin A. In a third major process, attributed to researchers at Monsanto, CO combines with methanol to give acetic acid. Most acetic acid is produced by the Cativa process. Hydroformylation and the acetic acid syntheses are two of myriad carbonylation processes. Physical and chemical properties Carbon monoxide is the simplest oxocarbon and is isoelectronic with other triply-bonded diatomic species possessing 10 valence electrons, including the cyanide anion, the nitrosonium cation, boron monofluoride and molecular nitrogen. It has a molar mass of 28.0, which, according to the ideal gas law, makes it slightly less dense than air, whose average molar mass is 28.8. The carbon and oxygen are connected by a triple bond that consists of a net two pi bonds and one sigma bond. The bond length between the carbon atom and the oxygen atom is 112.8 pm. This bond length is consistent with a triple bond, as in molecular nitrogen (N2), which has a similar bond length (109.76 pm) and nearly the same molecular mass. Carbon–oxygen double bonds are significantly longer, 120.8 pm in formaldehyde, for example. The boiling point (82 K) and melting point (68 K) are very similar to those of N2 (77 K and 63 K, respectively). The bond-dissociation energy of 1072 kJ/mol is stronger than that of N2 (942 kJ/mol) and represents the strongest chemical bond known. The ground electronic state of carbon monoxide is a singlet state since there are no unpaired electrons. Table of thermal and physical properties of carbon monoxide (CO) at atmospheric pressure: Bonding and dipole moment Carbon and oxygen together have a total of 10 electrons in the valence shell. Following the octet rule for both carbon and oxygen, the two atoms form a triple bond, with six shared electrons in three bonding molecular orbitals, rather than the usual double bond found in organic carbonyl compounds. 
Since four of the shared electrons come from the oxygen atom and only two from carbon, one bonding orbital is occupied by two electrons from oxygen, forming a dative or dipolar bond. This causes a C←O polarization of the molecule, with a small negative charge on carbon and a small positive charge on oxygen. The other two bonding orbitals are each occupied by one electron from carbon and one from oxygen, forming (polar) covalent bonds with a reverse C→O polarization since oxygen is more electronegative than carbon. In the free carbon monoxide molecule, a net negative charge δ– remains at the carbon end and the molecule has a small dipole moment of 0.122 D. The molecule is therefore asymmetric: oxygen has more electron density than carbon, and carbon carries a slight negative charge while oxygen is slightly positive. By contrast, the isoelectronic dinitrogen molecule has no dipole moment. Carbon monoxide has a computed fractional bond order of 2.6, indicating that the "third" bond is important but constitutes somewhat less than a full bond. Thus, in valence bond terms, –C≡O+ is the most important structure, while :C=O is non-octet, but has a neutral formal charge on each atom and represents the second most important resonance contributor. Because of the lone pair and divalence of carbon in this resonance structure, carbon monoxide is often considered to be an extraordinarily stabilized carbene. Isocyanides are compounds in which the O is replaced by an NR (R = alkyl or aryl) group and have a similar bonding scheme. If carbon monoxide acts as a ligand, the polarity of the dipole may reverse with a net negative charge on the oxygen end, depending on the structure of the coordination complex. See also the section "Coordination chemistry" below. Bond polarity and oxidation state Theoretical and experimental studies show that, despite the greater electronegativity of oxygen, the dipole moment points from the more-negative carbon end to the more-positive oxygen end. The three bonds are in fact polar covalent bonds that are strongly polarized. The calculated polarization toward the oxygen atom is 71% for the σ-bond and 77% for both π-bonds. The oxidation state of carbon in carbon monoxide is +2 in each of these structures. It is calculated by counting all the bonding electrons as belonging to the more electronegative oxygen. Only the two non-bonding electrons on carbon are assigned to carbon. In this count, carbon then has only two valence electrons in the molecule compared to four in the free atom. Occurrence Carbon monoxide occurs in various natural and artificial environments. Photochemical degradation of plant matter, for example, generates an estimated 60 billion kilograms per year. Typical concentrations in parts per million are as follows: Atmospheric presence Carbon monoxide (CO) is present in small amounts (about 80 ppb) in the Earth's atmosphere. Much of it comes from chemical reactions with organic compounds emitted by human activities and from natural origins, via photochemical reactions in the troposphere that generate about 5 × 10¹² kilograms per year. Other natural sources of CO include volcanoes, forest and bushfires, and other miscellaneous forms of combustion, such as the burning of fossil fuels. Small amounts are also emitted from the ocean, and from geological activity because carbon monoxide occurs dissolved in molten volcanic rock at high pressures in the Earth's mantle. 
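To put the figures quoted above in rough perspective (an order-of-magnitude sketch only; both estimates carry large uncertainties and vary between inventories), the tropospheric photochemical source of about 5 × 10¹² kg per year is on the order of a hundred times the estimated 60 billion kg (6 × 10¹⁰ kg) per year from photochemical degradation of plant matter: (6 × 10¹⁰) / (5 × 10¹²) ≈ 0.01, or roughly 1%. 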
Because natural sources of carbon monoxide vary from year to year, it is difficult to accurately measure natural emissions of the gas. Carbon monoxide has an indirect effect on radiative forcing by elevating concentrations of direct greenhouse gases, including methane and tropospheric ozone. CO can react chemically with other atmospheric constituents (primarily the hydroxyl radical, •OH) that would otherwise destroy methane. Through natural processes in the atmosphere, it is oxidized to carbon dioxide and ozone. Carbon monoxide is short-lived in the atmosphere (with an average lifetime of about one to two months) and spatially variable in concentration. Because this lifetime is nonetheless long compared with that of many other pollutants, carbon monoxide in the mid-troposphere is also used as a tracer for pollutant plumes. Pollution Urban pollution Carbon monoxide is a temporary atmospheric pollutant in some urban areas, chiefly from the exhaust of internal combustion engines (including vehicles, portable and back-up generators, lawnmowers, power washers, etc.), but also from incomplete combustion of various other fuels (including wood, coal, charcoal, oil, paraffin, propane, natural gas, and trash). Large CO pollution events can be observed from space over cities. Role in ground level ozone formation Carbon monoxide is, along with aldehydes, part of the series of cycles of chemical reactions that form photochemical smog. It reacts with hydroxyl radical (•OH) to produce a radical intermediate •HOCO, which rapidly transfers its radical hydrogen to O2 to form the peroxy radical (HO2•) and carbon dioxide (CO2). The peroxy radical subsequently reacts with nitric oxide (NO) to form nitrogen dioxide (NO2) and a hydroxyl radical. NO2 gives O(³P) via photolysis, thereby forming O3 following reaction with O2. Since hydroxyl radical is formed during the formation of NO2, the balance of the sequence of chemical reactions starting with carbon monoxide and leading to the formation of ozone is: CO + 2 O2 + hν → CO2 + O3 (where hν refers to the photon of light absorbed by the NO2 molecule in the sequence) A step-by-step bookkeeping of this cycle is sketched at the end of this section. Although the creation of NO2 is the critical step leading to low level ozone formation, it also increases this ozone in another, somewhat mutually exclusive way, by reducing the quantity of NO that is available to react with ozone. Indoor pollution In closed environments, the concentration of carbon monoxide can rise to lethal levels. On average, 170 people in the United States die every year from carbon monoxide produced by non-automotive consumer products. These products include malfunctioning fuel-burning appliances such as furnaces, ranges, water heaters, and gas and kerosene room heaters; engine-powered equipment such as portable generators (and cars left running in attached garages); fireplaces; and charcoal that is burned in homes and other enclosed areas. Many deaths have occurred during power outages due to severe weather such as Hurricane Katrina and the 2021 Texas power crisis. Mining Miners refer to carbon monoxide as "whitedamp" or the "silent killer". It can be found in confined areas of poor ventilation in both surface mines and underground mines. The most common sources of carbon monoxide in mining operations are the internal combustion engine and explosives; however, in coal mines, carbon monoxide can also be found due to the low-temperature oxidation of coal. The idiom "canary in the coal mine" originated from the use of canaries as an early warning of the presence of carbon monoxide. 
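As a bookkeeping sketch of the smog cycle described under "Role in ground level ozone formation" above (the individual steps are those already quoted; the third-body collision partner needed to stabilize O3, and all rate considerations, are omitted): CO + •OH → •HOCO; •HOCO + O2 → HO2• + CO2; HO2• + NO → •OH + NO2; NO2 + hν → NO + O(³P); O(³P) + O2 → O3. Summing the steps and cancelling the species that are consumed and then regenerated (•OH, NO, NO2, O(³P)) together with the short-lived intermediates (•HOCO, HO2•) recovers the net reaction quoted above, CO + 2 O2 + hν → CO2 + O3. 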
Astronomy Beyond Earth, carbon monoxide is the second-most common diatomic molecule in the interstellar medium, after molecular hydrogen. Because of its asymmetry, this polar molecule produces far brighter spectral lines than the hydrogen molecule, making CO much easier to detect. Interstellar CO was first detected with radio telescopes in 1970. It is now the most commonly used tracer of molecular gas in general in the interstellar medium of galaxies, as molecular hydrogen can only be detected using ultraviolet light, which requires space telescopes. Carbon monoxide observations provide much of the information about the molecular clouds in which most stars form. Beta Pictoris, the second brightest star in the constellation Pictor, shows an excess of infrared emission compared to normal stars of its type, which is caused by large quantities of dust and gas (including carbon monoxide) near the star. In the atmosphere of Venus carbon monoxide occurs as a result of the photodissociation of carbon dioxide by electromagnetic radiation of wavelengths shorter than 169 nm. It has also been identified spectroscopically on the surface of Neptune's moon Triton. Solid carbon monoxide is a component of comets. The volatile or "ice" component of Halley's Comet is about 15% CO. At room temperature and at atmospheric pressure, carbon monoxide is actually only metastable (see Boudouard reaction) and the same is true at low temperatures where CO and CO2 are solid, but nevertheless it can exist for billions of years in comets. There is very little CO in the atmosphere of Pluto, which seems to have been formed from comets. This may be because there is (or was) liquid water inside Pluto. Carbon monoxide can react with water to form carbon dioxide and hydrogen: CO + H2O → CO2 + H2 This is called the water-gas shift reaction when occurring in the gas phase, but it can also take place (very slowly) in an aqueous solution. If the hydrogen partial pressure is high enough (for instance in an underground sea), formic acid will be formed: CO + H2O → HCOOH These reactions can take place in a few million years even at temperatures such as those found on Pluto. Chemistry Carbon monoxide has a wide range of functions across all disciplines of chemistry. The four premier categories of reactivity involve metal-carbonyl catalysis, radical chemistry, and cation and anion chemistries. Coordination chemistry Most metals form coordination complexes containing covalently attached carbon monoxide. Only metals in lower oxidation states will complex with carbon monoxide ligands. This is because there must be sufficient electron density to facilitate back-donation from the metal dxz orbital to the π* molecular orbital of CO. The lone pair on the carbon atom in CO also donates electron density to the dx²−y² orbital on the metal to form a sigma bond. This electron donation is also exhibited with the cis effect, or the labilization of CO ligands in the cis position. Nickel carbonyl, for example, forms by the direct combination of carbon monoxide and nickel metal: Ni + 4 CO → Ni(CO)4 (1 bar, 55 °C) For this reason, nickel in any tubing or part must not come into prolonged contact with carbon monoxide. Nickel carbonyl decomposes readily back to Ni and CO upon contact with hot surfaces, and this method is used for the industrial purification of nickel in the Mond process. In nickel carbonyl and other carbonyls, the electron pair on the carbon interacts with the metal; the carbon monoxide donates the electron pair to the metal. 
In these situations, carbon monoxide is called the carbonyl ligand. One of the most important metal carbonyls is iron pentacarbonyl, Fe(CO)5. Many metal–CO complexes are prepared by decarbonylation of organic solvents, not from CO. For instance, iridium trichloride and triphenylphosphine react in boiling 2-methoxyethanol or DMF to afford IrCl(CO)(PPh3)2. Metal carbonyls in coordination chemistry are usually studied using infrared spectroscopy. Organic and main group chemistry In the presence of strong acids and water, carbon monoxide reacts with alkenes to form carboxylic acids in a process known as the Koch–Haaf reaction. In the Gattermann–Koch reaction, arenes are converted to benzaldehyde derivatives in the presence of AlCl3 and HCl. Organolithium compounds (e.g. butyl lithium) react with carbon monoxide, but these reactions have little scientific use. Although CO reacts with carbocations and carbanions, it is relatively nonreactive toward organic compounds without the intervention of metal catalysts. With main group reagents, CO undergoes several noteworthy reactions. Chlorination of CO is the industrial route to the important compound phosgene. With borane CO forms the adduct H3BCO, which is isoelectronic with the acetylium cation [H3CCO]+. CO reacts with sodium to give products resulting from C−C coupling such as sodium acetylenediolate (Na2C2O2). It reacts with molten potassium to give a mixture of an organometallic compound, potassium acetylenediolate (K2C2O2), potassium benzenehexolate (K6C6O6), and potassium rhodizonate (K2C6O6). The compounds cyclohexanehexone or triquinoyl (C6O6) and cyclopentanepentone or leuconic acid (C5O5), which so far have been obtained only in trace amounts, can be regarded as polymers of carbon monoxide. At pressures exceeding 5 GPa, carbon monoxide converts to polycarbonyl, a solid polymer that is metastable at atmospheric pressure but is explosive. Laboratory preparation Carbon monoxide is conveniently produced in the laboratory by the dehydration of formic acid or oxalic acid, for example with concentrated sulfuric acid; balanced equations for both routes are sketched below, after the production overview. Another method is heating an intimate mixture of powdered zinc metal and calcium carbonate, which releases CO and leaves behind zinc oxide and calcium oxide: Zn + CaCO3 → ZnO + CaO + CO Silver nitrate and iodoform also afford carbon monoxide: CHI3 + 3AgNO3 + H2O → 3HNO3 + CO + 3AgI Finally, metal oxalate salts release CO upon heating, leaving a carbonate as byproduct: M2C2O4 → M2CO3 + CO (where M is a monovalent metal) Production Thermal combustion is the most common source for carbon monoxide. Carbon monoxide is produced from the partial oxidation of carbon-containing compounds; it forms when there is not enough oxygen to produce carbon dioxide (CO2), such as when operating a stove or an internal combustion engine in an enclosed space. For example, during World War II, a gas mixture including carbon monoxide was used to keep motor vehicles running in parts of the world where gasoline and diesel fuel were scarce. External (with a few exceptions) charcoal or wood gas generators were fitted, and the mixture of atmospheric nitrogen, hydrogen, carbon monoxide, and small amounts of other gases produced by gasification was piped to a gas mixer. The gas mixture produced by this process is known as wood gas. A large quantity of CO byproduct is formed during the oxidative processes for the production of chemicals. For this reason, the process off-gases have to be purified. Many methods have been developed for carbon monoxide production. 
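As a sketch of the laboratory dehydration routes mentioned above (standard textbook stoichiometry; hot concentrated sulfuric acid is assumed as the dehydrating agent), formic acid simply loses water to give carbon monoxide, HCOOH → CO + H2O, while oxalic acid gives carbon monoxide together with carbon dioxide and water, H2C2O4 → CO + CO2 + H2O. 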
Industrial production A major industrial source of CO is producer gas, a mixture containing mostly carbon monoxide and nitrogen, formed by combustion of carbon in air at high temperature when there is an excess of carbon. In an oven, air is passed through a bed of coke. The initially produced CO2 equilibrates with the remaining hot carbon to give CO. The reaction of CO2 with carbon to give CO is described as the Boudouard reaction. Above 800 °C, CO is the predominant product: CO2 (g) + C (s) → 2 CO (g) (ΔHr = 170 kJ/mol) Another source is "water gas", a mixture of hydrogen and carbon monoxide produced via the endothermic reaction of steam and carbon: H2O (g) + C (s) → H2 (g) + CO (g) (ΔHr = 131 kJ/mol) Other similar "synthesis gases" can be obtained from natural gas and other fuels. Carbon monoxide can also be produced by high-temperature electrolysis of carbon dioxide with solid oxide electrolyzer cells. One method developed at DTU Energy uses a cerium oxide catalyst and does not have any issues of fouling of the catalyst. 2 CO2 → 2 CO + O2 Carbon monoxide is also a byproduct of the reduction of metal oxide ores with carbon, shown in a simplified form as follows: MO + C → M + CO Carbon monoxide is also produced by the direct oxidation of carbon in a limited supply of oxygen or air. 2 C + O2 → 2 CO Since CO is a gas, the reduction process can be driven by heating, exploiting the positive (favorable) entropy of reaction. The Ellingham diagram shows that CO formation is favored over CO2 at high temperatures. Use Chemical industry Carbon monoxide is an industrial gas that has many applications in bulk chemicals manufacturing. Large quantities of aldehydes are produced by the hydroformylation reaction of alkenes, carbon monoxide, and H2. Hydroformylation is coupled to the Shell higher olefin process to give precursors to detergents. Phosgene, useful for preparing isocyanates, polycarbonates, and polyurethanes, is produced by passing purified carbon monoxide and chlorine gas through a bed of porous activated carbon, which serves as a catalyst. World production of this compound was estimated to be 2.74 million tonnes in 1989. CO + Cl2 → COCl2 Methanol is produced by the hydrogenation of carbon monoxide. In a related reaction, the hydrogenation of carbon monoxide is coupled to C−C bond formation, as in the Fischer–Tropsch process where carbon monoxide is hydrogenated to liquid hydrocarbon fuels. This technology allows coal or biomass to be converted to diesel. In the Cativa process, carbon monoxide and methanol react in the presence of a homogeneous iridium catalyst and hydroiodic acid to give acetic acid. This process is responsible for most of the industrial production of acetic acid. Metallurgy Carbon monoxide is a strong reducing agent and has been used in pyrometallurgy to reduce metals from ores since ancient times. Carbon monoxide strips oxygen off metal oxides, reducing them to pure metal at high temperatures, forming carbon dioxide in the process. Carbon monoxide is not usually supplied as such, in the gaseous phase, to the reactor; rather, it is formed at high temperature in the presence of an oxygen-carrying ore and a carboniferous agent such as coke. The blast furnace process is a typical example of a process of reduction of metal from ore with carbon monoxide. Likewise, blast furnace gas collected at the top of the blast furnace still contains some 10% to 30% carbon monoxide, and is used as fuel in Cowper stoves and in Siemens-Martin furnaces for open-hearth steelmaking. 
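As a concrete illustration of the reduction chemistry described above (the conventional overall equation for iron ore reduction in a blast furnace; the intermediate stages via Fe3O4 and FeO are omitted in this sketch): Fe2O3 + 3 CO → 2 Fe + 3 CO2. 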
Lasers Carbon monoxide has also been used as a lasing medium in high-powered infrared lasers. Proposed use as fuel on Mars Carbon monoxide has been proposed for use as a fuel on Mars. Carbon monoxide/oxygen engines have been suggested for early surface transportation use as both carbon monoxide and oxygen can be straightforwardly produced from the carbon dioxide atmosphere of Mars by zirconia electrolysis, without using any Martian water resources to obtain hydrogen, which would be needed to make methane or any hydrogen-based fuel. Biological and physiological properties Physiology Carbon monoxide is a bioactive molecule which acts as a gaseous signaling molecule. It is naturally produced by many enzymatic and non-enzymatic pathways, the best understood of which is the catabolic action of heme oxygenase on the heme derived from hemoproteins such as hemoglobin. Following the first report, in 1993, that carbon monoxide is a normal neurotransmitter, carbon monoxide has received significant clinical attention as a biological regulator. Because of carbon monoxide's role in the body, abnormalities in its metabolism have been linked to a variety of diseases, including neurodegenerations, hypertension, heart failure, and pathological inflammation. In many tissues, carbon monoxide has anti-inflammatory and vasodilatory effects and encourages neovascular growth. In animal model studies, carbon monoxide reduced the severity of experimentally induced bacterial sepsis, pancreatitis, hepatic ischemia/reperfusion injury, colitis, osteoarthritis, lung injury, lung transplantation rejection, and neuropathic pain while promoting skin wound healing. Therefore, there is significant interest in the therapeutic potential of carbon monoxide as a pharmaceutical agent and a clinical standard of care. Medicine Studies involving carbon monoxide have been conducted in many laboratories throughout the world for its anti-inflammatory and cytoprotective properties. These properties have the potential to be used to prevent the development of a series of pathological conditions including ischemia reperfusion injury, transplant rejection, atherosclerosis, severe sepsis, severe malaria, or autoimmunity. Many pharmaceutical drug delivery initiatives have developed methods to safely administer carbon monoxide, and subsequent controlled clinical trials have evaluated the therapeutic effect of carbon monoxide. Microbiology Microbiota may also utilize carbon monoxide as a gasotransmitter. Carbon monoxide sensing is a signaling pathway facilitated by proteins such as CooA. The scope of the biological roles for carbon monoxide sensing is still unknown. The human microbiome produces, consumes, and responds to carbon monoxide. For example, in certain bacteria, carbon monoxide is produced via the reduction of carbon dioxide by the enzyme carbon monoxide dehydrogenase with favorable bioenergetics to power downstream cellular operations. In another example, carbon monoxide is a nutrient for methanogenic archaea which reduce it to methane using hydrogen. Carbon monoxide has certain antimicrobial properties which have been studied as a possible treatment for infectious diseases. Food science Carbon monoxide is used in modified atmosphere packaging systems in the US, mainly with fresh meat products such as beef, pork, and fish to keep them looking fresh. The benefit is two-fold: carbon monoxide protects against microbial spoilage and enhances the meat color for consumer appeal. 
The carbon monoxide combines with myoglobin to form carboxymyoglobin, a bright-cherry-red pigment. Carboxymyoglobin is more stable than the oxygenated form of myoglobin, oxymyoglobin, which can become oxidized to the brown pigment metmyoglobin. This stable red color can persist much longer than in normally packaged meat. Typical levels of carbon monoxide used in the facilities that use this process are between 0.4% and 0.5%. The technology was first given "generally recognized as safe" (GRAS) status by the U.S. Food and Drug Administration (FDA) in 2002 for use as a secondary packaging system, and does not require labeling. In 2004, the FDA approved CO as a primary packaging method, declaring that CO does not mask spoilage odor. The process is currently unauthorized in many other countries, including Japan, Singapore, and the European Union. Toxicity Carbon monoxide poisoning is the most common type of fatal air poisoning in many countries. The Centers for Disease Control and Prevention estimates that several thousand people go to hospital emergency rooms every year to be treated for carbon monoxide poisoning. According to the Florida Department of Health, "every year more than 500 Americans die from accidental exposure to carbon monoxide and thousands more across the U.S. require emergency medical care for non-fatal carbon monoxide poisoning." The American Association of Poison Control Centers (AAPCC) reported 15,769 cases of carbon monoxide poisoning resulting in 39 deaths in 2007. In 2005, the CPSC reported 94 generator-related carbon monoxide poisoning deaths. Carbon monoxide is colorless, odorless, and tasteless. As such, it is relatively undetectable. It readily combines with hemoglobin to produce carboxyhemoglobin, which potentially affects gas exchange; therefore, exposure can be highly toxic. Concentrations as low as 667 ppm may cause up to 50% of the body's hemoglobin to convert to carboxyhemoglobin. A level of 50% carboxyhemoglobin may result in seizure, coma, and fatality. In the United States, OSHA limits long-term workplace exposure levels to 50 ppm. In addition to affecting oxygen delivery, carbon monoxide also binds to other hemoproteins such as myoglobin and mitochondrial cytochrome oxidase, as well as to metallic and non-metallic cellular targets, to affect many cell operations. Weaponization In ancient history, Hannibal executed Roman prisoners with coal fumes during the Second Punic War. Carbon monoxide was used for genocide during the Holocaust at some extermination camps, most notably by gas vans in Chełmno, and in the Action T4 "euthanasia" program. See also Hydrocarbonate (gas) Smoker's paradox – hyperbaric treatment for CO poisoning research articles on CO poisoning References External links Global map of carbon monoxide distribution Explanation of the structure International Chemical Safety Card 0023 CDC NIOSH Pocket Guide to Chemical Hazards: Carbon monoxide—National Institute for Occupational Safety and Health (NIOSH), US Centers for Disease Control and Prevention (CDC) Carbon Monoxide—NIOSH Workplace Safety and Health Topic—CDC Carbon Monoxide Poisoning—Frequently Asked Questions—CDC External MSDS data sheet Carbon Monoxide Detector Placement Microscale Gas Chemistry Experiments with Carbon Monoxide Airborne pollutants Articles containing video clips Gaseous signaling molecules Industrial gases Oxocarbons Smog Toxicology Diatomic molecules
2,781
6,139
https://en.wikipedia.org/wiki/Christoph%20Ludwig%20Agricola
Christoph Ludwig Agricola
Christoph Ludwig Agricola (5 November 1665 – 8 August 1724) was a German landscape painter and etcher. He was born and died at Regensburg (Ratisbon). Life and career Christoph Ludwig Agricola was born on 5 November 1665 at Regensburg in Germany. He trained, as many painters of the period did, by studying nature. He spent a great part of his life in travel, visiting England, the Netherlands and France, and residing for a considerable period at Naples, where he may have been influenced by Nicolas Poussin. He also stayed for some years circa 1712 in Venice, where he painted many works for the patron Zaccaria Sagredo. He died in Regensburg in 1724. Work Although he primarily worked in gouache and oils, documentary sources reveal that he also produced a small number of etchings. He was a good draughtsman, used warm lighting and exhibited a warm, masterly brushstroke. His numerous landscapes, chiefly cabinet pictures, are remarkable for fidelity to nature, and especially for their skilful representation of varied phases of climate, especially nocturnal scenes and weather anomalies such as thunderstorms. In composition his style shows the influence of Nicolas Poussin and his work often displays the idealistic scenes associated with Poussin. In light and colour he imitates Claude Lorrain. His compositions include ruins of ancient buildings in the foreground, but his favourite figure for the foreground was men dressed in Oriental attire. He also produced a series of etchings of birds. His pictures can be found in Dresden, Braunschweig, Vienna, Florence, Naples and many other towns of both Germany and Italy. Legacy He probably tutored the artist, Johann Theile, and had an enormous influence on him. Art historians have also noted that the work of the landscape painter, Christian Johann Bendeler (1699–1728), was also influenced by Agricola. Gallery References Further reading 1665 births 1724 deaths 17th-century German painters 18th-century German painters 18th-century German male artists German male painters German landscape painters
2,783
6,140
https://en.wikipedia.org/wiki/Claudius
Claudius
Tiberius Claudius Caesar Augustus Germanicus (; ; 1 August 10 BC – 13 October AD 54) was the fourth Roman emperor, ruling from AD 41 to 54. A member of the Julio-Claudian dynasty, Claudius was born to Drusus and Antonia Minor at Lugdunum in Roman Gaul, where his father was stationed as a military legate. He was the first Roman emperor to be born outside Italy. Nonetheless, Claudius was an Italian of Sabine origins. As he had a limp and slight deafness due to sickness at a young age, he was ostracized by his family and was excluded from public office until his consulship (which was shared with his nephew, Caligula, in 37). Claudius's infirmity probably saved him from the fate of many other nobles during the purges throughout the reigns of Tiberius and Caligula, as potential enemies did not see him as a serious threat. His survival led to him being declared emperor by the Praetorian Guard after Caligula's assassination, at which point he was the last adult male of his family. Despite his lack of experience, Claudius was an able and efficient administrator. He expanded the imperial bureaucracy to include freedmen, and helped restore the empire's finances after the excesses of Caligula's reign. He was also an ambitious builder, constructing new roads, aqueducts, and canals across the Empire. During his reign the Empire started its successful conquest of Britain. Having a personal interest in law, he presided at public trials, and issued edicts daily. He was seen as vulnerable throughout his reign, particularly by elements of the nobility. Claudius was constantly forced to shore up his position, which resulted in the deaths of many senators. Those events damaged his reputation among the ancient writers, though more recent historians have revised that opinion. Many authors contend that he was murdered by his own wife, Agrippina the Younger. After his death at the age of 63, his grand-nephew and legally adopted step-son, Nero, succeeded him as emperor. Family and youth Early life Claudius was born on 1 August 10 BC at Lugdunum (modern Lyon, France). He had two older siblings, Germanicus and Livilla. His mother, Antonia Minor, may have had two other children who died young. Claudius's maternal grandparents were Mark Antony and Octavia Minor, Augustus's sister, and he was therefore the great-great-grandnephew of Gaius Julius Caesar. His paternal grandparents were Livia, Augustus's third wife, and Tiberius Claudius Nero. During his reign, Claudius revived the rumor that his father Nero Claudius Drusus was actually the illegitimate son of Augustus, to give the appearance that Augustus was Claudius's paternal grandfather. In 9 BC, Claudius's father Drusus died on campaign in Germania from a fall from a horse. Claudius was then raised by his mother, who never remarried. When his disability became evident, the relationship with his family turned sour. Antonia referred to him as a monster, and used him as a standard for stupidity. She seems to have passed her son off to his grandmother Livia for a number of years. Livia was a little kinder, but nevertheless sent Claudius short, angry letters of reproof. He was put under the care of a "former mule-driver" to keep him disciplined, under the logic that his condition was due to laziness and a lack of willpower. However, by the time he reached his teenage years, his symptoms apparently waned and his family began to take some notice of his scholarly interests. In AD 7, Livy was hired to tutor Claudius in history, with the assistance of Sulpicius Flavus. 
He spent a lot of his time with the latter, as well as the philosopher Athenodorus. Augustus, according to a letter, was surprised at the clarity of Claudius's oratory. Public life Claudius' work as a historian damaged his prospects for advancement in public life. According to Vincent Scramuzza and others, he began work on a history of the Civil Wars that was either too truthful or too critical of Octavian, then reigning as Caesar Augustus. In either case, it was far too early for such an account, and may have only served to remind Augustus that Claudius was Antony's descendant. His mother and grandmother quickly put a stop to it, and this may have convinced them that Claudius was not fit for public office, since he could not be trusted to toe the existing party line. When Claudius returned to the narrative later in life, he skipped over the wars of the Second Triumvirate altogether; but the damage was done, and his family pushed him into the background. When the Arch of Pavia was erected to honor the Imperial clan in AD 8, Claudius's name (now Tiberius Claudius Nero Germanicus after his elevation to pater familias of the Claudii Nerones on the adoption of his brother) was inscribed on the edge, past the deceased princes, Gaius and Lucius, and Germanicus's children. There is some speculation that the inscription was added by Claudius himself decades later, and that he originally did not appear at all. When Augustus died in AD 14, Claudius – then aged 23 – appealed to his uncle Tiberius to allow him to begin the cursus honorum. Tiberius, the new Emperor, responded by granting Claudius consular ornaments. Claudius requested office once more and was snubbed. Since the new emperor was no more generous than the old, Claudius gave up hope of public office and retired to a scholarly, private life. Despite the disdain of the Imperial family, it seems that from very early on the general public respected Claudius. At Augustus's death, the equites, or knights, chose Claudius to head their delegation. When his house burned down, the Senate demanded it be rebuilt at public expense. They also requested that Claudius be allowed to debate in the Senate. Tiberius turned down both motions, but the sentiment remained. During the period immediately after the death of Tiberius's son, Drusus, Claudius was pushed by some quarters as a potential heir. This again suggests the political nature of his exclusion from public life. However, as this was also the period during which the power and terror of the commander of the Praetorian Guard, Sejanus, was at its peak, Claudius chose to downplay this possibility. After the death of Tiberius, the new emperor Caligula (the son of Claudius's brother Germanicus) recognized Claudius to be of some use. He appointed Claudius his co-consul in 37 to emphasize the memory of Caligula's deceased father Germanicus. Despite this, Caligula tormented his uncle: playing practical jokes, charging him enormous sums of money, humiliating him before the Senate, and the like. According to Cassius Dio, Claudius became sickly and thin by the end of Caligula's reign, most likely due to stress. A possible surviving portrait of Claudius from this period may support this. Assassination of Caligula (AD 41) On 24 January 41, Caligula was assassinated in a conspiracy involving Cassius Chaerea – a military tribune in the Praetorian Guard – and several senators. 
There is no evidence that Claudius had a direct hand in the assassination, although it has been argued that he knew about the plot – particularly since he left the scene of the crime shortly before his nephew was murdered. However, after the deaths of Caligula's wife and daughter, it became apparent that Cassius intended to go beyond the terms of the conspiracy and wipe out the Imperial family. In the chaos following the murder, Claudius witnessed the German guard cut down several uninvolved noblemen, including many of his friends. He fled to the palace to hide. According to tradition, a Praetorian named Gratus found him hiding behind a curtain and suddenly declared him princeps. A section of the guard may have planned in advance to seek out Claudius, perhaps with his approval. They reassured him that they were not one of the battalions looking for revenge. He was spirited away to the Praetorian camp and put under their protection. The Senate met and debated a change of government, but this devolved into an argument over which of them would be the new princeps. When they heard of the Praetorians' claim, they demanded that Claudius be delivered to them for approval, but he refused, sensing the danger that would come with complying. Some historians, particularly Josephus, claim that Claudius was directed in his actions by the Judaean King Herod Agrippa. However, an earlier version of events by the same ancient author downplays Agrippa's role so it remains uncertain. Eventually the Senate was forced to give in. In return, Claudius granted a general amnesty, although he executed a few junior officers involved in the conspiracy. The actual assassins, including Cassius Chaerea and Julius Lupus, the murderer of Caligula's wife and daughter, were put to death to ensure Claudius's own safety and as a future deterrent. As Emperor Claudius took several steps to legitimize his rule against potential usurpers, most of them emphasizing his place within the Julio-Claudian family. He adopted the name "Caesar" as a cognomen, as the name still carried great weight with the populace. To do so, he dropped the cognomen "Nero", which he had adopted as pater familias of the Claudii Nerones when his brother Germanicus was adopted out. As Pharaoh of Egypt, Claudius adopted the royal titulary Tiberios Klaudios, Autokrator Heqaheqau Meryasetptah, Kanakht Djediakhshuemakhet ("Tiberius Claudius, Emperor and ruler of rulers, beloved of Isis and Ptah, the strong bull of the stable moon on the horizon"). While Claudius had never been formally adopted either by Augustus or his successors, he was nevertheless the grandson of Augustus's sister Octavia, and so he felt that he had the right of family. He also adopted the name "Augustus" as the two previous emperors had done at their accessions. He kept the honorific "Germanicus" to display the connection with his heroic brother. He deified his paternal grandmother Livia to highlight her position as wife of the divine Augustus. Claudius frequently used the term "filius Drusi" (son of Drusus) in his titles, to remind the people of his legendary father and lay claim to his reputation. Since Claudius was the first emperor proclaimed on the initiative of the Praetorian Guard instead of the Senate, his repute suffered at the hands of commentators (such as Seneca). Moreover, he was the first emperor who resorted to bribery as a means to secure army loyalty and rewarded the soldiers of the Praetorian Guard that had elevated him with 15,000 sesterces. 
Tiberius and Augustus had both left gifts to the army and guard in their wills, and upon Caligula's death the same would have been expected, even if no will existed. Claudius remained grateful to the guard, however, issuing coins with tributes to the Praetorians in the early part of his reign. Pliny the Elder noted, according to the 1938 Loeb Classical Library translation by Harris Rackham, "... many people do not allow any gems in a signet-ring, and seal with the gold itself; this was a fashion invented when Claudius Cæsar was emperor." Claudius restored the status of the peaceful Imperial Roman provinces of Macedonia and Achaea as senatorial provinces. Expansion of the Empire Under Claudius, the Empire underwent its first major expansion since the reign of Augustus. The provinces of Thrace, Noricum, Lycia, and Judea were annexed (or put under direct rule) under various circumstances during his term. The annexation of Mauretania, begun under Caligula, was completed after the defeat of rebel forces, and the official division of the former client kingdom into two Imperial provinces. The most far-reaching conquest was that of Britannia. In 43, Claudius sent Aulus Plautius with four legions to Britain (Britannia) after an appeal from an ousted tribal ally. Britain was an attractive target for Rome because of its material wealth – mines and slaves – as well as being a haven for Gallic rebels. Claudius himself traveled to the island after the completion of initial offensives, bringing with him reinforcements and elephants. The Roman colonia of Colonia Claudia Victricensis was established as the provincial capital of the newly established province of Britannia at Camulodunum, where a large temple was dedicated in his honour. He left after 16 days, but remained in the provinces for some time. The Senate granted him a triumph for his efforts. Only members of the Imperial family were allowed such honours, but Claudius subsequently lifted this restriction for some of his conquering generals. He was granted the honorific "Britannicus" but only accepted it on behalf of his son, never using the title himself. When the British general Caractacus was captured in 50, Claudius granted him clemency. Caractacus lived out his days on land provided by the Roman state, an unusual end for an enemy commander. Claudius conducted a census in 48 that found 5,984,072 (adult male) Roman citizens (women, children, slaves, and free adult males without Roman citizenship were not counted), an increase of around a million since the census conducted at Augustus's death. He had helped increase this number through the foundation of Roman colonies that were granted blanket citizenship. These colonies were often made out of existing communities, especially those with elites who could rally the populace to the Roman cause. Several colonies were placed in new provinces or on the border of the Empire to secure Roman holdings as quickly as possible. Judicial and legislative affairs Claudius personally judged many of the legal cases tried during his reign. Ancient historians have many complaints about this, stating that his judgments were variable and sometimes did not follow the law. He was also easily swayed. Nevertheless, Claudius paid detailed attention to the operation of the judicial system. He extended the summer court session, as well as the winter term, by shortening the traditional breaks. 
Claudius also made a law requiring plaintiffs to remain in the city while their cases were pending, as defendants had previously been required to do. These measures had the effect of clearing out the docket. The minimum age for jurors was also raised to 25 to ensure a more experienced jury pool. Claudius also settled disputes in the provinces. He freed the island of Rhodes from Roman rule for their good faith and exempted Ilium (Troy) from taxes. Early in his reign, the Greeks and Jews of Alexandria sent him two embassies at once after riots broke out between the two communities. This resulted in the famous "Letter to the Alexandrians", which reaffirmed Jewish rights in the city but also forbade them to move in more families en masse. According to Josephus, he then reaffirmed the rights and freedoms of all the Jews in the Empire. One of Claudius's investigators discovered that many old Roman citizens based in the city of Tridentum (modern Trento) were not in fact citizens. The Emperor issued a declaration, contained in the Tabula clesiana, that they would be considered to hold citizenship from then on, since to strip them of their status would cause major problems. However, in individual cases, Claudius punished false assumption of citizenship harshly, making it a capital offense. Similarly, any freedmen found to be laying false claim to membership of the Roman equestrian order were sold back into slavery. Numerous edicts were issued throughout Claudius's reign. These were on a number of topics, everything from medical advice to moral judgments. A famous medical example is one promoting yew juice as a cure for snakebite. Suetonius wrote that he is even said to have thought of an edict allowing public flatulence for good health. One of the more famous edicts concerned the status of sick slaves. Masters had been abandoning ailing slaves at the temple of Aesculapius on Tiber Island to die instead of providing them with medical assistance and care, and then reclaiming them if they lived. Claudius ruled that slaves who were thus abandoned and recovered after such treatment would be free. Furthermore, masters who chose to kill slaves rather than take care of them were liable to be charged with murder. Public works Claudius embarked on many public works throughout his reign, both in the capital and in the provinces. He built two aqueducts, the Aqua Claudia, begun by Caligula, and the Aqua Anio Novus. These entered the city in 52 and met at the Porta Maggiore. He also restored a third, the Aqua Virgo. He paid special attention to transportation. Throughout Italy and the provinces he built roads and canals. Among these was a large canal leading from the Rhine to the sea, as well as a road from Italy to Germany – both begun by his father, Drusus. Closer to Rome, he built a navigable canal on the Tiber, leading to Portus, his new port just north of Ostia. This port was constructed in a semicircle with two moles and a lighthouse at its mouth. The construction also had the effect of reducing flooding in Rome. The port at Ostia was part of Claudius's solution to the constant grain shortages that occurred in winter, after the Roman shipping season. The other part of his solution was to insure the ships of grain merchants who were willing to risk travelling to Egypt in the off-season. He also granted their sailors special privileges, including citizenship and exemption from the Lex Papia Poppaea, a law that regulated marriage. 
In addition, he repealed the taxes that Caligula had instituted on food, and further reduced taxes on communities suffering drought or famine. The last part of Claudius's plan was to increase the amount of arable land in Italy. This was to be achieved by draining the Fucine lake, which would have the added benefit of making the nearby river navigable year-round. A tunnel was dug through the lake bed, but the plan was a failure. The tunnel was crooked and not large enough to carry the water, which caused it to back up when opened. The resultant flood washed out a large gladiatorial exhibition held to commemorate the opening, causing Claudius to run for his life along with the other spectators. The draining of the lake continued to present a problem well into the Middle Ages. It was finally achieved by Prince Torlonia in the 19th century, producing a large area of new arable land. He expanded the Claudian tunnel to three times its original size. Senate Due to the circumstances of his accession, Claudius took great pains to please the Senate. During regular sessions, the Emperor sat among the Senate body, speaking in turn. When introducing a law, he sat on a bench between the consuls in his position as holder of the power of Tribune (the Emperor could not officially serve as a Tribune of the Plebs as he was a patrician, but it was a power taken by previous rulers). He refused to accept all his predecessors' titles (including Imperator) at the beginning of his reign, preferring to earn them in due course. He allowed the Senate to issue its own bronze coinage for the first time since Augustus. He also put the Imperial provinces of Macedonia and Achaea back under Senate control. Claudius set about remodeling the Senate into a more efficient, representative body. He chided the senators about their reluctance to debate bills introduced by himself, as noted in the fragments of a surviving speech. In 47, he assumed the office of censor with Lucius Vitellius, an office which had been allowed to lapse for some time. He struck the names of many senators and equites who no longer met qualifications, but showed respect by allowing them to resign in advance. At the same time, he sought to admit eligible men from the provinces. The Lyon Tablet preserves his speech on the admittance of Gallic senators, in which he addresses the Senate with reverence but also with criticism for their disdain of these men. He even jokes about how the Senate had admitted members from beyond Gallia Narbonensis (Lyons, France), i.e. himself. He also increased the number of patricians by adding new families to the dwindling number of noble lines. Here he followed the precedent of Lucius Junius Brutus and Julius Caesar. Nevertheless, many in the Senate remained hostile to Claudius, and many plots were made on his life. This hostility carried over into the historical accounts. As a result, Claudius reduced the Senate's power for the sake of efficiency. The administration of Ostia was turned over to an Imperial procurator after construction of the port. Administration of many of the empire's financial concerns was turned over to Imperial appointees and freedmen. This led to further resentment and suggestions that these same freedmen were ruling the Emperor. Plots and coup attempts Several coup attempts were made during Claudius's reign, resulting in the deaths of many senators. Appius Silanus was executed early in Claudius's reign under questionable circumstances. 
Shortly after, a large rebellion was undertaken by the Senator Vinicianus and Scribonianus, the governor of Dalmatia, and gained quite a few senatorial supporters. It ultimately failed because of the reluctance of Scribonianus's troops, which led to the suicide of the main conspirators. Many other senators tried different conspiracies and were condemned. Claudius's son-in-law Pompeius Magnus was executed for his part in a conspiracy with his father Crassus Frugi. Another plot involved the consulars Lusius Saturninus, Cornelius Lupus, and Pompeius Pedo. In 46, Asinius Gallus, the grandson of Asinius Pollio, and Titus Statilius Taurus Corvinus were exiled for a plot hatched with several of Claudius's own freedmen. Valerius Asiaticus was executed without public trial for unknown reasons. The ancient sources say the charge was adultery, and that Claudius was tricked into issuing the punishment. However, Claudius singles out Asiaticus for special damnation in his speech on the Gauls, which dates over a year later, suggesting that the charge must have been much more serious. Asiaticus had been a claimant to the throne in the chaos following Caligula's death and a co-consul with the Titus Statilius Taurus Corvinus mentioned above. Most of these conspiracies took place before Claudius's term as Censor, and may have induced him to review the Senatorial rolls. The conspiracy of Gaius Silius in the year after his Censorship, 48, is detailed in Book 11 of Tacitus's Annals. This section of Tacitus's history narrates the alleged conspiracy of Claudius's third wife, Messalina. Suetonius states that a total of 35 senators and 300 knights were executed for offenses during Claudius's reign. Needless to say, the responses to these conspiracies could not have helped Senate–emperor relations. Secretariat and centralization of powers Claudius was hardly the first emperor to use freedmen to help with the day-to-day running of the Empire. He was, however, forced to increase their role as the powers of the princeps became more centralized and the burden larger. This was partly due to the ongoing hostility of the Senate, as mentioned above, but also due to his respect for the senators. Claudius did not want free-born magistrates to have to serve under him, as if they were not peers. The secretariat was divided into bureaus, with each being placed under the leadership of one freedman. Narcissus was the secretary of correspondence. Pallas became the secretary of the treasury. Callistus became secretary of justice. There was a fourth bureau for miscellaneous issues, which was put under Polybius until his execution for treason. The freedmen could also officially speak for the Emperor, as when Narcissus addressed the troops in Claudius's stead before the conquest of Britain. Since these were important positions, the senators were aghast at their being placed in the hands of former slaves and "well-known eunuchs". If freedmen had total control of money, letters, and law, it seemed it would not be hard for them to manipulate the Emperor. This is exactly the accusation put forth by the ancient sources. However, these same sources admit that the freedmen were loyal to Claudius. He was similarly appreciative of them and gave them due credit for policies where he had used their advice. However, if they showed treasonous inclinations, the Emperor did punish them with just force, as in the case of Polybius and Pallas's brother, Felix. 
There is no evidence that the character of Claudius's policies and edicts changed with the rise and fall of the various freedmen, suggesting that he was firmly in control throughout. Regardless of the extent of their political power, the freedmen did manage to amass wealth through their positions. Pliny the Elder notes that several of them were richer than Crassus, the richest man of the Republican era. Religious reforms Claudius, as the author of a treatise on Augustus's religious reforms, felt himself in a good position to institute some of his own. He had strong opinions about the proper form for state religion. He refused the request of Alexandrian Greeks to dedicate a temple to his divinity, saying that only gods may choose new gods. He restored lost days to festivals and got rid of many extraneous celebrations added by Caligula. He re-instituted old observances and archaic language. Claudius was concerned with the spread of eastern mysteries within the city and searched for more Roman replacements. He emphasized the Eleusinian Mysteries which had been practiced by so many during the Republic. He expelled foreign astrologers, and at the same time rehabilitated the old Roman soothsayers (known as haruspices) as a replacement. He was especially hard on Druidism, because of its incompatibility with the Roman state religion and its proselytizing activities. Public games and entertainments According to Suetonius, Claudius was extraordinarily fond of games. He is said to have risen with the crowd after gladiatorial matches and given unrestrained praise to the fighters. Claudius also presided over many new and original events. Soon after coming into power, Claudius instituted games to be held in honour of his father on the latter's birthday. Annual games were also held in honour of his accession, and took place at the Praetorian camp where Claudius had first been proclaimed Emperor. Claudius organised a performance of the Secular Games, marking the 800th anniversary of the founding of Rome. Augustus had performed the same games less than a century prior. Augustus's excuse was that the interval for the games was 110 years, not 100, but his date actually did not qualify under either reasoning. Claudius also presented naval battles to mark the attempted draining of the Fucine Lake, as well as many other public games and shows. At Ostia, in front of a crowd of spectators, Claudius fought an orca which was trapped in the harbour. The event was witnessed by Pliny the Elder. Claudius also restored and adorned many public venues in Rome. At the Circus Maximus, the turning posts and starting stalls were replaced in marble and embellished, and an embankment was probably added to prevent flooding of the track. Claudius also reinforced or extended the seating rules that reserved front seating at the Circus for senators. He rebuilt Pompey's Theatre after it had been destroyed by fire, organising special fights at the re-dedication which he observed from a special platform in the orchestra box. Marriages and personal life Suetonius and the other ancient authors accused Claudius of being dominated by women and wives, and of being a womanizer. Claudius married four times, after two failed betrothals. The first betrothal was to his distant cousin Aemilia Lepida, but was broken for political reasons. The second was to Livia Medullina Camilla, which ended with Medullina's sudden death on their wedding day. Plautia Urgulanilla Plautia Urgulanilla was the granddaughter of Livia's confidant Urgulania. 
During their marriage she gave birth to a son, Claudius Drusus. Drusus died of asphyxiation in his early teens, shortly after becoming engaged to Junilla, the daughter of Sejanus. Claudius later divorced Urgulanilla for adultery and on suspicion of murdering her sister-in-law Apronia. When Urgulanilla gave birth after the divorce, Claudius repudiated the baby girl, Claudia, as the father was allegedly one of his own freedmen. This action made him later the target of criticism by his enemies. Aelia Paetina Soon after (possibly in 28), Claudius married Aelia Paetina, a relative of Sejanus, if not Sejanus's adoptive sister. During their marriage, Claudius and Paetina had a daughter, Claudia Antonia. He later divorced her after the marriage became a political liability, although Leon (1948) suggests it may have been due to emotional and mental abuse by Paetina. Valeria Messalina Some years after divorcing Aelia Paetina, in 38 or early 39, Claudius married Valeria Messalina, who was his first cousin once removed (Claudius's grandmother, Octavia the Younger, was Valeria's great-grandmother on both her mother's and father's sides) and closely allied with Caligula's circle. Shortly thereafter, she gave birth to a daughter, Claudia Octavia. A son, first named Tiberius Claudius Germanicus, and later known as Britannicus, was born just after Claudius's accession. This marriage ended in tragedy. The ancient historians allege that Messalina was a nymphomaniac who was regularly unfaithful to Claudius (Tacitus states she went so far as to compete with a prostitute to see who could have more sexual partners in a night) and manipulated his policies to amass wealth. In 48, Messalina married her lover Gaius Silius in a public ceremony while Claudius was at Ostia. Sources disagree as to whether or not she divorced the Emperor first, and whether the intention was to usurp the throne. Under Roman law, the spouse needed to be informed that he or she had been divorced before a new marriage could take place; the sources state that Claudius was in total ignorance until after the marriage. Scramuzza, in his biography, suggests that Silius may have convinced Messalina that Claudius was doomed, and the union was her only hope of retaining rank and protecting her children. The historian Tacitus suggests that Claudius's ongoing term as Censor may have prevented him from noticing the affair before it reached such a critical point. Whatever the case, the result was the execution of Silius, Messalina, and most of her circle. Agrippina the Younger Claudius did marry once more. The ancient sources relate that his freedmen put forward three candidates: Caligula's third wife Lollia Paulina, Claudius's divorced second wife Aelia Paetina, and Claudius's niece Agrippina the Younger. According to Suetonius, Agrippina won out through her feminine wiles. She gradually seized power from Claudius and successfully conspired to eliminate his son's rivals, opening the way for her own son to become emperor. The truth is probably more political. The attempted coup d'état by Silius and Messalina had probably made Claudius realize the weakness of his position as a member of the Claudian but not the Julian family. This weakness was compounded by the fact that he did not yet have an obvious adult heir, Britannicus being just a boy. Agrippina was one of the few remaining descendants of Augustus, and her son Lucius Domitius Ahenobarbus (the future Emperor Nero) was one of the last males of the Imperial family. 
Coup attempts could rally around the pair and Agrippina was already showing such ambition. It has been suggested that the Senate may have pushed for the marriage, to end the feud between the Julian and Claudian branches. This feud dated back to Agrippina's mother's actions against Tiberius after the death of her husband Germanicus (Claudius's brother), actions which Tiberius had gladly punished. In any case, Claudius accepted Agrippina and later adopted the newly mature Nero as his son. Nero was married to Claudius's daughter Octavia, made joint heir with the underage Britannicus, and promoted; Augustus had similarly named his grandson Postumus Agrippa and his stepson Tiberius as joint heirs, and Tiberius had named Caligula joint heir with his grandson Tiberius Gemellus. Adoption of adults or near adults was an old tradition in Rome, when a suitable natural adult heir was unavailable as was the case during Britannicus's minority. Claudius may have previously looked to adopt one of his sons-in-law to protect his own reign. Faustus Cornelius Sulla Felix, who was married to Claudius's daughter Claudia Antonia, was only descended from Octavia and Antony on one side – not close enough to the Imperial family to prevent doubts (although that did not stop others from making him the object of a coup attempt against Nero a few years later). Besides which, he was the half-brother of Valeria Messalina and at this time those wounds were still fresh. Nero was more popular with the general public as the grandson of Germanicus and the direct descendant of Augustus. Affliction and personality The historian Suetonius describes the physical manifestations of Claudius's condition in relatively good detail. His knees were weak and gave way under him and his head shook. He stammered and his speech was confused. He slobbered and his nose ran when he was excited. The Stoic Seneca states in his Apocolocyntosis that Claudius's voice belonged to no land animal, and that his hands were weak as well. However, he showed no physical deformity, as Suetonius notes that when calm and seated he was a tall, well-built figure of dignitas. When angered or stressed, his symptoms became worse. Historians agree that this condition improved upon his accession to the throne. Claudius himself claimed that he had exaggerated his ailments to save his life. Modern assessments of his health have changed several times in the past century. Prior to World War II, infantile paralysis (or polio) was widely accepted as the cause. This is the diagnosis used in Robert Graves's Claudius novels, first published in the 1930s. Polio does not explain many of the described symptoms, however, and a more recent theory implicates cerebral palsy as the cause, as outlined by Ernestine Leon. Tourette syndrome has also been considered a possibility. As a person, ancient historians described Claudius as generous and lowbrow, a man who sometimes lunched with the plebeians. They also paint him as bloodthirsty and cruel, over-fond of gladiatorial combat and executions, and very quick to anger; Claudius himself acknowledged the latter trait, and apologized publicly for his temper. According to the ancient historians he was also excessively trusting, and easily manipulated by his wives and freedmen. But at the same time they portray him as paranoid and apathetic, dull and easily confused. Claudius's extant works present a different view, painting a picture of an intelligent, scholarly, well-read, and conscientious administrator with an eye to detail and justice. 
Thus, Claudius becomes an enigma. Since the discovery of his "Letter to the Alexandrians" in the last century, much work has been done to rehabilitate Claudius and determine where the truth lies. Scholarly works and their impact Claudius wrote copiously throughout his life. Arnaldo Momigliano states that during the reign of Tiberius – which covers the peak of Claudius's literary career – it became impolitic to speak of republican Rome. The trend among the young historians was to write either about the new empire or about obscure antiquarian subjects. Claudius was the rare scholar who covered both. Besides the history of Augustus's reign that caused him so much grief, his major works included Tyrrhenica, a twenty-book Etruscan history, and Carchedonica, an eight-volume history of Carthage, as well as an Etruscan dictionary. He also wrote a book on dice-playing. Despite the general avoidance of the Republican era, he penned a defense of Cicero against the charges of Asinius Gallus. Modern historians have used this to determine the nature of his politics and of the aborted chapters of his civil war history. He proposed a reform of the Latin alphabet by the addition of three new letters. He officially instituted the change during his censorship, but the new letters did not survive his reign. Claudius also tried to revive the old custom of putting dots between successive words (Classical Latin was written with no spacing). Finally, he wrote an eight-volume autobiography that Suetonius describes as lacking in taste. Claudius (like most of the members of his dynasty) harshly criticized his predecessors and relatives in surviving speeches. None of these works survives, but they live on as sources for the surviving histories of the Julio-Claudian dynasty. Suetonius quotes Claudius's autobiography once and must have used it as a source numerous times. Tacitus uses Claudius's arguments for the orthographical innovations mentioned above and may have used him for some of the more antiquarian passages in his annals. Claudius is the source for numerous passages of Pliny's Natural History. The influence of historical study on Claudius is obvious. In his speech on Gallic senators, he uses a version of the founding of Rome identical to that of Livy, his tutor in adolescence. The speech is meticulous in detail, a common mark of all his extant works, and he goes into long digressions on related matters. This indicates a deep knowledge of a variety of historical subjects that he could not help but share. Many of the public works instituted in his reign were based on plans first suggested by Julius Caesar. Levick believes this emulation of Caesar may have spread to all aspects of his policies. His censorship seems to have been based on those of his ancestors, particularly Appius Claudius Caecus, and he used the office to put into place many policies based on those of Republican times. This is when many of his religious reforms took effect, and his building efforts greatly increased during his tenure. In fact, his assumption of the office of Censor may have been motivated by a desire to see his academic labors bear fruit. For example, he believed (as most Romans did) that Caecus had used the censorship to introduce the letter "R" and so used his own term to introduce his new letters. Death The consensus of ancient historians was that Claudius was murdered by poison – possibly contained in mushrooms or on a feather – and died in the early hours of 13 October 54. Nearly all implicate his final and powerful wife, Agrippina, as the instigator. 
Agrippina and Claudius had become more combative in the months leading up to his death. This carried on to the point where Claudius openly lamented his bad wives, and began to comment on Britannicus' approaching manhood with an eye towards restoring his status within the imperial family. Agrippina had motive in ensuring the succession of Nero before Britannicus could gain power. Some implicate either his taster Halotus, his doctor Xenophon, or the infamous poisoner Locusta as the administrator of the fatal substance. Some say he died after prolonged suffering following a single dose at dinner, and some have him recovering only to be poisoned again. Among contemporary sources, Seneca the Younger ascribed the emperor's death to natural causes, while Josephus only spoke of rumors of his poisoning. Some historians have cast doubt on whether Claudius was murdered or merely died from illness or old age. Evidence against his murder includes his old age, his serious illnesses in his last years, his unhealthy lifestyle, and the fact that his taster Halotus continued to serve in the same position under Nero. On the other hand, some modern scholars claim the near universality of the accusations in ancient texts lends credence to the crime. Claudius's ashes were interred in the Mausoleum of Augustus on 24 October 54, after a funeral similar to that of his great-uncle Augustus 40 years earlier. Legacy Divine honours Already, while alive, he received the widespread private worship of a living princeps and was worshipped in Britannia in his own temple in Camulodunum. Claudius was deified by Nero and the Senate almost immediately. Views of the new regime Agrippina had sent away Narcissus shortly before Claudius's death, and now murdered the freedman. The last act of this secretary of letters was to burn all of Claudius's correspondence – most likely so it could not be used against him and others in an already hostile new regime. Thus Claudius's private words about his own policies and motives were lost to history. Just as Claudius had criticized his predecessors in official edicts, Nero often criticized the deceased Emperor, and many Claudian laws and edicts were disregarded under the reasoning that he was too stupid and senile to have meant them. Seneca's Apocolocyntosis mocks the deification of Claudius and reinforces the view of Claudius as an unpleasant fool; this remained the official view for the duration of Nero's reign. Eventually Nero stopped referring to his deified adoptive father at all, and realigned with his birth family. Claudius's temple was left unfinished after only some of the foundation had been laid down. Eventually the site was overtaken by Nero's Golden House. Flavian and later perspectives The Flavians, who had risen to prominence under Claudius, took a different tack. They were in a position where they needed to shore up their legitimacy, but also justify the fall of the Julio-Claudians. They reached back to Claudius in contrast with Nero, to show that they were associated with good. Commemorative coins were issued of Claudius and his son Britannicus, who had been a friend of the Emperor Titus (Titus was born in 39, Britannicus was born in 41). When Nero's Golden House was burned, the Temple of Claudius was finally completed on the Caelian Hill. However, as the Flavians became established, they needed to emphasize their own credentials more, and their references to Claudius ceased. Instead, he was lumped with the other emperors of the fallen dynasty. 
His state cult in Rome probably continued until the abolition of all such cults of dead Emperors by Maximinus Thrax in 237–238. The Feriale Duranum, probably identical to the festival calendars of every regular army unit, assigns him a sacrifice of a steer on his birthday, the Kalends of August. And such commemoration (and consequent feasting) probably continued until the Christianization and disintegration of the army in the late 4th century. Views of ancient historians The ancient historians Tacitus, Suetonius (in The Twelve Caesars), and Cassius Dio all wrote after the last of the Flavians had gone. All three were senators or equites. They took the side of the Senate in most conflicts with the Princeps, invariably viewing him as being in the wrong. This resulted in biases, both conscious and unconscious. Suetonius lost access to the official archives shortly after beginning his work. He was forced to rely on second-hand accounts when it came to Claudius (with the exception of Augustus's letters, which had been gathered earlier). Suetonius painted Claudius as a ridiculous figure, belittling many of his acts and attributing the objectively good works to his retinue. Tacitus wrote a narrative for his fellow senators and fitted each of the emperors into a simple mold of his choosing. He wrote of Claudius as a passive pawn and an idiot in affairs relating to the palace and often in public life. In his account of Claudius's censorship of 47–48, Tacitus allows the reader a glimpse of a Claudius who is more statesmanlike (XI.23–25), but it is a mere glimpse. Tacitus is usually held to have 'hidden' his use of Claudius's writings and to have omitted Claudius's character from his works. Even his version of Claudius's Lyons tablet speech is edited to be devoid of the Emperor's personality. Dio was less biased, but seems to have used Suetonius and Tacitus as sources. Thus, the conception of Claudius as the weak fool, controlled by those he supposedly ruled, was preserved for the ages. As time passed, Claudius was mostly forgotten outside of the historians' accounts. His books were lost first, as their antiquarian subjects became unfashionable. In the 2nd century, Pertinax, who shared his birthday, became emperor, overshadowing commemoration of Claudius. In modern media The best-known fictional representation of the Emperor Claudius was contained in the books I, Claudius and Claudius the God (published in 1934 and 1935, respectively) by Robert Graves, both written in the first person to give the reader the impression that they are Claudius's autobiography. Graves employed a fictive artifice to suggest that they were recently discovered, genuine translations of Claudius's writings. Claudius's extant letters, speeches, and sayings were incorporated into the text (mostly in the second book, Claudius the God), to add authenticity. In 1937, director Josef von Sternberg attempted a film version of I, Claudius, with Charles Laughton as Claudius. However, the lead actress, Merle Oberon, had a near-fatal car accident and the movie was never finished. The surviving reels were featured in the BBC documentary The Epic That Never Was (1965). The motion picture rights for a new film eventually passed to producer Scott Rudin. Graves's two books were the basis for a British television adaptation I, Claudius, produced by the BBC. The series starred Derek Jacobi as Claudius and was broadcast in 1976 on BBC2. It was a substantial critical success, and won several BAFTA awards. 
The series was later broadcast in the United States on Masterpiece Theatre in 1977. The 1996 seven-VHS release and the later DVD release of the television series include the documentary The Epic That Never Was. A radio adaptation of the Graves novels, adapted by Robin Brooks and directed by Jonquil Panting, was broadcast in six one-hour episodes on BBC Radio 4 beginning 4 December 2010. The cast featured Tom Goodman-Hill as Claudius, Derek Jacobi as Augustus, Harriet Walter as Livia, Tim McInnerny as Tiberius and Samuel Barnett as Caligula. In 2011, it was announced that the rights for a miniseries adaptation had passed to HBO and BBC Two. Anne Thomopoulos and Jane Tranter, producers of the popular HBO–BBC2 Rome miniseries, were attached to the I, Claudius project. However, as of 2018, it had yet to be produced, and no release date had been announced. The 1954 film Demetrius and the Gladiators also portrayed him sympathetically, played by Barry Jones. In the 1960 film Messalina, Claudius is portrayed by Mino Doro. On television, Freddie Jones portrayed Claudius in the 1968 British television series The Caesars. The 1975 TV Special Further Up Pompeii! (based on the Frankie Howerd sit-com Up Pompeii!) featured Cyril Appleton as Claudius. In the 1979 motion picture Caligula, in which the role was performed by Giancarlo Badessi, Claudius is depicted as an idiot, in contrast to Robert Graves's portrait of Claudius as a cunning and deeply intelligent man who is perceived by others to be an idiot. In the 1981 Franco-Italian film Caligula and Messalina, he was portrayed by Gino Turini (as John Turner). The 1985 made-for-television miniseries A.D. features actor Richard Kiley as Claudius. Kiley portrays him as thoughtful, but willing to cater to public opinion as well as being under the influence of Agrippina. In the 2004 TV film Imperium: Nero, Claudius is portrayed by Massimo Dapporto. He is portrayed by Kelson Henderson in Season 3 of the Netflix documentary series Roman Empire, which focused on the reign of Caligula. The series concludes with Claudius's accession. There is also a reference to Claudius's suppression of a coup in the movie Gladiator, though the incident is entirely fictional. In the series Britannia (2018), Claudius visits Britannia, played by Steve Pemberton as a fool who is drugged by Aulus Plautius. He is portrayed by Derek Jacobi in the 2019 BBC film Horrible Histories: The Movie - Rotten Romans. In literature, Claudius and his contemporaries appear in the historical novel The Roman by Mika Waltari. Canadian-born science fiction writer A. E. van Vogt reimagined Robert Graves's Claudius story in his two novels Empire of the Atom and The Wizard of Linn. The historical novel Chariot of the Soul by Linda Proud features Claudius as host and mentor of the young Togidubnus, son of King Verica of the Atrebates, during his ten-year stay in Rome. When Togidubnus returns to Britain in advance of the Roman army, it is with a mission given to him by Claudius. See also Julio-Claudian family tree List of Roman emperors Temple of Claudius Notes References Bibliography Baldwin, B. (1964). "Executions under Claudius: Seneca's Ludus de Morte Claudii", Phoenix, 18 (1): 39–48. Griffin, M. (1990). "Claudius in Tacitus", Classical Quarterly, 40 (2): 482–501. Levick, B.M. (1978). "Claudius: Antiquarian or Revolutionary?", American Journal of Philology, 99 (1): 79–105. Leon, E.F. (1948). "The Imbecillitas of the Emperor Claudius", Transactions and Proceedings of the American Philological Association, 79: 79–86. McAlindon, D. (1957). 
"Claudius and the Senators", American Journal of Philology, 78 (3): 279–286. Major, A. (1992). "Was He Pushed or Did He Leap? Claudius' Ascent to Power", Ancient History, 22 25–31. Malloch, S. J. V. (2013). The Annals of Tacitus, book 11. Cambridge University Press. Minaud, Gérard, Les vies de 12 femmes d'empereur romain – Devoirs, Intrigues & Voluptés, Paris, L'Harmattan, 2012, ch. 2, La vie de Messaline, femme de Claude, p. 39–64; ch. 3, La vie d'Agrippine, femme de Claude, p. 65–96. . Momigliano, Arnaldo (1934). Claudius: the Emperor and His Achievement Trans. W.D. Hogarth. Cambridge: W. Heffer and Sons. Oost, S.V. (1958). "The Career of M. Antonius Pallas", American Journal of Philology, 79 (2): 113–139. Renucci, Pierre (2012). Claude, l'empereur inattendu, Paris: Perrin. Ryan, F.X. (1993). "Some Observations on the Censorship of Claudius and Vitellius, AD 47–48", American Journal of Philology, 114 (4): 611–618. Scramuzza, Vincent (1940). The Emperor Claudius Cambridge: Harvard University Press. Vessey, D.W.T.C. (1971). "Thoughts on Tacitus' Portrayal of Claudius" American Journal of Philology 92 (3) 385–409. External links Ancient sources Suetonius Life of Claudius Tacitus Tacitus, books 11–12 Dio Cassius Dio's account of Claudius' reign, part I Cassius Dio's account, part II Josephus The works of Josephus Seneca The Apocolocyntosis of the Divine Claudius Claudius Claudius' Letter to the Alexandrians Lyons tablet Extract from first half of the Lyons Tablet Second half of the Lyons Tablet Tacitus' version of the Lyons Tablet speech Edict confirming the rights of the people of Trent. Full Latin text here. Modern biographies Biography from De Imperatoribus Romanis Claudius Page Claudius I at BBC History 10 BC births 54 deaths 1st-century Gallo-Roman people 1st-century historians 1st-century murdered monarchs 1st-century Roman emperors Ancient Romans in Britain Claudii Nerones Creators of writing systems Husbands of Agrippina the Younger Deified Roman emperors Imperial Roman consuls Julio-Claudian dynasty Latin historians People from Lugdunum People with cerebral palsy Poisoned Romans Royalty and nobility with disabilities Roman pharaohs Murdered Roman emperors
2,784
6,141
https://en.wikipedia.org/wiki/Cardinal
Cardinal
Cardinal or The Cardinal may refer to: Animals Cardinal (bird) or Cardinalidae, a family of North and South American birds Cardinalis, genus of cardinal in the family Cardinalidae Cardinalis cardinalis, or northern cardinal, the common cardinal of eastern North America Argynnis pandora, a species of butterfly Cardinal tetra, a freshwater fish Paroaria, a South American genus of birds, called red-headed cardinals or cardinal-tanagers Businesses Cardinal Brewery, a brewery founded in 1788 by François Piller, located in Fribourg, Switzerland Cardinal Health, a health care services company Christianity Cardinal (Catholic Church), a senior official of the Catholic Church Member of the College of Cardinals Cardinal (Church of England), either of two members of the College of Minor Canons of St. Paul's Cathedral Entertainment Films Cardinals (film), a 2017 Canadian film The Cardinal (1936 film), a British historical drama The Cardinal, a 1963 American film Games Cardinal (chess), a fairy chess piece, also known as the archbishop Cardinal, a participant in the army drinking game Cardinal Puff Music Groups Cardinal (band), indie pop duo formed in 1992 The Cardinals (rock band), a group formed in 2003 The Cardinals, a 1950s R&B group Albums Cardinal (Cardinal album), 1994 Cardinal (Pinegrove album), 2016 Television Cardinal (TV series), a 2017 Canadian television series "Cardinal" (The Americans), the second episode of the second season of the television series The Americans Other arts, entertainment, and media Cardinal (comics), a supervillain appearing in Marvel Comics The Cardinal (play), a 1641 Caroline era tragedy by James Shirley The Cardinal System, a system appearing in the Sword Art Online series Cardinal, a stormtrooper officer featured in Star Wars: Phasma, a novel by Delilah S. Dawson Linguistics Cardinal numeral, a part of speech for expressing numbers by name Cardinal vowel, a concept in phonetics Mathematics Cardinal number Large cardinal Navigation Cardinal direction, one of the four primary directions: north, south, east, and west Cardinal mark, a sea mark used in navigation Places Cardinal, Manitoba, Canada Cardinal, Ontario, Canada Cardinal High School (Middlefield, Ohio), a public high school in Middlefield, Ohio, Geauga County, United States Cardinal Mountain, a summit in California Cardinal Power Plant, a power plant in Jefferson County, Ohio Cardinal, Virginia, United States C/2008 T2 (Cardinal), a comet Plants Cardinal (grape), a table grape first produced in California in 1939 Lobelia cardinalis, also known as "cardinal flower" Sports Arizona Cardinals, an American professional football team Assindia Cardinals, an American football club from Essen, Germany Ball State Cardinals, the athletic teams of Ball State University Cardenales de Lara, a Venezuelan baseball team Catholic University Cardinals, the athletic teams of the Catholic University of America Front Royal Cardinals, an American baseball team Incarnate Word Cardinals, the athletic teams of the University of the Incarnate Word Lamar Cardinals, the athletic teams of Lamar University in Beaumont, Texas, USA Louisville Cardinals, the athletic teams of University of Louisville Mapúa Cardinals, the athletic teams of Mapúa Institute of Technology North Central Cardinals, the athletic teams of North Central College St. John Fisher Cardinals, the athletic teams of St. John Fisher College in Rochester, NY St. 
Louis Cardinals, an American professional baseball team Stanford Cardinal, the athletic teams of Stanford University; named for the color but not the bird Wesleyan Cardinals, the athletic teams of Wesleyan University West Perth Football Club, an Australian rules football club in Western Australia Woking F.C., an English football team Transport Aircraft Cessna 177 Cardinal, a single-engine aircraft St. Louis Cardinal C-2-110, a light aircraft built in 1928 NCSIST Cardinal, a family of small UAVs Trains Cardinal (train) The Cardinal (railcar) Other uses Cardinal (color), a vivid red Cardinal (name), a surname Cardinal, a Ruby programming language implementation for the Parrot virtual machine See also Cardenal, a surname Cardinal sin or cardinal syn Cardinale, a surname
2,785
6,173
https://en.wikipedia.org/wiki/Cardinal%20number
Cardinal number
In mathematics, cardinal numbers, or cardinals for short, are a generalization of the natural numbers used to measure the cardinality (size) of sets. The cardinality of a finite set is a natural number: the number of elements in the set. The transfinite cardinal numbers, often denoted using the Hebrew symbol ℵ (aleph) followed by a subscript, describe the sizes of infinite sets. Cardinality is defined in terms of bijective functions. Two sets have the same cardinality if, and only if, there is a one-to-one correspondence (bijection) between the elements of the two sets. In the case of finite sets, this agrees with the intuitive notion of size. In the case of infinite sets, the behavior is more complex. A fundamental theorem due to Georg Cantor shows that it is possible for infinite sets to have different cardinalities, and in particular the cardinality of the set of real numbers is greater than the cardinality of the set of natural numbers. It is also possible for a proper subset of an infinite set to have the same cardinality as the original set—something that cannot happen with proper subsets of finite sets. There is a transfinite sequence of cardinal numbers: 0, 1, 2, 3, …, n, …; ℵ0, ℵ1, ℵ2, …, ℵα, … This sequence starts with the natural numbers including zero (finite cardinals), which are followed by the aleph numbers (infinite cardinals of well-ordered sets). The aleph numbers are indexed by ordinal numbers. Under the assumption of the axiom of choice, this transfinite sequence includes every cardinal number. If one rejects that axiom, the situation is more complicated, with additional infinite cardinals that are not alephs. Cardinality is studied for its own sake as part of set theory. It is also a tool used in branches of mathematics including model theory, combinatorics, abstract algebra and mathematical analysis. In category theory, the cardinal numbers form a skeleton of the category of sets. History The notion of cardinality, as now understood, was formulated by Georg Cantor, the originator of set theory, in 1874–1884. Cardinality can be used to compare an aspect of finite sets. For example, the sets {1,2,3} and {4,5,6} are not equal, but have the same cardinality, namely three. This is established by the existence of a bijection (i.e., a one-to-one correspondence) between the two sets, such as the correspondence {1→4, 2→5, 3→6}. Cantor applied his concept of bijection to infinite sets (for example the set of natural numbers N = {0, 1, 2, 3, ...}). Thus, he called all sets having a bijection with N denumerable (countably infinite) sets, which all share the same cardinal number. This cardinal number is called ℵ0, aleph-null. He called the cardinal numbers of infinite sets transfinite cardinal numbers. Cantor proved that any unbounded subset of N has the same cardinality as N, even though this might appear to run contrary to intuition. He also proved that the set of all ordered pairs of natural numbers is denumerable; this implies that the set of all rational numbers is also denumerable, since every rational can be represented by a pair of integers. He later proved that the set of all real algebraic numbers is also denumerable. Each real algebraic number z may be encoded as a finite sequence of integers, which are the coefficients in the polynomial equation of which it is a solution, i.e. the ordered n-tuple (a0, a1, ..., an), ai ∈ Z together with a pair of rationals (b0, b1) such that z is the unique root of the polynomial with coefficients (a0, a1, ..., an) that lies in the interval (b0, b1). 
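The denumerability of the set of ordered pairs of natural numbers can be made concrete with an explicit pairing function. The sketch below is illustrative only and is not drawn from Cantor's papers or from this article; it uses the standard Cantor pairing function, which enumerates the pairs diagonal by diagonal, and checks on a finite window that the map is injective and that its inverse recovers every pair.

```python
# A concrete witness that N x N is denumerable: the Cantor pairing function
# enumerates pairs of natural numbers diagonal by diagonal.

def cantor_pair(x: int, y: int) -> int:
    """Map the pair (x, y) to a single natural number, injectively."""
    return (x + y) * (x + y + 1) // 2 + y

def cantor_unpair(z: int) -> tuple:
    """Invert cantor_pair, recovering (x, y) from z (exact for moderate z)."""
    w = int(((8 * z + 1) ** 0.5 - 1) // 2)   # index of the diagonal containing z
    t = w * (w + 1) // 2                     # first code on that diagonal
    y = z - t
    x = w - y
    return (x, y)

if __name__ == "__main__":
    pairs = [(x, y) for x in range(50) for y in range(50)]
    codes = {cantor_pair(x, y) for x, y in pairs}
    assert len(codes) == len(pairs)   # injective on this finite window
    assert all(cantor_unpair(cantor_pair(x, y)) == (x, y) for x, y in pairs)
    print(sorted(codes)[:6])          # [0, 1, 2, 3, 4, 5]: the codes fill in without gaps
```

Composing such a pairing with an enumeration of the integers gives the denumerability of the rationals mentioned above; the particular pairing function used here is just one convenient choice among many.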
In his 1874 paper "On a Property of the Collection of All Real Algebraic Numbers", Cantor proved that there exist higher-order cardinal numbers, by showing that the set of real numbers has cardinality greater than that of N. His proof used an argument with nested intervals, but in an 1891 paper, he proved the same result using his ingenious and much simpler diagonal argument. The new cardinal number of the set of real numbers is called the cardinality of the continuum, and Cantor used the symbol 𝔠 for it. Cantor also developed a large portion of the general theory of cardinal numbers; he proved that there is a smallest transfinite cardinal number (ℵ0, aleph-null), and that for every cardinal number there is a next-larger cardinal. His continuum hypothesis is the proposition that the cardinality of the set of real numbers is the same as ℵ1. This hypothesis is independent of the standard axioms of mathematical set theory, that is, it can neither be proved nor disproved from them. This was shown in 1963 by Paul Cohen, complementing earlier work by Kurt Gödel in 1940. Motivation In informal use, a cardinal number is what is normally referred to as a counting number, provided that 0 is included: 0, 1, 2, .... They may be identified with the natural numbers beginning with 0. The counting numbers are exactly what can be defined formally as the finite cardinal numbers. Infinite cardinals only occur in higher-level mathematics and logic. More formally, a non-zero number can be used for two purposes: to describe the size of a set, or to describe the position of an element in a sequence. For finite sets and sequences it is easy to see that these two notions coincide, since for every number describing a position in a sequence we can construct a set that has exactly the right size. For example, 3 describes the position of 'c' in the sequence <'a','b','c','d',...>, and we can construct the set {a,b,c}, which has 3 elements. However, when dealing with infinite sets, it is essential to distinguish between the two, since the two notions are in fact different for infinite sets. Considering the position aspect leads to ordinal numbers, while the size aspect is generalized by the cardinal numbers described here. The intuition behind the formal definition of cardinal is the construction of a notion of the relative size or "bigness" of a set, without reference to the kind of members which it has. For finite sets this is easy; one simply counts the number of elements a set has. In order to compare the sizes of larger sets, it is necessary to appeal to more refined notions. A set Y is at least as big as a set X if there is an injective mapping from the elements of X to the elements of Y. An injective mapping identifies each element of the set X with a unique element of the set Y. This is most easily understood by an example; suppose we have the sets X = {1,2,3} and Y = {a,b,c,d}; then, using this notion of size, we would observe that there is a mapping: 1 → a, 2 → b, 3 → c, which is injective, and hence conclude that Y has cardinality greater than or equal to X. The element d has no element mapping to it, but this is permitted as we only require an injective mapping, and not necessarily a bijective mapping. The advantage of this notion is that it can be extended to infinite sets. We can then extend this to an equality-style relation. Two sets X and Y are said to have the same cardinality if there exists a bijection between X and Y. 
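For small finite sets, the injection-based comparison just described can be checked by brute force. The sketch below is purely illustrative (the helper names are invented for this example, and the search is exponential, so it only makes sense for tiny sets); it mirrors the X = {1,2,3}, Y = {a,b,c,d} example above.

```python
# Brute-force cardinality comparison for small finite sets, mirroring the
# injection and bijection definitions in the text.
from itertools import product

def has_injection(X, Y):
    """True if some function X -> Y sends distinct elements to distinct elements."""
    X, Y = list(X), list(Y)
    if len(X) > len(Y):      # pigeonhole principle: no injection can exist
        return False
    # Each tuple 'img' assigns an image in Y to every element of X, in order.
    return any(len(set(img)) == len(X) for img in product(Y, repeat=len(X)))

def has_bijection(X, Y):
    """True if some function X -> Y is both injective and surjective."""
    X, Y = list(X), list(Y)
    return any(len(set(img)) == len(X) == len(Y) for img in product(Y, repeat=len(X)))

X = {1, 2, 3}
Y = {"a", "b", "c", "d"}
print(has_injection(X, Y))                 # True:  e.g. 1->a, 2->b, 3->c, so |X| <= |Y|
print(has_injection(Y, X))                 # False: four elements cannot map injectively into three
print(has_bijection(X, Y))                 # False: the cardinalities differ
print(has_bijection(X, {"a", "b", "c"}))   # True:  both sets have cardinality 3
```

For infinite sets no such exhaustive search is possible, and equality of cardinality has to be witnessed by exhibiting a bijection.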
By the Schroeder–Bernstein theorem, this is equivalent to there being both an injective mapping from X to Y, and an injective mapping from Y to X. We then write |X| = |Y|. The cardinal number of X itself is often defined as the least ordinal a with |a| = |X|. This is called the von Neumann cardinal assignment; for this definition to make sense, it must be proved that every set has the same cardinality as some ordinal; this statement is the well-ordering principle. It is however possible to discuss the relative cardinality of sets without explicitly assigning names to objects. The classic example used is that of the infinite hotel paradox, also called Hilbert's paradox of the Grand Hotel. Supposing there is an innkeeper at a hotel with an infinite number of rooms. The hotel is full, and then a new guest arrives. It is possible to fit the extra guest in by asking the guest who was in room 1 to move to room 2, the guest in room 2 to move to room 3, and so on, leaving room 1 vacant. We can explicitly write a segment of this mapping: 1 → 2 2 → 3 3 → 4 ... n → n + 1 ... With this assignment, we can see that the set {1,2,3,...} has the same cardinality as the set {2,3,4,...}, since a bijection between the first and the second has been shown. This motivates the definition of an infinite set being any set that has a proper subset of the same cardinality (i.e., a Dedekind-infinite set); in this case {2,3,4,...} is a proper subset of {1,2,3,...}. When considering these large objects, one might also want to see if the notion of counting order coincides with that of cardinal defined above for these infinite sets. It happens that it does not; by considering the above example we can see that if some object "one greater than infinity" exists, then it must have the same cardinality as the infinite set we started out with. It is possible to use a different formal notion for number, called ordinals, based on the ideas of counting and considering each number in turn, and we discover that the notions of cardinality and ordinality are divergent once we move out of the finite numbers. It can be proved that the cardinality of the real numbers is greater than that of the natural numbers just described. This can be visualized using Cantor's diagonal argument; classic questions of cardinality (for instance the continuum hypothesis) are concerned with discovering whether there is some cardinal between some pair of other infinite cardinals. In more recent times, mathematicians have been describing the properties of larger and larger cardinals. Since cardinality is such a common concept in mathematics, a variety of names are in use. Sameness of cardinality is sometimes referred to as equipotence, equipollence, or equinumerosity. It is thus said that two sets with the same cardinality are, respectively, equipotent, equipollent, or equinumerous. Formal definition Formally, assuming the axiom of choice, the cardinality of a set X is the least ordinal number α such that there is a bijection between X and α. This definition is known as the von Neumann cardinal assignment. If the axiom of choice is not assumed, then a different approach is needed. The oldest definition of the cardinality of a set X (implicit in Cantor and explicit in Frege and Principia Mathematica) is as the class [X] of all sets that are equinumerous with X. This does not work in ZFC or other related systems of axiomatic set theory because if X is non-empty, this collection is too large to be a set. 
In fact, for X ≠ ∅ there is an injection from the universe into [X] by mapping a set m to {m} × X, and so by the axiom of limitation of size, [X] is a proper class. The definition does work, however, in type theory and in New Foundations and related systems. However, if we restrict from this class to those equinumerous with X that have the least rank, then it will work (this is a trick due to Dana Scott: it works because the collection of objects with any given rank is a set).

The von Neumann cardinal assignment implies that the cardinal number of a finite set is the common ordinal number of all possible well-orderings of that set, and cardinal and ordinal arithmetic (addition, multiplication, power, proper subtraction) then give the same answers for finite numbers. However, they differ for infinite numbers. For example, 2^ω = ω in ordinal arithmetic while 2^ℵ0 > ℵ0 in cardinal arithmetic, although the von Neumann assignment puts ℵ0 = ω. On the other hand, Scott's trick implies that the cardinal number 0 is {∅}, which is also the ordinal number 1, and this may be confusing. A possible compromise (to take advantage of the alignment in finite arithmetic while avoiding reliance on the axiom of choice and confusion in infinite arithmetic) is to apply the von Neumann assignment to the cardinal numbers of finite sets (those which can be well ordered and are not equipotent to proper subsets) and to use Scott's trick for the cardinal numbers of other sets.

Formally, the order among cardinal numbers is defined as follows: |X| ≤ |Y| means that there exists an injective function from X to Y. The Cantor–Bernstein–Schroeder theorem states that if |X| ≤ |Y| and |Y| ≤ |X| then |X| = |Y|. The axiom of choice is equivalent to the statement that given two sets X and Y, either |X| ≤ |Y| or |Y| ≤ |X|.

A set X is Dedekind-infinite if there exists a proper subset Y of X with |X| = |Y|, and Dedekind-finite if such a subset does not exist. The finite cardinals are just the natural numbers, in the sense that a set X is finite if and only if |X| = |n| = n for some natural number n. Any other set is infinite. Assuming the axiom of choice, it can be proved that the Dedekind notions correspond to the standard ones. It can also be proved that the cardinal ℵ0 (aleph-null or aleph-0, where aleph is the first letter in the Hebrew alphabet, represented ℵ) of the set of natural numbers is the smallest infinite cardinal (i.e., any infinite set has a subset of cardinality ℵ0). The next larger cardinal is denoted by ℵ1, and so on. For every ordinal α, there is a cardinal number ℵα, and this list exhausts all infinite cardinal numbers.

Cardinal arithmetic

We can define arithmetic operations on cardinal numbers that generalize the ordinary operations for natural numbers. It can be shown that for finite cardinals, these operations coincide with the usual operations for natural numbers. Furthermore, these operations share many properties with ordinary arithmetic.

Successor cardinal

If the axiom of choice holds, then every cardinal κ has a successor, denoted κ+, where κ+ > κ and there are no cardinals between κ and its successor. (Without the axiom of choice, using Hartogs' theorem, it can be shown that for any cardinal number κ, there is a minimal cardinal κ+ such that κ+ ≰ κ.) For finite cardinals, the successor is simply κ + 1. For infinite cardinals, the successor cardinal differs from the successor ordinal.

Cardinal addition

If X and Y are disjoint, addition is given by the union of X and Y: |X| + |Y| = |X ∪ Y|.
If the two sets are not already disjoint, then they can be replaced by disjoint sets of the same cardinality (e.g., replace X by X×{0} and Y by Y×{1}).

Zero is an additive identity: κ + 0 = 0 + κ = κ.
Addition is associative: (κ + μ) + ν = κ + (μ + ν).
Addition is commutative: κ + μ = μ + κ.
Addition is non-decreasing in both arguments: κ ≤ μ → (κ + ν ≤ μ + ν and ν + κ ≤ ν + μ).

Assuming the axiom of choice, addition of infinite cardinal numbers is easy. If either κ or μ is infinite, then κ + μ = max(κ, μ).

Subtraction

Assuming the axiom of choice, given an infinite cardinal σ and a cardinal μ, there exists a cardinal κ such that μ + κ = σ if and only if μ ≤ σ. It will be unique (and equal to σ) if and only if μ < σ.

Cardinal multiplication

The product of cardinals comes from the Cartesian product: κ·μ = |X × Y|, where |X| = κ and |Y| = μ.

κ·0 = 0·κ = 0.
κ·μ = 0 → (κ = 0 or μ = 0).
One is a multiplicative identity: κ·1 = 1·κ = κ.
Multiplication is associative: (κ·μ)·ν = κ·(μ·ν).
Multiplication is commutative: κ·μ = μ·κ.
Multiplication is non-decreasing in both arguments: κ ≤ μ → (κ·ν ≤ μ·ν and ν·κ ≤ ν·μ).
Multiplication distributes over addition: κ·(μ + ν) = κ·μ + κ·ν and (μ + ν)·κ = μ·κ + ν·κ.

Assuming the axiom of choice, multiplication of infinite cardinal numbers is also easy. If either κ or μ is infinite and both are non-zero, then κ·μ = max(κ, μ).

Division

Assuming the axiom of choice, given an infinite cardinal π and a non-zero cardinal μ, there exists a cardinal κ such that μ·κ = π if and only if μ ≤ π. It will be unique (and equal to π) if and only if μ < π.

Cardinal exponentiation

Exponentiation is given by κ^μ = |X^Y|, where X^Y is the set of all functions from Y to X (with |X| = κ and |Y| = μ).

κ^0 = 1 (in particular 0^0 = 1), see empty function.
If 1 ≤ μ, then 0^μ = 0.
1^μ = 1.
κ^1 = κ.
κ^(μ + ν) = κ^μ·κ^ν.
κ^(μ·ν) = (κ^μ)^ν.
(κ·μ)^ν = κ^ν·μ^ν.
Exponentiation is non-decreasing in both arguments: (1 ≤ ν and κ ≤ μ) → (ν^κ ≤ ν^μ) and (κ ≤ μ) → (κ^ν ≤ μ^ν).

2^|X| is the cardinality of the power set of the set X, and Cantor's diagonal argument shows that 2^|X| > |X| for any set X. This proves that no largest cardinal exists (because for any cardinal κ, we can always find a larger cardinal 2^κ). In fact, the class of cardinals is a proper class. (This proof fails in some set theories, notably New Foundations.)

All the remaining propositions in this section assume the axiom of choice:

If κ and μ are both finite and greater than 1, and ν is infinite, then κ^ν = μ^ν.
If κ is infinite and μ is finite and non-zero, then κ^μ = κ.
If 2 ≤ κ and 1 ≤ μ and at least one of them is infinite, then max(κ, 2^μ) ≤ κ^μ ≤ max(2^κ, 2^μ).

Using König's theorem, one can prove κ < κ^cf(κ) and κ < cf(2^κ) for any infinite cardinal κ, where cf(κ) is the cofinality of κ.

Roots

Assuming the axiom of choice, given an infinite cardinal κ and a finite cardinal μ greater than 0, the cardinal ν satisfying ν^μ = κ will be κ.

Logarithms

Assuming the axiom of choice, given an infinite cardinal κ and a finite cardinal μ greater than 1, there may or may not be a cardinal λ satisfying μ^λ = κ. However, if such a cardinal exists, it is infinite and less than κ, and any finite cardinality ν greater than 1 will also satisfy ν^λ = κ.

The logarithm of an infinite cardinal number κ is defined as the least cardinal number μ such that κ ≤ 2^μ. Logarithms of infinite cardinals are useful in some fields of mathematics, for example in the study of cardinal invariants of topological spaces, though they lack some of the properties that logarithms of positive real numbers possess.
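The inequality 2^|X| > |X| stated above can be checked exhaustively for a small finite set. The following Python sketch is an illustration added here (the function names are ad hoc, not from any standard library); it enumerates every map from a three-element set into its power set and confirms that the diagonal set is never in the range, so no such map is surjective.

from itertools import combinations, product

def powerset(X):
    # All subsets of X, returned as frozensets.
    X = sorted(X)
    return [frozenset(c) for r in range(len(X) + 1) for c in combinations(X, r)]

X = {0, 1, 2}
P = powerset(X)          # 2^|X| = 8 subsets

for images in product(P, repeat=len(X)):
    f = dict(zip(sorted(X), images))                     # one map f: X -> P(X)
    diagonal = frozenset(x for x in X if x not in f[x])  # Cantor's diagonal set
    assert diagonal not in f.values()                    # the diagonal set is never hit by f

# Since no f: X -> P(X) is surjective, |P(X)| > |X|; the same diagonal
# construction proves 2^|X| > |X| for every set X, finite or infinite.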
The continuum hypothesis

The continuum hypothesis (CH) states that there are no cardinals strictly between ℵ0 and 2^ℵ0. The latter cardinal number is also often denoted by 𝔠; it is the cardinality of the continuum (the set of real numbers). In this case 2^ℵ0 = ℵ1.

Similarly, the generalized continuum hypothesis (GCH) states that for every infinite cardinal ℵα, there are no cardinals strictly between ℵα and 2^ℵα. Both the continuum hypothesis and the generalized continuum hypothesis have been proved independent of the usual axioms of set theory, the Zermelo–Fraenkel axioms together with the axiom of choice (ZFC). Indeed, Easton's theorem shows that, for regular cardinals κ, the only restrictions ZFC places on the cardinality of 2^κ are that κ < cf(2^κ), and that the exponential function is non-decreasing.

See also

Aleph number
Beth number
The paradox of the greatest cardinal
Cardinal number (linguistics)
Counting
Inclusion–exclusion principle
Large cardinal
Names of numbers in English
Nominal number
Ordinal number
Regular cardinal

References

Notes

Bibliography

Hahn, Hans, Infinity, Part IX, Chapter 2, Volume 3 of The World of Mathematics. New York: Simon and Schuster, 1956.
Halmos, Paul, Naive set theory. Princeton, NJ: D. Van Nostrand Company, 1960. Reprinted by Springer-Verlag, New York, 1974. (Springer-Verlag edition).

External links
https://en.wikipedia.org/wiki/Cecil%20B.%20DeMille
Cecil B. DeMille
Cecil Blount DeMille (August 12, 1881 – January 21, 1959) was an American film director, producer and actor. Between 1914 and 1958, he made 70 features, both silent and sound films. He is acknowledged as a founding father of the American cinema and the most commercially successful producer-director in film history. His films were distinguished by their epic scale and by his cinematic showmanship. His silent films included social dramas, comedies, Westerns, farces, morality plays, and historical pageants. He was an active Freemason and member of Prince of Orange Lodge #16 in New York City.

DeMille was born in Ashfield, Massachusetts, and grew up in New York City. He began his career as a stage actor in 1900. He later moved to writing and directing stage productions, some with Jesse Lasky, who was then a vaudeville producer. DeMille's first film, The Squaw Man (1914), was also the first full-length feature film shot in Hollywood. Its interracial love story made it commercially successful and it first publicized Hollywood as the home of the U.S. film industry. The continued success of his productions led to the founding of Paramount Pictures with Lasky and Adolph Zukor. His first biblical epic, The Ten Commandments (1923), was both a critical and commercial success; it held the Paramount revenue record for twenty-five years.

DeMille directed The King of Kings (1927), a biography of Jesus, which gained approval for its sensitivity and reached more than 800 million viewers. The Sign of the Cross (1932) is said to be the first sound film to integrate all aspects of cinematic technique. Cleopatra (1934) was his first film to be nominated for the Academy Award for Best Picture. After more than thirty years in film production, DeMille reached a pinnacle in his career with Samson and Delilah (1949), a biblical epic which became the highest-grossing film of 1950. Along with biblical and historical narratives, he also directed films oriented toward "neo-naturalism", which tried to portray the laws of man fighting the forces of nature.

He received his first nomination for the Academy Award for Best Director for his circus drama The Greatest Show on Earth (1952), which won both the Academy Award for Best Picture and the Golden Globe Award for Best Motion Picture – Drama. His last and best known film, The Ten Commandments (1956), also a Best Picture Academy Award nominee, is currently the eighth-highest-grossing film of all time, adjusted for inflation. In addition to his Best Picture Awards, he received an Academy Honorary Award for his film contributions, the Palme d'Or (posthumously) for Union Pacific (1939), a DGA Award for Lifetime Achievement, and the Irving G. Thalberg Memorial Award. He was the first recipient of the Golden Globe Cecil B. DeMille Award, which was named in his honor. DeMille's reputation had a renaissance in the 2010s and his work has influenced numerous other films and directors.

Biography

1881–1899: Early years

Cecil Blount DeMille was born on August 12, 1881, in a boarding house on Main Street in Ashfield, Massachusetts, where his parents had been vacationing for the summer. On September 1, 1881, the family returned with the newborn DeMille to their flat in New York. DeMille was named after his grandmothers Cecelia Wolff and Margarete Blount. He was the second of three children of Henry Churchill de Mille (September 4, 1853 – February 10, 1893) and his wife Matilda Beatrice deMille (née Samuel; January 30, 1853 – October 8, 1923), known as Beatrice. His brother, William C.
DeMille, was born on July 25, 1878. Henry de Mille, whose ancestors were of English and Dutch-Belgian descent, was a North Carolina-born dramatist, actor, and lay reader in the Episcopal Church. DeMille's father was also an English teacher at Columbia College (now Columbia University). He worked as a playwright, administrator, and faculty member during the early years of the American Academy of Dramatic Arts, established in New York City in 1884. Henry deMille frequently collaborated with David Belasco in playwriting; their best-known collaborations included "The Wife", "Lord Chumley", "The Charity Ball", and "Men and Women". Cecil B. DeMille's mother, Beatrice, a literary agent and scriptwriter, was the daughter of German Jews. She had emigrated from England with her parents in 1871 when she was 18; the newly arrived family settled in Brooklyn, New York, where they maintained a middle-class, English-speaking household. DeMille's parents met as members of a music and literary society in New York. Henry was a tall, red-headed student. Beatrice was intelligent, educated, forthright, and strong-willed. The two were married on July 1, 1876, despite Beatrice's parents' objections because of the young couple's differing religions; Beatrice converted to Episcopalianism. DeMille was a brave and confident child. He gained his love of theater while watching his father and Belasco rehearse their plays. A lasting memory for DeMille was a lunch with his father and actor Edwin Booth. As a child, DeMille created an alter-ego, Champion Driver, a Robin Hood-like character, evidence of his creativity and imagination. The family lived in Washington, North Carolina, until Henry built a three-story Victorian-style house for his family in Pompton Lakes, New Jersey; they named this estate "Pamlico". John Philip Sousa was a friend of the family, and DeMille recalled throwing mud balls in the air so neighbor Annie Oakley could practice her shooting. DeMille's sister Agnes was born on April 23, 1891; his mother nearly did not survive the birth. Agnes would die on February 11, 1894, at the age of three from spinal meningitis. DeMille's parents operated a private school in town and attended Christ Episcopal Church. DeMille recalled that this church was the place where he visualized the story of his 1923 version of The Ten Commandments. On January 8, 1893, at age 40, Henry de Mille died suddenly from typhoid fever, leaving Beatrice with three children. To provide for her family, she opened the Henry C. DeMille School for Girls in her home in February 1893. The aim of the school was to teach young women to properly understand and fulfill the women's duty to herself, her home, and her country. Before Henry deMille's death, Beatrice had "enthusiastically supported" her husband's theatrical aspirations. She later became the second female play broker on Broadway. On Henry DeMille's deathbed, he told his wife that he did not want his sons to become playwrights. DeMille's mother sent him to Pennsylvania Military College (now Widener University) in Chester, Pennsylvania, at age 15. He fled the school to join the Spanish–American War, but failed to meet the age requirement. At the military college, even though his grades were average, he reportedly excelled in personal conduct. DeMille attended the American Academy of Dramatic Arts (tuition-free due to his father's service to the Academy). He graduated in 1900, and for graduation, his performance was the play The Arcady Trail. 
In the audience was Charles Frohman who would cast DeMille in his play Hearts are Trumps, DeMille's Broadway debut. 1900–1912: Theater Charles Frohman, Constance Adams, and David Belasco Cecil B. DeMille began his career as an actor on the stage in the theatrical company of Charles Frohman in 1900. He debuted as an actor on February 21, 1900, in the play Hearts Are Trumps at New York's Garden Theater. In 1901, DeMille starred in productions of A Repentance, To Have and to Hold, and Are You a Mason? At the age of twenty-one, Cecil B. DeMille married Constance Adams on August 16, 1902, at Adams's father's home in East Orange, New Jersey. The wedding party was small. Beatrice DeMille's family was not in attendance, and Simon Louvish suggests that this was to conceal DeMille's partial Jewish heritage. Adams was 29 years old at the time of their marriage, eight years older than DeMille. They had met in a theater in Washington D.C. while they were both acting in Hearts Are Trumps. They were sexually incompatible; according to DeMille, Adams was too "pure" to "feel such violent and evil passions." DeMille had more violent sexual preferences and fetishes than his wife. Adams allowed DeMille to have several long term mistresses during their marriage as an outlet, while maintaining an outward appearance of a faithful marriage. One of DeMille's affairs was with his screenwriter Jeanie MacPherson. Despite his reputation for extramarital affairs, DeMille did not like to have affairs with his stars, as he believed it would cause him to lose control as a director. He related a story that he maintained his self-control when Gloria Swanson sat on his lap, refusing to touch her. In 1902, he played a small part in Hamlet. Publicists wrote that he became an actor in order to learn how direct and produce, but DeMille admitted that he became an actor in order to pay the bills. From 1904 to 1905, DeMille attempted to make a living as a stock theatre actor with his wife Constance. DeMille made a 1905 reprise in Hamlet as Osric. In the summer of 1905 DeMille joined the stock cast at the Elitch Theatre in Denver, Colorado. He appeared in eleven of the fifteen plays presented that season, although all were minor roles. Maude Fealy would appear as the featured actress in several productions that summer and would develop a lasting friendship with DeMille. (He would later cast her in The Ten Commandments.) His brother William was establishing himself as a playwright and sometimes invited him to collaborate. DeMille and William collaborated on The Genius, The Royal Mounted, and After Five. However, none of these were very successful; William deMille was most successful when he worked alone. DeMille and his brother at times worked with the legendary impresario David Belasco, who had been a friend and collaborator of their father. DeMille would later adapt Belasco's The Girl of the Golden West, Rose of the Rancho, and The Warrens of Virginia into films. DeMille was credited with creating the premise of Belasco's The Return of Peter Grimm. The Return of Peter Grimm sparked controversy; however, because Belasco had taken DeMille's unnamed screenplay, changed the characters and named it The Return of Peter Grimm, producing and presenting it as his own work. DeMille was credited in small print as "based on an idea by Cecil DeMille". The play was successful, and DeMille was distraught that his childhood idol had plagiarized his work. 
Losing interest in theatre DeMille performed on stage with actors whom he would later direct in films: Charlotte Walker, Mary Pickford, and Pedro de Cordoba. DeMille also produced and directed plays. His 1905 performance in The Prince Chap as the Earl of Huntington was well received by audiences. DeMille wrote a few of his own plays in-between stage performances, but his playwriting was not as successful. His first play was The Pretender-A Play in a Prologue and 4 Acts set in seventeenth century Russia. Another unperformed play he wrote was Son of the Winds, a mythological Native American story. Life was difficult for DeMille and his wife as traveling actors; however, traveling allowed him to experience part of the United States he had not yet seen. DeMille sometimes worked with the director E.H. Sothern, who influenced DeMille's later perfectionism in his work. In 1907, due to a scandal with one of Beatrice's students, Evelyn Nesbit, the Henry deMille School lost students. The school closed, and Beatrice filed for bankruptcy. DeMille wrote another play originally called Sergeant Devil May Care which was renamed The Royal Mounted. He also toured with the Standard Opera Company, but there are few records to indicate DeMille's singing ability. DeMille had a daughter, Cecilia, on November 5, 1908, who would be his only biological child. In the 1910s, DeMille began directing and producing other writer's plays. DeMille was poor and struggled to find work. Consequently, his mother hired him for her agency The DeMille Play Company and taught him how to be an agent and a playwright. Eventually, he became manager of the agency and later, a junior partner with his mother. In 1911, DeMille became acquainted with vaudeville producer Jesse Lasky when Lasky was searching for a writer for his new musical. He initially sought out William deMille. William had been a successful playwright, but DeMille was suffering from the failure of his plays The Royal Mounted and The Genius. However, Beatrice introduced Lasky to DeMille instead. The collaboration of DeMille and Lasky produced a successful musical called California which opened in New York in January 1912. Another DeMille-Lasky production that opened in January 1912 was The Antique Girl. DeMille found success in the spring of 1913 producing Reckless Age by Lee Wilson, a play about a high society girl wrongly accused of manslaughter starring Frederick Burton and Sydney Shields. However, changes in the theater rendered DeMille's melodramas obsolete before they were produced, and true theatrical success eluded him. He produced many flops. Having become disinterested in working in theatre, DeMille's passion for film was ignited when he watched the 1912 French film Les Amours de la reine Élisabeth. 1913–1914: Entering films Desiring a change of scene, Cecil B. DeMille, Jesse Lasky, Sam Goldfish (later Samuel Goldwyn), and a group of East Coast businessmen created the Jesse L. Lasky Feature Play Company in 1913 over which DeMille became director-general. Lasky and DeMille were said to have sketched out the organization of the company on the back of a restaurant menu. As director-general, DeMille's job was to make the films. In addition to directing, DeMille was the supervisor and consultant for the first year of films made by the Lasky Feature Play Company. Sometimes, he directed scenes for other directors at the Feature Play Company in order to release films on time. 
Moreover, when he was busy directing other films, he would co-author other Lasky Company scripts as well as create screen adaptations that others directed. The Lasky Play Company sought out William DeMille to join the company, but he rejected the offer because he did not believe there was any promise in a film career. When William found out that DeMille had begun working in the motion picture industry, he wrote DeMille a letter, disappointed that he was willing "to throw away [his] future" when he was "born and raised in the finest traditions of the theater".

The Lasky Company wanted to attract high-class audiences to their films, so they began producing films from literary works. The Lasky Company bought the rights to the play The Squaw Man by Edwin Milton Royle and cast Dustin Farnum in the lead role. They offered Farnum a choice of a quarter stock in the company (similar to William deMille) or $250 per week as salary. Farnum chose $250 per week. With the company already $15,000 in debt to Royle for the screenplay of The Squaw Man, Lasky's relatives bought the $5,000 stock to save the Lasky Company from bankruptcy. With no knowledge of filmmaking, DeMille began by observing the process at film studios. He was eventually introduced to Oscar Apfel, a stage director who had been a director with the Edison Company.

On December 12, 1913, DeMille, his cast, and crew boarded a Southern Pacific train bound for Flagstaff via New Orleans. His tentative plan was to shoot a film in Arizona, but he felt that Arizona did not typify the Western look they were searching for. They also learned that other filmmakers were successfully shooting in Los Angeles, even in winter. He continued to Los Angeles. Once there, he chose not to shoot in Edendale, where many studios were, but in Hollywood. DeMille rented a barn to function as their film studio. Filming began on December 29, 1913, and lasted three weeks. Apfel filmed most of The Squaw Man due to DeMille's inexperience; however, DeMille learned quickly and was particularly adept at impromptu screenwriting as necessary. He made his first film run sixty minutes, as long as a short play.

The Squaw Man (1914), co-directed by Oscar Apfel, was a sensation and it established the Lasky Company. This was the first feature-length film made in Hollywood. There were problems, however, with the perforation of the film stock, and it was discovered that DeMille had brought a cheap British film perforator which punched sixty-five holes per foot instead of the industry standard of sixty-four. Lasky and DeMille convinced film pioneer Siegmund Lubin of the Lubin Manufacturing Company of Philadelphia to have his experienced technicians reperforate the film. This was also the first American feature film, though only by release date, as D. W. Griffith's Judith of Bethulia was filmed earlier than The Squaw Man but released later. Additionally, this was the only film in which DeMille shared director's credit with Oscar C. Apfel. The Squaw Man was a success, which led to the eventual founding of Paramount Pictures and Hollywood becoming the "film capital of the world". The film grossed over ten times its budget after its New York premiere in February 1914.

DeMille's next project was to aid Oscar Apfel in directing Brewster's Millions, which was wildly successful. In December 1914, Constance Adams brought home John DeMille, a fifteen-month-old, whom the couple legally adopted three years later.
Biographer Scott Eyman suggested that this may have been a result of Adams's recent miscarriage. 1915–1928: Silent era Westerns, Paradise, and World War I Cecil B. DeMille's second film credited exclusively to him was The Virginian. This is the earliest of DeMille's films available in a quality, color-tinted video format. However, this version is actually a 1918 re-release. The first few years of the Lasky Company were spent in making films nonstop, literally writing the language of film. DeMille himself directed twenty films by 1915. The most successful films during the beginning of the Lasky Company were Brewster's Millions (co-directed by DeMille), Rose of the Rancho, and The Ghost Breaker. DeMille adapted Belasco's dramatic lighting techniques to film technology, mimicking moonlight with U.S. cinema's first attempts at "motivated lighting" in The Warrens of Virginia. This was the first of few film collaborations with his brother William. They struggled to adapt the play from the stage to the set. After the film was shown, viewers complained that the shadows and lighting prevented the audience from seeing the actors' full faces, complaining that they would only pay half price. However, Sam Goldwyn realized that if they called it "Rembrandt" lighting, the audience would pay double the price. Additionally, because of DeMille's cordiality after the Peter Grimm incident, DeMille was able to rekindle his partnership with Belasco. He adapted several of Belasco's screenplays into film. DeMille's most successful film was The Cheat; DeMille's direction in the film was acclaimed. In 1916, exhausted from three years of nonstop filmmaking, DeMille purchased land in the Angeles National Forest for a ranch which would become his getaway. He called this place, "Paradise", declaring it a wildlife sanctuary; no shooting of animals was allowed besides snakes. His wife did not like Paradise, so DeMille often brought his mistresses there with him including actress Julia Faye. In addition to his Paradise, DeMille purchased a yacht in 1921 which he called The Seaward. While filming The Captive in 1915, an extra, Bob Fleming, died on set when another extra failed to heed to DeMille's orders to unload all guns for rehearsal. DeMille instructed the guilty man to leave town and would never reveal his name. Lasky and DeMille maintained the widow Fleming on the payroll; however, according to leading actor House Peters Sr. DeMille refused to stop production for the funeral of Fleming. Peters claimed that he encouraged the cast to attend the funeral with him anyway since DeMille would not be able to shoot the film without him. On July 19, 1916, the Jesse Lasky Feature Play Company merged with Adolph Zukor's Famous Players Film Company, becoming Famous Players-Lasky. Zukor became president with Lasky as the vice president. DeMille was maintained as director-general and Goldwyn became chairman of the board. Goldwyn was later fired from Famous Players-Lasky due to frequent clashes with Lasky, DeMille, and finally Zukor. While on a European vacation in 1921, DeMille contracted rheumatic fever in Paris. He was confined to bed and unable to eat. His poor physical condition upon his return home affected the production of his 1922 film Manslaughter. According to Richard Birchard, DeMille's weakened state during production may have led to the film being received as uncharacteristically substandard. 
During World War I, the Famous Players-Lasky organized a military company underneath the National Guard called the Home Guard made up of film studio employees with DeMille as captain. Eventually, the Guard was enlarged to a battalion and recruited soldiers from other film studios. They took time off weekly from film production to practice military drills. Additionally, during the war, DeMille volunteered for the Justice Department's Intelligence Office, investigating friends, neighbors, and others he came in contact with in connection with the Famous Players-Lasky. He volunteered for the Intelligence Office during World War II as well. Although DeMille considered enlisting in World War I, he stayed in the United States and made films. However, he did take a few months to set up a movie theater for the French front. Famous Players-Lasky donated the films. DeMille and Adams adopted Katherine Lester in 1920 whom Adams had found in the orphanage over which she was the director. In 1922, the couple adopted Richard deMille. Scandalous dramas, Biblical epics, and departure from Paramount Film started becoming more sophisticated and the subsequent films of the Lasky company were criticized for primitive and unrealistic set design. Consequently, Beatrice deMille introduced the Famous Players-Lasky to Wilfred Buckland, who DeMille had known from his time at the American Academy of Dramatic Arts, and he became DeMille's art director. William deMille reluctantly became a story editor. William deMille would later convert from theater to Hollywood and would spend the rest of his career as a film director. Throughout his career, DeMille would frequently remake his own films. In his first instance, in 1917, he remade The Squaw Man (1918), only waiting four years from the 1914 original. Despite its quick turnaround, the film was fairly successful. However, DeMille's second remake at MGM in 1931 would be a failure. After five years and thirty hit films, DeMille became the American film industry's most successful director. In the silent era, he was renowned for Male and Female (1919), Manslaughter (1922), The Volga Boatman (1926), and The Godless Girl (1928). DeMille's trademark scenes included bathtubs, lion attacks, and Roman orgies. Many of his films featured scenes in two-color Technicolor. In 1923, DeMille released a modern melodrama The Ten Commandments which was a significant change from his previous stint of irreligious films. The film was produced on a large budget of $600,000, the most expensive production at Paramount. This concerned the executives at Paramount; however, the film turned out to be the studio's highest-grossing film. It held the Paramount record for twenty-five years until DeMille broke the record again himself. In the early 1920s, scandal surrounded Paramount; religious groups and the media opposed portrayals of immorality in films. A censorship board called the Hays Code was established. DeMille's film The Affairs of Anatol came under fire. Furthermore, DeMille argued with Zukor over his extravagant and over-budget production costs. Consequently, DeMille left Paramount in 1924 despite having helped establish it. He joined the Producers Distributing Corporation. His first film in the new production company, DeMille Pictures Corporation, was The Road to Yesterday in 1925. He directed and produced four films on his own, working with Producers Distributing Corporation because he found front office supervision too restricting. 
Aside from The King of Kings, none of DeMille's films away from Paramount were successful. The King of Kings established DeMille as "master of the grandiose and of biblical sagas". Considered at the time to be the most successful Christian film of the silent era, DeMille calculated that it had been viewed over 800 million times around the world. After the release of DeMille's The Godless Girl, silent films in America became obsolete and DeMille was forced to shoot a shoddy final reel with the new sound production technique. Although this final reel looked so different from the previous eleven reels that it appeared to be from another movie, according to Simon Louvish, the film is one of DeMille's strangest and most "DeMillean" film. The immense popularity of DeMille's silent films enabled him to branch out into other areas. The Roaring Twenties were the boom years and DeMille took full advantage, opening the Mercury Aviation Company, one of America's first commercial airlines. He was also a real estate speculator, an underwriter of political campaigns, and vice president of Bank of America. He was additionally vice president of the Commercial National Trust and Savings Bank in Los Angeles where he approved loans for other filmmakers. In 1916, DeMille purchased a mansion in Hollywood. Charlie Chaplin lived next door for a time, and after he moved, DeMille purchased the other house and combined the estates. 1929–1956: Sound era MGM and return to Paramount When "talking pictures" were invented in 1928, Cecil B. DeMille made a successful transition, offering his own innovations to the painful process; he devised a microphone boom and a soundproof camera blimp. He also popularized the camera crane. His first three sound films were produced at Metro-Goldwyn-Mayer. These three films, Dynamite, Madame Satan, and his 1931 remake of The Squaw Man were both critically and financially unsuccessful. He had completely adapted to the production of sound film despite the film's poor dialogue. After his contract ended at MGM, he left, but no production studios would hire him. He attempted to create a guild of a half a dozen directors with the same creative desires called the Director's Guild. However, the idea failed due to lack of funding and commitment. Moreover, DeMille was audited by the Internal Revenue Service due to issues with his production company. This was, according to DeMille, the lowest point of his career. DeMille traveled abroad to find employment until he was offered a deal at Paramount. In 1932, DeMille returned to Paramount at the request of Lasky, bringing with him his own production unit. His first film back at Paramount, The Sign of the Cross, was also his first success since leaving Paramount besides The King of Kings. DeMille's return was approved by Zukor under the condition that DeMille not exceed his production budget of $650,000 for The Sign of the Cross. Produced in eight weeks without exceeding budget, the film was financially successful. The Sign of the Cross was the first film to integrate all cinematic techniques. The film was considered a "masterpiece" and surpassed the quality of other sound films of the time. DeMille followed this epic uncharacteristically with two dramas released in 1933 and 1934. This Day and Age and Four Frightened People were box office disappointments, though Four Frightened People received good reviews. DeMille would stick to his large-budget spectaculars for the rest of his career. Politics and Lux Radio Theatre Cecil B. 
DeMille was outspoken about his strong Episcopalian integrity but his private life included mistresses and adultery. DeMille was a conservative Republican activist, becoming more conservative as he aged. He was known as anti-union and worked to prevent unionizing of film production studios. However, according to DeMille himself, he was not anti-union and belonged to a few unions himself. He said he was rather against union leaders such as Walter Reuther and Harry Bridges whom he compared to dictators. He supported Herbert Hoover and in 1928 made his largest campaign donation to Hoover. DeMille also liked Franklin D. Roosevelt, however, finding him charismatic, tenacious, and intelligent and agreeing with Roosevelt's abhorrence of Prohibition. DeMille lent Roosevelt a car for his campaign for the 1932 United States presidential election and voted for him. However, he would never again vote for a Democratic candidate in a presidential election. From June 1, 1936, until January 22, 1945, Cecil B. DeMille hosted and directed Lux Radio Theater, a weekly digest of current feature films. Broadcast on the Columbia Broadcasting System (CBS) from 1935 to 1954, the Lux Radio show was one of the most popular weekly shows in the history of radio. While DeMille was host, the show had forty million weekly listeners, gaining DeMille an annual salary of $100,000. From 1936 to 1945, he produced, hosted, and directed all shows with the occasional exception of a guest director. He resigned from the Lux Radio Show because he refused to pay a dollar to the American Federation of Radio Artists (AFRA) because he did not believe that any organization had the right to "levy a compulsory assessment upon any member." Consequently, he had to resign from the radio show. DeMille sued the union for reinstatement but lost. He then appealed to the California Supreme Court and lost again. When the AFRA expanded to television, DeMille was banned from television appearances. Consequently, he formed the DeMille Foundation for Political Freedom in order to campaign for the right to work. He began presenting speeches across the United States for the next few years. DeMille's primary criticism was of closed shops, but later included criticism of communism and unions in general. The United States Supreme Court declined to review his case. Despite his loss, DeMille continued to lobby for the Taft–Hartley Act, which passed. This prohibited denying anyone the right to work if they refuse to pay a political assessment, however, the law did not apply retroactively. Consequently, DeMille's television and radio appearance ban lasted for the remainder of his life, though he was permitted to appear on radio or television to publicize a movie. William Keighley was his replacement. DeMille would never again work on radio. Adventure films and dramatic spectacles In 1939, DeMille's Union Pacific was successful through DeMille's collaboration with the Union Pacific Railroad. The Union Pacific gave DeMille access to historical data, early period trains, and expert crews, adding to the authenticity of the film. During pre-production of Union Pacific, DeMille was dealing with his first serious health issue. In March 1938, he underwent a major emergency prostatectomy. He suffered from a post-surgery infection from which he nearly did not recover, citing streptomycin as his saving grace. The surgery caused him to suffer from sexual dysfunction for the rest of his life, according to some family members. 
Following his surgery and the success of Union Pacific, in 1940, DeMille first used three-strip Technicolor in North West Mounted Police. DeMille wanted to film in Canada; however, due to budget constraints, the film was instead shot in Oregon and Hollywood. Critics were impressed with the visuals but found the scripts dull, calling it DeMille's "poorest Western". Despite the criticism, it was Paramount's highest-grossing film of the year. Audiences liked its highly saturated color, so DeMille made no further black-and-white features. DeMille was anti-communist and abandoned a project in 1940 to film Ernest Hemingway's For Whom the Bell Tolls due to its communist themes despite the fact he had already paid $100,000 for the rights to the novel. He was so eager to produce the film, that he hadn't yet read the novel. He claimed he abandoned the project in order to complete a different project, but in reality, it was to preserve his reputation and avoid appearing reactionary. While concurrently filmmaking, he served in World War II at the age of sixty as his neighborhood air-raid warden. In 1942, DeMille worked with Jeanie MacPherson and brother William deMille in order to produce a film called Queen of Queens which was intended to be about Mary, mother of Jesus. After reading the screenplay, Daniel A. Lord warned DeMille that Catholics would find the film too irreverent, while non-Catholics would have considered the film Catholic propaganda. Consequently, the film was never made. Jeanie MacPherson would work as a scriptwriter for many of DeMille's films. In 1938, DeMille supervised the compilation of film Land of Liberty to represent the contribution of the American film industry to the 1939 New York World's Fair. DeMille used clips from his own films in Land of Liberty. Though the film was not high-grossing, it was well-received and DeMille was asked to shorten its running time to allow for more showings per day. MGM distributed the film in 1941 and donated profits to World War II relief charities. In 1942, DeMille released Paramount's most successful film, Reap the Wild Wind. It was produced with a large budget and contained many special effects including an electronically operated giant squid. After working on Reap the Wild Wind, in 1944, he was the master of ceremonies at the massive rally organized by David O. Selznick in the Los Angeles Coliseum in support of the Dewey–Bricker ticket as well as Governor Earl Warren of California. DeMille's subsequent film Unconquered (1947) had the longest running time (146 minutes), longest filming schedule (102 days) and largest budget of $5 million. The sets and effects were so realistic that 30 extras needed to be hospitalized due to a scene with fireballs and flaming arrows. It was commercially very successful. DeMille's next film, Samson and Delilah in 1949, became Paramount's highest-grossing film up to that time. A Biblical epic with sex, it was a characteristically DeMille film. Again, 1952's The Greatest Show on Earth became Paramount's highest-grossing film to that point. Furthermore, DeMille's film won the Academy Award for Best Picture and the Academy Award for Best Story. The film began production in 1949, Ringling Brothers-Barnum and Bailey were paid $250,000 for use of the title and facilities. DeMille toured with the circus while helping write the script. Noisy and bright, it was not well-liked by critics, but was a favorite among audiences. 
DeMille signed a contract with Prentice Hall publishers in August 1953 to publish an autobiography. DeMille would reminisce into a voice recorder, the recording would be transcribed, and the information would be organized in the biography based on the topic. Art Arthur also interviewed people for the autobiography. DeMille did not like the first draft of the biography, saying that he thought the person portrayed in the biography was an "SOB"; he said it made him sound too egotistical. Besides filmmaking and finishing his autobiography, DeMille was involved in other projects. In the early 1950s, DeMille was recruited by Allen Dulles and Frank Wisner to serve on the board of the anti-communist National Committee for a Free Europe, the public face of the organization that oversaw the Radio Free Europe service. In 1954, Secretary of the Air Force Harold E. Talbott asked DeMille for help in designing the cadet uniforms at the newly established United States Air Force Academy. DeMille's designs, most notably his design of the distinctive cadet parade uniform, won praise from Air Force and Academy leadership, were ultimately adopted, and are still worn by cadets. Final works and unrealized projects In 1952, DeMille sought approval for a lavish remake of his 1923 silent film The Ten Commandments. He went before the Paramount board of directors, which was mostly Jewish-American. The members rejected his proposal, even though his last two films, Samson and Delilah and The Greatest Show on Earth, had been record-breaking hits. Adolph Zukor convinced the board to change their minds on the grounds of morality. DeMille did not have an exact budget proposal for the project, and it promised to be the most costly in U.S. film history. Still, the members unanimously approved it. The Ten Commandments, released in 1956, was DeMille's final film. It was the longest (3 hours, 39 minutes) and most expensive ($13 million) film in Paramount history. Production of The Ten Commandments began in October 1954. The Exodus scene was filmed on-site in Egypt with the use of four Technicolor-VistaVision camera filming 12,000 people. They continued filming in 1955 in Paris and Hollywood on 30 different sound stages. They were even required to expand to RKO sound studios for filming. Post-production lasted a year and the film premiered in Salt Lake City. Nominated for an Academy Award for Best Picture, it grossed over $80 million, which surpassed the gross of The Greatest Show on Earth and every other film in history, except for Gone with the Wind. A unique practice at the time, DeMille offered ten percent of his profit to the crew. On November 7, 1954, while in Egypt filming the Exodus sequence for The Ten Commandments, DeMille (who was seventy-three) climbed a ladder to the top of the massive Per Rameses set and suffered a serious heart attack. Despite the urging of his associate producer, DeMille wanted to return to the set right away. DeMille developed a plan with his doctor to allow him to continue directing while reducing his physical stress. Although DeMille completed the film, his health was diminished by several more heart attacks. His daughter Cecilia took over as director as DeMille sat behind the camera with Loyal Griggs as the cinematographer. This film would be his last. Due to his frequent heart attacks, DeMille asked his son-in-law, actor Anthony Quinn, to direct a remake of his 1938 film The Buccaneer. DeMille served as executive producer, overseeing producer Henry Wilcoxon. 
Despite a cast led by Charlton Heston and Yul Brynner, the 1958 film The Buccaneer was a disappointment. DeMille attended its Santa Barbara premiere in December 1958 but was unable to attend the Los Angeles premiere. In the months before his death, DeMille was researching a film biography of Robert Baden-Powell, the founder of the Scout Movement. DeMille asked David Niven to star in the film, but it was never made. DeMille was also planning a film about the space race as well as another biblical epic about the Book of Revelation. DeMille's autobiography was mostly completed by the time he died and was published in November 1959.

Death

Cecil B. DeMille suffered a series of heart attacks from June 1958 to January 1959, and died on January 21, 1959, following an attack. DeMille's funeral was held on January 23 at St. Stephen's Episcopal Church. He was entombed at the Hollywood Memorial Cemetery (now known as Hollywood Forever). After his death, notable news outlets such as The New York Times, the Los Angeles Times, and The Guardian honored DeMille as "pioneer of movies", "the greatest creator and showman of our industry", and "the founder of Hollywood".

DeMille left his multi-million dollar estate in Los Feliz, Los Angeles in Laughlin Park to his daughter Cecilia because his wife had dementia and was unable to care for an estate. His wife died one year later. His personal will drew a line between Cecilia and his three adopted children, with Cecilia receiving a majority of DeMille's inheritance and estate. The other three children were surprised by this, as DeMille did not treat the children differently in life. Cecilia lived in the house for many years until her death in 1984, but the house was auctioned by his granddaughter Cecilia DeMille Presley, who also lived there in the late 1980s.

Filmmaking

Influences

DeMille believed his first influences to be his parents, Henry and Beatrice DeMille. His playwright father introduced him to the theater at a young age. Henry was heavily influenced by the work of Charles Kingsley, whose ideas trickled down to DeMille. DeMille noted that his mother had a "high sense of the dramatic" and was determined to continue the artistic legacy of her husband after he died. Beatrice became a play broker and author's agent, influencing DeMille's early life and career. DeMille's father worked with David Belasco, a theatrical producer, impresario, and playwright. Belasco was known for adding realistic elements in his plays such as real flowers, food, and aromas that could transport his audiences into the scenes. While working in theatre, DeMille, influenced by Belasco, used real fruit trees in his play California. Like Belasco's, DeMille's theatre revolved around entertainment rather than artistry. Generally, Belasco's influence on DeMille's career can be seen in DeMille's showmanship and narration. E.H. Sothern's early influence on DeMille's work can be seen in DeMille's perfectionism. DeMille recalled that one of the most influential plays he saw was Hamlet, directed by Sothern.

Method

DeMille's filmmaking process always began with extensive research. Next, he would work with writers to develop the story that he was envisioning. Then, he would help writers construct a script. Finally, he would leave the script with artists and allow them to create artistic depictions and renderings of each scene. Plot and dialogue were not a strong point of DeMille's films. Consequently, he focused his efforts on his films' visuals.
He worked with visual technicians, editors, art directors, costume designers, cinematographers, and set carpenters in order to perfect the visual aspects of his films. With his editor, Anne Bauchens, DeMille used editing techniques to allow the visual images to bring the plot to climax rather than dialogue. DeMille had large and frequent office conferences to discuss and examine all aspects of the working film including story-boards, props, and special effects. DeMille rarely gave direction to actors; he preferred to "office-direct" where he would work with actors in his office, going over characters and reading through scripts. Any problems on the set were often fixed by writers in the office rather than on the set. DeMille did not believe a large movie set was the place to discuss minor character or line issues. DeMille was particularly adept at directing and managing large crowds in his films. Martin Scorsese recalled that DeMille had the skill to maintain control of not only the lead actors in a frame but the many extras in the frame as well. DeMille was adept at directing "thousands of extras", and many of his pictures include spectacular set pieces: the toppling of the pagan temple in Samson and Delilah; train wrecks in The Road to Yesterday, Union Pacific and The Greatest Show on Earth; the destruction of an airship in Madam Satan; and the parting of the Red Sea in both versions of The Ten Commandments. DeMille experimented in his early films with photographic light and shade which created dramatic shadows instead of glare. His specific use of lighting, influenced by his mentor David Belasco, was for the purpose of creating "striking images" and heightening "dramatic situations". DeMille was unique in using this technique. In addition to his use of volatile and abrupt film editing, his lighting and composition were innovative for the time period as filmmakers were primarily concerned with a clear, realistic image. Another important aspect of DeMille's editing technique was to put the film away for a week or two after an initial edit in order to re-edit the picture with a fresh mind. This allowed for the rapid production of his films in the early years of the Lasky Company. The cuts were sometimes rough, but the movies were always interesting. DeMille often edited in a manner that favored psychological space rather than physical space through his cuts. In this way, the characters' thoughts and desires are the visual focus rather than the circumstances regarding the physical scene. As DeMille's career progressed, he increasingly relied on artist Dan Sayre Groesbeck's concept, costume, and storyboard art. Groesbeck's art was circulated on set to give actors and crew members a better understanding of DeMille's vision. His art was even shown at Paramount meetings when pitching new films. DeMille adored the art of Groesbeck, even hanging it above his fireplace, but film staff found it difficult to convert his art into three-dimensional sets. As DeMille continued to rely on Groesbeck, the nervous energy of his early films transformed into more steady compositions of his later films. While visually appealing, this made the films appear more old-fashioned. Composer Elmer Bernstein described DeMille as "sparing no effort" when filmmaking. Bernstein recalled that DeMille would scream, yell, or flatter, whatever it took to achieve the perfection he required in his films. DeMille was painstakingly attentive to details on set and was as critical of himself as he was of his crew. 
Costume designer Dorothy Jeakins, who worked with DeMille on The Ten Commandments (1956), said that he was skilled in humiliating people. Jeakins admitted that she received quality training from him, but that it was necessary to become a perfectionist on a DeMille set to avoid being fired. DeMille had an authoritarian persona on set; he required absolute attention from the cast and crew. He had a band of assistants who catered to his needs. He would speak to the entire set, sometimes enormous with countless numbers of crew members and extras, via a microphone to maintain control of the set. He was disliked by many inside and outside of the film industry for his cold and controlling reputation. DeMille was known for autocratic behavior on the set, singling out and berating extras who were not paying attention. Many of these displays were thought to be staged, however, as an exercise in discipline. He despised actors who were unwilling to take physical risks, especially when he had first demonstrated that the required stunt would not harm them. This occurred with Victor Mature in Samson and Delilah. Mature refused to wrestle Jackie the Lion, even though DeMille had just tussled with the lion, proving that he was tame. DeMille told the actor that he was "one hundred percent yellow". Paulette Goddard's refusal to risk personal injury in a scene involving fire in Unconquered cost her DeMille's favor and a role in The Greatest Show on Earth. DeMille did receive help in his films, notably from Alvin Wyckoff who shot forty-three of DeMille's films; brother William deMille who would occasionally serve as his screenwriter; and Jeanie Macpherson, who served as DeMille's exclusive screenwriter for fifteen years; and Eddie Salven, DeMille's favorite assistant director. DeMille made stars of unknown actors: Gloria Swanson, Bebe Daniels, Rod La Rocque, William Boyd, Claudette Colbert, and Charlton Heston. He also cast established stars such as Gary Cooper, Robert Preston, Paulette Goddard and Fredric March in multiple pictures. DeMille cast some of his performers repeatedly, including: Henry Wilcoxon, Julia Faye, Joseph Schildkraut, Ian Keith, Charles Bickford, Theodore Roberts, Akim Tamiroff and William Boyd. DeMille was credited by actor Edward G. Robinson with saving his career following his eclipse in the Hollywood blacklist. Style and themes Cecil B. DeMille's film production career evolved from critically significant silent films to financially significant sound films. He began his career with reserved yet brilliant melodramas; from there, his style developed into marital comedies with outrageously melodramatic plots. In order to attract a high-class audience, DeMille based many of his early films on stage melodramas, novels, and short stories. He began the production of epics earlier in his career until they began to solidify his career in the 1920s. By 1930, DeMille had perfected his film style of mass-interest spectacle films with Western, Roman, or Biblical themes. DeMille was often criticized for making his spectacles too colorful and for being too occupied with entertaining the audience rather than accessing the artistic and auteur possibilities that film could provide. However, others interpreted DeMille's work as visually impressive, thrilling, and nostalgic. Along the same lines, critics of DeMille often qualify him by his later spectacles and fail to consider several decades of ingenuity and energy that defined him during his generation. 
Throughout his career, he did not alter his films to better adhere to contemporary or popular styles. Actor Charlton Heston admitted DeMille was "terribly unfashionable," and Sidney Lumet called DeMille "the cheap version of D.W. Griffith," adding that DeMille "[didn't have]...an original thought in his head," though Heston added that DeMille was much more than that. According to Scott Eyman, DeMille's films were at the same time masculine and feminine due to his thematic adventurousness and his eye for the extravagant. DeMille's distinctive style can be seen through camera and lighting effects as early as The Squaw Man with the use of daydream images; moonlight and sunset on a mountain; and side-lighting through a tent flap. In the early age of cinema, DeMille differentiated the Lasky Company from other production companies due to the use of dramatic, low-key lighting they called "Lasky lighting" and marketed as "Rembrandt lighting" to appeal to the public. DeMille achieved international recognition for his unique use of lighting and color tint in his film The Cheat. DeMille's 1956 version of The Ten Commandments, according to director Martin Scorsese, is renowned for its level of production and the care and detail that went into creating the film. He stated that The Ten Commandments was the final culmination of DeMille's style. DeMille was interested in art and his favorite artist was Gustave Doré; DeMille based some of his most well-known scenes on the work of Doré. DeMille was the first director to connect art to filmmaking; he created the title of "art director" on the film set. DeMille was also known for creating elaborate special effects without digital technology. Notably, DeMille had cinematographer John P. Fulton create the parting of the Red Sea scene in his 1956 film The Ten Commandments, which was one of the most expensive special effects in film history, and has been called by Steven Spielberg "the greatest special effect in film history". The actual parting of the sea was created by releasing 360,000 gallons of water into a huge water tank split by a U-shaped trough, overlaying it with film of a giant waterfall that was built on the Paramount backlot, and playing the clip backwards. Aside from his Biblical and historical epics, which are concerned with how man relates to God, some of DeMille's films contained themes of "neo-naturalism", which portray the conflict between the laws of man and the laws of nature. Although he is known for his later "spectacular" films, his early films are held in high regard by critics and film historians. DeMille discovered the possibilities of the "bathroom" or "boudoir" in film without being "vulgar" or "cheap". DeMille's films Male and Female, Why Change Your Wife?, and The Affairs of Anatol can be retrospectively described as high camp and are categorized as "early DeMille films" due to their particular style of production and costume and set design. However, his earlier films The Captive, Kindling, Carmen, and The Whispering Chorus are more serious. It is difficult to typify DeMille's films into one specific genre. His first three films were Westerns, and he filmed many Westerns throughout his career. However, he also filmed comedies, period and contemporary romances, dramas, fantasies, propaganda, Biblical spectacles, musical comedies, suspense, and war films. At least one DeMille film can represent each film genre. 
DeMille produced the majority of his films before the 1930s, and by the time sound films were invented, film critics saw DeMille as antiquated, with his best filmmaking years behind him. DeMille's films contained many similar themes throughout his career. However, the films of his silent era were often thematically different from the films of his sound era. His silent era films often included the "battle of the sexes" theme due to the era of women's suffrage and the enlarging role of women in society. Moreover, before his religious-themed films, many of his silent era films revolved around "husband-and-wife-divorce-and-remarry satires", considerably more adult-themed. According to Simon Louvish, these films reflected DeMille's inner thoughts and opinions about marriage and human sexuality. Religion was a theme that DeMille returned to throughout his career. Of his seventy films, five revolved around stories of the Bible and the New Testament; however, many others, while not direct retellings of Biblical stories, had themes of faith and religious fanaticism, in films such as The Crusades and The Road to Yesterday. The American West and frontier were also themes that DeMille returned to throughout his career. His first several films were westerns, and he produced a string of westerns during the sound era. Instead of portraying the danger and anarchy of the West, he portrayed the opportunity and redemption found in Western America. Another common theme in DeMille's films is the reversal of fortune and the portrayal of the rich and the poor, including the war of the classes and man-versus-society conflicts, as in The Golden Chance and The Cheat. In relation to his own interests and sexual preferences, sadomasochism was a minor theme present in some of his films. Train crashes are another minor recurring element, appearing in several of his films. Legacy Known as the father of the Hollywood motion picture industry, Cecil B. DeMille made 70 films, including several box-office hits. DeMille is one of the more commercially successful film directors in history, with his films before the release of The Ten Commandments estimated to have grossed $650 million worldwide. Adjusted for inflation, DeMille's remake of The Ten Commandments is the eighth highest-grossing film in the world. According to Sam Goldwyn, critics did not like DeMille's films, but the audiences did and "they have the final word". Similarly, scholar David Blanke argued that DeMille had lost the respect of his colleagues and film critics by the late stage of his career. However, the success of his final films showed that DeMille was still respected by his audiences. Five of DeMille's films were the highest-grossing films of their release year; only Spielberg has topped him, with six of his films the highest-grossing of their year. DeMille's highest-grossing films include The Sign of the Cross (1932), Unconquered (1947), Samson and Delilah (1949), The Greatest Show on Earth (1952), and The Ten Commandments (1956). Director Ridley Scott has been called "the Cecil B. DeMille of the digital era" due to his classical and medieval epics. Despite his box-office success, awards, and artistic achievements, DeMille has been dismissed and ignored by critics both during his life and posthumously. He was consistently criticized for producing shallow films without talent or artistic care. Compared to other directors, few film scholars have taken the time to academically analyze his films and style. 
During the French New Wave, critics began to categorize certain filmmakers as auteurs such as Howard Hawks, John Ford, and Raoul Walsh. DeMille was omitted from the list, thought to be too unsophisticated and antiquated to be considered an auteur. However, Simon Louvish wrote "he was the complete master and auteur of his films" and Anton Kozlovic called him the "unsung American auteur". Andrew Sarris, a leading proponent of the auteur theory, ranked DeMille highly as an auteur in the "Far Side of Paradise", just below the "Pantheon". Sarris added that despite the influence of styles of contemporary directors throughout his career, DeMille's style remained unchanged. Robert Birchard wrote that one could argue auteurship of DeMille on the basis that DeMille's thematic and visual style remained consistent throughout his career. However, Birchard acknowledged that Sarris's point was more likely that DeMille's style was behind the development of film as an art form. Meanwhile, Sumiko Higashi sees DeMille as "not only a figure who was shaped and influenced by the forces of his era but as a filmmaker who left his own signature on the culture industry." The critic Camille Paglia has called The Ten Commandments one of the ten greatest films of all time. DeMille was one of the first directors to become a celebrity in his own right. He cultivated the image of the omnipotent director, complete with megaphone, riding crop, and jodhpurs. He was known for his unique, working wardrobe which included riding boots, riding pants, and soft, open necked shirts. Joseph Henabery recalled that DeMille looked like "a king on a throne surrounded by his court" while directing films on a camera platform. DeMille was liked by some of his fellow directors and disliked by others, though his actual films were usually dismissed by his peers as vapid spectacle. Director John Huston intensely disliked both DeMille and his films. "He was a thoroughly bad director," Huston said. "A dreadful showoff. Terrible. To diseased proportions." Said fellow director William Wellman: "Directorially, I think his pictures were the most horrible things I've ever seen in my life. But he put on pictures that made a fortune. In that respect, he was better than any of us." Producer David O. Selznick wrote: "There has appeared only one Cecil B. DeMille. He is one of the most extraordinarily able showmen of modern times. However much I may dislike some of his pictures, it would be very silly of me, as a producer of commercial motion pictures, to demean for an instant his unparalleled skill as a maker of mass entertainment." Salvador Dalí wrote that DeMille, Walt Disney and the Marx Brothers were "the three great American Surrealists". DeMille appeared as himself in numerous films, including the MGM comedy Free and Easy. He often appeared in his coming-attraction trailers and narrated many of his later films, even stepping on screen to introduce The Ten Commandments. DeMille was immortalized in Billy Wilder's Sunset Boulevard when Gloria Swanson spoke the line: "All right, Mr. DeMille. I'm ready for my close-up." DeMille plays himself in the film. DeMille's reputation had a renaissance in the 2010s. As a filmmaker, DeMille was the aesthetic inspiration of many directors and films due to his early influence during the crucial development of the film industry. DeMille's early silent comedies influenced the comedies of Ernst Lubitsch and Charlie Chaplin's A Woman of Paris. 
Additionally, DeMille's epics such as The Crusades influenced Sergei Eisenstein's Alexander Nevsky. Moreover, DeMille's epics inspired directors such as Howard Hawks, Nicholas Ray, Joseph L. Mankiewicz, and George Stevens to try producing epics. Cecil B. DeMille has influenced the work of several well-known directors. Alfred Hitchcock cited DeMille's 1921 film Forbidden Fruit as an influence on his work and one of his top ten favorite films. DeMille has influenced the careers of many modern directors. Martin Scorsese cited Unconquered, Samson and Delilah, and The Greatest Show on Earth as DeMille films that made lasting impressions on him. Scorsese said he had viewed The Ten Commandments forty or fifty times. Famed director Steven Spielberg stated that DeMille's The Greatest Show on Earth was one of the films that influenced him to become a filmmaker. Furthermore, DeMille influenced about half of Spielberg's films, including War of the Worlds. The Ten Commandments inspired DreamWorks Animation's later film about Moses, The Prince of Egypt. As one of the founding members of Paramount Pictures and a co-founder of Hollywood, DeMille had a role in the development of the film industry. Consequently, the name "DeMille" has become synonymous with filmmaking. Publicly Episcopalian, DeMille drew on his Christian and Jewish ancestry to convey a message of tolerance. DeMille received more than a dozen awards from Christian and Jewish religious and cultural groups, including B'nai B'rith. However, not everyone received DeMille's religious films favorably. DeMille was accused of antisemitism after the release of The King of Kings, and director John Ford despised DeMille for what he saw as "hollow" biblical epics meant to promote DeMille's reputation during the politically turbulent 1950s. In response to the claims, DeMille donated some of the profits from The King of Kings to charity. In the 2012 Sight & Sound poll, both DeMille's Samson and Delilah and his 1923 version of The Ten Commandments received votes, but did not make the top 100 films. Although many of DeMille's films are available on DVD and Blu-ray, only 20 of his silent films are commercially available on DVD. Commemoration and tributes The original Lasky-DeMille Barn in which The Squaw Man was filmed was converted into a museum named the "Hollywood Heritage Museum". It opened on December 13, 1985, and features some of DeMille's personal artifacts. The Lasky-DeMille Barn was dedicated as a California historical landmark in a ceremony on December 27, 1956; DeMille was the keynote speaker at the ceremony. The barn was listed on the National Register of Historic Places in 2014. The Dunes Center in Guadalupe, California, contains an exhibition of artifacts uncovered in the desert near Guadalupe from the set of DeMille's 1923 version of The Ten Commandments, known as the "Lost City of Cecil B. DeMille". Donated by the Cecil B. DeMille Foundation in 2004, the moving image collection of Cecil B. DeMille is held at the Academy Film Archive and includes home movies, outtakes, and never-before-seen test footage. In summer 2019, The Friends of the Pompton Lakes Library hosted a Cecil B. DeMille film festival to celebrate DeMille's achievements and connection to Pompton Lakes. They screened four of his films at Christ Church, where DeMille and his family attended services when they lived there. Two schools have been named after him: Cecil B. 
DeMille Middle School, in Long Beach, California, which was closed and demolished in 2010 to make way for a new high school; and Cecil B. DeMille Elementary School in Midway City, California. The former film building at Chapman University in Orange, California, is named in honor of DeMille. During the Apollo 11 mission, Buzz Aldrin referred to himself in one instance as "Cecil B. DeAldrin", as a humorous nod to DeMille. The title of the 2000 John Waters film Cecil B. Demented alludes to DeMille. DeMille's legacy is maintained by his granddaughter Cecilia DeMille Presley, who serves as the president of the Cecil B. DeMille Foundation, which strives to support higher education, child welfare, and film in Southern California. In 1963, the Cecil B. DeMille Foundation donated the "Paradise" ranch to the Hathaway Foundation, which cares for emotionally disturbed and abused children. A large collection of DeMille's materials, including scripts, storyboards, and films, resides at Brigham Young University in L. Tom Perry Special Collections. Awards and recognition Cecil B. DeMille received many awards and honors, especially later in his career. The American Academy of Dramatic Arts honored DeMille with an Alumni Achievement Award in 1958. In 1957, DeMille gave the commencement address for the graduation ceremony of Brigham Young University, where he received an honorary Doctorate of Letters degree. Additionally, in 1958, he received an honorary Doctorate of Law degree from Temple University. From the film industry, DeMille received the Irving G. Thalberg Memorial Award at the Academy Awards in 1953, and a Lifetime Achievement Award from the Directors Guild of America the same year. At the same ceremony, DeMille was nominated for the Directors Guild of America Award for Outstanding Directorial Achievement in Motion Pictures for The Greatest Show on Earth. In 1952, DeMille was awarded the first Cecil B. DeMille Award at the Golden Globes. An annual award, the Golden Globe's Cecil B. DeMille Award recognizes lifetime achievement in the film industry. For his contribution to the motion picture and radio industry, DeMille has two stars on the Hollywood Walk of Fame. The first, for radio contributions, is located at 6240 Hollywood Blvd. The second star is located at 1725 Vine Street. DeMille received two Academy Awards: an Honorary Award for "37 years of brilliant showmanship" in 1950 and a Best Picture award in 1953 for The Greatest Show on Earth. DeMille received a Golden Globe Award for Best Director and was additionally nominated in the Best Director category at the 1953 Academy Awards for the same film. He was further nominated in the Best Picture category for The Ten Commandments at the 1957 Academy Awards. DeMille's Union Pacific received a retrospective Palme d'Or at the 2002 Cannes Film Festival. Two of DeMille's films have been selected for preservation in the National Film Registry by the United States Library of Congress: The Cheat (1915) and The Ten Commandments (1956). Filmography Cecil B. DeMille made 70 features. Fifty-two of his features are silent films. The first 24 of his silent films were made in the first three years of his career (1913–1916). Eight of his films were "epics", with five of those classified as "Biblical". Six of DeMille's films—The Arab, The Wild Goose Chase, The Dream Girl, The Devil-Stone, We Can't Have Everything, and The Squaw Man (1918)—were destroyed due to nitrate decomposition, and are considered lost. 
The Ten Commandments is broadcast every Saturday at Passover in the United States on the ABC Television Network. Directed features Filmography obtained from Fifty Hollywood Directors. Silent films Sound films Directing or producing credit These films represent those which DeMille produced or assisted in directing, credited or uncredited. Brewster's Millions (1914, lost) The Master Mind (1914) The Only Son (1914, lost) The Man on the Box (1914) The Ghost Breaker (1914, lost) After Five (1915) Nan of Music Mountain (1917) Chicago (1927, Producer) When Worlds Collide (1951, executive producer) The War of the Worlds (1953, executive producer) The Buccaneer (1958, producer) Acting and cameos DeMille frequently made cameos as himself in other Paramount films. Additionally, he often starred in prologues and special trailers that he created for his films, having an opportunity to personally address the audience. Explanatory notes Citations General sources Orrison, Katherine. Written in Stone: Making Cecil B. DeMille's Epic, The Ten Commandments. New York: Vestal Press, 1990. . External links by the Cecil B. DeMille Foundation Cecil B. DeMille at Virtual History Cecil B. DeMille's Early Films' Costs and Grosses by David Pierce Silent Film Bookshelf Archival materials Cecil B. DeMille papers, Vault MSS 1400, L. Tom Perry Special Collections, Harold B. Lee Library, Brigham Young University, DeMille's personal and business papers including correspondence, audio, and video recordings, financial ledgers, and memorabilia Other collections related to DeMille at the L. Tom Perry Special Collections, Harold B. Lee Library, Brigham Young University The Mary Roberts Rinehart Papers, Vault SC.1958.03, ULS Special Collections, University of Pittsburgh Library, includes conversations with DeMille about her plays 1881 births 1959 deaths 20th-century American dramatists and playwrights 20th-century American male actors 20th-century American male writers 20th-century American memoirists 20th-century American screenwriters Academy Honorary Award recipients American Academy of Dramatic Arts alumni American anti-communists American Episcopalians American film editors American male dramatists and playwrights American male film actors American male screenwriters American male stage actors American people of Belgian descent American people of British-Jewish descent American people of Dutch descent American people of English descent American people of German-Jewish descent American radio directors American radio personalities American radio producers American theatre directors American theatre managers and producers Articles containing video clips Best Director Golden Globe winners Burials at Hollywood Forever Cemetery California Republicans Cecil B. DeMille Award Golden Globe winners Cecil B. Directors of Palme d'Or winners Film directors from Massachusetts Film directors from New Jersey Film directors from North Carolina Film producers from Massachusetts Film producers from New Jersey Golden Globe Award-winning producers Harold B. Lee Library-related film articles Old Right (United States) People from Ashfield, Massachusetts People from Los Feliz, Los Angeles People from Pompton Lakes, New Jersey People from Washington, North Carolina Producers who won the Best Picture Academy Award Recipients of the Irving G. 
Thalberg Memorial Award Screenwriters from Massachusetts Screenwriters from New Jersey Screenwriters from North Carolina Silent film directors Special effects people Western (genre) film directors Widener University alumni Members of The Lambs Club
https://en.wikipedia.org/wiki/Clark%20Ashton%20Smith
Clark Ashton Smith
Clark Ashton Smith (January 13, 1893 – August 14, 1961) was an American writer and artist. He achieved early local recognition, largely through the enthusiasm of George Sterling, for traditional verse in the vein of Swinburne. As a poet, Smith is grouped with the West Coast Romantics alongside Joaquin Miller, Sterling, and Nora May French and remembered as "The Last of the Great Romantics" and "The Bard of Auburn". Smith's work was praised by his contemporaries. H. P. Lovecraft stated that "in sheer daemonic strangeness and fertility of conception, Clark Ashton Smith is perhaps unexcelled", and Ray Bradbury said that Smith "filled my mind with incredible worlds, impossibly beautiful cities, and still more fantastic creatures". Smith was one of "the big three of Weird Tales, with Robert E. Howard and H. P. Lovecraft", but some readers objected to his morbidness and violation of pulp traditions. The fantasy critic L. Sprague de Camp said of him that "nobody since Poe has so loved a well-rotted corpse." Smith was a member of the Lovecraft circle, and his literary friendship with Lovecraft lasted from 1922 until Lovecraft's death in 1937. His work is marked by an extraordinarily rich and ornate vocabulary, a cosmic perspective and a vein of sardonic and sometimes ribald humor. Of his writing style, Smith stated: "My own conscious ideal has been to delude the reader into accepting an impossibility, or series of impossibilities, by means of a sort of verbal black magic, in the achievement of which I make use of prose-rhythm, metaphor, simile, tone-color, counter-point, and other stylistic resources, like a sort of incantation." Biography Early life and education Smith was born January 13, 1893, in Long Valley, Placer County, California, into a family of English and New England heritage. He spent most of his life in the small town of Auburn, California, living in a cabin built by his parents, Fanny and Timeus Smith. Smith professed to hate the town's provincialism but rarely left it until he married late in life. His formal education was limited: he suffered from psychological disorders including intense agoraphobia, and although he was accepted to high school after attending eight years of grammar school, his parents decided it was better for him to be taught at home. An insatiable reader with an extraordinary eidetic memory, Smith appeared to retain most or all of whatever he read. After leaving formal education, he embarked upon a self-directed course of literature, including Robinson Crusoe, Gulliver's Travels, the fairy tales of Hans Christian Andersen and Madame d'Aulnoy, the Arabian Nights and the poems of Edgar Allan Poe. He read an unabridged dictionary word for word, studying not only the definitions of the words but also their etymology. The other main course in Smith's self-education was to read the complete 11th edition of the Encyclopædia Britannica at least twice. Smith later taught himself French and Spanish to translate verse out of those languages, including works by Gérard de Nerval, Paul Verlaine, Amado Nervo, Gustavo Adolfo Bécquer and all but 6 of Charles Baudelaire's 157 poems in The Flowers of Evil. Early writing His first literary efforts, at the age of 11, took the form of fairy tales and imitations of the Arabian Nights. Later, he wrote long adventure novels dealing with Oriental life. By 14 he had already written a short adventure novel called The Black Diamonds which was lost for years until published in 2002. 
Another juvenile novel was written in his teenaged years: The Sword of Zagan (unpublished until 2004). Like The Black Diamonds, it uses a medieval, Arabian Nights-like setting, and the Arabian Nights, like the fairy tales of the Brothers Grimm and the works of Edgar Allan Poe, are known to have strongly influenced Smith's early writing, as did William Beckford's Vathek. At age 17, he sold several tales to The Black Cat, a magazine which specialized in unusual tales. He also published some tales in the Overland Monthly in this brief foray into fiction, which preceded his poetic career. However, it was primarily poetry that motivated the young Smith, and he confined his efforts to poetry for more than a decade. In his later youth, Smith made the acquaintance of the San Francisco poet George Sterling through a member of the local Auburn Monday Night Club, where he read several of his poems with considerable success. On a month-long visit to Sterling in Carmel, California, Smith was introduced by Sterling to the poetry of Baudelaire. He became Sterling's protégé and Sterling helped him to publish his first volume of poems, The Star-Treader and Other Poems, at the age of 19. Smith received international acclaim for the collection. The Star-Treader was received very favorably by American critics, one of whom named Smith "the Keats of the Pacific". Smith briefly moved among the circle that included Ambrose Bierce and Jack London, but his early fame soon faded away. Health breakdown period A little later, Smith's health broke down and for eight years his literary production was intermittent, though he produced his best poetry during this period. A small volume, Odes and Sonnets, was brought out in 1918. Smith came into contact with literary figures who would later form part of H.P. Lovecraft's circle of correspondents; Smith knew them far earlier than Lovecraft. These figures include poet Samuel Loveman and bookman George Kirk. It was Smith who in fact later introduced Donald Wandrei to Lovecraft. For this reason, it has been suggested that Lovecraft might as well be referred to as a member of a "Smith" circle as Smith was a member of a Lovecraft one. In 1920 Smith composed a celebrated long poem in blank verse, The Hashish Eater, or The Apocalypse of Evil, which was published in Ebony and Crystal (1922). This was followed by a fan letter from H. P. Lovecraft, which was the beginning of 15 years of friendship and correspondence. With studied playfulness, Smith and Lovecraft borrowed each other's coinages of place names and the names of strange gods for their stories, though so different is Smith's treatment of the Lovecraft theme that it has been dubbed the "Clark Ashton Smythos." In 1925 Smith published Sandalwood, which was partly funded by a gift of $50 from Donald Wandrei. He wrote little fiction in this period with the exception of some imaginative vignettes or prose poems. Smith was poor for most of his life and often did hard manual jobs such as fruit picking and woodcutting to support himself and his parents. He was an able cook and made many kinds of wine. He also did well-digging, typing, and journalism, contributing a column to The Auburn Journal and sometimes working as its night editor. One of Smith's artistic patrons and frequent correspondents was San Francisco businessman Albert M. Bender. 
Prolific fiction-writing period At the beginning of the Depression in 1929, with his aged parents' health weakening, Smith resumed fiction writing and turned out more than a hundred short stories between 1929 and 1934, nearly all of which can be classed as weird horror or science fiction. Like Lovecraft, he drew upon the nightmares that had plagued him during youthful spells of sickness. Brian Stableford has written that the stories written during this brief phase of hectic productivity "constitute one of the most remarkable oeuvres in imaginative literature". He published at his own expense a volume containing six of his best stories, The Double Shadow and Other Fantasies, in an edition of 1000 copies printed by the Auburn Journal. The theme of much of his work is egotism and its supernatural punishment; his weird fiction is generally macabre in subject matter, gloatingly preoccupied with images of death, decay and abnormality. Most of Smith's weird fiction falls into four series set variously in Hyperborea, Poseidonis, Averoigne and Zothique. Hyperborea, which is a lost continent of the Miocene period, and Poseidonis, which is a remnant of Atlantis, are much the same, with a magical culture characterized by bizarreness, cruelty, death and postmortem horrors. Averoigne is Smith's version of pre-modern France, comparable to James Branch Cabell's Poictesme. Zothique exists millions of years in the future. It is "the last continent of earth, when the sun is dim and tarnished". These tales have been compared to the Dying Earth sequence of Jack Vance. In 1933 Smith began corresponding with Robert E. Howard, the Texan creator of Conan the Barbarian. From 1933 to 1936, Smith, Howard and Lovecraft were the leaders of the Weird Tales school of fiction and corresponded frequently, although they never met. The writer of oriental fantasies E. Hoffmann Price is the only man known to have met all three in the flesh. Critic Steve Behrends has suggested that the frequent theme of 'loss' in Smith's fiction (many of his characters attempt to recapture a long-vanished youth, early love, or picturesque past) may reflect Smith's own feeling that his career had suffered a "fall from grace". Mid-late career: return to poetry and sculpture In September 1935, Smith's mother Fanny died. Smith spent the next two years nursing his father through his last illness. Timeus died in December 1937. Aged 44, Smith now virtually ceased writing fiction. He had been severely affected by several tragedies occurring in a short period of time: Robert E. Howard's death by suicide (1936), Lovecraft's death from cancer (1937) and the deaths of his parents, which left him exhausted. As a result, he withdrew from the scene, marking the end of the Golden Age of Weird Tales. He began sculpting and resumed the writing of poetry. However, Smith was visited by many writers at his cabin, including Fritz Leiber, Rah Hoffman, Francis T. Laney and others. In 1942, three years after August Derleth founded Arkham House for the purpose of preserving the work of H.P. Lovecraft, Derleth published the first of several major collections of Smith's fiction, Out of Space and Time (1942). This was followed by Lost Worlds (1944). The books sold slowly, went out of print and became costly rarities. Derleth published five more volumes of Smith's prose and two of his verse, and at his death in 1971 had a large volume of Smith's poems in press. Later life, marriage and death In 1953, Smith suffered a coronary attack. 
Aged 61, he married Carol(yn) Jones Dorman on November 10, 1954. Dorman had much experience in Hollywood and radio public relations. After honeymooning at the Smith cabin, they moved to Pacific Grove, California, where he set up a household including her three children from a previous marriage. For several years he alternated between the house on Indian Ridge and their house in Pacific Grove. Smith having sold most of his father's tract, in 1957 the old house burned – the Smiths believed by arson, others said by accident. Smith now reluctantly did gardening for other residents at Pacific Grove, and grew a goatee. He spent much time shopping and walking near the seafront but despite Derleth's badgering, resisted the writing of more fiction. In 1961 he suffered a series of strokes and in August 1961 he quietly died in his sleep, aged 68. After Smith's death, Carol remarried (becoming Carolyn Wakefield) and subsequently died of cancer. The poet's ashes were buried beside, or beneath, a boulder to the immediate west of where his childhood home (destroyed by fire in 1957) stood; some were also scattered in a stand of blue oaks near the boulder. There was no marker. Plaques recognizing Smith have been erected at the Auburn Placer County Library in 1985 and in Bicentennial Park in Auburn in 2003. Bookseller Roy A. Squires was appointed Smith's "west coast executor", with Jack L. Chalker as his "east coast executor". Squires published many letterpress editions of individual Smith poems. Smith's literary estate is represented by his stepson, Prof William Dorman, director of CASiana Literary Enterprises. Arkham House owns the copyright to many Smith stories, though some are now in the public domain. For 'posthumous collaborations' of Smith (stories completed by Lin Carter), see the entry on Lin Carter. Artistic periods While Smith was always an artist who worked in several very different media, it is possible to identify three distinct periods in which one form of art had precedence over the others. Poetry: until 1925 Smith published most of his volumes of poetry in this period, including the aforementioned The Star-Treader and Other Poems, as well as Odes and Sonnets (1918), Ebony and Crystal (1922) and Sandalwood (1925). His long poem The Hashish-Eater; Or, the Apocalypse of Evil was written in 1920. Weird fiction: 1926–1935 Smith wrote most of his weird fiction and Cthulhu Mythos stories, inspired by H. P. Lovecraft. Creatures of his invention include Aforgomon, Rlim-Shaikorth, Mordiggian, Tsathoggua, the wizard Eibon, and various others. In an homage to his friend, Lovecraft referred in "The Whisperer in Darkness" and "The Battle That Ended the Century" (written in collaboration with R. H. Barlow) to an Atlantean high-priest, "Klarkash-Ton". Smith's weird stories form several cycles, called after the lands in which they are set: Averoigne, Hyperborea, Mars, Poseidonis, Zothique. To some extent Smith was influenced in his vision of such lost worlds by the teachings of Theosophy and the writings of Helena Blavatsky. Stories set in Zothique belong to the Dying Earth subgenre. Amongst Smith's science fiction tales are stories set on Mars and the invented planet of Xiccarph. His short stories originally appeared in the magazines Weird Tales, Strange Tales, Astounding Stories, Stirring Science Stories and Wonder Stories. Clark Ashton Smith was the third member of the great triumvirate of Weird Tales, with Lovecraft and Robert E. Howard. 
Many of Smith's stories were published in six hardcover volumes by August Derleth under his Arkham House imprint. For a full bibliography to 1978, see Sidney-Fryer, Emperor of Dreams (cited below). S.T. Joshi is working with other scholars to produce an updated bibliography of Smith's work. A selection of Smith's best-known tales includes: "The Last Incantation" — Weird Tales, June 1930 LW2 "A Voyage to Sfanomoe" — Weird Tales, August 1931 LW2 "The Tale of Satampra Zeiros" — Weird Tales, November 1931 LW2 "The Door to Saturn" — Strange Tales, January 1932 LW2 "The Planet of the Dead" — Weird Tales, March 1932 LW2 "The Gorgon" — Weird Tales, April 1932 LW2 "The Letter from Mohaun Los" (under the title of "Flight into Super-Time") — Wonder Stories, August 1932 LW1 "The Empire of the Necromancers" — Weird Tales, September 1932 LW1 "The Hunters from Beyond" — Strange Tales, October 1932 LW1 "The Isle of the Torturers" — Weird Tales, March 1933 LW1 "The Light from Beyond" — Wonder Stories, April 1933 LW1 "The Beast of Averoigne" — Weird Tales, May 1933 LW1 "The Holiness of Azedarac" — Weird Tales, November 1933 LW1 "The Demon of the Flower" — Astounding Stories, December 1933 LW2 "The Death of Malygris" — Weird Tales, April 1934 LW2 "The Plutonium Drug" — Amazing Stories, September 1934 LW2 "The Seven Geases" — Weird Tales, October 1934 LW2 "Xeethra" — Weird Tales, December 1934 LW1 "The Flower-Women" — Weird Tales, May 1935 LW2 "The Treader of the Dust" — Weird Tales, August 1935 LW1 "Necromancy in Naat" — Weird Tales, July 1936 LW1 "The Maze of Maal Dweb" — Weird Tales, October 1938 LW2 "The Coming of the White Worm" — Stirring Science Stories, April 1941 LW2 Visual art: 1935–1961 By this time his interest in writing fiction began to lessen and he turned to creating sculptures from soft rock such as soapstone. Smith also made hundreds of fantastic paintings and drawings. Bibliography The authoritative bibliography on Smith's work is S. T. Joshi, David E. Schultz and Scott Connors. Clark Ashton Smith: A Comprehensive Bibliography. NY: Hippocampus Press, 2020. The first Smith bibliography, which focused on his short fiction, was The Tales of Clark Ashton Smith, published by Thomas G. L. Cockcroft in New Zealand in 1951. Books published in Smith's lifetime 1912: The Star-Treader and Other Poems. San Francisco: A.M. Robertson, Nov 1912. 100 pages. 2000 copies. Some copies have a frontispiece photo by Bianca Conti; others lack it. 1918: Odes and Sonnets. San Francisco: The Book Club of California, June 1918. 28 pages. 300 copies. 1922: Ebony and Crystal: Poems in Verse and Prose. Auburn CA: The Auburn Journal Press, 1922. 43 pages. Limited to 500 copies signed by Smith. Some copies are found with corrections in Smith's hand to typos in the text. 1925: Sandalwood. Auburn CA: The Auburn Journal Press, Oct 1925. Verse. 43 pages. Limited to 250 (i.e. 225) numbered copies signed by Smith. Some copies are found with corrections in Smith's hand to typos in the text. 1933: The Double Shadow and Other Fantasies. Auburn, CA: The Auburn Journal Press, 1933. Short stories. Limited to 1000 copies in grey paper wrappers. 1937: Nero and Other Poems. Lakeport CA: The Futile Press, May 1937. 24 pages. c.250 copies. The poems herein were revised by Smith from their first appearances in The Star-Treader and Other Poems. Some copies have the three-page essay "The Price of Poetry", on Smith's verse, by David Warren Ryder, laid in; it was printed to accompany the book. 
According to the official Smith bibliography, the volume was also issued with a broadside, "Outlanders", a 1934 sonnet which made its first appearance in print here. Roy A. Squires purchased spare sheets of the volume from Groo Beck of Futile Press, sufficient to produce a 'second state' of 13 copies, which was specially bound by Kristina Etchison and published by bookdealer Terence McVicker. (This second state is not noted in the official bibliography.) 1951: The Dark Chateau and Other Poems. Sauk City, WI: Arkham House, Dec 1951. 63 pages. 563 copies. 1958: Spells and Philtres. Sauk City: Arkham House, March 1958. Verse. 54 pages. 519 copies. Books published posthumously 1962: The Hill of Dionysus – A Selection. Pacific Grove, CA: Roy A. Squires and Clyde Beck. Verse. This volume was prepared while Smith was still living, but he died before it could see print. It was published 'In memoriam'. 1971: Selected Poems. Sauk City, WI: Arkham House, Nov 1971. This volume was delivered by the author to Arkham House in December 1949 but remained unpublished until 1971. Night Shade Books The Collected Fantasies of Clark Ashton Smith 5-volume work Miscellaneous Writings. Originally announced as Tales of India and Irony (a collection of non-fantasy/science fiction/horror tales, planned to be available only to subscribers of the above collection). Now commercially available. Red World of Polaris (complete tales of Captain Volmar) Hippocampus Press The Complete Poetry and Translations of Clark Ashton Smith (3 vols) The Black Diamonds. A juvenile Oriental fantasy. The Last Oblivion: Best Fantastic Poems of Clark Ashton Smith The Sword of Zagan and Other Writings. Juvenile Oriental fantasy. The Shadow of the Unattained: Letters of George Sterling and Clark Ashton Smith The Freedom of Fantastic Things: Selected Criticism on Clark Ashton Smith The Hashish-Eater. (2008). Edited with notes etc. by Donald Sidney-Fryer. Introduction by Ron Hilger. Includes a CD audio recording of Sidney-Fryer reading "The Hashish-Eater" and (on a hidden track) a selection of other poems by Smith. The Averoigne Chronicles: The Complete Averoigne Stories of Clark Ashton Smith Zothique: The Final Cycle by Clark Ashton Smith Arkham House Out of Space and Time Lost Worlds Genius Loci and Other Tales The Dark Chateau Spells and Philtres The Abominations of Yondo Tales of Science and Sorcery Poems in Prose Other Dimensions (o.o.p.) Selected Poems The Black Book of Clark Ashton Smith A Rendezvous in Averoigne Selected Letters of Clark Ashton Smith Spearman (reprinted from Arkham House) Lost Worlds hardcover 1971 Out of Space and Time 1971 Genius Loci hardcover 1971 Abominations of Yondo 1972 Panther (reprinted from Arkham House) Lost Worlds (published in 2 volumes) Genius Loci The Abominations of Yondo Other Dimensions (published in 2 volumes) Out of Space and Time (published in 2 volumes) Tales of Science and Sorcery Ballantine Adult Fantasy series Zothique 1970 Hyperborea 1971 Xiccarph 1972 Poseidonis 1973 Averoigne (reportedly compiled by series editor Lin Carter, but never released) Roy A. Squires Roy A. Squires, California-based bookman and letterpress printer, issued many limited edition pamphlets consisting of individual Smith poems and prose poems during the 1970s. Wildside Press The Double Shadow The Maker of Gargoyles and Other Stories The White Sybil and Other Stories Timescape Books The City of the Singing Flame 1981 The Last Incantation 1982 The Monster of the Prophecy 1983 Gollancz Emperor of Dreams. Ed. Stephen Jones. 
2002. An omnibus edition in paperback of Smith's best tales. Bancroft Library In the Line of the Grotesque and Monstrous. Introduction by D.S. Black. Berkeley: The Bancroft Library, 2004. Prints the text of three letters by Smith to Samuel Loveman. Only 50 copies were printed, in burnt orange wrappers. Printed on the Bancroft Library's 1856 Albion handpress. The RAS Press The Black Abbot of Puthuum. Glendale, CA: The RAS Press, Oct 2007. Limited to 250 numbered copies. (This press was founded by Roy A. Squires and is currently run by bookseller Terence McVicker.) HIH Art Studios Shadows Seen and Unseen: Poetry from the Shadows. San Jose, CA: HIH Art Studios, 2007. Edited by Raymond L. Johnson and Ardath W. Winterowd and signed by both editors. Limited to 540 copies. Hardcover in slipcase. Includes reproductions of poetry manuscripts by Smith, and color plates of several Smith paintings. Penguin Books The Dark Eidolon and Other Fantasies. Ed. S. T. Joshi. 2014. Other (Essays, Letters, etc) Smith, Clark Ashton. Planets and Dimensions: Collected Essays. Edited by Charles K. Wolfe. Baltimore MD: Mirage Press, 1973. David E. Schultz and S. T. Joshi (eds). The Shadow of the Unattained: The Letters of George Sterling and Clark Ashton Smith. NY: Hippocampus Press, 2005. David E. Schultz and S. T. Joshi (eds). Dawnward Spire, Lonely Hill: The Letters of H. P. Lovecraft and Clark Ashton Smith. NY: Hippocampus Press, 2017. David E. Schultz and S.T. Joshi (eds). Eccentric, Impractical Devils: The Letters of August Derleth and Clark Ashton Smith. NY: Hippocampus Press, 2020. S.T. Joshi and David E. Schultz (eds). Born Under Saturn: The Letters of Samuel Loveman and Clark Ashton Smith. NY: Hippocampus Press, 2021; an expanded edition of Samuel Loveman's Out of the Immortal Night (2004). Scholars S.T. Joshi and David E. Schultz are preparing various additional volumes of Smith's letters to such of his individual correspondents as Donald Wandrei and Robert H. Barlow. Media adaptations Visual "The Double Shadow" was filmed by Azathoth Productions, Newcastle, Australia, on Super 8 film in 1975, with a script by Leigh Blackmore. "The Return of the Sorcerer" was adapted for an episode of the television series Night Gallery, starring Vincent Price and Bill Bixby. "The Seed from the Sepulcher", "The Vaults of Yoh Vombis" and "The Return of the Sorcerer" were adapted as ten-page comics by Richard Corben, published in DenSaga 1, 2 and 3 respectively (Fantagor Press 1992–1993). "Mother of Toads" was adapted as segment one of the six-segment horror anthology film The Theatre Bizarre (2011). Audio Clark Ashton Smith: Live from Auburn: The Elder Tapes. In the late 1950s Smith recorded a number of his poems on the tape-recorder of his friend Robert B. Elder. Elder chose the 11 poems at random from Smith's books The Dark Chateau and Spells and Philtres. (Elder had first met Smith when reporting on his 1954 wedding to the former Carol Dorman for The Auburn Courier, and they became friends when Smith praised Elder's novel Whom the Gods Destroy.) In 1995 Necronomicon Press released the audiocassette Clark Ashton Smith: Live from Auburn: The Elder Tapes, which includes an introduction by Elder and then Smith reading his poems. The recording was produced by Wayne Haigh. The cassette was accompanied by a booklet featuring a c.1960 photo of Smith and reprinting all 11 poems. Gahan Wilson provided the cover art for the cassette and booklet. The recording has not been released on CD. The Hashish-Eater and Other Poems. 
Nampa, Idaho: Fedogan and Bremer, 2018. Running time 68 mins. Includes Donald Sidney-Fryer's readings of "The Hashish-Eater" and a selection of other Smith poems, identical to the selection on the CD which accompanied the 2008 Hippocampus Press volume "The Hashish-Eater"; here, however, an orchestral soundtrack by Graham Plowman has been added. Booklet notes by Ron Hilger. See also Cordwainer Smith Rediscovery Award References Citations General and cited sources Herron, Don (October 2000). "Collecting Clark Ashton Smith". Firsts. Joshi, S. T. (2008). "Clark Ashton Smith: Beauty Is for the Few," chapter 2 in Emperors of Dreams: Some Notes on Weird Poetry. Sydney: P'rea Press. (pbk) and (hbk). Murray, Will. "The Clark Ashton Smythos" in Price, Robert M. (ed.). The Horror of It All: Encrusted Gems from the Crypt of Cthulhu. Mercer Island, WA: Starmont House, 1990. Further reading Bibliographies Cockcroft, Thomas G. L. The Tales of Clark Ashton Smith: A Bibliography. Lower Hutt, New Zealand: Cockcroft, Nov 1961 (500 copies). The first published bibliography on Smith; superseded by Donald Sidney-Fryer's Emperor of Dreams (1978) – see below. Joshi, S. T., David E. Schultz and Scott Connors. Clark Ashton Smith: A Comprehensive Bibliography. NY: Hippocampus Press, 2020. Sidney-Fryer, Donald. Emperor of Dreams: A Clark Ashton Smith Bibliography. West Kingston, RI: Donald M. Grant Publishers, 1978. A substantial work of scholarship which remains valuable for its critical appreciations but is now over thirty years out of date. A quantity of more recent bibliographical information can be found at the Bibliography section of the Eldritch Dark site online (see External Links). Both are completely superseded bibliographically by the Joshi, Schultz and Connors bibliography of 2020. Journals devoted to Smith's life and work Behrends, Steve. Klarkash-Ton: The Journal of Smith Studies No 1 (June 1988), Cryptic Publications. This journal was retitled by the new publisher from Issue 2 onward; thus the first issue of The Dark Eidolon: The Journal of Smith Studies (Necronomicon Press) is numbered "2" (it appeared June 1989). There were only 3 issues in total. No 3 appeared in Dec 2002. Connors, Scott and Ronald S. Hilger (eds). Lost Worlds: The Journal of Clark Ashton Smith Studies, Seele Brennt Publications. Issued annually, five numbers (2003–2008). Morris, Harry O. (ed). Nyctalops magazine. Special Clark Ashton Smith issue, 96 pp. (1973). Essays and standalone critical works Behrends, Steve. Clark Ashton Smith. Starmont Reader's Guide 49. Mercer Island, WA: Starmont House, 1990. Behrends, Steve. "The Song of the Necromancer: 'Loss' in Clark Ashton Smith's Fiction." Studies in Weird Fiction, 1, No 1 (Summer 1986): 3–12. Connors, Scott. The Freedom of Fantastic Things: Selected Criticism on Clark Ashton Smith. NY: Hippocampus Press, 2006. de Camp, L. Sprague. "Sierra Shaman: Clark Ashton Smith," in Literary Swordsmen and Sorcerers: The Makers of Heroic Fantasy. Sauk City, WI: Arkham House, 1976, 211–12. Fait, Eleanor. "Auburn Artist-Poet Utilizes Native Rock in Sculptures." Sacramento Union (Dec 21, 1941), 4C. Haefele, John D. "Far from Time: Clark Ashton Smith, August Derleth, and Arkham House." Weird Fiction Review No 1 (Fall 2010), 154–189. Hilger, Ronald. One Hundred Years of Klarkash-Ton. Averon Press, 1996. Schultz, David E. and Scott Connors (ed). Selected Letters of Clark Ashton Smith. Sauk City, WI: Arkham House, 2003. Schultz, David E. and S.T. Joshi. 
The Shadow of the Unattained: The Letters of George Sterling and Clark Ashton Smith. NY: Hippocampus Press, 2005. Sidney-Fryer, Donald. The Last of the Great Romantic Poets. Albuquerque NM: Silver Scarab Press, 1973. Sidney-Fryer, Donald. Clark Ashton Smith: The Sorcerer Departs. West Hills, CA: Tsathoggua Press, Jan 1997. Dole: Silver Key Press, 2007. An updated/revised version of Sidney-Fryer's essay in the Special CAS Issue of Nyctalops (see above under Morris). An uncredited extract from this work, as "A Biography of Clark Ashton Smith," may be found online at External links The Eldritch Dark – This website contains almost all of Clark Ashton Smith's written work, as well as a comprehensive selection of his art, biographies, a bibliography, a discussion board, readings, fiction tributes and more. Eldonejo 'Mistera Sturno' – A growing collection of authorized translations into Esperanto for free distribution as ebooks. Smith's poem "A Chant to Sirius" read by Leigh Blackmore Clark Ashton Smith: Poems – A collection of Clark Ashton Smith's early poetry. Clark Ashton Smith at the Encyclopedia of Science Fiction Clark Ashton Smith at the Encyclopedia of Fantasy 1893 births 1961 deaths 20th-century American male writers 20th-century American novelists American fantasy writers American horror writers American male novelists American male poets American male short story writers American people of English descent American science fiction writers American short story writers Cthulhu Mythos writers People from Auburn, California People from Mono County, California People from Pacific Grove, California Pulp fiction writers Weird fiction writers Writers from California
https://en.wikipedia.org/wiki/Circle
Circle
A circle is a shape consisting of all points in a plane that are at a given distance from a given point, the centre. Equivalently, it is the curve traced out by a point that moves in a plane so that its distance from a given point is constant. The distance between any point of the circle and the centre is called the radius. Usually, the radius is required to be a positive number. A circle with radius 0 (a single point) is a degenerate case. This article is about circles in Euclidean geometry, and, in particular, the Euclidean plane, except where otherwise noted. Specifically, a circle is a simple closed curve that divides the plane into two regions: an interior and an exterior. In everyday use, the term "circle" may be used interchangeably to refer to either the boundary of the figure, or to the whole figure including its interior; in strict technical usage, the circle is only the boundary and the whole figure is called a disc. A circle may also be defined as a special kind of ellipse in which the two foci are coincident, the eccentricity is 0, and the semi-major and semi-minor axes are equal; or the two-dimensional shape enclosing the most area per unit perimeter squared, using calculus of variations. Euclid's definition Topological definition In the field of topology, a circle is not limited to the geometric concept but includes all of its homeomorphisms. Two topological circles are equivalent if one can be transformed into the other via a deformation of R3 upon itself (known as an ambient isotopy). Terminology Annulus: a ring-shaped object, the region bounded by two concentric circles. Arc: any connected part of a circle. Specifying two end points of an arc and a centre allows for two arcs that together make up a full circle. Centre: the point equidistant from all points on the circle. Chord: a line segment whose endpoints lie on the circle, thus dividing a circle into two segments. Circumference: the length of one circuit along the circle, or the distance around the circle. Diameter: a line segment whose endpoints lie on the circle and that passes through the centre; or the length of such a line segment. This is the largest distance between any two points on the circle. It is a special case of a chord, namely the longest chord for a given circle, and its length is twice the length of a radius. Disc: the region of the plane bounded by a circle. Lens: the region common to (the intersection of) two overlapping discs. Passant: a coplanar straight line that has no point in common with the circle. Radius: a line segment joining the centre of a circle with any single point on the circle itself; or the length of such a segment, which is half (the length of) a diameter. Sector: a region bounded by two radii of equal length with a common centre and either of the two possible arcs, determined by this centre and the endpoints of the radii. Segment: a region bounded by a chord and one of the arcs connecting the chord's endpoints. The length of the chord imposes a lower boundary on the diameter of possible arcs. Sometimes the term segment is used only for regions not containing the centre of the circle to which their arc belongs. Secant: an extended chord, a coplanar straight line, intersecting a circle in two points. Semicircle: one of the two possible arcs determined by the endpoints of a diameter, taking its midpoint as centre. In non-technical common usage it may mean the interior of the two dimensional region bounded by a diameter and one of its arcs, that is technically called a half-disc. 
A half-disc is a special case of a segment, namely the largest one. Tangent: a coplanar straight line that has one single point in common with a circle ("touches the circle at this point"). All of the specified regions may be considered as open, that is, not containing their boundaries, or as closed, including their respective boundaries. History The word circle derives from the Greek κίρκος/κύκλος (kirkos/kuklos), itself a metathesis of the Homeric Greek κρίκος (krikos), meaning "hoop" or "ring". The origins of the words circus and circuit are closely related. The circle has been known since before the beginning of recorded history. Natural circles would have been observed, such as the Moon, Sun, and a short plant stalk blowing in the wind on sand, which forms a circle shape in the sand. The circle is the basis for the wheel, which, with related inventions such as gears, makes much of modern machinery possible. In mathematics, the study of the circle has helped inspire the development of geometry, astronomy and calculus. Early science, particularly geometry and astrology and astronomy, was connected to the divine for most medieval scholars, and many believed that there was something intrinsically "divine" or "perfect" that could be found in circles. Some highlights in the history of the circle are: 1700 BCE – The Rhind papyrus gives a method to find the area of a circular field. The result corresponds to 256/81 (3.16049...) as an approximate value of π. 300 BCE – Book 3 of Euclid's Elements deals with the properties of circles. In Plato's Seventh Letter there is a detailed definition and explanation of the circle. Plato explains the perfect circle, and how it is different from any drawing, words, definition or explanation. 1880 CE – Lindemann proves that π is transcendental, effectively settling the millennia-old problem of squaring the circle. Analytic results Circumference The ratio of a circle's circumference to its diameter is π (pi), an irrational constant approximately equal to 3.141592654. Thus the circumference C is related to the radius r and diameter d by C = 2πr = πd. Area enclosed As proved by Archimedes, in his Measurement of a Circle, the area enclosed by a circle is equal to that of a triangle whose base has the length of the circle's circumference and whose height equals the circle's radius, which comes to π multiplied by the radius squared: Area = πr². Equivalently, denoting diameter by d, Area = πd²/4 ≈ 0.7854d², that is, approximately 79% of the circumscribing square (whose side is of length d). The circle is the plane curve enclosing the maximum area for a given arc length. This relates the circle to a problem in the calculus of variations, namely the isoperimetric inequality. Equations Cartesian coordinates Equation of a circle In an x–y Cartesian coordinate system, the circle with centre coordinates (a, b) and radius r is the set of all points (x, y) such that (x − a)² + (y − b)² = r². This equation, known as the equation of the circle, follows from the Pythagorean theorem applied to any point on the circle: as shown in the adjacent diagram, the radius is the hypotenuse of a right-angled triangle whose other sides are of length |x − a| and |y − b|. If the circle is centred at the origin (0, 0), then the equation simplifies to x² + y² = r². Parametric form The equation can be written in parametric form using the trigonometric functions sine and cosine as x = a + r cos t, y = b + r sin t, where t is a parametric variable in the range 0 to 2π, interpreted geometrically as the angle that the ray from (a, b) to (x, y) makes with the positive x axis. 
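The Cartesian and parametric forms above can be checked numerically. The following Python sketch is purely illustrative (the function names, tolerance and sample values are assumptions made for this example, not part of the article): it samples points from the parametric form x = a + r cos t, y = b + r sin t, confirms that each satisfies (x − a)² + (y − b)² = r², and evaluates the circumference and area formulas.

```python
import math

def circle_point(a, b, r, t):
    """Point on the circle with centre (a, b) and radius r at parameter t (radians)."""
    return (a + r * math.cos(t), b + r * math.sin(t))

def on_circle(x, y, a, b, r, tol=1e-9):
    """Check the Cartesian equation (x - a)^2 + (y - b)^2 = r^2 up to a small tolerance."""
    return abs((x - a) ** 2 + (y - b) ** 2 - r ** 2) < tol

# Example circle: centre (2, -1), radius 3 (arbitrary sample values).
a, b, r = 2.0, -1.0, 3.0
print("circumference C = 2*pi*r =", 2 * math.pi * r)
print("area A = pi*r^2 =", math.pi * r ** 2)

# Every point produced by the parametric form satisfies the Cartesian equation.
for k in range(8):
    t = 2 * math.pi * k / 8
    x, y = circle_point(a, b, r, t)
    assert on_circle(x, y, a, b, r)
print("all sampled parametric points lie on the circle")
```

The check works for any centre and radius, since the parametric form is simply the identity cos²t + sin²t = 1 scaled by r and translated to (a, b).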
An alternative parametrisation of the circle is In this parameterisation, the ratio of t to r can be interpreted geometrically as the stereographic projection of the line passing through the centre parallel to the x axis (see Tangent half-angle substitution). However, this parameterisation works only if t is made to range not only through all reals but also to a point at infinity; otherwise, the leftmost point of the circle would be omitted. 3-point form The equation of the circle determined by three points not on a line is obtained by a conversion of the 3-point form of a circle equation: Homogeneous form In homogeneous coordinates, each conic section with the equation of a circle has the form It can be proven that a conic section is a circle exactly when it contains (when extended to the complex projective plane) the points I(1: i: 0) and J(1: −i: 0). These points are called the circular points at infinity. Polar coordinates In polar coordinates, the equation of a circle is where a is the radius of the circle, are the polar coordinates of a generic point on the circle, and are the polar coordinates of the centre of the circle (i.e., r0 is the distance from the origin to the centre of the circle, and φ is the anticlockwise angle from the positive x axis to the line connecting the origin to the centre of the circle). For a circle centred on the origin, i.e. , this reduces to . When , or when the origin lies on the circle, the equation becomes In the general case, the equation can be solved for r, giving Note that without the ± sign, the equation would in some cases describe only half a circle. Complex plane In the complex plane, a circle with a centre at c and radius r has the equation In parametric form, this can be written as The slightly generalised equation for real p, q and complex g is sometimes called a generalised circle. This becomes the above equation for a circle with , since . Not all generalised circles are actually circles: a generalised circle is either a (true) circle or a line. Tangent lines The tangent line through a point P on the circle is perpendicular to the diameter passing through P. If and the circle has centre (a, b) and radius r, then the tangent line is perpendicular to the line from (a, b) to (x1, y1), so it has the form . Evaluating at (x1, y1) determines the value of c, and the result is that the equation of the tangent is or If , then the slope of this line is This can also be found using implicit differentiation. When the centre of the circle is at the origin, then the equation of the tangent line becomes and its slope is Properties The circle is the shape with the largest area for a given length of perimeter (see Isoperimetric inequality). The circle is a highly symmetric shape: every line through the centre forms a line of reflection symmetry, and it has rotational symmetry around the centre for every angle. Its symmetry group is the orthogonal group O(2,R). The group of rotations alone is the circle group T. All circles are similar. A circle circumference and radius are proportional. The area enclosed and the square of its radius are proportional. The constants of proportionality are 2 and respectively. The circle that is centred at the origin with radius 1 is called the unit circle. Thought of as a great circle of the unit sphere, it becomes the Riemannian circle. Through any three points, not all on the same line, there lies a unique circle. 
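A minimal numerical sketch of this fact, in Python, is given below (an illustrative addition; the helper name circle_through and the sample points are invented for the example). It recovers the unique circle through three non-collinear points by the standard circumcentre computation, anticipating the explicit Cartesian formulae discussed next:

import math

def circle_through(p1, p2, p3):
    # centre and radius of the unique circle through three non-collinear points
    (ax, ay), (bx, by), (cx, cy) = p1, p2, p3
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if d == 0:
        raise ValueError("the points are collinear, so no unique circle exists")
    a2, b2, c2 = ax * ax + ay * ay, bx * bx + by * by, cx * cx + cy * cy
    ux = (a2 * (by - cy) + b2 * (cy - ay) + c2 * (ay - by)) / d
    uy = (a2 * (cx - bx) + b2 * (ax - cx) + c2 * (bx - ax)) / d
    return (ux, uy), math.hypot(ax - ux, ay - uy)

centre, radius = circle_through((0, 0), (4, 0), (0, 3))   # arbitrary sample points
print(centre, radius)   # (2.0, 1.5) and 2.5 for these three points

For this sample right triangle the computed centre is the midpoint of the hypotenuse and the radius is half its length, as the converse of Thales' theorem predicts.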
In Cartesian coordinates, it is possible to give explicit formulae for the coordinates of the centre of the circle and the radius in terms of the coordinates of the three given points. See circumcircle. Chord Chords are equidistant from the centre of a circle if and only if they are equal in length. The perpendicular bisector of a chord passes through the centre of a circle; equivalent statements stemming from the uniqueness of the perpendicular bisector are: A perpendicular line from the centre of a circle bisects the chord. The line segment through the centre bisecting a chord is perpendicular to the chord. If a central angle and an inscribed angle of a circle are subtended by the same chord and on the same side of the chord, then the central angle is twice the inscribed angle. If two angles are inscribed on the same chord and on the same side of the chord, then they are equal. If two angles are inscribed on the same chord and on opposite sides of the chord, then they are supplementary. For a cyclic quadrilateral, the exterior angle is equal to the interior opposite angle. An inscribed angle subtended by a diameter is a right angle (see Thales' theorem). The diameter is the longest chord of the circle. Among all the circles with a chord AB in common, the circle with minimal radius is the one with diameter AB. If the intersection of any two chords divides one chord into lengths a and b and divides the other chord into lengths c and d, then . If the intersection of any two perpendicular chords divides one chord into lengths a and b and divides the other chord into lengths c and d, then equals the square of the diameter. The sum of the squared lengths of any two chords intersecting at right angles at a given point is the same as that of any other two perpendicular chords intersecting at the same point and is given by 8r2 − 4p2, where r is the circle radius, and p is the distance from the centre point to the point of intersection. The distance from a point on the circle to a given chord times the diameter of the circle equals the product of the distances from the point to the ends of the chord. Tangent A line drawn perpendicular to a radius through the end point of the radius lying on the circle is a tangent to the circle. A line drawn perpendicular to a tangent through the point of contact with a circle passes through the centre of the circle. Two tangents can always be drawn to a circle from any point outside the circle, and these tangents are equal in length. If a tangent at A and a tangent at B intersect at the exterior point P, then denoting the centre as O, the angles ∠BOA and ∠BPA are supplementary. If AD is tangent to the circle at A and if AQ is a chord of the circle, then . Theorems The chord theorem states that if two chords, CD and EB, intersect at A, then . If two secants, AE and AD, also cut the circle at B and C respectively, then (corollary of the chord theorem). A tangent can be considered a limiting case of a secant whose ends are coincident. If a tangent from an external point A meets the circle at F and a secant from the external point A meets the circle at C and D respectively, then (tangent–secant theorem). The angle between a chord and the tangent at one of its endpoints is equal to one half the angle subtended at the centre of the circle, on the opposite side of the chord (tangent chord angle). If the angle subtended by the chord at the centre is 90°, then , where ℓ is the length of the chord, and r is the radius of the circle. 
If two secants are inscribed in the circle as shown at right, then the measurement of angle A is equal to one half the difference of the measurements of the enclosed arcs ( and ). That is, , where O is the centre of the circle (secant–secant theorem). Inscribed angles An inscribed angle (examples are the blue and green angles in the figure) is exactly half the corresponding central angle (red). Hence, all inscribed angles that subtend the same arc (pink) are equal. Angles inscribed on the arc (brown) are supplementary. In particular, every inscribed angle that subtends a diameter is a right angle (since the central angle is 180°). Sagitta The sagitta (also known as the versine) is a line segment drawn perpendicular to a chord, between the midpoint of that chord and the arc of the circle. Given the length y of a chord and the length x of the sagitta, the Pythagorean theorem can be used to calculate the radius of the unique circle that will fit around the two lines: Another proof of this result, which relies only on two chord properties given above, is as follows. Given a chord of length y and with sagitta of length x, since the sagitta intersects the midpoint of the chord, we know that it is a part of a diameter of the circle. Since the diameter is twice the radius, the "missing" part of the diameter is () in length. Using the fact that one part of one chord times the other part is equal to the same product taken along a chord intersecting the first chord, we find that (. Solving for r, we find the required result. Compass and straightedge constructions There are many compass-and-straightedge constructions resulting in circles. The simplest and most basic is the construction given the centre of the circle and a point on the circle. Place the fixed leg of the compass on the centre point, the movable leg on the point on the circle and rotate the compass. Construction with given diameter Construct the midpoint of the diameter. Construct the circle with centre passing through one of the endpoints of the diameter (it will also pass through the other endpoint). Construction through three noncollinear points Name the points , and , Construct the perpendicular bisector of the segment . Construct the perpendicular bisector of the segment . Label the point of intersection of these two perpendicular bisectors . (They meet because the points are not collinear). Construct the circle with centre passing through one of the points , or (it will also pass through the other two points). Circle of Apollonius Apollonius of Perga showed that a circle may also be defined as the set of points in a plane having a constant ratio (other than 1) of distances to two fixed foci, A and B. (The set of points where the distances are equal is the perpendicular bisector of segment AB, a line.) That circle is sometimes said to be drawn about two points. The proof is in two parts. First, one must prove that, given two foci A and B and a ratio of distances, any point P satisfying the ratio of distances must fall on a particular circle. Let C be another point, also satisfying the ratio and lying on segment AB. By the angle bisector theorem the line segment PC will bisect the interior angle APB, since the segments are similar: Analogously, a line segment PD through some point D on AB extended bisects the corresponding exterior angle BPQ where Q is on AP extended. Since the interior and exterior angles sum to 180 degrees, the angle CPD is exactly 90 degrees; that is, a right angle. 
The set of points P such that angle CPD is a right angle forms a circle, of which CD is a diameter. Second, see for a proof that every point on the indicated circle satisfies the given ratio. Cross-ratios A closely related property of circles involves the geometry of the cross-ratio of points in the complex plane. If A, B, and C are as above, then the circle of Apollonius for these three points is the collection of points P for which the absolute value of the cross-ratio is equal to one: Stated another way, P is a point on the circle of Apollonius if and only if the cross-ratio is on the unit circle in the complex plane. Generalised circles If C is the midpoint of the segment AB, then the collection of points P satisfying the Apollonius condition is not a circle, but rather a line. Thus, if A, B, and C are given distinct points in the plane, then the locus of points P satisfying the above equation is called a "generalised circle." It may either be a true circle or a line. In this sense a line is a generalised circle of infinite radius. Inscription in or circumscription about other figures In every triangle a unique circle, called the incircle, can be inscribed such that it is tangent to each of the three sides of the triangle. About every triangle a unique circle, called the circumcircle, can be circumscribed such that it goes through each of the triangle's three vertices. A tangential polygon, such as a tangential quadrilateral, is any convex polygon within which a circle can be inscribed that is tangent to each side of the polygon. Every regular polygon and every triangle is a tangential polygon. A cyclic polygon is any convex polygon about which a circle can be circumscribed, passing through each vertex. A well-studied example is the cyclic quadrilateral. Every regular polygon and every triangle is a cyclic polygon. A polygon that is both cyclic and tangential is called a bicentric polygon. A hypocycloid is a curve that is inscribed in a given circle by tracing a fixed point on a smaller circle that rolls within and tangent to the given circle. Limiting case of other figures The circle can be viewed as a limiting case of each of various other figures: A Cartesian oval is a set of points such that a weighted sum of the distances from any of its points to two fixed points (foci) is a constant. An ellipse is the case in which the weights are equal. A circle is an ellipse with an eccentricity of zero, meaning that the two foci coincide with each other as the centre of the circle. A circle is also a different special case of a Cartesian oval in which one of the weights is zero. A superellipse has an equation of the form for positive a, b, and n. A supercircle has . A circle is the special case of a supercircle in which . A Cassini oval is a set of points such that the product of the distances from any of its points to two fixed points is a constant. When the two fixed points coincide, a circle results. A curve of constant width is a figure whose width, defined as the perpendicular distance between two distinct parallel lines each intersecting its boundary in a single point, is the same regardless of the direction of those two parallel lines. The circle is the simplest example of this type of figure. In other p-norms Defining a circle as the set of points with a fixed distance from a point, different shapes can be considered circles under different definitions of distance. In p-norm, distance is determined by In Euclidean geometry, p = 2, giving the familiar In taxicab geometry, p = 1. 
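To make the p-norm notion of distance concrete, here is a small Python sketch (an illustrative addition; the function name p_norm and the sample points are invented for the example). It measures a few fixed points under the norms p = 1, p = 2 and the Chebyshev (p → ∞) limit, previewing the taxicab and Chebyshev circles described next:

def p_norm(x, y, p):
    # length of the vector (x, y) in the L_p norm; p = float("inf") gives the Chebyshev norm
    if p == float("inf"):
        return max(abs(x), abs(y))
    return (abs(x) ** p + abs(y) ** p) ** (1.0 / p)

samples = [(1, 0), (0, -1), (0.5, 0.5), (0.6, 0.8)]
for p in (1, 2, float("inf")):
    print(p, [round(p_norm(x, y, p), 3) for x, y in samples])

# Reading the output: (1, 0) and (0, -1) lie on the unit circle of every norm shown;
# (0.5, 0.5) lies only on the taxicab (p = 1) unit circle, and (0.6, 0.8) lies only on
# the Euclidean (p = 2) unit circle.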
Taxicab circles are squares with sides oriented at a 45° angle to the coordinate axes. While each side would have length using a Euclidean metric, where r is the circle's radius, its length in taxicab geometry is 2r. Thus, a circle's circumference is 8r. Thus, the value of a geometric analog to is 4 in this geometry. The formula for the unit circle in taxicab geometry is in Cartesian coordinates and in polar coordinates. A circle of radius 1 (using this distance) is the von Neumann neighborhood of its centre. A circle of radius r for the Chebyshev distance (L∞ metric) on a plane is also a square with side length 2r parallel to the coordinate axes, so planar Chebyshev distance can be viewed as equivalent by rotation and scaling to planar taxicab distance. However, this equivalence between L1 and L∞ metrics does not generalize to higher dimensions. Locus of constant sum Consider a finite set of points in the plane. The locus of points such that the sum of the squares of the distances to the given points is constant is a circle, whose centre is at the centroid of the given points. A generalization for higher powers of distances is obtained if under points the vertices of the regular polygon are taken. The locus of points such that the sum of the -th power of distances to the vertices of a given regular polygon with circumradius is constant is a circle, if , where =1,2,…, -1; whose centre is the centroid of the . In the case of the equilateral triangle, the loci of the constant sums of the second and fourth powers are circles, whereas for the square, the loci are circles for the constant sums of the second, fourth, and sixth powers. For the regular pentagon the constant sum of the eighth powers of the distances will be added and so forth. Squaring the circle Squaring the circle is the problem, proposed by ancient geometers, of constructing a square with the same area as a given circle by using only a finite number of steps with compass and straightedge. In 1882, the task was proven to be impossible, as a consequence of the Lindemann–Weierstrass theorem, which proves that pi () is a transcendental number, rather than an algebraic irrational number; that is, it is not the root of any polynomial with rational coefficients. Despite the impossibility, this topic continues to be of interest for pseudomath enthusiasts. Significance in art and symbolism From the time of the earliest known civilisations – such as the Assyrians and ancient Egyptians, those in the Indus Valley and along the Yellow River in China, and the Western civilisations of ancient Greece and Rome during classical Antiquity – the circle has been used directly or indirectly in visual art to convey the artist's message and to express certain ideas. However, differences in worldview (beliefs and culture) had a great impact on artists’ perceptions. While some emphasised the circle's perimeter to demonstrate their democratic manifestation, others focused on its centre to symbolise the concept of cosmic unity. In mystical doctrines, the circle mainly symbolises the infinite and cyclical nature of existence, but in religious traditions it represents heavenly bodies and divine spirits. The circle signifies many sacred and spiritual concepts, including unity, infinity, wholeness, the universe, divinity, balance, stability and perfection, among others. 
Such concepts have been conveyed in cultures worldwide through the use of symbols, for example, a compass, a halo, the vesica piscis and its derivatives (fish, eye, aureole, mandorla, etc.), the ouroboros, the Dharma wheel, a rainbow, mandalas, rose windows and so forth. See also Affine sphere Apeirogon Circle fitting Gauss circle problem Inversion in a circle Line–circle intersection List of circle topics Sphere Three points determine a circle Translation of axes Specially named circles Apollonian circles Archimedean circle Archimedes' twin circles Bankoff circle Carlyle circle Chromatic circle Circle of antisimilitude Ford circle Geodesic circle Johnson circles Schoch circles Woo circles Of a triangle Apollonius circle of the excircles Brocard circle Excircle Incircle Lemoine circle Lester circle Malfatti circles Mandart circle Nine-point circle Orthocentroidal circle Parry circle Polar circle (geometry) Spieker circle Van Lamoen circle Of certain quadrilaterals Eight-point circle of an orthodiagonal quadrilateral Of a conic section Director circle Directrix circle Of a torus Villarceau circles References Further reading "Circle" in The MacTutor History of Mathematics archive External links
https://en.wikipedia.org/wiki/Comic%20book
Comic book
A comic book, also called comicbook, comic magazine or (in the United Kingdom and Ireland) simply comic, is a publication that consists of comics art in the form of sequential juxtaposed panels that represent individual scenes. Panels are often accompanied by descriptive prose and written narrative, usually dialogue contained in word balloons emblematic of the comics art form. "Comic Cuts" was a British comic published from 1890 to 1953. It was preceded by "Ally Sloper's Half Holiday" (1884), which is notable for its use of sequential cartoons to unfold narrative. These British comics existed alongside the popular lurid "Penny dreadfuls" (such as "Spring-heeled Jack"), boys' "Story papers" and the humorous Punch magazine, which was the first to use the term "cartoon" in its modern sense of a humorous drawing. The interweaving of drawings and the written word had been pioneered by, among others, William Blake (1757–1827) in works such as Blake's "The Descent Of Christ" (1804–1820). The first modern (American-style) comic book, Famous Funnies, was released in the US in 1934 and was a reprinting of earlier newspaper humor comic strips, which had established many of the story-telling devices used in comics. The term comic book derives from American comic books once being a compilation of comic strips of a humorous tone; however, this practice was replaced by featuring stories of all genres, usually not humorous in tone. The largest comic book market is Japan. By 1995, the manga market in Japan was valued at (), with annual sales of 1.9 billion manga books (tankōbon volumes and manga magazines) in Japan, equivalent to 15 issues per person. In 2020 the manga market in Japan reached a new record value of ¥612.5 billion due to fast growth of digital manga sales as well as an increase in print sales. The comic book market in the United States and Canada was valued at in 2016. The largest comic book publisher in the United States is manga distributor Viz Media, followed by DC Comics and Marvel Comics, publishers of long-running franchises including Superman, Batman, Wonder Woman, Spider-Man, the Incredible Hulk and the X-Men. The best-selling comic book categories in the US are juvenile children's fiction at 41%, manga at 28% and superhero comics at 10% of the market. Another major comic book market is France, where Franco-Belgian comics and Japanese manga each represent 40% of the market, followed by American comics at 10% market share. Structure Comic books rely on their organization and appearance. Authors largely focus on the frame of the page, its size and orientation, and the positions of panels. These characteristic aspects of comic books are necessary in conveying the content and messages of the author. The key elements of comic books include panels, balloons (speech bubbles), text (lines), and characters. Balloons are usually convex spatial containers of information that are related to a character using a tail element. The tail has an origin, path, tip, and pointed direction. Key tasks in the creation of comic books are writing, drawing, and coloring, carried out within many technical conventions, including directions, axes, data, and metrics. In the United States, the term comic book is generally used for comics periodicals and trade paperbacks, while graphic novel is the term used for standalone books. 
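As a loose illustration of the structural elements just described, the following Python sketch models a page as panels containing balloons with tails; all class and field names here are hypothetical, invented for the example rather than taken from the article or from any particular comics-production tool:

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Tail:
    origin: Tuple[float, float]   # where the tail leaves the balloon
    tip: Tuple[float, float]      # where it points, usually at the speaking character

@dataclass
class Balloon:
    text: str                     # dialogue or caption carried by the balloon
    speaker: str                  # character the balloon is attributed to
    tail: Tail

@dataclass
class Panel:
    bounds: Tuple[float, float, float, float]   # x, y, width, height on the page
    balloons: List[Balloon] = field(default_factory=list)

@dataclass
class Page:
    size: Tuple[float, float]                   # page width and height
    panels: List[Panel] = field(default_factory=list)

# a one-panel page with a single speech balloon (hypothetical sample data)
page = Page(size=(210, 297), panels=[
    Panel(bounds=(10, 10, 190, 130), balloons=[
        Balloon(text="Hello!", speaker="Hero",
                tail=Tail(origin=(60, 40), tip=(80, 90)))
    ])
])
print(len(page.panels), page.panels[0].balloons[0].text)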
American comic books Comics as a print medium have existed in the United States since the printing of The Adventures of Mr. Obadiah Oldbuck in 1842 in hardcover, making it the first known American prototype comic book. Proto-comics periodicals began appearing early in the 20th century, with the first comic standard-sized comic being Funnies on Parade. Funnies on Parades was the first book that established the size, duration, and format of the modern comic book. Following this was, Dell Publishing's 36-page Famous Funnies: A Carnival of Comics as the first true newsstand American comic book; Goulart, for example, calls it "the cornerstone for one of the most lucrative branches of magazine publishing". In 1905 G.W. Dillingham Company published 24 select strips by the cartoonist Gustave Verbeek in an anthology book called 'The Incredible Upside-Downs of Little Lady Lovekins and Old Man Muffaroo'. The introduction of Jerry Siegel and Joe Shuster's Superman in 1938 turned comic books into a major industry and ushered in the Golden Age of Comic Books. The Golden Age originated the archetype of the superhero. According to historian Michael A. Amundson, appealing comic-book characters helped ease young readers' fear of nuclear war and neutralize anxiety about the questions posed by atomic power. Historians generally divide the timeline of the American comic book into eras. The Golden Age of Comic Books began in 1938, with the debut of Superman in Action Comics #1, published by Detective Comics (predecessor of DC Comics), which is generally considered the beginning of the modern comic book as it is known today. The Silver Age of Comic Books is generally considered to date from the first successful revival of the then-dormant superhero form, with the debut of the Flash in Showcase #4 (Oct. 1956). The Silver Age lasted through the late 1960s or early 1970s, during which time Marvel Comics revolutionized the medium with such naturalistic superheroes as Stan Lee and Jack Kirby's Fantastic Four and Lee and Steve Ditko's Spider-Man. The demarcation between the Silver Age and the following era, the Bronze Age of Comic Books, is less well-defined, with the Bronze Age running from the very early 1970s through the mid-1980s. The Modern Age of Comic Books runs from the mid-1980s to the present day. A notable event in the history of the American comic book came with psychiatrist Fredric Wertham's criticisms of the medium in his book Seduction of the Innocent (1954), which prompted the American Senate Subcommittee on Juvenile Delinquency to investigate comic books. Wertham claimed that comic books were responsible for an increase in juvenile delinquency, as well as potential influence on a child's sexuality and morals. In response to attention from the government and from the media, the US comic book industry set up the Comics Magazine Association of America. The CMAA instilled the Comics Code Authority in 1954 and drafted the self-censorship Comics Code that year, which required all comic books to go through a process of approval. It was not until the 1970s that comic books could be published without passing through the inspection of the CMAA. The Code was made formally defunct in November 2011. Underground comic books In the late 1960s and early 1970s, a surge of creativity emerged in what became known as underground comics. Published and distributed independently of the established comics industry, most of such comics reflected the youth counterculture and drug culture of the time. 
Underground comix "reflected and commented on the social divisions and tensions of American society". Many had an uninhibited, often irreverent style; their frank depictions of nudity, sex, profanity, and politics had no parallel outside their precursors, the pornographic and even more obscure "Tijuana bibles". Underground comics were almost never sold at newsstands, but rather in such youth-oriented outlets as head shops and record stores, as well as by mail order. The underground comics encouraged creators to publish their work independently so that they would have full ownership rights to their characters. Frank Stack's The Adventures of Jesus, published under the name Foolbert Sturgeon, has been credited as the first underground comic; while R. Crumb and the crew of cartoonists who worked on Zap Comix popularized the form. Alternative comics The rise of comic book specialty stores in the late 1970s created/paralleled a dedicated market for "independent" or "alternative comics" in the US. The first such comics included the anthology series Star Reach, published by comic book writer Mike Friedrich from 1974 to 1979, and Harvey Pekar's American Splendor, which continued sporadic publication into the 21st century and which Shari Springer Berman and Robert Pulcini adapted into a 2003 film. Some independent comics continued in the tradition of underground comics. While their content generally remained less explicit, others resembled the output of mainstream publishers in format and genre but were published by smaller artist-owned companies or by single artists. A few (notably RAW) represented experimental attempts to bring comics closer to the status of fine art. During the 1970s the "small press" culture grew and diversified. By the 1980s, several independent publishers – such as Pacific, Eclipse, First, Comico, and Fantagraphics – had started releasing a wide range of styles and formats—from color-superhero, detective, and science-fiction comic books to black-and-white magazine-format stories of Latin American magical realism. A number of small publishers in the 1990s changed the format and distribution of their comics to more closely resemble non-comics publishing. The "minicomics" form, an extremely informal version of self-publishing, arose in the 1980s and became increasingly popular among artists in the 1990s, despite reaching an even more limited audience than the small press. Small publishers regularly releasing titles include Avatar Press, Hyperwerks, Raytoons, and Terminal Press, buoyed by such advances in printing technology as digital print-on-demand. Graphic novels In 1964, Richard Kyle coined the term "graphic novel". Precursors of the form existed by the 1920s, which saw a revival of the medieval woodcut tradition by Belgian Frans Masereel, American Lynd Ward and others, including Stan Lee. In 1947 Fawcett Publications published "Comics Novel No. 1", as the first in an intended series of these "comics novels". The story in the first issue was "Anarcho, Dictator of Death", a five chapter spy genre tale written by Otto Binder and drawn by Al Carreno. It is readable online in the Digital Comic Museum (<https://digitalcomicmuseum.com/index.php?dlid=8272>). The magazine never reached a second issue. In 1950 St. 
John Publications produced the digest-sized, adult-oriented "picture novel" It Rhymes with Lust, a 128-page digest by pseudonymous writer "Drake Waller" (Arnold Drake and Leslie Waller), penciler Matt Baker and inker Ray Osrin, touted as "an original full-length novel" on its cover. "It Rhymes with Lust" is also available to read online in the Digital Comic Museum (<https://digitalcomicmuseum.com/index.php?dlid=27911>)> In 1971, writer-artist Gil Kane and collaborators applied a paperback format to their "comics novel" Blackmark. Will Eisner popularized the term "graphic novel" when he used it on the cover of the paperback edition of his work A Contract with God, and Other Tenement Stories in 1978 and, subsequently, the usage of the term began to increase. Digital comics Market size In 2017, the comic book market size for North America was just over $1 billion with digital sales being flat, book stores having a 1 percent decline, and comic book stores having a 10 percent decline over 2016. The global comic book market size increased by 12% in 2020 to reach USD 8.49 billion. In 2021, the annual valuation of the market amounted to USD 9.21 billion. The popularity of the product is soaring across the world, led by collaborative efforts being made between brands to deliver more appealing comic content. Comic book collecting The 1970s saw the advent of specialty comic book stores. Initially, comic books were marketed by publishers to children because comic books were perceived as children's entertainment. However, with increasing recognition of comics as an art form and the growing pop culture presence of comic book conventions, they are now embraced by many adults. Comic book collectors are often lifelong enthusiasts of the comic book stories, and they usually focus on particular heroes and attempt to assemble the entire run of a title. Comics are published with a sequential number. The first issue of a long-running comic book series is commonly the rarest and most desirable to collectors. The first appearance of a specific character, however, might be in a pre-existing title. For example, Spider-Man's first appearance was in Amazing Fantasy #15. New characters were often introduced this way and did not receive their own titles until there was a proven audience for the hero. As a result, comics that feature the first appearance of an important character will sometimes be even harder to find than the first issue of a character's own title. Some rare comic books include copies of the unreleased Motion Picture Funnies Weekly #1 from 1939. Eight copies, plus one without a cover, emerged in the estate of the deceased publisher in 1974. The "Pay Copy" of this book sold for $43,125 in a 2005 Heritage auction. The most valuable American comics have combined rarity and quality with the first appearances of popular and enduring characters. Four comic books have sold for over US$1 million , including two examples of Action Comics #1, the first appearance of Superman, both sold privately through online dealer ComicConnect.com in 2010, and Detective Comics #27, the first appearance of Batman, via public auction. Updating the above price obtained for Action Comics #1, the first appearance of Superman, the highest sale on record for this book is $3.2 million, for a 9.0 copy. Misprints, promotional comic-dealer incentive printings, and issues with extremely low distribution also generally have scarcity value. 
The rarest modern comic books include the original press run of The League of Extraordinary Gentlemen #5, which DC executive Paul Levitz recalled and pulped due to the appearance of a vintage Victorian era advertisement for "Marvel Douche", which the publisher considered offensive; only 100 copies exist, most of which have been CGC graded. (See Recalled comics for more pulped, recalled, and erroneous comics.) In 2000, a company named Comics Guaranty (CGC) began to "slab" comics, encasing them in thick plastic and giving them a numeric grade. Since then, other grading companies have arisen. Because condition is important to the value of rare comics, the idea of grading by a company that does not buy or sell comics seems like a good one. However, there is some controversy about whether this grading service is worth the high cost, and whether it is a positive development for collectors, or if it primarily services speculators who wish to make a quick profit trading in comics as one might trade in stocks or fine art. Comic grading has created valuation standards that online price guides such as GoCollect and GPAnalysis have used to report on real-time market values. The original artwork pages from comic books are also collected, and these are perhaps the rarest of all comic book collector's items, as there is only one unique page of artwork for each page that was printed and published. These were created by a writer, who created the story; a pencil artist, who laid out the sequential panels on the page; an ink artist, who went over the pencil with pen and black ink; a letterer, who provided the dialogue and narration of the story by hand lettering each word; and finally a colorist, who added color as the last step before the finished pages went to the printer. When the original pages of artwork are returned by the printer, they are typically given back to the artists, who sometimes sell them at comic book conventions, or in galleries and art shows related to comic book art. The original pages of DC and Marvel the first appearances of such legendary characters as Superman, Batman, Wonder Woman, Hulk and Spider-Man are considered priceless. History of race in U.S. comic books Many early iterations of black characters in comics "became variations on the 'single stereotypical image of Sambo'." Sambo was closely related to the coon stereotype but had some subtle differences. They are both a derogatory way of portraying black characters. "The name itself, an abbreviation of raccoon, is dehumanizing. As with Sambo, the coon was portrayed as a lazy, easily frightened, chronically idle, inarticulate, buffoon." This portrayal "was of course another attempt to solidify the intellectual inferiority of the black race through popular culture." However, in the 1940s there was a change in portrayal of black characters. "A cursory glance...might give the impression that situations had improved for African Americans in comics." In many comics being produced in this time there was a major push for tolerance between races. "These equality minded heroes began to spring to action just as African Americans were being asked to participate in the war effort." During this time, a government ran program, the Writers' War Board, became heavily involved in what would be published in comics. "The Writers' War Board used comic books to shape popular perceptions of race and ethnicity..." 
Not only were they using comic books as a means of recruiting all Americans, they were also using it as propaganda to "[construct] a justification for race-based hatred of America's foreign enemies." The Writers' War Board created comics books that were meant to "[promote] domestic racial harmony". However, "these pro-tolerance narratives struggled to overcome the popular and widely understood negative tropes used for decades in American mass culture...". However, they were not accomplishing this agenda within all of their comics. In Captain Marvel Adventures, a character named Steamboat was an amalgamation of some of the worst stereotypes of the time. The Writers' War Board did not ask for any change with this character. "Eliminating Steamboat required the determined efforts of a black youth group in New York City." Originally their request was refused by individuals working on the comic stating, "Captain Marvel Adventures included many kinds of caricatures 'for the sake of humor'." The black youth group responded with "this is not the Negro race, but your one-and-a-half millions readers will think it so." Afterwards, Steamboat disappeared from the comics all together. There was a comic created about the 99th Squadron, also known as the Tuskegee Airmen, an all-black air force unit. Instead of making the comic about their story, the comic was about Hop Harrigan. A white pilot who captures a Nazi, shows him videos of the 99th Squadron defeating his men and then reveals to the Nazi that his men were defeated by African Americans which infuriated him as he sees them as a less superior race and cannot believe they bested his men."The Tuskegee Airmen, and images of black aviators appear in just three of the fifty three panels... the pilots of the 99th Squadron have no dialogue and interact with neither Hop Harrigan nor his Nazi captive." During this time, they also used black characters in comic books as a means to invalidate the militant black groups that were fighting for equality within the U.S. "Spider-Man 'made it clear that militant black power was not the remedy for racial injustice'." "The Falcon openly criticized black behavior stating' maybe it's important us to cool things down-so we can protect the rights we been fightin' for'." This portrayal and character development of black characters can be partially blamed on the fact that, during this time, "there had rarely been a black artist or writer allowed in a major comics company." Asian characters faced some of the same treatment in comics as black characters did. They were dehumanized and the narrative being pushed was that they were "incompetent and subhuman." "A 1944 issue of the United States Marines included a narrative entitled The Smell of the Monkeymen. The story depicts Japanese soldiers as simian brutes whose sickening body odor betrays their concealed locations." Chinese characters received the same treatment. "By the time the United States entered WWII, negative perceptions of Chinese were an established part of mass culture...." However, concerned that the Japanese could use America's anti-Chinese material as propaganda they began "to present a more positive image of America's Chinese allies..." Just as they tried to show better representation for Black people in comics they did the same for Asian people. However, "Japanese and Filipino characters were visually indistinguishable. Both groups have grotesque buckteeth, tattered clothing, and bright yellow skin." 
"Publishers depicted America's Asian allies through derogatory images and language honed over the preceding decades." Asian characters were previously portrayed as, "ghastly yellow demons". During WWII, "[every] major superhero worth his spandex devoted himself to the eradication of Asian invaders." There was "a constant relay race in which one Asian culture merely handed off the baton of hatred to another with no perceptible changes in the manner in which the characters would be portrayed." "The only specific depiction of a Hispanic superhero did not end well. In 1975 Marvel gave us Hector Ayala (a.k.a The White Tiger)." "Although he fought for several years alongside the likes of much more popular heroes such as Spider-Man and Daredevil, he only lasted six years before sales of comics featuring him got so bad that Marvel had him retire. The most famous Hispanic character is Bane, a villain from Batman." The Native American representation in comic books "can be summed up in the noble savage stereotype" " a recurring theme...urged American indians to abandon their traditional hostility towards the United States. They were the ones painted as intolerant and disrespectful of the dominant concerns of white America". East Asian comics Japanese manga Manga (漫画) are comic books or graphic novels originating from Japan. Most manga conform to a style developed in Japan in the late 19th century, though the art form has a long prehistory in earlier Japanese art. The term manga is used in Japan to refer to both comics and cartooning. Outside Japan, the word is typically used to refer to comics originally published in the country. Dōjinshi , fan-made Japanese comics, operate in a far larger market in Japan than the American "underground comics" market; the largest dōjinshi fair, Comiket, attracts 500,000 visitors twice a year. Korean manhwa Korean manhwa has quickly gained popularity outside Korea in recent times as a result of the Korean Wave. The manhwa industry has suffered through two crashes and strict censorship since its early beginnings as a result of the Japanese occupation of the peninsula which stunts the growth of the industry but has now started to flourish thanks in part to the internet and new ways to read manhwa whether on computers or through smartphones. In the past manhwa would be marketed as manga outside the country in order to make sure they would sell well but now that is no longer needed since more people are now more knowledgeable about the industry and Korean culture. Webtoons Webtoons have become popular in South Korea as a new way to read comics. Thanks in part to different censorship rules, color and unique visual effects, and optimization for easier reading on smartphones and computers. More manhwa have made the switch from traditional print manhwa to online webtoons thanks to better pay and more freedom than traditional print manhwa. The webtoon format has also expanded to other countries outside of Korea like China, Japan, Southeast Asia, and Western countries. Major webtoon distributors include Lezhin, Naver, and Kakao. Chinese manhua Vietnamese truyện tranh European comics Franco-Belgian comics France and Belgium have a long tradition in comics and comic books, often called BDs (an abbreviation of bandes dessinées, meaning literally "drawn strips") in French, and strips in Dutch or Flemish. Belgian comic books originally written in Dutch show the influence of the Francophone "Franco-Belgian" comics but have their own distinct style. 
British comics Although Ally Sloper's Half Holiday (1884) was aimed at an adult market, publishers quickly targeted a younger demographic, which has led to most publications being for children and has created an association in the public's mind of comics as somewhat juvenile. The Guardian refers to Ally Sloper as "one of the world's first iconic cartoon characters", and "as famous in Victorian Britain as Dennis the Menace would be a century later." British comics in the early 20th century typically evolved from illustrated penny dreadfuls of the Victorian era (featuring Sweeney Todd, Dick Turpin and Varney the Vampire). First published in the 1830s, penny dreadfuls were "Britain's first taste of mass-produced popular culture for the young." The two most popular British comic books, The Beano and The Dandy, were first published by DC Thomson in the 1930s. By 1950 the weekly circulation of both reached two million. Explaining the enormous popularity of comics in the UK during this period, Anita O'Brien, director curator at London's Cartoon Museum, states: "When comics like the Beano and Dandy were invented back in the 1930s – and through really to the 1950s and 60s – these comics were almost the only entertainment available to children." Dennis the Menace was created in the 1950s, which saw sales for The Beano soar. He features in the cover of The Beano, with the BBC referring to him as the "definitive naughty boy of the comic world." In 1954, Tiger comics introduced Roy of the Rovers, the hugely popular football based strip recounting the life of Roy Race and the team he played for, Melchester Rovers. The stock media phrase "real 'Roy of the Rovers' stuff" is often used by football writers, commentators and fans when describing displays of great skill, or surprising results that go against the odds, in reference to the dramatic storylines that were the strip's trademark. Other comic books such as Eagle, Valiant, Warrior, Viz and 2000 AD also flourished. Some comics, such as Judge Dredd and other 2000 AD titles, have been published in a tabloid form. Underground comics and "small press" titles have also appeared in the UK, notably Oz and Escape Magazine. The content of Action, another title aimed at children and launched in the mid-1970s, became the subject of discussion in the House of Commons. Although on a smaller scale than similar investigations in the US, such concerns led to a moderation of content published within British comics. Such moderation never became formalized to the extent of promulgating a code, nor did it last long. The UK has also established a healthy market in the reprinting and repackaging of material, notably material originating in the US. The lack of reliable supplies of American comic books led to a variety of black-and-white reprints, including Marvel's monster comics of the 1950s, Fawcett's Captain Marvel, and other characters such as Sheena, Mandrake the Magician, and the Phantom. Several reprint companies became involved in repackaging American material for the British market, notably the importer and distributor Thorpe & Porter. Marvel Comics established a UK office in 1972. DC Comics and Dark Horse Comics also opened offices in the 1990s. The repackaging of European material has occurred less frequently, although The Adventures of Tintin and Asterix serials have been successfully translated and repackaged in softcover books. The number of European comics available in the UK has increased in the last two decades. 
The British company Cinebook, founded in 2005, has released English translated versions of many European series. In the 1980s, a resurgence of British writers and artists gained prominence in mainstream comic books, which was dubbed the "British Invasion" in comic book history. These writers and artists brought with them their own mature themes and philosophy such as anarchy, controversy and politics common in British media. These elements would pave the way for mature and "darker and edgier" comic books and jump start the Modern Age of Comics. Writers included Alan Moore, famous for his V for Vendetta, From Hell, Watchmen, Marvelman, and The League of Extraordinary Gentlemen; Neil Gaiman with The Sandman mythos and Books of Magic; Warren Ellis, creator of Transmetropolitan and Planetary; and others such as Mark Millar, creator of Wanted and Kick-Ass. The comic book series John Constantine, Hellblazer, which is largely set in Britain and starring the magician John Constantine, paved the way for British writers such as Jamie Delano. At Christmas, publishers repackage and commission material for comic annuals, printed and bound as hardcover A4-size books; "Rupert" supplies a famous example of the British comic annual. DC Thomson also repackages The Broons and Oor Wullie strips in softcover A4-size books for the holiday season. On 19 March 2012, the British postal service, the Royal Mail, released a set of stamps depicting British comic book characters and series. The collection featured The Beano, The Dandy, Eagle, The Topper, Roy of the Rovers, Bunty, Buster, Valiant, Twinkle and 2000 AD. Spanish comics It has been stated that the 13th century Cantigas de Santa María could be considered as the first Spanish "comic", although comic books (also known in Spain as historietas or tebeos) made their debut around 1857. The magazine TBO was influential in popularizing the medium. After the Spanish Civil War, the Franco regime imposed strict censorship in all media: superhero comics were forbidden and as a result, comic heroes were based on historical fiction (in 1944 the medieval hero El Guerrero del Antifaz was created by Manuel Gago and another popular medieval hero, Capitán Trueno, was created in 1956 by Víctor Mora and Miguel Ambrosio Zaragoza). Two publishing houses — Editorial Bruguera and Editorial Valenciana — dominated the Spanish comics market during its golden age (1950–1970). The most popular comics showed a recognizable style of slapstick humor (influenced by Franco-Belgian authors such as Franquin): Escobar's Carpanta and Zipi y Zape, Vázquez's Las hermanas Gilda and Anacleto, Ibáñez's Mortadelo y Filemón and 13. Rue del Percebe or Jan's Superlópez. After the end of the Francoist period, there was an increased interest in adult comics with magazines such as Totem, El Jueves, 1984, and El Víbora, and works such as Paracuellos by Carlos Giménez. Spanish artists have traditionally worked in other markets finding great success, either in the American (e.g., Eisner Award winners Sergio Aragonés, Salvador Larroca, Gabriel Hernández Walta, Marcos Martín or David Aja), the British (e.g., Carlos Ezquerra, co-creator of Judge Dredd) or the Franco-Belgian one (e.g., Fauve d'Or winner Julio Ribera or Blacksad authors Juan Díaz Canales and Juanjo Guarnido). Italian comics In Italy, comics (known in Italian as fumetti) made their debut as humor strips at the end of the 19th century, and later evolved into adventure stories. 
After World War II, however, artists like Hugo Pratt and Guido Crepax exposed Italian comics to an international audience. Popular comic books such as Diabolik or the Bonelli line—namely Tex Willer or Dylan Dog—remain best-sellers. Mainstream comics are usually published on a monthly basis, in a black-and-white digest size format, with approximately 100 to 132 pages. Collections of classic material for the most famous characters, usually with more than 200 pages, are also common. Author comics are published in the French BD format, with an example being Pratt's Corto Maltese. Italian cartoonists show the influence of comics from other countries, including France, Belgium, Spain, and Argentina. Italy is also famous for being one of the foremost producers of Walt Disney comic stories outside the US; Donald Duck's superhero alter ego, Paperinik, known in English as Superduck, was created in Italy. Comics in other countries Distribution Distribution has historically been a problem for the comic book industry with many mainstream retailers declining to carry extensive stocks of the most interesting and popular comics. The smartphone and the tablet have turned out to be an ideal medium for online distribution. Digital distribution On 13 November 2007, Marvel Comics launched Marvel Digital Comics Unlimited, a subscription service allowing readers to read many comics from Marvel's history online. The service also includes periodic release new comics not available elsewhere. With the release of Avenging Spider-Man #1, Marvel also became the first publisher to provide free digital copies as part of the print copy of the comic book. With the growing popularity of smartphones and tablets, many major publishers have begun releasing titles in digital form. The most popular platform is comiXology. Some platforms, such as Graphicly, have shut down. Comic collections in libraries Many libraries have extensive collections of comics in the form of graphic novels. This is a convenient way for many in the public to become familiar with the medium. Guinness World Records In 2015, the Japanese manga artist Eiichiro Oda was awarded the Guinness World Records title for having the "Most copies published for the same comic book series by a single author". His manga series One Piece, which he writes and illustrates, has been serialized in the Japanese magazine Weekly Shōnen Jump since December 1997, and by 2015, 77 collected volumes had been released. Guinness World Records reported in their announcement that the collected volumes of the series had sold a total of 320,866,000 units. One Piece also holds the Guinness World Records title for "Most copies published for the same manga series". On 5 August 2018, the Guinness World Records title for the "Largest comic book ever published" was awarded to the Brazilian comic book Turma da Mônica — O Maior Gibi do Mundo!, published by Panini Comics Brasil and Mauricio de Sousa Produções. The comic book measures 69.9 cm by 99.8 cm (2 ft 3.51 in by 3 ft 3.29 in). The 18-page comic book had a print run of 120 copies. With the July 2021 publication of the 201st collected volume of his manga series Golgo 13, Japanese manga artist Takao Saito was awarded the Guinness World Records title for "Most volumes published for a single manga series." Golgo 13 has been continuously serialized in the Japanese magazine Big Comic since October 1968, which also makes it the oldest manga still in publication. 
See also Cartoon Comic book archive Comic book therapy Comics studies Comics vocabulary Comparison of image viewers List of best-selling comic series List of best-selling manga Webcomic References Further reading External links Comic book Speculation Reference Comic book Reference Bibliographic Datafile Sequart Research & Literacy Organization Comic Art Collection at the University of Missouri Collectorism – a place for collectors and collectibles
https://en.wikipedia.org/wiki/Connected%20space
Connected space
In topology and related branches of mathematics, a connected space is a topological space that cannot be represented as the union of two or more disjoint non-empty open subsets. Connectedness is one of the principal topological properties that are used to distinguish topological spaces. A subset of a topological space is a if it is a connected space when viewed as a subspace of . Some related but stronger conditions are path connected, simply connected, and -connected. Another related notion is locally connected, which neither implies nor follows from connectedness. Formal definition A topological space is said to be if it is the union of two disjoint non-empty open sets. Otherwise, is said to be connected. A subset of a topological space is said to be connected if it is connected under its subspace topology. Some authors exclude the empty set (with its unique topology) as a connected space, but this article does not follow that practice. For a topological space the following conditions are equivalent: is connected, that is, it cannot be divided into two disjoint non-empty open sets. The only subsets of which are both open and closed (clopen sets) are and the empty set. The only subsets of with empty boundary are and the empty set. cannot be written as the union of two non-empty separated sets (sets for which each is disjoint from the other's closure). All continuous functions from to are constant, where is the two-point space endowed with the discrete topology. Historically this modern formulation of the notion of connectedness (in terms of no partition of into two separated sets) first appeared (independently) with N.J. Lennes, Frigyes Riesz, and Felix Hausdorff at the beginning of the 20th century. See for details. Connected components Given some point in a topological space the union of any collection of connected subsets such that each contains will once again be a connected subset. The connected component of a point in is the union of all connected subsets of that contain it is the unique largest (with respect to ) connected subset of that contains The maximal connected subsets (ordered by inclusion ) of a non-empty topological space are called the connected components of the space. The components of any topological space form a partition of : they are disjoint, non-empty and their union is the whole space. Every component is a closed subset of the original space. It follows that, in the case where their number is finite, each component is also an open subset. However, if their number is infinite, this might not be the case; for instance, the connected components of the set of the rational numbers are the one-point sets (singletons), which are not open. Proof: Any two distinct rational numbers are in different components. Take an irrational number and then set and Then is a separation of and . Thus each component is a one-point set. Let be the connected component of in a topological space and be the intersection of all clopen sets containing (called quasi-component of ) Then where the equality holds if is compact Hausdorff or locally connected. Disconnected spaces A space in which all components are one-point sets is called . Related to this property, a space is called if, for any two distinct elements and of , there exist disjoint open sets containing and containing such that is the union of and . Clearly, any totally separated space is totally disconnected, but the converse does not hold. 
Examples The closed interval [0, 2] in the standard subspace topology is connected; although it can, for example, be written as the union of [0, 1) and [1, 2], the second set is not open in the chosen topology of [0, 2]. The union of [0, 1) and (1, 2] is disconnected; both of these intervals are open in the standard topological space [0, 1) ∪ (1, 2]. The subspace (0, 1) ∪ {3} is disconnected. A convex subset of ℝ^n is connected; it is actually simply connected. A Euclidean plane excluding the origin, (x, y) ≠ (0, 0), is connected, but is not simply connected. The three-dimensional Euclidean space without the origin is connected, and even simply connected. In contrast, the one-dimensional Euclidean space without the origin is not connected. A Euclidean plane with a straight line removed is not connected since it consists of two half-planes. ℝ, the space of real numbers with the usual topology, is connected. The Sorgenfrey line is disconnected. If even a single point is removed from ℝ, the remainder is disconnected. However, if even a countable infinity of points are removed from ℝ^n, where n ≥ 2, the remainder is connected. If n ≥ 3, then ℝ^n remains simply connected after removal of countably many points. Any topological vector space, e.g. any Hilbert space or Banach space, over a connected field (such as ℝ or ℂ), is simply connected. Every discrete topological space with at least two elements is disconnected, in fact such a space is totally disconnected. The simplest example is the discrete two-point space. On the other hand, a finite set might be connected. For example, the spectrum of a discrete valuation ring consists of two points and is connected. It is an example of a Sierpiński space. The Cantor set is totally disconnected; since the set contains uncountably many points, it has uncountably many components. If a space X is homotopy equivalent to a connected space, then X is itself connected. The topologist's sine curve is an example of a set that is connected but is neither path connected nor locally connected. The general linear group GL(n, ℝ) (that is, the group of n-by-n real, invertible matrices) consists of two connected components: the one with matrices of positive determinant and the other of negative determinant. In particular, it is not connected. In contrast, GL(n, ℂ) is connected. More generally, the set of invertible bounded operators on a complex Hilbert space is connected. The spectra of commutative local rings and integral domains are connected. More generally, the following are equivalent: The spectrum of a commutative ring R is connected. Every finitely generated projective module over R has constant rank. R has no idempotent other than 0 and 1 (i.e., R is not a product of two rings in a nontrivial way). An example of a space that is not connected is a plane with an infinite line deleted from it. Other examples of disconnected spaces (that is, spaces which are not connected) include the plane with an annulus removed, as well as the union of two disjoint closed disks, where all examples of this paragraph bear the subspace topology induced by two-dimensional Euclidean space.
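As a worked complement to the first example above, the usual least-upper-bound argument shows why an interval such as [0, 2] admits no separation at all. This LaTeX sketch is an addition for illustration, not text from the article, and assumes only the standard order properties of ℝ.

\[ \text{Suppose } [0,2] = U \cup V \text{ with } U, V \text{ open in } [0,2], \text{ disjoint, non-empty, and } 0 \in U. \]
\[ \text{Let } s = \sup \{ t \in [0,2] : [0,t] \subseteq U \}. \text{ Openness of } U \text{ at } 0 \text{ gives } s > 0. \]
\[ s \in V \text{ is impossible: an open interval around } s \text{ inside } V \text{ would meet } U. \]
\[ s \in U \text{ with } s < 2 \text{ is impossible: openness of } U \text{ would push } s \text{ higher. Hence } s = 2 \text{ and } V = \varnothing. \]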
Path connectedness A path-connected space is a stronger notion of connectedness, requiring the structure of a path. A path from a point x to a point y in a topological space X is a continuous function f from the unit interval [0, 1] to X with f(0) = x and f(1) = y. A path-component of X is an equivalence class of X under the equivalence relation which makes x equivalent to y if there is a path from x to y. The space X is said to be path-connected (or pathwise connected or 0-connected) if there is exactly one path-component, i.e. if there is a path joining any two points in X. Again, many authors exclude the empty space (by this definition, however, the empty space is not path-connected because it has zero path-components; there is a unique equivalence relation on the empty set which has zero equivalence classes). Every path-connected space is connected. The converse is not always true: examples of connected spaces that are not path-connected include the extended long line and the topologist's sine curve. Subsets of the real line ℝ are connected if and only if they are path-connected; these subsets are the intervals of ℝ. Also, open subsets of ℝ^n or ℂ^n are connected if and only if they are path-connected. Additionally, connectedness and path-connectedness are the same for finite topological spaces. Arc connectedness A space X is said to be arc-connected or arcwise connected if any two topologically distinguishable points can be joined by an arc, which is an embedding f : [0, 1] → X. An arc-component of X is a maximal arc-connected subset of X; or equivalently an equivalence class of the equivalence relation of whether two points can be joined by an arc or by a path whose points are topologically indistinguishable. Every Hausdorff space that is path-connected is also arc-connected; more generally this is true for a Δ-Hausdorff space, which is a space where each image of a path is closed. An example of a space which is path-connected but not arc-connected is given by the line with two origins; its two copies of 0 can be connected by a path but not by an arc. Intuition for path-connected spaces does not readily transfer to arc-connected spaces. Let X be the line with two origins. The following are facts whose analogues hold for path-connected spaces, but do not hold for arc-connected spaces: The continuous image of an arc-connected space may not be arc-connected: for example, a quotient map from an arc-connected space to its quotient with countably many (at least 2) topologically distinguishable points cannot be arc-connected due to too small cardinality. Arc-components may not be disjoint. For example, X has two overlapping arc-components. An arc-connected product space may not be a product of arc-connected spaces. For example, X × ℝ is arc-connected, but X is not. Arc-components of a product space may not be products of arc-components of the marginal spaces. For example, X × ℝ has a single arc-component, but X has two arc-components. If arc-connected subsets have a non-empty intersection, then their union may not be arc-connected. For example, the arc-components of X intersect, but their union is not arc-connected.
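Since the distinction between paths and arcs drives all of the counterexamples above, it may help to restate the two definitions side by side. This comparison is an added note in LaTeX, using the same symbols f and X as above, and is not part of the original article.

\[ \text{path from } x \text{ to } y: \quad f : [0,1] \to X \text{ continuous}, \; f(0) = x, \; f(1) = y; \]
\[ \text{arc from } x \text{ to } y: \quad f : [0,1] \to X \text{ an embedding (a homeomorphism onto its image)}. \]
\[ \text{Every arc is a path; in a Hausdorff space every path between distinct points contains an arc, which is why path-connected Hausdorff spaces are arc-connected.} \]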
Local connectedness A topological space X is said to be locally connected at a point x if every neighbourhood of x contains a connected open neighbourhood. It is locally connected if it has a base of connected sets. It can be shown that a space X is locally connected if and only if every component of every open set of X is open. Similarly, a topological space is said to be locally path-connected if it has a base of path-connected sets. An open subset of a locally path-connected space is connected if and only if it is path-connected. This generalizes the earlier statement about ℝ^n and ℂ^n, each of which is locally path-connected. More generally, any topological manifold is locally path-connected. Locally connected does not imply connected, nor does locally path-connected imply path connected. A simple example of a locally connected (and locally path-connected) space that is not connected (or path-connected) is the union of two separated intervals in ℝ, such as (0, 1) ∪ (2, 3). A classical example of a connected space that is not locally connected is the so-called topologist's sine curve, defined as T = {(0, 0)} ∪ {(x, sin(1/x)) : x ∈ (0, 1]}, with the Euclidean topology induced by inclusion in ℝ^2. Set operations The intersection of connected sets is not necessarily connected. The union of connected sets is not necessarily connected, as can be seen by considering X = (0, 1) ∪ (1, 2). Each of the two intervals is a connected set, but the union X is not connected, since it can be partitioned into two disjoint open sets U = (0, 1) and V = (1, 2). This means that, if the union X of a collection {Xi} of connected sets is disconnected, then the collection can be partitioned into two sub-collections, such that the unions of the sub-collections are disjoint and open in X. This implies that in several cases, a union of connected sets is necessarily connected. In particular: If the common intersection of all sets is not empty (⋂ Xi ≠ ∅), then obviously they cannot be partitioned into collections with disjoint unions. Hence the union of connected sets with non-empty intersection is connected. If the intersection of each pair of sets is not empty (Xi ∩ Xj ≠ ∅ for all i, j), then again they cannot be partitioned into collections with disjoint unions, so their union must be connected. If the sets can be ordered as a "linked chain", i.e. indexed by integer indices with Xi ∩ Xi+1 ≠ ∅, then again their union must be connected. If the sets are pairwise-disjoint and the quotient space X / {Xi} is connected, then X must be connected. Otherwise, if U ∪ V is a separation of X, then q(U) ∪ q(V) is a separation of the quotient space (since q(U) and q(V), the images under the quotient map q, are disjoint and open in the quotient space). The set difference of connected sets is not necessarily connected. However, if X ⊇ Y and their difference X \ Y is disconnected (and thus can be written as a union of two open sets X1 and X2), then the union of Y with each such component is connected (i.e. Y ∪ Xi is connected for all i). Theorems Main theorem of connectedness: Let X and Y be topological spaces and let f : X → Y be a continuous function. If X is (path-)connected then the image f(X) is (path-)connected. This result can be considered a generalization of the intermediate value theorem. Every path-connected space is connected. Every locally path-connected space is locally connected. A locally path-connected space is path-connected if and only if it is connected. The closure of a connected subset is connected. Furthermore, any subset between a connected subset and its closure is connected. The connected components are always closed (but in general not open). The connected components of a locally connected space are also open. The connected components of a space are disjoint unions of the path-connected components (which in general are neither open nor closed). Every quotient of a connected (resp. locally connected, path-connected, locally path-connected) space is connected (resp. locally connected, path-connected, locally path-connected). Every product of a family of connected (resp. path-connected) spaces is connected (resp. path-connected). Every open subset of a locally connected (resp. locally path-connected) space is locally connected (resp. locally path-connected). Every manifold is locally path-connected. Every arc-wise connected space is path-connected, but a path-connected space may not be arc-wise connected. The continuous image of an arc-wise connected set is arc-wise connected.
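The main theorem stated at the head of the list above follows in one step from the clopen characterization of connectedness. The short LaTeX sketch below is added for convenience and uses only the symbols f, X, Y from the statement; it is not part of the original article.

\[ \text{Let } f : X \to Y \text{ be continuous with } X \text{ connected, and suppose } f(X) = U \cup V \text{ were a separation.} \]
\[ \text{Then } f^{-1}(U) \text{ and } f^{-1}(V) \text{ are open (by continuity), disjoint, non-empty, and cover } X, \]
\[ \text{giving a separation of } X \text{ and contradicting connectedness; hence } f(X) \text{ is connected.} \]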
Graphs Graphs have path-connected subsets, namely those subsets for which every pair of points has a path of edges joining them. But it is not always possible to find a topology on the set of points which induces the same connected sets. The 5-cycle graph (and any n-cycle with n > 3 odd) is one such example. As a consequence, a notion of connectedness can be formulated independently of the topology on a space. To wit, there is a category of connective spaces consisting of sets with collections of connected subsets satisfying connectivity axioms; their morphisms are those functions which map connected sets to connected sets. Topological spaces and graphs are special cases of connective spaces; indeed, the finite connective spaces are precisely the finite graphs. However, every graph can be canonically made into a topological space, by treating vertices as points and edges as copies of the unit interval (see topological graph theory#Graphs as topological spaces). Then one can show that the graph is connected (in the graph theoretical sense) if and only if it is connected as a topological space. Stronger forms of connectedness There are stronger forms of connectedness for topological spaces, for instance: If there exist no two disjoint non-empty open sets in a topological space X, then X must be connected, and thus hyperconnected spaces are also connected. Since a simply connected space is, by definition, also required to be path connected, any simply connected space is also connected. If the "path connectedness" requirement is dropped from the definition of simple connectivity, a simply connected space does not need to be connected. Yet stronger versions of connectivity include the notion of a contractible space. Every contractible space is path connected and thus also connected. In general, any path connected space must be connected but there exist connected spaces that are not path connected. The deleted comb space furnishes such an example, as does the above-mentioned topologist's sine curve. See also References Further reading General topology Properties of topological spaces
2,822
6,237
https://en.wikipedia.org/wiki/Christmas
Christmas
Christmas is an annual festival commemorating the birth of Jesus Christ, observed primarily on December 25 as a religious and cultural celebration among billions of people around the world. A feast central to the Christian liturgical year, it is preceded by the season of Advent or the Nativity Fast and initiates the season of Christmastide, which historically in the West lasts twelve days and culminates on Twelfth Night. Christmas Day is a public holiday in many countries, is celebrated religiously by a majority of Christians, as well as culturally by many non-Christians, and forms an integral part of the holiday season organized around it. The traditional Christmas narrative recounted in the New Testament, known as the Nativity of Jesus, says that Jesus was born in Bethlehem, in accordance with messianic prophecies. When Joseph and Mary arrived in the city, the inn had no room and so they were offered a stable where the Christ Child was soon born, with angels proclaiming this news to shepherds who then spread the word. There are different hypotheses regarding the date of Jesus' birth and in the early fourth century, the church fixed the date as December 25. This corresponds to the traditional date of the winter solstice on the Roman calendar. It is exactly nine months after Annunciation on March 25, also the date of the spring equinox. Most Christians celebrate on December 25 in the Gregorian calendar, which has been adopted almost universally in the civil calendars used in countries throughout the world. However, part of the Eastern Christian Churches celebrate Christmas on December 25 of the older Julian calendar, which currently corresponds to January 7 in the Gregorian calendar. For Christians, believing that God came into the world in the form of man to atone for the sins of humanity, rather than knowing Jesus' exact birth date, is considered to be the primary purpose in celebrating Christmas. The celebratory customs associated in various countries with Christmas have a mix of pre-Christian, Christian, and secular themes and origins. Popular modern customs of the holiday include gift giving; completing an Advent calendar or Advent wreath; Christmas music and caroling; viewing a Nativity play; an exchange of Christmas cards; church services; a special meal; and the display of various Christmas decorations, including Christmas trees, Christmas lights, nativity scenes, garlands, wreaths, mistletoe, and holly. In addition, several closely related and often interchangeable figures, known as Santa Claus, Father Christmas, Saint Nicholas, and Christkind, are associated with bringing gifts to children during the Christmas season and have their own body of traditions and lore. Because gift-giving and many other aspects of the Christmas festival involve heightened economic activity, the holiday has become a significant event and a key sales period for retailers and businesses. Over the past few centuries, Christmas has had a steadily growing economic effect in many regions of the world. Etymology The English word "Christmas" is a shortened form of "Christ's Mass". The word is recorded as Crīstesmæsse in 1038 and Cristes-messe in 1131. Crīst (genitive Crīstes) is from Greek Khrīstos (Χριστός), a translation of Hebrew Māšîaḥ (מָשִׁיחַ), "Messiah", meaning "anointed"; and mæsse is from Latin missa, the celebration of the Eucharist. The form Christenmas was also used during some periods, but is now considered archaic and dialectal. 
The term derives from Middle English Cristenmasse, meaning "Christian mass". Xmas is an abbreviation of Christmas found particularly in print, based on the initial letter chi (Χ) in Greek Khrīstos (Χριστός) ("Christ"), although some style guides discourage its use. This abbreviation has precedent in Middle English Χρ̄es masse (where "Χρ̄" is an abbreviation for Χριστός). Other names In addition to "Christmas", the holiday has had various other English names throughout its history. The Anglo-Saxons referred to the feast as "midwinter", or, more rarely, as Nātiuiteð (from Latin nātīvitās below). "Nativity", meaning "birth", is from Latin nātīvitās. In Old English, Gēola (Yule) referred to the period corresponding to December and January, which was eventually equated with Christian Christmas. "Noel" (also "Nowel" or "Nowell", as in "The First Nowell") entered English in the late 14th century and is from the Old French noël or naël, itself ultimately from the Latin nātālis (diēs) meaning "birth (day)". Koleda is the traditional Slavic name for Christmas and the period from Christmas to Epiphany or, more generally, to Slavic Christmas-related rituals, some dating to pre-Christian times. It is actually used in Bulgarian. Nativity The gospels of Luke and Matthew describe Jesus as being born in Bethlehem to the Virgin Mary. In the gospel of Luke, Joseph and Mary traveled from Nazareth to Bethlehem for the census, and Jesus was born there and placed in a manger. Angels proclaimed him a savior for all people, and shepherds came to adore him. The gospel of Matthew adds that the magi followed a star to Bethlehem to bring gifts to Jesus, born the king of the Jews. King Herod ordered the massacre of all the boys less than two years old in Bethlehem, but the family fled to Egypt and later returned to Nazareth. History The nativity sequences included in the Gospels of Matthew and Luke prompted early Christian writers to suggest various dates for the anniversary. At the time of the 2nd century, the "earliest church records" indicate that "Christians were remembering and celebrating the birth of the Lord", an "observance [that] sprang up organically from the authentic devotion of ordinary believers." Though Christmas did not appear on the lists of festivals given by the early Christian writers Irenaeus and Tertullian, the Chronograph of 354 records that a Christmas celebration took place in Rome eight days before the calends of January. This section was written in AD 336, during the brief pontificate of Pope Mark. In the East, the birth of Jesus was celebrated in connection with the Epiphany on January 6. This holiday was not primarily about the nativity, but rather the baptism of Jesus. Christmas was promoted in the East as part of the revival of Orthodox Christianity that followed the death of the pro-Arian Emperor Valens at the Battle of Adrianople in 378. The feast was introduced in Constantinople in 379, in Antioch by John Chrysostom towards the end of the fourth century, probably in 388, and in Alexandria in the following century. The first recorded Christmas celebration was in Rome on December 25, AD 336. In the 3rd century, the date of the nativity was the subject of great interest. 
Around AD 200, Clement of Alexandria wrote: Various factors contributed to the selection of December 25 as a date of celebration: it was nine months after the date linked to the conception of Jesus—March 25, which also marked the vernal equinox (celebrated as the Feast of the Annunciation) and it was the date of the winter solstice on the Roman calendar. Adam C. English, Professor of Religion at Campbell University, writes: The early Church Fathers John Chrysostom, Augustine of Hippo, and Jerome attested to 25 December as the date of Christmas. The primitive Church connected Jesus to the Sun through the use of such phrases as "Sun of righteousness." The early Christian writer Lactantius wrote "the east is attached to God because he is the source of light and the illuminator of the world and he makes us rise toward eternal life." It is for this reason that the early Christians established the direction of prayer as being eastward, towards the rising sun. In the Roman Empire, in which many Christians resided, the winter solstice was marked on December 25. In 567, the Council of Tours put in place the season of Christmastide, proclaiming "the twelve days from Christmas to Epiphany as a sacred and festive season, and established the duty of Advent fasting in preparation for the feast." This was done in order to solve the "administrative problem for the Roman Empire as it tried to coordinate the solar Julian calendar with the lunar calendars of its provinces in the east." Christmas played a role in the Arian controversy of the fourth century. After this controversy ran its course, the prominence of the holiday declined for a few centuries. The feast regained prominence after 800 when Charlemagne was crowned emperor on Christmas Day. In Puritan England, Christmas was banned, with Puritans considering it a Catholic invention and also associating the day with drunkenness and other misbehaviour. It was restored as a legal holiday in England in 1660 when Puritan legislation was declared null and void, but it remained disreputable in the minds of some. In the early 19th century, Christmas festivities and services became widespread with the rise of the Oxford Movement in the Church of England that emphasized the centrality of Christmas in Christianity and charity to the poor, along with Washington Irving, Charles Dickens, and other authors emphasizing family, children, kind-heartedness, gift-giving, and Santa Claus (for Irving), or Father Christmas (for Dickens). Various theories have been offered with respect to the establishment of the dates on which the Christian Churches came to celebrate Christmas: Calculation hypothesis The calculation hypothesis suggests that an earlier holiday, the Annunciation (which celebrated the conception of Jesus), held on March 25 became associated with the Incarnation. Christmas was then calculated as nine months later. The calculation hypothesis was proposed by French writer Louis Duchesne in 1889. The Bible in records the annunciation to Mary to be at the time when Elizabeth, mother of John the Baptist, was in her sixth month of pregnancy (cf. Nativity of Saint John the Baptist). Thus, the ecclesiastical holiday to commemorate the Annunciation of the Lord was created in the seventh century and was assigned to be celebrated on March 25; this date is nine months before Christmas, in addition to being the traditional date of the equinox. It is unrelated to the Quartodeciman, which had been forgotten by this time. 
Early Christians celebrated the life of Jesus on a date considered equivalent to 14 Nisan (Passover) on the local calendar. Because Passover was held on the 14th of the month, this feast is referred to as the Quartodeciman. All the major events of Christ's life, especially the passion, were celebrated on this date. In his letter to the Corinthians, Paul mentions Passover, presumably celebrated according to the local calendar in Corinth. Tertullian (d. 220), who lived in Latin-speaking North Africa, gives the date of passion celebration as March 25. The date of the passion was moved to Good Friday in 165. According to the calculation hypothesis, the celebration of the Quartodeciman continued in some areas and the feast became associated with Incarnation. The calculation hypothesis is considered academically to be "a thoroughly viable hypothesis", though not certain. It was a traditional Jewish belief that great men were born and died on the same day, so lived a whole number of years, without fractions: Jesus was therefore considered to have been conceived on March 25, as he died on March 25, which was calculated to have coincided with 14 Nisan. A passage in Commentary on the Prophet Daniel (204) by Hippolytus of Rome identifies December 25 as the date of the nativity. This passage is generally considered a late interpolation. But the manuscript includes another passage, one that is more likely to be authentic, that gives the passion as March 25. In 221, Sextus Julius Africanus (c. 160 – c. 240) gave March 25 as the day of creation and of the conception of Jesus in his universal history. This conclusion was based on solar symbolism, with March 25 the date of the equinox. As this implies a birth in December, it is sometimes claimed to be the earliest identification of December 25 as the nativity. However, Africanus was not such an influential writer that it is likely he determined the date of Christmas. The treatise De solstitia et aequinoctia conceptionis et nativitatis Domini nostri Iesu Christi et Iohannis Baptistae, pseudepigraphically attributed to John Chrysostom and dating to the early fourth century, also argued that Jesus was conceived and crucified on the same day of the year and calculated this as March 25. This anonymous tract also states: "But Our Lord, too, is born in the month of December ... the eight before the calends of January [25 December] ..., But they call it the 'Birthday of the Unconquered'. Who indeed is so unconquered as Our Lord...? Or, if they say that it is the birthday of the Sun, He is the Sun of Justice." Solstice date hypothesis December 25 was considered the date of the winter solstice in the Roman calendar, though actually it occurred on the 23rd or 24th at that time. A late fourth-century sermon by Saint Augustine explains why this was a fitting day to celebrate Christ's nativity: "Hence it is that He was born on the day which is the shortest in our earthly reckoning and from which subsequent days begin to increase in length. He, therefore, who bent low and lifted us up chose the shortest day, yet the one whence light begins to increase." Linking Jesus to the Sun was supported by various Biblical passages. Jesus was considered to be the "Sun of righteousness" prophesied by Malachi: "Unto you shall the sun of righteousness arise, and healing is in his wings." Such solar symbolism could support more than one date of birth. 
An anonymous work known as De Pascha Computus (243) linked the idea that creation began at the spring equinox, on March 25, with the conception or birth (the word nascor can mean either) of Jesus on March 28, the day of the creation of the sun in the Genesis account. One translation reads: "O the splendid and divine providence of the Lord, that on that day, the very day, on which the sun was made, March 28, a Wednesday, Christ should be born". In the 17th century, Isaac Newton, who, coincidentally, was born on December 25, argued that the date of Christmas may have been selected to correspond with the solstice. Conversely, according to Steven Hijmans of the University of Alberta, "It is cosmic symbolism ... which inspired the Church leadership in Rome to elect the southern solstice, December 25, as the birthday of Christ, and the northern solstice as that of John the Baptist, supplemented by the equinoxes as their respective dates of conception." History of religions hypothesis The rival "History of Religions" hypothesis suggests that the Church selected December 25 date to appropriate festivities held by the Romans in honor of the Sun god Sol Invictus. This cult was established by Aurelian in 274. An explicit expression of this theory appears in an annotation of uncertain date added to a manuscript of a work by 12th-century Syrian bishop Jacob Bar-Salibi. The scribe who added it wrote: In 1743, German Protestant Paul Ernst Jablonski argued Christmas was placed on December 25 to correspond with the Roman solar holiday Dies Natalis Solis Invicti and was therefore a "paganization" that debased the true church. However, it has been also argued that, on the contrary, the Emperor Aurelian, who in 274 instituted the holiday of the Dies Natalis Solis Invicti, did so partly as an attempt to give a pagan significance to a date already important for Christians in Rome. Hermann Usener and others proposed that the Christians chose this day because it was the Roman feast celebrating the birthday of Sol Invictus. Modern scholar S. E. Hijmans, however, states that "While they were aware that pagans called this day the 'birthday' of Sol Invictus, this did not concern them and it did not play any role in their choice of date for Christmas." Moreover, Thomas J. Talley holds that the Roman Emperor Aurelian placed a festival of Sol Invictus on December 25 in order to compete with the growing rate of the Christian Church, which had already been celebrating Christmas on that date first. In the judgement of the Church of England Liturgical Commission, the History of Religions hypothesis has been challenged by a view based on an old tradition, according to which the date of Christmas was fixed at nine months after March 25, the date of the vernal equinox, on which the Annunciation was celebrated. Adam C. English, Professor of Religion at Campbell University, writes: With regard to a December religious feast of the deified Sun (Sol), as distinct from a solstice feast of the birth (or rebirth) of the astronomical sun, Hijmans has commented that "while the winter solstice on or around December 25 was well established in the Roman imperial calendar, there is no evidence that a religious celebration of Sol on that day antedated the celebration of Christmas". "Thomas Talley has shown that, although the Emperor Aurelian's dedication of a temple to the sun god in the Campus Martius (C.E. 
274) probably took place on the 'Birthday of the Invincible Sun' on December 25, the cult of the sun in pagan Rome ironically did not celebrate the winter solstice nor any of the other quarter-tense days, as one might expect." The Oxford Companion to Christian Thought remarks on the uncertainty about the order of precedence between the religious celebrations of the Birthday of the Unconquered Sun and of the birthday of Jesus, stating that the hypothesis that December 25 was chosen for celebrating the birth of Jesus on the basis of the belief that his conception occurred on March 25 "potentially establishes 25 December as a Christian festival before Aurelian's decree, which, when promulgated, might have provided for the Christian feast both opportunity and challenge". Relation to concurrent celebrations Many popular customs associated with Christmas developed independently of the commemoration of Jesus' birth, with some claiming that certain elements are Christianized and have origins in pre-Christian festivals that were celebrated by pagan populations who were later converted to Christianity; other scholars reject these claims and affirm that Christmas customs largely developed in a Christian context. The prevailing atmosphere of Christmas has also continually evolved since the holiday's inception, ranging from a sometimes raucous, drunken, carnival-like state in the Middle Ages, to a tamer family-oriented and children-centered theme introduced in a 19th-century transformation. The celebration of Christmas was banned on more than one occasion within certain groups, such as the Puritans and Jehovah's Witnesses (who do not celebrate birthdays in general), due to concerns that it was too unbiblical. Prior to and through the early Christian centuries, winter festivals were the most popular of the year in many European pagan cultures. Reasons included the fact that less agricultural work needed to be done during the winter, as well as an expectation of better weather as spring approached. Celtic winter herbs such as mistletoe and ivy, and the custom of kissing under a mistletoe, are common in modern Christmas celebrations in the English-speaking countries. The pre-Christian Germanic peoples—including the Anglo-Saxons and the Norse—celebrated a winter festival called Yule, held in the late December to early January period, yielding modern English yule, today used as a synonym for Christmas. In Germanic language-speaking areas, numerous elements of modern Christmas folk custom and iconography may have originated from Yule, including the Yule log, Yule boar, and the Yule goat. Often leading a ghostly procession through the sky (the Wild Hunt), the long-bearded god Odin is referred to as "the Yule one" and "Yule father" in Old Norse texts, while other gods are referred to as "Yule beings". On the other hand, as there are no reliable existing references to a Christmas log prior to the 16th century, the burning of the Christmas block may have been an early modern invention by Christians unrelated to the pagan practice. In eastern Europe also, pre-Christian traditions were incorporated into Christmas celebrations there, an example being the Koleda, which shares parallels with the Christmas carol. Post-classical history In the Early Middle Ages, Christmas Day was overshadowed by Epiphany, which in western Christianity focused on the visit of the magi. But the medieval calendar was dominated by Christmas-related holidays. The forty days before Christmas became the "forty days of St. 
Martin" (which began on November 11, the feast of St. Martin of Tours), now known as Advent. In Italy, former Saturnalian traditions were attached to Advent. Around the 12th century, these traditions transferred again to the Twelve Days of Christmas (December 25 – January 5); a time that appears in the liturgical calendars as Christmastide or Twelve Holy Days. The prominence of Christmas Day increased gradually after Charlemagne was crowned Emperor on Christmas Day in 800. King Edmund the Martyr was anointed on Christmas in 855 and King William I of England was crowned on Christmas Day 1066. By the High Middle Ages, the holiday had become so prominent that chroniclers routinely noted where various magnates celebrated Christmas. King Richard II of England hosted a Christmas feast in 1377 at which 28 oxen and 300 sheep were eaten. The Yule boar was a common feature of medieval Christmas feasts. Caroling also became popular, and was originally performed by a group of dancers who sang. The group was composed of a lead singer and a ring of dancers that provided the chorus. Various writers of the time condemned caroling as lewd, indicating that the unruly traditions of Saturnalia and Yule may have continued in this form. "Misrule"—drunkenness, promiscuity, gambling—was also an important aspect of the festival. In England, gifts were exchanged on New Year's Day, and there was special Christmas ale. Christmas during the Middle Ages was a public festival that incorporated ivy, holly, and other evergreens. Christmas gift-giving during the Middle Ages was usually between people with legal relationships, such as tenant and landlord. The annual indulgence in eating, dancing, singing, sporting, and card playing escalated in England, and by the 17th century the Christmas season featured lavish dinners, elaborate masques, and pageants. In 1607, King James I insisted that a play be acted on Christmas night and that the court indulge in games. It was during the Reformation in 16th–17th-century Europe that many Protestants changed the gift bringer to the Christ Child or Christkindl, and the date of giving gifts changed from December 6 to Christmas Eve. Modern history 17th and 18th centuries Following the Protestant Reformation, many of the new denominations, including the Anglican Church and Lutheran Church, continued to celebrate Christmas. In 1629, the Anglican poet John Milton penned On the Morning of Christ's Nativity, a poem that has since been read by many during Christmastide. Donald Heinz, a professor at California State University, states that Martin Luther "inaugurated a period in which Germany would produce a unique culture of Christmas, much copied in North America." Among the congregations of the Dutch Reformed Church, Christmas was celebrated as one of the principal evangelical feasts. However, in 17th century England, some groups such as the Puritans strongly condemned the celebration of Christmas, considering it a Catholic invention and the "trappings of popery" or the "rags of the Beast". In contrast, the established Anglican Church "pressed for a more elaborate observance of feasts, penitential seasons, and saints' days. The calendar reform became a major point of tension between the Anglican party and the Puritan party." The Catholic Church also responded, promoting the festival in a more religiously oriented form. King Charles I of England directed his noblemen and gentry to return to their landed estates in midwinter to keep up their old-style Christmas generosity. 
Following the Parliamentarian victory over Charles I during the English Civil War, England's Puritan rulers banned Christmas in 1647. Protests followed as pro-Christmas rioting broke out in several cities and for weeks Canterbury was controlled by the rioters, who decorated doorways with holly and shouted royalist slogans. The book, The Vindication of Christmas (London, 1652), argued against the Puritans, and makes note of Old English Christmas traditions, dinner, roast apples on the fire, card playing, dances with "plow-boys" and "maidservants", old Father Christmas and carol singing. During the ban, semi-clandestine religious services marking Christ's birth continued to be held, and people sang carols in secret. The Restoration of King Charles II in 1660 ended the ban, and Christmas was again freely celebrated in England. Many Calvinist clergymen disapproved of Christmas celebration. As such, in Scotland, the Presbyterian Church of Scotland discouraged the observance of Christmas, and though James VI commanded its celebration in 1618, attendance at church was scant. The Parliament of Scotland officially abolished the observance of Christmas in 1640, claiming that the church had been "purged of all superstitious observation of days". Whereas in England, Wales and Ireland Christmas Day is a common law holiday, having been a customary holiday since time immemorial, it was not until 1871 that it was designated a bank holiday in Scotland. Following the Restoration of Charles II, Poor Robin's Almanack contained the lines: "Now thanks to God for Charles return, / Whose absence made old Christmas mourn. / For then we scarcely did it know, / Whether it Christmas were or no." The diary of James Woodforde, from the latter half of the 18th century, details the observance of Christmas and celebrations associated with the season over a number of years. As in England, Puritans in Colonial America staunchly opposed the observation of Christmas. The Pilgrims of New England pointedly spent their first December 25 in the New World working normally. Puritans such as Cotton Mather condemned Christmas both because scripture did not mention its observance and because Christmas celebrations of the day often involved boisterous behavior. Many non-Puritans in New England deplored the loss of the holidays enjoyed by the laboring classes in England. Christmas observance was outlawed in Boston in 1659. The ban on Christmas observance was revoked in 1681 by English governor Edmund Andros, but it was not until the mid-19th century that celebrating Christmas became fashionable in the Boston region. At the same time, Christian residents of Virginia and New York observed the holiday freely. Pennsylvania Dutch settlers, predominantly Moravian settlers of Bethlehem, Nazareth, and Lititz in Pennsylvania and the Wachovia settlements in North Carolina, were enthusiastic celebrators of Christmas. The Moravians in Bethlehem had the first Christmas trees in America as well as the first Nativity Scenes. Christmas fell out of favor in the United States after the American Revolution, when it was considered an English custom. George Washington attacked Hessian (German) mercenaries on the day after Christmas during the Battle of Trenton on December 26, 1776, Christmas being much more popular in Germany than in America at this time. 
With the atheistic Cult of Reason in power during the era of Revolutionary France, Christian Christmas religious services were banned and the three kings cake was renamed the "equality cake" under anticlerical government policies. 19th century In the early-19th century, writers imagined Tudor Christmas as a time of heartfelt celebration. In 1843, Charles Dickens wrote the novel A Christmas Carol, which helped revive the "spirit" of Christmas and seasonal merriment. Its instant popularity played a major role in portraying Christmas as a holiday emphasizing family, goodwill, and compassion. Dickens sought to construct Christmas as a family-centered festival of generosity, linking "worship and feasting, within a context of social reconciliation." Superimposing his humanitarian vision of the holiday, in what has been termed "Carol Philosophy", Dickens influenced many aspects of Christmas that are celebrated today in Western culture, such as family gatherings, seasonal food and drink, dancing, games, and a festive generosity of spirit. A prominent phrase from the tale, "Merry Christmas", was popularized following the appearance of the story. This coincided with the appearance of the Oxford Movement and the growth of Anglo-Catholicism, which led a revival in traditional rituals and religious observances. The term Scrooge became a synonym for miser, with "Bah! Humbug!" dismissive of the festive spirit. In 1843, the first commercial Christmas card was produced by Sir Henry Cole. The revival of the Christmas Carol began with William Sandys's "Christmas Carols Ancient and Modern" (1833), with the first appearance in print of "The First Noel", "I Saw Three Ships", "Hark the Herald Angels Sing" and "God Rest Ye Merry, Gentlemen", popularized in Dickens' A Christmas Carol. In Britain, the Christmas tree was introduced in the early 19th century by the German-born Queen Charlotte. In 1832, the future Queen Victoria wrote about her delight at having a Christmas tree, hung with lights, ornaments, and presents placed round it. After her marriage to her German cousin Prince Albert, by 1841 the custom became more widespread throughout Britain. An image of the British royal family with their Christmas tree at Windsor Castle created a sensation when it was published in the Illustrated London News in 1848. A modified version of this image was published in Godey's Lady's Book, Philadelphia in 1850. By the 1870s, putting up a Christmas tree had become common in America. In America, interest in Christmas had been revived in the 1820s by several short stories by Washington Irving which appear in his The Sketch Book of Geoffrey Crayon, Gent. and "Old Christmas". Irving's stories depicted harmonious warm-hearted English Christmas festivities he experienced while staying in Aston Hall, Birmingham, England, that had largely been abandoned, and he used the tract Vindication of Christmas (1652) of Old English Christmas traditions, that he had transcribed into his journal as a format for his stories. In 1822, Clement Clarke Moore wrote the poem A Visit From St. Nicholas (popularly known by its first line: Twas the Night Before Christmas). The poem helped popularize the tradition of exchanging gifts, and seasonal Christmas shopping began to assume economic importance. This also started the cultural conflict between the holiday's spiritual significance and its associated commercialism that some see as corrupting the holiday. 
In her 1850 book The First Christmas in New England, Harriet Beecher Stowe includes a character who complains that the true meaning of Christmas was lost in a shopping spree. While the celebration of Christmas was not yet customary in some regions in the U.S., Henry Wadsworth Longfellow detected "a transition state about Christmas here in New England" in 1856. "The old puritan feeling prevents it from being a cheerful, hearty holiday; though every year makes it more so." In Reading, Pennsylvania, a newspaper remarked in 1861, "Even our presbyterian friends who have hitherto steadfastly ignored Christmas—threw open their church doors and assembled in force to celebrate the anniversary of the Savior's birth." The First Congregational Church of Rockford, Illinois, "although of genuine Puritan stock", was 'preparing for a grand Christmas jubilee', a news correspondent reported in 1864. By 1860, fourteen states including several from New England had adopted Christmas as a legal holiday. In 1875, Louis Prang introduced the Christmas card to Americans. He has been called the "father of the American Christmas card". On June 28, 1870, Christmas was formally declared a United States federal holiday. 20th century During the First World War and particularly (but not exclusively) in 1914, a series of informal truces took place for Christmas between opposing armies. The truces, which were organised spontaneously by fighting men, ranged from promises not to shoot shouted at a distance in order to ease the pressure of war for the day to friendly socializing, gift giving and even sport between enemies. These incidents became a well known and semi-mythologised part of popular memory. They have been described as a symbol of common humanity even in the darkest of situations and used to demonstrate to children the ideals of Christmas. Up to the 1950s in the UK, many Christmas customs were restricted to the upper classes and better-off families. The mass of the population had not adopted many of the Christmas rituals that later became general. The Christmas tree was rare. Christmas dinner might be beef or goose – certainly not turkey. In their stockings children might get an apple, orange, and sweets. Full celebration of a family Christmas with all the trimmings only became widespread with increased prosperity from the 1950s. National papers were published on Christmas Day until 1912. Post was still delivered on Christmas Day until 1961. League football matches continued in Scotland until the 1970s while in England they ceased at the end of the 1950s. Under the state atheism of the Soviet Union, after its foundation in 1917, Christmas celebrations—along with other Christian holidays—were prohibited in public. During the 1920s, '30s, and '40s, the League of Militant Atheists encouraged school pupils to campaign against Christmas traditions, such as the Christmas tree, as well as other Christian holidays, including Easter; the League established an antireligious holiday to be the 31st of each month as a replacement. At the height of this persecution, in 1929, on Christmas Day, children in Moscow were encouraged to spit on crucifixes as a protest against the holiday. Instead, the importance of the holiday and all its trappings, such as the Christmas tree and gift-giving, was transferred to the New Year. It was not until the dissolution of the Soviet Union in 1991 that the persecution ended and Orthodox Christmas became a state holiday again for the first time in Russia after seven decades. 
European History Professor Joseph Perry wrote that likewise, in Nazi Germany, "because Nazi ideologues saw organized religion as an enemy of the totalitarian state, propagandists sought to deemphasize—or eliminate altogether—the Christian aspects of the holiday" and that "Propagandists tirelessly promoted numerous Nazified Christmas songs, which replaced Christian themes with the regime's racial ideologies." As Christmas celebrations began to be held around the world even outside traditional Christian cultures in the 20th century, some Muslim-majority countries subsequently banned the practice of Christmas, claiming it undermines Islam. Observance and traditions Christmas Day is celebrated as a major festival and public holiday in countries around the world, including many whose populations are mostly non-Christian. In some non-Christian areas, periods of former colonial rule introduced the celebration (e.g. Hong Kong); in others, Christian minorities or foreign cultural influences have led populations to observe the holiday. Countries such as Japan, where Christmas is popular despite there being only a small number of Christians, have adopted many of the cultural aspects of Christmas, such as gift-giving, decorations, and Christmas trees. A similar example is in Turkey, being Muslim-majority and with a small number of Christians, where Christmas trees and decorations tend to line public streets during the festival. Among countries with a strong Christian tradition, a variety of Christmas celebrations have developed that incorporate regional and local cultures. Church attendance Christmas Day (inclusive of its vigil, Christmas Eve), is a Festival in the Lutheran Churches, a solemnity in the Roman Catholic Church, and a Principal Feast of the Anglican Communion. Other Christian denominations do not rank their feast days but nevertheless place importance on Christmas Eve/Christmas Day, as with other Christian feasts like Easter, Ascension Day, and Pentecost. As such, for Christians, attending a Christmas Eve or Christmas Day church service plays an important part in the recognition of the Christmas season. Christmas, along with Easter, is the period of highest annual church attendance. A 2010 survey by LifeWay Christian Resources found that six in ten Americans attend church services during this time. In the United Kingdom, the Church of England reported an estimated attendance of people at Christmas services in 2015. Decorations Nativity scenes are known from 10th-century Rome. They were popularised by Saint Francis of Assisi from 1223, quickly spreading across Europe. Different types of decorations developed across the Christian world, dependent on local tradition and available resources, and can vary from simple representations of the crib to far more elaborate sets – renowned manger scene traditions include the colourful Kraków szopka in Poland, which imitate Kraków's historical buildings as settings, the elaborate Italian presepi (Neapolitan, Genoese and Bolognese), or the Provençal crèches in southern France, using hand-painted terracotta figurines called santons. In certain parts of the world, notably Sicily, living nativity scenes following the tradition of Saint Francis are a popular alternative to static crèches. The first commercially produced decorations appeared in Germany in the 1860s, inspired by paper chains made by children. 
In countries where a representation of the Nativity scene is very popular, people are encouraged to compete and create the most original or realistic ones. Within some families, the pieces used to make the representation are considered a valuable family heirloom. The traditional colors of Christmas decorations are red, green, and gold. Red symbolizes the blood of Jesus, which was shed in his crucifixion; green symbolizes eternal life, and in particular the evergreen tree, which does not lose its leaves in the winter; and gold is the first color associated with Christmas, as one of the three gifts of the Magi, symbolizing royalty. The Christmas tree was first used by German Lutherans in the 16th century, with records indicating that a Christmas tree was placed in the Cathedral of Strassburg in 1539, under the leadership of the Protestant Reformer, Martin Bucer. In the United States, these "German Lutherans brought the decorated Christmas tree with them; the Moravians put lighted candles on those trees." When decorating the Christmas tree, many individuals place a star at the top of the tree symbolizing the Star of Bethlehem, a fact recorded by The School Journal in 1897. Professor David Albert Jones of Oxford University writes that in the 19th century, it became popular for people to also use an angel to top the Christmas tree in order to symbolize the angels mentioned in the accounts of the Nativity of Jesus. Additionally, in the context of a Christian celebration of Christmas, the Christmas tree, being evergreen in colour, is symbolic of Christ, who offers eternal life; the candles or lights on the tree represent the Light of the World—Jesus—born in Bethlehem. Christian services for family use and public worship have been published for the blessing of a Christmas tree, after it has been erected. The Christmas tree is considered by some as Christianisation of pagan tradition and ritual surrounding the Winter Solstice, which included the use of evergreen boughs, and an adaptation of pagan tree worship; according to eighth-century biographer Æddi Stephanus, Saint Boniface (634–709), who was a missionary in Germany, took an ax to an oak tree dedicated to Thor and pointed out a fir tree, which he stated was a more fitting object of reverence because it pointed to heaven and it had a triangular shape, which he said was symbolic of the Trinity. The English language phrase "Christmas tree" is first recorded in 1835 and represents an importation from the German language. Since the 16th century, the poinsettia, a native plant from Mexico, has been associated with Christmas carrying the Christian symbolism of the Star of Bethlehem; in that country it is known in Spanish as the Flower of the Holy Night. Other popular holiday plants include holly, mistletoe, red amaryllis, and Christmas cactus. Other traditional decorations include bells, candles, candy canes, stockings, wreaths, and angels. Both the displaying of wreaths and candles in each window are a more traditional Christmas display. The concentric assortment of leaves, usually from an evergreen, make up Christmas wreaths and are designed to prepare Christians for the Advent season. Candles in each window are meant to demonstrate the fact that Christians believe that Jesus Christ is the ultimate light of the world. Christmas lights and banners may be hung along streets, music played from speakers, and Christmas trees placed in prominent places. 
It is common in many parts of the world for town squares and consumer shopping areas to sponsor and display decorations. Rolls of brightly colored paper with secular or religious Christmas motifs are manufactured for the purpose of wrapping gifts. In some countries, Christmas decorations are traditionally taken down on Twelfth Night. Nativity play For the Christian celebration of Christmas, the viewing of the Nativity play is one of the oldest Christmastime traditions, with the first reenactment of the Nativity of Jesus taking place in A.D. 1223. In that year, Francis of Assisi assembled a Nativity scene outside of his church in Italy and children sung Christmas carols celebrating the birth of Jesus. Each year, this grew larger and people travelled from afar to see Francis' depiction of the Nativity of Jesus that came to feature drama and music. Nativity plays eventually spread throughout all of Europe, where they remain popular. Christmas Eve and Christmas Day church services often came to feature Nativity plays, as did schools and theatres. In France, Germany, Mexico and Spain, Nativity plays are often reenacted outdoors in the streets. Music and carols The earliest extant specifically Christmas hymns appear in fourth-century Rome. Latin hymns such as "Veni redemptor gentium", written by Ambrose, Archbishop of Milan, were austere statements of the theological doctrine of the Incarnation in opposition to Arianism. "Corde natus ex Parentis" ("Of the Father's love begotten") by the Spanish poet Prudentius (d. 413) is still sung in some churches today. In the 9th and 10th centuries, the Christmas "Sequence" or "Prose" was introduced in North European monasteries, developing under Bernard of Clairvaux into a sequence of rhymed stanzas. In the 12th century the Parisian monk Adam of St. Victor began to derive music from popular songs, introducing something closer to the traditional Christmas carol. Christmas carols in English appear in a 1426 work of John Awdlay who lists twenty five "caroles of Cristemas", probably sung by groups of 'wassailers', who went from house to house. The songs now known specifically as carols were originally communal folk songs sung during celebrations such as "harvest tide" as well as Christmas. It was only later that carols began to be sung in church. Traditionally, carols have often been based on medieval chord patterns, and it is this that gives them their uniquely characteristic musical sound. Some carols like "Personent hodie", "Good King Wenceslas", and "In dulci jubilo" can be traced directly back to the Middle Ages. They are among the oldest musical compositions still regularly sung. "Adeste Fideles" (O Come all ye faithful) appears in its current form in the mid-18th century. The singing of carols increased in popularity after the Protestant Reformation in the Lutheran areas of Europe, as the Reformer Martin Luther wrote carols and encouraged their use in worship, in addition to spearheading the practice of caroling outside the Mass. The 18th-century English reformer Charles Wesley, an early Methodist divine, understood the importance of music to Christian worship. In addition to setting many psalms to melodies, he wrote texts for at least three Christmas carols. The best known was originally entitled "Hark! How All the Welkin Rings", later renamed "Hark! the Herald Angels Sing". Christmas seasonal songs of a nonreligious nature emerged in the late 18th century. 
The Welsh melody for "Deck the Halls" dates from 1794, with the lyrics added by Scottish musician Thomas Oliphant in 1862, and the American "Jingle Bells" was copyrighted in 1857. Other popular carols include "The First Noel", "God Rest You Merry, Gentlemen", "The Holly and the Ivy", "I Saw Three Ships", "In the Bleak Midwinter", "Joy to the World", "Once in Royal David's City" and "While Shepherds Watched Their Flocks". In the 19th and 20th centuries, African American spirituals and songs about Christmas, based in their tradition of spirituals, became more widely known. An increasing number of seasonal holiday songs were commercially produced in the 20th century, including jazz and blues variations. In addition, there was a revival of interest in early music, from groups singing folk music, such as The Revels, to performers of early medieval and classical music. One of the most ubiquitous festive songs is "We Wish You a Merry Christmas", which originates from the West Country of England in the 1930s. Radio has covered Christmas music from variety shows from the 1940s and 1950s, as well as modern-day stations that exclusively play Christmas music from late November through December 25. Hollywood movies have featured new Christmas music, such as "White Christmas" in Holiday Inn and Rudolph the Red-Nosed Reindeer. Traditional carols have also been included in Hollywood films, such as "Hark! the Herald Angels Sing" in It's a Wonderful Life (1946), and "Silent Night" in A Christmas Story. Traditional cuisine A special Christmas family meal is traditionally an important part of the holiday's celebration, and the food that is served varies greatly from country to country. Some regions have special meals for Christmas Eve, such as Sicily, where 12 kinds of fish are served. In the United Kingdom and countries influenced by its traditions, a standard Christmas meal includes turkey, goose or other large bird, gravy, potatoes, vegetables, sometimes bread and cider. Special desserts are also prepared, such as Christmas pudding, mince pies, Christmas cake, Panettone and Yule log cake. Traditional Christmas meal in Central Europe is fried carp or other fish. Cards Christmas cards are illustrated messages of greeting exchanged between friends and family members during the weeks preceding Christmas Day. The traditional greeting reads "wishing you a Merry Christmas and a Happy New Year", much like that of the first commercial Christmas card, produced by Sir Henry Cole in London in 1843. The custom of sending them has become popular among a wide cross-section of people with the emergence of the modern trend towards exchanging E-cards. Christmas cards are purchased in considerable quantities and feature artwork, commercially designed and relevant to the season. The content of the design might relate directly to the Christmas narrative, with depictions of the Nativity of Jesus, or Christian symbols such as the Star of Bethlehem, or a white dove, which can represent both the Holy Spirit and Peace on Earth. Other Christmas cards are more secular and can depict Christmas traditions, mythical figures such as Santa Claus, objects directly associated with Christmas such as candles, holly, and baubles, or a variety of images associated with the season, such as Christmastide activities, snow scenes, and the wildlife of the northern winter. Some prefer cards with a poem, prayer, or Biblical verse; while others distance themselves from religion with an all-inclusive "Season's greetings". 
Commemorative stamps
A number of nations have issued commemorative stamps at Christmastide. Postal customers will often use these stamps to mail Christmas cards, and they are popular with philatelists. These stamps are regular postage stamps, unlike Christmas seals, and are valid for postage year-round. They usually go on sale sometime between early October and early December and are printed in considerable quantities.
Gift giving
The exchanging of gifts is one of the core aspects of the modern Christmas celebration, making it the most profitable time of year for retailers and businesses throughout the world. On Christmas, people exchange gifts based on the Christian tradition associated with Saint Nicholas, and the gifts of gold, frankincense, and myrrh which were given to the baby Jesus by the Magi. The practice of gift giving in the Roman celebration of Saturnalia may have influenced Christian customs, but on the other hand the Christian "core dogma of the Incarnation, however, solidly established the giving and receiving of gifts as the structural principle of that recurrent yet unique event", because it was the Biblical Magi, "together with all their fellow men, who received the gift of God through man's renewed participation in the divine life." However, Thomas J. Talley holds that the Roman Emperor Aurelian placed the alternate festival on December 25 in order to compete with the growth of the Christian Church, which had already been celebrating Christmas on that date.
Gift-bearing figures
A number of figures are associated with Christmas and the seasonal giving of gifts. Among these are Father Christmas, also known as Santa Claus (derived from the Dutch for Saint Nicholas), Père Noël, and the Weihnachtsmann; Saint Nicholas or Sinterklaas; the Christkind; Kris Kringle; Joulupukki; tomte/nisse; Babbo Natale; Saint Basil; and Ded Moroz. The Scandinavian tomte (also called nisse) is sometimes depicted as a gnome instead of Santa Claus. The best known of these figures today is red-dressed Santa Claus, of diverse origins. The name Santa Claus can be traced back to the Dutch Sinterklaas, which means simply Saint Nicholas. Nicholas was a 4th-century Greek bishop of Myra, a city in the Roman province of Lycia, whose ruins lie near modern Demre in southwest Turkey. Among other saintly attributes, he was noted for the care of children, generosity, and the giving of gifts. His feast day, December 6, came to be celebrated in many countries with the giving of gifts. Saint Nicholas traditionally appeared in bishop's attire, accompanied by helpers, inquiring about the behaviour of children during the past year before deciding whether they deserved a gift or not. By the 13th century, Saint Nicholas was well known in the Netherlands, and the practice of gift-giving in his name spread to other parts of central and southern Europe. At the Reformation in 16th–17th-century Europe, many Protestants changed the gift bringer to the Christ Child or Christkindl, corrupted in English to Kris Kringle, and the date of giving gifts changed from December 6 to Christmas Eve. The modern popular image of Santa Claus, however, was created in the United States, and in particular in New York. The transformation was accomplished with the aid of notable contributors including Washington Irving and the German-American cartoonist Thomas Nast (1840–1902). Following the American Revolutionary War, some of the inhabitants of New York City sought out symbols of the city's non-English past.
New York had originally been established as the Dutch colonial town of New Amsterdam, and the Dutch Sinterklaas tradition was reinvented as Saint Nicholas. Current tradition in several Latin American countries (such as Venezuela and Colombia) holds that while Santa makes the toys, he then gives them to the Baby Jesus, who is the one who actually delivers them to the children's homes, a reconciliation between traditional religious beliefs and the iconography of Santa Claus imported from the United States. In South Tyrol (Italy), Austria, Czech Republic, Southern Germany, Hungary, Liechtenstein, Slovakia, and Switzerland, the Christkind (Ježíšek in Czech, Jézuska in Hungarian and Ježiško in Slovak) brings the presents. Greek children get their presents from Saint Basil on New Year's Eve, the eve of that saint's liturgical feast. The German St. Nikolaus is not identical with the Weihnachtsmann (who is the German version of Santa Claus / Father Christmas). St. Nikolaus wears a bishop's dress and still brings small gifts (usually candies, nuts, and fruits) on December 6 and is accompanied by Knecht Ruprecht. Although many parents around the world routinely teach their children about Santa Claus and other gift bringers, some have come to reject this practice, considering it deceptive. Multiple gift-giver figures exist in Poland, varying between regions and individual families. St Nicholas (Święty Mikołaj) dominates Central and North-East areas, the Starman (Gwiazdor) is most common in Greater Poland, Baby Jesus (Dzieciątko) is unique to Upper Silesia, with the Little Star (Gwiazdka) and the Little Angel (Aniołek) being common in the South and the South-East. Grandfather Frost (Dziadek Mróz) is less commonly accepted in some areas of Eastern Poland. Across all of Poland, however, St Nicholas is the gift giver on Saint Nicholas Day, December 6.
Date according to Julian calendar
Some jurisdictions of the Eastern Orthodox Church, including those of Russia, Georgia, Ukraine, Macedonia, Montenegro, Serbia, and Jerusalem, mark feasts using the older Julian calendar. At present, there is a difference of 13 days between the Julian calendar and the modern Gregorian calendar, which is used internationally for most secular purposes. As a result, December 25 on the Julian calendar currently corresponds to January 7 on the calendar used by most governments and people in everyday life. Therefore, the aforementioned Orthodox Christians mark December 25 (and thus Christmas) on the day that is internationally considered to be January 7. On 18 October 2022 the Orthodox Church of Ukraine allowed its dioceses to hold Christmas services according to the Revised Julian calendar, i.e., December 25. However, following the Council of Constantinople in 1923, other Orthodox Christians, such as those belonging to the jurisdictions of Constantinople, Bulgaria, Greece, Romania, Antioch, Alexandria, Albania, Cyprus, Finland, and the Orthodox Church in America, among others, began using the Revised Julian calendar, which at present corresponds exactly to the Gregorian calendar. Therefore, these Orthodox Christians mark December 25 (and thus Christmas) on the same day that is internationally considered to be December 25. A further complication is added by the fact that the Armenian Apostolic Church continues the original ancient Eastern Christian practice of celebrating the birth of Christ not as a separate holiday, but on the same day as the celebration of his baptism (Theophany), which is on January 6.
This is a public holiday in Armenia, and it is held on the same day that is internationally considered to be January 6, because since 1923 the Armenian Church in Armenia has used the Gregorian calendar. However, there is also a small Armenian Patriarchate of Jerusalem, which maintains the traditional Armenian custom of celebrating the birth of Christ on the same day as Theophany (January 6), but uses the Julian calendar for the determination of that date. As a result, this church celebrates "Christmas" (more properly called Theophany) on the day that is considered January 19 on the Gregorian calendar in use by the majority of the world. In summary, four different dates are used by different Christian groups to mark the birth of Christ.
Economy
Christmas is typically a peak selling season for retailers in many nations around the world. Sales increase dramatically as people purchase gifts, decorations, and supplies to celebrate. In the United States, the "Christmas shopping season" starts as early as October. In Canada, merchants begin advertising campaigns just before Halloween (October 31), and step up their marketing following Remembrance Day on November 11. In the UK and Ireland, the Christmas shopping season starts from mid-November, around the time when high street Christmas lights are turned on. In the United States, it has been calculated that a quarter of all personal spending takes place during the Christmas/holiday shopping season. Figures from the U.S. Census Bureau reveal that expenditure in department stores nationwide rose from $20.8 billion in November 2004 to $31.9 billion in December 2004, an increase of 54 percent. In other sectors, the pre-Christmas increase in spending was even greater, there being a November–December buying surge of 100 percent in bookstores and 170 percent in jewelry stores. In the same year employment in American retail stores rose from 1.6 million to 1.8 million in the two months leading up to Christmas. Industries completely dependent on Christmas include Christmas cards, of which 1.9 billion are sent in the United States each year, and live Christmas trees, of which 20.8 million were cut in the U.S. in 2002. For 2019, the average US adult was projected to spend $920 on gifts alone. In the UK in 2010, up to £8 billion was expected to be spent online at Christmas, approximately a quarter of total retail festive sales. In most Western nations, Christmas Day is the least active day of the year for business and commerce; almost all retail, commercial and institutional businesses are closed, and almost all industries cease activity (more than any other day of the year), whether laws require such or not. In England and Wales, the Christmas Day (Trading) Act 2004 prevents all large shops from trading on Christmas Day. Similar legislation was approved in Scotland in 2007. Film studios release many high-budget movies during the holiday season, including Christmas films, fantasy movies or high-tone dramas with high production values, in hopes of maximizing the chance of nominations for the Academy Awards. One economist's analysis calculates that, despite increased overall spending, Christmas is a deadweight loss under orthodox microeconomic theory, because of the effect of gift-giving. This loss is calculated as the difference between what the gift giver spent on the item and what the gift receiver would have paid for the item. It is estimated that in 2001, Christmas resulted in a $4 billion deadweight loss in the U.S.
alone. Because of complicating factors, this analysis is sometimes used to discuss possible flaws in current microeconomic theory. Other deadweight losses include the effects of Christmas on the environment and the fact that material gifts are often perceived as white elephants, imposing cost for upkeep and storage and contributing to clutter. Controversies Christmas has at times been the subject of controversy and attacks from various sources, both Christian and non-Christian. Historically, it was prohibited by Puritans during their ascendency in the Commonwealth of England (1647–1660), and in Colonial New England where the Puritans outlawed the celebration of Christmas in 1659 on the grounds that Christmas was not mentioned in Scripture and therefore violated the Reformed regulative principle of worship. The Parliament of Scotland, which was dominated by Presbyterians, passed a series of acts outlawing the observance of Christmas between 1637 and 1690; Christmas Day did not become a public holiday in Scotland until 1871. Today, some conservative Reformed denominations such as the Free Presbyterian Church of Scotland and the Reformed Presbyterian Church of North America likewise reject the celebration of Christmas based on the regulative principle and what they see as its non-Scriptural origin. Christmas celebrations have also been prohibited by atheist states such as the Soviet Union and more recently majority Muslim states such as Somalia, Tajikistan and Brunei. Some Christians and organizations such as Pat Robertson's American Center for Law and Justice cite alleged attacks on Christmas (dubbing them a "war on Christmas"). Such groups claim that any specific mention of the term "Christmas" or its religious aspects is being increasingly censored, avoided, or discouraged by a number of advertisers, retailers, government (prominently schools), and other public and private organizations. One controversy is the occurrence of Christmas trees being renamed Holiday trees. In the U.S. there has been a tendency to replace the greeting Merry Christmas with Happy Holidays, which is considered inclusive at the time of the Jewish celebration of Hanukkah. In the U.S. and Canada, where the use of the term "Holidays" is most prevalent, opponents have denounced its usage and avoidance of using the term "Christmas" as being politically correct. In 1984, the U.S. Supreme Court ruled in Lynch v. Donnelly that a Christmas display (which included a Nativity scene) owned and displayed by the city of Pawtucket, Rhode Island, did not violate the First Amendment. American Muslim scholar Abdul Malik Mujahid has said that Muslims must treat Christmas with respect, even if they disagree with it. The government of the People's Republic of China officially espouses state atheism, and has conducted antireligious campaigns to this end. In December 2018, officials raided Christian churches prior to Christmastide and coerced them to close; Christmas trees and Santa Clauses were also forcibly removed. See also List of Christmas films Mithraism in comparison with other belief systems#25th of December Notes References Further reading Bowler, Gerry, The World Encyclopedia of Christmas (October 2004: McClelland & Stewart). Bowler, Gerry, Santa Claus: A Biography (November 2007: McClelland & Stewart). Comfort, David, Just Say Noel: A History of Christmas from the Nativity to the Nineties (November 1995: Fireside). Count, Earl W., 4000 Years of Christmas: A Gift from the Ages (November 1997: Ulysses Press). 
Federer, William J., There Really Is a Santa Claus: The History of St. Nicholas & Christmas Holiday Traditions (December 2002: Amerisearch). Kelly, Joseph F., The Origins of Christmas (August 2004: Liturgical Press). Miles, Clement A., Christmas Customs and Traditions (1976: Dover Publications). Nissenbaum, Stephen, The Battle for Christmas (1996; New York: Vintage Books, 1997). Rosenthal, Jim, St. Nicholas: A Closer Look at Christmas (July 2006: Nelson Reference).
External links
Christmas: Its Origin and Associations, by William Francis Dawson, 1902, from Project Gutenberg
https://en.wikipedia.org/wiki/Covalent%20bond
Covalent bond
A covalent bond is a chemical bond that involves the sharing of electrons to form electron pairs between atoms. These electron pairs are known as shared pairs or bonding pairs. The stable balance of attractive and repulsive forces between atoms, when they share electrons, is known as covalent bonding. For many molecules, the sharing of electrons allows each atom to attain the equivalent of a full valence shell, corresponding to a stable electronic configuration. In organic chemistry, covalent bonding is much more common than ionic bonding. Covalent bonding also includes many kinds of interactions, including σ-bonding, π-bonding, metal-to-metal bonding, agostic interactions, bent bonds, three-center two-electron bonds and three-center four-electron bonds. The term covalent bond dates from 1939. The prefix co- means jointly, associated in action, partnered to a lesser degree, etc.; thus a "co-valent bond", in essence, means that the atoms share "valence", such as is discussed in valence bond theory. In the molecule H2, the hydrogen atoms share the two electrons via covalent bonding. Covalency is greatest between atoms of similar electronegativities. Thus, covalent bonding does not necessarily require that the two atoms be of the same element, only that they be of comparable electronegativity. Covalent bonding that entails the sharing of electrons over more than two atoms is said to be delocalized.
History
The term covalence in regard to bonding was first used in 1919 by Irving Langmuir in a Journal of the American Chemical Society article entitled "The Arrangement of Electrons in Atoms and Molecules". Langmuir wrote that "we shall denote by the term covalence the number of pairs of electrons that a given atom shares with its neighbors." The idea of covalent bonding can be traced several years before 1919 to Gilbert N. Lewis, who in 1916 described the sharing of electron pairs between atoms. He introduced the Lewis notation or electron dot notation or Lewis dot structure, in which valence electrons (those in the outer shell) are represented as dots around the atomic symbols. Pairs of electrons located between atoms represent covalent bonds. Multiple pairs represent multiple bonds, such as double bonds and triple bonds. An alternative form of representation, not shown here, has bond-forming electron pairs represented as solid lines. Lewis proposed that an atom forms enough covalent bonds to form a full (or closed) outer electron shell. In a Lewis structure of methane (CH4), the carbon atom has a valence of four and is, therefore, surrounded by eight electrons (the octet rule), four from the carbon itself and four from the hydrogens bonded to it. Each hydrogen has a valence of one and is surrounded by two electrons (a duet rule) – its own one electron plus one from the carbon. The numbers of electrons correspond to full shells in the quantum theory of the atom; the outer shell of a carbon atom is the n = 2 shell, which can hold eight electrons, whereas the outer (and only) shell of a hydrogen atom is the n = 1 shell, which can hold only two. While the idea of shared electron pairs provides an effective qualitative picture of covalent bonding, quantum mechanics is needed to understand the nature of these bonds and predict the structures and properties of simple molecules. Walter Heitler and Fritz London are credited with the first successful quantum mechanical explanation of a chemical bond (molecular hydrogen) in 1927.
Their work was based on the valence bond model, which assumes that a chemical bond is formed when there is good overlap between the atomic orbitals of participating atoms.
Types of covalent bonds
Atomic orbitals (except for s orbitals) have specific directional properties leading to different types of covalent bonds. Sigma (σ) bonds are the strongest covalent bonds and are due to head-on overlapping of orbitals on two different atoms. A single bond is usually a σ bond. Pi (π) bonds are weaker and are due to lateral overlap between p (or d) orbitals. A double bond between two given atoms consists of one σ and one π bond, and a triple bond is one σ and two π bonds. Covalent bonds are also affected by the electronegativity of the connected atoms, which determines the chemical polarity of the bond. Two atoms with equal electronegativity will make nonpolar covalent bonds such as H–H. An unequal relationship creates a polar covalent bond such as with H−Cl. However, polarity also requires geometric asymmetry, or else dipoles may cancel out, resulting in a non-polar molecule.
Covalent structures
There are several types of structures for covalent substances, including individual molecules, molecular structures, macromolecular structures and giant covalent structures. Individual molecules have strong bonds that hold the atoms together, but generally, there are negligible forces of attraction between molecules. Such covalent substances are usually gases, for example, HCl, SO2, CO2, and CH4. In molecular structures, there are weak forces of attraction. Such covalent substances are low-boiling-temperature liquids (such as ethanol), and low-melting-temperature solids (such as iodine and solid CO2). Macromolecular structures have large numbers of atoms linked by covalent bonds in chains, including synthetic polymers such as polyethylene and nylon, and biopolymers such as proteins and starch. Network covalent structures (or giant covalent structures) contain large numbers of atoms linked in sheets (such as graphite), or 3-dimensional structures (such as diamond and quartz). These substances have high melting and boiling points, are frequently brittle, and tend to have high electrical resistivity. Elements that have high electronegativity, and the ability to form three or four electron pair bonds, often form such large macromolecular structures.
One- and three-electron bonds
Bonds with one or three electrons can be found in radical species, which have an odd number of electrons. The simplest example of a 1-electron bond is found in the dihydrogen cation, H2+. One-electron bonds often have about half the bond energy of a 2-electron bond, and are therefore called "half bonds". However, there are exceptions: in the case of dilithium, the bond is actually stronger for the 1-electron Li2+ than for the 2-electron Li2. This exception can be explained in terms of hybridization and inner-shell effects. The simplest example of three-electron bonding can be found in the helium dimer cation, He2+. It is considered a "half bond" because it consists of only one shared electron (rather than two); in molecular orbital terms, the third electron is in an anti-bonding orbital which cancels out half of the bond formed by the other two electrons. Another example of a molecule containing a 3-electron bond, in addition to two 2-electron bonds, is nitric oxide, NO. The oxygen molecule, O2, can also be regarded as having two 3-electron bonds and one 2-electron bond, which accounts for its paramagnetism and its formal bond order of 2.
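The "half bond" and bond-order values quoted above follow from the standard molecular-orbital bond-order formula, bond order = (bonding electrons − antibonding electrons) / 2, which the text uses implicitly rather than stating. The sketch below is only an informal illustration of that arithmetic; the electron counts are the conventional valence-MO occupancies for each species, and the function name is invented for this example rather than taken from any chemistry library.

```python
# Sketch: molecular-orbital bond order = (bonding - antibonding electrons) / 2.
# Occupancies are the usual valence-MO fillings for each species.

def bond_order(bonding_electrons: int, antibonding_electrons: int) -> float:
    return (bonding_electrons - antibonding_electrons) / 2

examples = {
    "H2+ (one-electron bond)": (1, 0),    # a single bonding electron
    "He2+ (three-electron bond)": (2, 1), # the third electron is antibonding
    "NO": (8, 3),                         # one 3-electron bond plus two 2-electron bonds
    "O2": (8, 4),                         # paramagnetic, formal bond order 2
}
for species, (b, a) in examples.items():
    print(f"{species}: bond order {bond_order(b, a)}")
# H2+ and He2+ both come out at 0.5 ("half bonds"); NO at 2.5; O2 at 2.
```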
Chlorine dioxide and its heavier analogues bromine dioxide and iodine dioxide also contain three-electron bonds. Molecules with odd-electron bonds are usually highly reactive. These types of bond are only stable between atoms with similar electronegativities.
Resonance
There are situations whereby a single Lewis structure is insufficient to explain the electron configuration in a molecule and its resulting experimentally-determined properties, hence a superposition of structures is needed. The same two atoms in such molecules can be bonded differently in different Lewis structures (a single bond in one, a double bond in another, or even none at all), resulting in a non-integer bond order. The nitrate ion is one such example, with three equivalent structures. The bond between the nitrogen and each oxygen is a double bond in one structure and a single bond in the other two, so that the average bond order for each N–O interaction is (2 + 1 + 1)/3 = 4/3.
Aromaticity
In organic chemistry, when a molecule with a planar ring obeys Hückel's rule, where the number of π electrons fits the formula 4n + 2 (where n is an integer), it attains extra stability and symmetry. In benzene, the prototypical aromatic compound, there are 6 π bonding electrons (n = 1, 4n + 2 = 6). These occupy three delocalized π molecular orbitals (molecular orbital theory) or form conjugate π bonds in two resonance structures that linearly combine (valence bond theory), creating a regular hexagon exhibiting a greater stabilization than the hypothetical 1,3,5-cyclohexatriene. In the case of heterocyclic aromatics and substituted benzenes, the electronegativity differences between different parts of the ring may dominate the chemical behavior of aromatic ring bonds, which otherwise are equivalent.
Hypervalence
Certain molecules such as xenon difluoride and sulfur hexafluoride have higher coordination numbers than would be possible with strictly covalent bonding according to the octet rule. This is explained by the three-center four-electron bond ("3c–4e") model, which interprets the molecular wavefunction in terms of non-bonding highest occupied molecular orbitals in molecular orbital theory and resonance of sigma bonds in valence bond theory.
Electron deficiency
In three-center two-electron bonds ("3c–2e") three atoms share two electrons in bonding. This type of bonding occurs in boron hydrides such as diborane (B2H6), which are often described as electron deficient because there are not enough valence electrons to form localized (2-centre 2-electron) bonds joining all the atoms. However, the more modern description using 3c–2e bonds does provide enough bonding orbitals to connect all the atoms, so that the molecules can instead be classified as electron-precise. Each such bond (2 per molecule in diborane) contains a pair of electrons which connect the boron atoms to each other in a banana shape, with a proton (the nucleus of a hydrogen atom) in the middle of the bond, sharing electrons with both boron atoms. In certain cluster compounds, so-called four-center two-electron bonds also have been postulated.
Quantum mechanical description
After the development of quantum mechanics, two basic theories were proposed to provide a quantum description of chemical bonding: valence bond (VB) theory and molecular orbital (MO) theory. A more recent quantum description is given in terms of atomic contributions to the electronic density of states.
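Before turning to the comparison of the two theories, the two counting rules used above — the resonance-averaged bond order and Hückel's 4n + 2 criterion — can be made concrete in a few lines. The sketch below is purely illustrative; the function name is invented for this example, and the rule check applies only to planar, monocyclic π systems as stated above.

```python
# Sketch: Hückel's rule (4n + 2 pi electrons for some non-negative integer n)
# and the resonance-averaged N-O bond order for the nitrate ion.

def satisfies_huckel(pi_electrons: int) -> bool:
    """True if pi_electrons equals 4n + 2 for some non-negative integer n."""
    return pi_electrons >= 2 and (pi_electrons - 2) % 4 == 0

for ring, pi in [("benzene", 6), ("cyclobutadiene", 4), ("cyclopentadienyl anion", 6)]:
    print(ring, satisfies_huckel(pi))   # True, False, True

# Nitrate: each N-O link is a double bond in one of three equivalent resonance
# structures and a single bond in the other two.
orders_across_structures = [2, 1, 1]
print(sum(orders_across_structures) / len(orders_across_structures))   # 1.33... = 4/3
```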
Comparison of VB and MO theories
The two theories represent two ways to build up the electron configuration of the molecule. For valence bond theory, the atomic hybrid orbitals are filled with electrons first to produce a fully bonded valence configuration, followed by performing a linear combination of contributing structures (resonance) if there are several of them. In contrast, for molecular orbital theory a linear combination of atomic orbitals is performed first, followed by filling of the resulting molecular orbitals with electrons. The two approaches are regarded as complementary, and each provides its own insights into the problem of chemical bonding. As valence bond theory builds the molecular wavefunction out of localized bonds, it is more suited for the calculation of bond energies and the understanding of reaction mechanisms. As molecular orbital theory builds the molecular wavefunction out of delocalized orbitals, it is more suited for the calculation of ionization energies and the understanding of spectral absorption bands. At the qualitative level, both theories contain incorrect predictions. Simple (Heitler–London) valence bond theory correctly predicts the dissociation of homonuclear diatomic molecules into separate atoms, while simple (Hartree–Fock) molecular orbital theory incorrectly predicts dissociation into a mixture of atoms and ions. On the other hand, simple molecular orbital theory correctly predicts Hückel's rule of aromaticity, while simple valence bond theory incorrectly predicts that cyclobutadiene has larger resonance energy than benzene. Although the wavefunctions generated by both theories at the qualitative level do not agree and do not match the stabilization energy by experiment, they can be corrected by configuration interaction. This is done by combining the valence bond covalent function with the functions describing all possible ionic structures or by combining the molecular orbital ground state function with the functions describing all possible excited states using unoccupied orbitals. It can then be seen that the simple molecular orbital approach overestimates the weight of the ionic structures while the simple valence bond approach neglects them. This can also be described as saying that the simple molecular orbital approach neglects electron correlation while the simple valence bond approach overestimates it. Modern calculations in quantum chemistry usually start from (but ultimately go far beyond) a molecular orbital rather than a valence bond approach, not because of any intrinsic superiority in the former but rather because the MO approach is more readily adapted to numerical computations. Molecular orbitals are orthogonal, which significantly increases the feasibility and speed of computer calculations compared to nonorthogonal valence bond orbitals.
Covalency from atomic contribution to the electronic density of states
In COOP, COHP and BCOOP, evaluation of bond covalency is dependent on the basis set. To overcome this issue, an alternative formulation of the bond covalency can be provided in this way. The center of mass $cm^{\mathrm{A}}(n,l,m_l,m_s)$ of an atomic orbital $|n,l,m_l,m_s\rangle$ with quantum numbers $n$, $l$, $m_l$, $m_s$ for atom A is defined as

$$cm^{\mathrm{A}}(n,l,m_l,m_s; E_0, E_1) = \frac{\int_{E_0}^{E_1} E\, g^{\mathrm{A}}_{|n,l,m_l,m_s\rangle}(E)\, dE}{\int_{E_0}^{E_1} g^{\mathrm{A}}_{|n,l,m_l,m_s\rangle}(E)\, dE}$$

where $g^{\mathrm{A}}_{|n,l,m_l,m_s\rangle}(E)$ is the contribution of the atomic orbital $|n,l,m_l,m_s\rangle$ of the atom A to the total electronic density of states $g(E)$ of the solid,

$$g(E) = \sum_{\mathrm{A}} \sum_{n,l} \sum_{m_l,m_s} g^{\mathrm{A}}_{|n,l,m_l,m_s\rangle}(E),$$

where the outer sum runs over all atoms A of the unit cell. The energy window $[E_0, E_1]$ is chosen in such a way that it encompasses all of the relevant bands participating in the bond.
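As a small numerical illustration of the center-of-mass definition just given (a minimal sketch, not tied to any electronic-structure package; the orbital-projected densities of states below are synthetic Gaussians invented purely for this example), the integrals can be approximated on a discretized energy grid:

```python
import numpy as np

# Sketch: center of mass of an orbital-projected density of states (PDOS)
# over an energy window [E0, E1]. On a uniform grid the spacing cancels,
# so simple sums stand in for the integrals.

def center_of_mass(energies, pdos, e0, e1):
    mask = (energies >= e0) & (energies <= e1)
    e, g = energies[mask], pdos[mask]
    return float(np.sum(e * g) / np.sum(g))

energies = np.linspace(-10.0, 5.0, 1501)            # energy grid in eV
pdos_A = np.exp(-((energies + 4.0) ** 2) / 2.0)     # synthetic band centred near -4 eV
pdos_B = np.exp(-((energies + 6.5) ** 2) / 2.0)     # synthetic band centred near -6.5 eV

cm_A = center_of_mass(energies, pdos_A, -10.0, 0.0)
cm_B = center_of_mass(energies, pdos_B, -10.0, 0.0)
print(cm_A, cm_B, -abs(cm_A - cm_B))  # last value: the covalency measure defined next
```

The closer the two centers of mass, the closer the (negative) covalency value is to zero, indicating greater overlap of the selected atomic bands, as described below.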
If the range to select is unclear, it can be identified in practice by examining the molecular orbitals that describe the electron density along with the considered bond. The relative position $C_{n_{\mathrm{A}}l_{\mathrm{A}}, n_{\mathrm{B}}l_{\mathrm{B}}}$ of the center of mass of the $|n_{\mathrm{A}}, l_{\mathrm{A}}\rangle$ levels of atom A with respect to the center of mass of the $|n_{\mathrm{B}}, l_{\mathrm{B}}\rangle$ levels of atom B is given as

$$C_{n_{\mathrm{A}}l_{\mathrm{A}}, n_{\mathrm{B}}l_{\mathrm{B}}} = -\left|\, cm^{\mathrm{A}}(n_{\mathrm{A}}, l_{\mathrm{A}}; E_0, E_1) - cm^{\mathrm{B}}(n_{\mathrm{B}}, l_{\mathrm{B}}; E_0, E_1) \,\right|,$$

where the contributions of the magnetic and spin quantum numbers are summed. According to this definition, the relative position of the A levels with respect to the B levels is

$$C_{\mathrm{A,B}} = -\left|\, cm^{\mathrm{A}} - cm^{\mathrm{B}} \,\right|,$$

where, for simplicity, we may omit the dependence on the principal quantum number $n$ in the notation referring to $C_{n_{\mathrm{A}}l_{\mathrm{A}}, n_{\mathrm{B}}l_{\mathrm{B}}}$. In this formalism, the greater the value of $C_{\mathrm{A,B}}$, the higher the overlap of the selected atomic bands, and thus the electron density described by those orbitals gives a more covalent bond. The quantity $C_{\mathrm{A,B}}$ is denoted as the covalency of the bond, which is specified in the same units as the energy $E$.
Analogous effect in nuclear systems
An analogous effect to covalent binding is believed to occur in some nuclear systems, with the difference that the shared fermions are quarks rather than electrons. High-energy proton–proton scattering cross-sections indicate that quark interchange of either u or d quarks is the dominant process of the nuclear force at short distance. In particular, it dominates over the Yukawa interaction, where a meson is exchanged. Therefore, covalent binding by quark interchange is expected to be the dominant mechanism of nuclear binding at small distances when the bound hadrons have covalence quarks in common.
See also
Bonding in solids
Bond order
Coordinate covalent bond, also known as a dipolar bond or a dative covalent bond
Covalent bond classification (or LXZ notation)
Covalent radius
Disulfide bond
Hybridization
Hydrogen bond
Ionic bond
Linear combination of atomic orbitals
Metallic bonding
Noncovalent bonding
Resonance (chemistry)
References
Sources
External links
Covalent Bonds and Molecular Structure
Structure and Bonding in Chemistry—Covalent Bonds
Chemical bonding
https://en.wikipedia.org/wiki/Timeline%20of%20computing
Timeline of computing
Timeline of computing presents events in the history of computing organized by year and grouped into six topic areas: predictions and concepts, first use and inventions, hardware systems and processors, operating systems, programming languages, and new application areas. Detailed computing timelines: before 1950, 1950–1979, 1980–1989, 1990–1999, 2000-2009, 2010-2019, 2020–present Graphical timeline See also History of compiler construction History of computing hardware – up to third generation (1960s) History of computing hardware (1960s–present) – third generation and later History of the graphical user interface History of the Internet History of the World Wide Web List of pioneers in computer science Timeline of electrical and electronic engineering Microprocessor chronology Resources Stephen White, A Brief History of Computing The Computer History in time and space, Graphing Project, an attempt to build a graphical image of computer history, in particular operating systems. External links Visual History of Computing 1944-2013 (archived) Digital Revolution
https://en.wikipedia.org/wiki/Colorado%20Springs%2C%20Colorado
Colorado Springs, Colorado
Colorado Springs is a home rule municipality in and the county seat of El Paso County, Colorado, United States. It is the largest city in El Paso County, with a population of 478,961 at the 2020 United States Census, a 15.02% increase since 2010. Colorado Springs is the second-most populous city and the most extensive city in the state of Colorado, and the 40th-most populous city in the United States. It is the principal city of the Colorado Springs metropolitan area and the second-most prominent city of the Front Range Urban Corridor. It is located in east-central Colorado, on Fountain Creek, south of Denver. The city stands more than a mile above sea level. Colorado Springs is near the base of Pikes Peak, which rises 14,115 feet above sea level on the eastern edge of the Southern Rocky Mountains. The city is the largest city north of Mexico above 6,000 feet in elevation.
History
The Ute, Arapaho and Cheyenne peoples were the first recorded inhabitants of the area which would become Colorado Springs. Part of the territory included in the United States' 1803 Louisiana Purchase, the current city area was designated part of the 1854 Kansas Territory. In 1859, after the first local settlement was established, it became part of the Jefferson Territory on October 24 and of El Paso County on November 28. Colorado City, at the Front Range confluence of Fountain and Camp creeks, was "formally organized on August 13, 1859" during the Pikes Peak Gold Rush. It served as the capital of the Colorado Territory from November 5, 1861, until August 14, 1862, when the capital was moved to Golden, before it was finally moved to Denver in 1867. So many immigrants from England had settled in Colorado Springs by the early 1870s that Colorado Springs was locally referred to as "Little London." In 1871 the Colorado Springs Company laid out the towns of La Font (later called Manitou Springs) and Fountain Colony, upstream and downstream, respectively, of Colorado City. Within a year, Fountain Colony was renamed Colorado Springs and officially incorporated. The El Paso County seat shifted from Colorado City in 1873 to the Town of Colorado Springs. On December 1, 1880, Colorado Springs expanded northward with two annexations. The second period of annexations was during 1889–90, and included Seavey's Addition, West Colorado Springs, East End, and another North End addition. In 1891 the Broadmoor Land Company built the Broadmoor suburb, which included the Broadmoor Casino, and by December 12, 1895, the city had "four Mining Exchanges and 275 mining brokers." By 1898, the city was divided into quadrants by the north–south Cascade Avenue and the east–west Washington/Pikes Peak avenues. From 1899 to 1901 Tesla Experimental Station operated on Knob Hill, and aircraft flights to the Broadmoor's neighboring fields began in 1919. Alexander Airport north of the city opened in 1925, and in 1927 the original Colorado Springs Municipal Airport land was purchased east of the city. The city's military presence began during World War II, beginning with Camp Carson (now the 135,000-acre Fort Carson base), which was established in 1941. During the war, the United States Army Air Forces leased land adjacent to the municipal airfield, naming it Peterson Field in December 1942. In November 1950, Ent Air Force Base was selected as the Cold War headquarters for Air Defense Command (ADC). The former WWII Army Air Base, Peterson Field, which had been inactivated at the end of the war, was re-opened in 1951 as a U.S. Air Force base.
North American Aerospace Defense Command (NORAD) was established as a hardened command and control center within the Cheyenne Mountain Complex during the Cold War. Between 1965 and 1968, the University of Colorado Colorado Springs, Pikes Peak State College and Colorado Technical University were established in or near the city. In 1977 most of the former Ent AFB became a US Olympic training center. The Libertarian Party was founded within the city in the 1970s. On October 1, 1981, the Broadmoor Addition, Cheyenne Canon, Ivywild, Skyway, and Stratton Meadows were annexed after the Colorado Supreme Court "overturned a district court decision that voided the annexation". Further annexations expanding the city include the Nielson Addition and Vineyard Commerce Park Annexation in September 2008.
Geography
The city lies in a semi-arid steppe climate region with the Southern Rocky Mountains to the west, the Palmer Divide to the north, high plains further east, and high desert lands to the south when leaving Fountain and approaching Pueblo. Colorado Springs is about one hour and five minutes south of Denver by car using I-25. Colorado Springs has the greatest total area of any municipality in Colorado; at the 2020 United States Census, only a negligible share of that area was water.
Metropolitan area
Colorado Springs has many features of a modern urban area such as parks, bike trails, and open spaces. However, it is not exempt from problems that typically plague cities experiencing tremendous growth, such as overcrowded roads and highways, crime, sprawl, and government budgetary issues. Many of the problems are indirectly or directly caused by the city's difficulty in coping with the large population growth experienced since 1997, and the annexation of the Banning Lewis Ranch area to accommodate further population growth of 175,000 future residents.
Climate
Colorado Springs has a cooler, dry-winter semi-arid climate (Köppen BSk), and its location just east of the Rocky Mountains affords it the rapid warming influence of chinook winds during winter but also subjects it to drastic day-to-day variability in weather conditions. The city has abundant sunshine year-round, averaging 243 sunny days per year, and receives relatively little annual precipitation. Due to unusually low precipitation for several years after flooding in 1999, Colorado Springs enacted lawn water restrictions in 2002. These were lifted in 2005 but permanently reinstated in December 2019. Colorado Springs is one of the most active lightning strike areas in the United States. This natural phenomenon led Nikola Tesla to select Colorado Springs as the preferred location to build his lab and study electricity.
Seasonal climate
December is typically the coldest month. Historically, January had been the coldest month, but, in recent years, December has had both lower daily maxima and minima. Typically, there are 5.2 nights with sub-zero (°F) lows and 23.6 days where the high does not rise above freezing. Snowfall is usually moderate and remains on the ground briefly because of direct sun, although the mountains to the west often receive more than triple the city's seasonal total; March is the snowiest month in the region, both by total accumulation and number of days with measurable snowfall. In addition, 8 of the top 10 heaviest 24-hour snowfalls have occurred from March to May. Summers are warm; July is the warmest month, with about 18 days annually on which the high reaches 90 °F or above.
Due to the high elevation and aridity, nights are usually relatively cool and rarely does the low remain above . Dry weather generally prevails, but brief afternoon thunderstorms are common, especially in July and August when the city receives the majority of its annual rainfall, due to the North American monsoon. The first autumn freeze and the last freeze in the spring, on average, occur on October 2 and May 6, respectively; the average window for measurable snowfall (≥) is October 21 through April 25. Extreme temperatures range from on June 26, 2012 and most recently on June 21, 2016, down to on February 1, 1951, and December 9, 1919. Climate data Cityscape Demographics As of the 2020 United States Census, the population of the City of Colorado Springs was 478,961 (40th most populous U.S. city), the population of the Colorado Springs Metropolitan Statistical Area was 755,105 (79th most populous MSA), and the population of the Front Range Urban Corridor was 5,055,344. As of the April 2010 census, 78.8% of the population of the city was White (non-Hispanic Whites were 70.7% of the population, compared with 86.6% in 1970), 16.1% Hispanic or Latino of any race (compared with 7.4% in 1970), 6.3% Black or African American, 3.0% Asian, 1.0% descended from indigenous peoples of the Americas, 0.3% descended from indigenous Hawaiians and other Pacific islanders, 5.5% of some other race, and 5.1% of two or more races. Mexican Americans made up 14.6% of the city's population, compared with 9.1% in 1990. The median age in the city was 35 years. Economy Colorado Springs's economy is driven primarily by the military, the high-tech industry, and tourism, in that order. The city is experiencing growth in the service sectors. In June 2019, before the COVID-19 pandemic, the unemployment rate was 3.3%. The state's unemployment rate in June 2022 was 3.4% compared to 3.6% for the nation. Military , there are nearly 45,000 active-duty troops in Colorado Springs. There are more than 100,000 veterans and thousands of reservists. The military and defense contractors supply more than 40% of the Pikes Peak region's economy. Colorado Springs is home to the Peterson Space Force Base, Schriever Space Force Base, Cheyenne Mountain Space Force Station, U.S. Space Command, and Space Operations Command— the largest contingent of space service military installations. They are responsible for intelligence gathering, space operations, and cyber missions. Peterson Space Force Base is responsible for the North American Aerospace Defense Command (NORAD) and the United States Northern Command (USNORTHCOM) headquarters, Space Operations Command, and Space Deltas 2, 3, and 7. Located at Peterson is the 302nd Airlift Wing, an Air Force Reserve unit, that transports passengers and cargo and fights wildfires. Schriever Space Force Base is responsible for Joint Task Force-Space Defense and Space Deltas 6, 8, and 9. The NORAD and USNORTHCOM Alternate Command Center is located at the Cheyenne Mountain Complex. Within the mountain complex, the Cheyenne Mountain Space Force Station has been operated by Space Operations Command. On January 13, 2021, the Air Force announced a new permanent home for Space Command, moving it from Colorado Springs to Huntsville, Alabama in 2026, but the decision could be reversed by Congress. Army divisions are trained and stationed at Fort Carson. The United States Air Force Academy was established after World War II, on land donated by the City of Colorado Springs. 
Defense industry The defense industry forms a significant part of the Colorado Springs economy, with some of the city's largest employers being defense contractors. Some defense corporations have left or downsized city campuses, but slight growth has been recorded. Significant defense corporations in the city include Northrop Grumman, Boeing, General Dynamics, L3Harris Technologies, SAIC, ITT, Lockheed Martin, and Bluestaq. The Space Foundation is based in Colorado Springs. High-tech industry A large percentage of Colorado Springs's economy is still based on manufacturing high-tech and complex electronic equipment. The high-tech sector in the Colorado Springs area has decreased its overall presence from 2000 to 2006 (from around 21,000 to around 8,000), with notable reductions in information technology and complex electronic equipment. Current trends project the high-tech employment ratio will continue to decrease. High-tech corporations with connections to the city include: Verizon Business, a telecommunications firm, at its height had nearly 1300 employees in 2008. Hewlett-Packard still has some sales, support, and SAN storage engineering center for the computer industry. Storage Networking Industry Association is the home of the SNIA Technology Center. Keysight Technologies, spun off in 2014 from Agilent, which was itself spun off from HP in 1999 as an independent, publicly traded company, has its oscilloscope research and development division based in Colorado Springs. Intel had 250 employees in 2009. The Intel facility is now used for the centralized unemployment offices, social services, El Paso county offices, and a bitcoin mining facility. Microchip Technology (formerly Atmel), is a chip fabrication organization. The Apple Inc. facility was sold to Sanmina-SCI in 1996. Culture and contemporary life Tourism Almost immediately following the arrival of railroads beginning in 1871, the city's location at the base of Pikes Peak and the Rocky Mountains made it a popular tourism destination. Tourism is the third largest employer in the Pikes Peak region, accounting for more than 16,000 jobs. In 2018, 23 million day and overnight visitors came to the area, contributing $2.4 billion in revenue. Colorado Springs has more than 55 attractions and activities in the area, including Garden of the Gods park, United States Air Force Academy, the ANA Money Museum, Cheyenne Mountain Zoo, Colorado Springs Fine Arts Center at Colorado College, Old Colorado City, The National Museum of World War II Aviation, and the U.S. Olympic & Paralympic Training Center. In 2020, the U.S. Olympic & Paralympic Museum opened; the Flying W Ranch Chuckwagon Dinner & Western Show reopened in 2020. A new Pikes Peak Summit Complex opened at the 14,115-foot summit in 2021. The Manitou and Pikes Peak Railway also reopened in 2021. The downtown Colorado Springs Visitor Information Center offers free area information to leisure and business travelers. The Cultural Office of the Pikes Peak Region (COPPeR), also downtown, supports and advocates for the arts throughout the Pikes Peak Region. It operates the PeakRadar website to communicate city events. Annual cultural events Colorado Springs is home to the annual Colorado Springs Labor Day Lift Off, a hot air balloon festival that takes place over Labor Day weekend at the city's Memorial Park. 
Other annual events include: a comic book convention and science fiction convention called GalaxyFest in February, a pride parade called PrideFest in July, the Greek Festival, the Pikes Peak Ascent and Marathon, and the Steers & Beers Whiskey and Beer Festival in August, and the Emma Crawford Coffin Races and Festival in nearby Manitou Springs and Arts Month in October. The Colorado Springs Festival of Lights Parade is held the first Saturday in December. The parade is held on Tejon Street in Downtown Colorado Springs. Breweries In 2017, Colorado had the third-most craft breweries at 348. Breweries and microbreweries have become popular in Colorado Springs, which hosts over 30 of them. Religious institutions Although houses of worship of almost every major world religion are within the city, Colorado Springs has in particular attracted a large influx of Evangelical Christians and Christian organizations in recent years. At one time Colorado Springs was the national headquarters for 81 different religious organizations, earning the city the tongue-in-cheek nicknames "the Evangelical Vatican" and "The Christian Mecca." Religious groups with regional or international headquarters in Colorado Springs include: Association of Christian Schools International Biblica Children's HopeChest Community Bible Study Compassion International David C. Cook/Integrity Music Development Associates International Engineering Ministries International Family Talk Focus on the Family Global Action HCJB Hope & Home The Navigators One Child Matters Roman Catholic Diocese of Colorado Springs VisionTrust WAY-FM Media Group Young Life Marijuana Although Colorado voters approved Colorado Amendment 64, a constitutional amendment in 2012 legalizing retail sales of marijuana for recreational purposes, the Colorado Springs city council voted not to permit retail shops in the city, as was allowed in the amendment. Medical marijuana outlets continue to operate in Colorado Springs. In 2015, there were 91 medical marijuana clinics in the city, which reported sales of $59.6 million in 2014, up 11 percent from the previous year but without recreational marijuana shops. On April 26, 2016, Colorado Springs city council decided to extend the current six-month moratorium to eighteen months with no new licenses to be granted until May 2017. A scholarly paper suggested the city will give up $25.4 million in tax revenue and fees if the city continues to thwart the industry from opening within the city limits. As of March 1, 2018, there were 131 medical marijuana centers and no recreational cannabis stores. As of 2019 Colorado Springs is still one of seven towns that have only allowed for medical marijuana. In popular culture Colorado Springs has been the subject of or setting for many books, films and television shows, and is a frequent backdrop for political thrillers and military-themed stories because of its many military installations and vital importance to the United States' continental defense. Notable television series using the city as a setting include Dr. Quinn, Medicine Woman, Homicide Hunter and the Stargate series Stargate SG-1, as well as the films WarGames, The Prestige, and BlacKkKlansman. In a North Korean propaganda video released in April 2013, Colorado Springs was inexplicably singled out as one of four targets for a missile strike. The video failed to pinpoint Colorado Springs on the map, instead showing a spot somewhere in Louisiana. 
Sports
Olympic sports
Colorado Springs, dubbed Olympic City USA, is home to the United States Olympic & Paralympic Training Center and the headquarters of the United States Olympic & Paralympic Committee and the United States Anti-Doping Agency. Further, over 50 national sports organizations (non-Olympic) are headquartered in Colorado Springs. These include the National Strength and Conditioning Association, Sports Incubator, and the governing bodies of various non-Olympic sports (such as USA Ultimate), among others. Colorado Springs and Denver hosted the 1962 World Ice Hockey Championships. The city has a long association with the sport of figure skating, having hosted the U.S. Figure Skating Championships six times and the World Figure Skating Championships five times. It is home to the World Figure Skating Museum and Hall of Fame and the Broadmoor Skating Club, a notable training center for the sport. In recent years, the Broadmoor World Arena has hosted skating events such as Skate America and the Four Continents Figure Skating Championships.
Baseball
The Colorado Springs Snow Sox professional baseball team is based in Colorado Springs. The team is a member of the Pecos League, an independent baseball league which is not affiliated with Major or Minor League Baseball.
Pikes Peak International Hill Climb
The Pikes Peak International Hill Climb (PPIHC), also known as The Race to the Clouds, is an annual invitational automobile and motorcycle hill climb to the summit of Pikes Peak, held every year on the last Sunday of June. The highway was not completely paved until 2011.
Local professional teams
Local collegiate teams
The local colleges feature many sports teams. Notable among them are several nationally competitive NCAA Division I teams: the United States Air Force Academy (Falcons) in football, basketball and hockey, and Colorado College (Tigers) in hockey and women's soccer.
Rodeo
Colorado Springs was the original headquarters of the Professional Bull Riders (PBR) from its founding in 1992 until 2005, when the organization was moved to Pueblo.
Parks, trails and open space
The city's Parks, Recreation and Cultural Services department manages 136 neighborhood parks, eight community parks, seven regional parks, and five sports complexes. It also manages an extensive network of park and urban trails, along with 48 designated open-space areas.
Parks
Garden of the Gods is on Colorado Springs's western edge. It is a National Natural Landmark, with red/orange sandstone rock formations often viewed against a backdrop of the snow-capped Pikes Peak. This park is free to the public and offers many recreational opportunities, such as hiking, rock climbing, cycling, horseback riding and tours. It offers a variety of annual events, one of the most popular of which is the Starlight Spectacular, a recreational bike ride held every summer to benefit the Trails and Open Space Coalition of Colorado Springs.
El Paso County Regional Parks include Bear Creek Regional Park, Bear Creek Dog Park, Fox Run Regional Park and Fountain Creek Regional Park and Nature Center. Vegetation typical of these parks includes ponderosa pine (Pinus ponderosa), Gambel oak (Quercus gambelii), narrowleaf yucca (Yucca angustissima, syn. Yucca glauca) and prickly pear cactus (Opuntia macrorhiza).

Trails
Three trails, the New Santa Fe Regional Trail, Pikes Peak Greenway and Fountain Creek Regional Trail, form a continuous path from Palmer Lake, through Colorado Springs, to Fountain, Colorado. The majority of the trail between Palmer Lake and Fountain is a soft-surface breeze gravel trail; a major segment within the Colorado Springs city limits is paved. The trails, except the Monument Valley Park trails, may be used for equestrian traffic. Motorized vehicles are not allowed on the trails. Many of the trails are interconnected, with main spine trails, like the Pikes Peak Greenway, leading to secondary trails.

Government
On November 2, 2010, Colorado Springs voters adopted a council–strong mayor form of government, and the city transitioned to the new system in 2011. Under the council–strong mayor system, the mayor is the chief executive and the city council is the legislative branch. The mayor is a full-time elected position and not a member of the council. The council has nine members: six represent equally populated districts and the remaining three are elected at-large. Colorado Springs City Hall was built from 1902 to 1904 on land donated by W. S. Stratton.

City Council
The Colorado Springs City Council consists of nine elected officials, six of whom represent districts and three of whom represent the city at-large.
District 1 – Dave Donelson
District 2 – Randy Helms – Council President Pro-Tem
District 3 – Stephannie Fortune
District 4 – Yolanda Avila
District 5 – Nancy Henjum
District 6 – Mike O'Malley
At-Large – Bill Murray
At-Large – Tom Strand – Council President
At-Large – Wayne Williams

Politics
In 2017 Caleb Hannan wrote in Politico that Colorado Springs was "staunchly Republican", "a right-wing counterweight to liberal Boulder", and that a study ranked it "the fourth most conservative city in America". In the 2016 presidential election, Donald Trump's margin of victory in El Paso County was 22 points. That year Hannan wrote that downtown Colorado Springs had a different political vibe from the overall area's and that there were "superficial signs of changing demographics". In 2020 the shift toward the political center continued, as the incumbent Republican, Donald Trump, edged out Democrat Joe Biden by only 10.8% in El Paso County.

Education

Primary and secondary education

Public schools
Public education in the city is divided into several school districts:
Colorado Springs School District 11 (center of the city)
Academy School District 20 (north end)
Falcon School District 49 (east side)
Widefield School District 3 (south end)
Fountain-Fort Carson School District 8 (far south end)
Harrison School District 2 (south central area)
James Irwin Charter High School (east central area)
Cheyenne Mountain School District 12 (southwest corner)
The Vanguard School, CIVA Charter High School and The Classical Academy are charter schools.

Private schools
Roman Catholic Diocese of Colorado Springs schools within the boundaries of the city include:
Corpus Christi Catholic School - PreK-8
Divine Redeemer Catholic School - PreK-8
St. Gabriel Classical Academy - PreK-3
St. Paul Catholic School - PreK-8
Other private schools in the city include:
St. Mary's High School - an independent Catholic high school
Fountain Valley School of Colorado - a residential high school established in 1930 with a current enrollment of about 240
The Colorado Springs School - a preK-12 school established in 1962 with a current enrollment of about 300
Colorado Springs Christian Schools - a preK-12 Christian school with two campuses, started in 1972, with an enrollment of about 1,150 in 2021
Evangelical Christian Academy - a preK-12 school established in 1971 with a current enrollment of about 350
Pikes Peak Christian School - a preK-12 Christian school with a current enrollment of about 210
In addition, the state of Colorado runs the Colorado School for the Deaf and Blind, a residential school established in 1874 for people up to age 21, in the city.

Higher education
State institutions offering bachelor's and graduate degree programs in Colorado Springs include the University of Colorado Colorado Springs (UCCS), with more than 12,000 students, and Pikes Peak State College, which offers mostly two-year associate degrees. The United States Air Force Academy is a federal institution offering bachelor's degrees for officer candidates. Private non-profit institutions include Colorado College, established in 1874, with about 2,000 undergraduates. Colorado Christian University has its Colorado Springs Center in the city. Private for-profit institutions include Colorado Technical University, whose main campus is in Colorado Springs, and IntelliTec College, a technical training school.

Transportation

Major highways and roads

Interstate highways
Colorado Springs is primarily served by one interstate highway. I-25 runs north and south through Colorado and traverses the city for nearly , entering the city south of Circle Drive and exiting north of North Gate Boulevard. In El Paso County it is known as Ronald Reagan Highway.

State and U.S. highways
A number of state and U.S. highways serve the city. State Highway 21 is a major east-side semi-expressway from Black Forest to Fountain, known locally and co-signed as Powers Boulevard. State Highway 83 runs north–south from central Denver to northern Colorado Springs. State Highway 94 runs east–west from western Cheyenne County to eastern Colorado Springs, where it terminates at US 24. US 24 is a major route through the city and county, providing access to Woodland Park via Ute Pass to the west and to downtown, Nob Hill and numerous suburbs to the east. It is co-signed with Platte Avenue after SH 21 and originally carried local traffic through town. The Martin Luther King Jr. Bypass runs from I-25 near Circle Drive along Fountain Boulevard to SH 21, then east again. State Highway 115 begins in Cañon City, traveling north along the western edge of Fort Carson; when it reaches the city limits it merges with Nevada Avenue, a signed Business Route of US 85. US 85 and SH 115 are concurrent between Lake Avenue and I-25. US 85 enters the city at Fountain and has been signed at Venetucci Boulevard, Lake Avenue, and Nevada Avenue at various points in history; however, most of US 85 is concurrent with I-25 and is not signed.

County and city roads
In November 2015, voters in Colorado Springs overwhelmingly passed ballot measure 2C, dedicating funds from a temporary sales tax increase to much-needed road and infrastructure improvements over five years. The temporary increase is estimated to bring in approximately $50 million annually, to be used solely to improve roads and infrastructure.
The ballot measure passed by a margin of approximately 65–35%. In 2004, the voters of Colorado Springs and El Paso County established the Pikes Peak Rural Transportation Authority. In early 2010, the city of Colorado Springs approved an expansion of the northernmost part of Powers Boulevard in order to create an Interstate 25 bypass commonly referred to as the Copper Ridge Expansion. Airport Colorado Springs Airport (COS; ICAO: KCOS) has been in operation since 1925. It is the second-largest commercial airport in the state, after Denver International Airport (DEN; ICAO: KDEN). It covers of land at an elevation of approximately . COS is considered to be a joint-use civilian and military airport, as Peterson Space Force Base is a tenant of the airport. It has three paved runways: 17L/35R is , the runway 17R/35L is and the runway 13/31 is . The airport handled 817,000 passengers from October 2020–October 2021, and is served by American, Delta, Southwest, and United. Bicycling In April 2018, the Colorado Springs City Council approved a Bike Master Plan. The vision of the city's Bike Master Plan is "a healthy and vibrant Colorado Springs where bicycling is one of many transportation options for a large portion of the population, and where a well-connected and well-maintained network of urban trails, single-track, and on-street infrastructure offers a bicycling experience for present and future generations that is safe, convenient, and fun for getting around, getting in shape, or getting away." Bike lanes in Colorado Springs have not been deployed without controversy. According to The Gazette, their readers "have mixed feelings for new bike lanes." In December 2016, the City removed a bike lane along Research Parkway due to overwhelming opposition; an online survey found that 80.5% of respondents opposed the bike lane. The Gazette has stated that since the Bike Master Plan was adopted by City Council, "no issue has elicited more argument in The Gazette pages," and due to this immense public interest, on February 25, 2019, The Gazette hosted a town hall meeting called "Battle of the Bike Lanes." Walkability A 2011 study by Walk Score ranked Colorado Springs 34th most walkable of fifty largest U.S. cities. Mountain Metropolitan Transit Mountain Metropolitan Transit (MMT) is testing Battery Electric Buses (BEB), and if the buses perform well, the agency plans to acquire its first three BEBs in 2021 using funds from the Volkswagen emissions scandal and resulting lawsuit and settlement. On April 22, 2022, Mountain Metro unveiled four new all-electric Proterra ZX5 buses to be added to their fleet. The new buses join their current fleet of 67 clean diesel buses. They are funded by the Colorado Department of Transportation Division of Transit and Rail Settlement Transit Bus Replacement Program, Volkswagen Diesel Emission Settlement trust, and Federal transit Administration 5339(b) Buses and Bus Facilities Program grant. The Proterra ZX5 buses run 220 to 330 miles on a single charge, and cost $1.2 million per bus. Mountain Metro Mobility Mountain Metro Mobility is an Americans with Disabilities Act (ADA) federally mandated complementary ADA paratransit service, which provides demand-response service for individuals with mobility needs that prevent them from using the fixed-route bus system. Mountain Metro Rides Mountain Metro Rides offers alternative transportation options to residents of the Pikes Peak Region. 
The program is designed to reduce congestion and pollution by encouraging people to commute by carpool, vanpool, bicycling or walking. Bustang Bustang provides intercity transportation to Colorado Springs. It is part of the South and Outrider lines, which connect to Denver and to Lamar. There is an additional line that connects Colorado Springs directly to the Denver Tech Center. Notable people Sister cities Colorado Springs' sister cities are: Fujiyoshida, Yamanashi, Japan (1962) Kaohsiung, Taiwan (1983) Smolensk, Smolensk Oblast, Russia (1993) Bishkek, Kyrgyzstan (1994) Nuevo Casas Grandes, Chihuahua, Mexico (1996) Canterbury-Bankstown, Sydney, New South Wales, Australia (1999) Olympia, Peloponnese, Greece (2014) Colorado Springs's sister city organization began when it became partners with Fujiyoshida. The torii gate erected to commemorate the relationship stands at the corner of Bijou Street and Nevada Avenue, and is one of the city's most recognizable landmarks. The torii gate, crisscrossed bridge and shrine, in the median between Platte and Bijou Streets downtown, were a gift to Colorado Springs, erected in 1966 by the Rotary Club of Colorado Springs to celebrate the friendship between the two communities. A plaque near the torii gate states that "the purpose of the sister city relationship is to promote understanding between the people of our two countries and cities". The Fujiyoshida Student exchange program has become an annual event. In 2006 and 2010, the Bankstown TAP (Talent Advancement Program) performed with the Youth Symphony and the Colorado Springs Children's Chorale as part of the annual "In Harmony" program. A notable similarity between Colorado Springs and its sister cities is their geographic positions: three of the seven cities are near the foot of a major mountain or mountain range, as is Colorado Springs. See also Club Q nightclub shooting Colorado Bibliography of Colorado Index of Colorado-related articles Outline of Colorado List of counties in Colorado List of municipalities in Colorado List of places in Colorado List of statistical areas in Colorado Front Range Urban Corridor South Central Colorado Urban Area Colorado Springs, CO Metropolitan Statistical Area Media in Colorado Springs, Colorado Tuberculosis treatment in Colorado Springs Notes References External links City of Colorado Springs website CDOT map of the City of Colorado Springs Visit Colorado Springs official website Cities in Colorado County seats in Colorado Pikes Peak Populated places established in 1871 Cities in El Paso County, Colorado Former colonial and territorial capitals in the United States 1871 establishments in Colorado Territory
https://en.wikipedia.org/wiki/Carl%20Menger
Carl Menger
Carl Menger von Wolfensgrün (28 February 1840 – 26 February 1921) was an Austrian economist and the founder of the Austrian School of economics. Menger contributed to the development of the theories of marginalism and marginal utility, which rejected the cost-of-production theories of value developed by classical economists such as Adam Smith and David Ricardo. Marking his departure from them, he came to call his own perspective the subjective theory of value.

Biography

Family and education
Carl Menger von Wolfensgrün was born in the city of Neu-Sandez in Galicia, Austrian Empire, now Nowy Sącz in Poland. He was the son of a wealthy family of minor nobility; his father, Anton Menger, was a lawyer, and his mother, Caroline Gerżabek, was the daughter of a wealthy Bohemian merchant. He had two brothers, Anton and Max, both prominent as lawyers. His son, Karl Menger, was a mathematician who taught for many years at the Illinois Institute of Technology. After attending gymnasium he studied law at the Universities of Prague and Vienna and later received a doctorate in jurisprudence from the Jagiellonian University in Kraków. In the 1860s Menger left school and enjoyed a stint as a journalist reporting and analyzing market news, first at the Lemberger Zeitung in Lemberg, Austrian Galicia (now Lviv, Ukraine) and later at the Wiener Zeitung in Vienna.

Career
During the course of his newspaper work, he noticed a discrepancy between what the classical economics he had been taught in school said about price determination and what real-world market participants believed. In 1867 Menger began a study of political economy which culminated in 1871 with the publication of his Principles of Economics (Grundsätze der Volkswirtschaftslehre), thus becoming the father of the Austrian School of economic thought. It was in this work that he challenged classical cost-based theories of value with his theory of marginality – that price is determined at the margin. In 1872 Menger joined the law faculty at the University of Vienna and spent the next several years teaching finance and political economy in both seminars and lectures to a growing number of students. In 1873, he received the university's chair of economic theory at the very young age of 33. In 1876 Menger began tutoring Archduke Rudolf von Habsburg, the Crown Prince of Austria, in political economy and statistics. For two years, Menger accompanied the prince during his travels, first through continental Europe and then through the British Isles. He is also thought to have assisted the crown prince in the composition of a pamphlet, published anonymously in 1878, which was highly critical of the higher Austrian aristocracy. His association with the prince lasted until Rudolf's suicide in 1889. In 1878 Rudolf's father, Emperor Franz Joseph, appointed Menger to the chair of political economy at Vienna. The title of Hofrat was conferred on him, and he was appointed to the Austrian House of Lords (Herrenhaus) in 1900.

Dispute with the historical school
Ensconced in his professorship, he set about refining and defending the positions he had taken and the methods he had used in Principles, the result of which was the 1883 publication of Investigations into the Method of the Social Sciences with Special Reference to Economics.
The book caused a firestorm of debate, during which members of the historical school of economics began to derisively call Menger and his students the "Austrian School" to emphasize their departure from mainstream German economic thought – the term was specifically used in an unfavorable review by Gustav von Schmoller. In 1884 Menger responded with the pamphlet The Errors of Historicism in German Economics, launching the infamous Methodenstreit, or methodological debate, between the Historical School and the Austrian School. During this time Menger began to attract like-minded disciples who would go on to make their own mark on the field of economics, most notably Eugen von Böhm-Bawerk and Friedrich von Wieser. In the late 1880s, Menger was appointed to head a commission to reform the Austrian monetary system. Over the course of the next decade, he authored a plethora of articles which would revolutionize monetary theory, including "The Theory of Capital" (1888) and "Money" (1892). Largely due to his pessimism about the state of German scholarship, Menger resigned his professorship in 1903 to concentrate on study.

Philosophical influences
Opinions differ on Menger's philosophical influences. It is beyond dispute, however, that he engaged in a rudimentary argument with Plato and a very meticulous one with Aristotle, especially with Aristotle's ethics: "Plato holds that money is an agreed sign for change and Aristotle says that money came into being as an agreement, not by nature, but by law." The influence of Kant can also be demonstrated, and many authors additionally emphasize the rationalism and idealism represented by Christian Wolff. Surveying the literature, most writers hold that Menger represents an essentially Aristotelian position – surprisingly, a position contrary to his subjective theory of value and his methodological individualism. Another point of debate is his use of deduction versus induction: his price theory can be read as nominalistic and, more strongly, anti-essentialistic, which is to say that his approach was inductive.

Economics
Menger used his subjective theory of value to arrive at what he considered one of the most powerful insights in economics: "both sides gain from exchange". Unlike William Jevons, Menger did not believe that goods provide "utils," or units of utility. Rather, he wrote, goods are valuable because they serve various uses whose importance differs. Menger also came up with an explanation of how money develops that is still accepted by some schools of thought today.
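A minimal way to restate the "both sides gain" claim in modern ordinal notation (the traders and goods below are purely illustrative, and the notation is not Menger's own, since he argued in terms of ranked uses rather than numeric utility): if trader A holds a fish but ranks a blanket above it, while trader B holds a blanket but ranks the fish above it,

$$\text{blanket} \succ_A \text{fish} \qquad \text{and} \qquad \text{fish} \succ_B \text{blanket},$$

then voluntarily swapping the two goods moves each trader to a position they rank more highly, so both gain from the exchange without any appeal to cardinal "utils".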
Works 1871 – Principles of Economics 1883 – Investigations into the Method of the Social Sciences with Special Reference to Economics 1884 – The Errors of Historicism in German Economics 1888 – The Theory of Capital 1892 – On the Origins of Money See also Methodenstreit History of macroeconomic thought Historical school of economics References Further reading Ebeling, Richard M., "Carl Menger and the Sesquicentennial Founding of the Austrian School," American Institute for Economic Research, January 5, 2021 Ebeling, Richard M., "Carl Menger's Theory of Institutions and Market Processes," American Institute for Economic Research, April 13, 202 von Wieser, Friedrich, "Carl Menger: A Biographical Appreciation" [1923], American Institute for Economic Research, February 25, 2019 External links The Epistemological Import of Carl Menger's Theory of the Origin of Money Ludwig von Mises in Human Action on Menger's Theory of the Origins of Money Profile on Carl Menger at the History of Economic Thought Website Principles of Economics , online version provided by the Ludwig von Mises Institute. Grundsätze der Volkswirtschaftslehre (Principles of Economics) Principles of Economics (PDF Spanish) On the Origin of Money (English Translation), online version provided by the Monadnock Press Carl Menger Papers, 1857–1985, Rubenstein Library, Duke University 1840 births 1921 deaths 20th-century Austrian economists 19th-century Austrian economists 19th-century Austrian writers Austrian School economists Charles University alumni University of Vienna alumni Jagiellonian University alumni Edlers of Austria German Bohemian people Austrian people of German Bohemian descent People from the Kingdom of Galicia and Lodomeria People from Nowy Sącz Neoclassical economists
https://en.wikipedia.org/wiki/Civilization%20%28video%20game%29
Civilization (video game)
Sid Meier's Civilization is a 1991 turn-based strategy 4X video game developed and published by MicroProse. The game was originally developed for MS-DOS running on a PC, and has undergone numerous revisions for various platforms. The player is tasked with leading an entire human civilization over the course of several millennia by controlling various areas such as urban development, exploration, government, trade, research, and military. The player can control individual units and advance the exploration, conquest and settlement of the game's world. The player can also make such decisions as setting forms of government, tax rates and research priorities. The player's civilization is in competition with other computer-controlled civilizations, with which the player can enter diplomatic relationships that can either end in alliances or lead to war. Civilization was designed by Sid Meier and Bruce Shelley following the successes of Silent Service, Sid Meier's Pirates! and Railroad Tycoon. Civilization has sold 1.5 million copies since its release, and is considered one of the most influential computer games in history due to its establishment of the 4X genre. In addition to its commercial and critical success, the game has been deemed pedagogically valuable due to its presentation of historical relationships. A multiplayer remake, Sid Meier's CivNet, was released for the PC in 1995. Civilization was followed by several sequels starting with Civilization II, with similar or modified scenarios. Gameplay Civilization is a turn-based single-player strategy game. The player takes on the role of the ruler of a civilization, starting with one (or occasionally two) settler units, and attempts to build an empire in competition with two to seven other civilizations. The game requires a fair amount of micromanagement (although less than other simulation games). Along with the larger tasks of exploration, warfare and diplomacy, the player has to make decisions about where to build new cities, which improvements or units to build in each city, which advances in knowledge should be sought (and at what rate), and how to transform the land surrounding the cities for maximum benefit. From time to time the player's towns may be harassed by barbarians, units with no specific nationality and no named leader. These threats only come from huts, unclaimed land or sea, so that over time and turns of exploration, there are fewer and fewer places from which barbarians will emanate. Before the game begins, the player chooses which historical or current civilization to play. In contrast to later games in the Civilization series, this is largely a cosmetic choice, affecting titles, city names, musical heralds, and color. The choice does affect their starting position on the "Play on Earth" map, and thus different resources in one's initial cities, but has no effect on starting position when starting a random world game or a customized world game. The player's choice of civilization also prevents the computer from being able to play as that civilization or the other civilization of the same color, and since computer-controlled opponents display certain traits of their civilizations this affects gameplay as well. The Aztecs are both fiercely expansionist and generally extremely wealthy, for example. Other civilizations include the Americans, the Mongols, and Romans. Each civilization is led by a famous historical figure, such as Mahatma Gandhi for India. The scope of Civilization is larger than most other games. 
The game begins in 4000 BC, before the Bronze Age, and can last through to AD 2100 (on the easiest setting) with Space Age and "future technologies". At the start of the game there are no cities anywhere in the world: the player controls one or two settler units, which can be used to found new cities in appropriate sites (and those cities may build other settler units, which can go out and found new cities, thus expanding the empire). Settlers can also alter terrain, build improvements such as mines and irrigation, build roads to connect cities, and later in the game they can construct railroads which offer unlimited movement. As time advances, new technologies are developed; these technologies are the primary way in which the game changes and grows. At the start, players choose from advances such as pottery, the wheel, and the alphabet to, near the end of the game, nuclear fission and spaceflight. Players can gain a large advantage if their civilization is the first to learn a particular technology (the secrets of flight, for example) and put it to use in a military or other context. Most advances give access to new units, city improvements or derivative technologies: for example, the chariot unit becomes available after the wheel is developed, and the granary building becomes available to build after pottery is developed. The whole system of advancements from beginning to end is called the technology tree, or simply the Tech tree; this concept has been adopted in many other strategy games. Since only one tech may be "researched" at any given time, the order in which technologies are chosen makes a considerable difference in the outcome of the game and generally reflects the player's preferred style of gameplay. Players can also build Wonders of the World in each of the epochs of the game, subject only to obtaining the prerequisite knowledge. These wonders are important achievements of society, science, culture and defense, ranging from the Pyramids and the Great Wall in the Ancient age, to Copernicus' Observatory and Magellan's Expedition in the middle period, up to the Apollo program, the United Nations, and the Manhattan Project in the modern era. Each wonder can only be built once in the world, and requires a lot of resources to build, far more than most other city buildings or units. Wonders provide unique benefits to the controlling civilization. For example, Magellan's Expedition increases the movement rate of naval units. Wonders typically affect either the city in which they are built (for example, the Colossus), every city on the continent (for example, J.S. Bach's Cathedral), or the civilization as a whole (for example, Darwin's Voyage). Some wonders are made obsolete by new technologies. The game can be won by conquering all other civilizations or by winning the space race by reaching the star system of Alpha Centauri. Development Prior Civilization-named games British designer Francis Tresham released his Civilization board game in 1980 under his company Hartland Trefoil. Avalon Hill had obtained the rights to publish it in the United States in 1981. There were at least two attempts to make a computerized version of Tresham's game prior to 1990. Danielle Bunten Berry planned to start work on the game after completing M.U.L.E. in 1983, and again in 1985, after completing The Seven Cities of Gold at Electronic Arts. In 1983 Bunten and producer Joe Ybarra opted to first do Seven Cities of Gold. The success of Seven Cities in 1985 in turn led to a sequel, Heart of Africa. 
Bunten never returned to the idea of Civilization. Don Daglow, designer of Utopia, the first simulation game, began work programming a version of Civilization in 1987. He dropped the project, however, when he was offered an executive position at Brøderbund, and never returned to the game. Development at MicroProse Sid Meier and Bill Stealey co-founded MicroProse in 1982 to develop flight simulators and other military strategy video games based on Stealey's past experiences as a United States Air Force pilot. Around 1989, Meier wanted to expand his repertoire beyond these types of games, as just having finished F-19 Stealth Fighter (1988, 1990), he said "Everything I thought was cool about a flight simulator had gone into that game." He took to heart the success of the new god game genre in particular SimCity (1989) and Populous (1989). Specifically with SimCity, Meier recognized that video games could still be entertaining based on building something up. By then, Meier was not an official employee of MicroProse but worked under contract where the company paid him upfront for game development, a large payment on delivery of the game, and additional royalties on each game of his sold. MicroProse had hired a number of Avalon Hill game designers, including Bruce Shelley. Among other works, Shelley had been responsible for adapting the railroad-based 1829 board game developed by Tresham into 1830: The Game of Railroads and Robber Barons. Shelley had joined MicroProse finding that the board game market was weakening in contrast to the video game market, and initially worked on F-19 Stealth Fighter. Meier recognized Shelley's abilities and background in game design and took him on as personal assistant designer to brainstorm new game ideas. The two initially worked on ideas for Covert Action, but had put these aside when they came up with the concepts for Railroad Tycoon (1990), based loosely on the 1829/1830 board games. Railroad Tycoon was generally well received at its release, but the title did not fit within the nature of flight simulators and military strategy from MicroProse's previous catalog. Meier and Shelley had started a sequel to Railroad Tycoon shortly after its release, but Stealey canceled the project. One positive aspect both had taken from Railroad Tycoon was the idea of multiple smaller systems working together at the same time and the player having to manage them. Both Meier and Shelley recognized that the complex interactions between these systems led players to "make a lot of interesting decisions", and that ruling a whole civilization would readily work well with these underlying systems. Some time later, both discussed their love of the original Empire computer games, and Meier challenged Shelley to give him ten things he would change about Empire; Shelley provided him with twelve. Around May 1990, Meier presented Shelley with a 5-1/4" floppy disk which contained the first prototype of Civilization based on their past discussions and Shelley's list. Meier described his development process as sculpting with clay. His prototype took elements from Empire, Railroad Tycoon, SimCity and the Civilization board game. This initial version of this game was a real-time simulation, with the player defining zones for their population to grow similar to zoning in SimCity. 
Meier and Shelley went back and forth with this, with Shelley providing suggestions based on his playthrough and acting as the game's producer, and Meier coding and reworking the game to address these points, and otherwise without involvement of other MicroProse staff. During this period, Stealey and the other managers became concerned that this game did not fit MicroProse's general catalog as strategy computer games had not yet proven successful. A few months into the development, Stealey requested them to put the project on hold and complete Covert Action, after which they could go back to their new game. Meier and Shelley completed Covert Action which was published in 1990. Once Covert Action was released, Meier and Shelley returned to the prototype. The time away from the project allowed them to recognize that the real-time aspect was not working well, and reworked the game to become turn-based and dropped the zoning aspect, a change that Meier described as "like tossing the clay in the trash and getting a new lump". They incorporated elements of city management and military aspect from Empire, including creating individual military units as well as settler units that replaced the functionality of the zoning approach. Meier felt adding military and combat to the game was necessary as "The game really isn't about being civilized. The competition is what makes the game fun and the players play their best. At times, you have to make the player uncomfortable for the good of the player". Meier also opted to include a technology tree that would help to open the game to many more choices to the player as it continued, creating a non-linear experience. Meier felt players would be able to use the technology tree to adopt a style of play and from which they could use technologies to barter with the other opponents. While the game relies on established recorded history, Meier admitted he did not spend much time in research, usually only to assure the proper chronology or spellings; Shelley noted that they wanted to design for fun, not accuracy, and that "Everything we needed was pretty much available in the children’s section of the library." Computer Gaming World reported in 1994 that "Sid Meier has stated on numerous occasions that he emphasizes the 'fun parts' of a simulation and throws out the rest". Meier described the process as "Add another bit [of clay]—no, that went too far. Scrape it off". He eliminated the potential for any civilization to fall on its own, believing this would be punishing to the player. "Though historically accurate", Meier said, "The moment the Krakatoa volcano blew up, or the bubonic plague came marching through, all anybody wanted to do was reload from a saved game". Meier omitted multiplayer alliances because the computer used them too effectively, causing players to think that it was cheating. He said that by contrast, minefields and minesweepers caused the computer to do "stupid things ... If you've got a feature that makes the AI look stupid, take it out. It's more important not to have stupid AI than to have good AI". Meier also omitted jets and helicopters because he thought players would not find obtaining new technologies in the endgame useful, and online multiplayer support because of the small number of online players ("if you had friends, you wouldn't need to play computer games"); he also did not believe that online play worked well with turn-based play. 
The game was developed for the IBM PC platform, which at the time supported graphics ranging from 16-color EGA to 256-color VGA; Meier opted to support both 16-color and 256-color graphics to allow the game to run on both EGA/Tandy and VGA/MCGA systems. "I've never been able to decide if it was a mistake to keep Civ isolated as long as I did", Meier wrote; while "as many eyes as possible" are beneficial during development, Meier and Shelley worked very quickly together, combining the roles of playtester, game designer, and programmer. As Meier and Shelley neared the end of development, they started presenting the game to the rest of MicroProse for feedback towards publication. This process was slowed by the then vice president of development, who had taken over Meier's former position at the company. Because of Meier's contract terms, this vice president received no financial bonus for the successful publication of Meier's games, and so had little incentive to provide the resources needed to finish the game. Management had also taken issue with the lack of a firm completion date, since, according to Shelley, Meier would consider a game completed only when he felt he had completed it. Eventually the two got the required help for publication, with Shelley overseeing these processes and Meier making the necessary coding changes. "One of my big rules has always been, 'double it, or cut it in half'", Meier wrote. He cut the map's size in half less than a month before Civilization's release, after playtesting revealed that the previous size was too large and made for boring and repetitive gameplay. Other automated features, like city management, were modified to require more player involvement. They also eliminated a secondary branch of the technology tree with minor skills like beer brewing, and spent time reworking the existing technologies and units to make sure they felt appropriate and did not break the game. Most of the game was originally developed with art crafted by Meier, and MicroProse's art department helped to create most of the final assets, though some of Meier's original art was used. Shelley wrote the "Civilopedia" entries for all the elements of the game as well as the game's large manual. The name Civilization came late in the development process. MicroProse recognized at this point that the 1980 Civilization board game might conflict with their video game, as it shared a similar theme, including the technology tree. Meier had acknowledged the board game's influence but considered it not as great as that of Empire or SimCity, while others have noted significant differences that set the video game far apart from the board game, such as the non-linearity introduced by Meier's technology tree. To avoid any potential legal issues, MicroProse negotiated a license to use the Civilization name from Avalon Hill. The addition of Meier's name to the title followed a practice established by Stealey of attaching Meier's name to games, like Civilization, that diverged from MicroProse's past catalog, so that players who had played Meier's combat simulators and recognized his name would give these new games a try. This approach worked, according to Meier, and he would continue the naming scheme for other titles in the future as a type of branding. By the time the game was completed and ready for release, Meier estimated that it had cost $170,000 to develop. Civilization was released in September 1991.
Because of the animosity that MicroProse's management had towards Meier's games, there was very little promotion of the title, though interest in the game spread through word of mouth and helped to boost sales. Following the release on the IBM PC, the game was ported to other platforms; Meier and Shelley provided this code to contractors hired by MicroProse to complete the ports.

CivNet
Civilization was released with only single-player support, with the player working against multiple computer opponents. In 1991, Internet and online gaming was still in its infancy, so this option was not considered for Civilization's release. Over the next few years, as home Internet access took off, MicroProse looked to develop an online version of Civilization. This led to the 1995 release of Sid Meier's CivNet. CivNet allowed up to seven players to play the game, with computer opponents available to fill out up to six active civilizations. Games could be played either in a turn-based mode or in a simultaneous mode, in which each player took their turn at the same time and the game progressed to the next turn only once all players had confirmed they were finished. The game, in addition to better support for Windows 3.1 and Windows 95, supported connectivity through LAN, primitive Internet play, modem, and direct serial link, and included a local hotseat mode. CivNet also included a map editor and a "king builder" to allow a player to customize the names and looks of their civilization as seen by other players. According to Brian Reynolds, who led the development of Civilization II, MicroProse "sincerely believed that CivNet was going to be a much more important product" than the next single-player Civilization game that he and Jeff Briggs had started working on. Reynolds said that because their project was seen as a side effort with little risk, they were able to bring new ideas into Civilization II. As a net result, CivNet was generally overshadowed by Civilization II, which was released the following year.

Post-release
Civilization's critical success created a "golden period of MicroProse" in which there was more potential for similar strategy games to succeed, according to Meier. This put stress on the company's direction and culture. Stealey wanted to continue to pursue military-themed titles, while Meier wanted to continue his success with simulation games. Shelley left MicroProse in 1992 and joined Ensemble Studios, where he used his experience with Civilization to design the Age of Empires games. Stealey had pushed MicroProse to develop console and arcade-based versions of their games, but this put the company into debt, and Stealey eventually sold the company to Spectrum HoloByte in 1993; Spectrum HoloByte kept MicroProse as a separate company upon acquisition. Meier went on to develop Civilization II along with Brian Reynolds, who served in a role similar to Shelley's as design assistant, with additional help from Jeff Briggs and Douglas Kaufman. This game was released in early 1996 and is considered the first sequel to any Sid Meier game. Stealey eventually sold his shares in MicroProse and left the company, and Spectrum HoloByte opted to consolidate the two companies under the name MicroProse in 1996, eliminating numerous positions at MicroProse in the process. As a result, Meier, Briggs, and Reynolds all opted to leave the company and founded Firaxis, which by 2005 became a subsidiary of Take-Two.
After a number of acquisitions and legal actions, the Civilization brand (both as a board game and a video game) is now owned by Take-Two, and Firaxis, under Meier's oversight, continues to develop games in the Civilization series.

Reception
Civilization has been called one of the most important strategy games of all time, and it has a loyal following of fans. This high level of interest has led to the creation of a number of free and open-source versions and inspired similar games by other commercial developers. Computer Gaming World stated that "a new Olympian in the genre of god games has truly emerged", comparing Civilization's importance to computer games to that of the wheel. The game was reviewed in 1992 in Dragon #183 by Hartley, Patricia, and Kirk Lesser in "The Role of Computers" column. The reviewers gave the game 5 out of 5 stars, commenting: "Civilization is one of the highest dollar-to-play-ratio entertainments we've enjoyed. The scope is enormous, the strategies border on being limitless, the excitement is genuinely high, and the experience is worth every dime of the game's purchase price." Jeff Koke reviewed Civilization in Pyramid #2 (July/August 1993), and stated that "Ultimately, there are games that are a lot flashier than Civilization, with cool graphics and animation, but there aren't many - or any - in my book that have the ability to absorb the player so totally and to provide an interesting, unique outcome each and every time it's played." Civilization won the Origins Award in the category Best Military or Strategy Computer Game of 1991. A 1992 Computer Gaming World survey of wargames with modern settings gave the game five stars out of five, describing it as "more addictive than crack ... so rich and textured that the documentation is incomplete". In 1992 the magazine named it the Overall Game of the Year, in 1993 added the game to its Hall of Fame, and in 1996 chose Civilization as the best game of all time. A critic for Next Generation judged the Super NES version to be a disappointing port, with a cumbersome menu system (particularly that the "City" and "Production" windows are on separate screens), an unintuitive button configuration, and ugly scaled-down graphics. However, he gave it a positive recommendation due to the strong gameplay and strategy of the original game: "if you've never taken a crack at this game before, be prepared to lose hours, even days, trying to conquer those pesky Babylonians." Sir Garnabus of GamePro, in contrast, was pleased with the Super NES version's interface, and said the graphics and audio were above those of a typical strategy game. He also said the game stood out among the Super NES's generally action-oriented library. In 2000, GameSpot rated Civilization as the tenth most influential video game of all time. It was also ranked fourth on IGN's 2000 list of the top PC games of all time. In 2004, readers of Retro Gamer voted it the 29th top retro game. In 2007, it was named one of the 16 most influential games in history at Telespiele, a German technology and games trade show. In Poland, it was included in the retrospective lists of the best Amiga games by Wirtualna Polska (ranked ninth) and CHIP (ranked fifth). In 2012, Time named it one of the 100 greatest video games of all time. In 1994, PC Gamer US named Civilization the second best computer game ever. The editors wrote, "The depth of strategies possible is impressive, and the look and feel of the game will keep you playing and exploring for months.
Truly a remarkable title." That same year, PC Gamer UK named its Windows release the sixth best computer game of all time, calling it Sid Meier's "crowning glory". On March 12, 2007, The New York Times reported on a list of the ten most important video games of all time, the so-called game canon, including Civilization. By the release of Civilization II in 1996, Civilization had sold over 850,000 copies. By 2001, sales had reached 1 million copies. Shelley stated in a 2016 interview that Civilization had sold 1.5 million copies. In 2022, The Strong National Museum of Play inducted Sid Meier’s Civilization to its World Video Game Hall of Fame. Reviews Casus Belli #70 (July 1992) Legacy There have been several sequels to Civilization, including Civilization II (1996), Civilization III (2001), Civilization IV (2005), Civilization Revolution (2008), Civilization V (2010), and Civilization VI in 2016. In 1994, Meier produced a similar game titled Colonization. Civilization is generally considered the first major game in the genre of "4X", with the four "X"s equating to "explore, expand, exploit, and exterminate", a term developed by Alan Emrich in promoting 1993's Master of Orion. While other video games with the principles of 4X had been released prior to Civilization, future 4X games would attribute some of their basic design principles to Civilization. An Easter egg named "Nuclear Gandhi" in most of the games in the series references a supposed integer overflow bug in Civilization that causes a computer-controlled Gandhi, normally a highly peaceful leader, to become a nuclear warmonger. The game is said to start Gandhi's "aggression value" at 1 out of a maximum 255 possible for an 8-bit unsigned integer, making a computer-controlled Gandhi tend to avoid armed conflict. However, once a civilization achieves democracy as its form of government, its leader's aggression value falls by 2. Under normal arithmetic principles, Gandhi's "1" would be reduced to "-1", but because the value is an 8-bit unsigned integer, it wraps around to "255", causing Gandhi to suddenly become the most aggressive opponent in the game. Interviewed in 2019, developer Brian Reynolds said with "99.99% certainty" that this story was apocryphal, recalling Gandhi's coded aggression level as being no lower than other peaceful leaders in the game, and doubting that a wraparound would have had the effect described. He noted that all leaders in the game become "pretty ornery" after their acquisition of nuclear weapons, and suggested that this behaviour simply seemed more surprising and memorable when it happened to Gandhi. Meier, in his autobiography, stated "That kind of bug comes from something called unsigned characters, which are not the default in the C programming language, and not something I used for the leader traits. Brian Reynolds wrote Civ II in C++, and he didn't use them, either. We received no complaints about a Gandhi bug when either game came out, nor did we send out any revisions for one. Gandhi's military aggressiveness score remained at 1 throughout the game." He then explains the overflow error story was made up in 2012. It spread from there to a Wikia entry, then eventually to Reddit, and was picked up by news sites like Kotaku and Geek.com. 
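The arithmetic alleged in the apocryphal account is easy to demonstrate in isolation. The C sketch below only illustrates how an unsigned 8-bit value would wrap around if a leader trait were stored and modified that way; as Reynolds and Meier state above, the game's leader traits apparently did not use unsigned characters, and the variable name here is hypothetical.

```c
#include <stdio.h>

int main(void) {
    /* Hypothetical leader trait stored as an 8-bit unsigned value. */
    unsigned char aggression = 1;   /* Gandhi's supposed base score */

    /* The alleged adjustment applied when a civilization adopts democracy. */
    aggression -= 2;

    /* Unsigned arithmetic wraps modulo 256, so 1 - 2 stores 255, not -1. */
    printf("aggression after democracy: %d\n", aggression);  /* prints 255 */
    return 0;
}
```

Whether or not any such wraparound ever occurred in the shipped game, the snippet shows why the story was plausible enough to spread.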
Another relic of Civilization was the nature of its combat: a military unit from an earlier period could remain in play into modern times, gaining combat bonuses from veteran proficiency, so that, against all common sense, primitive units could easily beat modern technology, the common example being a veteran phalanx unit able to fend off a battleship. Meier noted that this resulted from not anticipating how players would use units; he had expected them to use their forces more as in a war-themed board game, to protect borders and maintain zones of control, rather than to create "stacks of doom". Later Civilization games made many changes to their combat systems to prevent such oddities, though they still allow for such improbable victories. The 1999 game Sid Meier's Alpha Centauri was also created by Meier and is in the same genre, but with a futuristic/space theme; many of the interface and gameplay innovations in this game eventually made their way into Civilization III and IV. Alpha Centauri is not actually a sequel to Civilization, despite beginning with the same event that ends Civilization and Civilization II: a crewed spacecraft from Earth arrives in the Alpha Centauri star system. Firaxis' 2014 game Civilization: Beyond Earth, although bearing the name of the main series, is a reimagining of Alpha Centauri running on the engine of Civilization V. A 1994 Computer Gaming World survey of space war games stated that "the lesson of this incredibly popular wargame has not been lost on the software community, and technological research popped up all over the place in 1993", citing Spaceward Ho! and Master of Orion as examples. That year MicroProse published Master of Magic, a similar game but embedded in a medieval-fantasy setting where, instead of technologies, the player (a powerful wizard) develops spells, among other things. In 1999, Activision released Civilization: Call to Power, a sequel of sorts to Civilization II but created by a completely different design team. Call to Power spawned a sequel in 2000, but by then Activision had sold the rights to the Civilization name and could only call it Call to Power II. An open-source clone of Civilization has been developed under the name of Freeciv, with the slogan "'Cause civilization should be free." This game can be configured to match the rules of either Civilization or Civilization II. Another game that partially clones Civilization is a public domain game called C-evo.

References
The Official Guide to Sid Meier's Civilization, Keith Ferrell, Edmund Ferrell, Compute Books, 1992.

Citations

External links
Official website
Civilization at MobyGames

1991 video games 4X video games Amiga games Amiga 1200 games Asmik Ace Entertainment games Atari ST games Classic Mac OS games Cultural depictions of Abraham Lincoln Cultural depictions of Mahatma Gandhi DOS games Historical simulation games Koei games Multiplayer and single-player video games NEC PC-9801 games Origins Award winners PlayStation (console) games Sid Meier games Super Nintendo Entertainment System games Top-down video games Turn-based strategy video games Video games based on board games Video games developed in the United States Video games scored by Jeff Briggs Video games scored by John Broomhall Video games using procedural generation Windows games World Video Game Hall of Fame